I made a small UI change in BottomFeeder - so far, only visible in the dev stream. On the Mac platform, the font used in the listbox is bold - which means that having new items bolded isn't meaningful. What I've done is made new items appear in red (as well as bold) to add an additional highlight. That should make new items in feeds much easier to spot.
With the deals announced Monday, JBoss Group will now sell services for software from the Tomcat, JGroups, Hibernate, Javassist and Nukes projects. As part of the arrangement, the lead developers from Tomcat, JGroups and Hibernate will become JBoss employees, and the project heads of Javassist and Nukes will work with JBoss Group on a consulting basis.
These open-source projects will remain independent in that development on each product will continue under the auspices of their respective leaders.
The move will allow JBoss to ensure that each open-source product works well with the JBoss software and expand the potential revenue the company generates from services, said Marc Fleury, the company's president and founder.
"We go out and hire these guys and turn them into professionals, as opposed to an amateur or a hobbyist," said Fleury. "Other (companies) expand by acquisition or by deepening their (software) stack. We expand by recruiting those projects."
Aside from the "professional Open Source" terminology, this is a familiar business plan - think any large consulting company and their various software "practices" - they tend to focus on a business domain - and in doing so, they tend to focus on a few software products. This looks an awful lot like that model carried over to open source. It will be interesting to see how well the model translates.
Outsourcing isn't just an American issue - Don Park reports that call centers in Korea are being outsourced to China. I guess "low cost" is a relative term.....
If you still don't follow what continuations are or how they are used in a web app, go read Chris Double's post now. It gives a simple example of how and why they work.
I've seen this business plan. Worse, I've worked with that business plan :)
Read this and you'll see why - simply accessing files from an EJB application is a pain. It's amazing how much work you apparently have to go through to do so:
Of course, many types of file access can be worked around. For example, configuration information can be placed in LDAP, JNDI, a database, or even properties files delivered inside your JAR files that get loaded as a resource through the classloader. In those circumstances where accessing files is a requirement, then other solutions include loading the file through the servlet container, having it sent to the EJB tier via messaging, downloading the file from a webserver through a socket connection and so on.
These are all workarounds for the programming restriction but at the end of the day I think you have to be pragmatic.
The sort of pragmatism we actually see is lots of use of JSPs, and not much use of EJB. This sort of thing explains a lot of that. Of course, I use Smalltalk Server Pages, so the entire problem is avoided....
Don Park explains that there are three ways to deal (technically speaking) with the Eolas patent:
- Add a special - i.e., non-standard - attribute
- Require users to click Ok in a dialog box before proceeding
As I reported yesterday, Eolas is now trying to get an injunction against IE distribution. This shouldn't make users of Netscape, Opera (et al.) happy, because those browsers have exactly the same problem with respect to the Eolas suit. This whole thing just stinks, and I'm amazed that it hasn't been sunk yet by a prior art showing.
Phillip Brittan complains that Microsoft is purposely moving away from the browser in order to push server-managed fat clients. I'm wondering how that's different from the original Java vision of server-managed applets running on the client. I guess it's different because it's not Java as the fat client....
This stinks worse than your job. You have to read it to believe it....
Probably the biggest problem with the current javah-based package is the tremendous difficulty of using existing native libraries - there's no reverse version of javah that sucks up existing header files and produces a set of Java classes, filled with native keywords, that bind automatically.
DLLCC in VW does parse header files and produce classes and bindings. What were those Sun guys thinking? And they say that we Smalltalkers live on an island....
I linked yesterday to a post on the difficulties of doing file i/o from a server based EJB app. Today, I see more oddball workarounds to this problem. Here's a cluestick - it's a server side app. Presumably, the hosting company can set it up such that your application only has permissions to diddle in your directory (you know, using ultra-complex stuff like file permissions). So why again do you have to jump through hoops to do this? Maybe because the EJB spec is just plain stupid in this area?
Ted Leung points out what the real costs of going overboard with DRM are - the RIAA likely won't understand it....
Just go read what Doc Searls posted on the latest silliness by the music industry. I can't possibly improve on his take :)
Wired News reports that parents in Illinois are suing their local school district over WiFi at the schools. Here's what should be asked of these clowns:
- How many of you have cell phones?
- How many of you have walk-around phones?
Sheesh. Luddites everywhere...
AT LUNCHTIME TODAY, I moderated a panel discussion on digital downloading and music, featuring a bunch of musicians, songwriters, and industry people from Nashville. Here's the scary bit: one of the industry guys said that their big legislative priority is to try to create a regime where you have to register with a unique, verifiable ID to access the Internet.
Just when I think those morons have gone over the top, they say or do something crazier....
I added filtering capabilities to BottomFeeder this morning - both a global capability, and a per-feed capability. The per-feed filter will override any global filter you have set. This is still early in development - there may be some glitches. I'm pushing it out in the dev stream in order to get feedback.
Joel on Software tells you everything you always wanted to know about encodings and character sets, but were afraid to ask....
Travis Griggs explains how to prevent code forks when you have more than one person working on a project. You can merge a published version with what's in your image, and then - via reconciling with what's in the db (i.e., making sure that Store versions your code based on the right published version), you can avoid a code fork.
We are about to release a new version of Cincom Smalltalk - both VisualWorks and ObjectStudio. There's a lot of cool new things coming up - there are details on the referenced page, but here are a few tidbits:
- Opentalk for ObjectStudio - this will allow object level communication between VW and OS applications. That means that OS developers will be able to expose their business objects to all the interfaces supported by VisualWorks
- New Platforms! We are releasing WinCE support in preview (beta). Check the wiki page for processor details
- Squeak plugin support in the VM - the plugin support that Squeak uses is now available to VisualWorks. This allows for tighter OS integration, if desired
And there's lots of other good stuff as well. Check the Wiki page for details. We expect to release in November.
I'm going to miss this WOAD event, as I'll be in Tokyo that week. Here's a rundown for those of you that might be interested:
(Washington Object-Oriented Architecture and Design)
WHEN: Tuesday, October 21st, 2003 at 7:00 PM
WHERE: Best Western of Rockville (in the Restaurant)
DINNER: Buffet $12.50 (includes tax & tip - to get the space, we have to eat)
TOPIC: MDA: The Future of Modeling or Costly Diversion?
SPEAKER: David Fado (co-author of UML 2 Toolkit)
This session reviews MDA as an initiative to guide modeling. In some ways, MDA provides an encouraging road map for the use of UML 2 and modeling for information technology. At the same time, the "big tent" of MDA contains goals and aspects that will likely prove a costly diversion. As MDA is very much in formation, how it will evolve depends on how modelers and tool vendors take advantage of the opportunities presented by MDA/UML 2 and deliver success under the MDA umbrella. This talk will look both at the "future of modeling" side and the "costly diversion" side and invite comment on the best direction for MDA's future.
The talk will include an example of using UML 2 activity diagrams as a way of managing high level information about a project with greater precision.
About Our Speaker:
David Fado is a software architect for Number Six Software in Rosslyn. He is co-author of the recently released second edition of the UML 2 Toolkit with Brian Lyons. He has presented at numerous software conferences and is currently on the program committee for the MDA implementer's conference run by the OMG. Before joining Number Six, he worked with Reuters Information as well as with an offshore development group using UML to communicate about software development.
RSVP by email to firstname.lastname@example.org, with WOAD-RSVP in the subject line. Space WILL be limited to those who RSVP.
These events always produce a lot of good discussion - well worth attending!
Don Park explains why you always have to read the license for software products:
My situation is a common one in that I will have a single server driving several websites and web services, some of which will be commercial. More servers might be added later, but still located at a single data center (ServerMatrix). BDBXML license allows free use under this situation. But the software that runs on my server(s) is being written at home which is in a different state. Since my development machine is in California and my production server(s) are in Texas, I am in fact redistributing whenever I upload my software to the server(s), violating free use under BDBXML license.
Liz Pennell, Account Executive at Sleepycat Software, confirmed this, but - recognizing that this might discourage developers from building software based on their new product - they graciously granted me a free license.
Kind of makes me wonder how many people are violating licenses without knowing it. Funky....
Apocryphal or not, Blaine Buxton quotes some truly amusing performance review lines.
Charles Miller has some hot tips for screenwriters of action flicks. Seems to me that most of them should pay attention to what he has to say.
I got a complaint about BottomFeeder's handling of network errors yesterday - specifically, the way it deals with getting timeouts from all (or most) of the feeds during an update cycle. The way it handled this before was to just ignore it, treating it as (yet another) transient network issue. However, some kinds of ISP issues can make this a painful choice - say your system acts like it has network connectivity, but all http requests yield timeouts. Given the time it takes for any particular request to time out, this can yield a very unresponsive application.
What I've done is added a counter - if 10% or more of your feeds are failing with timeouts, then the update process will be suspended, and you'll get prompted for what to do - ignore the errors and push on, or go offline. I also added a setting that allows the old behavior (just completely ignoring the timeouts) to take place - I often get transient errors that come and go, and I'd just as soon let them pass. The way Bf is now set up, that decision is up to you as the user.
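For what it's worth, the counter logic is simple enough to sketch. Here's a rough Python version of the behavior described above - purely illustrative, with invented names (fetch, prompt); it isn't BottomFeeder's actual code:

```python
def update_feeds(feeds, fetch, prompt, threshold=0.10):
    """Suspend the update cycle once >= threshold of feeds time out.

    Sketch only: `fetch(feed)` raises TimeoutError on a timeout, and
    `prompt()` returns either 'continue' or 'offline' when asked.
    """
    timeouts = 0
    ignore = False
    updated = []
    for feed in feeds:
        try:
            fetch(feed)
            updated.append(feed)
        except TimeoutError:
            timeouts += 1
            # 10% or more of the feeds failing triggers the prompt
            if not ignore and timeouts >= threshold * len(feeds):
                if prompt() == 'offline':
                    return updated, 'offline'
                ignore = True  # user said push on; stop asking
    return updated, 'done'
```

The "old behavior" setting amounts to starting with `ignore` already true, so the prompt never fires.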
Tim Bray has an idea for killing spam - it's the pay for email variant (each message costs some small amount of money to send). The only problem with the idea is that - as with most such ideas - it could easily be defeated by some unscrupulous set of offshore operators. All it would take to get around it is someone willing to charge less than the typical rate and willing to accept spammers. He has the whole digital signature and certification idea for working around that, but I don't buy it - I just don't see all email systems going to this model simultaneously, and that's what it would take. In the meantime, some of his anecdotes explain why I still don't use a spam killer - I'd rather hand delete the stuff than try to figure out what I've lost.
Eric Sink lets the world know that his company's source tool (Vault) supports the generation of RSS from checkins. I suspect that this will become a demanded feature as time goes by - the feeds on our internal source db and the VW Public Store are highly useful - it's one of the best ways to track activity!
Gordon Weakliem's old blog is here, and his new one is here. He wanted to redirect his rss feed, but Radio doesn't support issuing 301's. So he was trying to use this RSS level redirection, but found that few aggregators supported it. Well heck, I didn't even know that existed! So I sat down and added that support, just now. It's only in the dev stream of BottomFeeder, but it will be supported in the 3.2 release.
Joel Spolsky seems to simply not get it on exceptions. This is somewhat surprising; I really like most of what he writes. In this case, he's just not right, at all. Here's where he starts:
People have asked why I don't like programming with exceptions. In both Java and C++, my policy is:
- Never throw an exception of my own
- Always catch any possible exception that might be thrown by a library I'm using on the same line as it is thrown and deal with it immediately.
The reasoning is that I consider exceptions to be no better than "goto's", considered harmful since the 1960s, in that they create an abrupt jump from one point of code to another.
Hmm. Loosely coupled code, anyone? Sometimes, you have exceptions at a low level in the application that really can't be dealt with, unless you are at the UI level. Here's what Joel says:
- They are invisible in the source code. Looking at a block of code, including functions which may or may not throw exceptions, there is no way to see which exceptions might be thrown and from where. This means that even careful code inspection doesn't reveal potential bugs.
- They create too many possible exit points for a function. To write correct code, you really have to think about every possible code path through your function. Every time you call a function that can raise an exception and don't catch it on the spot, you create opportunities for surprise bugs caused by functions that terminated abruptly, leaving data in an inconsistent state, or other code paths that you didn't think about.
Hmmm. Everything he says here about exceptions is true of events as well. Are they evil? Are they to be avoided? After all, a piece of code may not know that it will get interrupted by an inbound event. So what is an exception? It's an application error event. That's what it is - nothing more, nothing less. What's Joel's answer?
A better alternative is to have your functions return error values when things go wrong, and to deal with these explicitly, no matter how verbose it might be. It is true that what should be a simple 3 line program often blossoms to 48 lines when you put in good error checking, but that's life, and papering it over with exceptions does not make your program more robust.
Bleah. I've seen code written using that theory. It very, very quickly becomes an unmaintainable nightmare, and has errors being propagated from deep in the bowels of the application up to a level where they can be handled. This is clean how? Maybe the problem is that exception handling in Java and C++ sucks - in Smalltalk I can do something like this
answer := [self doSomethingThatCallsManyLevelsDeep]
	on: SomeException
	do: [:exception |
		exception isResumable
			ifTrue: [exception resume]
			ifFalse: [self reportError: exception]]
So what will that do? It will resume the exception (i.e., continue as if the exception never happened) in some cases, and report the error in others. It's compact, and it's easy to follow - and it has the benefit of avoiding a whole bunch of checks on whether or not I got an error throughout the call chain. In other words, it makes the code easier to read and easier to maintain. Joel's way makes the code crusty, complex, and hard to follow. It puts error handling code up and down the call chain in places where it has no business residing. What Joel is advocating is writing code that misplaces responsibility - very bad form. I don't usually disagree with him, but on this, he's just wrong. A lot.
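The difference shows up even in miniature. Here's a hedged sketch in Python (the function names are invented for illustration; it's not anyone's real code): with error returns, every intermediate level has to check and forward the error; with exceptions, only the bottom raises and the one layer that knows what to do catches.

```python
# Error-return style: every intermediate level must check and forward.
def parse_entry_ec(data):
    if not data:
        return None, "empty input"
    return data.upper(), None

def parse_feed_ec(entries):
    results = []
    for e in entries:
        value, err = parse_entry_ec(e)
        if err is not None:          # this boilerplate repeats at every level
            return None, err
        results.append(value)
    return results, None

# Exception style: intermediate levels carry no error plumbing at all.
class ParseError(Exception):
    pass

def parse_entry(data):
    if not data:
        raise ParseError("empty input")
    return data.upper()

def parse_feed(entries):
    return [parse_entry(e) for e in entries]   # no error checks here

def application_layer(entries):
    try:
        return parse_feed(entries)
    except ParseError:
        return []   # one handler, at the level with enough context to decide
```

Add a few more intermediate layers and the error-return version grows a check at each one, while the exception version doesn't change at all.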
The more I read this post by Joel, the worse it looks. And mind you, I wasn't impressed during the first read. I just noticed that Mark Derricutt didn't think much of the "no exceptions" idea either. So let me give a concrete example of how and why I don't agree. Here's a longish snippet from the update loop of BottomFeeder - above this is the loop, the snippet is from the feed, at the point where it's trying to update:
safelyParseAndProcess: anUrl into: anObject forceUpdate: forceUpdate
	[| cache |
	self doFeedUpdateWithForce: forceUpdate.
	cache := HttpClientModel cacheAt: anUrl.
	(cache notNil and: [cache hasMoved])
		ifTrue: [self modifyMovedFeedFrom: cache.
			self doForcedFeedUpdate]]
		on: self xmlExceptions, Object errorSignal
		do: [:ex |
			HttpClientModel removeCacheFor: url.
			[self doForcedFeedUpdate]
				on: self xmlExceptions, Object errorSignal
				do: [:ex2 | ex2 return]]
Now, the relevant message send here is #doFeedUpdateWithForce:. That method (optionally) does the Http query (if it's time, based on etags, etc). The result of that query is presumably an XML document. Now, I handle the HTTP errors at the point of the HTTP query - mostly they are ignored, unless they are interesting - a 304 (not updated), a 301 (moved) - most other errors are just ignored and presumed to be transient issues (there's slightly more to it than that, but it's not important for our purposes here).
Notice that I catch the XML errors here. Why is that? Well, I've made changes to the parser itself - it ignores rafts of invalid feed issues in an attempt to be "liberal". At this level though, failure to parse means one of two things:
- Either the XML is hosed, and there's nothing we can do
- The document returned was actually not xml (likely an html error page of some sort)
Notice how I try the query a second time on XML errors? What I do is try the query again masquerading as IE, and specifically not asking for mod gzip. Testing has shown that, for whatever reason, looking like IE yields a much higher rate of success. I've also stumbled across encoding issues (both in VW and from feeds) that prevented the proper handling of gzipped feeds. Which is why it's caught at this level. Caught here, the system can make a rational choice about what to do. At the level of the http query or xml parse, those modules have no idea what the higher level application code is up to.
Say I followed Joel's advice. I'd have to make sure that every possible execution path handled and returned error objects, all the way down. That's just stupid. It would make the code very brittle, and highly resistant to change. The way I've done it, the http level or the rss parsing level just toss exceptions, and leave it to the application layer to deal with those. Intermediate levels of the call stack neither know nor care. In Joel's scheme, I end up with nasty case statements littering every single level of the call chain. In the exception scheme, there's the toss, and then the application layer with enough information to catch does so. The error passing scheme leads to baroque code that rapidly becomes brittle. Just say no.
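The retry-as-IE trick described above boils down to just a few lines. Here's a rough Python equivalent - the fetch and parse hooks and the user-agent string are illustrative stand-ins, not BottomFeeder's actual HTTP layer:

```python
IE_USER_AGENT = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

def fetch_and_parse(url, fetch, parse):
    """Try a normal fetch; on a parse failure, retry once masquerading
    as IE and without asking for gzip. Illustrative sketch only:
    `fetch(url, headers)` returns the raw response body, and `parse`
    raises ValueError on anything that isn't well-formed XML.
    """
    default = {"Accept-Encoding": "gzip"}
    try:
        return parse(fetch(url, default))
    except ValueError:
        # Either the XML is hosed, or we got an HTML error page back;
        # some servers behave better for IE, so try exactly once more.
        retry = {"User-Agent": IE_USER_AGENT, "Accept-Encoding": "identity"}
        try:
            return parse(fetch(url, retry))
        except ValueError:
            return None  # give up quietly; the app layer treats it as transient
```

The point is the shape: the retry decision lives at the level that knows a parse failure might really be a server quirk, not down in the HTTP code.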
The Register reports that France is putting WiFi on their TGV trains. In the San Francisco bay area, commuter rail is getting wireless. Want to take bets on which millennium will see similar things happen on Amtrak - say in the Northeast corridor, where it would truly be useful?
The Register reports that blog trackbacks are having unintended (and nasty) side effects on the google page ranking process. Here's the gist of it:
A "Trackback" is an auto-citation feature that allows solitary webloggers to feel as if they are part of a community. It's a cunning trick that allows the reader to indicate that they've read a weblog entry, or as the official description from MovableType has it: "Using TrackBack, the other weblogger can automatically send a ping to your weblog, indicating that he has written an entry referencing your original post."
The original blog then sprouts a list of "trackback" entries from other webloggers who have read, and linked to the original article. Kinda neat, huh? Except for one unforeseen technical consequence: the Trackback generates an empty page, and Google - being too dumb to tell an empty page from the context that surrounds it - gives it a very high value when it calculates its search results. So Google's search results are littered with empty pages.
Try this OS X Panther Discussion for size: it's a Google query for OS X Panther discussion. In what must be a record, Google is - at time of writing - returning empty Trackback pages as No.1, No.2, No.3 and No.4 positions. No.5 gets you to a real web page - an Apple Insider bulletin board. Then it's back to empty Trackback pages for results No.6, No.7 and No.10. In short, Google returns blog-infested blanks for seven of the top entries.
Wow. There are unintended consequences for lots of things, but that's an interesting one....
The Cincom Smalltalk distribution contains an ever increasing number of goodies - add ons submitted by various people. There are ones organized as internal - i.e., ones created by our own staff, but not considered (for a variety of reasons) to be part of the product. Keeping those in synch with new releases is (theoretically) easy. Then there are the rest of them - the goodies submitted by our users. There are a huge number of such goodies, and making sure that what we have on the CD is in synch with the release is a difficult problem. There's a stated process for it, but I've gotten complaints about that. Many authors now manage their packages in the Public Store, and have expressed the opinion that managing an additional level of versioning (via ftp) is onerous. On the other hand, asking Cincom staff to figure out what is and isn't new is a recipe for disaster as well - we just won't do a good job of it, or we won't notice.
The idea has been floated to build scripts to generate parcels from the public store, but there are problems with that -
- It's a database, not a server
- Unless the package was published binary, it's not even theoretically possible to generate a parcel from the db w/o loading the package first
That argues for one of two things - a Store server (hard, and not something that would appear overnight in any case) - or a small tool that would offer to publish a package to goodies. The idea would be that you would have ftp space on our server, and a small tool would be used as an interface to uploading the goodie to the correct place. That's simpler than the server idea, and - I hope - not too onerous for goodie authors. The tool would be able to push up either a single parcel or some specified archive file (for cases where a .pcl and a .pst aren't going to be enough).
So here's the question - would such a tool be useful? Would goodie authors use it? Would it make the whole submission process easier to deal with? Feel free to answer in comments or via email
I think I'm going to get BottomFeeder 3.2 out fairly soon - and this will be the last release on the VW 7.1 engine and image. When the next development cycle starts, I'll be using VW 7.2 - which means that users are going to have to grab a new base image and VM before they can go to that (future) release. The Upgrade tool is already aware of VW versions, and won't report versions for newer VW bases as valid. I'll be updating this as things move forward.
Blaine Buxton likes Seaside, and talks about how easy Smalltalk servers are to work with:
Oh well, I just wanted to write about how blown away I have been. As a side note, I've only had to stop the web server once since I started it (and it was because of a mistake I made)! Of course, to restart the web server takes 1 second (I kid you not...try that with tomcat or apache). I've been changing the code while the server is running with no special cases or what have you (try that in Java...Yeah, I know about hot swap--it only works for simple method changes).
Productivity, anyone? Stick with that Java stuff if you want to develop and deploy 3x later....
The Red Sox continued to feel the curse, and the Cubs were still haunted by the billy goat today. The Yankees put away another game this afternoon - and there were plenty of oddball infield hits and errors in this one. Meanwhile, the Cubs had a breakdown worthy of the '86 Sox. With one out, a fan grabbed a ball that would have been an easy catch (and out 2). Right after that, an error at short extended the inning. The Marlins managed to score 8 runs, in what had been looking like a cruise towards the Cubs' first Series since 1945. Looks like the fans who wanted a Cubs/Sox series are going to have to wait - it looks to me like the Yankees will get there from the American League, and the NL side is completely in flux.
I've written about this post from Joel twice now - here and here. Lots of other people (here, for instance) put up their thoughts as well. I expected a response from Joel, but I was kind of surprised at what he wrote - have a look at yesterday's post on the subject. Basically, he opts out of the issue:
There's no perfect way to write code to handle errors. Arguments about whether exception handling is "good" or "bad" quickly devolve into disjointed pros and cons which never balance each other out, the hallmark of a religious debate. There are lots of good reasons to use exceptions, and lots of good reasons not to. All design is about tradeoffs. There is no perfect design and there is certainly never any perfect code.
The above is true enough, but it's a way around the discussion, not really an entry into it. Disappointing.
Keith Ray posts some thought provoking analysis of outsourcing - relating it to the movement of the garment industry and the manufacturing job losses in the US. The whole article is worth reading - here's one of the quotes Keith pulls:
For example, see Johanna Rothman's blog: "I'm convinced that the reasons outsourcing works is that it forces organizations to document requirements and the outsourcers work on only one project at a time. The outsourcers' management can then choose any number of useful product development practices that increase the outsourcers' productivity. Management can't change their minds and refocus the outsourced project(s) in the same way they feel free to refocus the internal projects.".
The problem is lack of focus and lack of productivity - outsourcing works because the contracts signed at least force focus (even if they can't force productivity). Since IT shops have historically lacked focus and productivity, gaining one of the two at a lower cost looks like a great deal to CEOs. Here's another thought that has resonance in our industry:
With that kind of example, the early adopters tried it and found that it solved a lot of endemic problems. In the last two decades it's become the 'heads up' thing to do in manufacturing, but only now is the effect on inventory turns becoming apparent in the US national statistics.
And that is without full participation from the industrial establishment. If manufacturing was fully Lean, you wouldn't see jobs going overseas. It's impossible to operate a pull environment when one of your processing steps takes a month to transport goods by ship. That's old Henry Ford / Frederick Taylor thinking.
Now relate that back to development. How easy is it to deal with changing requirements in an environment where the developers are 12 hours distant (timezone) and a day's plane ride away? Where the language and cultural barrier creates easy misunderstandings? If your locally based developers were actually doing the right thing, then outsourcing overseas would clearly be slower and - over time, from an ROI standpoint - more expensive. The problem is, they don't do the right things. They follow language and platform fads. They reject changing requirements. They diss users, speaking of them with contempt.
And then, they are astonished - just like the auto factory workers with their byzantine work rules - when the jobs migrate somewhere else.