I won't be the only one blogging events at StS this year - Michael Lucas-Smith will be blogging the events he attends - if he can get his notebook configured, that is :). Rich Demers will be there as well, although I have no idea whether he'll be blogging on it. Sames and Alan will be there as well. This year I have a USB stick, so if anyone else transcribes notes that they'd like to see published, just find me - it should be easy enough to do. This is going to be a great show, and I look forward to seeing everyone there!
I'm giving a talk on BottomFeeder at StS 2004 - the slides are done, but I'm curious about one thing - is there anything in particular about the implementation that potential attendees would like to hear about? I've given variations on this talk twice now, and gotten a different set of questions each time. If anyone who plans to attend has thoughts on this, I'd love to hear them. Oh and btw - there's still time to register for the show! See you there.
Julia Lerman has posted again on the whole Women in Technology thing - brought on by this post by Ted Neward (and a bunch of others she links to). It really is a curious thing to me. My wife is a software developer, and she likes to point out what a great career software development is for women (meaning what, exactly?). Well, the fact is that many (not all, but many) women end up taking a "time out" in their career when they have children. This time out can be a complete break from the field or reduced hours.
The truth of the matter is, software is a (relatively) easy field to keep up with - the state of the art simply hasn't advanced all that much over time. Picking up the changes, even after a few years out, just isn't going to be that hard (seriously - if you were a Java developer 4 years ago, just how hard would it be to get going with C# and .NET?). So it's kind of confusing that more women don't choose this field. The options for flexible work hours are better than a lot of other fields, part time hours are fairly easy to accommodate, and time out of the field isn't going to get you hopelessly outdated. This tells me that the problems are cultural - and strongly so, as they override a lot of things that ought to make the field inviting. I'll be looking for Julia's comments from the BOF she says she'll be running at TechEd 2004 - maybe some answers will come out of that...
NBC is going to run an earthquake disaster movie next week - "10.5". The premise is that a magnitude 10.5 (ouch) earthquake hits the west coast of the US (apparently, more than one). You have to love these quotes I got from the CNN story on the show:
Howard Braunstein, executive producer of the miniseries, acknowledged that the film is meant as "fun entertainment" and plays loose with the facts.
Asked whether he consulted scientists in developing the project, Braunstein said: "Not really. We went on the Internet for backup research."
This feature is directly based on a request from a user - folder level item viewing. What does that mean? Well, it's pretty simple. If you select a folder, all the items for that folder will show up in the item view. I have to tune this some; the item view isn't showing the originating feed at this point. That's why it's a dev stream only feature :) I'll get that fixed today.
Update: I've added the feed information to the table view when you select a folder.
Yes, we were slow about this :) Still, better late than never. The presentations from last year's StS are online:
Presentations from Smalltalk Solutions 2003 are now available at: http://www.whysmalltalk.com/Smalltalk_Solutions/ss2003/ss2003presentations.htm
Smalltalk Solutions 2004
For those of you local to the Seattle area, or if you just happen to be in town for a day, we have added a one-day pass for the conference. The one-day conference pass is $200 USD and gives you full access to that day's events (except for tutorials). One-day passes can only be purchased at the conference. I look forward to seeing everyone next week in Seattle for the 2004 show.
It's not too late to register - details here. See you in Seattle!
Panopticon Central has some interesting points on what people notice in a UI. Specifically, the status bar:
But what this really makes me think of is a usability test they did on Access one day to see how effective text placed in the status bar was. The test went like this: the user was given some task to do in Access. Unbeknownst to them, we'd stuck a message in the status bar that read "If you notice this message and tell the usability lead, we will give you $15." Want to guess how many people got the $15? Zero. After that, we were careful not to put any important information down in the status bar, because it was 100% likely that no one would ever see it.
Combine that with the research showing that people often just hit return to any dialog box, and you have a real issue. I guess any user notification that's important just has to be part of the core UI.
National advertisers plan to cut spending on TV commercials as ad-skipping devices take hold, according to a survey. Web advertising is expected to benefit from the shift.
Now that advertisers have noticed that the model is failing, what's next? I suspect that subscription services are going to really start to drive broadband....
I made the mistake of upgrading my mail client the other day. For reasons that defy my understanding, its interaction with one of the mail services I use changed - Eudora 6.0 read the mail just fine, and 6.1 wouldn't. I spent some time looking at the failure messages, and realized that it had a wild idea as to what the server name was - I had included the www. in the host name, and Eudora suddenly was baffled by that. Removing the leading www. solved the problem. Now back to my 200 inbound messages....
Chris Pratley, the Program Management manager for Word (amongst other things) discusses Word and its movement from "worst to first" in market share. It's an interesting read, and explains one thing very clearly (at least to me) - Word was a far, far better product back when it had meaningful competition. At this point, Word's developers have utterly forgotten what end users want, and it shows. What do I mean? Well, here's my list of irritations with Word - none of them utterly crippling, but the collection would make me switch products in a heartbeat if a decent competitor existed. Word Perfect isn't it, because it mostly stinks in the same ways (my wife uses it).
It didn't used to be this way - I recall liking Word for Windows 2.0. It stayed out of my way, and did what I wanted. The current product mostly gets in my way, and does things I dislike:
- Bullets and Numbering - yes, I've mentioned this before. However, I shouldn't have to use copy/paste to ensure that a bullet goes where I want it. This part of Word is just broken
- Those adjusting menus - they drive me nuts, because all my learned behavior from older versions of Word is shot. When I pull a menu, the items aren't where I expect them - and often aren't there at all until I pull the whole menu. It ought to be easy to turn this off - but the options don't look obvious to me
- The HTML export - the HTML created is a mess. Does no one in Redmond actually know HTML? Based on Word, my guess would be "no".
Doesn't look like a long list, does it? It's not - but the mess with bullets ticks me off every time I use the product. It's a constant, low level irritation, just like the menu thing. The irritation is exacerbated by the knowledge that this stuff used to work - I know that I did not have to fight bullet lists every step of the way in WfW 2.0. It's been a downhill slide since 2.0, as far as I'm concerned - regardless of what the reviewers have said...
Holger released a new rev of Twoflower this morning, so I was able to update BottomFeeder to use it. This fixes some of the font issues (especially with bolded text) that had been present in the previous cut. I also received notice that Bf doesn't support urls that look like this:
That's shorthand for Basic Http Authorization - it's a way of providing the information up front instead of waiting to get prompted. The Http code wasn't parsing that out, and was instead barfing on that as an invalid url. I've addressed that in the latest update to the Http-Access module - that sort of url is now fully supported in the dev stream.
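For the curious, the shorthand puts the credentials right in the URL, in the form `http://user:password@host/path`. A rough sketch of what a client has to do with them - pull the credentials out of the netloc and turn them into an Authorization header - is below. This is Python purely for illustration (BottomFeeder itself is Smalltalk); the function name and the sample credentials are invented.

```python
import base64
from urllib.parse import urlsplit

def basic_auth_header(url):
    """Extract user:password credentials embedded in a URL and build
    the Basic Authorization header value they imply. Returns None if
    the URL carries no credentials."""
    parts = urlsplit(url)
    if parts.username is None:
        return None
    creds = "%s:%s" % (parts.username, parts.password or "")
    token = base64.b64encode(creds.encode("utf-8")).decode("ascii")
    return "Basic " + token

# The credentials ride along in the netloc portion of the URL:
header = basic_auth_header("http://alice:secret@example.com/feed.xml")
```

The point is that the `user:password@` prefix is part of the netloc, not the path - a URL parser that doesn't expect it will choke exactly the way the old Http code did.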
Many people have heard about the 1980's Van Halen contract rider specifying "no brown M&M" be present backstage; here's the back story on that. Fascinating bit of trivia.
I'm giving a talk on the implementation of BottomFeeder at StS 2004 - there's still time to register, btw! Anyway, saying I'll talk about the implementation is a bit broad - what am I going to discuss?
I've now given a variation on this talk three times - to two STUG meetings, and at ot2004. The STUG talks were more technically oriented, but I got a lot of good feedback at ot2004 about it. So before I really get into implementation, I'll (briefly) do two things:
- Explain what a blog is - it's easy to assume that "everyone" knows what a blog is. It's still something of an insular world though, so I'll provide a brief overview
- What's an Aggregator? Again, as with blogs, not everyone knows what one is. I'll again give a brief overview of what an aggregator is.
Then I'll talk about how I stumbled into this field, followed by some of the implementation details of BottomFeeder. I've uploaded the slide deck here. I've made a few changes since I sent in the presentation for the StS CD, so it might be worth downloading. I'm speaking bright and early - 8:30 am on May 5th. See you there!
He may not know it, but Gary Short really wants dynamic typing. Just look at how C# makes him lie to the compiler....
In a recent post, Charles points out that it takes a lot of (keyboard) typing to iterate over a list in Java, compared to the same piece of functionality in Ruby, Perl, Python, Lisp, Smalltalk, OGNL and Haskell.
It would only be a fair comparison if these languages were suitable for large systems development.
At the time Java was created, Ruby, Perl, Python, Lisp, Smalltalk, OGNL and Haskell were either non-existent or suffered serious deficits as large-systems programming languages - Back then, the in-vogue large-systems programming language was C++
Apparently, Alan Green is still in High School - he equates "in vogue" with "serious". So prior to 1995, Smalltalk wasn't suitable for large systems development? Oh, really? I worked for ParcPlace Systems back then, and between 1990 and 1995, lots of very large, very successful projects were done in Smalltalk - both in VW and VSE, and in the (then new) IBM Smalltalk. Many of those projects - at Fortune 100 firms - are still in production. It could be simpler than this - I suppose one could translate Alan's statement this way: "It's not fair to compare Java facilities to Smalltalk, because when Java came out I hadn't heard of Smalltalk"
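To make the keyboard-typing gap concrete, here's a hypothetical side-by-side. The Java in the comments is the pre-generics idiom Charles was writing about; the live code is Python, standing in for the dynamic-language column (any of the languages he lists would look about the same):

```python
# Java (circa 1.4) iteration over a list, roughly:
#
#   List upper = new ArrayList();
#   for (Iterator it = names.iterator(); it.hasNext(); ) {
#       String name = (String) it.next();
#       upper.add(name.toUpperCase());
#   }
#
# The same operation in a dynamic language collapses to a line:
names = ["alice", "bob", "carol"]
upper = [name.upper() for name in names]
```

No declarations, no casts, no iterator protocol boilerplate - which was the whole point of the comparison.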
- Vendor websites?
- Community driven sites?
We would like to make things simpler with regards to getting Smalltalk information, but we need to know where you look now. Thanks in advance!
Interesting justification for log4J over on the manual site for log4J - an old quote from Kernighan and Pike related to C level debugging:
As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.
Now, logging is useful - I still do that with remote Smalltalk servers that have no UI from time to time. However, this makes it sound like logging is still preferred to debugging. Admittedly, the home site for a logging tool is going to be biased, but still... to my mind, logging is a poor substitute for using a debugger. Of course, a Smalltalk debugger is actually a code browser with the context stack riding along, so YMMV - if your tools are primitive, maybe logging looks better...
This is why I call it the dev stream. I added a check for the content-type into my http library the other day - the idea being that if the content type was application/*, then it should default to utf-8. The key there is default - it should assume utf-8 only if nothing else was specified. However, that's not how I wrote the code... I was checking the content-type first, instead of at the end if the encoding had not been found. That was a dumb error, and made for a lot of bad text coming through. Fortunately, that only impacted the dev stream, and I've just fixed it - if you follow the dev stream, just grab the latest Http-Access and BottomFeeder components via the upgrade tool.
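For the curious, the corrected decision order looks roughly like this. This is a Python sketch, not the actual Smalltalk in Http-Access; the function name is invented, and the iso-8859-1 fallback is just HTTP's historical default for text types:

```python
def charset_for(content_type):
    """Pick the charset for an HTTP response. An explicit charset
    parameter always wins; application/* types default to utf-8
    only when nothing was specified. (The bug was checking the
    media type before looking for an explicit charset.)"""
    media, _, params = content_type.partition(";")
    # First: honor any charset the server actually declared.
    for param in params.split(";"):
        name, _, value = param.strip().partition("=")
        if name.lower() == "charset" and value:
            return value.strip('"').lower()
    # Only then: fall back to utf-8 for application/* types.
    if media.strip().lower().startswith("application/"):
        return "utf-8"
    return "iso-8859-1"  # HTTP/1.1's historical default for text
```

The buggy version did the `application/*` check up front, so a feed served as `application/xml; charset=ISO-8859-1` got decoded as utf-8 anyway - hence the bad text.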
The New York Times has an article on outsourcing in today's edition. It's interesting because it takes a mostly negative view of the issue from a technology standpoint. However, most of the problems with outsourcing flow from a fairly old problem in software development - ironically enough, it's well stated in this defense of outsourcing:
In the future international division of labor, Mr. Pradhan said, the production of the technology will be done in places like India, which can deliver it reliably at a low cost. What cannot be sent to India, he said, is the invention of new business processes and technologies.
Conceiving inventory-management software that helps a retailer make the best use of electronic product tags, for example, might be something best done by system designers in the United States working closely with the retailer. Once such a system and its tasks have been mapped out, though, the software code could be written by programmers in India.
And boom, we've fallen into a management rathole and we can't get up. The rathole? Thinking that we can specify the software, throw the requirements over the wall, and then get something useful back later. This doesn't work well at all, and it doesn't matter whether the requirements are tossed over the wall to the local IT group or over the wall to Bangalore - the problem is the assumption that such a process can work. We have decades of experience with this kind of thinking, and anyone who's been paying the least bit of attention has noticed that it doesn't work well. I think I'll be wary of any project I hear of that comes from these guys - where Mr. Pradhan is a Senior VP. If they are promoting rocket scientists like that to senior VP slots, the very last thing they should be allowed near is important software systems...
That's hardly the only problem. Back in the tech bubble of the late 90's, a lot of big companies brought in armies of consultants to help them build systems. The thinking seemed to be that the entire problem was that the consultants from (insert large consulting firm here) were too expensive. Guess what - that wasn't what the problem was. The problem was actually a lot simpler - these firms were farming out work to a horde of people they didn't know, but who were sold to them as experts. Did they understand the business domain? Typically not. Did they know the actual user base? Typically not. So now what are these same firms doing? Farming work out to a horde of people they don't know, but who are sold to them as (inexpensive) experts.
There's a saying that insanity is doing the same thing over again, but expecting different results. Well, here are a bunch of large firms, doing exactly the same thing they did in the 90's - sure, it will cost less - but the results are looking to be about the same. It shouldn't be a surprise - the issues are all the same, only now there are additional issues of language and cultural barriers (not to mention time zones!) to boot. Expecting different results is just... stupid. Outsourcing software jobs is going to go about as well as hiring the expensive folks from (insert large consulting firm here) did. The only question is how the failures will get spun this time.
There's an interesting Smalltalk debate referenced here - if you read Swedish. The summary is what I found interesting:
My column on the Smalltalk heritage on IDG has spawned a small debate about "industry languages" such as Java and C# compared to more dynamic, "cutting edge" languages like Smalltalk and Python. My take on the debate is that if you want to get stuff done together with other developers that may not be on the same level as you, C# and Java will get you there with the lowest amount of risk. For single-developer projects, or for small projects where everyone involved is really bright, Python and similarly dynamic languages (including Smalltalk, Lisp/Scheme, and even Perl) can get you there faster, while allowing you to have more fun along the way.
I know what he means about risk - but keep in mind that training developers in Smalltalk is fairly easy - it's not a complex language. I'd love to read this stuff, but Babelfish doesn't offer a "from Swedish" option....
Remember this post from StS 2003? Well, the Ambrai guys just announced a public beta release:
Ambrai Smalltalk Beta Release
Ottawa, Canada - April 28, 2004
Ambrai Inc. is pleased to announce the beta release of Ambrai Smalltalk, a new Smalltalk development environment for Mac OS X.
Ambrai Smalltalk features a complete suite of native development tools that tightly integrate with the Mac OS Desktop. Currently these tools include package, class, category browsers, debugger, inspectors, and workspaces. Ambrai Smalltalk can be used to deploy Mac OS standard or console applications.
"We aim to deliver a rapid application development platform suitable to create new or script existing Mac OS applications. Whether you are a seasoned programmer or have never written a single line of Smalltalk code, we hope you will consider Ambrai Smalltalk for your next project." - Ambrai Smalltalk Development Team.
Founded in 2003, Ambrai Inc. is a privately held company based in Ottawa, Canada. Please visit http://www.ambrai.com to find more details and to download Ambrai Smalltalk.
One more reason to come to Smalltalk Solutions - you'll see the future before it arrives!
- Avi Bryant's keynote address on Seaside, and why it's a better way to build web apps
- George Bosworth's keynote address on the Microsoft CLR, and how it fits with dynamic languages
- Lars Bak's keynote address on Resilient, and why he chose Smalltalk instead of Java for embedded systems
- Smalltalk night at the Space Needle - have dinner with the Smalltalk crowd at the top of Seattle
- All the other fine talks and tutorials at this year's show
There's a lot of great stuff packed into three short days. See you there!
ABC News to Sun: Drop Dead. This summarizes their take pretty well:
It is no longer enough to blame the high-tech recession for all of Sun's blues; other companies - including some even worse hit, like Yahoo! and Cisco - are making strong comebacks.
Sun is not coming back. It is a giant company without a business.
That's pretty much been my take - they spent too much time fighting the wrong war (Microsoft), and too much money on a software business (Java) with minimal revenues. The deal with MS was a desperation move, not a masterstroke. Here's an email I just got from someone (this story was pointed out to me in email):
BTW, we have more than 1000 Linux blades in the business. Nobody here talks about upgrading expensive Sun boxes anymore.
That's not the only company I know of making that kind of call. Heck, our engineering group is looking at a new file server. The old one was a Sun box, and it's not looking likely that we'll replace it with a Sun box. It's a huge problem for Sun, and I don't see an answer for them...
Chris Pratley talks about blogging - and in the process of talking about Word, hits on something relevant to any blogger:
A couple of people have asked about the permanence of electronic information and access to it in the future if it is in Word format. Microsoft takes this very seriously. That's one of the reasons we make the format documentation available to governments and other institutions, so that there is no concern that they will not have the ability to access the information at a later date. Personally, I find this whole discussion a little bit overwrought though. If it is access to the content of a Word doc that is a concern, just about any word processor available today can import Word documents sufficiently that you can access their content. You don't need a Microsoft product for that.
I've talked about this before - anything you write on a blog should be considered permanent. There have been somewhat embarrassing disclosures of old draft information found in word documents - but that's nothing compared to the permanence of what goes on a blog. Ultimately, you just can't delete your content. Why not? Well, your content got cached all over the place:
- Gosh knows how many client aggregators
I fully expect to see political campaigns using this as oppo research in a few years - imagine the 45 year old candidate for some office being called on stuff he wrote on a blog 20 years back - still available through some kind of search service. It's not going to affect only politics, either - employers are already doing google searches for references when a person is being considered for a job - a blog is the ultimate paper trail. Compared to that, Word documents are ephemeral....
This story seems to recycle every few months in a "the sky is falling, the sky is falling!" sort of fashion:
Already, aggregators have swamped some sites, slowing Web servers and eating up expensive bandwidth, according to bloggers and other Web publishers. The end may be near, unless something changes soon, said Gary Lawrence Murphy, whose Linux blog, TeledyN, has been overloaded.
There's an answer to this, and it's called conditional-get. Browsers implement it, and it's not hard to implement in an aggregator - I added it to BottomFeeder over a year ago. I'd be curious to know which aggregators don't implement this, and why. This does lead me into another area along the same bandwidth related lines though:
The Atom guys have been (and still are) seriously considering the addition of binary data blocks to Atom feeds (music, images, video, etc). That's a nice sounding idea until you combine it with the way HTTP works. Say I add a 3 MB video file to my feed. Smart aggregators that use conditional-get will only get that data once - until the feed is updated again, that is. The problem is, there's no way to get a partial document using HTTP. That means that large attachments to a feed will be fetched every time a feed is changed. If you have a rarely updated feed, that may not be a problem. If it's a blog that's updated regularly, it's a problem.
The funny thing is, this exact problem is addressed by RSS Enclosures - by providing a reference to that sort of data, and allowing the client aggregator to grab the data in any way it chooses - some kind of background process, a simple link for the user to follow, whatever. That's the right way to do this, IMHO - the proposed Atom support is just a bad, bad idea in the context of HTTP.
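For anyone building an aggregator, the conditional-get mechanism I mentioned above is genuinely small: save the validators from the last response, send them back on the next poll, and treat a 304 reply as "nothing changed, no body transferred". A minimal sketch (Python for illustration; the header values and the cache shape are invented):

```python
def conditional_headers(cached):
    """Build the request headers for a conditional GET from the
    validators saved off the last response. The server answers
    304 Not Modified - with no body - when the feed is unchanged."""
    headers = {}
    if cached.get("etag"):
        headers["If-None-Match"] = cached["etag"]
    if cached.get("last_modified"):
        headers["If-Modified-Since"] = cached["last_modified"]
    return headers

# Validators remembered from the previous fetch of this feed:
cache = {"etag": '"abc123"',
         "last_modified": "Mon, 03 May 2004 08:30:00 GMT"}
hdrs = conditional_headers(cache)
```

That's the whole trick - which is why it's puzzling that any aggregator would skip it.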
StS 2004 starts Monday - there's still time to sign up and get there. I'll be arriving late on Sunday - so I likely won't see anyone until then (unless there are some nightowls in the hotel bar on Sunday). It's going to be a great show - I look forward to seeing everyone there.
Internally, some of the non-Smalltalk groups at Cincom are using SharePoint (we use Wikis in the ST group). SharePoint seems like a natural for exporting RSS, and there's a free part for doing just that. There's a problem though - it generates odd RSS. It defines a namespace like this:
That namespace is the default namespace for the document, and none of the elements are in that namespace - which makes them invisible to BottomFeeder, which sees feeds from that tool as RSS 2.0. Never fear though - I added a handler specifically for SharePoint that expects this slight deviation and handles it. So if you use SharePoint, you can export RSS and have BottomFeeder pick up the results. There are a few commercial (i.e., paid) solutions for this as well; I have no idea what kind of RSS they pump out.
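The default-namespace trap is easy to reproduce. In the sketch below (Python's ElementTree for illustration; the namespace URI is made up, not SharePoint's actual one), a parser looking for plain RSS 2.0 element names comes up empty, because the default namespace silently qualifies every unprefixed element in the document:

```python
import xml.etree.ElementTree as ET

# A feed that declares a default namespace (URI invented for
# illustration); every unprefixed element silently lands in it.
doc = """<rss version="2.0" xmlns="http://example.com/sharepoint-rss">
  <channel><item><title>Hello</title></item></channel>
</rss>"""

root = ET.fromstring(doc)

# A lookup that expects plain RSS 2.0 element names finds nothing:
plain = root.find("channel/item/title")

# The same lookup, qualified with the namespace, succeeds:
ns = {"sp": "http://example.com/sharepoint-rss"}
qualified = root.find("sp:channel/sp:item/sp:title", ns)
```

A feed reader that matches on the unqualified names sees an apparently empty document - which is why a SharePoint-specific handler was needed.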
Somewhere out there, Michael Lucas-Smith is on the highway - trying to remember to stay on the right side :)
I was wrong. Tim Bray points to a - wait for it - HTTP over SOAP implementation. I'm thinking idle hands....
Via Ben Hammersley I found this specification for yet another syndication format. While I understand what's being done here (and it certainly looks simple enough) - it looks a lot like a more highly specified RSS with slightly different tag names. I think I'll wait this out and see if anyone pays attention....
Scoble is showing that the ad budget at Microsoft is being wasted on the marketing department. I agree with him about that MS Office campaign - the people who came up with that should all be fired as a lesson to the rest of the field :)
Attention goodie submitters:
If you've got code that we are distributing as a goodie, we really need any updates you have ASAP! We are getting close to the VW 7.2.1 and OST 6.9.1 releases, and need to get this closed out. Please see:
Chris Pratley makes some good points about the patent system. Well worth reading
This week, InfoWorld has a cover story on XML and how well the 4 major databases (DB/2, Sybase, Oracle, and SQL Server) handle it. The part I'm commenting on is in the introductory paragraph:
If you could do one thing to improve integration and automate processes with customers and business partners, it would be to implement XML, which has become the standard for exchanging information between disparate systems because it is easily transformed into any format.
Sure it is. Assuming you know what kind of schema the document is using. Simple example - say you picked up an old version of BottomFeeder and were confronted with an Atom document. Heck, suppose you had version 3.1 (version 3.4 is the most recent) and wanted to deal with an Atom 0.3 document? That rev of Bf had not dealt with 0.3 yet, and the 0.2 handlers did not display information from 0.3 documents correctly. Sure, you could translate a 0.3 doc back into a 0.2 doc - but would you have any idea that you should?
XML is a data format. It's no better or worse than any other format, and it's certainly not magic pixie dust - and yet publications like InfoWorld continue to present it as some kind of magic interoperability balm, ready to smooth away all problems. It's not - if you don't know what kind of doc you are getting, you aren't likely to be able to deal with it intelligently - whether it's human readable or not. XSLT assumes that you have the semantic knowledge handy to do a transform - the simple presence of angle brackets does not grant that semantic knowledge. Here's a nice example of the magic pixie dust thinking that pervades the XML world:
For example, querying on a patient ID number in a relational database may allow you to quickly find the dates a certain patient visited the hospital, the conditions he was diagnosed with, and the treatments he was given. But it likely won't help you determine which treatments were provided for which conditions, or what times the treatments took place, nor will it give other useful information that XML versions of these records could provide.
Say what? Has this guy never heard of foreign keys and joins? Sure, they can get messy for complex data retrievals - but they are hardly impossible. Reporting systems are being built to handle these sorts of issues all the time. And XML records would magically solve the problem? I'd really like to know how. Apparently, applying XML will make it easy for me to solve every software integration problem I have.
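To be concrete about it, the hospital example falls straight out of a single join. Here's a sketch with invented table and column names (sqlite for brevity; any RDBMS answers this the same way):

```python
import sqlite3

# A toy relational schema for the hospital example. "Which
# treatments were given for which conditions, and when" is a
# plain join, not something that needs XML records.
db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE conditions (
    id INTEGER PRIMARY KEY, patient_id INTEGER, name TEXT);
  CREATE TABLE treatments (
    id INTEGER PRIMARY KEY,
    condition_id INTEGER REFERENCES conditions(id),
    name TEXT, given_at TEXT);
  INSERT INTO conditions VALUES (1, 42, 'pneumonia');
  INSERT INTO treatments VALUES (1, 1, 'antibiotics', '2004-04-28 09:15');
""")

# Which treatments were provided for which conditions, and when:
rows = db.execute("""
  SELECT c.name, t.name, t.given_at
    FROM conditions c JOIN treatments t ON t.condition_id = c.id
   WHERE c.patient_id = 42
""").fetchall()
```

The foreign key from treatments to conditions captures exactly the relationship the article claims is out of reach.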
The most amusing part of this is the complete lack of historical awareness. Hierarchical databases were the storage solution until the 80's. At that point, the RDBMS started to become supreme - primarily due to the large amounts of flexibility in changing the organization of data - and the greater ease in creating ad-hoc queries. What's an XML database? It's a "back to the future" kind of move, that's what. There's nothing new here except the realization that hierarchical storage of data has merits in some domains. It's no be-all and end-all - if it were, the old hierarchical databases never would have gone by the boards.
In an SDTimes article, Andrew Binstock shows that minimal research is just beyond him. He's correct about the uptake of XP in IT shops - it's not a commonly used methodology. A lot of this is cultural - management likes paper trails, and the popular RUP guarantees lots of them. That's not the rathole Binstock runs down though - in talking about XP, he says this:
Second, XP was embarrassed by the cancellation of the C3 project at Chrysler. This project was the poster child for XP. It was begun in 1996 with the mission of getting Chrysler's payroll off its mainframes in time for Y2K. Kent Beck, the father of XP, was brought in, and he implemented XP and made the project a showcase of the new methodology. Alas, Chrysler cancelled C3 in February 2000. The deadline had been missed, and the mainframes were still cranking at full tilt to handle Chrysler's payroll. With Beck running the show, there is no wiggle room for asserting that XP wasn't done correctly.
I guess Binstock's a busy guy - because he clearly didn't contact any of the people associated with that project. I've spoken to many of them, at different times and in different places. The C3 project went down for the same reasons that most IT projects go down - politics. I'm sure that SDTimes provides internet access for the staff - Binstock could have googled the email addresses (or even hunted around the C2 Wiki a little) and found this explanation of the issues. I guess it was easier to throw rocks at something he doesn't like though - facts would just get in the way...