There's a nifty piece over at Loosely Coupled on the current attitude towards IT spending. The basic problem - after so many promises, companies are a little unwilling to invest in the next big thing, because the last big thing didn't really deliver. Here's the idea:
Loosely coupled architectures enabled by web services are going to bring the same waves of incremental innovation in IT - and in business - and the only IT that is going to be shown up as not working is the highly structured and centrally controlled architectures of monolithic, enterprise-scale computing systems, whose design philosophy owes more to the values of the industrial era than to the needs of the emerging information era.
Unfortunately, as I noted recently after listening to a debate on the potential value of investing in web services, "Businesses have had so many false bounces in the bear market of IT expectation that they just can't bring themselves to buy any more." John is right to speak of a "backlash". In many organizations, that backlash is going to throw out the baby of distributed services innovation along with the stale bathwater of inflexible enterprise-scale computing, just at the very moment when businesses should be buying into the new distributed architectures.
He's probably right about the benefits of web services - they offer the opportunity to stitch together truly best-of-breed systems from component parts, instead of the dangerous all-in-one nature of past promises. Can you blame companies for being gun-shy, though? Look at it from the business side - the IT group has switched technology bases how many times since 1993 - and, in terms of business benefit, to what end? Tons of money was spent on Java conversions, for instance - and the best end result of that was for the IT shops to end up right back where they started before the conversion. Again, from the standpoint of the business folks, what did all that spending accomplish? Precious little, which is why they are so unready to invest more - especially in a down economy.
Today's Dilbert strip skewers the multiplicity of management theories - but I think it could just as easily have been skewering development methodologies....
I used to teach the ParcPlace Introduction to Smalltalk class. Years later, I remember what used to happen on Wednesday when AspectAdaptors came up - a collective huh???, followed quickly by a blank, glazed-over stare. The confusion hasn't eased over the years; I still see many questions on this, even from experienced Smalltalkers. So what are they, what are they useful for, and why would you want them?
Well, consider a simple VisualWorks application - a class Counter with one instance variable - count. Here are some methods:
count: aNumber
    count := aNumber

count
    ^count

add: aNumber
    self count: self count + aNumber
All pretty simple, right? Now let's have a UI on that - a simple UI that shows us the current value of the count, and gives us a button - each time we press the button, we send #count: to the Counter instance with an argument of 1. Easy enough to string together, but how do we get the UI to be aware of the changed value in the count variable, and display the new value? Well, there are two routes to follow - the MVC dependency model, and the new trigger event system. Here I'm going to use the MVC model - I'll post an alternative using events in the future.
Ok, so the first thing is to change the #count: method as follows:
count: aNumber
    count := count + aNumber.
    self changed: #count with: count
Ok, that signals a change in the Counter object - but how does the UI pick it up? Well, one (complex) way would be to make the UI a dependent of the Counter object, and then implement an #update:with:from: method. This method might look like this:
update: anAspect with: aValue from: aModel
    anAspect = #count ifTrue: [self counter value: aValue]
There's an obvious problem with this - as the number of objects of interest (in the domain and in the UI) grows, the #update method grows into a complex case statement. There's got to be a better way, right? It turns out that there is - AspectAdaptor. This object does exactly what the name sounds like - it adapts messages that are sent by the domain into ones that are understood by the UI. If you used the VW GUI builder, and your UI class has an input field for displaying counter, then you have a method that looks like this:
counter
    "This method was generated by UIDefiner. Any edits made here may be lost whenever methods are automatically defined. The initialization provided below may have been preempted by an initialize method."

    ^counter isNil
        ifTrue: [counter := 0 asValue]
        ifFalse: [counter]
That's defining a ValueHolder - a wrapper around a number in this case. We want to replace that with an AspectAdaptor. Here's what that looks like:
counter
    "This method was generated by UIDefiner. Any edits made here may be lost whenever methods are automatically defined. The initialization provided below may have been preempted by an initialize method."

    ^counter isNil
        ifTrue:
            [counter := (AspectAdaptor forAspect: #counter)
                subjectChannel: self model;
                subjectSendsUpdates: true]
        ifFalse: [counter]
That assumes that you have a line like this in #initialize:
model := Counter new asValue.
So what does the adaptor do? It sets up the communication channel for dependency updates for you automatically, on a per-object basis. The adaptor is now a dependent of the Counter object, and is specifically looking for the #counter aspect when updates come through. Note that the aspect is set to #counter - it's entirely possible to have the get/set messages be different. But AspectAdaptors are more than one-way conduits from the domain up - they are also conduits from the UI down. If you allow user input, a change in the UI will be pushed to the domain. Have a look at the class comments for AspectAdaptor and ProtocolAdaptor (its superclass) to see what's going on.
What if you don't want changes going down as each field on a form is changed though - what if you want a form level update? Have a look at BufferedValueHolder. You can use that to control when updates flow down, via a true/false switch.
This is all documented in the GUI Developer's Guide as well - you should read that over for a more in-depth explanation. If you have questions, send them to me.
Well, that was maddening. I've noticed that using pre tags (to escape code posts) has caused me grief here. I just figured out why. For presentation purposes, the server converts all CR characters into <br> tags. That's what I wanted - but it used to wreak havoc with tables I'd post - having br tags scattered through the table caused all sorts of interesting display issues. So I had the server look explicitly for table tags, and not convert within them. Turns out that doing the same with the pre tags gets rid of some presentation glitches as well. The joys of HTML....
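The conversion logic amounts to: protect the table and pre blocks, and convert line endings everywhere else. Here's a rough Python sketch of the idea (purely illustrative - the actual server code is Smalltalk, and the function name is made up):

```python
import re

def newlines_to_br(html):
    """Convert line breaks to <br> tags, except inside <pre> and
    <table> blocks, where literal line breaks must be preserved."""
    # Split the document into protected blocks and everything else;
    # the capture group keeps the matched blocks in the result list.
    parts = re.split(r'(?is)(<pre>.*?</pre>|<table>.*?</table>)', html)
    out = []
    for part in parts:
        lowered = part.lower()
        if lowered.startswith('<pre>') or lowered.startswith('<table>'):
            out.append(part)                     # leave protected blocks alone
        else:
            out.append(part.replace('\n', '<br>\n'))
    return ''.join(out)
```

The same split-protect-rejoin trick works for any tag you want to exempt from the conversion.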
My thoughts about inheritance have been evolving since I learned Java. The revelations I've had may not seem like much to a Smalltalk programmer, but they represent a complete shift in my thinking.
Nearly every introduction to OO concepts I've ever read or seen has dealt with inheritance very early, and then moved on to a cursory discussion of "polymorphism" as a sort of nice side-effect of inheritance. Partly as a result of this (and partly because of the underlying misunderstanding) the word "inheritance" is usually used to refer to some combination of behavior inheritance and subtyping.
Java is the first language I have used that mostly separates those two concepts. Extending or implementing an interface represents a subtyping relationship, whereas extending a class represents the more traditional combination of subtyping and behavior inheritance. The process of using and designing with interfaces has brought subtyping out of the shadows and into the foreground of my thinking.
And the conclusion drawn:
I realized almost immediately after posting that that Smalltalk and its ilk (including, for example, Ruby) have essentially the characteristic I was talking about, where subtyping and inheritance are separate concepts. With those languages in particular, that distinction is present because there is no subtyping at all ... the notion really doesn't exist in those languages, because typing as Java and C++ folks think of it doesn't exist.
As soon as you see what unlimited polymorphic behavior can do for you, it's a real kick in the pants. I'm very pleased to see that the ideas that came out of Smalltalk (and Lisp) are finally coming to light in the wider developer community.
So now it's April 2003 and I'm hearing that .NET is dead--that Microsoft will continue downplaying both the name .NET and the technologies behind it. You can find hints all around that this ".NOT" strategy might be happening right now. The 64-bit versions of Windows Server 2003 (once called Windows .NET Server, by the way) contain absolutely no .NET bits at all: No .NET Framework and no ASP .NET. Exchange Server 2003, the company's next major messaging server, contains no .NET. Office 2003, the premier office productivity suite, contains XML functionality only in the high-cost business versions and contains few native .NET features. In the biggest year ever of new product introductions from Microsoft, few if any of its products promote .NET, its supposed vision for the future.
I can't say that I follow this stuff closely enough to know what's going on; still, omitting .NET support from the 64-bit platform does seem curious. Is it work delayed because MS thinks that 64-bit migration will be slow, or is it a de-emphasis of .NET? If it's the latter, it could easily be an issue with the huge installed base of VB developers not wanting to make (what sounds like) a difficult move to VB.NET. Whatever the reason, it's interesting - and bears some watching.
I've added a search by category in BottomFeeder - as well as an aggregated search by (title, body, category). Additionally, you can select an item in the item table and execute a search by that item's category or title.
The Module Support thus far is documented here. I'll be updating the page as things develop. I've copied the list here as well, but that link will have ongoing updates....
- Implement Module Support
- Many RSS features in 1.0 and 2.0 feeds exist in namespaced elements. BottomFeeder has added support for the following modules:
- Administration Module
- Support for errorReportsTo - user can send mail to the listed owner if there are issues with the feed
- generatorAgent - gathered, but not used
- ChannelDescription Module
- language - listed in feed properties
- creator - listed in feed properties
- rights - listed in feed properties
- date - listed in feed properties
- Content Module
- Used to grab description if present
- ItemDescription Module
- subject - used for category support - searchable and displayed
- creator - unused
- date - used instead of pubDate if present
- Syndication Module
- updatePeriod - used to determine whether to check for updates
- updateFrequency - used to determine whether to check for updates
- updateBase - used to determine whether to check for updates
- WellFormedWeb Module
- CommentAPI - used to offer comment posting to sites supporting it
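For the Syndication module, the three elements combine into a single polling interval - updateFrequency updates per updatePeriod, counted from updateBase. A sketch of the arithmetic (the helper and its defaults are my own illustration, not BottomFeeder's actual code):

```python
# Seconds per updatePeriod value, per the mod_syndication element names.
PERIOD_SECONDS = {
    'hourly': 3600,
    'daily': 86400,
    'weekly': 7 * 86400,
    'monthly': 30 * 86400,    # approximation
    'yearly': 365 * 86400,    # approximation
}

def poll_interval(update_period='daily', update_frequency=1):
    """Seconds to wait between checks of a feed. updateFrequency
    means 'this many updates per period', so divide the period."""
    return PERIOD_SECONDS[update_period] // max(1, update_frequency)
```

So a feed declaring hourly/2 asks to be checked every half hour, while daily/1 (the usual default) asks for one check a day.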
Via Coding the Web comes this example -
Then I just saw this posting from Jason: Last night my journal was hacked. All of my Movable Type weblogs were deleted and the archives destroyed. Looks like they got in by just using my username and passwd which is not good.
This is why my posting API uses encryption, and why I don't support the Blogger API on this blog - that pathetic excuse for an API passes usernames and passwords in the clear.
On Windows, BottomFeeder can read in the IE Favorites menu. Up until now, it's just been building a matching menu and slapping it under the Browse menu. Now, there's also a set of pseudo-feeds for each menu category added to the main tree. That makes it easy to access Favorites without leaving BottomFeeder.
There have been a number of posts on mobile blogging (moblogging, they call it) using a cell phone as the interface. Is it just me, or is that the most useless idea ever? I can think of few things more irritating than trying to add a post via the tiny phone number pad. I really don't want to read acronym laden phone posts either - wait until you have a keyboard handy....
I really dislike the kind of patent suits that SCO has brought, but from this story it seems to be working. MS is buying rights to Unix sources:
No financial terms are being disclosed in the deal, under which Microsoft will license SCO's Unix patents and underlying technology called source code. But Microsoft's move suggests that the software company's lawyers view SCO's patents as important, and could encourage other companies to strike similar pacts.
Spotted via the .NET guy
There's more stuff out there on RSS and bandwidth:
RSS Bandwidth Usage We're all starting to see the inevitable issues with RSS bandwidth usage. Something that occurred to me in the shower that I haven't heard anyone talk about (although I am a veritable newbie compared to people like Dave Winer, Ben Hammersley, Bill Kearney and others) was can the feed itself specify the allowed polling interval ? I.e. if you're me and you update your blog constantly then I'd want to allow a fast polling interval but a lot of us don't update as frequently and they could set a daily interval. Or has this been hashed out, discussed and I'm just clue free ? And yeah I know that some aggregators would ignore it but it could be implemented as a default setting that the user manually forced if they needed to. The same way you can force a browser to refresh.
IMHO, the whole feed polling thing is a hack. BottomFeeder supports it, but there's a simpler answer (which Bf also supports) - conditional GET. You send a small query to the HTTP server and ask if the doc has been updated. If it hasn't, you get a small 304 Not Modified answer, and you don't fetch the whole thing.
This is the prior art - it was invented years ago to solve scalability issues at the server. HTTP already has an answer to this problem. The various proposals to implement hacks with update intervals on fully dynamic feeds will make the problem worse, not better - let the HTTP server do what it's good at, and push out a static feed. Have aggregators do the right thing, and check for updates before pulling the whole feed. It ought to be simpler to have aggregators support the existing infrastructure (which works, btw) than to support some new thing that purports to solve the same problem.
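For the curious, conditional GET boils down to echoing back the validators from the last response and treating a 304 as "nothing changed". A small Python sketch of the client side (names are illustrative; this isn't BottomFeeder's code):

```python
def conditional_headers(last_modified=None, etag=None):
    """Build request headers that ask the server 'has this changed?'
    by echoing the validators saved from the previous response."""
    headers = {}
    if last_modified:
        headers['If-Modified-Since'] = last_modified
    if etag:
        headers['If-None-Match'] = etag
    return headers

def is_unchanged(status):
    """304 Not Modified - the cached copy is current, no body was sent."""
    return status == 304
```

On a 200, save the new Last-Modified and ETag headers for next time; on a 304, just keep the cached feed and skip the download entirely.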
The New York Times (registration required) has taken notice of Wikis as a possible business tool:
Whether wikis will make it in the mainstream business world, though, is an open question.
Software programmers have sought for decades to design products that help people collaborate in the virtual world as easily as they do in the real world. E-mail is by far the most successful result, but it is linear and best suited for back-and-forth communications involving two people or a small group. On the other end of the spectrum, groupware programs like the Lotus Notes software sold by I.B.M. are elaborate attempts to mimic work environments, with multiple levels of authorization, defined work flows and lots of rules - just like a corporation.
The most distinctive characteristic of a wiki is that anyone in the group (or for public wiki sites on the Internet, anyone who visits) can edit, modify or even delete material on the pages. Such a free-form collaborative process can be messy and chaotic, and it requires a commitment to the group that may not sit well with some egos. But over time, wiki advocates say, a group voice or consensus emerges into what some enthusiasts call "emergent intelligence."
The creative anarchy of the wiki is the philosophical inverse of conventional corporate groupware software. Groupware's highly structured rules and processes do not always reflect the way people really work. Employees often ignore costly corporate-sanctioned software and revert to informal social networks - whether simply e-mail or impromptu water-cooler discussions.
Ward Cunningham, who created the first wiki in 1995 and is the author of "The Wiki Way," a manifesto and how-to manual published by Addison-Wesley, says a wiki is a medium for connecting an electronic community and allows "idea keeping." A wiki presents its members with a blank slate, and their entries determine its structure and organization.
They even point out that there's a market for them - they reference a commercial Wiki vendor (SocialText, http://www.socialtext.com/), which seems to be a small startup:
The SocialText software, which starts at a price of $995 a year for five users, is being used in about 20 companies, typically small businesses or departments within larger ones, according to Ross Mayfield, SocialText's chief executive.
The Smalltalk team at Cincom was an early adopter of Wikis - we run a variety of internal and external wikis, and have found them to be highly useful for sharing information. One thing that we have found - of all the readers, a limited subset will actually add content. And of that subset, a smaller group will do wiki maintenance - cleaning up dead links, re-organizing pages that have gotten too chaotic, etc.
Even if the licensed source and patents were completely useless to Microsoft, they would still greatly benefit from this. By providing cash to SCO, Microsoft is effectively sponsoring SCO's legal maneuvers against Linux. A strong cash position for SCO reduces the chances of a near-term settlement with IBM.
Fascinating. Could be the case, and it would certainly give SCO enough money to cause a few headaches.
Lots of people are talking about NewsGator, an RSS reader that plugs into Outlook. Even if I wasn't writing BottomFeeder, I wouldn't use anything that relies on the virus attracting menace that is Outlook. I don't have the time in the day to spend keeping up with security patches. I'll stay with Eudora, thanks.
Favorite session: Dave Thomas' Ruby For Java Programmers. I'm interested in Ruby but figured that I'd never get around to looking at it, but Dave inspired me to download it today. I love the idea of blocks.
If you like blocks, visit why smalltalk and give one of the Smalltalks a whirl. You won't regret it :)
You could've gone to the conference and practically never heard about Java. James Duncan Davidson did 4 sessions, none on Java (my biggest regret is not going to one of his sessions). The closest you'd have gotten to Java is Stuart Halloway's 3 hour session on "XML Schema for Java Programmers". There were 4 sessions that discussed .NET, one on Objective C, plus Dave Thomas' Ruby session and his overall "try Ruby" mantra in the keynote and panel discussion. It's half scary, and half really cool. Really cool in that it's a gutsy thing to talk about the "competition", scary in that I felt like I might be looking at the future of Java - except that there was no Java, at least not as we know it today.
Dynamic typing seems to be much less controversial than I expected. It seemed like the goodness of dynamic typing was pretty much a given in the panel discussion. Talking to some other attendees, I'm not sure if the message sunk in though. Dave Thomas gave some great reasons why Java's type system isn't what it's commonly believed to be, though I still think that the best argument is that once code hits the network, all typing is dynamic.
The world is ready for Smalltalk - is Smalltalk ready for the world?
I was on a conference panel recently. Someone in the audience asked a question about the future of SOAP and other similar messaging middlewares. The panelists all answered in various interesting ways. When it was my turn I just said: "I'd rather use a socket." To my surprise I got a polite sprinkling of applause in response.
I think we have gotten framework happy. If a framework exists we feel honor bound to use it. We are like the construction workers in Douglas Adams' "The Hitchhiker's Guide to the Galaxy" who were building a bypass through Arthur Dent's house. When he asked them why they were building it they said: "You've got to build bypasses." I think the industry should join frameworks anonymous and swear off gratuitous framework adoption. We should all start using sockets and flat files instead of huge middleware and enormous databases -- at least for those applications where the frameworks and databases aren't obviously necessary.
I can agree with that. Complexity just wends its way through the software industry....
I've gone ahead and implemented the Blogger API and the MetaWebLog API in my blog and the front end posting tools. I have no plans to actually post the blogger and metaweblog servlets - those apis pass usernames and passwords in the clear - something I'm not about to do. Still, having the client support will allow me to post at least a rudimentary blogging tool for BottomFeeder with the next release. There's a ton of refactoring to do - it's all pretty sloppy at the moment!
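To see why clear-text credentials are a problem, you only have to look at the request body those APIs generate. A Python sketch (the blog id and credentials here are obviously made up; metaWeblog.newPost is the real XML-RPC method name):

```python
import xmlrpc.client

# Build the XML-RPC request body for a metaWeblog.newPost call -
# without sending anything over the wire.
payload = xmlrpc.client.dumps(
    ('myblog', 'jarober', 's3cret',
     {'title': 'Hello', 'description': 'A test post'}, True),
    methodname='metaWeblog.newPost')

# The password sits in the XML as ordinary text - anyone sniffing the
# HTTP connection reads it straight off the wire.
assert 's3cret' in payload
```

Over plain HTTP, that payload is exactly what every router between you and the server gets to see.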
Scott explains the difference between code no one else will see, and code others have to deal with:
Even though no one else is looking at the Feedster code base, I'm starting to accept that it is a very real thing in the near future -- and that's scary.
Now for non-engineers, you're probably thinking "Oh Scott, that's just a load of absolute hooey". Well here's an analogy for you -- I'm dealing with the mental equivalent of taking my clothes off with a new lover for the first time. Does that make it more clear to you? Now for an engineer who takes an awful lot of pride in his work, this is a big deal. And just as one might go to the gym or dress up before a new lover, I'm doing the same. Of course my work isn't going to the gym -- it deals with "dress up" routines like:
- Clean up
But you know what? It's pretty much the same thing. Oh and in case you are wondering, yes, I can relate most technical issues to the male / female interaction. Not sure what that says about me (and no comments on that one).
Yes, I've noticed that I tend to be a whole lot more careful with BottomFeeder code than I am with the blog code - probably because other people are contributing to BottomFeeder - as Scott says above.
Ok, so I'm looking at Trackback auto-discovery. Assuming the ref is on the page, this is pretty simple. However, I'm looking at Sam Ruby's site, and no such refs exist. And yet his comments are filled with trackbacks. So I do a little investigation, and discover that the trackback refs are in the RSS feed. So let's say I'm using a posting tool - I make a reference to his site. How do I auto-discover the trackback url for a post? Well, I have to download each RSS feed (there are a bunch there), parse it, look for trackback refs (to the specific post, no less), and then glom that information into my post. Is it just me, or does that seem like a huge, huge waste of resources?
When I'm adding a comment, getting this ref is easy - the aggregator can just look at the post information, see if there's a trackback url, and go to town. But on the posting side, it's just ugly. Put the frelling references on the html page being linked to, and make life easier for all of us. Gads.
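With the refs embedded in the HTML page, auto-discovery really is that simple - roughly this (a sketch in Python; a real implementation would use a proper parser, and the function name is mine):

```python
import re

def discover_trackback(html, permalink):
    """Return the trackback ping URL advertised for the given permalink,
    or None. Looks for the RDF block Movable Type embeds in pages."""
    for block in re.findall(r'(?s)<rdf:RDF.*?</rdf:RDF>', html):
        if permalink in block:                      # the block for our post
            m = re.search(r'trackback:ping="([^"]+)"', block)
            if m:
                return m.group(1)
    return None
```

One fetch of the page you're already linking to, one scan - versus downloading and parsing every feed on the site.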
Sun continues to wander down pathways others have trod before, while believing that they are the first ones to have visited:
The ACE project lets non-programmers draw programs by connecting boxes and lines. It's been tried before. Java developers will get a chance to check it out when Sun delivers the source code to ACE at JavaONE next month. "A common criticism has been that Java is very hard to use. Ace will bring the tools and facilities of Java to more people."
Point your browser here to get the latest ObjectStudio EVOL build. After registration, go to the ObjectStudio section, and grab the EVOL. Enjoy!
In Why Standards?, Jim Waldo points out the difference between de facto standards and de jure standards. The most important point to be made is that standards bodies are not a place for invention, but a codification of existing practices. Rushing to standardize before there's enough existing practice to learn from is a mistake.
Very much the case. Smalltalk, for instance, standardized in the late 90s - at a point where it was becoming clear that namespaces (for instance) were an important need in the language - but also at a point where no one had done anything yet. I think Smalltalk ended up rushing to standardize....
The end of Buffy the Vampire Slayer - I hope Joss wrapped it up well. I'd be watching right now, but the Replay is recording it, and I'd just as soon skip all the ads.
I won't toss any spoilers - I know there are people who aren't near the end of the season yet (in Australia, for instance). It ended well, I thought - there were a lot of parts of the episode that reminded me of the end of season 5 - the feel was very much the same. I'm glad they went out well, instead of trying to pull it along for another year.
There's a trackback module out there, and I'm in the midst of adding support for it to BottomFeeder. The idea is, when you use the existing Comment API support, BottomFeeder will try and send trackbacks to all the urls you reference, assuming there are matches in your feed list that list a trackback url in their feed. I'll be adding the same support to my posting tool as well - I really have to add a settings tool for that before I can release it.
Spotted this on Ted Leung's blog
The second presentation was on Model-Driven Development using Rational XDE. This just didn't do much for me, because I'm not a fan of RUP or ROSE, etc. I've used some tools to produce RUP diagrams from code, but I've never found tools like this to be helpful in the forward direction. Mostly they make it easier to deal with the structure of classes and objects, but specifying control flow via sequence diagrams is less efficient than banging out the code. Unfortunately for Rational, the speaker agreed -- he said he frequently writes code that he uses to generate the sequence diagrams. The presentation was short, and not that much of a product pitch.
Kind of funny that the speaker did that, actually...
I've been upgrading BottomFeeder and the blog all day - I added in the trackback module to the blog last night, but had to tweak that some today when I started looking at supporting it from BottomFeeder (the comment tool) and my posting tools (for this blog). In the process of doing all that, I now have the following:
- When you use the comment tool in Bf, a check will be made of each url you reference against the trackback urls gathered by Bf. Any matches will be sent a trackback
- I fixed my posting tool so it sends trackback information (using the same mechanism) to the back end. I had forgotten this when implementing the tool - only the web form actually did anything with the trackback field. Dohh!
- I added a 'regenerate feed' option to the Feed menu in BottomFeeder. If a feed changes formats, the cached items won't have any of the new information - this option lets you rebuild a feed without the remove/add cycle
This took a whole lot of testing before I was happy with it - and I still have to see how the blog deals with it in production.
I would have greatly preferred to see this in HTML (although not the atrocity that is HTML produced by Word - bleah) - but it's a useful summary of where RSS is.
Remember the great "should Google index blogs" discussion? Well, it continues... Microdoc News notes What Google Leaves Out, an interesting analysis of which 30% of the [estimated] 10B web pages Google indexes.
RDF has ignored what I consider to be the central lesson of the World Wide Web, the "View Source" lesson. The way the Web grew was, somebody pointed their browser at a URI, were impressed by what they saw, wondered "How'd they do that?", hit View Source, and figured it out by trial and error.
This hasn't happened and can't happen with RDF, for two reasons. First of all, the killer app that would make you want to View Source hasn't arrived. Second, if it had, nobody could possibly figure out what the source was trying to tell them. I don't know how to fix the no-killer-apps problem, but I'm pretty sure it's not worth trying until we fix the uglified-syntax problem.
Pretty much the case. If I want an unreadable format, it may as well be binary so that I get some benefit. An unreadable text format is just.... useless. Here's one problem - take a look at the examples in the link above - if you slap an rdf: prefix on everything, it no longer provides meaning - it just clutters up the page. In other words, you can simply assume it and remove the blasted thing - leaving in the namespaces that actually carry semantic value (i.e., the modules). What am I missing here?
One use Scoble has for blogging:
I use this weblog for a variety of purposes, but lately it's just to keep track of useful stuff on the Internet that I might want to look at later. Believe it or not, I use Google to find links that I've put on my weblog in the past. For instance, when I need plumbing supplies, I just search Google for "scoble plumbing supply" and up comes my weblog where I talked about a plumbing supply place.
I was recently suprised when I googled for an answer to a .NET question and one of my own posts was the first hit. I don't know whether I was more amused by this or annoyed at having forgotten the answer to the question and that I had posted it.
There it is - search Google for your own commentary. I have to admit, I've done the same....
Ted Neward has a post that - on the whole ends up nodding in the direction of dynamic languages, but he confuses some terms:
Recently, as part of the NoFluffJustStuff conference in Denver (the Rocky Mountain Software Symposium), I participated in a speaker panel with Dave Thomas, the Pragmatic Programmer and recent apostle of the Holy Word of Untyped Programming (also known as Ruby). He speaks about loosely-typed languages and their benefits, and one of the questions asked of the panel was our opinions on loosely-typed languages; Glenn Vandenburg, another speaker at the show, blogged about my/our responses
Sigh. It's not untyped - it's dynamically typed. Whether a language is manifestly (statically) typed or not has nothing to do with whether it has strong or weak typing. C++ and C are both statically, but weakly, typed: you can send a message to an object that doesn't implement a matching method, and get a seg fault when the code tries to go ahead anyway. In Smalltalk, that can't happen - you get a well understood exception, which is quite different. He does bring up Dave Thomas' points on dynamic typing:
Dave raised a good point during the speaker panel, though. He pointed out that even though he's been programming in a loosely-typed environment (Ruby) for quite a while now, he's not found himself making the stupid mistakes that the strongly-typed environment is supposed to be protecting us from. If those mistakes aren't happening, then are we sacrificing flexibility in the system for nothing?
It's a good thing, to my mind, that people in the Java world are questioning assumptions about typing systems. I just wish they would get the terms right....
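For the record, here's what that "well understood exception" looks like in another dynamically typed language - a tiny Python sketch of the same behavior Smalltalk's doesNotUnderstand gives you (the class here is just an illustration):

```python
class Counter:
    def __init__(self):
        self.count = 0

c = Counter()
handled = False
try:
    c.increment()          # no such method on this class
except AttributeError:
    handled = True         # a clean, catchable error - not a crash

assert handled             # the failure is signalled, not a seg fault
```

Strong typing at runtime: the bad send fails immediately and visibly, and the program stays in a well-defined state - which is the distinction the "untyped" label loses.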
Someone told me today that it was Memorial Day this weekend (for non-U.S. readers that's a 3 day weekend near the end of May). I had absolutely no idea. None. And it struck me that this is really symptomatic of working from home exclusively. No coworkers to ask you what you're doing, etc. Very, very odd.
Yes, I've noticed the same thing - when you work at home, one day slides into the next, and you barely notice what time it is, much less what day it is...
I've added more module support to BottomFeeder - and to the blog as well. The Pingback Module - which is an awful lot like the Trackback Module - is now supported by BottomFeeder (via the Comment Tool) and by the blog (in the feed). I was puttering around with the blog code and the BottomFeeder code to get these working during the afternoon, in between bouts of telling my daughter to do her homework...
I got an email this morning with an interesting question:
I'm about to set up a web site. I'm curious as to your experiences. To start, I'm going to run with two machines: a firewall machine, and a main processing machine. The main processing machine will have my Postgres database, as well as my Wave app. Eventually, if I get some traffic volume, I'll move the wave stuff to a separate box.
My question is, how many wave images should I run? Should I just run one, and let all requests go there? Or should I run multiple smaller images, and let a Load Balancer manage them?
The main processing machine will have 1.5 GB of memory in it.
By way of answering, I pointed out how this site is set up. We run two Smalltalk images:
- The Cincom Smalltalk Wiki runs in one image
- Everything else runs in the second - this blog, the survey, and the NC Registration Application.
That second image runs a few other administrative applications, plus a few ad-hoc apps that run from time to time. I started with a single image; I split out the Wiki last year, mostly because the Wiki was a stable app (the code rarely changes) - while I muck with the blog code on a regular basis. I figured that the Wiki shouldn't be affected by my periodic tinkering. Thus far, this all scales fine - we certainly aren't a huge site, but we get a decent amount of traffic. There's always download activity for CST NC, for instance.
So ultimately, I advised the person who sent the email to start with one image - it's simple, and will likely scale for quite some time that way. Over time, that might change based on usage patterns - but there's no reason to set up a complex system right from the get go, IMHO.
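If traffic ever does outgrow a single image, the load balancer the emailer mentioned can start out as something as simple as round-robin. Here's a minimal sketch in Ruby (the class and the backend addresses are made-up placeholders; in practice the balancer would front several Smalltalk images on different ports):

```ruby
# Trivial round-robin selection over a fixed list of backends -
# each request goes to the next image in rotation.
class RoundRobin
  def initialize(backends)
    @backends = backends
    @next = 0
  end

  # Pick the next backend in rotation, wrapping around at the end
  def pick
    backend = @backends[@next % @backends.size]
    @next += 1
    backend
  end
end

lb = RoundRobin.new(["localhost:8001", "localhost:8002"])
4.times { |i| puts "request #{i} -> #{lb.pick}" }
```

The nice thing about starting with one image is that nothing in the application has to change when you add this layer later - the balancer just multiplies what's already there.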
A few days ago, Bob Martin commented on complexity in the protocol universe with the tongue in cheek comment: I'd rather use a socket. I commented on this here. Since then, there's been an utter failure to recognize this statement for what it was. Over on Sam Ruby's Blog, things started with Sam taking the comment seriously. It then proceeded to a rather long thread (scroll down) where poster after poster took the comment seriously.
Yeesh. The point, so far as I can tell, was that complexity for its own sake is a bad thing. J2EE, anyone? EJB? The nightmare that is the current version of MS Word (just try and put a bullet point where you want it, I dare you). The software industry seems particularly vulnerable to this - witness all the heavyweight development methodologies and tools, for instance (to which yes, XP and Agile are responses).
Sometimes I think that most developers have a motto rather like this:
Never pick a simple, straightforward solution where a complex, obfuscatory one can be used instead
"Trend Micro is alerting its solution providers and customers about a bug in an update to one of its security products that inadvertently blocked all incoming e-mail containing the letter P." Trend Micro said that only a few dozen users emailed them about the bug. Of course, they may have had trouble _osting _roblem re_orts. Rule 915. It has a nice ring. Could become a meme, like Catch-22.
Gordon Weakliem comments on the trend:
Also, Larry O'Brien says "it struck me that the biggest practical advantage of strong typing may be IntelliSense", which leads me to wonder if the next question is "why do we need IntelliSense?". John Lam is wondering about dynamically typed languages: "I wonder if it's just me, or whether the community that I frequent has this on its collective consciousness, but I've been spending quite a bit of time wondering about the benefits of dynamically typed languages." It's not just you, John.
There's a VW goodie that does Intellisense....
I decided to add geoUrl support to BottomFeeder today. There are two modules out there - here and here. What the heck; I support both of them. I only look for them at the channel level - I can't see any good reason to use them as item level resources. In any case, here's what I've done. I've added a new menu item to the feed level menu, Map it!. If the feed has the module, I enable that menu pick. Selecting it opens a browser that shows a map to the location in question. I may eventually do something else - for instance, Feedster is evolving support for GeoUrl, and I may well add a menu pick that uses that. We'll see what develops...
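For illustration, here's a hedged sketch (in Ruby, since BottomFeeder's Smalltalk source isn't shown here) of what reading channel-level location data can look like, assuming the feed uses the W3C geo vocabulary - the feed snippet and function name are my own invention:

```ruby
require "rexml/document"

# Made-up feed fragment using the W3C geo vocabulary at the
# channel level - the only level this sketch bothers to check.
FEED = <<~XML
  <rss version="2.0" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
    <channel>
      <title>Example blog</title>
      <geo:lat>39.7392</geo:lat>
      <geo:long>-104.9903</geo:long>
    </channel>
  </rss>
XML

GEO_NS = { "geo" => "http://www.w3.org/2003/01/geo/wgs84_pos#" }

# Returns [lat, long] as floats, or nil if the channel has no location
def channel_location(xml)
  doc  = REXML::Document.new(xml)
  lat  = REXML::XPath.first(doc, "/rss/channel/geo:lat",  GEO_NS)
  long = REXML::XPath.first(doc, "/rss/channel/geo:long", GEO_NS)
  return nil unless lat && long
  [lat.text.to_f, long.text.to_f]
end

lat, long = channel_location(FEED)
# A "Map it!" menu pick would then open a browser on a mapping URL
# built from these coordinates; absent the module, the pick is disabled.
```

Enabling the menu pick only when the extraction succeeds keeps feeds without the module from offering a dead menu item.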
I'll be speaking at the Ottawa XP/STUG, so I'll miss this:
Who: Uncle Bob
Hosted By: XpWdc
What: Talk and Book Giveaway
When: Thursday, May 29th, 7-9pm
Where: Room 310, Marvin Center, Foggy Bottom Campus of GWU, Corner of H and 21st St NW, DC
Please Indicate if you will be attending. No RSVP required but a general count would be helpful.
For the last few weeks I've been asking anyone who will listen if it isn't weird that our economy is based on software, more and more, yet users don't want to pay for software.
In the same breath I express sympathy for the music industry, because they're going through the same devaluation we went through in software in the 80s and 90s. An average song is a bit bigger than the average software program of ten or twenty years ago, so it has taken a while for the distribution pipes to catch up. Today songs travel freely over the Internet; some people are optimistic about people paying -- I am not.
Hmm. The problems are perhaps more similar than I had thought (in terms of the end result). In software, no one wants to pay for tools. This is in large measure due to the push from the industry heavyweights to go back to a free software model - IBM, for instance, seems to believe that a free tools model will help them sell truly high end software and services - which has the effect of squeezing the heck out of an awful lot of software vendors. Combine that with the rise of decent Open Source products, like PostgreSQL - and it gets even harder. The end result - an awful lot of potential consumers just download free stuff, and the market for tool vendors contracts.

Music has some of the same things happening - it's very easy to download music (and ultimately video as well) - so why pay for it? This helps some artists who have had great difficulties breaking into the fairly closed music world, while it upsets the apple carts of all the existing vendors.

Why is this happening? It's not just download ease - it's also the attitude and pricing models of a lot of the music industry. CD prices, for instance, have stayed at absurdly high levels - and this is where we see some similarity (again) with software. The existing vendors have gotten used to being able to charge high (and very limiting) license fees for software. Well, along comes Open Source - it may not be as good or as polished, but a free 80% solution seems better than the pricey 100% solution (and it's not as if a lot of the vendors have 100% solutions anyway). Back to music - the songs you download are often not as high quality as what you can get on a CD - but they are good enough.
It gets even odder. As this trend increases, you see the established players panic. Lawyers are deployed, lawsuits blossom. This has the effect of irritating the end users even more, which drives them further down the road towards free solutions. Look at the entertainment industry reaction to the ReplayTV - Commercial Skip and the ability to send shows got most of the major players to line up and have a complete snit. Most people are using this stuff for fair use purposes - and the hardline reaction merely torques them off. The software industry has the same problem. Attempts to increase leverage through ever more onerous licensing terms just torques people off.
And this is where Dave Winer (and a lot of other people) simply fail to see the problem. Markets change - just as Winer laments the passage of large IT shops, people in New England 150 years ago lamented the passage of large textile shops. The agile companies that saw change coming survived - just as the agile companies in software and music will survive this transition. The ones that don't survive will be the ones that keep looking back to the good old days in a vain effort to figure out how to get back there.
In the NY Times on Thursday, a stirring op-ed piece by Ellen Ullman, about what we've lost in software. In the 90s it was common for two or three generations of software developers to work in the same organization. There was a handing-down of ideas, practices, tradition -- the verbal history of how things came to be as they are, Ullman says. After the dotcom bust software is becoming a detail, again, something that workmen do, not artists. We lost something important when our folk heroes became the 20-something instant-multi-billionaire CEO. There's so much more to software than that, there really is. As I mentioned above, our whole economy is based on it. Our culture is too. Our culture? Please. Get out of the office and talk to some non-software people for a while, and see just how little of our culture has anything to do with software. Heck, there's a pretty large number of people who aren't even online - and it's not always (or even mostly) for price reasons. The software industry could stand a whole lot less navel gazing, IMHO.