Matt Croyden gets his vacation rained on. I got the same stuff here in Melbourne Beach, but I did manage to get in 18 holes this morning before it started. I'm hoping for clearer weather tomorrow; we are supposed to head over to Universal for the day.
Here's an interesting article on light pollution. I've been noticing this ever since I moved from a NY State suburb - with virtually no street lights, and thus no light pollution - to the Baltimore-Washington corridor (bathed in light 24x7). Back when I lived in NY, I could see lots of stars, and identify constellations. I noticed how bad things were in my area 2 years ago when my daughter was supposed to go out and look at stars - no identifiable constellations, and few enough that I almost thought I could count them! The article raises some interesting environmental and health concerns - and has some practical suggestions as well. Well worth a read.
This article talks about managing the deployment of Java apps with the JRE, but it seems timely to me as Product Manager for Cincom Smalltalk. Why? We are in the process of developing a simpler runtime deployment system for VisualWorks, and a lot of the issues in the aforementioned article - while written in the context of Java deployment - raise some good questions for any runtime deployment system. I'm going to have to keep a lot of the questions raised here in mind.
BottomFeeder now supports the 0.2 spec of the proposed Atom syndication format. I also changed the blog output - I now support RSS 2.0 for the main feed, RSS 2.0 for the comment feed, Atom 0.2 for the main feed, and Atom 0.2 for the comment feed. The Atom spec is still under development and still somewhat in flux, so the latter two links are likely to change as time goes by. I'm holding off implementing the Atom API until I'm sure the flux is reduced - it's not a lot of work, but I don't really feel like jumping around that much yet either.
In a 45-page document filed late Wednesday, IBM argues that because SCO distributed a version of Linux under the open-source General Public License (GPL), it can't claim that Linux software is proprietary. IBM also argues that SCO software violates four IBM patents and that the company interfered with IBM's business by saying it had terminated IBM's right to ship a Unix product, AIX.
IBM is seeking unspecified monetary damages and an injunction stopping SCO from shipping its software. The counterclaims came as part of Big Blue's answer to SCO's amended suit and were filed in the same federal district court in Utah.
That sound you heard was the other shoe dropping
Good article here on Javablogs today:
Do you know how to use a hammer? A screwdriver? A saw? Almost certainly you do. Does that mean you can build a house? Well maybe, but unless you have previous experience at building houses, I don't think I'd want to live in the one you built. Knowing how to use the tools is not the same skill as being able to build a house.
So why is it that so many people seem to think that knowing how to use a profiling tool means you know how to tune an application? For sure, having a profiler as opposed to not having one makes tuning much easier, just like having a hammer as opposed to not having one makes building a house much easier. But the tool increases your productivity, it doesn't enable the ability. You don't suddenly have the ability to build a house because you know how to use a hammer; you don't suddenly have the ability to tune an application because you know how to use a profiler.
That's a very good point. Another thing I see a lot of is assumptions - you get to a site where there's a performance problem, and the customers tell you (with complete assuredness) what's too slow. I find that 9 times out of 10, they haven't even run the profiler - they've just made a guess about the problem. However, the author of that article is correct - while profiling is useful, just using the profiler won't tell you everything. Sometimes you have to tune the memory configuration, and that only shows indirectly via profiling. There are plenty of other similar cases - and they vary by application/language/environment/platform.
Don Park reminds us where blogging sits in the grand scheme of things. That probably makes every blogosphere flame fest a real "tempest in a teapot" kind of thing. In any event, it's a useful piece of grounding information:
Most people on this planet know nothing about blogging. I doubt if more than 5% of Internet users know what blogging is. Stepping back even further, Internet users are only a small portion of the world population. If the world population was a pancake, Internet users are the top crust and bloggers are just a small tip of it.
The Yankees just got Jeff Nelson back in a waiver wire deal. Why Seattle let him go baffles me, and why the Red Sox let him go by baffles me - but I'm certainly pleased to see him back in pinstripes. The combination of Nelson and Rivera in the bullpen is a real killer. Now the Yanks are ready for the end of season run and the playoffs!
Go read Joel first, and then really think about it. Can you afford the loss of expertise that will come with the relocation?
Sam Gentile likes his new Mac. I've spent a few days working on one (tweaking BottomFeeder to properly spawn a browser) at my parents' house. One thing I'll say - don't come at OS X expecting it to be just like any other Unix environment. You do have Unix, but you also have the Mac environment. I'm not yet sold on it being particularly easier than Windows - at this point, I think whatever you are used to is easiest. However, I am intrigued. Maybe I'll consider one next time I'm in the market....
I stumbled across this article in comp.lang.smalltalk. Here's the fun part:
The extends keyword is evil; maybe not at the Charles Manson level, but bad enough that it should be shunned whenever possible. The Gang of Four Design Patterns book discusses at length replacing implementation inheritance (extends) with interface inheritance (implements).
Good designers write most of their code in terms of interfaces, not concrete base classes. This article describes why designers have such odd habits, and also introduces a few interface-based programming basics.
Interfaces versus classes: I once attended a Java user group meeting where James Gosling (Java's inventor) was the featured speaker. During the memorable Q&A session, someone asked him: "If you could do Java over again, what would you change?" "I'd leave out classes," he replied. After the laughter died down, he explained that the real problem wasn't classes per se, but rather implementation inheritance (the extends relationship). Interface inheritance (the implements relationship) is preferable. You should avoid implementation inheritance whenever possible.
Hmm. Yes, you should avoid deep inheritance hierarchies. But inheritance is a tool, not a problem. Maybe the problem is that Gosling and Holub aren't bright enough to know how to use it appropriately? Or maybe the locking they see in this comes from the combination with static typing.... Hard to tell, but clues are few and far between in this article. As often happens, Dave Buck has the best response to this silly article:
The first few examples given in this article actually point to difficulties with static typing, not inheritance. Smalltalk is immune to these since it uses dynamic typing.
The "fragile base problem" (for the benefit of people who didn't read the article) is that subclasses require knowledge of how the superclasses are implemented. If the superclass changes, changes may be needed in the subclasses to compensate.
Subclassing does allow a tighter coupling between classes than normal which makes it more fragile. Good design style is to reduce subclassing by using delegation instead. Good design style is also to avoid deep class hierarchies. I wouldn't say that subclassing is evil, though. It's like saying that lots of people are killed in car crashes every year, so we should avoid cars. Often, the alternatives are worse than the problem.
I have access to OS X this week, since I'm in Florida at my parents' house. I finally managed to get the Mac to spin up the default browser - Alan Knight told me about the 'open' command. So now, on OS X, the default browser will be used whenever you request an external browser. With that out of the way, I'm off to this site to play some Puerto Rico.
I've been adding more 'markup' support to the blog, in order to make my posting life simpler. Now I've got the following:
- Any line that starts with an * will be part of a bullet list
- Any line that starts with a # will be part of a numbered list
- Any line that starts with || will be a table, with || separating columns.
This is similar to (although not as extensive as) the markup support used by WikiWorks, the VW Wiki implementation. In any event, it makes it easier for me to add things like lists and tables to my posts, without having to punch out the full set of html tags...
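For illustration, the three rules above could be sketched as a small line-based renderer. This is a hypothetical Python translation - the blog server itself is written in Smalltalk, and the function and variable names here are invented:

```python
# Hypothetical Python sketch of the blog's line-based markup rules.
# The real implementation is Smalltalk; names here are invented.

def render(lines):
    html = []
    in_block = None  # None, 'ul', 'ol', or 'table'
    for line in lines:
        if line.startswith('||'):
            # || starts a table row, with || separating columns
            cols = [c.strip() for c in line.strip('|').split('||')]
            kind = 'table'
            cell = '<tr>' + ''.join('<td>%s</td>' % c for c in cols) + '</tr>'
        elif line.startswith('*'):
            kind, cell = 'ul', '<li>%s</li>' % line[1:].strip()
        elif line.startswith('#'):
            kind, cell = 'ol', '<li>%s</li>' % line[1:].strip()
        else:
            kind, cell = None, line
        if kind != in_block:
            # Close the previous block and open the new one as needed
            if in_block:
                html.append('</%s>' % in_block)
            if kind:
                html.append('<%s>' % kind)
            in_block = kind
        html.append(cell)
    if in_block:
        html.append('</%s>' % in_block)
    return '\n'.join(html)
```

The only subtlety is tracking when a run of prefixed lines starts and ends, so the wrapping `<ul>`/`<ol>`/`<table>` tags get opened and closed in the right places.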
Inc. columnist blames PowerPoint for - well, just about everything. It all depends on how and why you want to use it. This guy just goes off the deep end....
BitWorking has a good example of how the proposed Atom syndication format can be extended. The question I have is, how is this different from how RSS is used now? Sure, Dave Winer doesn't like namespaces for some bizarre reason. The fact is, no one cares, and people are using namespaced extensions in RSS today. I am so not seeing the point of all this....
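To make the point concrete, here's what a namespaced extension in RSS looks like in practice - a Dublin Core dc:creator element riding alongside the core item elements, parsed with nothing more than Python's standard library (the sample XML is made up for illustration):

```python
import xml.etree.ElementTree as ET

# An RSS item carrying a Dublin Core extension element alongside the
# core elements - dc:creator is one of the namespaced extensions in
# everyday use in RSS feeds today. Sample data invented for illustration.
ITEM = """<item xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>A post</title>
  <dc:creator>Some Author</dc:creator>
</item>"""

elem = ET.fromstring(ITEM)
# Core element and namespaced extension element, side by side:
title = elem.findtext('title')
creator = elem.findtext('{http://purl.org/dc/elements/1.1/}creator')
```

An aggregator that doesn't know the extension just ignores the namespaced element; one that does can use it. That's the whole mechanism.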
IDGNS: Your study has raised some eyebrows in the open source community. Why so?
Spindler: Regarding such legal principles as liability and warranty, the GPL clauses have absolutely no legal validity. Under the license, developers and distributors of open software are not liable for any problems with their products. The GPL avoids any wording that could imply liability. Such a license is simply unenforceable under German, or even European Union law for that matter.
IDGNS: Your study points to potential risks facing a number of groups involved in the open source value chain: developers, software companies and users. So, really, just about everyone who comes into contact with open source software in one way or other should be careful, right?
Spindler: Not everyone -- for instance, users who don't modify the software or distribute it. However, in the software developer community, liability is an unresolved issue. Consider developers working on a program from different countries. The legal question is: What sort of company is this? Is each participant liable or the group as a whole? Or consider a project in which one developer starts writing code and then hands over that code to another who continues writing and hands over to yet another. In this successive approach to code writing, is the author responsible only for the code he or she wrote or for all code in the final software product? The answer may differ in each jurisdiction.
It's interesting, and there's more to read. Well worth reading and considering.
Apparently, fans are upset over the new Battlestar Galactica miniseries coming to Sci Fi channel. Here's a clue folks - the original series was pretty lame.
I spent a large part of the day adding features to BottomFeeder. First off, Rich Demers made the new delete option to the html pane. I merged that in with the bug mail work I had been doing earlier. Then I added an option to persist items. If you mark an item as persistent, it won't be truncated (assuming you have that option set) off when you save your feeds. This is only in the dev stream so far, so bear in mind that it's work in progress...
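The persistence option boils down to a filter in the truncation pass at save time. A rough Python sketch of the idea - purely illustrative, since BottomFeeder is written in Smalltalk and these names are invented:

```python
from dataclasses import dataclass

# Illustrative only: BottomFeeder is Smalltalk, so this class and its
# fields are invented to show the idea, not the real implementation.

@dataclass
class Item:
    title: str
    age_days: int
    persistent: bool = False

def truncate_for_save(items, max_age_days):
    """Drop items older than the purge limit - unless flagged persistent."""
    return [i for i in items if i.persistent or i.age_days <= max_age_days]
```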
IT World has a fascinating article on the cost of saving disk space versus making application installation/maintenance easier. Their base point - with disk space so easily and cheaply available, we are worrying about the wrong things with shared libraries, etc.:
It seems to me that IT professionals spend an inordinate amount of time debugging problems that can be traced back to an anachronism in the way applications are built. The anachronism is the notion that disk space is more expensive than the person-hour cost of the poor customer installing the application. That used to be the case but is not the case any more.
In a world where a gigabyte of disk space costs less than a cup of coffee, why do developers regularly spend hours of expensive time (drinking multiple cups of coffee) in order to sort out problems that only exist because of a misplaced desire to save a gigabyte of disk space?
This is interesting. I hadn't given it a lot of thought, but there's something to this. I'll have to ponder this before I come to a conclusion, but it certainly made me think...
Starting with the introduction of the PDP into the base product, a new tool has been available to VisualWorks developers - the Process Monitor. There was a goodie version available in previous releases of VW, but it's now available as a menu pick straight from the launcher.
When you first fire it up (last item on the Debug menu of the launcher), you'll see a list of user processes. Right off, you'll notice that any open workspaces are running in their own process - this is an aspect of the MPUI (multi-process UI) that was introduced in VW 7.1. The bottom line on that is that there is no longer a singular (distinguished) UI process - so a process spawned from a workspace during experimental development won't hose down the whole UI.
Pull down the View menu in the process monitor - there are three options:
- Show all
- Show User (meaning, all ST processes launched via your actions)
- Show System (housekeeping stuff)
Switch to the system view - you'll see the idle loop process, the Low Space process, and a bunch of other things. You'll note that these processes are mostly at very high priorities - and that most of them are blocked most of the time. This explains how your system can become unresponsive if you get into a thrashing GC loop - the Low Space process is running at a priority of 91! If you get stuck in that, it's very hard to break out. If you see that happening, you might want to look at my memory management posts (here and here and here).
Now go back to the user view, and pull down the Process menu. Everything will be grayed out - because you haven't selected anything. So go ahead - select a workspace process and pull the menu again. You'll now see that only the proceed menu pick is disabled - because the process you have selected is, in fact, running. Go ahead and select Debug - bam, you get a debugger on that process. Go look at the process monitor - you'll see that the state is now suspended. Go ahead and hit the run button in the debugger, and notice that the state changes back.
This is very useful stuff. Say you have a background process in an application you are testing - and you suspect that the process is having a problem. Previously, it was fairly difficult to get into that process and figure out what was happening - now it's simple. Just select the process in the monitor, and either debug it or terminate it (depending on what your needs are). This can still be hard if your process becomes CPU bound and is running at a high priority, but it's a highly useful feature - I've made extensive use of it in BottomFeeder development.
Another nice thing you can do is use the dump option. That will dump the current execution stack for the selected process to a file. With a little work on your part, you could easily include this capability in an end user application as a diagnostic tool. The View option does almost the same thing, but dumps to a window instead of to a file.
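The same kind of diagnostic is easy to build in other environments too. As a rough analogy, here's what "dump the current execution stacks to a file" looks like in Python, using the standard library (this is a sketch of the general technique, not anything to do with the VW implementation):

```python
import sys
import traceback

def dump_stacks(path):
    """Write the current execution stack of every thread to a file -
    roughly analogous to the process monitor's dump option."""
    with open(path, 'w') as f:
        for thread_id, frame in sys._current_frames().items():
            f.write('Thread %s:\n' % thread_id)
            # Print the stack leading to this thread's current frame
            traceback.print_stack(frame, file=f)
            f.write('\n')
```

Wiring something like this to a signal handler or a diagnostic menu pick gives end users a way to capture "what was the app doing?" without a debugger attached.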
The last thing to look at is the sample time - by default, every 2 seconds the monitor looks at the current state of running processes. You can increase or decrease the frequency.
All in all, the process monitor is a very useful tool - if you haven't been using it, make sure that you take a look.
Maybe the Mac is an easier platform. All I know is, if you come at OS X expecting Unix you get a lot of near hits and a lot of misses. I was getting BottomFeeder set up for my Dad on his OS X machine - it took me a while to figure out the way application directories are normally structured. Then I ran into a font problem. Finally, it took me a while to figure out how to get an external browser launched. I'm sure this would all be simple for someone who actually knew the Mac, but that wasn't me. Today's lesson - every platform is difficult if you haven't worked on it.
I fixed a bunch of small but annoying issues with BottomFeeder:
- Added a 'subject' input field to the dialog that pops for 'email it!' on non-windows platforms
- In the bug mail dialog, added an option to cc yourself
- Fixed an error message issue in the Auto-Discovery dialog
- Mail settings entered into the mail dialogs will persist into settings now. I had not updated this when I changed the settings tool over
That should address some of the things I've been getting bug reports on.
Syndic8 pegs the sweet spot for RSS - replacing email newsletters with RSS feeds. The prevalence of spam causes two problems for traditional email digests:
- Many spam filters block newsletters, because they "look like spam" to the filtering software - so that subscribers lose the content they wanted in the first place
- Even when newsletters get through, the torrent of spam coming downstream makes it easier for the letter to be ignored or deleted
Enter an RSS feed and an aggregator - people who want the content subscribe and get updates as they happen, sans spam. Watch the switchover start to happen
CNet spots the soap opera that is the RSS vs. Atom fight. Here's something I found interesting:
"Dave Winer has on a number of occasions pointed out namespaces and said that they break interoperability," said Ruby, the RSS alternative advocate, who is a senior technical staff member at IBM in Raleigh, N.C., and a director of the Apache Software Foundation. "His RSS spec points to a list of namespaces, and it's extremely selective. It includes certain ones and not others. It's extremely confusing. I don't know anyone who knows what is and is not acceptable."
Wow - no one knows what's acceptable, and yet there are scads and scads of RSS feeds, and more readers than you can shake a stick at. For something that is so terribly confusing, it seems that somehow, people interested in the technology have managed to get past the personality problems of a few people who can't seem to get along. This has very little to do with technology, and an awful lot to do with a bunch of overwrought individuals who can't seem to "just get along".
Don't think so - just have a look at this:
The alternative - still in search of a name after being known variously as "Atom," "Echo" and "Pie"--would closely follow RSS technically but have different specifications. Ruby and other proponents say it would most likely wind up under the auspices of a standards organization, probably the Internet Engineering Task Force (IETF).
The degree to which the proposed alternative mirrors the fundamental structure of RSS is an indication of how much the debate has become a referendum on Winer's ownership of the format, rather than on the technology itself. While Winer relinquished his CEO duties at UserLand last summer, he retains his seat on its board of directors and remains the principal shareholder.
The new format looks an awful lot like the old format, but with all the tag names changed for fun. Heck, adding support for this nascent format in BottomFeeder didn't require any new domain objects - every single "Atom" artifact mapped straight over to an existing RSS artifact. To make matters more interesting, "Atom" adds a "capability" that is just plain stupid - mime encoded binary data sitting in the feed. Just what I want - downloading the same large dataset over and over again until it ages off.
The problem with Atom is that it's trying to do too much - the group backing it is getting into classic over-engineering mode, and trying to solve too many problems at once. What we need is a new, cleaner API for managing blogs (posting, editing, etc.). What we are going to end up with is one more format that needs supporting. The simple question you have to ask yourself is, if you have an RSS feed now, what value add do you get by switching to Atom? Probably none. The most likely result will be a need to keep pumping RSS, and support for the new format. Yay
I'm waiting to pick my sister up at a Florida Airport (Lauderdale) - and there's open Wireless access here. Very cool. I used the access to upload a new dev parcel for BottomFeeder - deleting an item now just flags it as deleted and causes it to not show - it can then be restored until such time as it naturally ages off the system (based on your purge settings). Next up - allowing items to be set persistent.
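The delete-as-flag approach is just a soft delete. A hypothetical Python sketch of the behavior described above (the real BottomFeeder code is Smalltalk, and these names are invented):

```python
class FeedItem:
    # Hypothetical sketch of the soft-delete behavior described above;
    # the real BottomFeeder code is Smalltalk and differs.
    def __init__(self, title):
        self.title = title
        self.deleted = False

    def delete(self):
        # Just flag it - the item survives until it ages off normally,
        # so the deletion can be undone in the meantime.
        self.deleted = True

    def restore(self):
        self.deleted = False

def visible(items):
    """Items shown in the UI: everything not flagged as deleted."""
    return [i for i in items if not i.deleted]
```

The payoff is that "delete" becomes reversible for free - the purge pass that ages items off does the actual removal.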
One question I'd have about the stuff Patrick points to - does it allow for updating installed software on the fly? It doesn't sound like it does, and to me, at least, that's a full stop. On my site, I'm doing live server updates all the time - it's just the way I work.
Charles Miller has issues with clothes shopping. My wife would say that's me - I've gone from my Mom buying my clothes to my wife buying them in one easy step ;)
"Gartner Group recommend that companies delay deployment of critical Linux applications, determine "whether Unix or Windows will provide functions equivalent to those of Linux deployments", and take a "go-slow" approach to Linux in high-value or mission-critical production systems."
As it turns out, as far as their internet presence goes, big companies are doing the exact opposite; over 100 enterprise sites run by probably the very same Fortune 1000 and global near-equivalent companies that received the SCO letter have switched to Linux since May, including Schwab.com.
About on a par with the rest of Gartner's advice
Overheard about Gartner
Gartner - Gartner belt : Gartner earnings call on July 31. Rumor has it earnings will be low, as they've been consistently every earnings call for the past three years. Also... word is there's a mysterious meeting called for the 30th for all of tech opps, where heads will likely roll.
I've tweaked the dev stream for BottomFeeder some. You can now delete items from the item pane - bearing in mind that such items, if they still exist on the feed site in question, will return on the next update. I also fixed up some menu enablement issues - some items were not properly becoming disabled/enabled based on selection state and/or online status.
Frans Bouma discusses "edit and continue" in development:
Debugging isn't about searching for forgotten quotes or a ';' at the wrong spot. It's about a totally different thing. Let's categorize some types of bugs to make understanding how to fix them a little easier, shall we?
- Functionality bugs. These are the ones at the highest abstract level: in the functionality the software has to provide. An example of this kind of bug is the ability to execute script in an email in Outlook (Express) and enable that feature by default.
- Algorithmic bugs. These are the ones at the abstract level below the functionality bugs. An example of this kind of bug is the (now patched) flaw in Microsoft's TCP/IP stack which marked the TCP/IP packets with numbers that weren't random enough which then could lead to data exposure via sniffing. The code was good, the algorithm used was bad.
- Algorithm implementation bugs. This is the kind of bug you'll see when an algorithm is implemented wrong. This type shouldn't be confused with the next category, however. Algorithm implementation bugs are bugs which originate in the developer's mind when the developer thinks s/he understands how the algorithm to implement works and starts cranking out code; however, the developer clearly didn't fully understand the algorithm, so a piece of code was written which will not function as expected, although the developer thinks it does.
- Plain old stupidity bugs. Everyone knows them: forget to update a counter, add the wrong value to a variable, etc. etc.
He then goes on to point out how edit and continue is not a good style for these issues. This is one of those places where Smalltalk (and Lisp) developers really part company with the rest of the developer community. Why is that? It's because for most people, the debugger is a forensic tool - the patient is dead, and the debugger is a tool you can use to figure out the cause of death. It's not that great a tool even for that in most cases. Now consider Smalltalk - the debugger is a browser where you also have the full context stack available. Unlike the patches on edit and continue in other environments, we can arbitrarily rewind the stack and start again from any previous point. We have a real code browser at our disposal, so we can investigate not just the in process object states, but also everything else. We can scan back to the starting point of the problem, fix that, and then start over. It's a highly useful thing to be able to do - I've used it to fix this server as people were actually using it.
The problem actually isn't one of right and wrong in debugger use, so much as different tools supporting different development cultures. This is why a lot of Smalltalkers read articles like Frans' and immediately roll their eyes - we are really talking about different things. And here's the clincher on that viewpoint issue:
People who grew up with assemblers, the gnu commandline C debugger and other terrible tools, know that debugging using a debugger is a last resort and also learned that debugging is not about using a debugger, but about understanding the difference between the sourcecode which should have been written and the sourcecode that is written
And people who grew up using Smalltalk or Lisp learned an entirely different lesson - which is one of the reasons why you see such frustration when Smalltalkers use Java, or C#, etc. - a lot of the basic development patterns are just different.
This reminds me of when I was programming in Windows 1.03: 640K wasn't enough memory to run both Windows and the compiler, so the edit/debug cycle was: Reboot your machine, fire up the editor exit the editor, compile, reboot, start Windows, start the debugger, run your app from the MS-DOS Executive, reboot again... this was all before CodeView (Microsoft's first Visual Debugger), so you didn't even think about stepping through the code... we've come a long way!
The ironic thing is, Smalltalk had edit and continue capability back then. So IMHO, we really haven't come all that far yet.
MS' idea of usefulness looks a lot like Sun's ideas - throw a bunch of new APIs at the developer community:
But, there are more new APIs being shipped with Longhorn than we've shipped in a long time with Windows. One reason we're gonna show you Longhorn so darn early (the PDC will be about two years before Longhorn will ship) is to give developers time to learn all about it. It will take that long to really get a handle on it. I've been reading all the top secret documents inside Microsoft and it's taken me months to just wrap my brain around what is going on. And I'm not trying to learn all the APIs.
Ok, explain to me how this is a good thing? Why do I want an explosion of new APIs? Are they going to be covering actual honest to goodness new ideas, or just new wrappers around stuff we already know how to do - in other words, an unfunded mandate on top of developers?
Via Scoble comes this link to a story about ZIP splintering. On the zip side, there's at least commercial interest to explain what's going on; in the RSS world, it's a bunch of ill behaved children unable to share their toys in the sandbox.
On the other hand, end users aren't even going to notice the various RSS imbroglios; two versions of ZIP are going to be noticed as soon as someone sends a zip file that they can't decompress.
Here's an interesting article on TCO as it relates to deploying Linux or Windows. It's not the usual open source jihad, or the pro-MS kind of pap some other analysts seem to reflexively put out either:
"If you've got somebody who's smart and can config it, then [Linux is] a beautiful desktop and runs well," says Meta Group analyst Thomas Murphy. "But for the average business owner, [it] does not have that kind of simple nature that you have in Windows."
Past all the hoopla, that pretty much captures it. Linux makes a great server platform - but as a client, you need to have a knowledgeable person set it up. Yeah, I can hear the yelling already about command line driven vs. dialog box, and about how great Gnome is. Doesn't matter - for the average PC user - who's not (and doesn't want to be) a power user - Windows is going to provide a better and simpler "out of the box experience". Of course, if my Dad and the rest of the Mac faithful get their two cents in, these same users would likely be happier with a Mac. After watching my wife fight with the digicam software, I might be inclined to agree.
My "favorites" list in Internet Explorer has over 1,000 websites. There are 20 top level categories. The "weblogs" category, which is a top-level category, has approximately 100 weblogs, most of which change at least once a day (and most of which I visit at least twice a month). This is too much information. Things flow by, they get registered in fleeting glimpses, and then they're gone, leaving only a tiny subconscious wake to show they've ever been there.
And then along comes something like RSS. If you don't want to read that FAQ, here's the short version: RSS is like a card-catalog for the web. It's an XML feed that tells you a little about a web page, including when it last changed.
Sums it right up - try BottomFeeder out if you are still reading this site in a browser.
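The "card catalog" view of a feed really is that small. Here's a minimal sketch of pulling those fields out of an RSS 2.0 document with Python's standard library (the sample feed is made up for illustration):

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 document, just enough to illustrate the structure.
SAMPLE = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <link>http://example.com/</link>
  <lastBuildDate>Mon, 04 Aug 2003 12:00:00 GMT</lastBuildDate>
  <item><title>A post</title><link>http://example.com/a</link></item>
</channel></rss>"""

def feed_summary(xml_text):
    """Extract the card-catalog basics: what the feed is, where it
    lives, when it changed, and how many items it carries."""
    channel = ET.fromstring(xml_text).find('channel')
    return {
        'title': channel.findtext('title'),
        'link': channel.findtext('link'),
        'updated': channel.findtext('lastBuildDate'),
        'items': len(channel.findall('item')),
    }
```

An aggregator is basically this, plus scheduling, persistence, and a UI.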
"'Don't touch open source software unless you have a team of intellectual property lawyers prepared to scour every single piece [of the open source code]. We offer indemnification, but many suppliers do not. A lot of companies are going to get very disappointed as we move forward. It will become a very challenging intellectual property issue,' he told Sun's Technology Forum in St Andrews, Scotland, this week..."
IMHO, it's a sign of desperation when you feel the need to use FUD to sell your products. But look - Gartner is piling on as well!
But companies nevertheless are being urged to delay Linux projects until the legal impasse is broken. "Don't ignore the problem by hoping IBM will win or settle its lawsuit, which could take a year or more. An IBM win would not prevent SCO from pursuing individual claims, which, if successful, could cost far more in penalties than buying a SCO licence would," advises George Weiss, a Gartner analyst.
Yeah, right. The risks might actually be greater in closed source systems. Have you had your legal team pore over the Windows codebase or the Solaris codebase for potential copyright issues? Could you have them do it even if you wanted to? Sheesh.
Cincom Smalltalk Developers, we want some feedback - how many of you are interested in the AMD Opteron and/or the Intel Itanium? For Windows or Linux? Send feedback to me.
SAN DIEGO, CA. - Here at the San Diego zoo, experiments last month with baboons have proved that higher primates can perform software testing, traverse complex menus, and code simple XML schemas. The finding have implications for the entire software industry, with some scientists predicting routine programming such as maintenance and report writing will be performed by teams of primates within 10 years
most subjects immediately understood Visual Basic 3.0, and even displayed some comprehension of the VB3 debugger and simple VB data types. Most subjects could change properties of custom controls in the Properties window, and displayed some understanding of advanced concepts such as read-only properties. Humans and higher primates share approximately 97% of their DNA in common. Recent research in primate programming suggests computing is a task that most higher primates can easily perform. Visual Basic 6.0 was the preferred IDE for the majority of experiment primate subjects. Some researchers observing the experiments commented that Visual Basic 3.0 was "way too easy for these baboons" to learn, and pushed for more Java testing.
Test subjects with the best results were baboons and bonobo apes. Both primate species demonstrated stressful behaviors when presented with Java tools and utilities.
Macromedia talks about a Flash-based RSS aggregator. I prefer client side solutions to this myself, but this would result in a pretty decent user experience for a web based app. Maybe I should grab it and try to push a Smalltalk server behind it...
This explains a lot of the RSS/Echo issues - the politics surrounding RSS 1.0 and RSS 2.0. Here's Bill's morning rant on RSS:
Ok, so folks have been asking me lately, why is the crap Winer calls 2.0 worse than using the 1.0 format?
Here's one perfect example. Let's say you download an RSS file and save it. Let's say you download a lot of them. Or let's even say your browser, when clicking on RSS files, downloads it and then hands it over (via MIME types) to a local program for handling it. Well, surprise, surprise, a 1.0 file contains its own URL. The file itself tells you what it's talking about and where it came from. A file using 2.0 has no way to do this. How dumb is that?
Hmm. Let's see - by that criterion:
- Web pages are useless. The URL isn't embedded.
- Most XML documents on the web are useless for the same reason.
Why do I care if the feed contains a URL? I'm not typically working with RSS files locally, I'm working with them over the net. I get the URL when I grab the feed. If it's redirected, my aggregator silently follows and changes my subscription appropriately. Once I've got an RSS document, it doesn't stay in that format - I create Smalltalk objects from it, and one of those objects - the Feed - knows the URL. And knows how to update the URL on demand.
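To make the point concrete, here's a sketch of the difference Bill is describing (the URLs are invented, and namespace declarations are omitted for brevity): an RSS 1.0 channel names its own feed document via the rdf:about attribute, while an RSS 2.0 channel's link element points at the associated web site, not at the feed itself.

```xml
<!-- RSS 1.0: the channel identifies its own feed URL via rdf:about -->
<channel rdf:about="http://example.com/feed.rdf">
  <title>Example Feed</title>
  <link>http://example.com/</link>
</channel>

<!-- RSS 2.0: link is the site's URL; nothing names the feed document itself -->
<channel>
  <title>Example Feed</title>
  <link>http://example.com/</link>
</channel>
```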
Bill goes on to explain all the other nifty things RSS 1.0 can do better than 2.0 in a non-blog context - which is fine. For many other uses, I'm sure that 1.0 has more power. For blogs and newsfeeds, RSS 2.0 is simpler and qualifies as good enough. IMHO, RSS 1.0 is overkill for syndication of simple content. Echo/Pie/Atom is moving towards needless complexity as well.
I just got done reviewing the new load balancing framework that our distribution guys are working on. I'm the Product Manager, so you might wonder why I was doing anything like a technical overview :) - well, it turns out that the DST based load balancer that has been available for VisualWave the last few years was co-authored by me. It was a quick piece of work - one of our engineers (playing consultant) and I built it for a customer back in the late 90's - we had 6 weeks to do it, so it was never as nice as it should have been - there were some pretty nasty hacks in there.
That gave me something resembling the experience necessary to take a look at the new system. It was also the case that the other co-author was on vacation, and a third guy was still settling in from having a second child - so there I was, ready and willing to do my bit for the team.
First off, the nicest thing about this new work is:
- It's got documentation already! It's not only understandable, but it also explains the ins and outs of the strategies being used in the system - and why you probably shouldn't go for a least-busy approach (something I learned the hard way on that project)
- There are a large number of tests, supporting both single image simulations of load balancing, as well as actual multi-image load balancing. The tests will aid in long term maintenance - and they help newbies looking at the system for examples
So right off my job was easier than the old days, when I was testing the hack job. For that, there was minimal doc, and no tests. Testing meant setting up three computers, starting the images and load balancer, and seeing what happened. That's still a critical test to run - but having other tests allows you to look at sub-systems in isolation.
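None of the actual Opentalk code is public here, so this is just a generic sketch of why a round-robin strategy is the safer default - the class and worker names are invented for illustration:

```python
import itertools

class RoundRobinBalancer:
    """Generic round-robin dispatch - a sketch, not the Opentalk API."""
    def __init__(self, workers):
        self._cycle = itertools.cycle(workers)

    def next_worker(self):
        # Round-robin needs no load reports, so it can't be fooled by
        # stale metrics. A least-busy strategy that polls load figures
        # tends to herd every new request onto whichever image most
        # recently reported itself idle - one reason to avoid it.
        return next(self._cycle)

balancer = RoundRobinBalancer(['image-a', 'image-b', 'image-c'])
picks = [balancer.next_worker() for _ in range(6)]
print(picks)
```

The same skeleton works whether the "workers" are Smalltalk images, processes, or hosts; only the dispatch target changes.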
What does all of this mean for you? Well, the load balancer is built in Opentalk - but it's independent of Opentalk in terms of what it can balance. It means that in the 7.2 version of VW (November 2003), you'll see a load balancing system that you can use for projects that need to scale across multiple running Smalltalk images. For 7.3 you'll see failover capability added - which means you won't have to rely on homebrew systems for that anymore. In 7.3 or 7.4, you'll see DST start to move on top of the Opentalk system, which will provide a much needed update to the core distribution framework of that product - and also give our engineers one less framework to maintain. So keep your eyes peeled, and get involved in vw-dev or vwnc so that you can watch the work as it proceeds.
The problem is, the JVM is a horrible, horrible platform for dynamic languages - performance suffers greatly. If the Mono project succeeds, it will be very clear that dynamic languages fare better on the .NET side. I know people involved in the .NET world, and they tell me that MS is aware of performance issues related to dynamic languages (like SmallScript) - and they intend to address them. Sun, on the other hand, has frozen the JVM since the mid 90's, and it's clear that they intend to keep it frozen. Languages like Smalltalk, Python, and Ruby will appear on the .NET (and Mono) platforms, and will have decent performance. Meanwhile, things like Jython will run like a dog on the JVM.
We have no intention whatsoever of working on the JVM, because we know how bad performance would be. Smalltalk is fast (and getting faster) now, but the perception of slowness from the late 80's and early 90's lingers. The last thing we need is a badly performing, widely used platform to bring that perception back to life. .NET, on the other hand, is part of our plans.
The CLR is always a strongly-typed typesystem; it's built into the core of the runtime, and cannot be abridged or avoided in any way (except through some dangerous unmanaged code accessed through P/Invoke, requiring necessary CAS permissions). There is no notion of "weak typing" as in other languages. But that doesn't stop the CLR (and JVM, among others) from providing the capability to interact with objects defined in this strongly-typed platform in a loosely-bound manner, through the Reflection APIs. Through Reflection, I can effectively defer all decisions about type-enforcement until runtime, gaining a certain measure of flexibility at the expense of compile-time assistance in proving program correctness.
- Dynamic typing != weak typing. For instance, C++ is statically, but weakly typed. Python and Smalltalk are dynamically, but strongly typed.
- Flexibility at the expense of runtime correctness? You mean I've never seen a Null Pointer Exception in Java?
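The "defer type decisions to runtime" idea in the quote is exactly what dynamically typed languages do by default. A minimal Python sketch of the same late-binding pattern (the invoke helper is invented for illustration, not from any real API):

```python
# Resolve a method by name at runtime, the way the CLR/JVM Reflection
# APIs let you defer binding decisions until execution.
def invoke(obj, method_name, *args):
    method = getattr(obj, method_name, None)   # runtime lookup, not compile-time
    if method is None:
        raise AttributeError(f"{type(obj).__name__} has no {method_name!r}")
    return method(*args)

print(invoke("hello", "upper"))        # late-bound call on a string
print(invoke([3, 1, 2], "count", 1))   # same helper, different receiver type
```

The difference is that in Python this is the normal calling convention, while on the CLR or JVM it's an opt-in escape hatch layered over static typing.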
McNealy clearly hasn't looked at his revenue numbers - he couldn't have, to be quoted saying this:
The OS war is over, as indeed is the OS, and Sun won. This interesting and challenging thesis was one of numerous presented by Scott McNealy at a European Technology Forum event in London this morning. Say what you like about Scott, he's generally good copy, even at 8.30am gigs.
Sun has commoditized their base business (expensive hardware running Solaris) with Java - sure, Java adoption is huge (much to my chagrin), but using Java on an Intel Linux box, or on a smart phone, represents a tiny level of income back to Sun - one that doesn't even pay for the cost of the JavaSoft group. Eventually, this is going to be noticed by the bean counters at Sun, and the results of that will be fascinating to watch.
Afaik, E&C is not planned for vs.net 2004 [C#], and frankly I'm happy about it, because the [C#] devteam can spend that time on other, more valuable features :). (E&C IMHO creates bad debugging styles. Debugging isn't about trial & error, which is the implication of E&C. It's about thinking through where the bug can be, fire up the debugger to test your thoughts, then think about a fix, think through the change implications and fix it. Test it and if it fails again, start the debugger to see why. This way you save a lot of time, instead of poking around in the code inside the debugger :) )
That's right, rationalize the useful feature away because your tools are sub-optimal....
There's been an interesting discussion of Java classes vs. Smalltalk classes - with a fair bit of misunderstanding all the way around - in comp.lang.java.advocacy. David Buck makes it all very clear:
I can see where Smalltalk people and Java people can get confused about classes and why Smalltalk people claim that classes in Java are somehow second class objects. Many of the Java people don't see the difference or the problem. Perhaps I could express it this way:
1) Java classes are objects
If you call getClass() on a Java object, the thing you get back IS an object. It has the same status as any other object in Java. It allows polymorphism, dynamic binding, and all the other good things objects do. To say that it's not a real object is incorrect.
2) getClass() always returns an instance of Class
This is where Smalltalk and Java differ. In Java, getClass() returns an instance of Class. There are no subclasses of Class. In Smalltalk, the "class" method returns an instance of a subclass of Class - an object which not only contains information about the class but can also have methods associated to that class.
Suppose in Java you ask a String for its class. You'll get back an instance of Class with its instance variables filled in appropriately for String. The methods defined for this object are the same methods defined for every other Class object.
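As an aside, Python's default object model happens to match the Java side of this: unless you define a custom metaclass, every class is an instance of the single built-in class type, just as every Java class object is an instance of Class.

```python
s = 'hello'
print(type(s))        # the class of the string: str
print(type(type(s)))  # the class of str itself: type, the one default metaclass
```

Python does let you subclass type and give a class its own metaclass, which moves it toward the Smalltalk model described next.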
In Smalltalk, when you create a class, two objects are created - one we refer to as the class which holds information (including methods) of the instances of that class and another which we refer to as a metaclass which holds information (including methods) about the class.
The class is the sole instance of its metaclass.
'hello' class --> String
String class --> String class

This means that you can define methods that 'hello' can understand in String (we call these instance methods) and you can define methods that String understands in String class (we call these class methods).
If String is a subclass of ArrayedCollection, then the String metaclass is a subclass of the ArrayedCollection metaclass. This means that class methods can be inherited.
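Smalltalk's inheritable class-side methods can be approximated in Python with classmethods, which are inherited and dispatch on the receiving class. The class names below just echo the Smalltalk example; they aren't real library classes.

```python
class ArrayedCollection:
    @classmethod
    def whoami(cls):
        # cls is the class the message was sent to, so this "class
        # method" is inherited AND polymorphic - like Smalltalk's
        # class-side methods, and unlike Java's static methods.
        return f"class method of {cls.__name__}"

class String(ArrayedCollection):
    pass

print(ArrayedCollection.whoami())  # class method of ArrayedCollection
print(String.whoami())             # class method of String
```

Note that the inherited method still "knows" it was invoked via String - that dynamic dispatch on the class side is exactly what the static-method mechanism described next cannot do.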
3) Static methods are NOT class methods
Since Java always uses instances of Class (not sole instances of a metaclass) and cannot support methods specific to particular classes of objects (class methods), it offers a different mechanism for doing a similar thing. The mechanism is called static methods.
Static methods are not ordinary methods. They do not support "this", they are resolved statically rather than dispatched polymorphically (a subclass can hide but not override them), and so they cannot behave like real class methods. Smalltalkers tend to interpret them as class methods and then complain that they don't work right. In fact, they are a completely different beast.
I hope this helps explain some of the differences in understanding.
Dave Winer provides an RSS primer. I sure hope no one hand rolls that way though; it's error prone, and not likely to be something people will keep up with. On the other hand, it does show how simple RSS is, which is a good thing. Now try that in Atom/Pie/whatever....
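If you do need to emit RSS yourself, a small script with a real XML library beats hand-rolling the markup from the primer - the library handles escaping and well-formedness for you. A sketch (the feed details are invented):

```python
import xml.etree.ElementTree as ET

def build_rss(title, site_url, items):
    """Build a minimal RSS 2.0 document instead of hand-writing the XML."""
    rss = ET.Element('rss', version='2.0')
    channel = ET.SubElement(rss, 'channel')
    ET.SubElement(channel, 'title').text = title
    ET.SubElement(channel, 'link').text = site_url
    for item_title, item_link in items:
        item = ET.SubElement(channel, 'item')
        ET.SubElement(item, 'title').text = item_title
        ET.SubElement(item, 'link').text = item_link
    # Serializing from a tree guarantees well-formed, properly escaped output
    return ET.tostring(rss, encoding='unicode')

xml_text = build_rss('My Weblog', 'http://example.com/',
                     [('First post', 'http://example.com/1.html')])
print(xml_text)
```

Titles containing characters like & or < come out correctly escaped, which is exactly where hand-rolled feeds tend to break.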