There's a trackback module out there, and I'm in the midst of adding support for it to BottomFeeder. The idea is, when you use the existing Comment API support, BottomFeeder will try and send trackbacks to all the urls you reference, assuming there are matches in your feed list that list a trackback url in their feed. I'll be adding the same support to my posting tool as well - I really have to add a settings tool for that before I can release it.
Spotted this on Ted Leung's blog
The second presentation was on Model-Driven Development using Rational XDE. This just didn't do much for me, because I'm not a fan of RUP or ROSE, etc. I've used some tools to produce RUP diagrams from code, but I've never found tools like this to be helpful in the forward direction. Mostly they make it easier to deal with the structure of classes and objects, but specifying control flow via sequence diagrams is less efficient than banging out the code. Unfortunately for Rational, the speaker agreed -- he said he frequently writes code that he uses to generate the sequence diagrams. The presentation was short, and not that much of a product pitch.
Kind of funny that the speaker did that, actually...
I've been upgrading BottomFeeder and the blog all day - I added in the trackback module to the blog last night, but had to tweak that some today when I started looking at supporting it from BottomFeeder (the comment tool) and my posting tools (for this blog). In the process of doing all that, I now have the following:
- When you use the comment tool in Bf, a check will be made of each url you reference against the trackback urls gathered by Bf. Any matches will be sent a trackback
- I fixed my posting tool so it sends trackback information (using the same mechanism) to the back end. I had forgotten this when implementing the tool - only the web form actually did anything with the trackback field. Dohh!
- I added a 'regenerate feed' option to the Feed menu in BottomFeeder. If a feed changes formats, the cached items won't have any of the new information - this option lets you rebuild a feed without the remove/add cycle
This took a whole lot of testing before I was happy with it - and I still have to see how the blog deals with it in production.
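The flow described above is easy to sketch. This is a hypothetical Python illustration, not BottomFeeder's actual Smalltalk code: per the TrackBack spec, a ping is just an HTTP POST of form-encoded fields (url, title, excerpt, blog_name) to the target's trackback URL, and the interesting part is matching the URLs a comment references against the trackback URLs gathered from feeds.

```python
# Hypothetical sketch of the trackback flow -- helper names are invented.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def send_trackback(trackback_url, entry_url, title, excerpt, blog_name):
    """POST a trackback ping; returns the server's XML response body."""
    body = urlencode({
        "url": entry_url,
        "title": title,
        "excerpt": excerpt,
        "blog_name": blog_name,
    }).encode("utf-8")
    req = Request(trackback_url, data=body,
                  headers={"Content-Type": "application/x-www-form-urlencoded"})
    with urlopen(req) as resp:
        return resp.read().decode("utf-8")

def matching_trackback_urls(referenced_urls, feed_trackbacks):
    """Pair each URL mentioned in a comment with a known trackback URL.

    feed_trackbacks maps an item's permalink to its trackback ping URL,
    as gathered from the feeds' trackback module metadata.
    """
    return [feed_trackbacks[u] for u in referenced_urls if u in feed_trackbacks]
```

The match step is the piece BottomFeeder adds over the bare protocol: only URLs that correspond to a feed item advertising a trackback URL get pinged.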
I would have greatly preferred to see this in HTML (although not the atrocity that is HTML produced by Word - bleah) - but it's a useful summary of where RSS is.
Remember the great "should Google index blogs" discussion? Well, it continues... Microdoc News notes What Google Leaves Out, an interesting analysis of just which 30% of the [estimated] 10B web pages Google actually indexes.
RDF has ignored what I consider to be the central lesson of the World Wide Web, the "View Source" lesson. The way the Web grew was, somebody pointed their browser at a URI, were impressed by what they saw, wondered "How'd they do that?", hit View Source, and figured it out by trial and error.
This hasn't happened and can't happen with RDF, for two reasons. First of all, the killer app that would make you want to View Source hasn't arrived. Second, if it had, nobody could possibly figure out what the source was trying to tell them. I don't know how to fix the no-killer-apps problem, but I'm pretty sure it's not worth trying until we fix the uglified-syntax problem.
Pretty much the case. If I want an unreadable format, it may as well be binary so that I get some benefit. An unreadable text format is just.... useless. Here's one problem - take a look at the examples in the link above - if you slap an rdf: prefix on everything, it no longer conveys meaning - it just clutters up the page. In other words, you can simply assume it and remove the blasted thing - leaving in the namespaces that actually carry semantic value (i.e., the modules). What am I missing here?
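To make the clutter argument concrete, here's a hypothetical RSS 1.0 style item (the content is invented for illustration): first with the rdf:RDF wrapper and rdf:about plumbing, then stripped down to just the module namespace (dc: here) that actually carries the meaning.

```xml
<!-- With the RDF plumbing: the rdf: bits locate resources, but add no
     meaning an aggregator couldn't already assume from context. -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <item rdf:about="http://example.com/blog/42">
    <title>A post</title>
    <link>http://example.com/blog/42</link>
    <dc:creator>Some Author</dc:creator>
  </item>
</rdf:RDF>

<!-- Stripped: the same information, with only the module element left
     (dc: declaration omitted here for brevity). -->
<item>
  <title>A post</title>
  <link>http://example.com/blog/42</link>
  <dc:creator>Some Author</dc:creator>
</item>
```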
One use Scoble has for blogging:
I use this weblog for a variety of purposes, but lately it's just to keep track of useful stuff on the Internet that I might want to look at later. Believe it or not, I use Google to find links that I've put on my weblog in the past. For instance, when I need plumbing supplies, I just search Google for "scoble plumbing supply" and up comes my weblog where I talked about a plumbing supply place.
I was recently surprised when I googled for an answer to a .NET question and one of my own posts was the first hit. I don't know whether I was more amused by this or annoyed at having forgotten the answer to the question and that I had posted it.
There it is - search Google for your own commentary. I have to admit, I've done the same....
Ted Neward has a post that - on the whole ends up nodding in the direction of dynamic languages, but he confuses some terms:
Recently, as part of the NoFluffJustStuff conference in Denver (the Rocky Mountain Software Symposium), I participated in a speaker panel with Dave Thomas, the Pragmatic Programmer and recent apostle of the Holy Word of Untyped Programming (also known as Ruby). He speaks about loosely-typed languages and their benefits, and one of the questions asked of the panel was our opinions on loosely-typed languages; Glenn Vandenburg, another speaker at the show, blogged about my/our responses
Sigh. It's not untyped - it's dynamically typed. Whether a language is manifestly (statically) typed or not has nothing to do with whether it has strong or weak typing. C++ and C are both statically but weakly typed: you can send a message to an object that doesn't implement a matching method, and get a seg fault when the code tries to go ahead anyway. In Smalltalk, that can't happen - you get a well understood exception, which is quite different. He does bring up Dave Thomas' points on dynamic typing:
Dave raised a good point during the speaker panel, though. He pointed out that even though he's been programming in a loosely-typed environment (Ruby) for quite a while now, he's not found himself making the stupid mistakes that the strongly-typed environment is supposed to be protecting us from. If those mistakes aren't happening, then are we sacrificing flexibility in the system for nothing?
It's a good thing, to my mind, that people in the Java world are questioning assumptions about typing systems. I just wish they would get the terms right....
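The static/dynamic versus strong/weak distinction is easy to show in code. Python behaves like Smalltalk here - dynamically but strongly typed. A minimal sketch (the class is invented for illustration):

```python
# Dynamic vs. weak typing, illustrated in Python, which -- like
# Smalltalk -- is dynamically but strongly typed.
class Account:
    def deposit(self, amount):
        return amount

obj = Account()

# The bad "message send" is only detected at run time (dynamic typing)...
try:
    obj.withdraw(10)          # no such method on Account
except AttributeError as e:
    print("caught:", e)       # ...but it's caught as a well-defined
                              # exception (strong typing) -- no segfault.

# A weakly typed language would instead let the operation proceed on the
# wrong memory representation: the C/C++ segfault scenario.
```

The point: "untyped" would mean neither check ever happens; Smalltalk and Ruby simply move the check to run time and enforce it strictly.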
Someone told me today that it was Memorial Day this weekend (for non-U.S. readers that's a 3 day weekend near the end of May). I had absolutely no idea. None. And it struck me that this is really symptomatic of working from home exclusively. No coworkers to ask you what you're doing, etc. Very, very odd.
Yes, I've noticed the same thing - when you work at home, one day slides into the next, and you barely notice what time it is, much less what day it is...
I've added more module support to BottomFeeder - and to the blog as well. The Pingback Module - which is an awful lot like the Trackback Module - is now supported by BottomFeeder (via the Comment Tool) and by the blog (in the feed). I was puttering around with the blog code and the BottomFeeder code to get these working during the afternoon, in between bouts of telling my daughter to do her homework...
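Pingback works differently enough from Trackback to be worth sketching. This is a hypothetical Python illustration, not the BottomFeeder code: unlike Trackback's form POST, Pingback is an XML-RPC call, pingback.ping(sourceURI, targetURI), sent to a server the target page advertises via an X-Pingback header or a <link rel="pingback"> element.

```python
# Hypothetical sketch of the Pingback flow -- helper names are invented.
import re
import xmlrpc.client

def discover_pingback_server(html, headers):
    """Find a page's pingback server URL (header first, then HTML link)."""
    if "X-Pingback" in headers:
        return headers["X-Pingback"]
    m = re.search(r'<link rel="pingback" href="([^"]+)"', html)
    return m.group(1) if m else None

def send_pingback(server_url, source_uri, target_uri):
    """Issue the XML-RPC pingback call; returns the server's reply."""
    server = xmlrpc.client.ServerProxy(server_url)
    return server.pingback.ping(source_uri, target_uri)
```

The discovery step is what lets a comment tool ping any referenced page automatically, without the feed having to list a ping URL the way the trackback module does.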
I got an email this morning with an interesting question:
I'm about to set up a web site. I'm curious as to your experiences. To start, I'm going to run with two machines: a firewall machine, and a main processing machine. The main processing machine will have my Postgres database, as well as my Wave app. Eventually, if I get some traffic volume, I'll move the wave stuff to a separate box.
My question is, how many wave images should I run? Should I just run one, and let all requests go there? Or should I run multiple smaller images, and let a Load Balancer manage them?
The main processing machine will have 1.5 GB of memory in it.
By way of answering, I pointed out how this site is set up. We run two Smalltalk images:
- The Cincom Smalltalk Wiki runs in one image
- Everything else runs in the second - this blog, the survey, and the NC Registration Application.
That second image runs a few other administrative applications, plus a few ad-hoc apps that run from time to time. I started with a single image; I split out the Wiki last year, mostly because the Wiki was a stable app (the code rarely changes) - while I muck with the blog code on a regular basis. I figured that the Wiki shouldn't be affected by my periodic tinkering. Thus far, this all scales fine - we certainly aren't a huge site, but we get a decent amount of traffic. There's always download activity for CST NC, for instance.
So ultimately, I advised the person who sent the email to start with one image - it's simple, and will likely scale for quite some time that way. Over time, that might change based on usage patterns - but there's no reason to set up a complex system right from the get go, IMHO.
A few days ago, Bob Martin commented on complexity in the protocol universe with the tongue-in-cheek comment: I'd rather use a socket. I commented on this here. Since then, there's been an utter failure to recognize this statement for what it was. Over on Sam Ruby's Blog, things started with Sam taking the comment seriously. It then proceeded to a rather long thread (scroll down) where poster after poster took the comment seriously.
Yeesh. The point, so far as I can tell, was that complexity for its own sake is a bad thing. J2EE, anyone? EJB? The nightmare that is the current version of MS Word (just try and put a bullet point where you want it, I dare you). The software industry seems particularly vulnerable to this - witness all the heavyweight development methodologies and tools, for instance (to which yes, XP and Agile are responses).
Sometimes I think that most developers have a motto rather like this:
Never pick a simple, straightforward solution where a complex, obfuscatory one can be used instead
"Trend Micro is alerting its solution providers and customers about a bug in an update to one of its security products that inadvertently blocked all incoming e-mail containing the letter P."
Trend Micro said that only a few dozen users emailed them about the bug. Of course, they may have had trouble _osting _roblem re_orts. Rule 915. It has a nice ring. Could become a meme, like Catch-22.
Gordon Weakliem comments on the trend:
Also, Larry O'Brien says "it struck me that the biggest practical advantage of strong typing may be IntelliSense", which leads me to wonder if the next question is "why do we need IntelliSense?". John Lam is wondering about dynamically typed languages: "I wonder if it's just me, or whether the community that I frequent has this on its collective consciousness, but I've been spending quite a bit of time wondering about the benefits of dynamically typed languages." It's not just you, John.
There's a VW goodie that does Intellisense....
I decided to add geoUrl support to BottomFeeder today. There are two modules out there - here and here. What the heck; I support both of them. I only look for them at the channel level - I can't see any good reason to use them as item level resources. In any case, here's what I've done. I've added a new menu item to the feed level menu, Map it!. If the feed has the module, I enable that menu pick. Selecting it opens a browser that shows a map to the location in question. I may eventually do something else - for instance, Feedster is evolving support for GeoUrl, and I may well add a menu pick that uses that. We'll see what develops...
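The "Map it!" idea boils down to two small steps. A hypothetical Python sketch (the map service URL below is an assumption for illustration, not what BottomFeeder actually opens): both modules put coordinates at the channel level - the W3C geo vocabulary uses <geo:lat> and <geo:long>, while the ICBM module uses <icbm:latitude> and <icbm:longitude>.

```python
# Hypothetical sketch of channel-level geo module handling.
import re

def channel_coordinates(channel_xml):
    """Extract (lat, long) from either module's channel-level elements."""
    for lat_tag, long_tag in (("geo:lat", "geo:long"),
                              ("icbm:latitude", "icbm:longitude")):
        lat = re.search(r"<%s>([-\d.]+)</%s>" % (lat_tag, lat_tag), channel_xml)
        lng = re.search(r"<%s>([-\d.]+)</%s>" % (long_tag, long_tag), channel_xml)
        if lat and lng:
            return float(lat.group(1)), float(lng.group(1))
    return None   # no module present -> leave the menu pick disabled

def map_url(lat, lng):
    """Build a browser URL for the location (service choice is illustrative)."""
    return "http://maps.example.com/?lat=%s&long=%s" % (lat, lng)
```

If channel_coordinates returns None, the menu pick stays disabled; otherwise selecting it opens a browser on the built URL.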
I'll be speaking at the Ottawa XP/STUG, so I'll miss this:
Who: Uncle Bob
Hosted by: XpWdc
What: Talk and Book Giveaway
When: Thursday, May 29th, 7-9pm
Where: Room 310, Marvin Center, Foggy Bottom Campus of GWU, corner of H and 21st St NW, DC
Please indicate if you will be attending. No RSVP required, but a general count would be helpful.
For the last few weeks I've been asking anyone who will listen if it isn't weird that our economy is based on software, more and more, yet users don't want to pay for software.
In the same breath I express sympathy for the music industry, because they're going through the same devaluation we went through in software in the 80s and 90s. An average song is a bit bigger than the average software program of ten or twenty years ago, so it has taken a while for the distribution pipes to catch up. Today songs travel freely over the Internet, some people are optimistic about people paying -- I am not.
Hmm. The problems are perhaps more similar than I had thought (in terms of the end result). In software, no one wants to pay for tools. This is in large measure due to the push from the industry heavyweights to go back to a free software model - IBM, for instance, seems to believe that a free tools model will help them sell truly high end software and services - which has the effect of squeezing the heck out of an awful lot of software vendors. Combine that with the rise of decent Open Source products, like PostgreSQL - and it gets even harder. The end result - an awful lot of potential consumers just download free stuff, and the market for tool vendors contracts.
Music has some of the same things happening - it's very easy to download music (and ultimately video as well) - so why pay for it? This helps some artists who have had great difficulties breaking into the fairly closed music world, while it upsets the apple carts of all the existing vendors. Why is this happening? It's not just download ease - it's also the attitude and pricing models of a lot of the music industry. CD prices, for instance, have stayed at absurdly high levels - and this is where we see some similarity (again) with software.
The existing vendors have gotten used to being able to charge high (and very limiting) license fees for software. Well, along comes Open Source - it may not be as good or as polished, but a free 80% solution seems better than the pricey 100% solution (and it's not as if a lot of the vendors have 100% solutions anyway). Back to music - the songs you download are often not as high quality as what you can get on a CD - but they are good enough.
It gets even odder. As this trend increases, you see the established players panic. Lawyers are deployed, lawsuits blossom. This has the effect of irritating the end users even more, which drives them further down the road towards free solutions. Look at the entertainment industry reaction to the ReplayTV - Commercial Skip and the ability to send shows got most of the major players to line up and have a complete snit. Most people are using this stuff for fair use purposes - and the hardline reaction merely torques them off. The software industry has the same problem. Attempts to increase leverage through ever more onerous licensing terms just torques people off.
And this is where Dave Winer (and a lot of other people) simply fail to see the problem. Markets change - just as Winer laments the passage of large IT shops, people in New England 150 years ago lamented the passage of large textile shops. The agile companies that saw change coming survived - just as the agile companies in software and music will survive this transition. The ones that don't survive will be the ones that keep looking back to the good old days in a vain effort to figure out how to get back there.
In the NY Times on Thursday, a stirring op-ed piece by Ellen Ullman, about what we've lost in software. In the 90s it was common for two or three generations of software developers to work in the same organization. There was a handing-down of ideas, practices, tradition -- the verbal history of how things came to be as they are, Ullman says. After the dotcom bust software is becoming a detail, again, something that workmen do, not artists. We lost something important when our folk heroes became the 20-something instant-multi-billionaire CEO. There's so much more to software than that, there really is. As I mentioned above, our whole economy is based on it. Our culture is too.
Our culture? Please. Get out of the office and talk to some non-software people for awhile, and see just how little of our culture has anything to do with software. Heck, there's a pretty large number of people who aren't even online - and it's not always (or even mostly) for price reasons. The software industry could stand a whole lot less navel gazing, IMHO.
I spotted this in comp.lang.smalltalk:
You'll have to attend Smalltalk Solutions this year to see John O'Keefe and Eric Clayberg attempt to convince us that we should drop Smalltalk IDE's in favour of Eclipse, even for Smalltalk development.
To which Eric Clayberg responds:
Yikes! If you come to Smalltalk Solutions expecting to see a talk like that, you will be disappointed.
BTW, I did a talk last year at SS'02 on "Eclipse for Smalltalkers". You can see the presentation here:
Via Scott Knowles I found this essay on corporate blogging:
InfoWorld's list of disruptive technologies for 2003 included open source, self-service CRM, digital identity, and my personal favorite, weblogs. How can a simple web-based journal be "disruptive?"
Two important characteristics of blogs are that they are written by a person who is knowledgeable and passionate about the topic, and they are written in a "real voice." This is a cosmic shift from the marketing and public relations materials that are the staple of business communications.
Often, when information goes through a formal marketing or PR process, the end result is an attractive, expensive, stale, diluted document written in corporatespeak. This result is generally due not to any incompetence or malevolence on the part of corporate communicators but to the processes that have evolved to accommodate the costs and standards of print technology. As a result, the edge, the authenticity, and the voice of the professional speaking to his fellow professionals are lost.
Blogs offer the human voice, which can be loud, controversial, and even wacky. But the realness of the blog inspires trust and piques people's curiosity. A blog can create a community and a dynamic discussion.
What I find interesting is that Microsoft has a lot of their staff blogging, with the blessings of management. They may well be on to something. Even when I was traveling extensively, I did not reach as many people as I do with this blog. I got one to one feedback, which was good - but little discussion, as a comment from customer A rarely impacted the thought processes of customer B. What will be interesting to watch is how many corporate blogs attempt to go out with the same "blow dried" sensibility that they use in standard corporate communications....
Gordon Weakliem tells us the score:
Larry O'Brien has some notes on programming against the Sabre GDS. Most of what Larry says applies to other GDS systems (Apollo/Galileo, Worldspan, and Amadeus), though it's interesting to hear bits about Sabre's specific implementation. Larry mentions abandoning OO purity, specifically mentioning the concept of "Flight". Many development teams have come to grief on this concept, not even to the extent of confusing a "Flight" with a physical airplane, but with simply getting an incorrect perception of what the data comprising a Flight means and what can be done with it (for starters, it's read-only and parts of it are subject to change at any time).
I never worked on that part of Sabre (back in the day when I was a Booz, Allen consultant) - but this sounds like it's of a piece with the TravelBase system I worked on. Interesting tidbits over there.
Dave Winer has a very timely piece on his site. He's discussing the economics of software development - his point about the size and cost of a software shop is something I've explained with regards to Cincom Smalltalk more than once. I found this via Mark Bernstein, who agrees with Dave. I do as well:
A professional software organization for a well-supported product has 10-20 people, maybe as many as 30 to 40. So when you hear yourself complaining about software quality, think about how much money the developer of the product has to fully support it. Could you run a car in the Indy 500 with no money? You could try, and that's what a lot of software developers do, to no avail. Sooner or later you have to pay the bills. It costs money to live. That's as true of software as it is of people.
The consensus of many analysts (including me) is that SCO hopes IBM will buy it just to keep this matter out of court. In a buyout, IBM would take over the rights to the System V code base. Competitors, including Sun and Hewlett-Packard, would undoubtedly pressure Big Blue to publish that code for the community's sake. If things go SCO's way, IBM could respond with a terse, "So where were you when SCO started this mess?"
The only outcomes most commentators are considering are that SCO will win and make its living filing lawsuits, or it will lose and go bust. But what if SCO proves its claim that Linux contains purloined Unix code, and IBM then buys SCO to avoid paying a costly judgment? IBM can use the courts, as SCO is doing, to decide who stays in the Linux business and who doesn't. Imagine Linux contributors, and every company that's ever bought Linux, tracing the provenance of each line of code back to its origins to make sure none is borrowed. It's probably impossible.
What a lovely mess.....
Keith Ray points out that these are very different activities:
First, keep refactoring and rewriting (bug-fixing) separate. These two activities may be done only five minutes apart, but you're wearing a different "hat" during each activity. Refactoring is improving the design of the code, while preserving its behavior. Bug-fixing is changing the behavior.
There's a lot more as well, but the above needs to be pointed out to an awful lot of people who confuse the two topics....
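Ray's "two hats" are easy to show side by side. A minimal Python sketch (the functions are invented for illustration): bug-fixing changes what the code returns; refactoring changes only its shape.

```python
# Start with buggy code:
def average(xs):
    return sum(xs) / (len(xs) - 1)   # bug: off-by-one denominator

# Hat 1 -- bug-fixing: change the BEHAVIOR (the observable result).
def average_fixed(xs):
    return sum(xs) / len(xs)

# Hat 2 -- refactoring: improve the structure while PRESERVING behavior.
# Same result as average_fixed for every input; only the shape changed.
from statistics import mean

def average_refactored(xs):
    return mean(xs)
```

Doing both at once is exactly the confusion Ray warns about: if a "refactoring" changes a test's expected value, you were actually bug-fixing (or introducing a bug).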
The nice thing about a language that takes hold is that you can work with it again and again. In 30 years we have built Smalltalk systems with quite different constraints. This talk will examine a few of these, and show how tricks of the trade can be applied to enhance one aspect or another and, frequently, to make real progress.
There are video feeds of the talk - check it out.
Smalltalk Solutions 2003 would like to announce its technical conference schedule.
Links of Interest:
- To view this year's presentation schedule please go to the presentations page
- Also, we would like to announce the tutorial schedule. To view the tutorial schedule, please visit the tutorials page
- To register for the show, please visit the Smalltalk Solutions Home Page
Don't forget, early registration ends June 13th, 2003.
See you all in Toronto!
For additional information please contact: Joy Murray, Conference Coordinator
We had probably the best day in 3 weeks yesterday - a few minutes of sunshine in between clouds. So I was watching the fantasy (I mean weather) report last night, and they told us clouds and sun, followed by rain again Wednesday. Better than we've had, even if it stinks... so I got up today to slate gray skies, and sure enough - rain by noon.
A few days is one thing, but 3 weeks? It sucks the energy right out of you.....
Think of what you have to offer an employer as your product, and what you do when you look for a job or try to keep the one you have as marketing and selling that product, and I think this will make the point clear. If not ... imagine you're Lucent and you have to compete with other PBX providers in an incredibly tight market. You have to build all of your costs into your pricing. You'll buy your chips from the lowest cost provider that delivers to your specifications; likewise engineering and programming. What else can you do - charge a lot more than your competitors and try to make it up in "brand management"?
The fact of the matter is that American technical professionals don't want to compete on price with their Asian and Russian counterparts. That's fine, but if we don't, we'd better find some other dimension in which we can compete successfully, because there's nothing in the social contract that obliges our customers ... the companies that employ us ... to pay more for the same service than they have to in the labor marketplace.
In fact, I think it would be pretty easy to construct an ethical argument that if American programmers charge more than their Asian counterparts for the same services, then it's the American programmers who are guilty of greed, not their employers. It comes down to how you define "fair price," I guess.
Ouch. Hard to argue with though.
Over at IUnknown:
I wonder if it's just me, or whether the community that I frequent has this on its collective consciousness, but I've been spending quite a bit of time wondering about the benefits of dynamically typed languages.
I take this as a very good sign. And I liked this:
Please do not mistake static/dynamic typing for strong/weak typing. Ruby is actually a strong dynamic typed language
Not everyone is as confused on this point as Ted Neward clearly is.
What gets lost in all of this, of course, is whether the technology being blessed by the standard actually helps the customer to solve a real problem. Can the standard be implemented? Will the implementation perform adequately? How much will it cost? All of these sorts of questions are not appropriate in the era of de jure standards.
What is also forgotten in all of this is how fragile the de jure standards have been in the past. I can't think of a single standard that was invented by committee that has survived in the marketplace. The long-standing standards are those that were first de facto standards, and were described (not invented) by the standards bodies.
Such standards didn't start out in a standards body. They started out solving problems. Because they solved the problems, people used them. The use drove the standard, not the other way around. This allows innovation, this allows technical progress. Things that work get used by people who are trying to solve problems.
This does take the decision making power out of the hands of the managers, and the IT departments, and the technical analysts. They aren't trying to solve problems in new ways; they are trying to lead parades, or keep their jobs, or show that they have influence. They aren't the engineers that can actually understand the solutions, but they do (for the most part) understand politics. Standards groups cater to their expertise, not the expertise of the engineer.
Of course, this hard dichotomy is something of a caricature. There are substantive discussion of technical merit in standards groups that are trying to invent. But that isn't all that goes on. And certainly there is little evidence that the best technology wins in such groups. Just look around you for evidence of that.
So the next time you are talking to a manager and he or she tells you that you have to use something "because it is a standard", push back. Ask why only standards can be used. Ask if the standard has actually been implemented, or if the standard will really solve the problem under discussion. For that matter, ask if the manager really knows what the standard is. If any of these questions can't be clearly answered, maybe the standard isn't the way you should approach your problem.
Hmmm. J2EE comes to mind. A big mass of code, designed by people who had (apparently) never written a distributed application, with no actual customer feedback. So - based on one of Sun's senior staff's recommendations - you should reject efforts to implement J2EE. When management balks, quote the Sun guy!
Via Sax.net comes this interesting twist:
And the saga continues... now Novell has released a press release, claiming that they, not SCO, own the patents and copyright to the "open source" Linux code in question. The press release also contains a letter Novell wrote to SCO:
Importantly, and contrary to SCO's assertions, SCO is not the owner of the UNIX copyrights. Not only would a quick check of U.S. Copyright Office records reveal this fact, but a review of the asset transfer agreement between Novell and SCO confirms it.
Maybe someone should buy the movie rights....
I spotted this in comp.lang.java.advocacy yesterday:
I work for a large company (20,000 employees) that is planning the re-write of a portion of its home-grown 2-tier ERP into a multi-tier J2EE environment. The bulk of the work will be outsourced, with future maintenance and enhancement done in-house. The application has about 1000 users in NY, with about 100-200 concurrent transactions during peak periods
Not replaced by an off the shelf package - a rewrite. I would have thought that this kind of insanity was past - what these people are going to do is recreate a bunch of stuff they already have - for large amounts of money. I posted a query as to why they were going this way, and what the ROI would be - but no answer. And people wonder why business units have no respect for IS....
I grabbed the old OLE code - the stuff that was being done for the Van Gogh release of VW (the one that would have had native widgets, the then current version of Store, and OLE). One of the many casualties of the PPS/Digitalk merger. Anyway, I grabbed the code, got it loaded, and published it to the public store. You'll want to visit this page for information and pointers to the original .st files - the ones I used to create the OLE bundle. The code loads into an image and can be published back out of an image - beyond that, I can't guarantee anything. COM/OLE is something I know very little about. Back in the day, I used the pre-releases of this code to demonstrate Excel embedded in a VW 2.0 window - so all you C/Smalltalk hackers, have a look.
Silvermark and Cincom have agreed to deliver an evaluation version of TestMentor for VisualWorks with each release of VisualWorks. Additionally, Cincom will be reselling the TestMentor product, and the two companies will cooperate on consulting opportunities. TestMentor will start shipping with VisualWorks 7.2 - both commercial and non-commercial.