The presentation went well - I would have loved to stay for drinks, but I had a train to catch (which, ironically, is late - I'm entering this from the Amtrak lounge in Penn Station, NY). We had a good conversation about Cincom Smalltalk - Pollock, GLORP, and ObjectStudio being of interest to people. As always, tip of the hat to Charles Monteiro for setting it up.
Every so often I get a piece of junk that makes me laugh. I was going through my junk folder (a small percentage of real mail lands there.... grr...) and ran across a bogus offer for office programs. For one thing, get a load of their "sales pitch":
Are you looking for affordable operation progeam and system for your PC? Then you have found the best right store. You can check the site for a wide selection of quality program discs on sale. More than 850 program discs for office operation, programming, server maintenance, PC diagnostics, finance and graphic design& processing.
I love that word selection (not to mention spelling) - "affordable operation progeam and system for your PC". Makes me want to dash right off and click on the link they provided. Here's the best part though - at the very end of the message was this text, apparently intended to defeat filters:
media reported late Sunday that the head of Palestinian security in Gaza, Rashid Abu Shbak, added running back Brian Westbrook, who decimated the Viking defense early in the game as a
I guess that counts as broken field running :)
I just noticed an SDTimes promotion for a "cross platform build seminar" in my email:
John Graham-Cumming, chief scientist at Electric Cloud, Inc., will discuss how to build a manageable cross-platform build system using GNU Make. The system is flexible, capable of supporting many different platforms (including Windows, Linux, and all versions of Unix) and easy to maintain. He'll also outline a strategy for migrating an existing build system to a clearer Makefile structure that incorporates the ideas presented.
I have a better idea - visit our site, start up VisualWorks, and build an application. Ready for the cross platform build part? Just build the application on one of our supported platforms, and then deploy it. That's what I do with BottomFeeder. I bet my solution is simpler than theirs, and it doesn't require a full seminar to explain, either :)
Steve Kelly has been lending me a hand with the blog server code - there's a new bundle in the Public Store called Silt. We moved the client (blog posting) tools out of the server package - they add a bunch of dependencies not really needed on the server. Steve's done some nice Q/A on this code for me, and has sent some instructions on how to set it up. You'll want to visit two pages on the wiki:
- The home page for the server implementation. Grab the ssp files and the blog server parcel there
- The Getting Started page, which should help you set up and configure your server
Thanks for the help Steve!
Sriram Krishnan posted something interesting about the CLR garbage collector:
I was talking to a former MSFT employee who worked on the CLR team. The conversation drifted towards languages used to implement virtual machines. Here's what I learnt.
The CLR's Garbage Collection was initially written in Lisp by a Patrick Dussud (I can't find a blog). This code was then run through a Lisp->C converter which was then cleaned up by an intern.
That's interesting - it demonstrates to me that when MS needed something done fast, they knew well enough not to do it in C or C++ (assuming this story is correct, of course). On the other hand, if you do what they did:
- Write a sub-system in language 1
- Generate the C from the resulting code
- Manually modify the results
Can you actually maintain the results? Generated code is always hard to grok. It's one thing if you write in a high level language and then generate C (never actually looking at the C) - you can look at the C as something akin to byte code in that case. But if you muck with the generated code before deploying it? I'm not sure that you end up with something you can maintain...
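To make the maintainability point concrete, here's a purely illustrative sketch (in Java, not actual Lisp-to-C converter output - the names and style are invented for illustration). Translators tend to emit mechanical names and flattened logic; once someone hand-edits that output, you can never regenerate from the original source again:

```java
// Purely illustrative: the same logic as a mechanical translator might
// emit it (opaque names, no visible intent) versus hand-written code.
// Once you start hand-editing the generated style, regenerating from
// the original high-level source is no longer an option.
public class GeneratedCodeDemo {
    // "Generated" style: gensym-like names, intent invisible
    static int g_fn_42(int v_1, int v_2) {
        int t_1 = v_1;
        int t_2 = v_2;
        int t_3 = (t_1 > t_2) ? t_1 : t_2;
        return t_3;
    }

    // Hand-written equivalent: the intent is in the name
    static int max(int a, int b) {
        return (a > b) ? a : b;
    }

    public static void main(String[] args) {
        // Both compute the same thing; only one is maintainable
        System.out.println(g_fn_42(3, 7) == max(3, 7)); // prints "true"
    }
}
```

Both methods behave identically - the difference only shows up when the next person has to modify the code by hand.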
Doc Searls Weblog makes some good points about the "power" of the blogosphere - which should calm down some of the triumphalists - and wake up some of the angry journalists who seem peeved that people are paying attention to what they write:
Second, people get fired every day for blabbing about private company stuff, whether or not it's in blogs. Earl Gilmore, the first tech client of my old ad agency (way back around the turn of the 80s) had an employee policy manual with two pages in it. Page 1 said "Rule #1: Use good judgement." Page 2 said "Violate Rule #1 and you're in deep s***." So, when somebody drowns in s*** for syndicating their own bad judgement, that's not a black eye for blogging. It's stupidity with an RSS feed.
Exactly right. The various media people who've been "brought down" by the blogosphere made their own mistakes. They engaged in bad PR that reflected badly on their employers - the only difference between now and 10 years ago is that RSS feeds and blog pages have a bigger soap box than letters to the editor and talk radio did. It's not complicated. If you say something that your employer will find embarrassing, it's more likely to get noticed now - but it was just as stupid 10 years ago.
I see that Tech Republic is reporting that Sun's latest round of layoffs has grown to 3600 people. Sun is a fascinating company. I like their OS (Solaris), and, back when I worked on their hardware regularly, I liked their boxes. Somewhere during the dotCom boom, Sun really, really lost its way.
The first problem is that they seem to believe their own marketing materials too much - they still seem to think that Java is a net positive money maker for them - note to Jonathan Schwartz - all those Java-enabled handsets you are so proud of? They don't represent positive net revenue to Sun. I had friends come back from JavaOne last year who were incredulous that Sun was pushing the notion of Java games as a business opportunity. They jumped on the Linux on Intel bandwagon way, way too late - their low end is being eaten alive by Dell, and IBM still pounds them on the high end. That might have something to do with costs on the low end, and IBM's ability to milk revenues off of Java on the high end (WebSphere, anyone?). Meanwhile, Sun gives away the software they could charge for (application servers), and sells the stuff that goes head to head with Office (StarOffice).
The thing is, I've seen this business plan before, only with far smaller piles of money to throw away - the whole PPD/OBJS nightmare was a lot like it - the same executive cluelessness, the same lack of engineering direction, and the same lack of decent oversight by the board of directors. Just as a sane board would have given Bill Lyons the boot sometime in 1996 (when things could potentially have been turned around), a sane Sun board would have cleaned house a few years ago, after watching the fruitless years of attacks on Microsoft. Instead, they seem to be watching a repeat performance, only without the dotCom bubble to support it.
They've learned nothing; their marketing is still all attack based, only the target has changed. Heck, I think they might be taking some of the anti-MS screeds from the late 90's and republishing them after a global copy/replace operation - IBM in, Microsoft out. Have a look at this note from Schwartz, for instance. Yeah, if I were IBM I'd be quaking in my boots over that. At least when I take aim, it's in pursuit of a target I might actually be able to hit.
I've been working on the posting tool - and on some matching back end code - all afternoon (well, other than the hour or so spent sledding with my daughter). Up until now, there was no support for uploading images to the server. I've been addressing that this afternoon, and I've got it working in test. What you can do is specify a set of files to upload, and then you can upload them to the server. They'll default into a subdirectory of the blog directory (the place where the SSP files live). I'll be rolling the new servlet into production shortly, and testing it myself. Once I'm sure it works, I'll release the new version of the client tool.
Vassili has been working on the CSS for the site again - I've just received an update from him. It's going to require some changes on my end, but we should have a facelift coming here soon.
A blog is a species of interactive electronic diary by means of which the unpublishable, untrammeled by editors or the rules of grammar, can communicate their thoughts via the web. (Though it sounds like something you would find stuck in a drain, the ugly neologism blog is a contraction of "web log.") Until recently, I had not spent much time thinking about blogs or Blog People.
You can hear the disdain dripping from his voice in that lead paragraph - he wants to make sure that he properly sets the stage. He's a professional - he deserves to be published. The rest of us? Why, we have no such rights, and it's just a horrible, horrible thing that we do. We should know our places, and hop back to them - the sooner the better.
Why was he motivated to object? It seems that various and sundry people (wait, bloggers, not people) objected to something he wrote recently:
I had heard of the activities of the latter and of the absurd idea of giving them press credentials (though, since the credentials were issued for political conventions, they were just absurd icing on absurd cakes). I was not truly aware of them until shortly after I published an op-ed piece in the Los Angeles Times ("Google and God's Mind," December 17, 2004). Then, thanks to kind friends with nothing but my welfare in mind, I rapidly learned more about the blog subcultures.
My piece had the temerity to question the usefulness of Google digitizing millions of books and making bits of them available via its notoriously inefficient search engine. The Google phenomenon is a wonderfully modern manifestation of the triumph of hope and boosterism over reality. Hailed as the ultimate example of information retrieval, Google is, in fact, the device that gives you thousands of "hits" (which may or may not be relevant) in no very useful order.
Ohh, I feel properly chastened now. I promise sir, I'll stop using Google this instant - instead, every time I'm curious about something, I'll dash down to the local library and use their clearly better resources instead. I'll be ever so much more productive plowing through the paper stacks - or using the search engines the library provides access to. Not to mention the productivity boost I'll gain by hopping in my car, driving 10 miles to the nearest branch, and walking in. Yes sir, I'll surely be better off doing it that way.
Gorman must know journalists - he slipped into "but on the other hand" mode halfway down:
It is obvious that the Blog People read what they want to read rather than what is in front of them and judge me to be wrong on the basis of what they think rather than what I actually wrote. Given the quality of the writing in the blogs I have seen, I doubt that many of the Blog People are in the habit of sustained reading of complex texts. It is entirely possible that their intellectual needs are met by an accumulation of random facts and paragraphs. In that case, their rejection of my view is quite understandable.
At least two of the blog excerpts sent to me (each written under pseudonyms) come from self-proclaimed "conservatives," which I find odd because many of the others come from people who call me a Luddite and are, presumably, technology-obsessed progressives. The Luddite label is because my mild remarks have been portrayed as those of someone worried about the job security of librarians (I am not) rather than one who has a different point of view on the usefulness of this latest expression of Google hubris and vast expenditure of money involved.
Just marvel at that first sentence - we aren't capable of basic comprehension either. We read his words, but we don't understand them (perhaps that means he didn't make himself clear enough? Just a thought). I also see that he decided to hammer an entire portion of the political spectrum - note the scare quotes - as part of his argument. You can always tell when you've hit a sore point - the subject changes. Gorman changed the subject to politics and the presumed expenditures of large sums of cash by Google. It must be bad - he's linked politics and money. I'd hand the man a dictionary and point to the word argument, but I'm not sure he'd get it.
The best part is, he was too lazy to provide an actual link to the article he says was mis-characterized - an op-ed piece in the Los Angeles Times ("Google and God's Mind," December 17, 2004). I should cut him some slack - it sounds like he doesn't know how to use Google, and based on that vast well of knowledge, is convinced that it's inaccurate as all get out. Using the search engine access in BottomFeeder, I Googled for that - and came across this on the first page of hits. I'm sure it would have been easier to run down to the library and either go through the stacks of archived newspapers or turn to the microfiche. Turning to his actual argument (I'm stretching to call it that):
A good scholarly book on, say, prisons in 19th century France goes well beyond simply supplying facts. Just imagine that book digitized and available for Googling. Google isn't saying exactly how such a search would work, but if it's anything like the current system, you might enter, say, "Nantes+Prisons" and get back hundreds of thousands of "hits." Somewhere in those hundreds of thousands would be a reference to a paragraph or more in our book. If you found it, what would you do with it? Supposing it says " 26 there were few murderers in the prisons of Nantes in 1874 26 " and gives you the source of the paragraph. That is all but useless. Absent a lot more searching, you have no idea whether there are other references to the subject in the book, and the "information" you have found is almost meaningless out of context.
So, you abandon that line of inquiry or resolve to read the book. Are you going to do that online, assuming it's out of copyright? (In the Google scheme, hundreds of thousands of books in copyright will not be available to be read as a whole.) Not many would choose to stare at a screen long enough to do that.
So... I'm interested in some subject. Once Google has a decent start on this, I'll be able to get references to written works that I otherwise wouldn't know existed. In Michael Gorman's world, this is a bad thing. Oh, right - I'm supposed to have run down to my local library and found that information. Having Google make it available actually raises the likelihood of my doing so. Say I run into an out of print book, and I can't order it via Amazon. If I'm actually doing research, I'll know that the reference exists, and put the wheels of the library to work finding me a copy - they have book exchange and loan out agreements with other libraries. Gorman would rather have me remain unaware of the book's existence, or better still, use the arcane procedures in place to find what I need.
Worse, he seems to assume that all of us have immediate access to a great metropolitan library. I live in the suburbs - the Howard County library system is ok, but - to be blunt - they don't have nearly the reach that Google does. The Baltimore city system? Sorry, I'm not about to drive into Baltimore (not given the neighborhoods that the libraries there are in). I could hike down to DC, I suppose. Or, here's a thought, I could use Google, get a start on the information, and then drive to my local library and use their contacts with other libraries to get what I need.
Here's what it sounds like to me. Right now, tracking down hard to access information in the library system is something of a chore. It's well understood by a small cadre of professionals (like Gorman). That gatekeeping function makes him feel special, and he'd be happy to keep it. Along comes Google, ready to disintermediate him from all of us hoi polloi out here. Well gosh - we can't have that. If he doesn't have his special powers, what does he have? Worse, his special powers are being threatened by people without editors looking over their shoulders!
Finally, he objects to the possibility that people will pull partial content out of context and come to bad conclusions based on that (hmm - nope, the professionals just never do that. Nor do they plagiarize or do other bad things):
The nub of the matter lies in the distinction between information (data, facts, images, quotes and brief texts that can be used out of context) and recorded knowledge (the cumulative exposition found in scholarly and literary texts and in popular nonfiction). When it comes to information, a snippet from Page 142 might be useful. When it comes to recorded knowledge, a snippet from Page 142 must be understood in the light of pages 1 through 141 or the text was not worth writing and publishing in the first place
His argument boils down to this - there's a high priesthood of professionals who guard all the information for us, and they use the thesaurus, the dictionary, and the services of editors. The rest of us? We just churn stuff out on a whim. The pros never make mistakes - by gosh, they're pros! The rest of us can barely walk, and god help us if we try to simultaneously chew gum. We need the aid of these people to protect us on our search for knowledge. They'll also make sure we avoid "bad ideas" from "disreputable people", I'm sure.
Gorman needs to wake up and realize that the days of the walled library are over. Google is doing a good thing here. The tools they plan to provide here will be value neutral - it will be possible to use them well or badly. Which means that the only difference between what they plan to provide and what already exists is ease of use. It's nice to know that Gorman stands on the worse is better side of the equation.
The hordes who love bad sci-fi are apparently protesting now - here's a Wired story on it. What these people need is Sci Friday on the Sci Fi channel - they can learn all about non-cardboard characters, plotlines that don't suck, and good acting.
You may have noticed an outage a few minutes ago - I've been working on an update to the server that would require a restart - I haven't been keeping track of the incremental changes, so there's no easy way for me to just kick it. Suffice it to say, I'm back to testing on my local Linux box, and won't be trying the update again for a bit. Sorry about that.
Hey look - there's a blog dedicated to keeping an eye on my favorite analysts, Gartner - GartnerWatch. Now when I miss one of their regular bouts with inanity, you won't have to - just subscribe to the feed. Just look at what's being reported now:
Gartner has started quietly hinting to customers that prices are going up. At minimum they will increase the per day cost to consult with an analyst anywhere from 50%-100%. The daily rate now is $10,000-$20,000, so that could mean a per day cost of $40,000.
You too can get bad advice from overpaid quacks. Open your wallet and enjoy it...
Here's a nice article by a Smalltalker who's been using Java for many years. It lays out pretty well why the mainstream languages are so deficient in comparison to Smalltalk:
Smalltalk is simple, terse and consistent. Everything is an object, and things get done by sending messages. There are 5 reserved words. The class library is well architected, and easy to navigate (I love Trailblazer in VAST). Everything is available right at your finger tips. You can execute code and inspect the results right away. Smalltalk gives you complete freedom to explore and learn. Once you've done it for a while, you can start to guess that classes will respond to certain messages. Once you break the shackles of your Pascal or C programming heritage, Smalltalk is much easier to read. Easy to read, means easy to learn, enhance and maintain.
Java is kind of like kindergarten. There are lots of rules you have to remember. If you don't follow them, the compiler makes you sit in the corner until you do. There are 59+ reserved words. Everything is not an object. There are primitives, and your classes are not first class objects. And you have to remember that there is no "this" in a static method (in Smalltalk calling self in a class method would return the class itself). You have to remember to tell the compiler things several times so it knows what you're talking about (Date date = new Date()).
There's a lot more - it's worth a read.
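The "tell the compiler things several times" complaint is easy to demonstrate. Here's a minimal, runnable sketch (the class and method names are mine, not from the article) showing the two Java quirks the author calls out - redundant type declarations, and the absence of "this" in a static method, where a Smalltalk class method would simply answer self:

```java
import java.util.Date;

public class VerbosityDemo {
    // In a static method there is no "this" - unlike Smalltalk, where
    // sending "self" in a class-side method answers the class itself.
    static String describe() {
        // The type appears twice: once as the declaration, once as the
        // constructor. Smalltalk would just say: date := Date today.
        Date date = new Date();
        return date.getClass().getSimpleName();
    }

    public static void main(String[] args) {
        System.out.println(describe()); // prints "Date"
    }
}
```

It's a small thing on any one line, but multiplied across a whole codebase it's exactly the kindergarten rule-following the author is describing.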
I see there's a new yahoo spamming the UIUC Wiki. I've got a script that restores pages in a jiffy; I've just restored all the blown pages. This guy has a new tactic though; he's creating reasonable sounding page names from scratch and then defacing those. I'm sure that UIUC will update their filters; in the meantime, I'm on it.
Well, that was entertaining. Even though I don't use a database to back this blog site, I ended up having data integrity issues with some of the files. I've migrated the object model a few times over the last few years, and it seems that I hadn't gone back and fixed up all of the old data to include all of the required new data. That caused me a few headaches this afternoon on my test server. I've just gotten all the migration done though, and the server is running the latest version of Silt - the same version that is in the public store.
That takes care of part one. Vassili has just sent me new templates and CSS files - so step two will be making the changes required to support that. That shouldn't take as long, but there's good TV on tonight, and I'm off to do some sledding tomorrow with my daughter's girl scout troop. I'll be trying to roll the new look out next week - we'll see how it goes.
I'm off to Ski Liberty for the morning - my daughter's girl scout troop is going snow tubing (apparently, sledding on inner tubes). It sounded like fun, so I volunteered to chaperone. The drawback - I'm up at this unholy hour of the day :)
Well, that was a fun morning. I'm not a great enthusiast for getting up at 5:30 am, but it was worth it today. My daughter's girl scout troop had arranged a trip to Ski Liberty, a ski resort about 75 miles north of here, just across the state line in Pennsylvania. We had tickets to use the snow tubing hill from 8 am til 10 am - it was a blast. The hill is shaped a lot like water park tube rides - here's a shot of my daughter on her way down the hill:
The inner tubes have a rope and handle - you can ride a lift up or walk (it was faster to walk after 9 am when the crowds started to arrive). You could also hook two, three, or more tubes together and go down as a group - here's a shot I took while I was in front and my daughter and a friend were in back - this was about 1/4th of the way down:
It was a fun trip, and I'd definitely enjoy doing it again. It was even worth going early - the temperature was rising as we left, and the runs wouldn't have been nearly as good with the temps above freezing.
Dave Buck is unimpressed with the refactoring support in VS 2005. It's things like this that remind me of the predictions by these morons in the early 90's that C++ development environments would catch up to Smalltalk within 18 months. It's multiple language fads later, and we're still waiting. At least I know what the Lisp folks feel like :)
It seems that Michael Gorman doesn't like being noticed by the hoi polloi. He's now claiming that his earlier piece was intended to be satirical. Righhhht..... Sure Michael. You claim that we aren't serious because we don't go through the edit/publish cycle? Boy, it's sure helped you be clear. Try reading what I wrote here. Let me know if I need to use smaller words.
Back when Star Trek-Next Gen was on, there was a specific point where the series went off the rails - it was when the writers created the Borg. Here was a race that was too powerful, that basically could not be defeated. It took the writers a lot of bad scripts to write their way out of that paper bag. Well, I'm starting to think that the writers for Stargate-SG-1 have created the same problem for themselves with the replicators. The disruptor weapons that Carter and the Asgard came up with don't work - the replicators adapt immediately - hmm, just like the Borg did to phasers. The Goa'uld are busy getting their collective butts hammered, and there doesn't seem to be any way out other than a miracle weapon of the ancients. It's sounding familiar, and not in a good way.
Mind you, I think it's a good thing that MS is working on this. Even so, I find some irony in this Cook Computing post. Many years ago, when Smalltalk started out at PARC, it was not only the development/deployment platform - it was the OS. Likewise, the old Lisp machines were the same thing, but running Lisp instead of Smalltalk. It's mildly amusing to see the industry slowly finding its way back to ideas pioneered decades ago. Had the supposedly smart guys not been so enamored of curly braces and semi-colons, we could have been there a long, long time ago...
Update: Fixed the link
I've added support for file upload to the blog posting tool, and matching server side support. With any luck, that means that we'll start seeing a more interactive set of blogs here. I've been getting a lot of help cleaning stuff up from Steve Kelly of MetaCase - the SSP files have been nicely refactored by him. At the same time, Vassili has gotten me some new templates and CSS files, so I will likely be updating the basic look soon. On top of all that, Michael has been working on a WYSIWYG posting client - which will make it possible for us to create nice looking posts without using markup - either Wiki style or otherwise. This little blog server has come a long way in the last few weeks - stay tuned for more.
If you tried to leave a comment on the site earlier, you ran across a nasty red warning about your comment being rejected if you weren't logged in. Well, I haven't added comment registration. What I've added is fat finger editing of files :) I updated some templates last night, and I accidentally pointed the comment entry name at the post entry form. Well, if you want to make a post (as the blog author), it won't work unless you are logged in - thus the error message. So, I just went back and fixed the editing error, and it's all back to normal.
The original Battlestar Galactica (mostly a dog) was a 70's series - in that era, the Cylons were a tv shadow of the Soviet Union - a vast, impersonal empire out to mindlessly crush humanity. The new iteration is a great series - it's not campy, and it has real characters. The Cylons are very different. They aren't exactly robots, and their ships seem to be cyborgs - living animals. Additionally, the series seems to be pulling themes from modern conflict. The Cylons seem to be religious - and their beliefs are different from those of humanity. In fact, it's starting to look to me like the Cylons consider humanity to be heretics. It's not clear if that's how things are going, but the writers are dropping a lot of hints that way - last night's episode in particular was fascinating (in deference to my readers down under, I won't reveal anything). In any case, I'm now very curious to see where they go with this.
Steve Kelly has put up a new wiki page detailing how to set up a Silt Blog Server. With VW 7.3, we ship a runtime image for use with web application servers - set up headless and with listeners already defined. The instructions walk you through the setup.
I've gotten a few emails about the online tutorials for VW - they are here. The problem? Those tutorials make reference to categories in the System Browser. In 7.3, the categories aren't visible - instead, you see organization by package (the level used by Store, our version control tool). The tutorials will get updated, but in the meantime, you can just ignore the references to categories.
After a very light (but cold) January, we've had a warm (but snowier) February. We are supposed to get 6-12 inches tomorrow, which is a lot for this part of Maryland at this time of year. I expect to be out sledding with my daughter a fair bit of tomorrow, and for school to be out. Since there's going to be snow through part of Monday evening, I won't be surprised if school is out Tuesday as well.
Tom Murphy points to an all too typical marketing approach - the allegedly customized form letter. Did these ever work? They didn't impress me in snail mail, they don't impress me in email. Tom mentions a few other problems:
If they had taken the time to read even a week's postings the publicist in question would have found a post I recently wrote on pitching blogs that would have saved him making this mistake.
However, the pitch was a mail merge which rather than being targeted was sent to probably a large number of bloggers. How do I know? Check out this paragraph for tell tale mail merge problems:
"Tom , we'd like to meet you and see where we might be able to serve as a source for future articles and offer some possible story ideas for your readers. If you'd like to have a one-on-one briefing, we'd like to get on your calendar right now. Please drop us an e-mail with times you've got available and we'll confirm your appointment and briefing."
The spaces after my name point to the tell tale signs of an incompetent mail merge. Looks like I'm not that special after all.
Heh, you would think someone bothering to put together a mass mailing would try not to look incompetent - without regard to the message, incompetence doesn't engender confidence.
Then again, many blogs didn't exactly distinguish themselves on Election Day. Some merely made bad guesses; some were truly off the wall. That's too bad. If they'd taken a step back, thought harder about their writing and addressed the consequences of their actions - which thoroughly professional journalists do with every story they write - the bloggers might have done a better job.
Hmm - I think I could say the same thing about the journalists at any number of newspapers, magazines, and tv networks. The difference? They get paid, and supposedly have competent editors. The evidence available doesn't engender a lot of confidence in the "competent" part of that equation.
Here's the thing - for many, many years now, professional journalists have gotten to decide what is and isn't news. You can see the results by looking at the sensational stories that pop from time to time - Laci Peterson's murder got wall to wall coverage, while similar murders (not to mention international stories of note, like events in Darfur, Sudan) got ignored. Some people call that bias - I'd say it's more like a herd or pack mentality. The specifics aren't really the point though - the point is, the professionals are simply not the thorough, "check every facet" types that Friedman would have us believe they are.
Heck, think about it for a minute - what's the actual training for a journalist? It's not as if you have to spend eons in school to learn the basics:
- Take good notes
- Follow up on leads
- Cross examine for conflicting stories
That's not rocket science. Bloggers can do that as well as any journalist (within the constraints of budget, which does make a difference for large news organizations). Even without that though, bloggers can do what the editors all too often don't - basic fact checking. Oddly enough, fact checking is more relevant for a blogger than it typically has been for a media outlet. If a newspaper or tv station gets something wrong, they can ignore naysayers for as long as they want - they have the microphone, and can simply refuse to print (or air) any POV that counters theirs. A blogger doesn't have that luxury. If we make mistakes (and trust me, we do), there are plenty of other bloggers willing and able to point those mistakes out, using a megaphone that is as large or larger than ours. We can't sit back and stonewall effectively - something that the major media can do.
As Doc points out, Friedman does give an "on the other hand" side to his story on page two. Here's the thing though - there are literally millions of blogs, covering tons of subjects. Some cover an exclusive "beat" - politics, marketing, IT sector stuff - some cross fields. Some are careful, and some are just ranting. You can't really generalize about bloggers. Doesn't seem to stop the professionals from trying though, demonstrating again their incredible superiority over us, and showing us just how much good those careful editors do for them.
I think Scoble accidentally stumbled on something interesting - have a look at his anti-Auto-Link post. I haven't commented on this thing - truth be told, I haven't been able to get myself to care (can I avoid AutoLink? Yes. Ok then, I don't care...). Here's the interesting thing from Scoble:
I believe that anything that changes the linking behavior of the Web is evil. Anything that changes my content is evil. Particularly anything that messes with the integrity of the link system. And I do see this as a slippery slope. Today users have to jump through hoops to use this feature. What about tomorrow? Oh, and Google says they won't be evil, but what about their competitors who haven't taken such an anti-evil stance? (Hint: Microsoft isn't the only Google competitor).
Now, some other people tried to make the point that popup ad blockers and TiVo should also be seen as evil, then.
That's pushing the point a little far. The fundamental building block of the Web is linking. Linking is MY EDITORIAL CONTENT. That's different than advertising. And, if you got rid of popups, I still am able to get my point across here. In fact, I don't use them. And I don't have advertising here, so my point is still OK.
That may not be pushing the point too far. Say I visit a website - they sell space to advertisers, some (or all) of whom use pop-ups or pop-unders. Are they annoying? Heck yes. Do I use tools to block them? Heck yes. Does blocking them change the behavior of the web?
You can't really argue this point. The ads contain links that the site owner wanted you to see (he's paying for you to see them). By blocking them, you change the behavior of the web. See, this is why I simply can't get worked up over AutoLink. Given appropriate tools, I can decide whether I want to see pop ups or not. Google is providing me with a tool that lets me decide whether I want to see related information or not. Heck, I might as well rage against paid placement. Scoble blathers on and on about how AutoLink is an evil idea. Winer has been going on and on about it as well. I'll say the same thing I say to people who can't figure out the "change channel" or "off" switch on a TV or radio - you don't have to view/hear/read the content. It's an individual choice, and that's just fine. No one said you have to use Google. It's an open market for search engines guys - if this is an evil idea, people won't like it. If people don't like it, MS has the perfect opportunity to market their AutoLink free search engine.
There's definitely some irony in watching MS yap like a small dog when they are getting out-competed though.
The updated post tool that Michael created is now available as a development level update. To get it, you'll need to do a few things:
- Change the update path in BottomFeeder to end in /dev
- Check for updates, getting everything
- Go to the BottomFeeder download page, scroll down to the dev links, and grab the icons zip file
- Unzip the icons.zip file in your BottomFeeder install directory. You should end up with a new directory named "icons", filled with small image files
- Now restart BottomFeeder, and open the post tool from the plugins menu. You should see the new tool with the SwS editor.
When I release the next version of BottomFeeder, the new post tool (along with the required image files) will all be properly bundled. At this point, we have early access - there may be some issues with the tool (For instance, I know that creating tables is somewhat problematic). If you run into problems, let me know
That sound you hear is the thud of death for the Intel Itanium. TechRepublic notes that IBM has dropped out as well.
I've finished testing the new look and feel stuff - it took longer than expected because there was simultaneous code evolution going on. The latest Silt bundle contains all the latest server code, and the SiltSSPFiles bundle contains all the latest SSP/CSS stuff.
I intend to migrate to the new look this evening - I need to update the server itself to do that - I haven't been tracking incremental changes at all. If you want to look at this stuff yourself, then have a look at the Silt Page on the Wiki. I'll be uploading the latest SSP/CSS files in a minute here. There are overlapping pages:
- View.ssp and View2.ssp
- CommentEntry.ssp and CommentEntry2.ssp
- Archive.ssp and Archive2.ssp
The "2.ssp" files all use the newer css look. If you intend to try those out, you'll have to rename them, or muck with your site configuration file (which, as generated by the creator tool, assumes the original names).
After some testing, it looks like the WYSIWYG posting tool isn't compatible with the current runtime build (which is still based on VW 7.2.1). I've been meaning to get Bf moved to VW 7.3, and this provides a reason to do so - in the meantime, the post tool update for the current build has been kicked back to the stable (non WYSIWYG) version.
I'm posting this from a new 7.3 build I've put together. It's been put up for download as well - go to the download page, and scroll down to the development links. If you already have BottomFeeder installed, just grab the baseapp zip file (appropriate to your platform) and replace your image/exe with the one in that archive. Otherwise, grab the installation files.
Once you get it installed, you'll have the latest code - including the new post tool - which I'm using to create this post. Enjoy
It looks like Yahoo is upping the ante with a search API. And to make it easier to follow, they've created a weblog to get the word out. That's cool. How is this upping the ante? Well, the Google API (which BottomFeeder supports) allows only 1000 queries a day. The new Yahoo one supports up to 5000. Hat tip to Phil Ringnalda.
Sometimes I'm just glad that I don't eat in truly high end restaurants. Scroll down on that link to this:
I just don't know what to say to you. A well done steak, particularly a filet, is a crime against god. Anthony Bourdain, who I consider to be a personal hero, said in Kitchen Confidential:
[steaks], if ordered well done were routinely thrown into the deep-fryer until crispy, then tossed into an oven to incinerate further ...
I cannot imagine how offended your waiter and chef were by being asked to destroy a piece of beef like that. Seared, with a cool red center, if you please. Well done? For f**** sake. I bet you like Pilsners and drink Corona with a lime.
If the chef and/or waiter were truly offended, someone needs to remind them of a simple truth - the diner is paying their salary. It's really not their problem how the diner orders food, so long as he pays. We have this same problem in the technology sector. We like our little holy wars over languages, operating systems, and hardware - and certainly we are entitled to make our pitches. The end customer is the one paying though, and we need to remember that.
Interested in the Smalltalk Solutions Coding Contest? Then register here - registration runs from now until May 13. We'll be announcing the contest itself shortly after registration closes.
Well, this is why I call them dev builds. The update tool in the 7.3 based BottomFeeder build is broken, so I'm in the process of putting together a new build. It was a stupid problem having to do with changes to the Http client code that the upgrade package wasn't accounting for. I'll have a new build up later today
Update: The new build is up
There's been a fair bit of commentary from this post on Google's AutoLink. Here's the thing - people complain that Google's service is "evil" because content producers have no opt-out option.
Well. I hope none of the people who make that complaint ever use any of the following pieces of equipment then:
- VCR's that have commercial skip capability, or fast forward
- TiVo or ReplayTV (or any PVR), using either 30 second skip or ad skip
- Any music copying capability that moves songs from "their intended place on an album" to a tape, iPod, custom CD (you get the idea)
With all of those tools, the original content producer has no opt out capability. In fact, most of us - including many of the same people complaining about AutoLink - have raised a hue and cry (properly, IMHO) over the RIAA/MPAA attempts to kill off those capabilities. But hey - if you oppose AutoLink on the grounds that content producers have no ability to opt out, then you better be willing to bend over and take it from the RIAA and MPAA. Because it's the same issue. The only difference is the size of the entity protesting.
Update: Looks like Scoble better give up his TiVo. After all, it's just horrible that it provides a butler service, allowing him to view content in ways that the producers don't control. Ditto any MP3 players you have lying around too, Scoble. Heck - why don't we forbid anything but read only CD's and DVD's - that'll keep us safe from anyone who wants to shaft those nice content producers. I'm sure that they have our best interests at heart, after all. You can send me the TiVo Scoble - clearly, it's evil technology...
I found an interesting request in Scoble's blog:
But Yahoo's API doesn't look like they really gave me what I wanted.
Here's the first thing I wanna try to build: a search engine without blogs.
Seriously. Blogs are increasing noise to lots of searches. We already have good engines that let you search blogs (Feedster, Pubsub, Newsgator, Technorati, and Bloglines all are letting you search blogs). What about an engine that lets you search everything BUT blogs? Where's that?
Well, explain something to me - how does a search engine differentiate a blog from an arbitrary website? It's not as if they're labeled in some universal way (nor will they ever be). He then goes on to state that Yahoo's API "isn't good enough" to support that. Earth to Scoble: that might be because you asked for an impossible feature. Sure, we could have an engine omit things listed in those indices. The trouble is, it's not as if all blogs are listed in those indices. Second, there are things listed in those indices that aren't blogs - Feedster, for instance, indexes RSS and Atom feeds. I know that I've submitted non-blog RSS feeds to Feedster.
I might as well ask for a DWIMD (do what I meant, damnit) compiler...
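To see why "no blogs" is an impossible feature, consider what the filter would actually have to do. About the best signal available is feed autodiscovery markup, and here's a minimal sketch of that heuristic - the regex and function are mine, purely for illustration:

```python
import re

# Illustrative heuristic: call a page "probably a blog" if its HTML
# advertises an RSS or Atom feed via autodiscovery markup. This is
# roughly the best signal there is, and it fails in both directions:
# news sites and stores publish feeds too, while plenty of blogs
# never add the <link> tag at all.
FEED_LINK = re.compile(
    r'<link[^>]+type="application/(?:rss|atom)\+xml"',
    re.IGNORECASE)

def looks_like_blog(html):
    return bool(FEED_LINK.search(html))
```

Any engine built on a signal like this would happily drop news sites while keeping unmarked blogs - which is the point: there's no universal label to filter on.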
I've updated the ssp pages and style sheets that Vassili sent me, and gotten everything in place. The changes are visible on the main page, the archive page, and on the comment entry page. I haven't moved over the other sites, but that's just a small bit of configuration. I wanted to get mine up with the new stuff first, and make sure that everything looked ok before I went through another round of symlink fun
It seems that when I put together the Blogger and MetaWebLog API support for the post tool (2 years ago, I think), I didn't really read the specs (such as they are) correctly. Mind you, these APIs are a rat's nest of oddness - semi-standard entry points like getUserBlogs, for instance, seem to be required by various tools, but are only anecdotally documented. In any case, I'm in the process of going through the weeds of these APIs. I should have something semi-reasonable later today
I think I've got the Blogger API and MetaWebLog API sorted, both for the client posting tool and for the Silt server. I'll be pushing updates after lunch - I need to take my daughter to the Orthodontist now.
I understand the porn and poker spammers - they aren't "respectable" businesses, so they don't really worry about their reputations. It's different for a vendor like these guys: www.thebiggestdeal.com. It looks like someone ran a bot on a server owned by a business partner, sending referer spam out. It's unclear whether that partner did it, or had a machine "owned" - but it looks like my initial assumptions were wrong. I've been exchanging email with Bill White, the President of the company - he sounds like a standup guy. The spammers do damage wherever they go
To wit: can anyone tell me, for the ten years (give or take) between the introduction of VB 1.0 and the introduction of VB .NET 7.0, how much of the Win32 APIs or the COM APIs were written in VB?
Of course the answer is: none, to my knowledge. In fact, the VB team itself did not use VB in any meaningful way in its own product. The VB runtime functions were all written in C/C++. The VB forms package was written in C/C++. All of the VB controls were written in C/C++. Beyond the VB team, every major Microsoft product and operating system was written using C/C++. Every. Single. One.
And he says that last bit as if it's a good thing. What it indicates is either a severe weakness of VB, or a severe lack of vision by the VB team. Either:
- The product isn't good enough to write decent controls in, or
- The VB team wasn't smart enough to see the value in eating their own dogfood
And now they are "shocked, shocked" that people consider them to be a second class citizen of .NET? They shouldn't be surprised that VB was looked down upon for years, even given its popularity - the VB team itself implicitly told people that nothing of real importance should be done in the tool itself - "serious" work needed C++ in the past, and C# now.
Now yes, the runtimes (VM) for VW and OST are written in C. However, most of the environment is written in itself. Maybe VB just doesn't have that power...
I've been making major changes to the posting tool (full support for the Blogger and MetaWebLog APIs), and I've fixed bugs in the interface between Bf and the posting tool. As well, the current dev build can't actually download updates. Argh! I'll have a new dev build up tonight; anyone using the dev build should grab it (I'll update this post when it's ready)
Update: The new dev build is ready for download. Scroll down to the dev links
Ten is a good number notes a problem with mathematical education in the US:
In 2000, the state with students with the best mathematics proficiency percentage was Minnesota with 40%. That means that the best we could do in 2000 was 60% of 8-graders unable to apply mathematics to real-life problems. This is a sorry state of affairs.
He goes on to list many disturbing statistics that show just how innumerate most people end up. Towards the end, he adds a link to the sorry state of textbook production, implying that this is the biggest problem.
It might be the biggest problem. However, it's not the main reason (IMHO) that students end up having no practical mathematical skills. Let me run through the laundry list that I have:
- Calculators introduced in third grade
- No emphasis at all on basic computation skills
- An over-reliance on amorphous "computer skills"
I was very upset to find the local schools having the kids use calculators as early as third grade. Most students hadn't memorized basic multiplication (or even addition) facts; the school system seemed to consider that "dull", and just handed out calculators. My wife and I had to do the drill work ourselves. Now sure, in "real life" you'll always have access to a calculator. But if you can't do basic computation, a lot of high school and college level math is really tedious. Go out and test anyone who's in their 20's (or younger) to get a feel for just how bad it is - now consider how they're going to figure out whether a given sale price is actually a bargain. If they can't do that, then they certainly can't make sense of political debate centered around budget figures.
What we've got is a completely innumerate voting population - which is every bit as dangerous as an illiterate one. It's not taken seriously though - do you ever see anyone making light of not being able to read in a movie or TV show? How many characters do you see saying "I'm no good at math" - or, on the other hand, why is it that most of the mathematically literate characters are portrayed as complete losers?
So yes, the way textbooks are prepared is a problem. There are simpler problems though, and yes: I'm willing to lay this one directly at the feet of the schools and the teachers. They know full well what they aren't teaching in this area, and there's no good reason for it.
Well, I knew that this was coming. I just didn't realize that it would be coming from something that pretends to be a news source:
Photo editors cropped her head onto a model's slimmer body to create the visual effect, which even the New York Post knows is an ethical black hole (err, maybe they don't). A footnote does appear on page three with the credits: "Cover: Photo illustration by Michael Elins ... head shot by Marc Bryan-Brown."
But that's not exactly Clarissa Explains It All for the average reader. Another Jennifer Aniston on Redbook, you say? So do we, even if assistant managing editor Lynn Staley believes "Anybody who knows the (Stewart) story and is familiar with Martha's current situation would know this particular picture" was a "photo illustration."
Yes, these fakes tend to get picked up quickly by attentive readers. However, how many casual readers hear about that? And yes, this particular case is trivial. I'm just waiting for the first political dirty trick launched using photo/video editing - it's a matter of when, not if. The bottom line - you simply can't trust photos, video, or audio anymore unless you trust the source. The thing is, news sources are tossing their believability down the tubes with stunts like this.
I've had harsh words to say about Atom in the past, but that was mostly over the feed format. I haven't looked at the posting API yet - maybe I should. The Blogger API and the MetaWebLog API are simply nightmares. There doesn't seem to be any standard way for client tools to interact with a server - I was debugging the interaction between a client and my server last night via IRC. Even better - the client was set to use the MetaWebLog API, but was sending requests to blogger.apiNameHere names. Sheesh. There was also an interesting difference in API entry points - I had implemented 'getUserBlogs', and the client was sending 'getUsersBlogs'. A quick Google search turned up references to both. Sigh.
I implemented both names, pointing to the same method. I had to map blogger names over to MetaWebLog entry points, at least for the tool being tested last night - who knows what oddness will turn up next. What a complete mess...
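The aliasing itself is simple enough. Here's a sketch of the idea - the handler name and return shape are illustrative, not the actual Silt code:

```python
def get_users_blogs(appkey, username, password):
    # One handler, whatever name the client sent. The return shape
    # follows the struct-of-blog-info examples floating around the
    # spec pages; treat it as illustrative.
    return [{"blogid": "1",
             "blogName": "My Blog",
             "url": "http://www.example.com/blog"}]

# Register the handler under every name a client might send: both
# API prefixes, and both spellings of getUser(s)Blogs.
DISPATCH = {}
for prefix in ("blogger", "metaWeblog"):
    for name in ("getUserBlogs", "getUsersBlogs"):
        DISPATCH["%s.%s" % (prefix, name)] = get_users_blogs

def handle_call(method, params):
    # Look the method up; every variant resolves to the same code.
    return DISPATCH[method](*params)
```

Hook something like handle_call into whatever XML-RPC layer the server uses, and the spelling differences stop mattering.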
I like this rant in the BileBlog - no one does rants as well as Hani. Here's some of the milder stuff I'm willing to paste here - but go read it for a few devastating (and hilarious) take-downs:
I think one of the flaws of Mark's talk is that he's forgetting (or is unaware of) his audience. They aren't, as Floyd would like to think, clever leader types. They're just everyday grunts who have enough spare time and meaningless enough jobs that they can fart off on TSS every other day, interspersed with the odd person who has been sufficiently beaten with the cluebat.
The whole SOA myth makes for a great sales pitch by IBM types to high level 'architect' types whose job involves little more than doodling with crayons and going on IBM sponsored golfing trips. It does not, sadly, translate well to gruntspeak. us grunts are simple folk, we like code examples, we like concrete classes, and by god, we like xml. Anything else and most of us will be flailing about helplessly trying, and failing, to relate to the subject matter.
Read the whole thing, as they say
This is stupid, but here it is - if you are using the latest development version of BottomFeeder (i.e., you downloaded a dev build in the last few days), there's a bug in the update sub-system. The upgrade url in settings has to be modified as follows:
- Replace the text '721' with '73'
- Make sure that there's a '/' at the very end of the url
I'll be fixing those before I go to release, but you can get from here to there with those work-arounds.
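In code terms, the two work-arounds amount to this - a sketch only, and the URL in the comment is made up:

```python
def fix_upgrade_url(url):
    # Work-around 1: point the path at the 7.3 update tree
    # (e.g. http://example.com/updates/721 -> .../73).
    url = url.replace("721", "73")
    # Work-around 2: the updater needs a trailing slash.
    if not url.endswith("/"):
        url += "/"
    return url
```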
The Wall Street Reporter interviewed Cincom's President, Tom Nies recently. Here's the interview:
WSR: Maybe we could start off with a brief history and a general overview of the company.
NIES: Cincom’s been operational for 36 years. We operate all around the world and have three to four thousand strategic customers. We deal with commercial, Government and institutional buyers. We don’t sell consumer software. We compete in the marketplace with firms like Oracle, SAP, and Siebel, selling not only strategic software applications and software products, but also application development, database management and software and services that clients use to build and develop their own applications.
WSR: Tell us about some of the emerging and developing trends that you see within the industry and explain to us how the company’s products are redesigned to capitalize and address these trends.
NIES: I think the most important trend is that customers are becoming much more demanding buyers. There is a significant supply of excess software in the marketplace today. I estimate as much as 40-50%. For any type of product or solution one wants there are half a dozen good potential providers. This gives customers the opportunity to demand more for their money. As a result, software implementations are now a better buy for them than they were in the past and will be even better in the future. Software companies who provide their customers more value at a lower cost, with more rapid ROI, while minimizing risk, are going to benefit handsomely in the future. Those who continue to require very drawn out and significantly excessive costs of implementation and support will suffer. Simple enough. In a buyer’s market, customers will demand and get greater value at much lower overall cost. Only naïve buyers will accommodate vendors in today’s marketplace as they did in the years leading up to 2000.
WSR: One of the developments we have seen in recent times is the whole regulatory requirement of Sarbanes-Oxley. And companies are looking to such areas as business intelligence software. How do you see this particular trend developing and how is the company addressing this?
NIES: Sarbanes-Oxley is a good indication of the fact that investors want more information about the business just as management needs more information about the business. There has to be a lot more integrity in the figures and facts because excessive risks will no longer be tolerated. But besides knowing more about the business and reporting to the owners and the regulatory bodies, the globalized world we are living in today has increased the competition so much that more effective, better and more comprehensive use of software in just about every area of the business is absolutely mandated. That is one of the reasons why I think we are going to see a great blossoming in software opportunities in the future, certainly in business intelligence. It’s not one that Cincom is now promoting heavily, but it’s a new opportunity area for us too.
WSR: In terms of partnerships and alliances within the industry, how does Cincom use them to further your objectives?
NIES: Partnerships are key. One has to minimize the cost of selling and distributing software, as well as broadening the base availability and distribution of software and solutions offered. So, we not only look to allies to supplement and round out our product line, but also we look to partners who would use our software line and some of the services we offer to better satisfy their customers. We are working heavily with developing and expanding partnerships. We originally built the company around a partner-related environment, and I think this is a way forward for most companies today. It’s a major trend in our industry to see expansion of partnerships everywhere.
WSR: Cincom competes in the industry against such leading companies as SAP, Oracle, and Siebel Systems. How does Cincom distinguish its technology and its product offerings from these competitors?
NIES: Today, almost all the software providers offer the customer more than they really need to satisfy their requirements. So, emphasis on product feature and functionality is an area of marginal utility, with diminishing returns to boot. To win more consistently in the marketplace today one must deliver more value to the customer at a lowered cost. The differentiator we show companies from an Oracle or a SAP is that Cincom will implement a system of similar capability for perhaps a fourth or fifth, maybe in some cases, a tenth of the total costs and half to a third of the time required to reach operational utility of the systems desired. That's our value proposition. Cincom does this consistently, and we believe that is the significance of our comparative advantage: more value, delivered for less time, cost and risk, is having an impact for Cincom in the marketplace.
WSR: Can you tell us about the background and experience of the management team on board?
NIES: One of the great strengths of Cincom is that our company has proven to be a very attractive company to our associates. We not only are able to attract good managers and top executives to Cincom, but we were able to retain them. We have an average of 12 to 15 years or so leading and guiding every one of our business pursuits. Our people are committed to our business, and Cincom is committed to our people. We have probably the longest average employment rate in the industry. We also have the highest returnee rate. One out of every 12 of our people is a returnee. Over 25% of Cincom’s staff have more than 15 years with our company. We are deep in talent. We have a really experienced and zealous management team. We can, and do consistently, deliver on our promises because of the skill, quality and experience of our people.
WSR: Cincom serves thousands of clients on six continents. How do you foresee the next two to three years as a time of expansion and development for the company?
NIES: Customers want providers to take more and more responsibility. So, we are expanding our offerings. Hosting services, outsourcing services and more comprehensive facilities management types are now being provided to customers. We see this as a great growth opportunity for us because customers who are looking at the IBMs, Computer Sciences, EDS, and others are now looking for alternative suppliers who will provide the same type of quality and comprehensiveness of service, but at significantly lower prices. So, this is another market opportunity that we are pursuing with exactly the same model as we employed for our software product offerings. What Dell has done for PCs—that is to provide a very similar quality at a much lower price—is what we are doing in the area of software and outsourcing services. It is a model that we think will play well into the future; more value with lower cost. We see this not just with Dell, but we see these demands being made in almost every industry, in every part of our now globalized commercial world.
WSR: In terms of geographic expansion, what are some of your other new key markets that you think might represent an opportunity?
NIES: Asia-Pacific is absolutely on the top of the list by far. Europe and America are mature markets. They are good markets with moderate growth. And we are penetrating and developing these markets quite well. But, the growth rate in these markets is much less than the growth rate in places like China and throughout the rest of Asia. Japan is still a very good growth market, but China, India and much of Asia are now very, very substantial growth opportunities. That’s why just about everybody is going there as fast and as significantly as they can.
WSR: Perhaps you could give us just an idea of some of the key milestones that we can expect to see from the company over the next 12 to 18 months.
NIES: Throughout this entire decade, we have averaged over 80% compounded return on invested capital; 80%—that’s three to four times or five times what is typically generated among good performing companies in our industry. So, a very high return on investment and also significant increases in earnings per share is a key Cincom emphasis. We have increased earnings per share by seven-fold over the last five years and we are averaging, as I said, 80% return on capital investment. We are looking to expand the business significantly without any kind of adverse effect on these operating results. This will be no easy task for us. But, we are committed to high returns on investments for ourselves just as we look to provide our customers high returns on their investments with us.
This morning, I had a rant about the state of blog posting APIs. It's worse than I thought then :)
Have a look at the MetaWebLog "spec" page, for instance - I have no idea what data a server is expected to return, nor do I have any idea what a client should expect to see. I can code a client defensively, but a server? I've been testing on the IRC again, and the tool that was hitting my server couldn't handle some of the data I was sending back. This is just maddening. I mean really - just marvel at this excuse for a spec:
In newPost and editPost, content is not a string, as it is in the Blogger API, it's a struct. The defined members of struct are the elements of <item> in RSS 2.0, providing a rich variety of item-level metadata, with well-understood applications.
The three basic elements are title, link and description. For blogging tools that don't support titles and links, the description element holds what the Blogger API refers to as "content."
For tools that don't support titles and links? I'm supposed to know which ones those are.... how? Can't we find a happy medium between the undefined crap that we have now and the over-defined insanity that is Atom?
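For what it's worth, here's the difference the spec is describing, as a pair of parameter builders a defensive client might use. The parameter names and struct members follow the spec text quoted above; everything else is illustrative:

```python
def blogger_newpost_params(appkey, blogid, user, password, content):
    # Blogger API: content is a plain string, nothing more.
    return [appkey, blogid, user, password, content, True]

def metaweblog_newpost_params(blogid, user, password, title, link, body):
    # MetaWebLog API: content is a struct whose members are RSS 2.0
    # <item> elements; "description" holds what the Blogger API
    # calls "content".
    struct = {"title": title, "link": link, "description": body}
    return [blogid, user, password, struct, True]
```

With Python's xmlrpc.client you'd hand these to proxy.blogger.newPost(...) or proxy.metaWeblog.newPost(...) respectively - and, as the episode above shows, be prepared for any given server or tool to expect either shape.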
When this story applies to you, it's time to step away from the PC:
You're in the middle of a frenzied fragfest when it hits: You gotta pee--bad. Whatcha gonna do? Getting up from your computer clearly isn't an option--any 733t d00d knows the deathmatch owns the bladder.
Enter the Internet urinal, a handy-dandy portable pee device marketed specially for the PC-bound. Each contraption is made of hard plastic, comes with a "female adapter" and holds 32 ounces--a whole lotta recycled Red Bull.
"With the Internet Urinal, you'll never have to leave your computer again," touts a promo on ThinkGeek. "Imagine the freedom--destroy your opponents in that all-important 'Quake 3' clan match without taking a break; drink as many cans of BAWLS as you want and still be able to make that last important trade before the market closes."
Time to get a life :)