Back when Star Trek: Next Gen was on, there was a specific point where the series went off the rails - it was when the writers created the Borg. Here was a race that was too powerful, that basically could not be defeated. It took the writers a lot of bad scripts to write their way out of that paper bag. Well, I'm starting to think that the writers for Stargate SG-1 have created the same problem for themselves with the replicators. The disruptor weapons that Carter and the Asgard came up with don't work - the replicators adapt immediately - hmm, just like the Borg did to phasers. The Goa'uld are busy getting their collective butts hammered, and there doesn't seem to be any way out other than a miracle weapon of the Ancients. It's sounding familiar, and not in a good way.
Mind you, I think it's a good thing that MS is working on this. Even so, I find some irony in this Cook Computing post. Many years ago, when Smalltalk started out at PARC, it was not only the development/deployment platform - it was the OS. Likewise, the old Lisp machines were the same thing, but running Lisp instead of Smalltalk. It's mildly amusing to see the industry slowly finding its way back to ideas pioneered decades ago. Had the supposedly smart guys not been so enamored of curly braces and semicolons, we could have been there a long, long time ago...
Update: Fixed the link
I've added support for file upload to the blog posting tool, and matching server side support. With any luck, that means that we'll start seeing a more interactive set of blogs here. I've been getting a lot of help cleaning stuff up from Steve Kelly of MetaCase - the SSP files have been nicely refactored by him. At the same time, Vassili has gotten me some new templates and CSS files, so I will likely be updating the basic look soon. On top of all that, Michael has been working on a WYSIWYG posting client - which will make it possible for us to create nice looking posts without using markup - either Wiki style or otherwise. This little blog server has come a long way in the last few weeks - stay tuned for more.
If you tried to leave a comment on the site earlier, you ran across a nasty red warning about your comment being rejected if you weren't logged in. Well, I haven't added comment registration. What I've added is fat finger editing of files :) I updated some templates last night, and I accidentally pointed the comment entry name at the post entry form. Well, if you want to make a post (as the blog author), it won't work unless you are logged in - thus the error message. So, I just went back and fixed the editing error, and it's all back to normal.
The original BattleStar Galactica (mostly a dog) was a 70's series - in that era, the Cylons were a tv shadow of the Soviet Union - a vast, impersonal empire out to mindlessly crush humanity. The new iteration is a great series - it's not campy, and it has real characters. The Cylons are very different. They aren't exactly robots, and their ships seem to be cyborgs - living animals. Additionally, the series seems to be pulling themes from modern conflict. The Cylons seem to be religious - and their beliefs are different from those of humanity. In fact, it's starting to look to me like the Cylons consider humanity to be heretics. It's not clear if that's how things are going, but the writers are dropping a lot of hints that way - last night's episode in particular was fascinating (in deference to my readers down under, I won't reveal anything). In any case, I'm now very curious to see where they go with this.
Steve Kelly has put up a new wiki page detailing how to set up a Silt Blog Server. With VW 7.3, we ship a runtime image for use with web application servers - set up headless and with listeners already defined. The instructions walk you through the setup.
I've gotten a few emails about the online tutorials for VW - they are here. The problem? Those tutorials make reference to categories in the System Browser. In 7.3, the categories aren't visible - instead, you see organization by package (the level used by Store, our version control tool). The tutorials will get updated, but in the meantime, you can just ignore the references to categories.
After a very light (but cold) January, we've had a warm (but snowier) February. We are supposed to get 6-12 inches tomorrow, which is a lot for this part of Maryland at this time of year. I expect to be out sledding with my daughter a fair bit of tomorrow, and for school to be out. Since there's going to be snow through part of Monday evening, I won't be surprised if school is out Tuesday as well.
Tom Murphy points to an all too typical marketing approach - the allegedly customized form letter. Did these ever work? They didn't impress me in snail mail, they don't impress me in email. Tom mentions a few other problems:
If they had taken the time to read even a week's postings the publicist in question would have found a post I recently wrote on pitching blogs that would have saved him making this mistake.
However, the pitch was a mail merge which rather than being targeted was sent to probably a large number of bloggers. How do I know? Check out this paragraph for tell tale mail merge problems:
"Tom , we'd like to meet you and see where we might be able to serve as a source for future articles and offer some possible story ideas for your readers. If you'd like to have a one-on-one briefing, we'd like to get on your calendar right now. Please drop us an e-mail with times you've got available and we'll confirm your appointment and briefing."
The spaces after my name point to the tell tale signs of an incompetent mail merge. Looks like I'm not that special after all.
Heh, you would think someone bothering to put together a mass mailing would try not to look incompetent - without regard to the message, incompetence doesn't engender confidence.
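The failure mode Tom spotted is easy to reproduce. Here's a minimal sketch of a naive mail merge (the template and record are invented for illustration - this is not the publicist's actual tool); with an empty last-name field, the separator space gets left behind, producing exactly the tell-tale "Tom ," output:

```python
# Naive mail merge: blindly substitute fields into a template, with no
# cleanup of the whitespace around empty fields.
def mail_merge(template, record):
    result = template
    for field, value in record.items():
        result = result.replace("{" + field + "}", value)
    return result

template = "{first_name} {last_name}, we'd like to meet you..."

# A record with an empty last-name field -- common in scraped contact
# lists -- leaves the stray space behind before the comma:
print(mail_merge(template, {"first_name": "Tom", "last_name": ""}))
# -> Tom , we'd like to meet you...
```

A merge tool that collapsed runs of whitespace before punctuation would have hidden the evidence - which is presumably why the cheap ones don't bother.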
Then again, many blogs didn't exactly distinguish themselves on Election Day. Some merely made bad guesses; some were truly off the wall. That's too bad. If they'd taken a step back, thought harder about their writing and addressed the consequences of their actions - which thoroughly professional journalists do with every story they write - the bloggers might have done a better job.
Hmm - I think I could say the same thing about the journalists at any number of newspapers, magazines, and tv networks. The difference? They get paid, and supposedly have competent editors. The evidence available doesn't engender a lot of confidence in the "competent" part of that equation.
Here's the thing - for many, many years now, professional journalists have gotten to decide what is and isn't news. You can see the results by looking at the sensational stories that pop up from time to time - Laci Peterson's murder got wall to wall coverage, while similar murders (not to mention international stories of note, like events in Darfur, Sudan) got ignored. Some people call that bias - I'd say it's more like a herd or pack mentality. The specifics aren't really the point though - the point is, the professionals are simply not the thorough, "check every facet" types that Friedman would have us believe they are.
Heck, think about it for a minute - what's the actual training for a journalist? It's not as if you have to spend eons in school to learn the basics:
- Take good notes
- Follow up on leads
- Cross examine for conflicting stories
That's not rocket science. Bloggers can do that as well as any journalist (within the constraints of budget, which does make a difference for large news organizations). Even without that though, bloggers can do what the editors all too often don't - basic fact checking. Oddly enough, fact checking is more relevant for a blogger than it typically has been for a media outlet. If a newspaper or tv station gets something wrong, they can ignore naysayers for as long as they want - they have the microphone, and can simply refuse to print (or air) any POV that counters theirs. A blogger doesn't have that luxury. If we make mistakes (and trust me, we do), there are plenty of other bloggers willing and able to point those mistakes out, using a megaphone that is as large or larger than ours. We can't sit back and stonewall effectively - something that the major media can do.
As Doc points out, Friedman does give an "on the other hand" side to his story on page two. Here's the thing though - there are literally millions of blogs, covering tons of subjects. Some cover an exclusive "beat" - politics, marketing, IT sector stuff - some cross fields. Some are careful, and some are just ranting. You can't really generalize about bloggers. Doesn't seem to stop the professionals from trying though, demonstrating again their incredible superiority over us, and showing us just how much good those careful editors do for them.
I think Scoble accidentally stumbled on something interesting - have a look at his anti-Auto-Link post. I haven't commented on this thing - truth be told, I haven't been able to get myself to care (can I avoid AutoLink? Yes. Ok then, I don't care...). Here's the interesting thing from Scoble:
I believe that anything that changes the linking behavior of the Web is evil. Anything that changes my content is evil. Particularly anything that messes with the integrity of the link system. And I do see this as a slippery slope. Today users have to jump through hoops to use this feature. What about tomorrow? Oh, and Google says they won't be evil, but what about their competitors who haven't taken such an anti-evil stance? (Hint: Microsoft isn't the only Google competitor).
Now, some other people tried to make the point that popup ad blockers and Tivo should also be seen as evil, then.
That's pushing the point a little far. The fundamental building block of the Web is linking. Linking is MY EDITORIAL CONTENT. That's different than advertising. And, if you got rid of popups, I still am able to get my point across here. In fact, I don't use them. And I don't have advertising here, so my point is still OK.
That may not be pushing the point too far. Say I visit a website - they sell space to advertisers, some (or all) of whom use pop-ups or pop-unders. Are they annoying? Heck yes. Do I use tools to block them? Heck yes. Does blocking them change the behavior of the web?
You can't really argue this point. The ads contain links that the site owner wanted you to see (he's paying for you to see them). By blocking them, you change the behavior of the web. See, this is why I simply can't get worked up over AutoLink. Given appropriate tools, I can decide whether I want to see pop ups or not. Google is providing me with a tool that lets me decide whether I want to see related information or not. Heck, I might as well rage against paid placement. Scoble blathers on and on about how AutoLink is an evil idea. Winer has been going on and on about it as well. I'll say the same thing I say to people who can't figure out the "change channel" or "off" switch on a TV or radio - you don't have to view/hear/read the content. It's an individual choice, and that's just fine. No one said you have to use Google. It's an open market for search engines guys - if this is an evil idea, people won't like it. If people don't like it, MS has the perfect opportunity to market their AutoLink free search engine.
There's definitely some irony in watching MS yap like a small dog when they are getting out-competed though.
The updated post tool that Michael created is now available as a development level update. To get it, you'll need to do a few things:
- Change the update path in BottomFeeder to end in /dev
- Check for updates, getting everything
- Go to the BottomFeeder download page, scroll down to the dev links, and grab the icons zip file
- Unzip the icons.zip file in your BottomFeeder install directory. You should end up with a new directory named "icons", filled with small image files
- Now restart BottomFeeder, and open the post tool from the plugins menu. You should see the new tool with the SwS editor.
When I release the next version of BottomFeeder, the new post tool (along with the required image files) will all be properly bundled. At this point, we have early access - there may be some issues with the tool (for instance, I know that creating tables is somewhat problematic). If you run into problems, let me know.
That sound you hear is the thud of death for the Intel Itanium. TechRepublic notes that IBM has dropped out as well.
I've finished testing the new look and feel stuff - it took longer than expected because there was simultaneous code evolution going on. The latest Silt bundle contains all the latest server code, and the SiltSSPFiles bundle contains all the latest SSP/CSS stuff.
I intend to migrate to the new look this evening - I need to update the server itself to do that - I haven't been tracking incremental changes at all. If you want to look at this stuff yourself, then have a look at the Silt Page on the Wiki. I'll be uploading the latest SSP/CSS files in a minute here. There are overlapping pages:
- View.ssp and View2.ssp
- CommentEntry.ssp and CommentEntry2.ssp
- Archive.ssp and Archive2.ssp
The "2.ssp" files all use the newer css look. If you intend to try those out, you'll have to rename them, or muck with your site configuration file (which, as generated by the creator tool, assumes the original names).
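If you'd rather script the rename than muck with the site configuration file, something along these lines would do it (a sketch - the backup-suffix convention is mine, and the page names are just the three listed above):

```python
import os
import shutil

# The overlapping page pairs: each "2" version carries the new CSS look.
pages = ["View", "CommentEntry", "Archive"]

def activate_new_look(directory):
    """Promote each <Page>2.ssp over <Page>.ssp, keeping a .bak copy."""
    for page in pages:
        old = os.path.join(directory, page + ".ssp")
        new = os.path.join(directory, page + "2.ssp")
        if os.path.exists(old):
            shutil.move(old, old + ".bak")   # keep the original look around
        shutil.move(new, old)                # the "2" file takes the old name
```

Going this direction means the generated site configuration keeps working unchanged, since it only ever refers to the original names.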
After some testing, it looks like the WYSIWYG posting tool isn't compatible with the current runtime build (which is still based on VW 7.2.1). I've been meaning to get Bf moved to VW 7.3, and this provides a reason to do so - in the meantime, the post tool update for the current build has been kicked back to the stable (non WYSIWYG) version.
I'm posting this from a new 7.3 build I've put together. It's been put up for download as well - go to the download page, and scroll down to the development links. If you already have BottomFeeder installed, just grab the baseapp zip file (appropriate to your platform) and replace your image/exe with the one in that archive. Otherwise, grab the installation files.
Once you get it installed, you'll have the latest code - including the new post tool - which I'm using to create this post. Enjoy
It looks like Yahoo is upping the ante with a search API. And to make it easier to follow, they've created a weblog to get the word out. That's cool. How is this upping the ante? Well, the Google API (which BottomFeeder supports) allows only 1000 queries a day. The new Yahoo one supports up to 5000. Hat tip to Phil Ringnalda.
Sometimes I'm just glad that I don't eat in truly high end restaurants. Scroll down on that link to this:
I just don't know what to say to you. A well done steak, particularly a filet, is a crime against god. Anthony Bourdain, who I consider to be a personal hero, said in Kitchen Confidential:
[steaks], if ordered well done were routinely thrown into the deep-fryer until crispy, then tossed into an oven to incinerate further ...
I cannot imagine how offended your waiter and chef were by being asked to destroy a piece of beef like that. Seared, with a cool red center, if you please. Well done? For f**** sake. I bet you like Pilsners and drink Corona with a lime.
If the chef and/or waiter were truly offended, someone needs to remind them of a simple truth - the diner is paying their salary. It's really not their problem how the diner orders food, so long as he pays. We have this same problem in the technology sector. We like our little holy wars over languages, operating systems, and hardware - and certainly we are entitled to make our pitches. The end customer is the one paying though, and we need to remember that.
Interested in the Smalltalk Solutions Coding Contest? Then register here - registration runs from now until May 13. We'll be announcing the contest itself shortly after registration closes.
Well, this is why I call them dev builds. The update tool in the 7.3 based BottomFeeder build is broken, so I'm in the process of putting together a new build. It was a stupid problem having to do with changes to the Http client code that the upgrade package wasn't accounting for. I'll have a new build up later today.
Update: The new build is up
There's been a fair bit of commentary from this post on Google's AutoLink. Here's the thing - people complain that Google's service is "evil" because content producers have no opt-out option.
Well. I hope none of the people who make that complaint ever use any of the following pieces of equipment then:
- VCR's that have commercial skip capability, or fast forward
- TiVO or ReplayTV (or any PVR), using either 30 second skip or ad skip
- Any music copying capability that moves songs from "their intended place on an album" to a tape, iPod, custom CD (you get the idea)
With all of those tools, the original content producer has no opt out capability. In fact, most of us - including many of the same people complaining about AutoLink - have raised a hue and cry (properly, IMHO) over the RIAA/MPAA attempts to kill off those capabilities. But hey - if you oppose AutoLink on the grounds that content producers have no ability to opt out, then you better be willing to bend over and take it from the RIAA and MPAA. Because it's the same issue. The only difference is the size of the entity protesting.
Update: Looks like Scoble better give up his TiVo. After all, it's just horrible that it provides a butler service, allowing him to view content in ways that the producers don't control. Ditto any MP3 players you have lying around too, Scoble. Heck - why don't we forbid anything but read only CD's and DVD's - that'll keep us safe from anyone who wants to shaft those nice content producers. I'm sure that they have our best interests at heart, after all. You can send me the TiVo Scoble - clearly, it's evil technology...
I found an interesting request in Scoble's blog:
But Yahoo's API doesn't look like they really gave me what I wanted.
Here's the first thing I wanna try to build: a search engine without blogs.
Seriously. Blogs are increasing noise to lots of searches. We already have good engines that let you search blogs (Feedster, Pubsub, Newsgator, Technorati, and Bloglines all are letting you search blogs). What about an engine that lets you search everything BUT blogs? Where's that?
Well, explain something to me - how does a search engine differentiate a blog from an arbitrary website? It's not as if they're labelled in some universal way (nor will they ever be). He then goes on to state that Yahoo's API "isn't good enough" to support that. Earth to Scoble: that might be because you asked for an impossible feature. Sure, we could have an engine omit things listed in those indices. The trouble is, it's not as if all blogs are listed in those indices. Second, there are things listed in those indices that aren't blogs - Feedster, for instance, indexes RSS and Atom feeds. I know that I've submitted non-blog RSS feeds to Feedster.
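For what it's worth, the best any engine could do is a heuristic - and a toy version makes the problem obvious. The markers below are my own guesses, not anyone's actual algorithm; plenty of blogs lack all of them, and plenty of non-blogs have several:

```python
# Guess whether a page is a blog by counting common (but far from
# universal) markers in its HTML. This is inherently unreliable in
# both directions -- which is the whole point.
MARKERS = ("rss", "atom", "permalink", "trackback", "blogroll")

def looks_like_blog(html):
    text = html.lower()
    return sum(marker in text for marker in MARKERS) >= 2

# A news site with RSS feeds and permalinked stories trips the check:
print(looks_like_blog('<link type="application/rss+xml"> <a>Permalink</a>'))
# -> True
```

So "search everything BUT blogs" really means "search everything but whatever my heuristic happens to flag" - with false positives eating exactly the non-blog feeds Scoble says he wants.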
I might as well ask for a DWIMD (do what I meant, damnit) compiler...
I've updated the ssp pages and style sheets that Vassili sent me, and gotten everything in place. The changes are visible on the main page, the archive page, and on the comment entry page. I haven't moved over the other sites, but that's just a small bit of configuration. I wanted to get mine up with the new stuff first, and make sure that everything looked ok before I went through another round of symlink fun.
It seems that when I put together the Blogger and MetaWebLog API support for the post tool (2 years ago, I think), I didn't really read the specs (such as they are) correctly. Mind you, these APIs are a rat's nest of oddness - semi-standard entry points like getUserBlogs, for instance, seem to be required by various tools, but are only anecdotally documented. In any case, I'm in the process of going through the weeds of these APIs. I should have something semi-reasonable later today.
I think I've got the Blogger API and MetaWebLog API sorted, both for the client posting tool and for the Silt server. I'll be pushing updates after lunch - I need to take my daughter to the Orthodontist now.
I understand the porn and poker spammers - they aren't "respectable" businesses, so they don't really worry about their reputations. It's different for a vendor like these guys: www.thebiggestdeal.com. It looks like someone ran a bot on a server owned by a business partner, sending referer spam out. It's unclear whether that partner did it, or had a machine "owned" - but it looks like my initial assumptions were wrong. I've been exchanging email with Bill White, the President of the company - he sounds like a standup guy. The spammers do damage wherever they go.
To wit: can anyone tell me, for the ten years (give or take) between the introduction of VB 1.0 and the introduction of VB .NET 7.0, how much of the Win32 APIs or the COM APIs were written in VB?
Of course the answer is: none, to my knowledge. In fact, the VB team itself did not use VB in any meaningful way in its own product. The VB runtime functions were all written in C/C++. The VB forms package was written in C/C++. All of the VB controls were written in C/C++. Beyond the VB team, every major Microsoft product and operating system was written using C/C++. Every. Single. One.
And he says that last bit as if it's a good thing. What it indicates is either a severe weakness of VB, or a severe lack of vision by the VB team. Either:
- The product wasn't good enough to write decent controls in, or
- The VB team wasn't smart enough to see the value in eating their own dogfood
And now they are "shocked, shocked" that people consider them to be a second class citizen of .NET? They shouldn't be surprised that VB was looked down upon for years, even given its popularity - the VB team itself implicitly told people that nothing of real importance should be done in the tool itself - "serious" work needed C++ in the past, and C# now.
Now yes, the runtimes (VM) for VW and OST are written in C. However, most of the environment is written in itself. Maybe VB just doesn't have that power...
I've been making major changes to the posting tool (full support for the Blogger and MetaWebLog APIs), and I've fixed bugs in the interface between Bf and the posting tool. As well, the current dev build can't actually download updates. Argh! I'll have a new dev build up tonight; anyone using the dev build should grab it (I'll update this post when it's ready).
Update: The new dev build is ready for download. Scroll down to the dev links
Ten is a good number notes a problem with mathematical education in the US:
In 2000, the state with students with the best mathematics proficiency percentage was Minnesota with 40%. That means that the best we could do in 2000 was 60% of 8th-graders unable to apply mathematics to real-life problems. This is a sorry state of affairs.
He goes on to list many disturbing statistics that show just how innumerate most people end up. Towards the end, he adds a link to the sorry state of textbook production, implying that this is the biggest problem.
It might be the biggest problem. However, it's not the main reason (IMHO) that students end up having no practical mathematical skills. Let me run through the laundry list that I have:
- Calculators introduced in third grade
- No emphasis at all on basic computation skills
- An over-reliance on amorphous "computer skills"
I was very upset to find the local schools having the kids use calculators as early as third grade. Most students hadn't memorized basic multiplication (or even addition) facts; the school system seemed to think that "dull", and just handed out calculators. My wife and I had to do the drill work ourselves. Now sure, in "real life" you'll always have access to a calculator. But if you can't do basic computation, a lot of high school and college level math is really tedious. Go out and test anyone who's in their 20's (or younger) to get a feel for just how bad it is - now consider how they are going to make sense of whether a given sales price is of any value. If they can't do that, then they certainly can't make sense of political debate centered around budget figures.
What we've got is a completely innumerate voting population - which is every bit as dangerous as an illiterate one. It's not taken seriously though - do you ever see anyone making light of not being able to read in a movie or TV show? How many characters do you see saying "I'm no good at math" - or, on the other hand, why is it that most of the mathematically literate characters are portrayed as complete losers?
So yes, the way textbooks are prepared is a problem. There are simpler problems though, and yes: I'm willing to lay this one directly at the feet of the schools and the teachers. They know full well what they aren't teaching in this area, and there's no good reason for it.
Well, I knew that this was coming. I just didn't realize that it would be coming from something that pretends to be a news source:
Photo editors cropped her head onto a model's slimmer body to create the visual effect, which even the New York Post knows is an ethical black hole (err, maybe they don't). A footnote does appear on page three with the credits: "Cover: Photo illustration by Michael Elins ... head shot by Marc Bryan-Brown."
But that's not exactly Clarissa Explains It All for the average reader. Another Jennifer Aniston on Redbook, you say? So do we, even if assistant managing editor Lynn Staley believes "Anybody who knows the (Stewart) story and is familiar with Martha's current situation would know this particular picture" was a "photo illustration."
Yes, these fakes tend to get picked up quickly by attentive readers. However, how many casual readers hear about that? And yes, this particular case is trivial. I'm just waiting for the first political dirty trick launched using photo/video editing - it's a matter of when, not if. The bottom line - you simply can't trust photos, video, or audio anymore unless you trust the source. The thing is, news sources are tossing their believability down the tubes with stunts like this.
I've had harsh words to say about Atom in the past, but that was mostly over the feed format. I haven't looked at the posting API yet - maybe I should. The Blogger API and the MetaWebLog API are simply nightmares. There doesn't seem to be any standard way for client tools to interact with a server - I was debugging the interaction between a client and my server last night via IRC. Even better - the client was set to use the MetaWebLog api, but was sending requests to blogger.apiNameHere names. Sheesh. There was also an interesting difference in API entry points - I had implemented 'getUserBlogs', and the client was sending 'getUsersBlogs'. A quick Google search turned up references to both. Sigh.
I implemented both names, pointing to the same method. I had to map blogger names over to MetaWebLog entry points, at least for the tool being tested last night - who knows what oddness will turn up next. What a complete mess...
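The both-names fix looks roughly like this in outline - sketched here with Python's stdlib XML-RPC dispatcher rather than the Smalltalk that Silt is actually written in, and with an invented blog record for illustration:

```python
from xmlrpc.server import SimpleXMLRPCDispatcher

def get_users_blogs(appkey, username, password):
    # One implementation behind every name the clients might send.
    return [{"blogid": "1", "blogName": "My Blog", "url": "/blog/"}]

dispatcher = SimpleXMLRPCDispatcher(allow_none=True)

# Register the same method under every spelling seen in the wild:
# the documented name, the mis-documented variant, and clients that
# mix blogger.* prefixes into MetaWebLog sessions (and vice versa).
for name in ("blogger.getUsersBlogs",
             "blogger.getUserBlogs",
             "metaWeblog.getUsersBlogs"):
    dispatcher.register_function(get_users_blogs, name)

print(dispatcher._dispatch("blogger.getUserBlogs", ("key", "me", "pw")))
```

Ugly, but it beats guessing which spelling the next client will use.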
I like this rant in the BileBlog - no one does rants as well as Hani. Here's some of the milder stuff I'm willing to paste here - but go read it for a few devastating (and hilarious) take-downs:
I think one of the flaws of Mark's talk is that he's forgetting (or is unaware of) his audience. They aren't, as Floyd would like to think, clever leader types. They're just everyday grunts who have enough spare time and meaningless enough jobs that they can fart off on TSS every other day, interspersed with the odd person who has been sufficiently beaten with the cluebat.
The whole SOA myth makes for a great sales pitch by IBM types to high level 'architect' types whose job involves little more than doodling with crayons and going on IBM sponsored golfing trips. It does not, sadly, translate well to gruntspeak. us grunts are simple folk, we like code examples, we like concrete classes, and by god, we like xml. Anything else and most of us will be flailing about helplessly trying, and failing, to relate to the subject matter.
Read the whole thing, as they say.
This is stupid, but here it is - if you are using the latest development version of BottomFeeder (i.e., you downloaded a dev build in the last few days), there's a bug in the update sub-system. The upgrade url in settings has to be modified as follows:
- Replace the text '721' with '73'
- Make sure that there's a '/' at the very end of the url
I'll be fixing those before I go to release, but you can get from here to there with those work-arounds.
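Expressed as code, the two edits amount to this (just a sketch - the URL shown is a stand-in, not the real update URL):

```python
def fix_update_url(url):
    # Point at the 7.3 update tree instead of the 7.2.1 one...
    url = url.replace("721", "73")
    # ...and make sure there's a '/' at the very end.
    if not url.endswith("/"):
        url += "/"
    return url

print(fix_update_url("http://www.example.com/bf/updates/721"))
# -> http://www.example.com/bf/updates/73/
```

In BottomFeeder itself you'd make the same change by hand in the settings dialog, of course.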
The Wall Street Reporter interviewed Cincom's President, Tom Nies recently. Here's the interview:
WSR: Maybe we could start off with a brief history and a general overview of the company.
NIES: Cincom’s been operational for 36 years. We operate all around the world and have three to four thousand strategic customers. We deal with commercial, Government and institutional buyers. We don’t sell consumer software. We compete in the marketplace with firms like Oracle, SAP, and Siebel, selling not only strategic software applications and software products, but also application development, database management and software and services that clients use to build and develop their own applications.
WSR: Tell us about some of the emerging and developing trends that you see within the industry and explain to us how the company’s products are redesigned to capitalize and address these trends.
NIES: I think the most important trend is that customers are becoming much more demanding buyers. There is a significant supply of excess software in the marketplace today. I estimate as much as 40-50%. For any type of product or solution one wants there are half a dozen good potential providers. This gives customers the opportunity to demand more for their money. As a result, software implementations are now a better buy for them than they were in the past and will be even better in the future. Software companies who provide their customers more value at a lower cost, with more rapid ROI, while minimizing risk, are going to benefit handsomely in the future. Those who continue to require very drawn out and significantly excessive costs of implementation and support will suffer. Simple enough. In a buyer’s market, customers will demand and get greater value at much lower overall cost. Only naïve buyers will accommodate vendors in today’s marketplace as they did in the years leading up to 2000.
WSR: One of the developments we have seen in recent times is the whole regulatory requirement of Sarbanes-Oxley. And companies are looking to such areas as business intelligence software. How do you see this particular trend developing and how is the company addressing this?
NIES: Sarbanes-Oxley is a good indication of the fact that investors want more information about the business just as management needs more information about the business. There has to be a lot more integrity in the figures and facts because excessive risks will no longer be tolerated. But besides knowing more about the business and reporting to the owners and the regulatory bodies, the globalized world we are living in today has increased the competition so much that more effective, better and more comprehensive use of software in just about every area of the business is absolutely mandated. That is one of the reasons why I think we are going to see a great blossoming in software opportunities in the future, certainly in business intelligence. It’s not one that Cincom is now promoting heavily, but it’s a new opportunity area for us too.
WSR: In terms of partnerships and alliances within the industry, how does Cincom use them to further your objectives?
NIES: Partnerships are key. One has to minimize the cost of selling and distributing software, as well as broadening the base availability and distribution of software and solutions offered. So, we not only look to allies to supplement and round out our product line, but also we look to partners who would use our software line and some of the services we offer to better satisfy their customers. We are working heavily with developing and expanding partnerships. We originally built the company around a partner-related environment, and I think this is a way forward for most companies today. It’s a major trend in our industry to see expansion of partnerships everywhere.
WSR: Cincom competes in the industry against such leading companies as SAP, Oracle, and Siebel Systems. How does Cincom distinguish its technology and its product offerings from these competitors?
NIES: Today, almost all the software providers offer the customer more than they really need to satisfy their requirements. So, emphasis on product feature and functionality is an area of marginal utility, with diminishing returns to boot. To win more consistently in the marketplace today one must deliver more value to the customer at a lowered cost. The differentiator we show companies from an Oracle or a SAP is that Cincom will implement a system of similar capability for perhaps a fourth or a fifth, maybe in some cases a tenth, of the total costs, and in half to a third of the time required to reach operational utility of the systems desired. That's our value proposition. Cincom does this consistently, and we believe that is the significance of our comparative advantage: more value, delivered in less time, at lower cost and risk, is having an impact for Cincom in the marketplace.
WSR: Can you tell us about the background and experience of the management team on board?
NIES: One of the great strengths of Cincom is that our company has proven to be a very attractive company to our associates. We are not only able to attract good managers and top executives to Cincom, but we are able to retain them. We have an average of 12 to 15 years or so of experience leading and guiding every one of our business pursuits. Our people are committed to our business, and Cincom is committed to our people. We have probably the longest average employment tenure in the industry. We also have the highest returnee rate: one out of every 12 of our people is a returnee. Over 25% of Cincom's staff have more than 15 years with our company. We are deep in talent. We have a really experienced and zealous management team. We can, and do consistently, deliver on our promises because of the skill, quality and experience of our people.
WSR: Cincom serves thousands of clients on six continents. How do you foresee the next two to three years as a time of expansion and development for the company?
NIES: Customers want providers to take more and more responsibility. So, we are expanding our offerings. Hosting services, outsourcing services and more comprehensive facilities management types are now being provided to customers. We see this as a great growth opportunity for us because customers who are looking at the IBMs, Computer Sciences, EDS, and others are now looking for alternative suppliers who will provide the same type of quality and comprehensiveness of service, but at significantly lower prices. So, this is another market opportunity that we are pursuing with exactly the same model as we employed for our software product offerings. What Dell has done for PCs—that is to provide a very similar quality at a much lower price—is what we are doing in the area of software and outsourcing services. It is a model that we think will play well into the future; more value with lower cost. We see this not just with Dell, but we see these demands being made in almost every industry, in every part of our now globalized commercial world.
WSR: In terms of geographic expansion, what are some of your other new key markets that you think might represent an opportunity?
NIES: Asia-Pacific is absolutely on the top of the list by far. Europe and America are mature markets. They are good markets with moderate growth. And we are penetrating and developing these markets quite well. But, the growth rate in these markets is much less than the growth rate in places like China and throughout the rest of Asia. Japan is still a very good growth market, but China, India and much of Asia are now very, very substantial growth opportunities. That’s why just about everybody is going there as fast and as significantly as they can.
WSR: Perhaps you could give us just an idea of some of the key milestones that we can expect to see from the company over the next 12 to 18 months.
NIES: Throughout this entire decade, we have averaged over 80% compounded return on invested capital; 80%—that’s three to four times or five times what is typically generated among good performing companies in our industry. So, a very high return on investment and also significant increases in earnings per share is a key Cincom emphasis. We have increased earnings per share by seven-fold over the last five years and we are averaging, as I said, 80% return on capital investment. We are looking to expand the business significantly without any kind of adverse effect on these operating results. This will be no easy task for us. But, we are committed to high returns on investments for ourselves just as we look to provide our customers high returns on their investments with us.
This morning, I had a rant about the state of blog posting APIs. It's worse than I thought then :)
Have a look at the MetaWebLog "spec" page, for instance - I have no idea what data a server is expected to return, nor do I have any idea what a client should expect to see. I can code a client defensively, but a server? I've been testing on IRC again, and the tool that was hitting my server couldn't handle some of the data I was sending back. This is just maddening. I mean really - just marvel at this excuse for a spec:
In newPost and editPost, content is not a string, as it is in the Blogger API, it's a struct. The defined members of struct are the elements of <item> in RSS 2.0, providing a rich variety of item-level metadata, with well-understood applications.
The three basic elements are title, link and description. For blogging tools that don't support titles and links, the description element holds what the Blogger API refers to as "content."
For tools that don't support titles and links? I'm supposed to know which ones those are.... how? Can't we find a happy medium between the undefined crap that we have now and the over-defined insanity that is Atom?
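For what it's worth, here's my reading of that paragraph, sketched in Python (not the language of any of these tools; the blog id and credentials are made-up placeholders): content in newPost is a struct whose members mirror RSS 2.0's <item>, with title, link and description as the basics.

```python
# A sketch of what the MetaWeblog "spec" seems to intend for newPost:
# the content argument is a struct (dict) of RSS 2.0 <item> elements.
# Blog id, user, and password below are hypothetical placeholders.
import xmlrpc.client

post = {
    "title": "Test post",
    "link": "http://example.com/blog/test-post",
    "description": "The body of the post - what the Blogger API "
                   "would have called 'content'.",
}

# What actually goes over the wire for metaWeblog.newPost;
# the trailing True is the publish flag.
payload = xmlrpc.client.dumps(
    ("blogid", "user", "password", post, True),
    methodname="metaWeblog.newPost",
)
print(payload)
```

At least with the wire format in front of you, you can see the struct members a server has to expect - which is more than the spec page tells you.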
When this story applies to you, it's time to step away from the PC:
You're in the middle of a frenzied fragfest when it hits: You gotta pee--bad. Whatcha gonna do? Getting up from your computer clearly isn't an option--any 733t d00d knows the deathmatch owns the bladder.
Enter the Internet urinal, a handy-dandy portable pee device marketed specially for the PC-bound. Each contraption is made of hard plastic, comes with a "female adapter" and holds 32 ounces--a whole lotta recycled Red Bull.
"With the Internet Urinal, you'll never have to leave your computer again," touts a promo on ThinkGeek. "Imagine the freedom--destroy your opponents in that all-important 'Quake 3' clan match without taking a break; drink as many cans of BAWLS as you want and still be able to make that last important trade before the market closes."
Time to get a life :)
Eduardo Pelegri-LLopart thinks that Binary XML is inevitable:
One of the arguments against the need for a binary encoding of XML (like Fast Infoset) can be summed up as: "just wait until the technology catches up", or maybe "Moore's Law makes Binary XML unnecessary".
Although that may be true in what some people could describe as "traditional" applications of XML, there are many legitimate use cases of XML where this is not so. Many of these use cases have appeared as the market wants to take advantage of the benefits of XML in new fields. There are a number of reasons, some economical in nature, some technological, underlying these use cases.
The main argument against binary XML is that it's an oxymoron. Pretty much by definition, XML is a textual interchange format. You want binary? There are some fine choices around that already work - ever heard of CORBA? And before I hear about CORBA's supposed faults in contrast to *cough* WS-* *cough*, I'll point out that the only reason XML got traction where CORBA didn't is port 80 and HTTP.
Yeah, we need another binary RPC format... like we need a hole in the head.
I have a nagging doubt about the lovefest over in the Java tools universe. This comment from the eweek article explains why:
Meanwhile, although Borland was a founding member of Eclipse, the company never based its core Java tools framework around the Eclipse platform. Yet, sources said Borland has now set its sights on the overall application lifecycle and may be willing to offer concessions on the IDE side of things. For instance, Eclipse could become Borland's core IDE, or the company could deliver an enhanced version of JBuilder or a JBuilder replacement based on Eclipse, sources said.
Borland could be looking at Eclipse in the same way it views Microsoft Corp.'s Visual Studio .Net. "In the same way, rather than trying to compete with Visual Studio, they are going to build around it," Murphy said.
Here's the thing - that kind of unification means that Java developers will get exactly one baseline toolset. Sure, there are tons of plugins, and there will be more. The basic decisions have been made though, and that's it - no possibility of a different vision for development. Standards are nice, but they tend to freeze innovation - and I don't really think that Eclipse represents the end all, be all of software development. It's a distinct step back from what IBM had with VisualAge, for instance.
Cincom's President, Tom Nies, has been interviewed by Smart Business Magazine. Check it out here
I'm in the process of updating the development build of BottomFeeder now. The current development build (this is the full build from the dev links on the download page) had a bug in the update tool - a bug that prevents updates from properly downloading. I've fixed that, and I'll update this post when the upload is complete and in place. I've been working on getting the Blogger and MetaWebLog support in the post tool working - I'm getting closer, but I need to set up a test blog to work against. Stay tuned on that.
Update: The new dev build is up
The development builds of BottomFeeder have had a problem getting updates - the downloads have been failing. I took a look at this just now, and realized that I was being hit by a change in the HTTP client libraries in VW 7.3. The files are being downloaded without any encoding information, so they default to iso8859. The trouble is, that's not what this data is - it's binary. For reasons I'm not entirely clear on, smaller parcels download fine - it's just bigger ones that have trouble. What I had to do is tell the client not to decode the content - which keeps it in binary form. Dump that to disk, and it all works. So - to get updates (if you have one of the new, 7.3 based development builds), do the following:
- Change your upgrade url so that the number is 73, not 721
- Check for updates and download PatchFileDelivery (this should come down)
- Quit BottomFeeder, restart
- Now you should be able to download any updates you want - and load them on the fly, as per usual
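To see why the default decoding mangles the parcels, here's the failure mode sketched in Python (not BottomFeeder's actual Smalltalk code, but the same problem): if the HTTP client decodes binary content as iso8859 text, anything that later re-encodes that text in a different encoding corrupts the bytes.

```python
# Demonstrates why decoding binary content as text is dangerous.
# This is a Python sketch of the failure mode, not the VW 7.3 code.
raw = bytes(range(256))            # stand-in for a binary parcel

# What happens if the client decodes as iso8859-1 and the string
# later gets written out as utf-8: every byte >= 0x80 doubles.
text = raw.decode("iso8859-1")
mangled = text.encode("utf-8")
assert mangled != raw              # the parcel is now corrupt

# iso8859-1 happens to round-trip, but the robust fix is the one
# described above: never decode - keep the payload as raw bytes
# from the socket all the way to disk.
assert text.encode("iso8859-1") == raw
```

The fix in BottomFeeder is exactly the "never decode" branch: tell the client to leave the content in binary form and dump it straight to disk.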
I'll be doing a new build this weekend to rid the downloads of this issue. I'll also be including the new docs, which I just got from Rich Demers. I have some more work to do on the post tool, and I intend to move the comment tool over to support WYSIWYG (like the post tool). Stay tuned.
Here's a fascinating article on the history of Eclipse at IBM. I got a laugh out of this explanation from Lee Nackman:
IBM turned the idea of a tools platform over to a subsidiary, Object Technology International, in Ottawa, which used small teams to develop new tools. IBM's own Visual Age toolset was based on the Smalltalk language and "was getting increasingly brittle." The new development environment would have to remain flexible and allow dissimilar tools to plug into it and share files.
Increasingly brittle? How so? I love the way he makes a sideways swipe at Smalltalk that way without really explaining what he meant. Oh wait - I think I get it - Smalltalk didn't require an army of consultants from IBM Global Services, or the large license fees they charge for WebSphere. That's what he meant by brittle - it worked for customers, but not for IBM sales reps.
So tell us Lee - what was brittle about your Smalltalk product? Inquiring minds want to know. If it is brittle, and you want to help your customers out - there are committed Smalltalk vendors around.
Hat tip to Jason Jones, who sent me this link
When Henry Ford famously adopted a 40-hour workweek in 1926, he was bitterly criticized by members of the National Association of Manufacturers. But his experiments, which he'd been conducting for at least 12 years, showed him clearly that cutting the workday from ten hours to eight hours - and the workweek from six days to five days - increased total worker output and reduced production cost. Ford spoke glowingly of the social benefits of a shorter workweek, couched firmly in terms of how increased time for consumption was good for everyone. But the core of his argument was that reduced shift length meant more output.
I have found many studies, conducted by businesses, universities, industry associations and the military, that support the basic notion that, for most people, eight hours a day, five days per week, is the best sustainable long-term balance point between output and exhaustion. Throughout the 30s, 40s, and 50s, these studies were apparently conducted by the hundreds; and by the 1960s, the benefits of the 40-hour week were accepted almost beyond question in corporate America. In 1962, the Chamber of Commerce even published a pamphlet extolling the productivity gains of reduced hours.
But, somehow, Silicon Valley didn't get the memo.
Now, I'm somewhat hypocritical bringing this up - anyone who hangs out on the Smalltalk IRC channel knows how many hours I tend to put in. There's a difference between that and crunch mode though - in my case, I'm working of my own free will - no one is forcing me to (heck, my management chain usually has no idea what I'm up to :) ). Crunch mode, as used by a lot of development shops, is not the same thing. With all the management and process fads out there, you would think that someone would try to correlate hours worked with bug rates...
Web Services are definitely too complex when even a CxO type like Jonathan Schwartz notices on his weblog:
Web services may collapse under its own weight. No one at the conference said this. Those are my words. I'm beginning to feel that all the disparate web service specs and fragmented standards activities are way out of control. Want proof? Ask one of your IT folks to define web services. Ask two others. They won't match. We asked folks around the room - it was pretty grim. It's either got to be simplified, or radically rethought.
When you try to re-invent CORBA on port 80 over HTTP, bad things are bound to happen. Like, say, binary XML...
Chris Petrilli hits on the main reason that so many people like the movie "Sideways":
Then it hit me. The reviewers see themselves in the character of Miles Raymond (played by Paul Giamatti), or at least I suspect. A slightly pompous middle-age exterior that masks a self-destructive interior, and a passion for a semi-obscure subject to which they dedicate a disproportionate number of words, and especially bizarre “edam cheese”-like phrases. (Yes, I know I succumb to this sometimes.) Otherwise, the movie was not “great,” simply very good. Too many issues are simply wiped away at the end and “resolved” without resolution. And more than anything, an important question is never answered… why would two people, the main characters, who were roommates in college, still be friends after so many years when they are so completely different?
Seeing aspects of yourself in the Miles character definitely explains a lot. I saw Sideways with my wife yesterday - and I have to agree with Chris - it's a good movie, but not a great one. Still, it made me reflect. I have to say that I think I've made something of a "mark" with BottomFeeder, so I'm not where Miles was towards the end (but not the very end) of the movie.
I found the movie both fascinating and painful. There were points in the movie where I wanted to duck out (Miles' drunken call to his ex-wife, for instance). I stayed interested though. I agree with Chris that there's one big hurdle to overcome with "Sideways" - the friendship between Miles and Jack. To relate it to things I know, Miles was the "developer", and Jack was the "sales rep". Believe me when I tell you that those two archetypes rarely go on vacations together.
There's another thing too - I don't know that many guys that have the kinds of heart to heart talks that Jack and Miles had a few times during the movie. Women have those conversations - most men just don't.
Having said all that, I enjoyed the movie. My take away thought - in very rough terms, there are a couple of forks in the road for guys when they reach the "middle years" (roughly speaking, the late thirties and forties). Some guys have a crisis where - like Miles - they despair at not having accomplished anything "big". They get divorced and/or buy a sports car, probably have an affair. Then there are the rest of us, who manage to be happy with what we've done - and don't end up like Miles (or worse).
"My friends and I have recently been in the market for a good new boardgame or other tabletop game. We have worked through the gamut of games like Axis & Allies, Supremacy, and War! Age of Imperialism. More recently we have been playing tile based games like Carcasonne and Settlers of Catan. I am looking for some suggestions on some new games we could get into."
Well. The game my group has latched onto is Puerto Rico. It's a great game, can be played in 45 minutes to an hour - and has better "staying power" than any board game I've ever played. Heck, if you do get to know it too well, there's an expansion set of buildings available. If you like board games and haven't tried it - run, don't walk to your nearest game site and pick it up.
I've been doing some more work on BottomFeeder this afternoon. If you've got the latest (7.3 based) build from the dev downloads, then grab all the available updates (you can have them load without restarting). Have a look at the comment tool - it uses the same XHTML editor that the blog poster does. If you prefer plain text editing, you can flip it over from the "View" menu - and you can set it to either way by default - look on the UI page of the settings tool.
I see that the kerfuffle over AutoLink rages on - some people are now claiming that it's a possible legal issue. See this post, for instance:
From a legal standpoint, AutoLink looks questionable. The tool modifies publisher’s web pages by adding hypertext links without the publisher's consent. While this modification isn’t a huge change, I could still see some (many?) courts treating them as unauthorized derivative works. Honestly, it seems like a fairly routine copyright infringement. Google appears to be trying to position this as a situation where it’s merely acting as an agent for user instructions, but I’ve just recently blogged on how courts frequently slice through that argument pretty quickly.
Umm, let's see now. The marking up of the page happens on the client, not the server. Additionally, it's optional - very much so. First, you have to go out of your way to install the toolbar. Then, you have to enable AutoLink. So let's be very clear here - this almost certainly falls under fair use (at least, under any reasonable facsimile of it).
Based on the way courts act, there's no telling what they'll actually do, of course (I'm continually amazed at how the simple text of the 1st Amendment gets manhandled by US courts, for instance). In any event, I really don't understand the objections. This stuff is optional, under end user control, and it doesn't modify the server content. If this is a violation, then heck - so are highlighter pens and notes entered into a book.
I've modified the comment tool in BottomFeeder to use the XHTML editor as well. Like the blog poster, it can be toggled back to plain mode. The Wiki style markup still works for both tools, if you have those settings on. I'm finding that to be a lot less useful for most of my posting work. Things are progressing nicely towards the 3.9 release with all this. The main tasks:
- Create the release notes that list the major changes
- Run through a few issues in the blog/comment posting tools that should be fixed
- Make the blog poster internationalization-aware
- Make a new candidate build
None of that should take that long - I expect to have a 3.9 build ready for release within a few weeks. There's some very good news for BottomFeeder coming from the WithStyle guys as well. One of the issues we've had with libtidy is stability - it's a C program, and some content can send it off to hell. Word is, the WS guys are creating a Smalltalk version - which will work on all platforms (something we don't have with libtidy), and will be stable. That will be really great news!
There was an insidious little bug in the Silt blog server - updates to posts that changed categories didn't get visibly reflected in the sidebar - i.e., a category search would turn them up, but the count was always off. That was due to a caching error on my part - when updating the post, I was re-caching the old category instead of the new one. Dohh. This server is updated - if you are using Silt and want to update it, get the new code, load that, and then execute this snippet in a workspace (or in a headless server, file it in if you've left a path open for that):
| blogNames |
blogNames := Blog.BlogSaver default keys.
blogNames do: [:each | | saver |
	saver := Blog.BlogSaver named: each.
	saver cache setupSearchCategoryCache].
That will get all the counts right, and the new code will keep them that way.
I've been working with James to try and get tools other than my own posting tool working with the Silt server. This has involved looking at the truly bizarro world of Weblog APIs. There are a few big problems:
- The specs (if you can *cough* call them *cough* specs) are very, very inexact
- Different posting tools (and servers, I guess) have interpreted these specs differently
- Some posting tools send any old api call up to the server, whether it's part of the API they claim to be using or not
For example - I grabbed a demo copy of ecto in order to do some testing. Just figuring out what it expected from metaWeblog.getCategories was an adventure. You go read what Dave Winer thinks is a spec, and see if you get it right. Good luck - he makes some vague statement about it being a struct of structs; it should be an array of structs. What should be in said struct? Gosh knows, I had to dig through many websites in order to make intelligent guesses. Gads
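Since I had to reverse-engineer it anyway, here's roughly what I ended up returning, sketched in Python rather than the Silt server's Smalltalk. Every field name here is a guess pieced together from tool behavior, not from any spec - treat them all as assumptions.

```python
# A guess at the shape of a metaWeblog.getCategories response:
# an *array* of structs, one per category. The categoryId,
# categoryName, htmlUrl, and rssUrl member names are assumptions
# based on what posting tools seem to expect; URLs are placeholders.
import xmlrpc.client

categories = [
    {"categoryId": "1", "categoryName": "Smalltalk",
     "htmlUrl": "http://example.com/blog/smalltalk",
     "rssUrl": "http://example.com/blog/smalltalk/rss.xml"},
    {"categoryId": "2", "categoryName": "BottomFeeder",
     "htmlUrl": "http://example.com/blog/bf",
     "rssUrl": "http://example.com/blog/bf/rss.xml"},
]

# Serialized as a methodResponse, this is an <array> of <struct>s -
# not the "struct of structs" the prose description suggests.
response = xmlrpc.client.dumps((categories,), methodresponse=True)
print(response)
```

Once the server returned that shape, ecto stopped choking on the category list.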
I got through that mess, finally. Next, I was getting fault codes in ecto, even though posts were going to the server. A quick debugger session solved that - for completely inexplicable reasons, ecto sends mt.setPostCategories after a newPost. What the heck is up with that? That's a Movable Type call, and it's not part of the MetaWebLog API. I implemented it anyway. Fine, that works. I still get some bizarre client message from ecto, a Dialog with the message: "Attempt to serialize data with a null reference". Not a clue what that is.
I know what insanity is now. It's the various extant WebLog APIs. Atom, you say? Sure, like those guys will ever get anywhere. They got bogged down in the non-problem of syndication formats, and are currently arguing over how many angels can dance on the head of a pin. Sigh...