Blaine notes that progress is happening in other corners of the Smalltalk world - he's got good things to say about the Dolphin 6 release.
There was a point, just before VCRs were introduced, when home media was simple. Far less feature-filled than today, but simple. What I mean by that is that you would get your components, a bunch of RCA audio cables (red and white), and just plug everything in. It was pretty easy to do, and the same cables worked everywhere.
Then cable TV and the VCR arrived, and things got a little more complicated. Suddenly you had to split the cable signal and feed it through the VCR. Still not too bad, but a little harder, especially if you decided to hook your speakers through it.
Now, we've arrived at the huge ball of complexity that Russell Beattie sketched out on his site. Click over there and look at the picture - it's more than most people have in their homes, but not a lot more. In our family room, we have a ReplayTV, a Comcast cable box/DVR, and an older digital cable box just for the Replay (since it can't handle HD). There are 5 other A/V inputs in use on that TV, and most of it flows through the receiver.
The array of cabling needed to hook it all up is no longer the simple set of RCA cables - now there's component video, S-Video, coax, ethernet, and DVI. Probably other things I'm not remembering right now. It's enough to make your head swim; we have photographs of the setup taped to the back of the TV, so we can get it right without having to hand trace the cables.
Now, there's a new kicker to slap into the mix: DRM. That's the focus of James Governor's addition to this topic. It's bad enough that the equipment is hard to set up optimally - it gets worse when you get to play a delightful game of "which piece of equipment will let me play this legally owned content I have here?". James has some thoughts on the subject, and they seem reasonable to me.
Wow, what a shocker - one of the trolls whose ethics were low enough to let him work for the RIAA doesn't like Google book search either. Next, you'll tell me that night follows day. Here's his argument:
In Google's case, they are clearly scanning in the original copyrighted pages of these books, no doubt using fairly high resolution digital photography, and then it seems they are using some sort of optical character recognition to transform the printed text on the page into a digital text on the computer. On Google's computer. I have no idea if Google is OCRing and then running an indexer on the resulting body of text and then tossing the body of text and keeping the index. I bet they're not. I bet they're keeping the whole enchilada. I mean, look at what Google's doing. Or must be doing: how would Google know where the words "Pioneer Life" appear in a bitmap image of the scanned book, unless they had scanned the book's entirety, kept all the bitmap images, and recorded where the printed words in the image match up to the text words in the digitized OCR'd body of text? That's got to be some pretty fancy technology to do that.
I love the way he says "fancy technology", as if that's enough to indict them right there. His point is that this technology doesn't make a lower quality copy, it makes a good one - and that the quality of the reproduction violates fair use. Never mind that they aren't:
- Listing entire books online
- Selling access to books online
What they are doing is improving search, making it more likely that I'll find a reference to a work that I didn't know existed - and that I could then go and *gasp* buy that book.
Which is where I think the real issue is for publishers - it's the same one that music publishers have with iTunes. They are being disintermediated, and they don't like it. They want to tell me which works exist on a given topic - under no circumstances do they want me finding that information out myself. Heck, if I do, I might find an older work, or a self published work - or anything that isn't going for the high margin they want to push. This has nothing to do with protecting the rights of authors, any more than the RIAA is out to protect the rights of artists. Quite the contrary - it's about protecting the rights of the established middlemen who enjoy skimming their percentage of the price.
Throughout history, new technology has supplanted older technology, and the incumbents have never been happy about it. The publishers and the RIAA are like the Luddites of old, but with bigger lawyers.
I don't have anything against the so-called "MSM" (Mainstream Media) - I just don't think that they should be accorded any more respect than anyone else. Here's another example of why I say that: a Honolulu-based reporter has allegedly been plagiarizing from Wikipedia.
Now, plenty of authors have been discovered plagiarizing - as well as students and politicians. The sole point I'm trying to make here is that the press is not some higher level collection of experts. Like the rest of us, they have biases, foibles, and blindspots. It would be nice if their editors and fact checkers cleaned up more of that, but they suffer all the same problems.
StS 2006 is getting closer - I just received speaker instructions today. Looks like I have to get my slides ready well ahead of time, so that they can have handouts ready. Check back with the Linux World/Network World site for registration information.
We've been trying a new game - Caylus. It's an interesting boardgame, where you have to build up a town and castle, gathering the most victory points while you do it. It's a cooperative and competitive game - you compete by building up things that other people will want/need to use.
We've tried three times now - the first game, we were just learning, and it got too late. The second time, the wives were busy paying attention to a friend's baby (and with a boardgame set up - heresy!). Last night, we played a game of Puerto Rico first, and it just got too late.
It's a cool game - I just want to see a game go all the way to the end sometime :)
Ajax is the most overhyped thing at the moment. My predictions for it are, therefore, dire. I will bet it will go the way of XML - simple and interesting at first, then the “Enterprise” folk run away with it and within 2 years we have W3C AJAX standards that span 1000 pages. Wanna bet?
I'm not taking that bet. Recall that SOAP started out as a way to extend XML-RPC...
Cincom Smalltalk Announces 88% Revenue Growth
Simplification Through Innovation: Key to Growth
CINCINNATI, Ohio -- January 16, 2006 -- The Cincom Smalltalk division of Cincom Systems, Inc. announced today an 88% revenue increase from 2001 to 2005.
Businesses dealing with highly complex systems and high rates of change have increasingly discovered that Cincom Smalltalk-developed applications allow them to simplify and innovate(PDF), giving them a competitive edge that can be key to survival.
50% Fewer Developers – ROI Twice as Fast
Cincom Smalltalk delivers applications and significant ROI twice as fast with half the number of developers.
“Advanced technology, excellent support, a superior product bundled with a knowledgeable sales team, and experienced and passionate partners continue to fuel Cincom Smalltalk revenue advancements,” said Suzanne Fortman, Marketing Manager, Cincom Smalltalk.
About Cincom Smalltalk
Cincom Smalltalk enables software developers to build applications quickly and efficiently, including scalable browser-based and client-server systems. Cincom Smalltalk delivers significant productivity over Java, C#, C++, or Visual Basic, allowing developers to bring their products to market significantly faster. For more information, please visit the Cincom Smalltalk Website.
For nearly 40 years, Cincom's software and services have helped thousands of clients worldwide simplify the management of complex business processes. Cincom specializes in the five areas of business where simplification brings the greatest value to managers who want to grow revenue, control costs, minimize risk, and achieve rapid ROI better than their competitors.
Cincom serves clients on six continents including BMW, Citibank, Boeing, Northwestern Mutual, Federal Express, Ericsson, Penn State University, Milacron, Siemens, Rockwell Automation, and Trane.
Contacts:
Suzanne Fortman, Marketing Manager, Cincom Systems, Inc.
James Robertson, Smalltalk Product Manager, Cincom Systems, Inc.
Ok, this is amusing. First, I ran across PR Opinions trying to explain this post by Steve Rubel, where Rubel says, more or less: "I can't possibly read all the email I get asking for links. Link to something I read, and I'll see it". PR Opinions thought that smelled of attitude, and these guys are positively put out by the whole thing.
Hmm. I've been doing this since 2002 - so I'm no "johnny come lately" to the party, but I'm also not one of the real early adopters. I've never sent anyone an email asking them to link to something I've written. Oh sure, I've written plenty of posts where I purposely linked to someone in hopes that an "ego search" would turn my post up and get a link, but that's not really the same thing.
Now, I don't have nearly the traffic of some of the "A-Listers" - mine seems to vary between about 8000 and 15,000 pageviews a day (meaning, total grabs of the HTML page). It's actually pretty hard to gauge real readership now - I can see the HTML hits, and the requests for the RSS feed, sure. Then there's Artima, where my posts are mirrored (I registered for that back in 2002). Then there's BlogLines, NewsGator Online (etc, etc - there are tons of online services). All that said, I'm sure that I get fewer email requests for links than Rubel or Scoble do (I wouldn't call what I do get overwhelming by any stretch).
No, I built my traffic up the hard way - I just keep at it, hoping that my rants will be of interest to someone :) Now, I do get a lot of email in general, and I subscribe to tons of content - so I understand the general "overwhelmed" complaint. What I don't really understand is why other people can't see that. If you're a well known PR guy, and it's known that lots of people read you - then it seems clear to me that you would get pitched a lot. At some point, that's going to be overwhelming - call in radio shows have screeners for a reason.
(via Dave Winer) Apple's newly released iPhoto 6 does an admirable job of making it easy for its users to publish feeds of their photos from their desktop. When Steve Jobs announced the product, he cited its use of "industry standard" RSS technology to make this possible.
This all sounds great until you try to visit a photocasting feed in a browser other than Safari, or subscribe using a feed reader other than NetNewsWire. The site detects that you’re not using a photocast-capable client and displays an HTML page asking you to use Safari instead.
That's funny - I added that feed to BottomFeeder without any error messages. I added support for the Photo-Casting module a couple weeks ago as an update to the 4.1 release. Seems that Apple isn't turning away everyone else.
Update: What a shocker. Dave Winer gets it wrong too.
Update2: TechMemeorandum spreads the bad information around
I see that Bell South wants to start charging content providers on a per service basis:
Bill Smith, chief technology officer at BellSouth, justified content charging companies by saying they are using the telco's network without paying for it.
"Higher usage for broadband services drives more costs that we have to recover," he said in a telephone interview.
Interestingly enough, the first company he mentioned was Apple, and their iTunes store. Wow, it's almost as if music companies called Smith up and asked him to carry water for them. And he thinks we should like the idea:
"It's the shipping business of the digital age," Smith said, arguing that consumers should welcome the pay-for-delivery concept.
Yeah, I just love the idea. I really want every single ISP in the chain marking my content up by a nickel as it passes by - I don't pay my ISP enough, so I should be happy to have my monthly bill jacked up in anticipation of how much content I'll be incrementally responsible for. I'm sure they won't guess on the high side.
It's one thing if ISP's want to offer better QOS for higher fees, or even for one-off services - that wouldn't be any different than pay-per-view, and I expect most people would be fine with that. Whether they can actually deliver that to the home is another matter entirely. What won't be fine is what I think they'll actually do - try to ding a micro-payment from every packet that flies by, and push monthly service charges up based on perceived future usage.
The funny thing is, I keep seeing people claim that the net is going to collapse tomorrow if something isn't done - Mark Cuban is the latest Chicken Little to see the packets falling. Odd how time goes by, and the internet keeps working, isn't it?
I'm still planning to push a screencast out on the Change tools in Cincom Smalltalk, but I've got a sick kid and only one car, so I've got a few things to get to first. Stay tuned.
I love this guy. His notions of software development are fascinating. He's pondering cross platform software today:
In the example above, we could compile to bytecode for example, but then we need a VM (and I don't want to get into 40 year old technology) and more importantly, it doesn't take advantage of vector processing if it is available.
Can you say "premature optimization"? Apparently, he can't. So we follow his train of thought and end up down here:
Anyone see a way out of this? If not, I'll just use some form of intermediate code.
Meaning, a VM or interpreter. I wonder if he'll come up with a really cool new name to keep that fact from himself...
We are still looking into the problem - right now, VisualWorks does not always work on Ubuntu and Gentoo Linux distributions - and we think that it may be a configuration issue related to those particular distributions. When I get more details from our engineers, I'll pass them on.
In comments last week, there appeared to be some interest in how to recover lost code in Cincom Smalltalk - for instance, quitting an image without saving, forgetting to commit changes to the source code repository, a power outage, that kind of thing. So here it is - a quick walkthrough of recovering code with the change tool.
Up early again, this time I'm heading to Dallas to visit a customer. Staying up to play Civ 4 was not a good plan :/
I posted on live updating the other day, and had a commenter tell me that you could get to much the same thing in Java. Specifically, he said:
Most shops are using Tomcat for J2EE development, which has had code reloading for quite some time now (the other commercial ones like Orion have been doing this too). When I write my code I save, then reload the browser. Same as you. Please educate yourself before spreading FUD.
The problem is mostly one of understanding fully what I meant. I meant live updating of both a development time server (i.e., a test system) and live updating of a runtime server - with no downtime.
Here's what I do. I fire up the development environment for my server on my Linux box. Which, by the way, is an ancient PII 400 MHz system. Try running any set of Java tools on that - Smalltalk, which people inaccurately call "bloated", runs just fine there. Anyway - I make my changes on that box, and test them - breakpoints in the server code, hit a service from the browser. Write code in the debugger and browsing tools, keep testing... until I get the change I need done. At that point, I take the following steps:
1. Selecting the package (or packages) that I made changes to, I use the code browser option to file-out changes.
2. Version the changes into my Store repository.
3. Save the new version(s) of the changed package(s) to disk as one or more parcels.
4. Upload the file from step (1), and the file(s) from step (3), to the server.
5. Drop the change file(s) into a patch directory, and the parcel(s) into the application startup directory.
6. Using a web interface, instruct the server to load the change file(s).
I upload the new parcels so that the next time I start the application server, the correct version of the code will load in. I start the system from a clean image that loads the entire application server codebase and configures itself. However, I don't need to restart the running server for updates - in step (6), I loaded the changes into it as it was running. Those changes might well include shape changes to classes, including classes for which there are serialized objects on disk (posts are saved as serialized objects in this server).
This gets to the real power of a Smalltalk system - the image updates every instance of the changed classes in memory. So if I have a class Post, and that class had 4 instance variables - and I just added a fifth - every single instantiated instance now has that fifth instance variable. That fact explains why I normally code my accessing methods as follows:
authorName
	^authorName isNil
		ifTrue: [authorName := '']
		ifFalse: [authorName]
That code makes sure that the new instance variable, when accessed, has a reasonable default. For more complex cases, I'll write a script (which I load the same way I loaded the patch) which makes sure that all the new instance variables have the right sort of values.
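A minimal sketch of what such a migration script might look like (hedged: `Post`, `tagList`, and the accessors here are hypothetical names for illustration, not the blog server's actual code):

```smalltalk
"Hypothetical migration script, loaded the same way as a patch file.
After a shape change adds a tagList instance variable to Post, walk
every live instance and give the new variable a sensible default."
Post allInstances do: [:each |
	each tagList isNil
		ifTrue: [each tagList: OrderedCollection new]]
```

Because the image has already reshaped every instance in memory, a script like this only has to fill in defaults - there's no serialization round-trip and no server restart.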
Notice what didn't happen here - downtime. The server was running while I did all that, quite possibly answering requests. If I needed to have it not answer requests during the update, that would be easily accomplished as well - I just add more code to my patch script.
This makes Smalltalk a perfect choice for systems that need to run all the time - but also need periodic updates. The server running this blog almost never goes down - I've been updating it using the procedure I just outlined since 2002. Some of the changes have been fairly major, including changes to the way posts are saved, and changes to the HTML output from domain objects. I've also made live changes in various other things, like the way that referers are found and saved. The bottom line is - I'm able to upgrade the server in place, with no downtime. Not just the test server, during development - the production server, as it's running.
I spent the day in Dallas, visiting a division of Northrop Grumman - their IT solutions group. These guys have a very cool Cincom Smalltalk (VisualWorks) manufacturing support application they call MES (Manufacturing Execution System).
It does a ton of stuff to manage the quality and execution of jobs in what they call "slow manufacturing". In this case, that means aerospace and ship-building (as opposed to assembly line car production, where things move very quickly). Cars roll off the line fast - a modern naval ship can take years to build. Airplanes, helicopters (etc.) take less time, but are still much bigger (and slower) jobs.
They started building this application a long while back, in VW 3.0. They are just now getting it migrated to the latest (7.4) release - and they are very pleased with it. They gave me a tour of the application, and peppered me with questions about our roadmap - I didn't have time for the plant tour (our marketing person, Suzanne, got one - I'm jealous!). It would have been very cool to see BlackHawks getting put together, with all the shop floor PC's running their Cincom Smalltalk application.
Funny thing is, they told me there's quite a market for this kind of thing - an awful lot of shops are still heavily paper and spreadsheet bound (shades of what other customers have told me about financial services applications). They are also investigating some nifty handhelds - Rudy Morales, the project manager, showed me one of the military grade ones - by throwing it onto a table! Very cool device, that one.
Fun day, even if it started off way too early. Next time, I'm making time for the plant tour :)
Imagine a world where you could use Java if you wanted to, but it would compile to native code and be usable with many other languages. Where you could save your Java source and later load it as C++ or even Perl code. Where specialisation and use of specific hardware features would be possible such as writing codecs and using vector operations, but in a portable way. Where you would control the dynamic aspects of your software rather than the other way around.
Pay no attention to the semantic differences behind the curtain.
The Register reports that the Revolution will hit the 2006 holiday season:
"As for North America, we need to release it by Thanksgiving, or otherwise we won't receive support from the retail industry. So the Revolution will be released prior to that period."
More importantly, they recognize that there's an important price barrier:
Iwata wouldn't be drawn on the console's price, but he did claim it will be "affordable".
"The amount of money that people are willing to spend on video games is getting less every year," he said. "Even if it's a superb machine, it's not going to sell if it's ¥50,000 (£246/$433). We plan to make [Revolution] an affordable price."
I think he's right about the price - Nintendo has managed to hit the impulse buying range with the GameCube - under $100. The Revolution will be higher than that, but it sounds like they know that there's an upper end to the impulse range.
TV Week reports that iTunes downloads are boosting ratings:
"TV Week reports on NBC's claims that iTunes downloads are boosting ratings for their primetime shows. Citing one example: 'NBC's "The Office" delivered a 5.1 - its highest ratings ever - last Thursday among adults 18 to 49, a bump the network credits in large part to the show's popularity as an iPod download.'"
What, you mean pirates aren't running off with all the eyeballs? Quick, someone send a few dozen cluesticks to the RIAA and the MPAA!
Eitan Suez sees the productivity behind tools that skip the "rinse/repeat" steps:
How many times have you heard or read of a selling point for a software framework or product or solution being "and you can make changes to the system without having to recompile the code." I must admit I'm sure I used that line more than once before (in my early days programming :-)).
So here's what I find interesting: instead of coming up with all kinds of schemes to get around the problem, why don't we just deal with the root cause? I think it's staring at us in the face: why don't we simply program in an interpreted environment?
We've had that over here for a long time Eitan - come on in, the water's fine. The only thing you have to lose is your lower level of productivity :)
There's a long post (and a lot of good comments) over on Sam Ruby's blog about the various problems with Apple's new PhotoCast RSS Module. I find that I'm having a very hard time getting worked up about this. Sure, Apple could have done things better on this module. Tim Bray and Mark Pilgrim have been two of the more vocal on this; witness Mark's comments:
To sum up, the “photocasting” feature centers around a single undocumented extension element in a namespace that doesn’t need to be declared. iPhoto 6 doesn’t understand the first thing about HTTP, the first thing about XML, or the first thing about RSS. It ignores features of HTTP that Netscape 4 supported in 1996, and mis-implements features of XML that Microsoft got right in 1997. It ignores 95% of RSS and Atom and gets most of the remaining 5% wrong.
I don't know. I understand the desire for cleaner XML; heck, I reversed my longstanding ambivalence toward Atom awhile back after Dave Winer announced that he was pushing work on the abomination that is OPML. It's hard to come up with something less well specified than MetaWebLog API, but Winer's just the guy to manage that feat.
Having said all that, at the end of the day, end users of aggregators don't really give a damn about how XML, or RSS, or OPML (et al.) ought to be. They see nifty features (or hear about them), and want results. Which means that developers have to just suck it up and slog through the crap. It's not as if we always get things right either; I joined a long list of aggregator developers in not dealing properly with namespaces. Incidentally, I just uploaded a fix for that issue - if you care (in the wild, the kind of bug that post raised is unlikely to come up), you can grab the update.
I've also been following the OPML reading list threads, and - as much as I dislike OPML - the idea has merit. BottomFeeder already reads OPML and OCS, either for importing lists of feeds, or for addition to what's become an afterthought - feedlist support. As it happens, it won't take much work to change the current feedlist support into reading list support. At present, feedlist feeds only update when you select them, and the feedlist itself is never re-read. Changing that will be pretty easy, and will be something that hits the next release.
So at the end of the day, I just can't get that worked up over the various things that disturb the XML geeks a lot (and I don't use that phrase disrespectfully; I'm a Smalltalk geek, and care deeply about a number of things that many developers can't get worked up about). In my dealings with XML, I'm a developer, and I mostly have to make sure that my code works for my users.
Our engineering group recently got a message that Microsoft is sending out about SQL Server support for the (now ancient) DB-Lib libraries:
"Although the SQL Server 2005 Database Engine still supports connections from existing applications using the DB-Library and Embedded SQL APIs, it does not include the files or documentation needed to do programming work on applications that use these APIs. A future version of the SQL Server Database Engine will drop support for connections from DB-Library or Embedded SQL applications. Do not use DB-Library or Embedded SQL to develop new applications. Remove any dependencies on either DB-Library or Embedded SQL when modifying existing applications. Instead of these APIs, use the SQLClient namespace or an API such as OLE DB or ODBC. SQL Server 2005 does not include the DB-Library DLL required to run these applications. To run DB-Library or Embedded SQL applications you must have available the DB-Library DLL from SQL Server version 6.5, SQL Server 7.0, or SQL Server 2000."
What does this mean for ObjectStudio developers using SQL Server? You would be better off using the ODBC DB wrapper that comes with ObjectStudio than with the SQL Server wrapper (which, at present, still uses DB-Lib).
It's been possible to make use of standard out on Linux/Unix for a long time with VW, but it's always taken a bit of work. Over the last two releases, a few things have been done with command line argument processing and sub-system support that make things easier - even on Windows. For instance - try this using a Windows command shell, in the VW 7.4 image directory:
..\bin\win\vwntconsole.exe visual.im -nogui -doit "3 zork"
As you might expect, the #zork message is going to raise an MNU - notice how the stack is reported to standard out? The -nogui argument is also useful (-headless does the same thing), and it's been in the system for a couple of releases now. You can start an image with the GUI inactivated without having to do anything special.
A group of companies is attempting to extend SOA specifications to include a language-neutral object model, over objections that the effort is redundant to work already being done in the Java Community Process.
It's so heartening to see the OMG rebuilt from the ground up, but on port 80 with text...
Over the last couple of weeks, I've started seeing something new - a bunch of the mailing lists I'm on are getting spam. I guess the barrier to entry to most list managers is low enough to be scripted. Sigh
I'll have to add this to my set of release to-dos - the information pages here are out of date, and I know that this misled a few people into thinking that the winter release had slipped. In fact, it shipped in late December, and engineering is about to get the build cycle for the next release (summer 2006) under way.
I've put in a request to get the relevant pages on the server updated; sorry for the confusion.
Professor Peabody has been busy - I note that the RIAA thinks it's 1970, and taping music from the radio will kill - kill, I tell you - the business:
As satellite and digital radio offers a very wide range of music to choose from, the music industry is eager to control what consumers can do with radio broadcasts. According to the music industry, if consumers are given the opportunity to record music from digital radio, they will likely start recording songs instead of purchasing them. Last year, the RIAA began proposing a broadcast flag type system and now they are in negotiations with satellite radio companies, such as XM about controlling what their listeners can do.
Yeah, all those mix tapes I made back in college prevented me from buying the truckload of CD's sitting in front of me. Want a sure way to kill XM? Make it so that I can't do anything with it.
The "atom:updated" element is a Date construct indicating the most recent instant in time when an entry or feed was modified in a way the publisher considers significant. Therefore, not all modifications necessarily result in a changed atom:updated value.
There are two problems with this theory. One, as Dare notes, is that there are no mainstream tools in existence that allow a poster to mark an item as updated (such that the Atom flag would get ticked). Second, "updated" is fuzzy. Fixing a typo happens a lot - do users really want to be flagged every time that happens? I've learned from feedback that the answer is no. On the other hand, an actual update with actual new information would be nice to flag. Sadly, as Dare notes, it's not easy to do that with existing tools.
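Here's a sketch of the compromise an aggregator might strike (hedged: the method and accessors below are hypothetical illustrations, not actual BottomFeeder code):

```smalltalk
"Hypothetical aggregator logic: only flag an entry as updated when the
publisher bumped atom:updated AND the content itself changed - so a
silent typo fix, or a feed that bumps the timestamp on every rebuild
without changing anything, won't nag the reader."
shouldFlag: newEntry against: oldEntry
	^newEntry updated > oldEntry updated
		and: [newEntry contentHash ~= oldEntry contentHash]
```

Even a heuristic like this depends on publishers setting atom:updated sensibly in the first place - which brings us right back to the tooling problem.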
Theory and practice often collide, and that messy collision area is where developers live.
Dave Winer is hardly the only one who seems to think that services should be free:
I talked with Matt and Scott Beale about how happy I am that Flickr is still free, and I was very pleased with how generous Yahoo was to let us use it without ads. They looked at me strangely and asked if I had a Pro account, and I said I did, it was a gift from Stewart. Then they told me that's why I didn't see any ads. Ahhhh. I guess when I point to something on Flickr y'all see ads? Uh huh. Oh. Kay. Now I'm slightly less pleased than before.
Well, if they aren't going to show ads, they'll have to fund the site somehow. What would you suggest? Subscription fees by members, such that the members don't see ads? Wait.... that's what they do now! So what does he want? Apparently, he wants them to host pictures freely for all comers, and just make money magically somehow, in a way that he never has to see.
This reminds me of the argument that tools like Cincom Smalltalk should be free, and that we could make plenty of money simply by charging for support. Umm, sure. That works if the free tools are loss leaders (like Eclipse, say) - it works less well in cases where you are actually trying to make money - JBoss comes to mind.
Under the old model - a developer license for $X, and then $Y for maintenance and support - ParcPlace (and its descendants) went under. Sure, there were plenty of other missteps along the way - but that model simply doesn't work unless you can guarantee a large (and always growing) number of new seats sold each year. The reality is, optional support licenses don't sell that well - prospects tend to see them as a place to save money, and then only buy them when they really need them (consider: how often do you spring for the extended warranty on things you buy personally?). If you give the tools away up front, you get even less money.
Ultimately, you have to pay the developers. Show me a model that works for that - either for tools, or for websites. In Flickr's case, I fail to see how ads are that irksome, when the end user (i.e., the one incrementally adding load to the site) gets to use the site without charge.
Here's a perfect example of how comment systems don't "scale" (in the social sense) - any site that covers controversial topics (especially political ones) will end up getting overwhelmed by commenters picking fights. Here's what the Washington Post had to do:
At its inception, the purpose of this blog was to open a dialogue about this site, the events of the day, the journalism of The Washington Post Company and other related issues.
But there are things that we said we would not allow, including personal attacks, the use of profanity and hate speech. Because a significant number of folks who have posted in this blog have refused to follow any of those relatively simple rules, we've decided not to allow comments for the time being. It's a shame that it's come to this. Transparency and reasoned debate are crucial parts of the Web culture, and it's a disappointment to us that we have not been able to maintain a civil conversation, especially about issues that people feel strongly (and differently) about.
This happens on any highly trafficked system - popular usenet groups became useless piles of dreck years ago, and blogs and forum sites are merely following that trend. Look at slashdot or digg, for instance - for every useful comment in a thread, there are dozens (sometimes hundreds) of comments that - boiled down - say "you're a moron" (only in less polite terms).
Any site that gets popular is going to end up going where a lot of the popular political blogs went a long while back - comments off. The only trouble with that is that tracking referers is hard, due to the absolutely enormous volume of referral spam.
The main Cincom Smalltalk site has been looking a little worn around the edges lately - and it's running on an old box. So, it looks like I've got the ability to move the service and upgrade it while I'm at it - which is what I've been up to all day. The upshot is, when I get done with this (and get it deployed), the site will have an RSS feed - which will bring it up to date.
If the MSM was left to their own accord there would never be feeds. There would be forced registration, robots.txt which blocks everything, horrible invalid HTML, and content without any links. It's their version of DRM...
It's all about disintermediation. Before talk radio, all you got was the bland opinions of the "my word" (with a required countervailing piece) on the 11 pm news. They didn't like seeing talk radio explode the old set piece environment. Then the cable explosion happened, and instead of 3-10 channels, there were 30. Then 80. Finally, 500. Instead of limited bandwidth - which made some level of regulation easy to push - we now have more channels than we know what to do with. Where there's room for the "Golf Channel" and the "Food Network", there's certainly room for non-bland opinions. They really didn't like that.
Then, to cap things, the internet stumbled by. That's ended up being a live hand grenade inside the media monopoly. Even with lots of channels, you still need real money to broadcast. With the internet, all you need is a broadband connection and a free (or very cheap) hosting solution. Suddenly, everyone's opinion is available free of charge, with no means to regulate it.
They really, really don't like that. It means that the MSM has been disintermediated. We no longer have to visit the font of all wisdom at (insert major network here) to find out what to think - we can hook our browser or aggregator up to a nearly infinite number of news sources and do our own thinking. Our opinions can't be reliably handed to us anymore; the talking heads have to actually work up an argument.
The music and TV/movie businesses are going through the same thing. Sure, they whine about piracy, and they do worry about that. I don't think it's their biggest worry though. Consider the iTunes music store, for instance. Like Amazon before it, it's pushed the content producers aside as the biggest source of "what's cool" information. A very simple feature - the "people who bought X have also bought Y" thing - has the ability to sweep marketing campaigns aside. Instead of the content producers relying on their ownership of a pliant set of artists under nasty contracts (along with a few mega-stars who've moved past the control, and up to their own positions of influence), we have a system of user ratings.
That's a massive piece of disintermediation, and the powers that be really, really hate it. It's what Joel was on about in his piece on variable pricing in music and movies. None of this is really new. As technology moves forward, it empowers some existing players at the expense of others. The rise of air travel did in passenger rail; there's still a large body of rail regulation in place here in the US which is a vestige of the preceding power position. I think Kevin is spot on with this observation about how it's going to play out, too:
We're seeing now with MSM what's being mirrored in the Entertainment industry. They're being dragged onto the Internet kicking and screaming and they don't like it. Things are going to have to get worse before they get better.
It's like watching an exhausted toddler who's just been told it's bedtime.
It's time for my weekly look at the log files; this week, BottomFeeder downloads held steady at 344 per day - here's the detail:
Who knew there were that many HP users out there? On to the HTML page accesses:
|Tool||Percentage of Accesses|
I swear, someone is just using a yo-yo to decide which browser to use week by week :) Anyway, off to the RSS results:
|Tool||Percentage of Accesses|
|Net News Wire||9.2%|
Tool Distribution looks about the same as usual. Back again next week.
Everyone understands subscribing. You’ve got your email newsletter subscriptions, your premium cable channel subscriptions, your magazine subscriptions (call now and subscribe to 52 weeks of…remember that?)
No one knows what “syndication” means, unless you’re talking about I Love Lucy reruns. Syndication is a publisher-centric, geek-centric term. For most people, it’s Really Simple Huh? Most people don’t even know that syndicate can be used as a verb!
Then Paul adds a few comments that likely apply widely:
Too little consistency. There is no uniformity about titles, titles plus summaries, or full-text feeds. I won't re-hash the debate on this subject, but let me just say if your feed isn't full-text it won't likely last long in my aggregator.
Too many posts. To be blunt: Faced with feeds regularly containing more than six or seven unread articles I, with rare friend-driven exceptions, usually nuke the whole list.
These points are a good takedown for those of us who forget how little most people care about the various and sundry things we obsess over...
I just finished the History of Spices book I'd been reading - it was pretty good. Not a linear history; it jumped around quite a bit. For anyone interested in the Western obsession with spices, it's a great read. So now, I'm involved in two more historical tomes:
I've been meaning to dig into the Crusades for quite awhile; I've only just started that book. As for Grant's Memoirs, they're something I've been intending to get to for years, and my wife bought them as a gift for me. I've read a lot about the US Civil War, and it's an absolutely fascinating period. As with any crisis, the people who rose to prominence seem larger than life - and Grant was one of them.
There's also a huge backlog of fiction reading stacked up, but I'm just not in the mood for that right now; my reading preferences seem to run like moods.
I took a look at the logs, seeing as how the HP downloads have looked fishy for awhile now. It turns out that there's a reason for that: the search string I was using to count HP downloads was too loose, and it was over-reporting. Adjusting for that, HP downloads are actually in the same neighborhood as Solaris downloads (which makes sense).
So, that means that this last week, the daily download rate was around 216, not 344. Kind of a largish error on my part. Anyhow, that means that the reported rates over the last few months need to be adjusted down the same way. I should have looked at those weird results sooner :)
Registration for StS 2006 at LWNW is now live. Get thee there and register now!
Update: Note that the registration fee is in CDN (Canadian) dollars.
Now we get to the real culprit. Method invocation. Allowing methods to be invoked at any time on any object is our generation's goto. This is also related to coupling, but even if you use interfaces, the problem is the same. There are no boundaries. Any object anywhere can basically create any object it wishes and start calling methods on it. I don't know about you, but most code I've seen consists of initialisation, method calls and control statements. Tracking all these method calls is what maintainers have to find out about. What do they do? Where are they? Is the source available? Can I change it if it fails? This has to be done everywhere. This is also where I see a lot of code duplication. One person does it one way. Then someone else comes along and finds an easier way to do things, but missed a few places where this same thing or something close to it is being done. Now your maintenance problem just went up exponentially. Maintainers usually know to try a compile by undefining some of the old methods, but this gets more difficult if there are other parts of your software that still need those calls or if your "easier way" uses those calls itself.
I'll agree with his assertion (not quoted above) that inheritance is overrated and over-used. However, the stuff above? How exactly do objects make this problem worse? The "initialize before use" issue does not disappear further down the food chain - I'd argue that it's easier to walk out into the weeds in C or assembly than it is in Smalltalk, for instance. I have no idea where he's going with this stuff, but so far, it's not making any sense at all.
Scoble tries to explain (via this blog) why Vista instead of XP. He lists a bunch of mostly - to be brutal - dull reasons. Kernel changes, new fonts, better UI.... yada yada yada. There's really nothing compelling there. Yes, my objection to PVP-OPM is not going to sweep the world anytime soon, but there's certainly nothing in that list that's going to convince me to ignore that and move ahead, either.
In day to day terms, working with Windows, what will Vista do for me that XP won't? As part of that explanation, I'd really like to hear "won't prevent you from watching content you own legally on your existing hardware". Oh, and this little escapade with Verizon didn't fill me with confidence either, btw...
Stop paying attention to Dave for even a nano-second, and see what happens.
Update: As noted by a commenter, it's all about Dave until it's inconvenient. At which point, he deletes the text down the memory hole. Via the wonders of aggregator caching, here's the text he doesn't want us to see anymore:
Now, let's talk about Microsoft. They're having their third Search Champs meeting this week in Seattle. At the first one, I urged them to support RSS. They did. As a result, their Live interface can plug components together in a very beautiful way. This is something I'm proud of, because it took quite a bit of struggling to get them to do it, and they didn't see the beauty of it until well after they did the work. The struggle, apparently left them angry with me, and they haven't invited me back. This time they've included many of the members of the Web 2.0 Workgroup, which I am a founding member of, but I'm not welcome, apparently. I've sent an email to Robert Scoble, Amar Gandhi, Dean Hachamovitch and Ray Ozzie explaining my embarassment, and also advising them that I will not be available for free consulting in the future. They can pay for it, and get in line. My time is in demand, and I have no time for companies that go out of their way to embarass me. I feel I must disclose this here, because the warm feeling I used to feel for Microsoft in re RSS has turned bitter. This is why.
Looks like Disney wasn't nuts when they killed the deal with Pixar - they just agreed to buy Pixar, getting themselves a complete CGI animation setup without the pain of building it from scratch. Interesting side effect: Jobs is now the single largest shareholder in Disney.
Best I can tell, Vorlath wants Self - but since that's 20-year-old technology, he's going to go ahead and reinvent it, with garbage collection that isn't garbage collection and objects that aren't objects. Here's a sample of his thinking on memory management - make sure to put your sanity defenses on full:
This system also clearly defines who owns what. Your so-called GC will work quite differently under the hood than what you're used to. Passing objects around will become very controlled and the need for a GC will not be as great. But if you wish to use one, it will be local to the cluster. The super cluster can of course monitor its activities. The GC will not be a GC in the normal sense. There'll be a memory manager for each cluster. When you allocate memory, you specify whether you want the memory manager to deallocate it automatically, to use the stack or scope for deallocation, or manual deallocation. Most people will use only use one form, but all allocation methods will be compatible with each other. Even if you allocate something that you want garbage collected, you can still manually deallocate it, and vice-versa. You can of course tell the memory manager to only accept certain kinds of allocations and notify you of any inconsistencies. This is mostly to enforce your organization's guidelines. Notice too that you can automatically ignore manual deletions if you tell the memory manager to garbage collect everything no matter what. In this way, you control how things are done and the since the super cluster can invade its containing clusters to change the way certain parts work, you can fine-tune your application without any need to change a single line of your original code... even for how memory is handled. And since clusters are independent, this makes garbage collection much easier. Heck, I don't even see the reason for a GC if your clusters or modules are well written. This is why I'm against GC's. If your data is well organized, there's no need for it. And with clear boundaries and ownership, much of the code to handle memory management can be automatically generated without a GC. So yes, there are alternatives to GC. Dare to see in color and not in B&W.
If that makes sense to you, I'm afraid for you. The rest of the post is like that, so don't say I didn't warn you :)
I've been having fun with Vorlath's notions of language design, but I haven't picked apart a specific example. Well, here goes - in his latest post, he calls exception handling broken, and explains by way of an example:
I suppose a simple example would not hurt. Take for example opening a file. Currently, we usually check the error code or trap an exception. If the exception isn't caught then it backtracks along the execution chain. Now consider if the open file command passed the error automatically to its parent super cluster along with the filename and other associated information. The super cluster can now pass this information to the GUI sub cluster to display an error message and some kind of option like choosing an alternate file for example. Then the super cluster can replace the filename and retry the open file command. The key here is that there is a centralized location for resource management and handling of subcomponents. So you can have standard recovery practices for the same types of errors. It should also be noted that you can fine tune the error recovery. If you wanted the leaf object itself to try and correct this error, you can do that too. Sometimes, the fact that the file doesn't exist is perfectly OK. But if you run out of memory or hard drive space for example, it can be automatically recover by notifying the user without putting this directly into the main flow of your source code.
I think he's got the handling mechanism of Java/C# in mind here. In those systems, by the time you get to the handler, the stack has been unwound, so you're stuck wherever you are. Not so in Smalltalk - you can restart execution pretty much anywhere along the chain, and you have full context information. I use that in BottomFeeder to deal with problems as they arise in grabbing a feed - the handling is quite different depending on whether the problem arose in the HTTP layer, in the XML parsing, or in the decomposition of the XML (RSS/Atom) into domain objects.
In fact, I managed the parsing issues by subclassing (cue the scary music) the XML Parser and tossing in my own, more fault tolerant one. Bottom line - the kind of exception handling he wants already exists - but since it's in Smalltalk, it's "old" technology, and he'll have to go ahead and recreate it from scratch. Just like GC, I suppose.
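For the curious, the kind of restartable handling I'm describing can be sketched with the standard Smalltalk exception protocol. This is an illustrative toy, not BottomFeeder's actual code:

```smalltalk
"A sketch of restartable exception handling - illustrative only.
 #on:do:, #retry, and #return: are standard ANSI Smalltalk protocol."
| attempts result |
attempts := 0.
result := [attempts := attempts + 1.
           attempts < 3
               ifTrue: [Error signal: 'transient fetch failure']
               ifFalse: ['fetched on attempt ', attempts printString]]
    on: Error
    do: [:ex |
        "The stack is NOT unwound here - the handler runs with the
         full context alive, so it can re-run the protected block
         with #retry, or answer a substitute value with #return:."
        attempts < 3
            ifTrue: [ex retry]
            ifFalse: [ex return: 'giving up']].
```

The key point is that the decision about what to do happens while the signaling context still exists - which is exactly what an unwind-first design throws away.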
One more point about that centralized error recovery idea - Smalltalk has had a default handler (which you can replace with your own) forever. Unless I'm greatly mistaken, the newest Java SDK from Sun includes that kind of functionality as well (without access to the whole stack, but still).
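In Smalltalk terms, that per-application fallback usually means overriding the default action on your own exception class - unhandled signals land there. A minimal sketch, where the FeedError class and its logging behavior are hypothetical:

```smalltalk
"Hypothetical example: an application-defined error with its own
 last-resort behavior. The default action runs only when no
 enclosing #on:do: handler takes the signal."
Error subclass: #FeedError
    instanceVariableNames: ''
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Examples'.

FeedError >> defaultAction
    "Centralized fallback: log the problem to the Transcript
     instead of opening a debugger."
    Transcript show: 'Unhandled feed error: ', self messageText; cr.
    ^nil
```

The exact hook varies a bit by dialect, but the idea is the same everywhere: the "standard recovery practices" Vorlath wants a new language for are a method override away.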
Joi Ito reminds us of something very important about doing business overseas - there are a lot of differences in business culture, and not all of the rules are formal. This bit about transaction guarantees looks fairly formal:
I spent part of the day today in court. I was defending myself against the landlord of a friend of mine who has been unable to pay rent. I am the guarantor on the lease and the landlord has decided to come after me for the money. This is probably the fifth time that I've had debt collectors of various sorts come after me because of guarantees that I've made. I'm sure people wonder why the hell I keep guaranteeing things. The odd thing is that it is so common in Japan. It is as good as required for any significant transaction such as renting an apartment or borrowing money from a bank. Even government affiliated loans require personal guarantees by people other than the principles.
While this seems like an informal rule:
One of my portfolio companies failed several years ago. As the lead investor, I went around to the other investors and explained the situation. Two of the other investors asked me to PERSONALLY cover their loss. Both of these companies were public Japanese companies. I didn't pay of course, but they seemed to think that it would have been nifty if I had. I've never heard of such a thing happening in the US.
I'm not passing judgment (of any kind) on these practices - merely pointing out that they are an example of something that all international businesses need to be aware of. "Standard practice" may not be standard everywhere.
One of the most crucial things to remember in communications is the importance of tone. That might sound odd coming from me; after all, I take a fairly snarky tone on this blog. But that's reflected in the title - note the term "Rants" right there at the top. Which gets to part of what's important in tone management - truth in advertising. Anyone who reads this blog on a regular basis understands that I like to let loose with rants, and expects it - it's part of what I do here.
Where people get in trouble is when they use inappropriate tone based on the expectations of the audience. For instance - what works for me here wouldn't work well in a press release. The audience for a press release isn't expecting the kind of advocacy I do here.
There's another thing that an awful lot of people forget though, and it's far more widespread on controversial (especially political) topics than it tends to be in the tech sphere. It happens here though; read through any long conversation about Atom and RSS, and you're sure to find some vitriol (proving that any topic, no matter how small the niche, attracts partisans). What am I talking about? Language use. To put it most simply, the first person who pulls out the curse words loses. You can see that playing out in the ongoing reaction to the Washington Post's blog comments thing - a lot of the cries of censorship that came up included an astounding amount of cursing. What the advocates forget is that - as soon as the curse words come out, most people stop listening. More than that, they tend to discount future statements from that source as well.
This is something I had to learn the hard way - my father always swore quite a bit, and of course, I picked the habit up. What I figured out over time should be blindingly obvious: any use of cursing in the course of an argument or discussion makes it less and less likely that you'll convince the people you're talking to. In general, the person who stays calm and doesn't lose his temper tends to be the most convincing - almost without regard to what his position is.
So, first they start off suing their customers; now they're maliciously making it hard for those customers to even listen to music, and they'll cripple your music and media players to boot. These guys deserve to go out of business - they obviously don't love music, and they don't understand their own customers.
Via Doc Searls, we find out that the "tiered service" outbursts from BellSouth and Verizon are even stupider than they sounded at first: neither of them is a tier one backbone provider. Which means that they aren't so much asking to provide better QOS for providers who pay extra - they would actually be degrading the service of everyone who didn't. I kind of thought that such "offers" usually came from gun-toting gang members. Doc quotes Hamish MacEwan:
Of course, as everyone not a Telco knows, things are not delivered to Internet customers, they are *requested* by the customer who pays for that service from BellSouth (substitute your Telco of choice, we have a microcosm of this debacle in NZ at present). Google, or any other website, server, etc. is irrelevant, they make their own arrangements independent of other users of the Internet.
It is this decoupling, and the autonomous networks structure of the Internet that is one of its critical advantages. If the customer wants better performance, they pay for it.
How was BellSouth proposing to impair the performance of Yahoo! vs Google anyway? Kneecapping packets as they entered Bellsouth's portion of the Internet?
The "Improved Quality" these guys plan to offer sounds an awful lot like the "Improved Quality" that Microsoft intends to give me with PVP-OPM. Thanks, but no thanks.
So I was trying to print something for my daughter on the color printer attached to my wife's machine - and Windows has decided that said machine - which has been on continuously, is connected by wire to the router, and is definitely in the same workgroup - doesn't exist. It was there a week ago, but now Windows assures me that it's not there.
So I glance over at the Mac, which is also on the network, and have a look, just to see what it thinks - and sure enough, it sees all the same printers (and shared drives, etc) that it's always seen. So why is it that Windows - which is presumably interoperable with itself - seems to forget which devices are on my LAN, while the Mac cheerfully sees the lot of them? Or for that matter, the ancient (RedHat 7, for gosh sakes) Linux box? It sees everything as well.
I love it when a simple print job takes 15 extra minutes of my time because the OS had a senior moment.