It's always fun to watch a public breakdown - kind of like a train wreck. Here's Dave Winer pushing his erstwhile ally Rogers Cadenhead under a bus.
Boy, if I could turn a phrase the way Lileks can. My wife and I were in stitches over this :)
Best I can figure, someone is testing a bot before they go hog wild with spamming. How else to explain a few spam comments I've seen on some of the CST blogs (and others) that look like this:
fLaa4mm8xjvC7 zDXPtJZOPDRt6B dvp6jiPulPCwEb
All of them have been on older posts - I suppose as some sort of "will the owners notice" kind of test.
Ok, this is amusing: The World's biggest Windows Error Message.
It's clear that the news media doesn't need facts. If they get in the way of a juicy story, what to do? Just get rid of them and run the story, even if it's all made up:
GamerDad, which is SPOnG’s new favourite site for all things to do with gaming and parenting, reported that one of their writers, David Long, was interviewed in depth for the piece by Nydia Han of Channel 6 Action News in Philadelphia (an ABC Affiliate) and that Mr Long made it clear to her that Pictochat was neither an Internet-enabled service, nor a threat to children from potential paedophiles anonymously attempting to meet or ‘groom’ children over the service.
The problem? Pictochat is strictly peer to peer, operating only with other DS units within a few feet. Meaning, if you get unwelcome messages, you can probably see who's sending them - just look for someone within a few feet banging away on a DS. Never mind that though - it's not scary enough. ABC news had to make the story scary:
It seems Ms Han then decided to totally ignore all of the facts as presented to her by GamerDad's Long and run with the erroneous and misleading story about an 11-year old girl being stalked over Pictochat in a WiFi hotspot.
Now, whilst this is merely an ABC News affiliate mis-reporting a story about gaming - which regional press all over the world do with alarming regularity - it's still worth pointing out that the story was picked up by hundreds of gaming news sites and forums (SPOnG included) and even on Slashdot.
It's things like this that make me question nearly everything I see in the media. They don't get it right in areas that I happen to be informed about - which makes me wonder about the stuff I'm not that well informed about. I now cast a skeptical eye over all media reporting, whether it be about technology, science, environmental issues, politics - you name it.
Update: Here's a link to the ABC Story. They eventually (final paragraph) have a spokesman from Nintendo explain that you would have to be within 65 feet to get contacted in Pictochat - the rest of the story really pushes the idea that the wireless net connection is at fault. The scare quotes in the story push that idea hard.
What I think this boils down to is that interoperability testing of Web based services (not Web services), like any Web deployment, benefits from network effects not available with Web services, primarily due to the use of the uniform interface. So if we're testing out Web based services, and I write a test client, then that client can be used - as-is - to test all services. You simply don't get this with Web services, at least past the point where you get the equivalent of the "unknown operation" fault. As a result, there's a whole lot more testing going on, which should intuitively mean better interop.
Except that it doesn't work as well, and, if it can be believed, has even bigger interoperability problems. Patrick Logan has been on a tear about this lately - check out his latest post on it, where he sums up:
Simple dynamic programming languages and simple dynamic coordination languages are winning. Vendors will have to differentiate themselves on something more than wizards that mask complexity.
The development industry loves complexity though. Why use a language with 5 reserved words and 2 operators, when you can use one that has dozens of each?
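The "write one test client, use it as-is against every service" point above can be sketched concretely. This is a minimal illustration, not any real test harness - the `probe` helper and the example URLs are invented:

```python
# Because Web based services all share HTTP's uniform interface, one
# generic probe works unchanged against any resource. The `opener`
# parameter is injectable so the sketch can be exercised offline.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def probe(url, opener=urlopen):
    """Issue a GET against any resource and report the status code."""
    try:
        with opener(Request(url, method="GET")) as resp:
            return resp.status
    except HTTPError as err:
        return err.code

# The same client, as-is, against entirely unrelated services:
# for url in ("http://example.com/feed", "http://example.com/photos"):
#     print(url, probe(url))
```

With SOAP-style Web services, the equivalent client can't get past the "unknown operation" fault without per-service knowledge - which is exactly the network effect being lost.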
Smalltalk Solutions 2006 is coming up fast - April 24-26. One change this year, with the show being run at LinuxWorld/NetworkWorld: paying for a full registration covers any and all tutorials. This is a change from previous years, where tutorials were an additional cost. So don't delay - Register now!
Well, it looks like MPAA members don't want to let RIAA members get too far in front of them, stupidity-wise. This past weekend, a bunch of them sued Samsung over a DVD player that's been discontinued since 2004:
Over the weekend, Bloomberg news reported Walt Disney, Time Warner and three other major film makers filed the lawsuit against Samsung in U.S. court.
They claimed that Samsung’s DVD players allowed consumers to avoid encryption features that prevent unauthorized duplication and demanded a recall of all the problematic products, Bloomberg said.
The Motion Picture Association of America estimates that the movie industry lost $5.4 billion last year due to piracy.
So their solution is to make the public more aware of a player that they might be able to buy on eBay? I sure hope that they aren't paying their lawyers too much for this one; it has "too clever by half" written all over it.
Sheesh, you would think that an effort to clean up some of the more ambiguous areas of RSS would be getting kudos. Instead, we have Dave Winer and his consistent inability to work with others:
It concerns me to see five companies, Newsgator, SixApart, SocialText, Feedburner and Technorati, give themselves special position among the many companies using RSS, especially since UserLand unilaterally gave up its special position with respect to RSS. It seems to me this is an issue that should be discussed publicly.
That's right Dave - those small companies are going to ruin the universe as we know it. We had some sensible reaction from Sam Ruby, who said (in part):
Being allowed to clarify the specification is one thing. Whether or not others feel like Nick does is yet another. In the long run, the success of the work currently under the working title of RSS 2.0.2 depends little on what Harvard thinks, but instead depends very much on what people like Nick and companies like Microsoft actually do.
The leadership that Rogers is providing has been exemplary. I’ve been quietly aligning the Feed Validator RSS 2.0 test cases to track to the drafts that he has produced. I believe this work is important and should continue.
That resulted in a pathetic cry for attention from Steve Gillmor:
I've developed a new spray that detects b*******. I can't talk too much about the technology until the product launch, but I will demonstrate its usefulness by spraying it on this post by Sam Ruby:
I thought everything was about Dave, but apparently, the stuff that isn't is about Steve.
Now, back in the day, when Atom was first being talked about, I was pretty darn hostile. This was back before I really understood what a complete jerk Dave Winer is, and how utterly impossible he is to work with. The Atom group had a lot of discussions that looked trivial, but they moved the ball forward and worked on some of the problems that just cannot be addressed in RSS - due to the complete lack of understanding shown by Winer. Over time, here's how it's going to fall out: the name RSS will stick - it's become generic, in the same way that the term "Kleenex" has. However, most people doing serious work in the field will use Atom. At least there, they'll find a group of people whose first thought isn't to deny the possibility of problems.
We've finally found a game that we like as much as Puerto Rico - Caylus. We played another round last night, and while I didn't do at all well, it's a game I like quite a bit. The thing is, you need to be paying attention pretty much the entire game. There are things that you need to accomplish in the middle game which, if you neglect, will just take you completely out of the end game. That's what happened to me last night. I didn't get the right sort of buildings up then, and by the end game, I was way behind.
I highly recommend this one - it takes longer to play than PR, but it's well worth it.
This is kind of amazing:
The Imperial Order, a World of Warcraft guild on the Detheroc server, is holding the server hostage. The guild has completed the various quests needed to obtain a scepter used to ring a gong. Ringing the gong will open the gates giving everyone on the server access to new content, but the guild refuses to do it. At least, they refuse to do it until someone pays them 5,000 gold.
Based on what I read about WoW on various blogs I subscribe to, it's like a whole second life for a lot of people.
I've taken a look at the creation code that I wrote (a long while back) for the Silt server, and discovered that it was broken. There have been some changes in the underlying Web Toolkit since I last looked at this, so it's not a huge surprise. I went ahead and patched the code up, so that you can now grab the latest Silt code and get a server set up easily. Here's the best path at present:
- Get an account for the Public Store.
- Then go to the Silt Page.
- Follow the directions for loading the SiltSSPFiles bundle
- Load the Silt Bundle
- From the Launcher, start the blog manager (Tools>>Blog Manager) tool
- Fill in the required fields, and you should get an initial blog set up
If you run into problems, send me an email.
This thread proves something - it proves that politics will enter any field that has more than one person involved in it. You might think that syndication formats in XML should be dull, and of interest only to the technically oriented - but you would be sadly mistaken.
I've updated the Silt server code that's available here, on the Wiki. The latest code is there, along with all the latest SSP templates (including all the css stuff as well). I've changed all the pages to report themselves as utf-8 as well, which is something I should have done a while back - it lines up with the way the content is actually stored.
I haven't updated the prebuilt server quite yet; I intend to get to that shortly.
There's been a fair bit of buzz about the meaning of multiple core machines - especially given the fact that today's 2 and 4 core systems will become tomorrow's 16 and 32 (or more) core systems. However, I don't think that the answer lies in changing languages and compilers to parallelize applications - at least, not as a general answer. That seems to be where Larry O'Brien was going in SD Times this week:
No mainstream programming language is automatically parallelizable. This is ironic, since object-oriented programming has its roots in simulation, where concurrency is a basic concern. However, since mainstream OO languages allow state to be shared between threads, they’re fundamentally crippled. When the basic rule for thread safety is “either write objects with no fields or write objects with no virtual method calls,” the paradigms are clashing.
Surprisingly, the mainstream language that seems to have the most far-reaching proposal for manycore programming is C/C++. Herb Sutter, who is an architect at Microsoft and chair of the ISO C++ committee, gave the first public airing of his Concur project at last September’s PDC. Along with emphasizing that Moore’s “free lunch is over,” Sutter proposes that existing approaches to concurrency such as OpenMP do not go far enough and that the abstractions of .NET (and Java, for that matter) are inadequate, focusing as they do on thread management, rather than the more general concept of delayed execution.
Developers have trouble writing multi-threaded applications now, especially when the threads are native. When you try to have multiple threads of execution access shared state, chaos tends to ensue. While the hardware will certainly get better, the "wet-ware" - i.e., our brains - won't.
Ironically, the answer to this problem came up a long while back, in the Unix world. Back in the day, Unix approached problem solving with lots of small applications that you wire together. Those ideas evolved into the modern architecture of things like Apache. Run ps on a Linux box sometime - you'll see lots of Apache processes. That's because it's far easier to create a single threaded application and run multiple copies of it than it is to figure out how to get shared state properly shared in a single executable space with multiple threads going at it.
The other nice thing: The multiple process approach works equally well if you scale via multiple systems rather than via multiple cores. Or if you use both approaches. It also works with existing development tools - it doesn't require custom compilers that will almost certainly be architecture specific.
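The shared-nothing, Apache-style model described above can be sketched in a few lines. This is an illustration of the idea only - the `handle` function and the request strings are made up:

```python
# Shared-nothing concurrency: each request is handled in its own worker
# process, so there is no shared mutable state and nothing to lock.
from multiprocessing import Pool

def handle(request):
    # A worker owns everything it touches; results come back by
    # message passing, never through shared memory.
    return "handled:" + request

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(handle, ["req1", "req2", "req3", "req4"]))
```

The same `handle` function scales across cores or across machines without change, which is exactly the point: no custom compiler, no locks, no architecture-specific tooling.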
Which is more expensive - the multi-core hardware, or the developer trying to work on it? Based on that answer, which one makes the most sense to optimize?
Charles Miller reports on just how easy it can be to track down someone's location/identity from the slimmest set of clues. The WaPo ran an anonymous interview with a hacker who didn't want to be identified, but allowed a small photo of part of his face to run with the story.
That's what got this kid (mostly) found. As it happens, you can get a lot of information on the kind of camera used to take a picture from the EXIF format. A little hunting with that will get you the location where said camera was used. Slapped together with other information this guy let slip in the interview, he's probably already been identified by people in his community. Have a look here and here to see how those small tidbits were used to find this kid.
If you want to stay anonymous, it looks like you have to be really quiet...
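For the curious, the metadata involved here isn't exotic: EXIF is just a TIFF directory of (tag, type, count, value) entries embedded in the JPEG. Here's a deliberately minimal, hypothetical reader for the camera "Make" tag (0x010F) from a little-endian TIFF blob - real tools parse far more:

```python
import struct

TAG_MAKE = 0x010F  # EXIF/TIFF tag holding the camera manufacturer

def read_make(tiff):
    """Pull the Make string out of a little-endian ('II') TIFF blob.
    Assumes the string is longer than 4 bytes, so the entry's value
    field is an offset into the blob rather than inline data."""
    endian = "<" if tiff[:2] == b"II" else ">"
    (ifd_offset,) = struct.unpack(endian + "I", tiff[4:8])
    (count,) = struct.unpack(endian + "H", tiff[ifd_offset:ifd_offset + 2])
    for i in range(count):
        base = ifd_offset + 2 + 12 * i
        tag, typ, n, value = struct.unpack(endian + "HHII", tiff[base:base + 12])
        if tag == TAG_MAKE:  # type 2 = NUL-terminated ASCII
            return tiff[value:value + n].rstrip(b"\x00").decode("ascii")
    return None
```

Combine a make/model string like that with a few biographical details let slip in an interview and, as the links above show, "anonymous" stops meaning much.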
I had uploaded new files, but my build script had a huge "oops" - I was still integrating VW 7.3 VM's. So, I'm uploading the build again, with the proper VM's this time. Due to an addition in the base product, there's now a (development) rev of BottomFeeder for Solaris on x86. I'll update this post when the upload is done.
Update: The files are up now. The build scripts have not been updated yet; I'll get to those tomorrow. Enjoy!
I'm in the process of uploading an initial build now. It's an initial build - there are still things I'd like to change, and features that I would like to add before a general release. However, I've got the first cut uploading now. In a few hours, you'll be able to visit the download page, scroll down to the dev builds, and grab it.
The North American launch of Sony's much-anticipated PlayStation 3 could be delayed until next year, according to a research report issued by Merrill Lynch.
In the report (Click here for PDF), the analyst firm proposed the idea that high costs and Sony's decision to use an "ambitious new processor architecture--the Cell" is making it look like the company might not be able to meet its goal of getting the PS 3 out in the U.S. this year.
It's all speculation at this point, is what it amounts to. It does sound to me like Sony may have gone "a bridge too far" on the technical side - too many new things at once.
It's weekly log time here - looks like there were 264 downloads of BottomFeeder a day last week, which is a pretty decent clip. The details:
Fairly decent distribution spread, I think. On to the HTML blog page accesses:
| Tool | Percentage of Accesses |
A little higher Mozilla than average, but my traffic jumped a bit last week as well. I should walk through the specific page requests and see what, if anything in particular, was behind that. Finally, the RSS page accesses by tool:
| Tool | Percentage of Accesses |
| Net News Wire | 10% |
The tool distribution for RSS access doesn't seem to be consolidating at all.
If Merrill Lynch has these numbers right, then Sony is going to have to sell a lot of games in order to make back the discounts they'll have to offer on the PS3:
If there are some people out there right now who are in the know when it comes to what the hell is going on -- we mean really going on -- with Sony, it's those investment firms. But even barring their research analysts getting all kinds of privy information from direct executive input or connections on the supply side, it's kind of funny when one of these investment firms lets loose some juicy gossip. Like that Sony's albatross, the PlayStation 3, is going to cost them $800 per unit at launch (they list $900, but apparently Lynch financial analysts can't add their own totals). $800 per unit?
Merrill has them at a unit cost of $320 after 3 years (I suspect it will be lower over that time, but still). Check the site for the itemized list of costs. If Merrill is correct, then Sony is going to have some pain associated with this launch.
We went to see "Firewall" today - it's an action flick with Harrison Ford. However, it doesn't have Ford pretending to be a young man kicking butt - it accounts for his age, and does a pretty good job with it. The setup has some holes in it, and the initial phase of the movie moved a little slowly - but once you get to the "now you've made me mad" part of the movie (you'll recognize it when you get there), it rocks along pretty nicely. From there to the end the pacing is quite good. I rather liked Mary Lynn Rajskub as the admin who helps Ford out, although a thought came to mind: do all the strong men she's helping have to be called "Jack"? Maybe there's a rule I missed :)
Anyway, it's a pretty decent flick. Nothing special, but it was entertaining for what it was.
I'm out of time for it today, based on other things I have to deal with. Like dinner and seeing friends :) Back to the grind on this tomorrow, I think.
I've just about got things done - I've solved the packaging issue I had, but managed to execute an entire build with an incorrect set of parcels. So... I'll have a dev build up later this afternoon.
Now here's an interesting piece of history - back in the days of above ground nuclear tests, some fascinating photos of the initial stages of a nuclear explosion were taken - go check them out. Hat tip Boris.
Here's how the RIAA (i.e., the big labels) will die - not with a bang, but with a whimper:
Tunecore is playing a dangerous game. They are a music publishing service operating at minimal costs, and they have contracts with iTunes and Rhapsody allowing artists to sell their music on two of the most powerful music sellers.
Prior to the creation of Tunecore, this was the domain of the record labels - essentially meaning the Big Four: Universal, Sony BMG, EMI and Warner. The Big Four occupy a uniquely powerful position - known in economic terms as an oligopoly - where the entire global market is made up of just 4 companies. Over 75% of all music sold worldwide comes from these four - and they work together to hold a life-and-death grip over artists and the industry.
Tunecore may not be the one to knock the RIAA down, but something like it will. There are just tons of bands producing good music out there (my cousin plays drums for one of them). Unlike the surgically enhanced sex symbols tossed our way by the labels, these bands can carry a tune without voice enhancement. As the costs of producing music fall, and the process of selling music to the likes of iTunes and Amazon gets disintermediated, the industry will go through a sea change. The whole DRM war we're seeing now is the dying gasp of a set of people who can't - or won't - see the future.
In the internet access market, we can see the very definition of the late movers - the people who do not yet have broadband access. Here's a report that goes through that in some detail, and the reasons people don't have broadband aren't all what you might think:
So why are some dial-up users resisting the tide? According to a new survey from the Yankee Group, the most common reason US consumers don't subscribe to broadband is that it's too expensive. Despite promotional price cuts for DSL (which often cover slower connection speeds and eventually expire, shooting the price up), broadband is more costly than dial-up, especially for truly high speeds. Presumably, dial-up consumers have little need for tasks beyond e-mail, IM and simple Web browsing, which are doable through broadband, and want to keep their monthly expenses low.
Price isn't the only factor. More than 30% of consumers say that they just don't want broadband, and about 14% say they feel dial-up is adequate for their needs. Less than 10% are not able to get broadband access in their area.
That 30% who don't actually want broadband - at least as it's currently been marketed to them - are the ones to examine, I think. I'd suspect - as this report says - that these are light users of the internet. They send emails, they browse a handful of sites; they just don't see the point in something more expensive. Getting that group to buy in won't be an easy exercise - it took my dad years to convince my uncle to move off Windows 95, and he still hasn't convinced him on broadband.
Vassili just sent me a style he's apparently had, but forgot to make available. Select the 'plastics' link in the style sidebar to see what it looks like.
Morgan McLintic notes that Steve Rubel has moved to Edelman, and the most interesting part to me is the disposition of his blog:
He keeps his excellent MicroPersuasion blog following a buyout deal with his previous agency, which had a branded practice of the same name. Richard Edelman is pumped about the appointment too - as well he should be.
The blog was a valuable piece of "real estate", and it had to be bought out. I expect we'll see more of this sort of thing as time goes by.
It's time to register for Smalltalk Solutions 2006. The site is accepting registrations now, and it's coming up sooner than you think: April 24-26. Register now, so you can attend talks like this one:
Efficient Smalltalk (Travis Griggs)
“Smalltalk is slow. Everyone knows it. So why try?” Not! In this tutorial, Travis will dispel these myths, and along the way he will provide insight and patterns that will show you when and how to improve performance in Smalltalk programs. If you're looking for efficient Smalltalk, this is the session to see.
See you there!
A bit of short notice, mainly because we had to shove this in between today and StS, but please mark 11th March 2006 in your diary if you're a Smalltalker within reasonable travel distance of Brussels.
The VUB Programming Technology Lab will host this day of informal chat, presentations (Bryce will rehearse his StS Exupery presentation), and hopefully drinks&dinner afterwards.
Coen de Roover, our contact and thus Official Party Host (thanks Coen!), has set up a Swiki at http://prog.vub.ac.be:8080/SmalltalkParty/ - if you plan to attend, just add your name to the list.
I'm working on getting a dev build of BottomFeeder on VW 7.4 out. I've got it working, but some of the extras that I've come to rely on - the process monitor and inspectors - are still being stripped out. That's a consequence of the changes I spoke of in my previous post on RTP - and I'm still adjusting to that change. I should have a dev build by tomorrow, unless I get sidetracked.
Firefox started misbehaving, so I thought -- let's go download a fresh install. Guess what's waiting for me: no choice but to install the Google Toolbar. Remember what they said about their hack, if you don't like it, don't install it. Well, there it is. Where's the choice now. Back then I couldn't get anyone to listen. Letting Google modify our content to add links to their sites was a very bad idea then, now maybe others get that too? Now that they're doing it for the Chinese censors. Why do you guys trust Google so much . They're a corporation; they'll do whatever they have to do to make money, do you think the integrity of your writing is even the smallest little issue for them? I don't.
This has nothing to do with trust - it has to do with user options. For those slow on the uptake, there's an "options" button right on the Google toolbar. Not to mention that right clicking in the toolbar area allows you to turn the Google toolbar off. Not to mention that the Google toolbar can be uninstalled from the extension menu as well.
But hey - who wants to let a few facts get in the way of a pointless rant?
Alan Knight posted this on one of our mailing lists, and I thought it would be generally useful:
Yes, quite a lot of substantial things changed with Runtime Packager. See the release notes for some additional information. One of the significant and backward-incompatible changes, as Jim pointed out, is the change from using categories and namespaces as the organizing mechanisms to using packages and bundles. Most of the changes as to what did and did not get stripped would have made it strip less (e.g. allowing for pragmas and handling registries of classes). I don't know why it would strip productFromInteger:, and I haven't seen that behaviour.
The biggest problem with a simple, linear bootstrap chain is that the system does not start up in a simple, linear fashion. The most notable branching points in a basic image are between runtime and development usage, and between headless and headful operation. I'm not sure where to draw the distinction between something that allows clients to hook in their interests at a proper position and a confusing network of implicitly linked events, triggers, and pragma dependencies. In comparison to the traditional mechanism, which could be described as a linear bootstrap chain with a place to hook interests, it proved to be too hard-coded and not flexible enough, not to have enough registration points, and the registration mechanism led to ordering dependencies on exactly when particular interests were registered. The Subsystem mechanism and the SystemEventInterests have allowed for considerably greater flexibility, and there are a number of things implemented as a result that would have been quite difficult to do as ObjectMemory dependents.
I'm not sure if you were wishing for a tool that would show the ordering dependencies or complaining that one is necessary, but there is one. See Tools-StartupOrderingTool, which adds a tab in the RB code view when a Subsystem class is selected.
To disable the command-line options that we thought you might want to disable for security reasons, see Settings->System->Loading, or the class side of ImageConfigurationSystem.
Dynamically pausing parts of the system during runtime is something that has been part of the system since the beginning, and is fairly widely used. The mechanism on Subsystem merely makes it possible to do this for Subsystems without making them be dependents of ObjectMemory.
I suspect the increase in footprint from 7.3 to 7.4 is more likely due to the more conservative approach to stripping when confronted with constructs that are difficult to trace correctly, such as pragmas and class registries, than to links between subsystems, but it's possible that that is a factor. I also note that by default the system will start by including all Subsystems in the image, and they would have to be explicitly excluded. You might be interested in looking at base.im as a basis for stripping, rather than a full development image, or at unloading development-time-only functionality before stripping.
Improving and automating deployment procedures is one of our priorities, and the changes in Subsystems are part of that, as are the changes to Runtime Packager. We think the basic Subsystems code is stable, although adapting other parts of the system to use those mechanisms, such as was done for Runtime Packager in 7.4, is still ongoing. Runtime Packager is likely to have more changes, and also to have less emphasis as the primary deployment mechanism, particularly with respect to stripping a development image.
Gizmodo quotes a source claiming to have the Sony PS3 release date:
According to well-placed industry sources, PlayStation HUB will offer PlayStation 3 owners much the same services as Xbox Live, including chat, downloadable demos, independent games and online play. The service is also designed to support PSP online play as well as PS3.
We'll have to wait and see. The most anticipated part of this? What's the price tag going to be?
The RIAA pulled its middle finger back out, and waved it at all of us again today. Here's what they said during the Grokster case last year:
"The record companies, my clients, have said, for some time now, and it's been on their website for some time now, that it's perfectly lawful to take a CD that you've purchased, upload it onto your computer, put it onto your iPod."
However, in a filing they just made, it's clear that they consider that to be inoperative:
"Nor does the fact that permission to make a copy in particular circumstances is often or even routinely granted, necessarily establish that the copying is a fair use when the copyright owner withholds that authorization. In this regard, the statement attributed to counsel for copyright owners in the MGM v. Grokster case is simply a statement about authorization, not about fair use."
So using their standard operating procedure, if they write a restriction against copying in tiny print, and hide it under the CD inside the case, that's fair warning. What a bunch of utter tools. Oh - even making a backup against a disaster is verboten:
The same filing also had this to say: "Similarly, creating a back-up copy of a music CD is not a non-infringing use...."
I repeat: what a bunch of tools.
Tim Bray wants to utilize the resources at the client side to lessen the load on servers:
I suspect there’s a huge system-wide optimization waiting out there for us to grab, by pushing as much of the templating and page generation work out there onto the clients. In particular, when you’re personalizing a page, assign all the work you can to the personal computer sitting in front of the person in question. Yeah, that cool, responsive AJAXy stuff is nice but maybe it’s the icing on the cake; the real win is making the Web run faster. ongoing doesn’t do any sexy-UI stuff, but you know, the page content could be a lot more dynamic, and there’s no reason for it to run slower; or even to upgrade from the single fairly-basic Athlon that hosts it.
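The split Bray is describing is easy to picture: serve one cacheable template to everybody, plus a tiny per-user data blob, and let the client do the merge. A toy sketch follows - the "client side" substitution is simulated here in Python, and all the names are invented:

```python
import json
from string import Template

# Static and cacheable: one copy serves every user, from browser or CDN cache.
PAGE = Template("<h1>Hello, $name</h1><p>You have $unread unread items.</p>")

# Dynamic but tiny: the only thing the server must generate per request.
def payload_for(user):
    return json.dumps({"name": user["name"], "unread": user["unread"]})

# What would run on the client: merge the cached template with the blob.
def render(template, blob):
    return template.substitute(json.loads(blob))

print(render(PAGE, payload_for({"name": "Alice", "unread": 3})))
```

The server's per-request work shrinks to producing the JSON payload; the expensive templating happens on the machine sitting in front of the user, which is the system-wide win Bray is pointing at.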
The spread of blogging is making the job of PR people difficult and confusing. Witness this post from Ed Bott, which I found via Scoble - Bott has early access to Office 12, as a member of the tech press. He's restricted by the beta EULA from writing about it, and Scoble was trying to look into that. That's where the confusion started. Here's Bott explaining it:
- A coordinator of the Office 12 beta program dismissed the report as a “rumor” and said the confidentiality requirement of the EULA was still in effect.
- Scoble checked with his sources inside the Office team and said the official word was that press – including bloggers – are indeed allowed to write about Office 12 client apps.
- A few days later, Scoble backtracked: “It turns out that this isn’t quite the case,” he wrote. “There are different NDAs given to different groups. … If you’re an MVP, in the Technical Beta or on the TAP program you’ll need to comply with the EULA of Beta1, which maintains confidentiality except in cases where the information is already public.”
- The next day, I received an e-mail from a representative for the Office team confirming that technical beta testers can write about the client applications in Office 12 only if they’re among the group of press and reviewers recognized by Microsoft. “These folks (there are a few hundred of them) can blog all they want about client apps…”
That's enough to make your head spin, especially when there's been so much talk about various aspects of Office 12 (especially the new Ribbon UI).
What we're seeing is the old PR staff doing business as usual, and getting snared as the ground shifts underneath them. I'm not sure where this is going to end up - it's (mostly) easy to identify a member of the press, and then control the level of conversation with them. It's a lot harder when bloggers keep popping up, with unmoderated comments flying, while all that is taking place.
A new model is trying to be born, and I don't think anyone knows what it's going to look like yet.
If you really want to ensure true freedom of your code, put it under the control of a Foundation like Apache or Eclipse. Foundations are not controlled by any one company so you can feel comfortable about being on a level playing field with your competitors.
Unless one company starts giving disproportionate sums to the foundation - and most foundations are not going to turn away money. You can see this in the political sphere - plenty of charitable foundations have entirely different political orientations from their (now deceased) founders, because they were "captured" by generous funding.
If you think that can't happen in the business world, you're simply naive.
This site explains it all, and a lot else besides. Go read it all, if you're a fan of sci-fi flicks.
Andreas Katsulas, the character actor known to SF fans as G'Kar on Babylon 5 and a familiar face from Star Trek and other SF&F TV shows, died Feb. 13 of lung cancer in Los Angeles.
I really thought he did a great job as G'Kar in "Babylon 5".
Bob Congdon illustrates what I've been somewhat worried about:
Cable companies aren't exactly beloved by most people. Cable service is expensive and sometimes flaky or troublesome. It's pretty easy to find people who don't like Comcast. That said, our recent experience with Comcast was pretty good. We added HD and DVR services to digital cable. The DVR is nice — it stores up to 80 hours of standard broadcasts or 20 hours of HD content. We've never owned a TiVo so I can't compare the two services but with Comcast DVR there's a single box and single remote for both cable and DVR. Not sure if it's Comcastic or not but we're pretty happy.
The thing is, the Comcast DVR blows. I guess it seems ok if you have nothing to compare it to (and, to be fair, it does have some nice things that ReplayTV doesn't - HD recording and the ability to browse future recordings easily). However, those nice things are outweighed by some of the really stupid things:
- No skip. Yes, it has fast forward, but it would be very nice if it could just jump ahead 30 seconds.
- No ability to jump to minute X of a show. Instead, you have to forward/reverse.
- The second problem is compounded by this: say you are watching a show as it records. The show ends, and the DVR starts recording a new show. You are behind, due to a few pauses. Bam - it skips to the start of the new show. Because, you know, that's what you want. Have fun getting back to where you were.
I'm finding that I prefer to watch non-HD versions of things simply so that I don't have to put up with that stuff. The Comcast DVR doesn't have to suck - it's got three tuners and a few nice features. The tragedy is, the flaws outweigh all that.
So first Oracle bought InnoDB. That wouldn't be big news except for the bind it puts MySQL in: InnoDB is the engine they use. Commercial users of MySQL now have a potential issue on their hands. It may never become a real one, but buying InnoDB was certainly a cheap way to give MySQL heartburn. To top that off, they just bought SleepyCat - which some had considered a possible replacement for InnoDB in MySQL. So much for that idea.
Recently, there have been rumors floating that Oracle is looking to buy JBoss. That would cap off an interesting set of buys, and would also get rid of a relatively small but troublesome corner for Oracle. Phil Windley explains what Oracle has done succinctly:
MySQL, InnoDB, and Sleepycat are all "open source" but they aren't "free." MySQL made a strategic blunder by not buying InnoDB to begin with. If they had, there'd be no company to buy. Take Linux as a counter example. Do you think that Microsoft would have bought Linux long ago and put the threat to their own OS to bed if they could? Sure, but fortunately there's nothing to buy. Oracle has effectively cornered MySQL by buying the storage engine they use. Moreover, they accomplished it much more cheaply than they could by buying MySQL outright.
Well, there may be "nothing" to buy on the Linux side, but I suspect that a purchase of RedHat and SuSe would change the playing field quite a bit. I would be astonished if Microsoft hadn't tried approaching one or both of them at some point.
Update: Apparently, they tried to buy MySQL as well.
I think this headline at ComputerWorld says it all:
Apparently, companies are starting to look elsewhere for cost savings. Give those spots 6-24 months, then: rinse, repeat.
As always, the best answer is to hire the best employees you can.
Looks like the marketing department at Fox is very, very slow on the uptake. After seeing Sony take damage from their CD rootkit fiasco, it's been reported that they are shipping DVDs with similar copy-protection schemes:
The Settec Alpha-DISC copy protection system used on the DVD contains user-mode rootkit-like features to hide itself. The system will hide its own process, but does not appear to hide any files or registry entries. This makes the feature a bit less dangerous, as anti-virus products will still be able to scan all files on the disk. However, as we note in our article on rootkits, it's not that uncommon for real malware to only hide their processes.
Our message to software companies producing any software (not just copy protection products) is clear. You should always avoid hiding anything from the user, especially the administrator. It rarely serves the needs of the user, and in many cases it's very easy to create a security vulnerability this way.
What is it that people say about doing the same thing over and over, expecting a different result?
Mark Baker is right - BPEL wasn't really built for the web, and I seriously doubt that it'll be a player in the future. It reeks of all the same things that were wrong with the plethora of CORBA services back in the day - overly complex gunk that no one wants to figure out.
We are in the process of getting ObjectStudio 8 ready for early access - if having a look at that (and getting us feedback) is of interest to you, then send me email. I intend to push out a screencast on this eventually, but in the meantime, here are a few screenshots. First, ObjectStudio running inside VW - click through for a larger image:
Now for the cool stuff - I loaded the OLE support into ObjectStudio, using the ObjectStudio tools, and then loaded a demonstration - hat tip to Mark Grinnell (our lead OST engineer). Here's what the ObjectStudio launcher looks like after loading those - a little unfamiliar to VW users:
Now, launching the IE demo - that pops up a window with the IE HTML control embedded in it. Again, you can click through for a larger image:
Now, that's running inside the ObjectStudio image, which itself is running inside VisualWorks. Which means that - once this is ready (and I need to emphasize that we're at an early alpha stage right now) - you'll be able to make use of this as either an ObjectStudio or a VisualWorks developer. Very cool stuff. Here's the ObjectStudio code, seen in the Refactoring Browser (again, click through for a larger image):
The Magic Middle is the 155,000 or so weblogs that have garnered between 20 and 1,000 inbound links. It is a realm of topical authority and significant posting and conversation within the blogosphere.
That's a much larger group of people commenting on things than you could find 15 years ago via traditional media outlets. Instead of the handful of voices on talk radio or in small publications, we now have this large (and growing) set of communities. That's a big change, IMHO.