Oh boy. My wife volunteered our house for a Girl Scout cooking demonstration - I'll have a house full of 7th grade Girl Scouts trying to cook. Oof.
More Seaside news, from the ESUG list:
netstyle.ch and ESUG are pleased to finally announce the new Seaside hosting service. Seaside-Hosting is a free hosting service for non-commercial Seaside applications. This service provides a simple to use web interface with FTP access to set up and run your Seaside applications. It allows you to put your own application online within minutes. The service is a Seaside application and is itself running on Seaside-Hosting too.
Seaside-Hosting currently offers 128 MB of file-space for saving the Squeak web-application image and static files, e.g., pictures or style sheets which you want to use as part of your application.
For general discussion we propose to use the Seaside mailing list. For specific questions or problems please contact us directly at email@example.com.
One of the things that has set off a furious round of politics on the RSS Advisory list is the <enclosure> element - does it allow multiple enclosures in a single item or not? Here's the text from the spec:
<enclosure> is an optional sub-element of <item>.
It has three required attributes. url says where the enclosure is located, length says how big it is in bytes, and type says what its type is, a standard MIME type.
The url must be an http url.

<enclosure url="http://www.scripting.com/mp3s/weatherReportSuite.mp3" length="12216320" type="audio/mpeg" />
That doesn't say anything about it, which is why there's discussion. I thought it might be useful to have a look at actual practice - how many multiple enclosures do I see in the wild? Well, this is admittedly anecdotal - I'm looking through my own subscription list, and I'm not big into podcasts - so out of my 316 feeds, it's not a huge selection. Still, let's have a look. First, all the items that have at least one enclosure:
withEnclosures := OrderedCollection new.
RSSFeedManager default getAllMyFeeds do: [:each | | with |
    with := each allItems select: [:eachItem |
        eachItem enclosure notNil and: [eachItem enclosure size >= 1]].
    with notEmpty ifTrue: [withEnclosures add: (each -> with)]].
withEnclosures inject: 0 into: [:subTotal :next | subTotal + next value size]
That code, when inspecting the results, gave me this:
Ok, so there are 18 feeds, out of 316 feeds total (with a total of 17,968 items) that have even one enclosure. In that group, there are 327 items with at least one enclosure. That's more than I thought, actually. How many of those 327 items have more than one?
- Feed: Lessig Blog
  - I'll be virtual next Wednesday (3 enclosures)
  - the badge campaigns continue (4 enclosures)
- Feed: Instapundit.com
  - The Glenn and Helen Show: Austin Bay and Jim Dunnigan on Ports, the Philippines, Iran, and More (2 enclosures)
- Feed: Power Line
  - Still More Cartoon Madness (2 enclosures)
That little snippet of HTML was produced by this code:
stream := WriteStream on: (String new: 100).
multiples := OrderedCollection new.
withEnclosures do: [:each | | feed items bigger |
    feed := each key.
    items := each value.
    bigger := items select: [:eachItem | eachItem enclosure size > 1].
    bigger notEmpty ifTrue: [multiples add: (feed -> bigger)]].
stream nextPutAll: '<ul>'; cr.
multiples do: [:each |
    stream nextPutAll: '<li>Feed: ', each key title, '</li>'; cr.
    stream nextPutAll: '<ul>'; cr.
    each value do: [:eachItem |
        stream nextPutAll: '<li>'; tab;
            nextPutAll: eachItem title; tab;
            nextPutAll: eachItem enclosure size printString;
            nextPutAll: '</li>'; cr].
    stream nextPutAll: '</ul>'; cr].
stream nextPutAll: '</ul>'.
^stream contents
The reason it looks like a lot of code is the production of the HTML mixed in there. All I'm really doing is looping over the data I produced in the last go-round, and getting numbers out of it. As it happens, there are 4 items in 3 feeds (out of 327 items in 18 feeds with enclosures) that have more than one enclosure - with a total of 11 enclosures across those 4 items.
So what does that show? Possibly that using multiple enclosures just isn't that common. On the other hand, the three feeds that did use them are quite popular.
So I almost forgot to mention the cool part. All that code was executed in a workspace in the runtime for BottomFeeder - not in the development environment. The tool is written in Smalltalk, and I can simply ask the domain objects questions at runtime. Pretty cool, IMHO.
Me: "Why would the on-off switch on a quantum computer have two positions? Shouldn't it just have one, labeled INDETERMINATE?"
He: "Only when you're not looking at it."
Phil Windley also took notes on this evening's lecture. I've not been a huge fan of this initiative, and these two paragraphs explain why:
Half the price of a typical laptop is the marketing and distribution. Get a non-profit and drop that. Half of the remaining cost is Microsoft, or more generally commercial software vendors. Free and open source software more than adequately covers the computing needs of most people, particularly children. The fact that there are $122 DVD players says you can build a $100 laptop. The cheapest hard drives are too expensive; so use flash memory.
One big problem is the grey market. They’ll be diverted from children unless you do something to protect the laptop. A few ideas: an RFID card keyed to the specific owner helps. The device is networked, so the owner of the device has to log in every few days to get a token to keep it working. The color (green) helps. The child’s picture could be embedded in the plastic case.
The cost of that DVD player includes sales and marketing as well. What's driven the cost down is the free market at work - commoditization. Using Flash memory is fine, so long as you don't want to store a lot of stuff - and the open source alternatives to word processing packages aren't going to save on space (or performance). Pretty much the same thing goes for other open source efforts as well - in general, size isn't something that the developers have been trying to optimize for.
I like that he's recognized the problem of the grey market - but I think his proposed solutions will drive up cost without actually accomplishing much. If you are trying to introduce an item of value into an area that, in general, cannot afford the extant commercial products, then a lot of your target audience will try to sell the item. It's really that simple.
Having said all that, I like this summation:
The important question surrounding the $100 laptop is “will it be more than a mere technological artifact?” The answer depends on whether the content, and especially the mentoring, can be brought along with it to have real impact.
If they pick their target markets correctly, and are able to provide the right content, it could work. On the other hand, if that market exists, I expect that one or more commercial vendors will end up serving it.
Phil Windley took notes on Alan Kay's talk in Utah. His first talk is one I've seen (although it's evolved, with new events as examples, and the work in Croquet that he speaks to). Here's perhaps the best quote:
The good guys (late binders) lost in the late 70’s. The early binders won.
Good ideas don’t often scale.
Most people who graduate with CS degrees don't understand the significance of Lisp. Lisp is the most important idea in computer science. Alan's breakthrough in object oriented programming wasn't objects; it was realizing that the Lisp metasystem was what we needed.
I've yet to see compelling, detailed, practical examples where real businesses are solving the problems that WS-* addresses with only HTTP+XML (unless one ignores the bazillion person-hours and immense amounts of code that industrial-strength web sites have so far had to deploy). And I'm still waiting for pragmatic guidance on exactly how to put these ideas in practice for an organization that needs something more complex than a stock quote service.
Which problems is WS-* appropriate for again? It looks an awful lot like the exact same problems that CORBA was (and is, for that matter) appropriate for. Which is to say, a small number of problems that a small number of people run across.
Here's the dirty secret that most developers - and most assuredly, most development managers - don't want to have to admit: Most of the problems they are confronted with just aren't that complicated. The complexity comes from the developers and managers themselves, who insist on reading the latest set of buzzwords with near reverence - and who then decide that all the software they currently have written in X needs to be redone in Y because... well, because it would be cool.
Sure, there are some hard problems out there. However, most people aren't working on them. They are instead making the simple complex with a vast array of overly complex crap, like the full J2EE stack or WS-*.
Smalltalk community and STIC members
HAVE YOU HEARD!
Smalltalk Solutions 2006 is in the fabulous city of Toronto, April 24-26, 2006 and has joined LinuxWorld & NetworkWorld Canada.
This year we are delighted to offer you more than ever before ... more sessions, more tutorials, more networking and much much more ...
Your Smalltalk Solutions conference pass includes:
- All Smalltalk seminar sessions
- All Smalltalk tutorials
- All Smalltalk invited special guest sessions - Join Brian Foote & Avi Bryant
- All LinuxWorld & NetworkWorld seminar sessions
- All LinuxWorld & NetworkWorld tutorials
- All Exhibit Keynotes - IBM, Novell, Samsung, HP
THERE IS MORE
- The Smalltalk Solutions Pavilion on the exhibit floor
- All of the LinuxWorld & Network World Exhibits
- including the new Open Standards Desktop Live!
DID YOU THINK WE WERE FINISHED?
25% off listed conference fees for STIC members - Including the early bird and the advanced.
Register for the early bird through March 17 and save $231.00. See the registration form for the advanced rate.
DID WE TELL YOU THE FEES ARE IN CANADIAN DOLLARS!
Make sure you visit us at this year's Smalltalk Solutions 2006 - come up to the exhibit floor and visit the STIC booth and Cincom, GemStone and Instantiations.
We look forward to seeing you in Toronto!
PS... Oops! We nearly forgot. Be one of the first 200 to register and get a Kensington Ultra Wireless Pocket Mouse - see details on the registration form.
You've said you're concerned because Microsoft is making a strong move into RSS. That's a concern I share, so that's common ground. You've even found reasons to be concerned, things they've done that we need to talk about with them. So define your group on those lines, you're a study group, or a documentation project, or you're designing a tight profile of RSS that's intended to maximize interop, these are things I can support. I hope you see now that my support is worth something, that you can't just blow by me in RSS, and ignore what I say, that that just isn't going to work.
I think ignoring anything Dave says would be an excellent idea. Actual progress might happen then.
I'm with Dana VanDen Heuvel on this one - what on earth is Google doing introducing a basic HTML page builder that has no connection to Blogger? Do they think it's 1997, and their name is Geocities?
I have to ask "why?" What do we need this crap for, and where's Blogger at? How come they didn't build this functionality with blogging in mind?
CNet gets to the meat of the issue:
The estimated total bill of materials for Sony's next-generation game console will be between $725 and $905, according to various estimates. In comparison, the Xbox 360 from Microsoft comes with a component bill between $501 and $525.
Though Sony hasn't disclosed the price of the PS3, analysts figure it will have to be in the ballpark of $299 to $399--the price for the two versions of the Xbox 360. PS3 pricing speculation has heated up in recent days, along with rumors that the long-awaited game console could be delayed for up to a year.
The current price for the 360 is the one that matters. Sony cannot price much (if any) higher than that, so the question that remains is this: how much damage will they take with each sale? If the PS3 is delayed, MS will likely be able to reduce the price of their console (admittedly, some of the pricier components in the PS3 could drop in price during that interval as well).
The big questions are going to be asked in Sony's boardroom. The company is not performing that well at the moment, and hemorrhaging from the game division could easily attract negative attention.
We must still be in the "denial" stage with respect to the mainstream media and blogs. Here's the Chicago Tribune, bucking itself up with survey results that make the case that few people read blogs:
Gallup finds only 9 percent of Internet users saying they frequently read blogs, with 11 percent reading them occasionally. Thirteen percent of Internet users rarely bother, and 66 percent never read blogs. Those numbers, essentially unchanged from a year earlier, put blog-reading dead last among Gallup's measures of 13 common Internet activities. E-mailing ranks first (with 87 percent of users doing so frequently or occasionally), followed by checking news and weather (72), shopping (52) and making travel plans (also 52). Gallup concludes that while the amount of time people spend online has risen, "it appears the online public is simply doing more of the same activities, rather than branching out and trying different Internet offerings."
Well, I completely buy the number, insofar as it means anything. It's almost certainly the case that few people go out specifically looking for blogs to read. However, if you're a typical, non-insider, what do you make of Jeff Jarvis' site? Do you think of that as a blog, or as an op-ed page? I'd guess that a fair number of people read blogs without classifying them that way.
Another thing - let's do a Google search for "Sony DRM" - the first result (and many of the others) are from blogs. People find stuff on the web via search - heck, it's how I found this article in the first place. In many cases, blog entries pop up as prominent search results. Did the participants in that survey realize that they were hitting blogs when they followed search results? I very much doubt it. I very much doubt that they cared, one way or the other. The Tribune cares, and they want to play the public's lack of concern as meaning something.
I guess the whole "prior art" thing is just passé down at the patent office - some clown claims to have invented "rich media applications delivered over the internet", and the PTO was just stupid enough to believe him. Here's what the bozo, a guy named Neil Balthaser, claims to have invented:
The patent issued on Valentine’s Day covers all rich-media technology implementations, including Flash, Flex, Java, Ajax, and XAML, when the rich-media application is accessed on any device over the Internet, including desktops, mobile devices, set-top boxes, and video game consoles, says inventor Neil Balthaser, CEO of Balthaser Online, which he owns with his father Ken. “You can consider it a pioneering or umbrella patent. The broader claim is one that basically says that if you got a rich Internet application, it is covered by this patent.”
Either Dave Winer is extraordinarily pig headed, or he just can't read with comprehension. It's definitely one of those two though. The RSS advisory board (public mailing list here) wants to get a handful of currently ambiguous things in the spec nailed down. It's not a long list, and the idea isn't to invent anything new. Here's what's being discussed:
- How many enclosures can an item have? It looks like only one, but it's not entirely clear. What's being asked for: clarification, not new capabilities
- What kind of data can appear in the description tag? HTML? Escaped HTML? XHTML? Plain Text? How are consumers of the data supposed to know? Again, what's needed is clarification, not new capabilities.
- Can HTML appear in any other items? Like the title?
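To make the ambiguity concrete, here's a hypothetical RSS item (the URLs and values are invented for illustration) that a feed consumer has to guess about under the current spec text:

```xml
<item>
  <!-- Is the title escaped HTML meant to render bold, or literal text? -->
  <title>Weekly &lt;b&gt;update&lt;/b&gt;</title>
  <!-- Is the description escaped HTML, plain text, or something else? -->
  <description>&lt;p&gt;New episode is up.&lt;/p&gt;</description>
  <!-- The spec never says whether a second enclosure is legal: -->
  <enclosure url="http://example.com/show.mp3" length="12345678" type="audio/mpeg" />
  <enclosure url="http://example.com/show.ogg" length="23456789" type="audio/ogg" />
</item>
```

Two compliant aggregators can reasonably read that item two different ways, which is exactly the sort of thing the clarification effort wants to pin down.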
This effort is pretty small beans, actually. No one is out to redefine RSS, or make massive changes. What's happening is an effort to nail down areas that could use nailing down. Winer's response to all this? Change the subject to how much money has been invested in RSS (irrelevant) and whine:
And why should we care? Well I care because it would help to explain to my colleagues in the XML world why it isn’t so easy to reinvent RSS. Do the math. Let’s say the actual number is, for the sake of argument, $8.2 billion. What does that look like?
Back when I worked at ObjectShare (before Cincom bought the Smalltalk business), I had a manager who was fond of remarking on situations like this as follows:
"Either I've got s*** in my mouth, or you've got s*** in your ears"
What we seem to have here is an ear problem. Maybe we all need to use smaller words or something.
I decided to take a detailed look at the daily access logs, and see whether any post in particular had been requested a lot. Well - there was one specific page that had been requested many times. So I tried bringing it up, but no dice - it didn't exist. Not only did it not exist, but the requested page ID (which is actually a timestamp) refers to a very strange date:
June 4, 1911 19:05:36.000
Well. I don't know about you, but I wasn't doing much blogging 50-some-odd years before my birth :) I took a look at the referrers - yup, they were all for pr0n sites. The impressive thing is that out of 813 requests for this (non-existent) page, 654 of them used a different IP address as their origin.
I must be popular - I've got my own spammer :)
Scientists have managed to make chickens with mutant genes (their wording, not mine) grow teeth. I want to see chicken lips.
Eight years ago, Web Services wasn't an obviously absurd idea. The Web was for crashing Java applets and badges that claimed to work best in some crappy browser. With the benefit of hindsight, we can see it was a bad idea to try and abstract away application protocols using RPC calls tied to verbose, rigid, statically-typed languages mapped with a Rube Goldberg schema language that has a more flexible type system than said languages.
If you use Apache Axis against a web server that uses Relative References for redirects, you're in trouble. Web browsers happen to deal with it, but Axis throws a MalformedURLException. I was able to diagnose the problem pretty quickly when I encountered it, but I think that counts as pretty intimate interaction with HTTP. Oh, also the API I was talking to used strings to transport SQL-esque statements. Thank goodness for that type-mapping.
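The behavior browsers apply here is the standard relative-reference resolution from RFC 3986: resolve the Location header against the URL you made the request to. As a minimal sketch (the URLs are invented), this is roughly what a client library has to do instead of handing a relative path straight to a strict URL parser:

```python
from urllib.parse import urljoin

# The URL the original request went to.
base = "http://api.example.com/services/quote"

# A server may legally send a relative reference in the Location header.
location = "/services/quote/v2"

# Resolve it against the request URL, the way browsers do; a parser that
# demands an absolute URL throws MalformedURLException-style errors instead.
resolved = urljoin(base, location)
print(resolved)  # http://api.example.com/services/quote/v2
```

One library call, and the redirect works the way it does in every browser.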
Heh. I love the way so many static typing advocates dodge around those rules, all the while touting their value :)
You don't have to visit the extremes of politics to find the tinfoil hat brigade; Engadget has found that the ones who fear cell phones have moved on - they now fear WiFi:
some of them have found time to attack WiFi, and have had their first taste of success at a Canadian university, which has just banned wireless internet access. Officials at the school, Lakehead University, have banned WiFi, saying that they want to avoid "potential chronic exposure for our students." The officials point out that the "jury’s out" on the health risks from EMF generated by WiFi transmissions, and liken the risks of WiFi to those of second-hand tobacco smoke, which were not immediately apparent to researchers.
As Engadget says, someone should ask these clowns about cell phones, microwaves, TV's, and radios. There's no telling what those might be doing to their students either.
It's looking a lot like Sony went too far out on the bleeding edge with the PS3. Here's Mary Jo Foley reporting in Microsoft Watch:
Rumors are flitting about that Sony's PlayStation 3 ship date is slipping, with analysts suggesting that the PS3 may not launch in the U.S. and Europe until late 2006 or early 2007. At fault is overdevelopment; the Blu-ray Disc and Cell processor Sony is eager to succeed catapult the materials to a price a Merrill Lynch report pegs at $900, or at least $400 over what the PS3 was expected to sell for. In comparison, Microsoft's suspected loss of $126 per Xbox 360 console is practically deserving of accolades.
As I reported here, it's actually $800 - the sum on that site was incorrect. That's not that important though - the problem is, Sony is going to take a bath on each sale. Sony probably can't afford that either - their current financials look none too good.
Meanwhile, while MS and Sony are trying the old "make up the losses in volume" trick, Nintendo is actually selling consoles at a profit, and owns most of the revenue stream for their games. I wouldn't be at all surprised to see Sony knocked out of the game space in the next two years, and to see MS as the sole owner of the action game space. Nintendo won't care though; they'll just sit back and count the money. Sure, their revenues are down on game systems now, but that's because the Revolution buzz is quashing demand for the GameCube.
If you have the VW 7.4 based development build of BottomFeeder, then you may have noticed that the Comment Tool is broken. That was a code integration failure on my part; I updated some code that Michael wrote, and managed to drop two classes. I've added them back in, and the update is available for dev users now. Sorry about that!
Time to register for Smalltalk Solutions 2006 - this year it's being held in conjunction with LinuxWorld/NetworkWorld, so that we can spread our Smalltalk across a wider audience. You can see a list of the sessions here. Here's an example of the kind of content you can expect - Using GLORP (Monday April 24, 9-12, Tutorial):
GLORP is an open-source library for object-relational persistence. It includes some very sophisticated mapping and performance features, and current plans are for it to be incorporated as the core mapping layer in a future revision of Cincom's database toolset. This tutorial is designed to give an introduction to the concepts, capabilities, and best practices for using GLORP. Alan Knight is the lead on the GLORP project at Cincom Systems Inc., and has worked in relational persistence for many years. Previously, he was chief architect for the TopLink family of products, and a member of the Sun expert groups on EJB 2.0 and JDO. He is co-author of Mastering ENVY/Developer (Cambridge, 2001) and has written and spoken extensively on a variety of topics. He is program chair of Smalltalk Solutions 2006.
Bear in mind that unlike past years, paying for the full conference covers an unlimited number of tutorials! Also, for my non-Canadian readers: the costs quoted on the registration page are in CDN, not USD.
Digg is adding an RSS module that will add Digg specific information to items in their feeds.
On another note Steve, I’m starting to worry about you, you are starting to sound more like Scoble every day…please don’t join the everything for free and damn the money crowd… some of us are trying to blog for a living.
He should also read Scoble's long post on this - it makes a lot of sense. I haven't unsubscribed from all the partial feeds yet, but I should - I tend to blip right over them. Why? Because I read most of my content in my aggregator, and I don't tend to bother with summaries (which don't tend to be teaser summaries anyway). I provide full content here (then again, I'm not trying to (directly) make money, either). However, it's instructive that I get more than 4x as many HTML readers on a weekly basis as I do RSS readers.
The roadmap actually encourages risk, but some people always seem to want to have their ideas accepted without taking the risk. They think they can make something better than RSS and shouldn't have to go through the same vetting process that RSS itself went through. Now, it may be possible that after three years in the market, that RSS 2.0 could be radically improved, but the roadmap says that no person or group of people has the exclusive right to improve it, and that no one can interfere with the stability of the platform. That's no different if you work for a small company or large, or don't work for a company at all.
He's referring obliquely to the RSS advisory board, (which has a public mailing list here) - which is trying to nail down a few things that are ambiguous in the spec (if you can call it that) for RSS. For instance:
- What should you expect to find in the <description> field?
- Is one enclosure the maximum?
- Is markup allowed, not allowed, or optional in the <title> element?
Those aren't things that have gone through a "vetting" process; they are things that tool developers have suffered with for years, and - if Winer has his way - we'll continue to suffer with. RSS is marginally better defined than OPML and MetaWebLog API (this page 404's at the moment), which are other underspecified formats that Winer has produced. There's a reason Atom exists, and that reason is amply demonstrated every single time Dave speaks on the subject.
Tim Bray (and a bunch of other people) pointed out Technorati's new Favorites feature. I didn't pay much attention yesterday, but it does sound like the Reading List idea with all that nasty OPML. As an added bonus, it already works with the tools you have lying around, which certainly makes my life as a developer easier :)
I'll have to take a look at it and see what I think.
It's always fun to watch a public breakdown - kind of like a train wreck. Here's Dave Winer pushing his erstwhile ally Rogers Cadenhead under a bus.
Boy, if I could turn a phrase the way Lileks can. My wife and I were in stitches over this :)
Best I can figure, someone is testing a bot before they go hog wild with spamming. How else to explain a few spam comments I've seen on some of the CST blogs (and others) that look like this:
fLaa4mm8xjvC7 zDXPtJZOPDRt6B dvp6jiPulPCwEb
All of them have been on older posts - I suppose as some sort of "will the owners notice" kind of test.
Ok, this is amusing: The World's biggest Windows Error Message.
It's clear that the news media doesn't need facts. If they get in the way of a juicy story, what to do? Just get rid of them and run the story, even if it's all made up:
GamerDad, which is SPOnG's new favourite site for all things to do with gaming and parenting, reported that one of their writers, David Long, was interviewed in depth for the piece by Nydia Han of Channel 6 Action News in Philadelphia (an ABC Affiliate) and that Mr Long made it clear to her that Pictochat was neither an Internet-enabled service, nor a threat to children from potential paedophiles anonymously attempting to meet or 'groom' children over the service.
The problem? Pictochat is strictly peer to peer, operating only with other DS units within a few feet. Meaning, if you get unwelcome messages, you can probably see who's sending them - just look for someone within a few feet banging away on a DS. Never mind that though - it's not scary enough. ABC news had to make the story scary:
It seems Ms Han then decided to totally ignore all of the facts as presented to her by GamerDad's Long and run with the erroneous and misleading story about an 11-year old girl being stalked over Pictochat in a WiFi hotspot.
Now, whilst this is merely an ABC News affiliate mis-reporting a story about gaming - which regional press all over the world do with alarming regularity - it's still worth pointing out that the story was picked up by hundreds of gaming news sites and forums (SPOnG included) and even on Slashdot.
It's things like this that make me question nearly everything I see in the media. They don't get it right in areas that I happen to be informed about - which makes me wonder about the stuff I'm not that well informed about. I now cast a skeptical eye over all media reporting, whether it be about technology, science, environmental issues, politics - you name it.
Update: Here's a link to the ABC Story. They eventually (final paragraph) have a spokesman from Nintendo explain that you would have to be within 65 feet to get contacted in Pictochat - the rest of the story really pushes the idea that the wireless net connection is at fault. The scare quotes in the story push that idea hard.
What I think this boils down to is that interoperability testing of Web based services (not Web services), like any Web deployment, benefits from network effects not available with Web services, primarily due to the use of the uniform interface. So if we're testing out Web based services, and I write a test client, then that client can be used - as-is - to test all services. You simply don't get this with Web services, at least past the point where you get the equivalent of the "unknown operation" fault. As a result, there's a whole lot more testing going on, which should intuitively mean better interop.
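As a sketch of that point: because every Web based service speaks the same uniform interface, one tiny client can probe all of them unchanged. The URLs below are placeholders, and this is only a toy smoke test, not a real conformance suite:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def smoke_test(url):
    """GET the resource and report what happened - the identical probe
    works against any HTTP-based service, with no per-service stubs."""
    try:
        with urlopen(url, timeout=10) as response:
            return response.status
    except HTTPError as e:
        return e.code          # got a well-formed HTTP response anyway
    except URLError as e:
        return str(e.reason)   # couldn't reach the service at all

# The same client, as-is, exercises completely unrelated services:
for url in ["http://example.com/", "http://example.org/feed.xml"]:
    print(url, smoke_test(url))
```

With SOAP-style services, the equivalent client is useless past the transport layer until you generate stubs for each service's own interface.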
Except that it doesn't work as well, and, if it can be believed, has even bigger interoperability problems. Patrick Logan has been on a tear about this lately - check out his latest post on it, where he sums up:
Simple dynamic programming languages and simple dynamic coordination languages are winning. Vendors will have to differentiate themselves on something more than wizards that mask complexity.
The development industry loves complexity though. Why use a language with 5 reserved words and 2 operators, when you can use one that has dozens of each?
Smalltalk Solutions 2006 is coming up fast - April 24-26. One change this year, with the show being run at LinuxWorld/NetworkWorld: paying for a full registration covers any and all tutorials. This is a change from previous years, where tutorials were an additional cost. So don't delay - Register now!
Well, it looks like MPAA members don't want to let RIAA members get too far in front of them, stupidity wise. This last weekend, a bunch of them sued Samsung over a DVD player that's been discontinued since 2004:
Over the weekend, Bloomberg news reported Walt Disney, Time Warner and three other major film makers filed the lawsuit against Samsung in U.S. court.
They claimed that Samsung’s DVD players allowed consumers to avoid encryption features that prevent unauthorized duplication and demanded a recall of all the problematic products, Bloomberg said.
The Motion Picture Association of America estimates that the movie industry lost $5.4 billion last year due to piracy.
So their solution is to make the public more aware of a player that they might be able to buy on eBay? I sure hope that they aren't paying their lawyers too much for this one; it has "too clever by half" written all over it.
Sheesh, you would think that an effort to clean up some of the more ambiguous areas of RSS would be getting kudos. Instead, we have Dave Winer and his consistent inability to work with others:
It concerns me to see five companies, Newsgator, SixApart, SocialText, Feedburner and Technorati, give themselves special position among the many companies using RSS, especially since UserLand unilaterally gave up its special position with respect to RSS. It seems to me this is an issue that should be discussed publicly.
That's right Dave - those small companies are going to ruin the universe as we know it. We had some sensible reaction from Sam Ruby, who said (in part):
Being allowed to clarify the specification is one thing. Whether or not others feel like Nick does is yet another. In the long run, the success of the work currently under the working title of RSS 2.0.2 depends little on what Harvard thinks, but instead depends very much on what people like Nick and companies like Microsoft actually do.
The leadership that Rogers is providing has been exemplary. I’ve been quietly aligning the Feed Validator RSS 2.0 test cases to track to the drafts that he has produced. I believe this work is important and should continue.
That resulted in a pathetic cry for attention from Steve Gillmor:
I've developed a new spray that detects b*******. I can't talk too much about the technology until the product launch, but I will demonstrate its usefulness by spraying it on this post by Sam Ruby:
I thought everything was about Dave, but apparently, the stuff that isn't is about Steve.
Now, back in the day, when Atom was first being talked about, I was pretty darn hostile. This was back before I really understood what a complete jerk Dave Winer is, and how utterly impossible he is to work with. The Atom group had a lot of discussions that looked trivial, but they moved the ball forward and worked on some of the problems that just cannot be addressed in RSS - due to the complete lack of understanding shown by Winer. Over time, here's how it's going to fall out. The name RSS will stick - it's become generic, in the same way that the term "Kleenex" has. However, most people doing serious work in the field will use Atom. At least there, they'll find a group of people whose first thought isn't to deny the possibility of problems.
We've finally found a game that we like as much as Puerto Rico - Caylus. We played another round last night, and while I didn't do at all well, it's a game I like quite a bit. The thing is, you need to be paying attention pretty much the entire game. There are things that you need to accomplish in the middle game which, if you neglect, will just take you completely out of the end game. That's what happened to me last night. I didn't get the right sort of buildings up then, and by the end game, I was way behind.
I highly recommend this one - it takes longer to play than PR, but it's well worth it.
This is kind of amazing:
The Imperial Order, a World of Warcraft guild on the Detheroc server, is holding the server hostage. The guild has completed the various quests needed to obtain a scepter used to ring a gong. Ringing the gong will open the gates giving everyone on the server access to new content, but the guild refuses to do it. At least, they refuse to do it until someone pays them 5,000 gold.
Based on what I read about WoW on various blogs I subscribe to, it's like a whole second life for a lot of people.
I've taken a look at the creation code that I wrote (a long while back) for the Silt server, and discovered that it was broken. There have been some changes in the underlying Web Toolkit since I last looked at this, so it's not a huge surprise. I went ahead and patched the code up, so that you can now grab the latest Silt code and get a server set up easily. Here's the best path at present:
- Get an account for the Public Store.
- Then go to the Silt Page.
- Follow the directions for loading the SiltSSPFiles bundle
- Load the Silt Bundle
- From the Launcher, start the blog manager (Tools>>Blog Manager) tool
- Fill in the required fields, and you should get an initial blog set up
If you run into problems, send me an email.
This thread proves something - it proves that politics will enter any field that has more than one person involved in it. You might think that syndication formats in XML should be dull, and of interest only to the technically oriented - but you would be sadly mistaken.
I've updated the Silt server code that's available here, on the Wiki. The latest code is there, along with all the latest SSP templates (including all the css stuff as well). I've changed all the pages to report themselves as utf-8 as well, which is something I should have done a while back - it lines up with the way the content is actually stored.
I haven't updated the prebuilt server quite yet; I intend to get to that shortly.
There's been a fair bit of buzz about the meaning of multiple core machines - especially given the fact that today's 2 and 4 core systems will become tomorrow's 16 and 32 (or more) core systems. However, I don't think that the answer lies in changing languages and compilers to parallelize applications - at least, not a general answer. That seems to be where Larry O'Brien was going in SD Times this week:
No mainstream programming language is automatically parallelizable. This is ironic, since object-oriented programming has its roots in simulation, where concurrency is a basic concern. However, since mainstream OO languages allow state to be shared between threads, they’re fundamentally crippled. When the basic rule for thread safety is “either write objects with no fields or write objects with no virtual method calls,” the paradigms are clashing.
Surprisingly, the mainstream language that seems to have the most far-reaching proposal for manycore programming is C/C++. Herb Sutter, who is an architect at Microsoft and chair of the ISO C++ committee, gave the first public airing of his Concur project at last September’s PDC. Along with emphasizing that Moore’s “free lunch is over,” Sutter proposes that existing approaches to concurrency such as OpenMP do not go far enough and that the abstractions of .NET (and Java, for that matter) are inadequate, focusing as they do on thread management, rather than the more general concept of delayed execution.
Developers have trouble writing multi-threaded applications now, especially when the threads are native. When you try to have multiple threads of execution access shared state, chaos tends to ensue. While the hardware will certainly get better, the "wet-ware" - i.e., our brains - won't.
Ironically, the answer to this problem came up a long while back, in the Unix world. Back in the day, Unix approached problem solving with lots of small applications that you wire together. Those ideas evolved into the modern architecture of things like Apache. Do a ps on a Linux box sometime - you'll see lots of Apache processes. That's because it's far easier to create a single-threaded application and run multiple copies of it than it is to figure out how to get shared state properly shared in a single executable space with multiple threads going at it.
The other nice thing: The multiple process approach works equally well if you scale via multiple systems rather than via multiple cores. Or if you use both approaches. It also works with existing development tools - it doesn't require custom compilers that will almost certainly be architecture specific.
Which is more expensive - the multi-core hardware, or the developer trying to work on it? Based on that answer, which one makes the most sense to optimize?
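The shared-nothing, multiple-process pattern described above is simple enough to sketch in a few lines. This is just an illustration (Python rather than anything Apache actually uses, and the worker/queue names are mine): each worker is an ordinary single-threaded process, and the only communication is message passing over queues - no locks, no shared mutable state.

```python
import multiprocessing as mp

def worker(task_queue, result_queue):
    # A plain single-threaded process: it shares no mutable state with
    # its siblings, only the two message-passing queues.
    for n in iter(task_queue.get, None):   # None is the shutdown sentinel
        result_queue.put(n * n)

def run_pool(tasks, n_workers=4):
    task_q, result_q = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(task_q, result_q))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for t in tasks:
        task_q.put(t)
    for _ in workers:                      # one sentinel per worker
        task_q.put(None)
    results = [result_q.get() for _ in tasks]
    for w in workers:
        w.join()
    return sorted(results)

if __name__ == "__main__":
    print(run_pool([1, 2, 3, 4, 5]))       # → [1, 4, 9, 16, 25]
```

The same code scales across cores on one box or - with the queues swapped for sockets - across machines, which is exactly the point: no custom parallelizing compiler required.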
Charles Miller reports on just how easy it can be to track down someone's location/identity from the slimmest set of clues. The WaPo ran an anonymous interview with a hacker who didn't want to be identified, but allowed a small photo of part of his face to run with the story.
That's what got this kid (mostly) found. As it happens, you can get a lot of information on the kind of camera used to take a picture from the EXIF format. A little hunting with that will get you the location where said camera was used. Slapped together with other information this guy let slip in the interview, he's probably already been identified by people in his community. Have a look here and here to see how those small tidbits were used to find this kid.
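EXIF metadata lives in a JPEG's APP1 segment, and it takes remarkably little machinery to dig it out. As a rough illustration of why "just a photo" leaks so much, here's a toy Python scanner (my own sketch, not any particular tool) that walks the JPEG marker segments and returns the raw Exif payload; real tools like exiftool then decode the TIFF structure inside to recover the camera make, model, and sometimes serial numbers or GPS coordinates.

```python
import struct

def find_exif_segment(jpeg_bytes):
    """Return the raw Exif (APP1) payload from a JPEG, or None.

    A toy parser for illustration: it only locates the segment; decoding
    the TIFF/IFD tags inside (Make, Model, etc.) is left to real tools.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":           # must start with SOI marker
        return None
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:               # corrupt / not a segment
            return None
        marker = jpeg_bytes[i + 1]
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xE1 and payload.startswith(b"Exif\x00\x00"):
            return payload[6:]                  # TIFF header + IFDs follow
        i += 2 + length                         # skip to the next segment
    return None
```

Stripping that segment before publishing a photo is the obvious countermeasure - which is presumably what WaPo should have done.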
If you want to stay anonymous, it looks like you have to be really quiet...
I had uploaded new files, but my build script had a huge "oops" - I was still integrating VW 7.3 VMs. So, I'm uploading the build again, with the proper VMs this time. Due to an addition in the base product, there's now a (development) rev of BottomFeeder for Solaris on x86. I'll update this post when the upload is done.
Update: The files are up now. The build scripts have not been updated yet; I'll get to those tomorrow. Enjoy!
I'm in the process of uploading an initial build now. It's an initial build - there are still things I'd like to change, and features that I would like to add before a general release. However, I've got the first cut uploading now. In a few hours, you'll be able to visit the download page, scroll down to the dev builds, and grab it.
The North American launch of Sony's much-anticipated PlayStation 3 could be delayed until next year, according to a research report issued by Merrill Lynch.
In the report (Click here for PDF), the analyst firm proposed the idea that high costs and Sony's decision to use an "ambitious new processor architecture--the Cell" is making it look like the company might not be able to meet its goal of getting the PS 3 out in the U.S. this year.
It's all speculation at this point, is what it amounts to. It does sound to me like Sony may have gone "a bridge too far" on the technical side - too many new things at once.
It's weekly log time here - looks like there were 264 downloads of BottomFeeder a day last week, which is a pretty decent clip. The details:
Fairly decent distribution spread, I think. On to the HTML blog page accesses:
Tool | Percentage of Accesses
A little higher Mozilla than average, but my traffic jumped a bit last week as well. I should walk through the specific page requests and see what, if anything in particular, was behind that. Finally, the RSS page accesses by tool:
Tool | Percentage of Accesses
Net News Wire | 10%
The tool distribution for RSS access doesn't seem to be consolidating at all.
If Merrill Lynch has these numbers right, then Sony is going to have to sell a lot of games in order to make back the discounts they'll have to offer on the PS3:
If there are some people out there right now who are in the know when it comes to what the hell is going on -- we mean really going on -- with Sony, it's those investment firms. But even barring their research analysts getting all kinds of privy information from direct executive input or connections on the supply side, it's kind of funny when one of these investment firms lets loose some juicy gossip. Like that Sony's
albatross of a PlayStation 3 is going to cost them $800 per unit at launch (they list $900, but apparently Lynch financial analysts can't add their own totals). $800 per unit?
Merrill has them at a unit cost of $320 after 3 years (I suspect it will be lower over that time, but still). Check the site for the itemized list of costs. If Merrill is correct, then Sony is going to have some pain associated with this launch.
We went to see "Firewall" today - it's an action flick with Harrison Ford. However, it doesn't have Ford pretending to be a young man kicking butt - it accounts for his age, and does a pretty good job with it. The setup has some holes in it, and the initial phase of the movie moved a little slowly - but once you get to the "now you've made me mad" part of the movie (you'll recognize it when you get there), it rocks along pretty nicely. From there to the end the pacing is quite good. I rather liked Mary Lynn Rajskub as the admin who helps Ford out, although a thought came to mind: do all the strong men she's helping have to be called "Jack"? Maybe there's a rule I missed :)
Anyway, it's a pretty decent flick. Nothing special, but it was entertaining for what it was.
I'm out of time for it today, based on other things I have to deal with. Like dinner and seeing friends :) Back to the grind on this tomorrow, I think.
I've just about got things done - I've solved the packaging issue I had, but managed to execute an entire build with an incorrect set of parcels. So... I'll have a dev build up later this afternoon.
Now here's an interesting piece of history - back in the days of above ground nuclear tests, some fascinating photos of the initial stages of a nuclear explosion were taken - go check them out. Hat tip Boris.
Here's how the RIAA (i.e., the big labels) will die - not with a bang, but with a whimper:
Tunecore is playing a dangerous game. They are a music publishing service operating at minimal costs, and they have contracts with iTunes and Rhapsody allowing artists to sell their music on two of the most powerful music sellers.
Prior to the creation of Tunecore, this was the domain of the record labels - essentially meaning the Big Four: Universal, Sony BMG, EMI and Warner. The Big Four occupy a uniquely powerful position - known in economic terms as an oligopoly - where the entire global market is made up of just 4 companies. Over 75% of all music sold worldwide comes from these four - and they work together to hold a life-and-death grip over artists and the industry.
Tunecore may not be the one to knock the RIAA down, but something like it will. There are just tons of bands producing good music out there (my cousin plays drums for one of them). Unlike the surgically enhanced sex symbols tossed our way by the labels, these bands can carry a tune without voice enhancement. As the costs of producing music fall, and the process of selling music to the likes of iTunes and Amazon gets disintermediated, the industry will go through a sea change. The whole DRM war we're seeing now is the dying gasp of a set of people who can't - or won't - see the future.
In the internet access market, we can see the very definition of the late movers - the people who do not yet have broadband access. Here's a report that goes through that in some detail, and the reasons people don't have broadband aren't all what you might think:
So why are some dial-up users resisting the tide? According to a new survey from the Yankee Group, the most common reason US consumers don't subscribe to broadband is that it's too expensive. Despite promotional price cuts for DSL (which often cover slower connection speeds and eventually expire, shooting the price up), broadband is more costly than dial-up, especially for truly high speeds. Presumably, dial-up consumers have little need for tasks beyond e-mail, IM and simple Web browsing, which are doable through dial-up, and want to keep their monthly expenses low.
Price isn't the only factor. More than 30% of consumers say that they just don't want broadband, and about 14% say they feel dial-up is adequate for their needs. Less than 10% are not able to get broadband access in their area.
That 30% who don't actually want broadband - at least as it's currently been marketed to them - are the ones to examine, I think. I'd suspect - as this report says - that these are light users of the internet. They send emails, they browse a handful of sites; they just don't see the point in something more expensive. Getting that group to buy in won't be an easy exercise - it took my dad years to convince my uncle to move off Windows 95, and he still hasn't convinced him on broadband.