I tried to make sense of this post from Gillmor, but it was too many non-sentences jumbled together, with the word "Attention" tossed in a few times to prevent me from nodding off. I may be no fan of Office - and I still think that the Ribbon in Office 12 is an utter atrocity - but if this article is what passes for opposing ideas, then the Redmondites have nothing to worry about.
Lispian explains why the one true language theory of software development doesn't work.
Bob reports that Cincom Smalltalk, Winter 2005 Edition, is ready for release. It's in the Cincom release machinery now; I'll report back with expected shipping dates when I have them. As well, once that happens, the NC download will flip to the latest stuff.
The scientific magazine "Nature" has compared 42 articles in both the encyclopedia Wikipedia and the Encyclopaedia Britannica. Experts in their fields were asked to check them for factual errors. To the surprise of Nature, both encyclopedias contained similar numbers of errors.
Will the relentless Wikipedia critics get a clue, or decide that this isn't worth noticing?
The music industry isn't pleased with what Apple is doing with the iTunes store - BusinessWeek lets them have a soapbox:
Not necessarily. As has been true since the start, iPod owners mostly fill up their players from their own CD collections or swipe tunes from file-sharing sites. Now legal downloads may be losing their luster. According to Nielsen SoundScan, average weekly download sales as of Nov. 27 fell 0.44% vs. the third quarter. Says independent media analyst Richard Greenfield: "We're not seeing the kind of dramatic growth we should given the surge in sales of iPods and other MP3 players."
Which brings us to a grand irony: Apple, which launched the digital music revolution, may now be holding it back. Critics say Apple's proprietary technology and its refusal to offer more ways to buy or to stray from its rigid 99 cents a song model is dampening legal sales of digital tunes. "The villain in the story is the iPod," says Chris Gorog, CEO of Napster Inc. (NAPS ), which sells both subscriptions and downloads. "You have this device consumers love, but they're being restricted from buying anything other than downloads from Apple. People are bored with that."
Umm, yeah - we'd much rather buy from bozo outfits that install rootkits on our machines. People are "bored" with the Apple store? Well heck Chris - that sounds like a heck of a business opportunity to me. How about you try *gasp* competing with Apple instead of whining about their business model? Hmm - I decided I'd take a walk over to the Napster store and have a look around - pricing information on their subscription service seems to be pretty well hidden. I wandered by the FAQ, and came across this:
What happens to the music I downloaded to my PC if I cancel my Membership?
If you cancel your Membership, the music you downloaded from Napster will no longer be playable at the end of your current billing period. You can still use Napster Light to play and organize all of the music you own without a membership fee. Access Napster Light with same user name and password. With Napster Light, you can also sample 30-second clips and buy songs for 99¢ and albums from $6.95. If you decide to resume your Napster Membership, your Napster music library will be restored and your downloaded music will be playable again.
And they have the gall (later in the page) to call what Apple does lock in. I can burn CD's off of iTunes to my heart's content. If it's stuff I bought, the burned CD's don't carry across anything but basic track info, but they don't render my collection worthless either. I'm not sure how this restriction plugs into Napster Light, where you can buy songs one at a time for 99c. But the main membership - info on which I did find in the FAQ - costs $14.95 per month. Hey Chris - I'm bored with that. Looking over at iTunes, I notice that Apple just charges me 99c a song, and doesn't turn my music off if I decide I like another service better.
Which one of these do you think was set up with the help of our *cough* friends *cough* at the RIAA, and which one wasn't?
Travis points out that - when you look at the implementation details - Smalltalk I/O can be as fast as C.
There hasn't been a lot of reporting on New Orleans of late, but a post by Dave Winer got me thinking about the city - it's going to come back, but it won't ever be what it was. Have a look at the history of Galveston, TX, before and after the 1900 storm.
Before the storm, Galveston was an up and coming commercial center, with lots of the nascent oil business going there. Afterwards, that all went to Houston. I expect a similar thing to happen to New Orleans, including the port itself - lots of business that went elsewhere (and found that it could get by elsewhere) simply won't come back. Like Galveston in 1900, the risks will look too high, and - if a full rebuild is necessary - business owners will look to mitigate their risks.
Galveston didn't disappear, of course, and neither will New Orleans. However, Galveston is no longer a commercial center - it's a tourist location. New Orleans is likely looking at the same fate.
Bobby Woolf falls into a common trap - he assumes that most people need massive scaling for their projects:
The article talks about tasks that don't require much business logic. Google just displays search results, e-mail, and map images. Yahoo is the poster child for portals, aggregating existing info and integrating it on the glass. They both use read-only data that can be highly replicated; users can configure the display. Those tasks require minimal programming logic, so PHP scripting and a simple SQL database may be all you need (plus a Web server and OS). Even then, huge sites like Google and Yahoo must be doing much more than just using PHP.
But a lot of sites need more than scripting. Do your users need to: Find airline tickets? Trade stocks? Does your implementation need to: Integrate with EISs? Enforce security? Coordinate multiple users updating the same data concurrently? Good luck with PHP scripts. You're gonna need J2EE or .NET for that. I can tell you that this is what WebSphere (WAS) customers are doing.
The dirty secret of the software industry is that most people are, in fact, building fairly simple applications. Most users of things like WebSphere are using it simply as a JSP container - and that's a pretty complex (and expensive) container. Especially when you could build the same thing in Smalltalk in half the time, and for a fraction of the expense. Not to mention that you wouldn't need the army of consultants that WebSphere seems to require.
He goes on to analogize the current debate to the early 90's Smalltalk vs. PowerBuilder debates, and says this:
So LAMP may well work if you want open-source everything and just want to display (and CRUD?) your database. But for full-blown applications hosted on the Web, LAMP won't cut it. AJAX is a cool display technology (see Ajax and Java), but it's only a display; you still need a server behind it running something (LAMP, Java, .NET, etc.). .NET is on the same level as J2EE, and c# is very Java-like, so then that comparison is the old Microsoft-only vs. semi-open-standards and write once, run everywhere argument.
I'd bet good money that the scaling issues faced by Google, eBay, and Amazon are far beyond anything that most web developers will ever need. Funny that they didn't buy into the J2EE/WebSphere camp then; however did they manage it? According to Bobby, it's because they have simple applications. I'd disagree - I'd say it's because they made a rational choice to avoid the absurd levels of complexity in J2EE.
Hat tip to James Governor
I can't follow the link from Digg, but it sounds like artists are starting to get annoyed with Sony - likely because of damage to their sales. There was a piece on this in the NYT a week or so ago (sadly, the Times has now tossed it behind a pay wall) - if the entire DRM idea gets tarred by Sony's missteps, all the better. In the end, the artists are the ones who take the biggest hit when people stop buying CD's from a specific label.
Sela Ward is a fine actress, so it's painful to say this - she doesn't belong on House. Not her specifically, even - her character. I like this show a lot, but I like it because of the interplay of Hugh Laurie and the three younger doctors. Ward's lawyer character throws the balance off, I think. This week's episode doesn't have her - and it's running a lot better.
To start, we launched Blink with a bevy of marketing dollars and a message very much focused on the individual storage benefits. We were very successful at attracting users (at its height Blink had 1.5 million members; del.icio.us currently has 300,000) and getting them to import their bookmarks into our system.
What I find interesting about this pair of posts is the thought that a company that had 5 times the user base of del.icio.us could be considered a failure while del.icio.us is not. This makes me wonder what defines success here... That the VCs made a profit? I assume that must have been the case with the del.icio.us sale, while it clearly was not with the original Blink.com service. Perhaps it's that the founders end up as millionaires? Whatever it is, it definitely doesn't seem to be about users.
I tend to agree with Anil Dash, del.icio.us isn't yet a success except for being successful at making the founders and VCs a good return on their investment. If a service can grow to be 5 times as large and still be considered a failure then I think it is safe to say that calling del.icio.us a success is at best premature.
Wow, I remember making fun of Blink "back in the day", but I had no idea they had grabbed so many more users than del.icio.us. I'd call it a success for a simple reason though - they got bought by Yahoo, and Yahoo can fold the service into things that make money. Sobering reading though - for all the hype, I never would have guessed that the user penetration was lower than Blink's.
Peter Yared, CEO of software maker ActiveGrid, spent a critical chapter of his career steeped in Java, the programming language developed by Sun Microsystems. In the late 1990s, Yared was chief technology officer of NetDynamics, which pioneered an application server designed to boost the performance of Web sites. It was based squarely on the then wildly popular Java. He went on to spend five years as an executive at Sun. So it's especially surprising that Yared holds this view: "Java is a dinosaur."
The article talks up LAMP, but also points out that .NET usage is up. The overall upsurge in interest in dynamic languages is a good thing too.
Wired points out that product placement in TV shows went up by 84% last year - the impact of TiVo and similar devices (I expect most of the upsurge was from cable provider boxes). It seems that the writers and actors are confused about where the money comes from:
TV networks are turning to product placements to fight back against ad-skipping technologies like TiVo, but now some writers are putting up a fight, demanding more pay in exchange for scripting product plugs into their shows.
The issue sparked open protest last month, with both the Writer's Guild of America and the Screen Actors Guild calling for a "code of conduct" to govern the use of stealth advertising.
Code of Conduct? I fail to see how a product placement is more offensive than "message" storylines. Not to mention that the writers are forgetting where the money comes from. Until the iPod model takes over, it's still coming from advertisers. Like newspapers, they aren't reacting that well to disintermediation.
“The villain in the story is the iPod,” says Chris Gorog, CEO of Napster Inc. (NAPS ), which sells both subscriptions and downloads. “You have this device consumers love, but they’re being restricted from buying anything other than downloads from Apple. People are bored with that.”
Translation: "The public likes the competing product better! It's unfair! I'll try to spin my way out of that problem!"
The article comes from Business Week. They must have spent 3 whole seconds on research.
Cees explains why he gave up on Linux on the desktop:
As I wrote, I switched from Linux on the desktop (after almost 10 years!) back to Windows XP. I just was fed up with having to dig around for device drivers and support software for my camera, my scanner, my game pad, my iPod, my phone, my printer, etcetera. That I couldn’t get a 16 bit workflow done under Linux was the breaker. I left Windows XP on my laptop, installed shareware on it, and never looked back.
One of the regulars on the IRC channel made the same point this morning - it's just too much work. If you use Windows or Mac, things "just work" when you plug them in. Sure, you can often get them to work on Linux (eventually) - but in the meantime, how much time have you spent?
That kind of fiddling just isn't interesting for most people - because most people aren't entertained by trawling Google results for device driver information. Sure, Windows has flaws - more than I can count. But it's a lot closer to being a consumer friendly device than Linux is, or ever will be. The dirty secret is that it takes money to write drivers for the huge variety of peripherals on the market - and while Apple and MS have the resources to do that, the open source community just doesn't. Free development just doesn't support that kind of thing, unless it happens to bite a developer with the right knowledge. That's a thin reed to base your hopes on, and it's the one that Linux on the desktop advocates have been counting on.
On the server? Sure, I much prefer Linux. Like Cees, I can see it being useful in a locked down corporate environment as well (although, to be honest, I'd go Mac there first). In the general consumer space? Not happening.
Cedric thinks that refactoring in Smalltalk is simply a toy:
I can't believe that some people actually consider the dynamic refactoring approach used by the Smalltalk IDE as more than just a passing amusement.
I guess the Refactoring Browser - standard equipment in all modern Smalltalks - is a toy then, and all of us Smalltalkers out here are simply engaged in mental masturbation.
Or maybe, we're busy being productive. Cedric's point about testing being required after a dynamic refactor is true, but pointless - if he thinks you don't need to test code in Java (or C#, etc) after refactoring, I feel sorry for the people he delivers to.
Like Cees, I'll admit that static typing allows for more precision in things like auto-completion. I'm also with him on this:
It’s not like we can’t see any advantage in the dead objects world; it’s just that, on the balance, the advantage is to dynamic languages. In my experience, they mesh better with how humans think and work: a bit fuzzy at times, thriving on interaction, thinking fast and switching fast, and needing tools that follow them. I don’t think, probably contrary to a lot of static typing adepts, that software is an engineering discipline (some thoughts I wrote up here and here). Tools need to be more like clay than like the machines in a factory assembly line, and a Smalltalk IDE is the ‘clay-est’ tool I have encountered so far…
I'll take the malleability - and the molding power it brings - instead of the "power" to use Intellisense with the handcuffs that come with it.
Earlier this year I wrote about Dr. Cem Kaner's meticulous chronicle of his experiences trying to get Alienware to fix, and then ultimately to refund his money for, a lemon of a computer he'd purchased from them. Since then, I've received a rather constant stream of complaints about the company, almost rivaling the gripes generated by the commodity PC vendors like Dell and HP.
What's struck me in particular is how similar many of the reader tales about Alienware are to Kaner's in terms of involving both faulty hardware and unhelpful support. "I would tell my story about my Alienware Area 51M laptop in its entirety, but I would merely be reiterating what happened to Cem Kaner," one reader wrote. "Though I did not have to go through the extremes he did to get his machine functional, I did encounter hardware issues right out of the box. I too experienced delayed delivery issues, hardware issues right out of the box, and failing tech support that knew nothing about PCs or basic networking. In total, the machine was sent back for repair three times. The last time it was in repair for a month and when it was returned to me, the modem was broken, the SD card reader they were supposed to fix was still broken, they had chipped the lid in two places, and the DVD+/-RW has some type of thermal paste that drips into the tray and keeps it from opening properly and potentially ruining media. I travel for a living, this machine was purchased to be my office and for its perceived durability and reliability and excellent tech support. Instead, those of us who purchased an Alienware system have all spent thousands of dollars on machines that often equate to little more than a doorstop or glorified paperweight."
The amazing thing is this - how many companies still think that they can provide shoddy or non-existent service and not get called on it. It's been over a decade since the web started to become a player in this area, but blogs - and what they've done to make personal publishing easy - have really accelerated the empowerment of consumers.
Back in 1995, you could put up a website, but getting a hosting solution (and getting the HTML to the site itself) was a chore - far beyond the level of crap that most people want to deal with. Now, it's simple - there are multiple free blog hosting services, and a variety of low cost ones. You can vent your spleen on the cheap, and getting the content posted isn't complicated.
Before that, bad service didn't get beyond word of mouth - and the national (or even local) media only covered truly bad service, the kind that killed/hurt people, or ended up scamming them out of large sums of cash. Now, word of mouth extends around the globe - sometimes, with a kick from digg, or slashdot, or drudge - in minutes. Even without those kicks, search engines make the complaints far more visible than many people seem to think (witness the sort of tale Foster is relating).
Radio Silence as a strategy doesn't work anymore. Too many people are willing to point out the problems, and they get amplified quickly if it's a common problem. If your customer service stinks, people are going to find out. Sooner than you think.
Google is making money mostly from ads - looks like Amazon has a different idea: open up the index, and charge based on consumption. They are opening up Alexa as a platform anyone can build on, and you pay as you go. It will be interesting to see who jumps at that:
Anyone can also use Alexa's servers and processing power to mine its index to discover things - perhaps, to outsource the crawl needed to create a vertical search engine, for example. Or maybe to build new kinds of search engines entirely, or ...well, whatever creative folks can dream up. And then, anyone can run that new service on Alexa's (er...Amazon's) platform, should they wish.
It's all done via web services. It's all integrated with Amazon's fabled web services platform. And there's no licensing fees. Just "consumption fees" which, at my first glance, seem pretty reasonable. ("Consumption" meaning consuming processor cycles, or storage, or bandwidth).
A lot of people have been talking about making search better - here's a platform that might allow some of them to try out a few ideas - without the huge expense of building up the server farm. Something to keep an eye on, that's for sure.
Nowadays, there is a lot of talk about memory management and garbage collectors. I personally think that garbage collectors are a bad thing. If you can't keep track of your data to delete it, then how are you going to track it to do something useful? It amazes me that people would say that programmer time would be better spent on more useful tasks. What can be more useful than proper management of your data? This is laziness, plain and simple. It's like anything else in programming - checking error codes, trapping exceptions, using proper syntax, learning new APIs, etc. No one wants to do it. But just because you don't want to do it doesn't mean it shouldn't be done.
Lazy? Hardly. GC is simply better for the vast majority of applications. Why is that? As you build a larger application, you'll have a tremendous number of objects flying around in and between modules. The management issue will get to be more and more troublesome, as proper encapsulation techniques will get in the way of the global knowledge necessary to properly manage memory.
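To make that ownership problem concrete, here's a hypothetical C sketch (the function and names are mine, not from Vorlath's post): one module hands back heap memory, and nothing in the language says who is responsible for freeing it.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the ownership problem. Does the caller of
   make_label() own the returned string, or does some widget module
   free it later? The C type system is silent - only convention and
   documentation (i.e., global knowledge) answer the question. */
char *make_label(const char *name) {
    char *label = malloc(strlen(name) + 8);
    strcpy(label, "Label: ");   /* 7 chars plus the terminating NUL */
    strcat(label, name);
    return label;               /* the caller must somehow know to free() this */
}
```

Multiply that question by every object crossing every module boundary in a large system, and the appeal of letting a collector answer it becomes obvious.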
So what ends up happening? Developers in large C/C++ projects end up building their own half-baked GC themselves. I say "half-baked" because it almost certainly isn't going to be as efficient as the systems in Smalltalk, Lisp, C#, or Java - those were all built by people with deep knowledge of the field. It's not that application developers are stupid (heck, I'm one!) - it's just that GC is not inside the problem domain they work with. Why Vorlath wants to make it one is a mystery. Here's what he says:
Whenever I hear someone say that garbage collection saves time, I know these are beginners. It's not an insult. You just have to keep practicing. If deleting your data is slowing you down, chances are you need more experience. Now, there's a difference between someone who absolutely needs a garbage collector and someone who organises his data using the stack or automatic pointers. These are two different areas completely. I personally don't even think about deleting code. It's a normal part of programming that doesn't take any more or less time than anything else. And you know why I don't even think about it? Because it's all part of managing your data. If your data is organised, it's not even an issue.
Maybe if you have something close to a photographic memory, you can keep track of whose responsibility it should be (and when) to kill off a given object. Not being able to track such things isn't "laziness" - it's a matter of getting overwhelmed by complexity. I'll make a simple-minded analogy - Chess is a relatively simple game, but - at any given moment - there are tons of possible moves, and the possibilities expand as you also consider the possible moves of your opponent. There are only a handful of masters who can play with (and sometimes beat) software specifically built to track all of those possible moves (and give valuations to them). Are the rest of us lazy because we simply can't track as well as a master, or the custom software?
In Vorlath's world, I guess so. Where does this sort of thing take him? Well, he gets to that:
There's something else I want to discuss and it's having programs continue to run after their state has been corrupted. I, for one, would rather it come crashing down instantly. That way, I can fix it right away. Having software keep executing with a corrupt state where you can't easily trace the problem is not advancing the way we write software. This is a step backwards. If you absolutely need your software to keep running, have redundant systems.
If you make a mistake in freeing a pointer, you can end up with corruption, and have no idea where said corruption is - so crashing is the best possible result. If you use GC, you can have memory leaks (because you have strong holds on objects that you shouldn't have), but the sort of corruption possible with pointers won't come up - unless we are talking about GC added onto a language with pointers. In a fully managed language, you won't get that.
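A tiny, purely illustrative C sketch of that failure mode (the struct and function names are my own):

```c
#include <stdlib.h>
#include <stddef.h>

/* Illustrative only: the aliasing hazard behind "corruption with no
   idea where it came from". */
typedef struct { int balance; } Account;

void close_account(Account **a) {
    free(*a);
    *a = NULL;   /* defends THIS alias - but not any other copy of the pointer */
}
```

Any second pointer to the same Account still holds the freed address; a later write through it scribbles on whatever the allocator has since reused that memory for, usually nowhere near the actual bug. That's exactly the class of error a managed language takes off the table.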
What Vorlath really wants is a small priesthood of experts - but even there, he's unrealistic:
There is a frightening trend going on where people graduate college or university and don't know how a computer works, yet they have their CS or Computer Engineering degree. A clear understanding of how memory works, paging and protection are critical. Also critical is a good understanding of the stack and different calling mechanisms. This will explain why certain languages are the way they are, especially C.
The problem with C (the one he's pointing to, anyway) stems from the hardware knowledge and assumptions of the original designers. It was built to be a high level assembler, with notions learned from the hardware they were familiar with. If I'm writing an RSS aggregator, I don't need to know or care how memory works at the hardware level; I merely need to let people know what kind of minimums the application expects. The stacks and different calling mechanisms? Those differ across hardware and languages - how specific is this priesthood going to be? This guy's vision of the field is pretty darn blinkered.
We know that the Nintendo Revolution won't be pumping HD - Nintendo has admitted as much. However, here's some interesting speculation from Falafelkid as to what they will be doing - better graphical results using a technique called displacement mapping. Rather than try to quote segments here, I recommend that you head over there and read it. The upshot - games like "Call of Duty" might look almost as nice on the Revolution as they do on the XBox 360.
WonderBranding talks about cultural anthropology as a way of getting market information:
Watch how she not only uses your product but also how she doesn’t … is she unaware of an advantage your product offers but you keep forgetting to tell her about? Or is there something you need to change? Procter & Gamble is having great success with an ongoing series of “immersion” studies. They've gotten important feedback from surveys, but it was only when they had a group of mothers wear headgear cameras throughout the day (giving them a bird’s eye view of what the mother sees) that they discovered new ways of packaging diapers and baby wipes that made them easier to use. Information like this is the reason P&G profits are on a steady incline.
I wonder though - are the people who are willing to wear headcams truly representative of your target market? And, does the fact that they know that they are going to be watched change their behavior? I'm not sure there's a better way to gather the sort of information WonderBranding wants, but I think that the data gathered this way should have a few caveats attached.
Their email is: firstname.lastname@example.org
Their address is: WikipedaClassAction.org
PO Box 998
Long Beach, NY
I've been playing "Call of Duty" on my GameCube lately, and really enjoying it - but I was in Target last night, and played the XBox 360 demo of the game. It's pure eye candy. The 360 is still a bit pricey, and the hot power system problem is a little worrisome - but boy, oh boy - does it look nice. I expect I'll get one before next year is out.
I give up on libxml for the time being, and think instead of Chris Petrilli’s comment that ruby (and python) performance is “not quite in the league of Smalltalk (or Lisp, likely), which have extremely mature VMs with on-the-fly compilation and optimization”. Is Smalltalk then much faster than python or ruby, or comparable with C, for the task of parsing moderately large XML files?
No. Time to load and parse my iTunes library file, an 11mb Apple plist, on a 1 GHz G4 Powerbook with VisualWorks Non-Commercial 7.3.1: about three minutes.
That didn't seem right - I use the XML code in VW extensively, so I'm pretty familiar with it. I grabbed my iTunes file (only 2.7 MB) and parsed that - took 5.5 seconds. Well, the two caveats are, that's a smaller file, and my hardware isn't his hardware. With that in mind, I went ahead and created a large XML file. I grabbed the default feed file for BottomFeeder, and saved it as an XML feed list instead of as a binary dump - like this:
file := Tools.XMLConfigFileSupport.XMLConfigFile filename: 'g:\vw74\image\feeds.xml'.
file saveObject: RSSFeedManager default subscribedFeedsFolder.
file saveConfiguration
That just dumps the 80 sample feeds into a (pretty verbose) XML format - I ended up with a 13 MB file. That seemed large enough, so I tried the parse on that:
content := 'feeds.xml' asFilename contentsOfEntireFile.
parser := XMLParser new.
parser validate: false.
Time millisecondsToRun: [parser parse: content readStream]
That last line times the execution - it ran in 17.9 seconds. Not a couple of seconds, but not 3 minutes, either. There was some GC going on during that, so I'm sure that things could be improved by simply configuring VW with a larger bite of old space up front - in dealing with large amounts of data, a fair bit of time is going to be chewed up either in allocating more memory, or GC'ing if we hit the current limits (as per the memory policy in place).
For this kind of parse to take 3 minutes, either the hardware would have to be very slow, or memory limits would have to be set badly for dealing with larger files. I'm not entirely sure what was going on.
Update: I ran the same code on my Mac Mini - it has a 1.3 Ghz G4 processor, and a paltry 256MB of RAM. The 2.7 MB file parsed in 12.8 seconds, the 13 MB file in 44.7 seconds. Not speedy, but not the 3 minutes reported by Alan Little either - and the Mini is no high end Mac.
I had a look at the Great Computer Language Shootout site this morning, since there's some VW Smalltalk code (and results) posted there. The comparison I was drawn to happened to be with Mono based C# (based on a comment here). There are some issues with the comparison, however:
- Have a look at this test - scroll down, and look at the execution. The source code is filed-in, and then the test is executed. That slows things down. Update - apparently, the code is not filed in first on the site test.
- Have a look at the C# version - the code is compiled, and then executed.
So I did the same thing I did with Troy's post over the weekend - I downloaded the code and did some local shaking out. To get it loaded, I had to create a namespace called ComputerLanguageShootout first, and also create a class named Benchmarks. Once that was done, I could file-in the code.
Then, I tried this:
"File in code, then execute"
Time millisecondsToRun: [Smalltalk.ComputerLanguageShootout.Benchmarks nbody: 1000000].
That ran in 11.818 ms. I repeated the process a few times to make sure that those numbers weren't outliers, and they weren't. The original post here had a bunch of comments about filing in first, but I was wrong about that.
So, on to the profiler. Running that, it turns out that the bulk of the time is spent in the Body>>and:velocityAfter: method. Looking at that, we see the following code:
and: aBody velocityAfter: dt
	| dx dy dz distance mag |
	dx := x - aBody x.
	dy := y - aBody y.
	dz := z - aBody z.
	distance := ((dx*dx) + (dy*dy) + (dz*dz)) sqrt.
	mag := dt / (distance * distance * distance).
	self decreaseVelocity: dx y: dy z: dz m: aBody mass * mag.
	aBody increaseVelocity: dx y: dy z: dz m: mass * mag
It's no big surprise that this test (or any other test that is heavy on arithmetic) is going to be faster in C# (or C, or Java) than in Smalltalk. Why? Smalltalk isn't that fast on math, floating point math in particular. So if you intend to code up some low level mathematical model, the calculations should be in a different language (and our customers tend to do that - model in Smalltalk, low level math in C or C++). What this really points out is that micro-benchmarks aren't that useful. Most applications push data around, typically involving a database - and math operations don't dominate those kinds of applications. In which case, something that lets you code faster is going to help. As usual, pick the best tool for the job at hand.
On the Smalltalk IRC channel, it was pointed out that a lot of the other language tests vary just as widely. Ultimately, the shootout site simply isn't doing great cross language tests.
I found a couple of great gag gifts - they would work really well for kids, too. The Potato Gun:
The Original Harmless Squeeze powered Potato Gun. Just press the tip of the gun into a raw potato, break off a small pellet, aim, squeeze the handle and it will shoot the harmless potato pellet far across the room. You can get hundreds of shots from a single potato. Potato Gun size is 6 inches and simple to use. One Potato Gun per Package.
And the Fart Pen:
It's a real pen shaped like a finger and when you pull on it, out will come farting sounds. Great for that boring night of homework or maybe make your friends laugh at this very comical pen.
The release ball is rolling here at Cincom - barring some unforeseen issue, the winter release should go live before Christmas puts everything into slow motion. Stay tuned - I'll make an announcement when it's all gold.
File under "no good deed goes unpunished" - some lawyers who smell money want to shut Wikipedia down.
Update: If you send email using the link on that page (the one that asks for information, or registration of complaints) - and you register a complaint (like, say, this suit is a bad thing) - you'll be told that your feedback wasn't asked for. Welcome to the echo chamber!
I found this to be interesting - take the various biometric security measures that are being installed, and the supposed ways that hi-tech criminals/spies get around them:
Eyeballs, a severed hand, or fingers carried in ziplock bags. Back alley eye replacement surgery. These are scenarios used in recent blockbuster movies like Steven Spielberg's "Minority Report" and "Tomorrow Never Dies" to illustrate how unsavory characters in high-tech worlds beat sophisticated security and identification systems.
However, it may take nothing more than a very low-tech spoofing attack - play-doh, anyone?
Fingerprint scanning devices often use basic technology, such as an optical camera that takes pictures of fingerprints, which are then "read" by a computer. In order to assess how vulnerable the scanners are to spoofing, Schuckers and her research team made casts from live fingers using dental materials and used Play-Doh to create molds. They also assembled a collection of cadaver fingers.
In the laboratory, the researchers then systematically tested more than 60 of the faked samples. The results were a 90 percent false verification rate.
I guess that's why you need the boring human security guard - they can look for those kinds of scams.
ARmadgeddon has some interesting points about the data that analyst firms (Gartner is the main one talked about, but the point is universal) base their decisions on. The bottom line - it's a thin reed. Plenty of good takeaways, but here's a really good thing to keep in mind:
Another issue is that data points are limited to clients of the analyst firms. Gartner’s CEO has stated that Gartner has only 15% of the possible end user market at companies. Does Gartner’s or Forrester’s client base represent a statistically valid sample of the overall IT buyer market?
The client base is also self selecting, which creates problems of its own. So - the question you want to ask yourself is this: Are the answers you get back worth what you pay for them?
Dare Obasanjo is frustrated with the Newsgator API, and I recognize his frustration:
So what does this have to do with the Newsgator API ? Users of recent versions of RSS Bandit can synchronize the state of their RSS feeds with Newsgator Online using the Newsgator API. Where things get tricky is that this means that both RSS Bandit and Newsgator Online either need to use the same techniques for identifying posts OR have a common way to map between their identification mechanisms. When I first used the API, I noticed that Newsgator has its own notion of a "Newsgator ID" which it expects clients to use. In fact, it's worse than that. Newsgator Online assumes that clients that synchronize with it actually just fetch all their data from Newsgator Online including feed content. This is a pretty big assumption to make but I'm sure it made it easier to solve a bunch of tricky development problems for their various products. Instead of worrying about keeping data and algorithms on the clients in sync with the server, they just replace all the data on the client with the server data as part of the 'synchronization' process.
There are plenty of feeds in my list that don't have guids, and I moved away from using the item link as a backup a couple of years ago - too many were nil, and too many items cropped up with the same link. Those problems don't seem as bad now, but I came up with a manufactured ID for items with no GUID. Which there frequently isn't - running this code in my BottomFeeder image:
(RSSFeedManager default getAllItems select: [:each | each guid isString and: [each guid size = 32]]) size
gives me 8,378 items - out of a total of 17,045. Nearly half. So yeah, it's frustrating.
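That `size = 32` check works because a hex digest is 32 characters long. Here's a sketch of the manufactured-ID idea in Python - the choice of fields to hash is my illustration, not BottomFeeder's actual recipe - the point is to hash whatever stable fields the item does have, so the same item gets the same ID on every fetch:

```python
import hashlib

def manufactured_guid(item):
    """Build a stable 32-character ID for a feed item with no guid.
    The fields used here (title/link/pubDate) are illustrative -
    any combination that is stable across fetches will do."""
    key = '|'.join([item.get('title', ''),
                    item.get('link', ''),
                    item.get('pubDate', '')])
    return hashlib.md5(key.encode('utf-8')).hexdigest()  # 32 hex chars
```

As long as both sides manufacture IDs the same way, synchronization can work - which is exactly what breaks down when one side assumes it owns the ID space.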
Steve Wessels has a nice roundup on some very good board games. I've played most of them, and can heartily recommend them as well. Put these on your gift lists!
Troy posted a comparison of some C# logging code and a simple translation to Smalltalk - and then noticed that the Smalltalk code was a lot slower (nearly 5x). Well, this calls for the profiler. I got the same results for all three of his scenarios in Smalltalk, so I'll compare the code for the class test and for the instance test. First, I loaded Troy's code. Second, I went to the Parcel Manager and loaded the Advanced Tools (there's a legacy name for you) - which brings in the profiler. Profiling the code:
TimeProfiler profile: [TimeLoggers timeClassLogger: 200000].
Leads to this result:
You can click through to see the larger view. The upshot is, the original logging code sends #today to class Date multiple times within the logging code (instead of once). Additionally, it turns out that #expandMacrosWith: - while convenient - is fairly expensive. You can see all that in the expanded view. In any case, here's the original code:
writeToLog: aString
	(self fileStream) nextPutAll: ('%<!-- <1p>/<2p>/<3p>'
		expandMacrosWith: Date today dayOfMonth
		with: Date today monthIndex
		with: Date today year)
To change the performance, I did two things. First, I changed the code above to just print the string that was sent in, and then I changed the calling code to send in the string with the Date formatted. I could have encapsulated that in the new code for the above, but in any case, here's the new caller:
1 to: iterations do: [:i |
	si writeToLog: ('Testing write to log via basic logger - line number ', i printString, ' ', todayString)].
A better variation would have the todayString built in the logger - it doesn't force work on the user of the code. In any case, this variation yields logging code that is still slower than the C# code, but not absurdly so. I also posted that code - 1.1 is the faster (but less well factored) version, with 1.2 being the better way. That code has the caller simply send in a string, to which the date gets appended, as follows:
| todayString |
todayString := Date today printFormat: #(2 1 3 $/ 1 1).
(self fileStream)
	nextPutAll: '<-- '; cr;
	nextPutAll: aString, ' ';
	nextPutAll: todayString; cr;
	nextPutAll: '-->'; cr;
	flush
On my system, that runs a bit faster than the better encapsulated version, but not by much.
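The underlying lesson - hoist loop-invariant work out of the loop - applies in any language. A quick Python sketch of the before and after shapes (function names and formats are mine, for illustration):

```python
from datetime import date

def log_slow(lines):
    # original shape: re-fetch and re-format the date on every log line
    return ['<-- %s %s/%s/%s -->' % (line, date.today().month,
                                     date.today().day, date.today().year)
            for line in lines]

def log_fast(lines):
    # hoisted: format the date once, reuse the string inside the loop
    today = '%s/%s/%s' % (date.today().month, date.today().day,
                          date.today().year)
    return ['<-- %s %s -->' % (line, today) for line in lines]
```

Both produce identical output; only the second stops paying the date-formatting cost per line, which is where the profiler said the time was going.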
The upshot of all this? Two things:
- Direct translations of code rarely lead to good results. Each language system has strengths and weaknesses, and direct translations don't hit the strengths.
- Never make guesses about performance - run the profiler.
Again, all the code for this is in the public Store.
I've won my first game of Civ 4, but I've also learned something less fun - and my friend Mike warned me about the same thing. After I play a game and quit, the PC simply isn't in a good state. Everything is slower - there must be things still running (although I can't find them). So the only answer is to reboot after each session. Joy.
Mark Derricutt weighs into the live/dead debate with some comments based on his Smalltalk experience:
Sure, a lot of the time Smalltalk gives you this "everything is live" scenario; the problem I have is when Blaine (and others) say "there's no shutdown, compile, restart, and retry" - which goes out the window when you start asking them about deploying an application.
I've been using Dolphin Smalltalk from Object Arts off and on for the last year for a small Windows project; to get an executable, you strip the running image down to the executable you ship to clients. As part of this "stripping" process, all the classes and methods that are unused in your application are (as the process implies) stripped/removed, leaving you with a nice small redistributable application.
A problem I've often encountered with this is that methods or classes that I'm using, but don't reference directly in code (class names set in config files etc.), also get stripped, causing a problem that only occurs at "run time" - a state of the application where all the wonderful "everything is live" functionality of the image isn't available. There's no means to update classes on the fly because the compiler's been stripped out. I could include it - but then I'd be violating Object Arts' licence agreement. My only hope of debugging any problems is to take a generated .ERRORS file (a stack trace, thankfully one that includes all local variables at each point in the stack), load up the image, load the stack trace into Ghoul, and hope I've not changed the source too much.
Well, let me take these in turn. Producing a deployable application in Smalltalk is often the hardest part - the downside to having a blurred line between development and deployment is that you have to draw that line yourself. Now, I don't use Dolphin, so I can't comment on their process. In VisualWorks, I have a fairly automated process for turning the crank on a BottomFeeder executable. It's not as simple as it could be, but it's not Makefile maintenance either :)
As to the compiler and online updates - there are no license restrictions in Cincom Smalltalk that prevent you from shipping the compiler. Way, way back in the day, when VW 2.5 was the current release (around 1995), ParcPlace-Digitalk (the vendor at that time) had such restrictions. Those went away with the release of VW 3, and have not come back. I post examples of screwing around with the Smalltalk system in BottomFeeder all the time here, for instance - and I ship updates as new versions of parcels, which get re-loaded over the existing code.
The issue Mark's talking about is a legal problem, not a technical one - and it's not a legal problem in any other Smalltalk system that I know of.
One last thing on deployment - it really depends on whether you are talking about a client or a server. For a client (especially a Windows or Mac client), you want to package up an executable. This matters a whole lot less on the server. The server running this blog, for instance, is a development system that was saved with the GUI turned off. It's still a full bore development system. In fact, if I were having a real problem with it, and wanted to know why it was doing something, I could have it save itself in the state it got into - and then download that image (meaning, I could save it separately, not impacting the running server outside the save window itself).
I've done that a few times, in order to see why certain things are happening. Which is the difference between live and dead.
Just when I was getting ready to slingshot around the sun for a trip to 1066, they tell me it's not possible. Now my whole day is ruined :)
Until a few minutes ago, the server was running slowly - I wasn't sure what the original problem was, but on restarting it, it turns out that I had a timing issue with the startup of one of the services. I've gotten that handled, and things are back to normal now.
I thought that patent laws in the US were getting ridiculous. Well, it seems to be a race to the bottom, with the EU poised to take the lead: they want patent violations to carry criminal (i.e., jail time) provisions:
For once, declared adversaries are on the same side of an argument in the technology industry: Both sides are urging European lawmakers to drop legislation that would impose prison time on patent violators and that they say would stifle innovation across Europe.
It's a rare thing to see industry lining up with the EFF, but here it is:
Heavyweights like Nokia and Microsoft on one hand, and the grass-roots Foundation for a Free Information Infrastructure on the other, are making common cause against wide-ranging legislation proposed by the European Commission that would criminalize all intellectual property infringements, including patents. The law would provide blanket protection to all forms of intellectual property through the 25 countries of the union.
I suppose it's a small comfort for software developers that the EU rejected software patents recently (although that fight is hardly over). If the people pushing this kind of thing in patents, along with the forces of copyright madness (the RIAA, MPAA, and their brethren in Europe), have their way, we can look forward to a complete stifling of forward motion.
Time for the weekly look at the logs - looks like 376 BottomFeeder downloads per day this last week. The details:
That looks about the same as last week - on to the html page accesses:
| Tool | Percentage of Accesses |
Total readership was up last week - I guess that "Humane Interface" kerfuffle added a few :) Finally, the RSS results:
| Tool | Percentage of Accesses |
| Net News Wire | 10% |
Still tons of diversity in that tool list.
I posted on a confusing story out of France a while back, and asked for clarification. Now there's a good story on what's going on. As it happens, France (and Spain, for that matter) is in the process of codifying an EU directive on copyright law from 2001. The issue comes up based on a few potentially troublesome aspects of that law - troublesome for OSS advocates and for commercial vendors alike. Here's what the OSS side doesn't like:
One proposal would require that all software enforce digital rights management, or DRM, a sort of digital lock intended to prevent illegal copying.
"This would basically undermine the fundamental principles of open-source software," said Loïc Dachary, vice president of the Free Software Foundation France, who is a software developer for Mekensleep, a Paris video game maker.
Based on how vendors currently want to "help" us with DRM (Sony and Microsoft come to mind), I think this is a terrible idea. It will bring all the worst aspects of the DMCA to Europe, in my opinion. The vendors have their own issues with the proposed law:
One would require software makers to show competitors the underlying source code of any DRM components in their products. The other would make software makers liable for damages from entertainment companies or artists even if their software were altered by criminals to perform illegal copying.
"We are very concerned about these two proposed amendments," said Francisco Mingorance, director of public policy in Brussels for Business Software Alliance Europe.
A sponsor of the French legislation, however, played down the concerns expressed by both factions as exaggerated and said lawmakers would ultimately strike the right balance between business and open-source interests. "What you are hearing now is a lot of nervousness as the legislation moves closer to a vote," said Senator Michel Thiollière, the main sponsor of the copyright legislation in the French Senate, which will vote on the issue in January. "I believe lawmakers will ultimately find the balance to protect the rights of artists and to preserve the best possible access for people who want to legally enjoy the Internet."
The danger is that since both OSS advocates and vendors have problems with the proposed law, it could easily be seen as the sort of "valid compromise" that makes everyone a little unhappy (excepting the RIAA and their European brethren, of course). This sounds like a train wreck of a law - let's hope it gets derailed.
We’re proud to announce that del.icio.us has joined the Yahoo! family. Together we’ll continue to improve how people discover, remember and share on the Internet, with a big emphasis on the power of community. We’re excited to be working with the Yahoo! Search team - they definitely get social systems and their potential to change the web. (We’re also excited to be joining our fraternal twin Flickr!)
That will definitely help them with ongoing scaling. Looks like it's a Google/Yahoo/MS fight for web eyeballs.
In news that should be from the past, Microsoft announces that they are bringing Windows up to the state of the art... circa the mid-80's:
Microsoft Corp. is working on a significant new feature for Windows Vista, known as Restart Manager, which is designed to update parts of the operating system or applications without having to reboot the entire machine.
Microsoft officials have not talked much publicly about this new feature, but Jim Allchin, the co-president of Microsoft's platform products and services division, recently told eWEEK that this is an example of just how important the reboot issue was to the Redmond-based software giant.
I boggled when I started reading Tom Yager's latest column - "Reviving Native Traditions". He's out there, but at least I know why he still thinks that the Itanium has a future - he's pushing for a resurgence of C++ (et al.):
Even though it tags me as a graybeard, I have a firm belief that the welcome trend toward slower, cooler, more power efficient systems, as well as powerful converged mobile devices, requires nosing away from VB, Java, and .Net and back toward software that compiles from an editor down to byte patterns that match a target CPU’s unique, or native, machine language. One finds the antithesis of Java and .Net in C, C++, and Objective-C, the most popular programming languages that compile down to native code.
Yes, please pass me more buffer overflows now. Yeah, I know there are techniques to avoid them, and I know that "everyone" should know them. I think the history of the last decade shows that far too many people don't care - even at major vendors. System level sandboxing (something he mentions later in the column) is a great idea - and far better than the Java theory on that - but having an application take down an OS sandbox, while less bad than taking down the whole system, is still bad.
Heck, his own magazine contradicts him in the same issue.
Well, the forecasts were all calling for 3-8 inches of snow overnight - it looks more like 2 on my driveway right now. The schools closed, of course - this is Maryland :) Heck, they only just managed to get the roads cleared off. It might be enough for sledding, but I'm trapped on conference calls at the moment. We are making our go/no go decision on the release candidate for Cincom Smalltalk this afternoon - fingers crossed, it all looks good.
Blaine explains the difference between a Smalltalk environment and something like Eclipse:
I generally look at Smalltalk's IDE as playing with a live patient. You are in the middle of a living and breathing system that reacts immediately to you. There's no shutdown, compile, restart, and retry. It's all happening right here and now. It's an incredibly cool experience especially when you realize that you can execute any code and inspect the state of the system. Eclipse on the other hand is like looking at your objects through a window. You can do some manipulation, but you can't mess around with the guts of the patient like you can in Smalltalk. The thing that truly shocks me is why Ruby and Python developers still try to mimic Eclipse (i.e., run the code in a different process) in the IDEs available. There's no IDE for them that allows you to be in the middle of a live system. Why wouldn't you want that? I can't wait till Ruby has a great IDE like Smalltalk's. It turns the amp to 12 instead of a mere 11.
And that really is the difference. Take the server running this blog - it's a headless (i.e., no UI) development image. When I update it, I can load code that modifies existing objects in the system. Need to add an attribute to all the registered users? No problem - I load the code, and every single extant instance gets the new attribute.
The same power exists at development time and at runtime. In the Java sphere, you don't get that power during development, much less at runtime. Which is why I call Java tools - even well regarded ones, like Eclipse - pale shadows.
This is kind of an inside baseball thing for the podcasting world, but if you follow that stuff at all, it's funny :)
I got this note from Andrew McNeil, a Cincomer in Sydney, Australia:
The Sydney Smalltalk Users Group is meeting on Monday 12th December 6PM at the James Squire at Kings Street Wharf down at Darling Harbour.
Special guest will be David Long.
David is the inventor of Atlantis Business Technology and produced the company's first patent application. He is the Chief Technical Officer and co-founder of the Atlantis software startup.
David will be talking about Development of Real-Time Applications with Atlantis and about the Atlantis technology in general.
Atlantis application is uniquely able to capture expert knowledge from human input and automatically generate a running program in a matter of minutes. The program duplicates programming functions in exact replication using a new modelling language known as ESGATEA.
This new and powerful programming language is designed to fast-track the product development process, in turn enabling customers to focus on their core business of bringing their products to market.
Bill Machrone at PC Magazine lays out the problems that have come out of the DMCA (here in the US - there are similar laws either on the books or being pushed elsewhere):
But if I did, I'd probably be in violation of the Digital Millennium Copyright Act, and it just might become the high-visibility test case that has me, PC Magazine, and Ziff Davis Media staring down the barrels of a lawsuit. Everything you need to know is on the Web, of course, and pointing you to the links would be the courteous thing to do. I'd planned to write a Solutions story on how to remove the driver that prevents your PC from ripping protected CDs, but I chickened out, because companies such as Sony and EMI have announced that they are upping their commitment to SunnComm and Macrovision copy protection on their releases. This means that they're now after the little guys, in addition to the counterfeiters, bootleggers, and big file-sharing networks.
This is the place the bozos at the RIAA and MPAA want to take us - and companies like Microsoft are only too happy to help them. Bill has an idea about what should be done:
Fortunately, the DMCA was enacted with a built-in review period, and it's time for the federal Copyright Office to review the anticircumvention provisions. If you would like to comment online, you can do so at www.copyright.gov/1201 /comment_forms/index.html.
I'm with Bill on this one.
Cedric Beust asserts that:
there is a bigger problem with dynamic languages in general, and this problem has been completely underestimated so far, which probably explains why dynamic languages are not making fast progress in developer mindshare.
This problem is the lack of IDE support.
I hardly know where to start with such nonsense. I suppose I could mention that Smalltalk has had auto-completion available for years, and that it was first implemented (and integrated into the tools) by customers. There may be a lot of Eclipse plugins, but how many of them are created as a "I wonder if I could..." thought over a couple of hours? Not many, I'd warrant. But just ask these guys - there's no IDE support in dynamic languages, so it didn't actually happen. Cedric manages to keep digging:
Auto-completion in IDE's has become extremely smart these past years. Refactoring is also a practice that modern developers have become completely hooked on, and for good reasons.
IDE's for dynamic languages come nowhere close to this level of functionality, and whenever I program in Ruby or Groovy, having to leave my IDE of choice is not only hard, it sometimes makes me decide against using the dynamic language, even if it's a clear winner on paper.
Ahh yes - for his gold standard he picks an Open Source project that is funded by a large corporation (I'm speaking of Eclipse), and for his counter-examples a language that was dropped into the JVM and then forgotten. No mention of Smalltalk, or Lisp. Would it be too hard to have a look at the ancestors of all dynamic languages?
This kind of weather is not what we normally see in December in Maryland - at this time of year, it's usually the full excitement of 40 degree rain :)
Time to man the sleds!
Elliotte adds another post to the "Humane Interface" debate - the quote below follows on from an analogy about which remote is easier to use (an example Steve Jobs gave when introducing Front Row) - follow the link for the picture and analogy. Here's the meat, IMHO:
More buttons/methods does not make an object more powerful or more humane, quite the opposite in fact. Simplicity is a virtue. Smaller is better. Do you want to go all the way down to the absolute theoretical minimum? For a list class that's probably about four public methods (new, insert, delete, get). Probably not. That's too few, but 78 is way too many. 10-12 is ideal purely from human-interface concerns. (I'd say 7-8 except I don't think most classes can really get that small. Maybe it is 7-8 if we count all overloaded variants as a single method though.) 25-30 is sort of the outside maximum: roughly the most signatures you can fit on a single piece of paper so that the programmer can see the whole class at once without scrolling their eyes. If that still doesn't convince you, tomorrow I'll take another look at the Array class that started this whole brouhaha and show exactly how pointless most of those 78 methods really are.
Well, I'm going to have to refer back to this post for part of the argument. Instead of dwelling on Ruby though (with which I'm less familiar), I'll look at Smalltalk, class OrderedCollection. In a base image, there are 55 instance methods in that class. There might be more with more packages loaded; have a look at this post on extensions to see how that's managed.
Now, I'm pretty sure that Elliotte would dislike 55 methods almost as much as he dislikes 78 :) Some of this is an apples-to-oranges comparison, though. To wit: there are methods in the Smalltalk class implemented for polymorphic reasons that would not be done the same way in Java. For instance, #inspectorClass lets the collection tell the system how the development tools should look at the object. I'm fairly certain that Java IDEs approach that problem differently.
Additionally, exception raising has been factored out to individual methods. For instance:
notEnoughElementsError
	self error: (#errRemovedTooManyCollection << #dialogs >> 'attempt to remove more elements than are in the collection')
That's the exception that gets raised if we try to remove more elements than the collection holds (from either end). An example:
removeFirst: numElements
	"Remove the first numElements elements of the receiver, and answer an Array of them. If the receiver has fewer than numElements elements, create an error notification."
	numElements > limit ifTrue: [^self notEnoughElementsError].
	^self privateRemoveIndex: 1 to: numElements returnElements: true
In Smalltalk, there's a strong bias to make the object responsible for its own actions. So when you try to index past the end, the object itself raises the exception - it's not handled by deep machinery in the VM.
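The same factoring translates directly to other object-oriented languages. A minimal Python sketch (my own illustration, not VisualWorks code) of a collection that owns its error reporting through a named method:

```python
class OrderedCollection:
    """Minimal sketch: the collection raises its own exception,
    factored into a named method, as in the Smalltalk code above."""

    def __init__(self, elements=None):
        self._elements = list(elements or [])

    def _not_enough_elements_error(self):
        # the factored-out error, analogous to #notEnoughElementsError
        raise IndexError('attempt to remove more elements '
                         'than are in the collection')

    def remove_first(self, n):
        """Remove and answer the first n elements."""
        if n > len(self._elements):
            self._not_enough_elements_error()
        removed, self._elements = self._elements[:n], self._elements[n:]
        return removed
```

Factoring the raise into one method means every removal path reports the same, well-named error - and it shows up as one extra method in the class's count.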
Elliotte says that 10-12 methods are "enough" and that past that, we get into bloat. Well, looking at class OrderedCollection, I'm not sure what he'd want me to pull out. I suppose he might consider #add:before: fluff, but it's a useful method - it allows me to add an object before some other object in the collection. There are a ton of convenience methods of that sort in the class. Would it be better to simply have #add:, and then force developers to write their own versions of all the convenience protocol in their own classes? Recall that in Java, I can't add methods to an existing class, so these new methods end up in some utility class. Clean OO, that's not.
Smalltalkers - and I think Rubyists - look at this problem very differently from Java folks. The Java people seem to like sparse classes, but they don't seem to realize that sparse classes combined with an inability to extend leads to everyone creating their own set of utility classes. In Smalltalk, the useful protocol that people add tends to migrate up to the vendor over time. For instance, in VisualWorks 3.0, OrderedCollection had 53 methods. More interestingly, the abstract superclass Collection had 40 methods in VisualWorks 3.0 - and has 51 now. What we seem to have is two very different approaches to the class design problem. I think the Smalltalk/Ruby one is better, because it favors putting code where it belongs. The Java approach implicitly favors lots of utility classes.
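Here's the contrast in miniature, sketched in Python (names are mine): without the ability to extend a class, the convenience method gets stranded in a utility function; with it, the same behavior lives on the collection where callers expect to find it:

```python
# the "utility class" shape: behavior stranded outside the object,
# rewritten by every team that needs it
def add_before(collection, new_object, target):
    """Insert new_object immediately before target in collection."""
    collection.insert(collection.index(target), new_object)

# the "extend the class" shape: the same behavior, on the object itself,
# analogous to Smalltalk's #add:before:
class ExtendedList(list):
    def add_before(self, new_object, target):
        self.insert(self.index(target), new_object)
```

In Smalltalk the second shape is available on the vendor's class directly; in Java you're stuck with the first, which is how the utility classes pile up.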
From Blaine Buxton:
It's time for the Smalltalk User's Group. This week we will be discussing Dabble and showing the video demo. It should be a lot of fun! As always bring your cool pieces of code!
Here's all of the details:
When: December 13, 2005, 7pm - 9pm
Where: Offices of Northern Natural Gas
1111 S 103rd Street
Omaha Nebraska 68154
Office is at 103rd & Pacific. Guests can park in the Northern visitors parking area back of building, or across the street at the mall. Enter in front door, we'll greet you at the door at 7:00pm. If you arrive a bit later, just tell the guard at the reception desk you're here for the Smalltalk user meeting in the 1st floor training room.
Blaine Buxton, Mad Scientist In Training
"Tipping cows in fields Elysian"-Clutch
However, Elliot is completely right that having 78 methods in any class is an atrocity. Something that has that much surface area is way too complicated for humans to keep manageable. In addition, it also sets a bad example for coders learning the recommended ways of doing things -- i.e., "just throw anything you feel like in there."
I like the way he baldly asserts that 78 methods is "an atrocity". So what's the magic number? Is 22 methods ok, but 23 - heck no, that gets into atrocity range? There's absolutely no way to look at the raw number of methods and make that statement. He's assuming that there must be fluff in there - but that's an assumption, not evidence. The rest of his post is actually quite reasonable - it's just that one thing I object to.
Aaron Swartz has a long post up on reddit.com's move from Lisp to Python. It's an interesting post - especially so if you are looking for information on Python web frameworks - but the anecdotes about the reaction of the Lisp community sound very familiar to me as a Smalltalker. Sadly, lots of people in niche development communities have "conspiracy theory" reactions to this sort of thing.
The funny thing is, this gives me an excuse to make a point about reddit versus things like digg and blogniscient. The latter two services provide headlines and snippets in their feeds; reddit only provides a headline (title) and a URL to follow - even on their web page. As someone who reads primarily in his aggregator, that makes reddit virtually useless to me. I do a lot of scanning, and - while I favor full content over partial content - partial content is better than just headlines.
If you are a startup, don't waste your time and money worrying about what happens when you have millions of users. Premature optimization is the root of all evil and in certain cases will lead you to being more conservative than you should be when designing features. Remember, even the big guys deal with scalability issues.
On a much smaller scale, I can speak from some experience here. When I first coded up the blog server running this site (back in 2002), I had no idea what kinds of scaling issues I'd hit - heck, at the time, I knew next to nothing about HTTP or XML. Had I spent time trying to solve whatever problems I could have imagined then, it would have been a complete waste. I had to learn from experience. Which is not to say that research and knowledgeable staff won't help; just don't assume that you know the answers up front. As Dare says, this is a problem that even the giants (who have a lot more cash on hand than the rest of us do) run into.
Sources say AOL's role as a critical player in search traffic makes it attractive to prospective partners.
Now, my referrers are not representative of the web, but - I rarely see search requests from AOL. I see tons from Google and Yahoo, a trickle from MSN, and virtually none from anywhere else. AOL is bleeding subscribers and revenue - what value do they have to any buyer?
One of the things that came up in this thread was how method categories and partial class definitions make browsing code simpler in Smalltalk. Here's an example of why. In BottomFeeder, I have a package called RSSViewer (it holds most of the UI code for the application). Here's a screen shot of the upper four panes of the browser (the lower code pane is omitted), with one of the extension methods that have been added to the class selected:
What we are looking at is the top set of panes - starting from the left, we have:
- Package Pane - the package that contains the code in question; packages are a version control concept in VisualWorks
- Class View - a listing of the classes that are in the selected package
- Method Categories - often called protocols, these are names we use to group methods that are related in some way
- Method Pane - this is where we select a single method, so we can see its code
Notice how most of the protocols are italicized? If I were to select one, I wouldn't see any methods. What that's showing me is that there's more code for this class, but not in the selected package. I can narrow my view down to just the methods that are part of this package. However, I don't need to do that - I can see everything, if I want to. Here's another shot, of the same browser, but in hierarchy view:
Notice what's changed - the leftmost pane now shows the hierarchy for the selected class, and the second pane now shows all the packages that define code for that class. By selecting all those packages, I can see all the code for the class - or I can select only a subset of the packages, and see only that.
This is why commenters in the aforementioned thread say that Smalltalk's browsing tools and class extensions make larger numbers of methods palatable - we can add code directly to a class when we think it belongs there (instead of having to define a utility class), and then version off our changes independently of the rest of the system. We can then view our changes without having to see the whole class (or not - we can limit our view by package/class/protocol).
The Smalltalk tools give us a lot of ways to slice our view into the system.
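To make the class-extension idea concrete, here's a sketch in Smalltalk. The class and package names are invented for illustration (BottomFeeder's actual code differs), and I'm using the common `ClassName >> selector` file-out notation to show where a method is defined:

```smalltalk
"A class defined in one package..."
Object subclass: #FeedItem
	instanceVariableNames: 'title link'
	classVariableNames: ''
	package: 'RSSViewer-Model'.

"...can be extended from a different package. Loading a hypothetical
RSSViewer-UI package would add this method to FeedItem, filed under a
'displaying' protocol. Browse RSSViewer-Model alone, and that protocol
shows up italicized - the code exists, but it lives in another package."

FeedItem >> displayLabel
	"Answer the text shown for this item in the UI's list pane."
	^title isNil
		ifTrue: ['(untitled)']
		ifFalse: [title]
```

The UI package owns `displayLabel` and versions it independently, so the model class never accumulates UI concerns it has to carry everywhere - and the browser lets you see the method with the class, or filter it out, as you choose.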
Lee Gomes of the WSJ thinks that a subset of the technical blogosphere is shaking up tech reporting:
The reality is that while there are now as many tech blogs as stars in the sky, only a tiny fraction of them matter. And those that do aren't part of some proletarian information revolution, but instead have become the tech world's new elite. Reporters for the big mainstream newspapers and magazines, long accustomed to fawning treatment at corporate events, now show up and find that the best seats often go to the A-list bloggers. And living at the front of the velvet rope line means the big bloggers are frequently pitched and wooed. In fact, with the influence peddling universe in this state of flux, it's not uncommon for mainstream reporters, including the occasional technology columnist, to lobby bloggers to include links to their print articles.
There's a reason for this - when you look across the group of people blogging on technology, most of them are "hands on" people - i.e., they are not just talking about tech, they are producing it. Most (but not all) of the technical journalists suffer from the same problem as a lot of technical management: they hung up their tech spurs years ago, and now rely on other people (or their gut) in order to make decisions.
In business, smaller, nimbler companies "disintermediate" the plodding giants that have forgotten how to be nimble (think MS, circa 1985, or Google 4 years ago). In technology reporting, bloggers are getting the jump on many reporters, because they are still working in the field. Not all technical people can write, but there are plenty who can - and they can report on something far more quickly than the journals can.
One thing I've noticed on this - the technical journals - InfoWorld and ComputerWorld come to mind - have embraced blogging in a way that the MSM really hasn't. This has allowed the smarter publications to stay in the game. I no longer have to wait a week to see what Jon Udell thinks, for instance - he shows up in my aggregator instead. Over on the non-tech side of things, look at the New York Times as the epitome of not getting this change in the landscape: they've put all the opinion columnists behind a pay wall. It's been a pretty good way of removing those folks (and the Times in general) from the conversation.
Hat tip to Dave Winer.
Looks like the musicians have started to cry "foul" over DRM - this is exactly the kind of thing that the labels fear most. In a NYT Op/Ed piece, Damian Kulash Jr. (of OK Go) has spelled out the problems succinctly:
The Sony BMG debacle revealed the privacy issues and security risks tied to the spyware that many copy-protection programs install on users' computers. But even if these problems are solved, copy protection is guaranteed to fail because it's a house of cards. No matter how sophisticated the software, it takes only one person to break it, once, and the music is free to roam and multiply on the peer-to-peer file-trading networks.
That's the technical problem right there - none of these schemes are break-proof, and the people willing to break them will do so quickly. Meanwhile, the bigger problem is with the people who legally buy the music, and then can't do what they want with it:
Meanwhile, music lovers get pushed away. Tech-savvy fans won't go to the trouble of buying a strings-attached record when they can get a better version free. Less Net-knowledgeable fans (those who don't know the simple tricks to get around the copy-protection software or don't use peer-to-peer networks) are punished by discs that often won't load onto their MP3 players (the copy-protection programs are incompatible with Apple's iPods, for example) and sometimes won't even play in their computers.
Conscientious fans, who buy music legally because it's the right thing to do, just get insulted. They've made the choice not to steal their music, and the labels thank them by giving them an inferior product hampered by software that's at best a nuisance, and at worst a security threat.
Damian then goes on to point out that having the music spread - even if some of it spreads illegally - is a net positive for the artists. More people hear it, and more people end up buying it. The attempt to lock it up merely gets in the way of the good people, and does nothing to stop the bad people.
Which is why all of these schemes - DRM for music, the absurd PVP-OPM thing that Microsoft wants to inflict on us - are stupid ideas that get in the way of law-abiding customers trying to legally use a product (CD, DVD, etc).