BGE decided to grace us with another micro-outage last night - just long enough to knock the Linux box off the air, apparently. It looks like my battery backup is shot, because the outage was well less than 20 minutes. Sigh. Now I need to buy a new backup and get that installed before going to CA tomorrow......
I lean towards the former, but this is an interesting post:
When I read this: "Smalltalk is such a productive language that consulting companies and development managers are avoiding it. Consulting companies are avoiding it because the billable time per project is several times shorter. Managers are avoiding it because with Smalltalk they would not have that many developers to manage, and thus it would decrease the need for managers." --Vlastimil Adamovsky
I start to wonder why people keep making the same mistakes. Newspapers talk about Java and C# because they are cooler!
So if you have any good reason why so many people are killing Smalltalk, let me know.
It's amazing how quickly something can become the de facto standard in our industry. Take Java, for instance. It's not even a decade old and already companies, universities, and independent programmers have eschewed all other choices in favor of the dark brewed wonder. But, from a technical standpoint, does Java really deserve this rapid exaltation?
If you're comparing Java to C++, it does have some immediately nice aspects. However, one doesn't always have to compare everything to C++. Further, I still believe there are instances where C++ is the correct choice and not the hindrance it is often portrayed to be. However, it's obvious to me that many companies are not building their internal enterprise infrastructure with C++ and instead are choosing Java because it seems close enough to C++.
What if, instead of comparing Java with C++, we compare it with Smalltalk? After all, Java was inspired more by Smalltalk than C++. Smalltalk is positioned to service the same set of problems as Java, namely enterprise business software. In addition, there are several Smalltalks on the market that easily put Java to shame for the development of shrink-wrap software.
The sad part is, many people haven't considered it because they simply don't know what they don't know...
I just changed the update manager to use a progress dialog. Dohh - there's something I shouldn't have had to be asked for! Anyway, the latest updates are in the process of being uploaded to the 2.9 dev build area. In the meantime, the updates are available via BottomFeeder's upgrade manager.
If the line
doesn't have a trailing /, add one. You can omit the dev if you don't want to point at the dev build updates. Dumb bug, being addressed in the new stuff.
Someone's not with the program - it's snowing here, again! Probably not a serious snow, but it's cold enough to snow, which is bad enough....
I posted on the conflicts of interest of consulting companies yesterday - now comes this report from the Register on ROI and large ERP installations:
Most customers surveyed had used SAP software for close to three years and 57 per cent of them said they've paid more for the code than it's worth.
The average cost for a three-year SAP deployment is $10m, with consulting accounting for $3.6m, personnel soaking up $2.5m, software licenses another $2m, and related hardware and training costs picking up the rest of the tab.
Companies surveyed saw some benefits from workers being able to manipulate data more quickly with SAP products and better company-wide access to information. "However, a positive return on the SAP investment was achieved only when there was both a sufficient number of users and sufficient frequency of use (breadth and repeatability) to reap significant productivity based gains from the solution," Nucleus writes.
Now stop and consider who is helped by this sort of thing - typically, a large consulting firm sends in tons of bodies to implement this sort of thing. Think of the billable hours behind a full installation of something like this - is it even possible for a consultancy to be objective? At least with the vendor, you know where they are coming from - they want to sell you the tools. But the consulting firms put on an air of neutrality, and then recommend paths that - surprise - would result in lots of billable hours.
So when you see a recommendation from (insert consulting firm here) to migrate your Smalltalk app - you know, the one that actually works - to Java, or .NET - ask yourself who benefits. At the end of the exercise, if it succeeds, you are right where you started - you have a working app. The consulting firm, having taken you zero feet forward, is now much wealthier - at your expense.
More evidence that blogs are hitting the mainstream, via InstaPundit:
CULTURE WATCH: Just seen on the bottom of the CNN Headline News screen:
Gary Hart Cyber Campaign Starts blog on possible 2004 presidential bid
Delightful, isn't it, that they felt no need to explain what a blog is?
As I mentioned yesterday, it's not the issues being raised on these blogs that I'm interested in here - merely the fact that blogs are being used more widely. I knew something was up when I got interviewed by the Lehrer Report, and now CNN is mentioning blogs in passing.
Where's my ls -ltr? The date field in Explorer is populated by evil aliens! Nothing lines up!
heh. Another satisfied Windows user....
I'm heading to our engineering planning meetings in Santa Clara tomorrow - my flight leaves at 5 AM. I'm not even sure I believe that time of day exists....
I should resolve never to fly on the old TWA equipment again - all the way to California, and no power at the seats!!!
Sigh. I did get most of a BottomFeeder "export feeds as OPML" feature done. So anyway, here I am at the engineering meetings, imparting my views of where we need to go, and getting the reactions (and vision) of the engineering group. Like any other engineering group, it's akin to getting a bunch of cats moving in synch (i.e., nearly impossible). However, this is a great group of people - one I'm very proud to be working with. I'll likely have more to say on the meetings later, but now I have to listen to Eliot talk about his technical vision for VisualWorks.
Remember that "MS Buys Squeak" April Fools trick from a few years back? Well, someone put together a MS buys Linux trick this year:
At a small press conference in Nepal, attended by two Sherpas and a Yak, as well as a THG stringer, Microsoft spokeswoman Avril Wonful announced that the Redmond company had acquired Linux. Although this surreptitious attempt by Microsoft to kill the Open Source community was supposed to go unnoticed, THG sources had long known that Microsoft executives were holed up in a Buddhist monastery in the area, meditating and trying to achieve greater self-awareness.
Go read the whole thing....
And another April Fools joke, via Lambda the Ultimate:
In a move which surprised industry analysts, Yahoo, Inc. has confirmed that the software which runs Yahoo Store, which was in the process of being converted from its original implementation in the computer language lisp to the C++ language, will be converted back into lisp as soon as possible. A Yahoo spokesman, who requested anonymity, had this to say: "Boy, that was really embarrassing. See, the reason we wanted to get rid of lisp is that none of us could read any of the code because of all those silly parentheses. But just last week, we found a text editor (called "emacs", I think) which has this amazing feature -- it actually can highlight the opening parenthesis that corresponds to a closing parenthesis. That just blew us all away. Once we had that killer feature, we knew that it was in our long-term interests to go back to lisp -- it's much more flexible than C++. Unfortunately, we'd already converted everything to C++ already... if any lisp programmers are reading this, you might want to fax us your resume." The spokesman went on to say that he'd heard great things about something called "closures", which he believed were a way to seal code against bugs or something like that.
The Fishbowl has some interesting thoughts on where software revolutions come from:
Look at Java, recently described on Bruce Eckel's weblog thusly, citing Paul Prescod: "He called COBOL and Java neanderthal languages that have no descendants on the evolutionary tree." Java has great libraries and, right now, great momentum, but it's a dead-end. It has no future. It has nothing to evolve into. Its only likely long-lived descendant is C#, a language that, if it survives, will do so for the same reasons that Visual Basic survived so far beyond BASIC's use-by date. This is the source of my disquiet about Web Services. Microsoft are telling me they'll be big. IBM are telling me they'll be big. Some very respected developers are enthusiastic, but most are sitting back wondering what the fuss is, and have been for three or four years now. The momentum just hasn't gathered. SOAP and XML-RPC are both great solutions to a particular range of problems, but we're just going to have to face the fact that the chances of them becoming a revolution, as promised, are slim. Shortly, some technology is going to appear and blow my socks off. But it's more likely to appear in some experimental corner of JBoss 4.0 than it is in J2EE 1.4 or in .NET. And it's quite likely going to appear in Python or Ruby, or even coded in C by some college student or lab assistant who has thought of a really neat way to solve a real problem in the real world, and wants to share that solution with the rest of us. And we'll take it, and use it in ways the inventor never dreamed of. That's where revolutions come from.

I'm seeing more and more evidence that developers are ready for something else. The Smalltalk community needs to be prepared if we want to be one of the answers.
My posting tools are apparently posting through the proxy server (I'm on dialup at the hotel) - but telling me that they aren't. Then there was IE's complete refusal to browse with the proxy server, while Netscape seems happy. Sigh....
Spotted on Loosely Coupled:
The bad news for the IT industry is that CIOs have realized they can spend less and achieve more, using web services. Instead of investing in new systems, they want to get more out of the systems they've already got. Here's what they've been saying at InfoWorld's CTO Forum this week:
- "We've got to rationalize our investments. We've been spending $4 billion a year for years. We need to get more out of it," says Merrill Lynch's chief technology architect, Rick Carey.
- "When we looked at our core assets, we had lots of services our customers were not using. The problem was [the services] were not flexible and convenient enough," says senior Verizon IT executive Luis Lando.
The really bad news is that they can expose and link those web services without having to invest in expensive new integration systems. They don't even have to buy application servers, according to Cape Clear founder Annrai O'Toole.
Hah! Anyone who thinks that exposing via web services will be easier than - say CORBA - hasn't tried it both ways.....
I've written an EXDI and Store interface for Sqlite (http://www.sqlite.org/) databases. It works on my machine under Linux and Windows (Wine, really). And I think Sqlite is quite neat for local repositories, because the setup is so easy - you just need one dll/shared library where VW can find it. No server process, no configuration files.
The bad news is that it's new and untested. While the feature set I use seems to work, that's about all I can claim. There were also some problems with Sqlite's view implementation; a large part of the code is workarounds. So I don't want to put it in the open repository until a few more people have used it.
I did put it on a web page: install instructions and downloadable links are at http://www.gjdv.at/cgi-bin/pyson.cgi/en/vwsqlite.pyson
If you do give it a try, I would suggest also using a "real" repository, or "Publish as Parcel" on a regular basis ;-)
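The zero-setup point is easy to demonstrate in miniature. Here's the same idea sketched with Python's built-in sqlite3 bindings rather than the VW interface above (the table and data are invented for illustration): open a database, and you're done - no server to start, nothing to configure.

```python
import sqlite3

# Open (or create) a local database - a single file, no server process,
# no configuration files. ":memory:" gives a throwaway in-process database;
# a filename like "repo.db" would persist to disk instead.
conn = sqlite3.connect(":memory:")

# Create a table and store a row, using parameter binding.
conn.execute("CREATE TABLE packages (name TEXT, version INTEGER)")
conn.execute("INSERT INTO packages VALUES (?, ?)", ("BottomFeeder", 1))
conn.commit()

# Read it back.
rows = conn.execute("SELECT name, version FROM packages").fetchall()
print(rows)
conn.close()
```

The entire "installation" is the one shared library the bindings load - which is exactly what makes Sqlite attractive for local repositories.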
I got this in email as a response to this post:
Last night I read a piece by Dr. Alan Kay. Within the article is a section "Most of current practice today was invented in the 60s". It goes on:
"It is worth noting the slow pace of assimilation and acceptance in the larger world. C++ was made by doing to C roughly what was done to Algol in 1965 to make Simula. Java is very similar. The mouse and hyper-linking were invented in the early sixties. HTML is a markup language like SCRIBE of the late 60s. XML is a more generalized notation, but just does notationally what LISP did in the 60s. Linux is basically Unix, which dates from 1970, yet is now the "hottest thing" for many programmers. Overlapping window UIs are one of the few ideas from the seventies that has been adopted today. But most of the systems ideas that programmers use today are from the data- and server-centric world of the 60s.
The lag of adoption seems to be about 30 years for the larger world of programming, especially in business."
This, as you might expect, leads one to wonder: is the time for Smalltalk (and Lisp?) finally about to come?
- BottomFeeder now supports OPML (as done by syndic8) in addition to OCS for feed lists
- BottomFeeder can export the subscribed feeds as an OPML file. This flattens all feeds into a single folder, but does allow export
- BottomFeeder can import OPML-based feed lists (as produced by syndic8) from a local file, and convert them into a set of subscribed feeds
Numerous people have requested this feature.
Via Matt Croyden
Follow the link for all the feeds - 12 from Cisco alone!
I've managed to get a few interesting new BottomFeeder features done in the last two days:
- Import from OPML File - You can now import an OPML file (in the format used by syndic8) into BottomFeeder. The imported feeds will be added to a new folder under the subscribed list
- Import from OCS File - You can now import an OCS file into BottomFeeder. The imported feeds will be added to a new folder under the subscribed list
- Add Feedlists from an OPML source (such as syndic8).
One of the complaints I've received in the past is that BottomFeeder cannot have feeds added from local export files (say, from other RSS aggregators). If you can get your feeds into OPML or OCS format, that's no longer the case.
I've just added an RSS import to BottomFeeder. Numerous tools export feeds as RSS, OPML, or OCS; BottomFeeder can now import all three formats (in the case of RSS, the feeds get listed as items) and export as OPML. This should make it easier to test the tool out.
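For anyone curious what one of these feed lists actually looks like: OPML is just a small XML outline, with the feed URL carried in the xmlUrl attribute of each outline element. A minimal sketch in Python (the sample feeds are made up for illustration):

```python
import xml.etree.ElementTree as ET

# A minimal OPML feed list, of the sort most aggregators exchange.
# Each feed is an <outline> element whose xmlUrl attribute holds the feed URL.
opml = """<opml version="1.1">
  <head><title>My Feeds</title></head>
  <body>
    <outline title="CNN" xmlUrl="http://www.cnn.com/rss.xml"/>
    <outline title="Slashdot" xmlUrl="http://slashdot.org/index.rss"/>
  </body>
</opml>"""

root = ET.fromstring(opml)
# Collect every outline that carries a feed URL (folders may omit it).
feeds = [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
print(feeds)
```

That flatness is also why a round-trip export loses folder nesting unless the exporter writes nested outline elements.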
Very cool... simplegeek
Yes it is, and something I will be looking into and writing about. "S# is not a scripting language in the sense that it's any less powerful than other .NET languages; instead, it's a scripting language in the sense that it doesn't require strong variable typing; in other words, it's a dynamically typed language. . . Beyond that, you can perform operations impossible in VB.NET or C#." Like Jason and his upcoming Win-Dev session, I am always interested in more of the CLR from the internals perspective, or the "alternative languages" aspect of .NET, to keep things interesting.
More news from the REST vs. SOAP front, via Slashdot:
tadghin writes "I was recently talking with Jeff Barr, creator of syndic8 and now Amazon's chief web services evangelist. He let drop an interesting tidbit. Amazon has both SOAP and REST interfaces to their web services, and 85% of their usage is of the REST interface." Read on for some more thoughts and information on REST and Web services, including information about a free Web services seminar on April 22nd.
" Despite all of the corporate hype over the SOAP stack, this is pretty compelling evidence that developers like the simpler REST approach. (I'm sure there are applications where SOAP is better, but I've always liked technologies that have low barriers to entry and grassroots adoption, and simple XML over HTTP approach seems to have that winning combination.)
This is pretty much how it's going in the blog world as well - when developers choose, they choose simple....
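The barrier-to-entry gap is easy to see side by side. Here's a sketch of the same hypothetical query phrased both ways (the endpoint, parameter names, and envelope body are all invented for illustration - this is not Amazon's actual API):

```python
# REST style: the whole request is a URL. Any HTTP client - or a browser
# address bar - can issue it, and the response is plain XML.
rest_request = "http://example.com/feeds?query=smalltalk&format=xml"

# SOAP style: the same question wrapped in an envelope that must be POSTed
# with special headers and parsed with a SOAP toolkit - more machinery
# between the developer and the answer.
soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <FindFeeds><query>smalltalk</query></FindFeeds>
  </soap:Body>
</soap:Envelope>"""

print(len(rest_request), len(soap_request))
```

Nothing deep here - just that the REST form is something a developer can try in thirty seconds, which goes a long way toward explaining that 85% figure.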
The planning meetings this week were too short, but also exhausting - start at 9, go until 7. Then hop back on a plane to get back home. At least I wasn't insane enough to take a red eye this time around. On the other hand, I got stuck - again - on the old TWA equipment American Airlines runs - no power at the seats! Since my notebook spends most of its time plugged in, the battery life is awful.
There was a plus side to that, of course - I read a couple of not terribly memorable, but interesting enough yarns. I'll spend the next few days sorting out how the meetings went, and then I'll have to go and edit our internal wikis to reflect the results of my cogitation. I'll also have to update the public wiki - with VW 7.1 and OS 6.8 out, it's time to start posting a coming attractions page for the next release.
Our current plan calls for the next release in the fall, likely November. That will keep us in the regular schedule we have been trying to hit - and also set up the follow on release to synch up with Smalltalk Solutions 2004.
So in Meerkat's Open Source Column, I see this:
Of course Sean is dead wrong as to the salient matter, but he's always a good read. RDF is for people who understand directed graphs. If you take any random audience, this is, of course, a small proportion. Same story for forensic histology, but I doubt Sean would moot for closing down all the crime labs. The argument that "not everyone can get RDF" is not worth any number of words. The more interesting point is that anyone who can't get RDF can't get relational databases or any other sort of formal information modeling, and they can't get code (both flow of control and declarative algebras are graphs more complex than RDF). For those outside this set, as Sean points out too obliquely, there are plenty of tools and they needn't deal with RDF directly.
Yeah, right. Here's two words on RDF:
One thing I'm hoping the new import/export tools for BottomFeeder will do is make it easier for potential users to try the tool out. Bf now imports OPML, OCS, and RSS exports, converting them to a set of subscribed feeds. It likewise can export subscribed feeds as OPML, allowing easier interchange in that direction as well.
Missing Angel and 24! Thank goodness for the Replay....
While at Xerox PARC, Kay invented Smalltalk. Although the present-day hot OO languages, Java and C#, make much of their C-like syntax, their real roots can be found in Smalltalk. In addition to an OO language, Smalltalk was also a development system and an operating system for Smalltalk programs. The five-person Smalltalk team at PARC created both the software and the hardware to run it on. As a result, Smalltalk applications performed quite well on these systems.
In the 1980s, Kay explains, Intel and Motorola were not producing processors that could run these higher level languages. As a result, programmers interested in performance were programming in C and early bound languages. When Stroustrup developed C++ he wasn't trying to emulate the work done at PARC; he was creating support for objects using a preprocessor for C. The relationship between C++ and C was much like the relationship between SIMULA and Algol. Kay sees Java as falling between Smalltalk and C++. In some ways it is an improvement; in other ways it is mainly C++ with garbage collection. One of the most obvious deficiencies of Java, says Kay, is that "Java has a difficult time of adding to itself."
Go have a look at the whole article - it's well worth reading
I stumbled across this cartoon recently - very amusing. Something I can definitely identify with...
Writing on Eclipse, Ted Leung writes
Carlos responded to my response (Carlos, I was at the IBM Center for Java Technology in Silicon Valley -- I was part of the team that brought you XML4J, now known as Xerces-J.): The Eclipse environment in the programming language arms race is equivalent to those JDAM GPS guided bombs. These weapons are cheap, and they change the battlefield in a revolutionary way. So the next time someone argues to you that C# has a nice syntactic feature, show him how Eclipse makes that irrelevant.
Yes, for the uninformed masses who haven't seen Smalltalk or Lisp. For those of us who have, it's yet more reason to ponder the snail's pace of progress in the "mainstream" of development...
I saw this in the XP mailing list on Yahoo. egb refers to Grady Booch, for context purposes.
There aren't 10-12; there's only one:
- minimize the time between specifying a feature and letting end-users benefit from it
egb: It finally strikes me in reading your message why we sometimes see angst from management as to the business value of agile stuff: the characteristic you suggest is orthogonal to return on investment (this is not to say that they are unrelated, but rather that they are very different measures of success and value). Reducing the time-to-availability is certainly a good value to pursue, but it is not the only one that organizations must embrace - as organizations grow larger and the cost of developing software becomes a relatively large capital expense for the business as a whole, achieving a return on investment becomes a greater driving factor.
Orthogonal to ROI? This is exactly what is wrong with BUFD - you spend so much time determining whether a thing is valuable, that by the time it gets delivered, the ROI has approached zero. Time to availability is perhaps the most important value. Why? Because you can then tell whether a development project is on track or off track. Being on or ahead of time has a value of its own, and tends to build ROI (or prove quickly that there will be no ROI). I don't think it would be possible for me to disagree more with the above sentiment.
And - not to put too fine a point on it - if the cost of developing software gets to be that much of a problem as the business grows, then there is a severe business problem that needs to be fixed. The cost is a symptom of a much bigger problem in that case.
I posted earlier on an XP mailing list thread - here's more:
Orthogonal to ROI?
egb: Yes, these really are different values - but I am not claiming that they are completely independent values. There is certainly a time value to money, meaning that if you can reduce the time-to-availability, there is indeed an economic value. However, I would argue that minimizing time-to-availability is not necessarily equivalent to maximizing return on investment.
Time to availability is perhaps the most important value.
egb: In some domains, yes, but it is not fair to generalize this to be the primary value in all domains. If I'm writing embedded software for a toy, accelerated time-to-market has value but there is likely greater value attached to reducing per-unit cost... shaving off cents will make a difference in a mass market item (i.e. spending time to squeeze code into a smaller ROM footprint). If I'm writing an avionics control system, I probably have some hard final code complete date and accelerating availability will likely contribute to risk reduction (and thereby will have some economic value) but my return on investment will likely be more impacted by architectural initiatives that drive to economics of scale.
ok - first off, you picked two rather extreme examples. I'd warrant that most people reading this group are building more prosaic business systems - and in that case, your examples don't mean a lot. But even within those systems:
- embedded software in toys or games - if you don't get something in front of the customer fast, you have no market. The toy and game markets are perhaps the prime example of time to market being king; spending a lot of time on design in that space will ensure that you never get a product sold, plain and simple.
- Avionics - economies of scale in the manufacturing sense don't apply the same way to software. We can assemble large factories with machine tools and robots to spew out parts; we can't do anything even vaguely like that in software. I'm not sure that I get your point here, honestly.
Why? Because you can then tell whether a development project is on track or off track. Being on or ahead of time has a value of its own, and tends to build ROI (or prove quickly that there will be no ROI).
egb: Yes, but my point is that this does not necessarily optimize ROI.... reducing time-to-availability does force early risk identification, which is certainly an element of ROI, but it is not the only element of ROI: what is outsourced? How small do I squeeze down my development staff? Can I achieve some economics of scale by earlier architectural investigation?
IMNSHO, if you are considering outsourcing of development, you had better outsource the whole thing. Trying to run product management and design here, and development there (where there's a 12-hour time zone difference and the culture is different as well), is just silly. Those who think otherwise will answer differently when asked "Why not outsource marketing?", or "Why not outsource all C-level staff?" Economies of scale? In software? I truly have no idea what you mean by that. Large teams of developers are a mistake.
and - not to put too fine a point on it - if the cost of developing software gets to be that much of a problem as the business grows, then there is a severe business problem that needs to be fixed.
The cost is a symptom of a much bigger problem in that case.
egb: If you take the automotive industry as an example, reality is that cost is increasing - and not just the total dollars spent, but the relative dollars spent as a percentage of the cost of a car.
Egb: Does this constitute a
If their costs are rising in a well understood business, they have a problem. It's truly that simple. I don't know what they are doing wrong, but I'm sure that they are, in fact, doing something wrong.
Second, getting features in front of end users quickly is optimizing. Getting them there slowly, after some sort of complex ROI process, is a fool's errand. Why? Because in general, without feedback from actual users, it's unlikely that software that solves real problems will be produced.
egb: So, I think we have the two basic issues at hand: first, you would seem to observe that time to availability is the most important value - I would agree with you for certain domains, but I would not agree to the generalized statement of applying this to all domains; second, you would seem to suggest that minimizing time-to-availability is tantamount to maximum ROI - I would agree that there is an influence, but I would again argue that there are many other elements that make up an ROI.
IMHO, it's true for any domain I can think of. In domains where something seems to prevent it, I suggest that the entire development model is screwed up, and needs fixing.
I've just read Ted Neward's tips for HTML based apps. They are good ideas, and, if used, will result in a more pleasant end user experience. For instance:
What is it, exactly, that takes an otherwise well-built, well-behaved application and turns it into a snail? A large part of it is the HTML being returned. When an HTML page contains dozens of references to images, large and small, scattered all over the page, the page as a whole seems to drag to a crawl as the browser is forced to go back to the server over and over again to download those images. Yes, the images make the page look pretty, but does the website really need mouse-flyover image-switching graphics buttons for a main menu? Or a footer of ivy leaves twined around the copyright statement? Or the company logo in the upper-left corner of every page? I'll be the first to admit that these things make the page look pretty, but after they've been seen once, they just fade into the background in the user's mind. Worse, though, they still need to be displayed, which means that they still need to be downloaded each and every time. (A good browser will sometimes cache some of the images, but there are limits to what can be cached.) Even beyond that, consider the size of the images themselves--if they're any decent size and color depth at all, they can measure well into the hundreds of kilobytes in size, all of which has to move across the network from server to client.
He goes on with some recommendations, all of which are good - but they raise a simple question in my mind. 20 years ago, we started the migration from server (mainframe) based applications to client (PC) based applications due to - well, pretty much exactly these problems. In fact, a web application is a green screen terminal application with (slowly loading) graphics. So why exactly do we want to deliver applications that way?
In some contexts, it makes a lot of sense. Outside the walls of the business, web apps provide a way of getting feedback from, and getting information to, customers and prospects. We have no control over the platforms those people have, so we have to settle for a lowest-common-denominator (LCD) solution - a web app. Let's look within the business though - once you get beyond the simple applications, what value is being provided by web apps to your corporate users? Sure, IS can update them easily. On the other hand, delivering patches and/or new versions over the intranet for a client application isn't hard either. You have all that desktop power in front of the users - why throw it away? What many IS groups seem to forget is that the time of their users is important. I've seen web "solutions" to sales reporting that force field sales people to spend 3 and 4 times more time on reporting than they did prior to the roll out of the "productive" web application. There was a reason we abandoned terminal screens; IS organizations would do well to recall those lessons.
Boy, did I ever identify with that.
I've been leaving the dev builds directories alone, just updating the upgrade directories. Well, I am now updating those as well. That way new downloads should get the latest stuff right off....
If you want to call it that. It's a geek's Easter breakfast ;)
It's a good thing that the kill rate of SARS is so low (something like 3.5% IIRC). There's something resembling a minor panic starting over it now; can you imagine what the major media would do with a virus that had numbers like Smallpox?
Lots of good historical pointers at Bitworking this morning - including this statistic:
The influenza virus had a profound virulence, with a mortality rate at 2.5% compared to the previous influenza epidemics, which were less than 0.1%
Earlier today, I pointed out that SARS had a death toll (thus far) of 3.5%. Hmm. My own grandmother's twin sister died in the flu pandemic in 1918 - this thing will bear watching....
I just finished adding RSS Auto-Discovery to BottomFeeder. It's only in the dev builds at this point, but it seems to be working well. It was easy enough to implement; the hardest part was filtering all the duplicates from the syndic8 XML-RPC queries.
What can you do with this now? Well, when adding a feed, if you type something like CNN, BottomFeeder will search the syndic8 site for matches, and then offer to subscribe you to any/all of them. It's pretty cool.
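That duplicate filtering is the sort of thing that's easy to sketch. Here's a minimal version in Python - the sample results and the normalization rules are my own illustration, not BottomFeeder's actual code: the idea is just to canonicalize each URL before comparing, so trivially different forms of the same feed collapse to one entry.

```python
# Hypothetical search results, as (name, url) pairs; a feed directory
# often lists the same feed under several near-identical URLs.
results = [
    ("CNN Top Stories", "http://www.cnn.com/rss/top.xml"),
    ("CNN Top Stories", "http://www.cnn.com/rss/top.xml/"),
    ("CNN World", "http://www.cnn.com/rss/world.xml"),
    ("CNN Top Stories", "HTTP://WWW.CNN.COM/rss/top.xml"),
]

def normalize(url):
    # Drop a trailing slash and case-fold the scheme and host (the path
    # stays case-sensitive), so trivially different URLs compare equal.
    url = url.rstrip("/")
    scheme, _, rest = url.partition("://")
    host, _, path = rest.partition("/")
    return f"{scheme.lower()}://{host.lower()}/{path}"

# Keep the first occurrence of each normalized URL, preserving order.
seen, unique = set(), []
for name, url in results:
    key = normalize(url)
    if key not in seen:
        seen.add(key)
        unique.append((name, url))
print(unique)
```

Four raw results collapse to two distinct feeds, which is roughly the cleanup the subscribe dialog has to do before offering candidates to the user.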
Tomorrow will be a phone day. I have a call at noon, another at 1, and a third at 2. That will keep me busy for awhile....
The earlier version of the screen-saver software contains a buffer overrun vulnerability in code that processes responses from the SETI@home server, according to Berend-Jan Wever, the 26-year-old Dutch student who wrote the advisory.
After tricking the client into connecting to a server the attacker controls, an attacker could cause the buffer overrun by sending a long string of data followed by a "newline" character, Wever wrote.
The vulnerability affects all versions of the SETI@home client software, including those for the Microsoft Windows operating system, Apple Computer's Macintosh operating system, and versions of the Unix operating system.
The software running on the main SETI@home server at UC Berkeley contains a similar vulnerability, according to the advisory.
And kind of an "oops" here as well:
A separate problem concerns the SETI@home client's transmission of information back to the SETI@home server. Wever discovered that all information from the SETI@home client is sent out in plain text form. That information includes data on the operating system and processor type used by the machine running the SETI@home client.
Malicious hackers could collect the SETI@home data using any one of a number of common packet-sniffing programs, providing useful information for planning a larger network attack, according to the advisory.
The vulnerability would require attackers to "spoof" a fake SETI@home server and trick the software clients into connecting to it before they could be compromised. The SETI@home team knew of no previous attack on a client that used such a method, the Web site said.
However, clients could easily be tricked using spoofing tools or attacked from HTTP proxy servers or routers used by the SETI@home host machine, according to the advisory.
More than 4 million Internet users have registered with SETI@home. Of those registered users, more than 500,000 are considered "active," having returned data to the main server within the last four weeks, according to the project's Web page.
Buffer overflows. So when are people going to learn that C and C++ are simply unsuitable for most tasks?