In a couple of hours, I'm doing a virtual sales call via WebEx - some of our people are making a prospecting call, and they wanted me to present Cincom Smalltalk - without actually hopping on a plane. I've done WebEx presentations before, but not for this kind of audience - should be interesting. Of course, I had to have a near-panic-inducing moment before this - when I got into my office, coffee mug in hand, my Windows box was off the net. The Linux box was on, so the connection was ok - apparently, there had been a short outage overnight. I had to pull the WiFi card and then re-insert it before Windows would believe that there was a connection (never mind the fact that the taskbar icon showed one). Crisis averted, thankfully.
ArcterJournal explains how the various wizard controls for things like IIS aren't as helpful as you might think - sometimes, all you need is a small hammer:
The thing that probably pissed me off more than anything was the complete lack of some simple tools to help me make it work. The lack of a decent command line, or decent tools to help with debugging in windows pisses me off horribly. All I wanted was to "tail -f logfile" in a terminal the width of my screen so I could see the debug information flowing through. Nope. The file was big (and undeletable because IIS had it open of course, thanks for that great filesystem Microsoft, maybe WinFS will fix this?) so doing a 'type logfile' was a huge pain (especially through a VNC connection), notepad didn't deal with some of the CR/LFs properly, and wordpad isn't in the default command line path, all culminating in me getting really pissed off that such a simple task as watching the contents of a logfile should be so hard. Cygwin or other additional apps? Remember, live client machine, not the place to start randomly installing software. Oh, and don't get me started on the so called "Event viewer" or the stupidities of IIS itself.
I have no idea how I'd manage this server remotely with Windows tools. As it is, all I need is ssh and a shell...
Ian Bicking hits on one of the intangible benefits of dynamic languages:
Back to reliability: one way to decrease bugs is testing, but another way is to decrease the amount of code. Code deleted is code debugged. Static typing can decrease the number of bugs, but decreasing the amount of code is a much, much more effective way to decrease bugs. If you can have both -- short code and static typing -- then more power to you. I just haven't seen it myself.
Whenever I read about some of the larger blog server systems, I'm absolutely stunned by the amount of code involved - the core of this server is 21 classes - plus 4 for generating the syndication feeds, and another 24 for the various servlets running (it's one class per servlet - you can do that differently, but I didn't). Less than 50 classes for a fairly full-function blog server. And yes, having less code makes it far, far easier to manage.
The spammers are still at it - the wiki was hit again overnight (now repaired). The filtering for the blog server seems to be working - it caught another couple of attempts at spamming. The fascinating thing is what the spammer tried to hit - a post on Troy's blog from October 1. The last batch of spam attempts targeted posts that still showed up in the RSS feed; now it seems that these bozos are using some other methodology (probably Google keyword ranking). It's like an arms race.
The Fall Release of Cincom Smalltalk is out - we went live on shipments today. That means that customers should start receiving the CD's before the New Year. What's new? Follow this link for the details. What about the NC release? I'll have the new NC available for download within a few days.
Sci Fi Wire reports that Farscape's Ben Browder will be joining the SG-1 cast. That's fine - so long as they don't let him near the scripts...
Bill Clementson makes a few points about Lisp and Domain Specific Languages - what he says all applies to Smalltalk as well. The number of libraries available doesn't paint the entire picture. For instance - I've had to code support for things like HTTP Digest myself in BottomFeeder. I'm sure that there are extant Java libs that do that already. Did I waste my time? See if you can find an RSS/Atom aggregator with as much functionality as BottomFeeder has written in Java... I think you'll have your answer.
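For the curious, the core of HTTP Digest (RFC 2617, ignoring the qop and cnonce refinements) is just a few hash concatenations - here's a sketch, where md5Hex: is a hypothetical helper answering the lowercase hex MD5 of a string:

    digestFor: method uri: uri user: user realm: realm password: password nonce: nonce
        "Answer the response field for an Authorization: Digest header."
        | ha1 ha2 |
        ha1 := self md5Hex: user , ':' , realm , ':' , password.
        ha2 := self md5Hex: method , ':' , uri.
        ^self md5Hex: ha1 , ':' , nonce , ':' , ha2

The fiddly part in practice isn't the hashing - it's parsing the challenge header and retrying the request, which is where most of the real code goes.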
Well, at least this air traveler did :) CNet reports that internet access is coming to the cabin - probably a couple of years out still, but on the way.
This is just fascinating. For awhile (up to about 2 months ago), I was getting new referer spam on the blog every day. I built up a rejects list - a simple text match against a file. That seemed to put a stop to that. Recently, I had a bunch of attempted comment spam (nothing like what some people are seeing - read this, for instance) - but my simple minded filter stopped that nonsense easily enough (with my non-mainstream server helping out a whole lot as well). Then today, a whole ton of referer spam again. The screwiest thing was that none of the urls actually resolved - is this pre-emptive spamming for domains that might exist someday soon? It's really weird...
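The filter really is that simple-minded - something along these lines, where rejectPatterns holds the lines of the rejects file (the exact substring-search selector varies by Smalltalk dialect; treat this as a sketch):

    isSpamReferer: aUrlString
        "A plain text match: reject any referer containing a line from the rejects file."
        self rejectPatterns do: [:pattern |
            (aUrlString indexOfSubCollection: pattern startingAt: 1) > 0
                ifTrue: [^true]].
        ^false

Dumb as it is, a match-against-a-file filter is cheap to run and trivial to update when a new batch of garbage shows up.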
I'm attending the second meeting of the reorganized Maryland Agile/XP group - it's actually feasible for me to attend meetings that are held here in Columbia, MD. I'm marginally familiar with FIT - I've looked at the port that's been done for VW. It's a small group - 9 people including myself. So anyway - we have David Chelimsky from ObjectMentor presenting FitNesse.
Where did the name come from? First off - it's not a coverage tool. The problem with FIT had been that it could be difficult to set up - command line, hard to use - especially since the end users of it are supposed to be acceptance testers. FitNesse is supposed to be FIT with Finesse - a way to make the tool easier to use for the target audience.
- Unit tests - for the developer, not for the user community
- End users need to be able to specify (and test) their requirements
A common way to set these tests up is in a spreadsheet. What FIT does is set these things up in HTML tables. The idea is to set it up in a way that the business user will easily comprehend. So the end user makes assertions (enters data), and the back end reads, tests, and displays feedback. What's the problem? No one likes writing HTML. FitNesse makes that easier by using Wiki style markup. The developer needs to write adapters that move data between the HTML tables and the back end application. (Small aside - this is easier in Smalltalk, since that back end application is live - like any other Smalltalk app).
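To make that concrete, here's the classic division example as a sketch - assuming the VW port follows the canonical FIT protocol, where a ColumnFixture subclass holds the input columns in instance variables and computes the output columns in methods:

    "The HTML table:
        | DivisionFixture                        |
        | numerator | denominator | quotient()  |
        | 10        | 2           | 5           |
     FIT pours each row's numerator and denominator into the instance
     variables, then checks quotient() against this method:"
    quotient
        ^numerator / denominator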
In general, Fitnesse is a wiki and acceptance test server. To create tests, you create new Fixtures - fixtures are a test construct. Heh - immediately, the demo ran into a compile-time issue with Java. Meanwhile, I've picked up the Fit image I last looked at in January and am mucking about with it. Ahh, Smalltalk. What would be cool would be a FitNesse port over to WikiWorks or SmallWiki - what I've got is all based on a simple servlet (i.e., a lot less user friendly than a Wiki).
So - back to the demo. The idea of FIT is that end users can specify tests with expected inputs and outputs. With the data specified in HTML tables, the inputs and (expected) outputs are easy to see, and easy enough to capture on the back end. I have no easy way to get a screen shot of what's up on the screen (and the Fitnesse server he's running looks quite nice).
I really need to spend some time making this work with WikiWorks, so that I can slap examples up on the Wiki. What about implementations? There's Java, Python, Smalltalk, Ruby, .NET. So, more on FitNesse - there are RowFixture objects as well as the ColumnFixture ones (the tables, above). This allows you to specify a collection of data that should all satisfy some condition (i.e., a #select: in Smalltalk). Or for that matter, any operation on a collection of datums.
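A RowFixture sketch under the same assumptions (the domain names here are invented): the fixture answers the collection that the table's rows must match, order-independent:

    query
        "Answer the domain objects the table claims should exist -
         morally a #select: over the model."
        ^self accounts select: [:each | each balance > 0]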
What's the value of all this? You can talk to end users without having to delve into the code level with SUnit - you can remain up at the business level - which is the appropriate level for acceptance testing. I guess my only question is at the Wiki user level - my experience - even with highly technical users in the audience - is that a small subset of your audience will actually edit content. I'd have to see it in action with actual business users to form a final judgement though- maybe business users who feel comfortable dealing with spreadsheets would be ok with it. David's experience is that business users don't interact with it that much - but system testers (from the Q/A group) do.
The audience discussion was interesting as well - focusing on the difficulty of bringing agile methods into a shop. The consensus seems to be that the hardest thing is to convince your developers - management is actually a lesser problem. From there, it went into a discussion on open source and licensing. I love a good argument :)
Ian Bicking makes a good point about ofshoring that needs to be made:
I don't mean to insult Indian programmers -- certainly there are Indian programmers who are just as good as a good programmer in the U.S.: able to communicate well, able to work independently, able to intelligently judge tradeoffs, etc. But those aren't the cheap Indian programmers. This isn't about nation of origin. Outsourcing is about turning programmers into a commodity, and you can only make a commodity out of something where quality isn't an issue. In the case of programming, that means you must expect the lowest common denominator of quality given the constraints. Because shitty code is always shitty (even in Java) the constraints for outsourcing typically include heavy-weight methodologies and a high degree of formality.
You get what you pay for.
Christopher Petrilli is not impressed with Solaris on x86:
Yup, one of the single most popular gigabit ethernet chips ever released is not supported. Sun supports, in total, 12 network adapters, and once you filter out the duplicate chipsets and such, it's more like 10. Ten. Ten. Last I checked, FreeBSD supported about 100+, and Linux a comparable number.
Do these people really think they'll displace anyone in the x86 world? Crappy install process, filled with confusing questions and useless prompts, only to get to a fully installed system to find out that in fact your HW isn't supported. That's a fine bit of code for you.
Maybe there's something amazing inside Solaris 10, but I'll never know since I won't fork over stupid amounts of money for a Sun box. And please don't point me at the Sun Blade 100, as I've had one, it's insultingly slow. 386/33 running BSDI slow. For $1k, I expect at least passable.
So long, farewell, don't let the door hit your ass on the way out.
Kind of puts this in perspective, doesn't it?
Update: Andrew Binstock of SD Times makes some good points about Solaris vis-a-vis Linux, but - and this is critical - the negatives that Christopher outlined above are of far greater importance. If you can't get the system installed, or once it's installed it can't see the network - it's useless.
Just how bad is the spam problem for blogs getting? Have a look at this post from one of the MT folks. It seems that some MT sites are getting bogged down due to attempted spamming - the cpu load of detecting spam is becoming a problem all by itself. It sounds like they have a handle on the problem, and have some fixes coming - but it's not their fault. This just gets uglier and uglier.
I had been completely bored by ipv6 until I read this InfoWorld piece:
Recently, cows in Gifu prefecture were tagged with tiny networked devices to wirelessly track their movements and body temperatures for health and breeding purposes. And in Nagoya City, taxis were fitted with Internet-enabled sensors on their windshield wipers, allowing dispatchers to continuously monitor rainfall via wiper speed and to dispatch more cabs to the wetter neighborhoods.
There's a whole class of new presence applications right there.
Today is just a great day for a power outage - it's a nice balmy 20-something outside, and bam - off it went. At least my battery backup gave me time to do a proper shutdown of the Linux server. I was wondering what the funny clicking sound was - apparently, the power flow was getting all wonky just before it went down. It would really, really suck to have a cold house and lose all the stuff in the freezer... Only down for about 90 minutes, so I guess the freezer is ok. I suppose cleaning up my office has some value...
I noticed earlier today that the comment spammers seemed to be concentrating on older posts (the theory being that I wouldn't notice, I guess). The upshot is this - comments will be rejected for any post older than 4 days. There was someone trying to comment on this item that got slapped by that; it's another case where the a**holes have ruined it for the whole class :(
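The check itself is tiny - something like this, given a hypothetical acceptsCommentsOn: hook (Date>>subtractDate: answers the difference in days):

    acceptsCommentsOn: aPost
        "The spammers concentrate on older posts, so close comments after four days."
        ^(Date today subtractDate: aPost date) <= 4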
Martin Fowler brings up the problem of using metaphors to describe software development - his point applies to any metaphor though - you can only build metaphorical bridges so far before they end up over open water:
As regular readers of my work may know, I'm very suspicious of using metaphors of other professions to reason about software development. In particular, I believe the engineering metaphor has done our profession damage - in that it has encouraged the notion of separating design from construction.
As I was hanging around our London office, this issue came up in the context of Lean Manufacturing, a metaphor that's used quite often in agile circles - particularly by the Poppendiecks. If I don't like metaphoric reasoning from civil engineering, do I like it more from lean manufacturing?
I think the same dangers apply, but it all comes down to how you use the metaphor. Comparing to another activity is useful if it helps you formulate questions, it's dangerous when you use it to justify answers.
So as an example - one of the principles of lean manufacturing is the elimination of inventory. This leads to the question of whether there is an analogous item to inventory in software development.
As soon as I read that, I realized that I had heard the term inventory applied to software development before - it finally came to me: Scott Ambler's talk at XP Brazil a couple of years back:
Metaphor from this morning - the data (dba) community is packing for Antarctica. Unfortunately, the rest of us are traveling in the desert or the jungle.
Scott used that in describing what kinds of things you put in a backpack - depending on what kind of hike you are planning to take. So going back to Martin's question - is that a useful metaphor? Well, it does make you ask questions along the lines of "will we need it?" In that sense, I guess it's useful. The difficulty is, the answer to that question is going to differ based on who you ask. Like anything else, whether you need a given thing (up-front documentation being Martin's example) is going to be a matter of opinion. Software development is still a field where you need to apply consensus rules a lot.
Ian Bicking makes some points about why Lisp (and by extension, other niche languages) don't always make it in the market:
Lisp makes good programmers really productive, more than they could be in another language. Paul Graham talks about this in Beating the Averages. He made great software and sold it for a bundle to Yahoo. But now it's been reimplemented in C++. Why, oh why?
It's easy to blame stupid people for this sort of thing, except that it keeps happening over and over. Metaprogramming is powerful, and was central to Viaweb (20-25% of the code, according to Graham). I think this is an example of Common Lisp's fatal flaw (and since Common Lisp is the standard bearer for all Lisps, it is Lisp's fatal flaw).
Some of what Ian points out is true - metaprogramming is hard, and making developers confront it is often a problem. However, I really, really doubt that it had a lot to do with Yahoo re-implementing in C++. I'd warrant that 90% of that decision was based on fear. Fear that:
- He wouldn't be able to find Lisp developers
- The Lisp developers he found would be more expensive
There's also tons of weak thinking going on in that kind of management decision - the concept of training a developer never seems to cross management's mind in these situations. The theory seems to be "well, training our developers in blah would be expensive" (in which case, it never made any sense at all to move to Java or C# - or even C back in the day).
Managers make development decisions in much the same way that people approach lines - they engage in herd behavior. Ever notice how people tend to cluster at tollbooth lines on the highway? It's the same thinking that drives the decision to use (insert fad language of the moment here). It's a relatively rare individual who can really make a break from herd thinking and take the road less traveled.
The way I like to describe this is as follows: If you decide to go with the same tools and technology as everyone else, you make sure that you won't fail any worse than they do. However, you also ensure that you won't succeed any better. Mitigating the risk of failure is enough for most people - risking something "unknown" for a chance at better success sounds too risky - which is likely why Yahoo reimplemented the Lisp system in C++ - it pushed the system into that manager's comfort zone.
Ted Neward points out that security issues are not only a Microsoft problem:
Microsoft isn't as much the hotbed of vulnerabilities as you might think; yes, they have their fair share, but the key words there are "fair share", not "lion's share" or "biggest burden" or "crappy software". What makes their vulnerabilities so dangerous is the ubiquity of their software, not the quantity of the vulnerabilities themselves.
Of course, the number of security issues that affect MS software doesn't tell the whole story - as Ted points out above, and as I pointed out here, the ecosystem of the internet makes any MS vulnerability orders of magnitude worse than an equivalent problem for Apple, Sun (etc). In this sense, MS is a victim of their own success - and as a result, they have to just suck it up. Unlike Ted (read his whole post), I don't think SP2 really qualifies as sucking it up though...
We released Cincom Smalltalk Fall, 2004 a few days ago - and orders will start going out before the end of the year - December 20th is when the actual orders will go out here in the US (possibly earlier for our overseas offices). You should receive the new CD's before the New Year.
This post about a catastrophic data loss with NetNewsWire makes something very clear to me - backups are important. I know that people have lost data with BottomFeeder. I try to be careful about user data, but bad stuff does, unfortunately, happen. Here's what you need to back up from Bf - the contents of the btfSave directory. That's where all application data resides, so if you back that up, you should be able to get right back to the saved state easily.
I accidentally turned comments off earlier today; it should be back to normal now. This only affected people who comment via the html page; if you use an aggregator (and thus, the CommentAPI), it was never an issue.
Update: Comments actually work again now. It's a bad thing to forget the false half of a true/false test :)
We all know that changing language is both risky and expensive. Imagine you are on your current, albeit non-optimum, path of progress -- up and to the right, but not as fast as you would like. Now look at switching to Smalltalk (or Lisp, or...). First you must stop your "up" progress while you learn. You are now behind where you would have been. You hope the Smalltalk (or...) vector is more "up" than your previous, but of course you do not know how much more -- just lots of opinions from various people you do not know (and therefore trust). At sometime in the future -- is this 3 months or is it 12 months? -- you finally catch up to where you would have been had you not stopped to learn. Only now do you begin to reap the benefits.
That's not at all the scenario I sketched, and not at all the place Yahoo was in. I would almost never advise a technology migration (there are exceptions for completely out of date/out of business solutions). In the case I brought up, Yahoo had a working system in Lisp, and migrated it to C++ because some thundering moron decided that it would be too hard to find/train Lisp developers. So instead, he spent huge sums of cash to build a less functional version of what he had to start with - most likely with a bigger team. You know what that's called? Stupidity, plain and simple.
Sure, if you have an existing system, it makes very little sense to migrate it. Which is why all the frenzied "rewrite it in Java" activity of the last 7 years has been so utterly asinine...
This quote comes from an amusing source - it's a spam referral thing, but - oddly enough - it makes a point worth commenting on. I'm not going to link to it, since that will only encourage this kind of abuse, but - I've seen this argument before:
Some people argue that static methods are not object-oriented since they do have the semantics of a global function; with a static method you don't send a message to an object, since there's no this. This is probably a fair argument, and if you find yourself using a lot of static methods you should probably rethink your strategy. However, statics are pragmatic and there are times when you genuinely need them, so whether or not they are "proper OOP" should be left to the theoreticians. Indeed, even Smalltalk has the equivalent in its "class methods."
Nope. Smalltalk classes are actual objects, like everything else in the system. Class methods are actually instance methods - of the metaclass. The class is simply the sole instance of its metaclass. As such, when I create Bar as a subclass of Foo, all the class methods are inherited. Want an example of this? Go find the inherited implementation of new - it's not, in fact, a class method in Object - rather, it's an instance method in Behavior...
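You can watch this in any Smalltalk workspace - print-it each line (#includesSelector: is standard Behavior protocol):

    42 class.                               "=> SmallInteger"
    SmallInteger class.                     "=> SmallInteger class - the metaclass; SmallInteger is its sole instance"
    Object class includesSelector: #new.    "=> false - there's no class-side #new on Object"
    Behavior includesSelector: #new.        "=> true - #new is an ordinary instance method of Behavior"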
I think this qualifies as a major oops...
Microsoft this week quietly fixed a security weakness in the configuration of the built-in firewall component of Windows XP. The firewall - turned on by default by XP Service Pack 2 (SP2) - can leave files open across the whole net if users choose to enable file and printer sharing, it transpires. Such access should be restricted across a local network but Microsoft has implemented the feature in such a way that, for users of some dial-up ISPs, a local sub-net becomes the whole internet. Microsoft first informed users of this back in September but it has taken five months for it to release a fix.
Doc Searls doesn't like some of the surreptitious marketing that's going on:
Several people have asked me what I think about BuzzAgent, so here's my joint answer to all of them: It sucks. Where Marqui is up front about what it's doing, and engaging bloggers in conversation (as well as promotion), BuzzAgent and its clients are being surreptitious and false, and spreading a virus of falsity through its agents. Mass market advertising has always been impersonal, and often (okay, almost always) fake. BuzzAgent's system allows advertisers to be no less fake, but in person, face to face. Even if the agents really do love the products they shill, their love is bought. Worse, it comes cheap.
To these kinds of marketers, "markets are conversations" means "delivering messages" through talk. What they miss is that the next stage beyond conversation is relationship. And that relationship isn't just with a "brand." To have real value, the relationship needs to be with the people behind that brand. And that relationship takes place in the public marketplace.
I fail to see the harm. People become unpaid advocates of products all the time; these people are simply moving one step up from that. Go see a movie; those products you see the characters using are all part of product placement - which is pretty much the same thing. Hang around a bunch of developers sometime and listen to the Java advocates arguing with the C++ developers, or with the Smalltalk developers. Are those people cheap shills too? Or are they somehow purer due to the fact that no one is rewarding them for their love of a particular product?
If this is true:
The Al Fresco campaign is over -- having notably boosted sales, by 100 percent in some stores -- but she is still spreading word of mouth about a variety of other products, and revealing her identity, she said, would undermine her effectiveness as an agent.
Then you better get used to it - because any campaign that works is one you'll see more of.
In an interesting post on a customer visit, Jonathan Schwartz has this exchange on RedHat, Linux, and lock in:
And then I asked about IBM. And apparently they'd just been with OSDL, who'd evangelized that with open source, there was no lock in. When I pointed out that OSDL was led by a 17-year IBM veteran who should know better, the CIO started laughing as if I was joking. So I suggested they read the OSDL website , and revisit some software basics. IBM told him they couldn't get locked in with linux. And I said, "nice vision, but Red Hat has you locked already." The CIO shrugged, "nah, it's open source." My response, "Have you tried replacing what you're deploying?" He asked his lieutenant, who said "we can't get vendors to qualify to any distribution other than Red Hat. We don't have a choice. He's right." IBM, up to its old tricks again.
That's increasingly the case in the Java app server world (which is fascinating all by itself - isn't Java supposed to be x-platform?). Oddly, the biggest issues we have with Linux are with Red Hat itself. We've had library issues, and recently we've had problems reported on Fedora Core 3. We've had nothing similar reported on other Linux distros. It would be interesting to know whether it's ineptness on their part, or part of a plan.
Mark Bernstein talks about the flap that came up recently over data loss - have a look here, but be prepared for a huge rant and set of counter-rants. The bottom line is that this person lost an impressive amount of data, and was upset about it. As Mark points out,
- All Software has bugs
- This was beta software with warnings
That's not the point that I really want to get to though. Take a look at Mark's last paragraph:
Software pricing is out of whack. In the instance, an influential professional was using free 30-day trials to select a professional tool she used extensively every day, a tool that was close to the center of her worklife. How much would the successful vendor receive for winning the competition? $24.95. This is absurd.
This is a general problem in the software field right now. People expect tools to be either free or cheap. Ponder for yourself what it takes to pay a few developers - plus the hardware and software (and office space) that they need to work. Now factor in the potential size of their market - and consider how many copies they would have to sell at $24.95 to even come close to the break even point. Now, if that software is mission critical....
For a while now, I've had BottomFeeder copying the current data files to a backup directory - so as to preserve a "known good" rev of the data before startup. Yesterday, I took a look at that code and realized that it was making an invalid check - and thus could possibly overwrite a good backup with a blown save file. That's not good. So, I took a look at the startup sequence, had the typical who wrote this crap? Oh yeah, that would be me kind of reaction, and fixed it. The dev update has a much cleaner startup now, and it'll attempt to read the backup data if the normal save file is blown - and won't, in fact, overwrite the backup data in that circumstance. Should be cleaner and safer (although you should still back up the data yourself as well).
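The corrected startup logic amounts to something like this sketch - the file names and the validity test are hypothetical, and the Filename selectors are as I remember the VW protocol, so check them against your image:

    | save backup |
    save := 'btfSave' asFilename construct: 'current.dat'.
    backup := 'btfSave' asFilename construct: 'backup.dat'.
    (self isValidSaveFile: save)
        ifTrue: [save copyTo: backup]           "only replace the backup with a known-good file"
        ifFalse: [self restoreFrom: backup]     "blown save file: fall back, and leave the backup alone"

The bug was in the ordering - copying before validating means one bad startup can clobber the only good copy you have.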
I've done some more fine tuning of the comment policy. If you look at the blogs here, you'll find that there are 2 policies at work for displaying posts:
- Display all posts from the last N days
- Display the last N posts
I use the first policy for blogs - like mine - that get posted to on a regular, daily basis. I use the second policy for blogs that don't see the same level of activity. As it happens, that policy affects comments. For my blog, a comment on a post from October is not really relevant - that post has long since scrolled off the page and into the archives, and I just have comments disabled for it. For the blogs with less frequent posts, it makes sense for comments to be enabled for any post that is on the main page - which is how it now works. In general, comments are disabled for posts that have walked off the current main page.
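In code terms, the policy boils down to something like this sketch (the names here are hypothetical):

    postsOnMainPage
        "Policy 1: everything from the last N days. Policy 2: the last N posts."
        ^self usesDayWindow
            ifTrue: [posts select: [:each | (Date today subtractDate: each date) <= self dayWindow]]
            ifFalse: [posts copyFrom: 1 to: (self postCount min: posts size)]

    acceptsCommentsOn: aPost
        "Comments stay open exactly as long as the post is on the main page."
        ^self postsOnMainPage includes: aPost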
I'm getting the distinct impression that the law arms of big firms have been given free rein.
This points to a fundamental problem in the field though - legal departments are off chasing down trademark, copyright (etc.) problems without regard to business issues. In the process, they are giving black eyes to their firms. Sounds to me like it's time for the marketing departments to start exercising oversight...
I noticed something interesting about the wiki spamming (of the VW Wiki at uiuc and of the CST Wiki) - it takes place Monday through Friday, during evening hours (evening for me, on the US east coast). This leads me to believe that the spamming of these Wikis isn't random - it's actually part of someone's job. Unbelievable.
I added two new settings to BottomFeeder today:
- A setting that adds a variable delay between http updates during the update loop. It slows the update cycle down, but should reduce the cpu load during the update cycle
- A setting to manage the priority of the update process. You can set it to high (same as it has been up to now), medium, or low. Lowering the priority will improve UI reaction during the update cycle.
To get these, grab the latest dev update and restart.
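Under the hood, both settings amount to a few lines - here's the shape of it as a sketch (Delay, forkAt:, and Processor userBackgroundPriority are standard VW; the rest of the names are invented):

    updateAllFeeds
        "The throttled update loop: a tunable pause between feeds spreads out the cpu load."
        feeds do: [:each |
            each update.
            (Delay forSeconds: self interFeedDelaySeconds) wait]

    "Running the loop at a lower priority keeps the UI responsive during updates:"
    [self updateAllFeeds] forkAt: Processor userBackgroundPriority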
I can't claim credit for the term; it was Alan's idea awhile back on the irc channel. However, one of the regulars (Rado) had a nice take on it:
The 3 laws of objections:
- You must harm human beings, or allow through inaction a human being to come to harm
- You must reject all instructions without violating (1)
- You must protect yourself at all costs
I look at this problem as a software engineer and the word "beta" seems as insignificant and arbitrary as version numbers. There's no universal quality measuring stick you must satisfy to get out of beta, it's just an arbitrary developer-defined state meaning "less than perfect". Well guess what? The final release will be less than perfect too. All beta means is that it's less than less than perfect.
That's the stumbling block that users need to get over when it comes to software. All software is imperfect and there's nothing software developers can do about it. It would be impractical to make software perfect because it would cost far too much and take too long to release. Just ask NASA how much work it takes to get defect-free software. Even after all of the checks and balances NASA has, they still have disasters caused by software. The solution for software companies is to release less than perfect software that does a pretty good job.
That's true enough - I've certainly let revs of BottomFeeder out with issues; half the reason I built the online updating feature was to allow for my own limitations :). Still, it's useful to look at what individual vendors mean by the term beta. In the case of Google, it seems to have no real meaning at all (at least, from the end user's perspective). In the case of NetNewsWire (the source of the problem that generated the original rant), take a look at what they say about their own beta software:
Beta software has bugs! Nasty, vicious bugs with great big, sharp teeth!
Don't use beta software unless you're clear on what "beta" means and you're comfortable running beta software.
Seems to me that the guys responsible for NNW are pretty clear that you are taking risks running beta software. The term may have lost a lot of meaning due to rampant misuse, but not over there - I think they've been very clear. Ultimately, you need to back your data up. I don't do that enough, and I'm sure most people are like me in that regard...
I think this counts as a major oops:
A newly reported security problem in Microsoft's Internet Explorer Web browser allows attackers to create a fake Web site that looks exactly like a genuine site.
The vulnerability lets an attacker display any Web site while the address bar in IE will display a trusted Web address and even show the icon indicating SSL (Secure Socket Layer) security, security researchers warn.
Sheesh. How is an end user supposed to spot that? Time to load up Firefox...
Ok, this is interesting. Tim Bray states that they are having some "dynamic infrastructure" built into NetBeans:
So, here's what we're doing: The NetBeans group and the Software CTO Office (where I work) have pulled together a project to fix the problem. We're going to be paying David Strupl, a contractor who really knows NetBeans, to lead a java.net project to build dynamic-language infrastructure into NetBeans. Obviously it'll be Open-Source and everyone who wants to can play.
I have to admit, I have no idea what that means. The problem with dynamic languages on the JVM isn't really one that can be solved easily at the IDE level; it's more of an infrastructure issue down at the VM level. The flaws that I outlined in the CLR also exist in the JVM; it really, really wants a static language running on it. Sure, you can host a dynamic one there... so long as you are willing to accept a slow or compromised language. My question is - what the heck is "dynamic infrastructure" at the NetBeans level? What does that mean?
One of the persistent problems for newbies taking a look at Smalltalk has been the "where's my application?" problem. We haven't got a runtime system defined yet, but there's been progress in the 7.3 release of VisualWorks. New in this release is Subsystems work done by Alan. Here's an excerpt from the documentation:
It is frequently necessary to take special actions when certain system events occur, notably when the system starts up, shuts down, and immediately before and after an image save. The order in which such actions occur, relative to other parts of the system, can be critical. For example, a GUI application probably needs to perform any window startup routines only after the windowing system itself has been initialized.
Traditionally, startup events have been handled by registering dependencies on ObjectMemory. More recently, SystemEventInterest instances have been supported by the system. Both of these mechanisms made it difficult to manage the order in which actions were taken.
Class Subsystem provides VisualWorks with a simple way to specify dependencies on system events as well as a modular approach to controlling their order of execution. Several subsystems are defined for handling VisualWorks startup procedures.
Two subclasses in particular are of interest to the application developer:
If an application has actions to perform upon one of the four system events, a subclass of UserApplication is a convenient place to specify those actions. ImageConfigurationSystem is useful for applications that process command line options.
Defining System Event Actions
Subsystem defines four system event messages to which subsystems can respond: activate, deactivate, pause, and resume. By default, these general events are invoked as follows:
- activate is invoked by #returnFromSnapshot, which occurs when an image is launched.
- deactivate is invoked by #aboutToQuit, which occurs just before the image exits.
- pause is invoked by #aboutToSnapshot, which occurs just prior to writing an image file.
- resume is invoked by #finishedSnapshot, which occurs just after the image file has been written.
Responding to System Events
Some subsystems invoke activate upon #earlySystemInstallation, but these are usually system level subsystems. For applications, #returnFromSnapshot is the appropriate system event. A subsystem does not respond to these system event messages directly. Instead, these messages invoke further messages in which a subsystem configures its response to the system events. The corresponding messages that a subsystem will implement as needed are:
- setUp - defines actions to perform upon the activate event message, and activates the subsystem.
- tearDown - defines actions to perform upon the deactivate event message, and deactivates the subsystem.
- pauseAction - defines actions to perform upon the pause event message.
- resumeAction - defines actions to perform upon the resume event message.
An application seldom needs to perform actions before or after a snapshot, which is generally a development-time activity, so applications do not generally have to provide implementations for pauseAction or resumeAction. An application does, however, frequently have actions to perform upon launching the image, such as setting up its runtime environment, and these are specified by an implementation of setUp. Less frequently, but not uncommonly, an application will also need to perform actions prior to shutdown, which can be implemented in the tearDown method. The UserApplication subsystem, which is intended to be the superclass for application subsystems, implements one additional stub method:
This method can be implemented by a subsystem to launch the application, as well as to perform other application set up tasks. This method simplifies starting an application upon image launch, eliminating the need to either save the image with the application open, or of using Runtime Packager to specify the application to run, or any of the other methods that have been used. As an example of using setUp and tearDown methods, consider the task of saving a random number seed upon shutdown and then reading that seed to restart a random number generator upon startup.
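A rough sketch of that seed example, as methods on a UserApplication subclass - the instance variables and helper selectors (readSavedSeed and friends) are invented here for illustration:

    setUp
        "Image launch: recreate the generator from the seed saved at shutdown."
        seed := self readSavedSeed.
        generator := self newGeneratorSeededWith: seed

    tearDown
        "Image shutdown: stow the current seed so the sequence resumes next launch."
        self writeSavedSeed: seed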
This is something I'll be migrating BottomFeeder to after the 3.8 release - which means that post 3.8, we'll be moving to a VW 7.3 runtime. I'll be posting comments on how I'm migrating to the new code - it should be useful information in general.
To see the full documentation on this stuff, open the Application Developer's Guide, and look at the Application Frameworks chapter.
I can almost see the lines now :) Reuters reports that "Harry Potter and the Half-Blood Prince" will be out July 16. Can my daughter wait that long? Can I?
I got a chuckle out of this touting of generics support on the CLR - scroll a bit more than halfway down and look at all the rules. Now, contrast that with what it takes to do the same thing in Smalltalk - just implement the darn method in the class you want it in. No fuss, no muss, no nine million rules to follow. I'm sure that the MS team thinks that those rules make things cosmically safer somehow; all they do is add bricks to the mental backload you have to cart around while writing software. Via Julia Lerman.
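For contrast, here's what "generic" code looks like with latent typing - no constraints to declare, just a message that any suitable receiver can answer (the selector is invented for the example):

    sum: aCollection
        "Works on an Array of integers, an OrderedCollection of floats, a Bag of
         fractions - anything whose elements understand #+ and combine with 0."
        ^aCollection inject: 0 into: [:total :each | total + each]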
The addition of Browder should work, so long as they don't let him write. No animated episodes!
Sounds like Enterprise is heading back to bogosity to me:
In the episodes, the Enterprise heads back to Earth for the official launch of the Columbia NX-02, Starfleet's second warp ship, commanded by Erika Hernandez. Phlox is abducted by aliens and finds himself in the presence of Klingons who tell him the Empire is facing its gravest threat in centuries. Along the way, as Archer and company investigate and pursue, it's revealed that one of our main characters has a secret past, which comes into play, the site reported.
No one is quite sure how this is happening, but one of the Mars Rovers (Opportunity) is getting its solar panels cleaned off at night. Did Giuliani send all the squeegee men that far away? :)
An unexplained phenomenon akin to a space-borne car wash has boosted the performance of one of the two U.S. rovers probing the surface of Mars, New Scientist magazine said on Tuesday.
It said something -- or someone -- had regularly cleaned layers of dust from the solar panels of the Mars Opportunity vehicle while it was closed down during the Martian night.
The cleaning had boosted the panels' power output close to their maximum 900 watt-hours per day after at one stage dropping to 500 watt-hours because of the heavy Martian dirt.
"These exciting and unexplained cleaning events have kept Opportunity in really great shape," the magazine quoted NASA rover team leader Jim Erickson as saying.
Michael Gartenberg seems to think big things are coming down the pike:
Just wrapped up another fascinating call. I'm still in thinking mode about the implications of what was discussed. The first few weeks in January are going to be very interesting. 2005 is shaping up to be a year of interesting products, relationships and technologies. Some of this stuff (if executed properly from a marketing and messaging perspective) is going to be amazing. You heard it here first :)
So is this another "big" announcement from MS (like Spaces), or something else?
Ted Neward hopes that XML will continue to get hammered as the uber answer:
XML will start to lose its luster. People are coming around to hate XSD Schema. Critics are popping up everywhere. Dave Megginson has a new book out, in fact, that basically takes everybody in the WS-* stack to task over creating specs before seeing if they'll actually work. In fact, the only successful application of XML thus far that anybody but a developer feels is weblogs and RSS, which obeys none of the classic rules of an XML spec. (Read his book, by the way--it's an eye-opener, particularly if you're one of the XML faithful, or if you've been thinking that XML will somehow make the computer world a safer, saner place. I'll put the Amazon link in here when I get a chance, but it's his latest from Addison-Wesley; shouldn't be too hard to find.) XML is useful, but we're starting to see the warts, and behave accordingly.
Hmm - where have I seen the "spec a week, without testing" theory before? Yes, with the OMG, back in the 90's, when CORBA was all the rage. The WS* specs are a complete rehash of that entire silly process...
I read posts like this one detailing the new generics system in .NET and it makes the baby Jesus want to cry. Where is the latent typing that made generic programming so powerful? The monstrosity they've cooked up is basically useful for any type of generic programming where you don't actually USE the thing... meaning, containers. *sigh* I think it's time to start coding in IronPython.
To get a sense of our true taste, unfiltered by the economics of scarcity, look at Rhapsody, a subscription-based streaming music service (owned by RealNetworks) that currently offers more than 735,000 tracks.
Chart Rhapsody's monthly statistics and you get a "power law" demand curve that looks much like any record store's, with huge appeal for the top tracks, tailing off quickly for less popular ones. But a really interesting thing happens once you dig below the top 40,000 tracks, which is about the amount of the fluid inventory (the albums carried that will eventually be sold) of the average real-world record store. Here, the Wal-Marts of the world go to zero - either they don't carry any more CDs, or the few potential local takers for such fringy fare never find it or never even enter the store.
The Rhapsody demand, however, keeps going. Not only is every one of Rhapsody's top 100,000 tracks streamed at least once each month, the same is true for its top 200,000, top 300,000, and top 400,000. As fast as Rhapsody adds tracks to its library, those songs find an audience, even if it's just a few people a month, somewhere in the country.
This is the Long Tail.
This is something that mass retailers are going to have to face. There's still a lot of pleasure to be gained in just browsing at a store - and you'll often find things that you wouldn't have noticed otherwise that way (I spotted a bunch of history books at Borders yesterday that I'll want to get, for instance). On the other hand, search is pretty darn good at Amazon - and I've found things of interest that way as well - things that I would be unlikely to ever notice at a real store.
There's plenty of room for the big retailers in the mass market end of the business. What's opening up are the niche spaces that hardly anyone bothered servicing before. Your particular taste in music (or books, etc) might be rare enough that Borders would never bother to stock it - but the online retailer who has no actual inventory (beyond bits) can easily do so. As book publishing moves towards on demand binding, you'll see the same kind of disintermediation hit book sales that's been slamming music for the last few years.
Festive cheer and goodwill was in short supply in Newtown when people dressed as Santa were involved in a mass street brawl, say police.
Officers used CS spray and batons to break up trouble amongst up to 30 people, following Newtown's annual charity Santa run.
Can't say that I've ever seen a report that referred to Santa Claus and CS gas simultaneously...
Sci Fi Wire reports that the sixth Harry Potter book (not out until July 16th) is already a best seller.
VW Traits is an implementation of Traits for VisualWorks. However, it is not a strict Traits implementation. Traits programming is a technique of code composition and code reuse. A Trait is a partial implementation of a class; multiple traits are combined together, along with glue code, to comprise a class (the client). You can view the original Trait papers here.
See the Wiki page for more details, or dive right in by loading up VWTraits and VWTraits Base VW Overrides from the public Store repository.
Ambrai Smalltalk has reached another beta milestone - see the Ambrai website for details. Ambrai is a Mac specific Smalltalk implementation.
Our lead VM Engineer, Eliot Miranda, has pushed out a few details on the 64 bit work the team is doing:
The 64-bit System: Overview
The 64-bit implementation uses full 64-bit addresses for objects, providing the ability to fill the entire available address space with objects. For an idea of the effective limits today, consider that current AMD x86-64 chipsets support a 48-bit virtual address and a 40-bit physical address, while the average 64-bit object size is around 64 bytes. So the maximum number of objects is theoretically (2 raisedTo: 40) / 64.0, or around 16 Giga objects in a system with a terabyte of memory.
The 64-bit System: Implementation
There are only three tagged types: 61-bit 2's complement SmallIntegers, 61-bit unsigned Characters, and a 61-bit SmallDouble - a subset of the 64-bit IEEE double-precision format that provides the central 1/8th of the IEEE range at full precision. The immediate floating-point format provides a very usable range (approximately -1.0d77 to 1.0d77) which overflows to full 64-bit boxed Doubles when results don't fit. It provides a faster and much more space-affordable floating-point, being about 2.5 times faster than boxed Doubles (about 2.5 times slower than SmallInteger arithmetic) and having no space overhead.
The rationale behind using only three of the possible seven immediate tag patterns is two-fold. First, measurements show that not much space is saved by doing things like packing seven-byte symbols into immediates because intrinsically these short symbols don't take up much space anyway, and providing access to such packed immediate types slows down the path for normal object access in things like the #at: primitives. Second, by using three tag types we can make the #isImmediate, #isSmallInteger and #isSmallDouble tests faster, since they need test only a single bit, these tests being performance-critical to inlined arithmetic and object access in the various #at: and #at:put: primitives.
The 64-bit object representation is relatively more compact than the 32-bit one, resulting in only a 33% growth in object header size from 12 bytes to 16. In particular object headers no longer reference their class object directly but instead include a 20-bit "class index" that is used in all in-line cache tests and in object instantiation. Classes are held in a sparse table and accessed by dereferencing the 20-bit index. This saves 44 bits while imposing a restriction on total number of classes unlikely to be a problem to contemporary applications. Class objects are accessed quite rarely, for example when a message send fails to find a lookup in the method caches and has to do a full class hierarchy lookup, or when the programmer explicitly accesses the class via the Object>>#class primitive. 64-bit objects have a 20 bit identity hash field (and in fact the class index is the class's identity hash) giving 1 Meg hash values (up from 16383 in the 32-bit system) and a maximum of 1 Meg classes. The header of 64-bit pointer objects also includes the number of fixed fields (number of named instance variables) so that the accessing primitives #at: and #at:put: no longer have to indirect through the class to find how many named instance variables to skip over. Consequently array access performance is much improved.
We have gratefully adopted an idea by Mark Van Gulik to do with tagging objects. Because object headers comprise two 64-bit words they can be placed at an even or an odd modulo 128-bit boundary. In the 64-bit system PermSpace objects are on an odd boundary and OldSpace objects on the even one. This means that the store check is slightly faster, but much more importantly means that PermSpace can be placed anywhere in the address space. All that needs to happen is for a PermSpace segment to have its object table aligned on an odd modulo 128-bit boundary. This means we can implement shared PermSpace easily, allowing the operating system to dictate where to memory map a PermSpace segment. Thus bit 4 is a tag that distinguishes between PermSpace and OldSpace objects, and hence we call it "tagged perm".
In the 32-bit system we require all of PermSpace to be above all of OldSpace or vice versa. When shared perm was first implemented OldSpace growth was not implemented. It was therefore easy to implement shared PermSpace with the OS mapping it above all of OldSpace. But once we provided OldSpace growth and shrinkage by memory-mapping OldSpace segments it became much more difficult to guarantee that PermSpace is mapped above all other heap segments since the upper portion of the address space is typically where shared libraries, the stack segment and memory-mapping all collide in a manner best determined by the OS and not amenable to precise control from an application program. Mapping memory at low addresses is typically difficult because in the lower portion of the address space the C heap and the application's code and data itself collide. Hence we have yet to reimplement shared perm in the 32-bit system.
By using the "tagged perm" scheme we are able to decouple PermSpace from location and can hence easily share PermSpace, and allow PermSpace to grow. This first preview release does not support shared PermSpace.
As implied above, a preview (early beta) of the 64-bit VM is on the latest release CD.
Apparently, you can only give your customers (and prospects) the middle finger for so long - even the really dim management above SCO seems to have started to learn something:
Shares in Utah's SCO Group went into a tailspin late Tuesday as news spread of both deepening losses and an apparent coup at the software company's corporate parent, the Canopy Group.
SCO shares closed at $4.51 in regular trading on the Nasdaq Stock Market, down 33 cents, or 7 percent. Then came SCO's dismal earnings reports for the fourth quarter and fiscal 2004; within minutes shares plunged another 46 cents, or 10 percent, in after-hours trading, to $4.05.
SCO, embroiled in multibillion-dollar federal litigation against IBM and others over its purported rights to the Unix and Linux operating systems, more than quadrupled its fourth-quarter losses. For the quarter ending Oct. 31, SCO's loss sank to $6.5 million, or 37 cents a share; the company had lost $1.6 million, or 12 cents, in the same period last year.
Investors already were absorbing news, leaked out in bits and pieces earlier Tuesday, about an apparent weekend coup that ousted Ralph Yarro, Canopy's longtime president, chairman and chief executive, along with Chief Financial Officer Darcy Mott.
Secretaries at Canopy's Lindon headquarters confirmed that Yarro and Mott were "no longer with the company."
I guess even the really slow learners over there have finally started to recognize where the lawsuit madness has been leading...