I've added basic module support to BottomFeeder. What are modules? Elements that live in their own namespace so that collisions are avoided. The only one that is really used at present is the content module - see here for a description. Ironically, this is the tag that Bf needs least, since the VW XML Parser detects encoded content for me; I just look for this tag because feeds that use it probably don't have a description tag. I'll be adding support for additional modules as I have time - and - more importantly - starting to use some of the newly available information.
So we get to see Angel back on the last two episodes - what I'm waiting for is the Angel/Spike dialog - that has the potential to be very amusing. The other question - since this is it for the series, it's open season - Joss could kill pretty much any character at this point. So will any of the regulars end up pushing up daisies? Hmmm....
An oft-heard complaint, echoed recently by Annrai O'Toole, is that J2EE app servers are oversold:
The J2EE vendors have done a fantastic job of convincing the world that you can't run a line of Java unless it runs inside a J2EE container. This is just pure bunkum.
I like his formulation that "J2EE is the Java equivalent of a mainframe." We also have, of course, in COM+, the Windows version of the same idea, which in its earlier MTS incarnation predated J2EE. I also notice that the real mainframes haven't gone away. With respect to the middleware services for which the J2EE server is best known -- "TP-heavy" transaction management, connection and object pooling, role-based security, and declarative control of these aspects -- the question of when and why this stuff is or isn't overkill seems never to go away.
Needless complexity.... There are times you need the heavyweight stuff. But most of the time, you don't.
This comes back to my notion of evolutionary IT. It's a mistake to assume that there's any guiding intelligence in IT. Decisions are made at random. Over the course of millions of years, certain decisions will result in a greater survival percentage....
I guess we settle in for a long wait....
Regardless of your policy views, this is just amusing. There's one way to "filibuster" programs you object to :)
where's the code?
I don't know about the Squeak code, but the VW code for BottomFeeder is at SourceForge - although for up-to-date access, get a public Store account and grab the BottomFeeder bundle. You should also load the latest Twoflower bundle - the system will load the 'goodie' version from the parcel directories, but it's out of date relative to the database.
Anyone interested in the code that powers this blog can grab the CST-Blog bundle from the public store as well.
Things are going okay for now, but it's getting more and more difficult to convince anyone to do anything in Smalltalk. I can't understand how a company that is finally reaping the benefits of a large-scale project done successfully in Smalltalk starts looking to throw all that investment out the window and go with a lesser tool.
In my particular area, we deal with both Smalltalk and Java, and we (the Smalltalk team) are consistently delivering a more stable product faster (which means that we usually 'wait' until the Java part is ready). My team lead sees that, my manager sees that, even my director sees that. But when it comes to the VP, it's all Java talk.
Here's a tip for IT managers everywhere: If it's not broken, don't fix it. Maybe, just maybe, the company you work for won't have to lay off half your staff if you keep that in mind....
I received more details in email from the guy who sent me this:
I forgot to tell you the really fun part. (dohh!) Yes, part of the problem was mismanagement. A bigger problem was C++ and what Rhapsody could and could not do with it. The guys ended up doing some pretty basic R&D on the Rhapsody codebase which we guilt-tripped out of the vendor. This product gives great demo, but doesn't deliver for complex tasks. And of course one could replicate it in Smalltalk in about a week and not pay the ridiculous fees.
One of the biggest problems was building a decent message passing system between CPUs. As the doors are closing, there are still rancorous arguments about CORBA vs. RPC vs. sockets, and all of them are ugly in one way or another. C++'s lack of reflective capability is simply a disaster, and our C++ jocks could barely comprehend the nature of the problem.
VisualWorks developers - there are new VW 7.1 engines available for download:
These 7.1a engines are the most up-to-date engines for VisualWorks 7.x images as of May 2003. Version 7.1 engines are backward-compatible with 7.0 images and are recommended.
The version 7.1a engines fix the following bugs in the 7.1 engines:
Linkable GUI Platforms (AOSF, Linux86, LinuxPPC, LinuxSPARC, Solaris)
- The primitives for longLongAt:put: and unsignedLongLongAt:put: were broken for large integer values.
- The linkable GUI platforms could not accept international input.
These problems have been addressed; download links for all engines are available here.
I got this in email from a former colleague trapped in the unproductive bowels of the Java-Verse:
Thought I'd add another entry to your probably-overflowing file of "Gee, they shoulda used Smalltalk" stories.
For the last three years, I've been working on the UI for an industrial robot as a contractor for a guy I had worked for before. I tried to sell them Smalltalk but they went for Java, which worked thanks to the use of Smalltalk techniques but still didn't meet their realtime requirements. Eventually we switched over to Qt and life got really unpleasant. But that's just a side story.
This project has been running for several years. My client originally sold this project to a Japanese company that has trouble writing software. He sold it based on a software tool called Rhapsody: draw a bunch of UML diagrams and it generates C++ for you. His client saw this and went wild.
The last three years have been an interesting roller-coaster ride, consisting mostly of the inability of a *very* strong team to create the necessary software in a reasonable amount of time. Part of the problem was mismanagement, or simply lack of management, which is odd because the guy who started this has a reputation as the ultimate software development manager.
Recently, his client sent over a Japanese OO expert to take a look at what was going on. There was a big company pizza party to introduce him. He gave a little speech in which he said there was this system that he had first used back in 1984 and it still amazed him that everyone wasn't using it.
I was told by others present at a high-level meeting that my client asked this OO guy how he would have approached this problem. He said he probably would have started off writing it in Smalltalk.
BTW my client's company is closing its doors this Thursday.
I've certainly never thought of OO as a silver bullet - it's a useful technique for solving software problems, nothing more. The enemies of OO certainly like to paint it that way though. There's so much unclear on the concept stuff on that page that it's hard to know where to start; here's a sample of how this guy goes off the rails:
Let me give an example of this. Suppose I want to define a ubiquitous data structure. A ubiquitous data type is a data type that occurs "all over the place" in a system.
As Lisp programmers have known for a long time, it is better to have a smallish number of ubiquitous data types and a large number of small functions that work on them, than to have a large number of data types and a small number of functions that work on them.
A ubiquitous data structure is something like a linked list, or an array or a hash table or a more advanced object like a time or date or filename.
In an OOPL I have to choose some base object in which I will define the ubiquitous data structure; all other objects that want to use this data structure must inherit from this object. Suppose now I want to create some "time" object - where does this belong and in which object...
This gets a big huh??? - even in statically typed monstrosities like Java, interfaces allow one to define objects where they belong, yet still present a consistent interface. Smalltalk has always allowed for this, as has CLOS. Coming from someone who seems to know Lisp, I'm just baffled. This is either a straw man argument, or a real failure to grasp the concept. There's more; go see for yourself.
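To make the point concrete, here's a trivial doit - everything below is standard Smalltalk, no shared base class or declared interface required. The "interface" is simply the set of messages an object responds to:

```smalltalk
"Date, Time, Fraction, and String share no useful superclass below
 Object, yet they all live happily in one collection and respond to
 the same message. No base 'ubiquitous' object was ever declared:"
| items |
items := OrderedCollection new.
items add: Date today.
items add: Time now.
items add: 3 / 4.
items add: 'just a string'.
items do: [:each | Transcript showCr: each printString]
```

A "time" object goes wherever it naturally belongs in the hierarchy; nothing needs to inherit from it to use it.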
I posted on a CIO Insight article recently - the one that went into John Parkinson's theories on who actually does most of the development work. Today, Terry Raymond points out that the ACM has noticed the article as well:
John Parkinson of Cap Gemini Ernst & Young concludes that, as far back as the late 1970s, the most sophisticated and reliable software applications were chiefly the work of a small portion of programmers who were labeled 10X or Power Programmers, but these individuals fell out of favor when companies started concentrating on boosting the performance of average programmers via tools and methodologies. However, Parkinson has found indications that Power Programmers are staging a comeback, as evidenced by software products such as Squeak, an advanced version of the Smalltalk object-oriented development environment, and the inSORS Grid IP Multicast collaboration and videoconferencing system.
Interesting how people are noticing high levels of productivity and Smalltalk simultaneously. Now if they could just connect the dots.....
This rundown on Startrek was just perfect. This is funny:
Yeah, I know this one is overdone, but you'd think that the first time an explosion caused the guy at the nav station to fly over the captain's head with a good 8 feet of clearance, someone would say, "You know, we might think of inventing some futuristic restraining device to prevent that from happening." So of course, they did make something like that for the second Enterprise (the first one blew up due to poor lubrication), but what was it? A hard plastic thing that's locked over your thighs. Oh, I'll bet THAT feels good in the corners. "Hey look! The leg-bars worked as advertised! There goes Kirk's torso!"
Here's the "Firefly" reference:
Picard: "Arm photon torpedoes!" Riker: "Captain! Are you sure that's wise?" Troi: "Captain! I'm picking up conflicting feelings about this! And, it appears that you're a 'fraidy cat." Wesley: "Captain, I'm just an annoying punk, but I thought I should say something." Worf: "Captain, can I push the button? This is giving me a big Klingon warrior chubby." Giordi: "Captain, I think we should reverse the polarity on them first." Picard: "I'm so confused. I'm going to go to my stateroom and look pensive."
Captain: "Let's shoot them." Crewman: "Are you sure that's wise?" Captain: "Do you know what the chain of command is? It's the chain I'll BEAT YOU WITH until you realize who's in command." Crewman: "Aye Aye, sir!"
Bob Lewis has a good commentary out on IT management's tendency to want to measure everything:
Someone - probably Deming - once said that if you can't measure you can't manage. Or, if you can't measure you can't improve. Something like that.
The problem is that instead of keeping the thought in context, managers, egged on by an army of consultants, immediately applied it to every blessed thing in the business without thinking very hard about the implications.
Many have missed the most important impact of measurement, which is that once you institute a program of formal measurement, you get what you measure and only what you measure.
That last part ought to be in bold: once you institute a program of formal measurement, you get what you measure and only what you measure.
PORTLAND, Oregon (AP) -- Position Available: Interpreter, must be fluent in Klingon.
"There are some cases where we've had mental health patients where this was all they would speak," said the county's purchasing administrator, Franna Hathaway.
County officials said that obligates them to respond with a Klingon-English interpreter, putting the language of starship Enterprise officer Worf and other Klingon characters on a par with common languages such as Russian and Vietnamese, and less common tongues including Dari and Tongan.
I wrote about VW memory management here the other day. In that article, I mostly went over areas you can change by implementing a custom MemoryPolicy class. That's hardly the only thing you can do, though; there are a variety of settings in class ObjectMemory that you should be aware of. An excellent place to start is the documentation protocol on the class side of ObjectMemory - look at spaceDescriptions and reclamationFacilities; these two methods are actually documentation. After you've read that, you'll have a better handle on how the VW memory system works.
For most applications, there are two memory spaces you'll want to take note of:
- New Space (Eden)
- Old Space
In rough terms, New Space is where objects are born. Old Space is where objects that don't die immediately go to live. If you think about it, it's clear why these two areas are of interest - if New Space is too small, then objects get tenured off into Old Space too quickly - and depending on how you have set up growth policies (see my earlier post), you can get either rapid memory growth or thrashing - or both in sequence. On the other hand, make it too big, and the incremental collector - the one that scans New Space on a regular basis - may well spend too much time doing so, making for slower performance. The "correct" size is a balance between processor speed and application behavior - so you'll have to test it out. The default setting is often ok as-is.
Then there's Old Space - objects that don't get wiped out by the incremental collector go there to live - and remember, Old Space is not going to get scanned unless:
- The result of #favorGrowthOverReclamation is false, and the system has to scavenge to create space for objects moving from New Space
- You manually invoke the garbage collector (for instance, using the launcher menu)
If you know that your application creates lots of long-lived objects, it's a good idea to increase the size of Old Space up front. Recall that MemoryPolicy merely controls how the system behaves - not how much space is available at startup. To change the space sizes, you can do one of two things:
- Use ObjectMemory class>>sizesAtStartup: to tell the system to increase the starting sizes at system startup. If you send this message, make sure to save the image - this only happens at startup!
- Use the API for expanding memory - ObjectMemory class>>growMemoryBy:
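As a rough sketch, the two options look like this in a workspace. The sizes here are illustrative only - tune them for your application, and check the class-side documentation protocol of ObjectMemory for the exact layout expected by the sizesAtStartup: argument:

```smalltalk
"Option 1: request larger spaces at the next image startup. This only
 takes effect at startup, so save the image after evaluating it. The
 array elements are size multipliers for the various memory spaces -
 see the spaceDescriptions documentation method for which is which;
 the values below are illustrative:"
ObjectMemory sizesAtStartup: #(2.0 2.0 1.0 1.0 1.0 1.0 1.0).

"Option 2: grow memory right now - the argument is a byte count.
 Here we ask for an additional 64 MB:"
ObjectMemory growMemoryBy: 64 * 1024 * 1024.
```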
You can also shrink memory - but you want to be careful about doing that. In general, you should let the system do that itself via the settings in MemoryPolicy.
This barely scratches the surface of things you can do with the VW memory system - the settings allow for a lot of control over how and why memory is expanded or contracted. Make sure you read the class comments in MemoryPolicy and ObjectMemory - and the documentation methods there - before you get started. Questions? Send them my way.
Patrick Logan points out that with keyword messages and blocks, you get an awful lot of the goodness of macros in Lisp:
While Smalltalk doesn't have syntax macros like Lisp, Smalltalk *does* have simpler block syntax and keyword arguments.
The effect is that new keyword messages can take block closures, which essentially delay evaluation until the method behind the message decides to evaluate them, and the message send looks an awful lot like new syntax, very clean.
Many uses of macros in Lisp, frankly, are simply created to hide the appearance of the lambda keyword. Of course there are more "legitimate" uses of macros, but these are less common.
This was noticed by Ted Leung, who asked:
This use of keyword messages and closures is really interesting. In Chapter 8 of On Lisp, Graham lists 6 reasons to use macros. Delaying or altering evaluation is an aspect of the first 4 reasons. Reason 5 involves using the calling environment, which you very rarely want to do, reason 6 involves using a new lexical environment, which you also rarely want to do, and reason 7 is inlining which is a performance annotation.
I'd be curious to see some examples of this type of Smalltalk code.
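Here's a small taste of what that looks like in practice. The point is that ifTrue:ifFalse: and detect:ifNone: are ordinary keyword messages taking blocks - there is no special syntax, and user code can define new "control structures" in exactly the same way:

```smalltalk
"Blocks delay evaluation; the method behind the keyword message
 decides if and when to run them. ifTrue:ifFalse: is not compiler
 magic - it's a message sent to a Boolean:"
| balance |
balance := 100.
balance > 50
	ifTrue: [Transcript showCr: 'balance is healthy']
	ifFalse: [Transcript showCr: 'balance is low'].

"Library code gets the same power. detect:ifNone: evaluates its
 second block only when no element matches - a control structure
 that is just an ordinary method on Collection:"
#(1 3 5)
	detect: [:each | each even]
	ifNone: [Transcript showCr: 'no even numbers found']
```

The message send reads like new syntax, but it's all plain methods and blocks - which is exactly the macro-like effect Patrick describes.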
Rich Demers tried to answer on Ted's blog, but ran into a small problem with BottomFeeder's implementation of the CommentAPI - I was URL encoding the text. Dohh! I fixed that, but here's the link Rich was pointing to.
Periodically, I get asked about using VisualWorks in large memory situations. In the sense I'm discussing it, this means more than a gigabyte. If you need to set up a VW image that is going to use more than that amount of RAM in a runtime situation, you are going to have to get into the guts of the MemoryPolicy system. The first thing you'll want to do is read the documentation - it's pretty well covered there. After that, you'll need to have a look at a few places. The place to start is class MemoryPolicy. It turns out the ObjectMemory is managed, in large part, by settings in the current memory policy. As the class comment puts it:
This class contains a class variable called CurrentMemoryPolicy which contains a reference to the currently active memory-policy object. Control is passed to the CurrentMemoryPolicy during the idle loop and during the low-space condition for the appropriate response. This class also relies on the CurrentMemoryPolicy to specify a hard and soft low-space limit. The HardLowSpaceLimit is meant to represent an emergency low-space condition, and the SoftLowSpaceLimit is a secondary threshold that the CurrentMemoryPolicy can use to gain control in advance of the hard limit. This class arranges for control to be passed to the CurrentMemoryPolicy whenever either of these two limits is exceeded. See the comments in the class MemoryPolicy for further details regarding the role that a memory policy is expected to play.
This is why you have to modify MemoryPolicy if you have an application that will use "large" amounts of RAM - the basic policies aren't going to work. For our purposes, here's the core method that will interest you in MemoryPolicy: #favorGrowthOverReclamation. This method is invoked whenever the system needs more space for new objects - it has to decide, based on the current policy, whether to ask the OS for more RAM, or whether to do a GC and hope some space clears up. Here's the basic implementation:
favorGrowthOverReclamation
	"Answer true if we want to react (at this point in time) to the
	 low-space condition by growing memory rather than reclaiming memory."

	^growthRegimeMeasurementBlock value <= growthRegimeUpperBound
growthRegimeUpperBound is set to 32MB by default - likely not what you want! It's better than the pre-7.1 setting of 16MB, but not suitable for the situation we are discussing. The block being referenced looks like this:
[ObjectMemory dynamicallyAllocatedFootprint] or [ObjectMemory growthDuringCurrentSession]
What that's going to do is cap the maximum growth of memory for the Smalltalk application. You'll likely want to adjust that block based on your application and deployment situation. Another place you'll want to look is the amount of RAM the system allocates when it does expand - which is controlled by the preferredGrowthIncrement variable. By default, this is set to 1 MB - you'll likely want it bigger. How do you do this? Here are the steps:
- Define a new subclass of MemoryPolicy. Make sure to implement the #setDefaults method, invoke the superclass version, and set the values of the appropriate variables.
- Install the policy like this: ObjectMemory installMemoryPolicy: MyMemoryPolicy new setDefaults
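Putting those two steps together, a minimal sketch might look like the following. The class name follows the MyMemoryPolicy example above, and the concrete numbers are illustrative only - pick values that match your deployment:

```smalltalk
"Step 1: in a browser, create a subclass of MemoryPolicy named
 MyMemoryPolicy, and give it a setDefaults method that invokes the
 superclass version before adjusting the knobs you care about:"

setDefaults
	super setDefaults.
	growthRegimeUpperBound := 512 * 1024 * 1024.	"allow growth to ~512 MB (illustrative)"
	preferredGrowthIncrement := 16 * 1024 * 1024.	"grab 16 MB per expansion (illustrative)"

"Step 2: install the new policy (save the image if you want it
 to stick across restarts):"
ObjectMemory installMemoryPolicy: MyMemoryPolicy new setDefaults
```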
I've published an example implementation in the public store. The #favorGrowthOverReclamation method there looks like this:
favorGrowthOverReclamation
	^self memoryStatus availableFreeBytes <= self growIfFreeBytesLessThan
Where growIfFreeBytesLessThan is a new variable setting - it's the floor for how much available RAM the image ought to have. Be careful with this strategy! This method puts no ceiling on image growth. If the OS allows you to limit that, or there are other application-specific factors that make this "not a problem", go ahead - but you'll likely want to think long and hard about this one.
In any case, this ought to get you started. To take a look at the example Memory Policy, look at package LargeMemoryPolicy in the public store. It installs itself as the new memory policy on load, as follows:
ObjectMemory installMemoryPolicy: LargeMemoryPolicy new setDefaults
Questions? Send them my way.
Jim Robertson's Axiom of Software Development:
No good deed goes unpunished, no bad deed goes unrewarded
yes, I'm feeling cynical today :)
Tim Bray attempts to denigrate dynamic languages as "big" and "bloated" when one uses them to deploy systems. Hmm. Maybe he can explain this: BottomFeeder is in the same neighborhood as the other tools out there in widespread use. So much for "fat". Maybe he can also explain why Netscape routinely chews more RAM on my desktop than my running Smalltalk development environments do, much less the deployed applications. Then there's the complaint that Ted comments on:
Secondly, and in the same spirit, there do remain performance issues. There is some (small) number of people who have to write low-level webserver code, and if you've ever done this under the gun of a million-hits-a-day load, you quickly become a control freak with an insane desire to remove as many as possible of the layers which separate your code from the silicon.
Hmmm. Sure, you can get better math performance from Java or C++ than you can from Smalltalk. On the other hand, Strongtalk is comparable, and Lisp can be as fast or faster. The main reason Smalltalk environments don't optimize for math speed is that it's not very important for most business applications. Why are the C/C++/Java crowd always so obsessed with this? Get it working, get it optimized - in that order. The C language crowd invariably inverts this. Not to mention this - in my experience, the performance issues are invariably not where the developers thought they would be up front. Premature optimization is a poor development practice.
But darned if Ted Leung didn't beat me to it:
Sun has a new article describing new language features in JDK 1.5. I love this sidebar quote:
The new language features all have one thing in common: they take some common idiom and provide linguistic support for it. In other words, they shift the responsibility for writing the boilerplate code from the programmer to the compiler.
In other words, we're modifying the language because we didn't have a macro system that we could do this with -- at least for generics, enhanced for, static import, and attributes. Getting rid of boilerplate code is what macros are all about.
Also notice that 3 out of the 6 features in the article are being copied from C#.
While Smalltalk doesn't have the macro system Lisp has, generics are a non-issue in ST, as is boxing/unboxing. Ditto the "simplification" things. One could summarize this whole thing as adding complexity to a language to make up for its glaring flaws.
Torsten Bergmann has two new goodies in the public store:
This small package adds separate window icons to the Launcher and Inspectors in VisualWorks 7.x. This is very helpful for Windows users, since in the default VW implementation it is hard to distinguish IDE windows - they all have the same Cincom ST icon. If you are used to using ALT+TAB to switch between VisualWorks windows and other applications, you'll find this very handy.
This small package adds support for the native Windows shell folder dialog in VisualWorks. It wraps the SHBrowseForFolder() and SHGetPathFromIDList() APIs.
The New York City STUG is having an interesting meeting on May 28th:
NYC Smalltalk will hold its next meeting on Wednesday May 28th, 2003.
Date: May 28th, 2003
Location: Suite LLC offices
Address: 440 9th Avenue, 8th Floor
6:30pm to 7:00pm -- Open house
7:00pm to 8:30pm -- Ethnography and System Design
See below for abstract and bios.
Take E or C train to 34th (Penn Station) walk to corner of 34th and 8th. Walk up one block to 9th.
RSVP is requested. Please send mail to email@example.com with subject line of: NYC Smalltalk May 28th, 2003
Our meetings are open to the general public. Invite a friend! To join our mailing list simply send mail to firstname.lastname@example.org.
Joining our list will give members access to all of the presentations and articles maintained on our site. Any questions? Send mail to email@example.com.
Charles, Chair - NYC Smalltalk
Ethnography is the output of anthropological study; literally it means "writing people" or "writing culture" and it is usually a detailed description of a particular people, organization or type of work. What, you might ask, does this have to do with computer system design?
For the past 30 years, methods from traditional anthropology have been used to study people and their work and apply what is learned to designing computer systems. This presentation describes these methods and their application. We'll tell you how an Anthropologist got started in the Smalltalk lab at Xerox PARC and how an engineer got involved with anthropologists. We'll draw the connections to certain European software communities (e.g., Kristen Nygaard and Participatory Design), tell you about a few of our projects and what we are doing now.
Our contention is that, especially in an environment that emphasizes agile, iterative approaches to software, observing technology in use is an important capability that software system architects, designers, and developers can use to ensure successful design, development and deployment.
William L. Anderson
Bill is a co-founder of PRAXIS101. His consulting practice focuses on user-centered systems architecture, participatory design, software development practice innovation, and organizational learning.
Prior to founding Praxis101 Bill was a Software Architect for Xerox Corporation, where he developed service discovery software as part of an agent-based, peer-to-peer, document services middleware infrastructure. He pioneered codevelopment and participatory customer collaborations in rapid prototyping and product development in projects that put end-users and engineers together, jointly developing and deploying new products. Prior to Xerox, Bill worked in the telecom, networking, and pharmaceutical industries. Bill holds a Ph.D. in Theoretical Chemistry from Rensselaer Polytechnic Institute. He is currently serving on the National Research Council's National Committee on Data for Science and Technology, and speaks internationally on issues of preservation and access to scientific and technical data.
Susan L. Anderson
Susan is a co-founder of PRAXIS101 and fassforward consulting group. She uses her expertise with cultural nuance, transformation and innovation to help companies break through their tough problems, plan for the future and productively engage employees and customers.
Formerly, Susan was Senior Director for Gartner.Com (covering users, technologies and competitive intelligence) where she helped build the next generation Gartner.Com. Prior to that, at Xerox, she was instrumental in developing collaborative, socio-technical approaches to the design of new technology (from knowledge solutions to remote collaboration tools and systems to document management systems). Susan's work has also emphasized change initiatives in corporations, education, health care and international development. She has been an active contributor to professional conferences and organizations. Susan holds a Ph.D. in Anthropology; she has written on topics from technology's role in health decision making to technology-mediated collaboration.
Looks different from the normal User's Group meeting - if you plan to be in NYC then, check it out!
Via Sam Ruby:
Sean McGrath: Dynamic typing is good for you. It's good for programming. It's good for data. It makes it possible to develop software without a crystal ball and without a 10x hit between "product" and services, thanks to the extra levels of loose coupling it introduces into your designs. If you haven't yet tried it - get Python, get XML and do something non-trivial with the combination. Give it a couple of weeks to sink in. I promise you, you won't go back.
Although, I'd say try Smalltalk as well. Give Lisp a shot. Look at Ruby. And after all that, see what you think.
I'll be visiting a customer site this morning, so no blogging for a bit. First, a nice hot Mocha. mmmmm....
Managability is still fighting the good fight over static typing. Today, he's appealing to authority:
- Improves robustness through early error detection
- Increases performance by making required checks at the best times
- Supplements the weaknesses of unit testing
Hmm. Early error detection? I don't think his expert has done a lot of work in dynamic languages. The kinds of errors that static typing guards against just aren't that common - I can't recall the last time I made one, for instance. Increases performance? Early optimization is always a bad strategy; if you are thinking about optimization first, you've made a mistake. Weaknesses of unit testing? You mean, testing the actual usage of the code is less useful than verifying that we passed an int to a method defined that way? Pardon me while I laugh. A lot.
After showing a demo of how nifty Eclipse is, he goes on to say:
I think given the arguments presented by Eric Allen, and the utility of Eclipse in doing TDD, the issue of Static vs Dynamic typing should be a closed issue.
Sure, just not the way you think. The Eclipse link argues about the lack of power in Ruby and Python development environments (is this true?). Here's a tip - grab a Smalltalk environment, and get back to us.
Within the context of commenting on Sun and Open Source, Ted Neward points out the irrelevance of vendor neutrality in the J2EE world:
FWIW, I agree with the large vendors that JBoss doesn't really deserve the J2EE brand name; I also think having the J2EE brand isn't worth the bytes it would take to put it on the website. By this point in time, with the market having consolidated behind just a few key players in the J2EE space, is it really all that important to be vendor-neutral?
That's about right - given how much money companies spend on J2EE server tools, migration is about as likely as db migration - i.e., not very. Given that, moving to J2EE will yield just as much vendor lock-in as moving to Smalltalk. It will also mean dramatically less productivity. Sure, you can find more developers with Java on their resume - but see this post for how that will work out....
Bob Martin discusses overstaffing and empire building this morning:
I once consulted for a company that had 50 developers working on a simple GUI. This GUI was a flat panel touch screen upon which several dozen dialog boxes could be made to appear. These 50 developers worked on this project for five years or more. That's 25 man-decades, 2.5 man-centuries! COME ON! Three guys could have done this in three months! My buddies and I used to joke that they had one developer per pixel and that each developer wrote the code for his pixel.
OK, so the manager was empire building. Some managers measure their worth by the number of people they manage rather than by how much they can get done with how little. Still, I find this problem is not isolated. It seems to me that a large fraction (perhaps a majority) of all software projects are overstaffed by a huge factor.
Another question to raise here - these problems are most rampant at the largest companies - the same ones that follow all the fads - and the same ones that are now furiously outsourcing development in order to save money. I think I see a pattern forming.
Well, Joss placed a link between Buffy and Angel this week - obviously, as Angel will be on the last two Buffy episodes - but it wasn't what I expected. My question for next season is, what the heck was Gunn offered?
I posted a little while ago on our home network blues. I just found the source of the problem - under Services (this is an XP Home machine), the Server service wouldn't start - when we tried, it gave us some bizarre message about a missing file. I figured that maybe allowing it to get updates from Microsoft would help.
Well, that went oddly. After rebooting, the system wouldn't see the internet. So we had to re-run the blasted wizard, and that fixed that. Then voila, the Server and Computer Browser services started up, and we could actually share printers again. Huzzah.
So my question is, why was this so frelling hard?
Maybe it's the language that Ted works in. I commented on Ted's posts comparing the object, relational, hierarchical, and procedural approaches last week here and here. So today I ran across Ockham's Flashlight and this excellent observation:
As Ted points out:
thanks to the fact that everything produced by a SQL query is a table, we can have a single API for extracting however much, or however little, data from any particular query that we need. We don't have the "smaller than an object" problem...
However, the issue here is not a problem with OO, it is a problem with the crippled languages that dominate mainstream application development. C++ and its VM-hosted descendants freeze types as they are loaded. You cannot dynamically change the type of, say, a Java instance at runtime. Nor can one instance of a class be different typewise than other instances. Further evidence of the problem is the host of partial solutions: CLR attributes, AspectJ, XDoclet, the Universal Delegator, SQLJ, and countless code generation tools provide ways to "enhance" classes. And, at least in part, they work around the underlying weakness of the languages they are built for.
A different approach would be to choose a more powerful language. Ruby and Smalltalk don't have any problem letting you create right-sized objects whenever you need them. Moreover, extensions such as AOP are simple to code in such languages, and work within the language instead of cutting across the grain.
So why don't these languages, or others like them, dominate application development?
I think he has a point. There's still an impedance mismatch from OO to Relational in Smalltalk, but it's easier to work through.
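To make the "right-sized objects" point concrete, here's a minimal Ruby sketch (my example, not from Ockham's Flashlight) of what the frozen-type languages can't do: reshaping types while the program runs.

```ruby
# Reopen an existing class and add a method - nothing is frozen at load time.
class String
  def shout
    upcase + "!"
  end
end

# Give ONE instance its own method, so it differs, typewise, from its
# siblings - something Java and C++ can't express at all.
plain = "hello"
special = "hello"
def special.shout
  "(quietly) " + self
end

# Build a "right-sized" object on the fly from row-like data, instead of
# declaring a full class up front just to hold a query result.
require 'ostruct'
row = OpenStruct.new(name: "Ada", id: 1)
```

Smalltalk offers the same kind of flexibility through its own mechanisms (compiling methods into live classes, `doesNotUnderstand:`, and so on); the point is that the malleability lives in the language, not in a bolted-on code generator.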
Scott Johnson reports on his initial experiences with Linux. And while they are mostly positive, he points out a few "little things" that are very important:
The lack of overall keyboard shortcuts is astonishing to me. Example -- in Mozilla ctrl+a doesn't select all text in the current field.
The lack of an equivalent to the Microsoft common dialogs for File Open, File Save, etc really does make a difference. You wouldn't think so but it does.
It's always the small, finishing touch sort of things that new users notice. Something to keep in mind.
Gordon Weakliem comments on simplicity in language syntax. He makes some interesting points - and coincidentally, this topic has come up on comp.lang.smalltalk recently.
More news from ESUG - this time on the keynote speakers for the upcoming ESUG conference:
The European Smalltalk User Group (ESUG) is very proud to announce the following keynote speakers for the next ESUG Smalltalk Conference in Bled, Slovenia - August 25-29
- Lars Bak
- John Brant
- Vassily Bykov
- Dan Ingalls
- Joseph Pelrine
In addition to the keynote speeches, the program consists of other presentations in the morning and in the beginning of the afternoon. These are followed by Talkussions, discussion sessions/workshops about the talks held that day, or on any other topic of interest. With the Talkussions we want to bring the people giving the talk and the people who crave more detailed information closer together, and foster ad-hoc workshops where the participants define the content. Hence this year's programme promises to be a healthy mix of talks and workshops.
We are always soliciting talks about your use of Smalltalk. Prospective speakers must clearly explain the format they would like to use (tutorial, experience report, workshop or demo). Each proposal must include a valid e-mail address and an abstract of around 200 words. All proposals have to be sent to Roel Wuyts AND Stephane Ducasse, in plain ASCII pasted in the body of the e-mail - NOT attached. Hot topics include, but are not limited to:
- web service applications using Smalltalk
- experience using eXtreme Programming
- distributed applications
- new Smalltalk implementations and optimizations
- Smalltalk Development Tools
- Language improvements or enhancements
Exhibitors or sponsors should directly contact the ESUG board for more information.
I got this from an ESUG board member:
Call for proposals for hosting the ESUG Smalltalk Event 2004
ESUG is a non-profit organization which was founded in 1991. Its goal is promoting object technology within the context of the Smalltalk programming language. To achieve this goal, ESUG undertakes many activities, such as:
- organizing conferences and workshops that allow users and vendors and also industry people and academics to meet
- providing free teaching materials and books to students and universities
- supporting local Smalltalk users groups and helping establish new ones
ESUG is looking for local organizers for hosting its event for year 2004. This edition of the event should take place the last week of August 2004 (i.e. from Saturday August 21st to Friday August 27th).
We (the ESUG board) would like to make a decision in August 2003, during the forthcoming conference in Slovenia. We would like to meet with potential organizers there to discuss their proposals. This will also be an opportunity for the selected organizers to gain experience from the preceding local organizers.
Please send your proposal to us by August 20th 2003. Your proposal should include three names and email addresses for the local organization contact people (3 persons at least), and give information about location, hosting, etc.
- At Essen and Douai we got around 100 participants
- We plan to have (again) a student volunteer program to help local organizers to take care of the coffee break, material, badges, projectors, etc
- When you propose to organize ESUG, you should consider that we would like to host CampSmalltalk too. For this purpose we will need one or two amphitheaters and a room with network access.
- Ideally, we would like to be able to have everybody in a single location
We hope to hear from you soon, and that we will have a great place for ESUG 2004.
The ESUG board
"Better Than Open Source? An Independent Look At the Java Community ProcessSM Services"
I've seen a number of articles recently on Silicon Valley, and whether or not it's still the place for software ventures. In that vein, here's a story on 3Com leaving the valley - spotted via Critical Section:
Will the last person/company to leave Silicon Valley turn out the lights? Now it is 3Com which is moving; continuing the string of companies which have fallen on hard times after buying the rights to name a stadium. (Candlestick Park it will always be...) The key fact: "[3Com CEO Bruce] Claflin has roots in Massachusetts."
I have been wondering for awhile now when the high costs of California would start overcoming the network effects of all the firms/people there. Is this all anecdotal, or is there something afoot here?
There is one decision organizations won't have to rush -- the move to .Net. Microsoft reversed its position that companies should rework everything for the managed application framework. The .Net Framework is now just part of the Windows platform. It is installed with Windows Server 2003, but it does not replace the various frameworks (like Win32 and COM+) that were used to create Windows NT and Windows 2000 software.
Nor has Microsoft reserved Windows Server 2003's performance benefits solely for .Net software. Windows is a true platform, with an application layer that's acutely aware of the specific OS. But Microsoft has proved with Windows Server 2003 that it can dramatically rework its underlying OS without disrupting existing application-layer software.
Hmmm. So does this mean that the army of VB programmers is pushing back on .NET?
There's been a lot of posting going on about static and dynamic typing recently - and today brings two more posts. This one from Bruce Eckel has a light bulb moment similar to this post from Bob Martin:
If it's not tested, it's broken.
That is to say, if a program compiles in a strong, statically typed language, it just means that it has passed some tests. It means that the syntax is guaranteed to be correct (Python checks syntax at compile time, as well. It just doesn't have as many syntax constraints). But there's no guarantee of correctness just because the compiler passes your code. If your code seems to run, that's also no guarantee of correctness.
The only guarantee of correctness, regardless of whether your language is strongly or weakly typed, is whether it passes all the tests that define the correctness of your program. And you have to write some of those tests yourself. These, of course, are unit tests. In Thinking in Java, 3rd Edition, we filled the book with unit tests, and they paid off over and over again. Once you become "test infected," you can't go back.
There's a lot more in that post - read the Python example, which is what Bruce uses to draw his conclusions. The bottom line is, the sorts of mistakes that static typing protects you from are rare, and the costs are much higher than the benefits (as I've posted before). For the opposing viewpoint, have a look at the Fishbowl, where we find this:
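A tiny Ruby illustration of the point (mine, not Eckel's Python example): both versions below are equally "type correct" - numbers in, number out - so any static type checker would pass either one. Only a test tells them apart.

```ruby
# Type-correct but wrong: the types all line up, so a compiler is happy.
def area_buggy(width, height)
  width + height   # bug: should be multiplication
end

# The correct version - indistinguishable from the buggy one, typewise.
def area(width, height)
  width * height
end

# The test, not the type system, is what defines correctness here:
def area_correct?(fn)
  fn.call(3, 4) == 12
end
```

The class of mistake the compiler catches (passing a String where a number was expected) is exactly the class that a test like this catches for free along the way.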
Firstly, I agree with Bruce Eckel. Static typing is a form of testing. As a form of testing, it's particularly restrictive on the programmer, and forces the programmer to test all sorts of things they probably shouldn't have to: remembering the unit testing adage that you should only test those things that could possibly break.
There are difficulties, however, with going from that premise, to the conclusion that testing can give you the same benefits as strong typing, but without the disadvantages. The difficulties lie in the difference between testing-through-static-typing, and testing-through-writing-tests.
My statement is that static typing provides few tests worth anything by itself. His defense of static typing comes here:
The other reason I tend to steer towards statically typed languages for my own projects puts me in agreement with Carlos. In a dynamically typed program, it's easy for a human to tell what type something is likely to be, but no way for a machine to say for sure what type something is. Thus searches, code-assistance and refactoring tools for dynamically typed languages must, at some point, guess. These are all tools I rely on frequently, and want to work with as little of my interference as possible.
I want to work with those tools as well. And guess what? I do! Losing static typing does not mean losing those tools.
I'm aware the Refactoring Browser originated in Smalltalk. What I fail to understand is how truly automatic refactoring is possible when types are indeterminate.
This isn't really an argument against, so much as it's a simple lack of experience with Smalltalk tools. At a talk a few weeks ago, I refactored all references to '+' to 'fred:'. This took a while (there are a lot of references) - but it worked just fine. There's a reason these tools originated in Smalltalk - it was easier to build them. Why? Consider a compiler. In the course of compiling a program, it creates loads of meta information about the code. What happens to this meta information? It gets thrown away. A Smalltalk system is one in which none of the meta information is thrown away.
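You can get a rough feel for this kind of live rename even in Ruby (my sketch, not the refactoring from the talk): because the running system keeps its meta information, a selector can be renamed on a live class, and senders simply use the new name.

```ruby
# A toy value class with a '+' selector.
class Money
  attr_reader :cents
  def initialize(cents)
    @cents = cents
  end
  def +(other)
    Money.new(cents + other.cents)
  end
end

# "Refactor" + to fred on the live class - no restart, no recompile of
# anything else in the system.
Money.class_eval do
  alias_method :fred, :+
  remove_method :+
end

total = Money.new(100).fred(Money.new(50))
```

A real Refactoring Browser does far more (rewriting all the sending sites in source, safely), but the enabler is the same: the meta information about classes and methods is a live, queryable part of the system.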
Being able to discover precisely where (or if) a type or method is referenced is invaluable. A text-search can help, but you must sift through the false-positives yourself. This requires a certain familiarity with the code, and as the code-base gets bigger (or your familiarity with it wanes for some other reason), that sifting takes longer and longer.
I rarely find this to be a problem. Smalltalk can find all the references (senders, implementors, references, restricted to a hierarchy or not). Using a pattern of naming with intent, I find that I usually have very few false positives to look at. If you name all your methods after commonly used system methods, sure, you'll have sifting. But that's bad practice.
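A loose Ruby analogue (my sketch) of Smalltalk's "implementors of": ask the live system which classes define a selector, instead of grepping source text and sifting the false positives by hand.

```ruby
# Two hypothetical classes with distinctly named selectors.
class Invoice
  def post_to_ledger; end
end

class Receipt
  def print_copy; end
end

# Walk every loaded class and collect the ones that implement the
# selector directly (not via inheritance).
def implementors_of(selector)
  ObjectSpace.each_object(Class).select do |klass|
    klass.instance_methods(false).include?(selector)
  end
end
```

With intent-revealing names like these, the query comes back with exactly the classes you meant - the "sifting" only appears when methods are named after commonly used system selectors.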
I work on my own, personal projects maybe one or two days a week. I tend to have four or five hanging around, so some will go months without me looking at them. When I return to a project after that amount of time, the information that the IDE can glean from the type system is invaluable.
On the other hand, my couple of Ruby projects languish far longer, because it ends up being a lot harder for me to pick them up after a long absence, because I have forgotten all the type information that is implicit in the program.
The IDE's understanding of types can also cause it to save me at least as many keystrokes as the type information causes me to endure. For example, when auto-completing a method, Eclipse will check the local scope for objects of the same type as the arguments, and include them in the completion. Similar guesswork is performed when using macros to generate loops. IDEA has similar features. It will even recommend simple variable names for me based on their type.
It's also amazing how quickly you can remember the workings of a half-forgotten API with a sketchy glance at the documentation and an IDE with type-informative code-completion.
Type information helps recall API's? How? Are the class and method names so generic as to be meaningless? If that's the case, how does an int declaration help out? I simply don't get this argument. Code that does not use meaningful names is hard to figure out in any language, and it's independent of whether there's manifest typing information present. Well formed class libraries are easy to read, poorly formed ones aren't - period.
Only 4 hours and change to a new Buffy. Tomorrow night's Angel wrap up should be interesting as well, and we'll see if my linkage theory holds water or not. 24 is on tonight as well, but that one has been wandering into tinfoil hat territory, IMHO....
For some reason I had a distinct lack of motivation this morning - so I took a break and watched one of the Band of Brothers DVD's. That made me feel better, so now I'm a little more charged up and ready to look at blogging API's.
Clarence Westberg wonders how many times he needs to pay for XP. Me too - eventually, this notebook will be history. Will I have to pay for a new copy of the OS?
Via Thomas Bray I came across this article by Paul Graham. It's a long read, but absolutely worth the time. I can't possibly do justice to the entire article, but I'll comment anyway; it's what I do :)
What hackers and painters have in common is that they're both makers. Along with composers, architects, and writers, what hackers and painters are trying to do is make good things. They're not doing research per se, though if in the course of trying to make good things they discover some new technique, so much the better.
This is an inchoate thought I've had about software for a long while - Graham manages to express it more clearly than I ever have. It's not just this paragraph - he explains this very thoroughly in the article.
For example, I was taught in college that one ought to figure out a program completely on paper before even going near a computer. I found that I did not program this way. I found that I liked to program sitting in front of a computer, not a piece of paper. Worse still, instead of patiently writing out a complete program and assuring myself it was correct, I tended to just spew out code that was hopelessly broken, and gradually beat it into shape. Debugging, I was taught, was a kind of final pass where you caught typos and oversights. The way I worked, it seemed like programming consisted of debugging.
For a long time I felt bad about this, just as I once felt bad that I didn't hold my pencil the way they taught me to in elementary school. If I had only looked over at the other makers, the painters or the architects, I would have realized that there was a name for what I was doing: sketching. As far as I can tell, the way they taught me to program in college was all wrong. You should figure out programs as you're writing them, just as writers and painters and architects do.
I agree with this quite a bit, and it's one of the areas where I depart somewhat from XP. I'm often not entirely certain what I want my code to do - so much so that I couldn't write a clear test for it if I wanted to. A large body of my code starts as half thought out workspace scripts, with frequent trips through the debugger - mostly because it's easier to just look at the objects than to try and imagine what state they are in. This is one of the reasons that the popular notion of avoiding the debugger drives me nuts. I do a lot of my best work in the debugger! From workspace scripts I move slowly to some kind of object model - and not necessarily the most well thought out one. In BottomFeeder, entire subsystems have been refactored more than once. I don't think I would have arrived at the current iteration by thinking about it more up front either - I had to iterate my way towards it.
I think this next thought is a brilliant insight into large company development:
If you want to make money at some point, remember this, because this is one of the reasons startups win. Big companies want to decrease the standard deviation of design outcomes because they want to avoid disasters. But when you damp oscillations, you lose the high points as well as the low. This is not a problem for big companies, because they don't win by making great products. Big companies win by sucking less than other big companies.
I've often said that somewhere around the $50 million revenue point, ad-hocery gets replaced by formalism - i.e., at that point (roughly) you need procedures. I wonder if it's also the point where less creativity starts happening? There's a balance here - you have to build something other people want, or there's very little point. One of the risks of not injecting outside influence (sales, marketing, product management) into development is that the engineers will wander off the reservation completely. Of course, that's why he talks about the 'day job' - he's convinced that the artists among the developers won't be able to completely express themselves in paid work.
In any case, it's a provocative, interesting read. I've barely scratched the surface of what he's said here - go read it yourself.