I know that this has been commented on by other people, but it's an interesting piece of "police action" on the internet - Google was irritated enough by the pagerank scamming that Syndic8 was up to that they've removed them from their index. Go ahead - Google for Syndic8. Notice how they don't come up? Bill Kearney, one of the guys behind the site, has managed to get them to appear on page one (down a few links) by redirecting from his domain, but wow - that's a penalty. The take-away from this? Don't go wild with your SEO tricks if you want to be found.
Sci Fi Wire reports that Carnivale is gone:
HBO confirmed that it is canceling Carnivale, its Depression-era supernatural drama, after two seasons, Variety reported. HBO had produced a total of 24 episodes in the show's run.
It was a pretty decent show - I liked it well enough, my wife loved it. I wish they had just ended it instead of giving it the ambiguous send-off they did. Compare the ending of the show (where the Preacher was being brought back by Sofie) with the way "Buffy" always wrapped a season, or with the way "Veronica Mars" ended last night. In both cases, a season wrap could stand as a series wrap if the show bit the dust.
Looks like I'll be storing up posts for later again - I'm attending a seminar on Open Source (from a legal/licensing standpoint) in Tysons Corner, VA. The seminar is being put on by LSI (Law Seminars International). Why am I attending this, you ask? Well, we (the Cincom Smalltalk team) get asked about open source with respect to our product quite frequently. We don't have any plans to go that way at present - but I'm always on the lookout for more and better information. First up - Rick Statile of RedHat, on the GPL and the LGPL. Looks like Rick is a General Counsel for RedHat - his background is in M&A, where OSS license issues can crop up in interesting ways.
- Cover the GPL and the LGPL
- History of the GPL
- GPL Themes and Issues
- Discussion of the text of the GPL
- LGPL differences
This is looking at the GPL (and LGPL) from the legal side - and from that standpoint, the GPL is short - only 3000 words (as opposed to many commercial licenses - which are often much longer). Interesting stats here - of the (at the time the stats were gathered) projects on SourceForge:
- 63,094 projects
- 43,578 under the GPL
- 7,095 under the LGPL
So roughly 80% of the SF projects are under the GPL or the LGPL. His take - the license reads more like a manifesto or political document than like a license. First written in 1989, version 2 in 1991 - never challenged in court. Inspired by Stallman's experience with EMACS authorship.
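That 80% figure checks out - here's the back-of-the-envelope arithmetic (using the project counts as reported in the talk) in Python:

```python
# SourceForge project counts, as quoted in the talk.
total_projects = 63094
gpl_projects = 43578
lgpl_projects = 7095

# Share of projects under either the GPL or the LGPL.
share = (gpl_projects + lgpl_projects) / total_projects
print(f"GPL + LGPL share: {share:.1%}")  # GPL + LGPL share: 80.3%
```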
GPL Themes and Issues:
- "Free"- gratis vs. libre
- "Viral" nature of the GPL - what does this mean?
- Is it a contract or a naked copyright license?
- What about software patents and the GPL?
- Revising/future versions
- Compatibility with other OSS licenses
- Warranties/Limitations of Liability
The preamble is like a manifesto, although it's not "formally" part of the license itself. It's not at all clear whether a court would take it into account when interpreting the license. Rick believes that the preamble would be taken into account, since a court may not find the actual contents clear enough. There's also Stallman's public comments and the FSF website FAQs on it.
The outline of the GPL - Rick states that it's clear here that the license was not written by a lawyer. It's also not necessarily congruent with copyright law (especially with respect to derivative works) - but this license was written before a lot of current jurisprudence was determined. Trademarks are completely left out of the license (example - redistribution of RedHat Linux involves stripping the trademarks out).
Section 1: covers redistribution rights. You have to include the GPL; note the absence of a warranty.
Section 2: covers modification/derivative work. This is where the "viral" nature comes in. Defines derivative works very broadly - any inclusion of any GPL code potentially makes the entire thing fall under the GPL. Rick: "Uses Copyright law against itself" - forces an author to keep their programs open. It's explicitly not a public domain thing - copyright is still in force.
Section 3: Specifies that you have to either include source, or make it available on request at nominal (i.e., only shipping) costs. Clearly written pre-internet (no cost of distro).
Here's the kicker - the viral nature of the license can open software that you would like to keep proprietary. Running a proprietary application on a GPL OS (i.e., Linux) is ok. Shipping on the same CD is fine as well, and does not open it. On the other hand, including GPL software with yours can open it. Question: "What do you mean by including?" So loading GPL code into an application may well infect it - another speaker is going to cover this later.
Rick's take - he thinks that it's unlikely that a court would force proprietary code to be opened (even if distributed) - he instead thinks that there might be copyright infringement, and damages might be assessed - and an injunction would forbid further distribution. This is clearly a point that much of the audience isn't in agreement with. In fact, one of the audience members pointed out that in Germany, the GPL has been enforced such that shipping software has been pushed under the GPL (under mutual agreement though - so still not clear).
The key reason that Rick believes this is that the GPL is a license, not a contract. There's no contract to specify specific damages, which is why the theory is that you would get an injunction preventing further shipment without a contract or removal. Big question here to define the difference between "shipping" and "distributing" - of concern to government agencies (if agency 1 gives code including GPL to agency 2, is that shipping or distributing?). The theory is, certainly within an agency it's not shipping (i.e., does not trigger the viral nature). People differ when you cross agency lines. Which raises the same question in my mind about distribution between business units of a firm, depending on the legal structure of the firm. Hmmm.
My take here - a court is going to have a lot of fun with this when it finally gets tested :)
The Acceptance clause (Section 5) - modifying or distributing implies acceptance (which is why Rick says it's a license and not a contract). The really tricky thing is patents - when the license was first written, Microsoft had no software patents. Now, they've exploded. There are firms (insurance firms wanting to make money) claiming that Linux infringes 283 patents, including 27 held by MS (but a lot are held by Linux friendly companies like IBM) - so Rick's thought is that we have what amounts to MAD protection here - MS won't launch because it would result in a large IBM response. Joy :)
How does it differ from the LGPL? The LGPL attempts to address the "viral" nature of the license in the face of link libraries. The consensus seems to be that static links trigger, while dynamic links don't. However, Rick says: Consult a lawyer.
This talk looks at a variety of Open Source licenses - given by David Teske, a lawyer who works in this area. Heh - he starts out by welcoming us to the "Less Stressful" part of the program. He calls the GPL the "800 lb Gorilla" rather than the "grandaddy" of Open Source licenses. It absorbs almost all discussion of OSS.
The basic problem behind this morning's earlier conversations - the mismatch between standard copyright law and software. We simply haven't resolved this issue. This talk will cover 4 licenses - Mozilla (essentially the same as Sun's), BSD, Sleepycat, and Apache.
- Non-restrictive - BSD, SleepyCat, and Apache
- Restrictive - MPL
The BSD license was the first of these on the ground (predating even the GPL). Its purpose was to enhance collaboration while protecting the University (Berkeley). The BSD is purely defensive - it doesn't "stick" to the code. What the BSD impacts are source and binary redistribution, under a few conditions.
The BSD grants use "as-is" (i.e., like you found/developed it yourself). No express or implied warranties, no liabilities for damages under any theory of use. There are a few obligations - when you redistribute the source/binary, you must include the original copyright notice, and you can't use the contributor's name for advertising.
The SleepyCat license is a tweaked BSD license. It's nearly identical to the BSD - in the disclaimer, it specifically lists non-infringement as a warranty not provided. Section 3 adds some uncertainty:
- Some unclear language on how you must include source.
- The source must be freely redistributable under "reasonable conditions" (meaning what?)
This is an example of how a forked license often creates a less useful dead end. Next - the MPL, which was derived out of a desire to move between the openness of the BSD and the restrictions of the GPL. The main thing - covered code has to go under the MPL, so unlike the BSD, code under the MPL stays under the MPL. What rights does it grant?
- Perform (not really applicable to software, but included anyway).
- Use and sub-license
The MPL tries to define a modification more cleanly than the GPL. It's any "addition or deletion from the substance or structure". It's any new file that includes any part of the covered code. [ed] - again, very biased towards the world Smalltalk doesn't live in - I'd be interested to see how a new parcel in VW with an override of an existing method/class in the system gets defined here...
The original authors and each later contributor grant a broad patent license to each licensee in any patent rights it can license that would otherwise be infringed by that code inserted by the licensor. The business needs of Netscape (original developer of the MPL) show through here. The MPL also includes explicit terms for the removal of license rights. It's a pretty broad attempt to prevent patent suits by licensees. The MPL also - like the GPL - prevents the licensor from changing the license terms in an additional document. On the other hand, release under different terms entirely is allowed.
Finally, the Apache License (1.0/1.1) - like the BSD. The ASL 2.0 combines aspects of the BSD, MPL, and GPL. The ASL 2.0 grants copyright license as per the MPL, in the same order. It then defines derivative works, more like the GPL. Patent license is like the MPL, somewhat less expansive. The ASL allows derived works to be distributed under a different license - provided that you are not breaching the ASL in your own use. It also explicitly allows offering warranties and accepting liability, as well as charging for support (etc).
Peter Moldave (lawyer) is going to cover a few more licenses - Artistic, Intel, MIT, NASA, PHP. These cover 4% of the licenses used on SourceForge (as opposed to 80% for the GPL/LGPL).
MIT, Intel, PHP - like BSD
Artistic, NASA - unique, some MPL like properties
All of these are OSI compliant (Artistic 2.0 is not listed). The NASA one merely adds language about export law (US). The MIT, Intel, and Artistic 2.0 are "GPL compatible". PHP is "free" but not "GPL compatible". Compatible here means that code under a GPL compatible license may be used in a GPL licensed project.
MIT, Intel, PHP - all simple. The Artistic and NASA licenses are complicated. MIT, Intel, PHP - no substantial restrictions imposed. Artistic - modified source need not be distributed if standard source is available. NASA - restrictive, modified source must be distributed. With all of these licenses, larger works are allowable - no restrictions.
MIT, Intel, PHP - no pricing constraints. Artistic:
- Can't charge for the code
- Can only charge a reasonable copying fee
- Can charge for support
- Can charge for larger work
NASA - source code must be freely available, not clear beyond that. Then there's patent treatments - other than the NASA license, all of these are silent. The NASA license has a specific grant of patent rights; the treatment of combination patents and modifications is specified. Peter's take here - he's yet to see an OSS license that goes over patent rights in a helpful way. Only the NASA license requires any kind of warranty - the rest simply disclaim. What about output use (i.e., is the output of a compiler a "derivative work"? What about a parser?). The Artistic and NASA licenses attempt to address this, but get too technically specific (Artistic speaks explicitly about C and Perl routines - go figure).
All of these provide specifics on internal/external use, and opening of derived works. What about utility (i.e., do we actually need them?) - MIT, PHP, Intel - no real difference from the BSD. Artistic: 1.0 Unclear, and the V2.0 gets a lot closer to the GPL or perhaps the MPL. NASA - long and complex.
Larry Rosen is going to talk about the Academic Free License and the Open Software License:
- AFL: not reciprocal (like the BSD)
- OSL: reciprocal (like the GPL)
The idea behind these two is to create two licenses that cover the ground between the GPL on the one hand, and the BSD on the other. The OSL lists all the provisions in section 106 of the (US) Copyright Act. The OSL contains the reciprocal (viral) clause. The AFL does not include that clause. The idea here is to provide two licenses that are mostly the same, but allow for that difference.
These licenses don't define "Derivative Work", leaving that as an exercise to the courts and lawyers. These licenses try to spell out that you get patent rights based on those that are embodied in the original work, and that they carry through to derivative works. The OSL also tries to nail down the concept of "distribution" by defining an "External Deployment" as any use by someone other than you. The AFL doesn't deal with this, as it doesn't care about reciprocity. Note that this definition calls ASP usage a "distribution".
So I asked - what if you offered a virtual Linux box as part of a grid service? That counts as a distribution in this sense, according to Larry. The basic issue is that the Copyright Act doesn't define distribution (not surprisingly - the world was different then).
What about disclaimer of warranty? The OSL and AFL warrants that the licensor has the right to grant you a license. This eliminates the "where did it come from?" problem that people worry about with OSS. This puts the risk back on the licensor, as they are warranting that they have the right to distribute. The license also states that any bringing of a patent claim (against the original work) by the licensee against the licensor or any other licensee will terminate the license (and thus the granted rights).
Heather J. Meeker on the CPL. It was approved in 2001 by the OSI, and is a lot like the MPL. It was written to generalize the terms so that any OSS originator (i.e., non-IBM) could use it. It came from the earlier IPL. Like the MPL, these licenses were intended to be accessible to lawyers and corporations. It is a viral license - version 0.5 was developed for Eclipse, the current version is 1.0.
The CPL distinguishes between original contributors and subsequent contributors. It also defines recipients. What you end up with is a stream of licensors (down the chain of contributors) - it's basically another way of saying sub-licensor, more or less (question from Larry Rosen). The CPL is explicit about copyright and patent licenses. The language I'm looking at on the screen was definitely written by a corporate lawyer - I nearly fell asleep just reading the first sentence.
Interesting disclaimer on infringement - the CPL puts the onus on the person wishing to redistribute to get patent rights if they are required. Definitely corporate friendly :) Another thing - subject to indemnification of contributors, distributors may offer different business terms to licensees. So, you can offer the software commercially for money. And another hint that this is a corporate license - the license explicitly states that each party waives its rights to a jury trial in any resulting litigation.
Interesting kicker on this last bit - the federal government has issues with automatic license termination and decisions about default litigation terms. Commercial firms probably have specific contracts for federal agencies for those cases - but that makes open source terms really interesting.
This is an exploration of the various OSS licenses by Dov Scherzer (a lawyer in this field). The main thing - virtually no jurisprudence yet. Only one US case, some evolving ones in Germany. The US case (Progress Software vs. MySQL AB) was settled. To be clear, he explains that OSS is not:
- Proprietary software
- Public domain
Boy, there are a lot of interesting questions from the government people, and a bunch of assumptions based on regulations I'm not familiar with :) Apparently, some section of US code has something to do with public domain and government code, or people here think it does.
So what's the reason for the LGPL? It's intended to allow libraries that don't "infect" applications. The idea is that an LGPL library can be used with a proprietary program, and not "infect" it. Again, the whole issue revolves around the concept of linking, and the notions used in the LGPL and GPL make a lot of C/C++ assumptions. Gosh knows what this means in the world of languages like Smalltalk, Lisp, or even Java.
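To make the linking question concrete for a dynamic language - here's a minimal Python sketch (my own illustration, not anything from the talk) using `ctypes` to load a shared library at run time. Nothing from the library is copied into the program, which is the kind of late binding that the LGPL's static/dynamic distinction maps onto poorly:

```python
import ctypes
import ctypes.util

# Locate the C runtime by name at run time (Unix-ish systems).
# The library stays a separate file; only the name is in our code.
libc_name = ctypes.util.find_library("c")  # e.g. "libc.so.6" on Linux
libc = ctypes.CDLL(libc_name)  # the "dynamic link" happens here

# Resolve and call a symbol from the dynamically loaded library.
libc.strlen.restype = ctypes.c_size_t
n = libc.strlen(b"hello")
print(n)  # 5
```

Is that "linking" in the LGPL's sense? The binding happens even later than a dynamically linked C executable's - which is exactly the sort of question the license's C-centric wording leaves open.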
Is the GPL a binding contract? It's "clickwrap", not an actual contract. It's included with the sources, possibly as a separate file. On many sites, there's no explicit mention of the GPL other than in the source listings. So is it binding? Current jurisprudence states that Clickwrap, Shrinkwrap, and Browsewrap licenses are valid only if the user has made some kind of affirmative act to agree to the terms.
So what do the license terms that claim to push any derived work under the GPL mean?
- GPL vs. the Copyright act
- Hypo - what if I take 2 GPL lines and stuff them into a huge app?
- The GPL says boom - derivative work (i.e., it's been opened)
- The copyright act differs (does not call it a derivative work).
A question here - what does copyright's "fair use" mean here? 2 lines? 100 lines? What? Under copyright law, it's not a derivative work unless it has substantially copied from a prior work. What this means in terms of source code is not clear (at least to me :) ). Current case work is pretty much on artistic work (art, text, music). Is using a small module different? What makes two pieces of code combined? Same storage media, but separated? Not combined. Same executable? Combined. [ed] - but even there, it's not clear - what about a Java JAR file or a Smalltalk parcel? The "same executable" standard seems to imply a problem here. Here's an interesting example - he addresses plugins, and states that it depends on how the plugin is used/invoked.
Again, I'd say that some court is going to have a ball with this some day. Another example from the slide deck: Load a GPL library into a non-GPL code base. Derive a subclass of one of the loaded classes - according to the FSF FAQ, that opens the entire application. The bottom line according to Dov: at present, you need both a lawyer and a technical expert to decipher the GPL FAQs. At present, none of this has been tested in court. We just don't know.
Here's the post lunch slump, but to keep us awake we have Douglas E. Phillips on the GPL and enforcement. Interestingly, there's been a recent filing challenging the GPL on anti-trust grounds in Indiana - so there should be a lot to talk about here. Douglas says he's going to focus specifically on the "viral" aspects of the license.
Eben Moglen of the FSF calls any questioning of the GPL's enforceability FUD, and blames Microsoft. [ed] - given the "easy target" nature of MS in the OSS space, that seems mighty convenient. Moglen claims that it's enforceable because he's been enforcing it in private settlements (which says nothing about how it would be taken in court). Asking in a public forum tends to attract immediate attacks on a personal level (seen in various USENET forums).
In 2003, the FSF pursued 20-30 enforcement actions in private (as reported in Forbes). The FSF has been engaged by companies wanting enforcement help (MySQL). There haven't been many decisions yet - and Douglas says that this is a lot like the way the BSA has taken to challenging alleged violations.
A year ago, there was a CA vs. Quest Software suit that seemed to assume the GPL was enforceable. In July 2004, there was a decision in Munich, Germany, which enjoined distribution without conformance with the license. That's still not a US-based decision though, so it's not resolved in the US.
In the US, there's the infamous SCO case, and more recently - 2005 - there's a suit alleging that the GPL is a price fixing scheme hurting programmers. So what grounds are there to challenge?
- Constitutional violation
- Pre-empted by copyright law
- Violates export control laws
- Never been tested in court
- Fails under the UCC
- Fails under common law
- Violates anti-trust law
- Selectively enforced by FSF
- Fails as a copyright license
- Too vague
- Not effective as shrink/click wrap
License vs. Contracts: Moglen states that a license is permission to use property, while a contract is an exchange of obligations. Moglen states that the licensor has no obligations, so it's a pure copyright license (i.e., no contract necessary). Thus, shrinkwrap license enforcement issues are avoided. [ed] This seems shaky to me...
What about the courts? We can reach back to General Talking Pictures Corp. vs. Western Electric Co. (1938), which allows the patentee to grant a license. There's a 1995 case (McCoy vs. Mitsubishi Cutlery, Inc.) stating that a license is in fact a contract. Muddies the waters, that's for sure.
What about revocation? Copyright law states that non-exclusive licenses are revocable. The FSF argues with that in its FAQs: "The public already has the right to use the program under the GPL, and this right cannot be withdrawn". Seems to be a conflict there.
Is it enforceable as a contract? Need assent from both parties first. The GPL itself does not purport to be a contract, and the FSF states that it's not. So - how can a contract be formed? The question is, does copyright pre-empt this? In 1996, the court rejected preemption in ProCD vs. Zeidenberg, stating that "rights created by contract are not equivalent to any exclusive rights within the general scope of copyright". Also - "Contracts, by contrast, generally affect only their parties; strangers may do as they please, so contracts do not create exclusive rights".
Then there's Alcatel USA vs. DGI Technologies (1999), where we learned that copyright law cannot be used to indirectly gain commercial control over products it has not copyrighted. Hmm - what does that say about the derivative/viral part of the GPL? The way the GPL is worded, a one line inclusion triggers derivation, and that goes well beyond current jurisprudence on copyrights.
Now we get to the part that the corporate members of the audience are interested in - how to make money off this stuff :) This is coming from Mark Webbink, Deputy General Counsel at RedHat.
As to "can you make money"? At this point, Mark popped up a slide showing RedHat revenue growth, which is now over $100M annually. Interestingly, their revenues started to trend up again in 2003 when they went to subscription licensing. In fact, 70% of the revenue is from subscriptions, 30% from services. Most of the subscription revenue is Enterprise based.
How you build a business model depends on which license (style) you choose - GPL or BSD. Taking the GPL first:
- Must include source
- All redistributed code must be GPL
- No restrictions on copy, modify (etc)
- No binary only (proprietary) code
- No per user (etc) fees
And the BSD:
- Does not require source
- No need to push everything under BSD
- May impose other conditions
- May be embedded in proprietary systems
- May charge license (etc) fees
Retail model - not used extensively anymore; RedHat, SuSE, (etc.) have moved off. Did not provide scalable revenue, did not appeal to enterprise buyers.
The Loss Leader model: Early RedHat distro model to get mindshare, and now used by IBM (Eclipse) and other large vendors. Mostly used by OEMs. What about Dual Licensing? That's how MySQL used to do business (GPL or binary only). The license varies at licensee's choice. Allows a proprietary license for binary only uses. Experience? Creates market confusion (which license do I want/need? Why do I need to pay?). SleepyCat software is seeing this now.
Where is it at now? The bundling model (i.e., services and subscriptions). Have to be creative about what you are charging for based on the license. You can charge for warranties, or "other" services so long as you don't interfere with downstream rights. So you create a bundle and charge a subscription license for the convenience of bundling. You can bundle technology instead of a service: TiVo, for instance. Typically in embedded apps.
Bundling with patents? Not really likely with respect to the GPL. With BSD, possible. The GPL, with its downstream obligations, makes it unlikely. What about a membership model? Mandrake used this model when retail wasn't working for them - somewhat similar to public radio funding.
What about the "Free Riders"? This is inevitable with Open Source models. You get non-developing distributors, non-contributing consultants - and not all enhancements go back to the community.
Now we have Ed Walsh on patents and Open Source. He's an IP lawyer in Boston. This is the "what's the catch" talk of the day. He defines the FSF folks as in the ideologue camp. Then you have the people creating OSS, the developers. Next, you have the people distributing OSS software (bundlers, like RedHat) - you might also classify them as integrators. Finally, there are the users (and the employers of the users). So to answer a patent question about OSS, you have to figure out where people sit (to find out where they stand).
A separate group are patent "trolls" (i.e., the fine folks at Eolas - or, given this absurd filing, Jeff Bezos). So - ideologues, developers, distributors - all things being equal, they might prefer that there not be patents at all.
Integrators: Building products on base of hardware/software. Wants "base" products to be free of restrictions. May want to distribute a product with restrictions. Does not want costs, wants freedom to operate.
Users: Want freedom to operate. Want to keep using software, don't want costs - direct or indirect.
Freedom to operate - is OSS more risky? The main risk is infringement of IP (or at least the perceived risk of that). Many of the proprietary vendors hold lots of patents (IBM, MS, Sun, etc). There haven't been many infringement cases yet - the SCO case is a bad example of what could come about. [ed] - I'll note in passing that Jonathan Schwartz of Sun has said "I like IP" - which means that even the supposed friends of open systems could easily jump ship.
First rule of litigation: Sue someone solvent (i.e., follow the money). Many of the larger vendors are using patents as "trading cards" - the rest of the field has to "pay to play" (settlements). With widespread distribution of source, (alleged) infringement is easy to spot.
What about community countermeasures? There's a public patent foundation that challenges egregious patents. There's also the Open Source Law Center and the OSDL's legal defense fund. There's also infringement insurance and indemnities offered by vendors. As well, some of the OSS vendors (RedHat, for instance) are getting patents - potentially as a defensive measure (the MAD theory of patent acquisition). Many patents are bad simply because the PTO primarily looks for duplicative patents, not for prior art. This is a problem in a new field (like software). What about the "Patent Pledges" of some of the big vendors (IBM, Sun, MS)? They are claiming that they won't assert their patents against the OSS community - there are potential anti-trust limits on large scale agreements this way. Bearing in mind that a patent assertion typically costs $2M, and most OSS projects are (financially) poor, this can easily be taken as grandstanding.
The GPL has an implied patent license
- Uncertainty about what is licensed and to whom
- Your contributions, anything in the code, or anything in the code or later added to it?
- Your chain of title or the entire open source community?
What risks surround IP really depend on which OSS license you use. For instance - how do you manage patent cross-licenses and obligations under the GPL? How can you know who you aren't supposed to sue?
Ideologues don't like patents, never will. Distributors don't like patents, but are pragmatic. Integrators want it both ways, need a rational plan for their patents and those of others. Users don't want to deal with patents, but need a rational plan.
This presentation comes down from theory into practice - what issues are there in using OSS licensed software? Alfred Kellog is an IP lawyer who specializes in software technology issues at UBS.
First off, there's a compelling value proposition (free!). On the legal side, it's an untested frontier:
- no judicial opinion validating the concept
- virtually no judicial opinion interpreting the provisions of the various OSS licenses
- no warranties or proof of non-IP infringement
- Reciprocal (viral) aspect of some licenses creates risk of unintended IP loss
So how do you utilize OSS without exposing yourself to excessive risk? What about risks you don't know about? (i.e., employee downloads that you are unaware of). Some of the risks you have to mitigate:
- Support - will you need it? Where will you get it?
- Lack of Warranties - What's the likelihood of a problem?
- Infringement claims - What is the risk, how do you figure it out? What are the consequences of a bad event?
- Non-compliance with license restrictions - are these easily satisfied?
- Loss of owed IP due to reciprocal features - How does your use impact? What is the uncertainty?
- License invalidity - hard to tell
- Unauthorized downloads - is there a policy in place explaining rules? Are there technical blocks? Can there be? What about open source that piggybacks with commercial products?
- Unauthorized open sourcing of your proprietary code by developers within your organization - need well known policies so that open source releases are planned
A presentation from Karen Faulds Copenhaver of Black Duck Software. Interesting - she brought up Friedman's "The World Is Flat" and chapter 3 (on open source) straight off. I guess I need to buy the book.
Anecdote - story of a developer that a colleague had to fire, because he wasn't producing software, he was downloading various bits from the web and integrating them - thus increasing the risk of unintentional opening of code based on various OSS (and proprietary) licenses that may have walked in. To be fair, she points out that lawyers copy boilerplate all the time, so it's not an odd thing at all.
Heh - the idea is that our IP laws and licenses should transfer to China (after they translate the various licenses out there). Ironically, in "The Birth of the Modern", a book I'm reading now - one of the huge complaints that the UK publishing industry had against the US in the early 19th century was our (then) lack of concern for copyright law. The more things change...
So anyway, she's flogging compliance software that supposedly fingerprints open source software so that you can look up what you have versus what's "out there". I'd like to know how that works, and what assumptions it makes about the software methodology that the code comes from :)
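Purely my speculation about the mechanics (not Black Duck's actual method): one plausible approach is to hash normalized chunks of known open source code into an index, then look for overlap in a candidate code base. A toy sketch:

```python
import hashlib

def fingerprints(source: str) -> set:
    """Hash each normalized line; a real tool would use smarter chunking."""
    prints = set()
    for line in source.splitlines():
        norm = "".join(line.split())  # strip whitespace to dodge trivial edits
        if norm:
            prints.add(hashlib.sha1(norm.encode()).hexdigest())
    return prints

# Hypothetical index built from known open source code.
known = fingerprints("int main() {\n    return 0;\n}\n")

# Check a candidate file (reformatted, but the same code) for overlap.
candidate = fingerprints("int main()  {\nreturn 0;\n}\n")
overlap = len(known & candidate) / len(candidate)
print(f"{overlap:.0%} of candidate lines match the index")
```

Which hints at my earlier question: line-level hashing like this is blind to renamed variables or reordered code, so the assumptions baked into the normalization step matter a lot.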
Virtually all companies now work in a "mixed IP" environment. Software is being built ever higher on older layers of previous work. What are the compliance concerns?
- Absence of control over code available for download w/o charge
- Questions regarding code pedigree
- Assumption that licenses are unenforceable
She claims that "everyone" will start shipping source with everything. I think I'll introduce her to Apple and MS sometime :) In any event - the notion she's flogging is that you need to know what you can and can't do with the code combinations you have, which is a valid concern. As to whether this is a real concern? At least for public firms, you have the whole Sarbanes-Oxley madness, so it'll get attention. So - step one is assessing what you have, and then remediating based on what you found.
She's advocating starting small, with a pilot project - and carrying it through from start to finish. Avoid disrupting development, and anticipate employee concerns. For public companies - the ones with SarbOx issues - you need to worry about whistleblower implications. I'd add that this will be a bigger problem if - for whatever reason - you have a demoralized set of employees. For any remediation plan, you need to involve all the players (including developers), or it will get rejected out of hand.
The goal of all this? To understand what it is you are building/shipping, and making sure that any issues are identified early.
Ahh, everyone's favorite - the SCO suit. This talk is coming from David Bender, a well known attorney in this area of law. A short digression to set the table: Unix was born in 1969 at Bell Labs. It was widely licensed on nominal terms.
So on to SCO vs. IBM. Filed in 2003. Three complaints, and three sets of counter-claims. There's a motion pending for another complaint. Very heavily litigated - as of April 2005, 437 docket entries. A five week trial is set for November 2005. Plaintiff: SCO, which markets Unix software. Defendant: IBM, which markets a version of Unix called AIX. There are third party products - Linux in particular - that are Unix-like. The Linux claim is that this software was developed without relying on Unix code.
SCO alleges that IBM contributed a great deal of code derived from Unix software. The slide states that Linux is making inroads against MS, but the bigger issue, IMHO, is that it's making inroads against the various commercial Unix vendors, with SCO, at the low end, taking large scale damage. The most interesting thing to me, at this point - in my reading of the technical blogosphere, the SCO suit gets no respect (I certainly give it none, see above) - and is given no chance of success. The feel in this room is different. David points out that the biggest problem is that copyright law was written without software in mind, and it's fitting badly.
So how did the Unix assets convey? From AT&T to Novell to the Santa Cruz Operation to the SCO Group (note: there's dispute over what went where in some of those steps). SCO claims that IBM breached its Unix license agreement with SCO's predecessor, AT&T, by contributing derivative work to Linux. Their claim is that without these contributions, Linux would not be a viable competitor to SCO's Unix software (cue the loud guffaws from the tech blogosphere right there). They also claim copyright infringement, and claim that under the original AT&T agreement, they had the right to terminate IBM's license (which they did in 2003). Finally, they claimed unfair competition. SCO claims that IBM was well aware of these contracts and their meaning. SCO additionally claims that IBM intentionally harmed their business relationships by badmouthing them to shared partners.
What does SCO want? $1B in restitution, and an injunction. Clearly, IBM doesn't agree :) The crux, it seems to me, is that SCO accepted the GPL by modifying and distributing Linux products for years - a "sour grapes" argument, as IBM puts it. IBM states that SCO acquired the rights to Unix software in an attempt to unify Unix and Linux - and that when that failed, they turned to a strategy of litigation.
Originally, SCO tried to claim trade secrets infringement. When they couldn't identify any, they dropped that and started claiming that you need a SCO license to run Linux. IBM states that Linux developers will remove infringing code, if only SCO will identify it. SCO will not identify said infringement. So, IBM counter-claims that the AT&T/IBM agreement is breached by the attempted termination. They also make other claims under the Lanham Act, and claim that SCO has breached the GPL.
The original AT&T license looks confusing to me - it seems to state that IBM only had the right to use Unix and/or derivative works on internal systems and defined CPUs. If that's the case, how do they ship any derived work? What am I missing here? This whole thing gets into what does or doesn't constitute a "derived work". It's at this point that I can state that I'm glad I'm not a lawyer - this is making my head spin. There's more - the people who signed the original IBM/AT&T contract could be called to testify, and they could be asked to explain what they understood the term "derivative work" to mean at the time. On another note, in discovery, how does SCO identify supposedly infringing work?
SCO asserts that the added source is a derived work. Therefore, any of this code contributed to Linux constitutes a breach of the AT&T agreement - because, according to the original license, derived work could only be used for internal purposes on designated machines. Then there's the GPL's definition of a work based on the Program:
either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it...
Now David goes back to the Unix conveyances. There's no dispute that AT&T sold all rights to Novell. There's a huge dispute over what did or didn't go to the Santa Cruz Operation in 1995. So Novell sold everything in 1.1(a) to SCO, except for Schedule B:
- A "All rights and ownership of Unix"
- B "All Copyrights"
Ok, that's confusing :) There was a 1996 amendment - it revised (b) to exclude all copyrights except those owned by Novell and required by SCO. Gah! The original agreement between IBM and AT&T doesn't seem to grant irrevocable rights, but an amendment executed by IBM, Novell, and SCO states that on payment of considerations, they would get irrevocable rights.
SCO then moved on to sue Novell in 2004 for "slander of title to the Unix copyrights" - more or less, that Novell's claims about who owns what were damaging and made "with malice". Novell claims that the documents SCO relies on for transfer of copyright are not clear enough. There was a ruling last year that went Novell's way, but no ruling on whether a further amendment cures that. No decision yet - not clear whether it will impact the IBM suit.
Then, Red Hat sued SCO for creating FUD around Linux (and thus around their business). That was filed in fall 2003. Red Hat wants a declaratory judgment that it did not infringe or violate SCO's copyrights. The court has stayed the case pending resolution of the IBM case.
Then, SCO started suing end users. AutoZone was using SCO Unix, switched to Linux after SCO dropped support. SCO alleges that Linux is a derived work, and that AutoZone is infringing SCO's copyright. AutoZone tried to move it, failed, and this has been stayed pending IBM.
Then SCO sued Daimler Chrysler for breach of contract, because they switched from SCO Unix to Linux, which they claim is a derived work. Most of this one has been dismissed.
So - when the dust settles, what will the outcome be? Who wins? SCO? IBM? Linux backers? The proprietary Unix vendors? Who? David won't make a prediction at this point. He does think it's possible that SCO could lose on copyright but win on contract grounds.
It's definite - people who commute from where I live - Columbia MD - to Northern VA, where the seminar I was attending was - are nuts. There's no way I could handle that drive every day :) Anyway, if you are curious about the Open Source/legal issues that the LSI seminar covered, follow this link - I pushed them all into a single category.
It's time that people stopped pandering to Dave Winer's ego. The first thing to remember about Dave is that he's always reasonable, and it's the other guy who's wrong. Always. I mean, just look at his post, and compare it to Glenn's - which one sounds like a flaming egotist? The next thing to realize about Dave is his notion of rules - they're for other people, not for important, syndication-inventing, always-right guys like Dave. Nope, never. Follow the self centered rambling here, here, and again, here.
For an example of his vast technical acumen, go have a look at the MetaWebLog API. Go ahead, try to implement just from that page - I dare you. It's far past time to stop pandering to this self centered, egotistical, obnoxious buffoon. He likes to play at being "the reasonable guy" - but if, like me, you've ever had the misfortune to exchange email with him, that veneer comes off very quickly and is replaced with venom and disdain.
If you don't agree with him - technically, politically, whatever - you aren't just wrong, you're evil. That's fine in 7th grade, but in an adult, it's just pathetic.
Register for Smalltalk Solutions now, so you can attend great talks like this one:
Introspection simplifies versioning of serialized objects
Schwab, Wilhelm: University of Florida, Department of Anaesthesiology
Tuesday 11:15 am to 12 pm
Abstract: Nearly transparent object serialization and subsequent deserialization are services commonly provided in object oriented programming systems. Examples of systems providing serialization services include Java, the Microsoft Foundation Class Library, and most Smalltalk systems. In general, serialization frameworks recursively serialize nested objects and collections, and automatically resolve circular references. The more reflective the system, the less effort is required of the programmer.
These features make serialization frameworks a very attractive choice for persisting application data. In effect, they provide a file format with little if any effort on the part of the application programmer. However, there is a cost associated with loading serialized data from previous versions of an application.
Unfortunately, the lure of obtaining a file format with no effort can turn into a brake on future change. Programmers feel inhibited from improving their code, because doing so might require changes to the serialized format of their objects.
This paper seeks to remove the barriers to change that are ordinarily associated with serialization frameworks and versioning. It presents a programming tool and some patterns for exploiting a reflective development environment to simplify writing code to cope with version changes in serialized object data. The tool is specific to Dolphin Smalltalk, but, could be adapted to other Smalltalks and to other reflective languages.
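To make the idea concrete, here's a minimal sketch of the general pattern the abstract describes, written in Python rather than Dolphin Smalltalk, with invented names (`Point`, `UPGRADERS`) - serialized objects carry a version number, registered upgrade functions migrate old data forward one version at a time, and reflection restores the attributes without hand-written per-field loading code:

```python
import json

CURRENT_VERSION = 2

def upgrade_1_to_2(data):
    # 'label' was added in version 2; give old data a default.
    data["label"] = ""
    return data

# Maps version N to a function producing version N+1 data.
UPGRADERS = {1: upgrade_1_to_2}

class Point:
    def __init__(self, x=0, y=0, label=""):
        self.x, self.y, self.label = x, y, label

    def serialize(self):
        return json.dumps({"version": CURRENT_VERSION, **vars(self)})

    @classmethod
    def deserialize(cls, text):
        data = json.loads(text)
        version = data.pop("version")
        # Migrate old data forward, one version at a time.
        while version < CURRENT_VERSION:
            data = UPGRADERS[version](data)
            version += 1
        obj = cls()
        for name, value in data.items():
            setattr(obj, name, value)   # reflective restore
        return obj
```

With this in place, a version 1 blob like `{"version": 1, "x": 3, "y": 4}` loads cleanly into the current class, picking up a default `label` along the way - which is exactly the freedom to change the class that the paper is after.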
Bio: Dr. Schwab received the PhD degree for applications of image processing to problems in experimental stress analysis. For the past ten years, he has been largely occupied with medical applications of computers, including distributed object systems, clinician entry systems, and automated intraoperative data capture. He is firmly convinced that Smalltalk has been essential to successful collaboration with physicians and to building systems that are sufficiently robust for use by physicians at the point of patient care.
Much of Dr. Schwab's leisure time is devoted to drawing and, more recently, painting in oils. He remains determined to master the basics of automobile repair and to build a modest piano repertoire.
See you in Orlando!
I finally got a contractor to come out and take a look at the entryway to my house. The wood around the doorway is rotting out, and it looks like the entire thing - door and all - will need to be replaced. The door has issues with sealing anyway, so a full replacement probably isn't a bad idea. Still, I wasn't really mentally prepared for a $2k estimate :(
I'm getting another estimate sometime in the next few days, so we'll see what that looks like. I've also called my builder - even though we are out of warranty, I think the problem is their fault - the way water drains off the roof, this was bound to happen. That probably won't go anywhere, but it can't hurt to ask.
In the meantime, I'm waiting for a tech from Comcast so that I can see if my cable modem is the issue. I'm getting connection drops of a few seconds to a few minutes duration about once an hour, and it's really getting irritating.
Update: The source of the problem has been identified - the line running from the cable box (outside) to my house (under the driveway) was bad - it was dropping signal before it ever got to the house. So, I now have a lovely orange line draped across my yard :) Within 2 weeks, Comcast should be out to replace the line properly. Apparently, they don't need to dig under the driveway at all - they have some kind of device that shoots a line right under from one post hole to another. Sounds cool - I'll post on it if I'm home when it happens.
Another Update: I spoke too soon. The connection was stable for a few hours, and now it's back to the micro-outages again. Sigh.
Well, this is interesting. Google has temporarily suspended downloads of the accelerator, citing capacity issues:
Thank you for your interest in Google Web Accelerator. We have currently reached our maximum capacity of users and are actively working to increase the number of users we can support.
There's some speculation that it might be part of a deeper problem, with this being a face saving excuse. Have a look at this:
The most disturbing aspect is Google Accelerator could prove to be a serious security problem for web sites specializing in dynamic user-generated content, such as forums and other community sites. This has always been an inherent problem when using carelessly configured caching proxy servers, but never before has such an expansive and intrusive caching platform been offered for free, thus the problems are likely to get much worse before improving as thousands more unsuspecting people install the Google Accelerator client each day.
Try using Web Accelerator on a forum site, one with lots of geeks who love Google and probably already have Web Accelerator installed. Why, if you're lucky, you'll be logged in as someone else, as the folks at SomethingAwful.com discovered. The posters in that forum discovered that most of the time they refreshed the page, they were logged in as a different person, seeing their friend's control panel for the forums.
Looks like there may be problems with the aggressive assumption that everyone follows the spec recommendations on GET completely. The thing to remember about this is that there are standards, and then there's actual practice. The latter tends to be a lot more relevant.
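The SomethingAwful failure mode is easy to see in miniature. This is a toy illustration (not Google's actual implementation - the classes and behavior here are invented for the example): a shared cache that keys responses on URL alone will happily hand one logged-in user's personalized page to the next user, while a cache that honors a `Cache-Control: private` marking won't:

```python
def server(url, cookie):
    # Personalized response: the body depends on who is asking.
    # It is marked private, i.e. "don't share this between users".
    return {"body": f"Control panel for {cookie}",
            "cache_control": "private"}

class NaiveProxy:
    """Caches by URL alone, ignoring Cache-Control - the failure mode."""
    def __init__(self):
        self.cache = {}

    def get(self, url, cookie):
        if url not in self.cache:
            self.cache[url] = server(url, cookie)
        return self.cache[url]["body"]

class PoliteProxy:
    """Refuses to cache responses marked private."""
    def __init__(self):
        self.cache = {}

    def get(self, url, cookie):
        if url in self.cache:
            return self.cache[url]["body"]
        resp = server(url, cookie)
        if resp["cache_control"] != "private":
            self.cache[url] = resp
        return resp["body"]
```

With the naive proxy, Alice loads the forum first and Bob then sees Alice's control panel; the polite proxy serves each user their own page. The real-world wrinkle is that plenty of 2005-era sites never sent those cache headers correctly, which is exactly the "standards vs. actual practice" gap.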
Tino Martinez is trying to replace Don Mattingly again -- this time in the New York Yankees record book. Martinez homered for the fifth straight game and New York rallied twice from big deficits to beat the Seattle Mariners 13-9 on Wednesday for their season-high fifth straight win. No Yankee has had a home run streak this long since Mattingly -- now the club's hitting coach -- tied the major league mark in 1987 with home runs in eight consecutive games.
Here's a way to get better performance and save on payroll - ditch Giambi, claim breach of contract based on his steroid use. Put Tino back at first fulltime. We get better at bats, a better glove, and lower payroll. Maybe Steinbrenner and Cashman can locate a clue and come to the same conclusion...
If previously the preferred route for low-cost, offshore development in India was to set up a captive development subsidiary, ISVs are now looking at alternatives such as third-party outsourcers that specialize in end-to-end product development, said Sarath Sura, managing director of the Indian operations of Sierra Atlantic. Based in Fremont, California, Sierra Atlantic provides outsourced IT services and has a product development facility in Hyderabad in south India.
Which is kind of where I figured this would go. Why did I say it sounded dodgy? Well, get a load of this quote from the article:
To hedge risks, some ISVs outsource to more than one Indian outsourcer. Senable Technologies Inc., a startup in Dallas, for example, has outsourced development to three vendors in India, including Aspire, according to Andy Pulianda, the company's chief executive officer. Senable selected the three vendors after evaluating about 100 companies.
"India has the capability to provide robust commercial product development, but significant due diligence is required before selecting partners that meet your requirements," Pulianda said. A major pitfall, for example, could be low price because the vendor offering the lowest price may have cut costs on infrastructure, communications links or on security, Pulianda added.
Welcome to integration hell with that strategy. Here's a far better idea - hire half as many developers as you think you need, and have them use Smalltalk. You'll actually get a product out the door without the overhead of managing three development teams 12 time zones away.
Remember Satellite Forces, a company I visited last winter? They've just announced that their business is growing in the middle east. Satellite Forces uses Cincom Smalltalk to power their software:
SATELLITE FORCES ENHANCES TECHNICAL SUPPORT TO ADVANCE IN ACTIVE MIDDLE EASTERN IT MARKETS
Ottawa, May 12, 2005 - Satellite Forces International (SFI), a leading developer of business management applications, announced today that they are getting ready to launch their product and services in very active Middle Eastern markets. SFI is collaborating with Harrington Staffing/Informatics Resources of Ottawa, to provide expert technical support to meet the needs and requirements specific to Middle Eastern IT organizations.
“Entering our Atlantis application into a new and thriving geographic market space is of paramount importance to SFI” explains SFI president David Long. “ Having the appropriate level of product technical support is essential for meeting this goal”.
SFI and Harrington Staffing/Informatics Resources have developed a plan to leverage SFI's Atlantis application by aligning SFI's technology and Harrington's pre-qualified technical support resources. By having competently trained Technical Associates from Canada train our Middle Eastern counterparts, SFI is ensuring a comprehensive support system for our initiatives in the region.
“In order to be prepared to meet demand in the Middle East, it was important for us to develop a relationship with a firm that has a reputation for resourcing highly skilled IT professionals”, explains Haissam Rahal, Director of Business Development - Europe and Middle East. “Collaborating with Harrington Staffing/Informatics Resources enables us to focus on our core business of getting Atlantis to this new market area.”
Satellite Forces International Inc. believes the potential for Atlantis in the Middle Eastern IT environment is an estimated revenue stream upwards of 1.2 billion US dollars.
About Harrington Staffing/Informatics Resources
Harrington Staffing/Informatics Resources has been providing highly qualified resources to public and private sector clients for 30 years. Winner of the 2001 Better Business Bureau Torch Award for Marketplace Ethics and Standards and winner of Consumer Choice Award for Staffing industry in the last 3 years, Harrington has a pre-qualified pool of IT specialists that are available for diverse or unique skill required positions and can be engaged at various project stages. Harrington Staffing/Informatics Resources provides cost effective resourcing solutions in support of client business objectives. Visit www.harringtonhr.com
About Satellite Forces International
Satellite Forces is a Canadian based firm that specializes in the development of business process applications, advanced distributed networking solutions and software development tools. Atlantis, the main product offering, is designed to enhance and help expedite the development process for software development companies. Servicing a variety of globally positioned clients, SFI's Atlantis application is a flexible solution that enables organizations to increase profits and lower operating costs. Visit www.satelliteforces.ca.
Apparently, things are not all sweetness and light at WalMart - have a look at this report:
Shari Eberts, retail analyst with J.P. Morgan, said the company's second-quarter forecast "is the largest negative revision in recent memory," and shares of Wal-Mart fell nearly 4 percent in early trading.
Wal-Mart's sales have suffered in recent quarters as soaring gasoline prices cut into household budgets, and the world's No. 1 retailer said it expects energy prices to weigh on second-quarter results too.
I don't think it has as much to do with the cost of fuel as this analyst would have us believe. Why do I say that? Two reasons. First this, from the same story:
Wal-Mart's weak first-quarter performance comes in contrast to rival Target Corp. (NYSE:TGT), which reported better-than-expected earnings as it kept prices up and reaped profits from its credit card operations. In contrast to Wal-Mart's focus on low prices, Target has carved out a niche selling stylish goods at low prices, as well as catering to consumers drawn just by price.
I'll fill in the blanks for the analyst, who likely hasn't left her desk and visited an actual WalMart or Target recently (or possibly ever). Here in the Columbia area, we have two nearby Targets, and two nearby WalMarts. When I walk into Target, what do I see?
- A Clean store - no disorganization on the shelves, the floors are clear, things look organized
- A staff that's working - stocking, checking people out of the store, etc
In contrast, what do I see at WalMart when I step in there?
- A somewhat dirty, somewhat unkempt store. Disorganized shelves, floors that look dirty
- The staff is socializing with itself - hanging around in the aisles, working slowly if at all
Here's what I think happened. After Sam Walton died, and a new team took over, standards slipped. Some bright MBA type decided that they could cut costs by doing without as much training (the socializing staff), and without as much cleaning (the general sense of disarray I see in there now). I'd bet that the MBA type who dreamed those things up is happy as heck, seeing reduced costs on his spreadsheet. Meanwhile, he's got no idea that this has a direct bearing on weak sales.
These small signs of distress are what I saw at KMart the last few years, before all the local stores were closed. Local shoppers started staying away because of those things. Ironically, the local WalMart is in an old KMart location - and it seems that location isn't all they've picked up. What the analysts who think this is all about energy costs miss is the small day to day stuff, which is almost certainly driven by some misplaced "cost control" policies from on high. We've all seen those kinds of things in action, and we've all seen the kind of damage they can cause. Just look at KMart...
The whole controversy surrounding syndic8 and SEO has an interesting subtext to it. "Everyone" agrees that they crossed a line with their subdomain advertising - here's a typical post on it, and here's my earlier post, where I called it pagerank scamming. Jeff Barr weighed in with an apology this morning. The unasked questions are:
- What line did they cross?
- Who defined the line they crossed?
- What's the appropriate punishment for whatever line they crossed?
Before anyone gets all self righteous and claims that it's all too obvious what they did, consider - Google effectively de-listed them from the net, and did so unilaterally. Everyone cheered that - but imagine if it were Microsoft (or IBM back in the 80s) doing something similar - would you be as copacetic with it?
Here's the thing - "everyone" is up in arms about AutoLink (I found a bunch of "the sky is falling posts" on this just this morning). No one is the least bit worried about this raw display of power by Google. This whole thing has the feel of the mandatory self criticism practice that the Soviets and other communists used, and I find it troubling.
Steve Rubel is still worried about AutoLink:
Google today shipped the gold master of its new Internet Explorer toolbar, complete with the oft-criticized autolink feature. eWeek has more including a soundbite from me. While the company made some changes to the Toolbar to give users options, Google still snubbed publishers. Autolink remains and the Toolbar now changes a publisher's content without their permission...with no way to opt-out. Gee, maybe the Google Content Blocker (a spoof) isn't so far from what we might see in the future.
All this, and no putting two and two together (the syndic8 thing I referenced). For that matter, on the political side, I've noticed a few bloggers raising questions about Google News and how it picks content - the thought being that there's some active bias going on. That gets almost no attention; the syndic8 de-listing gets cheered on by what amounts to a mob; and the entire universe gets its shorts twisted over AutoLink - even as they cheerfully munge publisher content with ad blockers of various sorts.
At least inattention and hypocrisy are alive and well.
Spotted in Incipient(thoughts) - I ran across this explanation of teaching OO (Java, in this case, but that's not that relevant):
For instance, based on previous experience with students who had a similar background, I knew I might need to explain how the call stack worked before introducing heap allocation as a further complication. I showed the slide with a picture of the stack and asked if that was familiar. People nodded. I wrote, in near-Java, a recursive implementation of "factorial" on the flip chart - and asked if that would work. One student said, "No it won't - you'd need an extra static variable, wouldn't you?"
So I walked through one specific call to factorial, showing how stack frames were allocated, where intermediate values of the parameter would go, where intermediate results would go, and so on. Some of the students knew this stuff from CS classes, some of them even remembered parts of it - but I needed these notions, and the visual representations that go with them, to be in the "active" part of their memory while we'd be looking at more specifically OO notions, such as object construction (which involves dynamic allocation, which involves contrasting the stack and the heap, and which also raises the question of the difference Java draws between "primitive" and "object" types).
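For concreteness (in Python here rather than the near-Java on the flip chart), the recursive factorial in the anecdote really does work with no static variable at all - each call carries its own copy of the parameter in its own stack frame:

```python
def factorial(n):
    # Each recursive call gets its own stack frame with its own n,
    # so no shared "static" accumulator is needed.
    if n <= 1:
        return 1
    return n * factorial(n - 1)
```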
The question in the title arises from how Russ Pencin used to answer implementation detail questions in the Intro to Smalltalk class back at ParcPlace. Explaining the stack? That has less than nothing to do with teaching objects to a new group of students. I taught Smalltalk classes to many, many Cobol students, and trust me - the stack just didn't come up, beyond explaining what you saw in the debugger.
Getting bogged down in that level of detail does a huge disservice, IMHO. It's not useful, and doesn't have anything to do with the kinds of day to day business apps that ex-Cobol developers are going to be writing. Teaching OO concepts has less than nothing to do with explaining the way the stack looks, or how objects get allocated in memory. As Russ used to ask me - why do you care? A side point - if Java makes you care, then there's a very large problem inherent in Java.
I haven't taken a look at the logs in awhile (April 26), so I thought I'd take a gander. Here's what turns up for the interval from April 27 to May 12:
BottomFeeder downloads by platform:
- Windows: 1996
- Mac 8/9: 1000
- HPUX: 875
- Sources: 773
- Mac X: 521
- Linux x86: 496
- CE ARM: 284
- Windows98/ME: 181
- Update: 149
- Solaris: 38
- Linux Sparc: 34
- AIX: 29
- Linux PPC: 24
- SGI: 20
- ADUX: 10
- Source Script: 6
I'm still baffled by those HPUX numbers :) Somewhere, there's a bunch of HPUX users who really, really want an aggregator - look at the SPARC and AIX numbers in comparison. It's also still the case that we are getting nearly twice the requests for Mac 8/9 than we are for OS X. OS X gets all the press, but it looks like 8/9 is still popular. Or, people are running 8/9 clients under OS X for some reason. The total adds up to 6436, which is just over 400 a day - very nice numbers, and an increase over the last time period I looked at.
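For the curious, the totals check out - summing the per-platform counts and dividing by the 16 days in the April 27 - May 12 interval:

```python
downloads = {
    "Windows": 1996, "Mac 8/9": 1000, "HPUX": 875, "Sources": 773,
    "Mac X": 521, "Linux x86": 496, "CE ARM": 284, "Windows98/ME": 181,
    "Update": 149, "Solaris": 38, "Linux Sparc": 34, "AIX": 29,
    "Linux PPC": 24, "SGI": 20, "ADUX": 10, "Source Script": 6,
}
total = sum(downloads.values())   # 6436
per_day = total / 16              # 402.25 - "just over 400 a day"
```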
So next, let's have a look aggregator access to the XML Feeds:
Aggregator access to the XML feeds, by percentage:
- Mozilla: 21.6%
- BottomFeeder: 20.4%
- Other: 18%
- NetNewsWire: 13.6%
- NewsGator: 4.5%
- Internet Explorer: 4.4%
- SharpReader: 4.3%
- BlogLines: 4%
- Planet Smalltalk: 2%
- FeedDemon: 1.5%
- Liferea: 1.5%
- RSS Bandit: 1.2%
- JetBrains: 1%
- Feed Reader: 1%
- Java: 1%
Not a huge change from last time - Mozilla usage is slightly up, which probably means that Sage is getting used more. Everything else is about the same as before, with some slight variation. In other words, no huge surprises here.
Finally, let's take a look at the access to the HTML pages on the cincomsmalltalk site:
Access to the HTML pages, by percentage:
- Mozilla: 45.6%
- Internet Explorer: 31.1%
- Other: 17.3%
- Java: 2.3%
- BottomFeeder: 1.5%
- NetNewsWire: 1.2%
- Opera: 1%
The interesting thing here is that "Other" usage - and since I doubt people are doing the Java equivalent of wget in an eclipse workspace, "Java" above counts as other - has climbed a bunch, seemingly at the expense of Mozilla usage. Other than that, things are about the same as last time.
The registration deadline for the Smalltalk Solutions Coding Contest is tonight - the actual contest will be made public on Monday at 9 am (Eastern Daylight time). There are some serious prizes on the line for the top three finalists - assuming they can attend the conference.
Update: It's closed as of 6 PM EDT. Make sure to register for StS so you can see how it comes out.
Third, most marketing sites exist to build relationships with customers. You think most businesses spend all that money to only have you visit once? Yeah, right. The only way a company can become profitable is have you visit again and again and again. Look at Amazon. Unprofitable for the first few years in business, but now that they have lots of repeat and return customers they are profitable.
Yeah, Amazon doesn't have RSS. But, the bar is higher now than it was in 1998 (just like the bar in 1998 was higher than it was in 1989).
Actually, they do have RSS, and I use it - you can set up search feeds through Amazon that pump back RSS. It's quite useful, especially if you are interested in a niche genre or a particular author. For instance, here's a feed url that will answer a book search for "Alternate History". I do get some bizarre matches on that (I'm looking for stuff like Turtledove's WWII series, and I sometimes run across stuff like this - go figure).
How did I construct that search? Why, BottomFeeder has a wizard for that :)
I mentioned that BottomFeeder has wizard support for Amazon RSS, which generated a comment pointing out that Amazon hides its RSS support pretty completely. I recall that it was pretty hard to figure their support out, but I created a wizard screen to hide all of the complexity that Amazon exposes. Here's a shot of the Book search definition screen:
Here's the DVD search definer:
And here's a shot of the CD search definer:
They work pretty well, and they make it easy to deal with Amazon's less than obvious RSS interfaces. But hey - why make your RSS capabilities obvious when you can file for completely bogus patents?
Phil Ringnalda observes the subtle power shift taking place now that customer reactions to products are going beyond simple word of mouth:
Assuming that having millions of people with weblogs isn't just a fad that will all be over soon, I'm quite looking forward to seeing what it takes for a company to survive having its customers much more able to talk about them, and having anyone who can google knowing what they say.
For instance, I've been sort of casually considering giving yTunes! a try, even though I'm not that into music which is sold in mass markets these days. But this morning, Charles Arthur linked to Tim Anderson's post saying that Yahoo! Music is as intrusive as Real Player, spewing toolbar and home page offers around and shoving Yahoo! Messenger into the Startup folder. Oops, game over, no need for me to even try it. That sort of thing was survivable, barely, back when you could be the only player for most of the audio around. Now, when you are competing with dozens of other players and stores for the fairly small non-file-sharing part of the market? I'm sure there's still some market for that sort of thing, but to me, it's nothing. There is no Yahoo! Music.
He's right - and this is something that I very much doubt gets the attention it deserves in marketing departments. It used to be that only local businesses - restaurants, for instance - got hurt by bad word of mouth. More properly, it took a lot longer - and an assist from big media - for it to spread.
Now, anyone can make comments - witness my WalMart post from the other day. That particular post is hardly going to dent WalMart, but is a lot easier to find than idle chat at a party.
Scoble points to a hate speech issue and says this:
Richard Silverstein had a commenter leave some hate speech on his blog and he's writing about what happened when he tried to get his blog provider to remove it. Don't visit these links if you aren't ready to see some profanity and/or hate speech cause he reprints the actual post that was left in his comments. Today he asks Hate Speech: Is there ever a limit for blog services?
That's a really tough one for me. Here on my personal blog I don't mind leaving up such speech but I will make sure any such speech is replied to with a personal reply from me. I'm a believer in not hiding that kind of stuff.
I'm not even sure why Scoble thinks this is a tough call. Free speech means that the government won't censor you - it doesn't mean that anyone has free access to a soapbox you provide. A blog is nothing if not a personal soapbox - and I don't think that there's anything remotely like a free speech issue involved in censoring nasty comments out. In fact, by doing so we collectively provide a level of social disapproval for such commentary. That social disapproval is critical in protecting the commons.
To my mind, you do a positive disservice to the community by leaving hate speech up on your site. IMHO, if someone wants to vent that way, they can get their own blog, and do it in their own space.
My wife and I went to a house warming this afternoon - friends of my in-laws had just moved, and we went up to Timonium. As it happens, Allen Ford, who used to work with my father in law, is a collector of old time musical devices. Player pianos and music boxes in particular, but he also had a number of old record players (and devices that came before record players). Here's an example - a player that takes metal disks with holes punched in it, and, by reading the holes, plays music:
It's hard to see in that shot (not a lot of quality in my camera phone), but there's a rather ornate picture on the back of that device. The disk is a metal plate with holes punched in it (like the holes in a player piano roll). I asked Allen for a few details, and they were interesting. These machines were made in the late 19th century, reaching their zenith in the 1890's. The disks played for about a minute, and played instrumentals only - no voice. Very beautiful music though. The disks ranged up to about 27 inches around - but Allen said that those were problematic, as the metal would flex during play and sometimes add discordant mechanical sounds. Here's a picture of a much smaller device of the same ilk:
This one is a windup mechanical device - it's hard to tell in that picture, but the outer edge is serrated - it's all gear driven (as is the one above). Before this house warming, I had no idea that such devices even existed. The next generation of device, as it were, was this kind of cylinder player, from the first decade of the 20th century:
Those are cylinder boxes on the ground there. Allen showed my wife and me a couple - I should have taken another picture. One of the ones he took out was the delicate wax cylinder type - the other was a more durable celluloid cylinder. That latter one had a density of 200 lines per inch, and I could barely make out the grooves on it. The wax cylinder had pretty clear ones.
Somehow, I neglected to take a picture of the reproducing piano with a roll on it - below are two shots - one of the piano with a set of music boxes on it (including one of the Constellation, which rocks the ship in a sailing motion as it plays), and one of a Piano Roll Cabinet specifically made to hold the reproducing rolls. A reproducing piano plays rolls which are paper tape recordings of an actual artist - we listened to Gershwin while I was there. Allen has been collecting these types of pianos since he was in his 20's (he's retired now) - he also puts them back in service for his collection, using his mechanical engineering background to guide him. Here are the shots:
Allen must have had a few hundred of those rolls around - there was another whole cabinet, plus more stacked up in various places. These pianos were being sold in the 1920's, and they were quite expensive - ranging from $1500 - $4000 (in mid-1920's dollars)! They made them between 1904 and 1941. They were being bought by affluent buyers who wanted music in the home - in much the same way that we buy stereo equipment now. I had a completely different idea based on a handful of old movies - I had some notion that these were used by bars and such, but I was mistaken - those were nickelodeons (a precursor to jukeboxes, more or less). The other interesting aspect of these things is that the player piano rolls I looked at used a binary code - us computer types were hardly the first ones down that path. Interestingly, there were three separate, incompatible systems - one binary, one using floating point, and one using something else entirely. Kind of like today's software - you had to buy the right kind of rolls for the kind of player you had.
Now, here's a lovely device - a music box made to look like a bird cage. When you wind it up, the bird sings, with the beak moving. It dates from around 1850, and is a magnificent piece of work:
Finally, there was a 1925 record player/radio - the record player still works, and the radio would as well - if only Allen could still get vacuum tubes for it. We asked him about how he goes about repairing a device like that, and it's pretty hard - he showed us a length of cloth covered rubber tubing he bought back in the 1970's, and he's glad he did - you just can't get that kind of authentic material anymore. It's kind of like repairing a classic car - you scour junkyards and yard sales, and advertise, and hope for the best - remanufacturing the original parts can get to be prohibitive. Anyhow, here's the record player front:
Those dusty bulbs there at the bottom are tubes (type 279, if I recall). Those are compatible replacements for the originals - Allen's got some working type 199's around for the radio that's built into the device, but those are even harder to find replacements for.
This was all fascinating stuff, and - since I'm working off memory here - I'm sure I got some of it wrong. I'm going to have to start carrying a small pad of paper around to take notes on, so that I can get details like these down. Fun stuff, and I could have spent a lot more time looking it all over.
My saga with Comcast continues. This morning, I have no DNS service. The cable modem had all the right lights on, but I couldn't get DNS lookup. Rebooting the modem didn't help, so I figured I'd go all out - unplug the modem, unplug the router, and then replug. Well. That resulted in a cable modem that just wouldn't synch with the network at all. Sigh. This is what I should expect in the absence of local competition, I guess - Comcast doesn't have to care...
Figures - it all came back to life nearly the nano-second that I saved this post as a draft.
Looks like IBM is pushing employee blogs - James Snell has posted the guidelines they are following. The interesting part?
So with IBMers blogging both inside and outside our Intranet environment, recognizing full well that it was time to formalize their support for what many of us had been doing for quite some time, the corporate communications and legal teams worked collaboratively with the IBM Blogging Community to draft the Corporate Blogging Guidelines copied below. The core principles -- written by IBM bloggers over a period of ten days using an internal wiki -- are designed to guide IBMers as they figure out what they're going to blog about so they don't end up like certain notable ex-employees of certain notable other companies. They're also intended to communicate IBM's position on such practices as astroturfing, covert marketing, and openly goading or berating competitors -- specifically, don't do it. As these guidelines were being drafted, we drew heavily upon our own experiences as bloggers and the excellent prior art in this space graciously provided by Sun, Microsoft, Groove and many others who have drafted policies and guidelines for their employees.
That's one way to get buy-in to a set of policies - have open participation in the drafting of the policies.
Here's an interesting tidbit - some larger data centers are moving to DC power in order to cut down on heat inside the datacenter:
A typical power supply, which converts AC power into the various DC voltages required by individual server components, has an efficiency range of just 65% to 85%, vendors say. Just one 1-kilowatt power supply may generate 300 watts of waste heat, and today's blade servers can consume more than 14 kilowatts per rack.
Some data center managers have responded by using DC-based power distribution systems, eliminating the need for AC power supplies for server racks. IBM and HP both offer servers that can accept bulk DC power from a centralized, telecommunications-grade -48-volt DC power distribution unit (PDU) and then step it down to the voltages required at the server level.
Since no one actually produces DC power anymore (although, it's been produced more recently than I thought - witness this Boston story), these firms are doing the conversion themselves, just outside the data center. That way, the heat dissipation occurs in an area that doesn't need massive AC.
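The 300 watt figure in the quote falls right out of that efficiency range - here's a quick sanity check, assuming a supply delivering 1 kW at the upper-middle of the quoted 65%-85% range:

```python
def waste_heat_watts(output_watts, efficiency):
    """Heat dissipated by a power supply delivering output_watts at the
    given conversion efficiency (0 < efficiency <= 1).

    Input power = output / efficiency; everything that isn't delivered
    to the load ends up as heat in the data center.
    """
    input_watts = output_watts / efficiency
    return input_watts - output_watts

# At 77% efficiency, a supply delivering 1 kW wastes roughly 300 W:
w = waste_heat_watts(1000, 0.77)
```

At the low end of the range (65%) the same supply wastes over 500 W - multiply by a rack of blades and the attraction of doing one big conversion outside the room becomes obvious.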
Tim Bray explains what actual interop between MS and Sun would look like. But then again, this alliance is all about Sun finally noticing that Linux thing eating their low end server market away, so why should MS care?
I guess it’s good that Steve and Scott made nice, and there’s no doubt that when the customers tell you to interoperate, then you bloody well interoperate, so it was a good piece of work (see Pat Patterson’s take in a comment on his own blog). But this glue for linking to Microsoft’s WS-Federation is a second-rate solution at best. Among other reasons, WS-Federation is yet another WS-backroom spec that might change (or go away) any time the people in the backroom want it to; not something I’d advise betting on. If you have products from any two vendors that implement Liberty Alliance specs properly, well, they interoperate.
Scoble's been tipped off about a pending deal in the syndication space:
Want a rumor? Come back tonight. I hear that one news aggregator company is acquiring another and tonight I'll have the details. It's going to be a big week for syndication.
There aren't that many commercial aggregators - and of them, Newsgator is the one that just received another round of funding. Is it them, or two of the others trying to build up for a server side move? I guess we'll find out later today.
Good TV returns - Sci Fridays are back July 15:
Stargate SG-1 returns for a ninth season with new cast members Ben Browder (Farscape), Emmy winner Beau Bridges, Oscar winner Lou Gossett Jr. and former The X-Files star Mitch Pileggi. Bridges will also appear in several episodes of Stargate Atlantis. Gossett and Browder's Farscape co-star Claudia Black join the cast of SG-1 in recurring roles, and former Baywatch star Jason Momoa joins the cast of Stargate Atlantis.
Battlestar Galactica comes back for a second season, with the entire ensemble cast returning: Edward James Olmos, Mary McDonnell, Katee Sackhoff, Jamie Bamber, James Callis, Tricia Helfer and Grace Park. Also resuming their roles are executive producer and writer Ronald D. Moore and executive producer David Eick.
Very good news!
Lileks has a post that's fit for both critics and fans of the various Trek series. Take this, for instance:
One of the good things about the End of Trek: I’ll never have to listen to the bitching of fans. The more I troll the message boards and forums and Usenet groups, the more I’m convinced that the entirety of Trek Fandom is made up of people devoted to proving the inadequacies of the thing they supposedly love. Oh, that episode was horrible. Worst season ever. That show wasn’t anything like the wonderful perfect original series remember that show where the computer ran the entire planet? No, not the one where the planet looked like the backlot for an Old West movie. No, not the one where the planet was some sort of jungle with Caucasian Polynesians who shoveled fruit into the mouth of a big computer-god. No, not the one where the planet was actually an asteroid. Oh wait, yeah, that one. No wait, the one where the planet was full of Indians, and the computer saved them by pushing away an asteroid a different one than the one where McCoy was dying and fell in love with the priestess, because it was turn to get some - and Kirk was like a big war brave chief or something. Miramanee! Man, he knocked some moccasins that one. Yes, the new Trek sucks, there’s nothing like that Nazi planet episode well, except for the Nazi planet two-parter. (Which sucked!) There was nothing that had Q in it, like in Next Gen, when he would take them all back to Robin Hood times and it wasn’t even a holodeck because he used his Q powers. For that matter, where were the holodeck stories on “Enterprise”? Not one! Okay, in the last one, but you know what I mean. You want to talk Trek, you talk Next Generation, and that means Whoopi Goldberg in a cardboard hat and a warship with a daycare center.
Heh - there's a lot more there :)
I have to give credit where it's due - after the last Comcast guy came out last week and replaced the outside line, they set up another appointment for today when I still had problems. The tech who came out - guy named Kevin - was very professional, got a new cable modem installed, worked through a provisioning problem on the phone, and then called me back just now to check on the problem - which seems to have cleared up - I haven't had an outage since he left. So my hat's off to the local Comcast tech folks. As much as I've complained, they were right on top of this one.
Julia Lerman gives us a dog's eye view of what's wrong with the world :)
Tomorrow, NewsGator will formally announce that it has acquired Bradbury Software, which makes the popular FeedDemon RSS aggregator, and TopStyle, a CSS/xHTML editor for Windows. FeedDemon will integrate tightly with the NewsGator Online synchronization platform and come bundled with all of NewsGator's paid subscription plans. Nick Bradbury, the developer behind both those products, will be joining NewsGator as well. This is a smart move by both companies. RSS is growing fast and the smaller players will need to combine to compete with more capitalized players, such as Yahoo, Ask/Bloglines/IAC, Microsoft and others.
Interestingly, this was the combination I was thinking of when I posted this. I didn't verbalize it then, mostly because I didn't want to look too foolish if I made the wrong guess :) Looks like Rob Fahrni sort of gets his wish.
The real question is, which codebase dies? They won't need two clients, and the code bases won't be that similar (especially given the fact that NewsGator lives inside Outlook). My guess? Sayonara FeedDemon.
WonderBranding talks about the circulation numbers for newspapers, focusing on the loss of female readers:
A recent post addressed the news that women are abandoning newspapers like rats from a sinking ship. They feel disconnected and ignored when it comes to content, which is 70-80% male-oriented.
Interestingly enough, my wife's comment when I mentioned this wasn't about the male-oriented stories - it was that men take newspapers to the bathroom, and women are happy with reading on a PC. Now, I'm not sure how this actually works out - men have always taken papers to the bathroom, and it's not as if there was some golden age when there were more stories focused on women either.
Bottom line? I suspect that the dropping circulation is simply part of the larger trend, unrelated to any kind of latent sexism thing.
Dave Winer speculates about the FeedDemon/NewsGator thing:
Chris Pirillo has an interview with Nick Bradbury about the deal with Newsgator. I guess it's official now. I was briefed on the deal by Nick Bradbury a couple of days ago. I understand that the motivation was to allow FeedDemon to tie into the subscription-sharing network Newsgator is building. It seems inevitable that they'll buy a Mac news reader product, they would probably like to buy NetNewsWire, and it would be hard to imagine Brent wouldn't take a reasonable offer (I have no inside knowledge). This is venture capital at work, not sales revenue. I imagine that Newsgator will roll up with Feedburner (they share an investor), and Technorati may become part of this deal too. The goal? Get large enough to go public or merge with something going public (SixApart) or get bought by Microsoft.
If it actually goes like that, I'd expect a lot of infighting and chaos. Why? You'll have a lot of strong willed developers trying to build some kind of coherent, common client platform to go back to the shared NewsGator server. I already fully expect either FeedDemon or NewsGator (client) to disappear (and my money would be on FeedDemon).
Never mind what they said in the interview - there's venture money behind these guys, and the very first thing out of the money guys mouths will be something like "we only need one client - pick one, or merge them". I expect that the codebases aren't that similar, so an announced merge of the two would mean either lots of floundering, or the death of one product with a few of its features added to the other.
The Yanks have won 9 straight - maybe they are getting serious, and maybe they have just enough pitching to make a run this year. Crossing fingers....
I've been at the Objectivity User's Conference all day, and there's no WiFi there - so I'm just getting back online. I'll have some notes from the sessions posted in a bit.
I'm here at the Objectivity WorldView 2005 User's Conference. It's in Bethesda Maryland - looks like a large government audience. The first thing is the stock "Here's the Company" talk from the CEO. I suppose I shouldn't sound so jaundiced, but there it is :)
Here's the interesting piece of information - since 2001, the size of the average database has quadrupled. "Data is the capital of the new economy". We're now into exabyte databases [ed] - can relational really scale to handle that if you have to do anything that resembles a complex join? I suppose Alan knows better than I do :)
He's focusing in on the audience (in particular, the security audience) with the "need to sift out the wheat from the huge piles of chaff" bit.
Ok, here we go with some of the examples/target markets:
- Security agencies (pattern detection across multiple disparate data sources)
- Investigation (FBI, et. al.) doing much the same thing
- Environmental control (lots of disparate data coming in from various sorts of sensors) [ed] - here's where manifest typing just gets in the way.
- Telecommunications - usage of Objectivity here tends to be embedded in other apps. Pros touted here: reducing time to market, simplifying complex data types, OO reuse
- Scientific applications - genomics, bio-informatics. Dealing with huge volumes of data
Key areas targeted: Customer focus, innovation, platform for growth, drive increased adoption of OO (and OODB) technology. One of the things being touted is what they might call "mass customization" of the product within their customer base. Put another way, this is services led product direction. On the performance side, they've been focused on enhancements to their query engine - seems to be customer/service led. Heavily focused on bringing in additional partners who add value in various niche areas.
Here's a new thing - they are about to introduce a rental (subscription) license in order to lower the barrier to entry to their technology. Subscription plans are breaking out all over. The other big take away here - Objectivity is profitable and growing.
Douglas Barry, David Caplan, Leon Guzenda, Richard Winter. First up - DBMS market shares. "Other" is 9% - which encapsulates OODBs, XML DBs, etc. Interestingly enough, the dbms market is still growing, even though it's fairly mature. There were 10 commercial RDBMS systems, and 5 OSS ones. 7 commercial OODBs, 6 OSS ones. 20 XML DBs, and 10 OSS ones in that area. There are also a number of specialized products in various niches.
The mainline dbs (Oracle, et. al.) are fighting for the enterprise space - file and data. The ORDB and XML db corps are consolidating. XML and XQuery are growing in popularity. JDO is struggling. Recently, a few OSS ODBMS systems have popped up. Objectivity sees this as a good thing, with these systems doing "missionary" work.
What about the specialized dbms systems? Accelerators, in memory db systems of various types (eg, GemFire). Real time embedded systems (eg, Matisse). Objectivity says: "We can compete against any and all of these".
Next - Douglas on when to consider an OODB: when you have a business need for high performance and/or complex data. Complex data? Looks like a graph. Often lacks unique, natural identification. Often has significant number of many to many relationships (joins in a relational environment). Often requires traversing a graph structure. The big win for an ODBMS - no impedance mismatch. The data is stored in the same way that it's used. You don't have to build a mapping between the way it's used and the way it's stored.
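The "complex data looks like a graph" point is easy to sketch. Here's a minimal (hypothetical) Python example of the OODB storage model: objects hold direct references to each other, so walking the structure is just following pointers - no join, no mapping layer:

```python
class Part:
    """A node in a parts graph; children are held as direct object
    references, the way an OODB stores them -- traversal needs no join."""
    def __init__(self, name):
        self.name = name
        self.children = []

def descendants(part):
    """Walk the graph by following references, depth-first."""
    found = []
    stack = list(part.children)
    while stack:
        p = stack.pop()
        found.append(p.name)
        stack.extend(p.children)
    return found

# A tiny assembly graph: engine -> piston -> ring
engine = Part("engine")
piston = Part("piston")
ring = Part("ring")
engine.children.append(piston)
piston.children.append(ring)
```

In a relational store, the same traversal would mean a self-join (or several) per level of depth - that's the impedance mismatch in a nutshell.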
"Data tends to stick where it lands" - but we continue to have use cases popping up that want to use this data in unexpected ways. Now we've got a complex diagram that - boiled down - says that an OODB in the middle can mediate between your applications (especially those with WS* needs) and the legacy data stores.
Next - Richard Winter on db scalability challenges. The big challenges: rapid growth in the sheer size of the data sets being stored. User populations are growing, and queries are becoming more complex. Users expect data to be up to the minute, and they expect to get near instantaneous answers ([ed] - this is a Google effect. Users have learned that search engines give them answers immediately...).
- direct, natural modelling of data semantics
- parallel operation
- advanced indexing - the basics (btree) were invented 30 years ago, and a "big" db had 100 records. Things are a trifle different now.
- sophisticated access technique
- query planning and optimization
- highly concurrent operation
- provision for application specific solutions
Demand is rising because the sheer volume of potential data is growing - and the availability of "always on" connectivity is growing. For many purposes, the scaling problem is proportional to the size of the largest partition.
Principle: The db engine should "know" the true structure of the data and optimize around that data. Most of the scaling work has been done on the relational dbs like Oracle and DB/2 - but that still doesn't help when you force a graph/network structure into a set of tables and rows. That removes knowledge of the structure from the db, and "outsources" it to the application(s).
Question - what about a standard API (i.e., OQL) - this from the OMG guy. The ODMG (now defunct) was working on this, and the files from that group have gone back to the OMG. Beyond the current efforts there, just stuff in the JCP (i.e., proprietary). One thought is that JDOQL may be adopted as a "standard".
Question - what is Objectivity doing with regard to JDO and XQuery? They mentioned that JDO is struggling, and XQuery is getting more popular. Objectivity now has XML import/export, and is planning to expose their internal API's via XQuery. It may be that XQuery ends up being a sort of default OQL.
Question - Compare and contrast Sybase IQ with Objectivity/DB. Sybase IQ is an innovative product for Sybase - uses column storage instead of row storage - in a conventional db, all data for a person goes in a table with multiple columns. With this they use data compression and bitmap indexing. Works for tabular data that fits the relational model ([ed] - I need to look this up). The view here - Objectivity works better for complex data that does not necessarily fit the relational model, and when you are going to tie the db and the app more tightly together (i.e., embedding).
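Column storage is easier to see in code than in prose. A rough sketch (my illustration, not how Sybase IQ actually lays out pages): the same three records stored row-wise and column-wise, plus the bitmap-index idea mentioned above:

```python
# Row storage: each record kept together (the typical RDBMS layout).
rows = [
    {"name": "Alice", "age": 34, "city": "Boston"},
    {"name": "Bob",   "age": 51, "city": "Denver"},
    {"name": "Carol", "age": 34, "city": "Boston"},
]

# Column storage: one array per column -- a query that only touches
# "city" scans just that array, not every full record.
columns = {
    "name": ["Alice", "Bob", "Carol"],
    "age":  [34, 51, 34],
    "city": ["Boston", "Denver", "Boston"],
}

def bitmap_index(column_values, value):
    """A bitmap index: one bit per row saying whether the column holds
    the value -- compact, and bitmaps AND/OR together cheaply."""
    return [1 if v == value else 0 for v in column_values]

boston = bitmap_index(columns["city"], "Boston")
```

Repeated values down a column also compress well, which is where the data compression claim comes from.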
Darren Hobbs wonders about the new method of class creation in VisualWorks (new since 1999 - which is to say, not that new :) ):
Hmm. VisualWorks has namespaces, which Smalltalk-80 does not, and they seem to have changed the way subclasses are defined. According to my copy of the purple book, a subclass is created by sending a class the message 'subclass:'. It should be possible to implement a version of subclass in the Creature class that adds the extra code.
However, VisualWorks wants me to send 'defineClass:' to a namespace. Problem with that is I want to change the way subclasses of a particular class are defined. If I change the code in the NameSpace class that will affect every class that is ever created thereafter.
How is this different than before? Prior to namespaces, if I modified the class creation method in class Behavior, I ran into the same issue. Nothing but the message recipient has changed, IMHO...
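For readers who don't have a Smalltalk image handy, Python has a close analogue of "hook subclass creation on one class instead of editing the global machinery" - `__init_subclass__`. This is my illustration of the idea, not a claim about how VisualWorks implements it:

```python
class Creature:
    """Base class that customizes how its own subclasses are created --
    analogous to overriding the class-creation method for one class
    rather than changing Behavior (or the namespace) for everybody."""
    registry = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Extra per-subclass setup runs here; unrelated class
        # hierarchies are completely untouched.
        Creature.registry.append(cls.__name__)

class Dog(Creature):
    pass

class Unrelated:
    # Not a Creature subclass, so the hook never fires for it.
    pass
```

The point stands either way: whether the creation message goes to a class or a namespace, you scope your customization to the hierarchy you care about.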
Steve Rubel points out an evolving phenomenon:
People now put the same degree of trust in me (and other reputable bloggers) that they might normally only reserve for analysts and journalists. There's a big difference between these influencers, however, and bloggers. With press/analysts you have a safety net. If they leak, you have options for recourse. With bloggers you really don't.
Well, I'm not clear on what safety net exists with analysts either (unless you have a contract with them). Ultimately, you're making a trust call, and the person you confide in is going to either be trusted in the future - or not - depending on their behavior.
The more interesting aspect of this is that you could be "on the record" at any time now - whether you are giving a formal presentation/speech or not. That's going to be a rather large change as realization starts settling in.
most popular desktop aggregators on the Windows platform will now have a richer synchronization story with its most notable competitor. It also puts pressure on other desktop aggregators to figure out a strategy for their own synchronization stories. For example, I had planned to add support for synchronizing with both services in the Nightcrawler release of RSS Bandit but now it is unclear whether there is any incentive for Newsgator to provide an API for other RSS readers. Combining this with the fact that the Bloglines API isn't terribly rich means I'm between a rock and a hard place when it comes to providing synchronization with a web-based aggregator in the next version of RSS Bandit.
Ditto for BottomFeeder. Right now, BottomFeeder can synchronize itself with another running instance of BottomFeeder over HTTP, or via file import (i.e., you export a synch file from one copy, and then import it into another). Until there's a useful server based API, I don't see anything happening here on my end. I don't really see what incentive the NewsGator guys have for playing nice, either. That's not a shot at them - far from it. I just don't see why they would worry about it...
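The export/import style of synch is simple enough to sketch. This is a hypothetical simplification (BottomFeeder's real synch file carries more state than this), but it shows the core idea: the merge is a union, so an item read on either machine stays read:

```python
def export_sync_state(read_ids):
    """Hypothetical 'synch file' payload: the item ids marked read."""
    return sorted(read_ids)

def import_sync_state(local_read, exported):
    """Merging read-state is a set union -- reading an item anywhere
    marks it read everywhere, and the merge is order-independent."""
    return set(local_read) | set(exported)

home = {"item-1", "item-2"}
work = {"item-2", "item-3"}
merged = import_sync_state(home, export_sync_state(work))
```

A server-side API would do the same merge centrally, which is exactly the synchronization story NewsGator is in a position to own.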
Interesting tidbit just came out at the tail end of the talk I just made it to (blasted beltway traffic!) - Objectivity now has a Python binding. That's pretty cool - they also support Smalltalk, of course. This means that you can be productive with Objectivity, instead of falling down into the endless complexity provided by C++ and Java. The presenter pointed out how much more difficult it is to do prototyping against Objectivity with C++ or Java. They plan to release official support for Python later this year - the binding itself has been around since 2000.
Today's keynote is being given by an FBI CTO who spent most of his career at NSA - interesting cross over. He made an interesting observation about how line workers view updates from IT - they fear that new technology coming from IT will be less capable than what they have now. That's hardly unique to FBI - and it's part of a large scale issue in IT shops: there's a lot more slavish following of analysts (especially the useless ones) and fads than there is examination of actual needs.
One item I noticed he hit on as being important to him - SOA. The thing about SOA that makes me skeptical is that it means all things to all people. It's like OO 15 years ago, or the visions being pushed by the OMG a decade ago. Universal answers never are - but C level IT folks always seem to be willing to drink the koolaid.
Oof. In a discussion of network security we just got treated to "buzzword bingo", where the buzzwords are all security agency specific. There's a real issue afoot there though - in any secure environment, how do you make data available to people in an appropriate manner? For instance - some data might need to be shielded for reasons having to do with trial rules. Others might have to be shielded based on which foreign governments are or are not allowed to have access. That all sounds like "why do I care?" for business folks but - with the advent of SarbOx rules, it actually has deeper meaning than a lot of us might like.
Here's another issue that will resonate with business people - the simple volume of structured (and unstructured) data washing through systems. Consider a product marketer or product manager - how the heck do you figure out what the competition is up to? For that matter, how do you figure out who the competition even is? Unlike the FBI/NSA problems, it's nearly all open data - but the fact that it can be found doesn't mean that it will be found. And here's where the FBI guy relates it to his problems:
When the military captured Hussein, they used social (or network) analysis to do so. The tools were pretty simple - large sheets of paper and markers. Now consider a hostage taking (something the FBI deals with). In such cases, they don't have a long time to find the person - it's literally life and death, and time matters a lot. Being able to drive that kind of analysis quickly using decent tools would help them a lot. To relate this back to what seems to be a trivial issue (at least when compared to kidnapping), consider a product team trying to determine which of a set of desirable features to implement for the next release of a product. There may be (say) 10 possible features of interest, but - given the size and capability of the team, combined with the desired delivery schedule, only (say) 3 can be delivered.
How do you make those calls? Market research to determine sales impact? Interviewing existing customers in order to extrapolate general market demand? What about extending the delivery time out so that more could be done? What about hiring more staff so that we could accomplish more? These are all calls that are made with insufficient data, and they need to be made decisively. In my work, I make those sorts of calls all the time, and I'm never entirely certain that I've made the right ones.
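The network analysis done with paper and markers boils down to something very simple computationally. Here's a toy sketch (names and data entirely made up) of degree centrality - counting direct contacts to find the best-connected person in a network:

```python
from collections import Counter

def degree_centrality(edges):
    """Count how many direct contacts each person has -- the simplest
    social-network measure, and the kind of thing that was literally
    tallied on large sheets of paper."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return degree

# Hypothetical contact network: C talks to everyone, D only to C.
contacts = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
central = degree_centrality(contacts).most_common(1)[0][0]
```

With decent tooling, the same tally over thousands of phone records runs in seconds - which is the whole point when the clock is running on a hostage case.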
Lenny Hoffman, Objectivity engineer. The current query support was added in 1992, and hasn't evolved a lot since. That wasn't initially a problem - most people were using the product as a persistent store. Customers who needed query support either built their own or ran back to an RDBMS. The issue: sheer inertia held this area back.
The big problem is that data size is growing. There's a growing demand from their customers for out of the box query support - they want the scalability that they've come to enjoy with the OODB, but with the kind of query support they know is available with an RDBMS.
- Higher Fidelity - relationship properties as query values, Set qualifiers, path based queries
- More customizable - Application defined calculated values, Application defined indices
- Better performance - primarily optimization
- Open - publish a public OQL with an independent predicate equation tree. Add a listener interface for monitoring, logging, and tuning
[ed] - interesting bit about OQL - will that be truly open, such that other ODB vendors could adopt it? Or is open in this sense just about having a defined API?
The path query support is somewhat inspired by XPath, but without the reliance on an XML structure. It looks a lot like what you would do with a Smalltalk collection and one of #select:, #collect:, #reject:, or #detect:.
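Purely as illustration (the collection and the accessors here are my own invention, not from the talk), the Smalltalk collection idiom that a path query resembles looks like this:

```smalltalk
"Hypothetical domain objects - 'orders' is some collection of order objects."
expensive := orders select: [:each | each total > 100].           "filter"
customerNames := expensive collect: [:each | each customer name]. "walk a path: order -> customer -> name"
firstOverdue := orders detect: [:each | each isOverdue] ifNone: [nil].
```

A path query walks object relationships the same way that #collect: block does - the difference being that the database evaluates it in place, rather than faulting everything into client memory first.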
Calculated Values - this is what you get with Gemstone by having the code execute in the database instead of in the client, or in an RDBMS via stored procedures. With Indexing, the big changes are extensibility and the addition of a public API.
In summary - this is an enhancement to the existing query engine, opening it up and making it easier to access and extend. Another note here - this is not part of the current release. In answer to a question, it's not clear which release it will be part of.
The question is, how does an old dog such as myself get immersed in the gospel of yet another language? You can say all you want about Microsoft, but you have to admit their developer tools are solid if nothing else. I know IronPython exists for the .Net framework, but what about Ruby and Smalltalk implementations? I'm spoiled by Visual Studio.Net - it's so nicely integrated, and it just works. I've really come to appreciate VS.Net now that I'm spending my days on a Linux box; you have no idea how pathetic the tools are on Linux.
Well, here's what I used to do, fwiw: I'd take a problem I had solved in my first programming language (Basic) and write the same application in the new language. That way, I wasn't trying to understand the domain problem; I was just learning the new language. It wasn't a hard problem - a manual cryptogram solver (for the puzzles that still appear in some newspapers). I wrote that in Basic, in UCSD Pascal, in a proprietary language at the DoD, in C, and finally in Smalltalk. I stayed in Smalltalk after I finished the problem in less time than it had taken me to go over syntax in the other languages I had learned :)
As to the question about .NET integration - there are no shipping Smalltalks on that platform - a large part of the problem is that the CLR just isn't ready for a language like Smalltalk (at least not yet). As to tools sucking on Linux - that's not true if you use something like VisualWorks - which is binary portable across every platform we support :) I do my BottomFeeder work mostly on Windows, but I do all the blog server development on Linux - and on an old PII 400! Try running any of the supposedly "modern" development systems on that :)
Is Gosling really this stupid?
The "clear need" that Magnusson cites is anything but clear to Gosling, who says Sun has received negative response from the enterprise development community regarding the idea of open-source Java. "We've got several thousand man-years of engineering in [Java], and we hear very strongly that if this thing turned into an open source project—where just any old person could check in stuff—they'd all freak. They'd all go screaming into the hills."
I don't know James - has it been a problem for NetBeans? Is the Apache project in chaos? Is Eclipse? If you don't want to open source Java, that's fine, and believe me, I'd understand. What I don't get is why you have to make crap up instead of just saying no.
The next time someone in management asks "why do you spend time monitoring the blogosphere?", point them to this post by Steve Rubel. The next thing you'll want to figure out is "what would our response be in a similar situation?"
Blaine Buxton explains why resumable exceptions are a good thing:
Time for another "this is why Smalltalk is cool" post, but this one also holds true for Ruby and Lisp as well. So, it's a "why Smalltalk, Ruby, and Lisp kick mucho booty" so to speak. OK, enough of the back patting and let's get down to business. Today's topic is resumable exceptions. It has a nice geeky ring to it, doesn't it? The first thing you might ask yourself is, "Why in the world would I want to resume an exception? It's an exception! Dead programs tell no tales!" True, true. Normally, you want an exception to send your program down in flames because you had a mechanical glitch that you didn't expect. Better stop everything before the propeller goes slashing through your data unkindly! But what if we had exceptions that were good, that could notify us of potential bad things or even enumerate potential bad things?
I use resumable exceptions quite a bit in BottomFeeder - they allowed me to create a customized RSS/Atom handler that could deal with many of the trivial issues in feeds (like bad characters) without having to create my own "tag soup" parser. The lack of them in the mainstream languages (Java, C++, C#) explains why, every time I bring this up in a forum with people involved in the syndication space, they assume that I had to roll my own regex based tag soup parser. But hey - all those extra libraries must be making them more productive... somehow.
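To make that concrete, here's a minimal sketch - Warning is a standard resumable exception class in Smalltalk, though the feed-parsing framing here is just my own illustration, not BottomFeeder code:

```smalltalk
"When the handler sends #resume, control returns to the point just
 after the #signal: send - the stack hasn't been unwound - and the
 protected block keeps running instead of dying."
| result |
result := [Warning signal: 'bad character in feed'.
           'parsed anyway']
              on: Warning
              do: [:ex | ex resume].
Transcript showCr: result   "displays 'parsed anyway'"
```

In Java, C++, or C#, the stack is already unwound by the time the catch block runs, so there's nothing left to resume - which is exactly why those languages can't offer this, and why you end up restructuring your parser around the error cases instead.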