Looks like Chris Pirillo's gada.be would be an interesting site to play with, but it's not responding now. Looks like he needed more server oomph behind a service that Scoble and Winer (and everyone else) immediately advertised and promoted.
I think it's time to call out this patent nonsense. Have a look at this mess by Apple and the following people, whose names are on a patent that they all know full well is bogus:
- Richard Williamson
- Daniel Wilhite
- Jack Greenfield
- Linus Upson
Apparently, they were just granted this patent (filed in 2002) for the brilliant innovation of the proxy object. Well heck - I think there are a few instances of prior art. Just taking one I can rattle off the top of my head, let me fire up VisualWorks 2.5, released in 1995. Well - I see class LensAbsentee (actually shipped with VW 2.0, which came out in 1993, iirc). LensAbsentee is an abstract superclass (but not part of the "normal" hierarchy, as it's not descended from Object). Its purpose? Why, when you do a DB query using the Lens, you don't get full complex objects - you get - wait for it - proxies for them. When you actually try to deal with them, they fault in. Kind of like the way the patent explains it:
A method for providing stand-in objects, where relationships among objects are automatically resolved in an object oriented relational database model without the necessity of retrieving data from the database until it is needed. A "fault" class is defined, as well as fault objects whose data haven't yet been fetched from the database. A fault object is created for the destination of a relationship whenever an object that includes the relationship is fetched from the database. When an object is fetched that has relationships, fault objects are created to "stand-in" for the destination objects of those relationships. Fault objects transform themselves into the actual enterprise objects—and fetch their data—the first time they're accessed. Subsequently, messages sent to the target objects are responded to by the objects themselves. This delayed resolution of relationships occurs in two stages: the creation of a placeholder object for the data to be fetched, and the fetching of that data only when it's needed. By not fetching an object until the application actually needs it, unnecessary interaction with the database server is prevented.
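The idea being patented - and the idea the Lens shipped years earlier - is simple enough to sketch in a few lines. Here's a minimal Python illustration of a fault object that stands in for a database row and fetches it on first access; all the names (`Fault`, `load_department`, and so on) are mine, not from either system:

```python
class Fault:
    """Stand-in for a database object whose data hasn't been fetched yet."""
    def __init__(self, fetch):
        self._fetch = fetch      # callable that loads the real object
        self._target = None      # resolved object, once faulted in

    def _resolve(self):
        if self._target is None:
            self._target = self._fetch()   # hit the "database" only now
        return self._target

    def __getattr__(self, name):
        # Messages sent to the fault are forwarded to the real object,
        # fetching it on first access.
        return getattr(self._resolve(), name)


class Department:
    def __init__(self, label):
        self.label = label

def load_department(label):
    print("hitting the database for", label)
    return Department(label)


class Employee:
    def __init__(self, name, department):
        self.name = name
        # The relationship's destination is a fault, not a real Department.
        self.department = Fault(lambda: load_department(department))


e = Employee("Joe", "Engineering")
# No database access has happened yet; it happens on first use:
print(e.department.label)
```

Subsequent accesses go straight to the cached object - exactly the two-stage "create a placeholder, fetch on demand" scheme the patent describes.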
So hey - you four "brilliant" patent holders - there's prior art staring you in the face (and I'm sure that there are older things than this - TopLink for Smalltalk predates the Lens, iirc). Do any of you have the integrity to admit it?
Update: I had pulled the patent links from this page, which apparently had them wrong. The links are fixed now, so that you can follow the absurdity in all its glory.
I think the key thing to bear in mind about Yahoo blog search is this comment from the CNet story:
Initially, Yahoo News Search will have access to material from hundreds of thousands of blogs but will eventually scan millions, said Joff Redfern, a director in Yahoo's search unit.
Which is why the results are (thus far) disappointing. I think the launch really amounts to a public beta.
We have an annoying (and seemingly inexplicable) issue with the Media Center PC. Every so often, it will stop passing sound to or from the TV. Other sound works fine, leading me to think it's the tuner card. Rebooting always solves the problem, but it's annoying. Anyone seen this, or have an idea what I should look for?
The blog has been mostly inaccessible for the last hour or so - it was a scaling issue with one of the early things I did in the server code. At the bottom of the page here is a list of referers. I have Smalltalk code that generates that, and it has been running in the same image that serves the blog pages. The problem? Traffic (especially spam traffic) is up - so having a process that read log files, filtered them (in memory) and then spit out the cleansed files to be read by the server was a little too much - each time the log scan code ran, it was slowing the server down - and finally, today, just making it inaccessible.
The answer, of course, is to move that code out of the main server and run it as a cron job - which is what I have to do now.
A fair bit of the new traffic here is actual new readers - but there's a disturbing amount of attempted referer spam as well. The vast majority is offers for various drugs (the same ones advertised on TV), gambling, and of course, that perennial favorite, porn.
We have some filtering going on at the Apache level now to address that, and I've got the new process for dealing with referers coming up. What a bundle of fun this is :/
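The log-cleansing pass itself is nothing fancy - match each referer against a blacklist and drop the hits. A toy Python version (the patterns and log lines here are made up for illustration; the real filtering lives in my Smalltalk code and the Apache config) looks like this:

```python
import re

# Hypothetical spam patterns - the real list is much longer.
SPAM_PATTERNS = [re.compile(p, re.IGNORECASE)
                 for p in (r'poker', r'casino', r'pharma', r'\bpills\b')]

def clean_referers(lines):
    """Drop referer log lines that match a known spam pattern."""
    return [line for line in lines
            if not any(p.search(line) for p in SPAM_PATTERNS)]

log = [
    'http://example.com/blog/post1',
    'http://cheap-pills-online.biz/',
    'http://texas-holdem-poker.example/',
]
print(clean_referers(log))
```

The catch, as I found out, isn't the filtering logic - it's that scanning and filtering in the same image that serves pages steals time from the server. Hence moving it out to a cron job.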
Looks like the Atom publishing API is an IETF draft now. I guess that means I need to look at implementing it - both server and client side.
Until you actually try to decouple it. Earlier today, I had a server issue that related to a process - scanning for referers - that needed to be run outside the main server. Fine; turning that off was simple. As it happens, running it separately surfaced a whole raft of little assumptions I'd made along the way.
It took a bit of effort to make the scanning service truly standalone - it was grabbing various bits of information (file locations, etc) from the blog settings information. Running independently, I didn't really want to saddle it with all that extra dreck, so I had to decouple that. Took a fair bit of trial and error to find all my assumptions too.
Bottom line - decoupling services is always harder than you think it will be.
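The shape of the fix is worth spelling out. In generic terms (this is a sketch, not my actual Smalltalk code, and the paths are invented), the difference between the coupled and decoupled scanner looks like:

```python
# Coupled: the scanner reaches into the blog server's settings object,
# so it can't run without loading the whole server image.
class BlogSettings:
    log_dir = "/var/log/blog"            # hypothetical paths
    referer_page = "/var/www/referers.html"

def scan_coupled():
    return "scanning %s -> %s" % (BlogSettings.log_dir,
                                  BlogSettings.referer_page)

# Decoupled: the scanner takes only what it actually needs as arguments,
# so it can run standalone - from a cron job, say - with no server loaded.
def scan(log_dir, referer_page):
    return "scanning %s -> %s" % (log_dir, referer_page)

print(scan("/var/log/blog", "/var/www/referers.html"))
```

The hard part isn't writing the second version - it's finding every place the first version quietly reached into shared state.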
Wow - I knew that newspapers were losing readers steadily, but I didn't realize just how bad it is - their readership is actually dying off.
Newspaper readership is down. Fewer young people are picking them up, and the average age of a newspaper reader is now 55, according to a Carnegie Corporation study. Many papers have been losing circulation at alarming rates across all age groups.
Newspaper profits and the stock prices of the companies that own them were also down during the first half of 2005. The biggest newspapers are cutting staffs, closing foreign bureaus and taking other steps to meet their owners' profit goals.
An average age of 55? Wow. That's got to alarm the finance guys.
Well, I'm tired of rewarding the spammers via the referer lists. Instead of putting that list at the bottom of the posts on the site, I've moved it back behind a POST - only the blog owner/admin can see them now, after logging in. It won't stop the flood of spam, but it will stop rewarding it.
Charles Johnson reports that referer spam attacks are up:
Behind the scenes, there is a pretty amazing swarm of robots hitting our Most Recent Referrers page tonight, using zombie servers (servers infected with a previous virus that leaves a back door open) with a range of proxy IP addresses, many in China, to try to plant URLs among our referrers that link to the usual dreary list of illicit pharmaceutical products. This kind of idiot spamming is a constant annoyance, but tonight’s robot swarm is extraordinary for its sheer volume.
That was the problem that took this site offline for a bit over an hour yesterday, and resulted in my moving the referer list behind a post form, accessible only to the author of a blog. I guess a new assortment of bots is out there being played with.
Update: Those are the same blasted spam referers we're seeing.
Yes, gada.be is a cool search aggregator - and now that the servers are in order, it brings results back pretty darn fast. Still, there's something missing that I need to make it useful for me (YMMV, of course) - syndication-ready results. For instance, here's a gada.be search for BottomFeeder - but I have to be in my browser to see that. The problem? I don't want to be in my browser, I want to be in my aggregator. Right now, I have a variety of search feeds from a bunch of different engines. If gada.be provided results in RSS or Atom form, I'd be able to cut a lot of those back. As it stands, having those results live in HTML in a browser makes it far less interesting to me.
There's a nice German language tribute to the 25th anniversary of the release of Smalltalk-80.
Looks like Apple has discovered the power of impulse buying - have a look at the way Dave Astels ended up with a new powerbook at the Apple Store :)
Here's good news - Yahoo and MS are working together on IM, allowing their networks to talk. At present, the various IM systems are like independent, unconnected phone networks - a set of isolated silos, with the AIM one being the biggest. Maybe this will generate enough momentum that AOL will be forced to respond. Let's hope so.
Sylvie Noël is seeing the same thing I am - splogs are starting to choke the various blog search engines:
If you've got a PubSub account, you've probably come across these in the returns from whatever search term you've put in. I find them very annoying, as they drown out the few interesting new blogs that PubSub sometimes throws my way. In fact, it's destroying the usefulness of PubSub for me.
It's not just PubSub either - Feedster is being run over by splogs, and those blasted ads that Feedster is returning (as full items) are annoying as heck. I'd much prefer to see an ad tacked onto an item - the bozo ad items are not a lot better than spam. Technorati is getting washed and waxed by splogs too - add that to their frequent inaccessibility, and you have a service that's getting less useful all the time.
The damage just spreads...
Here's a site worth looking at if you are a history buff - WWI photos, in color.
My friend Mike pointed out Orson Scott Card's review of Serenity - I agree, it's a great movie - the sort of movie that makes you realize how good Star Wars could have been if Whedon had been in charge.
Mark Watson compares this process to MS' update, and calls it simple:
Why can't Microsoft make upgrades this easy? A few caveats: Ubuntu is not officially releasing "Breezy" until tomorrow, so I did this on my laptop (which is not my main Linux development system): In the Synaptic package manager, under Settings -> Repository, I manually edited my repositories changing all occurrences of "hoary" to "breezy" and I removed the install CDROM as a repository source. I then clicked the "Mark All Upgrades" taskbar icon and then clicked "Apply" - when asked, I chose the "Smart Mode" upgrade that apparently is meant for upgrading to new releases. One particularly great thing: under "hoary", I had to build and install my own driver for the RT2500 wifi device in my laptop and manually start it. After the upgrade, wireless is on with no manual operations. Note that with the RT2500, when booting Windows XP, I have to manually start wireless.
I don't know about you, but any process that involves manually editing configuration files and then building a driver isn't "easy", and doesn't compare favorably to Windows Update - or the Mac updater, either.
PVRBlog has the scoop on something I find really interesting - the possible evolution of iTunes into an Apple Media Center:
The new iMac + Front Row package looks pretty similar to the first versions of Microsoft's Media Center XP. You have simple access to your music, photos, videos, and DVD player, all from a small iPod-like remote. It doesn't look like they're concentrating on sending the video to another room or to a larger screen, but if you live in a small apartment or dorm room and don't need to send video out to a larger screen, backing away from your iMac and using the remote could be a pretty good solution for an entertainment PC.
If Apple comes out with PVR capabilities, I'd get one of those instead of another Media Center PC. The Media Center PC has been too flaky.
When procedural hairspray just doesn't cut it anymore :)
Scoble on the iPod video:
One thing, though. Steve Jobs better never tell me we're copying him next time I meet him in the street. Why? Cause he brought out a video-playing computer (we call those Media Centers) and a portable video-playing device.
I'll bet that there will be one crucial difference - the Apple version probably won't drop audio for no apparent reason, like my Media Center PC does.
Tom Murphy of PR Opinions has some thoughts on "identity management" - i.e., knowing what people are finding out about you via the search engines. As he points out, this can be particularly interesting if you have a common name:
This online reputation ecosystem was brought home to me recently in a personal way. My parents, God bless them, weren't the most imaginative when deciding on my name. It's a proud family name, but Tom Murphy isn't exactly exotic. Indeed a quick search finds a playwright, the mayor of Pittsburgh and thousands of other similarly named individuals. We all have the same problem. There was an analyst at Meta Group (R.I.P.) called Tom Murphy and for years we used to receive each other's media queries. It's funny we now both work at Microsoft and the confusion has continued unabated.
But in the past week or so, the media in Ireland and the UK have been focussing in on an unsavoury Tom Murphy or to give him his full title, Tom "Slab" Murphy (no relation). He is the alleged chief of staff of the IRA and has been linked with some dodgy property dealings in the UK amongst other things. The story has been on every TV news bulletin, radio bulletin, broadsheet, tabloid and online news service over here. A friend of mine joked that soon I'd be getting a lot more "respect". Although there's little likelihood that we'd be mistaken for each other, and of course he could take major offence at being mistaken for a PR practitioner, it illustrates the vagaries of online reputation.
I don't get associated with anyone that interesting, but have a look at a Google search for me - I'm hitting the top two slots again, with the Column Two guy in the third and fourth - and some consultant in the fifth slot. Mind you, I don't know either of those two guys, but both are in jobs similar enough to mine that there could be confusion from people who don't already know the one they are looking for.
The more interesting name match is the judge, who's there if you scroll down. In various name searches I have set up in BottomFeeder, that guy comes up because decisions he makes from the bench sometimes hit the news.
Which all goes back to what people will think when they Google you. The first problem, of course, is the possibility of getting the "wrong you". Which could make for a bad introduction before you even meet. What's the answer to that? I have no idea, honestly.
Russell Beattie demonstrates how hard it is for a company to shed a bad reputation - he's very, very wary of Microsoft, even when they are doing generally good things:
Microsoft Bribery - Microsoft is using its $60 billion cash reserves to buy out everyone who it has stomped over in the past decade. Sun, Netscape (AOL), Novell and now Real. Do they really think that cleans up their image? The value they get from their illegal monopolistic practices far outweighs the pittances they’re paying out in remunerations. This round of settlements is just clearing up loose ends so they can start another round of aggressive business tactics (look at Sendo for a recent example). They’re also doing things like “embracing” open standards like RSS, opening up their Office doc XML stuff, licensing their mobile OS to Palm, and doing IM interop? Sorry, it all looks good, but is mostly an effort on MS’ part to improve its public image, no less. As soon as they can crush their current competition, they’ll be back to their same old ways.
Now, the general buying public doesn't share the nasty image of MS that a lot of tech folks do, which is why the generally bad reputation they built up didn't do more harm. Still, Russ' post shows how persistent that "first impression" can be. With many people, you may never get a chance to create a better one.
Derek points out where the new downloadable TV thing that Apple (and ABC) are rolling out could go:
Where's the win, though?
Consumers have wanted "a la carte" cable programming for a while. Instead of being forced to buy bundles of 120 channels to get the 6 they want, they've wanted the ability to buy just those channels and (more importantly) pay for just those channels.
This has the potential for changing this dynamic even further, allowing people to buy their shows a la carte, and to eliminate many middlemen in the process.
This is something much bigger than the PVR - it's disintermediation hitting the entire broadcast and cable business between the eyes. It should be an interesting couple of years coming up.
It's all about branding and excitement. My daughter is only vaguely aware of the Media Center PC, but she's excited as heck about the new stuff Apple announced. First thing - MS needs better names. Second thing - the Media Center PC needs to be far easier to use. The wife and I went through too much pain to get ours working. If Apple gets into this game with a consumer friendly device, the Media Center PC will be a goner.
Interesting article on "Higher Order Messages" here - what's HOM?
A higher order message is a message that takes another message as an "argument". It defines how that message is forwarded on to one or more objects and how the responses are collated and returned to the sender. They fit well with collections; a single higher order message can perform a query or update of all the objects in a collection.
Nat Pryce goes on to give an example in Ruby. The frothing reference? Here it is:
The higher order messaging version does have messy dots between messages, but unfortunately that's an aspect of Ruby we can't change. At the risk of sounding like a frothing evangelist, I have to admit that the code would be neater in Smalltalk.
And he gives a Smalltalk example. The interesting thing is, Michael did some HOM work in VisualWorks a couple years back - I can't find any posts from him about it, but you can load the HigherOrderMessaging package from the public store.
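The flavor of the thing translates outside Smalltalk and Ruby, too. Here's a rough Python approximation (all the names - `Each`, `Account` - are mine, not from Nat's or Michael's code): `Each` is a higher order message receiver that takes the next message sent to it as its "argument", forwards it to every element of a collection, and collates the responses:

```python
class Each:
    """Proxy that forwards the next message to every element of a collection."""
    def __init__(self, items):
        self._items = items

    def __getattr__(self, name):
        def forward(*args, **kwargs):
            # Forward the message to each element, collate the responses.
            return [getattr(item, name)(*args, **kwargs)
                    for item in self._items]
        return forward


class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance


accounts = [Account(10), Account(20)]
# 'deposit' is the argument message; Each decides how it gets forwarded.
print(Each(accounts).deposit(5))
```

Not as clean as the Smalltalk, of course - the proxy trick via `__getattr__` is doing the work that message reification does natively there.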
I just like the idea of being a frothing evangelist :) Maybe I'll bring some Alka-Seltzer to my next speaking engagement so that I can really play that up :)
We are really, really losing patience with the Media Center PC. When we first bought it, the price and feature list sounded really good. Then reality set in, with the various "interesting" setup problems. We finally got it working, and got things hooked up to the TV. That's when the fun really started.
Every day or two, the blasted thing records a show without sound. The bizarre thing is, sound isn't off on the PC when this happens - heck, the stupid media center application still beeps as you walk through it while you're trying to figure out why the heck the sound didn't get picked up with the show.
We had been considering buying another one of these as a replacement for our dying ReplayTV. Now, I'll just wait for Apple to ship something in this space. I rather suspect that it will actually work.
I'm heading up to New Jersey for a customer meeting in the morning - a quick ride up to Newark on Amtrak, and then the same thing back again in the late afternoon. I'll likely be network free until I get back.
Ted Neward echoes the conventional wisdom on CORBA - that lack of interop killed it:
For starters, Steve Vinoski was a bit miffed at the idea posited by Mark Baker that CORBA failed. Sorry, Steve, I have to say it, but I agree with Mark--CORBA never fulfilled on its intended promise of seamless middleware interoperability and integration capabilities, and certainly not over the Internet in any meaningful way. By the time CORBA began to address some of those issues--firewalls being a big one--the world had already pretty much abandoned both the "distributed object brokers" (the other being COM/DCOM) and were starting to explore HTTP as the be-all, end-all transport protocol.
Lack of interop was never really the problem - I've seen the VisualWorks CORBA broker working against a large variety of brokers, including a few that no longer exist. Ted touches on the right answer - firewalls. WS* succeeded where CORBA failed for a very, very simple reason - port 80. To get a CORBA hookup between two entities, you have to go have a discussion with the IT (and, if your outfit is big enough, IT security) guys and get them to open up a port in the firewall. Their default answer is going to be no, so this takes work. Easier to just forget the whole thing.
Now, take WS*, by contrast. Well - the SOAP posts come straight into the already open port 80, so you don't need to have that talk with IT. This makes it far, far easier for various skunkworks projects to get going before anyone notices them - by which point they might be too important to kill off.
The WS* stack is at least as complex as CORBA ever was (and, at this point, arguably more complex). It's no more or less interoperable between service brokers either (technology-wise). It's effectively more interoperable solely because it uses port 80.
Here's an article on tagging problems from September - I meant to comment on it then, but I find myself looking at my flagged posts now that I'm on a train. Oddly enough, that relates to the problem at hand. Here's the scenario that the post lays out as problematic:
Let's say Joe reads a new article about a battery technology breakthrough in the Scientific American. Joe has been thinking about buying a fuel-efficient car lately. When Joe goes to tag the article's web page, he uses the following tags: "battery," "fuel-savings," "car," "future-vehicle." Let's say the article comes with a .gif of a high-level schematic for how the battery works. Joe saves the .gif in his Flickr account, tagging it with "battery," "schematic," and "fuel-savings."
Eighteen months and many tags later, due to Joe's profession as an engineer at Intel, he has an electric moment and realizes the battery tech breakthrough has more relevance to something he's directly working on, in nano-tech. Given the keywords he chose, will he be able to 1) recall how he tagged the original article, to find it later on or, 2) if he can find it at all, will he be able to easily re-tag the article and the schematic .gif to match the new context in which Joe finds these ideas relevant? I wouldn't bet on either outcome.
That is a problem, and it's one most of us run into a lot. I use del.icio.us to tag posts that I want to be able to find later - I use the tag "cst" for posts that I want to share with people about Cincom Smalltalk. Now, the problem I'm going to run into here isn't the same as the one above - I'm not going to forget the tag. However, over time, I'll tag a whole ton of things that way. Once I have tens (never mind hundreds or thousands) of posts tagged that way, how do I find the needle I actually want in that haystack?
The article suggests that refactoring tools (like Smalltalk's refactoring browser) for tag libraries are the answer. I don't think so. There's a wall of inertia that's going to prevent most people from doing that. Heck, the simpler problem that my title references is that most people won't tag their posts at all. Of the ones that do, a smaller subset will be motivated to refactor.
Don't believe me? Well, let's look at two A-Listers as an example - Scoble and Winer. The former never categorizes a post, and the latter rarely bothers with a title or a category. These two are widely read, and deeply involved in "web 2.0" discussions - and even they can't be bothered to take the minimal amount of action necessary to enable it. How likely do you think it is that the average web user will bother? For your answer, walk into anyone's old video cassette library and see how many of the ones recorded at home actually have a label. The answer will be enlightening.
Here's more evidence: I subscribe to 315 feeds as I write this, and I keep a fairly large cache of old items for each one. Let's trawl through those and see which ones have a category set:
RSSFeedManager default getAllItems size.
That tells me how many items I have sitting in memory. The response? 16,466. Now, let's see how many have no category set:
(RSSFeedManager default getAllItems select: [:each | each category isNil or: [each category = 'None']]) size.
The result there? 10,810. Nearly two thirds of the items I'm tracking have no category associated with them. Now, let's walk back to the web 2.0 discussions where the semantic web heads are trying to decide whether RDF, or OPML, or something else is the best way to make sense of all this. I'll make it simple for them - it just doesn't matter. The problem isn't the one posited in the article - i.e., "how did I categorize that item"? It's "holy smokes, I'm awash in a sea of completely uncategorized plain text!". Before someone chimes in that text search will auto-categorize, I'll point out that engines like Google already do a lot of that - and, as Scoble has been noticing, there are limits to that.
Tim Bray wants to know what the point of splogs is:
I suspect most people never see spamblogs, but let me tell you, there are a lot of them out there and they get weirder and weirder and weirder. I’m actually baffled as to why they exist.
Oh, this is a simple one. Set up a search feed in your favorite aggregator, and then watch what comes back - especially from PubSub and Feedster, which are just filled with those right now.
End of the week, and time to have a look at the logs again - BottomFeeder downloads are back up - looks like last week was the blip. This week: 813 per day.
Wow, the Mac numbers went way up - I wonder what that's about? Interesting, and the word I hear about the VW VM getting better on the Mac in 7.4 (December of this year) is welcome news, given those numbers. Next, a look at the HTML page accesses:
(table of HTML page accesses by tool not preserved)
That's a fascinating jump up in IE hits - traffic has been up, and there was a huge spam wave last week - and most of the spam reports itself as IE. So, I don't think I can read anything into those numbers. Let's have a look at the RSS access:
| Tool | Percentage of Accesses |
| --- | --- |
| Net News Wire | 11% |
| RSS 2 Email | 1% |
The RSS feed accesses don't show the same IE spike, which tells me that it's almost certainly the spam surge. The variety of tools being used in this space is still huge though - the consolidation that started has not run its course yet.
Scoble thinks an acquisition can change MS' image (which basically means changing its culture). He has it backwards:
Oh, and all it would take to completely remake Microsoft’s image? One acquisition. I hear we have $60 billion in the bank. I don’t want all of it. Just a small percentage. In fact, it’ll cost far less than it cost us to settle with Real to get in this game.
MS is huge, which means that nearly any acquisition will be of a smaller entity. Smaller entities simply don't have the power to change a larger corporate culture. One of three things happens:
- The staff of the acquired entity leaves, since things are "too different" for them
- The staff of the acquired entity "gives up", becoming more like the acquirer
- The smaller entity goes quiet, hoping to stay "under the radar" of corporate - thus maintaining a semblance of their old culture
Unless the two entities are roughly the same size - in that case, you get what amounts to civil war for an unpredictable period of time, as each side tries to "win". I've seen that one personally, and watched it in customers. It's not pretty, and it doesn't help anyone.
Odds are, MS wouldn't have that, because a merger of near equals is hard to imagine for them. Any smaller entity they get will just be swallowed whole, with the corporate culture enforced over it. Chance of image/culture change for MS in all this? About nil.
Looks like AOL is trying to jump into the blog search game - Steve Rubel has the story:
AOL and Intelliseek on Monday plan to unveil a blog content deal. Sue MacDonald at Intelliseek confirmed that the deal - set to be announced Monday at 7:30 a.m. - will give AOL access to rich blog data that they will deliver to consumers. While MacDonald did not say what specific data AOL will get, one can certainly speculate that it will come from BlogPulse and reside on the new AOL.com site.
I've been fairly happy with the BlogPulse results - they don't seem to contain the volume of splog content that is making Feedster and PubSub less useful every day.
I continue to get requests along the lines of "is Smalltalk safe for the next 20 years?". I've got this post out there, which I think sums things up nicely. However, an article written by John Dvorak illustrated to me again just how hard it is to peer into the near future (much less any further out). I can't find the article in Dvorak's PC Mag archive yet - it'll probably appear there next week. When you look for it, the title is "Computers and Modern Anarchy". Dvorak got into a whole thread about control and anarchy that doesn't interest me a lot - but he did make a point along the way that I wanted to highlight:
If you were running a nexus point or a BBS, you had to have huge banks of modems and multiple phone lines to receive a user on your "site". Most users today can probably no longer configure or use a modem. Dial-up is automatic, and it dials the internet, not each individual target.
Imagine how you surf the web today and realize that before it existed, you had to get the phone number of the site and call it directly each time. There was no hyperlinking; if you wanted to jump from site A to site B, you would have to hang up on one site and dial another. This was standard practice a mere 13 years or so ago.
Think about that - I remember how excited I was about getting USENET access via one of the BBS systems back then - and I remember the large amounts of money a roommate spent on chat too (something that is completely free with AIM, MS Messenger, etc. today). In the early 90's, it was a completely different (online) world.
Now take that forward - what are things going to look like in 15 years? From 1990, I sure wouldn't have seen what's here now. I seriously doubt that anyone sees 2020 clearly either.
Civ III is much more difficult than the old "Civ" (the DOS game) was. I used to play that at the King or Emperor level; I finally managed to win a game at "Warlord" level this afternoon, and I only managed that by staying on Monarchy, relentlessly building military units, and whacking the other powers until they died. How the heck would you win a game via the space race route? I have no clue...
We missed "Lost" on Wednesday - our misbehaving ReplayTV was on the fritz. So, we caught up this evening. In the middle of the episode, Hurley starts talking to Rose about his problem (the food found in the hatch). Jack walks by, and says hi to her.
At this point, the wife and I are saying - whoaaaa - she's dead! She died last season, I thought. I went hunting around the episode guides, and found this - "White Rabbit" last year, episode 5:
Jack is nearly delirious from lack of sleep and struggles to overcome the haunting events that brought him to Australia and, subsequently, to the island. Meanwhile, Boone gets caught in a treacherous riptide trying to save a woman who went out swimming. A pregnant Claire's health takes a bad turn from lack of fluids, and a thief may have stolen the last bottles of water. Veronica Hamel guest-stars as Jack's mother, Margo. Also, Jack (Matthew Fox) flashes back at 12 years old, to find himself on the playground in an altercation with a bully, who ultimately beats him up, and later learns a life lesson from his father.
I'm certain that the woman who Boone couldn't save was Rose - and now here she is, back in the flesh, and no one remembers that she's dead? What's up with that? Did I miss something, or is this a clue that will become plain later?
Chris Pirillo says what the rest of us have been thinking for awhile now:
In the past few days, I've been inundated with an enormous amount of subscribed search spam for designated keywords. To the tune of hundreds, if not THOUSANDS, of bunk entries. Who knew "lockergnome" and "pirillo" would be THAT popular?! Still, I can't help but think that others are having the same headaches - and 99% of the crap coming in is directly from a single domain: blogspot.com. Google, it may have been a smart acquisition in the beginning, but y'all need to clean house in a big way. You're the tallest nail, and you're really getting pounded - and now others, who aren't even using your service, are getting pounded. Blogspot has become nothing but a crapfarm, and your brand is going to go down with it. If your motto truly is to do no evil, then you need to start putting some resources behind an effort to curb this train wreck.
I don't know what's (specifically) making it so insanely easy for these spammers to get signed into your system, but you need to change that - ASAP. Forget about developing another Web-based aggregator for now (sorry, Shellen - Blogspot needs more help at this point). I'd love to ban / filter anything and everything that comes from blogspot.com, but the problem is that I have quite a few friends on that service who are sitting in the 1% "legitimate" minority.
As to why spammers go for that system, that's the simple part. It's free, and the signup process can be easily scripted. Which means that you can bot the whole thing, and create a universe of splogs within a few minutes. It's been a disaster waiting to happen, and now it's happening in a huge way.
Update: The sheer volume of these things is amazing. Every one of the blog search systems I use heavily for feeds - Technorati, BlogPulse, IceRocket, PubSub, Google Blog Search - they all get gamed by these splogs. Blog search just got a lot harder.
It occurs to me that the next target for link spammers will be del.icio.us. They have a simple RESTful interface, and it's easy enough to set up an account. All I have to do is wait, I suppose.
Tim Bray noticed that spam blogs have just exploded - here's an example - look at the latest results for this Feedster search feed for Smalltalk. There have been 76 new items (some dupes) since midnight. Of those, 4 are actually what I'm looking for (references to Smalltalk, the programming language). Two are false positives, references to "smalltalk", as in speaking. The other 70? All splog results.
It's clear that these are bots at work - the Blogspot templates are the same for all the results. The funny thing is, the products and services being flogged aren't your typical pharmaceuticals and porn - the "offers" are all over the map. This seems to be a well organized and coordinated attack, using a boatload of fake blogs as the delivery vehicle.
Looks to me like it's time for Google to step up.
Ahh, fun, Nicholas Carr got tired of saying that IT is dead. So now he’s saying that Web 2.0 is “ammoral.”
Oh, really? Maybe you should check out the Web 2.0 stuff that Brian Bailey is doing. He is putting HDTV videos of his church’s services up. And much more. And he has a blog.
Scoble needs to understand the important difference between "amoral" and "immoral". Carr is asserting that the web is the former, not the latter. On balance, he's not wrong.
Evan Williams is apparently so far removed from things that he doesn't see how bad splogs have gotten. The number of "good" blogs on BlogSpot versus the number of splogs there doesn't really matter. At all. What matters is that a coordinated attack using bots was able to render nearly every blog search system irrelevant over the weekend (Blogdigger seems to be an exception).
The splogs on BlogSpot are effectively all that's there now, and Google should have seen it coming - it's not like splogs just popped up this weekend, or like bot attacks are new.
Looks like the Feedster guys lost some sleep this weekend - they've been having a look at the splog problem, and think they have an answer. The volume washing through Feedster results seems to be down, but it's hard for me to tell whether that's because:
- I've already received all the crap there is to get on the keywords I search for
- They've addressed the problem
- The attack has been stopped/slowed/paused
I'm still getting bogus results from PubSub, IceRocket, and BlogPulse though. In any event, I think Google is where the action needs to take place. They're the ones who have the targeted system.
One of the cool things about BottomFeeder is that I don't have to resort to eyeballing in order to figure things out - I have the full power of Smalltalk in front of me. So, I thought I'd have an objective look at the spam damage from splogs over the weekend. Here's what I did. First, I selected the folder that holds all my search feeds. Then I executed this:
| folder mgr feeds dict |
folder := RSS.RSSFeedViewer allInstances first feedTree selection.
mgr := RSS.RSSFeedManager default.
feeds := mgr getAllFeedsFrom: folder.
dict := Dictionary new.
feeds do: [:eachFeed | | matches |
    matches := eachFeed items select: [:eachItem |
        eachItem link
            ifNil: [false]
            ifNotNil: ['*blogspot*' match: eachItem link]].
    dict at: eachFeed title put: (eachFeed items size -> matches size)].
^dict
That resulted in an inspector that looks like this:
That's a useful view for scrolling through - let's cut things down and create a table that can be easily posted. I'll limit the table to feeds that have at least 10 bad results in them. First, I added a test to the previous script, such that only feeds passing my test get into that dictionary. I have 44 search feeds; 22 of them passed the bad results test. On to the html script:
stream := WriteStream on: (String new: 1000).
stream nextPutAll: '<table border="1" cellpadding="3">'; cr.
stream nextPutAll: '<tr>'; cr.
stream nextPutAll: '<td><strong>Feed Title</strong></td>'.
stream nextPutAll: '<td><strong>Total Items</strong></td>'.
stream nextPutAll: '<td><strong>BlogSpot Items</strong></td>'.
stream nextPutAll: '<td><strong>Splog Percentage</strong></td>'.
stream nextPutAll: '</tr>'; cr.
dict keysAndValuesDo: [:key :value | | total spam percent |
    stream nextPutAll: '<tr><td>'.
    stream nextPutAll: key, '</td>'.
    total := value key.
    spam := value value.
    stream nextPutAll: '<td>', total printString, '</td>'.
    stream nextPutAll: '<td>', spam printString, '</td>'.
    percent := ((spam / total) asFloat * 100) rounded.
    stream nextPutAll: '<td>', percent printString, '</td>'.
    stream nextPutAll: '</tr>'; cr].
stream nextPutAll: '</table>'; cr.
^stream contents
Running that produces the following output:
| Feed Title | Total Items | BlogSpot Items | Splog Percentage |
| --- | --- | --- | --- |
| IceRocket: "VA Smalltalk" | 80 | 10 | 13 |
| IceRocket: "Squeak Smalltalk" | 80 | 23 | 29 |
| BlogPulse: "Squeak Smalltalk" | 29 | 15 | 52 |
| IceRocket: "Dolphin Smalltalk" | 80 | 16 | 20 |
| Google Blog Search: BottomFeeder | 80 | 35 | 44 |
| BlogPulse: Dolphin Smalltalk | 57 | 19 | 33 |
| BlogPulse: "Cincom Smalltalk" | 57 | 17 | 30 |
| Technorati: "James Robertson" | 80 | 46 | 58 |
| Feedster on: "James Robertson" | 80 | 27 | 34 |
Gives you an idea of the kind of spam attack that was running over the weekend, doesn't it?
Real Tech News has disturbing info on what's in your pillow:
Fungal contamination of bedding was first studied in 1936, but there have been no reports in the last seventy years. For this new study, which was published online today in the scientific journal Allergy, the team studied samples from ten pillows with between 1.5 and 20 years of regular use. Each pillow was found to contain a substantial fungal load, with four to 16 different species being identified per sample and even higher numbers found in synthetic pillows.
Sounds lovely :)
Tim Bray wants to move to an "internet stamp" system in order to eliminate spam. It's a nice idea, but it will never work. Why? Well, what do you do if a bunch of domains decide to offer internet stamps for free? Just knock them off the net? Yeah, that'll go over well. There's another problem too - it requires a long grace period followed by a cutoff date - after which older clients will just stop working. Yes, I can sure see that happening too. How do you plan to manage an enforced upgrade across every platform on the net?
There's an even simpler problem. Let's say the cost is a penny a transmission, as Tim posits. This assumes a robust micropayment architecture (which doesn't exist). It also assumes that at that cost, spamming is prohibitive.
Hmm - that's $10,000 to send a million messages. Based on the kinds of revenues that spammers are supposedly rolling in, I suspect that this will be less of a disincentive than Tim thinks. People pay astounding amounts of money to put 30 second spots on the Super Bowl - spending $100,000 to put 10 million spam messages out just doesn't sound that wild to me. Not to mention the enormous pressure on governments to make unsolicited mail legal once there's tax revenue to be gleaned from it. No doubt you've seen the huge efforts governments take to stop unsolicited snail mail?
Thanks, but no thanks. Take that solution and just bury it.
Dave’s numbers suggest that there’s less there than meets the eye; that the numbers and reach of splogs are limited. It’s just that their automated content generation managed to cause them to fill up the ego feeds of a bunch of loudmouthed widely-read bloggers, who all screamed simultaneously.
The example I posted noted that searches for Smalltalk (as in, the programming language) got flooded. I would have to assume that Java searches were flooded the same way (or worse, given the larger number of potential readers). No, we aren't all interested only in ego searches. Some of us just want to see what's being said on topics of interest.
Jakob Nielsen has a list of dos and don'ts for blog authors. Some of them matter more than others, and some depend a lot on the context of your blogging. For instance - the "own your own domain name" one.
That depends on what you are trying to accomplish. Me? I'm evangelizing Cincom Smalltalk (and ranting about things in the industry that cross my view). Given the evangelism aspect, it makes sense for me to be blogging on a Cincom server - my goal is to build the Smalltalk community. How important the domain is depends a lot on your goals.
Another one of his suggestions popped at me as problematic as well:
Many weblog authors seem to think it's cool to write link anchors like: "some people think" or "there's more here and here." Remember one of the basics of the Web: Life is too short to click on an unknown. Tell people where they're going and what they'll find at the other end of the link.
You have to "go with the flow" of blogging on this one. An awful lot (most) of what is written on blogs is extremely temporal - it's very much based in the now. Which means that for the person reading about the latest kerfuffle (technical, political, whatever) - they get the context. It's very unlikely that anyone will care that deeply in a month, much less a few years.
Much of the rest of what he wrote is good stuff though - have a look, and see what you think. In this area, it's definitely a YMMV thing.
InfoWorld reports that the rise of the CIO is over - the job is being done in by the reporting requirements of Sarbanes-Oxley:
Because IT has a close relationship with all of a company's data stores, it provides everything the CFO wants to look at, including statutory reporting and analytical capabilities. At Merial, all senior IS directors now report to the CFO.
"Have information on a timely basis, with an audit trail, is what [Sarbanes-Oxley] required us to do. Everything must be traceable to the source," Lerner tells me. While IT is responsible for satisfying the needs for compliance, the CFO is the gatekeeper. So, in January, no more CIO.
I'm not sure what that means in the bigger picture - but it might be a good thing. I've seen an awful lot of projects that went on and on (long after they were obvious failures) solely because IT management couldn't stand up and deal with admitting it. With Sarb-Ox requirements and the CFO on the line, maybe there will be less. Or, human nature being what it is, maybe not :)
Now that it's been a couple of days, I thought I'd have a look at my search results again, and see how recent (more valid) results have replaced the earlier splog ridden stuff. I posted some code and a table showing the damage a few days ago - let's see what's happened since:
| Feed Title | Total Items | BlogSpot Items | Splog Percentage |
| --- | --- | --- | --- |
| IceRocket: "VA Smalltalk" | 80 | 10 | 13 |
| IceRocket: "Squeak Smalltalk" | 80 | 21 | 26 |
| BlogPulse: "Squeak Smalltalk" | 29 | 15 | 52 |
| IceRocket: "Dolphin Smalltalk" | 80 | 15 | 19 |
| Google Blog Search: BottomFeeder | 80 | 34 | 43 |
| BlogPulse: Dolphin Smalltalk | 57 | 19 | 33 |
| BlogPulse: "Cincom Smalltalk" | 57 | 17 | 30 |
| Technorati: "James Robertson" | 80 | 46 | 58 |
| Feedster on: "James Robertson" | 80 | 29 | 36 |
If you compare those numbers to the earlier ones, you'll see that they are trending down - the various engines have started responding to the problem. The results coming out of Feedster in particular are better - they seem to have done a pretty good job of weeding stuff out. Some of this, of course, is Google weeding out the splogs too. Also, those numbers are high due to the way BottomFeeder caches - all the old bad results are still sitting there, marked as read (i.e., ignored by me, but still in my data set). Let's modify the original search so that I'm only looking at results that have arrived on Monday or today. I've also relaxed the number needed to show "badness" from 10 down to 2, given the smaller data set:
| folder mgr feeds dict cutoff |
folder := RSS.RSSFeedViewer allInstances first feedTree selection.
mgr := RSS.RSSFeedManager default.
feeds := mgr getAllFeedsFrom: folder.
dict := Dictionary new.
cutoff := Timestamp readFrom: '10/17/05' readStream.
feeds do: [:eachFeed | | matches all |
    all := eachFeed items select: [:eachItem | eachItem pubDateString >= cutoff].
    matches := all select: [:eachItem |
        eachItem link
            ifNil: [false]
            ifNotNil: ['*blogspot*' match: eachItem link]].
    matches size >= 2
        ifTrue: [dict at: eachFeed displayTitle trimBlanks put: (all size -> matches size)]].
This simply adds a cutoff date of Monday at midnight. So what's been hammered since then?
| Feed Title | Total Items | BlogSpot Items | Splog Percentage |
| --- | --- | --- | --- |
That's a much smaller amount of damage - although, it looks like PubSub's matching algorithm is particularly vulnerable to this sort of attack.
One of the things people ask about a lot is the VW process model. The simple answer is that the VM is single threaded, so all VW processes are managed at the Smalltalk level - i.e., they aren't OS level threads. You can create OS level threads, but only in the context of threading an external API call. A good example of this can be seen in the various database connects that we ship - you'll note that we ship threaded and non-threaded (i.e., blocking) versions.
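To see what a Smalltalk-level process looks like in practice, here's a minimal sketch you could run in a plain VisualWorks workspace (nothing BottomFeeder-specific; the block and its contents are just illustrative). A block becomes a Process object that you can hold on to, reprioritize, resume, and kill:

```smalltalk
"Fork a background process and keep a handle to it."
| proc |
proc := [(Delay forSeconds: 5) wait.
         Transcript showCr: 'background work done'] newProcess.
proc priority: Processor userBackgroundPriority.
proc resume.
"If we change our minds before it finishes:"
proc terminate.
```

Because the scheduling all happens at the image level, this behaves identically on every platform the VM runs on.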
So given that, what are Smalltalk level processes good for? Well, bear in mind that you (as the developer) have full control over their semantics. That means that an application deployed on Windows will run exactly like one deployed on Linux (or Mac, or Unix) - a VW process is a Smalltalk artifact, so it's not going to be unpredictable. Let me walk through a simple example, using the BottomFeeder update loop. I subscribe to 315 feeds at the moment, so when the update loop fires, I get 315 VW level processes doing HTTP queries. If those were all OS level threads, the system would fall to its knees in seconds - I'd have to use a thread pool. Incidentally, I implemented one as an option for Bf - but I digress. Here's the main update loop (somewhat simplified for space reasons):
feedsToUpdate do: [:aFeed | | updater delay |
    updater := [self updateFeed: aFeed shouldForce: shouldForce totalFeeds: numberOfFeeds].
    self settings runThreadedUpdates
        ifTrue: [self runThreadedUpdateFor: aFeed updateBlock: updater]
        ifFalse: [updater value].
    "other code here..."].
If I have threading turned off (useful on slow connections, where I don't want the queries competing for bandwidth), I just iterate over the list. The interesting piece is in the threaded updates:
runThreadedUpdateFor: aFeed updateBlock: updater
    self settings shouldThrottleThreads
        ifTrue: [self runWithThrottling: updater for: aFeed]
        ifFalse: [self runWithoutThrottling: updater for: aFeed]
That checks another setting, which controls whether the app should use a thread pool or not. The "throttling" code implements a pool, the non-throttled code just keeps forking off threads. That's how I run Bf, and it works fine (with a fast connection). Drilling to the throttled code:
runWithThrottling: updater for: aFeed
    self updateCounter
        addProcess: updater
        atPriority: self settings getUpdateLoopPriority

addProcess: aBlock atPriority: aPriorityOrNil
    "add the process to the wait pool"
    self sem critical: [self waitingCollection add: aBlock -> aPriorityOrNil]
That code simply adds the new process to a queue, which runs a limited number of processes at once. The non-throttled code?
runWithoutThrottling: updater for: aFeed
    | proc |
    proc := updater newProcess.
    self updateCounter addThread: proc url: aFeed url.
    proc priority: self settings getUpdateLoopPriority.
    proc resume
Now that demonstrates something useful about the level of control you have over a VW process. I'm setting the priority of the process (by default, it's in a range from 1-100, with 8 "named" levels). Then I'm resuming the process. A VW process is defined simply as a block (the snippet all the way at the top) which later gets forked off. In this example, I'm setting the priority and then resuming (forking) the process. I'm also holding a reference to the process, so that it can be killed (for instance, if you take BottomFeeder offline, the system goes ahead and whacks all the in progress threads in that loop, along with the update loop itself).
The priority levels I mentioned are used by the default process scheduler - which is written in Smalltalk. What does that mean? It means that you have full control over the way processes run in Smalltalk. The default model runs the highest priority process that is ready to run, but - at a given priority level - no process will preempt another of the same priority. In other words, it's not time-slicing. Say you wanted it to be? Well, that's simple - to timeslice a given set of processes, you simply have a higher level process manage them (which is what my throttle does to some extent). If you want to timeslice the entire system? Have a look at class ProcessorScheduler and change the way it manages things.
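To make the "higher level process manages them" idea concrete, here's a rough sketch (the priorities, delays, and worker blocks are all arbitrary choices for illustration) of a higher-priority process hand-timeslicing two workers that sit at the same priority:

```smalltalk
"Time-slicing sketch: a priority-50 manager alternately resumes and
 suspends two priority-30 workers, giving each a 50ms slice."
| a b |
a := [[true] whileTrue: [Transcript show: 'a']] newProcess.
b := [[true] whileTrue: [Transcript show: 'b']] newProcess.
a priority: 30.
b priority: 30.
[20 timesRepeat: [
     a resume. (Delay forMilliseconds: 50) wait. a suspend.
     b resume. (Delay forMilliseconds: 50) wait. b suspend].
 a terminate. b terminate] forkAt: 50.
```

While the manager sits in its Delay, the currently-resumed worker is the highest-priority runnable process, so it gets the CPU until the manager wakes up and swaps them.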
It's a nice system, and it gives you a very high level of control over how your system runs.
Interesting piece of news here - Ward Cunningham (father of the Wiki) has left MS to be an Eclipse evangelist:
Microsoft Corp. has lost one of its high-profile hires to an open-source consortium. Mike Milinkovich, executive director of the Eclipse Foundation, announced on Monday that Ward Cunningham is leaving Microsoft to join the staff of the open-source tool consortium. Cunningham's new title is Director of Committer Community Development. Cunningham, the father of the Wiki concept, joined Microsoft about two years ago. At Microsoft, he was not involved directly in social-networking-software development.
That last bit is interesting - what did Microsoft want Ward to do, if they weren't going to have him work in the social software world?
Update: Dave Buck noticed this yesterday. Have a look at the quote Dave pulls - that's a pretty low level of excitement for an evangelist, IMHO.
Ted Neward explains why he doesn't like wide open dynamic support in a language:
First, the technical: dynamic languages may choose to expose more meta-control over the language, but there's nothing inherent in the dynamic language that requires it, nor is there anything in a static language that prevents it. Languages/tools like Shigeru Chiba's OpenC++ or Javassist, or Michiaki Tatsubori's OpenJava clearly demonstrate that we can have a great deal of flexibility in how the language looks without losing the benefits of statically-typed environments. So to attribute this meta-linguistic capability exclusively to dynamic languages is a fallacy.
Secondly is the cultural issue: is the idea of granting meta-linguistic power (known as meta-object protocol, or MOP) to a language a good thing? Stu asserts that it is: "My concern is who controls the abstractions. Developer-oriented languages (like Scheme) give a lot of control (and responsibility) to developers. Vendor-oriented languages (like Java) leave that control more firmly in the hands of the vendor." So in whose hands are these abilities to change the language best placed?
*deep breath* I don't trust developers. There, I've said it.
Well, I'll take the contrary view (what a shocker!) - I don't trust the vendors. And I say that as the Product Manager for Cincom Smalltalk. When a vendor ships you a set of tools, you get the viewpoint of their developers as to how things ought to work. If that set of tools isn't malleable, you're just stuck. Hit a wall because the library isn't suitable for your needs? Too bad, you now have to argue with the vendor. Bearing in mind that you might not win.
Think that it'll be easy in the "obvious" cases? Heck, I'm the flipping Product Manager here, and I allegedly set direction - do you think the engineers buy everything I raise as a needed core library change (I've done a small number of them for BottomFeeder)? Heck no - how far do you think you'll get with Sun or MS?
The alternative is what you see in lots of Java projects - one more wrapper around the (insert your favorite example here, like String) class, because Sun decided to seal that one. It's just more pickaxe and shovel work to plow through, because it's simpler to not trust the developers. As opposed to those *cough* godlike *cough* library developers.
I'd be a whole lot more interested in Gillmor's blathering about attention if the site he flogs actually described what it does. It lets loose with a lot of buzzwords about my rights, and then asks me to join something. Umm, yeah - I've gotten the same pitch in gosh knows how many spams and junk mails too, Steve.
Here's a tip - you want this to go anywhere? Have the page that allegedly describes how great this stuff is actually tell me what the heck it is. In the meantime, I'll just use links. They make sense, and I don't need a set of buzzword bingo cards from a website to figure them out.
Bonus clue - I don't need to hand some website with less than no information on it my email address, name, and url to link.
One of the things that an aggregator allows you to do is keep up with a lot more information flow. As I said earlier today, I subscribe to 315 different feeds (44 of those are search feeds). I figured it might be interesting to see how much new content there is in a day from the non-media, non-search (i.e., mostly bloggers) feeds that I track. So, I opened up a workspace in BottomFeeder and started hacking out a script:
rejects := #('*feedster*' '*blogpulse*' '*google*' '*yahoo*' '*amazon*' '*icerocket*' '*rocketnews*' '*pubsub*' '*blogniscient*' '*digg*' '*sans*' '*infoworld*' '*computerworld*' '*linux*' '*slashdot*' '*wired*' '*rss.com*' '*internetnews*' '*comics*' '*file://*' '*technorati*' '*techrepublic*' '*meetup*' '*memeorand*' '*espn*' '*cnn*' '*extreme*' '*wbal*').
today := Date today asTimestamp.
basicFeeds := RSSFeedManager default getAllMyFeeds reject: [:each |
    (rejects detect: [:each1 | each1 match: each url] ifNone: [nil]) notNil].
counts := OrderedCollection new.
basicFeeds do: [:eachFeed | | todays |
    todays := eachFeed items select: [:each | each pubDateString >= today].
    todays notEmpty ifTrue: [counts add: eachFeed displayTitle -> todays size]].
sorted := counts asSortedCollection: [:a :b | a value >= b value].
It's a pretty simple script - I grab all the feeds, filter out the ones that are either media or search related, and then see which ones have content today. Then I slam the results into a collection, sort by frequency, and do an inspect-it on the results. Unlike those *cough* advanced *cough* languages in the mainstream, Smalltalk lets me do this at runtime, in the running application. Kind of cool :) Anyway, I wrote a quick script to slap that stuff in an HTML table:
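The table script is just a two-column variant of the earlier HTML one - something along these lines (assuming the `sorted` collection from the script above):

```smalltalk
"Two-column variant of the earlier HTML table script.
 'sorted' holds the title -> count associations built above."
| stream |
stream := WriteStream on: (String new: 1000).
stream nextPutAll: '<table border="1" cellpadding="3">'; cr.
sorted do: [:assoc |
    stream nextPutAll: '<tr><td>', assoc key, '</td>'.
    stream nextPutAll: '<td>', assoc value printString, '</td></tr>'; cr].
stream nextPutAll: '</table>'; cr.
^stream contents
```

Running that produced the table below.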
| Feed | New Items Today |
| --- | --- |
| PCWorld.com - Latest News Stories | 26 |
| Sam Ruby's Comments | 22 |
| The Doc Searls Weblog | 22 |
| Taegan Goddard's Political Wire | 21 |
| Lambda the Ultimate - Programming Languages Weblog | 17 |
| Microsoft Watch from Mary Jo Foley | 15 |
| RSS News by CodingTheWeb.com | 15 |
| Radio Free Blogistan | 15 |
| Philip Greenspun Weblog | 14 |
| Web Things, by Mark Baker | 14 |
| National Review Online | 11 |
| Exploration Through Example | 10 |
| TalkLeft: The Politics of Crime | 8 |
| Little Green Footballs | 8 |
| N=1: Population of One | 8 |
| Sci Fi Wire | 8 |
| Sjoerd Visscher's weblog | 8 |
| Glenn Vanderburg: Blog | 8 |
| Science @ NASA | 5 |
| The Ornery American | 3 |
| Don Park's Daily Habit | 3 |
| Cafe au Lait Java News and Resources | 3 |
| Travis Griggs - Blog | 2 |
| Dare Obasanjo aka Carnage4Life | 2 |
| Joho the Blog | 2 |
| Scobleizer - Microsoft Geek Blogger | 2 |
| Derek's Rantings and Musings | 2 |
| Alice Hill's Real Tech News - Independent Tech | 2 |
| Daypop Search - BottomFeeder | 2 |
| Software (Management) Process Improvement | 1 |
| Joi Ito's Web | 1 |
| Mark Watson's opinions on Java, AI, semantic web, and politics | 1 |
| Rob Fahrni, at the core. | 1 |
| The Blog Ride | 1 |
| Windley's Enterprise Computing Weblog | 1 |
| Better Living Through Software | 1 |
| Steve Shu's Blog | 1 |
| Industry Analyst Reporter - Applications and Software News | 1 |
| WCBS 880: Yankees on WCBS | 1 |
| cut on the bias | 1 |
| Austin Bay Blog | 1 |
| The Belmont Club | 1 |
| The Doctor is in | 1 |
| ARs closed Activity | 1 |
| Sam Gentile's Blog | 1 |
Now, I didn't get all the non-blogs out, but that's good enough for now - it's down to 89 feeds that way. The MARS one warrants some explanation - it's the feed off our internal bug tracking system, and we are approaching full code freeze for the next release - so activity is high. Other than that, the real outliers (i.e., lots of posts in a day) are group political blogs. Some of the high numbers are also some kind of server reset of the feed, not actual new content. That's still a problem that can fool an aggregator - especially when the feed in question doesn't have ID's for the items.
Anyway - looking at "real" results, it looks like a dozen new posts is a lot - most people are well under that. In fact, if I filter the list to those who posted 10 or fewer times so far today, I get down to 63 feeds. It turns out that the 7 (8 after this one goes up) posts today put me up near the top of that list. In fact, 23 of the feeds only have one new item so far today.
So - if you skim the high volume news/search feeds, the posts on single author blogs aren't that hard to keep up with. At least not if you use an aggregator :)
This ComputerWorld story is mostly about Sun's hopes in the software business - which can mostly be summed up by: "We give our software away; why the heck can't we make money that way?"
Well, to add to that exciting suite of revenue makers, Sun is eyeing PostgreSQL:
"We're not going to OEM Microsoft but we are looking at PostgreSQL right now," he said, adding that over time the database will become integrated into the operating system.
That's Loiacono, VP of their software business. So does PostgreSQL stay open source? Does it stay cross platform? Is Sun just going to bundle it, or buy it? This article seems unclear.
I'm sure most of you have seen this Gene Spafford quote, but I just ran across it, and it cracked me up:
Secure web servers are the equivalent of heavy armored cars. The problem is, they are being used to transfer rolls of coins and checks written in crayon by people on park benches to merchants doing business in cardboard boxes from beneath highway bridges. Further, the roads are subject to random detours, anyone with a screwdriver can control the traffic lights, and there are no police.
There's other good stuff here.