My glorious power company has been intent on taking out my hardware - recent power spikes took out my VCR and my Linux box's power supply (for some reason, I had turned off my UPS; no idea how that happened). I took the Linux box downtime as an opportunity - we had been using it as the house router since about 1999. However, it was only providing a 10 mb house LAN - not a problem until we got a second ReplayTV. You can stream recorded shows from one Replay to another over the LAN - and it turns out that 10 mb was not enough bandwidth for that. So when the Linux box went down, I bought a Linksys router. That gave us 100 mb on the LAN, and the Replays started sending video across it quite nicely. I spent the extra 50 bucks and got a dual mode router - we now have a 100 mb wired LAN and an 11 mb wireless LAN - which means that my notebook is no longer a slave to the ethernet cabling. All in all, it's quite nice - I should have done this before.
I've never been entirely happy with the way settings and such are saved in BottomFeeder - so I am in the middle of redoing them. I had all the settings and feeds munged together into one big file - which was simple for me, but not so convenient for end users. I am splitting the settings off into editable settings and binary feed files. This does make save/load a tad more complex; the RB (Refactoring Browser) is making the changes simpler, but there's still a lot of testing and work to do....
So the second ReplayTV is set up, and has enough bandwidth. So of course I have another problem. The Replay is supposed to control my cable box (so that it can change the channel to record my chosen shows). The problem is, it seems to be (sometimes) dropping digits when changing channels. It's using a cable that snakes from the Replay to the cable box, so that the IR commands can be translated. The Replay in the living room works fine with the same settings, and we have the same cable boxes in both places. I'll be darned if I can figure out what's wrong - we suspect IR reflection is confusing the issue, but we aren't really sure. At this point, it's just frustrating.
I should have gotten wireless a long time ago. It's very, very nice to be able to just pick my notebook up and walk to any room, and not have to worry about setting up a hub there, and having to sit near the network access point. And I say this as someone who wired his house!
I'm deep into testing the new BottomFeeder startup and save code - there are still some kinks with starting up with the new file formats, while conversion seems to work just fine. More testing is definitely in order! If you load the current code out of the repository, be forewarned that it is not fully baked yet!
I stumbled onto an interesting commentary over at Gordon Weakliem's log on stored procedures vs. straight SQL queries (http://radio.weblogs.com/0106046/2002/12/30.html):
Java programmers seldom use stored procedures. They are not portable; it breaks the 'write once run anywhere' motto. It brought to mind a recent quote in the whole .NET vs J2EE pet store brouhaha, something like "Sun recommends direct queries, but that's stupid". And this was coming from someone on the J2EE side of the debate. So this comment is a little perplexing. As I mentioned, my cross platform DB experience is pretty minimal (even when I was doing Java a couple years ago, it was against SQL Server), but it seems like by doing direct queries, you're throwing out a lot of potential optimizations at the database level that could otherwise be hidden behind stored procs. When the database is the biggest blocking point in your system, it seems to me that tossing optimizations aside for (effectively) ideological reasons is just silly. We are about to upgrade the VWNC Registration system - right now, it uses a fairly dicey back end storage scheme (read - no database!). We are going to push it all into Postgres, and the API I'm using is completely stored procedure driven. Not only is it faster - it is actually a simpler API for me to deal with as a developer. As to worries about portability, I'll whip out an XP truism: YAGNI. If we end up having to migrate to another db, we will cross that bridge when we come to it. In the meantime, we'll enjoy a more optimal system...
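To make the "simpler API" point concrete, here's a minimal sketch in Python - the table, the data, and the hypothetical Postgres function name are all invented for illustration, and since SQLite has no stored procedures, the proc call is shown only as the shape the client code takes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendees (name TEXT, conf TEXT)")
conn.execute("INSERT INTO attendees VALUES ('Pat', 'vwnc')")

# Direct query: the SQL lives in the application code, and the
# database sees a raw statement every time.
rows = conn.execute(
    "SELECT name FROM attendees WHERE conf = ?", ("vwnc",)).fetchall()
print(rows)

# Stored-procedure style (hypothetical Postgres function - shown only
# as the call the client would make; the query plan and any later
# optimizations live behind the named operation in the database):
# cursor.execute("SELECT * FROM attendees_for_conference(%s)", ("vwnc",))
```

The client-side contrast is the point: the direct query exposes the schema to every caller, while the proc call exposes only a named operation.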
So here's Sun finally branching out to Linux - too late, IMHO - they will be competing against some truly low cost vendors. I see this as being similar to the big airlines trying to compete with Southwest with low cost entries - but without addressing the basic cost structure problem. Anyway. I saw this quote, which just cracks me up:
Still, McNealy isn't wavering much in his belief in Unix, which still runs on most servers. "Linux isn't a market. It's a crankshaft, a widget," scoffs McNealy, using an analogy befitting the son of a former American Motors vice chairman...That's right, diss the market you are trying to enter....
I've done a test build that works on Windows and on Linux. I need to test out the ENV VAR and command line arguments to make sure that they override properly, but the base stuff works - a production image reads the old save file, converts to the new settings format, and saves all the files. The settings in the upcoming release will all be available in a text file that is user editable - and the locations of all the other settings files (window information and feeds) are specified in that file. This makes backing up and saving your files much easier.
Yes campers, I have spent my entire New Year's day (thus far) in the bowels of BottomFeeder startup code. I didn't have the startup sequence right for environment variables and command line arguments; they were being ignored. So I spent some time testing that out - brief aside here - having an image made that vastly easier for me. I was able to simulate the runtime easily by having the dev image start up the application on startup, using all the current environment variables and command line arguments. This is relevant to a discussion of image based vs. non image based development that is going on in comp.lang.smalltalk right now. IMNSHO, having an image makes development loads simpler. For a runtime, it's also very nice to have an image - I have diagnosed countless issues in server apps by saving the headless server image in its current state and running that headful in order to see what happened. With a sealed runtime, that's nowhere near as easy. So anyway, back to BottomFeeder - I think I have the startup sequence right now, but I'm still testing to be sure - there will be no DEV builds until I get that sorted out. You can look here to see what I'm working on.
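The override precedence I'm after can be sketched in a few lines of Python (illustrative only - BottomFeeder itself is Smalltalk, and the BTF_ environment variable prefix here is invented for the sketch):

```python
import os

def resolve_setting(name, cli_args, file_settings, default):
    """Resolve one setting: a command line argument beats an environment
    variable, which beats the settings file, which beats the built-in
    default. Names here (BTF_ prefix, feedFile) are made up."""
    if name in cli_args:
        return cli_args[name]
    env_value = os.environ.get("BTF_" + name.upper())
    if env_value is not None:
        return env_value
    return file_settings.get(name, default)

# The command line wins over everything else:
print(resolve_setting("feedFile", {"feedFile": "cli.btf"},
                      {"feedFile": "file.btf"}, "default.btf"))
```

The bug I was chasing amounts to skipping the first two checks, so everything fell through to the settings file.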
BottomFeeder users have noticed that the 2.6 release has taken longer to come out. There are a few reasons for that. First, Dave is working again, so he's doing paid work during business hours. I've had work to do as well for Cincom, which slowed down my contributions. Finally, I decided that I was completely unhappy with the state of the startup code and the save file code - so I refactored it. It had all been in two classes, and it really needed some rationalization. It's now split into its own package, in a number of much smaller classes. So I'm happier with the setup - but the wholesale change also calls for a lot of testing. I'm in the midst of that now, and I think I've found the majority of the issues. Stay tuned.
This is interesting. Amongst the other predictions, the author of this piece writes:
EJB will be almost a dead horse by year's end. As if my predictions weren't controversial enough, here's one to cement the whole deal. Yes, I firmly believe that EJB will be almost entirely "on the way out" within the Java enterprise space. Sorry, guys, it was a good run, you managed to bilk the industry of billions of dollars along the way, but the EJB facade is crumbling and developers are waking up to the realization that EJB just doesn't meet the goals it's supposed to: making enterprise development simple. Instead, the 800-page monster of a specification, the one that doesn't even come close to being tight enough to actually program to nor loose enough to permit serious performance improvements within an implementation, will be quietly and serenely allowed to drift off into irrelevance in the face of the burgeoning Web Services hype tidal wave. Most J2EE vendor containers will be advertised as "Web Service" containers first and EJB containers second (just as EJB containers today are EJB first, CORBA second). Doesn't mean J2EE is dead, just that EJB is fast becoming not the way to build systems. Anyone who has attended one of Alan Knight's talks recently won't wonder why - he has a great riff on EJB without taking any unfair swipes at it. I've personally seen more than one Fortune 500 firm go down the rat hole of death marches with EJB projects. I hope that Ted Neward is right about this.
I need to go over to my daughter's school and practice my grade school arts and crafts (volunteer day), and then I need to see a doctor. I'm pretty sure I have a throat infection - could not sleep at all last night, so I'm only going on caffeine right now....
With TDD, you create an automated test first, and only then write the minimal amount of code that you can get away with to satisfy that test. Every time someone finds a new bug, it gets added to the fully automated test suite. Then the programmer writes the minimal amount of code to make the new test pass (which makes the bug go away). - From Test Driving Test Driven Development. This was in a column Joel apparently writes for the magazine. Looks like some of the XP tenets are really going mainstream!
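The cycle the quote describes is easy to show in miniature. Here's a hypothetical Python illustration - the feed-truncation scenario and all names are invented; the point is just test first, then minimal code:

```python
def test_feed_is_truncated_on_save():
    # The test comes first: it pins down the bug (say, a feed that
    # should be cut back to a 50-item limit when saved) before any
    # fix exists.
    saved = truncate_feed(list(range(100)), limit=50)
    assert len(saved) == 50

def truncate_feed(items, limit=50):
    """The minimal code that makes the test above pass - nothing more."""
    return items[:limit]

test_feed_is_truncated_on_save()
print("test passed")
```

Once the test passes, it stays in the suite, so the bug can never quietly return.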
Another web log entry here:
Joe Brockmeier has a realistic overview of Extreme Programming. Of course, he did interview Kent Beck and Ron Jeffries. It's nice to see a hypeless introduction to the subject. (He also interviewed yours truly, but that may have been for outrageous quotes and not deep philosophical insights. :) So it looks like XP is fast becoming the next big thing in the methodology realm.
I finally feel good enough about the refactoring work to post a dev build. The save file formats have changed in this build, so if you download this - make sure to backup your current save file first! Here's what's new:
- Editable settings are saved in a file called btfSettings.ini, and this file lives in the same directory as the application. It's an ini (key=value) file - most of the value types are pretty obvious
- Non-editable settings (window positions, etc.) are saved in a file called btfWinSettings.btf. This is a binary file, and it will go into your save directory
- Feeds are saved in a file called rssFeeds.btf. This is a binary file, and it will go into your save directory
- Feed Lists are saved in a file called rssFeedLists.btf. This is a binary file, and it will go into your save directory
- There's a new setting - shouldTruncateAllFeedsToLimitOnSave - if you set this to true, then the save file will hold no more than the current max limit (50 by default) of items for each feed. Due to a bug in all previous versions, the feeds were never properly truncated to that limit - which is why the save file kept getting bigger and startup time kept getting longer. Setting this to true dropped my feed file from 9 MB to 1.9 MB
- Except for the settings file, all files are pointed to from the settings file. These pointers can be overridden by command line arguments, or by environment variable settings. These will all be documented in the Users Guide. For now, you can see the options here
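Reading a key=value file like that is straightforward. Here's a minimal Python sketch of the idea - only shouldTruncateAllFeedsToLimitOnSave is a real setting from the list above; the other key is invented, and the real btfSettings.ini (parsed by Smalltalk code) may differ in details:

```python
def parse_ini(text):
    """Parse simple key=value lines. The format is inferred from the
    description above; here, blank lines and #-comments are skipped."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            settings[key.strip()] = value.strip()
    return settings

# shouldTruncateAllFeedsToLimitOnSave is real; maxItemsPerFeed is invented.
example = "shouldTruncateAllFeedsToLimitOnSave=true\nmaxItemsPerFeed=50"
print(parse_ini(example))
```

The appeal of the format is exactly this: a user can fix a setting with any text editor, no binary tooling required.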
Doc Searls: Hey, coffee and wine shops, I'll be in town for the next day with a laptop and a PDA that are wondering who's ready for my business? This problem doesn't seem all that much harder to me than syndicating and aggregating weblogs. In particular, both ends of the equation are likely to be behind a combination of firewalls, NAT, proxies, etc. Question to ponder: what technical, sociological, and legal innovations will be required to make this come about? I pondered this for awhile - there's a coffee/bagel shop in my local shopping center called Bagel Bin. I go there fairly often to get a snack and a coffee - the cool thing is, my daughter prefers this place to McDonalds! But back to the topic - I don't have a PDA, but I do have a phone - Sam asks what would prevent a shop like this from advertising to these sorts of devices. That's a simple one, I think - what's the benefit for them in doing so? This sort of local outfit gets a local clientele, and said clientele grows by word of mouth (and by proximity to the grocery store, a place everyone goes regularly). There would be an expense to setting up a connection, and I can't imagine that there's a lot of upside in terms of new business. More or less, I think we geeks often completely overestimate the relevance of the net in the day to day lives of most people. I had these thoughts, but not really in any kind of focused way - until I saw Gordon Weakliem's post on the topic:
How about economic innovations? When I saw this post I immediately thought of three local businesses I patronize: The Wine Seller, Angelo's Pizza, and Pablo's (my local coffee house). For these businesses to engage in this type of arrangement, it would either have to be extremely inexpensive, or would have to yield outsized results. I'm amazed at the crude technology that most small businesses employ, mostly for reasons of cost. Sure, Starbucks can afford this, but if it's just Starbucks, et al., I'm not interested. What makes weblogs interesting is that publishers can run one affordably and even I get to find the Wine Sellers, Angelo's and Pablo's of the web. 3 years ago, I'd guess most of my HTTP requests went to yahoo.com. These days, intertwingly.net is beating Yahoo! hands down. Gordon's post kicked my brain into gear, and led to my thoughts above. Now perhaps a local shop could run a weblog - they can be cheap - but that still assumes that someone at said shop would have the time and interest to post daily ramblings. Once you start a weblog, you either post frequently (and with any luck, interestingly), or you get no traffic to your site. Ultimately, I'm just not convinced that there's any compelling reason for most small shops to be on the net.
The tag line at the top of the page is a link back to the main view, so that you can always navigate back there. I fixed a table layout issue - the log text is now aligned at the top - it ended up looking odd on the archive pages. Finally, each entry now has a Perma Link at the bottom. Ahh, Smalltalk. Where I can load these changes into the running server...
I wandered by this set of posts from the Dive Into Mark site, and got inspired. I went ahead and made sure all the images had alt tags, made sure that there were shortcut keys to the home page link (which is new) and to the search page. I added label tags to things that should have them - and rearranged the table so that the blog roll is on the right - which makes the content show up first in text browsers like Lynx. I hope it's improved things - it seems to have.
After some fairly epic brain cramps (related to my lack of sleep from this sinus infection), I added category support to the blog. On entry, I give an entry a category. Still to be done is exposing that in a useful way - I will eventually add a list of categories I use for single click search access. But the hard work of changing the domain model is done - and without having to take the server down. Ahh, the joys of Smalltalk....
I added category support to the blog earlier today, and I've just wrapped up category searches. On the main page - just to the right of the entries - you'll see a list of all the existing categories. Clicking on the link will execute a search for all entries that have been categorized that way. I suppose allowing multiple categories per entry would be more useful, but I haven't gotten to that yet - it's a manual task determining what category an entry belongs in, and there are well over 500 log entries already. What I have now works, and it's pretty cool, if I do say so myself.
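The single-category search boils down to a simple filter. Here's the idea in Python - a sketch with made-up entries and field names, not the actual Smalltalk code running this blog:

```python
def entries_with_category(entries, category):
    """Return all entries tagged with the given (single) category -
    the one-category-per-entry model described above."""
    return [e for e in entries if e.get("category") == category]

# Invented sample entries for illustration:
entries = [
    {"title": "BottomFeeder update", "category": "smalltalk"},
    {"title": "Power outage", "category": "rant"},
]
print(entries_with_category(entries, "smalltalk"))
```

Moving to multiple categories per entry would just mean storing a collection per entry and testing membership instead of equality.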
If you downloaded the BottomFeeder DEV build I posted two days ago, go grab the latest one now. The last build had a few problems at startup - both for initial (no prior use) and conversion (old style settings). This build has been more thoroughly tested, and works in those situations here. Thanks to Dave for fixing the problems, and to Rich Demers for reporting them!
Then you should go read this article. I feel better knowing that I'm not the only one with issues....
According to this article in New Scientist:
One in four of the planetary systems identified to date outside the Solar System are capable of harbouring other Earths, say astrophysicists, a much higher proportion than anyone expected. The researchers decided the race to detect an extrasolar Earth-like planet is taking too long. So, instead of scanning the skies, they modelled all the planetary systems known so far to work out which could be hiding habitable planets. So what we need now is a handy Warp engine, ehh?
I should be taking down the Christmas tree, so of course I'm reading web logs and fixing BottomFeeder bugs. I have been meaning to look through Gordon's blogroll - I usually find what he posts to be interesting, so I figured stuff he's reading would be interesting as well. I am not disappointed. I stumbled onto Loosely Coupled right off, and found this post:
A different picture emerges if we look back at what really happens when significant new interoperability standards emerge. HTTP over the Internet brought the commercial Web into being. The addition of RSS to that mix turned weblogs into a powerful channel for amplifying discourse. 802.11b has created an unanticipated blossoming of WiFi hotspots and ad hoc networking. None of these results were predicted (or even expected) by the creators of those standards. Reviewing the practical deployments of web services in 2002, there's been little in the way of heavyweight enterprise deployments, mainly because enterprises still regard the available standards as immature. But there have been plenty of casual or serendipitous discoveries and experiments. One of the best examples was Jon Udell's experiment in joining up URLs from multiple sources based on ISBN numbers. He's just published a new account, The disruptive Web, in which he sums up the ingredients which he believes contributed to its success: "Support HTTP GET-style URLs. Design them carefully, matching de facto standards where they exist. Keep the URLs short, so people can easily understand, modify, and trade them. Establish a blog reputation. Use the blog network to promote the service and enable users of the service to self-organize. It all adds up to a recipe for recombinant growth."
That's an interesting set of observations. If you make your main services available as straight HTTP GETs, anyone can make use of them right now. That doesn't preclude offering other interfaces (SOAP), or using other mechanisms for your own internal operations - but what it does is make your services available to the widest possible audience. The other neat part of this - especially for Smalltalkers - is that it makes the implementation language irrelevant to the end user of your services. What then matters is how quickly and accurately you can get things done. Have a look at the Linea Engineering data and be encouraged - there is a coming software world that is ripe for those with higher productivity.
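The "HTTP GET-style URLs" point is concrete: a service is usable by anyone who can compose a URL, in any language. A tiny Python sketch - the host and parameter name are invented, loosely modeled on the ISBN experiment mentioned above:

```python
from urllib.parse import urlencode

def book_lookup_url(isbn):
    """Compose a short, tradable HTTP GET URL for a hypothetical
    book-lookup service. Anyone can build, modify, or share this URL
    with no SOAP toolkit required."""
    return "http://example.org/books?" + urlencode({"isbn": isbn})

print(book_lookup_url("0201616416"))
```

That's the whole client: the URL is the interface, which is exactly why such services spread so easily.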
A few more bugs have been slain, but I need some feedback. The FTP uploads work, but I'm having issues with the FTP downloads. It might be my network setup; I need someone else to test that. I fixed the feed icon color issue - feeds stay marked red until all the feed items have been seen. Toggling items from all read to all unread (or vice versa) also changes the feed color appropriately. I also fixed a bug with the item caching setting - it was being ignored, but it no longer is. Any bugs - please send them to me.
Here's a post I identified with. In my case it's not my parents - they are happy (and fairly sophisticated for non-tech users) Macintosh folks. My Dad's one of those who likes to mention how easy the Mac is every time I talk about some computer problem. But then there's my uncle and aunt - who are still running Windows 95 (yes, 95!) and using dialup access - because the newer stuff looks hard to them, and they understand what they have. Then there were my neighbors - they sent their computer to a repair place, and when it came back it couldn't print. The tech had apparently installed an HP driver as some kind of default, and they have an Epson printer. They were utterly baffled. Another example - a couple of years ago, I was visiting my sister. Her husband was at work when her neighbors had a problem signing into AOL. So I went over - they had no idea how to proceed - after a crash they had to reinstall AOL, and they did not know their password - it had always been auto-filled for them. They didn't know how to access any of the hint features most such things have. The printer issue is particularly relevant - I see technical folks constantly talking about how people should just install Linux
- I can't imagine most people dealing with XFree86
- I can't imagine most people dealing with printer installation. If they have trouble with plug and play, how are they going to manage Linux printer installation? And don't even think of asking them to figure out SAMBA.
- The root user/normal user divide will baffle most people. Software installation under Linux is no picnic for the non-technical (and if it's that damnable Oracle Java installer, it's no picnic for anyone). This is how the Fuzzy Blog summarized matters:
Both Jeremy and JWZ are realizing the issues of giving their Mom a Linux box (i.e. Momix): I'm sure that jwz's mother has more computer smarts than mine. And the funny thing is that most mothers aren't terribly adept at using computers. Why? Not because computers need to be difficult, but nobody designs software for them. Why is the way we save documents different than the way we locate them later? It makes little sense. This got me thinking about that old Linux box again. Why can't I at least get my Dad off Windows and make him happy? He'd be lost. Most of the Open Source software is no better than, say, Windows, and worse yet it's never been subjected to a usability study. It's a thing to ponder. And if you think about it, it's the main reason that Windows (especially 9x) always defaults to ease of use over security.
If you thought David Gest was the only ass to recently sue Viacom, think again. A Montana man named Jack Ass has sued MTV's parent company, accusing the music channel of "plagiarizing" and "defaming" his good name in connection with the show "Jackass." The 44-year-old Ass, who legally changed his name from Bob Craft in 1997, is seeking at least $10 million from Viacom, which Ass contends is "liable for injury to a reputation I have built and defamation of character I have created." I spotted this over at Gordon Mohr's blog. Truth most assuredly is stranger than fiction.