The suckage started last night. The power flickered, and took out my server's power supply. Since that box was the router, it took me offline. So today I took it in to get it fixed, and picked up a Linksys router. That one was broken. Back to the shop, got a new one. This one works... but neither of the ReplayTVs could get a dynamic address off of it. So I gave them static IPs, and that worked. Then there's my wife's PC. After a lot of twiddling, I think her ethernet card went, likely taken by the power spike. Sigh. I can't find the spare ethernet card I have lying around somewhere, so I can't test that idea. And my local repository is offline, since it's on the Linux box. To make things even better, the power in Cincinnati went out at some point yesterday, and took all the services down. The web apps are back up (obviously), but we apparently didn't set up the postgres db as a cron job, so the public Store is offline, and I can't get ahold of the admin. I should grab some eggnog....
Sorry for the inconvenience folks; that winter storm did some damage to us, but we are back up.
Introduction: The goal is to create a fully functional Smalltalk implementation of the Galaxy server, capable of simultaneously supporting different game specifications such as Galaxy+ and GalaxyNG. I would like this to become the next-generation Galaxy server, which will allow people to define their own Galaxy rules without having to modify source code every time.

What is Galaxy? Galaxy is a free, very addictive play-by-email war game for multiple players. A game usually has somewhere between 10 and 80 players and one Game Master who runs the game. Each player gets to play one nation. The game is set in a galaxy filled with many planets. Each nation starts with one populated planet; the other planets are empty. The goal of the game is to conquer the whole galaxy by colonising the empty planets and killing off the other nations.

A game runs several turns a week. Players send in orders before each turn. These orders tell what the player wants his nation to do, and after each turn runs, each player receives a report that states what happened that turn. There are many programs that allow you to graphically browse these turn reports. They show a map of the galaxy, the positions of all ships, and all kinds of other statistics, and allow you to create orders.

When you play a nation you can do many things. You can design your own ships. Each ship has many parameters: drive, number of weapons, shields, and cargo capabilities. These can all be tweaked to create many different ship designs. Populated planets can be used to build the ships you designed. The ships can be used to transport cargo and colonize planets, or to fight battles with your neighbours to take over their planets. Ships can be improved by technology research. Experienced players do not operate alone; they use diplomacy to create pacts with other players, and attack their enemies together.
Backstabbing, double-dealing, and other treachery are of course also possible; no one is to be completely trusted, and you will find that Galaxy is a good simulation of real-world politics. It is also very addictive :) Galaxy has been around for quite a while, and many variants have been developed during that time. In random order:
- Galaxy. This is the original version of Galaxy; it was developed by Russel Wallace somewhere around 1991, or maybe even earlier (V2.9 has a 1991 copyright).
- Blind Galaxy. It differs from Galaxy in that less information is visible. For instance you do not know what other nations there are in the Galaxy until you encounter them. The game was developed by Howard Bampton. He frequently starts new games. The code is a massively hacked offshoot of the early versions of Galaxy plus substantial new code. The home page of Blind Galaxy is on http://www.cs.utk.edu/~bampton/blind.html.
- Galaxy PBW, a version of Blind Galaxy that can be played via the WWW. The home page is at http://pc046b.fzu.cz/galaxywww/.
- Galaxy plus (G+). A variation of the game developed in Russia, somewhat similar to GalaxyNG, but there are differences. Galaxy is big in the former Soviet Union.
- GalaxyNG. This is a partial rewrite of the original code to create a more stable version of galaxy developed by a number of people. It also introduces some changes to the rules and the option to create more diverse kinds of galaxies. The home page for GalaxyNG with loads of information is at http://galaxy.pbem.net/index2.html.
- GalaxyNT, a port of GalaxyNG to Windows NT.
- Blind GalaxyNG. This is a modification of GalaxyNG source code to create a game that is like Blind Galaxy.
- Galaxy G.
- Galactica and Galaxy/2 are extinct variants; Galaxy/2 was the precursor to Galactica.
With a bad network cable. The good news is, the ReplayTV transfers video very nicely across a 100 mb network. The bad news is that the cable running from my wife's system to the basement won't synch up, and it's definitely a cable issue - a quick test with a good cable run across the floor to the office figured that out. Oh, the joys of crawling around in the basement to replace the wiring....
When I first saw this story - that the town of Bridgeville, CA was up for bid on eBay - I wasn't sure what to think. Now it's been sold:
SAN FRANCISCO, California (AP) -- Sold: a fixer-upper Northern California town, for nearly $1.8 million -- on Internet auction site eBay. Now tiny Bridgeville waits to see who its new owner is. If the deal goes through as expected, 82 acres of Bridgeville will go to the unidentified buyer who put in a bid for $1,777,877 just seconds before the Internet auction closed Friday.

I guess you can buy anything on the net...
My glorious power company has been intent on taking out my hardware - recent power spikes took out my VCR and my Linux box's power supply (for some reason, I had turned off my UPS. No idea how that happened). I took the Linux box downtime as an opportunity - we had been using it as the house router since about 1999. However, it was only providing a 10 mb house LAN - not a problem until we got a second ReplayTV. You can stream recorded shows from one Replay to another over the LAN - and it turns out that 10 mb was not enough bandwidth for that. So when the Linux box went down, I bought a Linksys router. That gave us 100 mb on the LAN, and the Replays started sending video across it quite nicely. I spent the extra 50 bucks and got a dual mode router - we now have a 100 mb wired LAN and an 11 mb wireless LAN - which means that my notebook is no longer a slave to the ethernet cabling. All in all, it's quite nice - I should have done this before.
I've never been entirely happy with the way settings and such are saved in BottomFeeder - so I am in the middle of redoing them. I had all the settings and feeds munged together into one big file - which was simple for me, but not so convenient for end users. I am splitting the settings off into editable settings and binary feed files. This does make save/load a tad more complex; the Refactoring Browser is making the changes simpler, but there's still a lot of testing and work to do....
So the second ReplayTV is set up, and has enough bandwidth. So of course I have another problem. The Replay is supposed to control my cable box (so that it can change the channel to record my chosen shows). The problem is, it seems to be (sometimes) dropping digits when changing channels. It's using a cable that snakes from the Replay to the cable box, so that the IR commands can be translated. The Replay in the living room works fine with the same settings, and we have the same cable boxes in both places. I'll be darned if I can figure out what's wrong - we suspect IR reflection is confusing the issue, but we aren't really sure. At this point, it's just frustrating.
I should have gotten wireless a long time ago. It's very, very nice to be able to just pick my notebook up and walk to any room, and not have to worry about setting up a hub there, or having to sit near the network access point. And I say this as someone who wired his house!
I'm deep into testing the new BottomFeeder startup and save code - there are still some kinks with starting up with the new file formats, while conversion seems to work just fine. More testing is definitely in order! If you load the current code out of the repository, be forewarned that it is not fully baked yet!
I stumbled onto an interesting commentary over at Gordon Weakliem's log on stored procedures vs. straight SQL queries (http://radio.weblogs.com/0106046/2002/12/30.html):
Java programmers seldom use stored procedures. They are not portable; it breaks the 'write once run anywhere' motto. It brought to mind a recent quote in the whole .NET vs J2EE pet store brouhaha, something like "Sun recommends direct queries, but that's stupid". And this was coming from someone on the J2EE side of the debate. So this comment is a little perplexing. As I mentioned, my cross platform DB experience is pretty minimal (even when I was doing Java a couple of years ago, it was against SQL Server), but it seems like by doing direct queries, you're throwing out a lot of potential optimizations at the database level that could otherwise be hidden behind stored procs.

When the database is the biggest blocking point in your system, it seems to me that tossing optimizations aside for (effectively) ideological reasons is just silly. We are about to upgrade the VWNC Registration system - right now, it uses a fairly dicey back end storage scheme (read - no database!). We are going to push it all into Postgres, and the API I'm using is completely stored procedure driven. Not only is it faster - it is actually a simpler API for me to deal with as a developer. As to worries about portability, I'll whip out an XP truism: YAGNI. If we end up having to migrate to another db, we will cross that bridge when we come to it. In the meantime, we'll enjoy a more optimal system...
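The portability worry shrinks a lot once the SQL is hidden behind a data-access layer anyway. Here's a minimal sketch in Python, using the stdlib sqlite3 module and an entirely hypothetical registrations table (the real VWNC schema isn't shown here), of keeping the query inside one function so that the choice between a direct query and a stored-procedure call is invisible to callers:

```python
import sqlite3

# Hypothetical schema standing in for the real registration system's data.
SETUP = """
CREATE TABLE registrations (name TEXT, event TEXT);
INSERT INTO registrations VALUES
    ('alice', 'vwnc'), ('bob', 'vwnc'), ('carol', 'other');
"""

def count_registrations(conn, event):
    """Data-access function: callers never see the SQL.

    Against Postgres, this body could instead invoke a stored procedure
    (e.g. SELECT count_registrations(%s)) -- the optimization lives in
    the database, and only this one function changes if we ever migrate.
    """
    row = conn.execute(
        "SELECT COUNT(*) FROM registrations WHERE event = ?", (event,)
    ).fetchone()
    return row[0]

conn = sqlite3.connect(":memory:")
conn.executescript(SETUP)
print(count_registrations(conn, "vwnc"))  # -> 2
```

That's the YAGNI point in miniature: if a db migration ever happens, portability means rewriting a handful of small functions, not hunting query strings through the whole codebase.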
So here's Sun finally branching out to Linux - too late, IMHO - they will be competing against some truly low cost vendors. I see this as being similar to the big airlines trying to compete with Southwest with low cost entries - but without addressing the basic cost structure problem. Anyway. I saw this quote, which just cracks me up:
Still, McNealy isn't wavering much in his belief in Unix, which still runs on most servers. "Linux isn't a market. It's a crankshaft, a widget," scoffs McNealy, using an analogy befitting the son of a former American Motors vice chairman...

That's right, diss the market you are trying to enter....
I've done a test build that works on Windows and on Linux. I need to test the ENV VAR and command line arguments to make sure that they override properly, but the base stuff works - a production image reads the old save file, converts it to the new settings format, and saves all the files. The settings in the upcoming release will all be available in a text file that is user editable - and the locations of all the other settings files (window information and feeds) are specified in that file. This makes backing up and saving your files much easier.
Yes campers, I have spent my entire New Year's day (thus far) in the bowels of BottomFeeder startup code. I didn't have the startup sequence right for environment variables and command line arguments; they were being ignored. So I spent some time testing that out - brief aside here - having an image made that vastly easier for me. I was able to simulate the runtime easily by having the dev image start up the application on startup, using all the current environment variables and command line arguments. This is relevant to a discussion of image based vs. non image based development that is going on in comp.lang.smalltalk right now. IMNSHO, having an image makes development loads simpler. For a runtime, it's also very nice to have an image - I have diagnosed countless issues in server apps by saving the headless server image in its current state and running that headful in order to see what happened. With a sealed runtime, that's nowhere near as easy. So anyway, back to BottomFeeder - I think I have the startup sequence right now, but I'm still testing to be sure - there will be no DEV builds until I get that sorted out. You can look here to see what I'm working on.
BottomFeeder users have noticed that the 2.6 release has taken longer to come out. There are a few reasons for that. First, Dave is working again, so he's doing paid work during business hours. I've had work to do as well for Cincom, which slowed down my contributions. Finally, I decided that I was completely unhappy with the state of the startup code and the save file code - so I refactored it. It had all been in two classes, and it really needed some rationalization. It's now split out into its own package, in a number of much smaller classes. So I'm happier with the setup - but the wholesale change also calls for a lot of testing. I'm in the midst of that now, and I think I've found the majority of the issues. Stay tuned.
This is interesting. Amongst the other predictions, the author of this piece writes:
EJB will be almost a dead horse by year's end. As if my predictions weren't controversial enough, here's one to cement the whole deal. Yes, I firmly believe that EJB will be almost entirely "on the way out" within the Java enterprise space. Sorry, guys, it was a good run, you managed to bilk the industry of billions of dollars along the way, but the EJB facade is crumbling and developers are waking up to the realization that EJB just doesn't meet the goals it's supposed to: making enterprise development simple. Instead, the 800-page monster of a specification, the one that doesn't even come close to being tight enough to actually program to nor loose enough to permit serious performance improvements within an implementation, will be quietly and serenely allowed to drift off into irrelevance in the face of the burgeoning Web Services hype tidal wave. Most J2EE vendor containers will be advertised as "Web Service" containers first and EJB containers second (just as EJB containers today are EJB first, CORBA second). Doesn't mean J2EE is dead, just that EJB is fast becoming not the way to build systems.

Anyone who has attended one of Alan Knight's talks recently won't wonder why - he has a great riff on EJB without taking any unfair swipes at it. I've personally seen more than one Fortune 500 firm go down the rat hole of death marches with EJB projects. I hope that Ted Neward is right about this.
I need to go over to my daughter's school and practice my grade school arts and crafts (volunteer day), and then I need to see a doctor. I'm pretty sure I have a throat infection - could not sleep at all last night, so I'm only going on caffeine right now....
With TDD, you create an automated test first, and only then write the minimal amount of code that you can get away with to satisfy that test. Every time someone finds a new bug, it gets added to the fully automated test suite. Then the programmer writes the minimal amount of code to make the new test pass (which makes the bug go away). - From Test Driving Test Driven Development

This was in a column Joel apparently writes for the magazine. Looks like some of the XP tenets are really going mainstream!
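The loop described in that quote is small enough to sketch directly. Here's what it looks like in Python with the stdlib unittest module - the function under test (a made-up word_count, purely for illustration) exists only because the tests demanded it, and a reported bug gets its own permanent test case:

```python
import unittest

# Step 2: the minimal code that satisfies the tests -- written only
# after the tests below existed and failed.
def word_count(text):
    return len(text.split())

# Step 1: the tests, written first. When someone reported the bug
# "empty string should count as zero words", it got its own test case,
# which now stays in the suite forever.
class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("test driven development"), 3)

    def test_bug_report_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```

The payoff is exactly what the quote claims: every fixed bug leaves a tripwire behind, so it can't quietly come back.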
Another web log entry here:
Joe Brockmeier has a realistic overview of Extreme Programming. Of course, he did interview Kent Beck and Ron Jeffries. It's nice to see a hypeless introduction to the subject. (He also interviewed yours truly, but that may have been for outrageous quotes and not deep philosophical insights. :)

So it looks like XP is fast becoming the next big thing in the methodology realm.
I finally feel good enough about the refactoring work to post a dev build. The save file formats have changed in this build, so if you download this, make sure to back up your current save file first! Here's what's new:
- Editable settings are saved in a file called btfSettings.ini, and this file lives in the same directory as the application. It's an ini (key=value) file - most of the value types are pretty obvious
- non-editable settings (window positions, etc) are saved in a file called btfWinSettings.btf. This is a binary file, and it will go into your save directory
- Feeds are saved in a file called rssFeeds.btf. This is a binary file, and it will go into your save directory
- Feed Lists are saved in a file called rssFeedLists.btf. This is a binary file, and it will go into your save directory
- There's a new setting - shouldTruncateAllFeedsToLimitOnSave - if you set this to true, then the save file will hold no more than the current maximum (50 by default) items for each feed. Due to a bug in all previous versions, the feeds were never properly truncated to that limit - which is why the save file kept getting bigger and startup time kept getting longer. Setting this to true dropped my feed file from 9 MB to 1.9 MB
- Except for the settings file, all files are pointed to from the settings file. These pointers can be overridden by command line arguments, or by environment variable settings. These will all be documented in the Users Guide. For now, you can see the options here
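To make the key=value format concrete, here's a small Python sketch that parses a btfSettings.ini-style file. Only the shape of the file and the shouldTruncateAllFeedsToLimitOnSave setting come from the description above; the other key names are hypothetical examples, not BottomFeeder's actual keys:

```python
# A btfSettings.ini-style file: one key=value pair per line.
# shouldTruncateAllFeedsToLimitOnSave is the real setting described
# above; feedFile and winSettingsFile are invented for illustration.
SAMPLE = """\
shouldTruncateAllFeedsToLimitOnSave=true
feedFile=rssFeeds.btf
winSettingsFile=btfWinSettings.btf
"""

def parse_ini(text):
    """Return a dict of settings from key=value lines."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

print(parse_ini(SAMPLE)["shouldTruncateAllFeedsToLimitOnSave"])  # -> true
```

Because the format is plain text like this, the file can be hand-edited, diffed, and backed up with ordinary tools - which is most of the point of splitting it out of the old binary blob.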
Doc Searls: Hey, coffee and wine shops, I'll be in town for the next day with a laptop and a PDA that are wondering who's ready for my business? This problem doesn't seem all that much harder to me than syndicating and aggregating weblogs. In particular, both ends of the equation are likely to be behind a combination of firewalls, NAT, proxies, etc. Question to ponder: what technical, sociological, and legal innovations will be required to make this come about?

I pondered this for a while - there's a coffee/bagel shop in my local shopping center called Bagel Bin. I go there fairly often to get a snack and a coffee - the cool thing is, my daughter prefers this place to McDonalds! But back to the topic - I don't have a PDA, but I do have a phone - Sam asks what would prevent a shop like this from advertising to these sorts of devices. That's a simple one, I think - what's the benefit for them in doing so? This sort of local outfit gets a local clientele, and said clientele grows by word of mouth (and by proximity to the grocery store, a place everyone goes regularly). There would be an expense to setting up a connection, and I can't imagine that there's a lot of upside in terms of new business. More or less, I think we geeks often completely overestimate the relevance of the net in the day to day lives of most people. I had these thoughts, but not really in any kind of focused way - until I saw Gordon Weakliem's post on the topic:
How about economic innovations? When I saw this post I immediately thought of three local businesses I patronize: The Wine Seller, Angelo's Pizza, and Pablo's (my local coffee house). For these businesses to engage in this type of arrangement, it would either have to be extremely inexpensive, or would have to yield outsized results. I'm amazed at the crude technology that most small businesses employ, mostly for reasons of cost. Sure, Starbucks can afford this, but if it's just Starbucks, et al., I'm not interested. What makes weblogs interesting is that publishers can run one affordably and even I get to find the Wine Sellers, Angelo's, and Pablo's of the web. 3 years ago, I'd guess most of my HTTP requests went to yahoo.com. These days, intertwingly.net is beating Yahoo! hands down

Gordon's post kicked my brain into gear, and led to my thoughts above. Now perhaps a local shop could run a weblog - they can be cheap - but that still assumes that someone at said shop would have the time and interest to post daily ramblings. Once you start a weblog, you either post frequently (and, with any luck, interestingly), or you get no traffic to your site. Ultimately, I'm just not convinced that there's any compelling reason for most small shops to be on the net.
The tag line at the top of the page is a link back to the main view, so that you can always navigate back there. I fixed a table layout issue - the log text is now aligned at the top - it ended up looking odd on the archive pages. Finally, each entry now has a Perma Link at the bottom. Ahh, Smalltalk. Where I can load these changes into the running server...