

## Not liking the answer

June 7, 2006 17:02:33.738

Tim Bray doesn't like the "it just works" argument:

> “The poor boy, that primitive Java stuff broke because he doesn’t have auto-magical big numbers like Lisp-n-Smalltalk had back in the day.” Thank you for raising my consciousness. If you’ll grant that the trade-off between fixed-size hard-wired datatypes and more abstract ones has been under discussion since Turing was a tot, I’ll grant that many attempts to pack the data in tight are symptoms of premature optimization. But space-vs-time trade-offs are just not gonna go away; deal with it. And I’ve had my working set blown to hell more than once trying to build the parse tree for what seemed like a moderately-sized incoming message, in a language that turned out to be just a little too high level. And the “My thought-experiment language solved that in 1976” mantra is boring.

Here's another one for him: try doing the factorial of 1000 with Java integers. Whoops - can't do that either. It's not that the space vs. time trade-off is going away completely, but in a world where we have 1GB+ of memory available and hundreds of GB of disk, clinging to 32 bit integers is an affectation, not a rational optimization. Face it, Tim - Smalltalk and Lisp got this one right a long, long time ago, and James Gosling still hasn't wrapped his head around it.
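A minimal sketch of what actually happens when you try (class name is my own invention):

```java
public class IntFactorial {
    public static void main(String[] args) {
        int n = 1;
        for (int i = 1; i <= 1000; i++) {
            n *= i; // overflows silently -- Java raises no error on int overflow
        }
        // 13! already exceeds Integer.MAX_VALUE; by 1000! the low 32 bits
        // have accumulated so many factors of two that the result is exactly 0.
        System.out.println(n); // prints 0
    }
}
```

No exception, no warning - the wrong answer just comes out, which is arguably worse than failing.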

#### strawman

[Isaac Gouy] June 7, 2006 19:20:17.616

> Try doing the factorial of 1000 with Java integers.

Java has arbitrary precision arithmetic, so that's what a programmer would use.

[] June 7, 2006 19:38:03.140

"n = n.multiply(BigInteger.valueOf(i));" Ugh.

#### And Again...

[James Robertson] June 7, 2006 21:26:15.271


The point, Isaac, is that in Smalltalk, you don't need to get bogged down in irrelevant implementation details. In Java, you do.

#### Lisp

[Greg Buchholz] June 7, 2006 23:05:59.996

> ...Lisp got this one right a long, long time ago...

They made arbitrary precision math the sensible default, but screwed up arrays.

[Reinout Heeck] June 8, 2006 1:25:41.952

Note how he asserts that Lisp and Smalltalk are merely thought experiments; that pretty much discounts him as a serious commenter, regardless of which number implementation he prefers. Let's bore him some more...

[Isaac Gouy] June 8, 2006 20:14:13.254

"n = n.multiply(BigInteger.valueOf(i));" Ugh.
(There are lots of problems with Java.)

#### ubiquity

[Isaac Gouy] June 8, 2006 20:26:43.063

> 1GB+ of memory

Not in my mobile phone. Not in my car. ...

#### Unsafe integers and safe arrays

[Bryce] June 9, 2006 15:37:15.317

I'm unconvinced of the performance gains from 32 bit integers in an implementation that can optimise away array checks in loops. Optimising array checks in loops requires induction variable analysis, which also allows overflow checks to be removed from integer code.

It's not a speed vs. space trade-off. It's trading speed, or implementation complexity, for complexity in the language and in every program written in it. In most cases Smalltalks and Lisps will use a single word for storage; it only becomes an issue when the number overflows. Overflows, in the case being discussed, are bugs. And the implementation complexity is already required to optimise arrays fully.

Also, in my experience array checks and GC book-keeping are much bigger time wasters in most loops. Besides, by measurement, C only executes about 1 instruction per clock when most machines can execute at least 3. So there's room to hide a little waste without any performance cost. (I measured C by running oprofile.)

Sure, there are a few algorithms that really do require 32 bit integers, including cryptographic code. But why make all code more complex just to let some of it run faster? Surely it's possible to have a 32 bit integer type for those rare occasions where that's what is desired, just as we have floating point types for when we really want those semantics and that performance.
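Java itself eventually moved part way in this direction: `Math.multiplyExact`, added in Java 8 (well after this thread), gives opt-in checked 32 bit multiplication. A minimal sketch contrasting it with the default wrapping behaviour:

```java
public class CheckedOverflow {
    public static void main(String[] args) {
        int big = Integer.MAX_VALUE;

        // Plain int arithmetic wraps around silently:
        System.out.println(big * 2); // prints -2

        // Math.multiplyExact (Java 8+) throws instead of wrapping,
        // turning the overflow bug into a visible failure:
        try {
            Math.multiplyExact(big, 2);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }
    }
}
```

The checked form is exactly the "rare occasions" type argued for above, just spelled as a library call rather than a language feature.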
