There's been a lot of posting going on about static and dynamic typing recently - and today brings two more posts. This one from Bruce Eckel has a similar light-bulb moment to this post from Bob Martin:
If it's not tested, it's broken.
That is to say, if a program compiles in a strong, statically typed language, it just means that it has passed some tests. It means that the syntax is guaranteed to be correct (Python checks syntax at compile time as well; it just doesn't have as many syntax constraints). But there's no guarantee of correctness just because the compiler passes your code. If your code seems to run, that's also no guarantee of correctness.
The only guarantee of correctness, regardless of whether your language is strongly or weakly typed, is whether it passes all the tests that define the correctness of your program. And you have to write some of those tests yourself. These, of course, are unit tests. In Thinking in Java, 3rd Edition, we filled the book with unit tests, and they paid off over and over again. Once you become "test infected," you can't go back.
There's a lot more in that post - read the Python example, which is what Bruce uses to draw his conclusions. The bottom line is, the sorts of mistakes that static typing protects you from are rare, and the costs are much higher than the benefits (as I've posted before). For the opposing viewpoint, have a look at the Fishbowl, where we find this:
Firstly, I agree with Bruce Eckel. Static typing is a form of testing. As a form of testing, it's particularly restrictive on the programmer, and forces the programmer to test all sorts of things they probably shouldn't have to: remembering the unit testing adage that you should only test those things that could possibly break.
There are difficulties, however, with going from that premise, to the conclusion that testing can give you the same benefits as strong typing, but without the disadvantages. The difficulties lie in the difference between testing-through-static-typing, and testing-through-writing-tests.
My position is that static typing, by itself, provides few tests worth anything. His defense of static typing comes here:
The other reason I tend to steer towards statically typed languages for my own projects puts me in agreement with Carlos. In a dynamically typed program, it's easy for a human to tell what type something is likely to be, but no way for a machine to say for sure what type something is. Thus searches, code-assistance and refactoring tools for dynamically typed languages must, at some point, guess. These are all tools I rely on frequently, and want to work with as little of my interference as possible.
I want to work with those tools as well. And guess what? I do! Losing static typing does not mean losing those tools.
I'm aware the Refactoring Browser originated in Smalltalk. What I fail to understand is how truly automatic refactoring is possible when types are indeterminate.
This isn't really an argument against, so much as it's a simple lack of experience with Smalltalk tools. At a talk a few weeks ago, I refactored all references to '+' to 'fred:'. This took a while (there are a lot of references) - but it worked just fine. There's a reason these tools originated in Smalltalk - it was easier to build them there. Why? Consider a compiler. In the course of compiling a program, it creates loads of meta information about the code. What happens to this meta information? It gets thrown away. A Smalltalk system is one in which none of that meta information is thrown away.
Being able to discover precisely where (or if) a type or method is referenced is invaluable. A text-search can help, but you must sift through the false-positives yourself. This requires a certain familiarity with the code, and as the code-base gets bigger (or your familiarity with it wanes for some other reason), that sifting takes longer and longer.
I rarely find this to be a problem. Smalltalk can find all the references (senders, implementors, references, restricted to a hierarchy or not). Using a pattern of naming with intent, I find that I usually have very few false positives to look at. If you name all your methods after commonly used system methods, sure, you'll have sifting. But that's bad practice.
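To make the "senders" idea concrete without a Smalltalk image handy, here is a rough Python analogue - an illustration of the principle, not Smalltalk's actual browser machinery. Instead of grepping text, it walks the parsed syntax tree with the standard-library ast module and reports only genuine call sites of a method name, so a string that merely mentions the word never shows up as a false positive. (The sample source and the `senders_of` helper are my own invention.)

```python
import ast

SOURCE = """
class Account:
    def deposit(self, amount):
        self.balance += amount

def pay_in(acct):
    acct.deposit(100)      # a real sender of deposit
comment = "please deposit cash here"  # a textual false positive
"""

def senders_of(source: str, method: str) -> list:
    """Return the line numbers where `method` is actually called."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # Only attribute calls like obj.deposit(...) count as senders;
        # the word "deposit" inside a string literal is ignored.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == method):
            hits.append(node.lineno)
    return hits

print(senders_of(SOURCE, "deposit"))  # finds the call site, not the string
```

A plain text search for "deposit" returns three hits here and leaves the sifting to you; the tree walk returns one. No manifest type declarations were needed to do it - which is the point.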
I work on my own, personal projects maybe one or two days a week. I tend to have four or five hanging around, so some will go months without me looking at them. When I return to a project after that amount of time, the information that the IDE can glean from the type system is invaluable.
On the other hand, my couple of Ruby projects languish far longer, because it ends up being a lot harder for me to pick them up after a long absence, because I have forgotten all the type information that is implicit in the program.
The IDE's understanding of types can also cause it to save me at least as many keystrokes as the type information causes me to endure. For example, when auto-completing a method, Eclipse will check the local scope for objects of the same type as the arguments, and include them in the completion. Similar guesswork is performed when using macros to generate loops. IDEA has similar features. It will even recommend simple variable names for me based on their type.
It's also amazing how quickly you can remember the workings of a half-forgotten API with a sketchy glance at the documentation and an IDE with type-informative code-completion.
Type information helps recall APIs? How? Are the class and method names so generic as to be meaningless? If that's the case, how does an int declaration help out? I simply don't get this argument. Code that does not use meaningful names is hard to figure out in any language, independent of whether manifest typing information is present. Well-formed class libraries are easy to read, poorly formed ones aren't - period.