Richard Mansfield has been scarred by the "C" language family. In this article, he tries to explain why OOP is bad. What he mainly explains is something much simpler - he's only ever been exposed to OOP done wrong, and has drawn bad conclusions from that exposure:
To the extent that OOP is involved in components such as text boxes (not much, really), it's very successful. GUI components are great time-savers, and they work well. But don't confuse them with OOP itself. Few people attempt to modify the methods of components. You may change a text box's font size, but you don't change how a text box changes its font size.
What he's doing here is arguing against a concept that predates OO entirely - encapsulation. I was confused by this trashing of basic programming concepts until I read the short bio at the end of the piece:
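To make the point concrete: encapsulation just means callers go through an interface instead of poking at internals, and it needs no inheritance tree at all. A minimal sketch of the text-box example (the `TextBox` class and its size check are hypothetical, not from any real GUI toolkit):

```python
class TextBox:
    """Toy text box: callers change *what* the font size is,
    not *how* the box changes it."""

    def __init__(self):
        self._font_size = 12  # internal state, reached only through methods

    def set_font_size(self, points):
        # The validation rule lives here, in one place, behind the interface.
        if points <= 0:
            raise ValueError("font size must be positive")
        self._font_size = points

    def font_size(self):
        return self._font_size
```

That's exactly what Mansfield describes approvingly - and it's encapsulation, whether or not you call it OOP.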
Richard Mansfield has written 32 computer books since 1982, including bestsellers 'Machine Language for Beginners' (COMPUTE! Books) and 'The Second Book of Machine Language' (COMPUTE! Books). From 1981 through 1987, he was editor of COMPUTE! Magazine and from 1987 to 1991 he was editorial director and partner at Signal Research.
It looks to me like Richard started out in assembly, and his exposure to OOP has been through the marvelous examples provided by C++, Java, and C#. In those languages, you get OO as the supposed main event, but it's surrounded with stupidity like primitive data types, "final" class definitions, and so on - it's no wonder he's come away with so many bad ideas:
For a while it was a success, but things took a turn. In those early days, computer memory was scarce and processors were slow. Processor-intensive programs such as games and CAD had to be written in low-level languages just to compete in the marketplace. To conserve memory and increase execution speed, such programs were written in assembly language and then C, which conformed to the computer's inner structure rather than to the programmer's natural language. For example, people think of addition as 2 + 2, but a computer stack might work faster if its programming looks like this: 2 2 +. Programmers describe it as little Ashley's first birthday party: the computer starts counting from zero, so to the machine it's her zeroth birthday party.
When fast execution and memory conservation were more essential than clarity, zero-based indices, reverse-polish notation, and all kinds of bizarre punctuation and diction rose up into programming languages from the deep structure of the computer hardware itself. Some people don't care about the man-centuries of unnecessary debugging these inefficiencies have caused. I do. Efficiency is the goal of OOP, but the result is too often the opposite.
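For what it's worth, his "2 2 +" example is ordinary reverse-polish notation, and a few lines show how a stack evaluates it - operands get pushed, and each operator pops its arguments. A toy sketch (a hypothetical `eval_rpn` handling integers and three operators only, not a production parser):

```python
def eval_rpn(expr):
    """Evaluate a space-separated reverse-polish expression, e.g. '2 2 +'."""
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
    }
    stack = []
    for token in expr.split():
        if token in ops:
            b = stack.pop()       # operators pop their two operands...
            a = stack.pop()
            stack.append(ops[token](a, b))  # ...and push the result
        else:
            stack.append(int(token))  # operands are simply pushed
    return stack.pop()

eval_rpn('2 2 +')  # → 4
```

The machine-friendliness he complains about is real, but it's a property of stack evaluation, not of OOP.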
He then goes on to argue that data and functionality need to be separate - that this somehow increases the odds that your code will be flexible:
I find that leaving the data in a database and the data processing in the application simplifies my programming. Leaving data separate from processing certainly makes program maintenance easier, particularly when the overall structure of the data changes, as is so often the case in businesses (the most successful of which continually adapt to changing conditions). OOP asks you to build a hierarchical structure and thereafter try to adapt protean reality to that structure.
Encapsulation, too, is a noble goal in theory: you've reached the Platonic ideal for a particular programming job, so you seal it off from any further modification. And to be honest, constructing a class often is fun. It's like building a factory that will endlessly turn out robots that efficiently do their jobs, if you get all the details right. You get to play mad scientist, which can be challenging and stimulating. The catch is that in the real world programming jobs rarely are perfect, nor class details flawless.
This is where I think he runs off the rails. Inheritance is one aspect of OOP, but it's not the be-all and end-all. In fact, most of us recognize that deep inheritance trees lead to obfuscation more than they lead to elegance. But never mind that - what I want to know is this: if I have a set of functions over here and the database over there - as opposed to a set of objects over here and the database over there - how is the former easier to update than the latter? If the shape of the data changes, guess what? The functions (or methods) need to adapt either way. Richard seems to think that a rigid separation somehow makes this easier - either he's smoking something, or he's never worked on a non-trivial system. I spent years working in C, and a fair bit of time in Basic (and other similar languages). Believe me, Smalltalk makes it far, far easier to deal with code migration issues than anything else I've ever worked with.
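To make the comparison concrete, here's a toy sketch (the `Person` record and its field names are invented for illustration). In the procedural version, every function knows the record's shape; in the object version, that knowledge is confined to a pair of accessors, so a renamed database column touches one place instead of many:

```python
# Procedural style: every function that touches the record
# hard-codes its shape. Rename a field, and all of them break.
def full_name(rec):
    return rec['first'] + ' ' + rec['last']

def greeting(rec):
    return 'Hello, ' + rec['first']

# Object style: the record's shape is known in exactly one place.
class Person:
    def __init__(self, rec):
        self._rec = rec

    def first(self):
        # If the stored key ever changes, only these accessors change.
        return self._rec['first']

    def last(self):
        return self._rec['last']

    def full_name(self):
        return self.first() + ' ' + self.last()
```

Both styles have to adapt when the data changes - the difference is how far the change ripples.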
Mansfield has discovered that the Emperor has no clothes, and he thinks the Emperor is OOP. What he hasn't figured out - most likely through lack of exposure - is that C++, Java, and C# do not define OOP. In fact, they are all fairly ugly hacks pretending to be OO while preserving the nasty familiarity of C-style syntax. If he wants to blame someone, he can look at the supposed giants - Stroustrup, Gosling, and Hejlsberg - who have managed to inflict a few megatons' worth of damage on the software development field during their careers.