I was reading an article about OOP, a well-written piece with better detail than I could manage, by an author with more robust industry experience in programming: The Faster You Unlearn OOP, The Better For You And Your Software. The author articulates a lot of good points in a technical manner, but I noticed one of the comments adamantly attacking the author and “defending” OOP by claiming the naysayers are just incompetent. Well, I’m not going to sit around and let this guy cook (that’s Gen Z speak for “I’m taking control of the situation”), so I wrote a reply, then edited it no fewer than five times to add supporting evidence and pare the argument down to its strongest points.
Here was his original comment:
It is genuinely interesting that people who don’t know how to use OO for the reasons OO exists (reduce the likelihood of bugs, reduce the amount of information that must be communicated between developers, and control complexity by reducing cross-dependencies so that very large projects can be done efficiently, to pick just a few examples) put up their own deeply flawed pseudo-OO strawman as an example of “OO” and then proceed to argue that their imaginary construct shows how shit OO is and why people should stop doing it.
Even more funny is that this is basically a back-to-spaghetti-code movement that reverses what happened 25 years ago, when people figured out that making everything have access to everything and be able to change everything was spectacularly bad from the point of view of making code that has few bugs and can be maintained and extended.
It seems to be a sadly common thing in the Game Development branch of IT that people who have very little knowledge of how to architect large-scale solutions and how to make effective and efficient software development processes like to opine, from their peak-certainty (and ignorance) spot on the Dunning-Kruger curve, about software architecture concerns without even a mention of things like development process efficiency in aggregate (not just coding speed, which is the least important part of it), inter- and intra-team dependencies, flexibility for maintainability and extensibility, bug reduction, and bug discovery and removal efficiency.
Maybe it’s something to do with so many developers in the industry not having to maintain their own code (game shipped = crap code and design problems solved) and being on average more junior than the rest of the industry, so less likely to have seen enough projects in enough different situations to have grown beyond being just coders and into awareness of technical design and architectural concerns in the software development process?
I’m a little sad and a little angry that people who have not demonstrated much in the way of wisdom in terms of software development processes are trying to undo decades of hard-learned lessons without even understanding why those things are there, a bit like saying “I never had a car accident and don’t like wearing a seatbelt, so I want to convince everybody else not to wear seatbelts.”
Aceticon
I’ve reproduced my reply below:
@Aceticon OO does not reduce the likelihood of bugs. It increases them, because the programmer has to memorize a giant map of connections that is documented in a non-hierarchical manner. Tracking the state of variables (especially hidden state within objects) across the run-time of an application is the biggest source of bugs. Coders forget what is set to what, and when they pass that problem down to the next coder (a client, a new hire, etc.), the responsibility to know what is hidden adds to the likelihood of bugs. Bugs also scale with LOC (lines of code), because code is text and therefore has to be digested slowly and sequentially; it makes little use of visual reasoning, so reading the code is the biggest bottleneck to understanding it. Just because you think reading is fun doesn’t mean the rest of the world of competent engineers does.
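Here is a minimal Java sketch of the hidden-state problem I mean; the names (`Connection`, `helper`) are invented for illustration:

```java
// Hypothetical sketch: hidden state inside an object surprising a caller.
class Connection {
    private boolean open = false; // hidden state: invisible at call sites

    void open()  { open = true;  }
    void close() { open = false; }

    void send(String msg) {
        if (!open) throw new IllegalStateException("connection not open");
        System.out.println("sent: " + msg);
    }
}

public class HiddenStateDemo {
    static void helper(Connection c) {
        c.open();
        c.close(); // flips the hidden flag; the caller never sees it
    }

    public static void main(String[] args) {
        Connection c = new Connection();
        helper(c);       // somewhere far away, the state quietly changed
        c.send("hello"); // blows up at runtime; nothing at this call site
                         // told the reader the object was already closed
    }
}
```

To debug that crash you have to reconstruct the hidden flag’s history across every method that ever touched the object, which is exactly the memorization burden I’m describing.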
It amazes me that people like you take the initiative to spread your objectively trash opinions about OO. OO always was a dumpster fire, because its proponents took a paradigm with limited use cases and forced the rest of the programming paradigms to fit within its limitations.
Imperative, procedural code is the gold standard for understanding how a computer accomplishes what it does. Functional programming is a sophisticated, simplified interface for creating procedural code. OO is neither. OO is not a natural evolution of procedural code. Combining functions and variables into a logical unit is, and will remain, a totally incompetent idea except in the limited sense that it can model an actual physical object. When you attempt to force people to think of a process as an object (which is what Java does, in particular), you violate the need for code to represent what it literally does.
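To make that concrete, here is a hypothetical Java sketch (all names are my own invention) of the same procedure written directly and then noun-ified the way the language pushes you to write it:

```java
// The task itself: a procedure that turns input into output.
final class Procedural {
    static String buildReport(int[] sales) {
        int total = 0;
        for (int s : sales) total += s;
        return "total: " + total;
    }
}

// The same procedure forced into a noun, with construction ceremony
// added before anything actually runs.
class ReportGenerator {
    private final int[] sales;

    ReportGenerator(int[] sales) { this.sales = sales; }

    String generate() {
        int total = 0;
        for (int s : sales) total += s;
        return "total: " + total;
    }
}

public class ProcessAsObjectDemo {
    public static void main(String[] args) {
        int[] sales = {1, 2, 3};
        System.out.println(Procedural.buildReport(sales));         // the process, stated directly
        System.out.println(new ReportGenerator(sales).generate()); // the process, dressed as an object
    }
}
```

Both print the same thing; the second just hides the verb behind a noun.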
Lisp should have become the mainstay of systems development. Lisp machines should never have died out. It is unfortunate that clueless twits like you steered the programming industry in the wrong direction because you were ignorant of the functional paradigm.
By default, a program is a process. When you do something in the world, you don’t start an object; you start a process. You may say “start the car,” but in reality you are starting the process of driving. The object SERVES the goal. Programmers need to have a general goal in mind when they start their program, and therefore they must fit all other concepts and paradigms within the overarching paradigm of a process. When a program launches on the computer, the OS launches (say it with me) a PROCESS.
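Java itself concedes this at the front door. A trivial sketch (the class name is invented; the signature is the language’s own):

```java
// Execution begins with a procedure. The enclosing class is mandatory
// syntax in Java, not a design decision anyone made on purpose.
public class EntryPoint {
    public static void main(String[] args) { // the OS starts a process here
        System.out.println("a program is, first of all, a process");
    }
}
```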
When you are diagnosing a bug in software, you have to identify WHEN the bug happens and under what conditions. You have to be able to simulate the process and the state of the variables in your mind to find the problem. OO intentionally makes this difficult and therefore causes bugapalooza. To diagnose the problem, your brain must convert the OO code into an AST (abstract syntax tree) and traverse the tree, selecting the branches that reflect the decisions made by the computer as well as the changes in the state of the variables. This is why OO is inferior for debugging: it forces you to sit and make guesses about where the damn bug is. Lisp, by contrast, inherently aids you in debugging your code, because its structure mimics the AST, so all you have to do is follow the code from start to finish to figure out where the bug happened.
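Here’s a hypothetical Java sketch (all names invented) of the guessing game I’m describing: at the call site, nothing in the text tells you which body of `pay()` will run.

```java
interface Account {
    int pay(int amount);
}

class Basic implements Account {
    public int pay(int amount) { return amount; }
}

class Discounted implements Account {
    public int pay(int amount) { return amount - 10; } // candidate bug
}

public class DispatchDemo {
    static int charge(Account a, int amount) {
        // To trace a wrong total from here, you must reconstruct which
        // concrete class was picked far away; the branch taken is not
        // visible anywhere in this straight-line text.
        return a.pay(amount);
    }

    static Account pickAccount() {
        return new Discounted(); // the decision the debugger must recover
    }

    public static void main(String[] args) {
        System.out.println(charge(pickAccount(), 100));
    }
}
```

A flat, procedural version of the same logic would put the branch right where you read it.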
Not only is the logical analysis here solid, these observations are substantiated by evolutionary biology and child psychology. First, the human prefrontal cortex (the planning area) evolved out of the motor cortex: thought is abstract action (source: Dr. Jordan Peterson’s lectures, free on YouTube). Second, children learn to execute actions before they learn to simulate the state of objects in their environment, a milestone called object permanence.

OO forces the user to constantly interrupt their attention from what they are doing to retrieve status information about categories of objects and the unknown state of those objects. It kicks you out of your execution and working-memory loop and sidetracks you with a long-term semantic-memory recall task. In other words, it causes a biological “hard fault.” You wouldn’t use a knife to cut chicken if you couldn’t see its shape; for all you know, some idiot labeled their dinner knife a kitchen knife. Likewise, in code you should never trust another programmer’s interpretation of reality and what they believe is a reasonable thing to hide from you. You want to know the exact shape of that knife and how you can use it. OO classes obfuscate details behind category labels.

And if you still can’t understand why objects are so clearly unintuitive, just look at the rest of the animal kingdom: every living organism does things, but not all of them organize their thoughts into objects. The “recognition” of any object they interact with is predicated on the environment’s signals (in the form of neurotransmitters and chemical reactions). Action is the lowest level of existence and the most overarching mode of existence for all organisms; all other forms of cognition are subservient to it.
It is incredible that people like you, so committed to your opinions about the benefits of OO, are allowed to work in the industry and repeatedly pollute it with your shitty code habits and bad influence.
Programming paradigms should be taught in this order:
Imperative → Procedural → Functional → Actor → Object Oriented
Logic programming (Prolog) should be added to that list somewhere as well, but I’m not sure where. Probably after Object Oriented, since Prolog requires reasoning about objects.
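For the first three steps, one task is enough to show the progression. A rough Java sketch (the task and names are my own invention; the actor and logic styles don’t fit natively in Java, so they’re omitted):

```java
import java.util.stream.IntStream;

// One task (sum of squares of 1..5) written three ways, so the
// rungs of the ladder can be compared directly.
public class ParadigmLadder {
    public static void main(String[] args) {
        // Imperative: raw statements mutating state in sequence.
        int total = 0;
        for (int i = 1; i <= 5; i++) {
            total += i * i;
        }
        System.out.println(total);

        // Procedural: the same steps, named and reusable as a procedure.
        System.out.println(sumOfSquares(5));

        // Functional: the loop and the mutation disappear into composition.
        System.out.println(IntStream.rangeClosed(1, 5).map(i -> i * i).sum());
    }

    static int sumOfSquares(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += i * i;
        }
        return total;
    }
}
```

Each rung builds on the one below it, which is exactly why the order matters when teaching.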
AMDphreak