Many programming “best practices” taught today are performance disasters waiting to happen.
I thought the point of “clean code” was to make a source code base comprehensible and maintainable by the people in charge of working with and deploying it. If you optimize for people reading the code rather than for performance metrics, then of course you should expect gains when you switch to optimizing for performance. The trade-off is code that is more performant but harder to read, with interdependence and convolution between use cases. That makes it harder to update, which means upgrades are slower and more costly in engineering resources.
In a lot of modern software, you don’t need extreme performance. In the fields that do, you’ll find guidelines and other resources that explain which paradigms to avoid and which are outright forbidden. For example, I have heard of C++ exceptions and object-oriented features being forbidden in aircraft control software, for many of the reasons outlined in this article. But not everyone writes aircraft control code, so rather than saying clean code is “good” or clean code is “bad,” like a lot of things this should be “it depends on your needs.”
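To make that concrete, here is a minimal sketch of the return-code style such guidelines tend to mandate instead of exceptions (the Status values and the range check are invented for illustration; the actual rules live in standards like JSF AV C++):

```cpp
#include <cstdint>

enum class Status : std::uint8_t { Ok, OutOfRange };

// Instead of throwing, report failure through the return value and
// deliver the result via an out-parameter. Every failure path is
// visible at the call site; there is no hidden control flow or
// stack unwinding.
Status read_altitude_m(std::int32_t raw, std::int32_t& altitude_m) {
    if (raw < 0 || raw > 20'000) {
        return Status::OutOfRange;
    }
    altitude_m = raw;
    return Status::Ok;
}

int main() {
    std::int32_t alt = 0;
    if (read_altitude_m(12'500, alt) != Status::Ok) {
        return 1;  // handle the error locally and explicitly
    }
    return 0;
}
```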
so rather than saying clean code is “good” or clean code is “bad,” like a lot of things this should be “it depends on your needs.”
That doesn’t generate clicks.
I think you are right that optimising engineering cost is the goal of these practices, but I believe it is a bad thing.
Nowadays we have the most powerful hardware we have ever had, yet everything is slow. Sure, we reduce dev cost, but the end user is paying the difference with their time, in my opinion.
In the end, the only people who benefit from this are the owners of the product, who make more money selling unoptimised software. Devs maybe get a bit more money, but probably not that much.
I think you are right that optimising engineering cost is the goal of these practices, but I believe it is a bad thing.
In the end, the only people who benefit from this are the owners of the product […]
Yes, that’s exactly how the for-profit software industry (and really any for-profit industry) is run. The owners maximize their benefit. If you want to change that, it’s a different problem on a much larger scale, and you will not see a for-profit company do anything else.
What a dumb article. Sounds like an old C graybeard who’s never understood the point of proper type safety or readable code. None of the performance gains the author talks about actually matter, whereas the entire point of clean code is to make it easier to read and maintain by other programmers. Let’s also not forget this important quote from Donald Knuth: “premature optimization is the root of all evil”.
Simply put, unless you’re working in extremely resource-constrained systems, or have some code snippet being run an incredibly large number of times over a humongous amount of data, these kinds of performance optimizations simply don’t matter and you get more benefit from writing the code in a way that reduces bugs and is easier to read. Heck, most of the time compiler optimizations make this entire argument moot anyway.
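For anyone who hasn’t read it, the contrast being argued over looks roughly like this (an illustrative sketch, not the article’s actual code):

```cpp
#include <vector>

// "Clean code" style: one class per shape, virtual dispatch,
// easy to extend with new shapes later.
struct Shape {
    virtual ~Shape() = default;
    virtual float area() const = 0;
};
struct Square : Shape {
    float side;
    explicit Square(float s) : side(s) {}
    float area() const override { return side * side; }
};
struct Circle : Shape {
    float radius;
    explicit Circle(float r) : radius(r) {}
    float area() const override { return 3.14159265f * radius * radius; }
};

// Performance-oriented style: a flat struct, a switch, and one
// contiguous, cache-friendly array. Faster to iterate over, but
// adding a shape now means touching every switch.
enum class Kind { Square, Circle };
struct FlatShape { Kind kind; float dim; };

float total_area(const std::vector<FlatShape>& shapes) {
    float sum = 0.0f;
    for (const FlatShape& s : shapes) {
        switch (s.kind) {
            case Kind::Square: sum += s.dim * s.dim; break;
            case Kind::Circle: sum += 3.14159265f * s.dim * s.dim; break;
        }
    }
    return sum;
}

int main() {
    std::vector<FlatShape> shapes{{Kind::Square, 2.0f}, {Kind::Circle, 1.0f}};
    return total_area(shapes) > 0.0f ? 0 : 1;
}
```

Whether the second version’s speed is worth its rigidity is exactly the “it depends on your needs” point from the comment above.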
OOP was a lie.
Hooo boy… I work in embedded, and modern MCUs and modern compilers allow for well-abstracted, readable code with a small memory footprint. I currently work with 256 KB of RAM and 1 MB of flash, which is plenty, but I have previously worked on systems with a fraction of that, and it is still possible to write readable code there.
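To illustrate (a sketch only: the register address below is made up, and real code would use the vendor’s device header), this is the kind of zero-cost abstraction modern MCUs and compilers make practical:

```cpp
#include <cstdint>

// A tiny GPIO "driver" templated on a port register address and pin
// number. The abstraction lives entirely at compile time: no driver
// objects in RAM, no virtual dispatch, no heap.
template <std::uintptr_t PortAddr, std::uint8_t Pin>
struct GpioPin {
    static void set() {
        *reinterpret_cast<volatile std::uint32_t*>(PortAddr) |= (1u << Pin);
    }
    static void clear() {
        *reinterpret_cast<volatile std::uint32_t*>(PortAddr) &= ~(1u << Pin);
    }
};

// Readable name at every call site; the address is hypothetical.
using StatusLed = GpioPin<0x4002'0014, 5>;

void blink_once() {
    StatusLed::set();
    // ... a delay would go here on real hardware ...
    StatusLed::clear();
}
```

With optimizations enabled, each of those calls typically compiles down to a single read-modify-write of the register, so the readable names cost nothing at run time.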