If hardware has followed Moore's law, what happened to software? We have built assemblers, compilers, and high-level languages. And now we have powerful machines, and all these shiny new languages and frameworks promising productivity. So why the hell are we still shipping lame software?
The software crisis
According to Wikipedia, the term “software crisis” was coined by some attendees at the first NATO Software Engineering Conference in 1968. The crisis manifested itself in several ways:
- Projects running over-budget
- Projects running over-time
- Software was very inefficient
- Software was of low quality
- Software often did not meet requirements
- Projects were unmanageable and code difficult to maintain
- Software was never delivered
That's a good summary of the problems I've encountered in ten years of software development. And apparently I'm not the only one: some of us even talk about a software apocalypse. It's both confusing and sad to see that our industry's perception hasn't changed. What could be so hard about software? Why do coders write so many bugs?
The software labor
Software has always been, and still is, underestimated. Have you ever thought about the names, hardware and software? These names reflect the root problem of our industry. We believed that building the physical electronic device would be the hard part. Nobody anticipated that programming these machines could be complicated. That's why the first software programmers were women. We cannot be proud of that: it was only because some highly respected (male) scientists of the 1950s believed it would be an easy task.
This fundamental misconception is still widespread today, especially among people who have never tried to build software themselves. It partly explains why our profession is underestimated and poorly managed. Very few people understand the implications of a "soft" device. So they apply what Fordism and Taylorism taught them: hire managers to oversee an army of low-qualified workers on a production line.
The software engineering
Software is, by definition, not a physical thing. It removes physical and temporal constraints. You want to change a past event? No worries. We can build rules, and break them as we wish. You want to change a behaviour? Let me add a few conditions. We're not constrained by time or space. Not even by logic. We are only limited by our imagination. (And a bit by computing power and the money we're supposed to earn with the software. But since hardware is so performant, and IT generates so much money, the main limitation is still our imagination.)
How do you build a bridge? With a series of calculations and drawings, taking into account a huge number of parameters from the physical environment, using the laws of physics to assemble physical components into something that will stand for a while. But mathematical and physical laws do not apply to a soft world. A world where almost any rule can be broken. A world where the number of possible states can be greater than the number of atoms in the whole universe. Standard line-of-business applications have a complexity we cannot fully manage (and we commonly refuse to accept this fact). And to add more fun, even if we manage to make it work pretty well, we are still not sure it will solve any actual problem.
A doomed profession?
Rather a huge opportunity, I think. Because software's root problems have been the same for decades, we can be actors in its evolution and learn from our short history. From the easiest to the hardest, here are a few propositions to improve our craft. Sorry if some might seem obvious, but as a software professional, I'm in a good position to know that they are not.
Let's start with the basics: automating tests. I won't re-explain here why and how. But we need to acknowledge that it is currently one of the best ways to manage this invisible complexity: a series of automated tests validating the behaviours of the software as it evolves. It's a way of feeling what we are designing. Like with WYSIWYG, we want to see immediately when an expected behaviour is broken, and what the impact of our modifications will be.
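As a minimal sketch of what this looks like in practice (the `apply_discount` rule and its test names are hypothetical, invented for illustration), a small automated suite pins down expected behaviours so a regression is visible the moment it appears:

```python
import unittest

def apply_discount(total: float, percent: float) -> float:
    """A hypothetical business rule: apply a percentage discount to a total."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_no_discount_leaves_total_unchanged(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_full_discount_is_free(self):
        self.assertEqual(apply_discount(100.0, 100), 0.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically so a broken behaviour is reported immediately.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test documents one expected behaviour; change the rule and the failing test tells you exactly which expectation you broke.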
Another point is to use fewer states and more types, because that is where the complexity hides. Mathematically, specialized types reduce the number of possible states in a function's outputs. Solutions like Property Based Testing can help to test the limits of such systems.
A harder way is to understand the problem we are trying to solve. In software, there is no clear separation between pure technique and the design of an appropriate solution. Which explains why there can't be a clear separation between the maker (the coder) and the thinker (the system designer). To be good, we need to understand the business and to be technically efficient. This is where Domain Driven Design and related practices like Living Documentation can help.
And finally, we can learn and apply more maths in our code. It fundamentally adds some laws to our chaotic soft world. Laws that greatly improve the code, like avoiding side effects and mutable state. This is where Functional Programming and TLA+ can help.
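A tiny sketch of what "avoiding side effects and mutable state" buys us (the running-total example is invented for illustration): the impure version depends on hidden history, while the pure version obeys a law, same inputs, same output, always:

```python
from functools import reduce

# Impure style: mutates shared state, so the result depends on call history.
running_total = 0

def add_impure(x: int) -> int:
    global running_total
    running_total += x          # hidden side effect
    return running_total        # calling twice with the same x gives different results

# Pure style: the output depends only on the inputs, which makes the
# function trivial to test, safe to reuse, and safe to parallelize.
def total(amounts: tuple) -> int:
    return reduce(lambda acc, x: acc + x, amounts, 0)

assert total((1, 2, 3)) == 6
assert total((1, 2, 3)) == 6    # same inputs, same output, every time
```

This referential transparency is the kind of law that lets tools like TLA+ reason about a design at all: behaviour becomes a function of inputs, not of an invisible accumulated past.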
Think outside the box
Finally, we need to remember how young our industry is, and the fact that we've been living in a crisis since the beginning. It means that we need more imagination to solve our problems. For that, I encourage you to listen to people like Alan Kay or Bret Victor, and to learn about our history. What if writing software is just not the right way to make it? What if we are still living in the dark age of software development?
We build a lot on (too) small foundations; maybe it's time to challenge ourselves.
Of course, the title of this post is a reference to an amazing OOPSLA keynote by Alan Kay: The Computer Revolution Hasn't Happened Yet.