The context of OOP and FP

If you want lots of reactions when you tweet, a sure recipe is to talk about Object-Oriented Programming (OOP) vs Functional Programming (FP).

Let’s throw the cat among the pigeons with yet another blog post on the Internet giving a personal perspective on OOP vs FP. I’m far from being a functional expert: I code professionally mainly in C#, sometimes in F#, and I only play with Haskell and Idris in coding dojos. I’m not trying to convince anyone of anything. I would just like to share my thoughts about these two important paradigms.

And the first step would be to agree on what’s OOP and what’s FP.

What’s OOP?

To really grasp the philosophy, I highly encourage you to follow Alan Kay’s work. Of course, he’s not the only one behind this paradigm, but I find his explanations really clear. This keynote is a good start. In a nutshell, Alan has (among other things) a background in biology. The way complex organisms work is basically what inspired the OOP paradigm. In other words: collaboration between specialized components to achieve a complex task. Each component can be independent and doesn’t need to be aware of the full system. They just collaborate, receiving messages when something is required from them, and sending messages when they have done something meaningful for the system. How these components communicate, and how the orchestration between them is managed, is not imposed by the paradigm, and different languages have used different implementations. But this notion of independent components collaborating is the core concept of OOP. Not really what we see when we type OOP into DuckDuckGo…

What’s FP?


Instead of looking for inspiration in the biological field, Functional Programming is rooted in mathematics. I’ll quote Philip Wadler to give a better definition:
“A program in a pure functional language is written as a set of equations. Explicit data flow ensures that the value of an expression depends only on its free variables. Hence substitution of equals for equals is always valid, making such programs especially easy to reason about. […] Explicit data flow also ensures that the order of computation is irrelevant, making [functional] programs susceptible to lazy evaluation.”
What does he mean? If I write X = A + B, or Y = C * D, I don’t care about the actual values of A and B, or C and D. That’s a mathematical equation: it represents all the valid values for an addition of A and B, or a multiplication of C and D. Hence I can compose X with Y, and do substitution:
X >> Y = (A + B) >> Y = X >> (C * D) = (A + B) >> (C * D)
(>> is the composition operator in F#)
If any of these values can change (mutate) outside of my scope, all this reasoning no longer holds. It is even worse if not only the values but also the types can change. That’s why we say that in FP we can write code by thinking only locally: I know that my “equations” (or pure functions) are always true, with no need to think about the rest of the world to ensure it.
Please note that this desire for “local thinking” can also be found in the “independent components” of OOP. But contrary to OOP, FP explains how it can be achieved, thanks to mathematics.
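To make this concrete, here is a minimal sketch in C# (my main language) rather than F#; the Add and Multiply functions and the numbers are just illustrative, not taken from the discussion above. Because the functions are pure, any call can be replaced by its result without changing the behaviour of the program:

using System;

static class Program
{
    // Two pure functions: their results depend only on their inputs.
    static int Add(int a, int b) => a + b;           // X = A + B
    static int Multiply(int c, int d) => c * d;      // Y = C * D

    static void Main()
    {
        Func<int, int> addTwo = x => Add(x, 2);
        Func<int, int> triple = x => Multiply(x, 3);

        // Manual composition, the C# equivalent of "addTwo >> triple" in F#.
        Func<int, int> composed = x => triple(addTwo(x));

        Console.WriteLine(composed(3));            // 15
        Console.WriteLine(Multiply(Add(3, 2), 3)); // 15 as well: substitution of equals for equals
    }
}

If Add or Multiply could mutate shared state, the two WriteLine lines could print different results, and this local reasoning would be lost.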

Which one is the best?

Finally, here is the answer the whole industry has been looking for for so many years: neither of them, because this is not the right question. The question should be: which one is the best in my context? And now we can start to talk. I think the answer is easier than what many believe.
The more constraints you have, the harder it is to write code that works, and the easier it is to maintain that code in the long term. What are code constraints? We have many of them depending on the language: syntax, immutability, no side effects, unit tests, types and dependent types, for example. Is it better to write code faster, or to have long-term maintainable code? In a start-up context, I probably need to produce code quickly, until I find the right product. In a critical context where lives are involved, I probably want something robust, even if I need a year to write it. And of course, there is a whole scale from “quick and dirty” to “mathematically robust”, and your context can sit anywhere on it.

Just tools

Functional Programming is by nature more constrained than Object-Oriented Programming, because it forbids mutability and side effects. When the language also uses types, it leads to powerful concepts like Type-Driven Development, where we can, by design, remove invalid states from our system to make it more robust at runtime. And when the language also uses dependent types, we can even encode business rules into the types, so that code violating a business rule does not even compile. As a drawback, such code is harder to write. But when you manage to write it, you can trust the famous quote “if it compiles, it works”. Maintaining this code is usually easier than maintaining a huge OO codebase, because thanks to pure functions we can use local reasoning to maintain the system. Please note the “usually” in the previous sentence: a “good” OO codebase can in theory have this property of local thinking, but the fact that side effects and global impact are not forbidden by design makes it almost impossible to preserve at scale (at least in my experience).
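As an illustration of removing invalid states by design, here is a minimal C# sketch; the EmailAddress type and its (deliberately naive) validation rule are hypothetical, not taken from any particular library:

public sealed class EmailAddress
{
    public string Value { get; }

    // The constructor is private: the only way to obtain an EmailAddress is through Create.
    private EmailAddress(string value) => Value = value;

    // Validation happens once, at the boundary. The caller is forced to handle failure here.
    public static EmailAddress? Create(string candidate)
        => candidate.Contains("@") ? new EmailAddress(candidate) : null;

    public override string ToString() => Value;
}

Every function downstream can now take an EmailAddress instead of a string and never needs to re-validate it: an invalid email address simply cannot exist in the system. Languages with richer type systems push this idea much further, but the principle is the same.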

FP is hard to write, but easy to maintain (easier to reason about). OOP is easier to write, but harder to maintain (harder to reason about due to side effects).

Patterns of Enterprise Application Architecture

Patterns of Enterprise Application Architecture is one of the many interesting books written by Martin Fowler. I have a lot of respect for Martin, mainly because he has been able to constantly challenge himself and improve his craft for several decades. Here is what I like and dislike about this book.

A bit hard to read

Like Eric Evans, Martin Fowler has tons of interesting things to say. Unfortunately, both have a style that is hard to read, mainly because it is dense in content, but also because it is not concise. Let me be clear: I don’t pretend in any way to be a better writer than they are. I just find what they write as interesting as it is hard to read.

Not totally up to date

Because the book was written around 2000, with many technical details, some of the patterns described are no longer relevant. That is not such a bad thing, because it gives a historical perspective. For instance, it was released when OOP was still somewhat marginal and patterns like ORM were not widely spread. It is interesting to see what the alternatives were, and how OOP was perceived before it became mainstream.

A must-read stays a must-read

This book has greatly influenced our industry over the last decades. From this perspective it is still valuable, because it helps us understand what IT looked like in the 2000s, and how it has evolved since. The book also contains some seeds of the original ideas behind the emergence of Domain-Driven Design, such as the Domain Model pattern. It helps us understand the original link between the Domain Model and OOP, and thus the influence of OOP on the mythical blue book by Eric Evans.

Thank you Martin for this book, and for everything you have done for IT over so many years.

About Inheritance and Composition

Like dependency injection, inheritance and composition are easily misunderstood.

Remember that programs written in Object-Oriented Programming (OOP) are designed by building them out of objects that interact with one another. Technically, we have two ways to share code in an OOP style: either composition or inheritance.


Composition

The purpose of composition is to make wholes out of parts. These wholes (or components) will collaborate to achieve a common goal.
Composition has a natural translation in the real world. A car has 4 wheels. Without the wheels the car is still a car, but it loses its ability to move.
Back to code, a bank account might need a print service. Without the print service, the bank account is still a bank account, but it loses its ability to be printed. It is a “has a” relationship.
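Here is a minimal C# sketch of this “has a” relationship; the IPrintService and BankAccount names are just for illustration:

using System;

// A capability the account needs, provided by composition.
public interface IPrintService
{
    void Print(string content);
}

public sealed class ConsolePrintService : IPrintService
{
    public void Print(string content) => Console.WriteLine(content);
}

public sealed class BankAccount
{
    private readonly IPrintService _printService; // the account "has a" print service
    public decimal Balance { get; private set; }

    public BankAccount(IPrintService printService) => _printService = printService;

    public void Deposit(decimal amount) => Balance += amount;

    // Remove the print service and the account is still an account;
    // it only loses its ability to be printed.
    public void PrintStatement() => _printService.Print($"Balance: {Balance}");
}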


Inheritance

Inheritance allows expressing and ordering concepts from generalized to specialized in a classification hierarchy.
The meaning of a class is visible in the inputs and outputs (public contract) of the class. As a child of a class, we implicitly accept responsibility for the public contract of the superclass. We are tightly coupled to this superclass: it is an “is a” relationship.
A human being is a mammal. No exceptions: all mammals have a neocortex, and without a neocortex we are no longer mammals. Back to code, I may design a “Human” child class inheriting from a “Mammal” base class, both sharing a NeoCortex property.
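In C#, a minimal sketch of this “is a” relationship could look like this (the classes are only illustrative):

public class NeoCortex
{
    public void Think() { /* ... */ }
}

public class Mammal
{
    // Every mammal has a neocortex, so it belongs to the base class.
    public NeoCortex NeoCortex { get; } = new NeoCortex();
}

// A Human "is a" Mammal: it accepts the full public contract of its parent,
// including the NeoCortex property.
public class Human : Mammal
{
    public void Speak() { /* ... */ }
}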


Common mistake

The root of the problem is that sharing code can be done either by composition or by inheritance, but the impacts are not the same.
Any class can be composed of other classes whose capabilities it requires.
But with inheritance, the key is to decide whether we accept the responsibilities of the superclass’s public contract. If our parent has capabilities or properties we don’t need, we deny a part of that responsibility. The “is a” relationship is broken as soon as there is something in the parent that isn’t true for the child. Adding responsibilities to the parent because “some children might need it” is not OK: either all children need it, or we need another abstraction, or we need composition.
This is a problem because inheritance implies tight coupling. Refactoring a capability can impact all the children of a base class, even though a child that does not use the refactored capability should not be impacted.


Example

Lots of modern UI frameworks use the concept of a ViewModel, even if it may be called something else. It is appealing to build a ViewModelBase with everything any child might need, like the ability to be printed or to show a dialog box.
What happens when a child inherits this ViewModelBase but doesn’t need to be printed or to show dialogs? It accepts responsibilities that do not make sense for it. The signature and the meaning of the class are blurred. Without the printing or dialog abilities, a ViewModel is still a ViewModel.

On the other hand, implementing a RaisePropertyChanged function in ViewModelBase makes sense, because any ViewModel is by definition glue between the business logic and the view. It needs the ability to inform the view when a property is updated. Without this ability, it’s no longer a ViewModel.
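Here is how these two ideas could be combined in a C# sketch, based on the usual INotifyPropertyChanged pattern; the InvoiceViewModel and IPrintService names are just for illustration:

using System.ComponentModel;
using System.Runtime.CompilerServices;

// Inheritance for what every ViewModel needs by definition.
public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void RaisePropertyChanged([CallerMemberName] string propertyName = null)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}

// Composition for what only some ViewModels need (printing, dialogs, ...).
public interface IPrintService
{
    void Print(string content);
}

public sealed class InvoiceViewModel : ViewModelBase
{
    private readonly IPrintService _printService;
    private string _customerName = "";

    public InvoiceViewModel(IPrintService printService) => _printService = printService;

    public string CustomerName
    {
        get => _customerName;
        set { _customerName = value; RaisePropertyChanged(); }
    }

    public void Print() => _printService.Print($"Invoice for {CustomerName}");
}

A ViewModel that has nothing to print simply does not take an IPrintService, and the base class stays focused on the one responsibility every ViewModel truly shares.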

How to choose?

The mantra “favor composition over inheritance” is helpful, but like every mantra it has its limits. It holds because sharing behaviours is always possible by composition, and composition is more decoupled, but it is not always the wiser choice.

I think the “has a” vs “is a” distinction, together with the reminder that every child must take on the full responsibility of its parent in the case of inheritance, is enough to help choose the best option.