The Computer Boys Take Over

Has it ever happened to you that a book is so good you can’t stop reading it? I’ve experienced it a few times: with H2G2, The Lord of the Rings or The Dark Tower, for example. But this is the first time it has happened with an IT book: The Computer Boys Take Over.

Why it resonates with me: the short story

What’s the Software Crisis? Is it old? Why do we have so few women in IT? Why is our job still not understood? Why are we so often in conflict with middle management? These questions are recurrent in our industry. I’ve heard them in many conferences and blog posts. This book provides answers, or at least good insights, about the roots of these problems.

IT history

Did you learn about IT history at school? I didn’t, and that’s a shame. Learning about the first decades of our industry (roughly 1950-1980) is full of lessons. Many of us believe that our profession is mature. It isn’t. It has existed for less than one human lifetime, and the computer industry is undergoing a fast-paced evolution. Most of us don’t even remember what happened ten years ago. It would be both pretentious and dangerous to assume that our industry is mature. This book describes the beginnings of IT, and shows clearly that the software evolution hasn’t happened yet. The fundamental problems we had 30 years ago (projects running over time and budget, low quality, unmanageable software employees, lack of resources…) are still relevant today. I would even argue that they are more relevant today than 30 years ago, because of the explosion of personal software-driven devices.

Coder diversity

The book explains well how we (deliberately or not) built a male-oriented industry. From the lack of recognition of the first developers (the women who programmed the ENIAC), to the ravages of the IBM PAT (Programmer Aptitude Test) and the drive to overcome the “lack of software developers”, we have all the keys to understand how and why women were not welcome for such a long time (of course, we’re not the only industry to suffer from this problem, which is no reason to ignore it).

The challenge for management

Since I started working as a professional, I have always felt uncomfortable with middle management and huge hierarchies. I just feel it’s a waste, and I can’t understand why “Excel managers” need to assert their authority over me. And by the way, how can they have authority over me without understanding what I’m doing? I’ve read a lot about “Generation Y” and experts trying to explain this phenomenon. But this book also provides a potential answer: what if software removes the need for middle management as defined by the classical Taylorist understanding of work? That would definitely create tension between middle managers and software developers. It would also explain why it’s important for these managers to keep developers in a code-monkey position. “Please code, don’t think. We are here to think and manage; you are here to code and obey.”

The representation of developers

Another discussion I often have with other passionate developers is about the lack of representation of our profession. Should we create a union? An association? Who would have the legitimacy for that? This is another question the book tackles. It explains that it was already being asked a few decades ago. That’s what led to the creation of the ACM (Association for Computing Machinery) and the IEEE (Institute of Electrical and Electronics Engineers). Most of us don’t know much about them, and these associations carry historical points of conflict between the “pragmatic business programmers” and the “scientific software engineers”.

Software is hard to do

This book gave me many insights, and if you are a regular reader of this blog, you will probably recognize how it has influenced my recent thoughts about software. It helped me reach a deeper understanding of our industry, better than any other book before. As always, I now have even more questions, which is proof that this book is a must-read for anybody willing to understand the past in order to improve the future of the software industry.

 

PS: If you are wondering why such a great book has a title that may look sexist, please understand that it is ironic. It highlights the fact that, for decades, this bunch of unmanageable, weird nerds was referred to as the “Computer Boys” (usually with a pejorative connotation).

 

Patterns of Enterprise Application Architecture

Patterns of Enterprise Application Architecture is one of the many interesting books written by Martin Fowler. I have a lot of respect for Martin, mainly because he has been able to constantly challenge himself and improve his craft for several decades. Here is what I like and dislike about this book.

A bit hard to read

Like Eric Evans, Martin Fowler has tons of interesting things to say. Unfortunately, both have a style that is hard to read, mainly because it is dense in content, but also because it is not concise. Let me be clear: I don’t pretend in any way to be a better writer than they are. I just find what they write as interesting as it is hard to read.

Not totally up to date

Because the book was written around 2000, with many technical details, some of the patterns described are no longer relevant. Which is not that bad, because it gives a historical perspective. For instance, it was released when OOP was still somewhat marginal, and patterns like ORM were not yet widespread. It is interesting to see what the alternatives were, and how OOP was perceived before it became mainstream.

A must-read stays a must-read

This book has greatly influenced our industry over the last decades. From this perspective it is still valuable, because it helps us understand what IT looked like in the 2000s, and how it has evolved. The book also contains some seeds of the original ideas behind the emergence of Domain-Driven Design, such as the Domain Model pattern. It helps us understand the original link between the Domain Model and OOP, and thus the influence of OOP on the mythical blue book by Eric Evans.

Thank you, Martin, for this book and for everything you have done for IT over the years.

 

 

 

The Software Evolution Hasn’t Happened Yet

If hardware has followed Moore’s law, what has happened to software? We have built assemblers, compilers, and high-level languages. And now we have powerful machines, and all these shiny new languages and frameworks promising productivity. So why the hell are we still delivering lame software?

The software crisis

According to Wikipedia, the term “software crisis” was coined by some attendees at the first NATO Software Engineering Conference in 1968. The crisis manifested itself in several ways:

  • Projects running over-budget
  • Projects running over-time
  • Software was very inefficient
  • Software was of low quality
  • Software often did not meet requirements
  • Projects were unmanageable and code difficult to maintain
  • Software was never delivered

That’s a good summary of the problems I’ve met in 10 years of software development. And apparently I’m not the only one: some of us talk about a software apocalypse. It’s both confusing and sad to see that this perception of our industry hasn’t changed. What could be so hard about software? Why do coders write so many bugs?

The software labor

Software has always been, and still is, underestimated. Have you ever thought about the names hardware and software? These names reflect the root problem of our industry. We believed that building the physical electronic devices would be the hard part. Nobody anticipated that programming these machines could be complicated. That’s why the first programmers were women. We cannot be proud of that: it was simply because some highly respected (male) scientists of the 1950s believed it would be an easy task.

This fundamental misconception is still widespread today, especially among people who have never tried to build software themselves. It partly explains why our profession is underestimated and poorly managed. Very few people understand the implications of a “soft” device. So they try to apply what Fordism and Taylorism taught them: hire managers to supervise an army of low-qualified workers on a production line.

The software engineering

Software is, by definition, not a physical thing. It removes constraints, both physical and temporal. You want to change a past event? No worries. We can build rules, and break them as we wish. You want to change a behaviour? Let me add a few conditions. We’re not constrained by time or space. Not even by logic. We are only limited by our imagination. (And a bit by computing power and by the money we’re supposed to earn with the software. But since hardware is really performant, and since IT generates so much money, the main limitation is still our imagination.)

How do you build a bridge? With a series of calculations and drawings, taking into account a huge number of parameters from the physical environment, in order to use the laws of physics to assemble physical components into something that will stand for a while. But mathematical and physical laws do not apply to a soft world. A world where almost any rule can be broken. A world where the number of possible states can be greater than the number of atoms in the whole universe. Standard line-of-business applications have a complexity that we cannot manage (and it is common for us not to accept this fact). And to add more fun, even if we manage to make it work pretty well, we are still not sure that it will solve any actual problem.

A doomed profession?

Rather a huge opportunity, I think. Because the root problems of software have been the same for decades, we can be the actors of its evolution and learn from our short history. From the easiest to the hardest, here are a few proposals to improve our craft. Sorry if some seem obvious, but as a software professional, I’m in a good position to know that they are not.

Let’s start with the basics: automating tests. I won’t re-explain the why and how here. But we need to acknowledge that it is currently one of the best ways to manage this invisible complexity: a series of automated tests validating the behaviours of the software as it evolves. It’s a way of feeling what we are designing. As with WYSIWYG, we want to see immediately when an expected behaviour is broken, and what the impact of our modifications will be.

Another point is to use fewer states and more types, because that is where the complexity hides. Mathematically, specialized types reduce the number of possible states in function outputs. Solutions like Property-Based Testing can help test the limits of such systems.
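To make this concrete, here is a small F# sketch (the Volume type, its bounds and all names are invented for illustration): a dedicated type narrows a raw int down to its valid states, and a hand-rolled property check, in the spirit of Property-Based Testing, verifies the constraint over many inputs.

```fsharp
// a raw int allows about 4 billion states; a dedicated type narrows them down
type Volume = Volume of int

// smart constructor: the only intended way to build a Volume in this sketch
let createVolume n =
    if n >= 0 && n <= 100 then Some (Volume n) else None

// a hand-rolled property check over many inputs; a library like FsCheck
// would generate such inputs for you
let propertyHolds =
    [ -50 .. 150 ]
    |> List.forall (fun n ->
        match createVolume n with
        | Some (Volume v) -> v >= 0 && v <= 100
        | None -> n < 0 || n > 100)
// propertyHolds evaluates to true
```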

A harder path is to understand the problem we are trying to solve. In software, there is no clear separation between pure technique and the design of an appropriate solution. This explains why there can’t be a clear separation between the maker (the coder) and the thinker (the system designer). To be good, we need to understand the business and to be technically efficient. This is where Domain-Driven Design and related practices like Living Documentation can help.

And finally, we can learn and apply more maths in our code. It fundamentally adds laws to our chaotic soft world, laws that greatly improve the code, like avoiding side effects and mutable state. This is where Functional Programming and TLA+ can help.

Think out of the box

Finally, we need to remember how young our industry is, and that we have been living in a crisis since the beginning. It means that we need more imagination to solve our problems. For that, I encourage you to listen to people like Alan Kay or Bret Victor, and to learn about our history. What if written software is just not the right way to do it? What if we are still living in the dark age of software development?

We have built a lot on (too) small foundations; maybe it’s time to challenge ourselves.

 

Of course, the title of this post is a reference to an amazing OOPSLA talk by Alan Kay: The Computer Revolution Hasn’t Happened Yet.

 

Diverse IT

Here is another privileged white guy standing up to talk about diversity. Let me explain why.

The Google memo, Uncle Bob and the Software Craftsmanship community

If you didn’t follow this thread, you can find some information here. The thing to understand is that it’s a big deal. There is a lot of anger, misconception and bad explanation between smart people. I would like to testify, as an average privileged developer, to why I ignored this matter for so long, and what I decided to do to start caring about it. Because, as Martin Fowler says, “you can’t choose if you hurt someone, but you can choose if you care”.

A long ignorance

I am a passionate developer, and I respect competent people. I really don’t care whether they are higher in the hierarchy, whether they are men, women, black, white or gay, whether they believe in any god(s), whether they have lots of experience or are juniors, whether they drink alcohol, eat meat, wear suits or t-shirts… I just care about how competent and passionate (the two are generally connected) they are.
And because of that, I believed I had the right behaviour. I treat everybody the same way; that’s fair, isn’t it?

A spark of light

But then I met different people and attended different conferences, especially in the Software Craftsmanship community, where I learned a lot about this topic. Thanks to them, I realized I had made two important mistakes: assuming that equality is enough, and assuming that I’m not legitimate to talk about it.

 

The equality inequality

Treating everyone equally is great. But it’s unfair at the same time. If you were a woman in the twentieth century and won a Nobel Prize, are you equal to or better than the other Nobel scientists? From a purely cognitive perspective, you are probably equally smart. But to achieve the same scientific results in that male world, you probably had to fight much harder. You had to fight for your rights, to be recognized by your peers, much more than an average white man.
The same thing happens in IT today. We must acknowledge, for example, that to be equally recognized in our field, women need to work harder than men.

 

The legitimacy paradox

Because I didn’t feel legitimate on this topic, I didn’t talk about it. But because I never spoke about it, people like me might think they shouldn’t care about it either. Worse, it’s always the same few legitimate people who talk about it. But what if they would rather talk about TDD or CQRS/ES instead?

What to do?

First, we have to be careful in our talks, especially in public presentations: careful about gender, for instance when we use “guys”, “he”, “craftsman” and so on… Of course, the same goes for blog posts and Twitter.

Every day at work, we can watch our own jokes and behaviour. Watch other people’s jokes and behaviours as well, learn to identify when someone is hurt, and let them know that you care and will do your best to keep it from happening again. And when we hurt someone (because everyone makes mistakes), we should just apologize. No need to look for excuses; just apologize. Uncle Bob unfortunately gave us a good example of what not to do recently.

As I already said, it’s important to speak about this at work, in conferences and everywhere, especially if we are part of the privileged guys.

It’s far from enough to fix the problem, but it’s a start, and at least, it shows that we care.

 

 

Thanks to the awesome Franzi, who especially helped me grow on this topic.

 

Explaining monads

If you have grasped the mathematical basics, and if you have understood what a monoid is, you are ready to discover monads. Of course, we’ll continue to have fun with F#. There are tons of approaches to explaining what monads are. Personally, I got it once I understood the problem they solve, so that will be the approach of this blog post. Again, the following code can be found online.

First compositions rule

Remember the function composition rule: “The result of each function is passed as the argument of the next, and the result of the last one is the result of the whole.”

It means that functions must be “pluggable” (the output of the first must be compatible with the input of the second). But it also implies that we need a world free of side effects. If one of the functions can throw an error, there is no more composition, because the result of my function (an exception is thrown) is not passed as the argument of the next, and the result of the last function will not be the result of the whole. Instead, it breaks the runtime, and someone at a higher level (the caller) must deal with it.

Note that nothing makes explicit the fact that the isPositive function can fail in this code. It compiles as if everything were fine, but it can fail at runtime.
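The original snippet is not reproduced here, so here is a minimal F# sketch of what it could look like (the function names toInt, isPositive and toString are assumptions, taken from the pipeline described later in this post):

```fsharp
// nothing in these signatures reveals a possible failure
let toInt (s: string) : int = int s          // can throw on bad input

let isPositive (i: int) : int =
    if i >= 0 then i else failwith "not positive"   // can throw as well

let toString (i: int) : string = string i

// the composition compiles as if everything were fine...
let pipeline : string -> string = toInt >> isPositive >> toString
// ...but pipeline "-1" or pipeline "abc" fails at runtime
```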

Sometimes vs Always

What if, instead of throwing an error sometimes (depending on the input), we always return a Result, which is either a success containing the value, or a failure?

I can wrap my result into something that defers the side effects. Of course, this implies changing my signatures. My functions are no longer compatible, because a Result<int> cannot be used as the input of the isPositive function: it expects a simple int.
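A sketch of this wrapped version, assuming a hand-written Result type with Success and Failure cases (error messages are illustrative):

```fsharp
// a Result is either a success containing the value, or a failure
type Result<'T> =
    | Success of 'T
    | Failure of string

let toInt (s: string) : Result<int> =
    match System.Int32.TryParse s with
    | true, i -> Success i
    | _ -> Failure "not a number"

let isPositive (i: int) : Result<int> =
    if i >= 0 then Success i else Failure "not positive"

let toString (i: int) : Result<string> = Success (string i)

// toInt >> isPositive no longer compiles:
// isPositive expects an int, not a Result<int>
```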

Bind to the rescue

What if I write a simple transformation function, which I will call bind? Its roles are:

  • In case of success: to unwrap the Result<int> into a simple int, and to call the next function with that value
  • In case of failure: to return the failure and not call the next function, because after all, we don’t need to execute the next function if the previous one failed.
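A minimal bind implementation along these lines could look like this (using the same hand-written Result type described above):

```fsharp
// the Result type described above
type Result<'T> =
    | Success of 'T
    | Failure of string

// bind plugs a function expecting a plain 'a onto a Result<'a>
let bind (f: 'a -> Result<'b>) (result: Result<'a>) : Result<'b> =
    match result with
    | Success x -> f x         // success: call the next function with the value
    | Failure e -> Failure e   // failure: short-circuit, skip the next function
```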

And now I can call this function between two incompatible functions and… voilà! They are pluggable again!

My final composition chains the functions with explicit calls to bind, and this time it compiles.
Of course, you probably don’t want to write explicit bind calls everywhere. F#, like many languages, allows us to define a proper bind operator: >>= (the common symbol for bind).

A more elegant composition then looks like this:
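Here is a self-contained sketch of the whole pipeline (names and error messages are illustrative, not the original code):

```fsharp
type Result<'T> =
    | Success of 'T
    | Failure of string

let bind f result =
    match result with
    | Success x -> f x
    | Failure e -> Failure e

// >>= is the common symbol for the bind operator
let (>>=) result f = bind f result

let toInt (s: string) =
    match System.Int32.TryParse s with
    | true, i -> Success i
    | _ -> Failure "not a number"

let isPositive i =
    if i >= 0 then Success i else Failure "not positive"

let toString (i: int) = Success (string i)

// the whole pipeline, composed despite the possible failures
let pipeline s = toInt s >>= isPositive >>= toString

// pipeline "42"  = Success "42"
// pipeline "-1"  = Failure "not positive"
// pipeline "abc" = Failure "not a number"
```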

I got rid of the side effects, thus I can compose my functions.

Dude, you lost me.

No problem, take a breath and think about that:

  • I wanted to compose functions, but they had side effects
  • I wanted to get rid of the side effects, thus I introduced a Result<T> instead of throwing an exception
  • My functions were no longer “pluggable”, thus I introduced a bind function to plug functions expecting a simple type onto the more complex Result<T> type
  • We defined a bind operator (>>=) to replace the explicit bind calls
  • In the end, I have a nice composition, without the side effects

This can be summed up as: we found a way to compose functions despite the side effects.

And where’s the monad?

Let’s decrypt the Wikipedia definition of a monad and see if something matches:

“Monads allow developer to build pipelines that process data in a series of steps, in which each action is decorated with the additional processing rules provided by the monad.”

Sounds familiar? In our case we had a pipeline (ToInt -> IsPositive -> ToString), and we decorated each step with the ability to convert from T to Result<T> (and to escape the pipeline in case of failure). You can see that we could have done other things in our bind function, such as logging, exception handling or IO management… We managed the side effects in the bind used to compose the functions, instead of in the functions themselves.

“A monad is defined by a return operator that creates values, and a bind operator used to link the actions in the pipeline; this definition must follow a set of axioms called monad laws, which are needed for the composition of actions in the pipeline to work properly. The result of the last one is the result of the whole.”

I think we have talked enough about the bind function. In our case, the return function is as simple as: return x = Success x (it wraps a plain value into a Result).

Here are the monad laws. First, return acts approximately as a neutral element of bind, in that:

  • (return x) >>= f ≡ f x
  • m >>= return ≡ m

Second, binding two functions in succession is the same as binding one function that can be determined from them (in other words, bind is associative):

  • (m >>= f) >>= g ≡ m >>= (x -> (f x >>= g))

This is what these tests demonstrate:
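As a sketch, the three laws can be checked like this (minimal definitions are repeated so the snippet stands on its own; names are illustrative):

```fsharp
type Result<'T> =
    | Success of 'T
    | Failure of string

let bind f result =
    match result with
    | Success x -> f x
    | Failure e -> Failure e

let (>>=) result f = bind f result

let return' x = Success x

let isPositive i = if i >= 0 then Success i else Failure "not positive"
let toString (i: int) = Success (string i)

// left identity: (return x) >>= f  is equivalent to  f x
let leftIdentity = ((return' 5 >>= isPositive) = isPositive 5)

// right identity: m >>= return  is equivalent to  m
let rightIdentity = ((Success 5 >>= return') = Success 5)

// associativity: (m >>= f) >>= g  is equivalent to  m >>= (fun x -> f x >>= g)
let m = Success 5
let associativity =
    ((m >>= isPositive) >>= toString)
      = (m >>= (fun x -> isPositive x >>= toString))
// all three evaluate to true
```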

Thus, the type we defined, Result<T>, with its associated bind and return functions, is a monad. The container alone is not a monad. The bind function alone is not a monad. But the container Result<T> with the bind function, because return acts as a neutral element for bind and because bind is associative, is a monad.

Last words

Let’s finish the Wikipedia decryption:

“Purely functional programs can use monads to structure procedures that include sequenced operations like those found in structured programming. Many common programming concepts can be described in terms of a monad structure without losing the beneficial property of referential transparency, including side effects such as input/output, variable assignment, exception handling, parsing, nondeterminism, concurrency, continuations, or domain-specific languages. This allows these concepts to be defined in a purely functional manner, without major extensions to the language’s semantics.”

And because input/output, variable assignment, exception handling, parsing, nondeterminism, concurrency, continuations and DSLs are very common IT problems, FP uses monads a lot to manage them. I hope it is now clear how we used a monad to structure our chain of operations, and how we managed exceptions in a purely functional manner.

If so, you now understand what a monad is, and why it is so useful, especially in FP. If not, please let me know and I’ll try to improve my explanations!

 

References: if you feel frustrated and want to go further, I recommend the excellent Category Theory for Beginners, and the even better (but harder) Category Theory for Programmers.

 

Explaining Monoids

Let’s have some fun with F#. Through examples, we will discover together what monoids are, and why they can be useful. All the following code can be found online.

Function composition example with integers

We all agree now on what function composition is. We all know what integers are. So, let’s compose some functions using the set of integers.

As shown in this code, add and minus are functions defining a rule for combining two elements of the set of integers. add has two interesting properties: it is associative, and it behaves as the identity function when the neutral element is one of its inputs (minus, by contrast, has neither property).

As you can guess, the neutral element in this case is 0. You can trust me, or you can just run this code instead:
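A sketch of such a check (the function and value names are mine):

```fsharp
let add a b = a + b
let minus a b = a - b

// add is associative...
let addIsAssociative = add (add 1 2) 3 = add 1 (add 2 3)              // true
// ...and 0 is its neutral element
let zeroIsNeutral = (add 5 0 = 5) && (add 0 5 = 5)                    // true
// minus is not associative: (1 - 2) - 3 <> 1 - (2 - 3)
let minusNotAssociative = minus (minus 1 2) 3 <> minus 1 (minus 2 3)  // true
```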

Let’s wrap up: the set of integers has a rule (add) for combining two elements of the set; this rule is associative and has a neutral element (0).

 

Function composition example with hours 

We can also agree on what hours are (integers between 1 and 24), so we can compose some functions using the set of hours.

As it happens, we can also define add and minus operations on hours. It’s a bit more interesting now, because these operations use a modulo: it’s no longer a simple operation. We can still prove associativity and the existence of a neutral element for the set of hours with the operation add.
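Here is one possible implementation of add for hours; the exact modulo arithmetic below is an assumption about how the original code worked, chosen so that results stay in 1..24:

```fsharp
// hours live in 1..24; the "- 1 ... + 1" shifts to 0..23 for the modulo
let addHours (a: int) (b: int) = ((a + b - 1) % 24) + 1

// associativity still holds
let assoc = addHours (addHours 20 7) 10 = addHours 20 (addHours 7 10)  // true

// and 24 is the neutral element (it behaves like 0 on the clock)
let neutral = (addHours 13 24 = 13) && (addHours 24 13 = 13)           // true
```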

We can add to our knowledge that the set of hours has a rule (add) for combining two elements of the set; this rule is associative and has a neutral element (24).
 

Monoid land

Monoids are a collection of things (a set) together with a rule for combining two elements of this set (this rule is called the internal law of composition). The composition law has to be associative and to have a neutral element (otherwise it is not a monoid, but it could be another mathematical structure, like a magma or a semigroup, which we won’t detail here because they are of less interest to a programmer).

Does it remind you of something? You got it:

  • The set of integers with the function add is a monoid, because add is a rule for combining two ints, add is associative, and add behaves as the identity function when used with the neutral element of the set, which is 0.
  • The set of hours with the function add is a monoid, because add is a rule for combining two hours, add is associative, and add behaves as the identity function when used with the neutral element of the set, which is 24.

We used simple examples here, but please understand that any law (function) defining the combination of two elements of a set, as long as it is associative and behaves as the identity function when used with a neutral element, is a valid rule to define a monoid, no matter the name or the complexity of this function. Note also that we call the monoid the set together with the function. Integers are not a monoid. But integers with the function add are a monoid. Hours with the function add are another monoid. You can’t define a monoid without its internal law of composition.

Parallel Monoid

The coolest thing about monoids is the associativity law. And the cool thing about associativity in programming is that it enables parallel streams. Why?
Because if (f°g)°h = f°(g°h), you don’t care in which order the operations are executed. How convenient is that for parallel processing?
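For example, because integer addition is an associative monoid operation with neutral element 0, a big sum can be split into chunks, computed in parallel, and recombined, and the result is guaranteed to be the same:

```fsharp
let numbers = [| 1 .. 1000 |]

let total =
    numbers
    |> Array.chunkBySize 100          // split the work
    |> Array.Parallel.map Array.sum   // sum each chunk in parallel
    |> Array.sum                      // combine the partial results

// total = 500500, the same result as a plain sequential Array.sum
```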
 

Monoids everywhere

Cyrille Martraire has a great talk explaining why you should look for monoids in your domain. By their very nature, you can assume they have a neutral element, that they are composable and parallelizable (because they are associative), and that they are based on a set of value objects. You say all of that when you say that something in your domain is a monoid.

Monoids are powerful and deserve to be known, as you probably already use them without noticing their existence!

 

What’s composition?

You remember that Functional Programming (FP) is all about function composition? But what is composition, by the way? Let’s explain it here, along with a few other useful concepts for FP, and go one step further in understanding monads.

Integer composition

In mathematics, a composition of an integer N is a way of writing N as the sum of a sequence of strictly positive integers. A weak composition allows terms equal to zero, which means that every positive integer admits infinitely many weak compositions. 0 is also known as the neutral element for the set of integers.

Function composition

A function composition is when we apply a function (say g) to the result of another one (say f) to produce a third function (say h). We write it h = g°f. Intuitively, composing two functions is a chaining process in which the output of the inner function becomes the input of the outer function. This is, in essence, functional programming.

Identity Function

A function that returns exactly its input as its output, without any alteration, is known as the identity function. There is a symmetry between the weak composition of integers and function composition, because the identity function is a neutral element for the set of functions. Like 0 for integers, it can help in several situations to allow more composition. Which is useful, because we want to compose everything!

Which functions are composable?

Two functions are composable if their Domain and Co-Domain are compatible. Let’s imagine two functions:

  • StringToInt: s (string) -> i (integer). StringToInt is a function mapping a string (Domain) to an integer (Co-Domain).
  • TwoTimes: i (integer) -> o (integer). TwoTimes is a function mapping an integer (Domain) to another integer (Co-Domain).

Can I imagine a composition TwoTimes >> StringToInt? No, because the output of TwoTimes is an integer, and the input of StringToInt is a string. The Co-Domain of TwoTimes is not compatible with the Domain of StringToInt. But the Co-Domain of StringToInt is compatible with the Domain of TwoTimes. Thus, we can define a correct composition StringToInt >> TwoTimes. Most FP patterns are just ways to wrap data so that Domains and Co-Domains stay compatible, and thus composable, despite side effects.
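In F#, the same example can be sketched like this:

```fsharp
let stringToInt (s: string) : int = int s
let twoTimes (i: int) : int = i * 2

// the Co-Domain of stringToInt (int) matches the Domain of twoTimes (int),
// so this composition is valid (>> composes left to right: first stringToInt,
// then twoTimes)
let composed : string -> int = stringToInt >> twoTimes
// composed "21" = 42

// twoTimes >> stringToInt does not compile:
// twoTimes outputs an int, but stringToInt expects a string
```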

Composable functions properties

If functions are composable, their composition is associative: (h°g)°f = h°(g°f).
You know, for example, that addition is associative.
So if g is +5, f is +10 and x is 3, we can write: 5 + (3 + 10) = (5 + 3) + 10 = 18.
Note that if two functions have the same Domain and Co-Domain, they are always composable. Functions with the same Domain and Co-Domain are called transformations, because mathematicians like to give a name to everything (and for once it seems a logical name).
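A quick F# check of this associativity (using >> for left-to-right composition, i.e. apply f, then g; the functions are toy examples):

```fsharp
let f x = x + 10
let g x = x + 5
let h x = x * 2

// composition is associative: grouping does not matter
let a = ((f >> g) >> h) 3   // ((3 + 10) + 5) * 2 = 36
let b = (f >> (g >> h)) 3   // 36 as well
```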

How powerful is function composition?

In a few words: it allows us to express complex things in a simple form. In physics, for example, the Lorentz transformations are very famous. Using a series of transformations, Lorentz showed that the speed of light is the same in all inertial reference frames. The invariance of the speed of light is one of the postulates of Einstein’s theory of special relativity.

Of course, we’re not Einstein, but expressing complex things in a simple form is definitely something interesting for software design.

 

Do you know what’s a function?

I’m so used to functions in my daily job as a software programmer that I was sure I knew what one is. As often, I was wrong; let me tell you why.

Function and relation

I intuitively called a function anything in my code with an input and an output. I suspected that functions without input and/or output were a special kind of function. But in truth, f: a -> b defines a relation between a and b. The Domain is the set containing a, and the Co-Domain is the set containing b. A relation maps an input from the Domain to an output in the Co-Domain. A relation can map one input to exactly one output, or to several possible outputs.

Back to code

If a relation can map an element of the Domain to several elements of the Co-Domain, it is not a function. Do I code a lot of relations with different possible outputs for the same input? Yes: anything with side effects can produce different outputs for the same input. Such relations are not functions by the mathematical definition.

It also means that we can’t define a function without defining the Domain and the Co-Domain. If we change the Domain or the Co-Domain, a relation can become a function, or stop being one. A famous example is the square root relation:

Sqrt: a (ℝ) -> b (ℝ) is not a function (indeed, sqrt(4) can be either 2 or −2).

PositiveSqrt: a (ℝ+) -> b (ℝ+) is a function (a restriction of the square root that returns only the positive value).
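Both cases can be sketched in F# (the counter example is mine, to show a relation that is not a function; the option type stands in for restricting the Domain):

```fsharp
// not a function in the mathematical sense: the same input ()
// produces a different output on each call
let mutable counter = 0
let next () =
    counter <- counter + 1
    counter

// restricting the Domain (modelled here with an option on negative inputs)
// gives a proper function: one input, always the same single output
let positiveSqrt (x: float) : float option =
    if x >= 0.0 then Some (sqrt x) else None
// positiveSqrt 4.0 = Some 2.0, positiveSqrt (-1.0) = None
```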

Note as well that constraining the Domain and the Co-Domain can make the function more specific (injective, surjective, or both, i.e. bijective).

So what?

OK, my coding functions are not mathematical functions; why should I care? Let me ask a different question: why do mathematicians prefer to manipulate functions rather than relations? Because functions are easier to reason about.

In the same way, the more specific the relation is, the more predictable the code will be. If a relation can produce several outputs for the same input, the code is not predictable. It means that my relation depends on a more global context, and thus I can’t reason at the function scale without taking this context into account. On the other hand, when my relation is a function (usually called a pure function in FP, i.e. without side effects), my code is predictable, and I can reason at the function scale without considering the global context.

This is really powerful, because it doesn’t really matter how smart you are if you have the ability to split a complex problem into simple ones.

 

Why Functional Programming?

In this post, I would like to share why I think the functional paradigm is such a big deal, and why it is worthwhile to learn about it, no matter which technology you use.

Functional Programming Roots

Functional Programming (FP) is deeply rooted in maths, and is intimidating because of that. But I think we underestimate the value of a mathematical approach to software: it allows us to express complex problems as a succession of simple statements. Translated to IT, the point is to compose complexity to keep software systems predictable, no matter how big they are.

How mathematics works

Mathematics always follows the same pattern. First, define something in a general way, like a polygon: “a polygon is a plane figure that is bounded by a finite chain of straight line segments closing in a loop to form a closed polygonal chain”. Then, by applying specific laws, we can define more specific things: “A triangle is a polygon with three edges and three vertices”. And by adding more laws, we obtain even more specific things, like an equilateral triangle: “An equilateral triangle has all sides the same length”. As you will see, FP design patterns can be explained in the same way: we start from something general, and by adding some laws, we define a more specific concept. Note that “A monad is just a monoid in the category of endofunctors” follows this pattern as well. It only seems complicated because it uses other concepts we’re not familiar with.

Don’t be scared to learn the symbols

Maths are often intimidating due to complex words and abstract symbols. Learning them is a first step towards demystifying them. Here are some symbols often used to explain FP concepts:

The mathematical notation y = f(x) means: for x, the relation f will produce the output y. f: a -> b means: for an input a, the relation f will produce an output b. We usually use this notation in FP. Of course, we need to define the sets for a and b (integers, positive integers, real numbers, strings…). The set of a is called the Domain of f, and the set of b is called the Co-Domain of f.

g°f means the composition of g and f, in other words the application of f, then g.

f ≡ g means that f is equivalent to g.
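These notations map directly to code. Here is a sketch in Python, with hypothetical `double` and `increment` functions:

```python
# compose(g, f) builds g°f: apply f first, then g.
def compose(g, f):
    return lambda x: g(f(x))

double = lambda x: x * 2     # f: int -> int
increment = lambda x: x + 1  # g: int -> int

g_after_f = compose(increment, double)  # (g°f)(x) = 2x + 1
print(g_after_f(3))  # 7

# f ≡ g: two functions are equivalent when they agree on every input.
# In code we can only spot-check a finite sample:
h = lambda x: 2 * x + 1
assert all(g_after_f(x) == h(x) for x in range(100))
```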

Software predictability

In a pure FP program, f: a -> b is always true. Functions are pure, which means they will always produce the same output for the same input, no matter when the function is called. In our industry, because state can mutate over time, we build software where this is false. If f has side effects, or if anyone can mutate a or b, we can no longer predict the output of f, because it is context dependent. This adds a lot of complexity to our code. Building a program by composing pure functions removes this complexity.

Why Function Composition?

Functional programming is all about function composition. If I have two functions f: a -> b and g: b -> c, I want the composition g°f: a -> c. The composition of simple functions allows us to build services. The composition of services can handle use cases. And a composition of use cases is an application.
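This layering can be sketched as follows (Python, with hypothetical function names); each layer is just another composition:

```python
from functools import reduce

# pipeline(f, g, h) composes left to right: h°g°f.
def pipeline(*fs):
    return reduce(lambda f, g: (lambda x: g(f(x))), fs)

# Hypothetical simple functions...
parse = int                          # str -> int
apply_discount = lambda n: n - 5     # int -> int
to_label = lambda n: f"{n} EUR"      # int -> str

# ...composed into a hypothetical "service", which is itself
# just another function, ready to be composed further.
price_service = pipeline(parse, apply_discount, to_label)
print(price_service("42"))  # 37 EUR
```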

Wrap up

Don’t be scared by mathematical symbols, and remember that if we compose pure functions while respecting some mathematical laws, we’ll be able to build predictable complex software. That is certainly worth something, isn’t it?

 

The post where you don’t learn what a monad is, but where you may want to learn more about it

For a while now I have been wondering how to write yet another blog post on the internet trying to explain what a monad is. The problem is well known: as soon as you understand what a monad is, you lose the ability to explain it.
The good news is that I still don’t fully understand what a monad is, so maybe I’m still able to explain it.
I don’t claim to be a mathematician or a Functional Programming (FP) expert. To be honest, I am pretty bad at maths, and I use F# when I can, but I still work with C# most of the time.
Thus, if you are an expert and see any errors in the FP-related blog posts, please do me a favor and correct me 🙂

 

A combination of Failures

To explain monads, I started a first blog post, to explain the maths theory and stay at a theoretical level. My draft was quickly 3 pages long, and unclear, even to me. How deep should I dig into the maths? OK then, let’s try the practical approach.
I started a new blog post, focused only on a practical example, to show how we can handle errors with something close to the Maybe monad. I tried to present my work at a local meetup, and to livecode the explanation. By the way, it was a 10-minute talk. Needless to say, it was also a failure; I hope the attendees learned some things, but they certainly still don’t understand what a monad is.

 

What’s the point?

It slowly drove me crazy. I was thinking day and night about which metaphor I could use to explain monads, something different from Scott Wlaschin’s railway metaphor if possible. I started two more blog posts, and was still unhappy with them.
Weeks went by, and I couldn’t explain monads in the simple way I was looking for.
I tried to figure out at which moment in my learning path I had finally understood it. I also questioned the real reason for doing this: why should I care? Why should a developer know what a monad is?

Why should we care?

Because monads are a basic building block of functional programming. I think this tweet nailed it.
Semigroup, monoid, applicative and monad are building blocks, or design patterns, inspired by mathematics. They allow us to combine things.
And because functional programming is all about function composition, it is important to know these blocks as a functional programmer. But the concepts of FP are helpful to an Object Oriented (OO) developer in many respects as well, as Uncle Bob recently pointed out.

 

What’s a monad?

Something hard to understand without its mathematical roots. That’s why I will release several articles explaining both the technical and the mathematical aspects, so you can choose which part is relevant for you. From basic maths to concrete examples, choose your path to understanding monads!

 