Aiming for the Stars

If you read this blog as a software engineer, I apologize because for once I will not talk about software at all. I will talk about space, and me. I decided to write about it because, according to my Twitter timeline, lots of people who might read this blog also like space stuff.
Unless you’re living in a cave and/or have absolutely no interest in space, you probably know that the European Space Agency (ESA) is looking for a new batch of astronauts.

And I actually mean it.

So, you think you can be an astronaut?

Yes, obviously, I mean who could doubt that? Of course I’m kidding. I will probably be discarded pretty early in the selection process. But that’s certainly not a good reason for not trying. Whatever the outcome, I will learn a ton of stuff about space and what is actually expected from an astronaut. And if I have the chance to get through the CV screening, I might even meet some of the next astronaut generation in person. How cool is that?

Do you have any chance at all?

To be honest, I prefer not to think about that. Which probably means “no, but if I think about it, I may lose the motivation to even try”. So let’s see what ESA is looking for:

Master’s degree (possibly in Computer Science)
=> Check.

Willing to perform arduous physical activities
=> I can bear with that; I already have a daily sports routine and would be glad to take more time for it.

Clear, concise and considerate communication is a must
=> Can always improve, but at least I’m used to it.

Excellent fine motor skills
=> At least I had them when I was younger.

Strong analytical and reporting skills, the ability to rapidly assimilate and synthesize complex information and sound decision-making capabilities
=> Sounds like the daily routine of a programmer, right?

The workload of an astronaut is high and working hours can be irregular, hence high level of motivation
=> Like the workload of an astronaut, the workload of an entrepreneur is irregular and requires a high level of motivation

Should be passionate about sharing their knowledge, with a willingness to engage wide audiences
=> Indeed I like to share, even when it’s not knowledge.

Good reasoning capability
=> Again, seems OK when your job is to build software.

The ability to work under stress
=> I can work under stress. As long as you don’t ask me to be efficient under stress…

Memory and concentration skills
=> That’s clearly not my main asset, but I’m sure I can work on it, and it would actually help me in other areas of my life.

A candidate’s personality should show high motivation, flexibility, gregariousness, empathy, non-aggression and emotional stability
=> Here again I must say that, as consultants in software design, we already have to work on these soft skills.

Of course I’m aware that going to space is several orders of magnitude harder than coding on a computer, but I have at least some traits of a potential candidate, and this was already unexpected!

What are the steps of the selection process?

The first step is to provide an awesome CV and cover letter. Then, if I get through the screening, I may have a chance to take psychological tests. And if that goes well, I will have psychometric tests. And after that, a batch of medical tests and interviews…
In other words, I’ll be very happy already if I can stay in the process until the psychometric tests. Especially because according to what I know of the selection process, these tests are quite “fun”. The kinds of tests you remember for the rest of your life I guess.

Why do you share an almost certain failure with us?

First of all, I hope that this post can help some people who doubt their capabilities to take their chances. The only way to be certain to fail is to not even try.
The second point is that whatever happens, I’m sure I will learn a lot through the process, and that some of the things I’ll learn will interest some of my readers.
Last but not least, I hope that you can give me tips and tricks to have the best chance to go through each step.

So for now if you have any advice about how to write a CV and a cover letter that might get some attention from a Space Agency when you’re a software developer, I will be glad to hear about it!

To be continued…

Don’t reinvent the wheel

Of all the advice we receive as software programmers, one of the most misunderstood (just after “don’t repeat yourself”) is “don’t reinvent the wheel”.
Of course, it seems pretty obvious that if we redo everything all the time, the whole industry won’t evolve at all. But when we dig enough, it is clear that we do not all have the same definition of a wheel.
I would like to explain here that defining what needs to be redone in your context might be harder than it seems.


The usual suspects

I often hear this argument for two main software parts: User Interface (UI) components and the Object Relational Mapper (ORM).
For the UI, it usually comes up for complex grids with advanced features like filtering or advanced search.
The ORM is this magic piece of code that lets you manipulate data in a database without even thinking about it.
In both cases it seems perfectly reasonable to suppose that using them will accelerate your development workflow. Because after all, your job is not to fetch data from a database, or even to display them in a nice filterable grid.


The success stories

I do not have any scientific data to demonstrate what follows, and I gladly admit this is just gut feeling. I’m ready to bet, though, that many of you have had the same experience.
You use the awesome UI component, and you’re able to deliver to production pretty fast. You can impress the stakeholders during the demo thanks to the many features of the UI. Most of them won’t be used, but you know, who can do more can do less!
The story with your ORM is quite similar: everything is working and you didn’t write a single line of SQL. Database migration is automatically handled and you don’t need to know how it works. How glorious is that?
This is, by the way, where most tutorials stop; you’ll have to discover the rest by yourself.


The limit of the approach

One morning a user calls you: “I don’t understand why it’s so slow since the last update?”
The problem is that you don’t know either; worse, it’s not that slow on your machine. A few logs and searches later, you’ve nailed it: one of the queries tries to fetch 80% of the database, just because somewhere in the code someone tries to get some unrelated data in a weird way. Weird from a SQL point of view, but not that weird when your ORM hides it from you. And then you have to understand how it works under the covers in order to bypass the default behavior of the ORM. It usually leads to untestable code in order to achieve your goal without performance issues.
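To make this hidden-query trap concrete, here is a minimal sketch in Python. The fake ORM below is entirely my own invention (not a real library): each access to a lazily loaded attribute silently issues a query, so an innocent-looking loop turns one query into four, the classic “N+1 queries” problem.

```python
# Hypothetical sketch of the "N+1 queries" trap an ORM can hide.
# Customer, OrderRepository and query_log are illustrative, not a real ORM.

query_log = []

class Customer:
    def __init__(self, db, customer_id):
        self.db, self.customer_id = db, customer_id

    @property
    def name(self):
        # Lazy loading: each attribute access silently issues a query.
        query_log.append(f"SELECT name FROM customers WHERE id={self.customer_id}")
        return self.db["customers"][self.customer_id]

class OrderRepository:
    def __init__(self, db):
        self.db = db

    def all_orders(self):
        query_log.append("SELECT * FROM orders")
        return [(order_id, Customer(self.db, customer_id))
                for order_id, customer_id in self.db["orders"]]

db = {
    "customers": {1: "Ada", 2: "Grace", 3: "Edsger"},
    "orders": [(10, 1), (11, 2), (12, 3)],
}

repo = OrderRepository(db)
# The innocent-looking loop below triggers one extra query per order:
report = [f"order {oid} for {customer.name}" for oid, customer in repo.all_orders()]
print(len(query_log))  # 4: one query for the orders, three hidden ones for the names
```

With three orders it is only four queries; with the real data volume of production, the same code pattern is exactly the kind of slowdown described above.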
Soon after that, during the next sprint planning, the users will ask for a “minor” UI change in the way they can filter data. And again, you will need to understand what’s under the covers in order to change it according to your users’ wishes. But the thing is that the theory of the framework will probably not match your program’s theory. And it will take a lot of hacky code and long hours of testing to make it work.
And this is without talking about the bugs (or misuse) of the libraries you depend on. It can be quite long and frustrating to wait for other developers to fix a behavior that is important for your business.


So what’s a wheel?

To be fair, it depends on your context and project maturity. But basically, a wheel is something that is becoming a commodity over time (Wardley maps can help you see and understand that not everything can become a commodity). The most famous examples might be electricity, the internet and now cloud computing.
Of course you certainly don’t need to build your own database, or even to host it. But thinking that you can build a complete product mostly by using existing libraries and frameworks is also a myth.
Using other libraries to build a solution faster can make sense when you’re exploring a problem.
But at some point, you will probably need to replace them with custom components over which you have complete control. This “inflection” point is hard to find, and most companies fight to keep things as they were instead of looking at what will be required at their maturity stage.

A software wheel is something that you don’t need to care about, in your context and at your level of maturity. Hence the hard part: it will change over time.
Keep an eye on where your bugs come from and which parts of the code are harder to maintain. It might require replacing “a wheel” with a custom part to take full control of its behavior.

So do you have similar stories?

What’s programming

A few days ago I asked you to think about how to define your job as a software developer in a single sentence. This question is highly correlated with the question of what programming is.
That’s why an article on the matter was pointed out to me on Twitter, in response to my tweet on the software developer’s job.

Looking for Naur’s “Programming as Theory Building” will lead you to this article.

It was written by Peter Naur (the one from the Backus–Naur Form, aka BNF, the programming-language syntax notation) in 1985, and it is so far one of the most comprehensive texts on what programming is that I have had the chance to read.
To quote him, a motivation “of the presentation is a conviction that it is important to have an appropriate understanding of what programming is. If our understanding is inappropriate we will misunderstand the difficulties that arise in the activity and our attempts to overcome them will give rise to conflicts and frustrations.”
I could hardly agree more.


Programming and the programmers’ knowledge

The main point of the article is that a programmer (and her team) somehow builds a theory of the program when working on software, and that this knowledge cannot be fully embedded in documentation and/or code quality. Actually, the fact that different developers will build different theories explains why we always find the codebase we have to maintain from someone else “bad”.
This is independent of the code quality or documentation, unfortunately.
“The conclusion seems inescapable that at least with certain kinds of large programs, the continued adaptation, modification, and correction of errors in them, is essentially dependent on a certain kind of knowledge possessed by a group of programmers who are closely and continuously connected with them.”


How to treat the programmer

One of the main impacts of such a view is that developers cannot be the disposable resources we would like them to be in an industrial process. Quite the opposite: they appear as a key part of any success story in the software industry.
“At the level of industrial management these views support treating programmers as workers of fairly low responsibility, and only brief education. On the Theory Building View the primary result of the programming activity is the theory held by the programmers. Since this theory by its very nature is part of the mental possession of each programmer, it follows that the notion of the programmer as an easily replaceable component in the program production activity has to be abandoned. Instead the programmer must be regarded as a responsible developer and manager of the activity in which the computer is a part.”


What’s a dead program

If we acknowledge that the team building the product is this important, it follows that the life and death of a program is not defined by the fact that it runs in production, but by the life and death of the team building it. From that we can suggest that methods like pair and mob programming are must-haves for a long-living program, because they help the theory of the program to be distilled continuously through the team, and they address the training of newcomers in the existing theory.

“The death of a program happens when the programmer team possessing its theory is dissolved. A dead program may continue to be used for execution in a computer and to produce useful results. The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered. Revival of a program is the rebuilding of its theory by a new programmer team.”
[…]
“What is required is that the new programmer has the opportunity to work in close contact with the programmers who already possess the theory, so as to be able to become familiar with the place of the program in the wider context of the relevant real world situations and so as to acquire the knowledge of how the program works and how unusual program reactions and program modifications are handled within the program theory.”


And much more…

From my perspective, this article also gives insights about writing small replaceable features instead of huge monolithic ones, about the importance of unit tests, and about the reason why software development can’t be handled like an industrial process.
But as Florent Pellet reminded me recently, it’s always dangerous to interpret past writings with our current context and understanding.
Still, this paper is older than me, and it seems to me that it embeds much visionary wisdom that is absolutely not mainstream today.
Do you have the same feeling about it?

What’s your job as a software developer

Because you’ll have many family meals in the upcoming days, it might be a good idea to train yourself to explain your developer job to mere mortals in a single sentence. It may avoid the classical “can you fix my printer” or “can you check my wifi” issues. So before reading the rest of the post, stop for a while and try to think about it.



Done?
I tried it a few weeks ago, and here was my personal attempt.

I had the chance to talk about it recently with other developers of the HackYourJob community, and it made me realize that “in order to explain it to a computer” cannot be the goal. This is just a means. So here is a more recent attempt.

Of course the interesting thing in this thought is not the tweet by itself, but the corollaries it implies.

My job is to understand someone else’s job

Let’s consider this to start with. What would be the primary skill expected of someone who needs to understand someone else? Hint: I don’t believe that any technical framework will help with that. So, what if programming was a social activity?
Yes, it’s important to argue that programming is not about being alone in front of a computer. That might be a part of the job, but certainly not the hardest one.
Understanding someone else means that empathy, the ability to learn fast, and understanding business contexts are keys to being an efficient programmer. The best program you can write is the one you do not need to write, because you found a better alternative for the business to improve without a costly software solution.


My job is to explain someone else’s job to a computer

Computers have this particularity: they do not accept ambiguity. It means that you have to be absolutely precise when you talk to them, in terms of both syntax and content.
Fortunately, for the syntax, compilers, modern IDEs and search engines will help you. Contrary to what juniors or people outside this industry might think, this is the easy part. Not trivial, but pretty easy compared to trying to express a set of business rules (i.e. the content) without ambiguity.
Many people will argue that this set of rules, the so-called specifications, will be written by someone else, so you won’t have to worry about them when coding. You will just need to translate. Spoiler alert: this is false.
Even if someone else tries to do it, it will never be clear enough for a computer to understand. If a computer can understand it, it’s called an algorithm, which is nothing but an executable specification written in a specific programming language.
It doesn’t mean that having someone else help you understand the business is bad; it just means that it won’t remove the necessity for you, as a software developer, to understand the business.


My job is to improve someone else’s daily work

It’s tempting to believe that writing software is the ultimate goal of a software developer. Even my first tweet suggests that. But of course, software is only a means.
You can write the best software in the world; if it makes its users’ lives harder for bad reasons, you are still a bad developer in my opinion. Hence ethics, for example, is also a major component of our work.
It also means that you can be an awesome developer if you improve someone else’s daily work with an Excel macro. Don’t let the industry define you with technical languages and buzzwords. The outcome of your job is much more than that.


How would you define your job in a single sentence?

In this post, I tried to express all the corollaries implied by this simple sentence: “As a software developer, I have to explain someone else’s job to a computer in order to improve someone else’s daily work.”
What are your propositions to improve it?
Or maybe you don’t agree with some of the corollaries?
Or you think I missed an important part of the job?

In any case I would be glad to hear about it.

The power of habit


“I’m not a great programmer; I’m just a good programmer with great habits.”

You have probably already heard this famous quote by Kent Beck. It is one of the most insightful quotes I heard in my career, because it carries hope.
Hope that you, me, anybody can become great by relying on habits. And this is great news, because you were not born with habits. You develop them in response to an ecosystem, and you are more or less aware of them.
It is a common falsehood to convince ourselves that we develop habits in response to what we deeply are, in our soul, mind and body. But it’s exactly the other way around: in the end, we are defined by our routines.
You’re not a drug addict because it was written in stone that you would live as a junkie. You became a junkie because you relied on (bad) “having fun” habits for so long that, at some point, you lost control.
You’re not an average developer because you’re not smart enough. You became an average developer because you relied on (bad) coding habits for so long that you cannot even think about another way to work in this industry. Which leads to remarks like “X is a great idea, but it doesn’t work in practice”, where X is any good practice of software developers like unit testing, pairing or continuous deployment.

In other words, anybody can radically change their own life by changing their habits. This is exactly the topic of the last book I would like to share with you: The Power of Habit.

Great stories

Charles Duhigg is the author of this book, and he is such a good writer. I have rarely had this much pleasure while reading a non-fiction book. The topic is the power of habit, why we do what we do and how to change it, and the writer uses storytelling to share his findings on the subject.
This book will improve your knowledge about the human brain, behavior and psychology, and provides awesome anecdotes along the way. Stories about how a man with brain damage had an almost normal life despite total amnesia, about how the coach Tony Dungy turned the Buccaneers football team from one of the worst into one of the best of all time, or about how commercial successes like Pepsodent or “Hey Ya!” were built in the background, all of them relying on the power of habit.

The book is organized around scientific findings, described through great stories, to make it popular science, and it is a delight to read.

Great advice

If you are not already convinced that habits are powerful, you will probably change your mind after reading a few of these stories and thinking a bit about your own life.
But the best part is that once you are aware of them, you have a chance to change the bad ones and to improve the good ones. And the book also contains practical advice on how to do it.
In other words, this is a great tool to help change your life, and that is why I’m happy to share it with you.
It is, by far, the most impactful book on my daily life that I have read in a while.

Sharing my good habits

For more than 10 years now, I have regularly taught myself routines to try to get better. I believe these habits now define me more than anything I can say or write. Here are some of them, from the most to the least impactful:

  • Learn (by training, reading, watching videos and attending conferences, there is nothing you cannot learn)
  • Teach (through blog, book, training or anything, as soon as you learn something interesting, just share it, it will teach you even more)
  • Daily Exercise (physical training, it can be hard fitness or just a gentle walk, the point is to empty your mind and care about your body)
  • Stay focused (the Pomodoro technique helps me a lot with that)
  • Log your daily work (you probably already do a daily stand-up? Have you tried writing down your plan and what you have accomplished every day? I’ve been doing it for barely a year now, and already find it essential)

What about you? Which habits were the most impactful in your career and life?

OOP, FP and the expression problem

As a software developer, you will probably have, at least once a month, a discussion about Functional Programming (FP) vs Object-Oriented Programming (OOP). Of course, it’s always about the context; I have already talked about it.
But thanks to Samir Talwar, I recently gained new insights on this topic that I would like to share with you.
To give more context, I have been working more and more with F# for 3 years, hence I have more and more practical feedback about it. And in the last few weeks I came to the conclusion that I feel much more comfortable with FP for mid-term code maintenance. Here is why.

OOP in theory

On my current gig, I had to switch back mainly to C# (even if some parts of the code are in F#). I recently had a challenging refactoring to do on the core domain, and it led me to this tweet.

My thought at that moment was that OOP can indeed be beautiful in theory, but I rarely see it stay beautiful in practice. Not because of a lack of practice or bad code, just because it’s almost impossible to keep it beautiful over time.
At first the design is really clear: a few nouns, a few actions (aka verbs), sometimes a nice inheritance between objects, and it all sounds good. But of course, after a while the specifications evolve, and you need to add new objects and/or behaviors.

OOP in practice

What I usually observe is that the evolution of the domain forces you to refine your abstractions. It makes sense, because you improve your knowledge of the domain while you’re working on it. In practice it means that you try to add objects to your inheritance hierarchy, with behaviors that might be slightly different from what you had thought at first.
That’s where the design starts to slide out of control. Maybe we could just add a boolean here instead of multiplying the class implementations? Maybe we’ll keep this inheritance because there are so many shared behaviors in the base class that we don’t want to spread across most classes, just because one implementation doesn’t need this behavior. And so on…
That’s why I usually find OOP using composition rather than inheritance easier to maintain in the long term. But even with that, it’s hard to keep data and behavior together and maintainable.
Hence I somehow feel like I prefer the constraints of FP, with its separation of data and behaviors, to the constraints of OOP, but it was hard for me to explain clearly why.
That’s where Samir’s wisdom showed me the light…

The expression problem

Here is Samir’s answer to my tweet:

That’s it. You can change behaviors (verbs) easily or you can change nouns (classes) easily, but you can’t have both. It is known as the expression problem. FP focuses on changing verbs easily, whereas OOP focuses on changing/adding nouns easily. Samir has taken the time to write about it in detail.
I have thought a lot about this in the last few years, but I have never seen it explained as concisely and clearly as in this tweet. And to be fair, when you see it explained so simply, you wonder why you needed years to figure it out!
Although the expression problem seems to be a well-known issue in the software industry, it is rarely cited in debates about FP vs OOP, as far as I know.
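To make the trade-off concrete, here is a small sketch in Python (the shapes and function names are my own choosing, not Samir’s example): the same nouns and verbs written in both styles, showing which kind of change stays local in each.

```python
# Illustrative sketch of the expression problem; all names are hypothetical.
import math

# FP style: closed set of nouns, open set of verbs.
# Adding a new verb (`perimeter`) touches nothing existing...
def area(shape):
    kind = shape[0]
    if kind == "circle":
        return math.pi * shape[1] ** 2
    if kind == "square":
        return shape[1] ** 2

def perimeter(shape):  # new verb: just one new function
    kind = shape[0]
    if kind == "circle":
        return 2 * math.pi * shape[1]
    if kind == "square":
        return 4 * shape[1]
# ...but adding a new noun ("triangle") means editing every function above.

# OOP style: open set of nouns, closed set of verbs.
class Circle:
    def __init__(self, r): self.r = r
    def area(self): return math.pi * self.r ** 2

class Square:
    def __init__(self, side): self.side = side
    def area(self): return self.side ** 2

class Triangle:  # new noun: just one new class, nothing else changes
    def __init__(self, base, height): self.base, self.height = base, height
    def area(self): return self.base * self.height / 2
# ...but adding a new verb (`perimeter`) means editing every class above.

print(area(("square", 3)), Triangle(3, 4).area())  # 9 6.0
```

Neither style removes the problem; each just picks which axis of change stays cheap.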

OOP or FP?

I’ll rather let Samir answer if you don’t mind:

I totally agree with him, and I guess this explains why most people who jump from OOP to FP never look back, even if it can be hard to tell exactly why.
It’s not better by design; it just has different pros and cons, but these pros tend to make the evolution of the software easier, if the verbs of your domain change more than your nouns.
In Samir’s experience it’s usually the case, and in mine too. What’s your experience?

In defense of Event Sourcing

Having practiced and taught Test Driven Development (TDD) for many years now, I am starting to see where the acceptance point of this practice is: when you accept that the problem is not the method, but the way you are coding, and that this method is just a revealer of bad practices.
Indeed, testing code without dependency inversion or the single responsibility principle will be really painful. Hence lots of people conclude, in the name of pragmatism, that the problem is TDD, not their code. But the people who are ready to challenge their way of thinking and coding will learn a lot, and usually accept TDD, or at least unit testing, as a good practice. Because it can prevent bad habits in code and design (after years of practice, I agree).

In the last few years, I have also actively practiced and taught CQRS/ES, mainly implementing it in C# or F#, or both. And I’m convinced that it has the same power as TDD in this respect: this method is also a revealer of bad practices. Why does it matter? Because most of the criticism I hear sounds like “I had this huge error in the conception of my system, my traditional way of coding didn’t tell me that, but your Event Sourcing stuff put my head in my ####, so I guess the problem is this Event Sourcing stuff, not my way of coding, right?”
 
So let’s talk about Event Sourcing and some usual criticisms we can find on the internet, or that we hear during trainings.

Functional Event Sourcing in a nutshell by Jeremie Chassaing

How can I manage my very complex entity with billions of events??! 

This question almost always arises, especially from people who tried an implementation and ended up in this situation.
As I first heard from my co-worker Florent Pellet, an event stream is nothing but the representation of the lifespan and responsibility of an entity. So the question should not be how to handle it, but why we should have to handle it at all. If this situation happens, it’s your domain model shouting at you: “I’M WROOOOOOOOONG, I’M A MONSTEEEEEEER! PLEASE CHANGE ME OR KILL ME BUT DON’T LEAVE ME LIKE THAT!!”

Too many events in a stream means an error in design. And the thing is that this error exists independently of the way you implement your domain. We could easily detect it in a stateful/relational database implementation, if we cared about bug tracking and about which region of the code we need to change at each release. It would quickly reveal this monster aggregate (also called a god object).
The problem here is not Event Sourcing but the design of the domain, and ignoring it for too long will be much more painful than Event Sourcing itself.

That being said, we do have a technical solution for this problem: snapshots. Basically, the idea is to store intermediate state to avoid rebuilding from scratch each time you need to load this monster aggregate. But when you have to use it, you can consider it a design failure. It’s like code comments: it can be useful, but it’s often just used to hide bad coding habits.
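Here is a minimal sketch of the snapshot idea, assuming a toy account-like entity of my own invention (not a real event store API): the state is a fold over the events, and a snapshot just stores an intermediate state plus its position in the stream, so only the tail needs replaying.

```python
# Hypothetical sketch of snapshots in event sourcing; event names are mine.

def apply(state, event):
    # Fold one event into the current state of a toy account-like entity.
    kind, amount = event
    return state + amount if kind == "deposited" else state - amount

def rebuild(events, snapshot=None):
    # Without a snapshot, replay everything; with one, replay only the tail.
    state, start = (0, 0) if snapshot is None else snapshot
    for event in events[start:]:
        state = apply(state, event)
    return state

events = [("deposited", 100), ("withdrawn", 30), ("deposited", 50)]

full = rebuild(events)                 # replays all 3 events
snap = (rebuild(events[:2]), 2)        # intermediate state after event #2, stored aside
fast = rebuild(events, snapshot=snap)  # replays only the last event

print(full, fast)  # 120 120
```

Both paths yield the same state; the snapshot only changes how much history must be replayed, which is exactly why it papers over, rather than fixes, an oversized stream.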

I can no longer access and change data in the database??! 

Have you ever heard something close to this: “You know, in my data-intensive applications, issues are often caused by data anomalies rather than code-based bugs.”
It’s a more elegant version of “The problem is not the software, it’s the users”.
Indeed, a quite widespread developer bad habit is to fix problems directly in the production data (i.e. the consequences) rather than at the root causes (i.e. the code or process).
This can be painful in Event Sourcing, because event contents are usually stored as JSON (when human readable), or even as blobs that are not human readable at all.

So let’s get back to the magic question: instead of wondering how to do it, we can ask ourselves why we would do it. It can often be tracked back to UX, design or process errors.

So handle these problems at their causes, and then fix the consequences. You can’t just modify the database by hand because you have too many events to update? Good, write a script then. Which should have been done anyway, for maintenance, no matter whether you’re using an event store or a relational database. And yes, this script might be harder to write than for your classical relational database, hence the importance of fixing the root cause.

But the business people do not understand it??! 

Oh yes, they do. If you think they don’t, ask them whether a user with an empty cart because he just logged in is the same thing as a user with an empty cart because he added and removed something 3 times.
A dev might think it is, because an empty cart is just an empty cart after all (and most of the time this is how it will be designed). The business, though, will understand the learning opportunities in this add/remove behavior, and would like to track it.

Also, have you ever worked on a system to “add logging”? It’s painful because it adds dependencies, and it’s not always easy to know what to log. Event Sourcing, at least coupled with Domain-Driven Design (DDD), answers this question. And this need for “logging” from the business is a sign that they understand the concept of Event Sourcing, and the value it can bring, quite well.

But the devs are not trained for that and they don’t understand how to use it??! 

This is one of my favorites. Most devs use Git. So basically most devs already understand and use the value of logging every past change. And they also already understand that logging small changes will be much easier to exploit over time than logging big changes.

It’s true, though, that they’re not trained to do such an implementation by themselves, because they have done years of Object-Oriented programming (even if it’s done in a procedural way), ORMs and relational databases. At some point, some people do not even know that alternatives exist.
Compared to this way of coding, it indeed requires a mind shift, but I can’t believe that someone smart enough to code, and courageous enough to use an ORM, would not be smart enough to learn about Event Sourcing.
I do believe, though, that most employers do not want to invest in training their own employees, but that’s another topic.

And I can no longer easily change my schema??! 

Finally, an (almost) valid point. Yes, changing the schema (i.e. the serialization of events, because you change, add or remove properties) is not the fun part. Schema migration was never straightforward anyway, again no matter which implementation you choose.
I agree, though, that it requires a more complex process in an event-sourced system, because each time you want to update the present, you need to care about the past. It might be unusual coming from a relational database, but it is actually a good idea.

You have 3 solutions: 
1- you can pretend the past never existed, and use a script to update old events into valid ones 
2- you can pretend the past never existed, and fix the invalid events in the event repository using default values in the code 
3- you can care about the past, because the version of an event might impact the way you want to handle it in your business model; in this case you can use event versioning and different paths in the code 

In other words: you have to explicitly choose an update strategy for each change that could affect the past. Ignoring the past or not depends on the business’s needs. 
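As an illustration of these strategies, here is a minimal “upcasting” sketch (the event shapes and version history are hypothetical): stored events carry a version number, and older versions are upgraded step by step when they are loaded, so the domain code only ever sees the latest shape.

```python
# A minimal upcaster sketch: each stored event carries a version, and older
# versions are upgraded to the current shape at load time. The event shapes
# and the v1 -> v3 history below are invented for the example.

def upcast(event: dict) -> dict:
    """Upgrade a stored event, step by step, to the current version."""
    if event["version"] == 1:
        # v2 split the single "name" property into first/last name.
        first, _, last = event["name"].partition(" ")
        event = {"version": 2, "type": event["type"],
                 "first_name": first, "last_name": last}
    if event["version"] == 2:
        # v3 added a country, unknown for old events: use an explicit default.
        event = {**event, "version": 3, "country": "unknown"}
    return event

stored = {"version": 1, "type": "CustomerRegistered", "name": "Ada Lovelace"}
print(upcast(stored))
```

Whether the default value (“unknown” here) is acceptable, or whether old events deserve their own path in the business model, is exactly the decision the business has to make for each change.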

 
So you say it’s a silver bullet? 

Of course not, but I would like to set aside the criticisms that are basically complaints about having to change some bad habits.  

Event sourcing gives you more options, hence more responsibility. It gives you the opportunity to think with the business in mind (especially when coupled with DDD and CQRS). I believe that it’s this new world of “many options” that can frighten people who prefer the prescriptions of a rigid framework. The very fact that it does not have a proper standard, and that everybody can come up with their own implementation, is partly what makes it so powerful for me. 
 
Event Sourcing isn’t trivial. As we already saw, it makes design errors even more painful than usual (I’d prefer to say: harder to ignore or postpone). It means that using it without knowing about DDD, for example, might be a good way to shoot yourself in the foot. It also means that if you’re still discovering a domain (a proof of concept for a startup?), it won’t fit. 
 
But if you’re looking for a way to build a robust and scalable system in a domain that you know (even if it will change), I still haven’t found a better approach so far.
 
Surprisingly enough, context is king. The power of Event Sourcing is that your implementation can greatly change depending on your context. 
 
 

3 bad coding habits of most software developers

And trust me, when I say “most developers” I include myself, as I am now or as I was a few years ago. 
Let’s talk about the biggest productivity-killing habits most of us share. I’ll discuss them from the least to the most common, based on my current experience. 

2 hours of manual testing can save 2 minutes of automated testing 

Unit testing is more and more widespread, but of course the game now is to explain why, in your team/context, you “can’t really do this”. 
A few of the usual excuses we tell others (and ourselves) are: 
– This is just a little project 
– The team is not trained/ready for this 
– We do not have time to write tests because we need to deliver features 
– Unit tests are worthless, we prefer end to end tests 
But 2 hours of manual testing can save 2 minutes of automated testing. Unless you won’t need to test your feature more than once or twice, writing an automated unit test will be worth it most of the time (especially if you write it first, which helps you keep a testable architecture).
Honestly, what would your customers say if you told them that you don’t test everything at each release? Or worse, that most of the cost of the development process is actually manual testing of the software? 
Trust me, I fought this idea long enough and looked for many alternatives. Like many beginners, my first contact with this method was something like “why the hell would you ever need this if you know how to code?” 
But in 10 years, I still haven’t found anything better than unit testing to deliver quality software and to speed up feature delivery in the mid and long term. 
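To make the “2 minutes” concrete, here is what such a check can look like with Python’s built-in unittest module (the discount rule is a deliberately tiny, hypothetical example):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_nominal_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_no_discount_keeps_the_price(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run with: python -m unittest this_file.py
```

Each run takes milliseconds and replays the same checks at every release, which is exactly what manual testing cannot afford to do.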

2 days of Pull Request can save 2 hours of pair programming

I know the first point about unit testing is probably uncontroversial enough, at least among the readers of this blog. And I know that this point about Pull Requests will trigger much more discussion. 
My point is that, most of the time, Pull Requests are a (bad) way to implement collaboration in the team. Probably one of the worst ways to do it, because: 
– It is asynchronous and requires lots of context switching, both for the author and the validators 
– It slows down the flow of feature delivery, as a Pull Request can sit for days in the review stage 
– It implies a relationship where the validator judges someone else’s work, and often feels it necessary to add comments just to show that she took the time to seriously read the Pull Request (and sometimes it’s a true hierarchical relationship, when only architects or tech leads are authorized to validate Pull Requests) 
Yes, 2 days of Pull Request review can save you 2 hours of pair programming. Plus, pair programming will spread common code ownership, good habits, coding tips and domain knowledge through your team much faster. 
Should we avoid Pull Requests then? No, but use them only when it is strictly impossible to do the work directly in pair or mob programming. I hear you: “But my manager will kill me, two people on the same task is pure madness!” 
A first step toward feeling the benefits of what I describe here is to ask for synchronous reviews with the author. The review will be much faster to validate, because it’s easier to understand what our coworker means with her voice than with her code alone.

2 weeks of coding can save 2 days of Event Storming

If I had only one thing to tell my younger self, it would be “Not Silverlight!”. More seriously, it would be “being able to quickly understand any domain context, and the human relationships involved in it, will make you a much better developer than mastering any of the shiny tools around”. 
Don’t misunderstand me, methods and technology matter, but the challenge is to find the ones that fit your context. Instead, most of the time, we just impose our technical knowledge (i.e. our habits) on the business (which is why most software is just CRUD built on a relational database with the latest fancy framework, if you ask me). 
The thing is, it doesn’t matter how good the domain experts and/or the company are. The most critical point in software development is how well the developers understand the domain. 
The best method I know so far to share business knowledge is known as Event Storming. It’s basically a meeting between technical people and domain experts to talk about how the company earns money. 
I know, business people are “really busy” and it’s very hard to get them to talk with you for a few hours. The question, though, is: can they afford to throw away weeks of coding (or worse: to keep bad code and try to fix it for the life of the software), once people realize that the software doesn’t fit their needs? A few days of Event Storming, even with the whole team, is really cheap compared to the usage and maintenance of bad software. And as with pair programming, it improves the feeling of common ownership of the software; more people feel involved in the process of creating the right tool for the company. 

Why are these methods still unusual? 

First of all, I would say that they are more and more common, but of course it’s still far from the usual way of working. I think it’s mainly due to the following points.

Line-of-business software is so complicated that nobody can control it alone. A corollary is that this level of complexity should be managed by a team, with proper tools. But we usually ignore this fact, due to our ego or just out of habit.
All these methods have a mid-term return on investment, and are thus hard to evaluate. They don’t fit well in a company with a Taylorist mindset and management, which is still the majority.

But as soon as you accept the complexity of software development and keep an egoless approach, these methods suddenly seem absolutely normal, as a way to avoid time wasting. 

The properties of great architecture

It is commonly said in IT, at least in Agile circles, that we can’t design software the way we design buildings, using a Waterfall approach. We then often say, myself included, that even if it works for designing buildings, software really is a different beast. 
But have we asked ourselves how well this Waterfall approach actually works for physical buildings? 

An architectural masterpiece

A few years ago, I moved to a little town near Lyon. During a walk, I accidentally discovered a very strange building. It basically looks like an enormous Soviet block, with no color and strange forms. I was really surprised, because it stands in a very nice area, in the middle of a forest where you can take wonderful walks. There are also very nice old-fashioned houses in the neighborhood. And there stands this hideous block. My guess was that it was just something inherited from the war, that the government had given away to a charity (it is now a convent).
But as I was leaving the place, some signs for tourists explained to me that this is the famous “Couvent de la Tourette”, designed by the even more famous architect Le Corbusier. I hence accepted that I simply lacked the sensibility to understand why this building is actually a piece of art, a masterpiece by a master architect, and not at all a Soviet block.
This was a few years ago, and I never thought about it again; I may even have recommended the place to a few tourists in the region… “Do you know we have a famous building here?” 

Convent, la Tourette

Alexander’s point of view

One of my favorite things is to try to understand how people considered pioneers were influenced to think the way they do and come up with mind-blowing ideas, especially in the IT industry. How did Alan Kay come to think about Object-Oriented Programming (OOP)? How were the manifesto signers influenced to think about Agile? 
And the one that leads me to this post: how did the Gang of Four come to think about design patterns? The short answer to this last question is: Christopher Alexander, another famous architect. In 1977 he co-authored “A Pattern Language” with Sara Ishikawa and Murray Silverstein. Of course, the game does not stop here: how did Alexander come to think about design patterns? 
I still don’t have the answer here, but fortunately Alexander likes to write, and has published The Nature of Order, where he basically explains why, how and when he thinks this way, and tries to give a few answers about the universe as well… 
Anyway, he criticizes lots of other architects in this book, including Le Corbusier, taking “Le Couvent de la Tourette” as an example of something very bad. He basically says that this block has no business being there: it doesn’t fit the place at all, it doesn’t care about the context. It was just an architect, with probably lots of ego, who drew it in his office with his view of “modern” architecture, no matter whether the place would be useful, or even beautiful. Funny enough, it perfectly fits what I thought when I saw this building for the first time, before knowing that a great architect had built it… 

What’s a great building then?

According to Alexander, either something contributes to the whole, or it doesn’t. I can only give you a hint of what this means, because the 1,200 pages of the book basically exist to explain this concept of wholeness in depth. In a nutshell, he argues that the error of last century’s architecture was to separate the building from its context, sometimes by ego, sometimes by greed, sometimes by ignorance. Because of that, some people believe they can just draw something, and the technical implementation is not their problem. They are missing (and usually don’t want) a feedback loop. They think they are smart enough to design a perfect thing up front, because in their beliefs the context doesn’t really matter. They underestimate the difficulty, because they miss the fact that the building will be part of a more complex system, and cannot be great by itself: it will, or will not, improve the complex system in which it takes its place. 
Alexander thinks this kind of complex design can only be emergent, and argues that this is how structure is built by nature itself. 

Feedback loop, contexts, emergent design… Are you talking about Agile or what? 

Yes and no… Probably like you at this moment, I thought I had found the source of inspiration for the Agile manifesto. And it could actually be the case, because some of the manifesto signers were aware of the concept of design patterns, and thus somehow already influenced by Alexander. If by any chance a manifesto signer reads this blog post, I would love to know whether, at any point in the discussion, the name of Christopher Alexander was mentioned… 
Anyway, Alexander is also aware that what he describes in depth are properties of complex systems, and that activities like crafting software can certainly use many of these ideas as well. He is even aware of Agile, and talks about XP in his book! For him, it’s not enough. There are these notions of emergent design and feedback loops, but he thinks we are still missing this concept of “wholeness”. And on this point, I agree with him. 

Dude, would you mind talking about software now? 

Here we go. Like Alexander, I believe that his ideas apply very well to software, as to any complex system. 
Indeed, we can feel that some programs are “bad” somehow. The tricky thing is that we don’t have an objective way to judge it. We are missing this wholeness property. Some software is good despite its lack of unit tests. Some software is good despite a shitty User Interface (UI). Some software is bad despite the unit tests, the agile mindset, the pretty UI and the use of the latest fancy framework or architecture style (yes, Microservices, I see you!)  
This is because, in a complex system, what we feel as good or bad is the sum of many properties interlaced, that we can’t control or even enumerate. We can only feel them. Worse: these properties are different for each software, because they are highly dependent on the context. 
The same is true for physical buildings; still, Alexander has tried to find high-level properties that usually fit what he considers great structures. 
So defining good software by practices or tools is definitely a failure. Your system won’t be great just because you’re building an event sourced system with a shiny framework in a functional language. 
It won’t be great just because your OOP code base is SOLID.
It will be great only if both the people building it and the people using it are happy. Only if they feel whole when they use or build it. These kinds of software probably share high-level properties that we should strive to find. 

What’s on your mind about it? Can you think of any useful properties of such systems already?  

Bounded Context Patterns

After a decade of coding, I tend to believe that being able to discover and correctly implement bounded contexts is one of the main values I can bring to a company in my daily job. As a consultant, I have had the chance to do it in different gigs for a few years now, and I am starting to see repetitive patterns in the way companies are structured. I usually use a strategic Domain Driven Design (DDD) approach to understand and classify this structure, resulting in a set of domains, subdomains and bounded contexts. 
Because of this repetition, I think bounded contexts can be classified in patterns, and that these patterns can help to know the importance of the bounded context and how to build it in the most efficient way for the business. 
 
I believe these patterns can also be seen with a hexagonal approach. Indeed, like in the hexagonal architecture, we have core stuff and surrounding stuff with adapters and/or anti-corruption layer. It’s the same principle, just at a different scale. 

Here is a list of the most recurring patterns I have discovered over time. 

The Data loader Context 

It is quite common for a business to depend on data from an external source. This is when we need to fetch data from somewhere else, usually at a recurring interval.
When I identify this pattern, I try to isolate this behavior in a context, ideally one project per data source. All these projects are Data Loaders for our domain. These contexts are often quite technical, and require a strategy to handle the external (and usually unreliable) data source. 
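As a sketch of what such a strategy could look like (all names here are hypothetical), the unreliable external source is isolated behind one function, and the retry policy lives in the Data Loader context instead of leaking into the domain:

```python
import time

# A minimal Data Loader sketch: the unreliable external source is hidden
# behind a fetch function, and the retry strategy belongs to this context.

class SourceUnavailable(Exception):
    """Raised when the external source cannot be reached."""

def load_with_retry(fetch, attempts: int = 3, delay_seconds: float = 0.0):
    """Call the external fetch function, retrying on failure."""
    last_error = None
    for _ in range(attempts):
        try:
            return fetch()
        except SourceUnavailable as error:
            last_error = error
            time.sleep(delay_seconds)  # back off before the next attempt
    raise last_error

# Simulate a source that fails once, then answers.
calls = {"count": 0}
def flaky_fetch():
    calls["count"] += 1
    if calls["count"] < 2:
        raise SourceUnavailable("external source timed out")
    return [{"id": 1, "raw": "some external payload"}]

print(load_with_retry(flaky_fetch))
```

The rest of the system only sees the loaded data; whether it took one attempt or three is a concern of this context alone.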


The Referential Context  

This is a very tricky and still very common one. It is the famous “Tools” or “Common” project you’ll find in any solution, but at a different scale. It’s usually a referential of something that you need to share with the rest of the company, or even with other companies. You know that you should avoid it, but still, you really need the whole business to share this single source of truth.
Of course, you want to isolate this behavior in a single context, and then you should not consider it a “safe” context, because it will have many dependencies. If possible, make this context independent of its consumers; at least the dependencies will then go in only one direction.
It’s a very challenging context, because of those dependencies. You need to talk a lot, with lots of teams. These contexts are also usually technical, because to make them more Domain Driven you would need to know why your data is used, and that is often incompatible with the goal of staying independent of the consumers.


The Anti Corruption Context 

Because fetching external data is common, we need to convert it from something untrusted into something validated for our business. This is well known in the DDD community, and usually implemented through what is called an Anti Corruption Layer.
But I believe it often makes sense to actually consider this layer as a context in itself, because the logic behind the data conversion of an external source can be pretty complicated. Even if the business tells you that “it’s just a JSON, you know”… 
This context is at the border between the external/technical world and the internal/domain world. It maps the technical validated data from the Data Loader to something usable by the domain. It is an adapter in the hexagonal architecture metaphor. 
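A minimal sketch of that mapping (the payload shape and domain type are hypothetical): the raw external data is translated into a type the domain trusts, and the messy conversion rules live in this context, not in the business core.

```python
from dataclasses import dataclass
from decimal import Decimal

# A validated domain type: the business core only ever sees this shape.
@dataclass(frozen=True)
class Price:
    amount: Decimal
    currency: str

def to_domain_price(external: dict) -> Price:
    """Map the untrusted external shape to the validated domain shape."""
    # "It's just a JSON, you know"... except the amount is a string in cents
    # and the currency code is lowercase in this particular external source.
    amount = Decimal(external["amount_in_cents"]) / 100
    currency = external["currency"].upper()
    if currency not in {"EUR", "USD"}:
        raise ValueError(f"unsupported currency: {currency}")
    return Price(amount=amount, currency=currency)

raw = {"amount_in_cents": "1999", "currency": "eur"}
print(to_domain_price(raw))  # Price(amount=Decimal('19.99'), currency='EUR')
```

Every quirk of the external source is absorbed here, so a change in the external format touches this context only.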

The Reporting Context 

Here comes another almost unavoidable one: the reporting context. It “just” reports data from the core domain, and we should not expect a high level of domain complexity for that… But still, in my experience these contexts are important, because they are used to drive the company, and some critical business decisions might depend on them.
They can also be complex, because it is not uncommon to handle automatic integration of these reports with external tools, or different types of access to the data depending on the user’s role. 
As explained by Scott Wlaschin in Domain Modeling Made Functional, this is where you’ll put the OLAP responsibility of your system, whereas the business core is more like an OLTP system. This explains, for me, why you want to isolate the two behaviors.
It is a port to the external world in the hexagonal architecture metaphor. 
 

The Business Core Context 

And finally, the holy grail, the one you look for because it creates value for your business, the key technical asset for your company: the business core context. 
All the projects I have worked on have a business core context. The thing is that, most of the time, we put too many things inside it (like the data loading, or the anti-corruption, or the reporting…)  
This is usually done for the sake of Don’t Repeat Yourself (DRY): we have the JSON from this external source, so why should we transform it before using it? This is the usual doom of DRY: technical issues leaking into your business, and that’s why you want to avoid it. 
This context should be the most Domain Driven of your whole solution, meaning for example that you want to avoid primitive types, dependencies on external stuff, and unhandled exceptions. You shouldn’t do technical validation here either, because other contexts can take care of that for you.
In the hexagonal metaphor, it is of course the core domain layer of the architecture. 
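As a small illustration of avoiding primitive types in the core (the domain and names are hypothetical): instead of passing a bare string around, the core only accepts a type that cannot exist in an invalid state.

```python
from dataclasses import dataclass

# A tiny value object: the invariant lives in the type itself, so the
# business core never has to defensively re-check a raw string.
@dataclass(frozen=True)
class CustomerEmail:
    value: str

    def __post_init__(self):
        # Deliberately simplistic check, for illustration only: an invalid
        # value simply cannot be constructed.
        if "@" not in self.value:
            raise ValueError(f"not an email: {self.value!r}")

def send_welcome_message(email: CustomerEmail) -> str:
    # Core logic takes the domain type, not a primitive string.
    return f"Welcome, {email.value}!"

print(send_welcome_message(CustomerEmail("ada@example.org")))
```

Technical validation of the raw input happened in an outer context (the anti-corruption one, for instance); by the time a `CustomerEmail` reaches the core, it is valid by construction.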

Map with core, generic and support contexts

In the DDD community, based on the already mythic blue book by Eric Evans, we usually classify contexts into three categories: core, support or generic. Of course, these categories will depend on your domain, but I think we can, most of the time, map them to the patterns defined above. 
For instance, the Data Loader and the Reporting contexts can be generic contexts, but again, beware of the complexity hiding in them (a third-party tool might do the job, at least to start).
The Anti Corruption contexts are usually in the support category (useful but not a competitive advantage, yet they cannot be handled by a third-party tool, because they depend on your core domain).
The Referential Context might be core or support.
The Business Core Context is obviously in the core category. 

What next ?

Here are the main patterns that make sense to me so far. The list is not exhaustive; for example, another pattern I omitted is the User Interface (UI) context. Which context your UI belongs to isn’t an easy question in DDD, and one answer to it can be a dedicated context. But I still don’t really know whether it is a useful or a harmful pattern… I tend to prefer contexts like reporting, with a clear business responsibility and usually many UIs.

Anyway, what about your experience? Maybe you have identified some patterns that are missing here? Maybe you know about some blog posts or books exploring this part of strategic DDD?
I’d be glad if you shared them with me, and maybe we could build together a more exhaustive (and hopefully useful) list of bounded context patterns.