The power of habit


“I’m not a great programmer; I’m just a good programmer with great habits.”

You have probably already heard this famous quote by Kent Beck. It is one of the most insightful quotes I have heard in my career, because it carries hope.
Hope that you, me, anybody can become great by relying on habits. And this is great news because you were not born with habits. You develop them in response to an ecosystem, and you are only more or less aware of them.
It is a common falsehood to convince ourselves that we develop habits in response to what we deeply are, in our soul, mind and body. But it's exactly the other way around: in the end, we are defined by our routines.
You're not a drug addict because it was written in stone that you would live as a junkie. You became a junkie because you relied on (bad) "having fun" habits for so long that, at some point, you lost control.
You're not an average developer because you're not smart enough. You became an average developer because you relied on (bad) coding habits for so long that you cannot even imagine another way to work in this industry. Which leads to remarks like "X is a great idea, but it doesn't work in practice", where X is any good software development practice, like unit testing, pairing or continuous deployment.

In other words, anybody can radically change their own life by changing their habits. This is exactly the topic of the last book I would like to share with you: The Power of Habit.

Great stories

Charles Duhigg is the author of this book, and he is such a good writer. I have rarely had this much pleasure reading a non-fiction book. The topic is the power of habit, why we do what we do and how to change it, and the writer uses storytelling to share his findings on the subject.
This book will improve your knowledge of the human brain, behavior and psychology, and provide awesome anecdotes along the way. Stories about how a man with brain damage lived an almost normal life despite total amnesia, how coach Tony Dungy turned the Buccaneers football team from one of the worst into one of the best of all time, or how commercial successes like Pepsodent or Hey Ya! were built behind the scenes, all of them relying on the power of habit.

The book is organized around scientific findings, described through great stories, to make it popular science, and it is a delight to read.

Great advice

If you are not already convinced that habits are powerful, you will probably change your mind after reading a few of these stories and thinking a bit about your own life.
But the best part is that once you are aware of them, you have a chance to change the bad ones and to improve the good ones. And the book also contains practical advice on how to do it.
In other words, this is a great tool to help change your life, and that is why I’m happy to share it with you.
It is, by far, the book with the most impact on my daily life that I have read in a while.

Sharing my good habits

For more than 10 years now, I have regularly taught myself routines in order to get better. I believe these habits now define me more than anything I can say or write. Here are some of them, from the most to the least impactful:

  • Learn (by training, reading, watching videos and attending conferences, there is nothing you cannot learn)
  • Teach (through a blog, a book, a training session or anything else; as soon as you learn something interesting, just share it, it will teach you even more)
  • Daily exercise (physical training, it can be hard fitness or just a gentle walk, the point is to empty your mind and take care of your body)
  • Stay focused (the Pomodoro technique helps me a lot with that)
  • Log your daily work (you probably already do daily stand-ups? Have you tried writing down your plan and what you accomplished every day? I have barely been doing it for a year now, and I already find it essential)

What about you? Which habits were the most impactful in your career and life?

 

OOP, FP and the expression problem

As a software developer, you will probably have, at least once a month, a discussion about Functional Programming (FP) vs Object-Oriented Programming (OOP). Of course, it's always about the context; I have already talked about that.
But thanks to Samir Talwar, I had new insights recently on this topic that I would like to share with you.
To give more context, I have been working more and more with F# for 3 years now, hence I have more and more practical feedback about it. And in the last few weeks I came to the conclusion that I feel much more comfortable with FP for mid-term code maintenance. Here is why.

OOP in theory

On my current gig, I had to switch back mainly to C# (even if some parts of the code are in F#). I recently had a challenging refactoring to do on the core domain, and it led me to this tweet.

My thought at that moment was that OOP can indeed be beautiful in theory, but I rarely see it stay beautiful in practice. Not because of a lack of practice or bad code, just because it's almost impossible to keep it beautiful over time.
At first the design is really clear, a few nouns, a few actions (aka verbs), sometimes a nice inheritance between objects, and it all sounds good. But of course, after a while the specifications evolve, and you need to add new objects and/or behavior.

OOP in practice

What I usually observe is that the evolution of the domain forces you to refine your abstractions. It makes sense, because you improve your knowledge about the domain while you're working on it. In practice it means that you try to add objects to your inheritance hierarchy, with behaviors that might be slightly different from what you had thought at first.
That's where the design starts to slide out of control. Maybe we could just add a boolean here instead of multiplying the class implementations? Maybe we'll keep this inheritance because there are so many shared behaviors in the base class that we don't want to duplicate across most classes, even though one implementation doesn't need this behavior. And so on…
That’s why I usually find OOP using composition rather than inheritance easier to maintain in the long term. But even with that, it’s hard to keep data and behavior together and maintainable.
Hence I somehow feel that I prefer the constraints of FP, with its separation of data and behaviors, to the constraints of OOP, but it was hard for me to explain clearly why.
That's where Samir's wisdom showed me the light…

The expression problem

Here is Samir’s answer to my tweet:

That’s it. You can change behaviors (verbs) easily or you can change nouns (classes) easily, but you can’t have both. It is known as the expression problem. FP focus on changing verbs easily whereas OOP focus on changing/adding nouns easily. Samir takes the time to write in detail about it.
I think a lot about that in the last years, but never see it so shortly and clearly explained than in this tweet. And to be fair when you see it so simply explained, you wonder why you need years to think about it!
Despite the expression problem seems to be a well-known issue in software industry, it is rarely cited in any debate about FP vs OOP, as far as I know.
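To make the trade-off concrete, here is a minimal sketch in F# (the shape types are hypothetical, just for illustration). With a discriminated union, adding a new function (verb) is trivial, but adding a new case (noun) forces you to touch every function; with an interface and classes, it is exactly the opposite.

```fsharp
// FP style: the set of nouns is closed, the set of verbs is open.
type Shape =
    | Circle of radius: float
    | Rectangle of width: float * height: float

let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Rectangle (w, h) -> w * h

// A brand new verb, added without touching the existing type or functions.
// But adding a Triangle case would force a change in every match on Shape.
let describe shape =
    match shape with
    | Circle r -> sprintf "a circle of radius %.1f" r
    | Rectangle (w, h) -> sprintf "a %.1f x %.1f rectangle" w h

// OOP style: the set of verbs is closed, the set of nouns is open.
type IShape =
    abstract member Area: unit -> float

// Adding a new class (noun) is easy; adding a new member (verb) to IShape
// would force a change in every implementation.
type CircleShape(radius: float) =
    interface IShape with
        member this.Area() = System.Math.PI * radius * radius

type RectangleShape(width: float, height: float) =
    interface IShape with
        member this.Area() = width * height
```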

OOP or FP?

I’ll rather let Samir answer if you don’t mind:

I totally agree with him, and I guess this explains why most people who jump from OOP to FP never look back, even if it can be hard for them to tell exactly why.
It’s not better by design, it just has different pros and cons, but these pros tend to make the evolution of the software easier, if the verbs of your domain change more than your nouns.
In Samir’s experience it’s usually the case, in mine too, what’s your experience about it?

 

In defense of Event Sourcing

Having practiced and taught Test Driven Development (TDD) for many years now, I start to see where the tipping point of acceptance of this practice is: when you accept that the problem is not the method, but the way you are coding, and that this method is just a revealer of bad practices. 
Indeed, testing code without dependency inversion or the single responsibility principle will be really painful. Hence lots of people conclude, in the name of pragmatism, that the problem is TDD, not their code. But the people who are ready to challenge their way of thinking and coding will learn a lot, and usually accept TDD, or at least unit testing, as a good practice, because it can prevent bad habits in code and design (after years of practice, I agree). 

In the last few years, I have also actively practiced and taught CQRS/ES, mainly implementing it using C# or F#, or both. And I'm convinced that it has the same power as TDD in this regard: this method is also a revealer of bad practices. Why does it matter? Because most of the criticism I hear sounds like "I had this huge error in the conception of my system, my traditional way of coding doesn't tell me that, but your Event Sourcing stuff put my head in my ####, so I guess the problem is this Event Sourcing stuff, not my way of coding, right?" 
 
So let’s talks about Event Sourcing and some usual critics we can find on the internet, or that we can have during trainings. 

Functional Event Sourcing in a nutshell by Jeremie Chassaing

How can I manage my very complex entity with billions of events??! 

This question almost always arises, especially from people who tried an implementation and ended up in this situation.  
As I heard for the first time from my co-worker Florent Pellet, an event stream is nothing but the representation of the lifespan and responsibility of an entity. So the question should not be how to handle it, but why we should handle it at all. If this situation happens, it's your domain model shouting at you: "I'M WROOOOOOOOONG, I'M a MONSTEEEEEEER! PLEASE CHANGE ME OR KILL ME BUT DON'T LEAVE ME LIKE THAT !!" 

Too many events in a stream means an error in design. And the thing is that this error exists independently of the way you implement your domain. We could easily detect it in a stateful/relational database implementation, if we cared about bug tracking and which region of the code we need to change at each release. It would quickly reveal this monster aggregate (also called a god object).  
The problem here is not Event Sourcing but the design of the domain, and ignoring it for too long will be much more painful than Event Sourcing itself. 

That being said, we do have a technical solution for this problem: it's called snapshots. Basically, the idea is to store intermediate state to avoid rebuilding it from scratch each time you need to load this monster aggregate. But when you have to use it, you can consider it a design failure. It's like code comments: it can be useful, but it's often just used to hide bad coding habits. 
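To illustrate what rebuilding an aggregate and taking a snapshot look like, here is a minimal F# sketch (the account events and functions are hypothetical, only for illustration): the current state is a left fold over the stream, and a snapshot is nothing more than a saved intermediate state plus the version it was taken at.

```fsharp
// Hypothetical events for a simple account aggregate.
type AccountEvent =
    | Deposited of amount: decimal
    | Withdrawn of amount: decimal

type Account = { Balance: decimal }

let initial = { Balance = 0m }

// One pure function applies one event to the current state...
let apply state event =
    match event with
    | Deposited amount -> { state with Balance = state.Balance + amount }
    | Withdrawn amount -> { state with Balance = state.Balance - amount }

// ...and the current state is just a left fold over the whole stream.
let replay events = events |> List.fold apply initial

// A snapshot is only a stored intermediate state plus the version it was taken at:
// to load the aggregate, start from the snapshot and fold the remaining events.
type Snapshot = { State: Account; Version: int }

let replayFromSnapshot (snapshot: Snapshot) (newerEvents: AccountEvent list) =
    newerEvents |> List.fold apply snapshot.State

// replay [ Deposited 100m; Withdrawn 30m ]  // gives { Balance = 70M }
```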

I can no longer access and change data in the database??! 

Have you ever heard something close to this: “You know, in my data-intensive applications issues are often caused by data anomalies rather than code-based bugs.”  
It’s a more elegant version of “The problem is not the software, it’s the users”. 
Indeed, quite a widespread bad habit among developers is to fix problems directly in the production data (i.e. the consequences) rather than at the root causes (i.e. the code or the process). 
Which can be painful in Event Sourcing, because event contents are usually stored as JSON (when human readable), or even as blobs that are not human readable at all. 

So let’s back to the magic question, instead of wondering how to do it, we can ask ourselves why to do it? It can often be tracked to UX, design or process errors.

So handle these problems at their causes, and then fix the consequences. You can't just modify the database by hand because you have too many events to update? Good, write a script then. Which should have been done anyway for maintenance, no matter whether you're using an event store or a relational database. And yes, this script might be harder to write than for your classical relational database, hence the importance of fixing the root cause.

But the business people do not understand it??! 

Oh yes, they do. If you think they don't, ask them whether they think that a user with an empty cart because he just logged in is the same thing as a user with an empty cart because he added and removed something 3 times. 
A dev might think it is, because an empty cart is just an empty cart after all (and most of the time this is how it will be designed). The business, though, will understand the learning opportunity in this add/remove behavior, and will want to track it. 
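Here is a tiny, hypothetical F# sketch of that cart example: both streams below fold down to the same empty-cart state, but only the event history keeps the add/remove hesitation the business wants to learn from.

```fsharp
// Hypothetical cart events, just for illustration.
type CartEvent =
    | CustomerLoggedIn
    | ItemAdded of sku: string
    | ItemRemoved of sku: string

// Folding the events gives the current cart content.
let applyToCart items event =
    match event with
    | CustomerLoggedIn -> items
    | ItemAdded sku -> sku :: items
    | ItemRemoved sku -> items |> List.filter (fun s -> s <> sku)

let currentItems events = events |> List.fold applyToCart []

// Two very different stories...
let freshCart = [ CustomerLoggedIn ]
let hesitantCart =
    [ CustomerLoggedIn
      ItemAdded "book"; ItemRemoved "book"
      ItemAdded "book"; ItemRemoved "book"
      ItemAdded "book"; ItemRemoved "book" ]

// ...that produce exactly the same state: an empty cart.
// currentItems freshCart = []  and  currentItems hesitantCart = []
// Only the stored events keep the behavior the business wants to track.
```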

Also, have you ever worked on a system to "add logging"? It's painful because it adds dependencies, and it's not always easy to know what to log. Event Sourcing, at least coupled with Domain Driven Design (DDD), answers this question. And this need for "logging" from the business is a sign that they understand the concept of Event Sourcing and the value it can bring. 

But the devs are not trained for that and they don’t understand how to use it??! 

It’s one of my favorite one. Most devs use GIT. So basically most devs already understand and use the value of logging every past changes. And they also already understand that logging little changes will be much easier to exploit in time than logging big changes. 

It’s true though that they’re not trained to do such implementation by themselves, because they have done years of Oriented Object (even if it’s done in a procedural way), ORMs and relational database. At some point some people do not even know that alternatives do exist. 
Compared to this way of coding, it requires indeed a mind shift, but I can’t believe that someone smart enough to code and courageous enough to use ORM, will not be smart enough to learn about Event Sourcing. 
I do believe though that most employers do not want to invest in their own employees training, but that’s another topic. 

And I can no longer easily change my schema??! 

Finally, an (almost) valid point. Yes, changing the schema (i.e. the serialization of events because you change, add or remove properties) is not the fun part. Schema migration was never straightforward anyway, again no matter which implementation you choose. 
I agree, though, that it requires a more complex process in an event-sourced system, because each time you want to update the present, you need to care about the past. It might be unusual coming from a relational database, but it is actually a good idea. 

You have 3 solutions: 
1- you can pretend the past never existed, and use a script to update old events into valid events 
2- you can pretend the past never existed, and fix the invalid events in the event repository by using default values in the code 
3- you can care about the past, because the version of an event might impact the way you want to handle it in your business model; in this case you can use event versioning and different code paths 

In other words: you have to explicitly choose an update strategy for each change that could affect the past. Ignoring the past or not depends on the business’s needs. 
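As a hypothetical F# sketch of the third option: keep the old event version in the model (past events never change), and either upcast it to the latest version at load time or handle both versions explicitly in your code paths.

```fsharp
// Hypothetical example: an address was a single string in V1,
// then split into street and city in V2.
type AddressV1 = { Address: string }
type AddressV2 = { Street: string; City: string }

// Both versions live in the model, because past events are immutable.
type CustomerEvent =
    | CustomerMovedV1 of AddressV1
    | CustomerMovedV2 of AddressV2

// One possible strategy: upcast old events to the latest version when loading
// the stream. The default value is a business decision, not a technical one.
let upcastEvent event =
    match event with
    | CustomerMovedV1 v1 -> CustomerMovedV2 { Street = v1.Address; City = "Unknown" }
    | other -> other

// The rest of the code then only needs to understand the latest version,
// or it can match on both cases when the distinction matters to the business.
```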

 
So you say it’s a silver bullet? 

Of course not, but I would like to set aside the criticisms that basically boil down to complaining about having to change some bad habits.  

Event Sourcing gives you more options, hence more responsibility. It gives you the opportunity to think with the business in mind (especially when coupled with DDD and CQRS). I believe that it's this new world of "many options" that can frighten people who prefer the prescriptions of a rigid framework. The very fact that it does not have a proper standard and that everybody can come up with their own implementation is partly what makes it so powerful for me. 
 
Event Sourcing isn’t trivial. As we already saw, it makes designs error even more painful than usual (I’d prefer to say: harder to ignore or postpone). It means that using it without knowing about DDD for example might be a good way to shoot yourself in the foot. It also means that if you’re discovering a domain (proof of concept for a startup?) it won’t fit. 
 
But if you’re looking for a way to build a robust and scalable system in a domain that you can know (even if it will change), I still haven’t found a better approach so far . 
 
Surprisingly enough, context is king. The power of Event Sourcing is that your implementation can greatly change depending on your context. 
 
 

 

3 bad coding habits of most software developers

And trust me, when I say “most developers” I include myself as I am now, or as I was a few years ago. 
Let’s talks together about the most productivity killer habits most of us share. I’ll discuss them from the less to the most common with my current experience. 

2 hours of manual testing can save 2 minutes of automated testing 

Unit testing is more and more widespread, but of course the game now is to explain why, in your team/context, you “can’t really do this”. 
A few of the usual excuses we tell others (and ourselves) are: 
– This is just a little project 
– The team is not trained/ready for this 
– We do not have time to write tests because we need to deliver features 
– Unit tests are worthless, we prefer end-to-end tests 
But 2 hours of manual testing can save 2 minutes of automated testing. Unless you only need to test your feature once or twice, writing an automated unit test will be worth it most of the time (especially if you write it first, hence keeping a testable architecture).
Honestly, what would your customers say if you told them that you don't test everything at each release? Or worse, that most of the cost of the development process is actually manual testing of the software? 
Trust me, I fought this idea long enough and looked for many alternatives. Like many beginners, my first contact with this method was something like "why the hell would you ever need this if you know how to code?" 
But in 10 years, I still haven’t found anything better than unit testing to deliver quality software and to speed up feature delivery in the mid and long term. 

2 days of Pull Request can save 2 hours of pair programming

I know the first point about unit testing is probably consensual enough, at least for the readers of this blog. And I know that this point about Pull Requests will trigger much more discussion. 
My point is that, most of the time, Pull Requests are a (bad) way to implement collaboration in a team. Probably one of the worst ways to do it, because: 
– It is asynchronous and necessitates lots of context switching both for the author and the validators 
– It slows down the flow of feature delivery, as a Pull Request can stay for days in the review stage 
– It implies a relationship where the validator judges the work of someone else, and often feels it necessary to add comments just to show that she took the time to seriously read the Pull Request (and sometimes it's a true hierarchical relationship, when only architects or tech leads are authorized to validate Pull Requests) 
Yes, 2 days of Pull Request review can save you 2 hours of pair programming. Plus pair programming will improve the common code ownership, spread good habits, coding tips and domain knowledge in your team much faster. 
Should we avoid Pull Requests then? No, but use them only when it is strictly impossible to do the work directly as a pair or a mob. I hear you: "But my manager will kill me, 2 people doing the same task is pure madness!" 
A first step toward feeling the benefits of what I describe here is to ask for synchronous reviews with the author. The review will be much faster to validate, because it's easier to understand what our coworker means with her voice than with her code alone.

2 weeks of coding can save 2 days of Event Storming

If I had only one thing to tell my younger self, it would be "Not Silverlight!". More seriously, it would be "being able to quickly understand any domain context and the human relationships involved in it will make you a much better developer than mastering any of the shiny tools around". 
Don't get me wrong, methods and technology matter, but the challenge is to find the ones fitting your context. Instead, most of the time we just impose our technical knowledge (i.e. our habits) on the business (which is why most software is just CRUD built on a relational database with the latest fancy framework, if you ask me). 
The thing is, it doesn't matter how good the domain experts and/or the company are: the most critical point in software development is how well the developers understand the domain. 
The best method I know so far to share business knowledge is known as Event Storming. It's basically a meeting between technical people and domain experts to talk about how the company earns money. 
I know, business people are "really busy" and it's very hard to get them to spend a few hours talking with you. The question, though, is: can they afford to throw away weeks of coding (or worse: to keep bad code and try to fix it for the life of the software) when people realize that the software doesn't fit their needs? A few days of Event Storming, even with the whole team, is really cheap compared to the usage and maintenance of bad software. And as with pair programming, it improves the feeling of common ownership of the software; more people feel involved in the process of creating the right tool for the company. 

Why are these methods still unusual? 

First of all, I would say that they are more and more common, but of course it’s still far from the usual way of working. I think it’s mainly due to the following points.

Line-of-business software is so complicated that nobody can control it. A corollary is that this level of complexity should be managed by a team, with proper tools. But we usually ignore this fact, due to our ego or just out of habit.
All these methods have a mid-term return on investment, and are thus hard to evaluate. That doesn't fit well in a company with a Taylorist state of mind and management, which is still the majority.

But as soon as you accept the complexity of software development and keep an egoless approach, these methods suddenly seem absolutely normal, as a way to avoid wasting time. 

 

The properties of great architecture

It is commonly said in IT, at least in Agile circles, that we can't design software the way we design buildings, using a Waterfall approach. We then often say, myself included, that even if Waterfall works for designing buildings, software really is a different beast. 
But have we ever asked ourselves how well this Waterfall approach actually works for physical buildings? 

An architectural masterpiece

A few years ago, I moved to a little town near Lyon. During a walk, I accidentally discovered a very strange building. It basically looks like an enormous Soviet block, with no color and strange shapes. I was really surprised because it stands in a very nice spot, in the middle of a forest where you can take wonderful walks. There was also a very nice old-fashioned house in the neighborhood. And there stands this hideous block. My guess was that it was just something inherited from the war that the government had given away to charity (it is now a convent).
But as I was leaving the place, some signs for tourists explained to me that this was the famous "Couvent de la Tourette", designed by the even more famous architect Le Corbusier. I hence accepted that I just lacked the sensibility to understand why this building is actually a piece of art, a masterpiece by a master architect, and not at all a Soviet block.
This was a few years ago, and I never thought about it again; I may even have recommended the place to a few tourists in the region… "Did you know we have a famous building here?" 

Convent, la Tourette

Alexander’s point of view

One of my favorite things is to try to understand how the people considered as pioneers were influenced to think the way they do and come up with mind-blowing ideas, especially in the IT industry. How did Alan Kay come to think about Object-Oriented Programming (OOP)? How were the manifesto signatories influenced to think about Agile? 
And the one that leads me to this post: how did the Gang of Four come up with design patterns? The short answer to this last question is: Christopher Alexander, another famous architect. In 1977 he co-authored "A Pattern Language" with Sara Ishikawa and Murray Silverstein. Of course the game does not stop here: how did Alexander come to think about design patterns? 
I still don't have the answer to that, but fortunately Alexander likes to write, and has published The Nature of Order, where he basically explains why, how, and when he thinks this way, and tries to give a few answers about the universe as well… 
Anyway, he criticizes lots of other architects in this book, including Le Corbusier, taking "Le Couvent de la Tourette" as an example of something very bad. He basically says that this block has no business being there: it doesn't fit the place at all, it doesn't care about the context. It was just an architect, with probably lots of ego, who drew it in his office, with his view of "modern" architecture, regardless of whether the place would be useful, or even beautiful. Funnily enough, it perfectly fits what I thought when I saw this building for the first time, before knowing that a great architect had built it… 

What’s a great building then?

According to Alexander, either something contributes to the whole, or it doesn't. I can only give you a hint of what that means, because the 1,200 pages of the book are basically there to explain this concept of wholeness in depth. In a nutshell, he argues that the error of the last century's architecture was to separate the building from its context, sometimes out of ego, sometimes out of greed, sometimes out of ignorance. Because of that, some people believe they can just draw something, and the technical implementation is not their problem. They are missing (and usually don't want) a feedback loop. They think that they are smart enough to design a perfect thing up front, because in their beliefs the context doesn't really matter. They underestimate the difficulty, because they miss the fact that the building will be part of a more complex system, and cannot be great by itself: its greatness depends on whether or not it improves the complex system in which it takes place. 
Alexander thinks this kind of complex design can only be emergent, and argues that this is how structure is built by nature itself. 

Feedback loop, contexts, emergent design… Are you talking about Agile or what? 

Yes and no… Probably like you at this point, I was thinking that I had found the source of inspiration for the Agile manifesto. And it could actually be the case, because some of the manifesto signatories were aware of the concept of design patterns, and thus somehow already influenced by Alexander. If by any chance a manifesto signatory reads this blog post, I would love to know whether, at any point in the discussion, the name of Christopher Alexander was mentioned… 
Anyway, Alexander is also aware that what he describes in depth are properties of complex systems, and that activities like crafting software can certainly use many of these ideas as well. But he's also aware of Agile, and even talks about XP in his book! For him, it's not enough. There are these notions of emergent design and feedback loops, but he thinks we are still missing this concept of "wholeness". And on this point, I agree with him. 

Dude, would you mind talking about software now? 

Here we go. Like Alexander, I do believe that his ideas apply very well to software, as to any complex system. 
Indeed, we can feel that some programs are "bad" somehow. The tricky thing is that we don't have an objective way to judge it. We are missing this wholeness property. Some software is good despite its lack of unit tests. Some software is good despite a shitty User Interface (UI). Some software is bad despite the unit tests, the agile mindset, the pretty UI and the use of the latest fancy framework or architecture style (yes, Microservices, I see you!).  
This is because, in a complex system, what we feel as good or bad is the sum of many interlaced properties that we can't control or even enumerate. We can only feel them. Worse: these properties are different for each piece of software, because they are highly dependent on the context. 
The same is true for physical buildings; still, Alexander has tried to find high-level properties that usually fit what he considers a great structure. 
So defining good software by practices or tools is definitely a failure. Your system won't be great just because you're building an event-sourced system with a shiny framework in a functional language. 
It won't be great just because your OOP code base is SOLID. 
It will be great only if both the people building it and the people using it are happy. Only if they feel whole when they use or build it. These kinds of software probably have high-level common properties that we should strive to find. 

What’s on your mind about it? Can you think of any useful properties of such systems already?  

 

Bounded Context Patterns

After a decade of coding, I tend to believe that being able to discover and correctly implement bounded contexts is one of the main values I can bring to a company in my daily job. As a consultant I have had the chance to do it in different gigs for a few years now, and I am starting to see repetitive patterns in the way companies are structured. I usually use a strategic Domain Driven Design (DDD) approach to understand and classify this structure, resulting in domains, subdomains and bounded contexts. 
Because of this repetition, I think bounded contexts can be classified into patterns, and that these patterns can help determine the importance of a bounded context and how to build it in the most efficient way for the business. 
 
I believe these patterns can also be seen with a hexagonal approach. Indeed, like in the hexagonal architecture, we have core stuff and surrounding stuff with adapters and/or anti-corruption layer. It’s the same principle, just at a different scale. 

Here is a list of the most recurring patterns I have discovered over time. 

The Data Loader Context 

It is quite common for a business to be dependent on data from an external source. This is when we need to fetch data from somewhere else, usually at a recurring interval.
When I identify this pattern, I try to isolate this behavior in a context, ideally one project per data source. All these projects are Data Loaders for our domain. These contexts are often quite technical and require a strategy to handle the external (and usually unreliable) data source. 


The Referential Context  

This is a very tricky and still very common one. It is the famous "Tools" or "Common" you'll find in any solution, but at a different scale. It's usually a referential of something that you need to share with the rest of the company, or even with other companies. You know that you should avoid it, but still you really need the whole business to share this single source of truth.
Of course, you want to isolate this behavior in a single context, but even then you should not consider it a "safe" context, because it will have many dependencies. If possible, make this context independent of its consumers, so that at least the dependencies go in only one direction.
It's a very challenging context because of the dependencies. You need to talk a lot, with lots of teams. These contexts are also usually technical, because to make them more domain-driven you would need to know why your data is used, and this is often not compatible with the will to stay independent of the consumers.


The Anti Corruption Context 

Because fetching external data is common, we need to convert it from something not trusted into something validated for our business. This is well known in the DDD community and is usually implemented through what is called an Anti Corruption Layer.
But I believe that it often makes sense to actually consider this layer as a context in itself, because the logic behind the conversion of data from an external source can be pretty complicated. Even if the business tells you that "it's just JSON, you know"… 
This context is at the border between the external/technical world and the internal/domain world. It maps the technical validated data from the Data Loader to something usable by the domain. It is an adapter in the hexagonal architecture metaphor. 
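As a hedged sketch of what this translation can look like in F# (the external DTO and domain types are hypothetical): the context takes the raw, untrusted shape produced by the Data Loader and either turns it into a valid domain value or rejects it explicitly, so the core never sees the external model.

```fsharp
// Hypothetical raw shape coming from the Data Loader: untrusted and stringly typed.
type ExternalCustomerDto = { Id: string; Email: string; Country: string }

// Hypothetical domain types: already validated, no external vocabulary.
type CustomerId = CustomerId of System.Guid
type Email = Email of string
type Customer = { Id: CustomerId; Email: Email; Country: string }

// The anti-corruption translation: either a valid domain value, or an explicit rejection.
let toDomain (dto: ExternalCustomerDto) : Result<Customer, string> =
    match System.Guid.TryParse dto.Id with
    | false, _ -> Error (sprintf "Invalid customer id '%s'" dto.Id)
    | true, id ->
        if System.String.IsNullOrWhiteSpace dto.Email || not (dto.Email.Contains "@") then
            Error (sprintf "Invalid email for customer %O" id)
        else
            Ok { Id = CustomerId id; Email = Email dto.Email; Country = dto.Country }
```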

The Reporting Context 

Here comes another almost unavoidable one: the reporting context. It "just" reports data from the core domain, and we should not have a high level of domain complexity for that… But still, in my experience these contexts are important because they are used to drive the company, and some critical business decisions might depend on them.
And they can also be complex, because it is not uncommon to handle automatic integration of these reports with external tools, or different access types to the data depending on the user role. 
As explained by Scott Wlaschin in Domain Modeling Made Functional, this is where you'll put the OLAP responsibility of your system, whereas the business core is more like an OLTP system. For me, this explains why you want to isolate the two behaviors.
It is a port to the external world in the hexagonal architecture metaphor. 
 

The Business Core Context 

And finally, the holy grail, the one you look for because it creates value for your business, the key technical asset for your company: the business core context. 
All the projects I have worked on have a business core context. The thing is that most of the time, we put too many things inside it (like the data loading, the anti-corruption or the reporting…).  
This is usually done for the sake of Don't Repeat Yourself (DRY): we have the JSON from this external source, why should we transform it before using it? This is the usual doom of DRY leading to technical issues leaking into your business, and that's why you want to avoid it. 
This context should be the most domain-driven of your whole solution, meaning for example that you want to avoid primitive types, dependencies on external stuff, or unhandled exceptions. You shouldn't do technical validation here either, because other contexts can take care of that for you.
In the hexagonal metaphor, it is of course the core domain layer of the architecture. 
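To give a feel for what "avoiding primitive types" means in practice, here is a small, hypothetical F# sketch in the spirit of Domain Modeling Made Functional, using single-case unions so that a quantity or an order id can never be mixed up with a random int or string:

```fsharp
// Hypothetical domain types: the core speaks in domain words, not in primitives.
type OrderId = OrderId of System.Guid
type Quantity = private Quantity of int

module Quantity =
    // Validation happens once, at the boundary; the core then trusts the type.
    let create value =
        if value > 0 && value <= 1000 then Ok (Quantity value)
        else Error (sprintf "Invalid quantity: %d" value)

    let value (Quantity q) = q

// The compiler now prevents passing a raw int or mixing up identifiers.
let addLine (orderId: OrderId) (quantity: Quantity) =
    sprintf "Adding %d item(s) to order %A" (Quantity.value quantity) orderId
```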

Map with core, generic and support contexts

In the DDD community, based on the already mythic blue book by Eric Evans, we usually classify contexts into three categories: core, support or generic. Of course, these categories will depend on your domain, but I think we can most of the time map them to the patterns defined above. 
For instance, the Data Loader and the Reporting contexts can be generic contexts, but again beware of the complexity hiding in them (a third-party tool might do the job, at least to start).
The Anti Corruption contexts are usually in the support category (useful but not a competitive advantage, yet they cannot be handled by a third-party tool, because they depend on your core domain).
The Referential Context might be core or support.
The Business Core Context is obviously in the core category. 

What next?

Here are the main patterns that make sense for me so far. This list is not exhaustive; for example, another pattern I omitted is the User Interface (UI) context. Which context your UI belongs to isn't an easy topic in DDD, and one answer to this complex question can be a dedicated context to handle it. But I still don't really know whether it is a useful or a harmful pattern… I tend to prefer contexts like reporting, with a clear business responsibility and usually many UIs.

Anyway, what about your experience? Maybe you have identified some patterns that are missing here? Maybe you know about some blog posts or books exploring this part of strategic DDD?
I'd be glad if you agreed to share them with me, and maybe we could build together a more exhaustive (and hopefully useful) list of bounded context patterns. 
 
 

 

Event Storming and Event Modeling from the trenches

In 2014 I had the chance to join a workshop on Event Storming by Alberto Brandolini at BuildStuff. After that I started to practice it intensively at work and was quickly convinced that it was a very interesting asset for my job. Indeed, as a consultant I meet many teams, and need to understand as much context as possible in a few hours. It is surprising how, with the right tool, you can even learn things about the job that business experts themselves were missing because they lacked the time to think about it. 
In 2019, I added another string to my bow thanks to a workshop on Event Modeling by Adam Dymitruk and Greg Young in Lyon. As already explained, it covers some things I found lacking in the Event Storming approach, and thus I find it to be a very complementary method. The synergy between the two tools can be huge.
I would like to share with you how they help me to improve my craft, plus give you some tricks from the trenches. 

Event Storming from the trenches 

A lesson I learned early on is that all Event Storming sessions are different, and this is great. Of course, they are different in terms of content, but what I mean is that they can also be really different in form, because they will always adapt to your context!
For instance, it is sometimes enough to put only the events (no commands, aggregates or even contexts), because it will trigger the necessary discussion, and you won’t need to go further.   

An important point is to agree before the Event Storming on why you are doing it. You want to cover a new feature? You want to explain the business to someone else? You want to clarify a point with your team? Depending on the end goal, the form should be adapted to suit your needs. 

From experience, the law of two feet works very well for an Event Storming workshop. You don't want people to be disengaged, and you don't need the whole team all the time. Some might take a break while others are digging into a specific point, and this is great. Collective intelligence at work! 

In terms of timing, I believe that half a day is basically the most that you can do, even if people are really invested. It can be exhausting to carry on such a workshop for too long. Keep an eye on how people feel and don't hesitate to call for a break if you feel the mob needs it. 
 
Something I rarely see when people talk about the Event Storming sessions they ran is drawing links between events and commands across contexts, when an event in one context triggers a command in another. It gives you an instant view of the relationships between Bounded Contexts, hence some hints about whether your contexts are well defined. 

Finally, after many attempts, I must say that for me, Event Storming is a killer tool to get a big-picture view of a situation. I have found it a bit less valuable when I needed to dig into the implementation of a specific part (in the solution space, if you prefer). This is where Event Modeling came to the rescue! 

Event Modeling from the trenches 

One of the main pieces of feedback I got from my many Event Storming workshops is that losing the temporal link between the events is a shame, because it has great value for describing the business workflow. For me this is one of the main benefits of Event Modeling. 
It makes it very good for exploring a concrete implementation, representing the business workflow and linking it with the UI.  
 
And like Event Storming, it gives a very good view of the domain, especially when combined with Event Sourcing and CQRS. It is powerful for describing the solution workflow as you imagine and then implement it. 
With time, as the model evolves and grows in maturity, it becomes really valuable to support technical and/or business discussions. 
As Adam would say, this is sort of a blueprint of the system that a business or a technical profile can understand easily. 

I find it useful to take screenshots of such models to add to User Stories as documentation, or even to Pull Requests in order to describe which part of the system we updated (showing a picture before and after the Pull Request, for instance).  

Another trick to reach an interesting model is to describe each workflow separately, even if we feel that some of them will be handled by the same piece of code. Then, when all scenarios are well described individually, you can try to merge them into a single one that could theoretically handle all of them. But even then it's interesting to keep a trace of all the individual scenarios that led you to this design. 

Hope it helps! 

I hope these few tricks will help you to run better Event Storming and Event Modeling sessions. I could add that whatever workshop you do, capturing the end result in a Miro board (or an equivalent tool) is usually a good idea for asynchronous communication and future evolution of the model.
But if you should keep only one thing from this article, it would be this: don't worry too much about the form, keep whatever works well as an event-driven description for your team, and don't mind what you call it.  
 
Because domain events are the powerful idea here, Event Storming or Modeling are “just” a way to exploit it 🙂 

 

Event Modeling

When was your last breakthrough about software development? I think I can sum mine up in 4 steps:
1- Writing tests is not optional to handle a complex system
2- Good architecture is not optional to write tests
3- Most programs are built with a CRUD approach, which isn’t easy to maintain in complex systems
4- Event-driven architecture may help to avoid this CRUD approach, mainly because Events can be domain concepts that we share with the business.

That’s why, coming from the DDD world I’m so interested in CQRS/ES and functional programming to handle complex systems. I’ve seen a few times already how the combination of DDD strategic patterns to handle contexts, CQRS/ES implementation and Event Storming for exploration of the problem space are efficient. It allows to get a very deep understanding of the problem, and a very robust way to implement the solution, all relying on the power of events.

Recently I’ve seen a few tweets about this “new” stuff: Event Modeling. And a few tweets later, I had the chance to organize an Event Modeling workshop in Lyon with Adam and Greg, that I had the pleasure to attend. Let me share some feedback here.

Event Modeling vs Event Storming

There is a very detailed post about what Event Modeling is by its author, Adam Dymitruk, so I will rather give a short explanation assuming that you already know about Event Storming.
Both methods start in the same way: you want to "storm" events by asking business people to tell the story of their domain.
Then, you try to order the events in time to show that you can describe a business flow from start to end quite exhaustively.
From that point, Event Storming (at least in the way I'm used to practicing it) focuses on the emergence of commands, aggregates and bounded contexts.
Event Modeling, on the other hand, will then focus on the User Interface (UI). The goal is to add UIs to show the progression of a screen through the business flow. You will often duplicate the same screen and just change some values to represent how the UIs are impacted by the events. Note that you can add different lanes to represent the UIs of different users impacted by the same events.
At the same time, some lanes can be added for the events. Some events are about inventory. Some are about payment… Yes, it's just another way to let your bounded contexts emerge. And a very powerful one, I think, because it makes clear that a UI most of the time gathers data from many contexts.

Event Modeling keeps the order of Events

Practicing Event Modeling removes some of the Event Storming frustrations for me.
I have facilitated many Event Storming sessions in many organisations over the last 3 years, and I often get the same feedback: losing the temporal relationship between events (when going from Event Streams to Aggregates) is a shame, because we lose meaningful information. I usually agreed, but answered that this relationship wasn't "lost", and that we could always take pictures to keep it somewhere.
Event Modeling simply keeps this relationship by default, and I think it's a good idea in the end, because time matters when speaking about events (actually, they're just two sides of the same coin).

Event Modeling also focuses on the UI

I also observed during my Event Storming sessions that adding some UI sketches can help a lot to remove ambiguity.
One of the steps in Event Modeling is actually to draw all meaningful UIs for the different users. It makes sense because it helps business people to have a better understanding of the "link" between domain events and the software they will use in the end. I think it helps them to translate from problem space to solution space, if you will. Also, as already explained, it shows that UIs most of the time gather data from several contexts.

A different view of bounded contexts

The step of drawing contexts around aggregates, as in Event Storming, doesn't exist. Instead, the contexts emerge in an even better way I think: by putting event streams in different lanes depending on what they do for the business.
Is it more about payment? Or inventory? Or security? We easily feel that all events do not belong in the same stream. And each stream of events might in the end represent a different bounded context.

A CQRS cycle keeping track of time

Last but not least: Event Modeling results in a “true” representation of a CQRS/ES system. We know the classical circular view in a CQRS system: View -> Command -> Event -> Model.
What does this circular system look like in time? Waves of View -> Command -> Event -> Model -> View -> Command -> Event -> Model … and so on.
In the end, using this Event Model as a blueprint for the software we are building makes sense, because it doesn't miss any component that matters for the business. We have the domain events and we have the UIs, and all the transitions leading from one UI to another follow the pattern View -> Command -> Event -> Model.
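Here is a minimal, hypothetical F# sketch of one turn of that cycle: a command is decided into events based on the current model, the events evolve the write model, and a projection folds the same events into the view that will trigger the next command.

```fsharp
// Hypothetical types for one turn of the View -> Command -> Event -> Model cycle.
type Command = AddItem of sku: string
type Event = ItemAdded of sku: string
type Model = { Items: string list }
type View = { ItemCount: int }

// Command -> Event: the decision, based on the current model.
let decide (model: Model) (command: Command) =
    match command with
    | AddItem sku when List.contains sku model.Items -> []   // business rule: no duplicates
    | AddItem sku -> [ ItemAdded sku ]

// Event -> Model: evolving the write model.
let evolve (model: Model) (event: Event) =
    match event with
    | ItemAdded sku -> { model with Items = sku :: model.Items }

// Event -> View: projecting the same events into what the UI shows,
// which is where the next command will come from.
let project (view: View) (event: Event) =
    match event with
    | ItemAdded _ -> { view with ItemCount = view.ItemCount + 1 }
```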

Finally, a silver bullet?

I’m not an expert of Event Modeling, and like anything new I probably miss some disadvantages of this method due to my feeling that it removes many problems I met in my current way of working.
But I’ll try to use it very soon, because I see lots of potential in this approach.
I can see how it helps me to be closer to the solution space, with still something making sense for the business.
Something I already feel harder though might be to know how to represent aggregates with it. Actually this is probably due to the author view: he doesn’t really care about context or aggregate; but rather about streams. An event stream is a “logical” stream of events… Ie either a context or an aggregate depending on the context I guess^^

Anyway, I will try it and let you know how things are going 😊

 

Writing code isn’t the bottleneck

What is the birth date of the latest framework you're working with? Why did you choose it? Did you even choose it?
I feel more and more puzzled by the state of our industry, and by how we're used to dealing with the fact that we suck at building software. I think part of it might be due to a feeling that the problem is how fast we can write software. Spoiler alert: writing code isn't the bottleneck.

Why do frameworks exist?

They usually promise that you will avoid writing tons of "boilerplate" code: the code that has absolutely no value for your business, but is still not optional to make your system work in production.
In other words, frameworks are here to "increase" your productivity by freeing up time to write code that matters for your business.
What frameworks usually forget to tell you is that the time you'll gain by avoiding boilerplate code will mostly be spent understanding and configuring the framework. No pain no gain, right?

A pact with the devil.

But the worst part is not that it will take you time to configure. It's that you are now highly dependent on the framework. It matters because frameworks are opinionated. They are here to solve problems, but the way they solve them might not be exactly what you need on your project. And to be fair, I've seen this many times: in the end you'll burn 95% of your energy trying to make the business fit your framework, instead of building software that fits your business.

Do you want to understand problems or solutions?

And let me add this important point: you should never use a framework if you don't understand all the problems it solves. By building your expertise on top of a framework, you are building your expertise on a solution. And if you do not understand the problems, you are an impostor, because bringing a solution is not enough to be a software professional. You have to understand and challenge the problems.

Writing code isn’t the bottleneck.

Believe it or not, writing code is cheap. The cost is in maintenance and communication. Thus all efforts made to deliver features faster are false friends when they increase the burden of maintenance and/or communication. The real challenge is to keep the code easy to change.

Writing code isn’t the bottleneck.

 

The Design of Everyday Things

Do you know a job that is hard to explain, hard to do and usually underestimated? Of course, I'm talking about the job of designer 😊
What do I know about it? Almost nothing, like anybody after reading a single book. But this book is the bible in the area: The Design of Everyday Things, by Don Norman. Let me share some short feedback about it.

The design of everyday software?

As Scott said, like for any topic, “When you’re outside an area of expertise looking in, it can often seem monolithic and uniform, but when you’re an insider you quickly realize that there are many different subcommunities, each with different specialties and different approaches.”
I tend to oversimplify the job of designers, mainly because I lack knowledge about it. And this book helped me to understand something really important: building software is design, and we have tons of things to learn from designers' practices. And to be fair, people like Johan Martinsson and Alex Bolboaca have already talked and shared a lot about this.
Basically, you can replace "Designer" with "Developer" almost everywhere in the book, and it still makes sense.

It resonates a lot with my daily work

This book is well written and a pleasure to read. It starts by talking about psychology, then about the fallacy of blaming the user before blaming the design. In other words, "Read The Fucking Manual" is not an acceptable answer, because who reads the manual? People who are not able to use your product the first time, and who are still forced to use it. When was the last time you read the user manual of your phone, or of any online application you're using?
The idea that good design can remove the very risk of error is absolutely powerful, both in design and in software.
The end of the book talks about the (hard) collaboration with marketing to build a product, and I'm sure it will remind you of many anecdotes as well!

Go read it

This book is a reference in the design world, but it should be just as much a reference in the software world. If you haven't already, add it to your list of books to read!