How do you teach DDD?

The more I work with Domain Driven Design in my daily job, the more I realize how hard it is to teach. What’s the matter here? Why is DDD so often misunderstood? Of course, we can simply consider that it’s a complex thing, hence requiring lots of knowledge and practice to master. But let’s be more constructive and try to find the root problem in the domain of DDD teaching.

As always, we need to agree on the terms we’ll use throughout this post.

Reminder: DDD

DDD is about tackling the complexity of the domain in the heart of software. It means that, in an ideal world, your software should be exactly as complex as the problem it solves, and no more. Moreover, the problem you solve should be the most valuable one possible for your company. DDD provides patterns to address these points (spoiler alert: it’s not an easy task). Implementing DDD is all about analyzing dependencies.

Reminder: Tactical patterns

Tactical patterns are the most concrete part for us developers, because they are about code! They are a set of technical tools to concretely apply a DDD approach where it matters the most. These patterns evolve quickly, at the same speed as good practices. A well-known example is the repository pattern. Tactical patterns help us know how to build each part of a business.
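
To make the repository pattern concrete, here is a minimal sketch in Python (all names are illustrative, not from any particular codebase): the domain code manipulates a collection-like interface, while the storage details stay hidden behind an implementation.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class Order:
    order_id: str
    lines: list = field(default_factory=list)

class OrderRepository(ABC):
    """Collection-like interface the domain talks to."""
    @abstractmethod
    def add(self, order: Order) -> None: ...
    @abstractmethod
    def by_id(self, order_id: str) -> Order: ...

class InMemoryOrderRepository(OrderRepository):
    """One possible implementation; a SQL-backed one could
    replace it without touching the domain code."""
    def __init__(self):
        self._orders = {}
    def add(self, order: Order) -> None:
        self._orders[order.order_id] = order
    def by_id(self, order_id: str) -> Order:
        return self._orders[order_id]
```

The point is not the in-memory dictionary: it is that the domain only ever sees `OrderRepository`, so the storage technology can evolve independently.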


Reminder: Strategic patterns

Strategic patterns are more abstract. They allow us to categorize the domain (the business) into subdomains, in order to know how to build application contexts around them. Each context has a specific common business language (the ubiquitous language) and uses precise ways to communicate. Does a context impose things on another context? Do they share a kernel? How do they communicate? All of this can be captured using strategic patterns. They help us know what the different parts of the business are, and how they interact.


The DDD usual problems

Since I started studying DDD in 2012, I have met two recurring problems.
The first is that newcomers (including me) do not know where to start. They can feel DDD is huge, but the reference books are intimidating. The community of smart people using words you have never heard before seems nice, but you feel so stupid around them that it’s hard to stay.
The second is that most people (yep, that was me again) will not grasp the duality of strategic versus tactical patterns at first glance. Worse, they may stay focused on the tactical patterns because they seem more concrete. We already know some of them, like factories or layered architecture. The consequence is that we tend to overlook the strategic patterns.

Root cause analysis

Following the principles of good design, how can we improve DDD so that these mistakes are harder for newcomers to make? Let’s use my (almost) secret silver bullet for that.
In my domain of DDD teaching, I believe there are two subdomains: one is about tactics, the other is about strategies.
An efficient implementation for teaching DDD would map one bounded context to strategic patterns, and one context to tactical patterns. We have several clues to justify this implementation.
First clue: different languages. Each of the two contexts has its own ubiquitous language.
Second clue: different life cycles. Tactical patterns evolve quickly. Most of the patterns from the blue book can be replaced by more recent ones like CQRS/ES or hexagonal architecture. Strategic patterns are much more durable.
Third clue: the level of abstraction is not the same in both contexts.
Last clue: the strategic context is much more important. It’s my core domain, the core value I want to share when teaching DDD. There is a clear dependency between my contexts. Tactical patterns are important, but bring much less value if I teach them alone.
I now have part of the answer to why DDD is hard to teach: it mixes two very different contexts. Both are useful; one is more central than the other, but also more abstract, hence harder to understand.

Solution attempt

Eric was really ambitious in his attempt to improve the industry by providing both strategic and tactical views. I think the growing popularity of DDD and the emergence of major DDD conferences across the world are proof that he succeeded. I also understand his desire to provide something that is not only abstract, to avoid the architect’s ivory tower syndrome.
Still, strategic and tactical patterns are really different beasts. These two faces of DDD make it harder to understand.
How can we manage this complexity? It depends on the context (did you see that coming?). But in the domain of teaching DDD to newcomers, I would emphasize the strategic patterns, mainly the bounded context, because it is the pattern that can lead to all the other patterns. Then I would explain that all the industry-standard good practices (continuous integration, segregation of responsibilities and so on) are mandatory to implement the core domain, and that DDD also includes them under the name of tactical patterns.
Trying to go the other way around (teaching the tactical part as an introduction to DDD) increases the risk of missing the core concepts.

As always, it’s not a recipe, but it may be a way to facilitate the understanding of DDD.


It’s all about fast feedback

I’m a big fan of James Coplien.
I appreciate his controversial point of view about Unit Testing, even if I don’t agree with him. Recently he tweeted about this article explaining why integration tests are much better than unit tests.
Let’s explore it together.

Unit test problem

“You write code and then run the unit tests, only to have them fail. Upon closer inspection, you realize that you added a collaborator to the production code but forgot to configure a mock object for it in the unit tests.”

In other words, he points out that after a refactoring, some tests will fail even though there is no behaviour change.
But it really depends on how you write your unit tests. When practicing TDD, I test a logical behaviour. Several classes can emerge; I don’t necessarily have one test class per production class. And when coupled with DDD practices, I can write unit tests without mocking anything, because I test business logic. Also, when I strive to apply SRP, I have a few functions per interface, meaning one test won’t fail just because I changed an interface somewhere else.
That being said, I agree that some refactorings might change some contracts, which requires changing both the production and the test code. My point is: if it is complicated, it is not because of the unit tests.
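
Here is a small sketch of what I mean by testing a logical behaviour without mocks (the discount rule itself is a made-up example): the code under test is pure business logic, so no collaborator needs to be stubbed.

```python
# Behaviour-focused unit tests, no mocks needed: the rule under
# test is pure business logic. The discount rule is invented.

def apply_discount(total: float, loyalty_years: int) -> float:
    """5% off per year of loyalty, capped at 20%."""
    rate = min(0.05 * loyalty_years, 0.20)
    return round(total * (1 - rate), 2)

def test_discount_is_capped_at_twenty_percent():
    assert apply_discount(100.0, 10) == 80.0

def test_new_customer_pays_full_price():
    assert apply_discount(100.0, 0) == 100.0
```

A refactoring of how `apply_discount` is implemented internally leaves these tests green, because they only pin down the behaviour.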

System tests solution

“While I was enslaved by unit tests, I couldn’t help but notice that the system tests treated me with more respect. They didn’t care about implementation details. As long as behavior remained intact, they didn’t complain.”
(System tests for the author are almost end to end and include database)

Again, I agree, if it’s a technical refactoring, the tests that use only production code without mocking may be less brittle.
But I also notice that, when a system test goes red after an actual regression, it takes me a while to find out what the actual problem is. Also, when I have a bug in production, it is harder to reproduce in my system tests, which encourages me to fix the code without changing the system tests.
And finally, when there is a business change, it hurts much more to change my system tests. I need a few hours just to write the test covering the new behaviour. I have to initialize half of the application just to check 2 lines of code.


The testing pyramid is a myth!

“If you’re like me, you were trained to believe in and embrace the testing pyramid.”…”But my experience revealed a different picture altogether. I found that not only were unit tests extremely brittle due to their coupling to volatile implementation details, but they also formed the wider base of the regression pyramid. In other words, they were a pain to maintain, and developers were encouraged to write as many of them as possible.”

I definitely disagree.
I was not trained to believe in the testing pyramid. I was trained to believe in “get your shit done, faster please”. And using this approach I believed, like the author and many other developers, that system tests have a better ROI.

It’s all about Fast Feedback

So I tried, and it doesn’t work for me because of the long feedback loop (more than a few minutes).
Feedback time is everything when you run your tests hundreds of times per day, on your local machine. And running your tests hundreds of times per day is really important to practice TDD.
Running the tests has to be easy and fast, or it becomes a pain. And when it’s painful, we lose discipline. We start checking Twitter while the tests are running, and that’s the beginning of the end of our productivity.

Contextual testing

Maybe there are contexts where system tests have a better ROI than unit tests. But in my context of complex backend applications, with lots of business requirements changing every day, it is not the case.
And I’m not speaking theoretically here. I have been involved in six projects since I started my career. One had no tests at all, two used the inverse pyramid, and three used the classic Mike Cohn pyramid.
No need to talk about the failure of the project without tests.
The two that used the inverse pyramid had common characteristics: they were really painful to maintain after a few months, and we needed one or two developers almost full-time to keep the system tests working. Not to mention the time we lost running the tests before pushing our code, meaning that we often pushed without testing and broke the build.

The projects I enjoyed the most as a developer, and my customers as well I believe, were those where we had a solid unit testing strategy following the classic Mike Cohn pyramid. We were able to run thousands of unit tests in a few seconds, and keep a few integration tests running only on the server.

To get results in the long run, don’t give up on unit tests, at least if you are working in the same context as me.


The coding Iceberg

Recently, I read a few posts about AIs, including Uncle Bob’s post, in response to Grady Booch (about this tweet).
I think the fact that some people believe AIs might replace developers leads to an interesting question: what is programming?

What is programming?

As Bob reminds us, Alan Turing defined programming as a sequence of simple statements of the following form:

Given that the system is in state s1,
When event E occurs,
Then set the system to state s2 and invoke behavior B

We express some needs in terms of state transitions to be executed by a machine; in other words, we build state machines in order to satisfy precise specifications.
This explanation makes it crystal clear that the complexity increases with the number of states in the system.
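
A minimal sketch of this idea (the order lifecycle is hypothetical): each Given/When/Then statement becomes one entry in a transition table, and the complexity grows with the number of entries the specification needs.

```python
# Each Given/When/Then statement maps to one (state, event) -> state entry.
TRANSITIONS = {
    ("draft", "submit"): "pending",   # Given draft, When submit, Then pending
    ("pending", "pay"): "paid",
    ("pending", "cancel"): "cancelled",
}

def next_state(state: str, event: str) -> str:
    """Invoke the transition, or fail if the specification says nothing."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")
```

The hard part of programming is not typing this table in; it is discovering which entries the business actually needs, including the ones nobody wrote down.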

The perception

Lots of people consider coding to be a translation into a given language. You can translate a text from French to English, so you should be able to translate specifications from human-readable to computer-understandable.
Of course, this is only the emerged part of the iceberg. The translation from human-readable code to machine-understandable code is already automated: it’s called compilation.

Developers must be Domain Driven

Under the water, developers must express business needs in an unambiguous way, which is the hard part. Coding would be a translation only if the requirements were already a precise state machine, i.e. an exhaustive suite of Given/When/Then statements.

As the specifications are never precise enough, we developers must complete them. We have to challenge existing specifications with our understanding of computers, and we have to fill in the missing parts with our understanding of the domain.
Thus, considering developers as simple translators who do not need to understand what they are writing about is a sure recipe for failure.

The future

Believing that coders can be replaced by AIs is believing that coders do not need to understand what they are writing about.

AIs might help developers do a better job though, if they are designed to optimize the automatic parts, like compilation.


The over-engineering theory

I have never been comfortable with the idea of a “complex domain”.

“Use DDD only in a complex domain” is a mantra used by DDD experts to avoid the Silver Bullet Syndrome. It is really convenient for avoiding uncomfortable discussions.

Why do I mind?

We’d like to know what a complex domain is, in order to know when to use DDD.

From a strategic perspective, I already argued that we could always use it. The remaining question is: when should I use DDD from a tactical perspective? When should I implement hexagonal architecture, CQRS/ES and other fancy stuff like that?

Not in a CRUD software?

Whatever implementation we choose, software is a business abstraction. When we choose a CRUD implementation, we bet that the future software will have few features. We bet we won’t need many abstractions. The cognitive load due to the domain will stay low, implying we can mix business logic with technical terms without building unmanageable software.

In my experience, lots of applications begin as simple CRUD, but they quickly evolve into something more complex and more useful for the business. Unfortunately, we keep coding them as simple CRUD, which results in a massive mess. Certainly not on purpose: adding just a bit of complexity day after day makes it hard to grasp the whole complexity of the solution. We are the boiling frog.

The over-engineering theory

The challenge is to find at which point our simple CRUD application becomes a more complex line of business application.

My theory is that we consider tactical DDD patterns to be over-engineering all the time, because we are unable to feel the complexity of the software we are building day after day. Worse, we think that the knowledge we accumulate about this complexity makes us valuable, as the dungeon master. We are deeply biased when assessing our own work.

But what happens when we move to another project? The urge to rewrite the whole software from scratch. Push the frog into hot water, and it won’t stay for long.


Looking for the inflection point

We’d like to find when it becomes interesting to implement tactical DDD patterns, despite their cost.

Part of the solution may be to check how long a newcomer needs to become productive. When it is too long, there is room for improvement, and tactical patterns can help decrease the code complexity.
Another solution may be to ask for a few audits by different DDD experts, but that could be more expensive.
A simpler solution may be to assume that the software will become more complex. It implies we should look at which tactical pattern can fit our needs, day after day.

Remember there is a world of practices in tactical DDD patterns. Using all of them all the time is not desirable. Picking the right one at the right time is the actual difficulty.

When should I use tactical DDD patterns?

I do not have a definitive answer for that, just an empirical observation: I’ve seen countless software implemented in a CRUD way that would be greatly improved by a DDD tactical approach. I’ve seen no software implemented with a tactical DDD approach that would be greatly improved by a CRUD implementation.

Just saying.


“If you think good design is expensive, you should look at the cost of bad design.” -Dr. Ralf Speth, CEO Jaguar 




It is not your domain

From a technical perspective, I would argue that a DDD project is nothing but a clear and protected domain.
Working on lots of legacy code though, I observe common mistakes in identifying what is inside the domain, and what is outside.


Your data model is not your domain model

It is a really common mistake to use a data model as a domain model. I think ORMs and CRUD have their share of responsibility in this misconception. A data model is a representation of the model optimized for data storage.

Building objects in code from this model using an ORM doesn’t make it a domain model. It is still a data model, still optimized for data storage. It is no longer SQL tables, but nothing more.

The data model has the responsibility of storing data, whereas the domain model has the responsibility of handling business logic. An application can be considered CRUD only if there is no business logic around the data model. Even in this (rare) case, your data model is not your domain model. It just means that, as no business logic is involved, we don’t need any abstraction to manage it, and thus we have no domain model.
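
A small illustration of the difference, with invented names: the data model is a dumb shape for storage, while the domain model carries the business rule.

```python
from dataclasses import dataclass

@dataclass
class AccountRow:
    """Data model: mirrors the storage, no behaviour at all."""
    account_id: int
    balance_cents: int

class Account:
    """Domain model: exposes business behaviour, hides its state."""
    def __init__(self, balance_cents: int):
        self._balance = balance_cents

    def withdraw(self, amount_cents: int) -> None:
        # The business rule lives here, not scattered around the rows.
        if amount_cents > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount_cents

    @property
    def balance_cents(self) -> int:
        return self._balance
```

Mapping `AccountRow` through an ORM would not turn it into `Account`: only the second class protects the business invariant.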


Your view model is not your domain model

Another common mistake is to add just one layer between the data model and the view. This layer has different names (Presenter, Controller, ViewModel… whatever) in different patterns and languages, but the idea is the same: abstracting the view using code.

The mistake is believing this is the place to add business logic. This layer already has the responsibility of abstracting the view; we don’t want to add any other responsibility to it.


Your data model should not notify your view

I go crazy when I find a model object with the responsibility of notifying the view when some data changes (often using databinding). It literally means that this data or domain object also has the responsibility of managing the way data is displayed.

As usual, it is all about separation of concerns.

Your domain model should depend on (almost) nothing

The domain model is a set of classes or functions, grouped in a domain layer, containing all the business logic. This layer is absolutely free from the infrastructure. It should be a pure abstraction of the business.

A domain model is still valid if we change the way we render the software. A domain model is still valid if we change the data storage. A domain model is still valid if we decide to stop using our favourite Framework.

A domain model depends on one thing and one thing only: the business.







Abstracting the reality

Let’s define the reality as the consequence of all previous Facts.

Nobody knows every fact from the past. And when we share some facts, we don’t give them the same importance. We build our reality based on the facts we believe we know, and give them importance based on what we think matters.

In other words, there are several realities. A reality can be based on some false facts; still, it is a reality for someone. It is not to be confused with the concept of truth: there is only one truth, the problem is that nobody knows it (hint: most people suppose their reality is the truth).

Building a shared abstract reality (i.e. software) is exceptionally hard.

Spaghetti Realities

The point of DDD is to avoid mixing those realities. It takes lots of time and analysis to understand the realities of a business; it is an iterative task. Domain experts share a reality because they share some business processes. For example, someone from the marketing team will understand another colleague from marketing. But she won’t always understand a colleague from the shipping team. They face different challenges, they use different languages: they work in different realities.

Strategic patterns from DDD help us build context-specific solutions, in order to build software matching only one reality, avoiding an unmanageable level of complexity in the shared abstraction.

How to choose a reality

Essential complexity is the solution to a problem in its purest form. Accidental complexity is when the solution we develop is more complicated than required by the problem.

In the excellent paper Out of the Tar Pit, Ben Moseley and Peter Marks explain how we bring technical accidental complexity into our software, especially with state, control (the order in which things happen), and volume (the size in terms of lines of code).

I think we often miss the opportunity to look for accidental complexity in the domain as well. We suppose that the problem is already well understood, whereas most of the time there is room for improvement.

How to implement the reality

If we agree that the reality is the consequence of all previous Facts, it makes sense to look for an implementation where these facts are represented. Such an implementation would be easier to translate into the reality, and vice versa. It helps describe the essence of software: an automated decision maker based on past facts. This implementation is, in itself, a reality.

The user sends a wish to the software (“I’d like to do that…”), the software makes a decision based on its own reality (i.e. the facts it knows), and it sends feedback data to the user to help her find her next wish.

Just replace the word Fact with Event, Wish with Command, and User Feedback with Query to find an existing implementation of these concepts.
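
A tiny sketch of this decision maker, using a hypothetical bank account (all names are invented): the command is the wish, the decision produces new facts (events), and the query derives feedback from the facts.

```python
# decide() turns a wish (command) plus past facts (events) into new
# facts; query_balance() derives the user feedback from the facts.

def decide(history: list, command: dict) -> list:
    balance = sum(e.get("amount", 0) for e in history)
    if command["type"] == "withdraw":
        if command["amount"] > balance:
            return [{"type": "WithdrawalRefused"}]
        return [{"type": "MoneyWithdrawn", "amount": -command["amount"]}]
    return [{"type": "MoneyDeposited", "amount": command["amount"]}]

def query_balance(history: list) -> int:
    return sum(e.get("amount", 0) for e in history)
```

Note that `decide` never mutates anything: it only looks at past facts and emits new ones, which is what makes the reality of the system explicit.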



Much of the focus in our industry is on how we can write code faster. We want super-productive IDEs, high-level languages and huge frameworks to protect us from the hard reality of building software.

In my opinion, it’s more important to figure out how we can write maintainable code, by focusing on readability.



Lots of frameworks promise faster development by “getting rid of” the plumbing, this dirty stuff we don’t want to deal with.

Frameworks might not be too bad if they didn’t hurt readability. But as almost all of them do, the long-term maintenance is a nightmare, because we have a new dependency. It is the classic case of spending 90% of our time fighting the framework for the 10% of things that don’t match our use case.

By trying to write an application faster, we may lose our freedom to update our software when we want. Worse, we are constrained by naming or architecture conventions, making our code less readable.


Property Dependency Injection

The only argument I hear to justify dependency injection by property is: it is easier to write. And it’s a really bad argument.

It is much more valuable to be able to easily understand all the dependencies of an object. One of the best solutions in an OO language is dependency injection by constructor. It allows us to see quickly what the responsibilities of a given class are.

It’s really irritating to find out at runtime that I’m missing a dependency because it is injected via a property, or worse, directly by calling a static IoC container in an initialize function.
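
A minimal contrast in Python (all names invented): with constructor injection the dependency is explicit, and a missing one fails at construction time instead of deep inside a runtime call.

```python
class EmailSender:
    """Illustrative collaborator; records sent mail so we can observe it."""
    def __init__(self):
        self.sent = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

class WelcomeService:
    """Constructor injection: every dependency is visible in one place."""
    def __init__(self, sender: EmailSender):
        # Forgetting the sender fails right here, at construction,
        # not later when welcome() happens to be called.
        self._sender = sender

    def welcome(self, user_email: str) -> None:
        self._sender.send(user_email, "Welcome!")
```

With property injection, `WelcomeService()` would construct happily and only blow up (or silently do nothing) the first time `welcome` runs.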


Choose freedom

Instead of losing readability with a framework or other antipatterns, we can choose freedom. We must find lightweight libraries that match our needs, and not be afraid to write code. It will be more maintainable than a framework.

It is where a DDD approach makes sense: by identifying the core domain, we know which part of the system must be as pure as possible. But in a more generic domain, using a framework may be beneficial. As usual, context is king.

The core domain is the main value of our software: do we prefer quick writing, or easy reading?


The wrong fight

It’s appealing to believe that we should write software faster in order to improve the business we work for. I believe it’s a fundamental mistake. To be faster, we need to make the software easy to evolve. And it evolves faster if we can quickly understand what its purpose is.

How many hours per day do we write code?
How many hours per day do we read code?
So why are we still writing new frameworks instead of focusing on clean code and readability?



Thanks to Samuel Pecoul for suggesting improvements.


A few myths about CQRS

I’m happy to see that event driven architecture rises in popularity, I already explained why I believe it makes sense. But when a topic gets more traction, we hear more and more weird things. Let me try to clarify a few myths about Command and Query Responsibility Segregation (CQRS) here.


CQRS is not Events Sourcing (which is not Event Storming)

Some people understand CQRS/ES as a single architecture paradigm. It’s actually two different paradigms, even if they were introduced together.

CQRS is about the segregation of commands and queries in a system. It is nothing but the Single Responsibility Principle (SRP) applied to separating decisions from effects. It comes from CQS.

Event Sourcing (ES) is about storing immutable events as the source of truth to derive any state of the whole system at any time.

Both paradigms work well together (Greg Young calls them “symbiotic”). But we can do CQRS without ES, and we can do ES without CQRS.
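
To show that CQRS stands on its own, here is a minimal sketch (illustrative names, not from any reference implementation): the command and query sides are separated in code, yet they share one in-memory store and stay synchronous. No Event Sourcing, no eventual consistency involved.

```python
class TodoCommandHandler:
    """Write side: takes decisions and mutates state."""
    def __init__(self, store: dict, projection: dict):
        self._store = store
        self._projection = projection

    def add_item(self, item_id: str, title: str) -> None:
        self._store[item_id] = {"title": title, "done": False}
        self._projection[item_id] = title  # projection updated synchronously

class TodoQueryHandler:
    """Read side: only reads a projection optimized for display."""
    def __init__(self, projection: dict):
        self._projection = projection

    def open_titles(self) -> list:
        return sorted(self._projection.values())
```

The segregation is purely logical here; swapping the projection for a SQL view, a second database, or an event-sourced read model would be a later technical choice.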

And just because I’ve heard the confusion a few times: Event Storming is not Event Sourcing. It is not an architecture paradigm. Event Storming is a way to crunch knowledge with the business in an event-driven approach, but it is not technical at all. It is a convention for talking with other people, using post-its to represent business events.


CQRS is not Eventual Consistency

Eventual Consistency is when the read model is not synchronously updated with the write model.

It can be challenging, but it’s not mandatory in a CQRS system. We can choose to keep things synchronous when it is good enough, and go for an asynchronous implementation when we meet performance issues.

Thus Eventual Consistency is about performance, not about CQRS.


CQRS is not NoSQL and/or two databases

When I asked Greg Young what CQRS is, he asked me in return if I knew the concept of a SQL view. A view is nothing but a projection of the data, optimized for reading. That’s the query part of the system.

Sure, we can choose to store the query part in a separate database. Yes, it could make sense to store it in a NoSQL database, to reduce the impedance mismatch.

But it is CQRS as soon as there is a logical separation between commands and queries. The rest is just a technical choice.


CQRS is freedom

CQRS is, above all, freedom in architecture.
It lets us choose whether we need eventual consistency, NoSQL, synchronous or asynchronous validation, and so on. It is not an extremely complex topic that is only useful for Amazon or Google.

Of course, CQRS is not a silver bullet either, but so far I have seen lots of missed opportunities to use it in several domains. I still haven’t seen a software using CQRS where they should have gone for a simple CRUD system instead.


PS: I just saw that Greg has written on the same subject, in a more concise and direct way, 3 years ago.

PS2: If you want to know more, I’ll be talking about the synergy between CQRS, ES and DDD, at BDXIO next October the 21st



Micro-service and bounded context clarification

“One micro-service should cover one bounded context”, asserts Vaughn Vernon.

It led to an interesting Twitter discussion with Greg Young, Romeu Moura and Mathias Verraes. Greg and Romeu disagree. I say it depends on the context.

To know if one can cover the other, we must know what a micro-service is and what a bounded context is. I find their definitions fuzzy, and I think this discussion is proof of that. I’ll try to clarify my point in this article.

Disclaimer: the following definitions are based on my current understanding of DDD strategic patterns, I don’t claim they are the only true definitions.


Micro-service definition

We believe we know what a micro-service is, until we take a look at the Wikipedia definition.

“In contrast to SOA, micro-services gives an answer to the question of how big a service should be and how they should communicate with each other. In micro-services architecture, services should be small and the protocols should be lightweight.”

This definition is symptomatic of our industry: we explain something by something else we don’t understand either. Do we share a common understanding of what “small” and “lightweight” mean? I don’t think so.

A better explanation is the second property in the details of the Wikipedia page: “Services are organized around capabilities, e.g., user interface front-end, recommendation, logistics, billing, etc.”

To be fair, this property has lots of symmetries with how we define a bounded context.

Domain definition and problem space

To understand what a bounded context is, we first need to define what a domain is. It is a theoretical representation of a business, in the problem space. It can be divided into sub-domains.

For example, in the well-known business of Amazon, the core domain is about selling stuff online, and there are different, more or less generic sub-domains like shipping, billing, advertising and so on.

We are in the problem space because we have no idea (yet) about how we should implement Amazon’s business; it is just a theoretical description of what they do.

DDD patterns that are applicable to the problem space (figure 1-4 page 10 of PPP of DDD)

Bounded context definition and solution space

A bounded context is a projection into the solution space, defining boundaries in the system implementing the domain. Bounded contexts are important because they allow us to define a ubiquitous language, valid within their boundaries. A product in a billing bounded context does not have the same meaning as in a shipping bounded context.

When it is badly done, we obtain a big ball of mud, i.e. a huge system without boundaries, where an update in the billing part may cause side effects in the shipping part.

We are in the solution space because it is an actual description of how we implement the domain. A bounded context does not necessarily match exactly one sub-domain. But having too many bounded contexts overlapping different sub-domains is definitely a design smell.

DDD patterns that are applicable to the solution space (figure 1-5, page 10 of PPP of DDD)

One micro-service should cover one bounded context?

Now that we have defined micro-service and bounded context, we can try to decide whether one micro-service should cover one bounded context.

And of course, we still cannot decide, because we still lack the (business) context. In some business contexts, a micro-service might fit a bounded context. In others, several micro-services will live in one bounded context. The only thing we can suppose is that a micro-service overlapping different bounded contexts has something wrong with it.

As usual in any DDD discussion, context is king.


For more thoughts on bounded contexts and micro-services, there is this excellent podcast by Eric Evans.





Time-Aware design

I have already advocated how an Event Driven Architecture allows our code to be more aligned with our business. Mike reaches the same conclusion, focusing on Practical Event Sourcing.
But why is such a system, by design, more aligned with the business?

Because it tells a story

Good code must tell a story.
It must tell a story to the business experts to help them make their next decision. It must tell a story to the developers to help them grasp the flow of the system. It must tell a story to the users, to help them use the software.
A story is nothing but a succession of important Events.


Because it stores root causes instead of consequences

Storing the root causes of a decision is more interesting than storing its consequences. It’s always possible to derive the current state of our system by re-applying the root cause and subsequent Events.
But if only the current state is known, it is harder to find out what happened in the past.
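
A sketch of deriving state by replaying events (the stock history is made up): the current state is just a fold over the stored facts.

```python
from functools import reduce

# Made-up stock history: the facts, in order of occurrence.
events = [
    {"type": "ItemAdded", "qty": 3},
    {"type": "ItemAdded", "qty": 2},
    {"type": "ItemRemoved", "qty": 1},
]

def apply(stock: int, event: dict) -> int:
    """Apply one event to the current state."""
    if event["type"] == "ItemAdded":
        return stock + event["qty"]
    return stock - event["qty"]

current_stock = reduce(apply, events, 0)  # 3 + 2 - 1 = 4
```

Going the other way is impossible: from the single number 4 we can no longer tell whether an item was ever removed, which is exactly the information a current-state-only storage throws away.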


Because it is a « free » logging system

Why do we use logging at runtime? Because we try to guess the story.
If by design we track every important Event, the logging system is « free », in the sense that no extra infrastructure is required to support logging.
Events can also be used retroactively by the business to look for interesting correlations in the data.


Event Driven limits

There is no free lunch though.
Event Driven systems have very low coupling, which improves maintainability but hurts navigability. When interactions between classes happen through Events, it is harder to follow the flow of a use case.
It will be hard to build a team of developers used to this design, as it is not mainstream. The team will need training to perform the required mindset shift.
Also, it is often implemented with NoSQL storage, a less mature technology than SQL storage.


Event Driven Architecture introduces Time in design

We know we lack the representation of time in software, despite its fundamental impact. Even if it has limits, Event Driven Architecture is a way to fill this gap by introducing time at the heart of the software.
It facilitates the alignment of our code with our business, and with the runtime.
It allows less friction between the different dimensions.
It is a Time-Aware design.



Thanks to Gautier for suggesting to replace TimeFull with Time-Aware.


