Domain-Driven Design (verraes.net)
247 points by ingve on Sept 27, 2021 | 191 comments


Seeing a lot of hate for DDD here so let me offer an alternative point of view from someone who advocated for DDD on my team.

When I joined, my team had been building the backend for the first version of our app for about 4 months. I would describe the state of the code base when I joined as an 'Anemic Domain Model', as defined by Martin Fowler:

- There was a 'domain model' in the loosest possible sense
- Each model type was just a POJO with raw getters/setters for each field
- Almost all fields were primitives (mostly strings, with a few ints/doubles/dates)
- All validation code for these types lived in application code and was fragmented throughout the code base
- Internal DB identifiers were fully exposed in the code model
- Internal service types were liberally mixed with external service types
- There was no notion of aggregate roots; every entity was just accessed ad hoc

It was a highly unsustainable approach, and one of the first things I did was attempt to implement strategic DDD in the areas that were the most painful. This included:

- Adopting rich value objects to represent domain concepts instead of raw strings
- Enforcing business invariants inside the model classes
- Enriching the domain entities with methods that matched business behavior and performed validation
- Creating repositories that shifted much of the persistence detail out of the application code
- Defining aggregates based on our required access patterns, which simplified our data access
- Introducing bounded contexts for our internal domain and mapping external service types to our own internal representations

The result of all this was the creation of a core 'domain model' that captured the business behavior expected of our service and, most importantly, ended up significantly simplifying the rest of the application code. If DDD makes your app code more complex, you're doing something wrong.
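
For what it's worth, here is a minimal, hypothetical sketch in Java (invented names, not the actual codebase) of the kind of shift described above: a value object and an entity that enforce their own invariants instead of exposing raw getters/setters.

    import java.math.BigDecimal;
    import java.util.Currency;
    import java.util.Objects;

    // A value object: immutable, validated at construction, no raw setters.
    final class Money {
        private final BigDecimal amount;
        private final Currency currency;

        Money(BigDecimal amount, Currency currency) {
            this.amount = Objects.requireNonNull(amount, "amount is required");
            this.currency = Objects.requireNonNull(currency, "currency is required");
        }

        Money add(Money other) {
            if (!currency.equals(other.currency)) {
                throw new IllegalArgumentException("cannot add amounts in different currencies");
            }
            return new Money(amount.add(other.amount), currency);
        }
    }

    // An entity exposing behavior instead of bare setters, so the business
    // rule lives inside the model rather than in scattered application code.
    final class OrderLineItem {
        private final String sku;
        private int quantity;

        OrderLineItem(String sku, int quantity) {
            if (quantity <= 0) throw new IllegalArgumentException("quantity must be positive");
            this.sku = Objects.requireNonNull(sku);
            this.quantity = quantity;
        }

        void increaseQuantity(int by) {
            if (by <= 0) throw new IllegalArgumentException("increase must be positive");
            this.quantity += by;
        }
    }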


Perhaps the problem is inventing a new language for good application design. Uncharitably, this explanation sounds as if you've taken a description of modularity, type-safety, and maintainability and run it through a randomizing jargon thesaurus.

If an experienced developer can barely understand you, there's a communication problem.


Well, I understood it perfectly, as in I read it as "modularity, type-safety, and maintainability"... then again, I am familiar with DDD and thus understand the ubiquitous language from the original comment... which is kind of one of the main points of DDD ;)


This sounds like another data point, of many in this thread, suggesting that DDD as a philosophy works very well, while enforcing the technical techniques advocated in the book is hit-or-miss.

This makes intuitive sense to me: it’s much easier to define a principle that applies in many situations but much harder to define concrete technical implementations that do.

In your situation it sounds like the multiplier you applied was in pushing your team towards a rich domain. That’s a much more sensible approach than, say, using CQRS everywhere.


I attended a few DDD meetups a few years ago, and it never quite made sense why you would engage in this kind of architecture nowadays. The room was filled with experienced Java developers with deep Enterprise Software(tm) knowledge.

Then, I started working as an AI Software Engineer (a mix of software engineer + devops + data scientist), and it all clicked. DDD is a wonderful design pattern for anything related to Data Science, AI, or ML. Why? Because 90% of your problems are retroactively making sense of, organizing, sorting, filtering, and aggregating all the data you got from your favorite Database/Lake/Warehouse. DDD lets you have a unified language, invariant definitions, and shared expectations between your existing business challenges and the analytics you are running on them. It's very good for validating assumptions across a dataset, for example: a sales amount can never be < 0? Let's check that... Oh well, you forgot about returns, so now you can define them and be explicit about when to include or exclude them.


And the most important thing is that you have clear benefits in communication using a common domain language.


I need to second this.

The ubiquitous language alone is highly valuable.

Even if you dislike the ideas about software architecture that DDD introduces, having a clear, broadly shared, living dictionary helps a lot! No more EntityTreeSectionBatchBuilder, but CompanyDivisionBuilder. No more 'yeah, the ActorModel is really what should have been the User, but since User has legacy in the Authentication part (which also does some authorization; it's on the backlog), we needed another model name'.

But also, learning about your domain and writing that down, outside of (or before) patterns, frameworks, and libraries starting to enforce their own domain, is very liberating and valuable in the long run.


> invariant definition

What's that? I haven't encountered that terminology before.


"Sales quantity are always > 0"; "Ticket number are unique"; "Returns are always linked to an existing ticket"

You know, stuff you usually put in assert statements. Except in real life you don't have strict application of these laws in the data. So you need a way to find out what percentage of your data is "misshapen" and what to do with it. You can't necessarily filter those records out, because they might reflect some business process with relevant information. Your goal is to easily identify these outliers without crashing your processing (that would be stupid).
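
As a minimal sketch (hypothetical record and field names), you can measure how much of the data violates an invariant instead of asserting and crashing on the first violation:

    import java.util.List;

    // One row of sales data; in practice this would come from your warehouse/lake.
    record SaleRecord(String ticketId, double amount) {}

    class InvariantCheck {
        // Share of records violating the "sale amount is always > 0" invariant.
        // Violations are counted and reported rather than filtered out or thrown on,
        // because they may reflect a real business process (e.g. returns).
        static double shareViolating(List<SaleRecord> sales) {
            if (sales.isEmpty()) return 0.0;
            long bad = sales.stream().filter(s -> s.amount() <= 0).count();
            return (double) bad / sales.size();
        }
    }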


Invariant: doesn't vary. Things that always are (or should be) true.


> Software for a complex domain requires all designers (engineers, testers, analysts, …) to have a deep, shared understanding of the domain, guided by domain experts ... That understanding is rooted in language: the domain language should be formalised into a Ubiquitous Language (shared, agreed upon, unambiguous) ...

> DDD is not prescriptive. It doesn’t have rules of how to do it, and is open to new interpretation. It doesn’t prescribe methods, or practices, and even the patterns in the book are meant to be illustrative rather than a final set. ...

> That makes DDD notoriously hard to define.

I don't know anything about DDD, but if you can't concretely describe it and how it's different to how teams naturally work together, how can it be actionable and what stops it becoming another cargo cult when nobody can agree on what it is?


The book that introduced the term, “Domain-Driven Design” by Eric Evans, is thick but excellent.


I recently came across this talk from Scott Wlaschin titled "Domain Modeling Made Functional".

I found it a great practical introduction to DDD.

https://www.youtube.com/watch?v=PLFl95c-IiU


If that talk speaks to you (as it were) I can also highly recommend his book with the same title, https://pragprog.com/titles/swdddf/domain-modeling-made-func...

They are probably the best concrete and actionable introductions to DDD out there, and do a very good job of separating practical DDD from the OOP dogma that it is sometimes tied to. The book uses F# as its language to implement and demonstrate its examples, but most of the concepts are presented in a language-agnostic way.


I find it hard to understand how you'd get the essence of the idea from some of these descriptions. I was lucky enough to be at OOPSLA when Eric's book came out and he had a session on it; I think he explained it pretty well. Not sure if there are good online presentations of his. His book is good, but has a lot of extra, more "technical" stuff dealing with how to implement it.


Agreed, my issue with DDD is that it is based on a bunch of good ideas but very little of it is actually actionable.


I actually think that this perception is what has caused most of the architectural overkill mentioned in this thread. People want something "actionable", so they extract the specific patterns.

The most important part of DDD is the principles. It's a way of looking at the world and at problems, of seeing things first and foremost from the business perspective. How you apply those principles to achieve that perspective must vary dramatically from project to project, which means that the details of action are left to you to decide.


Several of the ideas, like ubiquitous language and bounded contexts, are highly actionable.


DDD always felt like a bad adaptation of the emperor's clothes for programmers to me.


In my 22-year career in software, starting as a developer, I have seen DDD used successfully and appropriately only once. All other attempts were a half-baked and over-engineered mess.

The one time it worked was in a very complex domain: insurance policy administration. It had highly complex business rules and had to maintain invariants. The original developers were all experienced and senior, from a consulting company. I joined as a permanent member of the team before handover.

The problem was that over several years it was hard to get new hires up to speed to maintain and test the complex code. Moreover, even the ubiquitous language starts to change with a new CEO, SMEs retiring, changes in the market, etc. E.g. Member became Customer. Now you either refactor the entire codebase or have the code differ from the new business jargon. Guess which one was chosen? Over time it became hard to refactor. I left the company many years ago, but I am assuming this code still lives.

Then I worked in enterprises where these kinds of core systems were built in some humongous monolith like SAP, maintained by an army of people from WITCH companies. All the digital apps, web portals, etc. were a frontend, API, and caching layer. The DDD complexity was in the bowels of SAP or PEGA or Dynamics, etc.


So this is a very fundamental explanation of DDD, the kind you might learn at university. But the last time I researched DDD there was a very concrete architecture associated with it, and I didn't really understand why. Every DDD article would also introduce CQRS for some reason; it seems they are inextricably connected, at least for web application development. Anyone got a good story on that?


I think the definition of DDD is abstract on purpose. I proposed a one-liner a couple of years ago [0]:

* The essence of DDD: make the implicit explicit (language, boundaries, code) and evolve your model so it matches the domain *

But to be honest, in hindsight I think it was nothing more than a drop in the ocean, and does not clarify a lot unless you are already well versed in DDD; it's like those explanations about monads...

As for the CQRS-as-a-top-level architecture:

There are a lot of "beginner experts" and self-proclaimed thought leaders emerging, as DDD is getting more popular... (~= Agile movement)

CQRS-all-the-things is typically a phase that you go through if you are studying DDD. (I've been there, done that, got the T-shirt.)

In my personal opinion a good heuristic to detect "beginner experts" is that they prefer to use a lot of DDD lingo, and focus on the more technical aspects instead of truly trying to understand the business domain first.

Update:

Added the first paragraph with an attempt for a DDD definition.

[0] https://tojans.me/posts/ddd-in-a-tweet/


Unfortunately there isn't a single definition of DDD accepted by everyone. At its core DDD is about the practice of software design which puts Domain - user language and problems - first. There are no technical considerations.

ES (Event Sourcing) and CQRS are technical patterns people like to use while doing DDD because of various reasons, but they're in no way required to practice DDD.


I've seen architecture presentations where it seemed like event sourcing took DDD hostage, just using it as a vehicle to sell that particular astronaut-ism. Not even a ubiquitous language...


I think part of the reason is that event sourcing needs DDD. So any story or presentation about ES will at least mention DDD.

But DDD, by no means, needs ES.

Another part of the reason, I think, is that many other architectural patterns have taken certain concepts 'hostage'. Take, for example, Rails and MVC. 'Model' in Rails has a clear and distinct meaning, place, and implementation. There is no way to have bounded contexts or isolated domain models in Rails without at least the terminology becoming very confusing. Having aggregate roots in Rails makes all of that even worse.

What I'm trying to paint, here, is that ES naturally embraces DDD, whereas other architectures often conflict, or require effort to fit in DDD. Therefore, you'll probably find many talks about ES or CQRS and DDD, but hardly any about Actor, MVC, etc., and DDD.


Can anyone make explicit some of the "various reasons" ES/CQRS are associated with DDD? This is confusing me.

To those experienced in DDD: You can do DDD without CQRS? And you can do CQRS without it being DDD too? This is not clear to me as an observer.


It's because a popular book on DDD, "Implementing Domain-Driven Design", uses CQRS in its examples. If the book were written today, CQRS would probably be replaced with Serverless computing or something even more trendy.


I'm not advocating for CQRS or any of that, I've personally never used it on a project. But, the reason that architectures are associated with DDD is that DDD is fundamentally a philosophy or an idea. Certain software designs achieve that philosophical ideal better than others.

For example, DDD stresses the idea of an "isolated domain model" - and that has a very practical reason, being that your business rules are already complicated enough so mixing in database transactions, HTTP caching, response serialization, authorization, etc. etc., into your domain logic makes it harder to understand. You want to be able to have a conversation with a customer / domain expert where they explain something and you can adjust the code quickly to meet their needs. That's the ideal goal at least.

So some patterns enable that goal better. For example, this implies that you need some kind of data access layer pattern so that code that cares about the database is separate from code that cares about domain logic. Well, there are a million different ways to get that separation, and they have different tradeoffs, and like everything else, certain patterns become trendy. That's what's going on with CQRS. People feel that it leads to a better expression of the domain model, because creating data and querying it are often radically different from the customer's perspective.
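
As a rough sketch of that separation (hypothetical names, one of many possible shapes), the domain and application code depend only on an interface, while the database-aware implementation lives elsewhere:

    // Domain layer: no SQL, HTTP, or serialization concerns.
    record OrderId(long value) {}

    class Order {
        private final OrderId id;
        private boolean cancelled;

        Order(OrderId id) { this.id = id; }

        void cancel() { cancelled = true; }  // the business rule lives here
        boolean isCancelled() { return cancelled; }
        OrderId id() { return id; }
    }

    interface OrderRepository {
        Order findById(OrderId id);
        void save(Order order);
    }

    // Application service: orchestrates the use case; persistence details stay behind the interface.
    class CancelOrderService {
        private final OrderRepository orders;

        CancelOrderService(OrderRepository orders) { this.orders = orders; }

        void cancel(OrderId id) {
            Order order = orders.findById(id);
            order.cancel();
            orders.save(order);
        }
    }

    // An infrastructure layer would supply the JDBC/ORM-backed OrderRepository implementation.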

Whether or not that goal is achieved, I'm not sure. But that's the reason people are experimenting with patterns like that, as I see it.


DDD is sort of the OOP of OOP, without the clever Design Patterns. -- DDD is the kind of principles most OOP people can learn and apply in the same mechanistic way they apply OOP principles, and that actually does pay off (unlike design patterns, which are really clever but have very little impact in the real world).

CQRS is associated with DDD because it is the kind of smart pattern made out of bits that the other smart patterns frown upon -- like data duplication/denormalization.


If someone wonders about DDD, I'd recommend Lightbend's free courses, e.g. [1]. They don't go too in-depth and you can take all of them (I think five?) within a weekend.

[1] https://academy.lightbend.com/courses/course-v1:lightbend+LR...


I clicked this article to find out what DDD is as I've heard the term and was curious to learn more about it. Unfortunately, this article does very little to enlighten me besides the very basic common sense stuff, like understanding the business domain/keeping in close contact with the people who do.

The above idea is hardly revolutionary anyway, I'm sure many people operated this way DDD or not.

This seems like another one in the endless stream of innovative project management methodologies that promise to make everything better, but whose main purpose is creating lucrative make-work opportunities to the next batch of consultants and evangelists.


> very basic common sense stuff, like understanding the business domain/keeping in close contact with the people who do.

Much of DDD is common sense; what it offers is a way to be explicitly common sense. The important factor, the way I see it, is to explicitly make sure that both developers and stakeholders have a shared mental model of the business process that maps to the domain in question, that they all agree on which 'jargon' terms map to which parts of that model, AND that that model (with the agreed-upon terms) is explicitly represented in the code.

Keeping in close contact with the business domain people is most useful if you actually understand each other. Too often I've seen developers and business people using the same 'every day' word when discussing a problem and only much later do they find out that their obvious understanding of that word was very different.


Nope, still means nothing to me. I'll just accept that how this bucket of words is supposed to form some cohesive methodology or pattern for anything is beyond my understanding. I'm sure some people get paid a lot of $ to implement it tho.


Simple version: before writing a function that reports how many of a certain thing have been "sold", make sure everybody involved agrees on what "sold" means in this specific context and domain. And once you have agreed, make sure that every time a function, variable, or database table talks about things "sold", it's using that definition.

How complex your methodology around this has to be depends entirely on how complex the domain is and how many different definitions of "sold" are used in different parts of that domain.
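
A tiny, hypothetical illustration of pinning down one such definition in code, so that every report uses the same meaning of "sold":

    import java.util.List;

    // The agreed definition for this context: a line counts as "sold"
    // once it has been paid for and has not been returned.
    record SoldLine(String sku, boolean paid, boolean returned) {
        boolean isSold() {
            return paid && !returned;
        }
    }

    class SalesReport {
        static long countSold(List<SoldLine> lines) {
            return lines.stream().filter(SoldLine::isSold).count();
        }
    }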


Ah, ya in our code no one can agree on what sold means so we just trial and error for which one seems to work in each circumstance. Or sometimes we need both so we'll have..

doStuff({soldFromSys1, soldFromSys2, soldType, price, priceType, priceTypeIdKey, customer})

¯\_(ツ)_/¯


What's so novel about this that it deserves its own name and a book written about it?


Because it's incredibly hard to do, and DDD has some ideas to help.


It may seem like common sense, but I have worked with many people who don't understand the importance of the actual software reflecting the domain that is being served. It's funny that DDD is criticized for being "too technical" when the whole point of it is that technical concerns must always serve the domain / customers, and that technology merely enables that.

And, conversely, I have seen many people get extremely excited about solving technical problems that aren't related to the underlying business in any way. That is why the book needed to be written - people are more concerned with the newest database or frontend framework, but have lost sight of the purpose of software: to provide value to people.

So, sure, there are DDD consultants and trainers. They're going to capitalize off of the idea like vultures, as is done with all ideas and movements. But, the core beliefs of DDD seem like a good idea to me.


I didn't read the PDF but it seems a way to formalize the usual analysis of customer requirements. It goes like this:

1. Understand the business of the customer, or any requirement won't make sense.

2. Learn the jargon of the business, or you won't be able to communicate.

3. Use that jargon when naming database tables, software modules, variables, everything, or you'll have to translate between your possibly abstract names and the jargon of the business.

So, apparently nothing new but I should read the PDF (too bad it's not HTML) because there could be new useful ideas about the process.

Edit: oops, the PDF is only that page as PDF. I'll google for it but does anybody have a good link to a detailed explanation of the method?


The post is just this web page. You just need to read the posted link.


DDD strikes me as the software version of Agile sometimes. The ideas and philosophy behind it are good, but it ends up being taken as a silver bullet: if you do this, you will have a good architecture and your software will be well architected. Especially in the .NET world I've seen DDD branded together with CQRS as "Clean Architecture", which in reality turns out to be a mess of layers and separations.


> DDD strikes me the software version of Agile sometimes.

DDD is more about managing your business software needs than making software. If you read "Implementing DDD", a good thing to take from it is that you should focus your efforts on your business's core value-add. That's where you put your best developers, architects, and money. Anything outside this core will get fewer resources and can often be outsourced.

And to evaluate what this core is and what is needed, you need your technical team to speak often with the domain experts, using some common language.

The coding aspect is itself agile, as usually there will be miscommunication at first between your tech team and your experts, giving you a less-than-good result. More communication, and more knowledge shared and understood, will make you think differently about your product and its architecture: that's when you refactor.


Just as the original post fails to mention any stakeholders when it comes to developing the "Ubiquitous Language" (it enumerates "engineers, testers, analysts, …", and I don't think the ellipsis does them any justice), most people forget that DDD is about modelling the real world before you start coding.

My experience is that, unfortunately, the majority of developers are either not experienced enough or not smart enough (these are closely related) to keep abstraction creep at bay and apply only meaningful abstractions. And others are simply not interested enough, and they know they can get an LGTM by following misjudged patterns.

They also jump at the opportunity to use identical terms from the DDD book, and then keep explaining them with more colloquial terms everybody understands: it's like social sciences all over again (sorry social sciences, but it's what it is) where they invent terms so they'd be more "scientific". Why not simply use the terms everybody understands?

I get the argument that a new language allows for consistency, but it still gets misapplied, because it's the same humans doing the work.

I can't think of a better parallel than SQL-vs-noSQL databases: sure, no-ACID makes a bunch of things simpler, but then every developer has to think through all of the same problems ACID DBs solve, and majority will get them wrong (or maybe even everybody would get them wrong _most of the time_).


> it's like social sciences all over again (sorry social sciences, but it's what it is) where they invent terms so they'd be more "scientific". Why not simply use the terms everybody understands?

To be precise, probably. Every domain develops a jargon at some point. It can be used to exclude or to sound smart, but most of the time it's just to be precise. Common terms are too vague/polysemic.

Why do you need words like compiler, linker, dynamic and static types and linter instead of using common words?


I am fine with jargon to suit a purpose. You'd have to explain "compiler" or "linker" with a mouthful, and you are unlikely to be very successful (unless you go to those mathematical models of computers).

My (limited) experience with social sciences (as obligatory courses in Math/CS studies) was that you'd have things like "Someone's Continuum" (I am, thankfully, at a loss for the name of Someone :)) to say that the two extremes of learning are rote learning and learning with understanding (this, by virtue of what "two extremes" means in natural language, implies that there are things in between), all explained in 3-4 pages of dense prose that say, literally, nothing else. There is no precision gained from the introduction of these terms for the most part; mostly the number of things to memorize is increased.

By contrast, a "continuum" in mathematics is given a very precise meaning to indicate that no matter how small your "in between", you can still find something there: you get very particular properties and can easily determine whether something is a continuum or not. You also build up other tools to work on things which are "continuous", like derivatives, differentials, or integrals.

Some of the language of DDD is quite like it, which probably makes sense, since it's more closely related to social sciences (not rigorous science), rather than formal sciences. That's not to slight it, because that's the best we can do with some things, but that means that in any particular setting, you should understand what you are trying to get out of it, and if you are struggling with language, move on to the point of the exercise instead (eg. establishing domains, boundaries and shared language as it relates to your problem).


> majority of developers

Jump to coding or architecting because that's their happy place. That's easy, that's what they know so they think that's what they've been hired for.

Shelling out some dollars to buy an off-the-shelf solution for some ancillary application? No thanks, we think we can do it ourselves and have more control over it. Let's go coding.


The principles listed in the article look like general common-sense design principles. I can hardly imagine anyone designing a system without understanding the problem domain first - and likewise, the problem domain usually drives the underlying models (because what else could they be driven by?). I acknowledge that sometimes the design process strays away from the original domain requirements (especially among less experienced teams), but that usually feels wrong irrespective of the design philosophy. So the definition of DDD seems a bit tautological.

I would appreciate an explanation of what DDD is by counterexample - i.e. what are examples of non-domain driven design process? Especially, examples of successful or mostly successful projects - i.e. where the disconnect from the problem domain does not feel wrong from the start.


Typical oldschool enterprise software that has database tables instead of a decently abstract model, pages instead of services, "it should happen very rarely" instead of invariants, "don't do that" instead of validation, lore instead of design, etc.

What's particularly contrary to DDD principles and common is mixing up different concerns without regard for model integrity (e.g. ad hoc incorrect caching of stale data for performance reasons, messing up all business logic).


We use DDD at the current company I work in and to be honest, I detest it so much that sometimes it makes me wonder if I even want to continue in the programming space (been at it for 20 years).

Don't get me wrong, DDD has meaning and purpose, but some companies are applying it as a badge to be obtained instead of pondering the question, do you really need to rewrite everything following DDD?

In our case, simple CRUD APIs that in "regular programming" might take a couple of 200-line files have turned into unmanageable nightmares in DDD that take at least a couple of days of really intensive investigation to understand, because they have been divided into more than 25 files that hold 3 or 4 lines of code at most, with so many abstraction layers that it's impossible for the best of us to follow in one go.

Now, you could make the argument "You Are Doing It Wrong(tm)" but since I'm just a drone in this specific scheme and there's no wiggle room for anything (the team is quite inflexible on this) I have to follow it to the letter.

Just giving my two cents. Again, not deprecating DDD; it has its purpose, but in my opinion it's for very specific projects.


Tactical/technical DDD patterns should only be used for parts of the code where there is a lot of business agility required, so the behavior of your code changes a lot, and you have a tight feedback loop with your business unit.

Your story sounds like they implemented a "technical DDD top-level architecture" (TM), whatever that may be. (I'd assume layers of abstractions coupled with logic spread all over the place, without any added benefit.)

You see this a lot when people read some stuff about DDD, and they start experimenting with the technical/tactical patterns, because this is the aspect that makes most sense to a technical audience.

In reality the tactical/technical DDD patterns should only be applied in the core part of your business (i.e. the thing that gives you a strategic advantage over your competitors.), because that typically needs to change a lot, so having a common language/model with the business tends to be worth the extra upkeep required when opting for more flexible models.

Identifying what the core part of your business is (most likely it's not authentication, billing, invoicing, content management, ...) is one of the more important (and most difficult) aspects of DDD.


I attended the Domain Driven Design Exchange conference in London years ago. The keynote was by Eric Evans (author of the DDD Blue Book).

He said if he wrote the book again, he'd have put all the patterns as an appendix

He thought people concentrate on applying the patterns rather than seeing DDD as a way to communicate. Between developers but also to the business


Right, communicating to the business is actually the core message of DDD. It should potentially be called “anthropological design” since the “domain experts” are a synonym for your non-technical users of your software (the domain is the business domain, which is whatever your software is trying to do for them). The message is that you have to observe your users in their natural habitat.

Let me put it this way, when Twitter started out, they did not have tweets. They had posts, and the act of posting to Twitter was called twittering. They were not associated with birds (actually more with whales lol). The idea of birds and tweeting actually came later with a third-party client interacting with their API.

Eric Evans in the early aughts now makes a big splash with this outrageous statement, where many of us graybeards would instead say “if it ain't broke don't fix it”: Eric Evans would recommend that the posts table in the database be renamed to the tweets table. Version 2 of the API should not reference “posts” or post_ids, but rather tweets and tweet_ids.

Why?! Those sorts of migrations are painful and clumsy! Yes, Eric says. (He is not stupid.) Maybe it's a lost cause. But, Eric remarks on two things:

1. There is no reason to believe, given software’s previous performance, that any amount of upfront planning is going to generate the most consistent useful model before the software is built and we can interact with it. So you're going to want to iterate. What are the systematic obstacles to renaming the table and the API, and can we overcome them so that we can do lots of little experiments?

2. Something else that is clumsy and painful, is when your users come to you reporting a problem with Widgets or whatever, and you go off and you fix the FactoryService to add some new functionality to widgets, tell the user that their problem is fixed, and they go and do the thing again and run into the same problem that they ran into, “it's not fixed yet!”. Why did this happen? One big reason is that the word “widget” means something different in the database versus the backend, or in the backend versus the frontend, or in the frontend versus the real world. Twitter might get some other notion of “topics” and they roll it out and everyone starts to call them “posts”, now the topic table holds posts and the posts table holds tweets, and you're always looking for “posts” in the wrong table now.

So, you should rename the table because, first, this should be a possible thing for you to do and building up that sort of leverage is going to pay dividends later, and second, reducing the friction of translating between the way we developers speak and the way our users speak is going to pay dividends too.

This anthropology is kind of the core part of DDD, I don't understand why people try to do DDD as design patterns rather than saying that it's the users who unwittingly dictate the design, as we redesign around them to reduce friction.

Similarly, I don't understand why people find it hard to draw context boundaries in DDD. Bounded contexts are a programming idea; in programming we call them namespaces, and they exist to disambiguate between two names that are otherwise the same. DDD says that we need to do this because different parts of the business will use the same word to refer to different things, and trying to get either side of the business to use some different word is error-prone and a losing proposition. So instead we need namespaces, so that both of our domain experts can speak in their own language and we can understand them both, because in this context we use this namespace, in that context we use that namespace. So: where do you draw the boundary? In other words, how big should your modules be? (Or these days, for "module" read "microservice".)

Simple: you partition users into groups, based on the sorts of things that they seem to care about when they are interacting with the system, and the different ways that they talk about the world. The bounded context is not an “entity” or a “strong entity” or a service-discovery threshold, rather it is an anthropological construct just like everything else in DDD. “The people in shipping care about this for one reason, the people in billing care about it for another, they don't usually talk to each other, but I guess sometimes they do...” sounds like you've got a shipping module/microservice and a billing module/microservice. The boundary is the human boundary.
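
A small, hypothetical sketch of that in code: each context gets its own namespace (nested classes here just to keep it in one file; in practice these would be separate packages, modules, or services), and each can define its own "Order" without stepping on the other.

    import java.math.BigDecimal;

    class Shipping {
        // What the shipping people mean when they say "order".
        static class Order {
            String deliveryAddress;
            double parcelWeightKg;
        }
    }

    class Billing {
        // What the billing people mean by the "same" order.
        static class Order {
            BigDecimal outstandingAmount;
            int paymentTermDays;
        }
    }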

Similarly for “should I use events or RPC?” ... Does someone from shipping ever come up to the billing department and say “The delivery costs a ton more because XYZ, the customer said they preferred to pay more rather than cancel the order, I am gonna stay here in billing until this critical task is complete,” or whatever, or would they prefer an asynchronous process like email, “we will just put it on the shelf until we can pay to safely ship it.” Different industries would have different standards here! If it's something that has no shelf life, that delivery does not want to keep in the shelves for one second longer than it has to, then that drives the different behavior. Only way you can know is by observing your users in their natural habitat.


DDD technical patterns are a weird bunch.

On one hand, there are quite a few smart concepts in there. Everyone should know what aggregates are, for example.
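
(For anyone who hasn't met the term: an aggregate is a cluster of objects that is modified only through a single root, so the root can enforce invariants across the whole cluster. A minimal, hypothetical sketch:)

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Aggregate root: the only way in or out for order lines.
    class PurchaseOrder {
        private final List<Line> lines = new ArrayList<>();

        void addLine(String sku, int quantity) {
            if (quantity <= 0) throw new IllegalArgumentException("quantity must be positive");
            lines.add(new Line(sku, quantity));
        }

        List<Line> lines() {
            return Collections.unmodifiableList(lines);  // callers cannot mutate the lines directly
        }

        record Line(String sku, int quantity) {}
    }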

On the other hand, many of them have little to do with the basic premise of DDD. You can have DDD without these specific patterns.

On the third hand, people start applying every single pattern everywhere, like they did with GoF patterns.


I'd advise people new to DDD to involve themselves with understanding the Strategic Design parts first: the rationale, the pros and cons of using it, etc. And stay well away from the tactical patterns until you know the role of DDD with respect to the objectives you want to achieve. Strategic vs. tactical are completely different concerns. The problem with a lot of DDD information is that people tend to dive into the tactical way too early and introduce all kinds of architecture that is not needed. Note that the result of strategic analysis may well be to conclude that a simple CRUD design is the best way forward (for a subdomain or even the entire project). The non-technical domain understanding is most important, and also helps heaps in keeping your (non-technical) stakeholders in the loop throughout the development process.


I found this book very good in that regard. The first half of the book is on strategy and emphasizes its importance. It also makes it clear that only a subset of your system is suitable for DDD (for example, not the CRUD bits). I also found it much clearer and less verbose than the Evans book.

https://www.wiley.com/en-us/Patterns%2C+Principles%2C+and+Pr...


Yes, this is a very good introduction to DDD proper, with the primary focus on when to apply what parts of it!


> On the other hand, many of them have little to do with the basic premise of DDD.

Honestly, I see no relation at all. Those technical patterns can be applied the same way whether you model your code after the domain or not, and you don't need them at all to model your code after the domain.

They look a lot like "true Agile" 2.0.


Good post. The technical patterns are nigh useless unless you're going all in on event sourcing and/or CQRS - and perhaps even then. KISS > DDD.

To me, what matters are the strategic patterns, i.e. how you think and talk about your domain. What a lot of (microservice) software gets wrong is bounded contexts and APIs, which can be improved through event storming (discussing what kind of things happen in your domain) and context mapping (how that [sub]domain interacts with others). And then there's ubiquitous language, or calling the thing what it is to the business, not some programmer gobbledygook.


All you need is a common vocabulary around English proper nouns and some idea of how they work together.

And then the client data has all fields named in Spanish with abbreviations, lettering and numbering everywhere.


I'm an Argentine Spanish-English bilingual programmer. I try to program exclusively in English, but when developing integrations to Argentine systems I often have to fall back to spanglish code.

In general it's preferable to use only english because the code is more readable (because the language's keywords and APIs are in english), but also to maintain consistency with noun-adjective order, verb tenses, etc. Some local concepts can be easily translated (or at least it would seem so) like invoice-factura ... but then inevitably I arrive at concepts for which there is no obvious translation (so I could make one up, but it would not be clear enough for others) or for which the translation doesn't exactly line up.


I think this is common all around the world. For example, the more my teams get close to money and regulations the more we have to use words of our own language. It would be pointless trying to invent a translation of very precise legal words for concepts that exist only in our own country.


I generally treat all country-specific stuff as data.


Any tips on how to persuade a stakeholder, senior leader, or your team lead that their area of focus is not a core part of the business, without them feeling defensive about their status within the company?

Any tips on how to do DDD when every area is considered a core priority of the business because uncomfortable conversations are hard?


Frame it in terms of money.

"There is some benefits to what you're proposing. My primary concern is cost. I see no technical difficulties with doing it <this other way>, and one of the benefits of that is that you'd have it ready in a tenth of the time. If I'm wrong, we can always change our minds and build it your way. That would barely take any additional time at all, since what I'm proposing is so simple to begin with.

"So, what do you say? Would you like to try to get it done by November, or should we labour over it until next June? If I understand the business figures right, if we can get it out in November, we'll make $6,000,000 more – and it would all be because you made the right technical decision here."


> "So, what do you say? Would you like to try to get it done by November, or should we labour over it until next June? If I understand the business figures right, if we can get it out in November, we'll make $6,000,000 more – and it would all be because you made the right technical decision here."

... <End of Year Review>

"We were able to get this out by end of November, thanks to your good technical decision. $6,000,000 saved! Congrats and great job. Accordingly, you've qualified for an annual bonus of a $100 Amazon Gift Card and the default 1.5% raise.

Keep up the good work!"


> Any tips on to persuade a stakeholder , senior leader, or your team lead that their area of focus is not a core part of the business without them feeling defensive about their status within the company?

To get buy-in you have to provide value for them that aligns with their needs. If they desire a delusional feeling of outsized importance within an organization then you need to be quite creative. It's more likely that their needs are simpler though. They need to feel they are getting value from you, even if the other parts of the business hold more of the cards.

Try to find simple things to fix for them, choose to build out features with their input, and when building new features for the main stakeholders try and prioritize items which help multiple stakeholders. This is good practice in any case because in the long-term things can change quite significantly within organizations and you don't want to be perceived as only an ally of some within the business.

UPDATE: Beware of quantifying too much. This really impresses some people but will make others feel really small. You may completely lose a connection with one of the smaller stakeholders by quantifying every detail and calling out big numbers around the large stakeholders. You need to work qualitatively for the most part for them. If you want to use numbers, pilot something with a small stakeholder and make a 25% increase to their sales (or a cost reduction). If big holders can get a similar percentage then that translates to big numbers. It feels like a big number to both groups.


I have a few ideas; in fact, I gave a talk about that a long time ago [0], but I think one of my friends, Marijn, gave a good suggestion on Twitter [1]: don't sell DDD, but fix the problem your boss has.

[0] https://www.slideshare.net/TomJanssens1/selling-ddd

[1] https://twitter.com/huizendveld/status/1440683623628230665


I think the question was more how to sell not-doing-DDD.

Reasonable people (including DDD advocates) clearly understand that tactical patterns can easily be misapplied, and that the strategic part of it is more important, but inexperienced programmers jump in on the "new" fad (it's not even that new, which is most perplexing to me), and misapply all the patterns they possibly can.

I see there's a lot of similar sentiment here, so I think the question really is: how to convince inexperienced "converts" where the right boundary of applying tactical DDD solutions is?


One cause of misapplying tactical patterns is learning. When people start learning something, they do it badly and in inappropriate contexts. The solutions to this are:

A. Don't learn things.

B. Dedicate some amount of time to learn something in a sandbox before using it on the job.

C. Once you've learned something well enough to see the error of your ways, dedicate some amount of time to clean up your old work.


I know A is tongue-in-cheek, but I think you underestimate humans' ability to apply things by learning from good learning materials.

B and C are a single thing, and unfortunately, C does not happen, which is why any particular methodology gets a bad rep. And it's obviously already happening with DDD (judging by the polarized sentiments around here).

And finally, while I do believe abstraction is the ultimate tool of the human mind (and mathematics is the purest form of abstraction we are capable of), I do not think it suits all brains equally, and not everybody will be equally capable of ever getting the right understanding. Basically, your architecture can be _too smart_ if you are looking to hire actual, real-world developers and software engineers, and have them be efficient.


> Your story sounds like they implemented a "technical DDD top-level architecture" (TM), whatever that may be. (I'd assume layers of abstractions coupled with logic spread all over the place, without any added benefit.)

Exactly :)


> In reality the tactical/technical DDD patterns should only be applied in the core part of your business [...]

I cannot imagine anything where it would make sense unless you're implementing an actual framework such as Spring in javaland.

And at that point I'd say you're wasting effort and just using Spring Boot would be preferable.

It does make sense for these super-low-level frameworks, but which corporation makes them in-house at this point?


Think about, for example, a planning component for hospital beds... There are a lot of parts that are really straightforward to implement, but for these planning components it might make more sense to develop an in-house component. (Assuming existing constraint solvers and/or rule engines are not a viable solution for you in this particular scenario.)

If your business is talking about updating/deleting/inserting data, you are not describing the actual reasoning behind the change. For DDD, it makes sense to figure out why exactly you are doing the things you do, and model these explicitly in your systems for those parts that matter.

The stereotypical example of this is an address change: if you model this as an "AddressUpdated", you might as well use CRUD, as this does not specify the intent of the change.

You could change an address because it contained a typo, but you can also change it because someone moved. These might lead to different outcomes, so in DDD you would typically model these as "AddressTypoCorrected" and "ContactMoved".
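
A sketch of what those intent-revealing events might look like (hypothetical shapes; the point is only that the reason for the change is part of the model):

    import java.time.LocalDate;

    sealed interface AddressEvent permits AddressTypoCorrected, ContactMoved {}

    // The address was wrong all along; downstream consumers may want to correct history.
    record AddressTypoCorrected(String contactId, String correctedAddress) implements AddressEvent {}

    // The contact actually moved; this might trigger a welcome letter, tax changes, etc.
    record ContactMoved(String contactId, String newAddress, LocalDate moveDate) implements AddressEvent {}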

There are other fine-grained aspects, for example the need for an identity for an object: physical money is considered a value object (so it has no specific ID per instance) in almost all contexts, unless you are the national bank, where all of a sudden the identity (the serial number of the bill) does matter. So the same "thing" might have different "models" within different parts of the business.

Other examples might be value objects for specific areas, for example weight. Typically weight starts out as a number, and all of a sudden there might be a need to add a precision, mark it as an estimate, or have it "unknown". In order to avoid if-statements all over the code, you construct a "weight value object" that properly manages all of these peculiarities in a single place (i.e. what's the result of an estimate+undefined etc.)
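
A minimal sketch of such a weight value object (hypothetical API), keeping the "unknown" and "estimate" cases in one place:

    // A weight that knows about its own edge cases, so callers don't need
    // scattered if-statements for "unknown" or "estimated" values.
    final class Weight {
        private final Double kilograms;   // null means unknown
        private final boolean estimate;

        private Weight(Double kilograms, boolean estimate) {
            this.kilograms = kilograms;
            this.estimate = estimate;
        }

        static Weight of(double kg)         { return new Weight(kg, false); }
        static Weight estimateOf(double kg) { return new Weight(kg, true); }
        static Weight unknown()             { return new Weight(null, false); }

        // Adding an unknown weight yields unknown; adding an estimate yields an estimate.
        Weight plus(Weight other) {
            if (kilograms == null || other.kilograms == null) return unknown();
            return new Weight(kilograms + other.kilograms, estimate || other.estimate);
        }

        boolean isUnknown()  { return kilograms == null; }
        boolean isEstimate() { return estimate; }
    }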


You clearly have some positive experience with DDD, and I'm definitely not trying to say that DDD is broken by design or anything like that.

I'm sure there are successful and maintainable projects that utilize this design approach.

Nonetheless, the only thing I could think of after reading your example is just how many subtle bugs and inconsistent behaviors this engine will have with various edge cases, so I'm still pretty convinced I'd rather implement it with less abstraction/indirection.

It might just be a difference of character at the end of the day, because I do agree that what you write sounds great. I just see it more like a triangle in the CAP theorem, where the edges are speed, abstraction/extensibility and stability/consistency


> Nonetheless, the only thing I could think of after reading your example is just how many subtle bugs and inconsistent behaviors this engine will have with various edgecases, so I'm still pretty convinced I'd rather implement it with less abstraction/indirection.

This example clarifies the intent behind DDD: make the implicit explicit and make sure there is awareness about all the edge cases.

It might be as simple as contacting a user in case you have an uncovered edge case, but at least you'd be aware that your system is unable to handle edge case X. (In non-DDD scenarios this would just be a bug that emerges - implicit behavior.)


Yes! When the project becomes large enough, a lot of value loss can be prevented by discovering edge cases before implementation, and DDD practically forces that to happen. Fewer things discovered by devs means fewer design cycles, which means less effort lost in design and implementation; of course, only as long as proper grooming is done to avoid implementing things out of customer priority.


> Nonetheless, the only thing I could think of after reading your example is just how many subtle bugs and inconsistent behaviors this engine will have with various edgecases, so I'm still pretty convinced I'd rather implement it with less abstraction/indirection.

This line of reasoning cuts both ways: how many bugs and inconsistent behaviors often pop up because developers rushed to write code without gathering enough requirements on the domain, how many productivity problems are caused by growing the system by accretion where it can, and how many rewrites were required just to fit the system's domain to the problem and shed technical debt from the accretion.


I'm sorry you're having that experience. DDD is specifically aimed at tackling complexity, as it says on the cover. Part of the problem is that complexity is relative to the observer, how experienced they are in that particular domain, etc. Good abstractions make complexity manageable, bad ones create more complexity. And that's another problem: a domain might be quite straightforward but bad explanations, missing information, bad abstractions, etc can make it seem more complex.

Your colleagues need to remember that DDD is supposed to be applied pragmatically. If the structure causes more navigation work than needed, simplify it. If the problem could be solved with a simple CRUD system, do that. If most of the problem is CRUD, but there's one particularly complex bit that changes a lot and requires a lot of flexibility, isolate that part, so that the simple and complex parts can have a simple integration, don't leak into each other, and can evolve at their own speeds.


I think the main problem is that the Blue Book (Domain-Driven Design) contains mostly technical advice. If my memory is correct, only one of the last chapters is about the organizational aspect.

Implementing DDD, on the other hand, is a lot better about this, surely because it was written 10 years later. So most people should start with it.

But anyway, when people say they're using DDD, if they can't point you to some domain experts, a dictionary of the ubiquitous language, or a mapping of what they do, they're not using DDD.


> I'm sorry you're having that experience. DDD is specifically aimed at tackling complexity, as it says on the cover. Part of the problem is that complexity is relative to the observer, how experienced they are in that particular domain, etc.

I'd say part of the problem is that DDD critics conflate DDD with overly complex, enterprisey models that don't match their personal preferences on the acceptable tradeoffs between complexity and correctness.

As DDD comes up sounding like too much work to implement too much complexity that brings too little value, they flag it as a concern.

What I believe is missing from this discussion is the scenario where DDD practices are not followed and consequently teams are forced to iterate and reimplement projects, or parts of them, just to fit requirements that emerged because some aspects of the domain model weren't looked into. Design by accretion is largely accepted, as is technical debt, but they do have a cost.


You make quite the point here. That is the advantage I see for DDD: if you have a dependency on a service that may change, DDD will make your life easier when switching; otherwise there will be weeks of rewriting code. And what happens during transition times (thinking about a CRM, for example) where you have to stay connected to both systems?

This said, in my situation it's like trying to kill flies with a muon cannon (FYI, this gun is fictional and exaggerated to drive the point): it's cool'n's*it, but the same could have been done with a newspaper or your hand. To maintain the muon cannon you need the entire Fermilab team; your hand... well, it's your hand.

Apologies for the exaggeration, can't avoid it :D


>I'd say part of the problem is that DDD critics conflate DDD with overly complex, enterprisey models

Every text I've ever read on DDD has made it pretty clear that these patterns are at the very heart of what it is. I've never seen one that says "look, all this stuff is optional, write your software however, here's how to really get to grips with the domain model".

I don't think it's the critics conflating. It is what it is.


I read a bit about DDD but never really went in-depth with it, like I never read the book or anything.

Instead I just try to absorb the major takeaways that I got from what I've read:

1. Bring in people with domain knowledge to help you understand the expected behavior of the system.

2. Try to establish a consistent language that is used in both verbal conversations as well as code.

I feel like those are good, easy-to-understand principles, and I've never understood why the whole DDD space is taken up by this insanely complicated terminology and theory. It's so off-putting.


This is exactly spot on and my go-to approach as well. I always thought of myself as a huge fan of DDD, simply because I think it makes so much sense to discuss the architecture directly with the stakeholders until everyone, technical and non-technical alike, really agrees about the existence, relations, and constraints of entities: by using a common, ubiquitous language and just giving the same things the same consistent, common-sense names while giving different things different names.

It's a godsend for the top-down inside-out approach of requirements engineering that I like and teach.

Then I met some people at the actual local DDD meetup group and was shocked that that part made up effectively zero percent of their discussion, while the rest was taken up by talk about adapters, hexagonal architecture, and all kinds of artificial design patterns that cultivate complexity and self-importance. I've been careful about calling myself a DDD advocate ever since.

IMHO DDD went through the same unfortunate descent that Agile did: An originally really great idea and common-sense approach that went on to get bastardized into a cargo cult by coaches who like to produce sheets for BS bingo.


Yep, exact same experience.

I also had the unfortunate experience of working with some die-hards who believe DDD dictates that "the code and only the code should reflect the business requirements 1:1", with emphasis on the "only". Which means that things that are often configurable in other systems are instead hardcoded and scattered throughout the codebases built by those die-hards. In early-stage startups this is a death knell.


I signed up for a course on DDD thinking it would be about Data-Driven Design. It was about Domain-Driven Design, which is like the exact opposite.

Data-DD is: focus on the data and keep it simple. Domain-DD is another one of these architecture-astronaut fads where you introduce new abstractions and then spend most of your time wondering whether a penguin is a bird or a fish for the purposes of your application.


> Domain-DD is another one of these architecture astronauts fads where you introduce new abstractions and then spend most of the time wondering whether penguin is a bird or a fish for the purpose of your application.

That doesn't sound right. Domain-driven design is just a technique to design a data model that fits your application domain, and to design the whole application around that data model.

To put it differently, with DDD first you define the data structures, and afterwards the application is just operators over those data structures.

The only reason why in DDD you would care about whether "a penguin is a fish or a bird" is if your business required handling fishes or birds in fundamentally different and incompatible ways, which would require you to write special-purpose code that crossed all layers.

It sounds like you've been experiencing a kind of analysis paralysis with the DDD tag.


> Domain-DD is another one of these architecture astronauts fads where you introduce new abstractions

Reading this, I guess you followed the wrong course, one that shoved Tactical Patterns down your throat without explaining well enough why you should use them. DDD at its core is not about architecture, but about *common understanding* of the domain, *before* you start coding and hacking on a particular architecture. The domain understanding should help you make appropriate architecture decisions, but in no way dictate what architecture designs to use.


> it have been divided in more that 25 files that hold 3 or 4 lines of code at most, with so many abstraction layers that it's impossible for the best of us to follow in one go.

When you put engineers in charge you get overengineering and when you put managers you get underengineering.

Is there a way out?


This is so true, and has been forever. In the early 90s I worked on a system where you couldn't just write structs, rather you had to submit their definition to a guy who entered the details into a database, and there was a daily run to generate the C header files from that database. To this day I'm convinced the only reason it was done this way was that it could be done this way.


It's an over-reliance on ceremony. On some days I think it's a character trait.


Ouch! Crazy ;-)


> When you put engineers in charge you get overengineering

This means you picked the wrong engineers. An engineer's job and skill is to be able to make this sort of call.

The same goes with managers or any other role.

So the way out is always to carefully select experienced, competent, no nonsense people.


Get experienced senior engineers; they have probably seen both, and in the best case they have worked on a middle-ground project, so they have experienced both how to do it and how not to do it.


I think you need to have a few of those "I've used this pattern and had to stay up 24 hours to meet a deadline because shit was way too complicated for the delivered value" moments to instil a healthy fear of overcomplicating solutions.

I've worked with developers with 5+ years of experience who haven't gone through that (either corporate culture allowed them to deliver minimum value in 5 days, or they jumped projects before things got to the WTF stage of complexity). It's hard to learn if you never get burned.


Experienced and exposed to areas outside their main expertise so they are well rounded and pragmatic.


At the moment, there's no way out; planning stipulates that all our microservices must be rewritten to accommodate this super abstract DDD design (and you're right, it was an engineer who created our current layout)


> When you put engineers in charge you get overengineering and when you put managers you get underengineering. Is there a way out?

The whole idea is to bring them together "in the same room" and allow a common understanding of the domain so they stay on the same page throughout the project. Then they'll each do what they are best at (managers whip up glossy slides, and devs crank out reams of code ;)


I noticed something similar. A minimal PR to introduce DDD ballooned our codebase by something like 1,000 lines, scattered all over the place. I think it would have ballooned it by about 12-15,000 lines in total if we'd used it everywhere. That would have been fertile breeding ground for bugs.

The ideas made sense for logically complex code that required frequent refactoring, but the strict separation between all the different layers simply led to a lot of code in most instances. Far too much.

We also couldn't agree on where the limits of the bounded contexts really lay. Most documentation on this issue is a mere handwave saying "you figure it out" or "it'll become clear when you do these exercises with the business" (it didn't), which is odd given how vitally important it is and how damaging it is to bound the wrong things.


> We also couldn't agree on where the limits of the bounded contexts really lay. Most documentation on this issue is a mere handwave saying "you figure it out" or "it'll become clear when you do these exercises with the business" (it didn't), which is odd given how vitally important it is and how damaging it is to bound the wrong things.

This is the hardest part of software design. No wonder there are no clear cut rules on how to do it. You have to be both a domain and implementation expert to get the boundaries right on the first try.


Yeah, it is tricky. I've rewritten many a code base because I drew the original boundaries in the wrong place and their logical locations only became clear in retrospect.

I'm not so convinced that it's something you can get just by better communicating with "the business" either.

This is partly what infuriated me so about the extreme amount of code required to follow DDD patterns - 3-4x the amount of code means 3-4x the cost of that rewrite if you get the boundary wrong.


I find the original tactical DDD patterns as useful as the gang-of-four OOP patterns these days. Modern languages made the latter irrelevant. Modern DDD practice emphasizes getting the strategic aspects of DDD right: language and boundaries.

Doesn't matter if your code has a type named `Aggregate` in it. Matters if you get your consistency boundaries right.

> I'm not so convinced that it's something you can get just by better communicating with "the business" either.

I don't have a good answer for this. I personally try to keep my modules small so that there's not more than a ~week worth of stuff to redo if understanding of business (or business itself) changes. I often fail too.


>Doesn't matter if your code has a type named `Aggregate` in it. Matters if you get your consistency boundaries right

I tended to find a lot written about the former and very little written about the latter.

In fact I found essentially zero practical or actionable advice about getting the boundaries right beyond just "it's important to get it right". DDD doesn't appear to have a coherent opinion beyond that.

Not in the books nor in blog posts written about the topic.

It reminded me a bit of how scrum had a lot to say about standups (the color of that shed was well defined) but very little to say about refactoring.


I've found that separating around "abstraction levels", to use Clean Code lingo, really helps for defining good boundaries. This is really super-emphasised in Clean Code, but I find this the most important concept in the book.

Examples:

Are you writing to disk? Printing to the screen? Saving to the database? That's a different "abstraction level" than your data processing; those things should be separated by a boundary.

Are you doing hardcore math to procedurally generate a terrain? It shouldn't be on the same layer you push the triangles to the screen.

It's also about isolation: you want to control the Framebuffers in your 3D renderer? Don't let the Framebuffer class with OpenGL calls ever leak outside the renderer, even though there's encapsulation. Just use a data class to communicate between renderer/non-renderer code.

--

However, the part that is not really discussed in these books is that we should be vigilant not to "over-abstract". Sometimes you have a certain level of abstraction distributed over two or more classes, only to mirror some kind of external structure or mental model, when in reality you want the code for those things to live together in the same class/method.

One example I run across a lot in 3D renderers is wrapping internal 3D library concepts like "vertex buffer", "context" and "framebuffer" in separate abstract objects, even when they really don't need to be abstract.

For example: "Open GL renderer" will only ever be able to call "OpenGL vertex buffer", "OpenGL context" and "OpenGL framebuffer", while Vulkan will only call the Vulkan equivalents. This means you don't need a "framebuffer" abstraction, you can have it on the same layer as the renderer. You might need a data-only, non-abstract "framebuffer" class to control it from the outside, though.


>However, the part that is not really discussed in these books is that we should be vigilant not to "over-abstract".

To be fair, I've never really seen any process/paradigm address this. I've mostly done it based upon gut feel - sometimes abstractions seem like too much of an imposition. Other times it feels critical.

DDD just says "do way too much. all the time".

I imagine one day there will be an abstraction calculus but software engineering aint there yet.


This is a bit different. You are talking about a domain where the developer is usually the domain expert. You know very well what "procedurally generated terrain", "3D renderer", "database", "screen" and "framebuffer" are. Even if you don't - good and unambiguous definitions are usually just a couple internet searches away.

Now imagine you need to encode behaviours for a system where domain experts use terms and jargon you've never heard before. Even worse, users of the same system coming from different departments use the same terms to mean different things. How do you draw the boundaries there? That's what the GP finds disappointing - there is no single guide or reliable process to jump into a new domain and get the boundaries right.


That's not how lines of code work - more lines can be clearer and easier to write. There are extremely concise mathematical proofs that require an extreme amount of time to produce, whereas a more verbose proof takes a simpler approach, one that involves less thinking.

Same with programs.


If there were a way to produce bounded context boundaries by following a general pattern / algorithm, what would that be? I fail to see how they can be created usefully without lots of conversations with domain experts plus elbow grease. It's the part of software that remains entirely art and not science.

Why is that a bad thing?


My impression was that it would:

* Try to minimize the overlap between bounded contexts - loosely coupling domain models.

* Default to "too large" to begin with and institute a pattern for breaking down a bounded context into two smaller contexts.

I tend to find any process that defaults to "have more conversations/interactions" defaults to wheel spinning without some sort of specific plan about what those interactions would entail.


> Try to minimize the overlap between bounded contexts - loosely coupling domain models.

What if this doesn't lead to a more accurate representation of the domain? What if two contexts are just coupled in the business, for good reasons?

> Default to "too large" to begin with and institute a pattern for breaking down a bounded context into two smaller contexts

This is exactly the kind of ambiguous advice that you are railing against. How do you know when to break down a context?

It's like, building any system involving many actors and actions is hard, that has nothing to do with software. We're just digitizing the same patterns and behaviors that people have used to run companies for hundreds of years.

People want a playbook to be followed to arrive at a "perfect" domain model or architecture. I'm sorry, that sounds pretty farfetched to me. It reminds me of how we first started thinking about computability theory, when David Hilbert proposed that we should be able to devise an algorithm that could decide the truth or falsity of any logical statement (the Entscheidungsproblem). Hilbert was one of the smartest mathematicians to ever live, and he was very confident that this could be done.

Well, Alan Turing, Kurt Gödel, and Alonzo Church (not slouches in their own right) all smashed that idea with various proofs. The truth can often be counterintuitive. I am sorry that the world is complex, I also wish it weren't so.


I think you misunderstand. I said that I thought that DDD would prescribe something along these lines. I am not endorsing this as a fully complete, usable process; I am saying that something like this is both possible and necessary for the paradigm to function.

It's a critical topic that is right at the heart of DDD. I researched up and down on this topic, and unlike "where to use a factory" the DDD community refuses to go into even as much detail on it as I just did with my half-baked comment.

I am not trying to "complete" DDD here. I think DDD should largely be consigned to the trash heap.


> This is exactly the kind of ambiguous advice that you are railing against. How do you know when to break down a context?

This is a very important question. I will bite and give some guidance, because no one is willing to give advice here. If your development team grows beyond 10 software engineers, then you need to split it, so that each bounded context has a complete team on it, becoming experts in the subject as well as experts in the software implementation. Each team should be able to work independently of other teams.


Forming your bounded contexts based on technical complexity and team configuration is certainly not advocated by DDD.

Is "larger than 10 software engineers" a rule that domain experts would suggest? Would that be in the ubiquitous language?


How exactly does DDD advocate forming bounded contexts?

I ask because I've only ever heard personal opinions in answer to this question. I researched it heavily and found nothing. As far as I can tell it doesn't.


I've been replying to you in other threads. There is no answer to this question, it is an art. I understand that it's frustrating, but that doesn't make it any less true.

As far as what the goal of the art is - the goal is to avoid linguistic and semantic ambiguities in the ubiquitous language. There is even a section entitled "Recognizing Splinters Within a Bounded Context" where specific examples are given:

* duplicate concepts
* false cognates

If you have truly duplicate concepts across contexts, this is a symptom of the lines not properly being drawn, and perhaps a new, shared context is missing.

False cognates are the bread and butter of bounded contexts though - these occur when two areas of a business use the same exact word for something, but they mean slightly different things _depending on the business context_. The example given in the book is the notion of a "Charge," which customer invoicing and bill payment departments might both use. But each department only cares about certain pieces of data of a charge, so if one "Charge" model were created, it would be more complicated because it would have to worry about all the different ways that the teams use it. And even worse, sometimes they are used in _conflicting ways_. That is a semantic collision, creating ambiguity in the model.

This is what a bounded context is meant to address. Each department gets its own model, each with its own version of Charge. The code and data is fit for the specific business purpose it's serving, instead of having a one-size-fits all model that gets the job done, but is more complicated to use in all contexts.
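Concretely, and with field names invented just for illustration, each context would own its own Charge, living in its own module:

    # invoicing/model.py -- the invoicing context's own idea of a Charge
    from dataclasses import dataclass
    from datetime import date

    @dataclass(frozen=True)
    class Charge:
        invoice_id: str
        description: str
        amount_cents: int
        billed_on: date

    # payments/model.py -- the bill payment context's own idea of a Charge
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Charge:
        payment_reference: str
        amount_cents: int
        settled: bool

Neither model has to carry the other department's fields; whatever translation is needed between the two contexts is done explicitly at the boundary.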

Honestly curious, have you read the book? I still don't think it will give you what you're looking for in terms of a prescriptive formula for "doing DDD right," but there's quite a bit of guidance in there.


I'm familiar with the idea that "duplicate concepts" indicate you should have a separate bounded context - from, I think, Martin Fowler's blog? This is actually partly what I was referring to when I said hand-waving.

It's conceptually similar to answering the question "How do I know where the borders of Germany lie?" by saying "ask the first person you see if they speak German".

It also conflicted with a process I followed, which was to essentially create a team glossary and agree to semantically disambiguate terms which had multiple different meanings (e.g. linux user/website user instead of just user) and even just "ban" the usage of terms which got overloaded too much.

(I discovered that semantic collisions didn't just present problems in code, it often prevented you and your team from having coherent conversations).

This could, of course, then put everything we touched as a team into the same bounded context. Or not...?

>The example given in the book is the notion of a "Charge," which customer invoicing and bill payment departments might both use. But each department only cares about certain pieces of data of a charge

It sounds like they're essentially saying (not explicitly, but via assumption) that your software should follow Conway's law.

Nonetheless, this example screams "bug alert" to me, since assumptions made by departments (and, as a consequence, software systems) about what they should care about are where the really nasty bugs lie - frequently driven by misunderstandings between departments about terms (e.g. what counts as a user).


In a typical pattern of buzzword adoption, your horrible architecture isn't DDD just because someone calls it so; it's just bad design.

In particular, pulverized source files and excessive abstraction layers are characteristic symptoms of dogmatic, value-oblivious impractical design: quite the opposite of thinking hard about a meaningful domain model in order to use it as a shared language.


Exactly like agile, which has morphed into following the rules of the latest Agile Framework.


> Now, you could make the argument "You Are Doing It Wrong(tm)"

I always hate these arguments - for me, whether a particular programming paradigm is 'good' or 'bad' for an organisation comes down to: "what will my least senior developer do with this?". If it tends to produce tangled nightmares, then it's not a good paradigm, it's about how the weakest link will use it, not the strongest ones.


Ok, so in practice you are saying that no good programming paradigm exists. Where do we go from there?


I agree no paradigm is particularly great for beginners - OOP leads to junior devs separating their code prematurely/inaccurately, writing horrendous inheritance trees etc., functional programming can lead to some really opaque code that feels like you're trying to solve a puzzle when reading it. I do think good principles exist however:

- try to write code that can be easily unit tested

- composition over inheritance

- immutable over mutable + avoid side effects

- avoid recursion unless your data is recursive

etc. etc.
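For instance, here's a tiny sketch of the composition and immutability points (hypothetical names):

    from dataclasses import dataclass, replace

    # Immutable value object: "changing" it returns a new instance, no side effects.
    @dataclass(frozen=True)
    class Money:
        amount_cents: int
        currency: str

        def add(self, other: "Money") -> "Money":
            assert self.currency == other.currency, "currency mismatch"
            return Money(self.amount_cents + other.amount_cents, self.currency)

    # Composition over inheritance: Invoice *has* a Money total, it doesn't inherit one.
    @dataclass(frozen=True)
    class Invoice:
        number: str
        total: Money

        def with_extra_fee(self, fee: Money) -> "Invoice":
            return replace(self, total=self.total.add(fee))

Both classes are trivially unit-testable because there's no hidden state to set up.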


Well, I don't think anything changed. We already knew there was no Silver Bullet.

To paraphrase Brooks, no pattern or methodology is gonna make development "more productive, more reliable or simpler". All patterns and methodologies require a skilled practitioner, but someone skilled enough can even eschew them and still make good software, so in the end they don't really matter.

The point of patterns/methodologies is purely to facilitate communication between experts, not to guide.

In the end, good software is not about following recipes, especially complex recipes. It's a craft.

As for where do we go, what we need is the foundation for personally re-discovering and internalising those same methodologies and patterns, rather than blindly following them. Sweezyjeezy already enumerated some ideas above, and I have done the same in the past: https://news.ycombinator.com/item?id=27987498


Certainly no good general purpose programming paradigm exists. Nor will it.

The more general purpose it purports to be the vaguer and more misapplied it will end up being.

DDD could be a lot better as a movement if it tried to limit its scope a bit.


Wait until the market stabilizes and new devs don't outnumber more senior ones 10:1.

Until then, we have to stick to patterns that are very simple, easy to teach and have wide pits of success. We also have to lean heavily on systems that are easy to replace and easy to refactor, and can be managed by the few very experienced devs. That's why platforms like Ruby on Rails did so well, even though they can be divisive.


> for me, whether a particular programming paradigm is 'good' or 'bad' for an organisation comes down to: "what will my least senior developer do with this?"

I wish ideas like these were more prevalent or that, at least, people considered that in many cases the quality of a method or practice is not an intrinsic attribute, but a matter of suitability to a particular environment.

That's the problem I have with agile enthusiasts in general. I've seen agile methods work wonders on many projects, but I've seen them fail on many more, and it's completely OK to assume that the method has its assumptions/conditions/limitations. Enthusiasts, instead, blame the company and practitioners for not having understood and applied it appropriately.


That's not a good heuristic IMO; it boils down to evaluating every item in terms of what a toddler could do with it. There are situations in which it makes sense, but you can't build a civilization like that. The correct way is to keep toddlers / "least senior developers" away from powerful/dangerous things, until they grow up to the point they can be taught to use them responsibly.


Developers will destroy everything good!

DDD has some great ideas about how to model things and talk about things. But it is not a concrete architecture prescribing a particular number of layers or lines of code.


I read a preprint of Eric Evans book before it came out, and then bought a copy when it got published.

As someone who had been in the industry 6-7 years at that point, it really resonated - he was describing modes of success and failure I had seen but didn’t really have names for. Much of the usefulness of the book was just to put names on these things.

What has happened to ‘DDD’ in the meantime surprised me. It never occurred to me from the original book that a methodology of strict practices could emerge from it. To me that wasn’t the sense of it at all.


I call this “domain driven design driven design”.


DDD doesn't prescribe any structure really, other than saying you should separate out different contexts and use unified terminology throughout the business, which I don't think anyone would argue against.

Are you talking about design patterns by any chance? Maybe things like repositories, adapters, small and focused service classes and the like?


If it’s really that simple, why are bible-sized books written on the topic? Why does it even need a name if it’s just common sense?


Any engineering methodology that requires 500+ pages to explain should be avoided, hard. The size of those DDD books always made me run away. Good engineering should always come back to KISS. Abstract only when it helps reduce complexity.


Cause consultants need to make $.


Don't get me wrong, I'm no DDD expert by any stretch.

I think the main topic and real difficulty of DDD is figuring out specifically where and how to separate out different contexts (the so-called 'bounded context' in DDD parlance).

Creating microservices is a good example. Where should the responsibility for a single microservice start and finish? What might the implications for scalability and extensibility be? What does this mean for data storage? What data will be shared or replicated between microservices and how will this be done?

Answering these kinds of questions is hard and has big implications for your teams and for your business.


^^ These are the kinds of questions DDD has very vague answers to IME. It handwaves about all of the most critical aspects of software development.

It's very specific that, for instance, your domain model "should only be created by factories", though.

It's all a bit bikesheddy.


DDD provides insight into understanding how a business works, for people who are not domain experts themselves and are tasked with translating business requirements into code. This insight helps make appropriate decisions about those "most critical aspects of software development", nothing more, nothing less. Whether you use factories or not is a much lower-level technical decision, far removed from the essence and rationale of 'doing DDD'.


>Whether you use factories or not is a much lower-level technical decision far removed from the essence and rationale of 'doing DDD'.

Shrug, maybe I'm reading all the wrong things, but IME most DDD discussions, blog posts and books sound more like this:

https://stackoverflow.com/questions/555241/domain-driven-des...

And few delve into the real "essence" as you put it.

Event storming is one of the few times it does, but it doesn't seem to be a core part of DDD and I found the outcome to be underwhelming.


That SO article is from '09, when 'big OOP' was all the hype. Hyping things up to bigger proportions than they deserve is a problem in IT. We just saw it with 'Microservices', for instance. These hypes serve to overpromise what you'll get, and sow confusion for years to come. In that regard I hope that DDD will not climb the hype cycle again, and we'll stay calm and just use what works.

I think most important to realize that DDD is just another tool in your toolbox, and can be used alongside all / most of the other tools you already use. Event storming can be a nice way to quickly kick off the elaboration process, should the method appeal to you.


People write books that could be blog posts all the time.

Most non-fiction literature is like that.


> If it’s really that simple, why are bible-sized books written on the topic?

Money.


> I detest it so much that sometimes it makes me wonder if I even want to continue in the programming space

Hugs. I'm sure we've all suffered this Kafkaesque torture. But it still sucks.

Have you heard of the CIA (née OSS) Simple Sabotage Field Manual? http://www.simplesabotage.com

It predates Brazil, Office Space, Dilbert, etc. After reading this book, and observing management, it's hard to imagine it's not all deliberate. There's just something inherently evil in bureaucracy.

> ...you could make the argument "You Are Doing It Wrong™"

Ages ago my company's study group tackled Applying Use Cases: A Practical Guide. https://www.amazon.com/Applying-Use-Cases-Practical-Guide/dp... After all the monkey motion with UML, schemes, design patterns, etc, this book was like a clarion blasting away ignorance and ambiguity.

It was so clear. Do the use cases. Then directly derive the architecture from those use cases. Voila! Impossible to fuck up.

However. Young me learned a very valuable lesson.

Nothing is so obvious and virtuous and good that some whackadoodles cannot, will not comprehend it.

But why?

Obstinance? Actual confusion? Inability to suspend disbelief? White knuckled desperate grasp on prior beliefs? Fear? Moral and philosophical opposition? Refusal to concede control (power)?

I have no idea why.

Whatever the root cause, I've experienced these impasses so many times, I've simply given up.

I eventually learned to do whatever it takes to publicly appease the tyrannical gods of confusion, then do any actual work as able on the down low.


Much like most things engineers bitch about, the tool isn't to blame here (as you say yourself) - it's the wrong tool for the wrong job.

There's nothing about DDD that says you can't make a simple CRUD API, if that's all that is required.

DDD's principal value is one of ubiquity (sold as "Ubiquitous Language", but I posit that "Ubiquity" is more accurate) - does your code do what the organisation does, and vice versa?

Not just using the same terminology, but using the same workflow.

Now if what the organisation does is basic stuff, then your code should be basic, too.

If there is an asymmetry between what the org and code do.. there's pain.



My experience too. “No silver bullet” is correct. DDD has some value but the cargo culting of it gets pretty ridiculous in some places.


DDD principles should only be applied to reduce complexity. If the problem space is really just crud applications, DDD modeling or engineering is a waste of time and money.

But here’s where you have to be careful. It’s still useful to go through things like Event Storming and modeling domains to understand the problem space before deciding that basic crud is good enough.


I've been coding for over 15 years now and every time someone says "Nah, we only need CRUD", they end up eating their words a couple of months down the line. The net result: domain logic all over the place. People tend to think that applying DDD implies a tremendous overhead in the code. That doesn't have to be the case. However, some tend to go overboard, and that's when I can understand the frustration.


Quick-and-dirty is always dirty and rarely quick.


> instead of pondering the question, do you really need to rewrite everything following DDD?

Anecdotal: I chatted with Eric Evans, author of the DDD book, at a conference once, and he stressed that DDD was only appropriate for certain parts of the system that called for it. I think he'd be as frustrated as you are by the situation you describe.


The parent poster also mentions in another post that they are using Microservices.

I can see how excessively fine-grained division into Microservices can create a nightmare. I have seen it over and over. We need better guidance on how to break down services and how far to go. Eric Evans touches on this https://www.youtube.com/watch?v=sFCgXH7DwxM but I still think he is too shy about giving advice. Microservices are valuable because they allow your team to work autonomously, so you should roughly have one Microservice per team. It is not a hard rule at all, but it can help you see whether your microservices are too granular. That is, the number of developers should be roughly 5 to 7 times the number of Microservices. If you have fewer than 2 developers per Microservice, you are probably creating a maintenance nightmare.


This is exactly the state of a project I'm working on, but without DDD and with microservices instead. It is probably more of a general problem of complexity growth that happens when anything goes wrong with the architecture and isn't addressed properly, not just a consequence of bad DDD.


I’ll bite: your team may say they are doing “domain driven design”, but from the description it's clear they are not; you could just as well claim to be training alligators and be as correct.

However, you _are_ correct to say that DDD has a very limited use - as does the domain model pattern itself. Martin Fowler even calls this out in Patterns of Enterprise Application Architecture, stressing that it is necessary only when there are “complex and ever-changing business rules”. Most business systems could just as well be multi-player Access databases with only “a few sums and not-null checks”, and thus should not use a domain model, nor a technique aimed at designing and validating a domain model.

Honestly, I’d find a new job.


If you are doing CRUD in DDD ... "You are doing it wrong"


> Don't get me wrong, DDD has meaning and purpose, but some companies are applying it as a badge to be obtained

That's the issue. If they weren't doing it with DDD, they would be doing the exact same thing with whatever the CEO read in Gartner instead.


You don't use DDD at your current company. Naming it DDD doesn't make it DDD.


That's the "No True Yorkshireman" fallacy.

In my experience it's extremely safe to assume that, even with the best intentions, DDD code can become stupid big balls of mud.


> more than 25 files that hold 3 or 4 lines of code at most

That sounds kind of annoying, but not very hard to understand.


I've been in a similar situation so I hear you. It is so mentally draining.


If it's just CRUD, create a minimal API that handles it to show the difference.

Which programming language? There was some work shown in .NET recently.


Python, FastAPI... so you can imagine how something as simple as a CRUD quickly becomes impossible to manage as soon as you ignore everything and start creating repositories, queries, use cases, etc. for a single endpoint.

I agree with you that a minimal implementation should handle it but... someone had wet dreams with DDD and everything has to be DDD now :)
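For reference, a minimal "just CRUD" version of a single endpoint might be nothing more than this (made-up resource; an in-memory dict stands in for the real database):

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()
    notes: dict = {}  # in-memory stand-in for real storage

    class Note(BaseModel):
        text: str

    @app.put("/notes/{note_id}")
    def upsert_note(note_id: str, body: Note):
        notes[note_id] = body.text
        return body

    @app.get("/notes/{note_id}")
    def read_note(note_id: str):
        if note_id not in notes:
            raise HTTPException(status_code=404, detail="unknown note")
        return Note(text=notes[note_id])

    @app.delete("/notes/{note_id}")
    def delete_note(note_id: str):
        notes.pop(note_id, None)

No repositories, no use cases, no mappers; those layers can be introduced later where the domain actually grows rules that pay for them.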


And for something like "translations crud" this shouldn't be required (which service needs to be updated from a translation update?).

Perhaps that's a perfect example to implement it and push it. (Depends on how far you're willing to go and whom you have to push back against.)

Looking for a fancy name to describe the "pattern" could help.


DDD is a wonderful idea and one that I think should be applied more broadly: a deep understanding of the problem domain, shared between tech and product, can greatly improve quality, reduce bugs, simplify communication, and even result in better structured code. In particular, it tends to lead to code structure that follows the structure of the underlying problem domain, which tends to be much clearer than code that just tries to implement features without an underlying understanding of their purpose.

The design patterns and actual implementations traditionally associated with DDD, on the other hand, are quite bad, and tend to lead to Enterprise Software levels of cruft and/or excessively dogmatic use of OOP.


Remember when people got offended by the "DDD" acronym? [1] "Good" times...

[1] https://twitter.com/sarahmei/status/1073251153360482304


LOL. If only people understood that the same letters can be used for multiple acronyms: https://en.m.wikipedia.org/wiki/DDD

What's even funnier is that none of the acronyms listed there is a soft-porn reference as claimed in the twitter thread.


I just don't understand people that insist on bringing their emotional baggage into absolutely everything.


The definition is highly summarised, which (like many definitions) makes it only useful if you already understand the concept, and you need a way to remember it. Or the definition can be a framework for explaining it to others, which is how I use it in workshops.

A more real world story of DDD in practice is this blog post:

https://verraes.net/2021/09/design-and-reality/


I am working really hard on how to apply DDD to a video game context, specifically with Unity3D. It is a struggle because I think the game creator cultural zeitgeist doesn't like applying software patterns from 'outside' influences, like enterprise software. It is my personal view that the zeitgeist is terribly short-sighted and is actually only considering the game client at the expense of the entire software system. (It is also a struggle for me because the video game community appears littered with SEO grifters, and the real experts are very secretive, so different from open source on the web.) That is okay because it is fun to synthesize DDD techniques into a new context.

I think of game software as the entire system and not just the game client. That means any backend or infrastructure is a piece of the complex whole.

DDD for backend systems is well understood. The difference is it needs to serve game clients as well as any other clients (e.g. a website to market the game, forums for game related posting, maybe even run logic for chatbot(s)). If you want to expose online services to your game client, the backend system is where you will want to think about what logic to run and what state to persist.

I like DDD for some categories of game client code, not all. There are some simple gameplay elements that are driven by game engine physics and collisions. There are more complex gameplay elements that benefit from more complex domain modeling techniques. Basically, allow yourself to write untested/lightly tested components for simple gameplay elements. For complex systems, it may be beneficial to think of Unity as the concrete implementation for domain-specific logic. Model the code in the language of the game system expert and think about how to translate Unity runtime events (user input, collisions) into terms meaningful for the domain.

In practical terms, unity game objects represent domain entities and interactions between game objects can trigger domain logic. For example, a projectile game object colliding with a combatant game object is a good place to execute combat service logic, which may trigger domain events (observable behavior) that can then be listened for by subscribers (e.g. a damage UI, a toast notification, or any other unity UI).
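As a rough, engine-agnostic sketch of that split (names are hypothetical, and it's shown in Python rather than C# purely to keep it short; the Unity collision callback would just be the adapter that calls into it):

    from dataclasses import dataclass
    from typing import Callable, List

    # Domain event: observable behaviour, free of any engine types.
    @dataclass(frozen=True)
    class DamageApplied:
        target_id: str
        amount: int
        remaining_health: int

    @dataclass
    class Combatant:
        combatant_id: str
        health: int

    class CombatService:
        """Pure domain logic; OnCollisionEnter would translate the collision into a call here."""

        def __init__(self) -> None:
            self._subscribers: List[Callable[[DamageApplied], None]] = []

        def subscribe(self, handler: Callable[[DamageApplied], None]) -> None:
            self._subscribers.append(handler)

        def apply_projectile_hit(self, target: Combatant, damage: int) -> None:
            target.health = max(0, target.health - damage)
            event = DamageApplied(target.combatant_id, damage, target.health)
            for handler in self._subscribers:  # e.g. damage UI, toast notification
                handler(event)

    service = CombatService()
    service.subscribe(lambda e: print(f"{e.target_id} took {e.amount}, {e.remaining_health} HP left"))
    service.apply_projectile_hit(Combatant("orc-7", health=30), damage=12)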

This is turning into more of a brain dump but I wanted to send this out into the world since it’s a topic I’m studying heavily right now.

Does anyone have more experience applying DDD to video games out there?


As with anything else, a blind application of a technique, paradigm, or pattern can be horrific. It doesn't matter what combination of capital letters is used. I've done horrific things when I was "doing DDD", until I realised how simple and elegant domain models could be when done right. Forcing people to do something they aren't used to and expecting decent results is disastrous in any industry. It's just that in software everything can be "refactored", so we are more easy-going about these things.

With DDD "done right" there's no issue with understanding what the code does because it's split into tons of files. It's actually the opposite. But, indeed, it could be a nightmare when applied blindly, or forcibly because of some ivory-tower architect decided to "use DDD" without even knowing what it is. I know as I've been that guy.


Yet another overloaded tech buzzword. For me, DDD means 2 things:

1. Listen to how your customer refers to their business and internalize this perspective as if your life depends on it (your career certainly may).

2. Find a way to build your product and communicate about it in similar terms. The methodology should be understood to be domain-specific.

Anything that extends the philosophy beyond these points is making assumptions about your product that really only you and your customers should be making.

The only concrete technical assumption I would make here is that you probably want to at least consider SQL for a few minutes if you are dealing with tons of distinct types and complex, rapidly-changing requirements from the business. If you can find a way to make the business write their own logic/queries in a language they understand, you have discovered the most powerful force multiplier I am personally aware of right now.


DDD seems to be one of those things where 'what it's meant to mean' is very different from 'how it's actually practiced'.

Reading about what it's meant to mean, it seems pretty common sense. As often implemented, however, it seems to lead to a lot of accidental complexity and a bunch of abstraction layers of dubious usefulness.

I have a theory why this is. It seems that enterprise development as a field has a penchant for methodologies and acronyms. So whatever silver bullet the enterprise-y shops get interested in eventually gets enterprisified. Be it UML, OOP, Agile, or DDD.

EDIT: I've found this an interesting read that sheds some more light on DDD using counter-examples:

http://media.pragprog.com/titles/swdddf/understand.pdf


I made this introduction to DDD [0] with my friend Thomas a few years ago. It was specifically intended at developers and to be very practical.

[0] Video of the talk: https://vimeo.com/167722768


Now for a second there I was a bit confused and thought this was about forming your business ideas around the available .com domain names.

My guess was wrong, but I'm still confused what it is about :)


This principle is a key flaw in the idea: "A complex domain can not be efficiently expressed as a single universal model and language, and must therefore be separated into Bounded Contexts (ie. an internally consistent language and model) by the system designers". We humans understand how everything we experience is part of a wider world. So if you are unable to know how things relate, as all things do, you do not yet understand the domain.


I don’t think it’s accurate to say that DDD advocates ignoring how things relate. It’s more like finding the higher level concepts that do relate to each other. You don’t need to know about internal combustion to write traffic rules. In fact, it’s better if you don’t know about that because your rules shouldn’t need to change when the mechanism that moves the car changes.


Separating things into bounded contexts sounds like specifically not connecting them, which means not understanding how they connect or even trying. And it's these bounded things that seem to cause the biggest issues later on, when their actual connections come to the surface as bugs in the system.


In my experience the most useful part of DDD is to have a common vocabulary for your projects - developers and end users should have some common terminology. For complex business domains it is good to have a glossary, and for your code to always use those words in the same way the business uses them.

Other than that, most DDD concepts are a bit dated, and really oriented around Java/C# in the early 2000s.


I feel that I must be missing something. How else can one design anything?


I think domain-driven design was born to differentiate it from database-driven design. So we design at a higher level, and our thought process and vocabulary are tied to business terms.


We used to design systems at a higher level with UML: Use Case diagrams were meant as a design tool to talk with the product owner about the subjects and verbs of the domain. Sequence diagrams allowed us to capture any dynamics and interactions at the domain level.

That was like 25 years ago.


"Making illegal states unrepresentable" is one of the worst possible pieces of advice I ever encountered.

It sounds very reasonable, but as soon as you face the issue of communicating to the user or other components the fact that something is wrong and what is wrong, you'll discover that it is very hard to inform about illegal states if you cannot represent them.


The idea is that you have a clear, strongly-typed separation between unvalidated data and validated data.

Unvalidated data (the DTO) is just the raw representation of whatever was input, read from storage, received from external services, etc. Any possible input should be acceptable and faithfully representable as a DTO.

Then, by passing the DTO to a validation function, you return either a validated object (a model) which is in fact constrained to only contain a legal data state; or a set of validation errors which can be acted upon.

Your business logic should operate only on validated objects, so that you can actually rely on your basic assumptions, and actual workflow rules (eg. "you can't checkout an empty cart") can be expressed and separated from trivial validation (eg. "quantity must be greater than 0").
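A small sketch of that split, with made-up names (the cart item is hypothetical; only the shape of the workflow matters):

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    # Unvalidated DTO: faithfully holds whatever came in, however broken.
    @dataclass
    class CartItemDTO:
        sku: Optional[str]
        quantity: Optional[int]

    # Validated model: by construction it only ever holds legal state.
    @dataclass(frozen=True)
    class CartItem:
        sku: str
        quantity: int  # always > 0 once validation has run

    def validate_cart_item(dto: CartItemDTO) -> Tuple[Optional[CartItem], List[str]]:
        errors: List[str] = []
        if not dto.sku:
            errors.append("sku is required")
        if dto.quantity is None or dto.quantity <= 0:
            errors.append("quantity must be greater than 0")
        if errors:
            return None, errors
        return CartItem(sku=dto.sku, quantity=dto.quantity), []

The illegal states stay representable on the DTO side (so you can report them), while the business logic only ever sees the validated type.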


That's not what "Making illegal states unrepresentable" means.

Let's say you're trying to parse a user object. Let's further say users always have to have a last name - a user without a last name would be an illegal state.

Now if you're parsing some data for a user that really doesn't have a last name for some reason, there are two approaches to this problem - either you return a User with a null last name (which is essentially giving incorrect information to the caller), or you make it impossible to set User.lastName to null (for example by making it an Optional) and fail with an error about what went wrong.

Of course you wouldn't NEED to restrict User.lastName to never be null - but if you always fail with an error in that case anyways, why not? That way any consumer of the User object knows that the lastName will always be there and valid.


But clearly, requiring a last name is wrong. If legitimate users can lack a last name, the system needs to work without last names: probably there should be a flexible "person's name" class that encapsulates first names, last names, titles etc. instead of attaching a raw last name to users.


... more like a "getPersonName()" method instead of a whole 'nother class, init?


The various parts of a person's name should be encapsulated in a specific class, and one of these objects, rather than a loose last name and other concrete fields, should be a mandatory attribute of a user object.

There should be only one place for name-handling logic, and since other people types besides users could appear in the domain model (commercial customer, social network "friend", relative, etc.) the user class isn't that place.


The reason to make them unrepresentable is so you will be able to catch the case where the input was illegal.

Otherwise you may never notice that illegal state was there in the first place.

I encountered many systems that would take invalid inputs and produce invalid (or sometimes even seemingly valid) results [1].

Illegal internal states are even harder to catch. When using FSM / StateCharts I usually automatically print error to log (and crash) for all illegal state/event combinations in the State Transition Table, instead of silently ignoring them.

[1] GIGO - Garbage In Garbage Out

https://en.wikipedia.org/wiki/Garbage_in,_garbage_out
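A minimal sketch of that idea (state and event names invented), where every state/event pair missing from the transition table logs and crashes instead of being silently ignored:

    import logging

    TRANSITIONS = {
        ("idle", "start"): "running",
        ("running", "pause"): "paused",
        ("paused", "start"): "running",
        ("running", "stop"): "idle",
    }

    def next_state(state: str, event: str) -> str:
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            logging.error("illegal state/event combination: %s + %s", state, event)
            raise  # crash loudly instead of swallowing the illegal combination

    print(next_state("idle", "start"))  # -> "running"
    # next_state("idle", "pause") would log an error and raise KeyError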


Is this not just as simple as recommending to use an input mask, or date picker or lookup list so it's impossible to end up with an invalid value in your system?

I always took it as such.


Who is Mathias Verraes?


* Disclaimer: Mathias is a personal friend of mine, and I was heavily involved in the European DDD community over a decade ago. *

Mathias is one of the original instigators of the European DDD community: he spent the last decade giving talks all over Europe about DDD and organizing one of the largest DDD conferences in the world: DDD Europe [0].

Next to this he also has a consulting org (recently rebranded to Aardling [1]), and spends a lot of time with DDD gurus like Eric Evans (the "inventor" of DDD, author of the DDD blue book), Alberto Brandolini ("inventor" of event storming), Nick Tune, Paul Rayner, Yves Reynhout, Cyrille Martraire, Romeu Moura, Marco Heimeshoff, ...

As far as I'm concerned, he's the real deal, and has skin in the game. Highly recommended!

[0] https://dddeurope.com/ [1] https://aardling.eu/


He has an "About" page and links to his LinkedIn profile on his website.


Mark my words, DDD without CQRS is a futile attempt.


DDD is a software engineering concept where, if you took

100 software engineers with a decade of experience,

then 30 of them would have no idea or would have understood it badly, and the rest would give you 70 different interpretations

funny anecdote: in my last semester of my engineering degree there was a presentation to give about some fancy topic. One person who had, I guess, 2 or 3 years of experience decided to pick DDD and thought he'd manage to put it together a day or a few before the presentation.

He failed kinda hard at understanding it.



