Implementing Domain-Driven Design Workshop In Colombia

Bogota

Here I am on a Friday evening in Bogotá, Colombia. This week I taught my Implementing Domain-Driven Design Workshop here. We had a nice group of students, and as usual the class had their eyes opened to what it means to implement with DDD. This was a slightly condensed Workshop, being only two days rather than the normal three. Still, we were able to cover all the material as well as many of the Workshop problems and coding exercises. It is a lot of material to cover in two days, and most student brains were filled to the brim by the time we finished. All of the students were quite pleased after experiencing several strategic and tactical “ah ha!” moments along the way.

Naturally I got some good feedback that I am putting into the slides and code exercises. It was suggested that I provide a basic set of components for both Java and .NET developers on which they can work their exercises. I think I will do this, but I don’t want the next classes to focus too heavily on architectural mechanisms; I’d rather they concentrate on the resulting domain model and its accompanying tests.

Colombian Startup Incubator and Col 3.0

After the class was completed I also participated in Colombian startup incubator initiatives. I performed assessments in both product and technical architecture, advised and mentored. For three of the top projects, we stepped through questions to identify their Core Domain (yes, DDD). I only used the term DDD with one of the teams, purposely avoiding it with the other two. I wanted the teams to focus only on the competitive advantages of their product offerings and not get distracted by the details of Domain-Driven Something-Or-Another.

It worked very well. One of the three teams was already quite advanced in their product vision, so there wasn’t much Core model design still missing. Yet, for the other two teams we dug out some missing Core Domain features. One of the incubator leads later told me that the third team was “ecstatic” with the results. They had been missing a really important part of the Core, one that will influence subscribers to invest their intellectual capital in the service and remain subscribers.

Colombia is on a mission to innovate en masse, which has opened a path for many startups. Some of the incubator programs will soon be leading their teams to the US to obtain first-round funding. Apparently much of the initial seed money comes from industries involved in natural resources (oil, minerals, etc.), and represents around 6% of all such revenues. I am rather certain that at “only 6%” it is no small sum. The funding, administered through the government, is not limited to startup seed capital. Actually, the Colombian government is setting up open office space along with high-end computing resources all over the country. All you have to do is show up and use the resources, which makes it very easy for Colombians to try their hand at innovation.

It is quite interesting to see this initiative, called Col 3.0, unfolding. As they say, “watch this space.”

This is not an entirely new approach to growth by innovation. For another perspective, see Singapore’s A*Star (Agency for Science, Technology, and Research).

Guaranteed Anemia with Dozer

Today I was inspired by Scott Hanselman to get my blogging act together. It’s been a while, maybe nine months or more. It’s way overdue.

I’ve been helping a colleague on a (currently confidential) project for the NYSE. The goal is to introduce Domain-Driven Design in some incremental steps. He posed a few questions today. The discussion starts with this background: “PROJECT_NAME has both service ‘domain’ objects and data ‘domain’ objects (mostly called the same names and they give it the domain namespace, not I). The transport mechanism delivers “service” objects from the UI and other services. When they are ready to be Hibernate persisted they get transformed via Dozer to a data ‘domain’ object. For instance, while some service is dealing with an order object, it is a service ‘order’ object. When the service throws it over the wall to the Repository, the service Dozers it to a data ‘order’ object and shovels it into the queue channel for delivery to the DAL.”

He continues: “If I understand you properly, as a rule of thumb for a rich domain model given this architectural world-view, we would generally replace setters on the ‘domain’ objects with business-type methods (what we’ve been doing for ages). But the Hibernated data ‘domain’ objects, well, I don’t see much need for them to have getters and setters either given that Hibernate no longer needs them. That is, given Dozer.”

In a nutshell, this is his question: “What is your world-view without Dozer? Just one domain model with Hibernate annotations also (given that the client is using annotations and not xml)? Or do you split the model as they do and just get-set? Or other?”

This is a pretty typical question among those unfamiliar with the DDD “world view.” So don’t feel shy about having similar quandaries. If you are familiar with DDD, what might your response be? Here’s what I had to say…

[Begin reply.]

You gave the right answer. It’s just dozing data around. So in the end, what really is the difference between the “service object” and the “domain object”? Probably not much. It’s just mapping attributes around. And Dozer covers over any business intent whatsoever. Everything is reduced to anemia. Like my Chapter 1 [of Implementing Domain-Driven Design] says, it’s “Anemia Everywhere.”

So two months from now, bring in someone who knows nothing about PROJECT_NAME and tell them to go explain to you what the business value is of a given use case. It will probably take them hours to explain it because they will have to start at the source of the data, in some other system, and trace it all around. In fact they may never be able to tell you exactly because there may be deep insight hidden in the data mappings that only a conversation with a specific group of desk traders could reveal.

However, if using DDD you would have one simple place in the model where you’d go and look at a single line of code and say: “Oh, that’s what’s going on here.” It’s because the model would capture exactly the intent — not of the developer — but of the specific group of desk traders who originally spec’d the system that they wanted.

That’s the difference. In the end you may choose not to use getters and/or setters of any kind, because as you state, Hibernate doesn’t need them.

If you really need to cover Chapter 1 again to understand that clearly, I highly recommend it; that’s where the real answer to your question is found. It’s not really Chapter 7 that reveals this. Domain Services are just one tactical tool to help you model in a specific situation. But it’s the Ubiquitous Language in an explicitly Bounded Context (an application or business service with a domain model) that addresses this.

[End reply.]

If my answer highlights the need to refresh your resolve to stick with the DDD Ubiquitous Language rather than technical solutions that usually lead to Anemia Everywhere, take a refresher here: Implementing Domain-Driven Design; Chapter 1, Getting Started with DDD.
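To make the contrast concrete, here is a minimal sketch in Java. These class names are hypothetical (this is not PROJECT_NAME code): the first two classes show the anemic twin-object style that a mapper like Dozer shuttles attributes between, while the third shows a domain object that captures business intent in a single intention-revealing method.

```java
// Anemic style: a "service" object and a "data" object, nearly identical,
// existing only so attributes can be mapped back and forth.
class ServiceOrder {
    private String status;
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}

class DataOrder {
    private String status;
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}

// Rich style: the business rule lives in one place, named in the
// Ubiquitous Language, instead of being implied by attribute shuffling.
class Order {
    private String status = "PLACED";

    public void cancelForCreditFailure() {
        if (!"PLACED".equals(status)) {
            throw new IllegalStateException("Only a placed order may be cancelled.");
        }
        status = "CANCELLED_CREDIT_FAILURE";
    }

    public String status() { return status; }
}
```

With the rich version, a newcomer reading `cancelForCreditFailure()` learns the business rule from one line of code, which is exactly the point made in the reply above.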

Protecting Aggregate Dependencies

On October 17, I presented on Part I of my DDD Effective Aggregate Design essay at the Denver-Boulder DDD Meetup. We had a good turnout of around 20 in attendance, and once again benefited from the use of Quark’s conference room.

Even after the essay and presentation, I am concerned that some are missing the vital message about true invariants. That is to be expected, since anyone who has designed aggregates has faced the challenge of grasping which business rules absolutely require transactional consistency. Still, a recent conversation indicates that modelers many times take the opposite direction from the one found in Part I of my essay, guarding various domain object life cycles by placing them inside an aggregate. True, sometimes doing this represents an actual business rule. In that case domain experts would explicitly insist that “such-and-such must not be removed from the system until thus-and-so is also removed.” That may speak to a true consistency rule. However, many times the management of a given entity life cycle isn’t a true business rule, and modeling with that in mind causes unnecessary aggregate bloat and its related negative consequences.

So, what are some effective ways of managing life cycles of separate aggregates without incorrectly modeling their boundaries? Consider these approaches:

1. Use code reviews. This may seem far too obvious to state, but when one domain object depends on the existence of others, watching out for untimely removal through peer review is one of the simplest and most effective first steps. Always look for code like this:

someRepository.remove(aggregateInstance);

When you find it, ask yourself if this is appropriate and permissible given that the possibility exists that other aggregate instances require its existence.

2. Implement repository remove() operations to refuse inappropriate removal. Using an example from my essay, consider the possible inappropriate removal of BacklogItems and Sprints. If a BacklogItem is committed to a Sprint, it must not be removed. Otherwise, the Sprint’s integrity would be compromised by it referencing a now non-existing, yet committed, BacklogItem. Too, if we removed a Sprint, any BacklogItems committed to it would then wrongly reference the now defunct Sprint. We could protect against the inappropriate removal of either of these aggregates without trying to place a convoluted consistency boundary around them. In a repository’s remove() operation implementation, we can check each aggregate instance for a state that would break dependencies if the operation’s normal completion was carried out. For example, the BacklogItemRepository would check whether each BacklogItem is committed to a Sprint. If so, the remove() operation would throw an IllegalStateException. The same goes for the Sprint. If its repository discovers that it has any BacklogItems committed, it would refuse to remove the Sprint instance, throwing an exception.
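The guard described above can be sketched in Java. The class and method names here are illustrative assumptions, not the actual repository implementation from my sample code; the point is simply that remove() checks the aggregate’s state before completing.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal aggregate sketch: a BacklogItem knows whether it is committed to a Sprint.
class BacklogItem {
    private final String id;
    private String committedSprintId; // null when not committed

    BacklogItem(String id) { this.id = id; }

    String id() { return id; }

    void commitTo(String sprintId) { this.committedSprintId = sprintId; }

    boolean isCommittedToSprint() { return committedSprintId != null; }
}

// The repository refuses to remove a committed BacklogItem.
class BacklogItemRepository {
    private final Map<String, BacklogItem> store = new HashMap<>();

    void add(BacklogItem item) { store.put(item.id(), item); }

    void remove(BacklogItem item) {
        if (item.isCommittedToSprint()) {
            throw new IllegalStateException(
                "BacklogItem " + item.id() + " is committed to a Sprint and must not be removed.");
        }
        store.remove(item.id());
    }

    boolean contains(String id) { return store.containsKey(id); }
}
```

A SprintRepository would apply the mirror-image check, refusing to remove a Sprint that still has BacklogItems committed to it.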

3. Never perform physical aggregate instance removal. Under some circumstances, even the above two measures could be insufficient. Or even if removal was appropriate at one time, new dependencies could develop over time as new requirements are established. Of course, if the new requirements call for a true dependency invariant, we’d move an existing aggregate inside the boundary of another, forming a new aggregate definition. However, if it’s not a true invariant, consider using a status to reflect the current logical state of a removed instance: REMOVED. That way domain aggregates never completely disappear from the system, they just reach a state where they cannot be consumed using repository queries. If need be, you could change the status back to EXISTING in order to avert disaster. Or even if the instance remains in the REMOVED state, you can still view the instance to see when, why, and by whom it was removed, and even analyze its attributes.

This is, in fact, how BacklogItems are modeled in my sample core domain. We actually never want to delete any BacklogItem from the persistence store, but we can transition it to a state that causes it to be filtered out of normal view queries.
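A logical-removal sketch in Java might look like the following. The names are illustrative, not taken from my sample code; the essential idea is that remove() becomes a state transition and normal queries filter on that state.

```java
import java.util.ArrayList;
import java.util.List;

// An aggregate is never physically deleted; it transitions to REMOVED
// and is filtered out of normal view queries.
enum LifecycleStatus { EXISTING, REMOVED }

class BacklogItem {
    private final String id;
    private LifecycleStatus status = LifecycleStatus.EXISTING;

    BacklogItem(String id) { this.id = id; }

    String id() { return id; }
    LifecycleStatus status() { return status; }

    void markRemoved() { this.status = LifecycleStatus.REMOVED; }
    void restore() { this.status = LifecycleStatus.EXISTING; } // avert disaster if needed
}

class BacklogItemRepository {
    private final List<BacklogItem> store = new ArrayList<>();

    void add(BacklogItem item) { store.add(item); }

    // "Removal" is a state transition, not a physical delete.
    void remove(BacklogItem item) { item.markRemoved(); }

    // Normal view queries see only logically existing instances.
    List<BacklogItem> allExisting() {
        List<BacklogItem> result = new ArrayList<>();
        for (BacklogItem item : store) {
            if (item.status() == LifecycleStatus.EXISTING) result.add(item);
        }
        return result;
    }
}
```

The removed instance stays in the store, so its attributes remain available for inspection, and restore() can reverse a removal that turns out to be wrong.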

4. Similar to #3, if you are using an event store you can establish that an aggregate instance has been removed, yet never lose the history of state transitions for the formerly existing domain object. As an added advantage, the event store allows you to reconstruct the state of the aggregate instance at any point in time.
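A tiny event-sourcing sketch illustrates #4. The event and class names are hypothetical assumptions for illustration: removal is recorded as just another event in the stream, so the full history survives and any earlier state can be rebuilt by replaying a prefix of the stream.

```java
import java.util.ArrayList;
import java.util.List;

// Events for a single aggregate's stream; "removal" is just one more event.
abstract class DomainEvent {}
class Created extends DomainEvent {}
class Renamed extends DomainEvent { final String name; Renamed(String n) { name = n; } }
class Removed extends DomainEvent { final String reason; Removed(String r) { reason = r; } }

class EventStore {
    private final List<DomainEvent> stream = new ArrayList<>();
    void append(DomainEvent e) { stream.add(e); }
    List<DomainEvent> streamUpTo(int version) { return stream.subList(0, version); }
    List<DomainEvent> fullStream() { return stream; }
}

class BacklogItemState {
    String name;
    boolean removed;

    // Reconstruct the aggregate's state at any point by replaying events.
    static BacklogItemState replay(List<DomainEvent> events) {
        BacklogItemState state = new BacklogItemState();
        for (DomainEvent e : events) {
            if (e instanceof Renamed) state.name = ((Renamed) e).name;
            else if (e instanceof Removed) state.removed = true;
        }
        return state;
    }
}
```

Replaying the full stream yields the current (removed) state, while replaying only the first few events recovers the state as it was before removal.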

While this list may not be exhaustive, it does show that there are alternatives to modeling chunky aggregates to preserve dependencies when they don’t reflect true business rules. Yet, the real challenge is in recognizing when a true invariant exists and when it does not. Distinguishing the difference will allow us to design small aggregates that reflect true business rules, and at the same time keep our systems performing and scaling optimally.

DDD Community Essay: Effective Aggregate Design

My essay on Effective Aggregate Design was released late on October 2. In just two days there have been nearly 500 views/downloads, a clear indication of the challenges many face when designing aggregates using this DDD tactical pattern.

Here’s a brief overview of the three-part series:

Part I considers the modeling of an aggregate.

Part II will look at the model and design issues of how different aggregates relate to each other (available in November).

Part III will discuss the discovery process: how to recognize when a design problem is a hint of a new insight, and how different aggregate models are tried and then superseded (available in December).

Part I has already received very positive reviews by the DDD community, such as this tweet:

weakreference: Great essay on aggregates and aggregate modeling by @VaughnVernon http://t.co/IXmiOTkw Looking forward to part 2!

DDD Meetup: The Role of Grid Computing and DDD

Join us at the Rooster and Moon to discuss the role of grid computing and DDD, and common tactical patterns.  The Denver-Boulder Domain-Driven Design Meetup Group meets monthly or semi-monthly.

Follow-up Review: As always Randy Stafford provided great information on the roots and progression of grid computing, as well as contemporary use. He introduced the group to using DDD with Oracle’s Coherence.

I will post our next meetup date and location soon.

DDD Meetup: Strategic Design with Bounded Contexts and Context Mapping

Tonight Paul Rayner presented on some of DDD’s essential strategic design elements, Bounded Contexts and Context Mapping. This was also the topic of my REST with DDD QCon presentation back in November 2010. Paul is one of Eric’s DDD Immersion instructors, and so he carries a well-rounded understanding into his material. It’s great to have Paul and Randy Stafford as co-organizers with me.

Paul’s presentation led into a good discussion on the practical use of Context Maps in various projects. As a result, I offered an example enterprise that faces interesting modeling challenges. I used a publishing organization that must deal with the various stages of the life cycle of books. Roughly speaking, publishers deal with stages similar to these:

  • Conceptualizing and proposing a book
  • Contracting with authors
  • Managing the book’s authorship and editorial process
  • Designing the book layout, including graphic illustrations
  • Translating the book to other human languages
  • Producing the physical print and/or electronic editions
  • Marketing the book
  • Selling the book to resellers and/or directly to consumers
  • Shipping a physical book to consumers

I discuss this in Chapter 2 of my upcoming DDD book. The following is an excerpt.

Throughout each of these stages, is there one single way to properly model a Book? Absolutely not. At each of these stages the Book has different definitions. It is not until contract that the Book has a tentative title, which might change during editorial. During authorship and editorial the Book has a collection of drafts with comments and corrections, along with a final draft. Graphic design creates page layouts. Production takes the layouts and uses them to create press images, “blue lines,” and finally plates. Marketing doesn’t need any of the editorial or production artifacts. And to shipping, the Book might carry only an identity, inventory location, availability count, a size, and a weight.

What would happen if you tried to design a central model for Books that facilitated all the stages in its life cycle? There would be a high degree of confusion, disagreement, contention, and little deliverable software. Even if a correct common model could be delivered from time to time, it would likely meet the needs of all clients only occasionally and far too briefly.

To counter that kind of undesirable churn and burn, such a publisher should use separate bounded contexts for each of the above life cycle stages. In each of the unique bounded contexts, one domain object in each could be named Book. While the various Book objects would share identity across all or most of the contexts, most Book compositions would be vastly different from each of the others. That’s fine, and in fact the way it should be. When the team of a given bounded context speaks about a Book, it means exactly what they require for their context. The organization embraces the natural need for differences. This is not to say that such positive outcomes are trivial to achieve. Nonetheless, using explicit bounded contexts, software gets delivered regularly with incremental improvements that address the specific needs of the business.

This excerpt doesn’t get into modeling the full solution, but it did provide a good way to discuss what each of the models might be like. Whenever you face a similar challenge in your business, consider using this example, especially if you aren’t in book publishing. It will allow participants to take a step back and consider things from a different perspective. It sure enabled a lot of good discussion for our meetup group, and many got over the hurdles they faced in understanding the need for using separate bounded contexts in every DDD project.
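A minimal sketch of two of those Bounded Contexts might look like the following in Java. These class names, fields, and contexts are assumptions chosen for illustration: the two Book types share identity (here, an ISBN) but compose entirely different state, exactly as the excerpt describes.

```java
// In a hypothetical Shipping Context, a Book is little more than a package.
class ShippingBook {
    final String isbn;              // shared identity across contexts
    final int availabilityCount;
    final double weightKilograms;

    ShippingBook(String isbn, int availabilityCount, double weightKilograms) {
        this.isbn = isbn;
        this.availabilityCount = availabilityCount;
        this.weightKilograms = weightKilograms;
    }
}

// In a hypothetical Editorial Context, the same Book is a set of drafts.
class EditorialBook {
    final String isbn;              // same identity, very different composition
    String tentativeTitle;          // may change during editorial
    final java.util.List<String> drafts = new java.util.ArrayList<>();

    EditorialBook(String isbn, String tentativeTitle) {
        this.isbn = isbn;
        this.tentativeTitle = tentativeTitle;
    }

    void addDraft(String draftText) { drafts.add(draftText); }
}
```

Neither class needs fields the other context cares about, and each team’s Ubiquitous Language can use the word “Book” with full precision inside its own boundary.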

First Denver-Boulder DDD Meetup

At the first ever Denver-Boulder DDD Meetup we discussed the business value of using DDD. For the discussion I put together the following topics. These are elaborated on in my upcoming book on successfully implementing domain-driven design:

Think Ubiquitous Language In a Context…

Organization gains a useful model of specific business domain

  • Investment in what matters most
  • Distinguishes the business; competitive advantage; mission well understood

A refined, precise definition and understanding of domain is developed

  • Business better understands itself
    • NOTE: One attendee said that his company has actually created sales materials from the lessons learned in domain modeling. This was not a coincidence. The interaction of domain experts with developers really yielded deeper understanding and market value.
  • Model refinements and deep understanding serve as an analysis tool

Domain experts contribute to software design

  • Grow a deeper understanding of the core business
  • Gain consensus among domain experts
  • Developers share common language as a unified team with domain experts
    • Knowledge transfer from domain experts to developers
    • Training and hand-off is easier
      • Domain experts “know the model”
    • Express goal to adhere to language of domain

Stronger support for task-based user interface (inductive)

  • Domain-driven is baked in, influencing use of software
  • Reduced training overhead

Think Development In a Context…

Clean boundaries around pure models

  • Emphasizes modeling the domain in context instead of solving other unrelated problems
  • Maintenance

Architecture is better organized

  • Reasons for integration are explicit
  • Understand the domains of the architecture
  • Reasons to use architectural styles are driven by need (i.e., just enough architecture)

Agile, iterative, evolutionary modeling

  • Refinements only cease at product “sunset”

Patterns that facilitate precise designs using correct modeling “tools”

  • Bounded Context
    • Aggregate, Entity, Value Object, Service, Event
  • Context Maps

Presenting at QCon San Francisco 2010

Here’s a link to the presentation I gave at QCon San Francisco in November of 2010.

RESTful SOA or Domain-Driven Design–A Compromise?

The presentation shows how Context Mapping allows us to use SOA and DDD together, getting the best from each. Eric Evans also posted the presentation on the dddcommunity.org site under Strategic Design.

I had a blast in San Francisco. I was there when the San Francisco Giants won the 2010 World Series, their first championship since the team moved to San Francisco in 1958.