Introduction

In the previous unit, we:

  • Prototyped the end-user interface to the stock trader system
  • Built a frontend API for the browser to provide the required interactions
  • Introduced the Backend for Frontend (BFF) architectural pattern that we’ll use to provide that browser-facing API

In this unit, we’ll start development of a reactive microservice for our stock trader system that will integrate with our BFF, translating our domain model into Java code from the ground up.

Lagom will be our reactive microservice framework of choice. Lagom provides facilities for implementing reactive microservices and is built on top of some very powerful reactive technologies such as Akka, which makes it highly complementary with the Play framework that we used for our BFF. The combination of Play and Lagom, both underpinned by Akka, streamlines the creation of highly scalable and robust microservice systems.

Domain activities

Thus far, our hypothetical stock trader development team has participated in an event storming process, and then elaborated on those understandings by fleshing out a domain model and UI prototype for the system to be built. Now it is time for them to start putting down some code.

Recall that we identified three subdomains in the stock trader business:

  • Portfolios
  • Brokerage
  • Wire transfers

Assuming our development team will be split into three project teams (one for each of these subdomains), we’ll need to establish the APIs that each team will be providing so that each team knows how services will interact with each other.

In domain-driven design, there are three key domain activities: commands, queries, and events. In the previous units, we’ve formulated our stock trader domain interactions using these three activities. Now we’re going to translate each of those into Lagom code.

Commands

It feels natural to start at the beginning with the first activity that a new user will likely perform, creating a new portfolio for themselves. In our domain model, there is a command to open a new portfolio. This command applies to the portfolio bounded context, so the portfolio service team will be responsible for defining and implementing that command.

OpenPortfolio command

In our event storming process, we didn’t get into a lot of detail about any data that might be attached to the openPortfolio command. We also didn’t consider what feedback might be provided upon issuing that command; we often work on the basis that a command can be accepted or rejected and leave it at that. In order to code this command up, we will need to sort these details out.

Fortunately, the UI prototype that we completed in Unit 2 gives us a hint about what data might need to be provided in order to open a new portfolio: specifically, our UI requires us to provide a name when we create a portfolio. Of course, in reality, much more information would be required to open an investment account. But for our purposes, we’ll keep it simple and require just a name for the portfolio.

We’ll need a data transfer object (DTO) to carry the data we’re sending with the openPortfolio command, so let’s write our first piece of code:

@Value
public class OpenPortfolioDetails {
    @NonNull String name;
}

The @Value and @NonNull annotations are provided by Lombok as a convenience for creating immutable data objects, which we will use frequently. In this case:

  • the @Value annotation makes name private and final, and provides a getter for it
  • @NonNull ensures that a value is set for name when we create a new instance
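For readers new to Lombok, the annotated class above roughly expands to the hand-written equivalent below. This is a sketch of the generated code, not Lombok’s exact output (which also includes a toString, among other details):

```java
import java.util.Objects;

// Roughly what Lombok generates for OpenPortfolioDetails: a final class
// with a private final field, a null check on construction, a getter,
// and equals/hashCode based on value equality.
final class OpenPortfolioDetails {
    private final String name;

    public OpenPortfolioDetails(String name) {
        this.name = Objects.requireNonNull(name, "name is marked non-null but is null");
    }

    public String getName() {
        return name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof OpenPortfolioDetails)) return false;
        return name.equals(((OpenPortfolioDetails) o).name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name);
    }
}
```

Two instances with the same name are equal, which is exactly the behavior we want for data transfer objects.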

Now that we know our openPortfolio command will require OpenPortfolioDetails to be sent with it, we need to think about the response type.

Once we’ve created a new portfolio, we’ll need a way to be able to refer to it, so it will be convenient if we provide an ID for the new portfolio that we can use to refer to it in the future. Our type will be ServiceCall<OpenPortfolioDetails, PortfolioId>, which specifies OpenPortfolioDetails in and PortfolioId out.

Now we can create the service interface for our portfolio service and add the openPortfolio command to it with the type signature that we’ve defined:

public interface PortfolioService extends Service {
  //...
  ServiceCall<OpenPortfolioDetails, PortfolioId> openPortfolio();
  //...
}

Let’s dive a little deeper into two important concepts in Lagom: ServiceCall and Service.

Defining services in Lagom

In Lagom, we define a service interface by extending Service, and we create ServiceCall methods within it. Under the hood, a ServiceCall translates into an ordinary HTTP endpoint using JSON requests and responses.

The ServiceCall interface is parameterized by two types:

  • The type required for the request body
  • The type of the response returned by the service call

To specify how to send a command to our service, we’ll need to define:

  • The name of the command (the service call method name)
  • The data passed with the command (the first type parameter for the ServiceCall interface)
  • The data returned in response to submitting the command (the second type parameter for the ServiceCall interface)

As we work through how to define the details of our command, we’ll need to pull together our learnings from event storming and prototyping to arrive at the correct specification.

Note: In a scenario in which UI changes are not involved, you’ll need to be more rigorous during event storming about specifying data models. Event storming is a very flexible process that can (and must) be tailored to take into account the type of project you are delivering, whether that be a complete web application, including user interfaces, or a manual business process flow with very little software involved.

PlaceOrder command

In DDD, we usually don’t consider commands to have a response. Rather, we think about a command as being either rejected or acknowledged. This is important to keep in mind as we work through the placeOrder command.

Some actions may take a significant amount of time to fully resolve. In these situations, we don’t want to wait for the activity to complete before providing any feedback to the caller. Rather, it’s much preferable to quickly acknowledge the command in an immediate response. This leaves it up to the caller to subscribe to further notifications (events) that will provide additional details once the command has fully completed. This type of integration pattern is called publish/subscribe and is a cornerstone of reactive systems development.
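The essence of this pattern can be sketched in a few lines of plain Java (the class and method names here are illustrative, not part of the Lagom API): the command handler acknowledges immediately with an ID, and interested subscribers receive an event later, once the work completes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import java.util.function.Consumer;

// Toy sketch of acknowledge-now, notify-later (not Lagom API).
class OrderFlowSketch {
    static class OrderPlacedEvent {
        final String orderId;
        OrderPlacedEvent(String orderId) { this.orderId = orderId; }
    }

    private final List<Consumer<OrderPlacedEvent>> subscribers = new ArrayList<>();

    // The "subscribe" half: register interest in order events.
    void subscribe(Consumer<OrderPlacedEvent> subscriber) {
        subscribers.add(subscriber);
    }

    // The command is acknowledged right away with an ID; the real work
    // (for example, finding a buyer or seller) happens later.
    String placeOrder() {
        return UUID.randomUUID().toString(); // immediate acknowledgement
    }

    // The "publish" half: called later, when the order is resolved.
    void publishOrderPlaced(String orderId) {
        OrderPlacedEvent event = new OrderPlacedEvent(orderId);
        subscribers.forEach(s -> s.accept(event));
    }
}
```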

In the diagram below, we have three main flows:

  1. A multipart form is submitted to the BFF, which triggers a command (for example, “place order”)
  2. An entity is created with a unique ID, and the ID is returned as part of the original response to the client
  3. The BFF subscribes to domain events of interest and, at some point when/if an event is published about the entity, the BFF pushes the event to the UI (for example, over a WebSocket channel)

[Diagram: the three flows described above — command submission to the BFF, immediate response with an ID, and event push from the BFF to the UI]

A more concrete example of this pattern is placing an order to trade stock. It may take some time before a buyer or seller can be found to make a trade, and certain types of conditional orders may take hours, days, or even months before being resolved.

For example, a limit order will not trigger until the share price is above/below a certain point. In the case of a limit order with a non-deterministic completion time, we need to provide a prompt response to acknowledge that the limit order has been accepted and then an event when (and if) the limit order has been fulfilled. Another event type will also cover a scenario in which the limit order expires, is cancelled, or not fulfilled because of any other reason.
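As a concrete illustration of limit-order semantics, here is a small helper (hypothetical, not part of the stock trader codebase) expressing when a limit order triggers on each side of a trade:

```java
import java.math.BigDecimal;

// Illustrative rules for when a limit order should trigger.
class LimitRules {
    // A buy limit triggers when the market price is at or below the limit
    // (the limit is the maximum price the buyer is willing to pay).
    static boolean buyTriggers(BigDecimal price, BigDecimal limit) {
        return price.compareTo(limit) <= 0;
    }

    // A sell limit triggers when the market price is at or above the limit
    // (the limit is the minimum price the seller is willing to accept).
    static boolean sellTriggers(BigDecimal price, BigDecimal limit) {
        return price.compareTo(limit) >= 0;
    }
}
```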

Let’s look at how we’ve defined the placeOrder service call in our portfolio service:

ServiceCall<OrderDetails, OrderId> placeOrder(PortfolioId portfolioId);

The placeOrder command will reply with an OrderId once the order has been accepted so that we can reference the order.

To invoke the placeOrder service call, we need to provide OrderDetails. This is another DTO that we’ll examine in a moment. Also notice that the placeOrder method takes the portfolio ID as a parameter.

Let’s now look at the OrderDetails request type:

  @Value
  public class OrderDetails {
      @NonNull String symbol;
      int shares;
      @NonNull TradeType tradeType;
      @NonNull OrderType orderType;
  }

When we send a request to place an order, we need to indicate whether we want to buy or sell. We will specify this by the TradeType. We also require a valid stock symbol, the number of shares we want to trade, and the type of order we’d like to place (for example, market, limit, and so on).

The TradeType is a simple Java enum to determine whether we want to buy or sell shares:

  public enum TradeType {
    BUY,
    SELL
  }

The OrderType is a little more involved due to the different types of orders we will support. We can define a ‘market’ order or a ‘limit’ order. To model this, we need an OrderType that is either:

  • a market order,
  • or a limit order with a monetary limit value (the maximum price for a buy, or the minimum price for a sell order).

If we add new types of orders in the future, they may require additional properties (for example, a stop order will require two monetary values: one for the limit price and one for the stop price).

To model this type of data, we can use an algebraic data type (ADT).

The algebraic data type pattern

An algebraic data type (ADT) is a type that consists of a set of variants, or cases. It is similar to a Java enum, however, the variants are allowed to have different fields and different methods. In fact, in some languages in which ADTs are supported directly, they are called enumerated types.

The important properties we want to support for an ADT are:

  • The variants can have different numbers and types of properties associated with them (a market order has no additional information, whereas a limit has a limit value, and a stop-limit will have two values).
  • At compile time, we can check that all variants are handled.

Algebraic data types are not directly supported by the Java language, and the essential information here is obfuscated behind some fairly verbose Java boilerplate. It may be helpful to first see how we’d represent our OrderType in a language that more directly supports ADTs. For example in Rust, we would write:

enum OrderType {
  Market,
  Limit(BigDecimal),
}

While (the current version of) Scala doesn’t provide ADTs as a language concept, it requires only a very modest amount of boilerplate to implement an ADT. Here’s what our OrderType would look like in Scala, which gives us some hints as to how we can represent an ADT in Java:

sealed abstract class OrderType
case object Market extends OrderType
case class Limit(priceLimit: BigDecimal) extends OrderType

In both Scala and Rust, the language provides compiler-checked exhaustive pattern matching. This pattern matching looks and operates very similarly to a Java switch on an enum.

For example, here’s an example of pattern matching in Scala that determines whether a buy order should trigger based on its OrderType:

order match {
  case Market => true
  case Limit(priceLimit) => price <= priceLimit
}

In Java, we can encode ADTs using a combination of two well-known design patterns:

  • The Visitor pattern, which gives us a compile-time check that every variant is handled
  • The Sealed Class pattern, which ensures that no new variants can be defined outside the type

Here is our OrderType with Market and Limit cases in Java:

public abstract class OrderType {
    private OrderType() {}

    @Value
    @EqualsAndHashCode(callSuper = false)
    public static class Market extends OrderType {
        private Market() {}
        public static final Market INSTANCE = new Market();

        public <T> T visit(Visitor<T> visitor) {
            return visitor.visit(this);
        }
    }

    @Value
    @EqualsAndHashCode(callSuper = false)
    public static class Limit extends OrderType {
        @NonNull BigDecimal limitPrice;

        public <T> T visit(Visitor<T> visitor) {
            return visitor.visit(this);
        }
    }

    public interface Visitor<T> {
        T visit(Market m);
        T visit(Limit l);
    }

    public abstract <T> T visit(Visitor<T> visitor);
}

The Visitor interface ensures that, when we handle an OrderType, we implement handling for all the variants (provided we use the visitor to do so).

The Sealed Class pattern uses an abstract class with a private constructor to ensure that the only concrete subclasses of OrderType that can be constructed are inner/nested classes. This makes it much easier to ensure that our visitor interface covers all the possible subclasses of our OrderType.

Using this implementation, we can define logic to determine whether a buy order should result in a trade based on the current market price. This visitor code is equivalent to the Scala pattern matching example that we demonstrated previously:

orderType.visit(new OrderType.Visitor<Boolean>() {
    @Override
    public Boolean visit(OrderType.Market m) {
        return true;
    }

    @Override
    public Boolean visit(OrderType.Limit limitOrder) {
        return price.compareTo(limitOrder.getLimitPrice()) <= 0;
    }
});

By using a visitor in this way, rather than an if-instanceof-else chain, we get a compile-time check that our visitor covers all the cases. This is particularly helpful when the time comes to add a new variant, or when we wish to refactor our code. With a little bit of creativity, there’s no reason that we can’t fully leverage Java’s type system and let the compiler provide us with extra safety, rather than relying on exhaustive testing alone.

For a service with complex data types that would be well represented by ADTs, it may be worth exploring a Scala implementation rather than Java, to avoid verbosity like the Visitor pattern. Each of the tools in the Reactive Platform (Play Framework, Akka, and Lagom) provides both a Java and a Scala API for you to choose from. In the future, there may be support for pattern matching in Java. An advantage of adopting a microservices architecture is that you can mix and match languages based on the requirements of each bounded context. This is especially important in large enterprise domains with tens, hundreds, or thousands of developers and services!

Now that we’ve seen how commands in our domain are translated into service calls, the next type of domain activity we’ll examine are queries.

Queries

In our domain model, we can query a portfolio to obtain information about its current cash and stock holdings. Let’s translate that into our portfolio service code. In Lagom, a query is typically implemented as a service call. In our portfolio service, we defined this query as:

ServiceCall<NotUsed, PortfolioView> getPortfolio(PortfolioId portfolioId);

The service call takes the ID of the portfolio we want to query as a method parameter. The request body type is NotUsed, an empty singleton class. By convention, NotUsed indicates (no pun intended) that the request body is not used, so we don’t need to supply one; the portfolio ID is the only information this query needs.

There is nothing that specifically identifies a service call as corresponding to a query, as opposed to a command. We should use naming conventions to help distinguish commands from queries. When naming commands and queries, keep in mind that queries are always synchronous calls and expected to respond immediately with the results of a query, whereas commands can be either synchronous or asynchronous.

Our getPortfolio call will respond with a PortfolioView:

@Value
public class PortfolioView {
    @NonNull PortfolioId portfolioId;
    @NonNull String name;
    @NonNull BigDecimal funds;
    @NonNull PSequence<Holding> holdings;
}

We’ve used the PSequence collection type from the PCollections library. It is sufficient to think of a PSequence as an immutable version of the standard List collection. Later in this unit, we’ll discuss our use of immutable data in our services.

The Holding class is simply a stock symbol and a share count:

@Value
public class Holding {
    @NonNull String symbol;
    int shareCount;
}

With this amount of detail, we will be able to understand the value of our portfolio by combining the cash on hand with the current value of our holdings, computed from each symbol in our portfolio and the number of shares we own. Rather than perform a trip to IEX to get current prices for each symbol, we’ll leave that up to the consumer of this query. If we were to include a current price per symbol within the view, we would need to integrate with IEX within this synchronous operation. This would require us to safeguard query responses in the event of latency or failure with IEX.
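To make the consumer’s job concrete, here is a sketch of how a caller might value a portfolio once it has fetched current prices itself. The Holding shape mirrors the DTO above; the price lookup function stands in for a call to IEX and is purely illustrative:

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.function.Function;

// Sketch: valuing a portfolio on the consumer side, given prices
// fetched separately (for example, from IEX).
class PortfolioValuation {
    static class Holding {
        final String symbol;
        final int shareCount;
        Holding(String symbol, int shareCount) {
            this.symbol = symbol;
            this.shareCount = shareCount;
        }
    }

    // Total value = cash on hand + sum over holdings of shareCount * price.
    static BigDecimal totalValue(BigDecimal funds,
                                 List<Holding> holdings,
                                 Function<String, BigDecimal> priceFor) {
        BigDecimal total = funds;
        for (Holding h : holdings) {
            total = total.add(priceFor.apply(h.symbol)
                    .multiply(BigDecimal.valueOf(h.shareCount)));
        }
        return total;
    }
}
```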

Now that we’ve covered commands and queries, the final type of domain activity that we need to address are events.

Domain events

Services can interact with each other by issuing commands to each other, but they can also interact by publishing and responding to events. The Lagom framework provides a message broker API for publishing and subscribing to events.

By default, the message broker API is backed by Kafka to provide durable, distributed, fault-tolerant message handling. Within the message broker API (and within Kafka), a topic is the main way to organize different types of messages.

Our portfolio service will provide a topic about order placed events:

Topic<OrderPlaced> orderPlaced();

Any other services or systems that are interested in knowing about orders being placed in the system can subscribe to this topic and receive OrderPlaced messages, which look like the following:

@Value
public class OrderPlaced {
    @NonNull PortfolioId portfolioId;
    @NonNull OrderId orderId;
    @NonNull OrderDetails orderDetails;
}

@Value
public class OrderDetails {
    @NonNull String symbol;
    int shares;
    @NonNull TradeType tradeType;
    @NonNull OrderType orderType;
}

The orderPlaced topic will be of particular interest to the team that is responsible for the broker service, as one of their primary responsibilities is to convert orders into trades. The broker service will subscribe to the orderPlaced topic to pick up orders it needs to process. Once the broker service has resolved an order, it will publish to its own orderResult topic:

public interface BrokerService extends Service {
  //...
  Topic<OrderResult> orderResult();
  //...
}

The portfolio service will subscribe to this orderResult topic to get updates on the orders it has placed. Subscribers to the orderResult topic will get OrderResult messages, which will notify them of successful and failed orders:

public abstract class OrderResult {
    private OrderResult() {}

    public abstract PortfolioId getPortfolioId();
    public abstract OrderId getOrderId();

    @Value
    @EqualsAndHashCode(callSuper = false)
    public static class Fulfilled extends OrderResult {
        @NonNull PortfolioId portfolioId;
        @NonNull OrderId orderId;
        @NonNull Trade trade;
    }

    @Value
    @EqualsAndHashCode(callSuper = false)
    public static class Failed extends OrderResult {
        @NonNull PortfolioId portfolioId;
        @NonNull OrderId orderId;
    }

    // some annotations, visitor pattern code, etc.
}

@Value
public class Trade {
    @NonNull String symbol;
    int shares;
    @NonNull TradeType tradeType;
    @NonNull BigDecimal sharePrice;
}

Let’s look at an end-to-end process flow for a sell order:

  1. User: Interacts with UI to sell some shares of a stock.
  2. Browser: Makes call to BFF to place the order.
  3. BFF: Invokes the placeOrder service call on the portfolio service.
  4. Portfolio service: Confirms availability of shares to sell.
  5. Portfolio service: Publishes OrderPlaced event in its orderPlaced topic.
  6. Portfolio service: Sends back a response containing the order ID to the BFF (which is propagated back to the browser to complete that web request).
  7. Broker service: Observes OrderPlaced event.
  8. Broker service: Finds buyer for shares.
  9. Broker service: Publishes OrderResult.Fulfilled event, including the trade price, in orderResult topic.
  10. Portfolio: Updates the funds held, based on the sale price.
  11. BFF: Observes OrderResult.Fulfilled event in orderResult, propagates it (using a WebSocket) back to the browser to update the UI.

We can represent this interaction using a UML sequence diagram, as follows:

[UML sequence diagram: the asynchronous sell-order flow above, with events denoted in blue]

Although we used a sequence diagram, this process is completely asynchronous. In the diagram, we’ve denoted the events in blue. The subscription to those event topics is an important source of asynchrony in our system.

Asynchronous commands and service calls

There is a useful distinction to observe between our openPortfolio call and our placeOrder call. Our openPortfolio is a synchronous call, which means that once we get a response, the service has completed all the associated work for that call.

If we look at a diagram of the openPortfolio call, it would look like this:

[Diagram: the synchronous openPortfolio request/response between the browser, BFF, and portfolio service]

The synchronous interaction is fully complete once the response is sent to the BFF (and likewise to the browser).

In contrast, the placeOrder call is asynchronous. When we get a response from placeOrder containing the ID for the order, it is only an indication that the order has been accepted. It does not mean that the order has been completed with the purchase or sale of some shares.

We will cover asynchrony in much greater detail in subsequent units of this series.

The decision as to whether to use a synchronous or asynchronous interaction is not purely a technical one. Let’s take a moment to consider the business implications of this choice.

Consider the impacts if the broker service is temporarily unavailable when the portfolio receives the placeOrder command. If we use a synchronous interaction between the portfolio and the broker, the portfolio will not be able to communicate with the broker, and the order placement process will fail. This will force us to emit a system error to the user, which is against the core principle of resilience in the reactive manifesto. However, if we use an asynchronous interaction (as we’ve described above), the order can be accepted even during an outage, but it will not be processed until the broker is available to read the OrderPlaced events from the topic.

Summary

This covers the basics of our data types, along with a detailed design for the process and interactions involved in placing an order. This introduces us to the concept of asynchrony, which we’ll cover in much more depth in the next unit.

In order to complete a comprehensive overview of the way we’ll handle data in Reactive Stock Trader, let’s finish this unit by covering the concept of immutable data and why it’s so important in a reactive system.

Immutable data with Lombok and PCollections

Our services will make use of a lot of data objects. These objects may represent commands, queries, events, or data associated with any one of these. Because we will be passing these data objects between different execution contexts and across the network to different server instances, it will be much simpler to deal with immutable data. Using immutable data significantly reduces the possibility of data races within the highly asynchronous execution contexts we’ll use. Immutability adds a small amount of complexity to any distributed system, but it is well worth it in order to maximize the use of computing resources by distributing our workload across a broad set of resources: threads, CPUs, nodes, and clusters.

Lombok provides annotations to simplify the process of creating immutable objects in Java. In particular, we’ll make extensive use of the @Value annotation, as we’ve already seen used in the sample code in this unit.

When added to a class, the @Value annotation:

  • Ensures the class is final
  • Ensures that all fields are private and final
  • Generates getters for all fields
  • Generates equals and hashCode implementations based on value equality (two instances are equal if and only if they have the same values for all fields)

When we need collections we’ll use the PCollections library to provide a set of persistent (immutable) data structures as an alternative to the standard Java collections. The PCollections library provides immutable collections that support efficient updates of the data. In this context, updating a persistent collection creates a new collection, but may reuse objects from the old collection for efficiency. Because the collections are immutable, it is safe to share objects between the original and the updated collection.
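The idea behind persistent collections can be illustrated with a toy immutable list (a sketch only; PCollections’ real data structures are considerably more sophisticated): an “update” returns a new collection that safely shares the old collection’s structure.

```java
// Toy persistent (immutable) singly linked list. Prepending creates a
// new list node but reuses the entire existing list as its tail, so the
// original list is untouched and can be shared safely between threads.
class PList<T> {
    final T head;        // first element (null for the empty list)
    final PList<T> tail; // rest of the list (shared, never copied)
    final int size;

    private PList(T head, PList<T> tail, int size) {
        this.head = head;
        this.tail = tail;
        this.size = size;
    }

    static <T> PList<T> empty() {
        return new PList<>(null, null, 0);
    }

    // "Update" by returning a new list; this list is unchanged.
    PList<T> plus(T value) {
        return new PList<>(value, this, size + 1);
    }
}
```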

See the Lagom documentation for more information on the use of PCollections with Lagom.

Note: While it is not absolutely required to use immutable data to build Lagom services, we strongly advise it. We will make significant use of concurrent execution with CompletionStage, so the risk of subtle bugs introduced through concurrent access to mutable data is non-trivial.

Summary & next steps

In this unit, we’ve covered the basics of the process flow that we wish to implement, along with the data types involved, implemented in pure Java and enhanced with Lombok and PCollections.

In the next unit, we’ll significantly expand on the concepts of asynchrony, explaining what an asynchronous boundary is and why it’s of critical importance to master before designing distributed systems. Once we expand on the theory of asynchrony, we’ll be in a great position to continue implementing our services using Lagom.
