What is the correct granularity for events in the context of designing a rule-based decision system?

by IoChaos   Last Updated September 03, 2018 11:05 AM

Introduction

We need to design a system that, given a set of events happening in a source application, reacts to them and triggers actions when certain conditions are met. Users will express this workflow through a UI, configuring graphically how events change the state of the workflow and which actions (if any) are triggered when that happens.

My question is mainly about the input (from now on, events) of a system of this type. There seems to be some disagreement in the team about the granularity that should be used to codify this information. I come from an OOP background and part of my team comes from a more FP background and languages of the LISP family.

Although I’m of course open to hearing other options, there are two basic approaches that we have discussed:



1. Fine-grained event that reflects a business fact

Basically, an event as defined in Event Sourcing, following the material (mainly from Greg Young) that I have read and seen.

The event is an immutable fact, named in the past tense, that captures something important for the domain of the application. Each event has a name and a payload that carries the information that cannot be inferred from the previous state.

Our application is not event sourced, so in some places the event will be recorded alongside the current models that we persist in the DB, and of course we will need to design the characteristics of each event (name, payload, etc.). I am not advocating a migration from our current solution (typical E/R models) to Event Sourcing; I am just saying that these events would be added and stored, and that I would follow the design guidelines of Event Sourcing for them. Typical examples:


Name: CartCreated                  Payload: { … }
Name: CartItemAdded                Payload: { … }
Name: CartItemAdded                Payload: { … }
Name: CartClosed                   Payload: { … }
Name: ShippingInformationAdded     Payload: { … }
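A minimal sketch of how such fine-grained events might be represented (the `Event` class, field names, and example payloads are illustrative assumptions, not our actual system):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass(frozen=True)  # immutable: an event is a fact that cannot change
class Event:
    name: str                  # past-tense business fact, e.g. "CartItemAdded"
    payload: dict[str, Any]    # only data not inferable from the previous state
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A stream of fine-grained events for one hypothetical cart:
stream = [
    Event("CartCreated", {"cart_id": "c-1", "user_id": 123}),
    Event("CartItemAdded", {"cart_id": "c-1", "sku": "x", "qty": 1}),
    Event("CartItemAdded", {"cart_id": "c-1", "sku": "y", "qty": 1}),
    Event("CartClosed", {"cart_id": "c-1"}),
]
```

Note that each event name encodes a business fact directly, so consumers can dispatch on the name alone without inspecting the payload.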

2. Coarse-grained event (generic operations)

Others propose a wider event that captures generic operations such as creation, modification, or deletion. It would basically capture the new information (a diff) that is going to be saved in our current entities (assuming cart and order are models in our app with associated tables in the DB). The database row is saved, along with an extra row in an event store like:

Entity: Cart    Operation: Created     Payload: { … }
Entity: Cart    Operation: Modified    Payload: { items: [x] }
Entity: Cart    Operation: Modified    Payload: { items: [x, y] }
Entity: Cart    Operation: Modified    Payload: { closed: true }
Entity: Order   Operation: Modified    Payload: { shipping: [x] }
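The same history in the coarse-grained style could be sketched like this (again, the `ChangeEvent` class and payloads are assumptions for illustration). The appeal is that one generic piece of code can handle every entity:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class ChangeEvent:
    entity: str                # e.g. "Cart", "Order"
    operation: str             # "Created" | "Modified" | "Deleted"
    payload: dict[str, Any]    # diff against the previous row state

history = [
    ChangeEvent("Cart", "Created", {}),
    ChangeEvent("Cart", "Modified", {"items": ["x"]}),
    ChangeEvent("Cart", "Modified", {"items": ["x", "y"]}),
    ChangeEvent("Cart", "Modified", {"closed": True}),
]

# A single generic writer works for every entity -- no per-event design needed:
def log(ev: ChangeEvent) -> str:
    return f"{ev.entity} {ev.operation}: {ev.payload}"
```

The cost is that business meaning (e.g. "the cart was closed") is buried inside the diff and must be re-derived by every consumer.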

Actually, this point of discussion has come up in other contexts, and although I’m not really sure, sometimes I feel it comes down to different interpretations of this sentence from Rich Hickey:



“You’d be much better off 90% of the time you use classes to do data things to just use a hash instead … your system could be simpler, you could write generic data processing and utilities that didn’t have to know about your class …”

This is about classes, and I understand the point about reuse. But my feeling is that “write generic data processing” is playing a role in my colleagues’ views, as I am adding extra pieces of information (the event name and its payload) to the data we previously had, changing its shape.

Key discussion points

This is part of the criticism that the idea of fine-grained events receives:



1. It is more useful to store everything. We don’t want to lose information, and by recording everything we don’t need to think about custom events.

The fact that I am adding a new event that some code will insert, and that this requires design (you have to decide what the payload will look like), is seen as a dangerous reinterpretation of the data. Following this argument, just recording the difference between the old and new state of a database row is better.

I see it differently. The information that we send to the database today is also an interpretation of what users are doing, designed by someone some time ago. Among the fields that we send to persistent storage, we sometimes leave implicit things that are important for the domain. Our DB schema is not some perfect, canonical representation of our domain; it is something that we designed too.

It seems an obvious point to me that our database is an internal detail of our current implementation, and usually that is something you want to keep hidden. If our system grows, isolation between applications will be important. We do not want to review this system every time we change the internal representation of data in the source system, and of course we do not want it broken. The events act as a boundary between the source system and other systems, and we treat the payload as a contract.

Isolation is an important point here, and I do not understand why it would be any less important in a functional context.

2. Just recording the information that we are adding to a row, and letting the code infer what is happening, is far more powerful.

The fact that I am designing a new event is seen as a constraint (and again, it is true that this requires design, code, etc.). They see it as powerful that they can always reinterpret the “original” data and extract meaning later, as opposed to doing this interpretation up front. In my opinion, if we interpret up front, we avoid duplication.

Different parts of our infrastructure are going to be listening to and processing these events. The example is not perfect, but here it goes. Let’s assume we have a payment system where attempts to pay partial amounts require a credit check:


Coarse-grained


PaymentCreated: { user_id: 123, total_amount: 155.4, current_amount: 50, accepted: true }

Fine-grained

RecurrentPaymentRequested: { user_id: 123, total_amount: 155.4, current_amount: 50 }
RecurrentPaymentAccepted: { }

In the coarse-grained example, any part of the application that wants to react to the fact that a recurrent payment has been accepted must first understand that current_amount being less than total_amount means we are talking about a recurring payment, and then look for the accepted field that indicates whether it was accepted. Although there are ways to fix it, the timing factor (the request happens before the acceptance) is also lost.

This need to know about the payload and execute logic to figure out what is happening gets replicated throughout your code, and potentially other systems, in every place where you want to consume the event and process accepted recurrent payments.
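The difference can be sketched like this (hypothetical consumers; the field names come from the example above, everything else is assumed):

```python
# Coarse-grained: the event is just a row snapshot.
coarse = {"user_id": 123, "total_amount": 155.4,
          "current_amount": 50, "accepted": True}

def is_accepted_recurrent_payment(ev: dict) -> bool:
    # Every coarse-grained consumer must re-derive the business fact:
    # a partial amount implies a recurring payment, then check the flag.
    # This logic gets copied into each consumer that cares about it.
    return ev["current_amount"] < ev["total_amount"] and ev["accepted"]

# Fine-grained: the fact is explicit in the event name, so a consumer
# just subscribes to "RecurrentPaymentAccepted" and needs no such logic.
def handles(event_name: str) -> bool:
    return event_name == "RecurrentPaymentAccepted"
```

With fine-grained events the interpretation happens once, at the producer, instead of being repeated in every consumer.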

It is a small thing, but as the system gets bigger, this duplication spreads everywhere. It makes sense for consumers to rely only on the information they need (Interface Segregation Principle), doesn’t it? Does it make sense to be constantly reinterpreting the data?

Conclusion

These, at least, are the goals that I was trying to achieve:

  1. Have events that I can use as input for this rule-based system, acting as a boundary between the source system, this new one, and others to come, without exposing internal details of the source system.

  2. Have consumers that rely only on specific events, avoiding the duplication of event-processing logic in different places.

  3. Use the fact that I need an input for the new system as an opportunity to create events that make explicit some business facts currently left implicit in our database models.

For me, the solution with fine-grained events wins. But I think I could be missing something (or a lot of things). Other colleagues have different intuitions, and I want to get this right and learn as much as possible.

Is my background (statically typed languages, OOP) making me blind to better approaches?

Am I perhaps overcomplicating the solution?

Are specific events a limitation for the workflow system that we want to create?

Thank you so much!
