I have a system that uses a graph database without any ORM, mapper, or change-tracking tool such as Entity Framework. I'm not using domain entities; instead, an event triggers changes directly in command handlers.
Instead of having an entity like this:
class Community
{
    public string Title { get; set; }

    public void Handle(ChangeDetails @event)
    {
        Title = @event.Title;
    }
}
I have a CQRS command handler that changes two databases at the same time. So, rather than "event" sourcing, I have something like "command sourcing": I can replay those commands to reconstruct the database with all the changes that were made, like this:
class CommunityWriteCommands
{
    public void Handle(ChangeTitleOrDescriptionEvent @event)
    {
        GraphDatabase.Community(@event.Id)
            .Set("title", @event.Title)
            .Set("description", @event.Description);
    }
}
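To be explicit about what I mean by replay: rebuilding is just re-running the persisted commands, in their original order, against an empty database. A minimal sketch, where ICommandLog is a hypothetical append-only store (not a real library type):

using System.Collections.Generic;

interface ICommandLog
{
    // Hypothetical: returns commands in the order they were originally handled.
    IEnumerable<ChangeTitleOrDescriptionEvent> ReadAll();
}

class Replayer
{
    public void Rebuild(ICommandLog log, CommunityWriteCommands handler)
    {
        // As long as the handlers are deterministic, replaying the same
        // commands in the same order reproduces the same database state.
        foreach (var command in log.ReadAll())
        {
            handler.Handle(command);
        }
    }
}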
Could this be considered "wrong" or "incorrect" from the perspective of software engineering / DDD / Clean Architecture? Without an ORM/Entity Framework or any other tool that tracks changes, it seems better to me to store commands than domain entity events (I'm not even using domain entities).
rather than "event" sourcing, I have something like "command sourcing": I can replay those commands to reconstruct the database with all the changes that were made
So this isn’t a unique idea; for example, the LMAX architecture uses a durable sequence of input messages as part of the mechanism to ensure that the business processing replicas are in the same state as the primary.
That’s not quite the same thing as using input messages as the system of record, but it is pretty close, and demonstrates that the pattern is viable (at least in some contexts).
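In sketch form, the mechanism looks something like this; all of the names here are mine for illustration, not anything from the actual LMAX code. The primary makes each input durable before processing it, and a replica reaches the same state by running the same deterministic logic over the same sequence:

using System.Collections.Generic;

interface IJournal
{
    void Append(string message);
    IEnumerable<string> ReadAll(); // in append order
}

class BusinessProcessor
{
    public long State { get; private set; }

    // Must be deterministic: same inputs in the same order => same state.
    public void Process(string message) => State += message.Length;
}

class Primary
{
    public static void Accept(IJournal journal, BusinessProcessor p, string message)
    {
        journal.Append(message); // durable first...
        p.Process(message);      // ...then processed
    }
}

class Replica
{
    public static BusinessProcessor CatchUp(IJournal journal)
    {
        var p = new BusinessProcessor();
        foreach (var message in journal.ReadAll())
            p.Process(message); // same sequence, same logic, same state
        return p;
    }
}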
From what I can tell, if the only consumer of your durable representations is your business processor, then you have a lot of freedom to tune that representation to your specific needs.
But if you are expecting other systems to read that information…? Now you need to start worrying about whether those other systems understand the durable representations the same way that your business processor does.
To some extent, that’s always true (if you put “feet” into a relational database, and some other system interprets that information as “meters”, then you are going to have a bad time).
But the additional concern here is that these other systems need to understand the policies used to derive information, and which policies were active at the time that the input message was originally processed, and so on.
For example, imagine a business process that prioritizes items, using order of arrival as a tie breaker – so you might prefer last-in/first-out (LIFO), or first-in/first-out (FIFO). In order for other processes to interpret the stored information the same way as your business process, they will need to know which policy is being used. Furthermore, if you decide to change policies, the other services will need to know what logic is used to decide when to change, so that they correctly change at the same point in the input stream.
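To make that concrete, here is a sketch of what a second consumer has to get right; TieBreak and WorkItem are illustration types invented for this answer, not anything from the system above:

using System.Collections.Generic;
using System.Linq;

enum TieBreak { Fifo, Lifo }

record WorkItem(int Priority, long ArrivalSequence);

static class PriorityView
{
    // Rebuilds the processing order from the stored inputs. The caller
    // must supply the tie-break policy that was active for this part of
    // the input stream; guessing wrong silently produces a different order.
    public static IEnumerable<WorkItem> Order(IEnumerable<WorkItem> items, TieBreak policy)
    {
        var byPriority = items.OrderByDescending(i => i.Priority);
        return policy == TieBreak.Fifo
            ? byPriority.ThenBy(i => i.ArrivalSequence)
            : byPriority.ThenByDescending(i => i.ArrivalSequence);
    }
}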
Messy. Doable, but messy.
And so you have to think: do the advantages of storing input messages outweigh the disadvantages? If they do, then it is “OK”.