-
State building could be done by mapping over streams, but that requires a more premeditated approach. Another idea is to slap an index on the log and do the queries ad hoc. The optimistic query wouldn't be too hard. Superficially, this smells like two changes to the EventStore API, and then there should be enough meat to e.g. implement a sample system. Maybe based on https://github.com/bwaidelich/dcb-example-courses?
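To make that concrete, the two additions might be sketched roughly like this; read_query/1 and append_with_query/3 are made-up names for discussion, not existing EventStore functions:

# Hypothetical API sketch -- neither function exists in EventStore today.
# 1. Read every event matching a query, across streams, along with the
#    last version the query has seen:
{:ok, events, last_seen} = EventStore.read_query(query)

# 2. Append conditionally: succeed only if the query still matches
#    nothing newer than last_seen.
:ok = EventStore.append_with_query(new_events, query, last_seen)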
-
The first thing we need to do, IMO, is align on the "Context and Problem Statement" part of the ADR. Otherwise we'll all end up talking past each other. And it's possible there are alternatives to DCBs that would also address said problems and should be considered.
-
Event-modeling-ish diagram of how it would be modeled with DCB: [diagram]
-
Here's a view of two courses and two students, and the events a command handler would need to consider in order to enroll Bob in the English Literature course. Notice that we need to consider only certain events from the Lit course, as well as the creation events from Bob's stream. We will also need to consider all of Bob's StudentEnrolled events. After loading these events, and folding over them to reach some internal state, we decide that we're going to accept Bob's enrollment, and we also notice that we've reached the maximum capacity of the literature class. In order to persist the resulting events, we need to check that no new events matching our view have been appended to either stream in the meantime.
As we're dealing with multiple streams, we can no longer use a single integer plus the UUID of a stream to make these checks. The query presented in the talks might look like:
query = %StreamQuery{
ids: [
%{course: "course-lit"},
%{student: "student-bob"}
],
events: [
"CourseCreated",
"StudentCreated",
"StudentEnrolled",
"CourseCapacityChanged"
]
}
That would net us the matching events from both streams: the Lit course's creation, capacity, and enrollment events, plus Bob's StudentCreated event and any StudentEnrolled events from his stream.
When we accept the command, we need to emit two events for the Lit stream, at versions 4 and 5 respectively. When we try to append them, the event store needs to confirm that no events matching our query have been appended beyond the versions we observed when reading.
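Putting the pieces together, the handler's flow could be sketched like this, assuming the hypothetical read_query/1 and append_with_query/3 from the first comment; apply_event/2 and decide/2 are placeholders as well:

# Sketch of the enroll flow; all names besides Enum.reduce are placeholders.
{:ok, events, last_seen} = EventStore.read_query(query)

# Fold the matched events into just the state the decision needs.
state = Enum.reduce(events, %{}, &apply_event(&2, &1))

# Decide: accept Bob's enrollment, and note the class is now full.
new_events = decide(state, %{command: :enroll, student: "student-bob", course: "course-lit"})

# The store accepts the append only if no event matching `query`
# was written past `last_seen` in the meantime.
case EventStore.append_with_query(new_events, query, last_seen) do
  :ok -> :ok
  {:error, :stale_query} -> :retry
end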
-
I think we can see here that appending events to the stream is not going to be terribly hard; what is going to be harder is the read side: efficiently selecting the events that match a query like the one above.
-
There is also the question of what to do with Commanded's Aggregates. Aggregates are two things in Commanded: a module defining the aggregate's state and command-handling logic, and a process, one per aggregate instance, that holds that state in memory.
If we're going to move from one aggregate module with one process to one command handler per workflow, are we also going to create processes for these command handlers?
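For comparison, a Commanded aggregate today is a plain module with execute/2 and apply/2 functions, wrapped in one GenServer per aggregate instance. A DCB-style command handler could instead be a process-less module; a sketch under the same hypothetical API as above, with query_for/1, apply_event/2, and decide/2 as placeholders:

defmodule Courses.EnrollHandler do
  # No process, no long-lived state: every command rebuilds the state
  # it needs from the query, and the conditional append replaces the
  # serialization the aggregate GenServer used to provide.
  def handle(command) do
    query = query_for(command)
    {:ok, events, last_seen} = EventStore.read_query(query)
    state = Enum.reduce(events, %{}, &apply_event(&2, &1))

    with {:ok, new_events} <- decide(state, command) do
      EventStore.append_with_query(new_events, query, last_seen)
    end
  end
end

If the store enforces consistency this way, spawning processes for these handlers arguably becomes an operational choice (throughput, backpressure) rather than a correctness requirement.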
-
Here's the structure of the streams & events tables. I've omitted FK constraints and references.
Table "event_store.events"
Column | Type | Collation | Nullable | Default
----------------+--------------------------+-----------+----------+---------
event_id | uuid | | not null |
event_type | text | | not null |
causation_id | uuid | | |
correlation_id | uuid | | |
data | bytea | | not null |
metadata | bytea | | |
created_at | timestamp with time zone | | not null | now()
Indexes:
"events_pkey" PRIMARY KEY, btree (event_id)
Table "event_store.stream_events"
Column | Type | Collation | Nullable | Default
-------------------------+--------+-----------+----------+---------
event_id | uuid | | not null |
stream_id | bigint | | not null |
stream_version | bigint | | not null |
original_stream_id | bigint | | |
original_stream_version | bigint | | |
Indexes:
"stream_events_pkey" PRIMARY KEY, btree (event_id, stream_id)
"ix_stream_events" UNIQUE, btree (stream_id, stream_version)
Table "event_store.streams"
Column | Type | Collation | Nullable | Default
----------------+--------------------------+-----------+----------+--------------------------------------------------------
stream_id | bigint | | not null | nextval('event_store.streams_stream_id_seq'::regclass)
stream_uuid | text | | not null |
stream_version | bigint | | not null | 0
created_at | timestamp with time zone | | not null | now()
deleted_at | timestamp with time zone | | |
Indexes:
"streams_pkey" PRIMARY KEY, btree (stream_id)
"ix_streams_stream_uuid" UNIQUE, btree (stream_uuid) |
-
I've just realized there are some misunderstandings surrounding this topic, prompting me to ask a question: How would it work when student streams are in …
-
Please add to the material. My starting point, summarizing from my org notes:
The basic deal is to have the concept of an aggregate (with an id) but only have message handlers.
They build the subset of aggregate-root state they need by selecting and replaying events.
Command -> MessageHandler
MessageHandler <-> EventLog (query events of types A, B, C for aggregate id X)
MessageHandler -> Event
Note that multiple MessageHandlers can run in parallel; therefore, they need to lock optimistically on the query. Sara explains the details at around 38m30s, but essentially the event set returned by the query carries a last version number, and the append sends the query plus the last version number seen. If the same query matches a later version number, the transaction is retried.
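A sketch of that retry loop, to live in some handler module, under the same assumed conditional-append API as in the earlier comments; query_for/1, fold/1, and decide/2 are placeholders:

# All names are placeholders; the point is the optimistic lock on the query.
def handle_with_retry(command, attempts \\ 3) do
  query = query_for(command)
  {:ok, events, last_seen} = EventStore.read_query(query)
  new_events = decide(fold(events), command)

  case EventStore.append_with_query(new_events, query, last_seen) do
    :ok ->
      :ok

    # The same query matched a later version number: reload and retry.
    {:error, :stale_query} when attempts > 1 ->
      handle_with_retry(command, attempts - 1)

    other ->
      other
  end
end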