Currently, when running Astrid, the block mining flow uses the bridge to commit state sync events to the block. This is done inside `FinalizeAndAssemble`.
The `CommitStates` function uses `c.bridgeReader.Events(c.execCtx, blockNum)` to fetch the events it needs to submit for the new block. This will not work when we are the block producer, because the Bridge will not have seen a block with `blockNum=X`, where X is the block we are currently mining.
Calling `bridge.ProcessNewBlocks(blockX)` would not work either, because calling that flow is the responsibility of the Block Consumer (in the `sync` package), not the Block Producer. For example, a failure scenario: we call `ProcessNewBlocks(blockX)` and mine the block, but our block does not end up with the highest difficulty and is discarded. Our Bridge is then left with inconsistent mappings. It is important that we never call `ProcessNewBlocks` during Block Production. Think of Block Production as read-only, never write.
To solve this, I propose introducing a new read API on the Bridge and Bridge Reader: `EventsWithinTime(fromTime, toTime)`.
Then, the block consumer will use `bridge.Events(blockNum)` as part of `Finalize`, while the block producer will use `bridge.EventsWithinTime(fromTime, toTime)` as part of `FinalizeAndAssemble`.
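As a rough sketch of the split described above (signatures simplified and hypothetical, not the real erigon API), the engine would pick a read path depending on whether it is consuming or producing, so that block production never touches the Bridge's write flow:

```go
package main

import "fmt"

// bridgeReader is a simplified stand-in for the Bridge Reader; only the
// proposed read-path split matters here, not the real erigon signatures.
type bridgeReader struct{}

// Events is the existing API: events committed to an already-seen block.
func (bridgeReader) Events(blockNum uint64) []string {
	return []string{fmt.Sprintf("events committed to block %d", blockNum)}
}

// EventsWithinTime is the proposed API: events keyed by a time window
// rather than by a block number the Bridge has not seen yet.
func (bridgeReader) EventsWithinTime(fromTime, toTime uint64) []string {
	return []string{fmt.Sprintf("events with timestamp in [%d, %d)", fromTime, toTime)}
}

// commitStates sketches how the consensus engine could choose the path:
// the consumer (Finalize) keys by block number, the producer
// (FinalizeAndAssemble) keys by time window. Both paths are read-only.
func commitStates(r bridgeReader, producing bool, blockNum, fromTime, toTime uint64) []string {
	if producing {
		// the Bridge has no mapping for the block we are mining
		return r.EventsWithinTime(fromTime, toTime)
	}
	// the Block Consumer has already processed this block
	return r.Events(blockNum)
}

func main() {
	fmt.Println(commitStates(bridgeReader{}, true, 42, 100, 200)[0])
	fmt.Println(commitStates(bridgeReader{}, false, 42, 100, 200)[0])
}
```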
To support such an API, we will need to add a new table to the Bridge component that maps an event timestamp to its corresponding event id. The `EventsWithinTime` API can then first collect, from this table, the event ids that fall within the time window, and then fetch the corresponding events from the main events table by id.
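To illustrate the two-step lookup, here is a minimal sketch with a sorted in-memory slice standing in for the new timestamp→id table (all names are hypothetical; the real implementation would range-scan the Bridge's MDBX store, and the window bounds shown here are half-open as an assumption):

```go
package main

import (
	"fmt"
	"sort"
)

// indexEntry is one row of the proposed timestamp→id table.
type indexEntry struct {
	timestamp uint64 // event time (unix seconds)
	id        uint64 // event id, key into the main events table
}

// eventsStore stands in for the Bridge's stores: the main events table
// plus the proposed timestamp index, kept sorted by timestamp.
type eventsStore struct {
	events map[uint64][]byte // main events table: id → encoded event
	index  []indexEntry      // proposed table, sorted by timestamp
}

// EventsWithinTime first collects ids whose timestamps fall in
// [fromTime, toTime), then fetches the corresponding events by id.
func (s *eventsStore) EventsWithinTime(fromTime, toTime uint64) [][]byte {
	// step 1: range scan over the timestamp index
	lo := sort.Search(len(s.index), func(i int) bool { return s.index[i].timestamp >= fromTime })
	var out [][]byte
	for i := lo; i < len(s.index) && s.index[i].timestamp < toTime; i++ {
		// step 2: point lookup in the main events table
		out = append(out, s.events[s.index[i].id])
	}
	return out
}

func main() {
	s := &eventsStore{
		events: map[uint64][]byte{1: []byte("e1"), 2: []byte("e2"), 3: []byte("e3")},
		index:  []indexEntry{{100, 1}, {200, 2}, {300, 3}},
	}
	for _, e := range s.EventsWithinTime(150, 300) {
		fmt.Println(string(e)) // only the event at timestamp 200 falls in the window
	}
}
```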
Note that this API only needs to be supported for hot state, so we can prune its data as part of the retirement flow in `PruneAncientBlocks` -> `PruneHeimdall`. There is no need to implement this lookup from snapshots.
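Since the index is sorted by timestamp, pruning during retirement reduces to a prefix cut. A minimal sketch (hypothetical names; the real flow would delete rows from the MDBX table inside `PruneHeimdall`):

```go
package main

import (
	"fmt"
	"sort"
)

// indexEntry is one row of the proposed timestamp→id table.
type indexEntry struct {
	timestamp uint64
	id        uint64
}

// pruneIndexBefore drops all index rows with timestamp < cutoff, mirroring
// how the retirement flow could prune the new table alongside the events
// it already retires. Because the index is sorted by timestamp, a single
// binary search finds the cut point.
func pruneIndexBefore(index []indexEntry, cutoff uint64) []indexEntry {
	i := sort.Search(len(index), func(i int) bool { return index[i].timestamp >= cutoff })
	return index[i:]
}

func main() {
	idx := []indexEntry{{100, 1}, {200, 2}, {300, 3}}
	kept := pruneIndexBefore(idx, 250)
	fmt.Println(len(kept)) // only entries at or after the cutoff remain
}
```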
The goal of this task is to:
- Implement `bridge.EventsWithinTime`
- Change the Bor consensus engine to use `bridge.EventsWithinTime` in `FinalizeAndAssemble`