Sequencing and syncing #171
-
Proposal A

The problem boils down to: how do we build blocks without knowing the execution outcome (the block hash) beforehand? It turns out there is existing software that needs this same capability: block builders such as rbuilder from Flashbots. Besides potentially overcoming the challenges, builders are a good thing to have in themselves, as they bring some nice sequencing features of their own. Here's how they would work in the system.

Happy path

```mermaid
sequenceDiagram
    sequencer (CDK) ->> execution client (sequencer): build block engine_forkchoiceUpdatedV2(ForkchoiceState, PayloadAttributes)
    execution client (sequencer) -->> sequencer (CDK): {payloadStatus: {status: VALID, ...}, payloadId: buildProcessId}
    sequencer (CDK) ->> execution client (sequencer): get status engine_getPayloadV2(PayloadId)
    execution client (sequencer) -->> sequencer (CDK): {executionPayload, blockValue}
    sequencer (CDK) ->> sync node (CDK): distribute new block
    sync node (CDK) ->> execution client (sync): go to new tip of the chain engine_forkchoiceUpdatedV2(ForkchoiceState, null)
```
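The two Engine API messages in the happy path above can be sketched as JSON-RPC requests. This is a minimal Python sketch, not CDK code: the field names follow the public Engine API spec (`ForkchoiceStateV1`, `PayloadAttributesV2`), while the hashes, timestamp, and request ids are placeholders.

```python
def forkchoice_updated_v2(head_hash, safe_hash, finalized_hash, payload_attributes=None):
    """JSON-RPC request the sequencer sends to (optionally) start block building.

    Passing non-null payload_attributes asks the EC to begin building a payload;
    passing None (as the sync node does) only moves the tip of the chain.
    """
    params = [
        {
            "headBlockHash": head_hash,
            "safeBlockHash": safe_hash,
            "finalizedBlockHash": finalized_hash,
        },
        payload_attributes,
    ]
    return {"jsonrpc": "2.0", "id": 1, "method": "engine_forkchoiceUpdatedV2", "params": params}


def get_payload_v2(payload_id):
    """JSON-RPC request that retrieves the payload the EC has built."""
    return {"jsonrpc": "2.0", "id": 2, "method": "engine_getPayloadV2", "params": [payload_id]}


# Example: start building on top of the current tip (all values are placeholders).
attrs = {
    "timestamp": "0x64",
    "prevRandao": "0x" + "00" * 32,
    "suggestedFeeRecipient": "0x" + "00" * 20,
    "withdrawals": [],
}
tip = "0x" + "aa" * 32
req = forkchoice_updated_v2(tip, tip, tip, attrs)
```

The same `forkchoice_updated_v2` helper with `payload_attributes=None` corresponds to the last message in the diagram, where the sync node merely advances its EC to the new tip.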
Note that:
Sync from DA

```mermaid
sequenceDiagram
    CDK ->> L1: get verified block hash
    L1 -->> CDK: block hash
    CDK ->> execution client: check block hash
    execution client -->> CDK: block hash does not exist
    CDK ->> L1: download DA
    L1 -->> CDK: DA
    CDK ->> builder: build block using DA
    builder -->> CDK: new block
    CDK ->> execution client: new tip of the chain
    execution client ->> CDK: get block body
    CDK -->> execution client: block body
```
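The sync-from-DA flow can be condensed into a single decision step. This is a hypothetical Python sketch: every callback name (`local_has_block`, `download_da`, `build_from_da`, `set_new_tip`) is a placeholder standing in for the CDK / execution client / builder interactions in the diagram, not a real API.

```python
def sync_step(verified_hash, local_has_block, download_da, build_from_da, set_new_tip):
    """One iteration of the sync-from-DA loop.

    verified_hash: the block hash fetched from L1.
    local_has_block(h) -> bool: asks the execution client whether it knows h.
    download_da() -> bytes: fetches the DA blob from L1.
    build_from_da(da) -> str: asks the builder to re-execute the DA into a block hash.
    set_new_tip(h): forkchoiceUpdated with null payload attributes.
    """
    if local_has_block(verified_hash):
        # Already have the block locally: just advance the tip.
        set_new_tip(verified_hash)
        return verified_hash
    # Block body is not local: rebuild it from the DA.
    da = download_da()
    rebuilt = build_from_da(da)
    if rebuilt != verified_hash:
        # The rebuilt block must reproduce the hash verified on L1.
        raise ValueError("DA does not reproduce the verified block hash")
    set_new_tip(rebuilt)
    return rebuilt
```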
The key parts of this flow:
Build blocks with forced transactions

In order to build blocks with forced transactions / add forced blocks, the builder is also used: the CDK would download the forced data and request the builder to compute it. Once that is done, it will share the new block with the execution client.

Considerations

The complex part here is implementing the builder; ideally we should be able to reuse existing software for that. Unfortunately, builders themselves depend on execution clients, as can be seen from Flashbots moving from a geth-based to a reth-based builder implementation. This could lead the project to a situation where we need specific implementations for the different execution clients (reth, erigon, ...), which is something to avoid, as it works against one of the goals of going vanilla.

A potential remediation would be to not support all the features on all the clients; for instance, only reth would be supported for sequencing. On the other hand, when it comes to syncing, as long as there is a single honest node that has the blocks as per the DA, the network could use that node to sync. This opens the room for a dedicated service (that anyone could run) that is in charge of building the state from the DA.

Alternatively, we could force the DA to include block hashes and pretend that forced transactions will never exist. Doing so would enable a vanilla usage of the Engine API without any trick or add-on. But it would increase the cost of DA, and eventually forced transactions should make it into the protocol.

Builder implementation

Reth looks like the way to go here: the modularity of the project makes it easier to build blocks, and it looks like we would be able to reuse certain parts of the codebase to build blocks without forking.
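The forced-transaction path has the same shape as the DA rebuild: the builder, not the execution client, produces the block. A short hypothetical Python sketch, where every function name (`download_forced_data`, `build_block`, `share_with_ec`) is an illustrative placeholder for the components in the proposal:

```python
def apply_forced_batch(download_forced_data, build_block, share_with_ec):
    """Hypothetical flow for forced transactions / forced blocks.

    download_forced_data() -> list: fetches forced txs posted on L1.
    build_block(forced) -> dict: asks the builder to execute them into a block.
    share_with_ec(block): hands the finished block to the execution client.
    """
    forced = download_forced_data()   # 1. CDK downloads the forced data
    block = build_block(forced)       # 2. builder computes the block
    share_with_ec(block)              # 3. EC adopts the result
    return block
```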
-
In Proposal A we won't know the content of the block beforehand. We can definitely use Reth for building a builder, even as an Execution Extension, but how are we going to provide that block to the execution client?
-
Context
For a while the CDK team has wanted to re-architect the relationship between the execution client and the CDK client so that it uses the Engine API. This is a no-brainer when the goal is to be as vanilla as possible, since all the execution clients use this interface for Ethereum. Not only that, but Ethereum's separation of execution / consensus clients matches very well with our approach of execution / CDK clients. Additionally, this path seems to future-proof interesting forward-looking features such as sequencer decentralisation (there are already tools for that, such as beacon kit), PBS-style goodies like rollup boost, based rollups, ...
Challenges
tl;dr:
Unfortunately, using the Engine API to drive the execution client of a rollup is not as straightforward as it may seem. This is mostly because, in order to build a block, the consensus client (CC, in our case the cdk client) requests the execution client (EC) to do so. Upon request, the EC will initiate block building by adding transactions from the pool to the block. Later on, the CC will propagate this block to other CCs, which will validate it against their EC. The problem is that:
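To make the ordering problem concrete: the CC only learns the block hash after the EC has built the payload, so it cannot commit to a hash before asking the EC to build. The toy Python stand-in below illustrates this; `FakeEngine` and its return values are illustrative only, loosely shaped like the Engine API responses, and are not a real client.

```python
class FakeEngine:
    """Toy stand-in for an execution client, only to show the call ordering."""

    def __init__(self):
        self._built = {}

    def forkchoice_updated(self, forkchoice_state, payload_attributes):
        # The EC starts building and hands back an opaque payloadId, NOT a hash.
        payload_id = "0x01"
        self._built[payload_id] = {"blockHash": "0x" + "ab" * 32}
        return {"payloadStatus": {"status": "VALID"}, "payloadId": payload_id}

    def get_payload(self, payload_id):
        # The block hash only exists once the payload has been built.
        return {"executionPayload": self._built[payload_id]}


engine = FakeEngine()
resp = engine.forkchoice_updated({"headBlockHash": "0x" + "aa" * 32}, {"timestamp": "0x64"})
assert "blockHash" not in resp  # step 1: still no hash, only a payloadId
block_hash = engine.get_payload(resp["payloadId"])["executionPayload"]["blockHash"]
```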
Interesting resources