diff --git a/docs/chain_management/chain_type_manager.md b/docs/chain_management/chain_type_manager.md index 6ba1a79d3..0204fe55c 100644 --- a/docs/chain_management/chain_type_manager.md +++ b/docs/chain_management/chain_type_manager.md @@ -31,12 +31,12 @@ The exact process of deploying & registering a ST will be described in [sections | chainId | Permanent | Permanent identifier of the ST. For wallet support reasons, for now the chainId has to be small (48 bits). This is one of the reasons why for now we’ll deploy STs manually, to prevent STs from having the same chainId as another popular chain. In the future it will be trustlessly assigned as a random 32-byte value.| | baseTokenAssetId | Permanent | Each ST can have its own custom base token (i.e. token used for paying the fees). It is set once during creation and can never be changed. Note that we refer to an "asset id" here instead of an L1 address. To read more about what an assetId is and how it works, check out the document for [asset router](../bridging/asset_router/Overview.md) | | chainTypeManager | Permanent | The CTM that deployed the ST. In principle, it could be possible to migrate between CTMs (assuming both CTMs support that). However, in practice it may be very hard and as of now such functionality is not supported. | -| admin | By admin of ST | The admin of the ST. It has some limited powers to govern the chain. To read more about which powers are available to a chain admin and which precautions should be taken, check out this document (FIXME: link to document about admin precauotions) | +| admin | By admin of ST | The admin of the ST. It has some limited powers to govern the chain. To read more about which powers are available to a chain admin and which precautions should be taken, check out [this document](../chain_management/admin_role.md) | | validatorTimelock | CTM | For now, we want all the chains to use the same 21h timelock period before their batches are finalized.
Only CTM can update the address that can submit state transitions to the rollup (that is, the validatorTimelock). | | validatorTimelock.validator | By admin of ST | The admin of ST can choose who can submit new batches to the ValidatorTimelock. | | priorityTx FeeParams | By admin of ST | The admin of a ZK chain can amend the priority transaction fee params. | | transactionFilterer | By admin of ST | A chain may put an additional filter to the incoming L1->L2 transactions. This may be needed by a permissioned chain (e.g. a Validium bank-like corporate chain). | -| DA validation / permanent rollup status | By admin of ST | A chain can decide which DA layer to use. You check out more about safe DA management here (FIXME: link to admin doc) | +| DA validation / permanent rollup status | By admin of ST | A chain can decide which DA layer to use. You can read more about [safe DA management here](./admin_role.md) | | executing upgrades | By admin of ST | While exclusively CTM governance can set the content of the upgrade, STs will typically be able to choose a suitable time for them to actually execute it. In the current release, STs will have to follow our upgrades. | | settlement layer | By admin of ST | The admin of the chain can enact migrations to other settlement layers. | @@ -59,7 +59,7 @@ In the current release, each chain will be an instance of zkSync Era and so the ### Emergency upgrade -In case of an emergency, the [security council](https://blog.zknation.io/introducing-zk-nation/) has the ability to freeze the ecosystem and conduct an emergency upgrade (FIXME: link to governance doc). +In case of an emergency, the [security council](https://blog.zknation.io/introducing-zk-nation/) has the ability to freeze the ecosystem and conduct an emergency upgrade. In case we are aware that some of the committed batches on an ST are dangerous to be executed, the CTM can call `revertBatches` on that ST.
For faster reaction, the admin of the ChainTypeManager has the ability to do so without waiting for governance approval, which may take a lot of time. This action does not lead to funds being lost, so it is considered suitable for the partially trusted role of the admin of the ChainTypeManager. diff --git a/docs/l2_system_contracts/batches_and_blocks_on_zksync.md b/docs/l2_system_contracts/batches_and_blocks_on_zksync.md index 7830a1e9e..b4b70e4b1 100644 --- a/docs/l2_system_contracts/batches_and_blocks_on_zksync.md +++ b/docs/l2_system_contracts/batches_and_blocks_on_zksync.md @@ -28,19 +28,15 @@ Our `SystemContext` contract allows to get information about batches and L2 bloc ## Initializing L1 batch -FIXME: correct bootloader code link +At the start of the batch, the operator [provides](../../system-contracts/bootloader/bootloader.yul#L3935) the timestamp of the batch, its number and the hash of the previous batch. The root hash of the Merkle tree serves as the root hash of the batch. -At the start of the batch, the operator [provides](https://github.com/code-423n4/2024-03-zksync/blob/e8527cab32c9fe2e1be70e414d7c73a20d357550/code/system-contracts/bootloader/bootloader.yul#L3867) the timestamp of the batch, its number and the hash of the previous batch. The root hash of the Merkle tree serves as the root hash of the batch. -The SystemContext can immediately check whether the provided number is the correct batch number. It also immediately sends the previous batch hash to L1, where it will be checked during the commit operation. Also, some general consistency checks are performed. This logic can be found [here](https://github.com/code-423n4/2024-03-zksync/blob/e8527cab32c9fe2e1be70e414d7c73a20d357550/code/system-contracts/contracts/SystemContext.sol#L466). +The SystemContext can immediately check whether the provided number is the correct batch number. It also immediately sends the previous batch hash to L1, where it will be checked during the commit operation.
Also, some general consistency checks are performed. This logic can be found [here](../../system-contracts/contracts/SystemContext.sol#L469). ## L2 blocks processing and consistency checks ### `setL2Block` -FIXME: fix link -Before each transaction, we call `setL2Block` [method](https://github.com/code-423n4/2024-03-zksync/blob/e8527cab32c9fe2e1be70e414d7c73a20d357550/code/system-contracts/bootloader/bootloader.yul#L2825). There we will provide some data about the L2 block that the transaction belongs to: +Before each transaction, we call the `setL2Block` [method](../../system-contracts/bootloader/bootloader.yul#L2884). There we will provide some data about the L2 block that the transaction belongs to: - `_l2BlockNumber` The number of the new L2 block. - `_l2BlockTimestamp` The timestamp of the new L2 block. @@ -50,7 +46,7 @@ Before each transaction, we call `setL2Block` [method](https://github.com/code-4 If two transactions belong to the same L2 block, only the first one may have non-zero `_maxVirtualBlocksToCreate`. The rest of the data must be the same. -The `setL2Block` [performs](https://github.com/code-423n4/2024-03-zksync/blob/e8527cab32c9fe2e1be70e414d7c73a20d357550/code/system-contracts/contracts/SystemContext.sol#L341) a lot of similar consistency checks to the ones for the L1 batch. +The `setL2Block` method [performs](../../system-contracts/contracts/SystemContext.sol#L355) a lot of consistency checks similar to the ones for the L1 batch. ### L2 blockhash calculation and storage diff --git a/docs/l2_system_contracts/system_contracts_bootloader_description.md b/docs/l2_system_contracts/system_contracts_bootloader_description.md index e6b9c6275..fa7cd115d 100644 --- a/docs/l2_system_contracts/system_contracts_bootloader_description.md +++ b/docs/l2_system_contracts/system_contracts_bootloader_description.md @@ -21,7 +21,7 @@ The use of each system contract will be explained down below.
### Pre-deployed contracts -Some of the contracts need to be predeployed at the genesis, but they do not need the kernel space rights. To give them minimal permissiones, we predeploy them at consequtive addressess that start right at the `2^16`. These will be described in the following sections (FIXME). +Some of the contracts need to be predeployed at genesis, but they do not need kernel space rights. To give them minimal permissions, we predeploy them at consecutive addresses that start right at `2^16`. These will be described in the following sections. # zkEVM internals diff --git a/docs/settlement_contracts/data_availability/rollup_da.md b/docs/settlement_contracts/data_availability/rollup_da.md index d99198283..f4e24d399 100644 --- a/docs/settlement_contracts/data_availability/rollup_da.md +++ b/docs/settlement_contracts/data_availability/rollup_da.md @@ -3,26 +3,30 @@ FIXME: run a spellchecker -# EIP4844 support +## Prerequisites + +Before reading this document, it is better to first understand how [custom DA](./custom_da.md) works in general. + +## EIP4844 support EIP-4844, commonly known as Proto-Danksharding, is an upgrade to the Ethereum protocol that introduces a new data availability solution embedded in layer 1. More information about it can be found [here](https://ethereum.org/en/roadmap/danksharding/). To facilitate EIP4844 blob support, our circuits allow providing two arrays in our public input to the circuit: -- `blobCommitments` -- this is the commitment that helps to check the correctness of the blob content. The formula on how it is computed will be explained below in the document (FIXME: link). +- `blobCommitments` -- the commitment that helps to check the correctness of the blob content. The formula for how it is computed is explained below in this document. - `blobHash` -- the `keccak256` hash of the inner contents of the blob. Note, that our circuits require that each blob contains exactly `4096 * 31` bytes.
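As a quick sanity check on the numbers above: an EIP-4844 blob consists of 4096 field elements, and using 31 bytes per element (so each value stays below the BLS12-381 scalar field modulus) gives `4096 * 31 = 126976` bytes of pubdata per blob. A minimal illustrative sketch (the helper name is hypothetical, not part of the codebase):

```python
# Each EIP-4844 blob consists of 4096 field elements; the circuits use
# 31 usable bytes per element so every value fits below the BLS12-381
# scalar field modulus (a full field element is 32 bytes).
FIELD_ELEMENTS_PER_BLOB = 4096
USABLE_BYTES_PER_ELEMENT = 31

def max_pubdata_bytes(num_blobs: int) -> int:
    """Pubdata capacity for a batch that uses `num_blobs` blobs."""
    return num_blobs * FIELD_ELEMENTS_PER_BLOB * USABLE_BYTES_PER_ELEMENT

print(max_pubdata_bytes(1))  # 126976 bytes in a single blob
```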
The maximum number of blobs that are supported by our proving system is 16, but the system contracts support only 6 blobs at most for now. -When committing a batch, the L1DAValidator (FIXME: link to the description of pubdata processing) is called with the data provided by the operator and it should return the two arrays described above. These arrays be put inside the batch commitment and then the correctness of the commitments will be verified at the proving stage. +When committing a batch, the L1DAValidator is called with the data provided by the operator and it should return the two arrays described above. These arrays will be put inside the batch commitment and then the correctness of the commitments will be verified at the proving stage. -Note, that the `Executor.sol` (and the contract itself) is not responsible for checking that the provided `blobHash` and `blobCommitments` in any way correspond to the pubdata inside the batch as it is the job of the DA Validator pair (FIXME: link). +Note, that the `Executor.sol` (and the contract itself) is not responsible for checking that the provided `blobHash` and `blobCommitments` in any way correspond to the pubdata inside the batch, as that is the job of the DA Validator pair. -# Publishing pubdata to L1 +## Publishing pubdata to L1 Let's see an example of how the approach above works in rollup DA validators. -## RollupL2DAValidator +### RollupL2DAValidator ![RollupL2DAValidator.png](./L1%20smart%20contracts/Rollup_DA.png) @@ -37,7 +41,7 @@ To give the flexibility of checking different DA, we send the following data to - The hash of the `_totalPubdata`. In case the size of pubdata is small, it will allow the operator to also use just standard Ethereum calldata for the DA. - Send the `blobHash` array. -## RollupL1DAValidator +### RollupL1DAValidator When committing the batch, the operator will provide the preimage of the fields that the RollupL2DAValidator has sent before, and also some `l1DaInput` along with it.
This `l1DaInput` will be used to prove that the pubdata was indeed provided in this batch. @@ -71,6 +75,6 @@ assert uint256(res[32:]) == BLS_MODULUS The final `blobCommitment` is calculated as the hash between the `blobVersionedHash`, `opening point` and the `claimed value`. The zero-knowledge circuits will verify that the opening point and the claimed value were calculated correctly and correspond to the data that was hashed under the `blobHash`. -# Structure of the pubdata +## Structure of the pubdata Rollups maintain the same structure of pubdata and apply the same rules for compression as those that were used in the previous versions of the system. These can be read [here](./Handling%20pubdata.md). diff --git a/docs/settlement_contracts/data_availability/standard_pubdata_format.md b/docs/settlement_contracts/data_availability/standard_pubdata_format.md index a64af3371..3df8ba58a 100644 --- a/docs/settlement_contracts/data_availability/standard_pubdata_format.md +++ b/docs/settlement_contracts/data_availability/standard_pubdata_format.md @@ -1,9 +1,9 @@ # Standard pubdata format [back to readme](../../README.md) -While with the introduction of custom DA validators (FIXME LINK), any pubdata logic could be applied for each chain (including calldata-based pubdata), ZK chains are generally optimized for using state-diffs based rollup model. +While with the introduction of [custom DA validators](./custom_da.md), any pubdata logic could be applied for each chain (including calldata-based pubdata), ZK chains are generally optimized for using the state-diffs-based rollup model. -This document will describe how the standard pubdata format looks like. This is the format that is enforced for permanent rollup chains (FIXME: link to permanent rollup description). +This document describes what the standard pubdata format looks like. This is the format that is enforced for [permanent rollup chains](../../chain_management/admin_role.md#ispermanentrollup-setting).
Pubdata in zkSync can be divided up into 4 different categories: @@ -63,13 +63,13 @@ The L1Messenger contract will maintain a rolling hash of all the L2ToL1 logs `ch Note, that the user is charged for the necessary future computation that will be needed to calculate the final merkle root. It is roughly 4x higher than the cost to calculate the hash of the leaf, since the eventual tree might have 4x the number of nodes. In any case, this will likely be a relatively negligible part compared to the cost of the pubdata. -At the end of the execution, the bootloader will [provide](../../system-contracts/bootloader/bootloader.yul#L2621) (FIXME: check link) a list of all the L2ToL1 logs (this will be provided by the operator in the memory of the bootloader). The L1Messenger checks that the rolling hash from the provided logs is the same as in the `chainedLogsHash` and calculate the merkle tree of the provided messages. Right now, we always build the Merkle tree of size `16384`, but we charge the user as if the tree was built dynamically based on the number of leaves in there. The implementation of the dynamic tree has been postponed until the later upgrades. +At the end of the execution, the bootloader will [provide](../../../system-contracts/bootloader/bootloader.yul#L2676) a list of all the L2ToL1 logs (this will be provided by the operator in the memory of the bootloader). The L1Messenger checks that the rolling hash from the provided logs is the same as in the `chainedLogsHash` and calculates the merkle tree of the provided messages. Right now, we always build the Merkle tree of size `16384`, but we charge the user as if the tree was built dynamically based on the number of leaves in there. The implementation of the dynamic tree has been postponed until later upgrades. > Note, that unlike most other parts of pubdata, the user L2->L1 logs must always be validated by the trusted `L1Messenger` system contract.
If we moved this responsibility to the L2DAValidator, it would be possible for a malicious operator to provide incorrect data and forge transactions in the names of certain users. ### Long L2→L1 messages & bytecodes -If the user wants to send an L2→L1 message, its preimage is [appended](../../system-contracts/contracts/L1Messenger.sol#L122) to the message’s rolling hash too `chainedMessagesHash = keccak256(chainedMessagesHash, keccak256(message))`. +If the user wants to send an L2→L1 message, its preimage is [appended](../../../system-contracts/contracts/L1Messenger.sol#L126) to the message’s rolling hash as well: `chainedMessagesHash = keccak256(chainedMessagesHash, keccak256(message))`. A very similar approach is used for bytecodes, where their rolling hash is calculated and then the preimages are provided at the end of the batch to form the full pubdata for the batch. @@ -87,13 +87,13 @@ The only places where the built-in L2→L1 messaging should continue to be used: ### Obtaining `txNumberInBlock` -To have the same log format, the `txNumberInBlock` must be obtained. While it is internally counted in the VM, there is currently no opcode to retrieve this number. We will have a public variable `txNumberInBlock` in the `SystemContext`, which will be incremented with each new transaction and retrieve this variable from there. It is [zeroed out](https://github.com/code-423n4/2024-03-zksync/blob/7e85e0a997fee7a6d75cadd03d3233830512c2d2/code/system-contracts/contracts/SystemContext.sol#L486) at the end of the batch. +To have the same log format, the `txNumberInBlock` must be obtained. While it is internally counted in the VM, there is currently no opcode to retrieve this number. We will have a public variable `txNumberInBlock` in the `SystemContext`, which will be incremented with each new transaction, and we will retrieve this variable from there. It is [zeroed out](../../../system-contracts/contracts/SystemContext.sol#L515) at the end of the batch.
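The fixed-size Merkle tree of user logs described earlier (always `16384` leaves, with unused slots padded) can be sketched roughly as follows. This is an illustrative model only: it uses Python's `sha3_256` as a stand-in for the production `keccak256` (they are different functions), and the padding value is a placeholder assumption, not the real implementation.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in hash: the real system uses keccak256, which differs
    # from the NIST SHA3-256 used here for illustration.
    return hashlib.sha3_256(data).digest()

TREE_SIZE = 16384  # the tree is always built with this many leaves

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pad the leaves to the fixed tree size and fold pairwise to the root."""
    assert len(leaves) <= TREE_SIZE
    # Placeholder padding value for empty slots (an assumption for this sketch).
    level = leaves + [h(b"")] * (TREE_SIZE - len(leaves))
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([h(b"log-1"), h(b"log-2")])
print(root.hex())
```

Note how the tree shape is fixed regardless of how many logs were actually emitted; as the text explains, the user is nevertheless charged as if the tree were sized dynamically.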
## Bootloader implementation The bootloader has a memory segment dedicated to the ABI-encoded data of the L1ToL2Messenger to perform the `publishPubdataAndClearState` call. -At the end of the execution of the batch, the operator should provide the corresponding data into the bootloader memory, i.e user L2→L1 logs, long messages, bytecodes, etc. After that, the [call](../../system-contracts/bootloader/bootloader.yul#L2635) is performed to the `L1Messenger` system contract, that would call the L2DAValidator that should check the adherence of the pubdata to the specified format. +At the end of the execution of the batch, the operator should provide the corresponding data into the bootloader memory, i.e. user L2→L1 logs, long messages, bytecodes, etc. After that, a [call](../../../system-contracts/bootloader/bootloader.yul#L2676) is performed to the `L1Messenger` system contract, which calls the L2DAValidator to check the adherence of the pubdata to the specified format. # Bytecode Publishing @@ -105,7 +105,7 @@ Uncompressed bytecodes are included within the `totalPubdata` bytes and have the ## Compressed Bytecode Publishing -Unlike uncompressed bytecode which are published as part of `factoryDeps`, compressed bytecodes are published as long l2 → l1 messages which can be seen [here](../../system-contracts/contracts/Compressor.sol#L73). +Unlike uncompressed bytecodes, which are published as part of `factoryDeps`, compressed bytecodes are published as long l2 → l1 messages, which can be seen [here](../../../system-contracts/contracts/Compressor.sol#L78).
### Bytecode Compression Algorithm — Server Side diff --git a/docs/settlement_contracts/priority_queue/processing_of_l1->l2_txs.md b/docs/settlement_contracts/priority_queue/processing_of_l1->l2_txs.md index 3674d7d66..a2698971c 100644 --- a/docs/settlement_contracts/priority_queue/processing_of_l1->l2_txs.md +++ b/docs/settlement_contracts/priority_queue/processing_of_l1->l2_txs.md @@ -16,9 +16,9 @@ Please read the full [article](../../l2_system_contracts/system_contracts_bootlo A new priority operation can be appended by calling the `requestL2TransactionDirect` or `requestL2TransactionTwoBridges` methods on the `BridgeHub` smart contract. `BridgeHub` will ensure that the base token is deposited via `L1AssetRouter` and send the transaction request to the specified state transition contract (selected by the chainID). The state transition contract will perform several checks for the transaction, making sure that it is processable and provides enough fee to compensate the operator for this transaction. Then, this transaction will be [appended](../../l1-contracts/contracts/state-transition/chain-deps/facets/Mailbox.sol#569) to the priority tree (and optionally to the legacy priority queue). -> In the previous system, priority operations were structured in a queue. However, now they will be stored in an incremental merkle tree. The motivation for the tree structure will be displayed in sections below (FIXME: link). +> In the previous system, priority operations were structured in a queue. However, now they will be stored in an incremental merkle tree. The motivation for the tree structure can be read [here](./priority-queue.md). -The difference between `requestL2TransactionDirect` and `requestL2TransactionTwoBridges` is that the `msg.sender` on the L2 Transaction is the second bridge in the `requestL2TransactionTwoBridges` case, while it is the `msg.sender` of the `requestL2TransactionDirect` in the first case.
For more details read the [L1 ecosystem contracts](./L1%20ecosystem%20contracts.md) +The difference between `requestL2TransactionDirect` and `requestL2TransactionTwoBridges` is that the `msg.sender` on the L2 Transaction is the second bridge in the `requestL2TransactionTwoBridges` case, while it is the `msg.sender` of the `requestL2TransactionDirect` in the first case. For more details read the [bridgehub documentation](../../bridging/bridgehub/overview.md) ## Bootloader @@ -29,9 +29,9 @@ Whenever an operator sees a priority operation, it can include the transaction i Whenever a priority transaction is processed, the `numberOfPriorityTransactions` gets incremented by 1, while `priorityOperationsRollingHash` is assigned to `keccak256(priorityOperationsRollingHash, processedPriorityOpHash)`, where `processedPriorityOpHash` is the hash of the priority operation that has just been processed. -Also, for each priority transaction, we [emit](../../system-contracts/bootloader/bootloader.yul#L1046) a user L2→L1 log with its hash and result, which basically means that it will get Merklized and users will be able to prove on L1 that a certain priority transaction has succeeded or failed (which can be helpful to reclaim your funds from bridges if the L2 part of the deposit has failed). +Also, for each priority transaction, we [emit](../../../system-contracts/bootloader/bootloader.yul#L1046) a user L2→L1 log with its hash and result, which basically means that it will get Merklized and users will be able to prove on L1 that a certain priority transaction has succeeded or failed (which can be helpful to reclaim your funds from bridges if the L2 part of the deposit has failed). -Then, at the end of the batch, we [submit](../../system-contracts/bootloader/bootloader.yul#L4117-L4118) 2 L2→L1 log system log with these values. +Then, at the end of the batch, we [submit](../../../system-contracts/bootloader/bootloader.yul#L4117-L4118) two system L2→L1 logs with these values.
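The rolling-hash bookkeeping described above can be sketched as follows. This is a simplified model, not the production code: `sha3_256` stands in for `keccak256` (they differ), the zero initial value is an assumption for illustration, and the class and variable names merely mirror the identifiers used in the text.

```python
import hashlib

def keccak_like(data: bytes) -> bytes:
    # Stand-in: the real bootloader uses keccak256, not NIST SHA3-256.
    return hashlib.sha3_256(data).digest()

class PriorityOpTracker:
    """Mirrors `numberOfPriorityTransactions` / `priorityOperationsRollingHash`."""

    def __init__(self) -> None:
        self.number_of_priority_transactions = 0
        self.rolling_hash = b"\x00" * 32  # assumed initial value for this sketch

    def process(self, processed_priority_op_hash: bytes) -> None:
        self.number_of_priority_transactions += 1
        # rollingHash = keccak256(rollingHash || processedPriorityOpHash)
        self.rolling_hash = keccak_like(self.rolling_hash + processed_priority_op_hash)

# Bootloader side: fold each processed priority operation into the state.
tracker = PriorityOpTracker()
op_hashes = [keccak_like(b"op-1"), keccak_like(b"op-2")]
for op in op_hashes:
    tracker.process(op)

# Settlement-layer side: recompute the rolling hash from the claimed op
# hashes and compare it with the value submitted at the end of the batch.
recomputed = b"\x00" * 32
for op in op_hashes:
    recomputed = keccak_like(recomputed + op)
assert recomputed == tracker.rolling_hash
```

Because the fold is order-sensitive, matching rolling hashes imply the operator processed exactly the claimed operations in the claimed order, which is what the execution-stage check on L1 relies on.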
## Batch commit @@ -41,14 +41,14 @@ During batch commit, the contract will remember those values, but not validate t During batch execution, the contract will check that the `priorityOperationsRollingHash` rolling hash provided before was correct. There are two ways to do it: -- [Legacy one that uses priority queue](../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L397). We will pop `numberOfPriorityTransactions` from the top of priority queue and verify that the hashes match. -- [The new one that uses priority tree](../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L397). The operator would have to provide the hashes of these priority operations in an array, as well as proof that this entire segment belongs to the merkle tree. After it is verified that this array of leaves is correct, it will be checked whether the rolling hash of those is equal to the `priorityOperationsRollingHash`. +- [Legacy one that uses priority queue](../../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L397). We will pop `numberOfPriorityTransactions` from the top of the priority queue and verify that the hashes match. +- [The new one that uses priority tree](../../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L397). The operator would have to provide the hashes of these priority operations in an array, as well as proof that this entire segment belongs to the merkle tree. After it is verified that this array of leaves is correct, it will be checked whether the rolling hash of those is equal to the `priorityOperationsRollingHash`. # Upgrade transactions ## Initiation -Upgrade transactions can only be created during a system upgrade. It is done if the `DiamondProxy` delegatecalls to the implementation that manually puts this transaction into the storage of the DiamondProxy, this could happen on calling `upgradeChainFromVersion` function in `Admin.sol` on the State Transition contract.
Note, that since it happens during the upgrade, there is no “real” checks on the structure of this transaction. We do have [some validation](../../l1-contracts/contracts/upgrades/BaseZkSyncUpgrade.sol#L193), but it is purely on the side of the implementation which the `DiamondProxy` delegatecalls to and so may be lifted if the implementation is changed. +Upgrade transactions can only be created during a system upgrade. It is done if the `DiamondProxy` delegatecalls to the implementation that manually puts this transaction into the storage of the DiamondProxy; this could happen on calling the `upgradeChainFromVersion` function in `Admin.sol` on the State Transition contract. Note, that since it happens during the upgrade, there are no “real” checks on the structure of this transaction. We do have [some validation](../../../l1-contracts/contracts/upgrades/BaseZkSyncUpgrade.sol#L193), but it is purely on the side of the implementation which the `DiamondProxy` delegatecalls to and so may be lifted if the implementation is changed. The hash of the currently required upgrade transaction is stored under the `l2SystemContractsUpgradeTxHash` variable. @@ -65,7 +65,7 @@ The upgrade transactions are processed just like with priority transactions, wit ## Commit -After an upgrade has been initiated, it will be required that the next commit batches operation already contains the system upgrade transaction. It is [checked](../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L223) by verifying the corresponding L2→L1 log. +After an upgrade has been initiated, it will be required that the next commit batches operation already contains the system upgrade transaction. It is [checked](../../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L223) by verifying the corresponding L2→L1 log. We also remember that the upgrade transaction has been processed in this batch (by amending the `l2SystemContractsUpgradeBatchNumber` variable).
@@ -77,9 +77,9 @@ Note, however, that we do not “remember” that certain batches had a version ## Execute -Once batch with the upgrade transaction has been executed, we [delete](../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L486) them from storage for efficiency to signify that the upgrade has been fully processed and that a new upgrade can be initiated. +Once the batch with the upgrade transaction has been executed, we [delete](../../../l1-contracts/contracts/state-transition/chain-deps/facets/Executor.sol#L486) them from storage for efficiency, to signify that the upgrade has been fully processed and that a new upgrade can be initiated. -# Security considerations +## Security considerations Since the operator can put any data into the bootloader memory and for L1→L2 transactions the bootloader has to blindly trust it and rely on L1 contracts to validate it, it may be a very powerful tool for a malicious operator. Note, that while the governance mechanism is trusted, we try to limit our trust for the operator as much as possible, since in the future anyone would be able to become an operator. @@ -93,4 +93,4 @@ In the current system though having such logic would be dangerous and would allo The most important caveat of this malicious upgrade is that it may change the implementation of the `Keccak256` precompile to return any values that the operator needs. - When the `priorityOperationsRollingHash` is updated, instead of the “correct” rolling hash of the priority transactions, the one which would appear with the correct topmost priority operation is returned. The operator can’t amend the behaviour of `numberOfPriorityTransactions`, but it won’t help much, since the `priorityOperationsRollingHash` will match on L1 on the execution step. -That’s why the concept of the upgrade transaction is needed: this is the only transaction that can initiate transactions out of the kernel space and thus change bytecodes of system contracts.
That’s why it must be the first one and that’s why bootloader [emits](../../system-contracts/bootloader/bootloader.yul#L603) its hash via a system L2→L1 log before actually processing it. +That’s why the concept of the upgrade transaction is needed: this is the only transaction that can initiate transactions out of the kernel space and thus change bytecodes of system contracts. That’s why it must be the first one and that’s why the bootloader [emits](../../../system-contracts/bootloader/bootloader.yul#L603) its hash via a system L2→L1 log before actually processing it. diff --git a/docs/settlement_contracts/zkchain_basics.md b/docs/settlement_contracts/zkchain_basics.md index decdd88bc..5e91f7605 100644 --- a/docs/settlement_contracts/zkchain_basics.md +++ b/docs/settlement_contracts/zkchain_basics.md @@ -41,7 +41,7 @@ This facet responsible for the configuration setup and upgradabity, handling tas * Freezability: Executing the freezing/unfreezing of facets within the diamond proxy to safeguard the ecosystem during upgrades or in response to detected vulnerabilities. Control over the AdminFacet is divided between two main entities: -- CTM (Chain Type Manager, formerly known as `StateTransitionManager`) - Separate smart contract that can perform critical changes to the system as protocol upgrades. For more detailed information on its function and design, refer to the [Hyperchain section](https://github.com/code-423n4/2024-03-zksync/blob/main/docs/Smart%20contract%20Section/L1%20ecosystem%20contracts.md#st--stm). Although currently only one version of the CTM exists, the architecture allows for future versions to be introduced via subsequent upgrades. The owner of the CTM is the [decentralized governance](https://blog.zknation.io/introducing-zk-nation/), while for non-critical an Admin entity is used (see details below).
+- CTM (Chain Type Manager, formerly known as `StateTransitionManager`) - Separate smart contract that can perform critical changes to the system such as protocol upgrades. For more detailed information on its function and design, refer to [this document](../chain_management/chain_type_manager.md). Although currently only one version of the CTM exists, the architecture allows for future versions to be introduced via subsequent upgrades. The owner of the CTM is the [decentralized governance](https://blog.zknation.io/introducing-zk-nation/), while for non-critical changes an Admin entity is used (see details below). - Chain Admin - Multisig smart contract managed by each individual chain that can perform non-critical changes to the system such as granting validator permissions. ### MailboxFacet @@ -98,11 +98,11 @@ More about L1->L2 operations can be found [here](./Handling%20L1→L2%20ops%20on L2 -> L1 communication, in contrast to L1 -> L2 communication, is based only on transferring the information, and not on the transaction execution on L1. The full description of the mechanism for sending information from L2 to L1 can be found [here](./Standard%20pubdata%20format.md). -The Mailbox facet also facilitates L1<>L3 communications for those chains that settle on top of Gateway. The user interfaces for those are identical to the L1<>L2 communication described above. To learn more about L1<>L3 communication works, check out this document (FIXME: link) +The Mailbox facet also facilitates L1<>L3 communications for those chains that settle on top of Gateway. The user interfaces for those are identical to the L1<>L2 communication described above. To learn more about how L1<>L3 communication works, check out [this document](../gateway/messaging_via_gateway.md) and [this one](../gateway/nested_l3_l1_messaging.md). ### ExecutorFacet -A contract that accepts L2 batches, enforces data availability via DA validators and checks the validity of zk-proofs. You can read more about DA validators in this docuemnt (FIXME :link).
You can read more about DA validators in this docuemnt (FIXME :link). +A contract that accepts L2 batches, enforces data availability via DA validators and checks the validity of zk-proofs. You can read more about DA validators [in this docuemnt](../settlement_contracts/data_availability/custom_da.md). The state transition is divided into three stages: