Our current MPT implementation utilises its internal node cache as a buffer for dirty nodes that must remain in memory for the duration of a full block. Only at the end of a block are the modifications finalised, at which point the nodes may be written to disk.
This cache is, by default, 10M elements large -- which is orders of magnitude larger than the working set of any block seen to date. The cache's LRU policy is supposed to ensure that the nodes of the current working set are the last ones to be evicted, so working-set nodes should remain in memory until the end of the block (this is not tested, see #647).
A transaction mix producing a huge working set, however, could lead to the eviction of an element that has not been finalised, which currently should result in a panic with a message stating that there was an attempt to write a node with a dirty hash to the disk.
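For illustration, here is a minimal Go sketch of the failure mode described above. All names (nodeCache, touch, dirtyHash) are hypothetical and not taken from the actual implementation; it only shows how an LRU eviction of a not-yet-finalised node would have to panic rather than write a dirty hash to disk.

```go
package main

// Minimal sketch of the failure mode described above; names are illustrative.
// An LRU cache evicts its least recently used entry when full; if that entry
// still carries a dirty (non-finalised) hash, it cannot be written to disk
// and the eviction panics.

import "container/list"

type node struct {
	id        int
	dirtyHash bool // true until the node is finalised at the end of the block
}

type nodeCache struct {
	capacity int
	order    *list.List            // front = most recently used
	index    map[int]*list.Element // node id -> list element
}

func newNodeCache(capacity int) *nodeCache {
	return &nodeCache{capacity: capacity, order: list.New(), index: map[int]*list.Element{}}
}

// touch inserts or refreshes a node, evicting the least recently used
// entry if the cache is full.
func (c *nodeCache) touch(n *node) {
	if e, ok := c.index[n.id]; ok {
		c.order.MoveToFront(e)
		return
	}
	if c.order.Len() >= c.capacity {
		victim := c.order.Back()
		evicted := victim.Value.(*node)
		if evicted.dirtyHash {
			// The working set outgrew the cache before the block was finalised.
			panic("attempt to write node with dirty hash to disk")
		}
		c.order.Remove(victim)
		delete(c.index, evicted.id)
	}
	c.index[n.id] = c.order.PushFront(n)
}

func main() {
	c := newNodeCache(2)
	c.touch(&node{id: 1, dirtyHash: true})
	c.touch(&node{id: 2, dirtyHash: true})
	c.touch(&node{id: 3, dirtyHash: true}) // evicts node 1, which is still dirty -> panic
}
```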
Required steps:
clarify how such resource exhaustion scenarios should be handled
implement a solution in alignment with the defined policy
min_costs_for_changing_state: 5,000 -- for updating a preexisting slot or setting it to zero; increasing a nonce should cost more than 21,000, as should balance or code changes;
maximum_number_of_nodes_changed_by_state_change: 40 + 64 = 104 if the maximum depth of the trie is reached; note that in practice we currently see a maximum depth of ~22;
Thus, for a block gas limit of 31,000,000, the maximum working set size of a block could be
WS ≤ 31,000,000 / 5,000 × 104 = 644,800
which is more than 15x smaller than the current cache size of 10M elements.
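A quick cross-check of this estimate as a Go sketch; the constant names are illustrative, and treating the 31,000,000 figure as the block gas limit is an assumption based on the formula above.

```go
package main

import "fmt"

func main() {
	// Back-of-the-envelope check of the estimate above; names are illustrative.
	const blockGasLimit = 31_000_000       // assumed block gas limit
	const minCostPerStateChange = 5_000    // cheapest state change (gas)
	const maxNodesPerStateChange = 40 + 64 // = 104 at maximum trie depth

	maxWorkingSet := blockGasLimit / minCostPerStateChange * maxNodesPerStateChange
	fmt.Println(maxWorkingSet)              // 644800
	fmt.Println(10_000_000 / maxWorkingSet) // 15 -> the 10M cache is >15x larger
}
```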