Node cache is limit for maximum workset #686

Open
2 tasks
HerbertJordan opened this issue Dec 12, 2023 · 1 comment
Comments

@HerbertJordan
Collaborator

Our current MPT implementation utilises its internal node cache as a buffer for dirty nodes, which must remain in memory for the duration of a full block. Only at the end of a block are the modifications finalised and the nodes written to disk.

This cache holds, by default, 10M elements -- orders of magnitude larger than the working set of any block up until today. The cache's LRU policy is supposed to ensure that the nodes of the current working set are the last ones to be evicted, so that working-set nodes remain in memory until the end of the block (this is not tested, see #647).

A transaction mix producing a huge working set, however, could lead to the eviction of a node that has not yet been finalised. Currently this should result in a panic with a message stating that there was an attempt to write a node with a dirty hash to disk.
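The failure mode can be illustrated with a minimal Go sketch of a fixed-capacity LRU cache that refuses to evict dirty entries; all types and names here are hypothetical, not taken from the MPT implementation:

```go
package main

import (
	"container/list"
	"fmt"
)

// node models a trie node; dirty is true while its hash has not
// been finalised and it therefore must not be written to disk.
type node struct {
	key   int
	dirty bool
}

// lruCache is a hypothetical fixed-capacity LRU cache. Evicting a
// dirty node models the resource-exhaustion scenario from the issue.
type lruCache struct {
	capacity int
	order    *list.List // front = most recently used
	index    map[int]*list.Element
}

func newLRUCache(capacity int) *lruCache {
	return &lruCache{capacity, list.New(), map[int]*list.Element{}}
}

func (c *lruCache) put(n *node) error {
	if e, ok := c.index[n.key]; ok {
		c.order.MoveToFront(e)
		e.Value = n
		return nil
	}
	if c.order.Len() >= c.capacity {
		oldest := c.order.Back()
		victim := oldest.Value.(*node)
		if victim.dirty {
			// Mirrors the panic described above: flushing this node
			// would write a dirty hash to disk.
			return fmt.Errorf("evicting dirty node %d", victim.key)
		}
		c.order.Remove(oldest)
		delete(c.index, victim.key)
	}
	c.index[n.key] = c.order.PushFront(n)
	return nil
}

func main() {
	c := newLRUCache(2)
	c.put(&node{key: 1, dirty: true})
	c.put(&node{key: 2, dirty: true})
	// The working set exceeds the capacity; the least recently used
	// node is still dirty, so the insert fails instead of flushing it.
	if err := c.put(&node{key: 3, dirty: false}); err != nil {
		fmt.Println(err)
	}
}
```

Whether such an overflow should surface as an error, a forced flush of an unfinalised node, or backpressure on transaction execution is exactly the policy question raised in the steps below.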

Required steps:

  • clarify how such resource-exhaustion scenarios should be handled
  • implement a solution in alignment with the defined policy

HerbertJordan commented Dec 12, 2023

To estimate an upper bound on the working set size, the following heuristic can be used:

WS ≤ max_gas_per_block / min_costs_for_changing_state * maximum_number_of_nodes_changed_by_state_change

We have the following parameters:

  • max_gas_per_block: 31,000,000
  • min_costs_for_changing_state: 5,000 -- for updating a pre-existing slot or setting it to zero; increasing a nonce should cost more than 21,000, as should balance or code changes
  • maximum_number_of_nodes_changed_by_state_change: 40 + 64 = 104 if the maximum depth of the trie is reached; note that in practice we currently see a maximum depth of ~22

Thus, the maximum work set size for a block could be

WS ≤ 31,000,000 / 5,000 * 104 = 644,800

which is more than 15x smaller than the current cache size of 10M.
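The arithmetic above can be checked with a short Go snippet; the constant names are illustrative, not taken from the codebase:

```go
package main

import "fmt"

func main() {
	// Parameters from the heuristic above (illustrative names).
	const maxGasPerBlock = 31_000_000
	const minCostForChangingState = 5_000            // cheapest state-changing operation
	const maxNodesChangedPerStateChange = 40 + 64 // worst-case trie depth, account + storage

	// Upper bound on the working set size of a single block.
	ws := maxGasPerBlock / minCostForChangingState * maxNodesChangedPerStateChange
	fmt.Println(ws) // 644800
}
```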
