
perf: rework MemoryCell for cache efficiency #1672

Merged: 9 commits into main from perf/compact_memory_cell_main on May 6, 2024

Conversation

@Oppen Oppen (Contributor) commented Mar 18, 2024

Store data and metadata inline as a single `[u64; 4]` with 32-byte alignment, fitting a whole number of cells per cache line to reduce evictions and false sharing and to avoid split-line accesses.

Besides performance, an observable change is that `Memory::get` now always returns a `Cow::Owned` variant, because the decoding process always creates a new value.
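For readers unfamiliar with the layout, here is a minimal, hypothetical sketch of what such a packed cell could look like; the flag bits, names, and helper methods are illustrative assumptions, not the crate's actual encoding.

```rust
// Illustrative sketch only: a 32-byte, 32-byte-aligned cell that packs the
// value and its metadata into a single [u64; 4]. The bit assignments below
// are hypothetical, not cairo-vm's real layout.
#[repr(align(32))]
#[derive(Clone, Copy, Default)]
struct MemoryCell([u64; 4]);

impl MemoryCell {
    // Example metadata flags tucked into otherwise unused high bits of limb 3.
    const ACCESSED: u64 = 1 << 62;
    const PRESENT: u64 = 1 << 63;

    fn is_present(&self) -> bool {
        self.0[3] & Self::PRESENT != 0
    }

    fn mark_accessed(&mut self) {
        self.0[3] |= Self::ACCESSED;
    }
}

// Two 32-byte cells fit exactly in a 64-byte cache line, so sequential
// accesses never straddle a line boundary.
const _: () = assert!(core::mem::size_of::<MemoryCell>() == 32);
const _: () = assert!(core::mem::align_of::<MemoryCell>() == 32);
```

Because a lookup has to decode the packed words into a freshly built value rather than hand out a reference to one stored as-is, `Memory::get` can only return `Cow::Owned`, which is the observable change noted above.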

Checklist

  • Linked to GitHub issue
  • Unit tests added
  • Integration tests added
  • This change requires new documentation
    • Documentation has been added/updated
    • CHANGELOG has been updated


github-actions bot commented Mar 18, 2024

Benchmark Results for unmodified programs 🚀

| Command | Mean | Min | Max | Relative |
|---|---|---|---|---|
| base big_factorial | 2.396 ± 0.024 s | 2.378 s | 2.456 s | 1.15 ± 0.03 |
| head big_factorial | 2.082 ± 0.046 s | 2.057 s | 2.203 s | 1.00 |
| base big_fibonacci | 2.354 ± 0.032 s | 2.325 s | 2.406 s | 1.15 ± 0.03 |
| head big_fibonacci | 2.041 ± 0.051 s | 2.002 s | 2.183 s | 1.00 |
| base blake2s_integration_benchmark | 8.785 ± 0.109 s | 8.664 s | 8.939 s | 1.15 ± 0.02 |
| head blake2s_integration_benchmark | 7.658 ± 0.068 s | 7.552 s | 7.759 s | 1.00 |
| base compare_arrays_200000 | 2.469 ± 0.041 s | 2.441 s | 2.575 s | 1.14 ± 0.03 |
| head compare_arrays_200000 | 2.172 ± 0.047 s | 2.142 s | 2.289 s | 1.00 |
| base dict_integration_benchmark | 1.590 ± 0.007 s | 1.580 s | 1.599 s | 1.11 ± 0.01 |
| head dict_integration_benchmark | 1.427 ± 0.013 s | 1.415 s | 1.459 s | 1.00 |
| base field_arithmetic_get_square_benchmark | 1.449 ± 0.013 s | 1.427 s | 1.471 s | 1.11 ± 0.02 |
| head field_arithmetic_get_square_benchmark | 1.302 ± 0.027 s | 1.279 s | 1.376 s | 1.00 |
| base integration_builtins | 8.740 ± 0.091 s | 8.633 s | 8.874 s | 1.13 ± 0.02 |
| head integration_builtins | 7.703 ± 0.115 s | 7.582 s | 7.921 s | 1.00 |
| base keccak_integration_benchmark | 9.019 ± 0.111 s | 8.897 s | 9.209 s | 1.13 ± 0.02 |
| head keccak_integration_benchmark | 8.006 ± 0.117 s | 7.854 s | 8.241 s | 1.00 |
| base linear_search | 2.537 ± 0.049 s | 2.486 s | 2.612 s | 1.19 ± 0.02 |
| head linear_search | 2.129 ± 0.010 s | 2.119 s | 2.153 s | 1.00 |
| base math_cmp_and_pow_integration_benchmark | 1.951 ± 0.012 s | 1.930 s | 1.973 s | 1.13 ± 0.02 |
| head math_cmp_and_pow_integration_benchmark | 1.727 ± 0.023 s | 1.709 s | 1.785 s | 1.00 |
| base math_integration_benchmark | 1.742 ± 0.023 s | 1.724 s | 1.785 s | 1.08 ± 0.01 |
| head math_integration_benchmark | 1.608 ± 0.002 s | 1.603 s | 1.611 s | 1.00 |
| base memory_integration_benchmark | 1.373 ± 0.011 s | 1.362 s | 1.392 s | 1.14 ± 0.01 |
| head memory_integration_benchmark | 1.207 ± 0.010 s | 1.198 s | 1.232 s | 1.00 |
| base operations_with_data_structures_benchmarks | 1.996 ± 0.007 s | 1.986 s | 2.006 s | 1.10 ± 0.01 |
| head operations_with_data_structures_benchmarks | 1.819 ± 0.011 s | 1.802 s | 1.834 s | 1.00 |
| base pedersen | 564.2 ± 6.0 ms | 558.1 ms | 575.0 ms | 1.09 ± 0.01 |
| head pedersen | 517.0 ± 2.0 ms | 513.7 ms | 520.7 ms | 1.00 |
| base poseidon_integration_benchmark | 1.001 ± 0.001 s | 1.000 s | 1.002 s | 1.05 ± 0.01 |
| head poseidon_integration_benchmark | 0.958 ± 0.005 s | 0.952 s | 0.968 s | 1.00 |
| base secp_integration_benchmark | 2.024 ± 0.017 s | 2.006 s | 2.062 s | 1.09 ± 0.01 |
| head secp_integration_benchmark | 1.863 ± 0.008 s | 1.846 s | 1.876 s | 1.00 |
| base set_integration_benchmark | 760.1 ± 10.9 ms | 749.8 ms | 783.0 ms | 1.18 ± 0.02 |
| head set_integration_benchmark | 643.3 ± 1.0 ms | 641.5 ms | 644.9 ms | 1.00 |
| base uint256_integration_benchmark | 4.938 ± 0.179 s | 4.831 s | 5.422 s | 1.15 ± 0.04 |
| head uint256_integration_benchmark | 4.312 ± 0.058 s | 4.242 s | 4.419 s | 1.00 |


codecov bot commented Mar 18, 2024

Codecov Report

Attention: Patch coverage is 99.34641%, with 1 line in your changes missing coverage. Please review.

Project coverage is 94.81%. Comparing base (f3161e3) to head (f8398ca).

| Files | Patch % | Lines |
|---|---|---|
| vm/src/vm/vm_memory/memory_segments.rs | 75.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1672   +/-   ##
=======================================
  Coverage   94.80%   94.81%           
=======================================
  Files         101      101           
  Lines       38689    38715   +26     
=======================================
+ Hits        36679    36707   +28     
+ Misses       2010     2008    -2     


@Oppen Oppen force-pushed the perf/compact_memory_cell_main branch from 4bbc851 to 84dac58 on March 18, 2024 at 15:49
Store data and metadata inline as a single `[u64; 4]` with 32-byte
alignment, fitting a whole number of cells per cache line to reduce
evictions and false sharing and to avoid split-line accesses.

Besides performance, an observable change is that `Memory::get`
now always returns a `Cow::Owned` variant because the decoding
process always creates a new value.
@Oppen Oppen force-pushed the perf/compact_memory_cell_main branch from 84dac58 to 96835a2 on March 18, 2024 at 16:55
@fmoletta fmoletta (Contributor) left a comment

Amazing! 🚀

@fmoletta fmoletta enabled auto-merge March 26, 2024 20:26
@fmoletta fmoletta disabled auto-merge March 26, 2024 20:27
@pefontana pefontana (Collaborator) left a comment

Before merging, I think it would be nice to add a workflow that benchmarks hyper-threading performance, so we can measure the improvement made here.
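As a rough, hypothetical illustration of what such a workflow exercises (this is a sketch, not the repository's actual `hyper_threading` benchmark binary), a Rayon-based harness can be built once per branch and then timed under different `RAYON_NUM_THREADS` values, which is what the bot report below does with hyperfine:

```rust
// Hypothetical harness sketch: run the same CPU-bound workload through
// Rayon's global thread pool, whose size honors RAYON_NUM_THREADS.
use rayon::prelude::*;

fn simulated_run(seed: u64) -> u64 {
    // Stand-in for executing one Cairo program; replace with real VM work.
    (0..1_000_000u64).fold(seed, |acc, x| acc.wrapping_mul(6364136223846793005).wrapping_add(x))
}

fn main() {
    let threads = std::env::var("RAYON_NUM_THREADS").unwrap_or_else(|_| "default".into());
    // Execute many independent runs in parallel and fold the results with
    // wrapping addition so the work cannot be optimized away.
    let checksum: u64 = (0..64u64)
        .into_par_iter()
        .map(simulated_run)
        .reduce(|| 0, u64::wrapping_add);
    println!("threads={threads} checksum={checksum}");
}
```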


Benchmark Results for modified programs 🚀

| Command | Mean | Min | Max | Relative |
|---|---|---|---|---|
| head blake2s_integration_benchmark | 7.640 ± 0.106 s | 7.494 s | 7.889 s | 1.00 |
| head compare_arrays_200000 | 2.165 ± 0.054 s | 2.111 s | 2.278 s | 1.00 |
| head dict_integration_benchmark | 1.391 ± 0.011 s | 1.375 s | 1.407 s | 1.00 |
| head field_arithmetic_get_square_benchmark | 1.204 ± 0.010 s | 1.190 s | 1.216 s | 1.00 |
| head integration_builtins | 7.666 ± 0.082 s | 7.568 s | 7.825 s | 1.00 |
| head keccak_integration_benchmark | 7.941 ± 0.117 s | 7.787 s | 8.189 s | 1.00 |
| head linear_search | 2.119 ± 0.068 s | 2.069 s | 2.297 s | 1.00 |
| head math_cmp_and_pow_integration_benchmark | 1.412 ± 0.007 s | 1.401 s | 1.420 s | 1.00 |
| head math_integration_benchmark | 1.401 ± 0.008 s | 1.386 s | 1.410 s | 1.00 |
| head memory_integration_benchmark | 1.177 ± 0.008 s | 1.167 s | 1.193 s | 1.00 |
| head operations_with_data_structures_benchmarks | 1.481 ± 0.011 s | 1.467 s | 1.500 s | 1.00 |
| head pedersen | 560.4 ± 5.7 ms | 556.2 ms | 575.7 ms | 1.00 |
| head poseidon_integration_benchmark | 956.2 ± 18.6 ms | 939.7 ms | 996.4 ms | 1.00 |
| head secp_integration_benchmark | 1.778 ± 0.009 s | 1.766 s | 1.795 s | 1.00 |
| head set_integration_benchmark | 659.7 ± 1.1 ms | 658.1 ms | 661.5 ms | 1.00 |
| head uint256_integration_benchmark | 4.256 ± 0.040 s | 4.200 s | 4.324 s | 1.00 |

github-actions bot commented Apr 16, 2024

**Hyper Threading Benchmark results**




hyperfine -r 2 -n "hyper_threading_main threads: 1" 'RAYON_NUM_THREADS=1 ./hyper_threading_main' -n "hyper_threading_pr threads: 1" 'RAYON_NUM_THREADS=1 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 1
  Time (mean ± σ):     30.318 s ±  0.069 s    [User: 29.581 s, System: 0.735 s]
  Range (min … max):   30.269 s … 30.367 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 1
  Time (mean ± σ):     27.348 s ±  0.069 s    [User: 26.536 s, System: 0.810 s]
  Range (min … max):   27.299 s … 27.396 s    2 runs
 
Summary
  'hyper_threading_pr threads: 1' ran
    1.11 ± 0.00 times faster than 'hyper_threading_main threads: 1'




hyperfine -r 2 -n "hyper_threading_main threads: 2" 'RAYON_NUM_THREADS=2 ./hyper_threading_main' -n "hyper_threading_pr threads: 2" 'RAYON_NUM_THREADS=2 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 2
  Time (mean ± σ):     16.390 s ±  0.060 s    [User: 30.017 s, System: 0.742 s]
  Range (min … max):   16.347 s … 16.432 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 2
  Time (mean ± σ):     14.698 s ±  0.031 s    [User: 27.005 s, System: 0.936 s]
  Range (min … max):   14.677 s … 14.720 s    2 runs
 
Summary
  'hyper_threading_pr threads: 2' ran
    1.12 ± 0.00 times faster than 'hyper_threading_main threads: 2'




hyperfine -r 2 -n "hyper_threading_main threads: 4" 'RAYON_NUM_THREADS=4 ./hyper_threading_main' -n "hyper_threading_pr threads: 4" 'RAYON_NUM_THREADS=4 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 4
  Time (mean ± σ):     12.181 s ±  0.343 s    [User: 42.150 s, System: 1.005 s]
  Range (min … max):   11.938 s … 12.424 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 4
  Time (mean ± σ):     10.678 s ±  0.173 s    [User: 39.224 s, System: 1.083 s]
  Range (min … max):   10.556 s … 10.800 s    2 runs
 
Summary
  'hyper_threading_pr threads: 4' ran
    1.14 ± 0.04 times faster than 'hyper_threading_main threads: 4'




hyperfine -r 2 -n "hyper_threading_main threads: 6" 'RAYON_NUM_THREADS=6 ./hyper_threading_main' -n "hyper_threading_pr threads: 6" 'RAYON_NUM_THREADS=6 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 6
  Time (mean ± σ):     11.403 s ±  0.032 s    [User: 42.757 s, System: 0.993 s]
  Range (min … max):   11.380 s … 11.426 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 6
  Time (mean ± σ):     10.673 s ±  0.070 s    [User: 39.048 s, System: 1.045 s]
  Range (min … max):   10.624 s … 10.723 s    2 runs
 
Summary
  'hyper_threading_pr threads: 6' ran
    1.07 ± 0.01 times faster than 'hyper_threading_main threads: 6'




hyperfine -r 2 -n "hyper_threading_main threads: 8" 'RAYON_NUM_THREADS=8 ./hyper_threading_main' -n "hyper_threading_pr threads: 8" 'RAYON_NUM_THREADS=8 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 8
  Time (mean ± σ):     11.444 s ±  0.047 s    [User: 42.766 s, System: 1.107 s]
  Range (min … max):   11.411 s … 11.477 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 8
  Time (mean ± σ):     10.627 s ±  0.172 s    [User: 39.490 s, System: 1.128 s]
  Range (min … max):   10.506 s … 10.749 s    2 runs
 
Summary
  'hyper_threading_pr threads: 8' ran
    1.08 ± 0.02 times faster than 'hyper_threading_main threads: 8'




hyperfine -r 2 -n "hyper_threading_main threads: 16" 'RAYON_NUM_THREADS=16 ./hyper_threading_main' -n "hyper_threading_pr threads: 16" 'RAYON_NUM_THREADS=16 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 16
  Time (mean ± σ):     11.571 s ±  0.138 s    [User: 42.378 s, System: 1.065 s]
  Range (min … max):   11.473 s … 11.668 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 16
  Time (mean ± σ):     10.613 s ±  0.146 s    [User: 39.867 s, System: 1.214 s]
  Range (min … max):   10.510 s … 10.716 s    2 runs
 
Summary
  'hyper_threading_pr threads: 16' ran
    1.09 ± 0.02 times faster than 'hyper_threading_main threads: 16'


@pefontana pefontana (Collaborator) left a comment

We need to check the benchmarks here; they are giving wrong numbers.

@pefontana pefontana added this pull request to the merge queue May 6, 2024
@pefontana (Collaborator)

Benchmarks are going OK now

Merged via the queue into main with commit bc5a14e May 6, 2024
72 checks passed
@pefontana pefontana deleted the perf/compact_memory_cell_main branch May 6, 2024 15:18