Particle creation is slow in parallel simulations #5011

Open
RudolfWeeber opened this issue Nov 6, 2024 · 0 comments
The time for creating a particle scales linearly with the number of MPI ranks at high core counts (>~64).
E.g., 30 ms per particle creation (with an explicitly specified particle id) at 512 MPI ranks on the ant cluster at ICP.

In my opinion, particle insertion should comprise

  • a broadcast from Python on rank 0 to all ranks (O(log n) latency in the number of ranks for a tree-based broadcast)
  • possibly some point-to-point communication at constant cost (see the sketch after this list)
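
A minimal sketch of that communication pattern, using mpi4py purely for illustration (ESPResSo's actual controller–worker communication goes through its C++ core); owner_rank and the local-storage step are hypothetical placeholders, not ESPResSo API:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def owner_rank(pos, box_l, n_ranks):
    # hypothetical: map a position to the rank owning that domain slab
    return int(pos[0] / box_l[0] * n_ranks) % n_ranks

# rank 0 holds the new particle data; everyone receives it in O(log n)
particle = {"id": 0, "pos": np.array([0.4, 0.5, 0.5])} if rank == 0 else None
particle = comm.bcast(particle, root=0)

# only the owning rank does the (constant-time) local insertion work
if rank == owner_rank(particle["pos"], [1.0, 1.0, 1.0], comm.Get_size()):
    pass  # store particle locally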

Steps to investigate

Use a Python profiler on the particle creation code in src/python/espressomd/particle_data.py with different numbers of MPI ranks to figure out where in the creation process the poor scaling appears.
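
For example, a minimal profiling sketch using cProfile around one bulk insertion (run under pypresso with varying --ntasks; the s.part.add call mirrors the script below):

import cProfile
import pstats

import espressomd
import numpy as np

s = espressomd.System(box_l=[1, 1, 1])

# profile one bulk insertion of 100 particles with explicit ids
profiler = cProfile.Profile()
profiler.enable()
s.part.add(id=range(100), pos=np.random.random((100, 3)) * s.box_l)
profiler.disable()

# show the 20 most expensive calls by cumulative time
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)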

The script was

import espressomd
import numpy as np
from time import time

s = espressomd.System(box_l=[1, 1, 1])
print(s.cell_system.node_grid)
for n in (1, 10, 100):
    tick = time()
    # bulk-create n particles with explicit ids at random positions
    s.part.add(id=range(n), pos=np.random.random((n, 3)) * s.box_l)
    tock = time()
    print(n, (tock - tick) / n)  # average wall time per particle
    s.part.clear()

And the commands to run were

module load spack/default gcc/12.3.0 cuda/12.3.0 openmpi/4.1.6 \
            fftw/3.3.10 boost/1.83.0 cmake/3.27.9 python/3.12.1

srun --cpu-bind=cores --ntasks=512 --time=60 -u ./pypresso ./test.py 