Prefer scipy.fft to numpy.fft #211
Comments
Thanks for raising this @mreineck. As I understand it, scipy.fft is pypocketfft? In that case, we'd certainly want to leverage the performance enhancements. @landmanbester do you think this is something you could address in #204, or would that be outside the scope of the PR? Of course, since you're introducing a dependency on ducc0, you could use pypocketfft within the gridding functionality. It seems that scipy.fft could be used in the dask wrappers: https://github.com/ska-sa/codex-africanus/search?q=fft&unscoped_q=fft scipy is an optional codex dependency. I wonder whether we should force the user to install scipy in order to use the nifty gridding functionality, or silently fall back to numpy if scipy is not installed? Anyone have strong opinions here?
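For what it's worth, the silent-fallback option could be as small as a module-level import switch. A minimal sketch, assuming only the fftn-style interface that numpy.fft and scipy.fft share is needed (image_to_grid is purely illustrative, not existing codex-africanus API):

```python
import numpy as np

# Prefer scipy.fft when it is installed, otherwise fall back to numpy.fft.
# Both modules expose fftn with a compatible signature for this usage.
try:
    import scipy.fft as fft_backend
except ImportError:
    import numpy.fft as fft_backend


def image_to_grid(image):
    """Illustrative helper: 2D transform over the last two axes."""
    return fft_backend.fftn(np.asarray(image), axes=(-2, -1))
```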
Yes and no :) It's To make things even more confusing,
In fact I opened another issue earlier today for
In the case that
Yep, I could do this in that PR. Maybe we should consider dask wrappers for the FFTs in ducc0? This would be a first step towards recognising that, eventually, we are going to need a distributed FFT. @mreineck do you already have something like that in NIFTy?
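A rough sketch of what a dask wrapper could look like, assuming ducc0.fft.c2c with its axes, forward and nthreads arguments, and requiring the transformed axes to live in a single chunk (dask_fft2 is a hypothetical name, not existing codex-africanus API):

```python
import dask.array as da
import numpy as np
import ducc0


def _block_fft(x, nthreads=1):
    # Complex-to-complex transform over the last two axes of one block.
    return ducc0.fft.c2c(x, axes=(-2, -1), forward=True, nthreads=nthreads)


def dask_fft2(x, nthreads=1):
    """FFT over the last two axes; those axes must not be chunked."""
    if any(len(c) > 1 for c in x.chunks[-2:]):
        raise ValueError("the last two axes must each be a single chunk")
    return x.map_blocks(_block_fft, nthreads=nthreads, dtype=np.complex128)


# Example: a stack of images, chunked only over the leading axis.
img = da.random.random((16, 1024, 1024), chunks=(1, 1024, 1024)).astype(np.complex128)
grid = dask_fft2(img, nthreads=4)
```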
You mean an FFT for arrays that are distributed over several tasks along one axis? Sooner or later I need to have MPI array transposition code in ducc as well, but that will probably not happen very soon.
I meant any out-of-memory 2D FFT. I guess splitting up tasks along one of the spatial axes is one way to go. We saw a talk by someone at Cambridge who was trying to do this, and he made it look pretty difficult. Philipp told me that it was in principle in NIFTy already, just not the IO part of it. I am mainly thinking about this because of the promised 60kx60k SKA images, but I know some wide-field VLBI people who would already find this very useful.
It's what FFTW does, and it seems to work reasonably well. Of course, a full FFT requires two full array transpositions, which is not cheap compared to the pure FFT computation cost. As long as the array fits into a single node's memory, I'd definitely prefer a multi-threaded transform over MPI (and perhaps do simultaneous independent FFTs on multiple nodes, if that's possible) ... but 60kx60k is getting close to the limit. Assuming I had to work with a huge set of visibilities and such a large image, I'd try the following approach with
This won't scale perfectly, but on the other hand it's really simple to set up while still being quite efficient. In any case, the only missing ingredient for the kind of MPI-distributed FFT you are looking for is a Python functionality that can transpose an MPI-distributed 2D array. I'm pretty sure this must exist already...
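For reference, a minimal mpi4py sketch (my own, not existing ducc or NIFTy code) of transposing a 2D array that is distributed along its first axis, assuming both dimensions are divisible by the number of ranks; run it under mpirun:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
P = comm.Get_size()
rank = comm.Get_rank()

# Toy global shape; assume N and M are divisible by P for simplicity.
N, M = 8 * P, 4 * P
n_loc, m_loc = N // P, M // P

# Each rank holds a contiguous block of rows of the global (N, M) array.
A = np.arange(N * M, dtype=np.float64).reshape(N, M)   # only for checking
local = A[rank * n_loc:(rank + 1) * n_loc, :].copy()

# Pack the P column pieces contiguously so Alltoall can scatter them.
send = np.ascontiguousarray(local.reshape(n_loc, P, m_loc).transpose(1, 0, 2))
recv = np.empty_like(send)
comm.Alltoall(send, recv)

# Reassemble: this rank now owns rows [rank*m_loc, (rank+1)*m_loc) of A.T.
local_T = recv.transpose(2, 0, 1).reshape(m_loc, N)

assert np.array_equal(local_T, A.T[rank * m_loc:(rank + 1) * m_loc, :])
```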
As I understand the problem, the challenge of a distributed FFT is handling the communication pattern inherent in the butterfly diagram: at some point one needs to broadcast all images to all nodes, and it just becomes really expensive from a data-transfer POV?
As long as your 1D FFTs fit into memory, you shouldn't need to worry about the butterfly. The normal strategy is:
1. do the 1D FFTs along the axis that is stored contiguously on each node,
2. transpose the distributed array,
3. do the 1D FFTs along the other (now contiguous) axis,
4. transpose back if the original data distribution is needed.
It's not terribly complicated, but it is indeed expensive.
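Emulated on a single node, that strategy boils down to 1D transforms plus transpositions; only the transpose steps would need inter-node communication in the distributed case. A small check with scipy.fft:

```python
import numpy as np
import scipy.fft

rng = np.random.default_rng(42)
a = rng.standard_normal((256, 512)) + 1j * rng.standard_normal((256, 512))

step1 = scipy.fft.fft(a, axis=1)        # 1D FFTs along the contiguous axis
step2 = scipy.fft.fft(step1.T, axis=1)  # transpose, then 1D FFTs again
result = step2.T                        # transpose back

assert np.allclose(result, scipy.fft.fft2(a))
```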
I just noticed that the code uses numpy.fft in many places. If you have the choice, I strongly suggest using scipy.fft instead, because it is much faster for multi-D transforms. Also, the interface is practically identical (in contrast to the older scipy.fftpack module).
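To illustrate the near-identical interface: the call below works with either module, and scipy.fft additionally accepts a workers argument to parallelise multi-D transforms:

```python
import numpy as np
import scipy.fft

x = np.random.default_rng(0).standard_normal((256, 256, 64))

y_np = np.fft.fftn(x, axes=(0, 1))                 # numpy backend
y_sp = scipy.fft.fftn(x, axes=(0, 1), workers=4)   # scipy backend, threaded

assert np.allclose(y_np, y_sp)
```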