Predicting from Laplacian eigenfunctions #184
@JoshVStaden this is the issue we'll use to track development of this feature. Let me know when you're ready to start and I'll begin guiding you through the code modifications.

@sjperkins Thanks. I am writing exams for the next two weeks and finish on the 15th of June. Any time after that is good for me.

Hi Simon, I am finished with exams; let me know when you want to start with the modifications.
OK, so you're going to be writing a tensorflow operator. The basic tensorflow tutorial is here. However, to get a closer idea of the type of operator you'll be implementing, look here, and specifically at the Gaussian shape operator files. This operator applies the shape parameters of a Gaussian at all points in the UV plane:

```
gauss_shape_op.h        # General definition of the Gaussian Shape Operator class
gauss_shape_op_cpu.cpp  # Concrete instantiation of the CPU operator
gauss_shape_op_cpu.h    # General templated definition of the CPU operator
gauss_shape_op_gpu.cu   # Concrete instantiation of the GPU operator
gauss_shape_op_gpu.cuh  # General templated definition of the GPU operator
test_gauss_shape.py     # Test case comparing the CPU and GPU operators
```

You'll notice there's a fair amount of C++, tensorflow and CUDA boilerplate code that needs to be generated. I would suggest using tfopgen to generate the boilerplate for this laplacian operator. I would also strongly suggest implementing just the CPU version first (like the Gaussian CPU operator), as it's easier and can then be plugged into the pipeline to confirm the output. After that we can take a look at the GPU operator.

A note on naming: I would suggest naming the dimension associated with laplacian eigenvector sources consistently with the existing source-type dimensions:

```
{
    "laplacian eigenvectors": "nlesrc",
    "point": "npsrc",
    "gaussian": "ngsrc",
    "sersic": "nssrc",
}
```

Also, you'll need to pick a particular ordering for your laplacian shape parameter array; see, for example, how the gaussian shape parameter array schema is defined. There's probably a fair amount to digest there, but a good first start would be defining the inputs and outputs for the laplacian eigenvector operator in tfopgen's YAML format and getting the skeleton compiling.
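To make the Gaussian shape operator's computation concrete, here is a minimal numpy sketch of the analytic Fourier transform of an elliptical Gaussian evaluated at UV points. The function name, the (emaj, emin, pa) parameterisation, and the FWHM scaling constant are illustrative assumptions, not montblanc's exact convention:

```python
import numpy as np

def gaussian_uv_shape(u, v, emaj, emin, pa):
    """Envelope of an elliptical Gaussian sampled in the UV plane.

    u, v : arrays of UV coordinates (wavelengths)
    emaj, emin : major/minor axis extents (radians)
    pa : position angle (radians)
    Illustrative convention only -- not montblanc's exact formula.
    """
    # Rotate the UV coordinates by the position angle
    up = u * np.cos(pa) + v * np.sin(pa)
    vp = -u * np.sin(pa) + v * np.cos(pa)
    # The FT of a Gaussian is a Gaussian: scale each rotated axis
    # by the corresponding image-plane extent (FWHM-style constant)
    scale = (np.pi ** 2) / (4.0 * np.log(2.0))
    return np.exp(-scale * ((up * emaj) ** 2 + (vp * emin) ** 2))

# At the origin of the UV plane the envelope is exactly 1
shape = gaussian_uv_shape(np.array([0.0]), np.array([0.0]), 1e-4, 5e-5, 0.3)
```

A CPU operator would evaluate roughly this expression in a loop over sources and UV samples; the laplacian operator replaces the Gaussian envelope with the FT of the eigenfunction basis.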
Thanks. I'll get right on it.

Also I suggest you fork the repository, create a branch with your changes and push those up to your fork. Then we can create an ongoing pull request to guide your code development.

@JoshVStaden also feel free to ask me if you have any questions regarding the math...
I seem to be having a problem running test_gauss_shape.py: when running load_tf_lib(), it tries to find a file called montblanc/extensions/tensorflow/rime.so, which doesn't exist in the files. Is there perhaps a way of bypassing this, or am I meant to install this file?
So a C++ and CUDA extension needs to be compiled. This happens with a

```
~/src/montblanc $ pip install -e .
```

if you've cloned montblanc into ~/src/montblanc. Other installation instructions are here.
Montblanc currently only predicts from delta functions, 2D Gaussians and Sersic functions. It should be possible to predict from arbitrary functions by computing the Fourier transform of a Gaussian process analytically. Doing this approximately for the class of stationary and isotropic processes should be fairly simple if we take the reduced rank approach detailed here. Sidestepping all the gory details, and as a first step, this basically amounts to finding the 2D Fourier transform of functions of the form (or linear combinations thereof)
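The equation referenced here appears to have been an image that did not survive the transcript. A plausible reconstruction, consistent with the surrounding description (separable sinusoids supported on a rectangle) and with the reduced-rank Gaussian process literature, is:

```latex
\phi_{ij}(l, m) =
  \frac{1}{\sqrt{L_l L_m}}
  \sin\!\left(\frac{\pi i\,(l + L_l)}{2 L_l}\right)
  \sin\!\left(\frac{\pi j\,(m + L_m)}{2 L_m}\right)
```

The $1/\sqrt{L_l L_m}$ normalisation is an assumption; only the separable sinusoidal form is implied by the surrounding text.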
where i and j are integers that we sum over, and L_l and L_m are constants which define the support of the underlying function (i.e. the domain in which it is non-zero). Note that, since these are separable functions and the FT of a sinusoid is simply a delta function, this should be a simple computation. As it stands these functions are only supported on -L_l < l < L_l and -L_m < m < L_m, but it should be simple to translate them to a different domain.
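As a quick numeric sanity check of the claim that the FT of a sinusoid is a (pair of) delta function(s), one can FFT a single 1D basis factor on a grid. The values of L, the index i and the grid size below are arbitrary illustrative choices; an even i gives an integer number of cycles over the window, so there is no spectral leakage:

```python
import numpy as np

L = 1.0    # support half-width (illustrative)
i = 4      # basis index; even => integer cycles on [-L, L), no leakage
n = 1024   # grid points

l = np.linspace(-L, L, n, endpoint=False)
# One 1D factor of the separable basis: sin(pi * i * (l + L) / (2 L))
phi = np.sin(np.pi * i * (l + L) / (2.0 * L))

spectrum = np.abs(np.fft.rfft(phi))
peak_bin = int(np.argmax(spectrum))
# The energy concentrates in a single bin (i/2 cycles over the window);
# the continuous FT is correspondingly a delta pair at +/- i/(4 L)
```

Here the peak lands at bin i/2 and every other bin is numerically zero, confirming that predicting from these basis functions only requires evaluating visibilities at a handful of delta-function locations per basis function.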
The above basis functions are defined over a rectangular domain. In the future we may wish to also consider circular domains in which case the basis functions are Bessel functions (the FTs of which are also known analytically).