
Predicting from Laplacian eigenfunctions #184

Open
landmanbester opened this issue May 31, 2017 · 9 comments

@landmanbester

Montblanc currently only predicts from delta functions, 2D Gaussians and Sersic functions. It should be possible to predict from arbitrary functions by computing the Fourier transform of a Gaussian process analytically. Doing this approximately for the class of stationary and isotropic processes should be fairly simple if we take the reduced rank approach detailed here. Sidestepping all the gory details, and as a first step, this basically amounts to finding the 2D Fourier transform of functions of the form (or linear combinations thereof)

φ_i(l) φ_j(m) = (1 / sqrt(L_l)) sin( π i (l + L_l) / (2 L_l) ) · (1 / sqrt(L_m)) sin( π j (m + L_m) / (2 L_m) )

where i and j are integers that we sum over, and L_l, L_m are constants which define the support of the underlying function (i.e. the domain on which it is non-zero). Note that, since these are separable functions and the FT of a sinusoid is simply a pair of delta functions, this should be a simple computation. As it stands, these functions are only supported on -L_l < l < L_l and -L_m < m < L_m, but it should be simple to translate them to a different domain.

The above basis functions are defined over a rectangular domain. In the future we may wish to also consider circular domains in which case the basis functions are Bessel functions (the FTs of which are also known analytically).
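For concreteness, the rectangular-domain basis functions above can be evaluated with a short numpy sketch. This is a minimal illustration only: it assumes the standard reduced-rank form with Dirichlet boundaries (zero outside the support), and the helper names `phi`, `phi2d` and the coefficient values are made up for the example.

```python
import numpy as np

def phi(x, i, L):
    """1D Laplacian eigenfunction on [-L, L] with Dirichlet boundaries.

    Assumed form: sin(pi * i * (x + L) / (2 * L)) / sqrt(L) inside the
    domain, and zero outside it.
    """
    inside = np.abs(x) <= L
    return np.where(inside, np.sin(np.pi * i * (x + L) / (2 * L)) / np.sqrt(L), 0.0)

def phi2d(l, m, i, j, L_l, L_m):
    """Separable 2D basis function phi_i(l) * phi_j(m)."""
    return phi(l, i, L_l) * phi(m, j, L_m)

# Evaluate an arbitrary linear combination of basis functions on a grid
L_l, L_m = 1.0, 1.0
l = np.linspace(-L_l, L_l, 65)
m = np.linspace(-L_m, L_m, 65)
ll, mm = np.meshgrid(l, m, indexing="ij")

coeffs = {(1, 1): 0.7, (2, 1): 0.2, (1, 3): 0.1}  # made-up example weights
image = sum(c * phi2d(ll, mm, i, j, L_l, L_m) for (i, j), c in coeffs.items())
print(image.shape)  # (65, 65)
```

Each 1D factor vanishes at the domain boundary and is orthonormal on [-L, L], which is what makes the coefficients of the linear combination independent.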

@sjperkins (Member)

@JoshVStaden this is the issue we'll use to track development of this feature. Let me know when you're ready to start and I'll begin guiding you through the code modifications.

@JoshVStaden

@sjperkins Thanks. I am writing exams for the next 2 weeks, and I finish on the 15th of June. Any time after that is good for me.

@sjperkins sjperkins assigned sjperkins and unassigned sjperkins May 31, 2017
@JoshVStaden

Hi Simon, I am finished with exams, let me know when you want to start with the modifications.

@sjperkins (Member)

sjperkins commented Jun 21, 2017

OK, so you're going to be writing a tensorflow operator. The basic tensorflow tutorial is here.

However, to get a closer idea of the type of operator you'll be implementing, look here, and specifically at the Gaussian shape operator files. This operator applies the shape parameters of a Gaussian at all points in the UV plane:

gauss_shape_op.h        # General definition of the Gaussian shape operator class
gauss_shape_op_cpu.cpp  # Concrete instantiation of the CPU operator
gauss_shape_op_cpu.h    # General templated definition of the CPU operator
gauss_shape_op_gpu.cu   # Concrete instantiation of the GPU operator
gauss_shape_op_gpu.cuh  # General templated definition of the GPU operator
test_gauss_shape.py     # Test case comparing the CPU and GPU operators

You'll notice there's a fair amount of C++, tensorflow and CUDA boilerplate code that needs to be generated. I would suggest using tfopgen to generate the boilerplate for this Laplacian operator.

I would also strongly suggest implementing the CPU version first (like the Gaussian CPU operator), as it's easier and can then be plugged into the pipeline to confirm the output. After that we can take a look at the GPU operator.

A note on naming. I would suggest that you name the dimension associated with the Laplacian eigenvectors nlesrc, following the naming convention below:

{
  "laplacian eigenvectors": "nlesrc",
  "point": "npsrc",
  "gaussian": "ngsrc",
  "sersic": "nssrc"
}

Also, you'll need to pick a particular ordering for your Laplacian shape parameter array. For example, the Gaussian shape parameter array schema is defined as (3, 'ngsrc'), defining the 3 shape parameters for each Gaussian. Then, if your Laplacian eigenvector has 5 shape parameters, for example, you should set the schema (or general shape) of the array to be (5, 'nlesrc'). This ordering is useful for writing the CUDA kernels.
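To make that ordering concrete, here is a minimal numpy sketch. The parameter count, source counts, and array names are made up for illustration; montblanc's actual schemas may differ.

```python
import numpy as np

# Schema (3, 'ngsrc'): 3 shape parameters for each of ngsrc gaussian sources
ngsrc = 10
gauss_shape = np.zeros((3, ngsrc))

# Hypothetical schema (5, 'nlesrc') for the Laplacian eigenvector parameters
nlesrc = 7
le_shape = np.zeros((5, nlesrc))

# With the parameter index on the leading axis, one parameter for a run of
# consecutive sources is contiguous in memory (C order). Consecutive CUDA
# threads reading le_shape[p, src] for src = threadIdx.x therefore access
# coalesced addresses.
itemsize = le_shape.itemsize
print(le_shape.strides == (nlesrc * itemsize, itemsize))  # True
```

The alternative ordering ('nlesrc', 5) would interleave the parameters of each source, so a warp reading one parameter across many sources would make strided, uncoalesced memory accesses.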

There's probably a fair amount to digest there, but a good first step would be defining the inputs and outputs for the Laplacian eigenvector operator in tfopgen's YAML format and getting the skeleton compiling.

@JoshVStaden

Thanks. I'll get right on it.

@sjperkins (Member)

Also I suggest you fork the repository, create a branch with your changes and push those up to your fork. Then we can create an ongoing pull request to guide your code development.

@landmanbester (Author)

@JoshVStaden also feel free to ask me if you have any questions regarding the math...

@JoshVStaden

I seem to be having a problem running test_gauss_shape.py: when running load_tf_lib(), it tries to find a file called montblanc/extensions/tensorflow/rime.so, which doesn't exist. Is there perhaps a way of bypassing this, or am I meant to install this file?

@sjperkins (Member)

So a C++ and CUDA extension needs to be compiled. This happens with a

~/src/montblanc $ pip install -e .

if you've cloned montblanc into ~/src/montblanc... You'll probably have to do this each time the source changes.

Other installation instructions are here
