Struggling with matching HoloPy simulation to experimental holographic images - need advice #448
Comments
Working close to the focal plane might be an issue if the resolution of your camera isn't sufficient to detect the fringes. Could you please post your experimental and simulated holograms, along with the code you are using to calculate the hologram? Also, what are the output parameters of your LED (wavelength, bandwidth)?
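As a rough back-of-the-envelope check (this assumes simple in-line point-scatterer fringes and an effective pixel size referred to the object plane; the numbers below are placeholders, not your setup):

```python
import numpy as np

# Rough estimate of the local fringe spacing for a point scatterer a distance
# z from focus: the intensity varies roughly as cos(pi * r**2 / (wavelen * z)),
# so the spacing between neighbouring fringes at radius r is ~ wavelen * z / r.
wavelen = 0.53   # microns, LED center wavelength (placeholder)
z = 10.0         # microns, particle distance from the focal plane
pixel = 0.1      # microns, effective pixel size in the object plane (placeholder)
r = np.array([2.0, 5.0, 10.0])  # microns, radii at which to check the fringes

fringe_spacing = wavelen * z / r
print(fringe_spacing)              # local fringe spacing in microns
print(fringe_spacing > 2 * pixel)  # True where the camera can resolve the fringes
```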
Thanks for sending these details! For the issues described in 2.1 and 2.2, the problem seems to be that the experimental holograms are saturated. The pixels at the center are at maximum brightness, so the experiment is not capturing the full range of intensities. The simulation appears dark because it spans a wide range of intensities and the image is normalized. The best option here is to retake the hologram at a lower incident intensity so that you avoid saturation. Alternatively, you can scale your experimental hologram to best match the simulation, though you will not get good agreement where the experimental hologram is saturated. For the issue described in section 1, you can check whether it is a result of the bandwidth of the LED. Try simulating a hologram at a wavelength of, say, 510 nm (at the lower end of the band) and at 550 nm (at the upper end) and see how the fringes change.
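For example, a minimal sketch of that check with HoloPy (the detector, particle, and medium parameters below are placeholders, not your actual setup):

```python
import holopy as hp
from holopy.scattering import Sphere, calc_holo

# Placeholder detector and particle; substitute your own pixel spacing,
# particle size/index, and position (units here are microns).
detector = hp.detector_grid(shape=256, spacing=0.1)
sphere = Sphere(n=1.59, r=0.5, center=(12.8, 12.8, 5.0))

# Simulate at the two ends of the LED band and compare the fringe patterns.
holos = {}
for wavelen in (0.510, 0.550):
    holos[wavelen] = calc_holo(detector, sphere, medium_index=1.33,
                               illum_wavelen=wavelen,
                               illum_polarization=(1, 0))

# Where this difference is large, the finite bandwidth of the LED will tend
# to wash out fringes in the experimental hologram.
diff = holos[0.550] - holos[0.510]
```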
I'm having some trouble with my holography project and could really use some help.
I'm trying to match experimental holographic images with simulated ones generated by HoloPy, but the results aren't great. Here's my situation:
In my experiments, I'm using an LED light source (haven't tried laser yet), and I'm trying to locate particles that are about 10 microns away from the focal plane. Even though I know where the particles should be, when I try to match the experimental images with simulated ones (using either pixel differences or radial intensity profiles), I'm getting errors of more than 5 microns, and these errors seem pretty random. When visually comparing experimental and simulated images under the same conditions, I've noticed that some of the fine fringe structures don't quite match up (which might be due to the light source's coherence issues). This structural mismatch leads to significant errors when using algorithmic matching approaches.
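To be concrete, here is roughly the kind of radial-profile comparison I mean (a simplified sketch, not my exact code; exp_img and sim_img stand in for the cropped experimental and simulated holograms):

```python
import numpy as np

def radial_profile(image, center):
    """Azimuthally averaged intensity as a function of radius (in pixels)."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[1], y - center[0]).astype(int)
    total = np.bincount(r.ravel(), weights=image.ravel())
    counts = np.bincount(r.ravel())
    return total / counts

# exp_img and sim_img would be 2D arrays centered on the same particle:
# profile_exp = radial_profile(exp_img, center=(cy, cx))
# profile_sim = radial_profile(sim_img, center=(cy, cx))
# mismatch = np.mean((profile_exp - profile_sim) ** 2)
```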
I suspect the LED's lower coherence might be causing issues, but before I switch to a laser setup, I'd like to know if there could be other problems I'm missing. Even though I'm using Mie theory with lens effects in the simulation, I'm wondering whether working so close to the focal plane (0-10 microns) could be making things worse.
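For reference, this is roughly how I set up the simulation (parameter values are placeholders, and I'm assuming HoloPy's MieLens theory is what I mean by "Mie theory with lens effects"):

```python
import holopy as hp
from holopy.scattering import Sphere, calc_holo
from holopy.scattering.theory import MieLens

detector = hp.detector_grid(shape=256, spacing=0.1)  # microns per pixel (placeholder)

# Scan the particle from the focal plane out to 10 microns to see how
# sensitive the simulated pattern is to defocus over this range.
for z in (0.0, 2.0, 5.0, 10.0):
    sphere = Sphere(n=1.59, r=0.5, center=(12.8, 12.8, z))
    holo = calc_holo(detector, sphere, medium_index=1.33,
                     illum_wavelen=0.530, illum_polarization=(1, 0),
                     theory=MieLens(lens_angle=1.0))
```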
Has anyone dealt with similar matching problems? Any insights would be really helpful, especially:
What else besides coherence could be causing these matching issues?
Is working near the focal plane making things more difficult?
Any suggestions for improving the matching accuracy?
Thanks in advance for any help!