Spot location: Normalizing intensity values #3

Open
Altairch95 opened this issue Apr 13, 2022 · 1 comment
Comments

@Altairch95

Hi there,

I am running Mosaic Particle Tracker for 2D spot localization of GFP and RFP tags in fluorescence microscopy images. I realised that, prior to the estimation of point locations and their refinement, all pixel intensity values I are normalised as (I - I_min) / (I_max - I_min), where I_max and I_min are the global maximum and minimum, respectively (Sbalzarini, 2005).
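For reference, the global min-max normalization described above can be sketched in a few lines of NumPy (this is an illustrative reimplementation, not the plugin's own code; the function name is mine):

```python
import numpy as np

def minmax_normalize(image):
    """Rescale all pixel intensities to [0, 1] using the global min and max."""
    i_min = image.min()
    i_max = image.max()
    return (image - i_min) / (i_max - i_min)

# Toy 2x2 "frame": after normalization the smallest pixel maps to 0,
# the largest to 1, and everything else scales linearly in between.
frame = np.array([[10.0, 20.0],
                  [30.0, 50.0]])
normalized = minmax_normalize(frame)
```

Note that because I_min and I_max are taken over the whole image (or stack), a single very bright or very dark pixel shifts the scale for every other pixel, which is relevant to the questions below.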

Here are my questions:

  1. Is this normalisation relevant only when tracking spots along a movie, or does it also make sense for single-frame images?

  2. What would be the effect of this global normalisation when calculating central moments of intensity between two-channel images?

  3. In the case of two-channel stacks (red/green), where each stack has a different global maximum, how would this affect the grouping of spots according to m0-m2?

Thanks!
Altair

@krzysg
Copy link
Owner

krzysg commented Apr 19, 2022

Hi,

Well, in Particle Tracker (PT) this normalization is needed since it makes the optimization process (the linking stage) much more reliable and correct: we would not want m0 to take on extreme values and dominate (or, conversely, be negligible) compared to the other quantities taken into account during linking (such as particle positions, speeds, etc.).
Normalizing a single-frame image often makes sense as well, but it depends on how the image is used (it is needed, for example, as input to CNNs).

I'm not able to say whether comparing m0-m2 between two-channel images makes sense or not. In many cases it would do the job, but in many others it would not.
Just to clarify, let's consider m0 only. By definition, m0 equals the sum of the pixel intensities within a given radius (in PT it is the sum of pixel intensities on the restored image). The question is what information is kept in both channels: are they somehow correlated, meaning that the brighter m0 is in one channel, the brighter it is in the other?
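To make the definition above concrete, here is a hedged NumPy sketch of m0 as the sum of pixel intensities within a given radius of a spot center (illustrative only; PT computes this on the restored image, and the function and parameter names here are mine):

```python
import numpy as np

def zeroth_moment(image, center, radius):
    """Sum pixel intensities within `radius` of `center` (row, col)."""
    rows, cols = np.indices(image.shape)
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    return image[mask].sum()

# Uniform 5x5 image: a radius-1 disc around the center covers 5 pixels
# (the center plus its 4 direct neighbours), so m0 = 5.0 here.
img = np.ones((5, 5))
m0 = zeroth_moment(img, center=(2, 2), radius=1)
```

Computing m0 this way per channel makes it easy to check the correlation question empirically: plot m0 of each detected spot in the red channel against m0 of the matched spot in the green channel and see whether brighter spots in one channel are brighter in the other.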

Anyway, you should probably contact the author of the paper this plugin is based on to get a better answer. Here is a page with contact details for Prof. Ivo Sbalzarini.
