
Commit

Merge pull request #43 from kunzaatko/master
fix(docs): typos in docs and links
roflmaostc authored Jun 14, 2023
2 parents f32924f + ba0a393 commit 4c30146
Showing 10 changed files with 78 additions and 77 deletions.
6 changes: 3 additions & 3 deletions docs/src/background/loss_functions.md
@@ -7,8 +7,8 @@ One common loss function (especially in deep learning) is simply the $L^2$ norm


So far we provide three adapted loss functions with our package. However, it is relatively easy to incorporate
custom defined loss functions or import them from packages like [Flux.ml](https://fluxml.ai/Flux.jl/stable/models/losses/).
The interface from Flux.ml is the same as for our loss functions.
custom defined loss functions or import them from packages like [Flux.jl](https://fluxml.ai/Flux.jl/stable/models/losses/).
The interface from Flux.jl is the same as for our loss functions.
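As a minimal, hedged sketch of that shared interface (the image and PSF below are random placeholders, and passing a Flux loss directly via the `loss` keyword is an assumption based on this paragraph, not a tested recipe):

```julia
using DeconvOptim, Flux

# hypothetical blurred image and point spread function
img = rand(Float32, 64, 64)
psf = rand(Float32, 64, 64)

# Flux losses follow the same (ŷ, y) calling convention as our loss functions,
# so they should be usable as a drop-in replacement
res, ores = deconvolution(img, psf, loss=Flux.Losses.mse)
```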


## Poisson Loss
@@ -44,7 +44,7 @@ The numerical evaluation of the Poisson loss can lead to issues. Since $\mu(r)=0

## Scaled Gaussian Loss
It is well known that the Poisson density function behaves similarly to a Gaussian density function for $\mu\gg 1$. This approximation is valid for almost all use cases in microscopy, since regions of interest in an image usually consist of multiple photons rather than a single measured photon.
Mathematically the Poisson probability can be approximately (using Stirling's formula in the derivation) expressed as:
Mathematically the Poisson probability can be approximately (using [Stirling's formula](https://en.wikipedia.org/wiki/Stirling's_approximation) in the derivation) expressed as:

$$p(Y(r)|\mu(r)) \approx \prod_r \frac{\exp \left(-\frac{(x-\mu(r) )^2}{2 \mu(r) }\right)}{\sqrt{2 \pi \mu(r) }}$$
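Taking the negative logarithm of this approximation directly yields the scaled Gaussian loss that is minimized in practice; a minimal sketch of that step (simply applying $-\log$ to the product):

$$-\log p(Y(r)|\mu(r)) \approx \sum_r \left[ \frac{(x-\mu(r))^2}{2 \mu(r)} + \frac{1}{2} \log (2 \pi \mu(r)) \right]$$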

8 changes: 4 additions & 4 deletions docs/src/background/physical_background.md
@@ -24,9 +24,9 @@ $Y(r) = (S * \text{PSF})(r) + b$

In frequency space (Fourier transforming the above equation) the equation with $b=0$ is:

$\tilde Y(k) = (\tilde S * \tilde{\text{PSF}})(k)$
$\tilde Y(k) = (\tilde S \cdot \tilde{\text{PSF}})(k),$

where $k$ is the spatial frequency. From that equation it is clear why the green and blue lines in the plot look very similar. The reason is that the orange line is constant and we basically multiply the OTF with the orange line.
where $k$ is the spatial frequency and $\cdot$ represents term-wise multiplication (this is due to the [convolution theorem of the Fourier transform](https://en.wikipedia.org/wiki/Convolution_theorem)). From that equation it is clear why the green and blue lines in the plot look very similar. The reason is that the orange line is constant and we basically multiply the OTF with the orange line.
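A minimal numerical check of this relation (the random arrays and the direct use of FFTW below are illustrative assumptions, not part of the package):

```julia
using FFTW

S   = rand(64, 64)  # stand-in for the specimen
PSF = rand(64, 64)  # stand-in for the point spread function

# circular convolution computed via the Fourier transform
conv_SP = real(ifft(fft(S) .* fft(PSF)))

# convolution theorem: the transform of the convolution equals the
# term-wise product of the individual transforms
@assert fft(conv_SP) ≈ fft(S) .* fft(PSF)
```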


## Noise Model
@@ -38,8 +38,8 @@ $Y(r) = (S * \text{PSF})(r) + N(r) = \mu(r) + N(r)$
where $N$ is a noise term.
In fluorescence microscopy the dominant noise is usually *Poisson shot noise* (see [Mertz:2019](@cite)).
The origin of that noise is the quantum nature of photons. Since the measurement process spans a time $T$, only a discrete number of photons is detected (in real experiments the number of photons per pixel is usually on the order of $10^1 - 10^3$). Note that this noise is not introduced by the sensor; it is just an effect of the quantum nature of light.
We can interprete every sensor pixel as a discrete random variable $X$. The expected value of that pixel would be $\mu(r)$ (the true specimen convolved with the $\text{PSF}$). Due to noise, the system randomly measures a signal for $X$ according to the Poisson distribution:
We can interpret every sensor pixel as a discrete random variable $X$. The expected value of that pixel would be $\mu(r)$ (the true specimen convolved with the $\text{PSF}$). Due to noise, the system randomly measures a signal for $X$ according to the Poisson distribution:

$f(y, \mu) = \frac{\mu^y \exp(-\mu)}{\Gamma(y + 1)}$

where $f$ is the probability density distribution, $y$ the measured value of the sensor, $\mu$ the expected value and $\Gamma$ the generalized factorial function.
where $f$ is the probability density distribution, $y$ the measured value of the sensor, $\mu$ the expected value and $\Gamma$ the generalized factorial function ([Gamma function](https://en.wikipedia.org/wiki/Gamma_function)).
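As a small hedged sketch of this density (SpecialFunctions.jl for `loggamma` is an assumption here, not a dependency of the package):

```julia
using SpecialFunctions  # provides loggamma, i.e. log ∘ Γ

# f(y, μ) = μ^y exp(-μ) / Γ(y + 1), evaluated in log-space for stability
poisson_density(y, μ) = exp(y * log(μ) - μ - loggamma(y + 1))

poisson_density(3, 2.5)  # probability of measuring 3 photons when μ = 2.5
```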
4 changes: 2 additions & 2 deletions docs/src/workflow/basic_workflow.md
@@ -4,7 +4,7 @@ In this section we show the workflow for deconvolution of 2D and 3D images using
From these examples one can also understand the different effects of the regularizers.

The picture below shows the general principle of DeconvOptim.jl.
Since we interprete deconvolution as an optimization we initialize the reconstruction variables *rec*.
Since we interpret deconvolution as an optimization we initialize the reconstruction variables *rec*.
*rec* is an array of pixels which are the variables we are optimizing for.
Then we can apply some mapping, e.g. to reconstruct only pixels having non-negative intensity values.
Afterwards we compose the *total loss* function. It consists of a regularizing part (weighted with $\lambda$) and a *loss* part.
@@ -19,4 +19,4 @@ In most cases changing the regularizer or the number of iterations is enough.
![](../assets/tex/pipeline.svg)

For all options, see the function references.
Via the help of Julia (typing `]` in the REPL) we can also access extensive help.
Via the help of Julia (typing `?` in the REPL) we can also access extensive help.
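For example (a sketch; the printed documentation is omitted):

```julia
julia> ?             # switches the REPL into help mode
help?> deconvolution
```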
2 changes: 1 addition & 1 deletion docs/src/workflow/performance_tips.md
@@ -1,7 +1,7 @@
# Performance Tips

## Regularizer
The regularizers are built during calling with metaprogramming. Everytime you call `TV()` it creates a new version which is evaluated with
The regularizers are built during calling with metaprogramming. Every time you call `TV()` it creates a new version which is evaluated with
`eval`. The first `deconvolution` call therefore has to compile this piece of code.
To prevent the compilation every time, define
```julia
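# hedged sketch (not the original snippet): construct the regularizer once
# and reuse it across calls; `img`, `img2` and `psf` are hypothetical placeholders
reg = TV()
res, o = deconvolution(img, psf, regularizer=reg)
res2, o2 = deconvolution(img2, psf, regularizer=reg)
```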
2 changes: 1 addition & 1 deletion src/DeconvOptim.jl
@@ -2,7 +2,7 @@ module DeconvOptim

export gpu_or_cpu

# to check wether CUDA is enabled
# to check whether CUDA is enabled
using Requires
# for fast array regularizers
using Tullio
48 changes: 24 additions & 24 deletions src/analysis_tools.jl
@@ -17,26 +17,26 @@ function relative_energy_regain(ground_truth, rec)

# a dict to store the values for certain frequencies
# we store a list since some (rounded) frequencies occur more than once
ΔE_R_dict = Dict{T, Vector{T}}()
E_R_dict = Dict{T, Vector{T}}()
ΔE_R_dict = Dict{T,Vector{T}}()
E_R_dict = Dict{T,Vector{T}}()

# round the frequencies to 3 digits; an alternative would be to bin
round4(x) = T(round(x, digits=3))


# iterate over the frequencies and calculate the relative energy regain
for (i₂, f₂) in enumerate(fftfreq(size(rec_fft, 2)))
for (i₁, f₁) in enumerate(fftfreq(size(rec_fft, 1)))
f_res = round4((f₁^2 + f₂^2))
Δ_E_R = abs2(ground_truth_fft[i₁, i₂] - rec_fft[i₁, i₂])
E_R = abs2(ground_truth_fft[i₁, i₂])
Δ_E_R = abs2(ground_truth_fft[i₁, i₂] - rec_fft[i₁, i₂])
E_R = abs2(ground_truth_fft[i₁, i₂])

update_dict_list!(ΔE_R_dict, f_res, Δ_E_R)
update_dict_list!(E_R_dict, f_res, E_R)
end
end


# finally transform everything into a list of frequencies and
# a list of relative energy regains
freqs = T[]
@@ -47,7 +47,7 @@ function relative_energy_regain(ground_truth, rec)
mean_E_r = mean(E_R_dict[f])
push!(G_R_list, (mean_E_r - mean_ΔE_r) / mean_E_r)
end

return freqs, G_R_list
end
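A hedged usage sketch of this routine (the arrays are random placeholders):

```julia
gt  = rand(32, 32)                # ground-truth image
rec = gt .+ 0.1 .* rand(32, 32)   # some reconstruction of it

freqs, G_R = relative_energy_regain(gt, rec)
# G_R[i] is the relative energy regain at (rounded) frequency freqs[i];
# values close to 1 mean that frequency is recovered well
```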

@@ -95,8 +95,8 @@ end
Calculates the mean variance between two arrays, but normalizes array `a` to the same mean as array `b`.
"""
function normalized_variance(a, b)
factor = sum(b)/sum(a)
sum(abs2.(a.*factor .-b))./prod(size(a))
factor = sum(b) / sum(a)
sum(abs2.(a .* factor .- b)) ./ prod(size(a))
end

function reset_summary!(summary)
@@ -118,7 +118,7 @@ end
A useful routine to simplify performance checks of deconvolution on simulated data.
Returns an Options structure to be used with the deconvolution routine as an argument to `opt_options` and
a summary dictionary with all the performance metrics calculated, which is resetted and updated during deconvolution.
a summary dictionary with all the performance metrics calculated, which is reset and updated during deconvolution.
This can then be plotted or visualized.
The summary dictionary has the following content:
@@ -136,8 +136,8 @@ end
# Arguments
- `ground_truth`: The underlying ground truth data. Note that this scaling is unimportant due to the normalized norms used for comparison,
whereas the relative offset matters.
- `iterations`: The maximal number of iterations to perform. If covergence is reached, the result may have fewer iterations
- `mapping`: If mappings such as the positivity contraints (e.g. `NonNegative()`) are used in the deconvolution routine, they also
- `iterations`: The maximal number of iterations to perform. If convergence is reached, the result may have fewer iterations
- `mapping`: If mappings such as the positivity constraints (e.g. `NonNegative()`) are used in the deconvolution routine, they also
need to be provided here. Otherwise select `nothing`.
- `every`: This option selects after how many iterations the evaluation is performed. Note that the results will not keep track
of this iteration number.
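A hedged sketch of how these options plug into a deconvolution run (`measured`, `psf` and `ground_truth` are placeholders; forwarding `opt_options` like this is assumed from the description above):

```julia
opt_options, summary = options_trace_deconv(ground_truth, 50, nothing)

res, o = deconvolution(measured, psf, opt_options=opt_options)

summary["best_ncc_idx"]  # iteration with the best normalized cross-correlation
```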
@@ -188,18 +188,18 @@ function options_trace_deconv(ground_truth, iterations, mapping, every=1; more_options...)
@show more_options
summary = Dict()
reset_summary!(summary)
summary["ground_truth"] = ground_truth # needs to be accesible
summary["ground_truth"] = ground_truth # needs to be accessible
idx = 1
cb = tr -> begin
img = (mapping === nothing) ? tr[end].metadata["x"] : mapping[1](tr[end].metadata["x"])
img = (mapping === nothing) ? tr[end].metadata["x"] : mapping[1](tr[end].metadata["x"])
img *= mean(summary["ground_truth"])
record_progress!(summary, img, idx, tr[end].value,
tr[end].metadata["time"], tr[end].metadata["Current step size"])
record_progress!(summary, img, idx, tr[end].value,
tr[end].metadata["time"], tr[end].metadata["Current step size"])
idx += 1
false
end

opt_options = Optim.Options(callback = cb, iterations=iterations, show_every=every, store_trace=true, extended_trace=true; more_options...)
opt_options = Optim.Options(callback=cb, iterations=iterations, show_every=every, store_trace=true, extended_trace=true; more_options...)
return (opt_options, summary)
end

Expand All @@ -224,18 +224,18 @@ function record_progress!(summary, img, idx, loss, mytime, stepsize)
push!(summary["nccs"], ncc)
summary["best_ncc"], summary["best_ncc_img"], summary["best_ncc_idx"] = let
if ncc > summary["best_ncc"]
(ncc, img, idx)
(ncc, img, idx)
else
(summary["best_ncc"], summary["best_ncc_img"], summary["best_ncc_idx"])
end
end
nvar = normalized_variance(img, ground_truth)
push!(summary["nvars"], nvar)
summary["best_nvar"], summary["best_nvar_img"], summary["best_nvar_idx"] = let
summary["best_nvar"], summary["best_nvar_img"], summary["best_nvar_idx"] = let
if nvar < summary["best_nvar"]
(nvar, img, idx)
(nvar, img, idx)
else
(summary["best_nvar"], summary["best_nvar_img"], summary["best_nvar_idx"])
(summary["best_nvar"], summary["best_nvar_img"], summary["best_nvar_idx"])
end
end
end
end
2 changes: 1 addition & 1 deletion src/conv.jl
@@ -84,7 +84,7 @@ end
plan_conv(u, v [, dims])
Pre-plan an optimized convolution for arrays shaped like `u` and `v` (based on a pre-planned FFT)
along the given dimenions `dims`.
along the given dimensions `dims`.
`dims = 1:ndims(u)` by default.
The 0 frequency of `u` must be located at the first entry.
We return two arguments:
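A hedged usage sketch (the exact return values are assumed from this description):

```julia
u = rand(32, 32)
v = rand(32, 32)

# assumed: plan_conv returns the transformed `v` and a convolution closure
v_ft, pconv = plan_conv(u, v)
res = pconv(u, v_ft)  # fast convolution re-using the pre-planned FFT
```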
77 changes: 39 additions & 38 deletions src/generic_invert.jl
@@ -1,6 +1,6 @@
export invert, OptInterface, Opt_Optim, Opt_OptimPackNextGen

abstract type OptInterface end # To accomodate multiple optimizers which are incompatible
abstract type OptInterface end # To accommodate multiple optimizers which are incompatible
struct Opt_Optim <: OptInterface end #
struct Opt_OptimPackNextGen <: OptInterface end #

@@ -28,58 +28,59 @@ regularizers and mappings.
- `opt_options=nothing`: Can be an options structure required by Optim.jl. Will overwrite `iterations`.
- `debug_f=nothing`: A debug function which must take a single argument, the current reconstruction.
"""
function invert(measured, rec0, forward;
iterations=nothing, λ=eltype(rec0)(0.05),
regularizer=nothing,
opt=LBFGS(linesearch=LineSearches.BackTracking()),
opt_options=nothing,
mapping=Non_negative(),
loss=Poisson(),
real_gradient=true,
debug_f=nothing,
opt_package=Opt_Optim)
function invert(measured, rec0, forward;
iterations=nothing, λ=eltype(rec0)(0.05),
regularizer=nothing,
opt=LBFGS(linesearch=LineSearches.BackTracking()),
opt_options=nothing,
mapping=Non_negative(),
loss=Poisson(),
real_gradient=true,
debug_f=nothing,
opt_package=Opt_Optim)

# if no special options are given, just restrict iterations
if opt_package <: Opt_Optim && opt_options !== nothing && iterations !== nothing
error("If `opt_options` are provided you need to include the iterations as part of these instead of providing the `iterations` argument.")
end
iterations = (iterations === nothing) ? 20 : iterations

if opt_package <: Opt_Optim
if opt_options === nothing
opt_options = Optim.Options(iterations=iterations)
end
if opt_package <: Opt_Optim
if opt_options === nothing
opt_options = Optim.Options(iterations=iterations)
end
end

# Get the mapping functions to achieve constraints
# like non negativity
mf, m_invf = get_mapping(mapping)
regularizer = get_regularizer(regularizer, eltype(rec0))


debug_f_n(x) = let
if isnothing(debug_f)
identity(x)
else
debug_f(mf(x))
debug_f_n(x) =
let
if isnothing(debug_f)
identity(x)
else
debug_f(mf(x))
end
end
end


storage_μ = deepcopy(measured)
function total_loss(rec)
# handle if there is a provided mapping function
mf_rec = mf(rec)
mf_rec = mf(rec)
forward_v = forward(mf_rec)
loss_v = sum(loss(forward_v, measured, storage_μ))
loss_v += λ .* regularizer(mf_rec)
return loss_v
loss_v += λ .* regularizer(mf_rec)
return loss_v
end
# nice precompilation before calling Zygote etc.
Base.invokelatest(total_loss, rec0)

# see here https://github.com/FluxML/Zygote.jl/issues/342
take_real_or_not(g) = real_gradient ? real.(g) : g
take_real_or_not(g::AbstractArray{<:Real}) = g
take_real_or_not(g::AbstractArray{<:Real}) = g

# this is the function which will be provided to Optim
# check Optim's documentation for the purpose of F and G
@@ -91,39 +92,39 @@ function invert(measured, rec0, forward;
# for more details on that check:
# https://discourse.julialang.org/t/dynamically-create-a-function-initial-idea-with-eval-failed-due-to-world-age-issue/49139/17
function fg!(F, G, rec)

# Zygote calculates both derivative and loss, therefore do everything in one step
if G != nothing
if G !== nothing
# apply debug function
debug_f_n(rec)

y, back = Base.invokelatest(Zygote._pullback, total_loss, rec)
# calculate gradient
G .= take_real_or_not(Base.invokelatest(back, 1)[2])
if F != nothing
if F !== nothing
return y
end
end
if F != nothing
if F !== nothing
return Base.invokelatest(total_loss, rec)
end
end

if isa(opt_package, Type{Opt_Optim})
if opt_options===nothing
if isa(opt_package, Type{Opt_Optim})
if opt_options === nothing
opt_options = Optim.Options(iterations=iterations)
end
# do the optimization with LBFGS
res = Optim.optimize(Optim.only_fg!(fg!), rec0, opt, opt_options)
res_out = mf(Optim.minimizer(res))
# supports a different interface as for example used in OptimPackNextGen for the function 'vmlmb!'
elseif isa(opt_package, Type{Opt_OptimPackNextGen})
# supports a different interface as for example used in OptimPackNextGen for the function 'vmlmb!'
elseif isa(opt_package, Type{Opt_OptimPackNextGen})
res = copy(rec0)

if isnothing(opt_options)
opt((x,g) -> fg!(true, g, x), res; maxiter=iterations)
opt((x, g) -> fg!(true, g, x), res; maxiter=iterations)
else
opt((x,g) -> fg!(true, g, x), res; maxiter=iterations, opt_options...)
opt((x, g) -> fg!(true, g, x), res; maxiter=iterations, opt_options...)
end
res_out = mf(res)
else
4 changes: 2 additions & 2 deletions src/lossfunctions.jl
@@ -130,10 +130,10 @@ Calculates the Poisson loss using the Anscombe-based norm for `μ` and `meas`.
we extract a centered region from `μ` of the same size as `meas`.
`b=0` is the optional parameter under the `√`.
Note that the data will be normalized to the mean of the data, which means that you have to
devide this parameter also by the mean of the data, i.e. b=3.0/8.0/mean(measured).
divide this parameter also by the mean of the data, i.e. b=3.0/8.0/mean(measured).
"""
function anscombe_aux(μ, meas, storage=similar(μ); b=0)
# we cannot devide b here by meas, since meas is already normalized
# we cannot divide b here by meas, since meas is already normalized
# due to numerical errors, μ can be negative or 0
mm = minimum(μ)
if mm <= 0
2 changes: 1 addition & 1 deletion src/utils.jl
@@ -30,7 +30,7 @@ julia> ds([1 2; 3 5; 5 6; 7 8])
function generate_downsample(num_dim, downsample_dims, factor)
@assert num_dim ≥ length(downsample_dims)
# create unit cell with Cartesian Index
# dims_units containts a 1 everywhere the downsampling should happen
# dims_units contains a 1 everywhere the downsampling should happen
dims_units = zeros(Int, num_dim)
# here we set which dimensions should be downsampled
dims_units[downsample_dims] .= 1
