Monte Carlo localization is
biased for any finite sample size—i.e., the expected
value of the location computed by the algorithm differs from the true
expected value—because of the way particle filtering works. In this
question, you are asked to quantify this bias.
To simplify, consider a world with four possible robot locations, $X = \{x_1, x_2, x_3, x_4\}$.
MCL uses the measurement probabilities to generate particle weights, which are
subsequently normalized and used in the resampling process. For
simplicity, let us assume we generate only one new sample in the
resampling process, regardless of $N$, the number of particles in the original sample set.
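Since the exercise's concrete measurement probabilities are not reproduced above, the following Python sketch uses placeholder likelihoods, $p(z\mid x_1)=0.8$ and $p(z\mid x_i)=0.1$ otherwise (illustrative values only, not necessarily the exercise's numbers), to show how the distribution of the single resampled particle can be computed exactly: enumerate every multinomial count vector of the $N$ uniformly drawn particles, weight each location by its likelihood, and average the resampling probabilities.

```python
import math

def count_vectors(n, k):
    """Yield every way to split n particles across k locations."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in count_vectors(n - first, k - 1):
            yield (first,) + rest

def multinomial_pmf(counts, probs):
    """Probability of drawing this particular count vector."""
    coef = math.factorial(sum(counts))
    for c in counts:
        coef //= math.factorial(c)
    p = float(coef)
    for c, q in zip(counts, probs):
        p *= q ** c
    return p

# Placeholder measurement model -- illustrative values, NOT the
# exercise's actual numbers (which are omitted above).
LIKELIHOOD = [0.8, 0.1, 0.1, 0.1]   # p(z | x_i)
PRIOR = [0.25] * 4                  # particles drawn uniformly

def resampled_distribution(n_particles):
    """Exact P(new sample = x_j): average, over all multinomial count
    vectors of the initial particle set, of the probability that the
    single resampled particle lands on x_j."""
    dist = [0.0] * 4
    for counts in count_vectors(n_particles, 4):
        p_counts = multinomial_pmf(counts, PRIOR)
        total_w = sum(c * w for c, w in zip(counts, LIKELIHOOD))
        for j in range(4):
            if counts[j]:
                dist[j] += p_counts * counts[j] * LIKELIHOOD[j] / total_w
    return dist

# True posterior = prior * likelihood, normalized (the N = infinity limit).
posterior = [p * w for p, w in zip(PRIOR, LIKELIHOOD)]
z = sum(posterior)
posterior = [p / z for p in posterior]

for n in range(1, 11):
    print(f"N = {n:2d}:", [round(p, 4) for p in resampled_distribution(n)])
print("N = inf:", [round(p, 4) for p in posterior])
```

For $N=1$ the result is the uniform prior (the lone particle is resampled with probability one), while as $N\to\infty$ it converges to the true posterior; the gap between the two is exactly the bias the exercise asks you to quantify.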
(a) What is the resulting probability distribution over $X$ for this new sample? Answer this question separately for $N=1,\ldots,10$, and for $N=\infty$.
(b) The difference between two probability distributions $P$ and $Q$ can be measured by the KL divergence, defined as
$$KL(P,Q) = \sum_i P(x_i)\log\frac{P(x_i)}{Q(x_i)}\,.$$
What are the KL divergences between the distributions in (a) and the true posterior? (A small numerical sketch follows these exercise parts.)
(c) What modification of the problem formulation (not the algorithm!) would guarantee that the specific estimator above is unbiased even for finite values of $N$? Provide at least two such modifications (each of which should be sufficient).
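For part (b), here is a self-contained sketch of the KL computation, again under the placeholder likelihoods from the sketch above (not necessarily the exercise's actual numbers). The $N=1$ case is worked out explicitly, since its resampling distribution is simply the uniform prior:

```python
import math

def kl_divergence(p, q):
    """KL(P, Q) = sum_i P(x_i) log(P(x_i) / Q(x_i)); terms with
    P(x_i) = 0 contribute zero by the usual convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Worked N = 1 case under the placeholder likelihoods used earlier
# (p(z|x_1) = 0.8, p(z|x_i) = 0.1 otherwise -- illustrative only).
# With a single particle, resampling returns it with probability 1,
# so the new sample follows the uniform prior rather than the posterior.
uniform_n1 = [0.25, 0.25, 0.25, 0.25]
posterior = [8 / 11, 1 / 11, 1 / 11, 1 / 11]  # prior * likelihood, normalized

print(kl_divergence(uniform_n1, posterior))  # the N = 1 bias, in nats
```

The same function applied to each finite-$N$ distribution from the enumeration sketch gives the full table of divergences; they shrink toward zero as $N$ grows, matching the $N=\infty$ answer.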