
Understanding probability distribution for entropy coding #139

Answered by fracape
AlbertoPresta asked this question in Q&A

Hi,
There are no stupid questions.

So first of all, we are doing two things when training: we train a model to optimally compress images in a differentiable manner, but we also want to generate fixed-point probability mass functions (or their corresponding CDFs) that model well the distributions of the latent tensors produced from the training samples. In the final (real-life) codec, these tables are shared by the encoder and the decoder so that the bitstream can be decoded bit-exactly at inference.
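The second part can be sketched roughly as follows. This is a hypothetical helper written from scratch for illustration (the function name, rounding policy, and precision are my own choices, not CompressAI's exact implementation); the key point is that the resulting integer CDF is deterministic, so encoder and decoder can share it and stay bit-exact:

```python
import numpy as np

def pmf_to_quantized_cdf(pmf, precision=16):
    """Turn a float PMF over discrete symbols into a fixed-point CDF.

    Illustrative sketch only: both encoder and decoder must use the
    same integer CDF table for range/arithmetic decoding to be exact,
    so all floating-point work happens once, here, before deployment.
    """
    total = 1 << precision                       # e.g. 2**16 frequency slots
    pmf = np.asarray(pmf, dtype=np.float64)
    pmf = pmf / pmf.sum()                        # normalize defensively
    # Every symbol must keep a nonzero frequency, or it becomes uncodable.
    freqs = np.maximum(1, np.round(pmf * total).astype(np.int64))
    # Push the rounding error onto the most probable symbol so the
    # cumulative total lands exactly on `total`.
    freqs[np.argmax(freqs)] += total - freqs.sum()
    cdf = np.zeros(len(freqs) + 1, dtype=np.int64)
    cdf[1:] = np.cumsum(freqs)
    return cdf

cdf = pmf_to_quantized_cdf([0.1, 0.4, 0.3, 0.2], precision=16)
```

The clamping to a minimum frequency of 1 mirrors the general requirement that no symbol in the coder's alphabet may have zero probability, otherwise it could never be emitted.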

The tail_mass corresponds to the cumulative probability that we exclude from the "range" of our main arithmetic coder, split between a lower tail and an upper tail of the distribution. You can check the range c…
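To make the tail-splitting concrete, here is a small sketch (my own illustrative function, not a CompressAI API) that finds the symbol range covering at least 1 - tail_mass of the probability, dropping tail_mass/2 from each tail; symbols falling outside this range would be handled by an escape/bypass mechanism rather than the main coder:

```python
import numpy as np

def symbol_range(pmf, tail_mass=1e-9):
    """Smallest [lo, hi] index range whose probability is >= 1 - tail_mass.

    The excluded mass is split evenly between the lower and upper tails,
    matching the lower-tail / upper-tail split described above.
    """
    pmf = np.asarray(pmf, dtype=np.float64)
    cdf = np.cumsum(pmf)
    lo = int(np.searchsorted(cdf, tail_mass / 2))        # trim lower tail
    hi = int(np.searchsorted(cdf, 1.0 - tail_mass / 2))  # trim upper tail
    return lo, hi

# Toy discretized Gaussian over 11 symbols centered at index 5.
pmf = np.exp(-0.5 * np.arange(-5, 6) ** 2)
pmf /= pmf.sum()
lo, hi = symbol_range(pmf, tail_mass=0.01)
```

With a very small tail_mass (the usual default), almost all of the distribution stays inside the coder's range and only extremely unlikely values take the bypass path.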

Replies: 1 comment 2 replies
Answer selected by AlbertoPresta