How to have a fixed-dimensional latent space after entropy_bottleneck.compress()? #150
PatatalouiS asked this question in Q&A (Unanswered)
Hello everyone,
I'm looking for trained auto-encoder architectures to reduce the size of video frames as much as possible before feeding them to another architecture, for a rather specific content-based video indexing task.
CompressAI fits my needs perfectly. I want to take the output string and send it to my other network. However, after converting the string to an integer representation, I noticed that its length is not fixed and depends on the input image. This must be a particularity of entropy coding.
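To make the behaviour concrete, here is a minimal sketch of what I observe (I'm assuming the pretrained bmshj2018_factorized model from the CompressAI zoo here; the exact model and quality level don't matter):

```python
# Minimal sketch: same spatial size, different content, different string length.
import torch
from compressai.zoo import bmshj2018_factorized

net = bmshj2018_factorized(quality=3, pretrained=True).eval()

frame_a = torch.rand(1, 3, 256, 256)   # noisy frame
frame_b = torch.zeros(1, 3, 256, 256)  # flat frame

with torch.no_grad():
    out_a = net.compress(frame_a)  # {"strings": [...], "shape": ...}
    out_b = net.compress(frame_b)

# The latent grid ("shape") is identical for both inputs, but the
# entropy-coded byte strings differ in length with the image content.
print(len(out_a["strings"][0][0]), tuple(out_a["shape"]))
print(len(out_b["strings"][0][0]), tuple(out_b["shape"]))
```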
Do you have an idea, with minimal loss of information, for making each latent representation a fixed size?
Maybe with an operation on the encoding returned by compress()?
Maybe by replacing the entropy_bottleneck with another encoding method?
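For reference, this is roughly what I have in mind for the two options above. It is only a sketch: the g_a / entropy_bottleneck attributes come from the zoo model I'm testing with, and MAX_BYTES is a hypothetical budget I would have to pick myself.

```python
# Sketch of the two workarounds I'm considering (assumes the zoo model
# exposes an analysis transform `g_a` and an `entropy_bottleneck`;
# MAX_BYTES is a hypothetical budget, not something from the library).
import torch
from compressai.zoo import bmshj2018_factorized

net = bmshj2018_factorized(quality=3, pretrained=True).eval()
frame = torch.rand(1, 3, 256, 256)

# Option 1: stop before entropy coding. The quantized latent has a fixed
# shape for a fixed input size, so it could feed my other network directly.
with torch.no_grad():
    y = net.g_a(frame)
    y_hat, _ = net.entropy_bottleneck(y)  # quantized latent, fixed shape
print(tuple(y_hat.shape))  # e.g. (1, 192, 16, 16) for a 256x256 input

# Option 2: keep compress(), but pad the byte string to a fixed budget.
MAX_BYTES = 4096  # must exceed the longest string I expect to see
with torch.no_grad():
    string = net.compress(frame)["strings"][0][0]
padded = string.ljust(MAX_BYTES, b"\x00")  # fixed length, content unchanged
fixed = torch.frombuffer(bytearray(padded), dtype=torch.uint8)
print(len(string), tuple(fixed.shape))
```

Option 1 keeps the dimensionality fixed but gives up the bitrate saving of the entropy coder, while option 2 wastes whatever padding is needed to reach the budget, so neither feels ideal to me.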
Thanks in advance, and congratulations on this splendid library.
Replies: 1 comment

As you mentioned, the latent space |