About the decoding speed of the ELIC model #312
Thank you so much for your reimplementation of ELIC. However, ELIC's original paper reports that its decoding latency is much less than 100 ms, whereas when I test the ELIC model from CompressAI, the latency is about 130 ms. May I ask why there is this speed gap?

Looking forward to your reply.

Comments
The measured decoding latency may depend on the CPU, GPU, or other conditions. Their setup is mentioned here:

Since absolute timings vary from machine to machine, relative measurements may be more worthwhile. Looking at their chart, it looks like:
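Separately, for anyone who wants to reproduce the timing on their own machine, here is a minimal sketch of one way to measure decode latency. This is not from the issue: it assumes a CompressAI-style model exposing `compress()`/`decompress()` (as `CompressionModel` subclasses do), and the helper name `measure_decode_latency` is illustrative.

```python
import time

import torch


@torch.no_grad()
def measure_decode_latency(model, x, n_warmup=10, n_runs=50):
    """Average decompress() time in ms for a CompressAI-style model.

    `x` is a 1x3xHxW image tensor on the same device as `model`.
    """
    model.eval()
    out_enc = model.compress(x)  # encode once; only decoding is timed

    for _ in range(n_warmup):  # warm-up: CUDA init, cuDNN autotuning, caches
        model.decompress(out_enc["strings"], out_enc["shape"])

    if x.is_cuda:
        torch.cuda.synchronize()  # flush pending kernels before starting the clock
    start = time.perf_counter()
    for _ in range(n_runs):
        model.decompress(out_enc["strings"], out_enc["shape"])
    if x.is_cuda:
        torch.cuda.synchronize()  # wait for the last decode to finish
    return (time.perf_counter() - start) / n_runs * 1000.0
```

Averaging over many runs after a warm-up matters here, since a single cold run can easily differ by tens of milliseconds from the steady state.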
Note: The above comment still applies. I took another look at the paper, and it says:

Our current implementation uses a channel-context model defined as:

```python
# In [He2022], this is labeled "g_ch^(k)".
channel_context = {
    f"y{k}": nn.Sequential(
        conv(sum(self.groups[:k]), 224, kernel_size=5, stride=1),
        nn.ReLU(inplace=True),
        conv(224, 128, kernel_size=5, stride=1),
        nn.ReLU(inplace=True),
        conv(128, self.groups[k] * 2, kernel_size=5, stride=1),
    )
    for k in range(1, len(self.groups))
}
```

However, our current parameter aggregation is:

```python
# In [He2022], this is labeled "Param Aggregation".
param_aggregation = [
    sequential_channel_ramp(
        # Input: spatial context, channel context, and hyper params.
        self.groups[k] * 2 + (k > 0) * self.groups[k] * 2 + N * 2,
        self.groups[k] * 2,
        min_ch=N * 2,
        num_layers=3,
        interp="linear",
        make_layer=nn.Conv2d,
        make_act=lambda: nn.ReLU(inplace=True),
        kernel_size=1,
        stride=1,
        padding=0,
    )
    for k in range(len(self.groups))
]
```
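As a rough illustration of why those 5×5 channel-context convolutions matter for decoding speed, here is a back-of-envelope multiply-accumulate (MAC) count. The concrete numbers are assumptions, not from this issue: He2022's channel grouping (16, 16, 32, 64, 192) and a 16×16 latent grid (i.e. a 256×256 input with 16× downsampling).

```python
# Rough MAC count for the channel-context transforms quoted above.
# Assumptions (not from the issue): groups = (16, 16, 32, 64, 192) as in
# He2022, and a 16x16 latent grid (256x256 input, 16x downsampling).
groups = [16, 16, 32, 64, 192]
H = W = 16  # latent spatial size

total_macs = 0
for k in range(1, len(groups)):
    c_in = sum(groups[:k])  # concatenation of previously decoded groups
    # conv(c_in -> 224, 5x5) -> conv(224 -> 128, 5x5) -> conv(128 -> 2*groups[k], 5x5)
    layers = [(c_in, 224), (224, 128), (128, 2 * groups[k])]
    total_macs += sum(ci * co * 5 * 5 * H * W for ci, co in layers)

print(f"channel-context MACs: ~{total_macs / 1e6:.0f} M")
```

By comparison, the param-aggregation networks above use 1×1 kernels, which cost 25× fewer MACs per input/output channel pair than 5×5 kernels, so the channel-context branch can dominate the transform cost during the serial group-by-group decode.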