Question about LaPred in Tables I and II #4
Hi, congrats on the awesome work!
Just a question: what would be the difference in the evaluation of LaPred in Table I (the GT det&track input, 0.588 mAP_f line) and in Table II (the GT map, 0.760 mAP_f line)?
I had understood that both of these lines were simply trained on curated data and evaluated with GT map access, which would explain why the corresponding AgentFormer line is the same in both tables. So I was wondering why LaPred isn't the same.
Thanks in advance!

The 0.760 mAP is from LaPred with 6 modes, and the 0.588 one is with only 5 modes, for a fair comparison with AgentFormer.

Also, I would like to confirm how the conventional models are trained with real perception inputs: what is used as the ground-truth future in that setting?

It's the matched GT agents' actual future positions, so there is a one-to-one matching process to run (see the sketch below).

Yes. By the way, we have tried different matching strategies in our recent work https://arxiv.org/abs/2406.08113. Please have a look.
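For illustration, here is a minimal sketch of the kind of one-to-one matching mentioned above: tracked agents are assigned to GT agents by current-frame center distance, and each matched GT agent's actual future positions become the training/evaluation target. This is not the repository's actual implementation; the Hungarian assignment and the 2.0 m gating threshold are assumptions made for this example.

```python
# Minimal sketch (assumptions, not the repo's code): one-to-one matching of
# tracked agents to GT agents by center distance at the current frame, then
# taking the matched GT agents' actual future positions as prediction targets.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks_to_gt(track_centers, gt_centers, gt_futures, max_dist=2.0):
    """track_centers: (N, 2); gt_centers: (M, 2); gt_futures: (M, T, 2).
    Returns {track index -> matched GT future trajectory (T, 2)}."""
    if len(track_centers) == 0 or len(gt_centers) == 0:
        return {}
    # Pairwise center distances between tracks and GT agents.
    dist = np.linalg.norm(track_centers[:, None, :] - gt_centers[None, :, :], axis=-1)
    # Hungarian assignment gives a globally optimal one-to-one matching.
    rows, cols = linear_sum_assignment(dist)
    targets = {}
    for r, c in zip(rows, cols):
        if dist[r, c] <= max_dist:  # gate out far-away assignments (assumed 2 m)
            targets[int(r)] = gt_futures[c]  # GT agent's actual future positions
    return targets
```

In this sketch, tracks left unmatched (or matched beyond the gate) simply receive no target; how such cases are handled in training and in the mAP_f evaluation follows the paper's own protocol.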