
The evaluation results depend on batch size #10

Open

apple2373 commented Dec 4, 2022

I noticed that the evaluation results differ depending on the batch size, and I traced the cause to this line:
https://github.com/yuyanli0831/OmniFusion/blob/aaf52cc953ade3be1f5fc3df446705e4223b8d21/test.py#L161

Because the median value changes when different images are grouped together, the results vary slightly with batch size. If we want a deterministic result, we can either disable the median alignment or compute the median per image.
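To make the issue concrete, here is a small self-contained sketch (synthetic data, not the actual OmniFusion evaluation code) contrasting a batch-wide median scale, as in the referenced line, with a per-image median scale. The function names and the abs-rel metric are my own illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic "depth maps" with different ranges, plus predictions that are
# off by a per-image scale factor and a little noise.
gt = [rng.uniform(1, 10, 100), rng.uniform(5, 50, 100)]
pred = [g * rng.uniform(0.8, 1.2) + rng.normal(0, 0.05, g.shape) for g in gt]

def batch_median_align(preds, gts):
    # One shared scale from the medians of the whole batch:
    # this is what makes the metric depend on how images are batched.
    scale = np.median(np.concatenate(gts)) / np.median(np.concatenate(preds))
    return [scale * p for p in preds]

def per_image_median_align(preds, gts):
    # One scale per image: independent of batching, hence deterministic.
    return [(np.median(g) / np.median(p)) * p for p, g in zip(preds, gts)]

def abs_rel(preds, gts):
    # Mean absolute relative error, averaged over images.
    return np.mean([np.mean(np.abs(p - g) / g) for p, g in zip(preds, gts)])

# Batch-wide alignment: batch size 2 vs two batches of size 1 disagree.
err_bs2 = abs_rel(batch_median_align(pred, gt), gt)
err_bs1 = abs_rel(
    batch_median_align(pred[:1], gt[:1]) + batch_median_align(pred[1:], gt[1:]), gt
)

# Per-image alignment: the result is the same however the images are batched.
pi_err = abs_rel(per_image_median_align(pred, gt), gt)
```

With the shared scale, `err_bs2` and `err_bs1` come out different even though the predictions are identical; the per-image variant gives one number regardless of batching.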


apple2373 commented Dec 5, 2022

Beyond the implementation detail, I wonder why we need median alignment at all. My best guess: if all objects move twice as far away and also become twice as large, the image looks the same (am I correct? I can imagine there is scale ambiguity when we only have a single image). To resolve that scale ambiguity, we use the medians of the ground-truth and predicted depth maps to align their scales.
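That guess can be checked with the pinhole camera model: a point `(X, Y, Z)` projects to pixel `(f*X/Z, f*Y/Z)`, so scaling the whole scene by any factor leaves every pixel unchanged. A minimal sketch (the focal length value is arbitrary):

```python
def project(point, f=500.0):
    # Pinhole projection: (X, Y, Z) -> (f*X/Z, f*Y/Z).
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

p = (1.0, 2.0, 4.0)
scaled = tuple(2.0 * c for c in p)  # scene twice as far away and twice as large

# Both points land on the same pixel, so a single image cannot pin down
# the absolute scale; evaluation must align scales somehow (e.g. via medians).
assert project(p) == project(scaled)
```

This is exactly the scale ambiguity that median alignment compensates for, at the cost of the batching sensitivity described above.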
