2.4.9_backport: Optimizer doesn't work well #43
Probably a parameter issue. Parameters don't affect the start state much, since it is just the cost volume minimum. Try setting thetastep to 0.99 to slow down the optimization. I forget which way it goes, but I think lambda is the coupling parameter; try increasing or decreasing it until d becomes smooth. You can then tweak until you have the desired tradeoff between smoothness and a close fit. Epsilon is less useful, but it changes how much the result looks like an L1 or L2 norm, and should probably be in the 0.01-10 range.
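For concreteness, here is a minimal sketch of such a parameter block; the variable names and values are illustrative assumptions based on the advice above, not a verbatim excerpt of OpenDTAM:

```cpp
// Illustrative optimizer parameters (names and values are assumptions).
float thetaStart = 0.2f;     // coupling schedule starts here
float thetaMin   = 1.0e-4f;  // stop iterating once theta falls below this
float thetaStep  = 0.99f;    // theta shrinks by this factor each iteration; closer to 1.0 = slower optimization
float epsilon    = 0.1f;     // Huber threshold: small values behave like L1, large like L2 (try 0.01-10)
float lambda     = 0.01f;    // coupling/data weight: raise or lower until d becomes smooth, then tune the fit
// Pass these to the optimizer/denoiser setup in your test program.
```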
That's really odd. With a lambda that low, I would expect a to follow d around very closely, and d to become totally smooth. I wonder if your g weighting is messed up somehow. Let me have a quick look at the code...
I have a test for messed up g values:
That should cause the d function to flatten out. If not, something is really wrong.
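For anyone reproducing this check later, here is a minimal sketch of one way to force constant g weights; the member name `_g1` is an assumption, and this is not the original test snippet (which is not shown above):

```cpp
// Sanity check for the g weighting (hypothetical member name `_g1`):
// overwrite every g weight with 1.0f so the weighting becomes uniform.
denoiser._g1.setTo(1.0f);   // cv::gpu::GpuMat in the 2.4 branch
// Re-run the optimization: with constant g the smoothing is isotropic,
// so d should flatten out. If it does not, something else is wrong.
```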
Sorry, when I watched the a and d images carefully, the upper-left part changed a little.
I used this code block.
[Console] text_file_name = ../../../Trajectory_30_seconds/scene_000.txt
But I think the optimizer is very weak, because when I tried the CPU version, the optimizer worked and I got a high-quality result (though the quality doesn't reach that of the video you sent me; see #38).
I solved this issue! The issue comes from this code in DepthmapDenoiseWeightedHuber.cpp:
I used CUDA with compute capability 3.0, so this statement evaluates to false and no statements were executed. The solution for users running CUDA with compute capability 3.0 is to comment out all "#if __CUDA_ARCH__>300" statements and the "#endif" statements that correspond to them. In another issue (#22), you said:
So, compute capability 3.0 is ok.
But this code means CUDA with compute capability 3.0 doesn't satisfy the condition. I'm not sure which is correct, your code or your comment (#22). P.S.
I know this is old, but you're correct. It should be >= not >, since the code no longer uses funnel shift. I may edit this when I get time.
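For clarity, a hedged sketch of that guard (the surrounding code is assumed): compute capability 3.0 devices define `__CUDA_ARCH__` as exactly 300, so the strict comparison skips the guarded block on those GPUs.

```cpp
#if __CUDA_ARCH__ >= 300   // was: #if __CUDA_ARCH__ > 300, which excludes compute capability 3.0
    // guarded code path intended for compute capability 3.0 and newer
#endif
```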
Hi, all.
I ran OpenDTAM 2.4.9_backport, but the optimizer doesn't work well.
What should I do to fix this problem?
I'd like to get high-quality results like #38 (I don't know which branch he used to record the video in #38, but anyway that is not relevant right now).
And as I watched that video, the initial images (a and d) look similar to my results, so I think the initialization part is not the problem.
Is it an optimization parameter problem, or a bug?
In my understanding, this is the optimization part, right?
Here are the parameters I used (default settings).
And these are my results (after optimization); before the optimization I got the same (a, d) images.
Also, I downloaded OpenDTAM 2.4.9_backport and didn't change the code (for debugging, I added some cout statements, but I don't think cout statements change any variables).