I am trying to run depth estimation with the small backbone size and DPT decoder on the NYUv2 Depth Dataset, using the general framework presented in the depth_estimation.ipynb notebook. I downloaded the NYUv2 dataset from Kaggle at https://www.kaggle.com/datasets/soumikrakshit/nyu-depth-v2?resource=download and ran depth estimation on the ~50k training images by adding the code below to the end of depth_estimation.ipynb.
At first, I computed RMSE on the entire image (without the Eigen crop or min/max thresholding) and obtained an RMSE of 0.433. After reading issue #227, I looked into the `pre_eval` and `evaluate` methods in the Monocular-Depth-Estimation-Toolbox repository and added min/max thresholding (with min threshold 1e-3 and max threshold 10, the default values given in the repository), as well as the Eigen crop at the same pixel range [45:471, 41:601] used there. Even after making this change, however, the RMSE is still 0.410, far from the value of 0.356 reported in the paper. For clarity, I calculate RMSE by computing the MSE for each image separately, averaging the MSEs, and then taking the square root of that average, which is (to my understanding) the correct implementation of RMSE.
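Concretely, the per-image metric I am computing looks like the sketch below (`pred`, `gt`, and `pairs` are hypothetical names for an H×W predicted depth map, the matching ground truth, and the list of such pairs; the crop and thresholds are the toolbox defaults mentioned above):

```python
import torch

def masked_mse(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Per-image MSE with Eigen crop and min/max depth thresholding."""
    pred, gt = pred[45:471, 41:601], gt[45:471, 41:601]  # Eigen crop
    valid = (gt > 1e-3) & (gt < 10.0)                    # toolbox default thresholds
    return ((pred[valid] - gt[valid]) ** 2).mean()

# Dataset-level RMSE: average the per-image MSEs, then take the square root.
# rmse = torch.sqrt(torch.stack([masked_mse(p, g) for p, g in pairs]).mean())
```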
I understand that I am running on training data, whereas the performance reported in the paper is for the validation data, but the disparity should not be this severe (~16% increase in RMSE), especially as validation performance should be, in general, no better than training performance.
Now that I have incorporated the Eigen crop and min/max thresholding and the results still do not match, is there any step used in the paper that I have left out here? From looking into the Monocular-Depth-Estimation-Toolbox repository, it looks like I have performed all of the steps included there. Alternatively, is there some simple way that I could import code from the Monocular-Depth-Estimation-Toolbox repository and use it to evaluate the DINOv2 depth estimator? From their README, it seems that this would be quite nontrivial, as DINOv2 is not listed as a supported backbone.
Some notes about my code:
- `data_list` is a list containing one element per training sample, with each element being a two-element list in which the first element is the name of the folder (e.g. `basement_0001a_out`) and the second element is the index of the sample within the folder (see the sketch after these notes)
- I multiply by `10.0` when loading the ground truth depth map, since the ground truth depth maps are provided at 1/10th scale. When loading the ground truth depth maps as is, without multiplication, the depth predictions are on average almost exactly 10 times the ground truth depth; after multiplying by 10, the histograms of predicted and ground truth depths line up accurately.
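To make these notes concrete, the sketch below shows roughly what the two pieces look like; the directory layout and file naming are illustrative, not the exact Kaggle paths:

```python
import numpy as np
from PIL import Image

# data_list: one [folder_name, sample_index] pair per training sample, e.g.
data_list = [["basement_0001a_out", 1], ["basement_0001a_out", 2]]  # ... ~50k entries

def load_gt_depth(folder: str, idx: int) -> np.ndarray:
    # Hypothetical path; the depth maps are stored at 1/10th scale,
    # so multiplying by 10.0 recovers metric depth.
    png = Image.open(f"nyu_data/{folder}/{idx}.png")
    return np.asarray(png, dtype=np.float32) * 10.0
```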
My evaluation loop (with the per-sample body omitted here) is:

```python
import torch

mse = torch.zeros(len(data_list))  # one MSE entry per training sample
print_increment = 100              # presumably used for periodic progress printing
for i in range(len(data_list)):
    # Per-sample work goes here: load the image and ground truth depth, run the
    # model, apply the Eigen crop and thresholding, and store the MSE in mse[i].
    ...
print("Avg MSE = " + str(mse.mean().item()))
print("Avg RMSE = " + str(torch.sqrt(mse.mean()).item()))
```