Installation Guide #14
Hi, the error message is "Invalid Node - /If". BTW, I successfully created "depth_anything_vitb14.onnx" from "depth_anything_vitb14.pth", so I'm not sure where or what caused this error. My test environment:
Copy and paste the dpt.py from this repo into the <depth_anything_installpath>/depth_anything folder.
That works beautifully! The .engine file was successfully created by following the instructions.
Hi, just a question regarding performance. Is there a way to display/process faster by modifying something in the code, such as the resolution, or is 118 ms per frame the maximum we can get? As mentioned above, my test environment is a Jetson AGX Orin with 64 GB memory and 12 cores. Attached are a sample run video and image for your reference:
Maybe attach an external GPU to your Jetson, if that's possible?
To the authors I would like to say: great work! There doesn't seem to be any CMakeLists.txt in place. I would like to contribute to this project as well; maybe I could set up a CMakeLists.txt and an auto-install script or something? Also, we could move to a newer C++ standard like C++20, what do you think? I am a Linux developer myself, so this could be quite useful.
Yes, sure. There is indeed a [CMakeLists.txt](https://github.com/spacewalk01/depth-anything-tensorrt/blob/main/CMakeLists.txt). But it would be great if you made it work for Linux.
Please send a pull request.
Ah yes, I missed it; it's late for me, 1 am 😆 I will do it tomorrow after work and will have to test it as well, so it may not be that quick 😄
For example, code like this:

bool IsPathExist(const std::string& path) {
#ifdef _WIN32
    DWORD fileAttributes = GetFileAttributesA(path.c_str());
    return (fileAttributes != INVALID_FILE_ATTRIBUTES);
#else
    return (access(path.c_str(), F_OK) == 0);
#endif
}

bool IsFile(const std::string& path) {
    if (!IsPathExist(path)) {
        printf("%s:%d %s not exist\n", __FILE__, __LINE__, path.c_str());
        return false;
    }
#ifdef _WIN32
    DWORD fileAttributes = GetFileAttributesA(path.c_str());
    return ((fileAttributes != INVALID_FILE_ATTRIBUTES) && ((fileAttributes & FILE_ATTRIBUTE_DIRECTORY) == 0));
#else
    struct stat buffer;
    return (stat(path.c_str(), &buffer) == 0 && S_ISREG(buffer.st_mode));
#endif
}

can be replaced by a cross-platform std::filesystem version (introduced in C++17):

[[nodiscard]] bool IsFile(const std::string& path)
{
    if (!std::filesystem::exists(path)) {
        printf("%s:%d %s not exist\n", __FILE__, __LINE__, path.c_str());
        return false;
    }
    return std::filesystem::is_regular_file(path);
}
Thanks, looks good to me :)
I got stuck this evening with some unplanned issues. It seems that CUDA only works with specific GCC versions, and since I am using the latest one it doesn't like it; there are also other problems with installing the TensorRT library. I have slightly refactored the code and made some improvements, but I will only be able to submit the PR on the weekend.

cmake_minimum_required(VERSION 3.28)
project(depth-anything-tensorrt-simplified VERSION 1.0 LANGUAGES C CXX CUDA)

set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
set(CMAKE_C_STANDARD 11)
set(CMAKE_C_STANDARD_REQUIRED TRUE)

# TODO: Add stricter compiler flags
if(CMAKE_CXX_COMPILER_ID MATCHES "GNU|Clang")
    add_compile_options("-Wall" "-Wextra" "-Wcast-align" "-Wunused" "-O2" "-fexceptions" "-pedantic")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
    add_compile_options("/W4" "/WX" "/we4715" "/O2" "/EHsc" "/permissive-")
endif()

I am thinking maybe it would be a better idea to stick this all into a Docker container? 🤔
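One possible workaround for the CUDA/GCC mismatch mentioned above is to point CMake's CUDA toolchain at an older host compiler. A minimal sketch only; the gcc-12 path and the build directory name are assumptions, not taken from this thread, so substitute whatever GCC version your CUDA toolkit actually supports:

```shell
# Configure with an explicit CUDA host compiler; nvcc rejects GCC
# versions newer than the ones the installed toolkit supports.
cmake -S . -B build \
    -DCMAKE_CUDA_HOST_COMPILER=/usr/bin/gcc-12 \
    -DCMAKE_BUILD_TYPE=Release

# Build with all available cores.
cmake --build build -j"$(nproc)"
```

CMAKE_CUDA_HOST_COMPILER is a standard CMake variable for the CUDA language; pinning it at configure time avoids editing any project files.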
Installation

Download the pretrained model and install Depth-Anything:

    git clone https://github.com/LiheYoung/Depth-Anything
    cd Depth-Anything
    pip install -r requirements.txt
Copy and paste the dpt.py in this repo to the <depth_anything_installpath>/depth_anything folder. Note that I've only removed a squeeze operation at the end of the model's forward function in dpt.py to avoid conflicts with TensorRT.

Export the model to onnx format using export_to_onnx.py; you will get an onnx file named depth_anything_vit{}14.onnx, such as depth_anything_vitb14.onnx.
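The export step might look like the sketch below; the exact arguments (if any) depend on export_to_onnx.py itself, so check the script before running:

```shell
# Run from the Depth-Anything checkout, with this repo's dpt.py
# already copied in place. See export_to_onnx.py for how the model
# size (vits/vitb/vitl) is selected.
python export_to_onnx.py
```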
Install TensorRT following the official TensorRT installation guidance.
Click here for the Windows guide

Unzip the TensorRT-8.x.x.x.Windows10.x86_64.cuda-x.x.zip file to the location that you chose, where 8.x.x.x is your TensorRT version and cuda-x.x is your CUDA version (11.6, 11.8 or 12.0). The new TensorRT-8.x.x.x subdirectory will be referred to as <installpath> in the steps below.

Add the TensorRT library files to your system PATH. To do so, copy the DLL files from <installpath>/lib to your CUDA installation directory, for example C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vX.Y\bin, where vX.Y is your CUDA version. The CUDA installer should have already added the CUDA path to your system PATH.

Click here for installing TensorRT on Linux.
Find trtexec and then export the onnx model to a TensorRT engine. Add --fp16 if you want to enable fp16 precision.
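A typical trtexec invocation for this step could look like the following sketch. The file names reuse the example above; --onnx, --saveEngine and --fp16 are standard trtexec options, but verify them against your TensorRT version:

```shell
# Convert the exported ONNX model into a serialized TensorRT engine.
# --fp16 is optional and enables half-precision kernels where supported.
trtexec --onnx=depth_anything_vitb14.onnx \
        --saveEngine=depth_anything_vitb14.engine \
        --fp16
```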
Download and install any recent OpenCV for Windows.

Modify the TensorRT and OpenCV paths in CMakeLists.txt.

Build the project using the following commands or cmake-gui (Windows).
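On the command line, the build step might be sketched as follows (the build directory name is an assumption; on Windows, cmake-gui performs the same configure step interactively):

```shell
# Configure and build out-of-source from the repository root.
cmake -S . -B build
cmake --build build --config Release
```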
Tested Environment