
Fatal error: cuda.h: No such file or directory #32

Open
andyluo7 opened this issue May 19, 2020 · 6 comments
@andyluo7

I am building the project with the `make build` command, and the following error occurred.
Should I modify the Makefile or set an environment variable?

```
[ 20%] Building CXX object lwis/CMakeFiles/lwis.dir/src/lwis.cpp.o
In file included from /home/aluo/inference_results_v0.5/closed/NVIDIA/code/harness/lwis/src/lwis.cpp:17:0:
/home/aluo/inference_results_v0.5/closed/NVIDIA/code/harness/lwis/include/lwis.hpp:31:10: fatal error: cuda.h: No such file or directory
 #include <cuda.h>
          ^~~~~~~~
compilation terminated.
lwis/CMakeFiles/lwis.dir/build.make:62: recipe for target 'lwis/CMakeFiles/lwis.dir/src/lwis.cpp.o' failed
make[4]: *** [lwis/CMakeFiles/lwis.dir/src/lwis.cpp.o] Error 1
make[4]: Leaving directory '/home/aluo/inference_results_v0.5/closed/NVIDIA/build/harness'
CMakeFiles/Makefile2:122: recipe for target 'lwis/CMakeFiles/lwis.dir/all' failed
make[3]: *** [lwis/CMakeFiles/lwis.dir/all] Error 2
make[3]: Leaving directory '/home/aluo/inference_results_v0.5/closed/NVIDIA/build/harness'
Makefile:83: recipe for target 'all' failed
make[2]: *** [all] Error 2
make[2]: Leaving directory '/home/aluo/inference_results_v0.5/closed/NVIDIA/build/harness'
Makefile:280: recipe for target 'build_harness' failed
make[1]: *** [build_harness] Error 2
make[1]: Leaving directory '/home/aluo/inference_results_v0.5/closed/NVIDIA'
Makefile:228: recipe for target 'build' failed
make: *** [build] Error 2
```

@nvpohanh

@andyluo7 Did you run `make build` inside the container? If so, could you try `nvidia-smi` and `ls /usr/local/cuda/include | grep cuda.h` to see whether nvidia-docker is working correctly? Thanks

@andyluo7
Author

@nvpohanh, no, I did not run it in a container. I am trying to run the inference on AGX Xavier.
Should I run it in a container?

@nvpohanh

Ah, sorry, I didn't know this was on Xavier. No, you don't need Docker on Xavier.

Which JetPack did you install? We were using JetPack 4.3 DP.

@andyluo7
Author

@nvpohanh, I used the latest JetPack 4.4 DP. Is there any difference?
There are multiple `cuda.h` files on the system.
I wonder if I should change the Makefile to add the include path.
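Such a change might look like the following sketch. Note this is an assumption about the build, not a confirmed fix: the harness is actually driven by CMake, so whether these standard make variables are honored would need checking.

```make
# Sketch only: point the preprocessor at the JetPack CUDA headers
# instead of editing the sources. CUDA_INC can be overridden on the
# command line, e.g. `make build CUDA_INC=/usr/local/cuda-10.2/include`.
CUDA_INC ?= /usr/local/cuda/include
CPPFLAGS += -I$(CUDA_INC)
```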

@nvpohanh

Yes, maybe they changed the path. Could you add that to `/usr/local/cuda/include/cuda.h`?

@andyluo7
Author

I had to hardcode `#include </usr/local/cuda/include/cuda.h>` in the .h and .c files in `/inference_results_v0.5/closed/NVIDIA/code/harness/lwis` to make it work.
