Build applications written in NVIDIA® CUDA™ code for OpenCL™ 1.2 devices.
- leave applications in NVIDIA® CUDA™
- compile into OpenCL 1.2
- run on any OpenCL 1.2 GPU
- Write an NVIDIA® CUDA™ source code file, or find an existing one
- Let's use `cuda_sample.cu` (a hypothetical sketch of this kind of file appears after the run output below)
- Compile, using `cocl_py`:
```
$ cocl_py cuda_sample.cu
...
... (bunch of compily stuff) ...
...
./cuda_sample.cu compiled into ./cuda_sample
```
- Run:
```
$ ./cuda_sample
Using Intel , OpenCL platform: Intel Gen OCL Driver
Using OpenCL device: Intel(R) HD Graphics 5500 BroadWell U-Processor GT2
hostFloats[2] 123
hostFloats[2] 222
hostFloats[2] 444
```
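The repository's `cuda_sample.cu` is not reproduced here, but the kind of file `cocl_py` accepts is ordinary CUDA C++: a `__global__` kernel, host-side allocation and copies, and a kernel launch. A rough, hypothetical sketch (variable and kernel names are illustrative, not the actual sample):

```
// Hypothetical sketch only -- not the repository's cuda_sample.cu.
#include <stdio.h>
#include <cuda_runtime.h>

// simple kernel: add a scalar to each element
__global__ void addScalar(float *data, float value) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] += value;
}

int main() {
    const int N = 32;
    float hostFloats[N];
    for (int i = 0; i < N; i++) {
        hostFloats[i] = 100.0f + i;
    }

    float *deviceFloats = 0;
    cudaMalloc((void **)&deviceFloats, N * sizeof(float));
    cudaMemcpy(deviceFloats, hostFloats, N * sizeof(float), cudaMemcpyHostToDevice);

    addScalar<<<1, N>>>(deviceFloats, 21.0f);

    cudaMemcpy(hostFloats, deviceFloats, N * sizeof(float), cudaMemcpyDeviceToHost);
    printf("hostFloats[2] %.0f\n", hostFloats[2]);

    cudaFree(deviceFloats);
    return 0;
}
```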
- compiler for host-side code, including memory allocation, copy, streams, kernel launches
- compiler for device-side code, handling templated C++ code, converting it into bog-standard OpenCL 1.2 code
- cuBLAS API implementations for GEMM, GEMV, SCAL, SAXPY (using Cedric Nugteren's CLBlast); a sketch of this kind of call appears after this list
- cuDNN API implementations for: convolutions (using the `im2col` algorithm, over Cedric Nugteren's CLBlast), pooling, ReLU, tanh, and sigmoid
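For the cuBLAS part, the intent is that code calling the standard cuBLAS entry points for the operations listed above can be built with `cocl_py`, with the actual work routed through CLBlast. A minimal, hypothetical sketch of such a call, assuming the standard cuBLAS v2 signatures (Coriander's own header layout may differ); `d_x` and `d_y` are assumed to be already-populated device buffers:

```
// Hypothetical sketch of a cuBLAS SAXPY call (y = alpha * x + y), the kind
// of call the cuBLAS implementation above is intended to cover.
#include <cublas_v2.h>

void saxpyExample(int n, float *d_x, float *d_y) {
    cublasHandle_t handle;
    cublasCreate(&handle);

    float alpha = 2.0f;  // host-side scalar (default pointer mode)
    cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);

    cublasDestroy(handle);
}
```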
Kernel compilation proceeds in two steps: device-side code is first compiled into LLVM IR at build time, and that IR is then converted into OpenCL 1.2 at runtime, on the machine where the executable runs.
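As a rough illustration of the device-side conversion (this is not Coriander's actual generated code, which is produced from the LLVM IR and differs in detail), a simple CUDA kernel corresponds conceptually to plain OpenCL 1.2 built-ins like this:

```
// Hypothetical input kernel:
__global__ void addScalar(float *data, float value) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] += value;
}
// Conceptually, this maps onto bog-standard OpenCL 1.2 along the lines of:
//   kernel void addScalar(global float *data, float value) {
//       int i = get_group_id(0) * get_local_size(0) + get_local_id(0);
//       data[i] += value;
//   }
// (the real output is generated from LLVM IR, so names and structure differ)
```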
Slides are available on the IWOCL website, here
Coriander development is carried out using the following platforms:
- Ubuntu 16.04, with:
- NVIDIA K80 GPU and/or NVIDIA K520 GPU (via aws)
- MacBook Pro 4th generation (thank you ASAPP :-) ), with:
- Intel HD Graphics 530
- Radeon Pro 450
- macOS Sierra
Other systems should ideally work too. At a minimum, you will need at least one OpenCL-enabled GPU, with appropriate OpenCL drivers installed for that GPU. Both Linux and Mac systems stand a reasonable chance of working.
For installation, please see installation
You can install the following plugins:
- Coriander-clblast: just do `cocl_plugins.py install --repo-url https://github.com/hughperkins/coriander-clblast`
- Coriander-dnn: just do `cocl_plugins.py install --repo-url https://github.com/hughperkins/coriander-dnn`
- Your plugin here?
- use `cocl_add_executable` and `cocl_add_library` (a sketch of a consuming CMakeLists.txt follows below)
- see cmake usage
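A minimal, hypothetical sketch of a consuming `CMakeLists.txt`; the `find_package(cocl)` line and the project/target names are assumptions, and the cmake usage docs are the authoritative reference for how the Coriander cmake helpers are actually located:

```
# Hypothetical sketch only -- see the cmake usage docs for the real setup.
cmake_minimum_required(VERSION 3.0)
project(my_coriander_app)

# Assumption: the Coriander cmake helpers are made available via find_package
find_package(cocl REQUIRED)

# build a CUDA source file into an executable via Coriander
cocl_add_executable(my_coriander_app my_coriander_app.cu)

# or build a library:
# cocl_add_library(my_coriander_lib my_kernels.cu)
```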
See testing
See assumptions
Coriander uses the following libraries:
- clang/llvm: c/c++ parser/compiler; many contributors
- thrust: parallel GPU library, from NVIDIA®
- yaml-cpp: yaml for c++, by Jesse Beder
- EasyCL: wrapper for OpenCL 1.2 boilerplate
- argparsecpp: command-line parser for c++
- gtest: unit tests for c++, from Google
- Eigen-CL: Minimally-tweaked fork of Eigen, for OpenCL 1.2
- tf-coriander: Tensorflow for OpenCL-1.2
Please cite: CUDA-on-CL: a compiler and runtime for running NVIDIA® CUDA™ C++11 applications on OpenCL™ 1.2 Devices
- June 23:
  - factorized CLBlast implementation of NVIDIA® CUDA™ cuBLAS API, into new plugin coriander-clblast
- June 21:
  - created a new release v6.0.0, that marks a bunch of changes:
    - incorporates of course the earlier changes:
      - took some big steps towards portability and Windows compilation, i.e. using python 2.7 scripts, rather than bash scripts, and fixing many Windows-related compilation issues
      - the plugin architecture
      - factorizing the partial NVIDIA® CUDA™ cuDNN API implementation into a new plugin coriander-dnn
    - moved the default installation directory from `/usr/local` to `~/coriander`
      - this means that plugins can be installed without `sudo`
      - it also makes it relatively easy to wipe and reinstall, for more effective jenkins testing
    - `install_distro.py` is now considerably more tested than a few days ago, and handles downloading `llvm-4.0` automatically
- Older news