Maximizing intelligence per watt
- Singapore (UTC +08:00)
- @nczhu
Pinned
- janhq/awesome-local-ai: An awesome repository of local AI tools
- janhq/cortex.tensorrt-llm (forked from NVIDIA/TensorRT-LLM): Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA's TensorRT-LLM for GPU-accelerated inference on NVIDIA GPUs.
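  A library "loaded by any server at runtime" typically means dynamic loading of a shared object. Below is a minimal sketch of that pattern using POSIX `dlopen`/`dlsym`; the library filename (`libcortex_tensorrt_llm.so`) and the entry-point symbol (`cortex_infer_init`) are hypothetical placeholders for illustration, not Cortex's actual ABI.

  ```cpp
  // Minimal sketch: a host server loading an inference library at runtime.
  // Link with -ldl on Linux. Names below are assumptions, not Cortex's API.
  #include <dlfcn.h>
  #include <cstdio>

  int main() {
      // Open the shared library lazily, e.g. when the server starts.
      void* handle = dlopen("./libcortex_tensorrt_llm.so", RTLD_NOW);
      if (!handle) {
          std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
          return 1;
      }

      // Resolve a C entry point exported by the library (symbol name assumed).
      using init_fn = int (*)();
      auto init = reinterpret_cast<init_fn>(dlsym(handle, "cortex_infer_init"));
      if (!init) {
          std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
          dlclose(handle);
          return 1;
      }

      // Initialize the engine through the resolved function pointer.
      int status = init();
      std::printf("engine init returned %d\n", status);

      dlclose(handle);
      return 0;
  }
  ```

  Keeping the engine behind a plain C entry point like this is what lets "any server" host it: the server only needs `dlopen` and a known symbol, with no compile-time dependency on TensorRT-LLM itself.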