[Misc] Upload a shell for bare-metal #268

Open · wants to merge 6 commits into main
15 changes: 15 additions & 0 deletions install-vllm-for-bare-metal.md
@@ -0,0 +1,15 @@
# Install-vllm-for-bare-metal
A shell script for vLLM installation on bare metal


### Description
The script builds and installs vLLM and its dependencies from source. Building and installing from source helps achieve the best performance.

Not all dependencies are required for a vLLM installation; it depends on your use case. The script comments out all optional dependencies and by default only installs PyTorch and vLLM. To install an optional dependency, uncomment the corresponding block in the script, as shown below.
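For example, the optional Hugging Face CLI block looks like this once uncommented; the same pattern applies to the other optional blocks:
```
# optional huggingface installation
python3 -m pip install --upgrade huggingface-hub[cli]
```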

### Using the script
Go to your working directory and run:
```
bash install-vllm-for-bare-metal.sh
```
Please note that for optional dependencies, the script clones their source code into the working directory. If you do not want to keep the source code of those dependencies, you can delete the cloned directories after installation, for example as shown below.
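A cleanup sketch, assuming the flash-attention, triton, and rocmProfileData blocks were enabled (only remove the directories for blocks you actually ran):
```
# remove source trees cloned for optional dependencies
rm -rf flash-attention triton rocmProfileData
```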
103 changes: 103 additions & 0 deletions install-vllm-for-bare-metal.sh
@@ -0,0 +1,103 @@
#!/usr/bin/env bash

# READ BEFORE USE:
# This script helps to install vLLM on bare metal. The installation can take several hours.

# By default, the script installs PyTorch and vLLM, and assumes ROCm and its environment are already installed.
# vLLM also has other dependencies; if you find something missing, uncomment the corresponding script block to install it.
# Make sure the vLLM wheel to install is in the same directory as this script.
# The script may also clone some repositories into the current directory; you can delete them after installation.

# To test the vLLM installation: run some examples from vllm/examples.
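# For example (a sketch, assuming the vllm repository is cloned locally and a model is
# available; exact example script names may differ between vllm versions):
# python3 vllm/examples/offline_inference.py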





# # Install some basic utilities
# apt-get update -q -y && apt-get install -q -y python3 python3-pip
# apt-get update -q -y && apt-get install -q -y sqlite3 libsqlite3-dev libfmt-dev libmsgpack-dev libsuitesparse-dev
# # Remove sccache
# python3 -m pip install --upgrade pip
# apt-get purge -y sccache; python3 -m pip uninstall -y sccache; rm -f "$(which sccache)"


# # local build environment
# export PYTORCH_ROCM_ARCH="gfx90a;gfx942"
# export LLVM_SYMBOLIZER_PATH="/opt/rocm/llvm/bin/llvm-symbolizer"
# export PATH="$PATH:/opt/rocm/bin:/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/bin:"
# export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/opt/rocm/lib/:/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib:"
# export CPLUS_INCLUDE_PATH="$CPLUS_INCLUDE_PATH:/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/include:/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/include/torch/csrc/api/include/:/opt/rocm/include/:"
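# # A quick sanity check for the paths assumed above (a sketch; adjust the paths if
# # your ROCm or conda installation lives elsewhere):
# /opt/rocm/bin/rocminfo | head -n 5
# python3 -c "import sys; print(sys.prefix)"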


# python3 -m pip install --upgrade pip && rm -rf /var/lib/apt/lists/*


# # optional numpy update
# case "$(which python3)" in \
# *"/opt/conda/envs/py_3.9"*) \
# rm -rf /opt/conda/envs/py_3.9/lib/python3.9/site-packages/numpy-1.20.3.dist-info/;; \
# *) ;; esac
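# # Optional check that numpy still imports cleanly after the cleanup above (a sketch):
# python3 -c "import numpy; print(numpy.__version__)"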


# # optional flash-attention installation
# FA_BRANCH="3cea2fb"
# pip uninstall -y flash-attn
# git clone https://github.com/ROCm/flash-attention.git
# cd flash-attention
# git checkout $FA_BRANCH
# git submodule update --init
# GPU_ARCHS=${PYTORCH_ROCM_ARCH} python3 setup.py bdist_wheel --dist-dir=dist
# pip install dist/*.whl
# cd ..
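# # Optional check that the flash-attention wheel installed (a sketch; assumes the
# # package is importable as flash_attn and exposes __version__):
# python3 -c "import flash_attn; print(flash_attn.__version__)"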


# # optional triton installation
# pip uninstall -y triton \
# && python3 -m pip install ninja cmake wheel pybind11 && git clone https://github.com/triton-lang/triton.git \
# && cd triton \
# && git checkout e192dba \
# && cd python \
# && python3 setup.py bdist_wheel --dist-dir=dist \
# && pip install dist/*.whl \
# && cd .. && cd ..
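# # Optional check that the triton wheel installed (a sketch):
# python3 -c "import triton; print(triton.__version__)"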


# pytorch installation
pip uninstall -y torch torchvision \
&& git clone https://github.com/ROCm/pytorch.git pytorch \
&& cd pytorch && git checkout cedc116 && git submodule update --init --recursive \
&& python3 tools/amd_build/build_amd.py \
&& CMAKE_PREFIX_PATH=$(python3 -c 'import sys; print(sys.prefix)') python3 setup.py bdist_wheel --dist-dir=dist \
&& pip install dist/*.whl \
&& cd .. \
&& git clone https://github.com/pytorch/vision.git vision \
&& cd vision && git checkout v0.19.1 \
&& python3 setup.py bdist_wheel --dist-dir=dist \
&& pip install dist/*.whl \
&& cd ..
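# Optional sanity check for the ROCm PyTorch build (a sketch; on ROCm the torch.cuda
# API is backed by HIP, so is_available() should return True on a GPU node):
# python3 -c "import torch; print(torch.__version__, torch.version.hip, torch.cuda.is_available())"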


# # optional huggingface installation
# python3 -m pip install --upgrade huggingface-hub[cli]


# # optional profile installation
# git clone -b nvtx_enabled https://github.com/ROCm/rocmProfileData.git \
# && cd rocmProfileData/rpd_tracer \
# && pip install -r requirements.txt && cd ../ \
# && make && make install \
# && cd hipMarker && python setup.py install \
# && cd ../..


# Install vLLM (and gradlib)
pip install -U -r requirements-rocm.txt \
&& pip uninstall -y vllm gradlib \
&& pip install *.whl
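# Optional sanity check that vLLM imports after installation (a sketch):
# python3 -c "import vllm; print(vllm.__version__)"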


# # optional amd_smi installation
# cd /opt/rocm/share/amd_smi \
# && pip wheel . --wheel-dir=dist \
# && pip install dist/*.whl \