1.18 habana_main merge #307

Closed
wants to merge 16 commits into from
Conversation

kzawora-intel

No description provided.

xuechendi and others added 16 commits September 12, 2024 19:18
Signed-off-by: Chendi.Xue <[email protected]>
…lava-1.5 (#283)

FIX #282

---

<details>
<!-- inside this <details> section, markdown rendering does not work, so
we use raw html here. -->
<summary><b> PR Checklist (Click to Expand) </b></summary>

<p>Thank you for your contribution to vLLM! Before submitting the pull
request, please ensure the PR meets the following criteria. This helps
vLLM maintain code quality and improves the efficiency of the review
process.</p>

<h3>PR Title and Classification</h3>
<p>Only specific types of PRs will be reviewed. The PR title is prefixed
appropriately to indicate the type of change. Please use one of the
following:</p>
<ul>
<li><code>[Bugfix]</code> for bug fixes.</li>
<li><code>[CI/Build]</code> for build or continuous integration
improvements.</li>
<li><code>[Doc]</code> for documentation fixes and improvements.</li>
<li><code>[Model]</code> for adding a new model or improving an existing
model. Model name should appear in the title.</li>
<li><code>[Frontend]</code> for changes on the vLLM frontend (e.g.,
OpenAI API server, <code>LLM</code> class, etc.)</li>
<li><code>[Kernel]</code> for changes affecting CUDA kernels or other
compute kernels.</li>
<li><code>[Core]</code> for changes in the core vLLM logic (e.g.,
<code>LLMEngine</code>, <code>AsyncLLMEngine</code>,
<code>Scheduler</code>, etc.)</li>
<li><code>[Hardware][Vendor]</code> for hardware-specific changes.
Vendor name should appear in the prefix (e.g.,
<code>[Hardware][AMD]</code>).</li>
<li><code>[Misc]</code> for PRs that do not fit the above categories.
Please use this sparingly.</li>
</ul>
<p><strong>Note:</strong> If the PR spans more than one category, please
include all relevant prefixes.</p>

<h3>Code Quality</h3>

<p>The PR needs to meet the following code quality standards:</p>

<ul>
<li>We adhere to the <a
href="https://google.github.io/styleguide/pyguide.html">Google Python
style guide</a> and the <a
href="https://google.github.io/styleguide/cppguide.html">Google C++
style guide</a>.</li>
<li>Pass all linter checks. Please use <a
href="https://github.com/vllm-project/vllm/blob/main/format.sh"><code>format.sh</code></a>
to format your code.</li>
<li>The code needs to be well-documented so that future contributors
can easily understand it.</li>
<li>Include sufficient tests to ensure the project stays correct and
robust. This includes both unit tests and integration tests.</li>
<li>Please add documentation to <code>docs/source/</code> if the PR
modifies the user-facing behavior of vLLM. It helps vLLM users
understand and utilize the new features or changes.</li>
</ul>

<h3>Notes for Large Changes</h3>
<p>Please keep the changes as concise as possible. For major
architectural changes (>500 LOC excluding kernel/data/config/test), we
would expect a GitHub issue (RFC) discussing the technical design and
justification. Otherwise, we will tag the PR with
<code>rfc-required</code> and might not review it.</p>

<h3>What to Expect for the Reviews</h3>

<p>The goal of the vLLM team is to be a <i>transparent reviewing
machine</i>. We would like to make the review process transparent and
efficient, and to make sure no contributor feels confused or
frustrated. However, the vLLM team is small, so we need to prioritize
some PRs over others. Here is what you can expect from the review
process:</p>

<ul>
<li> After the PR is submitted, the PR will be assigned to a reviewer.
Every reviewer will pick up the PRs based on their expertise and
availability.</li>
<li> After the PR is assigned, the reviewer will provide a status
update every 2-3 days. If the PR is not reviewed within 7 days, please
feel free to ping the reviewer or the vLLM team.</li>
<li> After the review, the reviewer will put an <code>
action-required</code> label on the PR if there are changes required.
The contributor should address the comments and ping the reviewer to
re-review the PR.</li>
<li> Please respond to all comments within a reasonable time frame. If
a comment isn't clear or you disagree with a suggestion, feel free to
ask for clarification or discuss the suggestion.</li>
</ul>

<h3>Thank You</h3>

<p> Finally, thank you for taking the time to read these guidelines and
for your interest in contributing to vLLM. Your contributions make vLLM
a great tool for everyone! </p>


</details>
Update documentation on FP8 support
After #252, HPUGraph capture takes much less memory, and we can reduce
the memory reserved for HPUGraphs. On Llama3.1-8b-Instruct (G2),
capturing 100% of prefill and decode graphs at BS=256 now takes 1.566
GB of HBM, far less than the 40% (~30 GB) we reserve by default. The
difference is unused (wasted) memory that could instead hold more KV
cache blocks.
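A minimal sketch of how to reclaim that memory, assuming the fork's
`VLLM_GRAPH_RESERVED_MEM` environment variable is the knob controlling
the fraction of usable HPU memory set aside for graph capture (the ~40%
default mentioned above); the exact name and default value are
assumptions here, not confirmed by this PR:

```python
import os

# Assumption: VLLM_GRAPH_RESERVED_MEM sets the fraction of usable HPU
# memory reserved for HPUGraph capture. With capture now needing ~1.6 GB,
# a much smaller fraction suffices; the rest goes to KV cache blocks.
os.environ["VLLM_GRAPH_RESERVED_MEM"] = "0.1"  # reserve 10% instead of ~40%

from vllm import LLM

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
```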
The original code doesn't print the default value correctly:

INFO 09-17 00:06:07 habana_model_runner.py:95] VLLM_PROMPT_BS_BUCKET_MIN=1 (default:min)
INFO 09-17 00:06:07 habana_model_runner.py:95] VLLM_PROMPT_BS_BUCKET_STEP=1 (default:step)
INFO 09-17 00:06:07 habana_model_runner.py:95] VLLM_PROMPT_BS_BUCKET_MAX=1 (default:max)

This change makes it print the correct default value:

INFO 09-17 21:30:51 habana_model_runner.py:95] VLLM_PROMPT_BS_BUCKET_MIN=1 (default:1)
INFO 09-17 21:30:51 habana_model_runner.py:95] VLLM_PROMPT_BS_BUCKET_STEP=4 (default:32)
INFO 09-17 21:30:51 habana_model_runner.py:95] VLLM_PROMPT_BS_BUCKET_MAX=4 (default:64)
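A minimal sketch of the likely pattern (the helper name is hypothetical;
the fork's actual code in habana_model_runner.py may differ): the buggy
version interpolated the *name* of the default rather than its value:

```python
import logging
import os

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("habana_model_runner")

def read_bucket_setting(name: str, default: int) -> int:
    """Hypothetical helper mirroring the described bucket-config logging."""
    value = int(os.environ.get(name, default))
    # Buggy: the placeholder was filled with the literal string
    # "min"/"step"/"max" instead of the default's value, e.g.
    #   logger.info("%s=%d (default:%s)", name, value, "min")
    # Fixed: interpolate the actual default value.
    logger.info("%s=%d (default:%d)", name, value, default)
    return value

read_bucket_setting("VLLM_PROMPT_BS_BUCKET_MIN", 1)
```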
Fix block count calculation for Flat PA by adding an empty table_block
(#158)
Remove dummy seq metadata from loop for Flat PA fix
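A rough sketch of the idea behind these two Flat PA fixes (all names
here are hypothetical, since the PR does not show the code): padding
sequences map to a reserved empty block, so they neither inflate the
block count nor require dummy metadata in the decode loop:

```python
# Hypothetical sketch: dummy (padding) sequences point at one reserved empty
# block instead of real KV-cache blocks, so block accounting only counts
# blocks owned by live sequences.
PAD_BLOCK_ID = 0  # reserved empty block; never holds real KV-cache data

def flatten_block_tables(block_tables: list[list[int]],
                         padded_batch_size: int) -> list[int]:
    flat: list[int] = []
    for seq_idx in range(padded_batch_size):
        if seq_idx < len(block_tables):
            flat.extend(block_tables[seq_idx])  # live sequence: real blocks
        else:
            flat.append(PAD_BLOCK_ID)           # padding: empty block only
    return flat

def num_live_blocks(flat: list[int]) -> int:
    # Padding entries must not count toward the number of allocated blocks.
    return sum(1 for block_id in flat if block_id != PAD_BLOCK_ID)
```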
Add an extra mark_step() on each decoder layer to optimize performance
on Gaudi.

Signed-off-by: Bob Zhu <[email protected]>
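A minimal sketch of the pattern (the loop and layer structure are
illustrative, not the PR's actual model code): mark_step() from the
Habana PyTorch bridge flushes the accumulated ops, so each decoder
layer is compiled and launched as its own graph:

```python
import habana_frameworks.torch as htorch  # Habana PyTorch bridge

def run_decoder_layers(layers, hidden_states):
    # Illustrative decode loop; the real habana_model_runner code differs.
    for layer in layers:
        hidden_states = layer(hidden_states)
        # Flush the ops recorded so far as one graph. Breaking the forward
        # pass at each layer avoids compiling one huge graph and can improve
        # decode performance on Gaudi.
        htorch.core.mark_step()
    return hidden_states
```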
…ng (#297)

Fix bug: device_str in initialize_ray_cluster requires an uppercase
string.

Without this fix, multi-HPU runs fail with "ValueError: The number of
required hpus exceeds the total number of available hpus in the
placement group": device_str was not uppercase where an uppercase key
was expected, so the count of available HPUs always came back as 0.
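A minimal repro of the failure mode (simplified; the actual Ray
placement-group plumbing is omitted): bundles key custom resources
such as HPUs by the uppercase name, so a lowercase lookup counts zero
devices:

```python
# Simplified illustration: a Ray placement-group bundle keys custom
# resources such as HPUs by their uppercase name.
bundle = {"HPU": 1.0, "CPU": 8.0}

device_str = "hpu"
print(bundle.get(device_str, 0))          # 0   -> "available hpus" looks empty
print(bundle.get(device_str.upper(), 0))  # 1.0 -> the fix: use the uppercase key
```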
kzawora-intel added the habana (Issues or PRs submitted by Habana Labs) label on Sep 20, 2024.