Allow higher version of lm-eval (#2165)
joecummings authored Dec 17, 2024
1 parent 9dae7f1 commit c0b2cbd
Showing 6 changed files with 10 additions and 10 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/gpu_test.yaml
@@ -53,7 +53,7 @@ jobs:
 - name: Install remaining dependencies
   run: |
     python -m pip install -e ".[dev]"
-    python -m pip install lm-eval==0.4.5
+    python -m pip install lm-eval>=0.4.5
 - name: Run recipe and unit tests with coverage
   run: pytest tests --ignore tests/torchtune/modules/_export --with-integration --cov=. --cov-report=xml --durations=20 -vv
 - name: Upload Coverage to Codecov
2 changes: 1 addition & 1 deletion .github/workflows/recipe_test.yaml
@@ -42,7 +42,7 @@ jobs:
   run: |
     python -m pip install torch torchvision torchao
     python -m pip install -e ".[dev]"
-    python -m pip install lm-eval==0.4.5
+    python -m pip install lm-eval>=0.4.5
 - name: Run recipe tests with coverage
   run: pytest tests -m integration_test --cov=. --cov-report=xml --durations=20 -vv
 - name: Upload Coverage to Codecov
2 changes: 1 addition & 1 deletion .github/workflows/regression_test.yaml
@@ -58,7 +58,7 @@ jobs:
 - name: Install remaining dependencies
   run: |
     python -m pip install -e ".[dev]"
-    python -m pip install lm-eval==0.4.5
+    python -m pip install lm-eval>=0.4.5
 - name: Run regression tests with coverage
   run: pytest tests -m slow_integration_test --silence-s3-logs --cov=. --cov-report=xml --durations=20 -vv
 - name: Upload Coverage to Codecov
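Note on the workflow edits above (the same line changes in gpu_test.yaml, recipe_test.yaml, and regression_test.yaml): in a POSIX shell, the unquoted `>=` in `python -m pip install lm-eval>=0.4.5` is parsed as output redirection, so the step actually installs the latest lm-eval and sends pip's output to a file named `=0.4.5`. CI still passes because the latest release satisfies the constraint, but the specifier itself never reaches pip; quoting it, as in `python -m pip install "lm-eval>=0.4.5"`, would make the constraint take effect.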
4 changes: 2 additions & 2 deletions recipes/configs/llama3_2_vision/11B_evaluation.yaml
@@ -3,8 +3,8 @@
 # This config assumes that you've run the following command before launching:
 # tune download meta-llama/Llama-3.2-11B-Vision-Instruct --output-dir /tmp/Llama-3.2-11B-Vision-Instruct --ignore-patterns "original/consolidated*"
 #
-# It also assumes that you've downloaded the EleutherAI Eval Harness (v0.4.5):
-# pip install lm_eval==0.4.5
+# It also assumes that you've downloaded the EleutherAI Eval Harness (v0.4.5 or higher):
+# pip install lm_eval
 #
 # To launch, run the following command from root torchtune directory:
 # tune run eleuther_eval --config llama3_2_vision/11B_evaluation
6 changes: 3 additions & 3 deletions recipes/eleuther_eval.py
@@ -441,10 +441,10 @@ def __init__(self, cfg: DictConfig) -> None:
         # Double check we have the right Eval Harness version
         from importlib.metadata import version

-        if version("lm-eval") != "0.4.5":
+        if version("lm-eval") < "0.4.5":
             raise RuntimeError(
-                "This recipe requires EleutherAI Eval Harness v0.4.5. "
-                "Please install with `pip install lm-eval==0.4.5`"
+                "This recipe requires EleutherAI Eval Harness v0.4.5 or higher. "
+                "Please install with `pip install lm-eval>=0.4.5`"
             )

         # General variable initialization
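A caveat on the new version check: `version("lm-eval") < "0.4.5"` compares strings lexicographically, so a release such as 0.4.10 would sort before 0.4.5 and be wrongly rejected. A minimal sketch of a more robust check using `packaging.version` (a suggestion, not part of this commit; it assumes the `packaging` distribution is available in the environment):

    from importlib.metadata import version

    from packaging.version import Version

    # Version objects compare numerically per PEP 440, so 0.4.10 > 0.4.5,
    # whereas the plain string comparison "0.4.10" < "0.4.5" is True.
    if Version(version("lm-eval")) < Version("0.4.5"):
        raise RuntimeError(
            "This recipe requires EleutherAI Eval Harness v0.4.5 or higher. "
            "Please install with `pip install lm-eval>=0.4.5`"
        )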
4 changes: 2 additions & 2 deletions tests/recipes/test_eleuther_eval.py
@@ -132,8 +132,8 @@ def test_eval_recipe_errors_without_lm_eval(self, monkeypatch, tmpdir):
         monkeypatch.setattr(sys, "argv", cmd)
         with pytest.raises(
             RuntimeError,
-            match="This recipe requires EleutherAI Eval Harness v0.4.5. "
-            "Please install with `pip install lm-eval==0.4.5`",
+            match="This recipe requires EleutherAI Eval Harness v0.4.5 or higher. "
+            "Please install with `pip install lm-eval>=0.4.5`",
         ):
             runpy.run_path(TUNE_PATH, run_name="__main__")

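A side note on this test: `pytest.raises(match=...)` treats its argument as a regular expression and applies it with `re.search`, so the unescaped dots in the expected message match any character and the assertion is looser than a literal comparison. A sketch of an exact-match variant (an alternative, not what the commit ships):

    import re

    # re.escape turns the literal message into an exact-match pattern.
    expected = (
        "This recipe requires EleutherAI Eval Harness v0.4.5 or higher. "
        "Please install with `pip install lm-eval>=0.4.5`"
    )
    with pytest.raises(RuntimeError, match=re.escape(expected)):
        runpy.run_path(TUNE_PATH, run_name="__main__")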
