
Is there a way to run llama2 in the new repo? #1319

Open
dnatarajan00 opened this issue Jan 7, 2025 · 1 comment

Comments

@dnatarajan00

I understand this repo has been deprecated. I would like to run Llama 2 from the new location (https://github.com/meta-llama/llama-models) but am unable to get it working. Are there instructions for running Llama 2 from the new repo?

Please see this associated issue here: meta-llama/llama-models#247

@DangNhutNguyen

Running Llama 2 from meta-llama/llama-models

  1. Clone the repository:

    git clone https://github.com/meta-llama/llama-models.git
    cd llama-models
  2. Install dependencies:

    pip install -r requirements.txt
  3. Download model weights (follow repo instructions).

  4. Run the model with a script like the one below. The Hugging Face model ID shown is the gated meta-llama/Llama-2-7b-hf checkpoint, which requires approved access; substitute a local path to your downloaded weights if you have them:

    from transformers import LlamaForCausalLM, LlamaTokenizer
    
    # Hugging Face model ID, or a local path to converted weights
    model_name = "meta-llama/Llama-2-7b-hf"
    
    tokenizer = LlamaTokenizer.from_pretrained(model_name)
    model = LlamaForCausalLM.from_pretrained(model_name)
    
    inputs = tokenizer("Hello, world!", return_tensors="pt")
    # Cap the number of generated tokens so the call terminates promptly
    outputs = model.generate(inputs["input_ids"], max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  5. Troubleshooting: Check the repo's [Issues](https://github.com/meta-llama/llama-models/issues) for solutions to common problems.
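One extra note: if you use the chat-tuned variants (e.g. Llama-2-7b-chat) rather than the base models, they were trained with a specific `[INST]`/`<<SYS>>` prompt template, and raw strings tend to produce poor completions. A minimal sketch of that template follows; the function name and default system prompt are illustrative, not part of either repo:

```python
def build_llama2_chat_prompt(user_message, system_prompt="You are a helpful assistant."):
    """Wrap a single user message in Llama 2's chat template.

    The system prompt goes inside <<SYS>> tags, and the whole turn is
    enclosed in [INST] ... [/INST]. The tokenizer adds the BOS token.
    """
    return (
        f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_llama2_chat_prompt("Hello, world!")
```

Pass `prompt` (instead of the raw string) to the tokenizer in the script above when targeting a chat model; the base models need no template.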
