I understand this repo has been deprecated. I would like to run llama2 in the new location (https://github.com/meta-llama/llama-models) but am unable to get it working. Are there instructions for how to run llama2 in the new location?
Clone the repository:

```shell
git clone https://github.com/meta-llama/llama-models.git
cd llama-models
```

Install dependencies:

```shell
pip install -r requirements.txt
```

Download the model weights (follow the repo's instructions).

Run the model with a script like this (assumes the `transformers` and `torch` packages are installed):
```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Use a Hugging Face repo id (access to Llama 2 weights is gated) or a local path
model_name = "meta-llama/Llama-2-7b-hf"

tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(inputs["input_ids"])
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
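If the weights you downloaded are a chat variant (e.g. `Llama-2-7b-chat`), the model expects Llama 2's `[INST]` prompt format rather than raw text. A minimal helper for building that prompt (an illustrative sketch; the function name is mine, but the delimiter strings follow Meta's published chat format):

```python
# Delimiters the Llama 2 chat models were trained with; plain
# completion models (non-chat) do not need any of this wrapping.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_llama2_prompt(user_msg: str, system_msg: str = "") -> str:
    """Wrap a user message (and optional system message) in Llama 2 chat tags."""
    if system_msg:
        user_msg = f"{B_SYS}{system_msg}{E_SYS}{user_msg}"
    return f"{B_INST} {user_msg} {E_INST}"

prompt = build_llama2_prompt("Hello, world!", system_msg="You are a helpful assistant.")
print(prompt)
```

You would then pass `prompt` to the tokenizer in place of the raw string above.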
Please see the associated issue: meta-llama/llama-models#247