Here is the script I'm using. It's just copy-pasted from the example.
```python
import abliterator

model = "D:\AI\abliterator\model\Qwen2.5-3B-Instruct"
dataset = [abliterator.get_harmful_instructions(), abliterator.get_harmless_instructions()]  # split by harmful/harmless
device = 'cuda'
n_devices = None
cache_fname = 'my_cached_point.pth'
activation_layers = None
chat_template = None
negative_toks = [33260]
positive_toks = [39814]

my_model = abliterator.ModelAbliterator(
    model,
    dataset,
    device='cuda',
    n_devices=None,
    cache_fname=None,
    activation_layers=None,
    chat_template=None,
    positive_toks=positive_toks,
    negative_toks=negative_toks
)
```
But I keep getting this result, which suggests I'm not pointing to the model path correctly.
`HookedTransformer.from_pretrained_no_processing` takes the kwarg `local_files_only=True`, so you could submit a PR to expose it, I suppose. Ref: https://huggingface.co/blog/mlabonne/abliteration
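A minimal sketch of the suggested fix, outside the abliterator wrapper: it assumes `from_pretrained_no_processing` forwards `local_files_only` to the underlying Hugging Face loader (as the comment above states), and uses the reporter's Windows path only as an example. It guards on the directory existing, since a string that isn't a local directory is otherwise treated as a Hub model name.

```python
import os

# The reporter's local model directory (example path); a raw string
# avoids any backslash-escape surprises on Windows.
model_path = r"D:\AI\abliterator\model\Qwen2.5-3B-Instruct"

if os.path.isdir(model_path):
    from transformer_lens import HookedTransformer

    # Assumption: local_files_only=True is forwarded to the Hugging Face
    # loading code, so no Hub lookup or download is attempted.
    model = HookedTransformer.from_pretrained_no_processing(
        model_path,
        local_files_only=True,
    )
else:
    print(f"model directory not found: {model_path}")
```

The same kwarg would then need to be plumbed through `ModelAbliterator` for the copy-pasted script to work unchanged, which is presumably what the proposed PR would do.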