[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs (see the usage sketch after this list).
Must-read Papers on Knowledge Editing for Large Language Models.
EMNLP'23 survey: a curation of awesome papers and resources on refreshing large language models (LLMs) without expensive retraining.
Official code repo for "Editing Implicit Assumptions in Text-to-Image Diffusion Models"
[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers
Code for "Learning to Edit: Aligning LLMs with Knowledge Editing (ACL 2024)"
[ICLR 2024] Unveiling the Pitfalls of Knowledge Editing for Large Language Models
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
[EMNLP 2024 Findings] To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models
OneEdit: A Neural-Symbolic Collaborative Knowledge Editing System.
Code and dataset for the paper: "Can Editing LLMs Inject Harm?"
Official code for the COLING 2024 paper "Robust and Scalable Model Editing for Large Language Models": https://arxiv.org/abs/2403.17431v1
Stable Knowledge Editing in Large Language Models
Official implementation of "GNNs Also Deserve Editing, and They Need It More Than Once" (Zhong & Le et al., ICML 2024).
Can Knowledge Editing Really Correct Hallucinations?
Debiasing Stereotyped Language Models via Model Editing
MLaKE: Multilingual Knowledge Editing Benchmark for Large Language Models
An Automated Framework to Construct Datasets for Assessing Knowledge Editing or Multi-Hop Reasoning Capability of Language Models.
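To make the topic concrete, here is a minimal sketch of applying a single knowledge edit with the framework in the first entry above (EasyEdit, zjunlp/EasyEdit). It is loosely based on the editor interface shown in that repository's README; the BaseEditor/ROMEHyperParams names, the hyperparameter path, and the example fact are assumptions for illustration, so consult the repository for the current API.

```python
# Minimal sketch of one knowledge edit with EasyEdit (zjunlp/EasyEdit).
# Assumes the package is installed and a ROME hyperparameter file for GPT-2 XL
# exists at the path below; names follow the repository's README and may change.
from easyeditor import BaseEditor, ROMEHyperParams

# Load hyperparameters for the ROME editing method (path is an assumption).
hparams = ROMEHyperParams.from_hparams('./hparams/ROME/gpt2-xl')
editor = BaseEditor.from_hparams(hparams)

# Rewrite one fact: the prompt, the answer to overwrite, and the new target.
metrics, edited_model, _ = editor.edit(
    prompts=['Who was the designer of Lamborghini?'],
    ground_truth=['Ferruccio Lamborghini'],
    target_new=['Alfred Lamborghini'],
    subject=['Lamborghini'],
)
print(metrics)  # per-edit scores (e.g., reliability, generalization, locality)
```

The returned `edited_model` carries the updated weights, so the new answer can be checked immediately with a normal generation call, while the metrics dictionary summarizes how well the edit held without disturbing unrelated knowledge.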