Awesome-LLM-Agents: Recent Trends and Advancement in Agentic AI

Awesome-LLM-Agents is a hand-picked and carefully categorised reading list of research papers and open-source projects about LLM agents. I will review each paper and project and (hopefully) consolidate them into a survey paper. The thought process behind this project is documented in this Medium post, which is kept behind a paywall to keep the evil LLM crawlers out. The full category breakdown is below.

LLM Core — Foundation Models

  • Scaling Laws for Neural Language Models (OpenAI, 2020, arXiv)
  • LLaMA: Open and Efficient Foundation Language Models (Meta, Feb 2023, arXiv)
  • The Llama 3 Herd of Models (Meta, July 2024, arXiv)
  • Sparks of Artificial General Intelligence: Early experiments with GPT-4 (Microsoft, Apr 2023, arXiv)
  • Apple Intelligence Foundation Language Models (Apple, Doc)
  • StarCoder (Dec 2023, arXiv)
  • Gemma 2: Improving Open Language Models at a Practical Size (Google DeepMind, Jul 2024, arXiv)
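
To make the scaling-laws entry concrete, here is a minimal sketch of the parameter-count power law it fits. The constants are approximately those reported by Kaplan et al. (2020), and the `scaling_law_loss` helper is only illustrative, not a reproduction of the paper's fits.

```python
# Illustrative form of the parameter-count scaling law from "Scaling Laws for
# Neural Language Models": test loss follows a power law in model size when
# neither data nor compute is the bottleneck. Constants are approximate.
def scaling_law_loss(n_params: float, n_c: float = 8.8e13, alpha_n: float = 0.076) -> float:
    """Predicted loss L(N) = (N_c / N) ** alpha_N."""
    return (n_c / n_params) ** alpha_n

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {scaling_law_loss(n):.2f}")
```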

LLM Core — Prompt Engineering

  • Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Google Brain, 2022, NeurIPS)
  • Tree of Thoughts: Deliberate Problem Solving with Large Language Models (Princeton & DeepMind, 2023, NeurIPS, Benchmark)
  • Self-Consistency Improves Chain of Thought Reasoning in Language Models (Google Brain, 2023, ICLR)
  • ReAct: Synergizing Reasoning and Acting in Language Models (Princeton & Google Brain, Mar 2023, ICLR)
  • Reflexion: Language agents with verbal reinforcement learning (Northeastern, MIT & Princeton, 2023, NeurIPS)
  • ART: Automatic multi-step reasoning and tool-use for large language models (UW, UCI, Microsoft, Allen AI & Meta, 2023, arXiv)
  • Guiding Large Language Models via Directional Stimulus Prompting (UCSB & Microsoft, 2023, NeurIPS)
  • Active Prompting with Chain-of-Thought for Large Language Models (HKUST etc., Jul 2024, arXiv)
  • Step-Back Prompting Enables Reasoning Via Abstraction in Large Language Models (DeepMind, Mar 2024, arXiv)
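
As a rough illustration of how several of these prompting strategies are wired up in practice, below is a minimal sketch of a ReAct-style reason/act loop. The prompt template and the `call_llm` / `run_tool` helpers are hypothetical placeholders, not any paper's exact setup.

```python
# Minimal sketch of a ReAct-style reason/act loop (see the ReAct and Reflexion
# entries above). `call_llm` and `run_tool` are hypothetical placeholders for
# a real model API and tool implementations.

REACT_PROMPT = """Answer the question by interleaving Thought, Action and
Observation steps. Finish with "Final Answer: <answer>".

Question: {question}
{scratchpad}"""

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return one Thought/Action step."""
    raise NotImplementedError

def run_tool(action: str) -> str:
    """Placeholder: parse `action` (e.g. 'Search[query]') and return an observation."""
    raise NotImplementedError

def react(question: str, max_steps: int = 5) -> str:
    scratchpad = ""
    for _ in range(max_steps):
        step = call_llm(REACT_PROMPT.format(question=question, scratchpad=scratchpad))
        scratchpad += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # Otherwise the model proposed an Action; execute it and append the result.
        scratchpad += "Observation: " + run_tool(step) + "\n"
    return scratchpad  # fall back to the raw trace if no final answer was produced
```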

LLM Core — Retrieval-Augmented Generation

  • Retrieval Augmented Generation or Long-Context LLMs? A Comprehensive Study and Hybrid Approach (DeepMind, Jul 2024, arXiv)
  • Retrieval-Augmented Generation for Large Language Models: A Survey (Tongji & Fudan, Mar 2024, arXiv)
  • Improving Retrieval Augmented Language Model with Self-Reasoning (Baidu, Jul 2024, arXiv)
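
The pattern these papers study reduces to retrieve-then-generate. A minimal sketch, assuming a hypothetical `embed` function, `generate` function, and in-memory corpus in place of a real embedding model, LLM, and vector store:

```python
# Minimal retrieve-then-generate sketch of the RAG pattern surveyed above.
from typing import Callable
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rag_answer(
    question: str,
    corpus: list[str],
    embed: Callable[[str], np.ndarray],   # text -> embedding vector
    generate: Callable[[str], str],       # prompt -> LLM completion
    k: int = 3,
) -> str:
    # 1. Retrieve the k passages most similar to the question.
    q_vec = embed(question)
    ranked = sorted(corpus, key=lambda doc: cosine(embed(doc), q_vec), reverse=True)
    context = "\n\n".join(ranked[:k])
    # 2. Generate an answer grounded in the retrieved context.
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```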

LLM Core — Finetuning / PEFT

  • LoRA: Low-Rank Adaptation of Large Language Models (Microsoft & CMU, Oct 2021, arXiv)
  • QLoRA: Efficient Finetuning of Quantized LLMs (UW, 2023, NeurIPS)
  • A Survey on LoRA of Large Language Models (ZJU, Jul 2024, arXiv)
  • Distilling System 2 into System 1 (Meta, Jul 2024, arXiv)
  • Mixture of LoRA Experts (Microsoft & Tsinghua, 2024, ICLR)
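
For orientation, the core idea behind the LoRA line of work above can be sketched in a few lines of PyTorch: freeze the pretrained weight and train only a low-rank update. The rank and scaling values below are illustrative defaults, not tuned settings.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update (W + B A)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                      # freeze the pretrained weights
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the low-rank correction; only lora_a / lora_b receive gradients.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")              # only the low-rank factors train
```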

LLM Core — Alignment and Safety

  • Rule Based Rewards for Language Model Safety (OpenAI, Jul 2024, Preprint)
  • A Comprehensive Survey of LLM Alignment Techniques: RLHF, RLAIF, PPO, DPO and More (Salesforce, Jul 2024, arXiv)
  • Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study (Tsinghua, Apr 2024, arXiv)
  • PERL: Parameter Efficient Reinforcement Learning from Human Feedback (Google, Mar 2024, arXiv)
  • RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback (Google, Dec 2023, arXiv)
  • Training language models to follow instructions with human feedback (OpenAI, Mar 2022, arXiv)
  • Constitutional AI: Harmlessness from AI Feedback (Anthropic, Dec 2022, arXiv) ⭐
  • Self-Instruct: Aligning Language Models with Self-Generated Instructions (Allen AI, May 2023, ACL) ⭐
  • Direct Preference Optimization: Your Language Model Is Secretly a Reward Model (Stanford, 2023, NeurIPS) ⭐
  • ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback (Tsinghua, UIUC, Tencent, RUC etc., 2024, ICML) ⭐
  • Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2 (Allen AI & UW, Nov 2023, arXiv)
  • SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF (NVIDIA, Oct 2023, arXiv)
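
To ground the DPO entry above, here is a minimal sketch of its loss in PyTorch, assuming you already have summed per-response log-probabilities from the policy and from a frozen reference model; `beta` is the usual DPO temperature.

```python
# Minimal sketch of the DPO loss from "Direct Preference Optimization": push
# the policy to prefer the chosen response over the rejected one, relative to
# a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi(y_w | x) under the policy
    policy_rejected_logps: torch.Tensor,  # log pi(y_l | x) under the policy
    ref_chosen_logps: torch.Tensor,       # same quantities under the frozen reference
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,
) -> torch.Tensor:
    pi_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    # -log sigmoid(beta * (policy log-ratio - reference log-ratio)), averaged over the batch
    return -F.logsigmoid(beta * (pi_logratio - ref_logratio)).mean()
```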

LLM Core — Datasets, Benchmarks, Metrics

  • GAIA: A Benchmark for General AI Assistants (Meta, Nov 2023, ICLR)
  • Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators (Stanford, Apr 2024, arXiv)
  • Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena (UCB, UCSD, CMU & Stanford, Dec 2023, NeurIPS)
  • FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets (KAIST, Apr 2024, ICLR)
  • Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference (UCB, Stanford & UCSD, Mar 2024, arXiv)
  • Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF (UCB, 2023, HF)
  • LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset (UCB, UCSD, CMU & Stanford, Mar 2024, ICLR)
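
Several of these benchmarks (MT-Bench, Chatbot Arena, AlpacaEval) rest on pairwise LLM-as-a-judge comparisons. A minimal sketch of that setup, with an illustrative judge prompt and a hypothetical `call_llm` helper rather than the benchmarks' exact prompts:

```python
# Sketch of a pairwise LLM-as-a-judge comparison in the style of the
# MT-Bench / Chatbot Arena papers above.
JUDGE_PROMPT = """You are an impartial judge. Compare the two assistant
answers to the user question and reply with exactly "A", "B", or "tie".

[Question]
{question}

[Answer A]
{answer_a}

[Answer B]
{answer_b}
"""

def call_llm(prompt: str) -> str:
    """Placeholder for a strong judge model."""
    raise NotImplementedError

def judge(question: str, answer_a: str, answer_b: str) -> str:
    verdict = call_llm(JUDGE_PROMPT.format(
        question=question, answer_a=answer_a, answer_b=answer_b)).strip()
    return verdict if verdict in {"A", "B", "tie"} else "tie"  # default to tie on parse failure
```

In practice these benchmarks also swap the answer order and average the verdicts to reduce position bias.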

Agent Core — Planning / Reasoning

  • Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents (PKU, 2023, NeurIPS)
  • Large Language Models as Commonsense Knowledge for Large-Scale Task Planning (NUS, 2023, NeurIPS)

Agent Core — Memory

  • A Survey on the Memory Mechanism of Large Language Model based Agents (RUC & Huawei, Apr 2024, arXiv)

Agent Core — Tools

  • Offline Training of Language Model Agents with Functions as Learnable Weights (PSU, UW, USC & Microsoft, 2024, ICML)
  • Tool Learning with Foundation Models (Tsinghua, UIUC, CMU, etc., 2023, arXiv)
  • Toolformer: Language Models Can Teach Themselves to Use Tools (Meta, 2023, NeurIPS)
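
In the spirit of Toolformer and the function-learning papers above, tool use usually comes down to describing tools to the model and dispatching its structured calls. A minimal sketch with a single illustrative `get_date` tool; the JSON call format is an assumption, not any specific API's schema:

```python
# Minimal sketch of tool calling: tools are described to the model as schemas,
# and the model's structured call is dispatched to a Python function.
import json
from datetime import date

TOOLS = {
    "get_date": {
        "description": "Return today's date in ISO format.",
        "parameters": {},                       # no arguments
        "fn": lambda: date.today().isoformat(),
    },
}

def dispatch(tool_call: str) -> str:
    """Execute a model-produced call like '{"name": "get_date", "arguments": {}}'."""
    call = json.loads(tool_call)
    tool = TOOLS[call["name"]]
    return str(tool["fn"](**call.get("arguments", {})))

# Usage: pass each tool's name/description/parameters to the model, let it emit
# a JSON call, then feed dispatch(...)'s result back into the conversation.
print(dispatch('{"name": "get_date", "arguments": {}}'))
```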

Agentic Workflow — Paradigms

  • Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View (ZJU & DeepMind, Oct 2023, arXiv)
  • Rethinking the Bounds of LLM Reasoning: Are Multi-Agent Discussions the Key? (ZJU, HKUST & UIUC, May 2024, arXiv)
  • 360°REA: Towards A Reusable Experience Accumulation with 360° Assessment for Multi-Agent System (Apr 2024, arXiv)
  • CAMEL: Communicative Agents for “Mind” Exploration of Large Language Model Society (KAUST, 2023, NeurIPS)
  • A Survey on Large Language Model based Autonomous Agents (2023, arXiv)
  • Mixture-of-Agents Enhances Large Language Model Capabilities (Together AI, Jun 2024, arXiv)
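
As a concrete example of one paradigm listed above, the Mixture-of-Agents pattern can be sketched as a propose-then-aggregate pipeline; `call_llm(model, prompt)` is a hypothetical model API and the aggregation prompt is illustrative.

```python
# Minimal sketch of the Mixture-of-Agents pattern: several "proposer" models
# answer independently, then an "aggregator" model synthesises their drafts.
def call_llm(model: str, prompt: str) -> str:
    """Placeholder: query `model` with `prompt` and return its completion."""
    raise NotImplementedError

def mixture_of_agents(question: str, proposers: list[str], aggregator: str) -> str:
    drafts = [call_llm(m, question) for m in proposers]          # layer 1: independent answers
    numbered = "\n\n".join(f"[{i + 1}] {d}" for i, d in enumerate(drafts))
    aggregate_prompt = (
        "Synthesise a single, improved answer to the question below, "
        "using the candidate answers as references.\n\n"
        f"Question: {question}\n\nCandidate answers:\n{numbered}"
    )
    return call_llm(aggregator, aggregate_prompt)                # layer 2: aggregation
```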

Agentic Applications — Simulation

  • Generative Agents: Interactive Simulacra of Human Behavior (Stanford/Google, Apr 2023, arXiv, Demo)
  • Deciphering Digital Detectives: Understanding LLM Behaviors and Capabilities in Multi-Agent Mystery Games (UMontreal, Dec 2023, arXiv)
  • VillagerAgent: A Graph-Based Multi-Agent Framework for Coordinating Complex Task Dependencies in Minecraft (ZJU, Jun 2024, arXiv)

Agentic Applications — Finance

Multi-agent frameworks

TODO:

  • Agentic Workflow — Human-Agent Interactions
  • Agentic Applications — Dev Tools
  • Agentic Applications — Content Creation (AIGC)
  • Agentic Applications — Social Network
  • Agentic Applications — Education
  • Production Operations — LLMOps
  • Production Operations — AI Cloud
  • Production Operations — Monitoring

Citation

Please cite this repository if you refer to its content in your work.

@misc{awesome-llm-agents-jl,
  author = {Junhua Liu},
  title = {Awesome-LLM-Agents: recent trends and advancement in Agentic AI},
  year = {2024},
  month = {August},
  publisher = {Medium},
  journal = {AI Advances},
  doi = {10.5281/zenodo.14021180},
  url = {https://medium.com/junhua/awesome-llm-agents-recent-trends-and-advancement-in-agentic-ai-90bac6249060},
}
