aaronlim0919/mem0 (forked from mem0ai/mem0)

Mem0: The Memory Layer for Personalized AI

Mem0 provides an intelligent, adaptive memory layer for Large Language Models (LLMs), enhancing personalized AI experiences by retaining and utilizing contextual information across diverse applications. This enhanced memory capability is crucial for applications ranging from customer support and healthcare diagnostics to autonomous systems and personalized content recommendations, allowing AI to remember user preferences, adapt to individual needs, and continuously improve over time.

Note

The Mem0 repository now also includes the Embedchain project. We continue to maintain and support Embedchain ❤️. You can find the Embedchain codebase in the embedchain directory.

🚀 Quickstart

Installation

The Mem0 package can be installed directly with pip:

pip install mem0ai

Basic Usage (Open Source)

Mem0 supports various LLMs; see Supported LLMs in our docs for details. By default, Mem0 uses gpt-4o, which requires an OpenAI API key set as an environment variable:

import os
os.environ["OPENAI_API_KEY"] = "sk-xxx"

Now, you can simply initialize the memory.

from mem0 import Memory

m = Memory()

You can perform the following operations on the memory:

  1. Add: store a new memory
  2. Update: update the memory with a given memory_id
  3. Search: fetch memories relevant to a query
  4. Get: return memories for a given user/agent/session
  5. History: show how a memory has changed over time for a specific memory_id
# 1. Add: Store a memory from any unstructured text
result = m.add("I am working on improving my tennis skills. Suggest some online courses.", user_id="alice", metadata={"category": "hobbies"})

# Created memory --> 'Improving her tennis skills.' and 'Looking for online suggestions.'
# 2. Update: update the memory
result = m.update(memory_id=<memory_id_1>, data="Likes to play tennis on weekends")

# Updated memory --> 'Likes to play tennis on weekends.' and 'Looking for online suggestions.'
# 3. Search: search related memories
related_memories = m.search(query="What are Alice's hobbies?", user_id="alice")

# Retrieved memory --> 'Likes to play tennis on weekends'
# 4. Get all memories
all_memories = m.get_all()
memory_id = all_memories[0]["id"] # get a memory_id

# All memory items --> 'Likes to play tennis on weekends.' and 'Looking for online suggestions.'
# 5. Get memory history for a particular memory_id
history = m.history(memory_id=<memory_id_1>)

# Logs corresponding to memory_id_1 --> {'prev_value': 'Working on improving tennis skills and interested in online courses for tennis.', 'new_value': 'Likes to play tennis on weekends' }
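To see how the five operations fit together without needing an API key, here is a minimal in-memory stand-in that mimics the add/update/search/get_all/history flow above. This is purely illustrative: `ToyMemory` is a hypothetical toy class, not the mem0 API, and it does no LLM-based extraction or embedding search — it just stores raw strings per user.

```python
# Toy stand-in for the memory lifecycle shown above (NOT the mem0 API).
import uuid


class ToyMemory:
    def __init__(self):
        self._items = {}    # memory_id -> {"id", "user_id", "text"}
        self._history = {}  # memory_id -> list of {"prev_value", "new_value"}

    def add(self, text, user_id):
        # Store a new memory and start its history log.
        memory_id = str(uuid.uuid4())
        self._items[memory_id] = {"id": memory_id, "user_id": user_id, "text": text}
        self._history[memory_id] = [{"prev_value": None, "new_value": text}]
        return memory_id

    def update(self, memory_id, data):
        # Overwrite an existing memory and record the change.
        prev = self._items[memory_id]["text"]
        self._items[memory_id]["text"] = data
        self._history[memory_id].append({"prev_value": prev, "new_value": data})

    def search(self, query, user_id):
        # Naive keyword overlap stands in for embedding similarity.
        words = set(query.lower().split())
        return [m for m in self._items.values()
                if m["user_id"] == user_id and words & set(m["text"].lower().split())]

    def get_all(self, user_id):
        return [m for m in self._items.values() if m["user_id"] == user_id]

    def history(self, memory_id):
        return self._history[memory_id]


m = ToyMemory()
mid = m.add("Likes to play tennis on weekends", user_id="alice")
m.update(memory_id=mid, data="Plays tennis every Saturday")
print(m.search("tennis", user_id="alice")[0]["text"])  # Plays tennis every Saturday
print(len(m.history(mid)))                             # 2
```

The real `Memory` class replaces the keyword match with vector similarity and uses an LLM to extract and reconcile facts before storing them, but the shape of the calls is the same.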

Tip

If you are looking for a hosted version and don't want to set up the infrastructure yourself, check out the Mem0 Platform Docs to get started in minutes.

🔑 Core Features

  • Multi-Level Memory: User, Session, and AI Agent memory retention
  • Adaptive Personalization: Continuous improvement based on interactions
  • Developer-Friendly API: Simple integration into various applications
  • Cross-Platform Consistency: Uniform behavior across devices
  • Managed Service: Hassle-free hosted solution

📖 Documentation

For detailed usage instructions and API reference, visit our documentation at docs.mem0.ai.

🔧 Advanced Usage

For production environments, you can use Qdrant as a vector store:

from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
        }
    },
}

m = Memory.from_config(config)
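If a Qdrant instance isn't already running on `localhost:6333`, one can be started locally with Docker (this assumes Docker is installed; 6333 is Qdrant's default HTTP port):

```shell
# Start a local Qdrant server in the background on the default port.
docker run -d -p 6333:6333 qdrant/qdrant
```

Once the container is up, the `Memory.from_config(config)` call above will connect to it.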

🗺️ Roadmap

  • Integration with various LLM providers
  • Support for LLM frameworks
  • Integration with AI Agents frameworks
  • Customizable memory creation/update rules
  • Hosted platform support

🙋‍♂️ Support

Join our Slack or Discord community for support and discussions. If you have any questions, feel free to reach out to us there.
