Mem-LLM is a privacy-first Python framework for building memory-enabled AI assistants that run locally.
- Fixed critical memory, tool parsing, and backend compatibility issues.
- Improved SQL ordering and thread-safety behavior.
- Added missing runtime dependencies (`psutil`, `networkx`).
- Updated backend defaults:
  - Ollama: `granite4:3b`
  - LM Studio: `google/gemma-3-12b`
- Ollama:

```bash
pip install mem-llm
```

```python
from mem_llm import MemAgent

agent = MemAgent(backend="ollama", model="granite4:3b")
agent.set_user("alice")
print(agent.chat("My name is Alice."))
print(agent.chat("What is my name?"))
```

- LM Studio:

```python
from mem_llm import MemAgent

agent = MemAgent(backend="lmstudio", model="google/gemma-3-12b")
agent.set_user("alice")
print(agent.chat("Summarize Python in one sentence."))
```

- Persistent memory per user (JSON or SQLite)
- Multi-backend support (Ollama, LM Studio)
- Tool calling system (`@tool` decorator, built-in tools, validation)
- Streaming responses
- Knowledge base integration
- Conversation analytics
- REST API and Web UI
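To make the tool-calling feature concrete: the `@tool` decorator named above typically works as a registration hook that records a function in a registry the agent can dispatch into by name. The sketch below is a generic illustration of that pattern, not Mem-LLM's actual implementation; `TOOL_REGISTRY`, `dispatch`, and the `add` tool are all hypothetical names.

```python
# Hypothetical sketch of a decorator-based tool registry.
# None of these names come from Mem-LLM's real API.
from typing import Any, Callable, Dict

TOOL_REGISTRY: Dict[str, Callable] = {}

def tool(func: Callable) -> Callable:
    """Register a function so an agent can call it by name."""
    TOOL_REGISTRY[func.__name__] = func
    return func

@tool
def add(a: int, b: int) -> int:
    """Add two integers (example tool)."""
    return a + b

def dispatch(name: str, **kwargs: Any) -> Any:
    """Look up a registered tool and invoke it with keyword arguments."""
    if name not in TOOL_REGISTRY:
        raise KeyError(f"Unknown tool: {name}")
    return TOOL_REGISTRY[name](**kwargs)

print(dispatch("add", a=2, b=3))  # 5
```

In frameworks built on this pattern, the model's parsed tool call (a tool name plus arguments) is routed through something like `dispatch`, with argument validation happening before the call.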
- `Memory LLM/` - main package source and release files
- `quickstart/` - step-by-step usage examples & tutorials
- PyPI: https://pypi.org/project/mem-llm/
- Documentation: `Memory LLM/README.md`
- Changelog: `Memory LLM/CHANGELOG.md`
- Issues: https://github.com/emredeveloper/Mem-LLM/issues
Mem-LLM is released under the MIT License.