Long-term memory engine for AI agents.
Give your coding assistant a brain that persists across sessions.
Quick Start • Auto Install • Architecture • MCP Tools • Configuration • Contributing
Memorable is an MCP server that gives AI coding agents—Cursor, Claude Code, GitHub Copilot, Windsurf—persistent long-term memory backed by semantic vector search.
Without Memorable, every conversation starts from zero. With it, your agent recalls past decisions, learned patterns, corrections, and project context across sessions.
| Feature | Memorable | Plain file notes | Other memory servers |
|---|---|---|---|
| Semantic search | Yes (pgvector cosine similarity) | No (keyword grep) | Varies |
| Dedup | SHA-256 content hashing | Manual | Rare |
| Multi-dimensional scoping | user × agent × app × run | None | Usually single-tenant |
| Five memory types | fact, conversation, decision, code_pattern, correction | Unstructured | Usually one |
| MCP-native | 12 typed tools, auto-schema | N/A | Some |
| Pluggable embeddings | OpenAI, Gemini, Ollama, OpenRouter, custom | N/A | Rarely |
| Knowledge graph | Entity-relation extraction + traversal | N/A | No |
| Heartbeat / self-reflection | Periodic consolidation + contradiction detection | N/A | No |
| Hybrid retrieval | Vector + recency + frequency scoring | N/A | Vector only |
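
The deduplication row deserves a note: hashing the content plus its scope gives a stable key, so storing the same memory twice in the same scope becomes a no-op. A minimal Go sketch of the idea; folding the scope fields into the hash input is an illustrative assumption, not Memorable's actual schema:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// dedupKey derives a stable key from a memory's content and scope.
// Concatenating scope fields into the hash input is an assumption
// made for illustration; the real schema may differ.
func dedupKey(content, userID, agentID, appID, runID string) string {
	h := sha256.New()
	h.Write([]byte(userID + "\x00" + agentID + "\x00" + appID + "\x00" + runID + "\x00" + content))
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	seen := map[string]bool{}
	for _, c := range []string{"uses pgvector", "uses pgvector", "uses Go 1.21"} {
		k := dedupKey(c, "alice", "cursor", "my-project", "")
		if seen[k] {
			fmt.Println("duplicate, skipping:", c)
			continue
		}
		seen[k] = true
		fmt.Println("stored:", c)
	}
}
```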
- Go 1.21+
- PostgreSQL 15+ with the `pgvector` extension
- Embedding API key — one of: OpenAI, Google Gemini, OpenRouter, or Ollama (local, no key needed). Any OpenAI-compatible API also works via the `custom` provider.
From source:
```bash
git clone https://github.com/two-tech-dev/memorable.git
cd memorable
make build
```

The binary is written to `bin/memorable`.
With Go:
```bash
go install github.com/two-tech-dev/memorable/cmd/memorable@latest
```

Using Docker:

```bash
docker run -d --name memorable-pg \
  -e POSTGRES_DB=memorable \
  -e POSTGRES_HOST_AUTH_METHOD=trust \
  -p 5432:5432 \
  pgvector/pgvector:pg16
```

Or install pgvector on an existing PostgreSQL instance.
Copy the example config and set your API key:
```bash
cp config.example.yaml memorable.yaml
```

Edit `memorable.yaml` or use environment variables:

```bash
# Pick your embedding provider:
export OPENAI_API_KEY=sk-...   # OpenAI
export GEMINI_API_KEY=AI...    # Google Gemini
# Or use Ollama locally (no key needed)

export MEMORABLE_DSN=postgres://localhost:5432/memorable?sslmode=disable
```

Run the server:

```bash
bin/memorable
# or
make run
```

The server starts on stdio and auto-migrates the database schema on first connect.
Memorable includes install scripts that automatically configure MCP for your AI agents.
macOS / Linux:
```bash
./scripts/install.sh
```

Windows (PowerShell):

```powershell
.\scripts\install.ps1
```

This detects and configures Cursor, Claude Desktop / Claude Code, VS Code (GitHub Copilot), and Windsurf in one command.

To configure a single agent:

```bash
# Linux/macOS
./scripts/install.sh --agent cursor
./scripts/install.sh --agent claude --config ~/memorable.yaml
```

```powershell
# Windows
.\scripts\install.ps1 -Agent cursor
.\scripts\install.ps1 -Agent claude -Config "C:\Users\me\memorable.yaml"
```

Supported agents: `cursor`, `claude`, `copilot`, `windsurf`, `all`.
| Agent | Config file | Servers key |
|---|---|---|
| Cursor | `~/.cursor/mcp.json` | `mcpServers` |
| Claude | `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) / `%APPDATA%\Claude\claude_desktop_config.json` (Windows) | `mcpServers` |
| VS Code (Copilot) | `.vscode/mcp.json` (workspace) | `servers` |
| Windsurf | `~/.codeium/windsurf/mcp_config.json` | `mcpServers` |
The scripts safely merge into existing config files — your other MCP servers are preserved.
Click to expand manual agent configuration
Add to `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "memorable": {
      "command": "memorable",
      "args": ["-config", "/path/to/memorable.yaml"]
    }
  }
}
```

Add to your MCP config:

```json
{
  "mcpServers": {
    "memorable": {
      "command": "memorable",
      "args": []
    }
  }
}
```

Add to `.vscode/mcp.json`:

```json
{
  "servers": {
    "memorable": {
      "type": "stdio",
      "command": "memorable",
      "args": ["-config", "memorable.yaml"]
    }
  }
}
```

Add to `~/.codeium/windsurf/mcp_config.json`:

```json
{
  "mcpServers": {
    "memorable": {
      "command": "memorable"
    }
  }
}
```

```
┌───────────────────────────────────────────────────────────────┐
│         AI Agent (Cursor, Claude, Copilot, Windsurf)          │
│                         ↕ MCP (stdio)                         │
├───────────────────────────────────────────────────────────────┤
│                     MCP Server (12 tools)                     │
│  ┌──────┐ ┌────────┐ ┌──────────┐ ┌───────┐ ┌────────────┐    │
│  │ CRUD │ │ search │ │heartbeat │ │ graph │ │   stats    │    │
│  └──┬───┘ └───┬────┘ └────┬─────┘ └───┬───┘ └─────┬──────┘    │
│     └─────────┴───────────┴───────────┴───────────┘           │
│                      Memory Manager                           │
│                (dedup · embed · CRUD · scope)                 │
├────────┬──────────────────┬──────────────────┬────────────────┤
│  L1    │  L2 Vector DB    │  Knowledge       │  L3 Soul/      │
│  Cache │  (pgvector)      │  Graph           │  Profile       │
│  (LRU) │                  │  (entity-rel)    │  (traits)      │
├────────┴──────────────────┴──────────────────┴────────────────┤
│  Embedding Provider                    Hybrid Retrieval       │
│  (OpenAI / Gemini / Ollama / Custom)   (vector+recency+freq)  │
└───────────────────────────────────────────────────────────────┘
                                ↕
                        ┌──────────────┐
                        │  PostgreSQL  │
                        │  + pgvector  │
                        └──────────────┘
```
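
The Hybrid Retrieval stage blends three signals: vector similarity, recency, and access frequency. A rough sketch of how such a score could be combined (the weights and decay constant here are illustrative assumptions, not the actual tuning):

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// hybridScore blends semantic similarity with recency and access
// frequency. The weights and 30-day decay constant are illustrative
// assumptions, not Memorable's actual tuning.
func hybridScore(cosineSim float64, lastAccess time.Time, accessCount int) float64 {
	ageDays := time.Since(lastAccess).Hours() / 24
	recency := math.Exp(-ageDays / 30)                 // exponential decay with age
	frequency := math.Log1p(float64(accessCount)) / 10 // damped frequency boost
	return 0.7*cosineSim + 0.2*recency + 0.1*frequency
}

func main() {
	fresh := hybridScore(0.80, time.Now().Add(-24*time.Hour), 5)
	stale := hybridScore(0.85, time.Now().Add(-90*24*time.Hour), 1)
	fmt.Printf("fresh: %.3f, stale: %.3f\n", fresh, stale)
}
```

With weights like these, a slightly less similar but recently used memory can outrank a marginally more similar one that has gone untouched for months.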
```
memorable/
├── cmd/memorable/          # Application entry point
│   └── main.go
├── internal/
│   ├── cache/              # L1 LRU cache (generic, thread-safe)
│   ├── config/             # YAML config loader + env overrides
│   ├── embedding/          # Embedding providers (OpenAI, Gemini, Ollama, Custom)
│   ├── graph/              # Knowledge graph (entities, relations, triple extraction)
│   ├── heartbeat/          # Self-reflection & memory consolidation
│   ├── mcp/                # MCP server, tool registration, typed handlers
│   ├── memory/             # Core types, Manager (CRUD + dedup), VectorStore interface
│   ├── profile/            # L3 Soul/Profile (user trait accumulation)
│   ├── retrieval/          # Hybrid scoring (vector + recency + frequency)
│   └── store/              # Storage implementations (pgvector)
├── scripts/                # Auto-install scripts (bash + PowerShell)
├── docs/                   # Documentation assets
├── config.example.yaml     # Example configuration
├── Makefile                # Build, test, lint targets
└── .github/workflows/      # CI pipeline
```
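
For orientation on `internal/cache`: a generic, thread-safe LRU in Go is typically a mutex around a `container/list` plus a map, roughly as below. This is a minimal illustrative sketch, not the package's actual API:

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// LRU is a minimal thread-safe least-recently-used cache.
type LRU[K comparable, V any] struct {
	mu    sync.Mutex
	cap   int
	order *list.List          // front = most recently used
	items map[K]*list.Element // key -> list element holding entry
}

type entry[K comparable, V any] struct {
	key K
	val V
}

func NewLRU[K comparable, V any](capacity int) *LRU[K, V] {
	return &LRU[K, V]{cap: capacity, order: list.New(), items: map[K]*list.Element{}}
}

func (c *LRU[K, V]) Get(k K) (V, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[k]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*entry[K, V]).val, true
	}
	var zero V
	return zero, false
}

func (c *LRU[K, V]) Put(k K, v V) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[k]; ok {
		el.Value.(*entry[K, V]).val = v
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.cap { // evict least recently used
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry[K, V]).key)
	}
	c.items[k] = c.order.PushFront(&entry[K, V]{key: k, val: v})
}

func main() {
	c := NewLRU[string, int](2)
	c.Put("a", 1)
	c.Put("b", 2)
	c.Put("c", 3) // evicts "a"
	_, ok := c.Get("a")
	fmt.Println(ok) // false
}
```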
| Type | Description | Example |
|---|---|---|
| `fact` | Factual knowledge the agent should remember | "This project uses Go 1.21 with modules." |
| `conversation` | Key points from conversations | "User prefers tabs over spaces." |
| `decision` | Architectural or design decisions | "We chose pgvector over Pinecone for self-hosting." |
| `code_pattern` | Reusable patterns and idioms | "Error wrapping: always use `fmt.Errorf` with `%w`." |
| `correction` | Mistakes and their fixes | "Don't import from `internal/store` directly; use `memory.VectorStore`." |
Every memory is scoped along four dimensions, all optional:
| Dimension | Purpose | Example |
|---|---|---|
| `user_id` | Per-user isolation | `"alice"` |
| `agent_id` | Per-agent context | `"cursor"`, `"claude"` |
| `app_id` | Per-project context | `"memorable"`, `"my-saas"` |
| `run_id` | Per-session context | `"session-2024-01-15"` |
Memories with no scope are global. Scopes are combined as filters during search and retrieval.
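
Because every dimension is optional and they combine as AND filters, a search with an empty scope matches everything, while a fully specified scope narrows to one session. A sketch of how a scope could translate into SQL (column names here are assumptions for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// Scope holds the four optional memory dimensions.
type Scope struct {
	UserID, AgentID, AppID, RunID string
}

// whereClause turns non-empty dimensions into SQL filters. Empty
// fields are skipped, so an empty scope matches all memories.
// Column names are illustrative assumptions.
func whereClause(s Scope) (string, []any) {
	var conds []string
	var args []any
	add := func(col, val string) {
		if val != "" {
			args = append(args, val)
			conds = append(conds, fmt.Sprintf("%s = $%d", col, len(args)))
		}
	}
	add("user_id", s.UserID)
	add("agent_id", s.AgentID)
	add("app_id", s.AppID)
	add("run_id", s.RunID)
	if len(conds) == 0 {
		return "", nil
	}
	return "WHERE " + strings.Join(conds, " AND "), args
}

func main() {
	w, args := whereClause(Scope{UserID: "alice", AppID: "my-project"})
	fmt.Println(w, args) // WHERE user_id = $1 AND app_id = $2 [alice my-project]
}
```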
Memorable exposes 12 tools through the MCP protocol:
Store a new memory. Automatically deduplicates by content hash within the same scope.
```json
{
  "content": "The project uses pgvector for vector similarity search",
  "type": "fact",
  "app_id": "my-project"
}
```

Search memories by semantic similarity. The query is embedded and compared against stored vectors using cosine distance.
```json
{
  "query": "which database do we use?",
  "limit": 5,
  "app_id": "my-project"
}
```

Retrieve a specific memory by UUID.

```json
{ "id": "550e8400-e29b-41d4-a716-446655440000" }
```

Update content (triggers re-embedding) and/or merge metadata.
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "content": "Updated: we migrated from pgvector to Qdrant",
  "metadata": { "reviewed": true }
}
```

Delete a memory by UUID.

```json
{ "id": "550e8400-e29b-41d4-a716-446655440000" }
```

List memories with filters and pagination.
```json
{
  "type": "decision",
  "app_id": "my-project",
  "limit": 20,
  "offset": 0
}
```

Get aggregate statistics: total count, breakdown by type, time range.

```json
{ "user_id": "alice" }
```

Run a consolidation cycle. Analyzes stored memories, finds clusters of similar content, and generates insights (summaries, contradictions, patterns); a sketch of the contradiction check follows the list below.

```json
{ "user_id": "alice", "app_id": "my-project" }
```

Returns:
- Summaries — consolidated clusters of related memories
- Contradictions — detects when corrections supersede older facts
- Patterns — recurring themes across memories
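
One plausible shape for that contradiction check, purely as illustration (the 0.85 similarity threshold and the fact/correction pairing rule are assumptions): a `correction` that is semantically close to an older `fact` is flagged as superseding it.

```go
package main

import (
	"fmt"
	"math"
)

// Mem is a pared-down memory for illustration; the real type lives
// in internal/memory.
type Mem struct {
	Type    string    // fact | conversation | decision | code_pattern | correction
	Content string
	Vec     []float64 // embedding vector
}

func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// contradictions pairs each correction with the facts it sits close
// to in embedding space. The 0.85 threshold is an illustrative guess.
func contradictions(mems []Mem) [][2]Mem {
	var out [][2]Mem
	for _, c := range mems {
		if c.Type != "correction" {
			continue
		}
		for _, f := range mems {
			if f.Type == "fact" && cosine(c.Vec, f.Vec) > 0.85 {
				out = append(out, [2]Mem{f, c})
			}
		}
	}
	return out
}

func main() {
	mems := []Mem{
		{Type: "fact", Content: "we use Pinecone", Vec: []float64{0.9, 0.1}},
		{Type: "correction", Content: "we moved to pgvector", Vec: []float64{0.88, 0.15}},
	}
	for _, p := range contradictions(mems) {
		fmt.Printf("%q superseded by %q\n", p[0].Content, p[1].Content)
	}
}
```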
Extract entities and relations from text and add them to the knowledge graph.
```json
{
  "content": "The project uses PostgreSQL. React depends on Node.",
  "memory_id": "550e8400-..."
}
```

Search for entities in the knowledge graph by name.

```json
{ "query": "postgres", "limit": 5 }
```

Get entities and relations connected to a given entity, with configurable traversal depth.

```json
{ "entity_id": "ent_abc123", "depth": 2 }
```

Get knowledge graph statistics: entity and relation counts.

```json
{}
```

Memorable looks for configuration in this order:
1. `--config` flag (explicit path)
2. `./memorable.yaml` (current directory)
3. `~/.memorable/config.yaml` (home directory)
4. Built-in defaults
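
In code, that precedence is a first-match-wins walk over candidate paths, roughly like this (an illustrative sketch, not the loader's actual implementation):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolveConfig returns the first config path that exists, mirroring
// the precedence list above. flagPath is the value of --config, empty
// if the flag was not given. Illustrative sketch only.
func resolveConfig(flagPath string) (string, bool) {
	home, _ := os.UserHomeDir()
	candidates := []string{
		flagPath, // 1. --config flag
		"memorable.yaml", // 2. current directory
		filepath.Join(home, ".memorable", "config.yaml"), // 3. home directory
	}
	for _, p := range candidates {
		if p == "" {
			continue
		}
		if _, err := os.Stat(p); err == nil {
			return p, true
		}
	}
	return "", false // 4. caller falls back to built-in defaults
}

func main() {
	if p, ok := resolveConfig(""); ok {
		fmt.Println("using config:", p)
	} else {
		fmt.Println("using built-in defaults")
	}
}
```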
```yaml
# Server transport
server:
  transport: stdio  # stdio | http (future)

# Storage backend
storage:
  backend: pgvector  # pgvector | sqlite (future)
  pgvector:
    dsn: "postgres://localhost:5432/memorable?sslmode=disable"
    table_name: memories
    vector_dimensions: 1536  # Must match embedding model

# Embedding provider (openai | gemini | ollama | custom)
embedding:
  provider: openai
  openai:
    api_key: ${OPENAI_API_KEY}
    model: text-embedding-3-small
    # base_url: https://custom-api.example.com/v1  # For OpenAI-compatible APIs
  gemini:
    api_key: ${GEMINI_API_KEY}
    model: text-embedding-004
  ollama:
    base_url: http://localhost:11434
    model: nomic-embed-text
    dims: 768
  # Generic OpenAI-compatible provider — works with any /v1/embeddings API
  custom:
    base_url: https://openrouter.ai/api/v1  # Required
    api_key: ${CUSTOM_API_KEY}
    model: openai/text-embedding-3-small  # Required
    dims: 1536  # Required
    headers:  # Optional
      X-Title: Memorable
```

| Variable | Description | Overrides |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key for embeddings | `embedding.openai.api_key` |
| `GEMINI_API_KEY` | Google Gemini API key | `embedding.gemini.api_key` |
| `CUSTOM_API_KEY` | API key for custom provider | `embedding.custom.api_key` |
| `MEMORABLE_DSN` | PostgreSQL connection string | `storage.pgvector.dsn` |
| Model | Dimensions | Notes |
|---|---|---|
| `text-embedding-3-small` | 1536 | Default. Best cost/performance ratio. |
| `text-embedding-3-large` | 3072 | Higher accuracy, 2× storage. |
| `text-embedding-ada-002` | 1536 | Legacy. |

Set `base_url` to use OpenAI-compatible APIs like Voyage AI, Together AI, or Azure OpenAI.
| Model | Dimensions | Notes |
|---|---|---|
| `text-embedding-004` | 768 | Recommended. |
| `embedding-001` | 768 | Legacy. |
| Model | Dimensions | Notes |
|---|---|---|
| `nomic-embed-text` | 768 | Good general-purpose model. |
| `mxbai-embed-large` | 1024 | Higher accuracy. |
| `all-minilm` | 384 | Smallest, fastest. |
Run `ollama pull nomic-embed-text` to download the model, then set `provider: ollama` in config. No API key required.
Set `provider: custom` and point `base_url` at any API that implements the OpenAI `/v1/embeddings` endpoint.
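
For orientation, the wire format such an endpoint serves is the public OpenAI embeddings API: POST `{base_url}/embeddings` with a model and input, returning a `data` array of embedding vectors. A minimal Go client sketch (error handling trimmed):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

type embedReq struct {
	Model string   `json:"model"`
	Input []string `json:"input"`
}

type embedResp struct {
	Data []struct {
		Embedding []float64 `json:"embedding"`
	} `json:"data"`
}

// embed POSTs to {baseURL}/embeddings in the OpenAI wire format.
func embed(baseURL, apiKey, model, text string) ([]float64, error) {
	body, _ := json.Marshal(embedReq{Model: model, Input: []string{text}})
	req, _ := http.NewRequest("POST", baseURL+"/embeddings", bytes.NewReader(body))
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out embedResp
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	if len(out.Data) == 0 {
		return nil, fmt.Errorf("empty embedding response")
	}
	return out.Data[0].Embedding, nil
}

func main() {
	vec, err := embed("https://openrouter.ai/api/v1", os.Getenv("CUSTOM_API_KEY"),
		"openai/text-embedding-3-small", "hello world")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("dims:", len(vec))
}
```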
OpenRouter:
```yaml
embedding:
  provider: custom
  custom:
    base_url: https://openrouter.ai/api/v1
    api_key: ${CUSTOM_API_KEY}
    model: openai/text-embedding-3-small
    dims: 1536
    headers:
      X-Title: Memorable
      HTTP-Referer: https://github.com/two-tech-dev/memorable
```

Together AI:
```yaml
embedding:
  provider: custom
  custom:
    base_url: https://api.together.xyz/v1
    api_key: ${CUSTOM_API_KEY}
    model: togethercomputer/m2-bert-80M-8k-retrieval
    dims: 768
```

Voyage AI:
```yaml
embedding:
  provider: custom
  custom:
    base_url: https://api.voyageai.com/v1
    api_key: ${CUSTOM_API_KEY}
    model: voyage-3
    dims: 1024
```

Mistral:
```yaml
embedding:
  provider: custom
  custom:
    base_url: https://api.mistral.ai/v1
    api_key: ${CUSTOM_API_KEY}
    model: mistral-embed
    dims: 1024
```

Azure OpenAI:
```yaml
embedding:
  provider: custom
  custom:
    base_url: https://YOUR-RESOURCE.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT
    api_key: ${CUSTOM_API_KEY}
    model: text-embedding-3-small
    dims: 1536
    headers:
      api-key: ${CUSTOM_API_KEY}
```

Note: Set `dims` to match the actual output dimensions of your chosen model, and ensure `storage.pgvector.vector_dimensions` matches.
```bash
# Build
make build

# Run tests
make test

# Run linter
make lint

# Clean build artifacts
make clean
```
See CONTRIBUTING.md for the full development guide.
MIT © 2026 Two Tech Dev