Basic Memory lets you build persistent knowledge through natural conversations with Large Language Models (LLMs) like
Claude, while keeping everything in simple Markdown files on your computer. It uses the Model Context Protocol (MCP) to
enable any compatible LLM to read and write to your local knowledge base.
You can view shared context via files in ~/basic-memory (the default directory).
Alternative Installation via Smithery
You can use Smithery to automatically configure Basic Memory for Claude Desktop:
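A typical invocation looks like this (the exact package name is an assumption; check Smithery's listing for the current one):

```bash
npx -y @smithery/cli install @basicmachines-co/basic-memory --client claude
```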
This installs and configures Basic Memory without requiring manual edits to the Claude Desktop configuration file. The
Smithery server hosts the MCP server component, while your data remains stored locally as Markdown files.
Why Basic Memory?
Most LLM interactions are ephemeral - you ask a question, get an answer, and everything is forgotten. Each conversation
starts fresh, without the context or knowledge from previous ones. Current workarounds have limitations:
Chat histories capture conversations but aren't structured knowledge
RAG systems can query documents but don't let LLMs write back
Vector databases require complex setups and often live in the cloud
Knowledge graphs typically need specialized tools to maintain
Basic Memory addresses these problems with a simple approach: structured Markdown files that both humans and LLMs can read and write to. The key advantages:
Local-first: All knowledge stays in files you control
Bi-directional: Both you and the LLM read and write to the same files
Structured yet simple: Uses familiar Markdown with semantic patterns
Traversable knowledge graph: LLMs can follow links between topics
Standard formats: Works with existing editors like Obsidian
Lightweight infrastructure: Just local files indexed in a local SQLite database
With Basic Memory, you can:
Have conversations that build on previous knowledge
Create structured notes during natural conversations
Navigate your knowledge graph semantically
Keep everything local and under your control
Use familiar tools like Obsidian to view and edit notes
Build a personal knowledge base that grows over time
How It Works in Practice
Let's say you're exploring coffee brewing methods and want to capture your knowledge. Here's how it works:
Start by chatting normally:
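For example, something like:

You: "I've been experimenting with pour over coffee lately. A medium-fine grind and water just off the boil seem to give the clearest flavor."

LLM: "That matches what many brewers report - pour over tends to highlight brightness and aroma. How does it compare to your French press results?"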
... continue conversation.
Ask the LLM to help structure this knowledge:
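For instance:

You: "Let's capture what we've learned in a note on coffee brewing methods."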
The LLM creates a new Markdown file on your system (which you can see instantly in Obsidian or your editor):
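The file might look something like this (the title, tags, and linked topics are illustrative):

```markdown
---
title: Coffee Brewing Methods
type: note
permalink: coffee-brewing-methods
tags:
- coffee
- brewing
---

# Coffee Brewing Methods

## Observations
- [method] Pour over provides more clarity in flavor than French press #brewing
- [technique] Water just off the boil (about 95°C) extracts well without bitterness

## Relations
- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
```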
The note embeds semantic content and links to other topics via simple Markdown formatting.
You see this file on your computer in real time in the current project directory (default: ~/basic-memory).
Real-time sync is enabled by default as of v0.12.0.
In a chat with the LLM, you can reference a topic:
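For instance, pointing it at the note created earlier:

You: "Look at coffee-brewing-methods for context about pour over coffee."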
The LLM can now build rich context from the knowledge graph. For example:
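Conceptually, the lookup resolves through memory:// URLs and follows the note's relations (the permalinks here are illustrative):

memory://coffee-brewing-methods - the note itself
memory://coffee-bean-origins - reached via the relates_to relation
memory://proper-grinding-technique - reached via the requires relation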
Each related document can lead to more context, building a rich semantic understanding of your knowledge base.
This creates a two-way flow where:
Humans write and edit Markdown files
LLMs read and write through MCP
Sync keeps everything consistent
All knowledge stays in local files.
Technical Implementation
Under the hood, Basic Memory:
Stores everything in Markdown files
Uses a SQLite database for searching and indexing
Extracts semantic meaning from simple Markdown patterns
Maintains a local knowledge graph derived from the files
Provides bidirectional synchronization between files and the knowledge graph
Implements the Model Context Protocol (MCP) for AI integration
Exposes tools that let AI assistants traverse and manipulate the knowledge graph
Uses memory:// URLs to reference entities across tools and conversations
The file format is just Markdown with some simple markup:
Each Markdown file has three parts: frontmatter, observations, and relations.
Frontmatter
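A typical frontmatter block looks something like this (the fields follow the project's conventions; the values are illustrative):

```yaml
---
title: Coffee Brewing Methods
type: note
permalink: coffee-brewing-methods
tags:
- coffee
- brewing
---
```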
Observations
Observations are facts about a topic.
They are written as Markdown list items in a special format that can include a category, tags (marked with a
"#" character), and optional context.
Observation Markdown format:
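```markdown
- [category] content #tag1 #tag2 (optional context)
```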
Examples of observations:
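```markdown
- [method] Pour over provides more clarity in flavor than French press #brewing
- [technique] Grind size should be medium-fine for pour over #grinding (finer than drip)
- [preference] Ethiopian beans work well with light roasts #beans
```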
Relations
Relations are links to other topics. They define how entities connect in the knowledge graph.
Markdown format:
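```markdown
- relation_type [[WikiLink]] (optional context)
```

The double-bracket WikiLink syntax is the same one Obsidian uses, which is why these files work in standard editors.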
Examples of relations:
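```markdown
- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
- pairs_well_with [[Breakfast Pastries]] (personal preference)
```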
Using with VS Code
You can use Basic Memory with VS Code to easily retrieve and store information while coding. Follow the installation instructions below.
Manual Installation
Add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and typing Preferences: Open User Settings (JSON).
Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others.
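The following sketch assumes Basic Memory was installed via uv, so the uvx launcher is available; adjust the command if you installed it differently:

```json
{
  "mcp": {
    "servers": {
      "basic-memory": {
        "command": "uvx",
        "args": ["basic-memory", "mcp"]
      }
    }
  }
}
```

If you use .vscode/mcp.json instead, omit the outer "mcp" key and start at "servers".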
Using with Claude Desktop
Basic Memory is built using the MCP (Model Context Protocol) and works with the Claude Desktop app (https://claude.ai/):
Configure Claude Desktop to use Basic Memory:
Edit your MCP configuration file (usually located at ~/Library/Application Support/Claude/claude_desktop_config.json
on macOS):
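For example (again assuming a uv-based install providing uvx):

```json
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": ["basic-memory", "mcp"]
    }
  }
}
```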
If you want to use a specific project (see Multiple Projects), update your Claude Desktop config:
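Something along these lines; the --project flag and its position are assumptions, so check basic-memory --help for the current syntax:

```json
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": ["basic-memory", "--project", "your-project-name", "mcp"]
    }
  }
}
```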
Sync your knowledge:
Basic Memory syncs the files in your project in real time, so manual edits are picked up automatically.
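You can also run a one-off sync from the CLI (the --watch flag reflects pre-v0.12.0 behavior, before real-time sync became the default):

```bash
basic-memory sync          # one-time sync of files into the index
basic-memory sync --watch  # continuous sync (pre-v0.12.0)
```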
In Claude Desktop, the LLM can now use these tools:
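At the time of writing the tool set includes roughly the following; names and signatures may differ between versions:

write_note(title, content, folder, tags) - create or update notes
read_note(identifier, page, page_size) - read notes by title or permalink
build_context(url, depth, timeframe) - navigate memory:// URLs to build conversation context
search_notes(query, page, page_size) - search across the knowledge base
recent_activity(type, depth, timeframe) - find recently updated notes
canvas(nodes, edges, title, folder) - generate knowledge visualizations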