What is Basic Memory?

An overview of what Basic Memory is and how it works.

Basic Memory is a knowledge base that you and your AI assistant share. It stores notes as Markdown files so your work stays readable, portable, and searchable.

Instead of losing valuable insights in conversation history, you build a persistent knowledge base where both you and AI can read, write, and enhance each other's work.


Why Basic Memory?

The problem: AI conversations are ephemeral. You have a great discussion, make important decisions, learn something new - and then it's gone, buried in chat history.

The solution: Basic Memory gives your AI assistant a persistent memory. Knowledge captured in one conversation is available in all future conversations. Your AI can reference past discussions, decisions, and context.

Key benefits:

  • Persistent context - Knowledge survives across conversations
  • You own your data - Plain Markdown files you control
  • Structured knowledge - Observations and relations create a semantic graph
  • Works with any AI - Claude, ChatGPT, and other MCP-compatible assistants

What it does

Stores notes as Markdown

Notes are plain files you can edit with any editor. No lock-in, no proprietary formats.

Connects ideas with links

Relations and tags turn notes into a knowledge graph that grows over time.

Lets assistants search and write

MCP tools let your assistant read, write, search, and organize notes.

Cloud or Local

Use the hosted cloud service or run everything locally - your choice.

How it works

Basic Memory runs an MCP server that can read and write Markdown files. A SQLite index keeps search fast. Your assistant calls tools like search_notes, read_note, and write_note to work with your notes.
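
To make this concrete, here is a minimal sketch of the indexing idea in Python. It is not Basic Memory's actual schema or implementation - just an illustration of how a SQLite full-text index over plain Markdown files keeps search fast while the files remain the source of truth:

```python
# Illustrative sketch only - not Basic Memory's real schema or code.
import sqlite3
from pathlib import Path

def build_index(notes_dir: str) -> sqlite3.Connection:
    """Index every Markdown file under notes_dir into an in-memory FTS5 table."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE VIRTUAL TABLE notes USING fts5(path, body)")
    for md_file in Path(notes_dir).expanduser().rglob("*.md"):
        db.execute(
            "INSERT INTO notes (path, body) VALUES (?, ?)",
            (str(md_file), md_file.read_text(encoding="utf-8")),
        )
    db.commit()
    return db

def search(db: sqlite3.Connection, query: str) -> list[str]:
    """Return paths of notes matching a full-text query, best match first."""
    rows = db.execute(
        "SELECT path FROM notes WHERE notes MATCH ? ORDER BY rank", (query,)
    )
    return [path for (path,) in rows]

if __name__ == "__main__":
    db = build_index("~/basic-memory")           # hypothetical notes directory
    print(search(db, "authentication AND jwt"))  # FTS5 query syntax
```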


The workflow

  1. Capture a note - You write or ask your assistant to write a note during a conversation.
  2. Index and connect - The system indexes the note, extracts observations and relations, and links it to related notes.
  3. Reuse later - In future conversations, your assistant searches and loads relevant context automatically.
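
Your assistant drives these steps through MCP tool calls. The sketch below shows the same capture-and-reuse loop from a plain MCP client using the MCP Python SDK; the launch command and tool arguments are assumptions for illustration, so check the MCP Tools Reference and install guide for the exact parameters:

```python
# Sketch of the capture-and-reuse loop via the MCP Python SDK.
# "uvx basic-memory mcp" and the tool arguments below are assumptions.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="uvx", args=["basic-memory", "mcp"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Capture: write a note during the conversation.
            await session.call_tool("write_note", arguments={
                "title": "Decision: API Authentication",
                "folder": "decisions",
                "content": "- [decision] Use JWT tokens for API auth #security",
            })

            # 2. Index and connect happens inside the server once the note is written.

            # 3. Reuse: a later conversation searches for that context.
            results = await session.call_tool(
                "search_notes", arguments={"query": "JWT authentication"}
            )
            print(results)

asyncio.run(main())
```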

Example conversation

You: "What did we decide about the authentication approach?"

AI: [Searches knowledge base, finds your past notes]
        "Based on your notes, you decided to use JWT tokens for API
        authentication. The decision was made on January 15th and
        documented in 'Decision: API Authentication'."

You: "Add a note about implementing refresh tokens"

AI: [Creates a new note linked to the authentication decision]
        "I've created a note about refresh tokens and linked it to
        your authentication decision notes."

What a note looks like

Notes are standard Markdown with optional semantic structure:

---
title: API Authentication Decision
tags: [security, api, auth]
---

# API Authentication Decision

## Context
We needed to choose an authentication approach for the new API.

## Observations
- [decision] Use JWT tokens for API auth #security
- [requirement] Tokens expire after 24 hours
- [risk] Rate limiting needed on login endpoint #auth

## Relations
- implements [[API Security Spec]]
- depends_on [[User Service]]
- relates_to [[Token Refresh]]

Key concepts:

  • Observations - Categorized facts: [decision], [requirement], [risk], etc.
  • Relations - Links to other notes: [[Other Note]] in simple WikiLink format
  • Tags - Searchable metadata: #security, #api

The '## Observations' and '## Relations' headings are only a convention; Basic Memory parses observations and relations from anywhere in the Markdown.
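
To show how little structure is needed, here is a simplified sketch of how observation and relation lines could be recognized. The regexes are illustrative only, not Basic Memory's actual parser:

```python
# Simplified illustration of the observation / relation syntax -
# not the parser Basic Memory actually uses.
import re

OBSERVATION = re.compile(r"^-\s*\[(?P<category>[^\]]+)\]\s*(?P<fact>.+?)\s*$")
RELATION    = re.compile(r"^-\s*(?P<type>\w+)\s*\[\[(?P<target>[^\]]+)\]\]\s*$")
TAG         = re.compile(r"#(\w+)")

def parse_line(line: str) -> dict | None:
    """Classify one Markdown list item as an observation or a relation."""
    if m := RELATION.match(line.strip()):
        return {"kind": "relation", "type": m["type"], "target": m["target"]}
    if m := OBSERVATION.match(line.strip()):
        return {
            "kind": "observation",
            "category": m["category"],
            "fact": TAG.sub("", m["fact"]).strip(),
            "tags": TAG.findall(m["fact"]),
        }
    return None

print(parse_line("- [decision] Use JWT tokens for API auth #security"))
print(parse_line("- implements [[API Security Spec]]"))
```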

What the AI sees

When your AI assistant searches your knowledge base, it doesn't just find text - it navigates a semantic graph of connected ideas.

The knowledge graph

Each note becomes an entity with structured data:

  • Entities - Each note is an entity with a title, content, and metadata
  • Observations - Categorized facts extracted from the note (decisions, requirements, risks)
  • Relations - Typed links connecting entities (implements, depends_on, relates_to)
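
Conceptually, each entity looks something like the sketch below. The class and field names are chosen for illustration and are not Basic Memory's internal schema:

```python
# Illustrative data model - names are assumptions, not Basic Memory internals.
from dataclasses import dataclass, field

@dataclass
class Observation:
    category: str                     # e.g. "decision", "requirement", "risk"
    fact: str
    tags: list[str] = field(default_factory=list)

@dataclass
class Relation:
    type: str                         # e.g. "implements", "depends_on", "relates_to"
    target: str                       # title of the linked note

@dataclass
class Entity:
    title: str
    permalink: str                    # stable identifier, e.g. "api-authentication"
    content: str
    observations: list[Observation] = field(default_factory=list)
    relations: list[Relation] = field(default_factory=list)

note = Entity(
    title="API Authentication Decision",
    permalink="api-authentication-decision",
    content="We needed to choose an authentication approach for the new API.",
    observations=[Observation("decision", "Use JWT tokens for API auth", ["security"])],
    relations=[Relation("implements", "API Security Spec")],
)
```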

Building context

When you ask a question, the AI doesn't just return one note. It traverses the graph to build rich context:


The flow:

  1. Search - Your question triggers a search across all notes
  2. Expand - The AI uses build_context to follow relations and gather connected notes
  3. Synthesize - With the full context loaded, the AI can give a complete answer

This recursive traversal means asking about "API authentication" automatically pulls in related decisions, dependencies, and connected topics - giving your AI the full picture.
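
A simplified sketch of that traversal is a depth-limited walk over relations, starting from the notes a search returns. The real build_context tool has its own parameters, so treat the names and defaults here as assumptions:

```python
# Depth-limited walk over relations - mirrors the idea behind build_context,
# not its actual implementation.
from collections import deque

def gather_context(graph: dict[str, list[str]], start: str, depth: int = 2) -> list[str]:
    """Collect the start note plus everything reachable within `depth` relation hops."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        note, hops = queue.popleft()
        if hops == depth:
            continue
        for linked in graph.get(note, []):
            if linked not in seen:
                seen.add(linked)
                queue.append((linked, hops + 1))
    return sorted(seen)

# Toy graph: note title -> titles it links to via relations.
graph = {
    "API Authentication Decision": ["API Security Spec", "User Service", "Token Refresh"],
    "Token Refresh": ["Session Storage"],
}
print(gather_context(graph, "API Authentication Decision"))
```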

Memory URLs

The AI references knowledge using memory:// URLs:

memory://api-authentication          # Reference by permalink
memory://api-authentication/relates_to/*  # Follow all 'relates_to' links
memory://folder/note-title           # Reference by path

These stable identifiers let the AI (and you) pinpoint exactly what context to load.
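
For illustration, a memory:// reference splits into a permalink (or folder path) plus an optional relation pattern. The helper below is a hypothetical sketch, not part of Basic Memory:

```python
# Hypothetical helper showing the shape of memory:// references.
from urllib.parse import urlparse

def parse_memory_url(url: str) -> dict:
    """Split a memory:// URL into a permalink/path and an optional relation to follow."""
    parsed = urlparse(url)
    assert parsed.scheme == "memory", "expected a memory:// URL"
    parts = (parsed.netloc + parsed.path).strip("/").split("/")
    # A trailing "<relation>/*" means: follow that relation from the note.
    if len(parts) >= 3 and parts[-1] == "*":
        return {"permalink": "/".join(parts[:-2]), "follow_relation": parts[-2]}
    return {"permalink": "/".join(parts), "follow_relation": None}

print(parse_memory_url("memory://api-authentication"))
print(parse_memory_url("memory://api-authentication/relates_to/*"))
```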

You don't have to understand or manage the knowledge graph yourself - you can simply ask the AI to handle it for you.

You: "Make sure you add observations and relations to this note"

AI: [Updates the note with semantic information]
    "OK I've updated the note with observations and relations...."

You: "Make sure you do this for all our other notes :)"

AI: [Makes a note in its own memory to keep notes annotated with semantic information]
    "I'll remember that...."

Seeing into the black box

AI memory is typically opaque - you don't know what context the AI has or what it "remembers." Basic Memory makes this transparent.

  • See what your AI sees - Every piece of context is a file you can read
  • Edit what your AI knows - Modify, delete, or reorganize knowledge anytime
  • Watch changes happen - See exactly what your AI adds or updates
  • Keep your memory - Plain Markdown files you own forever
  • Audit trail - Every note has a history; you can see what was added when
  • No surprises - The AI can only know what's in your files; no hidden context
  • Portable knowledge - Plain Markdown means you're never locked in; chat with one AI, bring your knowledge to the next

Closing the loop

AI agents work best when they can observe the results of their actions. Basic Memory creates a feedback loop where each conversation builds on the last.

How it works:

  • Cumulative intelligence - Each conversation adds to the knowledge base, making future conversations smarter
  • Human-in-the-loop refinement - You can correct and improve AI-generated notes, and the AI learns from your edits
  • Context compounds - Unlike chat history that gets truncated, knowledge persists and connects
  • Pattern recognition - Over time, the AI can recognize patterns across your entire knowledge base

The feedback loop


Each cycle reinforces learning. You ask questions, the AI searches and responds, creates notes from the conversation, and you review and refine. The knowledge base grows with each iteration.

Knowledge growth over time

Each conversation builds on previous context, creating increasingly refined understanding.

The result: your AI gets smarter about your work with every interaction.


Where it runs

Cloud

Basic Memory Cloud provides:

  • Hosted MCP endpoint - Connect without installing anything
  • Access from any device - Use your memory from desktop, mobile, the CLI, or multiple AIs
  • Web app - Browse and edit notes in your browser
  • Local sync - Sync your notes locally for easy management
  • Snapshots - Point-in-time backups, taken automatically each day or manually on demand

Local

The open-source local version provides:

  • Full control - Everything runs on your machine
  • No account needed - Use immediately after install
  • CLI tools - Command-line access to all features
  • Offline access - Works without internet

Both use the same Markdown format, so you can start with one and switch to the other later.


MCP Integration

Basic Memory uses the Model Context Protocol (MCP) to connect with AI assistants. MCP is an open standard that lets AI assistants use external tools.

Available tools:

  • write_note - Create or update notes
  • read_note - Read notes with context
  • search_notes - Full-text search
  • edit_note - Incremental editing
  • build_context - Load related notes
  • list_memory_projects - Manage projects
  • ...and many more
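
Because MCP is an open protocol, any compatible client can discover these tools at runtime. Here is a short sketch using the MCP Python SDK; the launch command (uvx basic-memory mcp) is an assumption, so check the install guide for the exact invocation:

```python
# Discover the tools a running Basic Memory MCP server exposes.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_basic_memory_tools() -> None:
    server = StdioServerParameters(command="uvx", args=["basic-memory", "mcp"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_basic_memory_tools())
```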

Compatible assistants:

  • Claude Desktop
  • Claude Code
  • ChatGPT (Pro/Max)
  • Google Gemini
  • Cursor
  • VS Code (with MCP extension)
  • Codex

Getting started

Ready to try Basic Memory?

Quickstart: Cloud

Connect in 2 minutes. No installation required.

Quickstart: Local

Install locally and run everything on your machine.

Next steps

Getting Started

Full installation guide with configuration options.

Knowledge Format

Learn the note structure with observations and relations.

MCP Tools Reference

All available tools for AI assistants.