Price Per Token

Basic Memory MCP Server

by basicmachines-co


About

Basic Memory is a local-first knowledge graph and note-taking system that enables persistent knowledge management through natural conversations with LLMs. All data is stored in simple Markdown files on your computer, giving you full ownership of your knowledge base. Key features of Basic Memory:

- **Local knowledge graph** with bidirectional relationships between notes and semantic connections
- **Semantic vector search** combining full-text search with vector similarity using FastEmbed embeddings
- **Schema system** for inferring, validating, and diffing knowledge base structure
- **MCP-native integration** allowing any compatible LLM (Claude, etc.) to read and write to your knowledge base
- **Optional cloud sync** for cross-device access while maintaining local-first as the default
- **CLI** with project management, workspace-aware commands, and an htop-inspired project dashboard
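The hybrid search described above blends full-text matching with vector similarity. A minimal sketch of that idea in plain Python is below; the bag-of-words "embedding", the `alpha` blending weight, and the scoring are illustrative assumptions, not Basic Memory's actual FastEmbed pipeline.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a FastEmbed vector: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    # Full-text component: fraction of query terms present in the note.
    terms = set(query.lower().split())
    words = set(text.lower().split())
    return len(terms & words) / len(terms) if terms else 0.0

def hybrid_search(query, notes, alpha=0.5):
    # Blend keyword match and vector similarity; alpha is an assumed weight.
    q = embed(query)
    scored = [(alpha * keyword_score(query, n) + (1 - alpha) * cosine(q, embed(n)), n)
              for n in notes]
    return [n for score, n in sorted(scored, reverse=True) if score > 0]

notes = [
    "pour over coffee brewing with a gooseneck kettle",
    "ethiopian beans have floral tasting notes",
    "sqlite powers the local index",
]
print(hybrid_search("coffee brewing", notes)[0])
```

The real system indexes persistent embeddings rather than re-embedding on every query, but the ranking principle is the same: each component catches matches the other misses.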

README

[License: AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0) [PyPI](https://badge.fury.io/py/basic-memory) [Python](https://www.python.org/downloads/) [CI](https://github.com/basicmachines-co/basic-memory/actions) [Ruff](https://github.com/astral-sh/ruff)

🚀 Basic Memory Cloud is Live!

  • Cross-device and multi-platform support is here. Your knowledge graph now works on desktop, web, and mobile.
  • Cloud is optional. The local-first open-source workflow continues as always.
  • OSS discount: use code BMFOSS for 20% off for 3 months.
  • Sign up now → with a 7-day free trial

    Basic Memory

    Basic Memory lets you build persistent knowledge through natural conversations with Large Language Models (LLMs) like Claude, while keeping everything in simple Markdown files on your computer. It uses the Model Context Protocol (MCP) to enable any compatible LLM to read and write to your local knowledge base.

    What's New in v0.19.0

  • Semantic Vector Search — find notes by meaning, not just keywords. Combines full-text and vector similarity for hybrid search with FastEmbed embeddings.
  • Schema System — infer, validate, and diff the structure of your knowledge base with schema_infer, schema_validate, and schema_diff tools.
  • Per-Project Cloud Routing — route individual projects through the cloud while others stay local, using API key authentication (basic-memory project set-cloud).
  • FastMCP 3.0 — upgraded to FastMCP 3.0 with tool annotations for better client integration.
  • CLI Overhaul — JSON output mode (--json) for scripting, workspace-aware commands, and an htop-inspired project dashboard.
  • Smarter Editing — edit_note append/prepend auto-creates notes if they don't exist; write_note has an overwrite guard to prevent accidental data loss.
  • Richer Search Results — matched chunk text returned in search results for better context.
  • See the full CHANGELOG for details.
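The schema tools listed above (schema_infer, schema_validate, schema_diff) work over the structure of existing notes. A sketch of the inference step is below; the note shape (a type plus a fields dict) and the required/optional labels are assumptions for illustration, not the actual schema_infer output format.

```python
def infer_schema(notes):
    # Count how often each field appears per note type.
    schema = {}
    for note in notes:
        fields = schema.setdefault(note["type"], {})
        for key in note["fields"]:
            fields[key] = fields.get(key, 0) + 1
    counts = {t: len([n for n in notes if n["type"] == t]) for t in schema}
    # A field is "required" if every note of that type has it.
    return {t: {k: ("required" if c == counts[t] else "optional")
                for k, c in fields.items()}
            for t, fields in schema.items()}

notes = [
    {"type": "recipe", "fields": {"title": "V60", "brew_time": "3 min"}},
    {"type": "recipe", "fields": {"title": "Aeropress"}},
]
print(infer_schema(notes))
# {'recipe': {'title': 'required', 'brew_time': 'optional'}}
```

Validation and diffing then become comparisons against this inferred structure: a note missing a required field fails validation, and two inferred schemas can be diffed key by key.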

  • Website: basicmemory.com
  • Documentation: docs.basicmemory.com
  • Community: Discord

  • Pick up your conversation right where you left off
  • AI assistants can load context from local files in a new conversation
  • Notes are saved locally as Markdown files in real time
  • No project knowledge or special prompting required
  • Demo video: https://github.com/user-attachments/assets/a55d8238-8dd0-454a-be4c-8860dbbd0ddc

    Quick Start

    # Install with uv (recommended)
    uv tool install basic-memory

    Configure Claude Desktop (edit ~/Library/Application Support/Claude/claude_desktop_config.json)

    Add this to your config:

    {
      "mcpServers": {
        "basic-memory": {
          "command": "uvx",
          "args": ["basic-memory", "mcp"]
        }
      }
    }

    Now in Claude Desktop, you can:

    - Write notes with "Create a note about coffee brewing methods"

    - Read notes with "What do I know about pour over coffee?"

    - Search with "Find information about Ethiopian beans"

    You can view shared context via files in ~/basic-memory (default directory location).
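Because notes are plain Markdown, you can open and edit them in any editor. A file created by the "coffee brewing methods" prompt above might look roughly like the sketch below; the exact frontmatter fields, observation categories, and relation syntax shown here are illustrative, so check the documentation for the precise format.

```markdown
---
title: Coffee Brewing Methods
type: note
tags: [coffee]
---

# Coffee Brewing Methods

## Observations
- [technique] Pour over highlights floral tasting notes

## Relations
- relates_to [[Ethiopian Beans]]
```

The bracketed `[[WikiLink]]`-style references are what connect notes into the knowledge graph.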

    Automatic Updates

    Basic Memory includes a default-on auto-update flow for CLI installs.

  • Auto-install supported: uv tool and Homebrew installs
  • Default check interval: every 24 hours (86400 seconds)
  • MCP-safe behavior: update checks run silently in basic-memory mcp mode
  • uvx behavior: skipped (runtime is ephemeral and managed by uvx)
  • Manual update commands:

    # Check now and install if supported
    bm update

    # Check only, do not install

    bm update --check

    Config options in ~/.basic-memory/config.json:

    {
      "auto_update": true,
      "update_check_interval": 86400
    }
    

    To disable automatic updates, set "auto_update": false.
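The interaction of the two config options above can be sketched as a simple gate: updates are skipped entirely when auto_update is off, and otherwise a check runs only once the interval has elapsed. The last-check timestamp bookkeeping below is an assumed implementation detail, not Basic Memory's actual code.

```python
import json
import time

def should_check_for_update(config, last_check_ts, now=None):
    # Respect the kill switch first.
    if not config.get("auto_update", True):
        return False
    # Fall back to the documented default of 24 hours (86400 seconds).
    interval = config.get("update_check_interval", 86400)
    now = time.time() if now is None else now
    return (now - last_check_ts) >= interval

config = json.loads('{"auto_update": true, "update_check_interval": 86400}')
print(should_check_for_update(config, last_check_ts=0, now=90000))  # True
print(should_check_for_update(config, last_check_ts=0, now=3600))   # False
```

This also shows why the default behaves well in MCP mode: a check that is not yet due costs only a timestamp comparison.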

    Why Basic Memory?

    Most LLM interactions are ephemeral - you ask a question, get an answer, and everything is forgotten. Each conversation starts fresh, without the context or knowledge from previous ones. Current workarounds have limitations:

  • Chat histories capture conversations but aren't structured knowledge
  • RAG systems can query documents but don't let LLMs write back
  • Vector databases require complex setups and don't store knowledge in human-readable files
    Related MCP Servers

    AI Research Assistant

    hamid-vakilzadeh

    AI Research Assistant provides comprehensive access to millions of academic papers through the Semantic Scholar and arXiv databases. This MCP server enables AI coding assistants to perform intelligent literature searches, citation network analysis, and paper content extraction without requiring an API key. Key features include:

    - Advanced paper search with multi-filter support by year ranges, citation thresholds, field of study, and publication type
    - Title matching with confidence scoring for finding specific papers
    - Batch operations supporting up to 500 papers per request
    - Citation analysis and network exploration for understanding research relationships
    - Full-text PDF extraction from arXiv and Wiley open-access content (Wiley TDM token required for institutional access)
    - Rate limits of 100 requests per 5 minutes with options to request higher limits through Semantic Scholar

    Web & Search
    Linkup

    LinkupPlatform

    Linkup is a real-time web search and content extraction service that enables AI assistants to search the web and retrieve information from trusted sources. It provides source-backed answers with citations, making it ideal for fact-checking, news gathering, and research tasks. Key features of Linkup:

    - Real-time web search using natural language queries to find current information, news, and data
    - Page fetching to extract and read content from any webpage URL
    - Search depth modes: Standard for direct-answer queries and Deep for complex research across multiple sources
    - Source-backed results with citations and context from relevant, trustworthy websites
    - JavaScript rendering support for accessing dynamic content on JavaScript-heavy pages

    Web & Search
    Math-MCP

    EthanHenrickson

    Math-MCP is a computation server that enables Large Language Models (LLMs) to perform accurate numerical calculations through the Model Context Protocol. It provides precise mathematical operations via a simple API to overcome LLM limitations in arithmetic and statistical reasoning. Key features of Math-MCP:

    - Basic arithmetic operations: addition, subtraction, multiplication, division, modulo, and bulk summation
    - Statistical analysis functions: mean, median, mode, minimum, and maximum calculations
    - Rounding utilities: floor, ceiling, and nearest integer rounding
    - Trigonometric functions: sine, cosine, tangent, and their inverses with degrees and radians conversion support

    Developer Tools