About
Mem0 Memory Server is an MCP-compatible memory layer that uses Mem0 (mem0.ai) to persistently store, search, and retrieve coding preferences and technical knowledge. It enables AI coding assistants to maintain context across sessions by remembering code snippets, implementation patterns, setup configurations, and best practices.
Key features:
- Store code snippets with comprehensive context, including dependencies, language/framework versions, setup instructions, and documentation
- Semantic search through stored coding preferences to find relevant implementations, solutions, and best practices
- Retrieval of all stored preferences to analyze patterns and ensure no relevant context is missed
- SSE-based endpoint for integration with MCP-compatible clients such as Cursor
- Persistent memory system that runs as a decoupled process, allowing agents to connect, use, and disconnect as needed
README
MCP Server with Mem0 for Managing Coding Preferences
This repository demonstrates a structured approach to using an MCP server with mem0 to manage coding preferences efficiently. The server can be used with Cursor and provides essential tools for storing, retrieving, and searching coding preferences.
Installation
1. Clone this repository
2. Initialize the uv environment:
uv venv
3. Activate the virtual environment:
source .venv/bin/activate
4. Install the dependencies using uv:
# Install in editable mode from pyproject.toml
uv pip install -e .
5. Update the .env file in the root directory with your mem0 API key:
MEM0_API_KEY=your_api_key_here
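For reference, here is a minimal sketch of what the server does with that key, assuming the hosted Mem0 client from the mem0ai package and python-dotenv for loading .env (an illustration, not the repository's actual main.py):
import os
from dotenv import load_dotenv  # python-dotenv, assumed here for .env loading
from mem0 import MemoryClient

load_dotenv()  # reads MEM0_API_KEY from the .env file
client = MemoryClient(api_key=os.environ["MEM0_API_KEY"])

# Store a preference, then search it back (the user_id value is illustrative).
client.add("Prefer uv over pip for Python dependency management.", user_id="cursor_agent")
results = client.search("python dependency management", user_id="cursor_agent")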
Usage
1. Start the MCP server:
uv run main.py
2. In Cursor, connect to the SSE endpoint (see Cursor's MCP documentation for reference):
http://0.0.0.0:8080/sse
3. Open the Composer in Cursor and switch to Agent mode.
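To exercise the endpoint outside Cursor, here is a sketch using the SSE client from the official mcp Python SDK (endpoint URL as above):
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Open an SSE connection to the running server and start an MCP session.
    async with sse_client("http://0.0.0.0:8080/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

asyncio.run(main())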
Demo with Cursor
https://github.com/user-attachments/assets/56670550-fb11-4850-9905-692d3496231c
Features
The server provides three main tools for managing coding preferences (a client-side usage sketch follows the list):
1. add_coding_preference: Store code snippets, implementation details, and coding patterns with comprehensive context including:
- Complete code with dependencies
- Language/framework versions
- Setup instructions
- Documentation and comments
- Example usage
- Best practices
2. get_all_coding_preferences: Retrieve all stored coding preferences to analyze patterns, review implementations, and ensure no relevant information is missed.
3. search_coding_preferences: Semantically search through stored coding preferences to find relevant:
- Code implementations
- Programming solutions
- Best practices
- Setup guides
- Technical documentation
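As a rough illustration, calling these tools from inside the MCP client session sketched under Usage might look like the following; the argument names text and query are assumptions, so confirm them against the schemas returned by list_tools:
# Runs inside the `async with ClientSession(...)` block from the Usage sketch.
snippet = "def health():\n    return {'status': 'ok'}  # FastAPI handler, Python 3.11"
await session.call_tool("add_coding_preference", {"text": snippet})
await session.call_tool("get_all_coding_preferences", {})
await session.call_tool("search_coding_preferences", {"query": "fastapi health check"})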
Why?
This implementation allows for a persistent coding preferences system that can be accessed via MCP. The SSE-based server can run as a process that agents connect to, use, and disconnect from whenever needed. This pattern fits well with "cloud-native" use cases where the server and clients can be decoupled processes on different nodes.
Server
By default, the server runs on 0.0.0.0:8080, but the host and port can be configured with command-line arguments:
uv run main.py --host <host> --port <port>
The server exposes an SSE endpoint at /sse that MCP clients can connect to for accessing the coding preferences management tools.
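For orientation, a server of this shape can be sketched with FastMCP from the mcp SDK. This is a hedged illustration rather than the repository's actual main.py, and the host/port keyword arguments are FastMCP settings:
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mem0-mcp", host="0.0.0.0", port=8080)

@mcp.tool()
def search_coding_preferences(query: str) -> str:
    """Semantically search stored coding preferences."""
    # A real implementation would query the Mem0 client here.
    return f"results for: {query}"

# Exposes an SSE endpoint at /sse by default.
mcp.run(transport="sse")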