Price Per Token

Knowledge Base Server

by jeanibarz

About

Knowledge Base Server is an MCP server that enables browsing and semantic search across local knowledge bases. It provides content retrieval and listing capabilities with vector-based similarity search.

Key features of Knowledge Base Server:

- List and retrieve documents from configurable local knowledge base directories
- Semantic search with embeddings, using FAISS vector indexes for fast similarity matching
- Multiple embedding providers: Ollama (local, recommended), the OpenAI API, and HuggingFace
- Structured knowledge management with environment-based configuration
- Automatic document indexing and embedding generation
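The retrieval flow described above can be sketched in a few lines. This is an illustrative outline, not code from this repository: it represents document chunks as plain vectors and ranks them by cosine similarity to a query vector, which is the operation a FAISS index performs at scale.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, chunk_vecs, k=2):
    """Rank chunk vectors by similarity to the query (what FAISS accelerates)."""
    scored = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(chunk_vecs)]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

# Toy 3-dimensional "embeddings"; a real provider returns hundreds of dimensions.
chunks = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]
query = [1.0, 0.05, 0.0]
print(top_k(query, chunks))
```

A real index replaces the linear scan with an approximate-nearest-neighbor structure, but the ranking semantics are the same.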

README

Knowledge Base MCP Server

This server is listed on [Smithery](https://smithery.ai/server/@jeanibarz/knowledge-base-mcp-server). This MCP server provides tools for listing and retrieving content from different knowledge bases.

Setup Instructions

These instructions assume you have Node.js and npm installed on your system.

Installing via Smithery

To install Knowledge Base Server for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @jeanibarz/knowledge-base-mcp-server --client claude

Manual Installation

Prerequisites

  • Node.js (version 16 or higher)
  • npm (Node Package Manager)

    1. Clone the repository:

        git clone 
        cd knowledge-base-mcp-server
        

    2. Install dependencies:

        npm install
        

    3. Configure environment variables:

    This server supports three embedding providers: Ollama (recommended for reliability), OpenAI, and HuggingFace (fallback option).

    ### Option 1: Ollama Configuration (Recommended)

    * Set EMBEDDING_PROVIDER=ollama to use local Ollama embeddings
    * Install Ollama and pull an embedding model: ollama pull dengcao/Qwen3-Embedding-0.6B:Q8_0
    * Configure the following environment variables:

            EMBEDDING_PROVIDER=ollama
            OLLAMA_BASE_URL=http://localhost:11434  # Default Ollama URL
            OLLAMA_MODEL=dengcao/Qwen3-Embedding-0.6B:Q8_0          # Default embedding model
            KNOWLEDGE_BASES_ROOT_DIR=$HOME/knowledge_bases
            
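As a quick sanity check of these settings, the request the server would send to Ollama can be assembled from the same variables. This is an illustrative sketch, not code from this repository; it only builds the URL and JSON payload for Ollama's /api/embeddings endpoint and sends nothing.

```python
import json
import os

# Read the variables above, falling back to the documented defaults.
base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
model = os.environ.get("OLLAMA_MODEL", "dengcao/Qwen3-Embedding-0.6B:Q8_0")

def build_embedding_request(text):
    """Return (url, body) for an Ollama embedding call; nothing is sent here."""
    url = f"{base_url.rstrip('/')}/api/embeddings"
    body = json.dumps({"model": model, "prompt": text})
    return url, body

url, body = build_embedding_request("hello knowledge base")
print(url)
```

If the printed URL responds to a POST with such a body, the Ollama side of the configuration is working.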

    ### Option 2: OpenAI Configuration

    * Set EMBEDDING_PROVIDER=openai to use the OpenAI API for embeddings
    * Configure the following environment variables:

            EMBEDDING_PROVIDER=openai
            OPENAI_API_KEY=your_api_key_here
            OPENAI_MODEL_NAME=text-embedding-ada-002
            KNOWLEDGE_BASES_ROOT_DIR=$HOME/knowledge_bases
            

    ### Option 3: HuggingFace Configuration (Fallback)

    * Set EMBEDDING_PROVIDER=huggingface or leave it unset (default)
    * Obtain a free API key from HuggingFace
    * Configure the following environment variables:

            EMBEDDING_PROVIDER=huggingface          # Optional, this is the default
            HUGGINGFACE_API_KEY=your_api_key_here
            HUGGINGFACE_MODEL_NAME=sentence-transformers/all-MiniLM-L6-v2
            KNOWLEDGE_BASES_ROOT_DIR=$HOME/knowledge_bases
            

    ### Additional Configuration

    * The server supports the FAISS_INDEX_PATH environment variable to specify the path to the FAISS index. If not set, it defaults to $HOME/knowledge_bases/.faiss.
    * Logging can be routed to a file by setting LOG_FILE=/path/to/logs/knowledge-base.log. Log verbosity defaults to info and can be adjusted with LOG_LEVEL=debug|info|warn|error.
    * You can set these environment variables in your .bashrc or .zshrc file, or directly in the MCP settings.
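The defaulting behavior described above can be illustrated with a short sketch. This is not the server's implementation; it only shows how the documented fallbacks (the $HOME/knowledge_bases/.faiss index path and the info log level) might be resolved from the environment:

```python
import os

def resolve_config(env=os.environ):
    """Apply the documented defaults when a variable is unset."""
    return {
        "root_dir": env.get("KNOWLEDGE_BASES_ROOT_DIR",
                            os.path.expanduser("~/knowledge_bases")),
        # FAISS_INDEX_PATH defaults to $HOME/knowledge_bases/.faiss
        "faiss_index": env.get("FAISS_INDEX_PATH",
                               os.path.join(os.path.expanduser("~/knowledge_bases"),
                                            ".faiss")),
        # LOG_LEVEL defaults to "info"; LOG_FILE is optional
        "log_level": env.get("LOG_LEVEL", "info"),
        "log_file": env.get("LOG_FILE"),
    }

cfg = resolve_config(env={})  # no variables set: all defaults apply
print(cfg["log_level"])  # info
```

Passing a plain dict for `env` makes the fallback logic easy to exercise without touching the real environment.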

    4. Build the server:

        npm run build
        

    5. Add the server to the MCP settings:

    * Edit the cline_mcp_settings.json file located at /home/jean/.vscode-server/data/User/globalStorage/saoudrizwan.claude-dev/settings/.
    * Add the following configuration to the mcpServers object:

    * Option 1: Ollama Configuration

        "knowledge-base-mcp-ollama": {
          "command": "node",
          "args": [
            "/path/to/knowledge-base-mcp-server/build/index.js"
          ],
          "disabled": false,
          "autoApprove": [],
          "env": {
            "KNOWLEDGE_BASES_ROOT_DIR": "/path/to/knowledge_bases",
            "EMBEDDING_PROVIDER": "ollama",
            "OLLAMA_BASE_URL": "http://localhost:11434",
            "OLLAMA_MODEL": "dengcao/Qwen3-Embedding-0.6B:Q8_0"
          },
          "description": "Retrieves similar chunks from the knowledge base based on a query using Ollama."
        },
        

    * Option 2: OpenAI Configuration

        "knowledge-base-mcp-openai": {
          "command": "node",
          "args": [
            "/path/to/knowledge-base-mcp-server/build/index.js"
          ],
          "disabled": false,
          "autoApprove": [],
          "env": {
            "KNOWLEDGE_BASES_ROOT_DIR": "/path/to/knowledge_bases",
            "EMBEDDING_PROVIDER": "openai",
            "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY",
            "OPENAI_MODEL_NAME": "text-embedding-ada-002"
          },
          "description": "Retrieves similar chunks from the knowledge base based on a query using OpenAI."
        },
        

    * Option 3: HuggingFace Configuration

        "knowledge-base-mcp-huggingface": {
          "command": "node",
          "args": [
            "/path/to/knowledge-base-mcp-server/build/index.js"
          ],
          "disabled": false,
          "autoApprove": [],
          "env": {
            "KNOWLEDGE_BASES_ROOT_DIR": "/path/to/knowledge_bases",
            "EMBEDDING_PROVIDER": "huggingface",
            "HUGGINGFACE_API_KEY": "YOUR_HUGGINGFACE_API_KEY",
            "HUGGINGFACE_MODEL_NAME": "sentence-transformers/all-MiniLM-L6-v2"
          },
          "description": "Retrieves similar chunks from the knowledge base based on a query using HuggingFace."
        }

Related MCP Servers

AI Research Assistant

hamid-vakilzadeh

AI Research Assistant provides comprehensive access to millions of academic papers through the Semantic Scholar and arXiv databases. This MCP server enables AI coding assistants to perform intelligent literature searches, citation network analysis, and paper content extraction without requiring an API key. Key features include:

- Advanced paper search with multi-filter support by year ranges, citation thresholds, field of study, and publication type
- Title matching with confidence scoring for finding specific papers
- Batch operations supporting up to 500 papers per request
- Citation analysis and network exploration for understanding research relationships
- Full-text PDF extraction from arXiv and Wiley open-access content (Wiley TDM token required for institutional access)
- Rate limits of 100 requests per 5 minutes, with options to request higher limits through Semantic Scholar

Web & Search
Linkup

LinkupPlatform

Linkup is a real-time web search and content extraction service that enables AI assistants to search the web and retrieve information from trusted sources. It provides source-backed answers with citations, making it ideal for fact-checking, news gathering, and research tasks. Key features of Linkup:

- Real-time web search using natural language queries to find current information, news, and data
- Page fetching to extract and read content from any webpage URL
- Search depth modes: Standard for direct-answer queries and Deep for complex research across multiple sources
- Source-backed results with citations and context from relevant, trustworthy websites
- JavaScript rendering support for accessing dynamic content on JavaScript-heavy pages

Web & Search
Math-MCP

EthanHenrickson

Math-MCP is a computation server that enables Large Language Models (LLMs) to perform accurate numerical calculations through the Model Context Protocol. It provides precise mathematical operations via a simple API to overcome LLM limitations in arithmetic and statistical reasoning. Key features of Math-MCP:

- Basic arithmetic operations: addition, subtraction, multiplication, division, modulo, and bulk summation
- Statistical analysis functions: mean, median, mode, minimum, and maximum calculations
- Rounding utilities: floor, ceiling, and nearest-integer rounding
- Trigonometric functions: sine, cosine, tangent, and their inverses, with degrees/radians conversion support

Developer Tools