About
PraisonAI MCP Server enables MCP-based access to the PraisonAI multi-agent framework, allowing AI systems to orchestrate collaborative agent teams that plan, research, code, and deliver results autonomously.

Key features of PraisonAI MCP Server:

- Multi-agent orchestration with agent handoffs and guardrails for complex task workflows
- Persistent memory and RAG (Retrieval-Augmented Generation) for context-aware, stateful operations
- Automated delivery of results to messaging platforms including Telegram, Discord, and WhatsApp
- Integration with 100+ LLM providers for flexible model selection
- Multiple transport protocols (HTTP, STDIO, and WebSocket) for agent-to-agent communication
- Fast agent instantiation (under 4 microseconds), keeping overhead low for 24/7 continuous operation
- Low-code framework designed for production-ready deployments with human-agent collaboration support
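The handoff-and-guardrail pattern in the feature list can be sketched in plain Python. This is an illustrative toy, not the PraisonAI API: the `Agent` class, `length_guardrail`, and `run_pipeline` names here are invented for the example.

```python
# Toy sketch of agent handoffs with a guardrail check between steps.
# All names here (Agent, run_pipeline, ...) are illustrative, not PraisonAI's API.

class Agent:
    def __init__(self, name, work):
        self.name = name
        self.work = work  # function: input text -> output text

    def run(self, task):
        return self.work(task)

def length_guardrail(output, max_chars=200):
    """Reject outputs that exceed a size budget before handing off."""
    return len(output) <= max_chars

def run_pipeline(agents, task, guardrail):
    """Run agents in sequence, handing each one's output to the next;
    the guardrail vets every intermediate result."""
    result = task
    for agent in agents:
        result = agent.run(result)
        if not guardrail(result):
            raise ValueError(f"guardrail rejected output from {agent.name}")
    return result

planner = Agent("planner", lambda t: f"plan: {t}")
coder = Agent("coder", lambda t: f"code for [{t}]")

print(run_pipeline([planner, coder], "build a CLI", length_guardrail))
# → code for [plan: build a CLI]
```

In the real framework the per-step work is an LLM call and guardrails can validate structure or content, but the control flow is the same shape.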
README
PraisonAI 🦞
PraisonAI 🦞 — Automate and solve complex challenges with AI agent teams that plan, research, code, and deliver results to Telegram, Discord, and WhatsApp — running 24/7. A low-code, production-ready multi-agent framework with handoffs, guardrails, memory, RAG, and 100+ LLM providers, built around simplicity, customisation, and effective human-agent collaboration.
---
> Quick Paths:
> - 🆕 New here? → Quick Start *(1 minute to first agent)*
> - 📦 Installing? → Installation
> - 🐍 Python SDK? → Python Examples
> - 🎯 CLI user? → CLI Quick Reference
> - 🤝 Contributing? → Contributing
---
⚡ Performance
PraisonAI is built for speed, with agent instantiation in under 4μs. This reduces overhead, improves responsiveness, and helps multi-agent systems scale efficiently in real-world production workloads.
| Performance Metric | PraisonAI |
|--------------------|-----------|
| Avg Instantiation Time | 3.77 μs |
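A figure like this is typically produced with a microbenchmark. The sketch below shows the general approach using Python's `timeit`; it does not reproduce PraisonAI's benchmark, and the `DummyAgent` class is a stand-in invented for illustration.

```python
import timeit

class DummyAgent:
    """Stand-in for an agent class; real instantiation cost
    depends on the work done in __init__."""
    def __init__(self, name):
        self.name = name

def avg_instantiation_us(cls, runs=100_000):
    """Average wall-clock time to instantiate `cls`, in microseconds."""
    total = timeit.timeit(lambda: cls("bench"), number=runs)
    return total / runs * 1e6

avg = avg_instantiation_us(DummyAgent)
print(f"Avg instantiation: {avg:.2f} us")
```

Microsecond-scale instantiation matters when workflows spin up many short-lived agents, since construction overhead is paid on every handoff.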
---
🎯 Use Cases
AI agents solving real-world problems across industries:
| Use Case | Description |
|----------|-------------|
| 🔍 Research & Analysis | Conduct deep research, gather information, and generate insights from multiple sources automatically |
| 💻 Code Generation | Write, debug, and refactor code with AI agents that understand your codebase and requirements |
| ✍️ Content Creation | Generate blog posts, documentation, marketing copy, and technical writing with multi-agent teams |
| 📊 Data Pipelines | Extract, transform, and analyze data from APIs, databases, and web sources automatically |
| 🤖 Customer Support | Deploy 24/7 support bots on Telegram, Discord, and Slack with memory and knowledge-backed responses |
| ⚙️ Workflow Automation | Automate multi-step business processes with agents that hand off tasks, verify results, and self-correct |
---
Supported Providers
PraisonAI supports 100+ LLM providers, so you can select the model that best fits each task.
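Multi-provider support of this kind usually works by routing on a `provider/model` identifier string (the convention popularized by libraries such as LiteLLM). A minimal sketch of that routing, with an invented default provider for illustration:

```python
# Minimal sketch of provider routing on "provider/model" strings.
# The default provider and example model names are illustrative,
# not PraisonAI configuration.

DEFAULT_PROVIDER = "openai"

def parse_model_id(model_id):
    """Split 'provider/model' into parts; bare model names
    fall back to a default provider."""
    if "/" in model_id:
        provider, _, model = model_id.partition("/")
        return provider, model
    return DEFAULT_PROVIDER, model_id

print(parse_model_id("anthropic/claude-sonnet"))  # → ('anthropic', 'claude-sonnet')
print(parse_model_id("gpt-4o"))                   # → ('openai', 'gpt-4o')
```

Splitting only at the first `/` also handles gateway-style identifiers such as `openrouter/vendor/model`, where the remainder is passed through unchanged.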