About
Shrimp Task Manager (also known as Gist Task Manager) is an intelligent, AI-native task management system built on the Model Context Protocol (MCP) that transforms natural language instructions into structured, actionable programming workflows. It guides AI coding agents through systematic development processes while maintaining long-term memory of project context. Key capabilities include:

- Intelligent task decomposition that breaks complex requirements into manageable subtasks with dependency tracking
- Project rules initialization to define coding standards and maintain style consistency across large codebases
- Research mode for systematic technical exploration, comparing solutions, and surveying best practices
- A task memory function that automatically backs up history and preserves context across coding sessions
- Real-time execution status tracking with automatic complexity assessment and completeness verification
- An optional web-based GUI, enabled via environment configuration, that provides a visual task management interface
- Chain-of-thought reasoning and reflection mechanisms that prevent redundant work and improve output quality
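The task decomposition with dependency tracking described above could be modeled roughly as follows. This is a hypothetical sketch for illustration only; the type names and fields are assumptions, not Shrimp's actual schema:

```typescript
// Illustrative model of decomposed subtasks with dependency tracking.
// (Assumed names; not the server's real data structures.)
type TaskStatus = "pending" | "in_progress" | "completed";

interface Task {
  id: string;
  name: string;
  status: TaskStatus;
  dependencies: string[]; // ids of tasks that must complete first
}

// A task is ready to execute when it is pending and every dependency is completed.
function readyTasks(tasks: Task[]): Task[] {
  const done = new Set(
    tasks.filter((t) => t.status === "completed").map((t) => t.id)
  );
  return tasks.filter(
    (t) => t.status === "pending" && t.dependencies.every((d) => done.has(d))
  );
}
```

With a plan like `scaffold → api → ui`, where only `scaffold` is completed, `readyTasks` would surface only `api`, since `ui` is still blocked by its dependency.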
README
MCP Shrimp Task Manager
[Demo video](https://www.youtube.com/watch?v=Arzu0lV09so)
[Smithery listing](https://smithery.ai/server/@cjo4m06/mcp-shrimp-task-manager)
> 🚀 An intelligent task management system based on Model Context Protocol (MCP), providing an efficient programming workflow framework for AI Agents.
Shrimp Task Manager guides Agents through structured workflows for systematic programming, enhances task memory management, and effectively avoids redundant and repetitive coding work.
✨ Features
Optional Web GUI: set `ENABLE_GUI=true` in your `.env` file to enable it. When enabled, a `WebGUI.md` file containing the access address will be created in your `DATA_DIR`.

🧭 Usage Guide
Shrimp Task Manager offers a structured approach to AI-assisted programming through guided workflows and systematic task management.
What is Shrimp?
Shrimp is essentially a prompt template that guides AI Agents to better understand and work with your project. It uses a series of prompts to ensure the Agent aligns closely with your project's specific needs and conventions.
Research Mode in Practice
Before diving into task planning, you can leverage the research mode for technical investigation and knowledge gathering. This is particularly useful when you need to explore an unfamiliar technology, compare alternative solutions, or survey best practices before committing to an approach.
Simply tell the Agent "research [your topic]" or "enter research mode for [technology/problem]" to begin systematic investigation. The research findings will then inform your subsequent task planning and development decisions.
First-Time Setup
When working with a new project, simply tell the Agent "init project rules". This will guide the Agent to generate a set of rules tailored to your project's specific requirements and structure.
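First-time setup is also when you would enable the optional web GUI mentioned in the features above. A minimal sketch, assuming a `.env` file in the server's working directory (the `DATA_DIR` path is illustrative; adjust it for your installation):

```shell
# Append GUI settings to .env (example values; adjust DATA_DIR for your setup)
cat >> .env <<'EOF'
ENABLE_GUI=true
DATA_DIR=/tmp/shrimp-data
EOF
```

Per the feature description above, once the server starts with these settings, a `WebGUI.md` file containing the access address should appear in your `DATA_DIR`.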
Task Planning Process
To develop or update features, use the command "plan task [your description]". The system will reference the previously established rules, attempt to understand your project, search for relevant code sections, and propose a comprehensive plan based on the current state of your project.
Feedback Mechanism
During the planning process, Shrimp guides the Agent through multiple steps of thinking. You can review this process and provide feedback if you feel it's heading in the wrong direction. Simply interrupt and share your perspective - the Agent will incorporate your feedback and continue the planning process.
Task Execution
When you're satisfied with the plan, use "execute task [task name or ID]" to implement it. If you don't specify a task name or ID, the system will automatically identify and execute the highest priority task.
Continuous Mode
If you prefer to execute all planned tasks in sequence without confirming each one individually, ask the Agent to enter continuous mode, and it will process the task queue automatically.