About
Interactive Feedback MCP enables human-in-the-loop workflows for AI-assisted development tools. It provides a desktop interface that pauses AI execution to capture user feedback, run commands, and review outputs before the AI continues, preventing speculative tool calls and consolidating interactions into a single feedback-aware request.

Key features of Interactive Feedback MCP:

- Native UI for real-time command execution and output viewing directly within the feedback loop
- Textual feedback capture that allows users to guide AI behavior before task completion
- Per-project configuration management using Qt QSettings, storing commands, auto-execution preferences, and UI state across sessions
- Compatibility with popular AI coding assistants including Cursor, Cline, and Windsurf
- Cost-saving workflow optimization that reduces premium API requests by consolidating multiple tool calls into user-validated interactions
README
Interactive Feedback MCP
Developed by Fábio Ferreira (@fabiomlferreira). Check out dotcursorrules.com for more AI development enhancements.
Simple MCP Server to enable a human-in-the-loop workflow in AI-assisted development tools like Cursor. This server allows you to run commands, view their output, and provide textual feedback directly to the AI. It is also compatible with Cline and Windsurf.
Prompt Engineering
For best results, add the following to your AI assistant's custom prompt, either as a rule or directly in the prompt (e.g., in Cursor):
> Whenever you want to ask a question, always call the MCP interactive_feedback.
> Whenever you’re about to complete a user request, call the MCP interactive_feedback instead of simply ending the process.
> Keep calling MCP until the user’s feedback is empty, then end the request.
This will ensure your AI assistant uses this MCP server to request user feedback before marking the task as completed.
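The loop the prompt describes (call `interactive_feedback`, repeat until the user's feedback is empty, then end) can be sketched in Python. This is an illustrative sketch only, not the actual server or client code; `ask_user_feedback` is a hypothetical stand-in for the MCP `interactive_feedback` tool call.

```python
def run_with_feedback(task, ask_user_feedback):
    """Illustrative feedback loop: keep asking until the user returns nothing.

    `ask_user_feedback` is a hypothetical stand-in for the MCP
    `interactive_feedback` tool call; it returns the user's text,
    or "" when the user has nothing further to add.
    """
    result = f"completed: {task}"
    while True:
        feedback = ask_user_feedback(result)
        if not feedback:  # empty feedback -> end the request
            return result
        # Otherwise, incorporate the feedback and check in again.
        result = f"{result} (revised per: {feedback})"

# Example: a scripted "user" that gives one round of feedback, then nothing.
responses = iter(["add tests", ""])
final = run_with_feedback("refactor module", lambda _: next(responses))
```

The key property is that the assistant never unilaterally ends the request: only an empty feedback string terminates the loop.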
💡 Why Use This?
By guiding the assistant to check in with the user instead of branching out into speculative, high-cost tool calls, this module can drastically reduce the number of premium requests (e.g., OpenAI tool invocations) on platforms like Cursor. In some cases, it helps consolidate what would be up to 25 tool calls into a single, feedback-aware request, saving resources and improving performance.

Configuration
This MCP server uses Qt's QSettings to store configuration on a per-project basis. This includes:

- The command to run for the project
- Whether to execute the command automatically on the next run
- The visibility state of the command section in the UI
- General window size and position
These settings are typically stored in platform-specific locations (e.g., registry on Windows, plist files on macOS, configuration files in ~/.config or ~/.local/share on Linux) under an organization name "FabioFerreira" and application name "InteractiveFeedbackMCP", with a unique group for each project directory.
The "Save Configuration" button in the UI primarily saves the current command typed into the command input field and the state of the "Execute automatically on next run" checkbox for the active project. The visibility of the command section is saved automatically when you toggle it. General window size and position are saved when the application closes.
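The per-project layout described above can be mimicked with Python's standard-library `configparser` as a rough, Qt-free analogue. The real server uses Qt QSettings (organization "FabioFerreira", application "InteractiveFeedbackMCP"); the section and key names below are illustrative assumptions, not the server's actual schema.

```python
import configparser

# Rough stdlib analogue of the QSettings layout described above: one
# section ("group") per project directory, holding the saved command
# and UI flags. The actual server uses Qt QSettings; this sketch only
# mirrors the shape of the per-project storage.
settings = configparser.ConfigParser()
project = "/path/to/my-project"          # hypothetical project directory
settings[project] = {
    "run_command": "npm test",           # command typed into the input field
    "execute_automatically": "true",     # "Execute automatically on next run"
    "command_section_visible": "false",  # saved when the section is toggled
}

# Reading the settings back for the active project:
cmd = settings[project]["run_command"]
auto = settings.getboolean(project, "execute_automatically")
```

Grouping by project directory is what lets one server instance keep independent commands and preferences for every repository you work in.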
Installation (Cursor)
1. Prerequisites:
* Python 3.11 or newer.
* uv (Python package manager). Install it with:
* Windows: pip install uv
* Linux/Mac: curl -LsSf https://astral.sh/uv/install.sh | sh
2. Get the code:
* Clone this repository:
git clone https://github.com/noopstudios/interactive-feedback-mcp.git
* Or download the source code.
3. Navigate to the directory:
* cd path/to/interactive-feedback-mcp
4. Install dependencies:
* uv sync (this creates a virtual environment and installs packages)
5. Run the MCP Server:
* uv run server.py
6. Configure in Cursor:
* Cursor typically allows specifying custom MCP servers in its settings. You'll need to point Cursor to this running server. The exact mechanism might vary, so consult Cursor's documentation for adding custom MCPs.
* Manual Configuration (e.g., via mcp.json)
Remember to change the /Users/fabioferreira/Dev/scripts/interactive-feedback-mcp path to the actual path where you cloned the repository on your system.
```json
{
  "mcpServers": {
    "interactive-feedback-mcp": {
      "command": "uv",
      "args": [
        "--directory",
        "/Users/fabioferreira/Dev/scripts/interactive-feedback-mcp",
        "run",
        "server.py"
      ],
      "timeout": 600,
      "autoApprove": [
        "interactive_feedback"
      ]
    }
  }
}
```
* You might use a server identifier like interactive-feedback-mcp when configuring it in Cursor.

For Cline / Windsurf

Similar setup principles apply. You would configure the server command (e.g., `uv run server.py` with the correct `--directory` argument pointing to the project directory) in the respective tool's MCP settings, using a server identifier such as `interactive-feedback-mcp`.