
wcgw MCP Server

by rusiaaman


About

wcgw is a shell and coding agent MCP server that enables AI assistants to execute commands, edit files, and run iterative development workflows on your local machine. Key features:

  • Fully interactive shell access with support for sending keystrokes and running background commands
  • File creation and editing with syntax checking for various programming languages
  • Multiple operation modes: architect, code-writer, and unrestricted wcgw mode
  • VSCode extension for attaching the agent's shell directly in your editor
  • Context saving tool for storing file paths and descriptions as task checkpoints
  • Automatic repository structure analysis and CLAUDE.md loading
  • Support for multiple shells including Bash and ZSH
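To use the server, register it with your MCP client. A minimal Claude Desktop entry looks roughly like this (the `uv` invocation shown follows the project's published instructions, but verify the exact command and supported Python version against the current README):

```json
{
  "mcpServers": {
    "wcgw": {
      "command": "uv",
      "args": ["tool", "run", "--python", "3.12", "wcgw@latest"]
    }
  }
}
```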

README

Shell and Coding agent for Claude and other MCP clients

Empowering chat applications to code, build and run on your local machine.

wcgw is an MCP server with tightly integrated shell and code editing tools.

> ⚠️ Warning: This MCP server provides unfiltered access to your machine's shell and files. It does not restrict LLMs from executing arbitrary commands or making unintended changes. An attacker could misuse this tool, and the AI may run dangerous commands if it hallucinates. Run this server only if you fully understand and accept the risks of running AI agents without restrictions.

As of 2026, the reason to use wcgw is that it provides a fully interactive shell experience that both you and the agent can control (including sending keystrokes). Combined with the wcgw VSCode extension, which attaches the agent's shell inside your editor, it offers one of the best agentic shell experiences available. The file-editing tricks and general minimalism also help the agent be more productive.

[Tests](https://github.com/rusiaaman/wcgw/actions/workflows/python-tests.yml) [Type checks](https://github.com/rusiaaman/wcgw/actions/workflows/python-types.yml) [Publish](https://github.com/rusiaaman/wcgw/actions/workflows/python-publish.yml) [Coverage](https://codecov.io/gh/rusiaaman/wcgw)

Demo

Updates

  • [6 Oct 2025] Model can now run multiple commands in background. ZSH is now a supported shell. Multiplexing improvements.
  • [27 Apr 2025] Removed support for GPTs over relay server. Only MCP server is supported in version >= 5.
  • [24 Mar 2025] Improved writing and editing experience for Sonnet 3.7; CLAUDE.md gets loaded automatically.
  • [16 Feb 2025] You can now attach to the working terminal that the AI uses. See the "attach-to-terminal" section below.
  • [15 Jan 2025] Modes introduced: architect, code-writer, and the all-powerful wcgw mode.
  • [8 Jan 2025] Context saving tool for saving relevant file paths along with a description in a single file. Can be used as a task checkpoint or for knowledge transfer.
  • [29 Dec 2024] Syntax checking on file writing and edits is now stable. Made initialize tool call useful; sending smart repo structure to claude if any repo is referenced. Large file handling is also now improved.
  • [9 Dec 2024] VSCode extension to paste context into the Claude app

🚀 Highlights

  • Create, Execute, Iterate: Ask Claude to keep running compiler checks until all errors are fixed, or to keep checking the status of a long-running command until it's done.
  • Large file edits: Supports incremental edits to large files to avoid token-limit issues. Smartly chooses between small edits and a full rewrite based on the percentage of the file that needs to change.
  • Syntax checking on edits: Reports feedback to the LLM if its edits introduce syntax errors, so it can redo them.
  • Interactive command handling: Supports interactive commands using arrow keys, interrupts, and ANSI escape sequences.
  • File protections:
    - The AI must read a file at least once before it is allowed to edit or rewrite it. This avoids accidental overwrites.
    - Avoids filling up context when reading very large files: files are chunked based on token length.
    - On initialization, the provided workspace's directory structure is returned after selecting important files (based on .gitignore as well as a statistical approach).
    - Search-replace edits try to pick the correct search block when it matches in multiple places, using the preceding search blocks; otherwise the edit fails (for correctness).
    - File edits use spacing-tolerant matching, with warnings on issues like indentation mismatch. If there is no match, the closest match is returned to the AI so it can fix its mistake.
    - Uses Aider-like search and replace, which performs better than tool-call-based search and replace.
  • Shell optimizations:
    - The current working directory is always returned after any shell command, so the AI never gets lost.
    - Command polling exits after a quick timeout to avoid slow feedback, while status checks tolerate longer waits as long as fresh output is streaming from the command. Combined, these provide a responsive shell experience.
    - Supports multiple concurrent background commands alongside the main interactive shell.
  • Saving repo context in a single file: Task checkpointing with the "ContextSave" tool saves detailed context in a single file. Tasks can later be resumed in a new chat by asking "Resume task id".
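The spacing-tolerant matching described under file protections can be sketched roughly as follows. This is an illustrative reimplementation, not wcgw's actual code; the function names are hypothetical:

```python
def _normalize(line: str) -> str:
    """Strip leading/trailing whitespace so indentation differences don't block a match."""
    return line.strip()

def find_tolerant_match(file_lines: list[str], search_lines: list[str]) -> tuple[int, bool]:
    """Return (start_index, exact) for the search block inside file_lines.

    Tries an exact match first, then falls back to a whitespace-normalized
    match (in which case callers should warn about an indentation mismatch).
    Raises ValueError when neither form matches.
    """
    n, m = len(file_lines), len(search_lines)
    # Pass 1: exact match.
    for i in range(n - m + 1):
        if file_lines[i:i + m] == search_lines:
            return i, True
    # Pass 2: whitespace-tolerant match.
    norm_search = [_normalize(l) for l in search_lines]
    for i in range(n - m + 1):
        if [_normalize(l) for l in file_lines[i:i + m]] == norm_search:
            return i, False
    raise ValueError("search block not found")

def apply_replace(file_lines, search_lines, replace_lines):
    """Replace the matched search block, warning when only a tolerant match was found."""
    i, exact = find_tolerant_match(file_lines, search_lines)
    if not exact:
        print("warning: matched with indentation mismatch")
    return file_lines[:i] + replace_lines + file_lines[i + len(search_lines):]
```

A failed match would return the closest candidate to the model in the real tool; here it simply raises, which is enough to show the two-pass structure.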
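The syntax-checking feedback loop can be illustrated with a minimal sketch. wcgw's real implementation supports multiple languages; here Python's built-in `ast` module stands in for a proper checker, and the function name is hypothetical:

```python
import ast

def check_python_syntax(source: str) -> str | None:
    """Return a short error description to feed back to the LLM, or None if the code parses."""
    try:
        ast.parse(source)
        return None
    except SyntaxError as e:
        # Report line and message so the model can locate and redo the edit.
        return f"syntax error at line {e.lineno}: {e.msg}"
```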
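The shell polling behavior (quick timeout, but extended patience while fresh output keeps streaming) can be sketched like this. It is an illustrative simplification with hypothetical names, not wcgw's actual code:

```python
import time

def poll_output(read_chunk, quick_timeout=2.0, idle_tolerance=5.0, max_wait=30.0):
    """Collect output from a running command.

    Returns quickly once the command goes quiet (idle_tolerance seconds with
    no new output, after an initial quick_timeout), but keeps reading as long
    as fresh output arrives, up to max_wait seconds overall.

    read_chunk() is a caller-supplied non-blocking reader that returns ""
    when nothing new is available.
    """
    chunks = []
    start = last_output = time.monotonic()
    while True:
        chunk = read_chunk()
        now = time.monotonic()
        if chunk:
            chunks.append(chunk)
            last_output = now  # fresh output resets the idle clock
        elapsed, idle = now - start, now - last_output
        if elapsed >= max_wait:
            break
        if elapsed >= quick_timeout and idle >= idle_tolerance:
            break
        time.sleep(0.05)
    return "".join(chunks)
```

The two exit conditions correspond to the two behaviors in the bullet above: the quick timeout keeps feedback fast for silent commands, while the idle tolerance lets a chatty long-running command keep streaming.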