About
DaVinci Resolve MCP Server connects AI assistants like Claude, Cursor, and Windsurf to DaVinci Resolve for natural-language control of professional post-production workflows. It provides complete coverage of the DaVinci Resolve Scripting API for DaVinci Resolve 18.5 and above, enabling manipulation of timelines, media pools, Fusion compositions, color grading nodes, and gallery stills.

Key features include:

- Full timeline and media pool management, including clip properties, markers, and timecode
- Fusion composition node graph control — add/delete nodes, wire connections, set parameters, manage keyframes, control undo grouping, and trigger renders on the active Fusion page
- Gallery stills operations, including grab-and-export with automatic format fallback (DRX grade files and images), plus metadata extraction
- Timeline item cache control for Fusion output optimization
- Cross-platform sandbox path handling for macOS, Linux, and Windows that automatically redirects protected temp directories to Resolve-safe locations
- Project archival, LUT export, and companion file management
README
DaVinci Resolve MCP Server
A Model Context Protocol (MCP) server providing complete coverage of the DaVinci Resolve Scripting API. Connect AI assistants (Claude, Cursor, Windsurf) to DaVinci Resolve and control every aspect of your post-production workflow through natural language.
What's New in v2.1.0
- fusion_comp tool — 20-action tool exposing the full Fusion composition node graph API. Add/delete/find nodes, wire connections, set/get parameters, manage keyframes, control undo grouping, set render ranges, and trigger renders — all on the currently active Fusion page composition
- timeline_item_fusion cache actions — added get_cache_enabled and set_cache actions for Fusion output cache control directly on timeline items

v2.0.9
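The v2.0.9 sandbox redirect described in this release can be sketched roughly as follows. The helper name mirrors `_resolve_safe_dir()` from the changelog, but the prefix table and fallback logic here are illustrative assumptions, not the actual implementation:

```python
import sys
from pathlib import Path

# Hypothetical sketch: temp-path prefixes that Resolve's process typically
# cannot write to, keyed by sys.platform value.
SANDBOXED_PREFIXES = {
    "darwin": ("/var/folders", "/private/var"),
    "linux": ("/tmp", "/var/tmp"),
    "win32": (str(Path.home() / "AppData" / "Local" / "Temp"),),
}

def resolve_safe_dir(requested: str, platform: str = sys.platform) -> Path:
    """Return `requested` if Resolve can write there, else redirect to a
    user-writable fallback (~/Documents/resolve-stills, per v2.0.9)."""
    prefixes = SANDBOXED_PREFIXES.get(platform, ())
    if any(requested.startswith(p) for p in prefixes):
        safe = Path.home() / "Documents" / "resolve-stills"
        safe.mkdir(parents=True, exist_ok=True)
        return safe
    return Path(requested)
```

Paths outside the sandboxed prefixes pass through unchanged, so callers can still export to an explicit user-chosen directory.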
- _resolve_safe_dir() now handles macOS (/var/folders, /private/var), Linux (/tmp, /var/tmp), and Windows (AppData\Local\Temp) sandbox paths that Resolve can't write to. Redirects to ~/Documents/resolve-stills instead of Desktop
- grab_and_export — exported files are read into the response (DRX as inline text, images as base64) then deleted from disk automatically. Zero file accumulation. Pass cleanup: false to keep files on disk
- server.py and resolve_mcp_server.py now share the same version and both use _resolve_safe_dir() for all Resolve-facing temp paths (project export, LUT export, still export)

v2.0.8
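The v2.0.8 file manifest pairing each exported still with its companion .drx grade file might be modeled like this. `build_still_manifest` and its special-casing of the `drx` format are hypothetical, shown only to illustrate the image-plus-grade pairing:

```python
from pathlib import Path

# The 9 still formats listed in the changelog.
STILL_FORMATS = ("dpx", "cin", "tif", "jpg", "png", "ppm", "bmp", "xpm", "drx")

def build_still_manifest(export_dir: str, stem: str, fmt: str) -> dict:
    """Sketch: every exported still is accompanied by a .drx grade file
    with the same stem (per the v2.0.8 notes)."""
    if fmt not in STILL_FORMATS:
        raise ValueError(f"unsupported still format: {fmt}")
    base = Path(export_dir) / stem
    manifest = {"grade": str(base.with_suffix(".drx"))}
    if fmt != "drx":  # assumption: "drx" alone yields only the grade file
        manifest["image"] = str(base.with_suffix("." + fmt))
    return manifest
```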
- grab_and_export action on gallery_stills — combines GrabStill() + ExportStills() in a single atomic call, keeping the live GalleryStill reference for reliable export. Returns a file manifest with the exported image + companion .drx grade file
- /var/folders and /private/var paths are redirected to ~/Desktop/resolve-stills since Resolve's process can't write to sandboxed temp directories
- ExportStills requires the Gallery panel to be visible on the Color page. All 9 supported formats (dpx, cin, tif, jpg, png, ppm, bmp, xpm, drx) produce a companion .drx grade file alongside the image

v2.0.7
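A containment check like the one v2.0.7 adds to the layout-preset tools can be sketched with pathlib. `validated_preset_path` is a hypothetical name; the real validation may differ in detail:

```python
from pathlib import Path

def validated_preset_path(presets_dir: str, preset_name: str) -> Path:
    """Resolve the candidate path and ensure it stays inside the presets
    directory, rejecting crafted names like '../../etc/passwd'."""
    root = Path(presets_dir).resolve()
    candidate = (root / preset_name).resolve()
    if not candidate.is_relative_to(root):  # Path.is_relative_to: Python 3.9+
        raise ValueError(f"preset name escapes presets directory: {preset_name!r}")
    return candidate
```

Resolving before comparing is what defeats traversal: `..` segments are collapsed first, so the prefix check runs against the real target.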
- export_layout_preset, import_layout_preset, and delete_layout_preset now validate that resolved file paths stay within the expected Resolve presets directory, preventing path traversal via crafted preset names
- quit_app/restart_app tools can terminate Resolve; MCP clients should require user confirmation before invoking

v2.0.6
- timeline_item_color unpacked _check() as (proj, _, _), but _check() returns (pm, proj, err), so proj got the ProjectManager instead of the Project, crashing assign_color_group and remove_from_color_group

v2.0.5
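A v2.0.5-style guarded accessor might look like this sketch. The error strings and the `(value, error)` return shape are assumptions; only the helper name `get_current_project()` comes from the changelog:

```python
def get_current_project(resolve):
    """Return (project, error) instead of letting a None from the Resolve
    API propagate into a NoneType crash."""
    pm = resolve.GetProjectManager() if resolve else None
    if pm is None:
        return None, "Could not get ProjectManager — is Resolve running?"
    proj = pm.GetCurrentProject()
    if proj is None:
        return None, "No project is currently open"
    return proj, None
```

Centralizing checks like this is how a handful of helpers can replace the 178 repeated boilerplate blocks the changelog mentions.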
- --full mode now auto-reconnects and auto-launches Resolve, matching the compound server behavior
- GetProjectManager(), GetCurrentProject(), and GetCurrentTimeline() failures now return clear errors instead of NoneType crashes
- get_resolve(), get_project_manager(), and get_current_project() replace 178 boilerplate blocks

v2.0.4
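The v2.0.4 precedence rule between the legacy `mode` parameter and `grade_mode` can be sketched as follows. `effective_grade_mode` is a hypothetical helper; the alignment-mode table matches the changelog:

```python
# Keyframe alignment modes per the v2.0.4 notes.
GRADE_MODES = {
    0: "No keyframes",
    1: "Source Timecode aligned",
    2: "Start Frames aligned",
}

def effective_grade_mode(mode=None, grade_mode=None):
    """Legacy `mode` is still accepted, but `grade_mode` wins when both
    are supplied; default is 0 (No keyframes) — an assumption here."""
    value = grade_mode if grade_mode is not None else mode
    value = 0 if value is None else value
    if value not in GRADE_MODES:
        raise ValueError(f"unknown grade mode: {value}")
    return value
```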
- Renamed mode to grade_mode to match the Resolve API; corrected documentation from replace/append to the actual keyframe alignment modes (0 = No keyframes, 1 = Source Timecode aligned, 2 = Start Frames aligned)
- mode is still accepted for existing clients; grade_mode takes precedence

v2.0.3
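The v2.0.3 workaround can be sketched as a thin wrapper. `get_node_graph` is a hypothetical helper around a Resolve object's `GetNodeGraph()`; the stub in the usage test only mimics the reported behavior:

```python
def get_node_graph(item, layer_index=None):
    """GetNodeGraph(0) returns False in Resolve, so only pass an argument
    when layer_index is explicitly provided by the caller."""
    if layer_index is None:
        return item.GetNodeGraph()
    return item.GetNodeGraph(layer_index)
```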
- GetNodeGraph(0) returns False in Resolve; now calls without args unless layer_index is explicitly provided

Related MCP Servers
AI Research Assistant
hamid-vakilzadeh
AI Research Assistant provides comprehensive access to millions of academic papers through the Semantic Scholar and arXiv databases. This MCP server enables AI coding assistants to perform intelligent literature searches, citation network analysis, and paper content extraction without requiring an API key.

Key features include:

- Advanced paper search with multi-filter support by year ranges, citation thresholds, field of study, and publication type
- Title matching with confidence scoring for finding specific papers
- Batch operations supporting up to 500 papers per request
- Citation analysis and network exploration for understanding research relationships
- Full-text PDF extraction from arXiv and Wiley open-access content (Wiley TDM token required for institutional access)
- Rate limits of 100 requests per 5 minutes, with options to request higher limits through Semantic Scholar
Linkup
LinkupPlatform
Linkup is a real-time web search and content extraction service that enables AI assistants to search the web and retrieve information from trusted sources. It provides source-backed answers with citations, making it ideal for fact-checking, news gathering, and research tasks.

Key features of Linkup:

- Real-time web search using natural language queries to find current information, news, and data
- Page fetching to extract and read content from any webpage URL
- Search depth modes: Standard for direct-answer queries and Deep for complex research across multiple sources
- Source-backed results with citations and context from relevant, trustworthy websites
- JavaScript rendering support for accessing dynamic content on JavaScript-heavy pages
Math-MCP
EthanHenrickson
Math-MCP is a computation server that enables Large Language Models (LLMs) to perform accurate numerical calculations through the Model Context Protocol. It provides precise mathematical operations via a simple API to overcome LLM limitations in arithmetic and statistical reasoning.

Key features of Math-MCP:

- Basic arithmetic operations: addition, subtraction, multiplication, division, modulo, and bulk summation
- Statistical analysis functions: mean, median, mode, minimum, and maximum calculations
- Rounding utilities: floor, ceiling, and nearest-integer rounding
- Trigonometric functions: sine, cosine, tangent, and their inverses, with degrees/radians conversion support