About
Atom of Thoughts is an MCP server implementing a decomposition-based reasoning framework for complex problem-solving with large language models. Based on the research paper "Atom of Thoughts for Markov LLM Test-Time Scaling," it breaks down reasoning tasks into atomic steps to improve accuracy and minimize logical errors.

Key features:

- **AoT (Full Version)**: Deep analysis with a maximum depth of 5 levels for exhaustive hypothesis generation and verification
- **AoT-light**: Lightweight version with reduced depth (3 levels) for faster processing in time-sensitive scenarios
- Structured reasoning using five atom types, including premises, hypotheses, and conclusions
- Multi-step verification to derive high-confidence conclusions
- Optimized for complex reasoning tasks requiring logical decomposition and error minimization

Effective for hypothesis generation requiring multi-perspective verification, critical decision-making with multiple validation steps, and scenarios where minimizing logical errors is crucial.
README
Atom of Thoughts (AoT)
A Model Context Protocol (MCP) server implementation of Atom of Thoughts, a decomposition-based reasoning framework.
> Note: This implementation is based on the research paper "Atom of Thoughts for Markov LLM Test-Time Scaling" (Teng et al., 2025).
English Documentation
This repository implements Atom of Thoughts (AoT), a decomposition-based reasoning framework, as a Model Context Protocol (MCP) server. This implementation is based on the concepts presented in the paper "Atom of Thoughts for Markov LLM Test-Time Scaling" (Teng et al., 2025).
Available Tools
Two main tools are provided:
1. AoT (Full Version): A complete Atom of Thoughts tool with full capabilities for deep analysis and complex problem solving.
2. AoT-light (Lightweight Version): A streamlined version optimized for faster processing and quicker results.
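As a rough illustration, the sketch below shows how a client might invoke these tools through the MCP TypeScript SDK. The argument field names (`atomId`, `atomType`, `content`) are assumptions rather than the server's documented schema, and the launch command should be adjusted to however the server is installed locally.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Adjust command/args to however the Atom of Thoughts server is started locally.
  const transport = new StdioClientTransport({
    command: "node",
    args: ["path/to/mcp_atom_of_thoughts/dist/index.js"],
  });

  const client = new Client(
    { name: "aot-example-client", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // Both tools should appear in the listing (exact registered names may differ).
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Submit a single atom to the lightweight tool.
  // The argument field names below are illustrative assumptions.
  const result = await client.callTool({
    name: "AoT-light",
    arguments: {
      atomId: "premise-1",
      atomType: "premise",
      content: "The service only fails under concurrent load.",
    },
  });
  console.log(result);
}

main().catch(console.error);
```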
AoT-light: Lightweight Version
AoT-light is designed for faster processing in time-sensitive situations: it uses a reduced maximum depth (3 levels instead of the full version's 5), trading some analytical depth for quicker results.
Use Cases
Atom of Thoughts is effective in the following scenarios:

- Hypothesis generation requiring multi-perspective verification
- Critical decision-making with multiple validation steps
- Complex reasoning tasks requiring logical decomposition
- Situations where minimizing logical errors is crucial
Atom Types
AoT uses five types of atoms:
1. premise: Basic assumptions or given information for problem solving
2. reasoning: Logical reasoning process based on other atoms
3. hypothesis: Proposed solutions or intermediate conclusions
4. verification: Process to evaluate the validity of other atoms (especially hypotheses)
5. conclusion: Verified hypotheses or final problem solutions
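Purely as an illustration, the five types can be modeled as a TypeScript union. The type labels come from the list above; the surrounding fields (`atomId`, `dependencies`, `confidence`, ...) are assumptions made for the example, not the server's actual parameter names.

```typescript
// Illustrative sketch only: the type labels come from the list above,
// but the other field names are assumptions.
type AtomType = "premise" | "reasoning" | "hypothesis" | "verification" | "conclusion";

interface Atom {
  atomId: string;          // unique identifier for the atom (assumed name)
  atomType: AtomType;      // one of the five atom types
  content: string;         // natural-language content of the atom
  dependencies: string[];  // ids of atoms this one builds on
  confidence?: number;     // 0..1, meaningful for hypotheses and conclusions
}

// A hypothesis that builds on a premise and a reasoning step.
const example: Atom = {
  atomId: "hypothesis-1",
  atomType: "hypothesis",
  content: "The intermittent failures are caused by connection-pool exhaustion.",
  dependencies: ["premise-1", "reasoning-1"],
  confidence: 0.7,
};

console.log(example.atomType, example.dependencies);
```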
Core Features
#### 1. Decomposition-Contraction Mechanism
A mechanism to decompose atoms into smaller sub-atoms and contract them back after verification.
- startDecomposition(atomId): Start atom decomposition
- addToDecomposition(decompositionId, atomId): Add sub-atom to decomposition
- completeDecomposition(decompositionId): Complete decomposition process
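A minimal sketch of how this decompose-then-contract cycle could be driven with the three calls listed above; the in-memory bookkeeping and return shapes are assumptions standing in for the server's internal state.

```typescript
// In-memory stand-ins for the decomposition calls; the real server tracks this state itself.
const decompositions = new Map<
  string,
  { parentAtomId: string; subAtomIds: string[]; complete: boolean }
>();

function startDecomposition(atomId: string): string {
  const decompositionId = `decomp-${decompositions.size + 1}`;
  decompositions.set(decompositionId, { parentAtomId: atomId, subAtomIds: [], complete: false });
  return decompositionId;
}

function addToDecomposition(decompositionId: string, atomId: string): void {
  decompositions.get(decompositionId)?.subAtomIds.push(atomId);
}

function completeDecomposition(decompositionId: string): void {
  const d = decompositions.get(decompositionId);
  if (d) d.complete = true; // contract the verified sub-atoms back into the parent atom
}

// Decompose a hypothesis into two smaller sub-atoms, then contract after verification.
const id = startDecomposition("hypothesis-1");
addToDecomposition(id, "premise-2");
addToDecomposition(id, "verification-2");
completeDecomposition(id);
console.log(decompositions.get(id));
```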
#### 2. Automatic Termination Mechanism

- getTerminationStatus(): Return current termination status and reason
- getBestConclusion(): Return the conclusion with highest confidence

Parameter Descriptions
Usage Method
1. Understand the problem and define necessary premise atoms
2. Create reasoning atoms based on premises
3. Create hypothesis atoms based on reasoning
4. Create verification atoms to verify hypotheses
5. Derive conclusion atoms from verified hypotheses
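The sketch below walks through these five steps with toy atoms and ends with an analogue of getBestConclusion(). The atom shape and confidence bookkeeping are assumptions for illustration, not the server's implementation.

```typescript
type AtomType = "premise" | "reasoning" | "hypothesis" | "verification" | "conclusion";

interface Atom {
  atomId: string;
  atomType: AtomType;
  content: string;
  dependencies: string[];
  confidence: number; // assumed 0..1 scale
}

const atoms: Atom[] = [
  // 1. Premise: given information
  { atomId: "p1", atomType: "premise", content: "The API returns HTTP 500 only under heavy load.", dependencies: [], confidence: 1.0 },
  // 2. Reasoning based on the premise
  { atomId: "r1", atomType: "reasoning", content: "Load-dependent failures point to resource exhaustion.", dependencies: ["p1"], confidence: 0.8 },
  // 3. Hypothesis based on the reasoning
  { atomId: "h1", atomType: "hypothesis", content: "The database connection pool is too small.", dependencies: ["r1"], confidence: 0.6 },
  // 4. Verification of the hypothesis
  { atomId: "v1", atomType: "verification", content: "Pool metrics show saturation exactly when errors occur.", dependencies: ["h1"], confidence: 0.9 },
  // 5. Conclusion derived from the verified hypothesis
  { atomId: "c1", atomType: "conclusion", content: "Increase the connection pool size.", dependencies: ["h1", "v1"], confidence: 0.85 },
];

// Analogue of getBestConclusion(): the conclusion atom with the highest confidence.
const best = atoms
  .filter((a) => a.atomType === "conclusion")
  .reduce((top, a) => (a.confidence > top.confidence ? a : top));
console.log(best.atomId, best.content);
```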
Related MCP Servers
AI Research Assistant
hamid-vakilzadeh
AI Research Assistant provides comprehensive access to millions of academic papers through the Semantic Scholar and arXiv databases. This MCP server enables AI coding assistants to perform intelligent literature searches, citation network analysis, and paper content extraction without requiring an API key. Key features include:

- Advanced paper search with multi-filter support by year ranges, citation thresholds, field of study, and publication type
- Title matching with confidence scoring for finding specific papers
- Batch operations supporting up to 500 papers per request
- Citation analysis and network exploration for understanding research relationships
- Full-text PDF extraction from arXiv and Wiley open-access content (Wiley TDM token required for institutional access)
- Rate limits of 100 requests per 5 minutes, with options to request higher limits through Semantic Scholar
Linkup
LinkupPlatform
Linkup is a real-time web search and content extraction service that enables AI assistants to search the web and retrieve information from trusted sources. It provides source-backed answers with citations, making it ideal for fact-checking, news gathering, and research tasks. Key features of Linkup:

- Real-time web search using natural language queries to find current information, news, and data
- Page fetching to extract and read content from any webpage URL
- Search depth modes: Standard for direct-answer queries and Deep for complex research across multiple sources
- Source-backed results with citations and context from relevant, trustworthy websites
- JavaScript rendering support for accessing dynamic content on JavaScript-heavy pages
Math-MCP
EthanHenrickson
Math-MCP is a computation server that enables Large Language Models (LLMs) to perform accurate numerical calculations through the Model Context Protocol. It provides precise mathematical operations via a simple API to overcome LLM limitations in arithmetic and statistical reasoning. Key features of Math-MCP:

- Basic arithmetic operations: addition, subtraction, multiplication, division, modulo, and bulk summation
- Statistical analysis functions: mean, median, mode, minimum, and maximum calculations
- Rounding utilities: floor, ceiling, and nearest-integer rounding
- Trigonometric functions: sine, cosine, tangent, and their inverses, with degrees/radians conversion support