Desearch AI

by desearch-ai

About

Desearch AI is an AI-powered search engine that delivers real-time, decentralized search across multiple platforms and data sources. Built on the Bittensor network, it provides access to metadata and content from X (Twitter), Reddit, Arxiv, and general web search, enhanced with AI analysis and sentiment detection.

Key features of Desearch AI:

- Real-time search across X (Twitter), Reddit, Arxiv, and the broader web
- AI-powered analysis delivering contextual and unfiltered search results
- Sentiment analysis to determine the emotional tone of social media posts
- Comprehensive metadata extraction for deeper content understanding
- Decentralized architecture ensuring unbiased and highly relevant results
- Seamless integration with Claude Desktop and Cursor IDE for AI coding workflows

README

Desearch (Subnet 22) on Bittensor

[License: MIT](https://opensource.org/licenses/MIT)

Introduction

Bittensor Desearch (Subnet 22)

Welcome to Desearch, the AI-powered search engine built on Bittensor. Designed for the Bittensor community and general internet users, Desearch delivers an unbiased and verifiable search experience. Through our API, developers and AI builders can integrate AI search capabilities into their products, with access to metadata from platforms like X, Reddit, and Arxiv, as well as general web search.
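
As a purely illustrative sketch of what such an integration could look like (the endpoint path, request fields, and authentication header below are assumptions rather than documented values; consult the official Desearch API reference for the real contract):

    # Hypothetical request -- URL, header, and JSON fields are assumptions, not the documented API.
    curl -s "https://api.desearch.ai/search" \
      -H "Authorization: Bearer $DESEARCH_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"query": "bittensor subnet 22", "sources": ["x", "reddit", "web"]}'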

Key Features

  • AI-powered Analysis: Utilizes decentralized AI models to deliver relevant, contextual, and unfiltered search results.
  • Real-time Access to Diverse Data Sources: Access metadata from platforms like X, Reddit, Arxiv, and broader web data.
  • Sentiment and Metadata Analysis: Determines the emotional tone of social posts while analyzing key metadata to provide a comprehensive understanding of public sentiment.
  • Time-efficient: Minimizes manual data sorting, saving valuable research time.
  • User-friendly Design: Suitable for both beginners and experts.
Advantages

  • Decentralized Platform: Built on the Bittensor network, ensuring unbiased and highly relevant search results through decentralization.
  • Customizability: Tailors data analysis to meet specific user requirements.
  • Versatility: Applicable to diverse research fields, from market analysis to academic studies.
  • Community-driven Innovation: Built and optimized by a decentralized network of Bittensor miners, validators, and users for continuous search result enhancement.

---

Installation

Requirements: Python 3.10 or higher

1. Clone the repository:

       git clone https://github.com/Desearch-ai/subnet-22.git

2. Install the requirements:

       cd subnet-22
       python -m pip install -r requirements.txt
       python -m pip install -e .
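
As a quick sanity check (assuming bittensor is among the pinned requirements, which is typical for a subnet repository), you can confirm the environment resolves before moving on:

    # Optional: verify the core dependency imports from the freshly installed environment.
    python -c "import bittensor; print(bittensor.__version__)"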
        

---

Preparing Your Environment

Before running a miner or validator, make sure to:

  • Create a wallet.
  • Register the wallet to a netuid (for mainnet, netuid 22; see the sketch after this list).
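
If you use the Bittensor CLI, wallet creation and registration typically look like the sketch below; the wallet and hotkey names are placeholders, and exact flag spellings vary between btcli versions, so check the Bittensor documentation for your installed version.

    # Create a coldkey and hotkey (names below are placeholders).
    btcli wallet new_coldkey --wallet.name my_wallet
    btcli wallet new_hotkey --wallet.name my_wallet --wallet.hotkey my_hotkey

    # Register the hotkey on subnet 22 (mainnet "finney").
    btcli subnet register --netuid 22 --subtensor.network finney --wallet.name my_wallet --wallet.hotkey my_hotkey
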
Environment Variables Configuration

For setting up the necessary environment variables for your miner or validator, please refer to the Environment Variables Guide.

Running the Miner

    python neurons/miners/miner.py \
        --netuid 22 \
        --subtensor.network finney \
        --wallet.name <wallet_name> \
        --wallet.hotkey <wallet_hotkey> \
        --axon.port 14000
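
If you prefer to keep the miner alive across failures and reboots, one common pattern (not documented by this repository) is to launch the same script under PM2:

    # Optional: supervise the miner with PM2 so it restarts automatically on failure.
    pm2 start neurons/miners/miner.py --name desearch_miner --interpreter python3 -- \
        --netuid 22 --subtensor.network finney \
        --wallet.name <wallet_name> --wallet.hotkey <wallet_hotkey> --axon.port 14000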
    

Running the Validator API with Automatic Updates

These validators are designed to run and update themselves automatically. To run a validator, follow these steps:

1. Install this repository by following the steps outlined in the Installation section.

2. Install Weights and Biases and run wandb login within this repository. This initializes Weights and Biases, enabling you to view KPIs and metrics for your validator. (Strongly recommended, to help the network improve through data sharing.)

3. Install Redis.

4. Install PM2 and the jq package on your system.

   On Linux:

       sudo apt update && sudo apt install jq && sudo apt install npm && sudo npm install pm2 -g && pm2 update

   On macOS:

       brew update && brew install jq && brew install node && sudo npm install pm2 -g && pm2 update

5. Run the run.sh script, which will handle running your validator and pulling the latest updates as they are issued.

        pm2 start run.sh --name desearch_autoupdate -- --wallet.name <wallet_name> --wallet.hotkey <wallet_hotkey>
        

You can configure the number of API workers and the API port by adding the following parameters:

        pm2 start run.sh --name desearch_autoupdate -- --workers 4 --port 8005 --wallet.name <wallet_name> --wallet.hotkey <wallet_hotkey>
        

This will run three PM2 processes:

1. desearch_validator_process: A single validator service that runs synthetic queries, updates the metagraph, manages UIDs, and sets weights.
2. desearch_api_process: An API service run by uvicorn workers, which serves the API endpoints.
3. desearch_autoupdate: A script that checks for updates every 30 minutes; if there is an update, it pulls it, installs packages, restarts the two processes above, and then restarts itself.
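
To confirm the three processes came up and stay healthy, a standard PM2 check (not specific to this repository) is to list them and tail the validator's logs:

    # List the PM2 processes started by run.sh, then follow the validator's output.
    pm2 status
    pm2 logs desearch_validator_process --lines 100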

Detailed Setup Instructions

For step-by-step guidance on setting up and running a miner or validator, or operating on the testnet or mainnet, refer to the following guides.

Related MCP Servers

AI Research Assistant

hamid-vakilzadeh

AI Research Assistant provides comprehensive access to millions of academic papers through the Semantic Scholar and arXiv databases. This MCP server enables AI coding assistants to perform intelligent literature searches, citation network analysis, and paper content extraction without requiring an API key. Key features include:

- Advanced paper search with multi-filter support by year ranges, citation thresholds, field of study, and publication type
- Title matching with confidence scoring for finding specific papers
- Batch operations supporting up to 500 papers per request
- Citation analysis and network exploration for understanding research relationships
- Full-text PDF extraction from arXiv and Wiley open-access content (a Wiley TDM token is required for institutional access)
- Rate limits of 100 requests per 5 minutes, with options to request higher limits through Semantic Scholar

Web & Search
Linkup

LinkupPlatform

Linkup is a real-time web search and content extraction service that enables AI assistants to search the web and retrieve information from trusted sources. It provides source-backed answers with citations, making it ideal for fact-checking, news gathering, and research tasks. Key features of Linkup:

- Real-time web search using natural language queries to find current information, news, and data
- Page fetching to extract and read content from any webpage URL
- Search depth modes: Standard for direct-answer queries and Deep for complex research across multiple sources
- Source-backed results with citations and context from relevant, trustworthy websites
- JavaScript rendering support for accessing dynamic content on JavaScript-heavy pages

Web & Search
Math-MCP

EthanHenrickson

Math-MCP is a computation server that enables Large Language Models (LLMs) to perform accurate numerical calculations through the Model Context Protocol. It provides precise mathematical operations via a simple API to overcome LLM limitations in arithmetic and statistical reasoning. Key features of Math-MCP:

- Basic arithmetic operations: addition, subtraction, multiplication, division, modulo, and bulk summation
- Statistical analysis functions: mean, median, mode, minimum, and maximum calculations
- Rounding utilities: floor, ceiling, and nearest integer rounding
- Trigonometric functions: sine, cosine, tangent, and their inverses, with degrees and radians conversion support

Developer Tools