

MCP Chain of Draft (CoD) Prompt Tool

[![smithery badge](https://smithery.ai/badge/@brendancopley/mcp-chain-of-draft-prompt-tool)](https://smithery.ai/server/@brendancopley/mcp-chain-of-draft-prompt-tool) [![version](https://img.shields.io/npm/v/mcp-chain-of-draft-prompt-tool.svg?style=flat-square)](https://npmjs.org/mcp-chain-of-draft-prompt-tool) [![package size](https://packagephobia.com/badge?p=mcp-chain-of-draft-prompt-tool)](https://packagephobia.com/result?p=mcp-chain-of-draft-prompt-tool) [![license](https://img.shields.io/npm/l/mcp-chain-of-draft-prompt-tool?color=%23007a1f&style=flat-square)](https://github.com/brendancopley/mcp-chain-of-draft-prompt-tool/blob/master/LICENSE) [![stargazers](https://img.shields.io/github/stars/brendancopley/mcp-chain-of-draft-prompt-tool?style=social)](https://github.com/brendancopley/mcp-chain-of-draft-prompt-tool/stargazers) [![number of forks](https://img.shields.io/github/forks/brendancopley/mcp-chain-of-draft-prompt-tool?style=social)](https://github.com/brendancopley/mcp-chain-of-draft-prompt-tool/fork)

Overview

The MCP Chain of Draft (CoD) Prompt Tool is a powerful Model Context Protocol tool that enhances LLM reasoning by transforming standard prompts into either Chain of Draft (CoD) or Chain of Thought (CoT) format. Here's how it works:
  1. **Input Transformation**: Your regular prompt is automatically transformed into CoD/CoT format
  2. **LLM Processing**: The transformed prompt is passed to your chosen LLM (Claude, GPT, Ollama, or local models)
  3. **Enhanced Reasoning**: The LLM processes the request using structured reasoning steps
  4. **Result Transformation**: The response is transformed back into a clear, concise format
This approach significantly improves reasoning quality while reducing token usage and maintaining high accuracy.
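
For a rough illustration (the exact wording the tool generates is an internal detail; this only sketches the contrast between a plain prompt and the concise CoD reasoning style):

```
Standard prompt:
  "A store sells apples at $2 each. How much do 7 apples cost?"

Chain of Draft reasoning (concise steps, then the answer):
  1. Price per apple: $2
  2. Apples bought: 7
  3. Total: 7 × 2 = 14
  Answer: $14
```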

BYOLLM Support

This tool supports a "Bring Your Own LLM" approach, allowing you to use any language model of your choice:

Supported LLM Integrations

  • **Cloud Services**
    - Anthropic Claude
    - OpenAI GPT models
    - Mistral AI
  • **Local Models**
    - Ollama (all models)
    - Local LLaMA variants
    - Any model supporting a chat completion API

Configuring Your LLM

  1. **Cloud Services**
     ```bash
     # For Anthropic Claude
     export ANTHROPIC_API_KEY=your_key_here
     # For OpenAI
     export OPENAI_API_KEY=your_key_here
     # For Mistral AI
     export MISTRAL_API_KEY=your_key_here
     ```
  2. **Local Models with Ollama**
     ```bash
     # First install Ollama
     curl https://ollama.ai/install.sh | sh
     # Pull your preferred model
     ollama pull llama2   # or mistral, or any other model
     # Configure the tool to use Ollama
     export MCP_LLM_PROVIDER=ollama
     export MCP_OLLAMA_MODEL=llama2   # or your chosen model
     ```
  3. **Custom Local Models**
     ```bash
     # Point to your local model API
     export MCP_LLM_PROVIDER=custom
     export MCP_CUSTOM_LLM_ENDPOINT=http://localhost:your_port
     ```

Credits

This project implements the Chain of Draft (CoD) reasoning approach as a Model Context Protocol (MCP) prompt tool for Claude. The core Chain of Draft implementation is based on the work by [stat-guy](https://github.com/stat-guy/chain-of-draft). We extend our gratitude for their pioneering work in developing this efficient reasoning approach.
Original Repository: [https://github.com/stat-guy/chain-of-draft](https://github.com/stat-guy/chain-of-draft)

Key Benefits

  • **Efficiency**: Significantly reduced token usage (as little as 7.6% of the tokens used by standard CoT)
  • **Speed**: Faster responses due to shorter generation time
  • **Cost Savings**: Lower API costs for LLM calls
  • **Maintained Accuracy**: Similar or even improved accuracy compared to CoT
  • **Flexibility**: Applicable across various reasoning tasks and domains

Features

  1. **Core Chain of Draft Implementation**
     - Concise reasoning steps (typically five words or fewer)
     - Format enforcement
     - Answer extraction
  2. **Performance Analytics**
     - Token usage tracking
     - Solution accuracy monitoring
     - Execution time measurement
     - Domain-specific performance metrics
  3. **Adaptive Word Limits**
     - Automatic complexity estimation
     - Dynamic adjustment of word limits
     - Domain-specific calibration
  4. **Comprehensive Example Database**
     - CoT-to-CoD transformation
     - Domain-specific examples (math, code, biology, physics, chemistry, puzzles)
     - Example retrieval based on problem similarity
  5. **Format Enforcement**
     - Post-processing to ensure adherence to word limits
     - Step structure preservation
     - Adherence analytics
  6. **Hybrid Reasoning Approaches**
     - Automatic selection between CoD and CoT
     - Domain-specific optimization
     - Historical performance-based selection
  7. **OpenAI API Compatibility**
     - Drop-in replacement for standard OpenAI clients (see the sketch after this list)
     - Support for both completions and chat interfaces
     - Easy integration into existing workflows
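
As a sketch of the OpenAI API compatibility feature, a standard OpenAI client can be pointed at the tool's endpoint. The base URL, port, and model name below are placeholders, not confirmed defaults; check your server's actual configuration:

```typescript
import OpenAI from "openai";

// Standard OpenAI client aimed at the CoD server's OpenAI-compatible
// endpoint. URL, port, and model name here are placeholders.
const client = new OpenAI({
  baseURL: "http://localhost:3000/v1",
  apiKey: process.env.OPENAI_API_KEY ?? "not-needed-for-local",
});

const response = await client.chat.completions.create({
  model: "claude-3-5-sonnet",
  messages: [
    { role: "user", content: "A train covers 60 km in 45 minutes. What is its speed in km/h?" },
  ],
});

console.log(response.choices[0].message.content);
```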

Setup and Installation

Prerequisites

  • Python 3.10+ (for Python implementation)
  • Node.js 22+ (for JavaScript implementation)
  • Nx (for building Single Executable Applications)

Python Installation

  1. Clone the repository
  2. Install dependencies:
     ```bash
     pip install -r requirements.txt
     ```
  3. Configure API keys in a `.env` file:
     ```
     ANTHROPIC_API_KEY=your_api_key_here
     ```
  4. Run the server:
     ```bash
     python server.py
     ```

JavaScript/TypeScript Installation

  1. Clone the repository
  2. Install dependencies:
     ```bash
     npm install
     ```
  3. Configure API keys in a `.env` file:
     ```
     ANTHROPIC_API_KEY=your_api_key_here
     ```
  4. Build and run the server:
     ```bash
     # Build TypeScript files using Nx
     npm run nx build
     # Start the server
     npm start
     # For development with auto-reload:
     npm run dev
     ```
Available scripts:
  • `npm run nx build`: Compiles TypeScript to JavaScript using Nx build system
  • `npm run build:sea`: Creates Single Executable Applications for all platforms
  • `npm start`: Runs the compiled server from `dist`
  • `npm test`: Runs the test query against the server
  • `npm run dev`: Runs the TypeScript server directly using ts-node (useful for development)
The project uses Nx as its build system, providing:
  • Efficient caching and incremental builds
  • Cross-platform build support
  • Integrated SEA generation
  • Dependency graph visualization
  • Consistent build process across environments

Single Executable Applications (SEA)

This project supports building Single Executable Applications (SEA) using Node.js 22+ and the [@getlarge/nx-node-sea](https://github.com/getlarge/nx-node-sea) plugin. This allows you to create standalone executables that don't require Node.js to be installed on the target system.

Building SEA Executables

The project includes several scripts for building SEA executables:
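
The canonical entry point is the `build:sea` script listed above; per-platform variants, if defined in `package.json`, would follow a similar naming pattern (the commented names below are hypothetical):

```bash
# Build Single Executable Applications for all platforms
npm run build:sea

# Hypothetical per-platform variants; check package.json for the real names:
# npm run build:sea:macos
# npm run build:sea:linux
# npm run build:sea:windows
```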

SEA Build Configuration

The project uses Nx for managing the build process. The SEA configuration is handled through the nx-node-sea plugin, which provides a streamlined way to create Node.js single executable applications.
Key features of the SEA build process:
  • Cross-platform support (macOS, Linux, Windows)
  • Automatic dependency bundling
  • Optimized binary size
  • No runtime dependencies required

Using SEA Executables

Once built, the SEA executables can be found in the `dist` directory. These executables:
  • Are completely standalone
  • Don't require Node.js installation
  • Can be distributed and run directly
  • Maintain all functionality of the original application
For Claude Desktop integration with SEA executables, update your configuration to use the executable path:
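
For example (a minimal sketch; the executable name and path are placeholders for the binary produced in your `dist` directory):

```json
{
  "mcpServers": {
    "chain-of-draft-prompt-tool": {
      "command": "/absolute/path/to/dist/mcp-chain-of-draft-prompt-tool",
      "env": {
        "ANTHROPIC_API_KEY": "your_api_key_here"
      }
    }
  }
}
```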

Claude Desktop Integration

To integrate with Claude Desktop:
  1. Install Claude Desktop from [claude.ai/download](https://claude.ai/download)
  2. Create or edit the Claude Desktop config file:
     ```
     ~/Library/Application Support/Claude/claude_desktop_config.json
     ```
  3. Add the tool configuration (Python version):
     ```json
     {
       "mcpServers": {
         "chain-of-draft-prompt-tool": {
           "command": "python3",
           "args": ["/absolute/path/to/cod/server.py"],
           "env": {
             "ANTHROPIC_API_KEY": "your_api_key_here"
           }
         }
       }
     }
     ```
     Or for the JavaScript version:
     ```json
     {
       "mcpServers": {
         "chain-of-draft-prompt-tool": {
           "command": "node",
           "args": ["/absolute/path/to/cod/index.js"],
           "env": {
             "ANTHROPIC_API_KEY": "your_api_key_here"
           }
         }
       }
     }
     ```
  4. Restart Claude Desktop
You can also use the Claude CLI to add the tool:
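
A minimal sketch, assuming the `claude mcp add` subcommand (run `claude mcp add --help` to confirm the exact flags):

```bash
claude mcp add chain-of-draft-prompt-tool \
  -e ANTHROPIC_API_KEY=your_api_key_here \
  -- node /absolute/path/to/cod/index.js
```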

Using with Dive GUI

[Dive](https://github.com/OpenAgentPlatform/Dive) is an excellent open-source MCP Host Desktop Application that provides a user-friendly GUI for interacting with MCP tools like this one. It supports multiple LLMs including ChatGPT, Anthropic Claude, Ollama, and other OpenAI-compatible models.

Integrating with Dive

  1. Download and install Dive from their [releases page](https://github.com/OpenAgentPlatform/Dive/releases)
  2. Configure the Chain of Draft tool in Dive's MCP settings:
If you're using the non-SEA version:
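
A minimal sketch, assuming Dive accepts MCP server entries in the same shape as the Claude Desktop config (verify against Dive's documentation):

```json
{
  "mcpServers": {
    "chain-of-draft-prompt-tool": {
      "command": "node",
      "args": ["/absolute/path/to/cod/index.js"],
      "env": {
        "ANTHROPIC_API_KEY": "your_api_key_here"
      }
    }
  }
}
```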

Key Benefits of Using Dive

  • Universal LLM Support with multiple API key management
  • Cross-platform availability (Windows, macOS, Linux)
  • Seamless MCP integration in both stdio and SSE modes
  • Multi-language interface
  • Custom instructions and system prompts
  • Automatic updates
Using Dive provides a convenient way to interact with the Chain of Draft tool through a modern, feature-rich interface while maintaining all the benefits of the MCP protocol.

Testing with MCP Inspector

The project includes integration with the MCP Inspector tool, which provides a visual interface for testing and debugging MCP tools. This is especially useful during development or when you want to inspect the tool's behavior.

Running the Inspector

You can start the MCP Inspector using the provided npm script:
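
The script name below is an assumption; check `package.json` for the exact name (MCP projects commonly wrap `npx @modelcontextprotocol/inspector`):

```bash
npm run inspector
```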
This will:
  1. Start the MCP server in the background
  2. Launch the MCP Inspector interface in your default browser
  3. Connect to the running server for testing

Using the Inspector Interface

The MCP Inspector provides:
  • Real-time visualization of tool calls and responses
  • Interactive testing of MCP functions
  • Request/response history
  • Debug information for each interaction
  • Performance metrics and timing data
This makes it an invaluable tool for:
  • Development and debugging
  • Understanding tool behavior
  • Testing different inputs and scenarios
  • Verifying MCP compliance
  • Performance optimization
The Inspector will be available at `http://localhost:5173` by default.

Available Tools

The Chain of Draft server provides MCP tools for solving problems with CoD or CoT reasoning across domains such as math, code, biology, physics, chemistry, and puzzles, for querying performance analytics (token usage, accuracy, execution time), and for managing the example database (example retrieval and CoT-to-CoD transformation).

Developer Usage

Python Client

If you want to use the Chain of Draft client directly in your Python code:
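
The snippet below is a minimal sketch using the official MCP Python SDK to spawn the server over stdio and call a tool. The tool name `chain_of_draft_solve` and its arguments are assumptions; check the server's actual tool list (e.g., via the MCP Inspector) first:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Spawn the CoD server as a subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python3", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Hypothetical tool name/arguments; verify with session.list_tools().
            result = await session.call_tool(
                "chain_of_draft_solve",
                {"problem": "What is 17 * 24?", "domain": "math"},
            )
            print(result.content)


asyncio.run(main())
```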

JavaScript/TypeScript Client

For TypeScript/Node.js applications:
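
Similarly hedged, a sketch using the MCP TypeScript SDK (`@modelcontextprotocol/sdk`); the tool name and arguments are assumptions to verify against the server's tool list:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main(): Promise<void> {
  // Spawn the compiled CoD server over stdio.
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"],
  });
  const client = new Client(
    { name: "cod-example-client", version: "1.0.0" },
    { capabilities: {} },
  );
  await client.connect(transport);

  // Hypothetical tool name/arguments; verify with client.listTools().
  const result = await client.callTool({
    name: "chain_of_draft_solve",
    arguments: { problem: "What is 17 * 24?", domain: "math" },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```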

Implementation Details

The server is available in both Python and JavaScript implementations, each consisting of several integrated components:

Python Implementation

  1. **AnalyticsService**: Tracks performance metrics across different problem domains and reasoning approaches
  2. **ComplexityEstimator**: Analyzes problems to determine appropriate word limits
  3. **ExampleDatabase**: Manages and retrieves examples, transforming CoT examples to CoD format
  4. **FormatEnforcer**: Ensures reasoning steps adhere to word limits
  5. **ReasoningSelector**: Intelligently chooses between CoD and CoT based on problem characteristics

JavaScript Implementation

  1. **analyticsDb**: In-memory database for tracking performance metrics
  2. **complexityEstimator**: Analyzes problems to determine complexity and appropriate word limits
  3. **formatEnforcer**: Ensures reasoning steps adhere to word limits
  4. **reasoningSelector**: Automatically chooses between CoD and CoT based on problem characteristics and historical performance
Both implementations follow the same core principles and provide identical MCP tools, making them interchangeable for most use cases.

License

This project is open-source and available under the MIT license.