batchit.com

BatchIt

Combine tool calls into a single batch_execute call.

Created: Apr 22, 2025

MCP BatchIt

Batch multiple MCP tool calls into a single "batch_execute" request, reducing overhead and token usage for AI agents.

Table of Contents

  1. Introduction
  2. Why Use BatchIt
  3. Key Features & Limitations
  4. Installation & Startup
  5. Multi-Phase Usage
  6. FAQ
  7. License

Introduction

NOTICE: Work in Progress. This project is actively being developed to address several complex challenges. While functional, expect ongoing improvements and changes as we refine the solution.
MCP BatchIt is a simple aggregator server in the Model Context Protocol (MCP) ecosystem. It exposes just one tool: `batch_execute`. Rather than calling multiple MCP tools (like fetch, read_file, create_directory, write_file, etc.) in separate messages, you can batch them together in one aggregator request.
This dramatically reduces token usage, network overhead, and repeated context in your AI agent or LLM conversation.

Why Use BatchIt

  • One Action per Message Problem: Normally, an LLM or AI agent can only call a single MCP tool at a time, forcing multiple calls for multi-step tasks.
  • Excessive Round Trips: 10 separate file operations might require 10 messages and 10 responses.
  • BatchIt's Approach: combine multiple sub-operations into a single batch_execute request against one target MCP server, executed in parallel up to a concurrency limit.

Key Features & Limitations

Features

  1. Single Batch Execute Tool
  2. Parallel Execution
  3. Timeout & Stop on Error
  4. Connection Caching

Limitations

  1. No Data Passing Mid-Batch
  2. No Partial Progress
  3. Must Use a Real MCP Server
  4. One Target Server per Call

Installation & Startup

BatchIt starts on STDIO by default so your AI agent (or any MCP client) can spawn it. For example:
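Since BatchIt speaks STDIO, a typical way to wire it up is through an MCP client configuration entry. The sketch below assumes an npx-installable package named `mcp-batchit` (the package name and invocation are assumptions; substitute the actual command for your install):

```json
{
  "mcpServers": {
    "batchit": {
      "command": "npx",
      "args": ["-y", "mcp-batchit"]
    }
  }
}
```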
You can now send JSON-RPC requests (tools/call method, name= "batch_execute") to it.
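As an illustrative sketch, a `tools/call` request might look like the following. The argument schema is assembled from the fields mentioned elsewhere in this document (`targetServer`, `maxConcurrent`, `stopOnError`); exact field names may differ in your version:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "batch_execute",
    "arguments": {
      "targetServer": {
        "name": "filesystem",
        "transport": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
        }
      },
      "operations": [
        { "tool": "read_file", "arguments": { "path": "package.json" } }
      ],
      "options": { "maxConcurrent": 1, "stopOnError": true }
    }
  }
}
```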

MEMORY BANK

Using Cline/Roo Code, you can build a framework of contextual project documentation by leveraging the powerful "Memory Bank" custom instructions developed by Nick Baumann.

Traditional Approach (19+ calls):

  1. Read package.json
  2. Wait for response
  3. Read README.md
  4. Wait for response
  5. List code definitions
  6. Wait for response
  7. Create memory-bank directory
  8. Wait for response
  9. Write productContext.md
  10. Write systemPatterns.md
  11. Write techContext.md
  12. Write progress.md
  13. Write activeContext.md
  14. Wait for responses (5 more calls)
Total: ~19 separate API calls (13 operations + 6 response waits)

BatchIt Approach (1-3 calls)

One batch_execute call reads all the source files, an optional LLM-only step lists code definitions, and a final batch_execute call creates the memory-bank directory and writes all five documentation files: 2-3 aggregator calls instead of ~19.

Multi-Phase Usage

When working with complex multi-step tasks that depend on real-time output (such as reading files and generating documentation), you'll need to handle the process in distinct phases. This is necessary because BatchIt doesn't support data passing between sub-operations within the same request.

Implementation Phases

Information Gathering

In this initial phase, we gather information from the filesystem by reading necessary files (e.g., package.json, README.md). This is accomplished through a batch_execute call to the filesystem MCP server:
Note: The aggregator spawns @modelcontextprotocol/server-filesystem (via npx) to execute parallel read_file operations.
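As a sketch, the `batch_execute` arguments for this phase might look like the following (field names are assumptions based on the options discussed in this document; both reads run in parallel):

```json
{
  "targetServer": {
    "name": "filesystem",
    "transport": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  },
  "operations": [
    { "tool": "read_file", "arguments": { "path": "package.json" } },
    { "tool": "read_file", "arguments": { "path": "README.md" } }
  ],
  "options": { "maxConcurrent": 2, "stopOnError": true }
}
```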

LLM-Only Step (List Code Definitions)

This phase involves processing outside the aggregator, typically using LLM or AI agent capabilities:
This step uses Roo Code's list_code_definition_names tool, which is available only within the agent itself rather than through the aggregator. Note, however, that many MCP servers provide similar functionality, so this process can also be completed without an LLM request.

Document Creation

The final phase combines data from previous steps (file contents and code definitions) to generate documentation in the memory-bank directory:
The aggregator processes these operations sequentially (maxConcurrent=1), creating the directory and writing multiple documentation files. The result array indicates the success/failure status of each operation.
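A hedged sketch of the arguments for this phase, with `maxConcurrent: 1` so the directory exists before any file is written (field names are assumptions; the `"..."` content placeholders stand in for the generated documentation):

```json
{
  "targetServer": {
    "name": "filesystem",
    "transport": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  },
  "operations": [
    { "tool": "create_directory", "arguments": { "path": "memory-bank" } },
    { "tool": "write_file", "arguments": { "path": "memory-bank/productContext.md", "content": "..." } },
    { "tool": "write_file", "arguments": { "path": "memory-bank/systemPatterns.md", "content": "..." } },
    { "tool": "write_file", "arguments": { "path": "memory-bank/techContext.md", "content": "..." } },
    { "tool": "write_file", "arguments": { "path": "memory-bank/progress.md", "content": "..." } },
    { "tool": "write_file", "arguments": { "path": "memory-bank/activeContext.md", "content": "..." } }
  ],
  "options": { "maxConcurrent": 1, "stopOnError": true }
}
```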

FAQ

Q1: Do I need multiple aggregator calls if sub-op #2 depends on sub-op #1's results? Yes. BatchIt doesn't pass data between sub-ops in the same request. You do multi-phase calls (like the example above).
Q2: Why do I get "Tool create_directory not found" sometimes? Because your transport might be pointing at the aggregator script itself instead of the real MCP server. Make sure you reference something like @modelcontextprotocol/server-filesystem.
Q3: Can I combine concurrency with stopOnError? Absolutely. If a sub-op fails, BatchIt skips launching new sub-ops; already-running ones finish in parallel.
Q4: Does BatchIt re-spawn the target server each time? It can, if you specify keepAlive: false. But if you use the exact same targetServer.name + transport, it caches the connection until an idle timeout passes.
Q5: Are partial results returned if an error occurs in the middle? Yes. Each sub-op that finished prior to the error is included in the final aggregator response, along with the failing sub-op. Remaining sub-ops are skipped if stopOnError is true.

License

MIT
