# mcp-inception MCP Server

Call another MCP client from your MCP client. Delegate tasks, offload context windows. An agent for your agent!

This is a TypeScript-based MCP server that implements a simple LLM query system.

<a href="https://glama.ai/mcp/servers/hedrd1hxv5"><img width="380" height="200" src="https://glama.ai/mcp/servers/hedrd1hxv5/badge" alt="Inception Server MCP server" /></a>
## Features

### Tools

- `execute_mcp_client` - Ask a question to a separate LLM, ignore all the intermediate steps it takes when querying its tools, and return only the final output.
- `execute_parallel_mcp_client` - Takes a main prompt and a list of inputs, and executes the prompt in parallel for each string in the list.
  - e.g. get the current time in six major cities: London, Paris, Tokyo, Rio, New York, Sydney.
- `execute_map_reduce_mcp_client` - Process multiple items in parallel, then sequentially reduce the results to a single output.
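The parallel and map-reduce tools follow a common pattern. Here is a minimal TypeScript sketch of that pattern, not this server's actual implementation; `queryLLM` is a hypothetical stand-in for the call to the sub-LLM:

```typescript
// Hypothetical stand-in for a query to the delegated sub-LLM.
async function queryLLM(prompt: string): Promise<string> {
  return `answer(${prompt})`;
}

// Map phase: run the prompt against every item in parallel.
// Reduce phase: fold the mapped results sequentially into one output.
async function mapReduce(
  items: string[],
  mapPrompt: string,
  reducePrompt: string
): Promise<string> {
  const mapped = await Promise.all(
    items.map((item) => queryLLM(`${mapPrompt}: ${item}`))
  );
  let acc = mapped[0];
  for (const next of mapped.slice(1)) {
    acc = await queryLLM(`${reducePrompt}\nSo far: ${acc}\nNext: ${next}`);
  }
  return acc;
}
```

`execute_parallel_mcp_client` corresponds to the map phase alone; `execute_map_reduce_mcp_client` adds the sequential fold.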
## Development

Dependencies:

- Install `mcp-client-cli`.
- Create a bash file somewhere that activates its venv and executes the `llm` executable.
- Install the package.
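The wrapper might look like this; the venv path is a placeholder for wherever you installed `mcp-client-cli`:

```bash
#!/bin/bash
# Hypothetical wrapper: activate the venv that contains mcp-client-cli,
# then forward all arguments to the llm executable.
source /path/to/your/venv/bin/activate
llm "$@"
```

Make it executable with `chmod +x` so the server can invoke it.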
Then install dependencies, build the server, and (for development) run the auto-rebuild watcher.
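Assuming the standard npm scripts from the MCP TypeScript server template (`build` and `watch` are assumptions; check this package.json if they differ):

```bash
npm install      # install dependencies
npm run build    # compile TypeScript
npm run watch    # rebuild automatically on changes during development
```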
## Installation

To use with Claude Desktop, add the server config:

- On MacOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
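A typical entry has this shape; the server key and the build path are assumptions, so adjust them to where you cloned and built this repository:

```json
{
  "mcpServers": {
    "mcp-inception": {
      "command": "node",
      "args": ["/path/to/mcp-inception/build/index.js"]
    }
  }
}
```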
## Debugging
Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:
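Assuming the standard script name from the MCP TypeScript server template:

```bash
npm run inspector
```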
The Inspector will provide a URL to access debugging tools in your browser.
Disclaimer
Ok this is a difficult one. Will take some setting up unfortunately.
However, if you manage to make this more straightforward, please send me PR's.