# LLM Bridge MCP
LLM Bridge MCP allows AI agents to interact with multiple large language models through a standardized interface. It leverages the Model Context Protocol (MCP) to provide seamless access to different LLM providers, making it easy to switch between models or use multiple models in the same application.
## Features

- Unified interface to multiple LLM providers:
  - OpenAI (GPT models)
  - Anthropic (Claude models)
  - Google (Gemini models)
  - DeepSeek
  - ...
- Built with Pydantic AI for type safety and validation
- Supports customizable parameters like temperature and max tokens
- Provides usage tracking and metrics

## Tools

The server implements a single tool for sending a prompt to the selected LLM, with the following parameters:
- `prompt`: the text prompt to send to the LLM
- `model_name`: the specific model to use (default: `"openai:gpt-4o-mini"`)
- `temperature`: controls randomness (0.0 to 1.0)
- `max_tokens`: maximum number of tokens to generate
- `system_prompt`: optional system prompt to guide the model's behavior

## Installation

### Installing via Smithery

To install llm-bridge-mcp for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@sjquant/llm-bridge-mcp):
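The command below is an illustrative sketch of the usual Smithery CLI install pattern; the exact invocation for this server is shown on the Smithery page linked above:

```bash
npx -y @smithery/cli install @sjquant/llm-bridge-mcp --client claude
```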
### Manual Installation

1. Clone the repository.
2. Install [uv](https://github.com/astral-sh/uv) (if not already installed).

## Configuration

Create a `.env` file in the root directory with your API keys (see the sketch below).
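The following is an illustrative sketch of the manual installation and configuration steps above. The repository URL is inferred from the Smithery package name, uv is installed with its official installer script, and the `.env` variable names are assumptions (one per provider listed under Features), so adjust them to your setup:

```bash
# Clone the repository (URL assumed from the Smithery package name)
git clone https://github.com/sjquant/llm-bridge-mcp.git
cd llm-bridge-mcp

# Install uv if it is not already available
curl -LsSf https://astral.sh/uv/install.sh | sh
```

```
# .env (illustrative variable names; set only the providers you use)
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
GOOGLE_API_KEY=your-google-api-key
DEEPSEEK_API_KEY=your-deepseek-api-key
```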
## Usage

### Using with Claude Desktop or Cursor

Add a server entry to your Claude Desktop configuration file or `.cursor/mcp.json`:
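A typical entry follows the standard MCP server-configuration shape. In this sketch the server name, the `uvx` package argument, and the environment variable names are assumptions; use the values that match your installation and providers:

```json
{
  "mcpServers": {
    "llm-bridge": {
      "command": "uvx",
      "args": ["llm-bridge-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key",
        "ANTHROPIC_API_KEY": "your-anthropic-api-key"
      }
    }
  }
}
```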
## Troubleshooting

### Common Issues

#### 1. "spawn uvx ENOENT" Error
This error occurs when the system cannot find the `uvx` executable in your PATH. To resolve this:
**Solution: Use the full path to uvx**
Find the full path to your uvx executable:
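For example, on macOS or Linux:

```bash
which uvx
# e.g. /Users/you/.local/bin/uvx
```

On Windows, `where uvx` serves the same purpose.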
Then update your MCP server configuration to use the full path:
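Continuing the illustrative configuration from the Usage section, with the path found above substituted for the bare `uvx` command:

```json
{
  "mcpServers": {
    "llm-bridge": {
      "command": "/Users/you/.local/bin/uvx",
      "args": ["llm-bridge-mcp"]
    }
  }
}
```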
## License

This project is licensed under the MIT License - see the LICENSE file for details.