
MCP Coding Assistant with support for OpenAI + other LLM Providers

A powerful Python recreation of Claude Code with enhanced real-time visualization, cost management, and Model Context Protocol (MCP) server capabilities. This tool provides a natural language interface for software development tasks with support for multiple LLM providers.

Key Features

  • Multi-Provider Support: Works with OpenAI, Anthropic, and other LLM providers
  • Model Context Protocol Integration: run as an MCP server, connect as an MCP client, or coordinate multiple agents over MCP
  • Real-Time Tool Visualization: See tool execution progress and results in real-time
  • Cost Management: Track token usage and expenses with budget controls
  • Comprehensive Tool Suite: File operations, search, command execution, and more
  • Enhanced UI: Rich terminal interface with progress indicators and syntax highlighting
  • Context Optimization: Smart conversation compaction and memory management
  • Agent Coordination: Specialized agents with different roles can collaborate on tasks

Installation

  1. Clone this repository
  2. Install dependencies
  3. Create a .env file with your API keys (all three steps are sketched below)
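
A minimal sketch of these three steps; the repository URL is a placeholder, a requirements.txt is assumed, and the environment-variable names are assumptions, since the README does not spell them out:

```bash
# 1. Clone this repository (URL is a placeholder)
git clone <repository-url>
cd <repository-directory>

# 2. Install dependencies (assumes a requirements.txt is shipped)
pip install -r requirements.txt

# 3. Create a .env file with your API keys (variable names are assumptions)
cat > .env <<'EOF'
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
EOF
```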

Usage

CLI Mode

Run the CLI with the default provider (determined from available API keys):
Specify a provider and model:
Set a budget limit to manage costs:
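
Hedged examples of the three invocations above; the entry-point name (claude.py) and the flag spellings (--provider, --model, --budget) are assumptions, not confirmed by this README:

```bash
# Default provider, inferred from whichever API keys are available
python claude.py

# Explicit provider and model (flag names are assumptions)
python claude.py --provider openai --model gpt-4o

# Budget limit in US dollars (flag name is an assumption)
python claude.py --budget 5.00
```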

MCP Server Mode

Run as a Model Context Protocol server:
Start in development mode with the MCP Inspector:
Configure host and port:
Specify additional dependencies:
Load environment variables from file:
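
Hedged sketches of the five commands above; the `serve` subcommand is an assumption, and the flag names mirror the option list of the companion project documented later on this page:

```bash
# Run as a Model Context Protocol server (subcommand is an assumption)
python claude.py serve

# Development mode with the MCP Inspector
python claude.py serve --dev

# Custom host and port
python claude.py serve --host 0.0.0.0 --port 8000

# Extra Python dependencies for the server environment
python claude.py serve --dependencies "mcp[cli]"

# Load environment variables from a file
python claude.py serve --env-file .env
```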

MCP Client Mode

Connect to an MCP server using Claude as the reasoning engine:
Specify a Claude model:
Try the included example server:
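
Hedged examples of the three commands above; the `client` subcommand, the model flag, and the example-server path are assumptions:

```bash
# Connect to an MCP server, with Claude as the reasoning engine
python claude.py client path/to/server.py

# Pin a specific Claude model (flag name is an assumption)
python claude.py client path/to/server.py --model claude-3-5-sonnet-latest

# Try the included example server (path is an assumption)
python claude.py client examples/echo_server.py
```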

Multi-Agent MCP Mode

Launch a multi-agent client with synchronized agents:
Use a custom agent configuration file:
Example with the echo server:
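
Hedged sketches of the three commands above; the `multi-agent` subcommand, the `--config` flag, and the file paths are assumptions:

```bash
# Launch a multi-agent client with synchronized agents
python claude.py multi-agent path/to/server.py

# Use a custom agent configuration file
python claude.py multi-agent path/to/server.py --config agents.json

# Example with the echo server (path is an assumption)
python claude.py multi-agent examples/echo_server.py
```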

Available Tools

  • View: Read files with optional line limits
  • Edit: Modify files with precise text replacement
  • Replace: Create or overwrite files
  • GlobTool: Find files by pattern matching
  • GrepTool: Search file contents using regex
  • LS: List directory contents
  • Bash: Execute shell commands

Chat Commands

  • /help: Show available commands
  • /compact: Compress conversation history to save tokens
  • /version: Show version information
  • /providers: List available LLM providers
  • /cost: Show cost and usage information
  • /budget [amount]: Set a budget limit
  • /quit, /exit: Exit the application

Architecture

Claude Code Python Edition is built with a modular architecture.

Using with Model Context Protocol

Using Claude Code as an MCP Server

Once the MCP server is running, you can connect to it from Claude Desktop or other MCP-compatible clients:
  1. Install and run the MCP server (sketched below)
  2. Open the configuration page in your browser
  3. Follow the instructions shown there to configure Claude Desktop
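
A hedged walk-through of steps 1 and 2; the entry point and the configuration URL are assumptions:

```bash
# 1. Install dependencies and start the MCP server (entry point is an assumption)
pip install -r requirements.txt
python claude.py serve

# 2. Open the configuration page the server prints (this URL is a guess)
#    http://localhost:8000
```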

Using Claude Code as an MCP Client

To connect to any MCP server using Claude Code:
  1. Ensure your Anthropic API key is in the environment or in the .env file
  2. Start the MCP server you want to connect to
  3. Connect using the MCP client (see the sketch below)
  4. Type queries in the interactive chat interface
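
A hedged sketch of the connection flow, reusing the assumed claude.py entry point and `client` subcommand:

```bash
# 1. Make your Anthropic API key available
export ANTHROPIC_API_KEY=your-anthropic-key   # or put it in .env

# 2-3. Start the target MCP server, then connect to it
python claude.py client path/to/server.py

# 4. Type queries at the interactive prompt
```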

Using Multi-Agent Mode

For complex tasks, the multi-agent mode allows multiple specialized agents to collaborate:
  1. Create an agent configuration file or use the provided example
  2. Start your MCP server
  3. Launch the multi-agent client (see the sketch below)
  4. Use the command interface to interact with multiple agents
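
A hedged sketch of step 3; the subcommand, flag, and file names are assumptions:

```bash
# Start your MCP server first, then launch the multi-agent client
python claude.py multi-agent path/to/server.py --config agents.json
```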

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Implement your changes with tests
  4. Submit a pull request

License

MIT

Acknowledgments

This project is inspired by Anthropic's Claude Code CLI tool, reimplemented in Python with additional features for enhanced visibility, cost management, and MCP server capabilities.

OpenAI Code Assistant

A powerful command-line and API-based coding assistant that uses OpenAI APIs with function calling and streaming.

Features

  • Interactive CLI for coding assistance
  • Web API for integration with other applications
  • Model Context Protocol (MCP) server implementation
  • Replication support for high availability
  • Tool-based architecture for extensibility
  • Reinforcement learning for tool optimization
  • Web client for browser-based interaction

Installation

  1. Clone the repository
  2. Install dependencies
  3. Set your OpenAI API key (all three steps are sketched below)
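
A minimal sketch of these three steps; the repository URL is a placeholder and a requirements.txt is assumed:

```bash
git clone <repository-url>
cd <repository-directory>
pip install -r requirements.txt
export OPENAI_API_KEY=your-openai-key
```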

Usage

CLI Mode

Run the assistant in interactive CLI mode:
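
A hedged example; the entry-point name (assistant.py) is an assumption, while the flags come from the option list below:

```bash
# Interactive CLI with defaults (gpt-4o, temperature 0)
python assistant.py

# Explicit model and temperature, with verbose output
python assistant.py --model gpt-4o --temperature 0.2 --verbose
```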
Options:
  • --model, -m: Specify the model to use (default: gpt-4o)
  • --temperature, -t: Set temperature for response generation (default: 0)
  • --verbose, -v: Enable verbose output with additional information
  • --enable-rl/--disable-rl: Enable/disable reinforcement learning for tool optimization
  • --rl-update: Manually trigger an update of the RL model

API Server Mode

Run the assistant as an API server:
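
A hedged example; the `serve` subcommand is an assumption, while the flags are documented below:

```bash
# Serve the HTTP API on all interfaces with four workers
python assistant.py serve --host 0.0.0.0 --port 8000 --workers 4
```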
Options:
  • --host: Host address to bind to (default: 127.0.0.1)
  • --port, -p: Port to listen on (default: 8000)
  • --workers, -w: Number of worker processes (default: 1)
  • --enable-replication: Enable replication across instances
  • --primary/--secondary: Whether this is a primary or secondary instance
  • --peer: Peer instances to replicate with (host:port), can be specified multiple times

MCP Server Mode

Run the assistant as a Model Context Protocol (MCP) server:
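
A hedged example; the `mcp-serve` subcommand name is an assumption, while the flags are documented below:

```bash
# MCP server in development mode, loading secrets from .env
python assistant.py mcp-serve --host 127.0.0.1 --port 8000 --dev --env-file .env
```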
Options:
  • --host: Host address to bind to (default: 127.0.0.1)
  • --port, -p: Port to listen on (default: 8000)
  • --dev: Enable development mode with additional logging
  • --dependencies: Additional Python dependencies to install
  • --env-file: Path to .env file with environment variables

MCP Client Mode

Connect to an MCP server using the assistant as the reasoning engine:
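
A hedged example; the `mcp-client` subcommand name is an assumption, while the flags are documented below:

```bash
# Connect to a local MCP server, reasoning with gpt-4o
python assistant.py mcp-client --host 127.0.0.1 --port 8000 --model gpt-4o
```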
Options:
  • --model, -m: Model to use for reasoning (default: gpt-4o)
  • --host: Host address for the MCP server (default: 127.0.0.1)
  • --port, -p: Port for the MCP server (default: 8000)

Deployment Script

For easier deployment, use the provided script:
To enable replication:
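
Hedged examples of both commands above; the script name (deploy.sh) and its flag pass-through are assumptions:

```bash
# Basic deployment
./deploy.sh

# Deployment with replication enabled
./deploy.sh --enable-replication
```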

Web Client

To use the web client, open web-client.html in your browser. Make sure the API server is running.

API Endpoints

Standard API Endpoints

  • POST /conversation: Create a new conversation
  • POST /conversation/{conversation_id}/message: Send a message to a conversation
  • POST /conversation/{conversation_id}/message/stream: Stream a message response
  • GET /conversation/{conversation_id}: Get conversation details
  • DELETE /conversation/{conversation_id}: Delete a conversation
  • GET /health: Health check endpoint
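
A hedged curl walk-through of the conversation endpoints; the request and response body shapes are assumptions, since the README does not document the schemas:

```bash
# Create a new conversation
curl -s -X POST http://localhost:8000/conversation

# Send a message to it (JSON field name is an assumption)
curl -s -X POST http://localhost:8000/conversation/<conversation_id>/message \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain this stack trace"}'

# Check server health
curl -s http://localhost:8000/health
```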

MCP Protocol Endpoints

  • GET /: Health check (MCP protocol)
  • POST /context: Get context for a prompt template
  • GET /prompts: List available prompt templates
  • GET /prompts/{prompt_id}: Get a specific prompt template
  • POST /prompts: Create a new prompt template
  • PUT /prompts/{prompt_id}: Update an existing prompt template
  • DELETE /prompts/{prompt_id}: Delete a prompt template
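
A hedged example against the prompt-template endpoints; the JSON payload shape is an assumption:

```bash
# List available prompt templates
curl -s http://localhost:8000/prompts

# Request context for a template (body shape is an assumption)
curl -s -X POST http://localhost:8000/context \
  -H "Content-Type: application/json" \
  -d '{"prompt_id": "example-template", "variables": {"topic": "testing"}}'
```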

Replication

The replication system allows running multiple instances of the assistant with synchronized state. This provides:
  • High availability
  • Load balancing
  • Fault tolerance
To set up replication:
  1. Start a primary instance with --enable-replication
  2. Start secondary instances with --enable-replication --secondary --peer [primary-host:port] (see the sketch below)
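
A hedged sketch of a two-node setup, reusing the assumed assistant.py entry point and `serve` subcommand; the flags themselves are documented in the API Server Mode options above:

```bash
# Primary instance
python assistant.py serve --port 8000 --enable-replication --primary

# Secondary instance pointing at the primary
python assistant.py serve --port 8001 --enable-replication --secondary --peer 127.0.0.1:8000
```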

Tools

The assistant includes various tools:
  • Weather: Get current weather for a location
  • View: Read files from the filesystem
  • Edit: Edit files
  • Replace: Write files
  • Bash: Execute bash commands
  • GlobTool: File pattern matching
  • GrepTool: Content search
  • LS: List directory contents
  • JinaSearch: Web search using Jina.ai
  • JinaFactCheck: Fact checking using Jina.ai
  • JinaReadURL: Read and summarize webpages

CLI Commands

  • /help: Show help message
  • /compact: Compact the conversation to reduce token usage
  • /status: Show token usage and session information
  • /config: Show current configuration settings
  • /rl-status: Show RL tool optimizer status (if enabled)
  • /rl-update: Update the RL model manually (if enabled)
  • /rl-stats: Show tool usage statistics (if enabled)
