
Created: Apr 22, 2025

DeepSeek MCP Server

A Model Context Protocol (MCP) server for the DeepSeek API, allowing seamless integration of DeepSeek's powerful language models with MCP-compatible applications like Claude Desktop.

Use the DeepSeek API *anonymously* -- only a proxy is visible to the other side

<a href="https://glama.ai/mcp/servers/asht4rqltn"><img width="380" height="200" src="https://glama.ai/mcp/servers/asht4rqltn/badge" alt="DeepSeek Server MCP server" /></a>

Installation

Installing via Smithery

To install DeepSeek MCP Server for Claude Desktop automatically via Smithery:
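The install command was not included here; assuming the standard Smithery CLI pattern (the exact package identifier may differ, so verify it on Smithery):

```shell
# Assumed Smithery CLI invocation; confirm the package name on smithery.ai.
npx -y @smithery/cli install deepseek-mcp-server --client claude
```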

Manual Installation
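No commands were given for this step; a plausible manual setup, assuming the server is published to npm under the project's name (verify before running):

```shell
# Assumed npm package name; check the project's README for the real one.
npm install -g deepseek-mcp-server
```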

Usage with Claude Desktop

Add this to your claude_desktop_config.json:
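The configuration snippet was missing here. A typical entry follows common MCP server conventions; the package name and the `DEEPSEEK_API_KEY` variable are assumptions, so check the project's documentation:

```json
{
  "mcpServers": {
    "deepseek": {
      "command": "npx",
      "args": ["-y", "deepseek-mcp-server"],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key"
      }
    }
  }
}
```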

Features

Note: The server intelligently handles these natural language requests by mapping them to appropriate configuration changes. You can also query the current settings and available models:
  • User: "What models are available?"
  • User: "What configuration options do I have?"
  • User: "What is the current temperature setting?"
  • User: "Start a multi-turn conversation with the following settings: model 'deepseek-chat', make it not too creative, and allow 8000 tokens."
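As a rough illustration, a request like "make it not too creative, and allow 8000 tokens" could be mapped to parameter changes along these lines (this is a hypothetical sketch, not the server's actual parsing logic):

```typescript
// Hypothetical mapping of natural-language requests to configuration
// changes; the real server's interpretation is more sophisticated.
interface PartialConfig {
  temperature?: number;
  max_tokens?: number;
}

function interpretRequest(text: string): PartialConfig {
  const config: PartialConfig = {};
  if (/not too creative/i.test(text)) {
    config.temperature = 0.3; // lower temperature = less creative output
  }
  const tokens = text.match(/(\d+)\s*tokens/i);
  if (tokens) {
    config.max_tokens = parseInt(tokens[1], 10);
  }
  return config;
}
```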

Automatic Model Fallback if R1 is down

  • If the primary model, R1 (exposed as deepseek-reasoner in the server), is unavailable, the server automatically retries the request with V3 (exposed as deepseek-chat)
  • V3 is recommended for general-purpose use, while R1 is recommended for more technical and complex queries, primarily because of differences in speed and token usage
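The fallback behavior can be sketched as follows; `callModel` is a stub standing in for the real DeepSeek API call (here it simulates an R1 outage so the fallback path is exercised):

```typescript
// Minimal sketch of automatic model fallback, assuming an
// OpenAI-compatible chat interface.
type ChatMessage = { role: string; content: string };
type ChatParams = { model: string; messages: ChatMessage[] };

async function callModel(params: ChatParams): Promise<string> {
  if (params.model === "deepseek-reasoner") {
    throw new Error("503 Service Unavailable"); // simulate R1 being down
  }
  return `response from ${params.model}`;
}

async function completeWithFallback(params: ChatParams): Promise<string> {
  try {
    // Try the primary model (R1) first.
    return await callModel({ ...params, model: "deepseek-reasoner" });
  } catch {
    // R1 is down: retry the identical request against V3.
    return await callModel({ ...params, model: "deepseek-chat" });
  }
}
```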

Resource discovery for available models and configurations:

  • Custom model selection
  • Temperature control (0.0 to 2.0)
  • Max tokens limit
  • Top-p sampling (0.0 to 1.0)
  • Presence penalty (-2.0 to 2.0)
  • Frequency penalty (-2.0 to 2.0)
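For example, a request exercising these parameters might look like the following (parameter names assume the OpenAI-compatible schema the DeepSeek API uses):

```json
{
  "model": "deepseek-chat",
  "temperature": 0.7,
  "max_tokens": 8000,
  "top_p": 0.9,
  "presence_penalty": 0.5,
  "frequency_penalty": 0.5
}
```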

Enhanced Conversation Features

Multi-turn conversation support:
  • Maintains complete message history and context across exchanges
  • Preserves configuration settings throughout the conversation
  • Handles complex dialogue flows and follow-up chains automatically
This feature is particularly valuable for two key use cases:
  1. Training & Fine-tuning: Since DeepSeek is open source, many users train their own versions. Multi-turn support provides properly formatted conversation data, which is essential for training high-quality dialogue models.
  2. Complex Interactions: In production use, it helps manage longer conversations where context is crucial.
The implementation handles all context management and message formatting behind the scenes, letting you focus on the actual interaction rather than the technical details of maintaining conversation state.
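The history the server maintains can be pictured as a growing array of messages in the OpenAI-style chat schema the DeepSeek API accepts (the contents below are illustrative):

```typescript
// Sketch of the message history carried across turns: each follow-up
// is appended, so every request includes the full prior context.
type Role = "system" | "user" | "assistant";
interface Message {
  role: Role;
  content: string;
}

const history: Message[] = [
  { role: "user", content: "What is the Model Context Protocol?" },
  {
    role: "assistant",
    content: "MCP is an open protocol for connecting AI applications to tools and data sources.",
  },
  // A follow-up turn simply extends the same array:
  { role: "user", content: "And how does this server use it?" },
];
```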

Testing with MCP Inspector

You can test the server locally using the MCP Inspector tool:
  1. Build the server:
  2. Run the server with MCP Inspector:
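The commands for these steps were not included; assuming a typical TypeScript MCP server layout (the build output path is a guess, so adjust to the project's actual scripts):

```shell
# 1. Build the server (assumed npm scripts).
npm install
npm run build

# 2. Launch it under the MCP Inspector over stdio transport;
#    build/index.js is an assumed entry point.
npx @modelcontextprotocol/inspector node build/index.js
```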
The inspector will open in your browser and connect to the server via stdio transport. You can:
  • View available tools
  • Test chat completions with different parameters
  • Debug server responses
  • Monitor server performance
Note: The server uses DeepSeek's R1 model (deepseek-reasoner) by default, which provides state-of-the-art performance for reasoning and general tasks.

License

MIT
