
XiYanSQL (MySQL)

Enables natural language interaction with MySQL databases through XiYanSQL, providing secure table listing, data reading...

Created: Apr 22, 2025

Table of Contents

  • Features
  • Preview
  • Installation
  • Configuration
  • Launch
  • It Does Not Work
  • Citation

Features

  • Fetch data by natural language through XiYanSQL
  • Support general LLMs (GPT, Qwen-max) and the Text-to-SQL SOTA model
  • Support pure local mode (high security!)
  • Support MySQL and PostgreSQL.
  • List available tables as resources
  • Read table contents

Preview

Architecture

There are two ways to integrate this server into your project, as shown below. The left is remote mode, the default; it requires an API key to access the XiYanSQL-qwencoder-32B model from a service provider (see Configuration). The right is local mode, which is more secure and does not require an API key.
architecture.png

Best practices and reports

Evaluation on MCPBench

The following figure illustrates the performance of the XiYan MCP server as measured by the MCPBench benchmark. The XiYan MCP server demonstrates superior performance compared to both the MySQL MCP server and the PostgreSQL MCP server, achieving a lead of 2-22 percentage points. The detailed experiment results can be found at MCPBench and the report "Evaluation Report on MCP Servers".
exp_mcpbench.png

Tools Preview

  • The tool get_data provides a natural language interface for retrieving data from a database. The server converts the input natural language into SQL using a built-in model, runs it against the database, and returns the query results (see the sketch after this list).
  • The {dialect}://{table_name} resource returns a portion of sample data from the database for model reference when a specific table_name is specified.
  • The {dialect}:// resource lists the names of the current databases.
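As an illustration only, a raw MCP tools/call request for get_data might look like the sketch below; the argument name (here assumed to be query) and the natural-language text are assumptions, and in practice your MCP client builds this request for you.

```json
{
  "method": "tools/call",
  "params": {
    "name": "get_data",
    "arguments": {
      "query": "Show the ten most recent orders together with customer names"
    }
  }
}
```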

Installation

Installing from pip

Python 3.11+ is required. You can install the latest version of the server through pip:
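A minimal sketch of the install command, assuming the PyPI package name matches the repository name:

```shell
pip install xiyan_mcp_server
```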
After that, you can run the server directly:
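Assuming the package installs a runnable module of the same name, the server can be started with:

```shell
python -m xiyan_mcp_server
```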
However, it provides no functionality until you complete the following configuration, which gives you a yml file. After that, you can run the server with that config:
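Based on the YML environment variable described in the Launch section, a plausible invocation is:

```shell
YML=path/to/config.yml python -m xiyan_mcp_server
```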

Installing from Smithery.ai

Not fully tested.
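For reference, a hypothetical invocation following Smithery's usual CLI pattern; the package identifier here is an assumption, so check the Smithery.ai listing for the actual name:

```shell
npx -y @smithery/cli install @XGenerationLab/xiyan_mcp_server --client claude
```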

Configuration

You need a YAML config file to configure the server. A default config file is provided as config_demo.yml, which looks like this:
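A sketch of the config shape, reconstructed from the model and database fields described in the following subsections; the exact key layout (model/database sections) is an assumption and all values are placeholders:

```yaml
model:
  name: "model-name"                    # name of the model to use
  key: "YOUR_API_KEY"                   # API key of the model
  url: "https://api-url-of-the-model"   # API url of the model
database:
  host: "localhost"
  port: 3306
  user: "root"
  password: "your_password"
  database: "your_database"
  # dialect: "postgresql"               # optional; see Database Configuration
```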

LLM Configuration

name is the name of the model to use, key is the API key for the model, and url is the API URL of the model. We support general LLMs (e.g. GPT, Qwen-max), the Text-to-SQL SOTA model XiYanSQL-qwencoder-32B, and local models, as described in the following subsections.

General LLMs

If you want to use a general LLM, e.g. GPT-3.5, you can configure it like this:
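A sketch for an OpenAI model, assuming an OpenAI-compatible endpoint; the database section is omitted here and should be filled as described under Database Configuration:

```yaml
model:
  name: "gpt-3.5-turbo"
  key: "YOUR_OPENAI_API_KEY"
  url: "https://api.openai.com/v1"   # OpenAI-compatible endpoint
```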
If you want to use Qwen from Alibaba, e.g. Qwen-max, you can use the following config:
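A sketch for Qwen-max via DashScope's OpenAI-compatible mode; the endpoint below follows DashScope's documented compatible-mode URL and should be treated as an assumption if it changes:

```yaml
model:
  name: "qwen-max"
  key: "YOUR_DASHSCOPE_API_KEY"
  url: "https://dashscope.aliyuncs.com/compatible-mode/v1"
```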

Text-to-SQL SOTA model

We recommend XiYanSQL-qwencoder-32B (https://github.com/XGenerationLab/XiYanSQL-QwenCoder), the SOTA model in text-to-SQL; see the Bird benchmark. There are two ways to use the model; you can use either of them: (1) ModelScope, (2) Alibaba Cloud DashScope.
(1) ModelScope version
You need to apply for an API-Inference key from ModelScope (https://www.modelscope.cn/docs/model-service/API-Inference/intro). Then you can use the following config:
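A sketch of the ModelScope API-Inference setup; the model ID and endpoint below follow ModelScope's API-Inference conventions and are assumptions to be checked against our model page:

```yaml
model:
  name: "XGenerationLab/XiYanSQL-QwenCoder-32B-2412"   # ModelScope model ID (assumption)
  key: "YOUR_MODELSCOPE_API_KEY"
  url: "https://api-inference.modelscope.cn/v1/"       # ModelScope API-Inference endpoint (assumption)
```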
Read our model description for more details.
(2) DashScope version
We have deployed the model on Alibaba Cloud DashScope, so you need to set the following environment variables. Send your email to godot.lzl@alibaba-inc.com to get a key; in the email, please attach the following information:
We will send you a key based on your email, which you can fill into the yml file. The key expires after 1 month, 200 queries, or other legal restrictions.
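A sketch of the corresponding yml entry; the model name and service url below are placeholders only (the actual endpoint is expected to come with the key we send):

```yaml
model:
  name: "xiyansql-qwencoder-32b"                              # placeholder model name
  key: "THE_KEY_WE_SEND_YOU"
  url: "https://<dashscope-endpoint-provided-with-the-key>"   # placeholder
```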
Note: this model service is for trial only; if you need to use it in production, please contact us.
Alternatively, you can deploy the XiYanSQL-qwencoder-32B model on your own server.

Local Model

Note: the local model is slow (about 12 seconds per query on my MacBook). If you need a stable and fast service, we still recommend using the ModelScope version.
To run xiyan_mcp_server in local mode, you need:
  1. a PC/Mac with at least 16 GB of RAM
  2. 6 GB of disk space
Step 1: Install the additional Python packages:
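A plausible set of extra packages for serving the model locally; the exact packages and versions are assumptions, so check the repository's instructions:

```shell
pip install flask modelscope torch transformers accelerate
```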
Step 2 (optional): Manually download the model. We recommend xiyansql-qwencoder-3b. You can download the model manually with:
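A sketch using the ModelScope CLI; the exact model ID is an assumption based on the model family name:

```shell
modelscope download --model XGenerationLab/XiYanSQL-QwenCoder-3B-2502
```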
It will take about 6 GB of disk space.
Step 3: Download the script (src/xiyan_mcp_server/local_xiyan_server.py) and run the server.
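Assuming the script lives at the path above on the repository's main branch, one way to fetch and run it is:

```shell
wget https://raw.githubusercontent.com/XGenerationLab/xiyan_mcp_server/main/src/xiyan_mcp_server/local_xiyan_server.py
python local_xiyan_server.py
```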
The server will be running on http://localhost:5090/
Step 4: Prepare the config and run xiyan_mcp_server. The config.yml should look like this:
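A sketch of the local-mode config: the model name mirrors the one downloaded above and the url points at the local server from Step 3; local mode needs no external API key, so key is only a placeholder (whether the field is still required is an assumption):

```yaml
model:
  name: "xiyansql-qwencoder-3b"
  key: "KEY"                       # not used in local mode; placeholder
  url: "http://localhost:5090"     # the local server started in Step 3
database:
  host: "localhost"
  port: 3306
  user: "root"
  password: "your_password"
  database: "your_database"
```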
The local mode is now ready.

Database Configuration

host, port, user, password, and database are the connection details of the database.
You can use a local or any remote database. We currently support MySQL and PostgreSQL (more dialects coming soon).

MySQL
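A minimal sketch of the database section for MySQL, using the connection fields described above (3306 is MySQL's default port; all values are placeholders):

```yaml
database:
  host: "localhost"
  port: 3306
  user: "root"
  password: "your_password"
  database: "your_database"
```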

PostgreSQL

Step 1: Install the Python packages needed for PostgreSQL:
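PostgreSQL support presumably relies on a standard Python driver such as psycopg2; treat the exact package as an assumption and check the repository:

```shell
pip install psycopg2
```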
Step 2: prepare the config.yml like this:
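A sketch of the database section for PostgreSQL, using the same fields as MySQL plus the dialect switch described below (5432 is PostgreSQL's default port; values are placeholders):

```yaml
database:
  dialect: "postgresql"
  host: "localhost"
  port: 5432
  user: "postgres"
  password: "your_password"
  database: "your_database"
```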
Note that dialect should be set to postgresql for PostgreSQL.

Launch

Claude Desktop

Add this to your Claude Desktop config file; see the Claude Desktop config example (https://github.com/XGenerationLab/xiyan_mcp_server/blob/main/imgs/claude_desktop.jpg):
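A sketch of the Claude Desktop entry, combining the full Python path discussed below with the YML environment variable that points at your config file; the server name key is arbitrary and the exact layout is an assumption:

```json
{
  "mcpServers": {
    "xiyan-mcp-server": {
      "command": "/xxx/python",
      "args": ["-m", "xiyan_mcp_server"],
      "env": {
        "YML": "path/to/your/config.yml"
      }
    }
  }
}
```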
Please note that the Python command here requires the complete path to the Python executable (`/xxx/python`); otherwise the Python interpreter cannot be found. You can determine this path with `which python`. The same applies to the other applications below.

Cline

Prepare the config in the same way as for Claude Desktop.

Goose

Add the following command in the config; see the Goose config example (https://github.com/XGenerationLab/xiyan_mcp_server/blob/main/imgs/goose.jpg):
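Based on the same launch pattern (full Python path plus the YML variable), the command would look something like:

```shell
env YML=path/to/your/config.yml /xxx/python -m xiyan_mcp_server
```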

Cursor

Use the same command as for Goose.

Witsy

Add the following command:
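Following the same launch pattern; the YML path is supplied through the env entry described next:

```shell
/xxx/python -m xiyan_mcp_server
```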
Add an env entry: the key is YML and the value is the path to your yml file. See the Witsy config example (https://github.com/XGenerationLab/xiyan_mcp_server/blob/main/imgs/witsy.jpg).

It Does Not Work!

Contact us: Ding Group (https://github.com/XGenerationLab/xiyan_mcp_server/blob/main/imgs/dinggroup_out.png), or follow me on Weibo (https://weibo.com/u/2540915670).

Citation

If you find our work helpful, feel free to cite us.
