Simple way to expose your database to AI agents via the MCP or OpenAPI 3.1 protocols.
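For example, the quick-start below is a hedged sketch: it assumes the published image is `ghcr.io/centralmind/gateway`, that the default port is 9090, and that `start` accepts a `--connection-string` flag; check the launching guide linked further down for the exact, current syntax.

```shell
# Hedged sketch — image tag, port, and flags are assumptions; verify against the launching guide.
docker run -p 9090:9090 \
  ghcr.io/centralmind/gateway:latest start \
  --connection-string "postgres://my_user:my_pass@db-host:5432/mydb"
```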
This will run an API for you:
Which you can use inside your AI Agent:
(Screenshot: raw MCP server setup in Cursor — mcp-raw-cursor-setup.png)
Gateway will generate an AI-optimized API.
Why Centralmind/Gateway
AI agents and LLM-powered applications need fast, secure access to data, but traditional APIs and databases aren't built for this purpose. We're building an API layer that automatically generates secure, LLM-optimized APIs for your structured data.
Our solution:
- Filters out PII and sensitive data to ensure compliance with GDPR, CPRA, SOC 2, and other regulations
- Adds traceability and auditing capabilities, ensuring AI applications aren't black boxes and security teams maintain control
- Optimizes for AI workloads, supporting the Model Context Protocol (MCP) with enhanced meta information to help AI agents understand APIs, along with built-in caching and security features
Our primary users are companies deploying AI agents for customer support and analytics, where they need models to access data without direct SQL access to databases, eliminating security, compliance, and performance risks.
Features
- **Automatic API Generation** - Creates APIs automatically using an LLM, based on table schema and sampled data
- **Multiple Protocol Support** - Serves APIs as REST or as an MCP server, including SSE mode
- **API Documentation** - Auto-generated Swagger documentation and OpenAPI 3.1.0 specification
- **PII Protection** - Implements a <a href="https://docs.centralmind.ai/plugins/pii_remover/">regex plugin</a> or <a href="https://docs.centralmind.ai/plugins/presidio_anonymizer/">Microsoft Presidio plugin</a> for PII and sensitive data redaction
- **Flexible Configuration** - Easily extensible via YAML configuration and a plugin system
- **Deployment Options** - Run as a binary or Docker container, with a ready-to-use <a href="https://docs.centralmind.ai/helm/gateway/">Helm chart</a>
- **Local & On-Premises** - Support for <a href="https://docs.centralmind.ai/providers/local-models/">self-hosted LLMs</a> through configurable AI endpoints and models
- **Row-Level Security (RLS)** - Fine-grained data access control using <a href="https://docs.centralmind.ai/plugins/lua_rls/">Lua scripts</a>
- **Authentication Options** - Built-in support for <a href="https://docs.centralmind.ai/plugins/api_keys/">API keys</a> and <a href="https://docs.centralmind.ai/plugins/oauth/">OAuth</a>
- **Comprehensive Monitoring** - Integration with <a href="https://docs.centralmind.ai/plugins/otel/">OpenTelemetry (OTel)</a> for request tracking and audit trails
- **Performance Optimization** - Implements time-based and <a href="https://docs.centralmind.ai/plugins/lru_cache/">LRU caching</a> strategies
How it Works
1. Connect & Discover
Gateway connects to your structured databases, such as PostgreSQL, automatically analyzes the schema and data samples,
and generates an optimized API structure based on your prompt. The LLM is used only at the discovery stage to produce the API configuration.
The tool uses AI providers to generate that configuration while ensuring security through PII detection.
2. Deploy
Gateway supports multiple deployment options: a standalone binary, Docker, or <a href="https://docs.centralmind.ai/example/k8s/">Kubernetes</a>.
Check our <a href="https://docs.centralmind.ai/docs/content/getting-started/launching-api/">launching guide</a> for detailed
instructions. The system uses YAML configuration and plugins for easy customization.
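For Kubernetes specifically, deployment with the Helm chart looks roughly like the sketch below; the repository URL, chart name, and value keys here are placeholders, so take the real ones from the Helm chart documentation linked in the Features section.

```shell
# Placeholder values only — repo URL, chart name, and value keys are illustrative;
# see https://docs.centralmind.ai/helm/gateway/ for the actual chart usage.
helm repo add centralmind https://example.com/centralmind-charts   # hypothetical URL
helm install gateway centralmind/gateway \
  --set-file gateway.config=gateway.yaml                            # hypothetical value key
```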
3. Use & Integrate
Access your data through REST APIs or Model Context Protocol (MCP) with built-in security features.
Gateway seamlessly integrates with AI models and applications like <a href="https://docs.centralmind.ai/docs/content/integration/langchain/">LangChain</a>,
<a href="https://docs.centralmind.ai/docs/content/integration/chatgpt/">OpenAI</a> and
<a href="https://docs.centralmind.ai/docs/content/integration/claude-desktop/">Claude Desktop</a> using function calling
or <a href="https://docs.centralmind.ai/docs/content/integration/cursor/">Cursor</a> through MCP. You can also <a href="https://docs.centralmind.ai/plugins/otel/">set up telemetry</a> to a local or remote destination in the OTel format.
Once logged in to Google AI Studio, you can create an API key in the API section. The free tier includes a generous monthly token allocation, making it suitable for development and testing.
Configure AI provider authorization. For Google Gemini, set an API key.
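For example, assuming the Gemini provider reads the standard `GEMINI_API_KEY` environment variable:

```shell
# Assumption: the gateway's Gemini provider picks the key up from GEMINI_API_KEY.
export GEMINI_API_KEY='your-api-key'
```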
Run the discovery command:
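A typical run looks roughly like the sketch below; the flag names are best-effort assumptions, so confirm them with `./gateway discover --help`.

```shell
# Sketch of a discovery run — flag names are assumptions, verify with --help.
./gateway discover \
  --ai-provider gemini \
  --connection-string "postgres://my_user:my_pass@localhost:5432/mydb" \
  --prompt "Generate a read-only API for analytics dashboards"
```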
Monitor the generation process:
Review the generated configuration in gateway.yaml:
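As an illustration of the shape to expect (field names below are illustrative, not authoritative; the source of truth is whatever the discovery step actually wrote):

```yaml
# Illustrative structure only — review the gateway.yaml generated by discovery.
api:
  name: Awesome API
  version: "1.0"
database:
  type: postgres
  connection: "postgres://my_user:my_pass@localhost:5432/mydb"
  tables:
    - name: payments
      endpoints:
        - http_method: GET
          http_path: /payments
          summary: List payments with optional filters
          query: SELECT * FROM payments LIMIT 100
```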
Running the API
Run locally
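Assuming `start` takes the generated configuration via a `--config` flag:

```shell
# Assumption: `start` accepts --config pointing at the discovery output.
./gateway start --config gateway.yaml
```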
Docker Compose
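A minimal compose file might look like the sketch below; the image name, port, mount path, and command are assumptions, so adapt them to your environment.

```yaml
# Hedged sketch — image, port, mount path, and command are assumptions.
services:
  gateway:
    image: ghcr.io/centralmind/gateway:latest
    ports:
      - "9090:9090"
    volumes:
      - ./gateway.yaml:/app/gateway.yaml
    command: ["start", "--config", "/app/gateway.yaml"]
```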
MCP Protocol Integration
Gateway implements the MCP protocol for seamless integration with Claude and other tools. For detailed setup instructions, see our <a href="https://docs.centralmind.ai/docs/content/integration/claude-desktop/">Claude integration guide</a>.
Build the gateway binary:
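Assuming a standard Go toolchain and a checkout of the repository:

```shell
# Build from source with the Go toolchain.
git clone https://github.com/centralmind/gateway.git
cd gateway
go build -o gateway .
```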
Add Gateway to your Claude Desktop tool configuration:
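The entry in `claude_desktop_config.json` looks roughly like the sketch below; the `mcp-stdio` argument is an assumption about how Gateway is launched as a stdio MCP server, so confirm it in the Claude integration guide linked above.

```json
{
  "mcpServers": {
    "gateway": {
      "command": "/path/to/gateway",
      "args": ["start", "--config", "/path/to/gateway.yaml", "mcp-stdio"]
    }
  }
}
```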
Roadmap
The roadmap is always subject to change and will depend heavily on user feedback. At the moment,
we are planning the following features:
Database and Connectivity
- **Extended Database Integrations** - Redshift, S3 (Iceberg and Parquet), Oracle DB, Microsoft SQL Server, Elasticsearch
- **SSH Tunneling** - ability to use a jump host or SSH bastion to tunnel connections
Enhanced Functionality
- **Advanced Query Capabilities** - complex filtering syntax and aggregation functions as parameters
- **Enhanced MCP Security** - API key and OAuth authentication
Platform Improvements
- **Schema Management** - automated schema evolution and API versioning