Overview
- **Database Health** - analyze index health, connection utilization, buffer cache, vacuum health, sequence limits, replication lag, and more.
- **Index Tuning** - explore thousands of possible indexes to find the best solution for your workload, using industrial-strength algorithms.
- **Query Plans** - validate and optimize performance by reviewing EXPLAIN plans and simulating the impact of hypothetical indexes.
- **Schema Intelligence** - context-aware SQL generation based on a detailed understanding of the database schema.
- **Safe SQL Execution** - configurable access control, including support for read-only mode and safe SQL parsing, making it usable for both development and production.
Demo
- Challenge: We generated a movie app using an AI assistant, but the SQLAlchemy ORM code ran painfully slow.
- Solution: Using Postgres MCP Pro with Cursor, we fixed the performance issues in minutes.
- Fixed performance - including ORM queries, indexing, and caching.
- Fixed a broken page - by prompting the agent to explore the data, fix queries, and add related content.
- Improved the top movies - by exploring the data and fixing the ORM query to surface more relevant results.
Quick Start
Prerequisites
- Access credentials for your database.
- Docker or Python 3.12 or higher.
Access Credentials
You can verify your access credentials using psql or a GUI tool such as pgAdmin; a quick check is sketched below.
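A minimal sketch, assuming a local database (substitute your own host, port, database, and credentials):

```bash
# Confirm the credentials work before configuring Postgres MCP Pro
psql "postgresql://username:password@localhost:5432/dbname" -c "SELECT version();"
```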
Docker or Python
You will need either Docker or Python 3.12 or higher on the machine where Postgres MCP Pro will run.
Installation
Option 1: Using Docker
Option 2: Using Python
If you have pipx installed, you can install Postgres MCP Pro with pipx. You can also install it with uv; to install uv, see the uv installation instructions.
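A sketch of both options, assuming the Docker image and Python package are published under the project's name (check the project page for the exact names):

```bash
# Option 1: pull the Docker image (image name assumed)
docker pull crystaldba/postgres-mcp

# Option 2: install the Python package with pipx (package name assumed)
pipx install postgres-mcp
# ...or with uv
uv tool install postgres-mcp
```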
Configure Your AI Assistant
Claude Desktop Configuration
- MacOS:
~/Library/Application Support/Claude/claude_desktop_config.json
- Windows:
%APPDATA%/Claude/claude_desktop_config.json
You can also use the Settings menu item in Claude Desktop to locate the configuration file. Add Postgres MCP Pro to the `mcpServers` section of the configuration file. If you run the server via Docker, references to `localhost` are rewritten automatically to work from inside of the container:
- MacOS/Windows: Uses `host.docker.internal` automatically
- Linux: Uses `172.17.0.1` or the appropriate host address automatically
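A minimal sketch of an `mcpServers` entry for a Python installation; the command name and the `DATABASE_URI` environment variable are assumptions, so adjust them to match your installation:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "postgres-mcp",
      "args": ["--access-mode=unrestricted"],
      "env": {
        "DATABASE_URI": "postgresql://username:password@localhost:5432/dbname"
      }
    }
  }
}
```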
Replace `postgresql://...` with your Postgres database connection URI.
- Unrestricted Mode: Allows full read/write access to modify data and schema. It is suitable for development environments.
- Restricted Mode: Limits operations to read-only transactions and imposes constraints on resource utilization (presently only execution time). It is suitable for production environments.
To enable restricted mode, replace `--access-mode=unrestricted` with `--access-mode=restricted` in the configuration examples above.
Other MCP Clients
- If you are using Cursor, navigate from the Command Palette to Cursor Settings, then open the MCP tab to access the configuration file.
- If you are using Windsurf, navigate from the Command Palette to the Open Windsurf Settings Page to access the configuration file.
- If you are using Goose, run `goose configure`, then select Add Extension.
SSE Transport
To use SSE transport, run Postgres MCP Pro with the `--transport=sse` option, then point your client at the server's SSE endpoint. In Cursor's mcp.json or Cline's cline_mcp_settings.json the entry looks like the sketch below; Windsurf's mcp_config.json is slightly different.
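A sketch of the Cursor/Cline entry; the host and port are assumptions based on a locally running server:

```json
{
  "mcpServers": {
    "postgres": {
      "url": "http://localhost:8000/sse"
    }
  }
}
```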
Postgres Extension Installation (Optional)
To enable all features, install the `pg_stat_statements` and `hypopg` extensions on your database.
- The `pg_stat_statements` extension allows Postgres MCP Pro to analyze query execution statistics. For example, this allows it to understand which queries are running slow or consuming significant resources.
- The `hypopg` extension allows Postgres MCP Pro to simulate the behavior of the Postgres query planner after adding indexes.
Installing extensions on AWS RDS, Azure Database for PostgreSQL, or Google Cloud SQL
On managed cloud platforms, the `pg_stat_statements` and `hypopg` extensions should already be available on the system.
In this case, you can just run `CREATE EXTENSION` commands using a role with sufficient privileges.
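For example (on AWS RDS, the `rds_superuser` role typically has the needed privileges):

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS hypopg;
```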
Installing extensions on self-managed Postgres
To use the `pg_stat_statements` extension, you must ensure that it is listed in `shared_preload_libraries` in the Postgres configuration file.
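The relevant postgresql.conf line is sketched below; note that changing this setting requires a server restart to take effect:

```
shared_preload_libraries = 'pg_stat_statements'
```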
The `hypopg` extension may also require additional system-level installation (e.g., via your package manager) because it does not always ship with Postgres.
Usage Examples
Get Database Health Overview
Check the health of my database and identify any issues.
Analyze Slow Queries
What are the slowest queries in my database? And how can I speed them up?
Get Recommendations On How To Speed Things Up
My app is slow. How can I make it faster?
Generate Index Recommendations
Analyze my database workload and suggest indexes to improve performance.
Optimize a Specific Query
Help me optimize this query: SELECT * FROM orders JOIN customers ON orders.customer_id = customers.id WHERE orders.created_at > '2023-01-01';
MCP Server API
Related Projects
- Query MCP. An MCP server for Supabase Postgres with a three-tier safety architecture and Supabase management API support.
- PG-MCP. An MCP server for PostgreSQL with flexible connection options, explain plans, extension context, and more.
- Reference PostgreSQL MCP Server. A simple MCP Server implementation exposing schema information as MCP resources and executing read-only queries.
- Supabase Postgres MCP Server. This MCP Server provides Supabase management features and is actively maintained by the Supabase community.
- Nile MCP Server. An MCP server providing access to the management API for Nile's multi-tenant Postgres service.
- Neon MCP Server. An MCP server providing access to the management API for Neon's serverless Postgres service.
- Wren MCP Server. Provides a semantic engine powering business intelligence for Postgres and other databases.
- Aiven Database Optimizer. A tool that provides holistic database workload analysis, query optimizations, and other performance improvements.
- dba.ai. An AI-powered database administration assistant that integrates with GitHub to resolve code issues.
- pgAnalyze. A comprehensive monitoring and analytics platform for identifying performance bottlenecks, optimizing queries, and real-time alerting.
- Postgres.ai. An interactive chat experience combining an extensive Postgres knowledge base and GPT-4.
- Xata Agent. An open-source AI agent that automatically monitors database health, diagnoses issues, and provides recommendations using LLM-powered reasoning and playbooks.
- Dexter. A tool for generating and testing hypothetical indexes on PostgreSQL.
- PgHero. A performance dashboard for Postgres, with recommendations. Postgres MCP Pro incorporates health checks from PgHero.
- PgTune. Heuristics for tuning Postgres configuration.
Frequently Asked Questions
Roadmap
Technical Notes
Index Tuning
- Identify SQL queries in need of tuning. If you know you are having a problem with a specific SQL query, you can provide it. Postgres MCP Pro can also analyze the workload to identify index tuning targets. To do this, it relies on the `pg_stat_statements` extension, which records the runtime and resource consumption of each query. A query is a candidate for index tuning if it is a top resource consumer, either on a per-execution basis or in aggregate. At present, we use execution time as a proxy for cumulative resource consumption, but it may also make sense to look at specific resources, e.g., the number of blocks accessed or the number of blocks read from disk. The `analyze_query_workload` tool focuses on slow queries, using the mean time per execution with thresholds for execution count and mean execution time. Agents may also call `get_top_queries`, which accepts a parameter choosing between mean and total execution time, then pass the resulting queries to `analyze_query_indexes` to get index recommendations. Sophisticated index tuning systems use "workload compression" to produce a representative subset of queries that reflects the characteristics of the workload as a whole, reducing the problem size for downstream algorithms. Postgres MCP Pro performs a limited form of workload compression by normalizing queries so that those generated from the same template appear as one. It weights each query equally, a simplification that works when the benefits of indexing are large.
- Generate candidate indexes. Once we have a list of SQL queries that we want to improve through indexing, we generate a list of indexes that we might want to add. To do this, we parse the SQL and identify any columns used in filters, joins, grouping, or sorting. To generate all possible indexes, we need to consider combinations of these columns, because Postgres supports multicolumn indexes. In the present implementation, we include only one permutation of each possible multicolumn index, selected at random. We make this simplification to reduce the search space, because permutations often have equivalent performance. However, we hope to improve in this area.
- Search for the optimal index configuration. Our objective is to find the combination of indexes that optimally balances the performance benefits against the costs of storing and maintaining those indexes. We estimate the performance improvement using the "what if?" capabilities provided by the `hypopg` extension (see the sketch after this list). This simulates how the Postgres query optimizer will execute a query after the addition of indexes, and reports changes based on the actual Postgres cost model. One challenge is that generating query plans generally requires knowledge of the specific parameter values used in the query. Query normalization, which is necessary to reduce the number of queries under consideration, removes parameter constants. Parameter values provided via bind variables are similarly not available to us. To address this problem, we produce realistic constants that we can provide as parameters by sampling from the table statistics. In version 16, Postgres added generic explain plan functionality, but it has limitations, for example around `LIKE` clauses, which our implementation does not have. Search strategy is critical because evaluating all possible index combinations is feasible only in simple situations. This is what most sets apart the various indexing approaches. Adapting the approach of Microsoft's Anytime algorithm, we employ a greedy search strategy: find the best one-index solution, then find the best index to add to that to produce a two-index solution, and so on. The search terminates when the time budget is exhausted or when a round of exploration fails to produce any gains above the minimum improvement threshold of 10%.
- Cost-benefit analysis. When presented with two indexing alternatives, one of which produces better performance but requires more space, how do we decide which to choose? Traditionally, index advisors ask for a storage budget and optimize performance with respect to that budget. We also take a storage budget, but we perform a cost-benefit analysis throughout the optimization. We frame this as the problem of selecting a point along the Pareto front: the set of choices for which improving one quality metric necessarily worsens another. In an ideal world, we might assess the cost of storage and the benefit of improved performance in monetary terms. However, there is a simpler and more practical approach: look at the changes in relative terms. Most people would agree that a 100x performance improvement is worth it, even if the storage cost is 2x. In our implementation, we use a configurable parameter to set this threshold. By default, we require the change in the log (base 10) of the performance improvement to be 2x the change in the log of the space cost. This works out to allowing a maximum 10x increase in space for a 100x performance improvement.
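The following SQL sketches the two building blocks described above. It assumes `pg_stat_statements` and `hypopg` are installed; the `orders` table and its columns are illustrative, not part of any real schema:

```sql
-- 1. Find top resource-consuming queries (Postgres 13+ column names)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- 2. Simulate a candidate index with hypopg and check whether the
--    planner would use it (plain EXPLAIN sees hypothetical indexes)
SELECT * FROM hypopg_create_index(
  'CREATE INDEX ON orders (customer_id, created_at)');
EXPLAIN SELECT * FROM orders
WHERE customer_id = 42 AND created_at > '2023-01-01';
SELECT hypopg_reset();  -- discard all hypothetical indexes
```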
Database Health
- Index Health. Looks for unused indexes, duplicate indexes, and indexes that are bloated. Bloated indexes make inefficient use of database pages. Postgres autovacuum cleans up index entries pointing to dead tuples, and marks the entries as reusable. However, it does not compact the index pages and, eventually, index pages may contain few live tuple references.
- Buffer Cache Hit Rate. Measures the proportion of database reads served from the buffer cache rather than disk. A low buffer cache hit rate is worth investigating, as it is often not cost-optimal and leads to degraded application performance (see the example queries after this list).
- Connection Health. Checks the number of connections to the database and reports on their utilization. The biggest risk is running out of connections, but a high number of idle or blocked connections can also indicate issues.
- Vacuum Health. Vacuum is important for many reasons. A critical one is preventing transaction id wraparound, which can cause the database to stop accepting writes. The Postgres multi-version concurrency control (MVCC) mechanism requires a unique transaction id for each transaction. However, because Postgres uses 32-bit transaction ids, it needs to reuse transaction ids after a maximum of about 2 billion transactions. To do this, it "freezes" the transaction ids of historical transactions, setting them all to a special value that indicates the distant past. When records first go to disk, they are written with visibility information that references a range of transaction ids. Before reusing those transaction ids, Postgres must update any such on-disk records, "freezing" them to remove the references to the transaction ids about to be reused. This check looks for tables that require vacuuming to prevent transaction id wraparound.
- Replication Health. Checks replication health by monitoring lag between primary and replicas, verifying replication status, and tracking usage of replication slots.
- Constraint Health. During normal operation, Postgres rejects any transactions that would cause a constraint violation. However, invalid constraints may occur after loading data or in recovery scenarios. This check looks for any invalid constraints.
- Sequence Health. Looks for sequences that are at risk of exceeding their maximum value.
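As a rough illustration of what two of these checks measure, here are hand-written queries against the Postgres statistics views; the health tool's actual queries and thresholds may differ:

```sql
-- Buffer cache hit rate across all databases
SELECT sum(blks_hit)::float / nullif(sum(blks_hit) + sum(blks_read), 0)
  AS cache_hit_rate
FROM pg_stat_database;

-- Transaction id age per database; values approaching ~2 billion
-- indicate wraparound risk
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```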
Postgres Client Library
Postgres MCP Pro connects to Postgres using psycopg3. The main alternative, asyncpg, has the advantage of carrying no libpq dependency. Asyncpg is also probably faster than psycopg3, but we have not validated this ourselves. Older benchmarks report a larger performance gap, suggesting that the newer psycopg3 has closed the gap as it matures. On balance, we chose psycopg3 over asyncpg. We remain open to revising this decision in the future.
Connection Configuration
Schema Information
Protected SQL Execution
In restricted mode, Postgres MCP Pro runs SQL inside a read-only transaction. On its own, this is not enough: a client could escape the protection by issuing a COMMIT or ROLLBACK statement and then beginning a new transaction. To guard against this, Postgres MCP Pro parses the SQL before execution and rejects anything containing commit or rollback statements (see the sketch below).
Helpfully, the popular Postgres stored procedure languages, including PL/pgSQL and PL/Python, do not allow for COMMIT or ROLLBACK statements.
If you have unsafe stored procedure languages enabled on your database, then our read-only protections could be circumvented.
- "Unrestricted" provides maximum flexibility. It is suitable for development environments where speed and flexibility are paramount, and where there is no need to protect valuable or sensitive data.
- "Restricted" provides a balance between flexibility and safety. It is suitable for production environments where the database is exposed to untrusted users, and where it is important to protect valuable or sensitive data.
Postgres MCP Pro Development
Local Development Setup
- Install uv:
- Clone the repository:
- Install dependencies:
- Run the server:
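A consolidated sketch of these steps; the repository URL and entry-point name are assumptions, so see the project page for the canonical commands:

```bash
# 1. Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# 2. Clone the repository (URL assumed)
git clone https://github.com/crystaldba/postgres-mcp.git
cd postgres-mcp
# 3. Install dependencies
uv sync
# 4. Run the server against a local database (URI is a placeholder)
uv run postgres-mcp "postgresql://username:password@localhost:5432/dbname"
```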