
MCP Prompts Server

An MCP server for managing prompts and templates with project orchestration capabilities. Part of the Model Context Protocol ecosystem.
This server provides a simple way to store, retrieve, and apply templates for AI prompts, making it easier to maintain consistent prompting patterns across your AI applications.

Table of Contents

  • Features
  • Installation
  • Configuration
  • Usage
  • Prompt Format
  • Multi-Format Prompt Support
  • Storage Adapters
  • Docker Deployment
  • Development
  • Release Process
  • Changelog
  • Best Practices
  • License
  • Architecture
  • MCP Resources Integration
  • MCP Server Integration
  • Server-Sent Events (SSE) Support

Implementation Updates

The MCP Prompts Server has been refactored to use the new registration methods from MCP SDK version 1.6.1:
  • server.resource for defining resource endpoints (e.g., for prompts and templates).
  • server.tool for registering tool operations (e.g., add, get, update, list, delete, and apply_template).
  • server.prompt for prompt-specific operations (e.g., "review-code").
These changes simplify the codebase, improve maintainability, and ensure better compatibility with the latest MCP SDK.

Features

  • Store and retrieve prompts
  • Create and use templates with variables
  • List prompts with filtering by tags
  • Apply variables to templates
  • Multiple storage backends (file system, PostgreSQL, and MDC format)
  • Easy to use with Claude and other AI assistants
  • Project orchestration capabilities
  • Health check endpoints

Installation

Using npx (recommended)
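A minimal sketch, assuming the package is published as `@sparesparrow/mcp-prompts` (the exact npm package name may differ):

```bash
npx -y @sparesparrow/mcp-prompts
```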

Global installation
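Alternatively, install the package globally (same package-name assumption as above; the binary name is also an assumption):

```bash
npm install -g @sparesparrow/mcp-prompts
mcp-prompts
```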

Using Docker
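The Docker Hub image listed later in this README can be run directly. The port mapping and container data path below are illustrative:

```bash
docker run -d --name mcp-prompts \
  -p 3003:3003 \
  -v mcp-prompts-data:/app/data \
  sparesparrow/mcp-prompts:latest
# /app/data is an assumed container path; check the image documentation for the actual location
```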

Verifying Installation

After installation, you can verify that the server is working by:
  1. Opening Claude Desktop
  2. Typing "/" in the chat input to see if prompts from the server appear
  3. Testing with a simple tool call:
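For example, ask Claude to call the list_prompts tool. The JSON below illustrates the shape of such a call; it is not an exact protocol message:

```json
{
  "tool": "list_prompts",
  "arguments": {}
}
```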

Configuration

The server can be configured using environment variables:
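A minimal example `.env` file. STORAGE_TYPE and ENABLE_RESOURCES are referenced elsewhere in this README; the remaining variable names are illustrative placeholders and may differ in your version:

```bash
# Storage backend: file, postgres, or mdc (see Storage Adapters below)
STORAGE_TYPE=file

# Enable MCP resource integration (see MCP Resources Integration below)
ENABLE_RESOURCES=true

# Illustrative only -- check the published variable reference for exact names
PROMPTS_DIR=./prompts
PORT=3003
LOG_LEVEL=info
```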

PostgreSQL settings (required if STORAGE_TYPE=postgres)


MDC settings (required if STORAGE_TYPE=mdc)


Usage

Using with Claude

In the Claude Desktop app, you can configure the MCP Prompts server in your claude_desktop_config.json:
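An example entry, assuming the npx invocation shown above; adjust the server key, package name, and arguments to match your installation:

```json
{
  "mcpServers": {
    "prompt-manager": {
      "command": "npx",
      "args": ["-y", "@sparesparrow/mcp-prompts"]
    }
  }
}
```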

Available Tools

The MCP Prompts server provides the following tools:
  • add_prompt: Add a new prompt
  • get_prompt: Get a prompt by ID
  • update_prompt: Update an existing prompt
  • list_prompts: List all prompts
  • delete_prompt: Delete a prompt by ID
  • apply_template: Apply variables to a prompt template

API Usage Examples

Listing Available Prompts

To see what prompts are available:
To filter by tags:
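Illustrative tool-call payloads for both cases; the exact argument names, such as `tags`, are assumptions:

```json
{ "tool": "list_prompts", "arguments": {} }
```

```json
{ "tool": "list_prompts", "arguments": { "tags": ["development"] } }
```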

Getting a Specific Prompt

To retrieve a specific prompt by ID:
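An illustrative call; the prompt ID used here is hypothetical:

```json
{ "tool": "get_prompt", "arguments": { "id": "code-review" } }
```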

Using a Template Prompt

To apply variables to a template prompt:
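An illustrative call; the template ID and the `variables` argument name are assumptions:

```json
{
  "tool": "apply_template",
  "arguments": {
    "id": "development-system-prompt",
    "variables": { "project_name": "MyApp", "language": "TypeScript" }
  }
}
```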

Managing Prompts

Adding a New Prompt

To add a new prompt:
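An illustrative call; the field names and the `{{variable}}` placeholder syntax are assumptions based on the features described above:

```json
{
  "tool": "add_prompt",
  "arguments": {
    "name": "Bug Report Analysis",
    "content": "Analyze the following bug report: {{bug_report}}",
    "isTemplate": true,
    "tags": ["debugging", "analysis"]
  }
}
```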

Editing an Existing Prompt

To edit an existing prompt:
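An illustrative call; the ID and updatable fields are assumptions:

```json
{
  "tool": "update_prompt",
  "arguments": {
    "id": "bug-report-analysis",
    "content": "Analyze the following bug report and suggest fixes: {{bug_report}}"
  }
}
```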

Using Prompts in Your Workflow

Development Workflow Example

When starting work on a new feature:
  1. Request the development system prompt template
  2. Fill in the template with your project details
  3. Use the resulting system prompt to guide Claude's assistance

Code Review Example

When reviewing code:
  1. Request the code review template
  2. Provide the code to be reviewed
  3. Claude will provide a structured review

Prompt Format

A prompt has the following structure:
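An illustrative prompt object; the field names below are typical for this server but are not guaranteed to match the exact schema:

```json
{
  "id": "development-system-prompt",
  "name": "Development System Prompt",
  "description": "System prompt for guiding feature development",
  "content": "You are assisting with {{project_name}}, written in {{language}}.",
  "isTemplate": true,
  "variables": ["project_name", "language"],
  "tags": ["development", "system"],
  "createdAt": "2025-03-14T12:00:00Z",
  "updatedAt": "2025-03-14T12:00:00Z",
  "version": 1
}
```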

Multi-Format Prompt Support

The MCP Prompts Server includes a powerful MutablePrompt interface that allows prompts to be converted between multiple formats:
  • JSON Format: Standard internal format used by the server
  • MDC Format: Cursor Rules Markdown format (.mdc files)
  • PGAI Format: Format with embedding support for PostgreSQL AI
  • Template Format: Dynamic format with variable placeholders

Converting Between Formats

The MutablePrompt interface provides methods to convert prompts between these formats:
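A sketch of what format conversion can look like. The interface and method names below are assumptions, not the package's published API:

```typescript
// Sketch only: names are assumptions, not the actual MutablePrompt API.
interface MutablePrompt {
  toJson(): string;                   // standard internal JSON format
  toMdc(): string;                    // Cursor Rules .mdc markdown
  toPgai(): Record<string, unknown>;  // PGAI row with embedding metadata
  toTemplate(): string;               // template text with variable placeholders
}

declare const prompt: MutablePrompt;  // obtained from a storage adapter

const mdcText = prompt.toMdc();
const pgaiRecord = prompt.toPgai();
```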

Applying Templates

You can easily apply variables to template prompts:

Extracting Variables

Extract variables from template content:
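A combined sketch of applying and extracting variables. The function names and the `{{variable}}` placeholder syntax are assumptions:

```typescript
// Sketch only: applyVariables/extractVariables are assumed names, not the published API.
function applyVariables(template: string, variables: Record<string, string>): string {
  // Replace each {{name}} placeholder with its value, leaving unknown placeholders intact
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => variables[name] ?? `{{${name}}}`);
}

function extractVariables(template: string): string[] {
  // Collect every placeholder name found in the template
  return [...template.matchAll(/\{\{(\w+)\}\}/g)].map(m => m[1]);
}

const template = "Review {{file_name}} for {{project_name}}.";
extractVariables(template);                                                   // ["file_name", "project_name"]
applyVariables(template, { file_name: "index.ts", project_name: "MyApp" });   // "Review index.ts for MyApp."
```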

Creating from Different Formats

You can also create prompts from various formats:

Integration with Storage Adapters

The MutablePrompt interface works seamlessly with the existing storage adapters:
This flexible format handling enables:
  1. Cross-Platform Compatibility: Use prompts in different tools and platforms
  2. Vector Search: Use PGAI format for semantic search capabilities
  3. IDE Integration: Direct compatibility with Cursor Rules
  4. Template Systems: Export templates for use in various programming languages

Storage Adapters

The server supports three types of storage adapters:
  1. File Adapter: Stores prompts as individual JSON files in a directory.
  2. PostgreSQL Adapter: Stores prompts in a PostgreSQL database.
  3. MDC Adapter: Stores prompts in Cursor Rules MDC format.
Storage types can be configured using the STORAGE_TYPE environment variable:
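For example, in your environment or `.env` file:

```bash
STORAGE_TYPE=file   # or: postgres, mdc
```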

PostgreSQL Setup

When using PostgreSQL storage, configure the following environment variables:
Alternatively, use a connection string:
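An illustrative configuration; the exact variable names may differ from what your version expects:

```bash
# Individual settings (names are illustrative)
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DATABASE=mcp_prompts
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secret

# Or a single connection string (name is illustrative)
POSTGRES_CONNECTION_STRING=postgresql://postgres:secret@localhost:5432/mcp_prompts
```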

Docker Deployment

The MCP Prompts server can be deployed using Docker in various configurations, depending on your needs.

Using the Docker Compose Manager

We provide a convenient script for managing Docker Compose configurations, making it easy to launch the server with different profiles and environments:

Purpose-Driven Container Architecture

Our Docker architecture is designed around specific use cases:
  1. Production Environment: Optimized for performance and security
  2. Development Environment: Includes hot-reloading and debugging tools
  3. Test Environment: For running automated tests
  4. PostgreSQL Integration: Adds PostgreSQL storage backend
  5. Multiple MCP Servers Integration: Connects with other MCP servers

Available Docker Images

The official MCP Prompts Docker images are available on Docker Hub:
  • Production: sparesparrow/mcp-prompts:latest
  • Development: sparesparrow/mcp-prompts:dev
  • Test: sparesparrow/mcp-prompts:test

Docker Compose Configurations

We offer several Docker Compose configurations that can be combined:

Base Deployment

This provides a basic MCP Prompts server with file-based storage.
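A typical invocation, assuming a docker-compose.yml at the repository root; the overlay file names are assumptions:

```bash
# Base deployment with file-based storage
docker compose up -d

# Combine with an overlay for PostgreSQL or development (file names are assumptions)
docker compose -f docker-compose.yml -f docker-compose.postgres.yml up -d
```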

Development Environment

Includes hot-reloading, source code mounting, and Node.js inspector for debugging.

PostgreSQL Integration

Adds PostgreSQL database and Adminer for database management.

Testing Environment

Sets up an environment optimized for running tests.

Multiple MCP Servers Integration

The MCP Prompts Server is designed to integrate seamlessly with other MCP servers in the ecosystem, providing a comprehensive solution for AI contextual needs. This integration enables advanced capabilities through the combined power of specialized servers.

Data Flow Architecture

Configuration for Multi-Server Setup

To enable integration with multiple MCP servers, you need to:
  1. Start the required MCP servers using Docker Compose:
  2. Configure your MCP client (such as Claude Desktop) with the appropriate server URLs:

Server-Specific Environment Variables

When integrating with other servers, the following environment variables can be set:

Integration Use Cases

  1. Template Variables with Memory Server:
  2. Prompt Synchronization with GitHub Server:
  3. Filesystem Management:
  4. Advanced Reasoning with Sequential Thinking:
  5. Voice Feedback with ElevenLabs:
  6. Vector Search with PostgreSQL AI:

Building and Publishing Docker Images

To build and publish Docker images:
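A typical sequence, using the image name published on Docker Hub; the Dockerfile path follows the docker/ layout mentioned in the changelog and may need adjusting:

```bash
docker build -f docker/Dockerfile -t sparesparrow/mcp-prompts:latest .
docker push sparesparrow/mcp-prompts:latest
```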

Data Persistence

By default, Docker volumes are used for persistence:
  • mcp-prompts-data: For prompt storage
  • mcp-prompts-postgres-data: For PostgreSQL data
To use a local directory instead:
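For example, bind-mount a host directory in place of the named volume; the container path is an assumption:

```bash
docker run -d --name mcp-prompts \
  -v /path/to/your/prompts:/app/data \
  sparesparrow/mcp-prompts:latest
```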

Development

Development Workflow

Setting Up Development Environment

  1. Clone the repository
  2. Install dependencies
  3. Set up environment variables: create a .env file with the necessary configuration (a combined sketch of these steps follows).
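A sketch of the setup steps, assuming the repository lives under the same GitHub account as the Docker images:

```bash
git clone https://github.com/sparesparrow/mcp-prompts.git
cd mcp-prompts
npm install
cp .env.example .env   # if an example file is provided; otherwise create .env by hand
```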

Development Commands

  • Start development server with hot reloading
  • Build the project
  • Run unit tests
  • Run integration tests
  • Test build process
  • Test Docker build
  • Build Docker image
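These commands typically map to npm scripts; the script names below are conventional assumptions and may differ from the project's package.json:

```bash
npm run dev               # start development server with hot reloading
npm run build             # compile TypeScript
npm run test:unit         # run unit tests
npm run test:integration  # run integration tests
npm run test:build        # test the build process
npm run test:docker       # test the Docker build
npm run docker:build      # build the Docker image
```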

Build Process

The build process includes several important steps:
  1. TypeScript Compilation
  2. Make Entry Point Executable
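A minimal sketch of these two steps; the output path is an assumption:

```bash
npx tsc
chmod +x dist/index.js
```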

Testing

Run the tests:
Run the MCP Inspector for testing:
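For example (the test script and entry path are assumptions; the Inspector package is the official MCP tool):

```bash
npm test

npx @modelcontextprotocol/inspector node dist/index.js
```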

Comprehensive Test Scripts

For more advanced testing options, use the provided test script:

Docker Container Health Testing

To test the health of Docker containers:
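A quick manual check against a running container; the port and path are assumptions based on the health check and SSE sections of this README:

```bash
curl http://localhost:3003/health
```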
This test verifies that the health check endpoint is working correctly when the MCP-Prompts server is running in a Docker container.

Directory Structure

The project follows a structured organization to maintain clean separation of concerns:

Release Process

Pre-Release Checklist

  • All TypeScript errors are resolved
  • Code linting passes with no errors
  • Code is properly formatted according to project standards
  • Unit tests pass
  • Integration tests pass
  • Build test passes
  • Docker build test passes
  • Package installation test passes
  • README is up-to-date with the latest features and changes
  • CHANGELOG is updated with all notable changes

Version Update

  • Update version in package.json according to semantic versioning
  • Ensure dependencies are up-to-date
  • Update any version references in documentation

Publishing

  • Create a git tag for the new version
  • Push changes and tag to GitHub
  • Publish to npm (npm publish)
  • Build and push Docker image

Post-Release Verification

  • Verify installation from npm
  • Verify package can be run with npx
  • Verify Docker image works as expected
  • Verify integration with Claude Desktop

Changelog

[1.2.20] - 2025-03-14

  • Automated version bump

[1.2.19] - 2024-03-16

Fixed

  • Fixed TypeScript errors in PostgresAdapter implementation
  • Enhanced savePrompt method to properly return the created prompt
  • Added updatePrompt method to the PostgresAdapter
  • Fixed StorageAdapter interface to include listPrompts and clearAll methods
  • Improved error handling in database-tools.ts for the clearAll method
  • Enhanced health check endpoint with more detailed information

Added

  • Added better documentation and error handling for health check endpoint

[1.2.18] - 2024-03-14

Added

  • Added HTTP server with health check endpoint
  • Added Docker container health checks
  • Added ESM module compatibility for Node.js 18-23+
  • Enhanced database tools with better error handling

Changed

  • Improved Docker build process with multi-stage builds
  • Streamlined configuration management
  • Optimized PostgreSQL adapter connection handling
  • Updated dependencies to latest versions

Fixed

  • Fixed issues with file adapter on certain file systems
  • Improved error messages for better debugging
  • Fixed template variable extraction

[1.2.0] - 2025-03-14

Changed

  • Reorganized codebase structure for better maintainability
  • Moved Docker-related files to docker/ directory
  • Moved build scripts to scripts/build/ directory
  • Moved test scripts to scripts/test/ directory
  • Updated GitHub workflows to use new file paths
  • Updated Docker Compose configuration to use new file paths
  • Added comprehensive development documentation

Added

  • Created development documentation with detailed instructions
  • Created release checklist for release preparation
  • Added CHANGELOG.md to track changes

Removed

  • Removed duplicate and redundant files
  • Removed incomplete scripts

[1.1.0] - 2024-03-01

Added

  • PGAI vector search for semantic prompt discovery
  • Support for embeddings in PostgreSQL
  • Improved prompts collection with professional templates
  • Batch processing capabilities for prompt collections

Changed

  • Enhanced prompt processing pipeline
  • Improved command-line interface with more options
  • Better error handling and validation

[1.0.0] - 2024-02-15

Added

  • Initial release of MCP Prompts Server
  • Basic prompt management capabilities (add, edit, get, list, delete)
  • Template variable substitution
  • Tag-based organization
  • File-based storage
  • Import/export functionality
  • MCP protocol compatibility

Best Practices

  1. Organize with Tags: Use tags to categorize your prompts for easier retrieval
  2. Use Templates: Create reusable templates with variables for consistent prompting
  3. Include Metadata: Add author, version, and other metadata for better organization
  4. Regular Backups: Use the backup functionality if managing critical prompts
  5. Optimize Large Collections: Use pagination when retrieving large prompt collections
  6. Use Consistent Naming: Name prompts clearly and consistently for easy discovery
  7. Tag Effectively: Use tags to organize prompts by purpose, project, or context
  8. Templatize Reusable Prompts: Create templates for frequently used prompts with variables
  9. Update Regularly: Keep your prompts up-to-date as your needs change
  10. Share with Team: Share effective prompts with your team for consistent interactions

License

MIT

Architecture

The MCP Prompts Server is designed to integrate seamlessly with other MCP servers to provide a comprehensive ecosystem for AI prompt management. Below are diagrams illustrating various integration scenarios:

Core Architecture

Multiple MCP Servers Integration

Data Flow for Prompt Management

Resource Integration Pattern

Docker Deployment Architecture

MCP Resources Integration

The MCP Prompts Server integrates with various MCP resource servers to enhance prompt capabilities. This integration allows prompts to access and incorporate external data, making AI interactions more contextual and powerful.

MCP Resources Architecture

Resource URI Format

MCP Resources are referenced using a consistent URI format:
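The general shape, as reflected in the examples below:

```
@<server-name>:<resource-path>
```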
Examples:
  • @filesystem:/path/to/file.js - Access a local file
  • @memory:session/12345 - Retrieve data from memory
  • @github:owner/repo/path/to/file - Get content from a GitHub repository
  • @sequential-thinking:analysis-1234 - Reference a reasoning chain
  • @elevenlabs:text-to-speak - Generate audio from text

Integration with Templates

Templates can be enhanced with MCP resources to create dynamic, context-aware prompts:

Available Resource Types


Resource Fallback Strategies

For robust production use, implement fallback strategies when resources are unavailable:
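A minimal TypeScript sketch of one such strategy; the resource-fetching helper and its signature are assumptions, not part of the published API:

```typescript
// Sketch: fetchResource is a hypothetical helper that resolves an @server:path URI to text.
async function resolveResource(
  uri: string,
  fetchResource: (uri: string) => Promise<string>,
  fallback = "[resource unavailable]",
): Promise<string> {
  try {
    return await fetchResource(uri);
  } catch (error) {
    // Log the failure and substitute the fallback text so the prompt can still be applied
    console.warn(`Resource ${uri} unavailable, using fallback`, error);
    return fallback;
  }
}
```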

Setting Up MCP Resource Servers

To enable resource integration, deploy the required MCP servers using Docker Compose:
For detailed setup instructions for each resource server, refer to the MCP Servers Documentation.

MCP Server Integration

The MCP Prompts Server can be integrated with other specialized MCP servers to enhance its capabilities and provide a more comprehensive solution. This section outlines how to integrate with the Mermaid Diagram Server and the Orchestrator Server.

Integration with Mermaid Diagram Server

The Mermaid Diagram Server provides tools for generating, analyzing, and modifying Mermaid diagrams using natural language instructions. Integrating this server with the MCP Prompts Server enables visualization of prompts, their relationships, and their usage patterns.

Use Cases for Mermaid Integration

  1. Prompt Relationship Visualization: Generate diagrams showing how different prompts are related and categorized.
  2. Template Structure Representation: Visualize the structure of complex templates with their variables and conditional blocks.
  3. Variable Usage Analysis: Create charts showing how variables are used across different templates.
  4. Resource Dependencies Graph: Visualize which prompts depend on which MCP resources.
  5. Prompt Flow Diagrams: Create sequence diagrams showing the flow of prompt processing.

Integration with Orchestrator Server

The Orchestrator Server implements the Orchestrator-Workers pattern to coordinate between multiple specialized servers. Integrating this server with the MCP Prompts Server enables complex workflows that involve multiple steps and servers.

Orchestration Workflows

Multi-Server Project Analysis Pattern

This pattern leverages multiple MCP servers for comprehensive project analysis:

Setting Up Integration

To integrate the MCP Prompts Server with the Mermaid and Orchestrator servers, follow these steps:
  1. Configure Server Connections:
  2. Enable Resource Sharing: Set the ENABLE_RESOURCES environment variable to true for the MCP Prompts Server.
  3. Set Up Docker Compose: Create a docker-compose.yml file that includes all required servers:

Router Configuration

To set up the MCP Router with multiple servers, create a router-config.json file:

Use Cases and Integration Examples

1. Project Documentation Generator

Use the integrated servers to automatically generate comprehensive project documentation with diagrams:

2. Prompt Visualization Dashboard

Create a dashboard for visualizing and managing prompts with their relationships:

3. Template-Based Project Generator

Generate new projects using templates and visualize the project structure:

Docker Compose for Full Integration

Here's a complete Docker Compose configuration for integrating all servers with the MCP Router:
This integration approach provides a complete ecosystem for managing prompts, creating visualizations, orchestrating workflows, and persisting data.

Server-Sent Events (SSE) Support

The MCP-Prompts server now includes support for Server-Sent Events (SSE), which enables real-time updates without polling. This is particularly useful for applications that need to receive prompt updates in real-time, such as the Claude desktop application.

Running with SSE Support

You can run the MCP-Prompts server with SSE support using Docker Compose:
This will start the MCP-Prompts server with SSE enabled on port 3003. The SSE endpoint is available at /events.

Configuring Claude Desktop to Use SSE

To configure Claude desktop to use the dockerized MCP-Prompts server with SSE support, update your claude_desktop_config.json file (typically located at ~/.config/Claude/claude_desktop_config.json):
Replace /path/to/your/prompts and /path/to/your/backups with your actual paths.

SSE API

The SSE endpoint sends the following events:
  1. Connect Event: Sent when a client connects to the SSE endpoint
  2. Heartbeat Event: Sent every 30 seconds to keep the connection alive
You can listen for these events in your client application:
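For example, using the standard EventSource API; the event names are assumed to match the connect and heartbeat events listed above:

```typescript
const events = new EventSource("http://localhost:3003/events");

events.addEventListener("connect", (e) => {
  console.log("connected:", (e as MessageEvent).data);
});

events.addEventListener("heartbeat", (e) => {
  console.log("heartbeat:", (e as MessageEvent).data);
});

events.onerror = () => {
  // The browser retries SSE connections automatically; log the interruption
  console.warn("SSE connection lost; retrying...");
};
```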
