
RAG Documentation MCP Server

An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.

Table of Contents

  • Features
  • Quick Start
  • Docker Compose Setup
  • Web Interface
  • Configuration
  • Acknowledgments
  • Troubleshooting

Features

Tools

  1. search_documentation
  2. list_sources
  3. extract_urls
  4. remove_documentation
  5. list_queue
  6. run_queue
  7. clear_queue
  8. add_documentation
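
For orientation, this is roughly what invoking one of these tools looks like over MCP's JSON-RPC transport. The method and params shape come from the MCP specification; the argument name for search_documentation is an assumption, not something this document confirms:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "search_documentation",
        "arguments": { "query": "how do I configure the embedding provider?" }
      }
    }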

Quick Start

The RAG Documentation tool is designed for:
  • Enhancing AI responses with relevant documentation
  • Building documentation-aware AI assistants
  • Creating context-aware tooling for developers
  • Implementing semantic documentation search
  • Augmenting existing knowledge bases

Docker Compose Setup

The project includes a docker-compose.yml file for easy containerized deployment. Start and stop the services as shown below.
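
The exact commands were not preserved in this copy, so treat these as the usual Compose defaults; with the Compose v2 plugin, write docker compose instead of docker-compose:

    # Start the services defined in docker-compose.yml in detached mode
    docker-compose up -d

    # Stop and remove the containers
    docker-compose down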

Web Interface

The system includes a web interface that can be accessed after starting the Docker Compose services:
  1. Open your browser and navigate to: http://localhost:3030
  2. The interface provides controls for the documentation sources and processing queue exposed by the tools listed under Features

Configuration

Embeddings Configuration

The system uses Ollama as the default embedding provider for local embeddings generation, with OpenAI available as a fallback option. This setup prioritizes local processing while maintaining reliability through cloud-based fallback.

Environment Variables

  • EMBEDDING_PROVIDER: Choose the primary embedding provider ('ollama' or 'openai', default: 'ollama')
  • EMBEDDING_MODEL: Specify the model to use (optional)
  • OPENAI_API_KEY: Required when using OpenAI as provider
  • FALLBACK_PROVIDER: Optional backup provider ('ollama' or 'openai')
  • FALLBACK_MODEL: Optional model for fallback provider
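
Putting these variables together, a typical local-first configuration looks like the following; the model names match the defaults noted under Default Configuration, and the API key is a placeholder:

    EMBEDDING_PROVIDER=ollama
    EMBEDDING_MODEL=nomic-embed-text
    FALLBACK_PROVIDER=openai
    FALLBACK_MODEL=text-embedding-3-small
    OPENAI_API_KEY=sk-your-key-here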

Cline Configuration

Add this to your cline_mcp_settings.json:
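
The original snippet was not preserved in this copy. A minimal sketch follows, assuming the server was built locally and is launched with node; the server name, path, and env values are placeholders to adapt to your installation:

    {
      "mcpServers": {
        "rag-docs": {
          "command": "node",
          "args": ["/path/to/mcp-ragdocs/build/index.js"],
          "env": {
            "EMBEDDING_PROVIDER": "ollama",
            "FALLBACK_PROVIDER": "openai",
            "OPENAI_API_KEY": "sk-your-key-here"
          }
        }
      }
    }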

Claude Desktop Configuration

Add this to your claude_desktop_config.json:
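
This snippet was also lost in extraction. Claude Desktop uses the same mcpServers entry shape, so the Cline sketch above applies unchanged inside claude_desktop_config.json; at minimum:

    {
      "mcpServers": {
        "rag-docs": {
          "command": "node",
          "args": ["/path/to/mcp-ragdocs/build/index.js"]
        }
      }
    }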

Default Configuration

The system uses Ollama by default for efficient local embedding generation. For optimal reliability:
  1. Install and run Ollama locally
  2. Configure OpenAI as fallback (recommended), as sketched below:
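
A sketch of both steps, assuming the default nomic-embed-text model named in the note below; the API key is a placeholder:

    # Step 1: install Ollama (see ollama.com), then pull the default embedding model
    ollama pull nomic-embed-text

    # Step 2: point the fallback at OpenAI (same variables as documented above)
    FALLBACK_PROVIDER=openai
    FALLBACK_MODEL=text-embedding-3-small
    OPENAI_API_KEY=sk-your-key-here
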
This configuration ensures:
  • Fast, local embedding generation with Ollama
  • Automatic fallback to OpenAI if Ollama fails
  • No external API calls unless necessary
Note: The system will automatically use the appropriate vector dimensions based on the provider:
  • Ollama (nomic-embed-text): 768 dimensions
  • OpenAI (text-embedding-3-small): 1536 dimensions

Acknowledgments

This project is a fork of qpd-v/mcp-ragdocs, originally developed by qpd-v. The original project provided the foundation for this implementation.
Special thanks to the original creator, qpd-v, for their innovative work on the initial version of this MCP server. This fork has been enhanced with additional features and improvements by Rahul Retnan.

Troubleshooting

Server Not Starting (Port Conflict)

If the MCP server fails to start due to a port conflict, follow these steps:
  1. Identify and kill the process using port 3030 (see the commands below)
  2. Restart the MCP server
  3. If the issue persists, check for other processes using the port
  4. If needed, change the default port in the configuration
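
The commands for steps 1 and 3 were not preserved; on macOS or Linux the usual sequence is the following (on Windows, netstat -ano | findstr :3030 serves the same purpose):

    # Find the process listening on port 3030
    lsof -i :3030

    # Kill it by PID, taken from the lsof output
    kill -9 <PID>

    # Or free the port in one step with the kill-port utility
    npx kill-port 3030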
