MCP-server-ragdocs
Table of Contents
- Usage
- Features
- Configuration
- Deployment
- Tools
- Project Structure
- Using Ollama Embeddings
- License
- Development Workflow
- Contributing
- Forkception Acknowledgments
Usage
Typical use cases include:
- Enhancing AI responses with relevant documentation
- Building documentation-aware AI assistants
- Creating context-aware tooling for developers
- Implementing semantic documentation search
- Augmenting existing knowledge bases
Features
- Vector-based documentation search and retrieval
- Support for multiple documentation sources
- Support for local (Ollama) or OpenAI embedding generation
- Semantic search capabilities
- Automated documentation processing
- Real-time context augmentation for LLMs
Configuration
Usage with Claude Desktop
Add the server to your claude_desktop_config.json; provider-specific examples follow.
OpenAI Configuration
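A minimal sketch of a claude_desktop_config.json entry for the OpenAI provider; the server path and values are placeholders to adapt to your installation, and the variable names follow the reference table below:

```json
{
  "mcpServers": {
    "ragdocs": {
      "command": "node",
      "args": ["/path/to/mcp-server-ragdocs/build/index.js"],
      "env": {
        "EMBEDDING_PROVIDER": "openai",
        "OPENAI_API_KEY": "your-openai-api-key",
        "QDRANT_URL": "http://localhost:6333"
      }
    }
  }
}
```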
Ollama Configuration
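The equivalent sketch for local Ollama embeddings; the model name matches the nomic-embed-text setup described later in this README:

```json
{
  "mcpServers": {
    "ragdocs": {
      "command": "node",
      "args": ["/path/to/mcp-server-ragdocs/build/index.js"],
      "env": {
        "EMBEDDING_PROVIDER": "ollama",
        "EMBEDDING_MODEL": "nomic-embed-text",
        "QDRANT_URL": "http://localhost:6333"
      }
    }
  }
}
```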
Ollama run from this codebase
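If Ollama is run from this codebase rather than a system install, a sketch of the steps, assuming the Docker Compose setup described under Local Deployment (the container name is an assumption; check docker ps):

```bash
# Start Qdrant and Ollama from the repository's compose file
docker-compose up -d

# Pull the embedding model inside the Ollama container
# (container name "ollama" is an assumption)
docker exec ollama ollama pull nomic-embed-text
```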
Environment Variables Reference
| Variable | Description | Default | Required |
|----------|-------------|---------|----------|
| EMBEDDING_PROVIDER | Embedding backend to use: ollama (local) or openai | ollama | No |
| EMBEDDING_MODEL | Model used to generate embeddings | nomic-embed-text | No |
| OPENAI_API_KEY | OpenAI API key | none | Only when EMBEDDING_PROVIDER is openai |
| QDRANT_URL | URL of the Qdrant vector database | http://localhost:6333 | Yes |
| QDRANT_API_KEY | API key for hosted Qdrant Cloud | none | Only for cloud deployment |
Local Deployment
The Docker Compose setup starts two services (a compose sketch follows below):
- Qdrant vector database on port 6333
- Ollama LLM service on port 11434
Once running, verify the services at:
- Qdrant: http://localhost:6333
- Ollama: http://localhost:11434
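A minimal docker-compose sketch matching the ports above; the service names, images, and omitted options (volumes, GPU settings) are assumptions rather than the repository's actual file:

```yaml
# Sketch only: adapt images and options to your environment
services:
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
```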
Cloud Deployment
- Use the hosted Qdrant Cloud service instead of a local instance
- Set the environment variables shown below
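Example values (placeholders; the variable names follow the reference table above):

```bash
export QDRANT_URL="https://your-cluster-url.qdrant.io"
export QDRANT_API_KEY="your-qdrant-api-key"
```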
Tools
The server exposes the following MCP tools.
search_documentation
Search the indexed documentation. Parameters:
- query (string): The text to search for in the documentation. Can be a natural language query, specific terms, or code snippets.
- limit (number, optional): Maximum number of results to return (1-20, default: 5). Higher limits provide more comprehensive results but may take longer to process.
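An example argument payload for search_documentation (the query text is illustrative):

```json
{
  "query": "how to configure the Ollama embedding provider",
  "limit": 5
}
```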
list_sources
List all documentation sources currently stored in the database. Takes no parameters.
extract_urls
Extract hyperlinks from a web page, optionally queuing them for later indexing. Parameters:
- url (string): The complete URL of the webpage to analyze (must include protocol, e.g., https://). The page must be publicly accessible.
- add_to_queue (boolean, optional): If true, automatically add extracted URLs to the processing queue for later indexing. Use with caution on large sites to avoid excessive queuing.
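For example (placeholder URL):

```json
{
  "url": "https://docs.example.com/getting-started",
  "add_to_queue": true
}
```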
remove_documentation
Remove previously indexed documentation from the database. Parameters:
- urls (string[]): Array of URLs to remove from the database. Each URL must exactly match the URL used when the documentation was added.
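For example, removing a single source (placeholder URL):

```json
{
  "urls": ["https://docs.example.com/getting-started"]
}
```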
list_queue
List all URLs currently waiting in the processing queue.
run_queue
Process every queued URL, indexing each document into the database.
clear_queue
Remove all pending URLs from the queue without processing them.
Project Structure
Using Ollama Embeddings without Docker
To generate embeddings with a native Ollama install (commands for each step are sketched below):
- Install Ollama
- Download the nomic-embed-text model
- Verify the installation
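A sketch of the three steps on Linux or macOS, using Ollama's official install script and its embeddings API:

```bash
# 1. Install Ollama (official install script)
curl -fsSL https://ollama.com/install.sh | sh

# 2. Download the embedding model
ollama pull nomic-embed-text

# 3. Verify: the model should appear in the list, and the
#    embeddings endpoint should return a vector
ollama list
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "test"}'
```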
License
Contributing
- Fork the repository
- Install dependencies: npm install
- Create a feature branch: git checkout -b feat/your-feature
- Commit changes with npm run commit to ensure compliance with Conventional Commits
- Push to your fork and open a PR
Forkception Acknowledgments