# TxtAI Assistant MCP
A Model Context Protocol (MCP) server implementation for semantic search and memory management using [txtai](https://github.com/neuml/txtai). This server provides a robust API for storing, retrieving, and managing text-based memories with semantic search capabilities.
## About txtai
This project is built on top of [txtai](https://github.com/neuml/txtai), an excellent open-source AI-powered search engine created by [NeuML](https://github.com/neuml). txtai provides:
- All-in-one semantic search solution
- Neural search with transformers
- Zero-shot text classification
- Text extraction and embeddings
- Multi-language support
- High performance and scalability
We extend txtai by integrating it with the Model Context Protocol (MCP), enabling AI assistants like Claude and Cline to leverage its semantic search directly. Special thanks to the txtai team for creating such a powerful and flexible tool.
## Features

- Semantic search across stored memories
- Persistent storage with file-based backend
- Tag-based memory organization and retrieval
- Memory statistics and health monitoring
- Automatic data persistence
- Comprehensive logging
- Configurable CORS settings
- Integration with Claude and Cline AI
## Prerequisites

- Python 3.8 or higher
- pip (Python package installer)
- virtualenv (recommended)
## Installation
1. Clone this repository:
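A sketch of this step; the URL and directory name are placeholders for the actual repository location:

```bash
# Placeholders -- substitute the real repository URL and directory name
git clone <repository-url>
cd txtai-assistant-mcp
```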
2. Run the start script:
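The script path below is an assumption; run whichever start script ships with the repository:

```bash
# Sets up the environment and starts the server
# (script name and location may differ in the actual repository)
./scripts/start.sh
```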
The script will:
- Create a virtual environment
- Install required dependencies
- Set up necessary directories
- Create a configuration file from the template
- Start the server
## Configuration
The server can be configured using environment variables in the `.env` file. A template is provided at `.env.template`:
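The variable names below are illustrative only, not the authoritative list; copy `.env.template` to `.env` and adjust the settings it actually defines:

```bash
# Example .env -- placeholder keys; see .env.template for the real settings
HOST=0.0.0.0
PORT=8000
CORS_ORIGINS=*
LOG_LEVEL=INFO
```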
## Integration with Claude and Cline AI
This TxtAI Assistant can be used as an MCP server with Claude and Cline AI to enhance their capabilities with semantic memory and search functionality.
### Configuration for Claude
To use this server with Claude, add it to Claude's MCP configuration file (typically located at `~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
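The `mcpServers` block below follows Claude Desktop's standard configuration shape; the server name, command, and paths are placeholders — point them at the Python interpreter and server entry point of your local installation:

```json
{
  "mcpServers": {
    "txtai-assistant": {
      "command": "/path/to/txtai-assistant-mcp/venv/bin/python",
      "args": ["/path/to/txtai-assistant-mcp/server.py"]
    }
  }
}
```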
### Configuration for Cline
To use with Cline, add the server configuration to Cline's MCP settings file (typically located at `~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`):
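Cline's settings file uses the same `mcpServers` shape; the entry below is a sketch with placeholder paths (`disabled` and `autoApprove` are optional Cline fields):

```json
{
  "mcpServers": {
    "txtai-assistant": {
      "command": "/path/to/txtai-assistant-mcp/venv/bin/python",
      "args": ["/path/to/txtai-assistant-mcp/server.py"],
      "disabled": false,
      "autoApprove": []
    }
  }
}
```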
## Available MCP Tools
Once configured, the following tools become available to Claude and Cline:
- `store_memory`: Store new memory content with metadata and tags
- `retrieve_memory`: Retrieve memories based on semantic search
- `search_by_tag`: Search memories by tags
- `delete_memory`: Delete a specific memory by content hash
- `get_stats`: Get database statistics
- `check_health`: Check database and embedding model health
## Usage Examples
In Claude or Cline, you can use these tools through the MCP protocol:
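For example, prompts along these lines (illustrative only) lead the assistant to call the corresponding tools:

```text
"Remember that our staging API runs on port 8080."       → store_memory
"What did I tell you about the staging API?"             → retrieve_memory
"Show me everything you have tagged 'infrastructure'."   → search_by_tag
```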
The AI will automatically use these tools to maintain context and retrieve relevant information during conversations.
## API Endpoints
### Store Memory
Store a new memory with optional metadata and tags.
**Request Body:**
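The exact schema is defined by the server; the body below is a representative sketch, assuming `content`, `metadata`, and `tags` fields that mirror the `store_memory` tool:

```json
{
  "content": "The staging database runs PostgreSQL 16.",
  "metadata": {"source": "conversation"},
  "tags": ["infrastructure", "database"]
}
```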
### Search Memories
Search memories using semantic search.
**Request Body:**
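An illustrative body, assuming a `query` string and an optional result limit:

```json
{
  "query": "Which database does staging use?",
  "limit": 5
}
```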
### Search by Tags
Search memories by tags.
**Request Body:**
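An illustrative body, assuming a list of tag names:

```json
{
  "tags": ["infrastructure"]
}
```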
### Delete Memory
Delete a specific memory by its content hash.
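The route and parameter name are not reproduced here; conceptually the request identifies the memory by the hash of its content, for example (field name and hash are illustrative):

```json
{
  "content_hash": "5eb63bbbe01eeed093cb22bb8f5acdc3"
}
```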
### Get Statistics
Get system statistics including memory counts and tag distribution.
### Health Check
Check the health status of the server.
## Directory Structure
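A sketch assembled from the paths mentioned elsewhere in this README; the actual layout of the repository may differ:

```text
txtai-assistant-mcp/
├── data/              # persisted memories and tag index (memories.json, tags.json)
├── logs/              # server logs (server.log)
├── .env               # local configuration, created from the template
├── .env.template      # configuration template
└── ...                # server source and start script
```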
## Data Storage
Memories and tags are stored in JSON files in the `data` directory:
- `memories.json`: Contains all stored memories
- `tags.json`: Contains the tag index
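The on-disk schema is not documented here; conceptually, `memories.json` maps each content hash to its stored text, metadata, and tags, roughly like this (illustrative only):

```json
{
  "5eb63bbb...": {
    "content": "The staging database runs PostgreSQL 16.",
    "metadata": {"source": "conversation"},
    "tags": ["infrastructure"]
  }
}
```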
## Logging
Logs are stored in the `logs` directory. The default log file is `server.log`.
## Development
To contribute to this project:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request
## Error Handling
The server implements comprehensive error handling:
- Invalid requests return appropriate HTTP status codes
- Errors are logged with stack traces
- User-friendly error messages are returned in responses
## Security Considerations

- CORS settings are configurable via environment variables
- File paths are sanitized to prevent directory traversal
- Input validation is performed on all endpoints
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Support
If you encounter any issues or have questions, please file an issue on the GitHub repository.