
Created: Apr 23, 2025

mcp-perplexity-search


Notice

**This repository is no longer maintained.**
The functionality of this tool is now available in [mcp-omnisearch](https://github.com/spences10/mcp-omnisearch), which combines multiple MCP tools in one unified package.
Please use [mcp-omnisearch](https://github.com/spences10/mcp-omnisearch) instead.

A Model Context Protocol (MCP) server for integrating Perplexity's AI API with LLMs. This server provides advanced chat completion capabilities with specialized prompt templates for various use cases.

Features

  • Advanced chat completion using Perplexity's AI models
  • Predefined prompt templates for common scenarios:
      • Technical documentation generation
      • Security best practices analysis
      • Code review and improvements
      • API documentation in structured format
  • Custom template support for specialized use cases
  • Multiple output formats (text, markdown, JSON)
  • Optional source URL inclusion in responses
  • Configurable model parameters (temperature, max tokens)
  • Support for various Perplexity models, including Sonar and LLaMA

Configuration

This server requires configuration through your MCP client. Here are examples for different environments:

Cline Configuration

Add this to your Cline MCP settings:
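The original snippet was not captured here; a typical entry follows the standard MCP server-config shape (the `npx` command and server name below are assumptions — adjust them to match your install):

```json
{
	"mcpServers": {
		"mcp-perplexity-search": {
			"command": "npx",
			"args": ["-y", "mcp-perplexity-search"],
			"env": {
				"PERPLEXITY_API_KEY": "your-api-key-here"
			}
		}
	}
}
```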

Claude Desktop with WSL Configuration

For WSL environments, add this to your Claude Desktop configuration:
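The original snippet was not captured here; a common pattern is to launch the server through `wsl.exe` and pass the API key inline (the exact command below is an assumption — adapt it to where the package is installed inside WSL):

```json
{
	"mcpServers": {
		"mcp-perplexity-search": {
			"command": "wsl.exe",
			"args": [
				"bash",
				"-c",
				"PERPLEXITY_API_KEY=your-api-key-here npx -y mcp-perplexity-search"
			]
		}
	}
}
```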

Environment Variables

The server requires the following environment variable:
  • `PERPLEXITY_API_KEY`: Your Perplexity API key (required)
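If you run the server outside an MCP client config, the key can also be exported in your shell first (placeholder value shown):

```shell
# Replace the placeholder with your actual Perplexity API key
export PERPLEXITY_API_KEY="your-api-key-here"
```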

API

The server implements a single MCP tool with configurable parameters:

chat_completion

Generate chat completions using the Perplexity API with support for specialized prompt templates.
Parameters:
  • `messages` (array, required): Array of message objects, each with:
      • `role` (string): 'system', 'user', or 'assistant'
      • `content` (string): The message content
  • `prompt_template` (string, optional): Predefined template to use:
      • `technical_docs`: Technical documentation with code examples
      • `security_practices`: Security implementation guidelines
      • `code_review`: Code analysis and improvements
      • `api_docs`: API documentation in JSON format
  • `custom_template` (object, optional): Custom prompt template with:
      • `system` (string): System message for assistant behaviour
      • `format` (string): Output format preference
      • `include_sources` (boolean): Whether to include sources
  • `format` (string, optional): 'text', 'markdown', or 'json' (default: 'text')
  • `include_sources` (boolean, optional): Include source URLs (default: false)
  • `model` (string, optional): Perplexity model to use (default: 'sonar')
  • `temperature` (number, optional): Output randomness (0-1, default: 0.7)
  • `max_tokens` (number, optional): Maximum response length (default: 1024)
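Putting the parameters above together, a `chat_completion` call might take arguments like the following (the message contents and parameter values are illustrative only):

```json
{
	"messages": [
		{ "role": "system", "content": "You are a concise technical writer." },
		{ "role": "user", "content": "Summarise the trade-offs between REST and GraphQL." }
	],
	"prompt_template": "technical_docs",
	"format": "markdown",
	"include_sources": true,
	"model": "sonar",
	"temperature": 0.3,
	"max_tokens": 512
}
```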

Development

Setup

  1. Clone the repository
  2. Install dependencies
  3. Build the project
  4. Run in development mode
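The command blocks for these steps were lost in extraction; assuming the conventional npm scripts for a TypeScript MCP server (`build` and `dev` script names are assumptions — check `package.json`), the steps look like:

```shell
# 1. Clone the repository
git clone https://github.com/spences10/mcp-perplexity-search.git
cd mcp-perplexity-search

# 2. Install dependencies
npm install

# 3. Build the project
npm run build

# 4. Run in development mode
npm run dev
```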

Publishing

The project uses changesets for version management. To publish:
  1. Create a changeset
  2. Version the package
  3. Publish to npm
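The command blocks for these steps were also stripped; with the standard changesets CLI workflow they would typically be (sketch — the repository may wrap these in its own npm scripts):

```shell
# 1. Create a changeset describing the change
npx changeset

# 2. Apply pending changesets and bump the version
npx changeset version

# 3. Publish the package to npm
npm publish
```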

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT License - see the [LICENSE](LICENSE) file for details.

Acknowledgments

  • Built on the [Model Context Protocol](https://github.com/modelcontextprotocol)
  • Powered by [Perplexity SONAR](https://docs.perplexity.ai/api-reference/chat-completions)