# Sequential Thinking Multi-Agent System (MAS)

## Overview

This project implements an advanced `sequentialthinking` tool designed for complex problem-solving. Unlike its predecessor, this version utilizes a true Multi-Agent System (MAS) architecture where:

- A Coordinating Agent (the `Team` object in `coordinate` mode) manages the workflow.
- Specialized Agents (Planner, Researcher, Analyzer, Critic, Synthesizer) handle specific sub-tasks based on their defined roles and expertise.
- Incoming thoughts are actively processed, analyzed, and synthesized by the agent team, not just logged.
- The system supports complex thought patterns, including revisions of previous steps and branching to explore alternative paths.
- Integration with external tools like Exa (via the Researcher agent) allows for dynamic information gathering.
- Robust Pydantic validation ensures data integrity for thought steps.
- Detailed logging tracks the process, including agent interactions (handled by the coordinator).
## Key Differences from Original Version (TypeScript)

Compared with the original TypeScript version, which primarily logged and tracked thought steps, this version actively processes each thought through a coordinated team of specialist agents, validates every step with Pydantic, and can gather external information via tools such as Exa (through the Researcher agent).
## How it Works (Coordinate Mode)

1. **Initiation:** An external LLM uses the `sequential-thinking-starter` prompt to define the problem and initiate the process.
2. **Tool Call:** The LLM calls the `sequentialthinking` tool with the first (or subsequent) thought, structured according to the `ThoughtData` Pydantic model.
3. **Validation & Logging:** The tool receives the call, validates the input using Pydantic, logs the incoming thought, and updates the history/branch state via `AppContext`.
4. **Coordinator Invocation:** The core thought content (along with context about revisions/branches) is passed to the `SequentialThinkingTeam`'s `arun` method.
5. **Coordinator Analysis & Delegation:** The `Team` (acting as Coordinator) analyzes the input thought, breaks it down into sub-tasks, and delegates these sub-tasks to the most relevant specialist agents (e.g., Analyzer for analysis tasks, Researcher for information needs).
6. **Specialist Execution:** Delegated agents execute their specific sub-tasks using their instructions, models, and tools (such as `ThinkingTools` or `ExaTools`).
7. **Response Collection:** Specialists return their results to the Coordinator.
8. **Synthesis & Guidance:** The Coordinator synthesizes the specialists' responses into a single, cohesive output. This output may include recommendations for revision or branching based on the specialists' findings (especially from the Critic and Analyzer), along with guidance for the LLM on formulating the next thought.
9. **Return Value:** The tool returns a JSON string containing the Coordinator's synthesized response, status, and updated context (branches, history length).
10. **Iteration:** The calling LLM uses the Coordinator's response and guidance to formulate the next `sequentialthinking` call, potentially triggering revisions or branches as suggested.
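The `ThoughtData` payload referenced above can be sketched as a minimal Pydantic model. Field names are taken from the tool-call examples later in this README; the constraints and defaults here are illustrative assumptions, not the project's actual definition:

```python
from typing import Optional

from pydantic import BaseModel, Field


class ThoughtData(BaseModel):
    """Illustrative sketch of the thought payload; the real model may
    define additional fields (e.g., for branching)."""

    thought: str = Field(..., min_length=1)  # the thought content itself
    thoughtNumber: int = Field(..., ge=1)    # 1-based position in the sequence
    totalThoughts: int = Field(..., ge=1)    # current estimate of total steps
    nextThoughtNeeded: bool                  # False signals the process is complete
    isRevision: bool = False                 # True when revising an earlier thought
    revisesThought: Optional[int] = Field(None, ge=1)  # which thought is revised


# A first thought, shaped like the conceptual example later in this README:
first = ThoughtData(
    thought="Plan the analysis...",
    thoughtNumber=1,
    totalThoughts=5,
    nextThoughtNeeded=True,
)
```

Validation failures (e.g., an empty `thought` or a non-positive `thoughtNumber`) raise a Pydantic `ValidationError` before any agent work begins.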
## Token Consumption Warning

Each `sequentialthinking` call invokes:

- The Coordinator agent (the `Team` itself).
- Multiple specialist agents (potentially Planner, Researcher, Analyzer, Critic, Synthesizer, depending on the Coordinator's delegation).

This multi-agent processing consumes significantly more tokens than a single-agent approach, so budget accordingly.
## Prerequisites

- Python 3.10+
- Access to a compatible LLM API (configured for `agno`)
- Exa API key (required only if using the Researcher agent's capabilities)
- `uv` package manager (recommended) or `pip`
## MCP Server Configuration (Client-Side)

The `env` section within your MCP client configuration should include the API key for your chosen `LLM_PROVIDER`.
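A client entry might look like the following sketch. The server name, command, script path, and every variable name other than `LLM_PROVIDER` are placeholders, not the project's actual values:

```json
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "uv",
      "args": ["run", "path/to/server-entrypoint.py"],
      "env": {
        "LLM_PROVIDER": "your-provider-name",
        "YOUR_PROVIDER_API_KEY": "your-api-key",
        "EXA_API_KEY": "your-exa-key"
      }
    }
  }
}
```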
## Installation & Setup
1. **Clone the repository:**
2. **Set Environment Variables:** Create a `.env` file in the project root directory or export the variables directly into your environment.
3. **Install Dependencies:** It's highly recommended to use a virtual environment.
## Usage
- Using `uv run` (Recommended):
- Directly using Python:

Either command starts the server, making the `sequentialthinking` tool available to compatible MCP clients configured to use it.

## `sequentialthinking` Tool Parameters

The tool's arguments are defined by the `ThoughtData` Pydantic model.

## Interacting with the Tool (Conceptual Example)
1. **LLM:** Uses a starter prompt (like `sequential-thinking-starter`) with the problem definition.
2. **LLM:** Calls the `sequentialthinking` tool with `thoughtNumber: 1`, the initial `thought` (e.g., "Plan the analysis..."), an estimated `totalThoughts`, and `nextThoughtNeeded: True`.
3. **Server:** The MAS processes the thought. The Coordinator synthesizes responses from specialists and provides guidance (e.g., "Analysis plan complete. Suggest researching X next. No revisions recommended yet.").
4. **LLM:** Receives the JSON response containing `coordinatorResponse`.
5. **LLM:** Formulates the next thought based on the `coordinatorResponse` (e.g., "Research X using available tools...").
6. **LLM:** Calls the `sequentialthinking` tool with `thoughtNumber: 2`, the new `thought`, potentially updated `totalThoughts`, and `nextThoughtNeeded: True`.
7. **Server:** The MAS processes. The Coordinator synthesizes (e.g., "Research complete. Findings suggest a flaw in thought #1's assumption. RECOMMENDATION: Revise thought #1...").
8. **LLM:** Receives the response and notes the recommendation.
9. **LLM:** Formulates a revision thought.
10. **LLM:** Calls the `sequentialthinking` tool with `thoughtNumber: 3`, the revision `thought`, `isRevision: True`, `revisesThought: 1`, and `nextThoughtNeeded: True`.
11. ...and so on, potentially branching or extending the process as needed.
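The exchange above can be sketched as a sequence of tool-call payloads. This is a client-side illustration only; the actual tool invocation and the thought texts are placeholders:

```python
# Successive sequentialthinking payloads mirroring the conceptual exchange:
# an initial plan, a research step, then a revision of thought 1.
payloads = [
    {"thought": "Plan the analysis...", "thoughtNumber": 1,
     "totalThoughts": 5, "nextThoughtNeeded": True},
    {"thought": "Research X using available tools...", "thoughtNumber": 2,
     "totalThoughts": 5, "nextThoughtNeeded": True},
    # Revision prompted by the Coordinator's recommendation:
    {"thought": "Revised plan: ...", "thoughtNumber": 3,
     "totalThoughts": 5, "nextThoughtNeeded": True,
     "isRevision": True, "revisesThought": 1},
]

# Each payload would be sent as one sequentialthinking tool call.
for p in payloads:
    assert p["thoughtNumber"] >= 1 and p["thought"]
```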
## Tool Response Format
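Based on the description earlier (synthesized response, status, and updated context), a returned JSON string might look like the following sketch. Only `coordinatorResponse` is a field name documented in this README; the other keys are assumptions illustrating the described status and context information:

```python
import json

# Hypothetical example of the JSON string the tool returns.
example_response = json.dumps({
    "coordinatorResponse": "Analysis plan complete. Suggest researching X next.",
    "status": "success",
    "branches": [],             # active branch identifiers (assumed key)
    "thoughtHistoryLength": 1,  # thoughts recorded so far (assumed key)
})

parsed = json.loads(example_response)
```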
## Logging

- Logs are written to `~/.sequential_thinking/logs/sequential_thinking.log` by default (the path may be adjustable in the logging setup code).
- Uses Python's standard `logging` module.
- Includes a rotating file handler (e.g., 10 MB limit, 5 backups) and a console handler (typically INFO level).
- Logs include timestamps, levels, logger names, and messages, including structured representations of thoughts being processed.
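A setup matching that description might look like this sketch; the handler sizes, levels, and format string are illustrative, not the project's exact configuration:

```python
import logging
import logging.handlers
from pathlib import Path


def setup_logging() -> logging.Logger:
    """Configure a rotating file handler plus a console handler."""
    log_dir = Path.home() / ".sequential_thinking" / "logs"
    log_dir.mkdir(parents=True, exist_ok=True)

    logger = logging.getLogger("sequential_thinking")
    logger.setLevel(logging.DEBUG)

    fmt = logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s")

    # Rotating file handler: ~10 MB per file, 5 backups kept.
    file_handler = logging.handlers.RotatingFileHandler(
        log_dir / "sequential_thinking.log",
        maxBytes=10 * 1024 * 1024,
        backupCount=5,
    )
    file_handler.setFormatter(fmt)
    logger.addHandler(file_handler)

    # Console handler at INFO level.
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    console.setFormatter(fmt)
    logger.addHandler(console)
    return logger
```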
## Development

1. **Clone the repository:** (as in Installation)
2. **Set up a virtual environment:** (recommended)
3. **Install dependencies (including dev):** Ensure your `requirements-dev.txt` or `pyproject.toml` specifies development tools (such as `pytest`, `ruff`, `black`, `mypy`).
4. **Run checks:** Execute linters, formatters, and tests (adjust commands based on your project setup).
5. **Contribution:** Consider adding contribution guidelines: branching strategy, pull request process, code style.
## License