A server using the FastMCP framework to generate images from prompts via a remote Comfy server.
Overview
This script sets up a server using the FastMCP framework that generates images from prompts via a specified workflow. It interacts with a remote Comfy server to submit prompts and retrieve the generated images.
Prerequisites
A workflow file exported from Comfy UI is required. This code includes a sample Flux-Dev-ComfyUI-Workflow.json, which is used here only as a reference; you will need to export your own workflow and set the environment variables accordingly.
You can install the required packages for local development:
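A minimal sketch, assuming a standard Python environment; the official mcp package provides the FastMCP framework this server is built on:

```bash
# Install the MCP Python SDK with the CLI extras.
pip install "mcp[cli]"
```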
Configuration
Set the following environment variables:
- COMFY_URL to point to your Comfy server URL.
- COMFY_WORKFLOW_JSON_FILE to point to the absolute path of the API-export JSON file for the ComfyUI workflow.
- PROMPT_NODE_ID to the id of the text prompt node.
- OUTPUT_NODE_ID to the id of the output node with the final image.
- OUTPUT_MODE to either url or file to select the desired output.
Optionally, if you have an Ollama server running, you can connect to it for prompt generation.
- OLLAMA_API_BASE to the URL where Ollama is running.
- PROMPT_LLM to the name of the model hosted on Ollama for prompt generation.
Example:
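All values below are illustrative placeholders; substitute the URL, file path, and node ids from your own workflow export:

```bash
export COMFY_URL=http://your-comfy-server-url:port
export COMFY_WORKFLOW_JSON_FILE=/path/to/the/comfyui_workflow_export.json
export PROMPT_NODE_ID=6   # use the correct node id from your workflow
export OUTPUT_NODE_ID=9   # use the correct node id from your workflow
export OUTPUT_MODE=file

# Optional, only if you use Ollama for prompt generation:
export OLLAMA_API_BASE=http://localhost:11434   # default Ollama address
export PROMPT_LLM=llama3.2                      # any model pulled into Ollama
```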
Usage
Comfy MCP Server can be launched by the following command:
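For example, assuming the package is published as comfy-mcp-server and the uv package manager is installed:

```bash
# Launch the server with uvx; set the environment variables above beforehand.
uvx comfy-mcp-server
```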