Let the AI manage complex plans with integrated task management and tracking tools. Supports STDIO, SSE and Streamable HTTP transports.
A task management system that implements the Model Context Protocol (MCP) for seamless integration with agentic AI tools. This system allows AI agents to create, manage, and track tasks within plans using Valkey as the persistence layer.
The system is built on the Model Context Protocol, using Valkey for persistence, and supports STDIO, SSE, and Streamable HTTP transports.
The MCP server is designed to run one protocol at a time for simplicity. By default, all protocols are disabled and you need to explicitly enable the one you want to use.
docker volume create valkey-data
docker run -d --name valkey-mcp \
-p 8080:8080 \
-p 6379:6379 \
-v valkey-data:/data \
-e ENABLE_SSE=true \
ghcr.io/jbrinkman/valkey-ai-tasks:latest
docker run -d --name valkey-mcp \
-p 8080:8080 \
-p 6379:6379 \
-v valkey-data:/data \
-e ENABLE_STREAMABLE_HTTP=true \
ghcr.io/jbrinkman/valkey-ai-tasks:latest
docker run -i --rm --name valkey-mcp \
-v valkey-data:/data \
-e ENABLE_STDIO=true \
ghcr.io/jbrinkman/valkey-ai-tasks:latest
The container images are published to GitHub Container Registry and can be pulled using:
docker pull ghcr.io/jbrinkman/valkey-ai-tasks:latest
# or a specific version
docker pull ghcr.io/jbrinkman/valkey-ai-tasks:1.1.0
The MCP server exposes two HTTP-based transport protocols, Server-Sent Events (SSE) and Streamable HTTP, each with similar endpoints but different interaction patterns. (The STDIO transport communicates over standard input/output and exposes no HTTP endpoints.)
- `GET /sse/list_functions`: Lists all available functions
- `POST /sse/invoke/{function_name}`: Invokes a function with the given parameters
- `POST /mcp`: Handles all MCP requests using JSON format:
  - List functions: `{"method": "list_functions", "params": {}}`
  - Invoke a function: `{"method": "invoke", "params": {"function": "function_name", "params": {...}}}`
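As a sketch, the two request shapes above can be built programmatically rather than by hand. The endpoint URL below is an assumption based on the default port used in this README:

```python
import json

# Assumed endpoint, based on the default port in this README.
MCP_URL = "http://localhost:8080/mcp"

def list_functions_request() -> str:
    """Body for POST /mcp that lists all available functions."""
    return json.dumps({"method": "list_functions", "params": {}})

def invoke_request(function_name: str, params: dict) -> str:
    """Body for POST /mcp that invokes a named function with parameters."""
    return json.dumps(
        {"method": "invoke", "params": {"function": function_name, "params": params}}
    )

# Send these bodies with any HTTP client, using Content-Type: application/json.
print(list_functions_request())
print(invoke_request("create_plan", {"application_id": "my-app", "name": "New Feature Development"}))
```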
The server automatically selects the appropriate transport. For requests to the root path (`/`), the server redirects based on content type:

- `application/json` → Streamable HTTP
- `GET /health`: Returns server health status

Plan management functions:

- `create_plan`: Create a new plan
- `get_plan`: Get a plan by ID
- `list_plans`: List all plans
- `list_plans_by_application`: List all plans for a specific application
- `update_plan`: Update an existing plan
- `delete_plan`: Delete a plan by ID
- `update_plan_notes`: Update notes for a plan
- `get_plan_notes`: Get notes for a plan

Task management functions:

- `create_task`: Create a new task in a plan
- `get_task`: Get a task by ID
- `list_tasks_by_plan`: List all tasks in a plan
- `list_tasks_by_status`: List all tasks with a specific status
- `update_task`: Update an existing task
- `delete_task`: Delete a task by ID
- `reorder_task`: Change the order of a task within its plan
- `update_task_notes`: Update notes for a task
- `get_task_notes`: Get notes for a task

To configure an AI agent to use the local MCP server, add the following to your MCP configuration file (the exact file location depends on your AI agent):
Note: The docker container should already be running.
{
"mcpServers": {
"valkey-tasks": {
"serverUrl": "http://localhost:8080/sse"
}
}
}
Note: The docker container should already be running.
{
"mcpServers": {
"valkey-tasks": {
"serverUrl": "http://localhost:8080/mcp"
}
}
}
STDIO transport allows the MCP server to communicate via standard input/output, which is useful for legacy AI tools that rely on stdin/stdout for communication.
For agentic tools that need to start and manage the MCP server process, use a configuration like this:
{
"mcpServers": {
"valkey-tasks": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-v", "valkey-data:/data",
"-e", "ENABLE_STDIO=true",
"ghcr.io/jbrinkman/valkey-ai-tasks:latest"
]
}
}
}
When running in Docker, use the container name as the hostname:
{
"mcpServers": {
"valkey-tasks": {
"serverUrl": "http://valkey-mcp-server:8080/sse"
}
}
}
The system supports rich Markdown-formatted notes for both plans and tasks. This feature is particularly useful for AI agents to maintain context between sessions and document important information.
Notes content is sanitized to prevent XSS and other security issues while preserving Markdown formatting.
In addition to MCP tools, the system provides MCP resources that allow AI agents to access structured data directly. These resources provide a complete view of plans and tasks in a single request, which is more efficient than making multiple tool calls.
The Plan Resource provides a complete view of a plan, including its tasks and notes. It supports the following URI patterns:
- `ai-tasks://plans/{id}/full` - Returns a specific plan with its tasks
- `ai-tasks://plans/full` - Returns all plans with their tasks
- `ai-tasks://applications/{app_id}/plans/full` - Returns all plans for a specific application

Each resource returns a JSON object or array with the following structure:
{
"id": "plan-123",
"application_id": "my-app",
"name": "New Feature Development",
"description": "Implement new features for the application",
"status": "new",
"notes": "# Project Notes\n\nThis project aims to implement the following features...",
"created_at": "2025-06-27T14:00:21Z",
"updated_at": "2025-07-01T13:04:01Z",
"tasks": [
{
"id": "task-456",
"plan_id": "plan-123",
"title": "Task 1",
"description": "Description for task 1",
"status": "pending",
"priority": "high",
"order": 0,
"notes": "# Task Notes\n\nThis task requires the following steps...",
"created_at": "2025-06-27T14:00:50Z",
"updated_at": "2025-07-01T12:04:27Z"
},
// Additional tasks...
]
}
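Because the resource payload is plain JSON, an agent-side client can consume it with any JSON parser. A minimal Python sketch, using sample data that mirrors the structure above (the second task is hypothetical filler):

```python
import json

# Sample payload mirroring the plan-resource structure shown above;
# the second task is hypothetical filler data.
plan_json = """
{
  "id": "plan-123",
  "application_id": "my-app",
  "name": "New Feature Development",
  "status": "new",
  "tasks": [
    {"id": "task-456", "title": "Task 1", "status": "pending", "priority": "high", "order": 0},
    {"id": "task-789", "title": "Task 2", "status": "in_progress", "priority": "medium", "order": 1}
  ]
}
"""

plan = json.loads(plan_json)

# The resource delivers a plan and its tasks in one request,
# so status filtering happens client-side.
pending = [t["title"] for t in plan["tasks"] if t["status"] == "pending"]
print(pending)  # ['Task 1']
```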
AI agents can access these resources using the MCP resource API. Here's an example of how to read a resource:
{
"action": "read_resource",
"params": {
"uri": "ai-tasks://plans/123/full"
}
}
This will return the complete plan resource including all tasks, which is more efficient than making separate calls to get the plan and then its tasks.
AI agents can interact with this task management system through the MCP API using either SSE or Streamable HTTP transport. Here are examples for both transport protocols:
1. Call `/sse/list_functions` to discover available functions
2. Call `/sse/invoke/create_plan` with parameters:
{
"application_id": "my-app",
"name": "New Feature Development",
"description": "Implement new features for the application",
"notes": "# Project Notes\n\nThis project aims to implement the following features:\n\n- Feature A\n- Feature B\n- Feature C"
}
3. Call `/sse/invoke/create_task` for a single task, or `/sse/invoke/bulk_create_tasks` for multiple tasks at once:
{
"plan_id": "plan-123",
"tasks_json": "[
{
\"title\": \"Task 1\",
\"description\": \"Description for task 1\",
\"priority\": \"high\",
\"status\": \"pending\",
\"notes\": \"# Task Notes\\n\\nThis task requires the following steps:\\n\\n1. Step one\\n2. Step two\\n3. Step three\"
},
{
\"title\": \"Task 2\",
\"description\": \"Description for task 2\",
\"priority\": \"medium\",
\"status\": \"pending\"
}
]"
}
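Note that `tasks_json` is a JSON array serialized as a string inside the outer JSON object. Rather than hand-escaping quotes as above, a client can build the payload with two serialization passes; a Python sketch using the same field names as the example:

```python
import json

# Task definitions as plain Python structures (same fields as the example above).
tasks = [
    {
        "title": "Task 1",
        "description": "Description for task 1",
        "priority": "high",
        "status": "pending",
        "notes": "# Task Notes\n\nThis task requires the following steps:\n\n1. Step one\n2. Step two",
    },
    {
        "title": "Task 2",
        "description": "Description for task 2",
        "priority": "medium",
        "status": "pending",
    },
]

# First pass: serialize the task list into the tasks_json string.
# Second pass: serialize the whole parameter object; json.dumps escapes
# the inner quotes automatically, producing the shape shown above.
params = {"plan_id": "plan-123", "tasks_json": json.dumps(tasks)}
body = json.dumps(params)
print(body)
```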
4. Call `/sse/invoke/update_task` to update task status as work progresses

Here's a sample prompt that would trigger an AI agent to use the MCP task management system:
I need to organize work for my new application called "inventory-manager".
Create a plan for this application with the following plan notes:
"# Inventory Manager Project
This project aims to create a comprehensive inventory management system with the following goals:
- Track inventory levels in real-time
- Generate reports on inventory movement
- Provide alerts for low stock items"
Add the following tasks:
1. Set up database schema
2. Implement REST API endpoints
3. Create user authentication system
4. Design frontend dashboard
5. Implement inventory tracking features
For the database schema task, add these notes:
"# Database Schema Notes
The schema should include the following tables:
- Products
- Categories
- Inventory Transactions
- Users
- Roles"
Prioritize the tasks appropriately and set the first two tasks as "in_progress".
With this prompt, an AI agent with access to the Valkey MCP Task Management Server would create the plan with its notes for the "inventory-manager" application, add the five tasks, attach the schema notes to the database task, assign priorities, and set the first two tasks to "in_progress".
For information on how to set up a development environment, contribute to the project, and understand the codebase structure, please refer to the Developer Guide.
For contribution guidelines, including commit message format and pull request process, see Contributing Guidelines.
This project is licensed under the BSD-3-Clause License.