A server that answers questions about football matches, specialising in the football club Feyenoord.
Model Context Protocol (MCP) server for querying Feyenoord football match data using natural language.
The Streamable HTTP server is publicly available at https://mcp.feyod.nl/mcp. A Docker container is available on Docker Hub.
This MCP server provides a natural language interface to query Feyod: Feyenoord Open Data. This allows users to get answers to their questions related to Feyenoord matches, players, opponents and related events.
The underlying Feyod database is maintained in the jeroenvdmeer/feyod
GitHub repository. You will need to obtain the latest SQL file from that repository to set up the required database.
The server uses LangChain to:

* Translate natural language questions into SQL queries for the Feyod database
* Validate the generated SQL and attempt an automated fix when validation fails
* Execute the query and formulate a natural language answer from the results
LLM and embedding models are dynamically loaded based on configuration using a provider factory (`llm_factory.py`), allowing easy switching between providers like OpenAI, Google, etc.
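For illustration only, such a factory might map the `LLM_PROVIDER` setting to the corresponding LangChain chat model class. The actual contents of `llm_factory.py` may differ; the default model names below are placeholders and only the chat model (not embeddings) is shown:

```python
# Illustrative provider factory; the real llm_factory.py may be structured differently.
import os

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_openai import ChatOpenAI


def get_chat_model():
    """Build a chat model from the LLM_* environment variables (chat model only)."""
    provider = os.environ.get("LLM_PROVIDER", "openai").lower()
    model = os.environ.get("LLM_MODEL", "gpt-4o-mini")
    api_key = os.environ.get("LLM_API_KEY")

    if provider == "openai":
        return ChatOpenAI(model=model, api_key=api_key)
    if provider == "google":
        return ChatGoogleGenerativeAI(model=model, google_api_key=api_key)
    raise ValueError(f"Unsupported LLM_PROVIDER: {provider}")
```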
The Feyod MCP server is publicly available at https://mcp.feyod.nl/mcp. You can connect to this endpoint from any MCP-compatible client, such as Claude Desktop.
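For example, assuming a recent version of the `mcp` Python SDK is installed, a minimal client sketch could connect to the public endpoint and list the available tools (only the endpoint URL is taken from above; the rest is illustrative):

```python
# Minimal MCP client sketch: connect over Streamable HTTP and list the exposed tools.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    async with streamablehttp_client("https://mcp.feyod.nl/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```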
A Docker image of the Feyod MCP server is available on Docker Hub. You can pull and run it using the following commands:
Pull the Docker image:
docker pull jeroenvdmeer/feyod-mcp
Run the Docker container:
You will need to provide the necessary environment variables for the LLM provider and API key. You can also mount the `feyod.db` file if you want to use a local database instead of the one included in the image.
docker run -p 8000:8000 \
-e LLM_PROVIDER="your_llm_provider" \
-e LLM_API_KEY="your_api_key" \
jeroenvdmeer/feyod-mcp
Replace `your_llm_provider` and `your_api_key` with your actual LLM configuration.
To mount a local database file:
docker run -p 8000:8000 \
-e LLM_PROVIDER="your_llm_provider" \
-e LLM_API_KEY="your_api_key" \
-v <absolute_path_to_feyod_db>:/app/feyod/feyod.db \
jeroenvdmeer/feyod-mcp
Replace `<absolute_path_to_feyod_db>` with the absolute path to your `feyod.db` file on your host machine.
This server exposes MCP tools for querying the Feyenoord database. Tools are discoverable via the MCP protocol (`tools/list`).
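Once the tool names are known from `tools/list`, a client can invoke them via `tools/call`. In the sketch below the tool name `answer_question` and its argument are purely hypothetical placeholders; substitute a tool actually advertised by the server:

```python
# Hypothetical tool invocation; "answer_question" and its arguments are placeholders.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def ask(question: str):
    async with streamablehttp_client("https://mcp.feyod.nl/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("answer_question", {"question": question})
            return result.content


print(asyncio.run(ask("When did Feyenoord last win the Eredivisie?")))
```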
Clone repositories:
# Clone this repo for the MCP server
git clone https://github.com/jeroenvdmeer/feyod-mcp.git
# Clone the Feyod database
git clone https://github.com/jeroenvdmeer/feyod.git
# Change directory into the MCP server
cd feyod-mcp
Create and activate a virtual environment (recommended: uv):
Refer to https://docs.astral.sh/uv/ for the installation instructions of `uv`.
uv venv
.venv\Scripts\activate # Windows
# or
source .venv/bin/activate # macOS/Linux
Install dependencies:
# Using uv (recommended)
uv add "mcp[cli]" langchain langchain-openai langchain-google-genai python-dotenv aiosqlite
# Or using pip
pip install -r requirements.txt
Set up the database:
# Change directory to the feyod directory with the SQL file
cd ../feyod
# Build the SQLite database using the SQL statements
sqlite3 feyod.db < feyod.sql
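As a quick sanity check that the database was built, you can list its tables from Python using only the standard library (the exact table names depend on what `feyod.sql` creates):

```python
# Verify that feyod.db was created and contains tables.
import sqlite3

conn = sqlite3.connect("feyod.db")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
print([name for (name,) in tables])
conn.close()
```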
Create a `.env` file in the `mcp` directory with the following variables:
# Path to the SQLite database file (relative to mcp folder or absolute)
DATABASE_PATH="../feyod/feyod.db"
# Server host binding (defaults to localhost/127.0.0.1)
HOST="127.0.0.1"
# Logging level (e.g., DEBUG, INFO, WARNING, ERROR)
LOG_LEVEL=INFO
# --- LLM Configuration ---
LLM_PROVIDER="google" # or "openai", etc.
LLM_API_KEY="YOUR_API_KEY_HERE"
LLM_MODEL="gemini-2.5-flash"
# --- Example Loading Configuration (Optional) ---
EXAMPLE_SOURCE="local" # or "mongodb"
EXAMPLE_DB_CONNECTION_STRING=""
EXAMPLE_DB_NAME="feyenoord_data"
EXAMPLE_DB_COLLECTION="examples"
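These are ordinary environment variables; with `python-dotenv` (already listed as a dependency) they might be read roughly as follows. This is only an illustration of how the settings could be consumed, not the server's actual config module:

```python
# Illustrative loading of the .env settings; the server's own config code may differ.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

DATABASE_PATH = os.getenv("DATABASE_PATH", "../feyod/feyod.db")
HOST = os.getenv("HOST", "127.0.0.1")
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
LLM_PROVIDER = os.getenv("LLM_PROVIDER", "openai")
LLM_API_KEY = os.getenv("LLM_API_KEY")
LLM_MODEL = os.getenv("LLM_MODEL", "gemini-2.5-flash")
```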
Notes:

* The `HOST` setting defaults to "127.0.0.1" for local development. When running in Docker, it's automatically set to "0.0.0.0" to allow external connections.
* Ensure `LLM_PROVIDER` matches one defined in `llm_factory.py`.
* Install the LangChain package for your chosen provider (e.g. `langchain-google-genai`).
* If `EXAMPLE_SOURCE="mongodb"`, configure the MongoDB settings as above.

You can run the server in several ways:
# Development mode with the MCP Inspector
mcp dev main.py

# Direct execution
python main.py
# or
mcp run main.py
The server will start and listen for MCP connections (stdio by default, or HTTP/SSE if configured).
You can containerize the MCP server using the provided `Dockerfile`.
Build the Docker image:
Navigate to the `mcp` directory in your terminal and run the following command:
docker build -t feyod-mcp:latest .
This will build an image tagged `feyod-mcp:latest`.
Run the Docker container: You can run the container, mapping the internal port 8000 to an external port (e.g., 8000) on your host machine. You will also need to mount the database file as a volume so the container can access it.
docker run -p 8000:8000 -v <absolute_path_to_feyod_db>:/app/../feyod/feyod.db feyod-mcp:latest
Replace `<absolute_path_to_feyod_db>` with the absolute path to your `feyod.db` file on your host machine.
Alternatively, you can pass environment variables directly:
docker run -p 8000:8000 -e DATABASE_PATH="/app/../feyod/feyod.db" -e LLM_PROVIDER="google" -e LLM_API_KEY="YOUR_API_KEY_HERE" -e LLM_MODEL="gemini-2.5-flash" -v <absolute_path_to_feyod_db>:/app/../feyod/feyod.db feyod-mcp:latest
Remember to replace the placeholder values with your actual configuration.
The server inside the container will start and listen on `0.0.0.0:8000`.
To add support for a new provider:

1. Install the corresponding LangChain integration package (e.g. `pip install langchain-anthropic`).
2. Update `llm_factory.py` to add the provider (see the sketch below).
3. `.env` / README: Add the necessary API key to your `.env` file.

Troubleshooting tips:

* Check `requirements.txt` for specific package dependencies.
* Make sure the package for your configured provider is installed (`langchain-openai`, `langchain-google-genai`).
* Use `mcp dev main.py` and the MCP Inspector for local testing.
* Double-check the settings in your `.env` file and Claude Desktop config.
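Building on the illustrative factory sketch earlier in this README, adding a provider such as Anthropic might look like the following (again a sketch under assumptions; the real `llm_factory.py` may differ and the default model name is a placeholder):

```python
# Hypothetical addition of an Anthropic branch to the illustrative factory sketch.
import os

from langchain_anthropic import ChatAnthropic  # pip install langchain-anthropic


def get_chat_model():
    provider = os.environ.get("LLM_PROVIDER", "").lower()
    if provider == "anthropic":
        return ChatAnthropic(
            model=os.environ.get("LLM_MODEL", "claude-3-5-haiku-latest"),
            api_key=os.environ.get("LLM_API_KEY"),
        )
    raise ValueError(f"Unsupported LLM_PROVIDER: {provider}")
```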
This section provides an overview of the key security measures implemented in the MCP server. These measures are selected based on a risk assessment and the security considerations provided in the MCP documentation.
The key assets of the MCP server are:
Both assets serve the key objective of the MCP server to provide correct answers to questions from the end-user.
As a basis for the risk assessment, the following overview analyses the impact on each information security criterion.
| Criterion | Impact | Description |
|---|---|---|
| Confidentiality | None | As the Feyod database is publicly available, exposure of the data has no impact. |
| Integrity | High | The key feature of the MCP server revolves around providing accurate answers to questions. When the data is no longer complete and/or correct, the MCP server can no longer live up to this promise. |
| Availability | High | The key feature of the MCP server revolves around providing accurate answers to questions. When the data is no longer available, the MCP server can no longer live up to this promise. |
Given the importance of the integrity and availability of the data, interactions which delete and/or alter the Feyod database are considered the key risks to the security of the information in the Feyod database. Such interactions can be performed by:

* The MCP server itself, when it is misused to generate malicious SQL queries (e.g. `DELETE` and `UPDATE` queries)
* Anyone gaining unauthorised access to the database in the hosting platform of the MCP server

Given the objective to provide correct answers to questions from the end-user, the following key risks are considered:

* Incorrect SQL queries and/or output messages generated by the LLM, leading to incorrect answers
* Excessive requests to the MCP server, leading to unavailability of the service and high costs
* Malicious requests to the MCP service that trigger a high number of costly LLM calls and lead to high costs
Based on the key risks identified related to the information in the Feyod database and the publicly exposed Streamable HTTP server, the following key security controls are implemented.
| Risk | Control | Implementation Details |
|---|---|---|
| Misusing the MCP server to generate malicious SQL queries (e.g. `DELETE` and `UPDATE` queries) | Prompt Engineering & Query Validation | **Prompt-Level Enforcement:** The system prompt sent to the LLM in `query_processor.py` contains the explicit instruction: "DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database. Only SELECT statements are allowed." <br> **Application-Level Guardrail:** After generation, the application code validates that the returned SQL string begins with SELECT (case-insensitive). This serves as a deterministic check to prevent the execution of other statement types. <br> **Read-Only Access:** For the publicly hosted service, the database file is mounted as read-only. This provides a strong, infrastructure-level guarantee that no write or delete operations can be performed, regardless of the generated SQL. |
| Unauthorised access to the database in the hosting platform of the MCP server | Filesystem and Network Isolation | **Local File Access:** Per standard configuration the SQLite database is a local file (`feyod.db`) and is not exposed over a network port. The application, as seen in `database.py`, accesses it directly from the filesystem, minimizing the network attack surface. <br> **Principle of Least Privilege:** In the production hosting environment, standard filesystem permissions are used to ensure that only the service account running the MCP server process has read access to the database file. |
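As an illustration of the application-level guardrail described above (not the literal code in `query_processor.py`), a SELECT-only check can be as simple as:

```python
# Minimal sketch of a SELECT-only guardrail; the actual implementation may differ.
def is_select_only(sql: str) -> bool:
    """Accept only statements that start with SELECT (case-insensitive)."""
    return sql.lstrip().upper().startswith("SELECT")


assert is_select_only("  select * FROM matches LIMIT 5")
assert not is_select_only("DELETE FROM matches")
```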
| Risk | Control | Implementation Details |
|---|---|---|
| Incorrect SQL queries and/or output messages generated by the LLM leading to incorrect answers | Multi-Stage Validation Workflow | **Context-Rich Generation:** To improve accuracy, the LLM is provided with the full database schema (from `database.get_schema_description`) and, when available, few-shot examples of valid question-query pairs (from `examples.py`). <br> **Pre-Execution Syntax Check:** In `query_processor.py`, every generated SQL query is first checked using an `EXPLAIN` statement (`check_sql_syntax` function). This validates the query's syntax against the database schema without actually executing it, catching most errors. <br> **Automated Fixing Loop:** If the syntax check fails, the workflow (`attempt_fix_sql` function) triggers a second LLM call, providing it with the invalid query and the specific database error it produced. This allows the model to attempt a correction, which is then re-validated before execution. |
| Excessive requests are made to the MCP server leading to unavailability of the service and high costs | Resource & Infrastructure Limiting | **Query Result Capping:** The SQL generation prompt instructs the LLM to "always limit your query to at most 5 results", which prevents queries from returning excessively large datasets that could strain the server or client. <br> **Containerization:** The use of Docker allows for applying resource limits (CPU, memory) on the container, preventing a single runaway process from impacting the entire host machine. |
| Malicious requests are made to the MCP service which trigger a high number of costly LLM calls and lead to high costs | Efficient Workflow & External Monitoring | **Optimized LLM Usage:** The query processing workflow is designed to be efficient. For a valid user question, it typically requires only a single LLM call. The more expensive "fixer" call is only triggered upon failure of the initial generated query. |
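For illustration, a pre-execution syntax check along these lines could rely on SQLite's `EXPLAIN`, which compiles a statement without running the wrapped query (a sketch only; the actual `check_sql_syntax` in `query_processor.py` may differ):

```python
# Sketch of an EXPLAIN-based syntax check against a SQLite database file.
import sqlite3


def check_sql_syntax(db_path: str, sql: str) -> tuple[bool, str | None]:
    """Return (True, None) if the statement compiles, else (False, error message)."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(f"EXPLAIN {sql}")  # compiles the query; the wrapped statement is not run
        return True, None
    except sqlite3.Error as exc:
        return False, str(exc)
    finally:
        conn.close()
```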
This initiative is not affiliated with Feyenoord Rotterdam N.V. and is therefore not an official Feyenoord product. The data provided through this server is unofficial and may be incorrect.