A coding assistant MCP server that lets you explore a code base and make changes to code. It should only be used with trusted repositories, as it offers insufficient protection against prompt injections.
A CLI tool built in Rust for assisting with code-related tasks.
Ensure you have Rust installed on your system. Then:

```shell
# Clone the repository
git clone https://github.com/stippi/code-assistant

# Navigate to the project directory
cd code-assistant

# Build the project
cargo build --release

# The binary will be available in target/release/code-assistant
```
The code-assistant implements the Model Context Protocol by Anthropic. This means it can be added as a plugin to MCP client applications such as Claude Desktop.

Create a file `~/.config/code-assistant/projects.json`. This file declares the projects that are available in MCP server mode (via the `list_projects` and file operation tools). It has the following structure:
```json
{
  "code-assistant": {
    "path": "/Users/<username>/workspace/code-assistant"
  },
  "asteroids": {
    "path": "/Users/<username>/workspace/asteroids"
  },
  "zed": {
    "path": "/Users/<username>/workspace/zed"
  }
}
```
Notes:

- A Finder window opens highlighting the file `claude_desktop_config.json`.
- Open that file in your favorite text editor.

An example configuration is given below:
```json
{
  "mcpServers": {
    "code-assistant": {
      "command": "/Users/<username>/workspace/code-assistant/target/release/code-assistant",
      "args": [
        "server"
      ],
      "env": {
        "PERPLEXITY_API_KEY": "pplx-...", // optional, enables perplexity_ask tool
        "SHELL": "/bin/zsh" // your login shell, required when configuring "env" here
      }
    }
  }
}
```

Note that strict JSON does not allow `//` comments, so remove them before saving the actual file.
Code Assistant can run in two modes:
```shell
code-assistant --task <TASK> [OPTIONS]
```
Available options:

- `--path <PATH>`: Path to the code directory to analyze (default: current directory)
- `-t, --task <TASK>`: Task to perform on the codebase (required in terminal mode, optional with `--ui`)
- `--ui`: Start with GUI interface
- `--continue-task`: Continue from previous state
- `-v, --verbose`: Enable verbose logging
- `-p, --provider <PROVIDER>`: LLM provider to use [ai-core, anthropic, open-ai, ollama, vertex, open-router] (default: anthropic)
- `-m, --model <MODEL>`: Model name to use (provider-specific defaults: anthropic="claude-sonnet-4-20250514", open-ai="gpt-4o", vertex="gemini-2.5-pro-preview-06-05", open-router="anthropic/claude-3-7-sonnet", ollama=required)
- `--base-url <BASE_URL>`: API base URL for the LLM provider to use
- `--tool-syntax <TOOL_SYNTAX>`: Tool invocation syntax [native, xml, caret] (default: xml). `native` = tools via API, `xml` = custom system message with XML tags, `caret` = custom system message with triple-caret blocks
- `--num-ctx <NUM_CTX>`: Context window size in tokens (default: 8192, only relevant for Ollama)
- `--record <RECORD>`: Record API responses to a file (currently only supported for the Anthropic provider)
- `--playback <PLAYBACK>`: Play back a recorded session from a file
- `--fast-playback`: Fast playback mode, ignoring chunk timing when playing recordings

Environment variables:
- `ANTHROPIC_API_KEY`: Required when using the Anthropic provider
- `OPENAI_API_KEY`: Required when using the OpenAI provider
- `GOOGLE_API_KEY`: Required when using the Vertex provider
- `OPENROUTER_API_KEY`: Required when using the OpenRouter provider
- `PERPLEXITY_API_KEY`: Required to use the Perplexity search API tools

When using the AI Core provider (`--provider ai-core`), you need to create a configuration file containing your service key credentials and model deployments.
Default config file location: `~/.config/code-assistant/ai-core.json`
Sample configuration:
```json
{
  "auth": {
    "client_id": "<your service key client id>",
    "client_secret": "<your service key client secret>",
    "token_url": "https://<your service key url>/oauth/token",
    "api_base_url": "https://<your service key api URL>/v2/inference"
  },
  "models": {
    "claude-sonnet-4": "<your deployment id for the model>"
  }
}
```
You can specify a custom config file path using the `--aicore-config` option:

```shell
code-assistant --provider ai-core --aicore-config /path/to/your/ai-core-config.json --task "Your task"
```
Configuration steps:

1. Create the config directory: `mkdir -p ~/.config/code-assistant/`
2. Create the configuration file `~/.config/code-assistant/ai-core.json` with the structure shown above
3. The configuration is loaded automatically when `--provider ai-core` is specified

Examples:
```shell
# Analyze code in current directory using Anthropic's Claude
code-assistant --task "Explain the purpose of this codebase"

# Use a different provider and model
code-assistant --task "Review this code for security issues" --provider open-ai --model gpt-4o

# Analyze a specific directory with verbose logging
code-assistant --path /path/to/project --task "Add error handling" --verbose

# Start with GUI interface
code-assistant --ui

# Start GUI with an initial task
code-assistant --ui --task "Refactor the authentication module"

# Use Ollama with a local model
code-assistant --task "Document this API" --provider ollama --model llama2 --num-ctx 4096

# Record a session for later playback (Anthropic only)
code-assistant --task "Optimize database queries" --record ./recordings/db-optimization.json

# Play back a recorded session with fast-forward (no timing delays)
code-assistant --playback ./recordings/db-optimization.json --fast-playback
```
Runs as a Model Context Protocol server:

```shell
code-assistant server [OPTIONS]
```
Available options:

- `-v, --verbose`: Enable verbose logging

This section is not really a roadmap, as the items are in no particular order. Below are some topics that are likely the next focus.
- Block edits to outdated files: a `replace_in_file` tool use streams in, and we know which file it targets quite early. If we also know this file has changed since the LLM last read it, we can block the attempt with an appropriate error message.
- Sandbox command execution: the `execute_command` tool runs a shell with the provided command line, which at the moment is completely unchecked.
- Fuzzy matching of search blocks: matching is already normalized (`\n` line endings, no trailing white space). This increases the success rate of matching search blocks quite a bit, but certain ways of fuzzy matching might increase the success even more.
Failed matches introduce quite a bit of inefficiency, since they almost always trigger the LLM to re-read a file, even when the error output of the `replace_in_file` tool includes the complete file and tells the LLM not to re-read it.

Contributions are welcome! Please feel free to submit a Pull Request.