Professional project scaffolding toolkit with zero-configuration AI context generation, template generation for Rust/Deno/Python projects, and hybrid neuro-symbolic code analysis.
Zero-configuration AI context generation system that analyzes any codebase instantly through CLI, MCP, or HTTP interfaces. Built by Pragmatic AI Labs with extreme quality standards and zero tolerance for technical debt.
Toyota Way Success: Achieved a 97% complexity reduction in stubs.rs through complete modular refactoring (v0.29.5). The project maintains zero-tolerance standards: 0 SATD comments, 0 failing doctests, 0 failing property tests, 72+ comprehensive property tests, and proper separation of concerns across all components. The latest refactoring created dedicated modules (language_analyzer.rs, defect_formatter.rs, dead_code_formatter.rs), eliminating 549 lines of duplicated code while maintaining full functionality.
Install pmat using one of the following methods:
From Crates.io (Recommended):
cargo install pmat
With the Quick Install Script (Linux only):
curl -sSfL https://raw.githubusercontent.com/paiml/paiml-mcp-agent-toolkit/master/scripts/install.sh | sh
macOS users: please use cargo install pmat instead. Pre-built binaries are only available for Linux.
From Source:
git clone https://github.com/paiml/paiml-mcp-agent-toolkit
cd paiml-mcp-agent-toolkit
cargo build --release
From GitHub Releases:
Pre-built binaries for Linux are available on the releases page. macOS and Windows users should use cargo install pmat.
# Analyze current directory
pmat context
# Get complexity metrics for top 10 files
pmat analyze complexity --top-files 10
# Analyze specific files with include patterns
pmat analyze complexity --include "src/*.rs" --format json
# Test with validated examples (try these!)
cargo run --example complexity_demo
pmat analyze complexity --include "server/examples/complexity_*.rs"
# Find technical debt
pmat analyze satd
# Run comprehensive quality checks
pmat quality-gate --strict
Add to your Cargo.toml:
[dependencies]
pmat = "0.28.0"
Basic usage:
use pmat::{
    services::code_analysis::CodeAnalysisService,
    types::ProjectPath,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let service = CodeAnalysisService::new();
    let path = ProjectPath::new(".");

    // Generate context
    let context = service.generate_context(path, None).await?;
    println!("Project context: {}", context);

    // Analyze complexity
    let complexity = service.analyze_complexity(path, Some(10)).await?;
    println!("Complexity results: {:?}", complexity);

    Ok(())
}
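Note: the example above relies on the tokio async runtime via #[tokio::main], so tokio must also be listed as a dependency alongside pmat (for example with its full feature set enabled).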
pmat refactor auto achieves extreme quality standards
pmat refactor auto --single-file-mode --file path/to/file.rs for targeted refactoring
pmat refactor docs maintains Zero Tolerance Quality Standards
pmat enforce extreme --file path/to/file.rs for file-specific enforcement
pmat lint-hotspot --file path/to/file.rs for targeted analysis
--fail-on-violation for CI/CD pipelines

# Zero-configuration context generation
pmat context # Auto-detects language
pmat context --format json # JSON output
pmat context -t rust # Force toolchain
pmat context --skip-expensive-metrics # Fast mode
# Code analysis
pmat analyze complexity --top-files 5 # Complexity analysis
pmat analyze complexity --fail-on-violation # CI/CD mode - exit(1) if violations
pmat analyze churn --days 30 # Git history analysis
pmat analyze dag --target-nodes 25 # Dependency graph
pmat analyze dead-code --format json # Dead code detection
pmat analyze dead-code --fail-on-violation --max-percentage 10 # CI/CD mode
pmat analyze satd --top-files 10 # Technical debt
pmat analyze satd --strict --fail-on-violation # Zero tolerance for debt
pmat analyze deep-context --format json # Comprehensive analysis
pmat analyze deep-context --full # Full detailed report
pmat analyze deep-context --include-pattern "*.rs" # Filter by file pattern
pmat analyze big-o # Big-O complexity analysis
pmat analyze makefile-lint # Makefile quality linting
pmat analyze proof-annotations # Provability analysis
# Analysis commands
pmat analyze graph-metrics # Graph centrality metrics (PageRank, betweenness, closeness)
pmat analyze name-similarity "function_name" # Fuzzy name matching with phonetic support
pmat analyze symbol-table # Symbol extraction with cross-references
pmat analyze duplicates --min-lines 10 # Code duplication detection
pmat quality-gate --fail-on-violation # Comprehensive quality enforcement
pmat diagnose --verbose # Self-diagnostics and health checks
# WebAssembly Support
pmat analyze assemblyscript --wasm-complexity # AssemblyScript analysis with WASM metrics
pmat analyze webassembly --include-binary # WebAssembly binary and text format analysis
# Project scaffolding
pmat scaffold rust --templates makefile,readme,gitignore
pmat list # Available templates
# Refactoring engine
pmat refactor interactive # Interactive refactoring
pmat refactor serve --config refactor.json # Batch refactoring
pmat refactor status # Check refactor progress
pmat refactor resume # Resume from checkpoint
pmat refactor auto # AI-powered automatic refactoring
pmat refactor docs --dry-run # Clean up documentation
# Demo and visualization
pmat demo --format table # CLI demo
pmat demo --web --port 8080 # Web interface
pmat demo --repo https://github.com/user/repo # Analyze GitHub repo
# Quality enforcement
pmat quality-gate --fail-on-violation # CI/CD quality gate
pmat quality-gate --checks complexity,satd,security # Specific checks
pmat quality-gate --format human # Human-readable output
pmat enforce extreme # Enforce extreme quality standards
# Add to Claude Code
claude mcp add pmat ~/.local/bin/pmat
The MCP server can now be run using the pmcp Rust SDK for better type safety and async support:
# Run the MCP server with the pmcp SDK
cargo run --example mcp_server_pmcp

// Or use pmat as a library
use pmat::mcp_pmcp::{handlers::*, PmcpServer};
use pmcp::{Server, ServerBuilder};

let server = ServerBuilder::new("pmat-mcp", "1.0.0")
    .with_tool("analyze_complexity", "Analyze code complexity",
        Box::new(AnalyzeComplexityTool))
    .with_tool("analyze_satd", "Detect technical debt",
        Box::new(AnalyzeSatdTool))
    // ... add more tools
    .build();

// Handle connections
server.handle_connection(stream).await?;
The pmcp SDK provides type-safe tool definitions and async request handling.
Available MCP tools:
generate_template - Generate project files from templates
scaffold_project - Generate complete project structure
analyze_complexity - Code complexity metrics with tool composition
analyze_code_churn - Git history analysis
analyze_dag - Dependency graph generation
analyze_dead_code - Dead code detection
analyze_deep_context - Comprehensive analysis with tool composition
generate_context - Zero-config context generation
analyze_big_o - Big-O complexity analysis with confidence scores
analyze_makefile_lint - Lint Makefiles with 50+ quality rules
analyze_proof_annotations - Lightweight formal verification
analyze_graph_metrics - Graph centrality and PageRank analysis
refactor_interactive - Interactive refactoring with explanations

AI agents can now chain analysis tools using the --files parameter:
# Step 1: Find complexity hotspots
pmat analyze complexity --top-files 5 --format json
# Step 2: Deep analyze those specific files (MCP composition)
pmat analyze comprehensive --files src/complex.rs,src/legacy.rs
# Step 3: Target refactoring on problematic files
pmat refactor auto --files src/complex.rs
MCP Agent Workflow Example:
// AI agent discovers hotspots
const hotspots = await callTool("pmat_analyze_complexity", {
  top_files: 5,
  format: "json"
});

// Agent extracts file paths and performs deep analysis
const detailed = await callTool("pmat_analyze_comprehensive", {
  files: hotspots.files.map(f => f.path)
});

// Agent generates targeted refactoring plan
const refactor = await callTool("pmat_refactor_auto", {
  files: detailed.high_risk_files
});
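Each call feeds the file list from the previous tool's JSON output into the next, so the agent only performs deep analysis and refactoring on files it has already identified as complexity hotspots.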
# Start server
pmat serve --port 8080 --cors
# API endpoints
curl "http://localhost:8080/health"
curl "http://localhost:8080/api/v1/analyze/complexity?top_files=5"
curl "http://localhost:8080/api/v1/templates"
# POST analysis
curl -X POST "http://localhost:8080/api/v1/analyze/deep-context" \
-H "Content-Type: application/json" \
-d '{"project_path":"./","include":["ast","complexity","churn"]}'
All analyze commands now support --fail-on-violation for seamless CI/CD integration:
name: Code Quality
on: [push, pull_request]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: dtolnay/rust-toolchain@stable
      - name: Install pmat
        run: cargo install pmat

      - name: Check Complexity
        run: |
          pmat analyze complexity \
            --max-cyclomatic 15 \
            --max-cognitive 10 \
            --fail-on-violation

      - name: Check Technical Debt
        run: pmat analyze satd --strict --fail-on-violation

      - name: Check Dead Code
        run: |
          pmat analyze dead-code \
            --max-percentage 10.0 \
            --fail-on-violation

      - name: Run Quality Gate
        run: pmat quality-gate --fail-on-violation
Complexity thresholds: --max-cyclomatic (default: 20), --max-cognitive (default: 15)
Dead-code threshold: --max-percentage (default: 15.0%)
Fail the build on violations: --fail-on-violation

See examples/ci_integration.rs for more CI/CD patterns including GitLab CI, Jenkins, and pre-commit hooks.
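Because --fail-on-violation makes pmat exit with a non-zero status when violations are found, the quality gate can also be wrapped in an ordinary Rust integration test. A minimal sketch, assuming pmat is installed and on PATH (the test name and file path are illustrative):

// tests/quality_gate.rs (illustrative path)
use std::process::Command;

#[test]
#[ignore] // opt-in: needs pmat on PATH; run with `cargo test -- --ignored`
fn quality_gate_has_no_violations() {
    let status = Command::new("pmat")
        .args(["quality-gate", "--fail-on-violation"])
        .status()
        .expect("failed to launch pmat");
    // pmat exits non-zero when the quality gate reports violations
    assert!(status.success(), "pmat quality-gate reported violations");
}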
--include patterns now work correctly with test directories (e.g., --include "tests/**/*.rs")
map_or replaced with is_some_and for cleaner code
handle_refactor_auto function complexity reduced from 136 → 21 (84% reduction)
--fail-on-violation flag for CI/CD pipelines
--max-percentage for dead-code analysis
--file flag to pmat analyze comprehensive for analyzing individual files
make lint passes with pedantic and nursery standards
pmat refactor docs for cleaning up documentation
pmat analyze graph-metrics for centrality analysis
pmat analyze name-similarity for fuzzy name matching
pmat analyze symbol-table for symbol extraction
pmat analyze duplicates for detecting duplicate code

This project exemplifies the Toyota Way philosophy through disciplined quality practices:
# .github/workflows/quality.yml
- name: Run Quality Gate
  run: |
    pmat quality-gate \
      --checks complexity,satd,security,dead-code \
      --max-complexity-p99 20 \
      --fail-on-violation
Explore our comprehensive documentation to get the most out of pmat, including guidance on using pmat with AI agents.

For systems with low swap space, we provide a configuration tool:
make config-swap # Configure 8GB swap (requires sudo)
make clear-swap # Clear swap memory between heavy operations
The project uses a distributed test architecture for fast feedback:
# Run specific test suites
make test-unit # <10s - Core logic tests
make test-services # <30s - Service integration
make test-protocols # <45s - Protocol validation
make test-e2e # <120s - Full system tests
make test-performance # Performance regression
# Run all tests in parallel
make test-all
# Coverage analysis
make coverage-stratified
We welcome contributions! Please see our Contributing Guide for details.
# Clone and setup
git clone https://github.com/paiml/paiml-mcp-agent-toolkit
cd paiml-mcp-agent-toolkit
# Install dependencies
make install-deps
# Run tests
make test-fast # Quick validation
make test-all # Complete test suite
# Check code quality
make lint # Run extreme quality lints
make coverage # Generate coverage report
Create a feature branch (git checkout -b feature/amazing-feature)
make lint # Check code quality
make test # Run all tests (fast, doctests, property tests, examples)

Note: The make test command runs comprehensive testing, including fast tests, doctests, property tests, and examples.
See CONTRIBUTING.md for detailed guidelines.
Licensed under either of:
at your option.
Built with ❤️ by Pragmatic AI Labs