ARETE

AI Developer Tools — Ship faster, spend less, stay sovereign

17 years enterprise ops. Now building AI infrastructure.

Python / Rust / TypeScript / SQLite / FastAPI

// AI Developer Tools

CLI tools for the AI engineering workflow — cost tracking, prompt ops, context analysis, memory, and agent linting.

agent-lint

Workflow YAML cost estimator + linter (262 tests)

pip install agentlinter

ai-spend

AI API cost aggregator CLI (248 tests)

pip install ai-spend

promptctl

Claude API toolkit — prompt engineering + code review + doc intelligence (311 tests)

pip install promptctlai

context-hygiene

Context window hygiene analyzer for LLM conversations (379 tests)

Private

mcp-manager

MCP server manager across agentic IDEs (158 tests)

pip install arete-mcp

claudemd-forge

CLAUDE.md generator + auditor + drift detector (594 tests)

pip install claudemd-forge

memboot

Zero-infra persistent memory for LLMs (304 tests)

pip install memboot

// Flagship Projects

Animus

AI agent framework — autonomous build pipelines, dual-model routing, streaming, MCP server, identity system with guardrails

13,676+ tests · 4 packages · 97% coverage

Bench Goblins
Fantasy football analytics SaaS — live at benchgoblins.com. Player dossiers, scoring engine, agent pipeline

2,001 tests · 99% coverage · Live on Fly.io + Vercel

Quorum

Multi-agent conflict resolution — versioned intent graphs, overlap detection, Python + Rust (PyO3)

926 tests · 97% coverage · Live on PyPI

// Case Study

claudemd-forge

How I solved context drift for AI coding agents

Problem

AI coding agents like Claude Code rely on CLAUDE.md files for project context — coding standards, architecture, commands, anti-patterns. But these files are written by hand, go stale within days, and nobody audits them. The agent makes worse decisions every time the context drifts from reality.
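To make the problem concrete, here is a minimal, purely illustrative CLAUDE.md fragment (not taken from any real repo) showing the kind of context that goes stale:

```markdown
# Project Context

## Stack
- Python 3.12, FastAPI, SQLite

## Commands
- `make test`: run the full test suite
- `make lint`: ruff + mypy

## Conventions
- snake_case functions, type hints required
- never commit directly to main
```

Every line above is a claim about the repo; when the Makefile target is renamed or the stack changes, the file silently lies to the agent.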

Solution

Built a CLI that analyzes your codebase and generates accurate CLAUDE.md files automatically. It reads pyproject.toml, package.json, Cargo.toml, detects naming conventions by sampling source files, maps architecture trees, and extracts commands from CI configs. Then it audits existing files for accuracy and detects behavioral drift across LLM model versions using benchmark suites.

Architecture

  • Generator — metadata extraction, pattern analysis, Jinja2 templates
  • Auditor — 4 accuracy checkers that validate claims against the codebase
  • Drift Detector — 6 check types, 4 model adapters, YAML benchmark suites, trend visualization
  • License Server — FastAPI, SHA-256 hashed keys, rate limiting, activation tracking
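The key-handling approach named in the License Server bullet can be sketched in a few lines: store only SHA-256 hashes of issued keys and compare in constant time. This is a minimal illustration of that design, not the actual service code:

```python
# Sketch of hashed license keys, as described above. Names are assumptions.
import hashlib
import hmac
import secrets


def issue_key() -> tuple[str, str]:
    """Generate a key; return (plaintext for the customer, hash to store)."""
    key = secrets.token_urlsafe(24)
    digest = hashlib.sha256(key.encode()).hexdigest()
    return key, digest


def verify_key(candidate: str, stored_hash: str) -> bool:
    """Constant-time check of a presented key against the stored hash."""
    digest = hashlib.sha256(candidate.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_hash)
```

Storing only the hash means a leaked database does not leak usable keys, and `hmac.compare_digest` avoids timing side channels on verification.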

Results

  • 594 tests passing
  • 13+ repos using forge-generated CLAUDE.md
  • 8 repos validated with the drift detector
  • 100/100 audit score on its own CLAUDE.md

Across the fleet: 30,000+ tests, 7 published CLI tools, 3 live production systems.