AI Orchestration

AMOF includes a custom AI agent runtime -- a full orchestrator that plans tasks, delegates to specialized workers, manages context, and enforces safety guardrails.

Overview

User Goal
    |
    v
+------------------+
|   Agent Loop     |  Message -> LLM -> Tool Calls -> Execute -> Loop
+------------------+
    |
    v
+------------------+
|   Task Planner   |  Breaks complex goals into subtask DAGs
+------------------+
    |
    v
+------------------+     +------------------+
|   Executor       | --> |   Runners        |  code, k8s, helm, debug, jenkins
+------------------+     +------------------+
    |
    v
+------------------+
|   Model Router   |  Selects model by complexity, risk, context size
+------------------+

Agent Loop

The core agent loop in Agent.run() follows this cycle: receive a message, build context, call the LLM, parse the response, execute any tool calls, append the results, and loop until the task completes or the cost/iteration budget is exceeded.
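The cycle above can be sketched as follows. This is a hypothetical illustration, not AMOF's actual implementation: the `build_context`, `parse_tool_calls`, `execute_tool`, `cost_so_far`, and `max_cost` names are assumptions made for the example.

```python
# Hypothetical sketch of the Agent.run() cycle; all attribute and
# method names on `agent` are assumptions, not AMOF's real API.
def run(agent, goal, max_iterations=50):
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_iterations):            # iteration guard
        context = agent.build_context(messages)
        response = agent.llm.complete(context)
        tool_calls = agent.parse_tool_calls(response)
        if not tool_calls:                     # no tools requested: task is done
            return response
        for call in tool_calls:                # execute tools, append results
            result = agent.execute_tool(call)
            messages.append({"role": "tool", "content": result})
        if agent.cost_so_far > agent.max_cost: # cost ceiling
            raise RuntimeError("budget exceeded")
    raise RuntimeError("iteration limit reached")
```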

Execution Modes

| Mode | Command | Behavior |
|------|---------|----------|
| Interactive | amof agent | REPL with slash commands (/status, /cost, /release, /review, /quit) |
| Single-shot | amof agent "task" | Execute one goal and exit |
| Plan-only | amof agent --plan "goal" | Read-only analysis; produces a structured plan without modifying files |

Safety Controls

| Control | Mechanism |
|---------|-----------|
| Cost ceiling | default_max_cost in agent.yaml (default: $5.00) |
| Budget warnings | Alerts at 50%, 75%, and 90% of budget |
| Iteration guard | Maximum iteration count prevents infinite loops |
| Lint-on-complete | All modified files are linted before task completion |
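Put together, these controls might appear in agent.yaml roughly as follows. Only default_max_cost is named in the docs; the other key names are illustrative assumptions.

```yaml
# Hypothetical agent.yaml fragment. Only default_max_cost is documented;
# the remaining keys are assumed names for illustration.
default_max_cost: 5.00                      # USD ceiling per task
budget_warn_thresholds: [0.50, 0.75, 0.90]  # warning points
max_iterations: 50                          # iteration guard
lint_on_complete: true                      # lint modified files before finishing
```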

Planner-Executor Architecture

Phase 1: Planning

The TaskPlanner uses a strong model to decompose a goal into a directed acyclic graph (DAG) of subtasks. Each subtask has an id, title, description, runner assignment, and dependency list.
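A subtask with those fields might look like this. The field names follow the text (id, title, description, runner, dependencies), but the dataclass itself and the sample plan are assumptions, not AMOF's real types.

```python
# Illustrative shape of a planner subtask; the dataclass is an
# assumption based on the fields described in the text above.
from dataclasses import dataclass, field

@dataclass
class Subtask:
    id: str
    title: str
    description: str
    runner: str                                   # e.g. "code", "k8s", "helm"
    dependencies: list[str] = field(default_factory=list)

# A two-node DAG: t2 depends on t1.
plan = [
    Subtask("t1", "Update chart", "Bump image tag in values.yaml", "helm"),
    Subtask("t2", "Deploy", "Sync release to staging", "k8s", ["t1"]),
]
```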

Phase 2: Execution

The SubtaskExecutor iterates through subtasks in dependency order. For each subtask it resolves the runner, loads the prompt and tools, executes, and feeds results into the next dependent subtask.
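Dependency order over a DAG is a topological sort. The sketch below uses Kahn's algorithm to derive an execution order; SubtaskExecutor's actual scheduling logic is not shown in the docs and may differ.

```python
# Minimal dependency-ordered scheduling via Kahn's algorithm.
# This is a sketch; SubtaskExecutor's real implementation may differ.
from collections import deque

def execution_order(subtasks):
    """subtasks: dict mapping subtask id -> list of dependency ids."""
    indegree = {t: len(deps) for t, deps in subtasks.items()}
    dependents = {t: [] for t in subtasks}
    for t, deps in subtasks.items():
        for d in deps:
            dependents[d].append(t)
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)                   # run t; feed results to dependents
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:        # all dependencies satisfied
                ready.append(nxt)
    if len(order) != len(subtasks):
        raise ValueError("cycle detected -- plan is not a DAG")
    return order
```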

Model Router & LLM Ladder

| Factor | Behavior |
|--------|----------|
| Low complexity (< 0.20) | Routes to fast tier (worker cascade) |
| High complexity (> 0.60) | Routes to strong tier (orchestrator cascade) |
| Large context (> 200k tokens) | Forces strong tier |
| Risk escalation | Infrastructure/deployment tasks always use the strong tier |
| Provider health | Tracks failures; routes around degraded providers |
| Fallback | Primary Anthropic, fallback OpenAI, 60s cooldown |
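The routing policy can be summarized as a small decision function. The thresholds come from the docs above; the function signature, tier names, and the "standard" middle tier are assumptions made for this sketch.

```python
# Sketch of the routing policy; thresholds (0.20, 0.60, 200k tokens)
# come from the docs, but the function itself is an assumption.
def select_tier(complexity, context_tokens, is_infra=False):
    if is_infra:                   # risk escalation: infra/deploy -> strong
        return "strong"
    if context_tokens > 200_000:   # large context forces strong tier
        return "strong"
    if complexity < 0.20:
        return "fast"
    if complexity > 0.60:
        return "strong"
    return "standard"              # assumed default tier between the two
```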

BYOK (Bring Your Own Key)

| Provider | Variable | Notes |
|----------|----------|-------|
| Anthropic | ANTHROPIC_API_KEY | Auto SSL cert detection for corporate proxies |
| OpenAI | OPENAI_API_KEY | GPT and o-series models |
| OpenRouter | OPENROUTER_API_KEY | Unified gateway to 100+ models |
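Keys are supplied as environment variables before launching the agent; the values below are placeholders.

```shell
# Supply your own provider keys (placeholder values shown).
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export OPENROUTER_API_KEY="sk-or-..."
```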

Tool System

Core Tools

| Tool | Description |
|------|-------------|
| Read | Read file contents with path validation |
| Write | Create or overwrite files (guardrail-checked) |
| StrReplace | Surgical find-and-replace within files |
| Delete | Delete files (guardrail-checked) |
| Shell | Execute shell commands (command blocking enforced) |
| Grep | Regex search across the codebase |
| Glob | Find files matching glob patterns |
| LS | List directory contents |

Orchestration Tools

| Tool | Description |
|------|-------------|
| Delegate | Spawn sub-agents for parallel subtask execution |
| MemorySearch | Semantic vector search over codebase embeddings |
| GitCheckpoint | Save and restore Git state for safe experimentation |
| ReadLints | Run linters and return diagnostics |

Operational Tools

| Tool | Description |
|------|-------------|
| K8s | Kubernetes pod inspection, log viewing, environment variables |
| Helm | Chart templating, diffing, syncing |
| Jenkins | Trigger CI/CD pipelines |
| Images | Container image discovery, diffing, migration |
| Audit | Record changelog entries |

Domain Runners

| Runner | Tools | Tier | Max Iterations |
|--------|-------|------|----------------|
| code | Read, Write, StrReplace, Delete, Shell, Grep, Glob, LS, GitCheckpoint, ReadLints | standard | 50 |
| k8s | K8s, Shell, Read, Grep | fast | 30 |
| helm | Helm, Read, Write, Shell, Grep, Glob | standard | 40 |
| debug | K8s, Read, Grep, Shell | standard | 40 |
| jenkins | Jenkins, Shell | fast | 20 |

Context Pipeline

  • Repo Profiling: amof profile generates tech stack profiles from manifest files (Chart.yaml, package.json, etc.)
  • Context Building: Assembles system prompt from repo profiles, ecosystem manifest, guardrails, codebase index, and rules
  • Codebase Indexing: Incremental via Merkle tree -- only changed files sent to LLM (~$0.01-$0.10 incremental vs $0.10-$0.50 full)
  • Context Summarization: Compresses old conversation turns while preserving file paths, decisions, and error resolutions
  • High-Risk File Identification: Scores files by complexity, dependency count, and entry point status
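The Merkle-tree indexing step amounts to hashing file contents and re-sending only what changed. The toy sketch below illustrates the idea with per-file SHA-256 hashes and a combined root hash; AMOF's actual index format is not shown in these docs.

```python
# Toy illustration of Merkle-style incremental indexing: hash each
# file, then fold the sorted (path, hash) pairs into a root hash.
# If the root matches the last run, nothing needs re-indexing.
# AMOF's real index structure is an assumption here.
import hashlib

def file_hashes(files):
    """files: dict mapping path -> bytes content."""
    return {p: hashlib.sha256(c).hexdigest() for p, c in files.items()}

def root_hash(hashes):
    h = hashlib.sha256()
    for path in sorted(hashes):        # deterministic ordering
        h.update(path.encode())
        h.update(hashes[path].encode())
    return h.hexdigest()

def changed_files(old_hashes, new_hashes):
    """Only these files need to be re-sent to the LLM."""
    return [p for p, h in new_hashes.items() if old_hashes.get(p) != h]
```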

Extended Thinking

| Setting | Behavior |
|---------|----------|
| Auto-detection | Identifies thinking-capable models; sets temperature=1, thinking.type=adaptive |
| Budget | thinking_budget: 16000 tokens in agent.yaml |
| Display | Thinking blocks rendered in faded italic in the interactive shell |
| Prefill | Conditional assistant prefill disabled for thinking models (API requirement) |