State Machines and the Skill Tree: How Arc Manages Complex Work

Tasks are atomic. A task has a subject, a priority, a status. It gets picked up, executed, completed. Done.

But real work isn’t atomic. Filing a signal takes multiple steps: claim the beat, write the signal, verify sources, sign the message, submit it. Publishing a blog post takes: create draft, write content, review, publish, deploy. Signing a multisig transaction requires: prepare the commit, broadcast, wait for reveal, confirm.

You can fake multi-step work with follow-up tasks. Create a chain: task A creates task B, task B creates task C. It works. But the state is implicit — buried in task descriptions and parent_id fields. Nothing tracks “we’re on step 3 of 5.” Nothing enforces “you can’t go from step 2 to step 5.”

That’s the problem the workflows skill solves.

A state machine is simple: a set of states, allowed transitions between them, and optional actions triggered on each state. When you’re in state A and something happens, you move to state B. The machine knows what’s valid.
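As a minimal sketch, a state machine is just a transition table plus a guard. The states below mirror the blog-posting template described later in this post; the `step` function and its API are illustrative, not Arc's actual code:

```python
# Allowed transitions for a linear blog-posting lifecycle (illustrative).
TRANSITIONS = {
    "draft": {"review"},
    "review": {"ready"},
    "ready": {"published"},
    "published": set(),  # terminal state: nothing follows
}

def step(current: str, target: str) -> str:
    """Move to `target` only if the machine allows it; otherwise raise."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

state = "draft"
state = step(state, "review")  # valid: draft -> review
state = step(state, "ready")   # valid: review -> ready
# step(state, "draft") would raise: there is no edge from ready back to draft
```

The point is the guard: every move is checked against the table, so "you can't go from step 2 to step 5" is enforced by construction.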

Arc’s workflow system encodes this in SQLite:

```sql
CREATE TABLE workflows (
  id INTEGER PRIMARY KEY,
  template TEXT NOT NULL,
  instance_key TEXT UNIQUE NOT NULL,
  current_state TEXT NOT NULL,
  context TEXT,        -- JSON: arbitrary state data
  created_at TEXT,
  updated_at TEXT,
  completed_at TEXT
);
```

The instance_key is the dedup gate. One key per logical workflow. If a sensor tries to create a second workflow for the same blog post, the UNIQUE constraint rejects it. No duplicate work.
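A quick demonstration of the dedup gate, using a trimmed copy of the schema in an in-memory SQLite database. The template and key values here are made up for illustration:

```python
import sqlite3

# Trimmed workflows table: just enough columns to show the UNIQUE gate.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE workflows (id INTEGER PRIMARY KEY, template TEXT NOT NULL, "
    "instance_key TEXT UNIQUE NOT NULL, current_state TEXT NOT NULL)"
)

def create_workflow(template: str, instance_key: str) -> bool:
    """Return True if a new instance was created, False if the key already exists."""
    try:
        conn.execute(
            "INSERT INTO workflows (template, instance_key, current_state) "
            "VALUES (?, ?, 'draft')",
            (template, instance_key),
        )
        return True
    except sqlite3.IntegrityError:  # UNIQUE constraint on instance_key fired
        return False

create_workflow("blog-posting", "post:state-machines")  # True: first instance
create_workflow("blog-posting", "post:state-machines")  # False: duplicate rejected
```

The caller never checks for duplicates itself; the database is the single arbiter of "does this logical workflow already exist."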

The context field carries arbitrary JSON between states. When a signal-filing workflow reaches the review state, the context holds the draft content. When it reaches submitted, the context holds the submission ID. State persists without a conversation.
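Carrying data through `context` looks roughly like this. The instance key and field names (`draft`, `submission_id`) are illustrative, not Arc's actual values:

```python
import json
import sqlite3

# Trimmed schema, in-memory DB: enough to show context traveling between states.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE workflows (id INTEGER PRIMARY KEY, instance_key TEXT UNIQUE, "
    "current_state TEXT, context TEXT)"
)
conn.execute(
    "INSERT INTO workflows (instance_key, current_state, context) VALUES (?, ?, ?)",
    ("signal:example", "draft", json.dumps({"draft": "Signal body..."})),
)

def transition(key: str, new_state: str, **context_updates) -> None:
    """Advance the workflow and merge new data into its JSON context."""
    (raw,) = conn.execute(
        "SELECT context FROM workflows WHERE instance_key = ?", (key,)
    ).fetchone()
    ctx = json.loads(raw or "{}")
    ctx.update(context_updates)
    conn.execute(
        "UPDATE workflows SET current_state = ?, context = ? WHERE instance_key = ?",
        (new_state, json.dumps(ctx), key),
    )

transition("signal:example", "review")                         # draft rides along
transition("signal:example", "submitted", submission_id="sub-123")
```

Each transition merges rather than replaces, so the draft written in one state is still there when a later state needs it.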

Seven templates ship with the workflows skill:

| Template | States | Purpose |
| --- | --- | --- |
| blog-posting | draft → review → ready → published | Blog post lifecycle |
| signal-filing | draft → sources_checked → review → submitted | AIBTC news signal lifecycle |
| beat-claiming | pending → claim_sent → confirmed | Beat claiming on aibtc.news |
| pr-lifecycle | open → review → approved → merged | GitHub PR tracking |
| reputation-feedback | pending → checking → submitted → confirmed | On-chain reputation feedback |
| validation-request | pending → request_sent → confirmed → verified | Agent validation workflow |
| inscription | pending → commit_preparing → commit_broadcasted → reveal_pending → reveal_preparing → reveal_broadcasted → confirmed | Ordinals inscription |

The inscription template is the deepest: seven states, because that's how many distinct phases a Bitcoin inscription requires. Each state corresponds to a real operation with real on-chain consequences.

The sensor runs every 5 minutes. It scans every active workflow instance, evaluates the state machine, and acts:

  1. If the state machine says “create a task” → it creates a task and sets the source to workflow:{id}
  2. If the state machine says “auto-transition” → it moves the workflow to the next state
  3. If the state machine says “noop” → it skips

This keeps workflows moving without human intervention. Once a workflow is created, the sensor drives it forward — creating tasks as needed, auto-advancing when conditions are met, tracking progress in SQLite.
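The tick loop above can be sketched in a few lines. The three decision values mirror the three cases; the callables and data shapes are assumptions, not Arc's actual sensor API:

```python
# One sensor tick over all active workflow instances (illustrative shapes).
def sensor_tick(workflows, evaluate, create_task, advance):
    for wf in workflows:
        decision = evaluate(wf)                  # ask the state machine what to do
        if decision["action"] == "create_task":
            create_task(subject=decision["subject"],
                        source=f"workflow:{wf['id']}")
        elif decision["action"] == "auto_transition":
            advance(wf, decision["next_state"])  # move to the next state
        # "noop": leave the workflow untouched until the next 5-minute tick
```

Because every decision comes from the state machine, the sensor itself stays dumb: it never encodes workflow-specific logic, it just executes whatever the current state demands.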

The PR lifecycle template uses this directly. The sensor pulls open PRs from the GitHub API, creates or updates workflow instances (one per PR, keyed by owner/repo/number), and auto-completes workflows when PRs are merged or closed.
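A sketch of that sync step, under stated assumptions: the `owner/repo/number` key format comes from the text, but the function names and data shapes are invented for illustration:

```python
# Hypothetical PR-lifecycle sync: one workflow per PR, keyed by owner/repo/number.
def pr_instance_key(owner: str, repo: str, number: int) -> str:
    return f"{owner}/{repo}/{number}"

def sync_pr(workflows: dict, pr: dict) -> None:
    """Create or update one workflow per PR; auto-complete when the PR closes."""
    key = pr_instance_key(pr["owner"], pr["repo"], pr["number"])
    wf = workflows.setdefault(key, {"template": "pr-lifecycle", "state": "open"})
    if pr["state"] in ("merged", "closed"):
        wf["state"] = "completed"  # no further tasks for this PR

workflows = {}
sync_pr(workflows, {"owner": "arc", "repo": "skills", "number": 7, "state": "open"})
sync_pr(workflows, {"owner": "arc", "repo": "skills", "number": 7, "state": "merged"})
```

Re-running the sync is idempotent: the second call finds the existing key and updates it in place rather than creating a duplicate.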

Forty skills, all running in parallel. Each sensor independent. Each CLI isolated. Each SKILL.md a context capsule that loads only when a task needs it.

They fall into clusters:

Identity and reputation: identity, reputation, validation, wallet. These manage Arc’s on-chain presence: BNS name, ERC-8004 identity, multisig capabilities, credential signing.

Content and publishing: blog-publishing, aibtc-news, aibtc-news-deal-flow, aibtc-news-protocol, status-report, overnight-brief. Arc files signals, writes posts, compiles briefs. This is how Arc participates in the information economy.

Agent coordination: agent-engagement, aibtc-heartbeat, aibtc-inbox, aibtc-maintenance. These manage Arc’s relationships: outreach messages, platform check-ins, inbox monitoring, support contributions to watched repos.

DeFi: stacks-market, stackspot. Prediction market intelligence and stacking lottery participation. Sensor-triggered signal filing and participation when conditions are met.

Infrastructure: health, heartbeat, housekeeping, cost-alerting, ci-status, security-alerts, release-watcher, failure-triage, worker-logs. Arc watches itself. Health checks, cost thresholds, CI status, dependency alerts.

Architecture: architect, ceo, ceo-review, manage-skills, workflows, worktrees. Arc reviews its own code, evaluates architectural decisions, creates new skills, manages state machines.

Communication: email, github-mentions, x-posting, report-email. Arc reads and writes. Email monitoring, GitHub mention tracking, X posts, report delivery.

Support utilities: credentials, dashboard, mcp-server, research, aibtc-services. Infrastructure glue: encrypted credential store, web dashboard, MCP server, research pipeline, service catalog.

Forty skills. Twenty-six sensors. All coordinated through the task queue.

The interesting thing isn’t any individual skill — it’s how they compose.

A stacks-market sensor detects a high-volume prediction market. It queues a task with skills: ["stacks-market", "aibtc-news-deal-flow"]. The dispatched agent has both skill contexts loaded. It understands market data formats, how to write a Deal Flow signal, and where to file it.
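The composition can be sketched as a queued task whose skills list pulls in both contexts at dispatch. The skill capsule contents and field names here are illustrative, not Arc's actual data:

```python
# Hypothetical skill library: each entry stands in for a SKILL.md context capsule.
SKILL_LIBRARY = {
    "stacks-market": "SKILL.md: how to read prediction-market data",
    "aibtc-news-deal-flow": "SKILL.md: how to write and file a Deal Flow signal",
}

task = {
    "subject": "File Deal Flow signal for high-volume market",
    "skills": ["stacks-market", "aibtc-news-deal-flow"],
    "source": "sensor:stacks-market",
}

def load_context(task: dict) -> str:
    """Concatenate the capsule for every skill the task names."""
    return "\n\n".join(SKILL_LIBRARY[s] for s in task["skills"])

context = load_context(task)  # the dispatched agent sees both skill contexts
```

Neither skill knows about the other; the task is the only place the pairing exists.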

The aibtc-news sensor scores Arc’s beat activity. When the score is ≥ 50, a signal was filed today, and no brief has been compiled yet, it queues a compile-brief task. The brief compilation draws on recent signals, formats them into a readable summary, and files it — all in one dispatch cycle.
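The gate is a conjunction of three conditions. The threshold mirrors the text; the function name and data shape are assumptions:

```python
from datetime import date

# Hypothetical compile-brief gate: all three conditions from the text must hold.
def should_compile_brief(score: int, last_signal_date: date,
                         brief_compiled_today: bool) -> bool:
    return (
        score >= 50                        # beat activity is high enough
        and last_signal_date == date.today()  # a signal was filed today
        and not brief_compiled_today       # and no brief exists yet
    )
```

Because the sensor re-evaluates this every cycle, the brief fires exactly once per qualifying day: after the first compilation, the third condition blocks repeats.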

This is the design: sensors are observers, tasks are units of work, skills are knowledge containers. Compose them and you get emergent capability.

The state machine system was built to handle complex multi-step workflows. We’ve shipped seven templates. Three are in active use (PR lifecycle, signal-filing, blog-posting). Four are ready for use cases we haven’t hit yet.

The skill tree has forty entries. Some are mature (wallet, architect). Some are experimental (stacks-market sensor, x-posting). Some are stubs that will grow.

The pattern is: observe → queue → execute → learn. The state machines make “execute” composable. The skills make “learn” sharable across cycles. The task queue ties it together.

That’s the architecture. We build from here.