Async Lifecycle

OpenAlice is not a request/response chatbot. It has an autonomous lifecycle — things happen even when you're not looking. This page explains the event-driven architecture that makes this work.

The Event Bus

At the center is the EventLog — a persistent, append-only JSONL event bus. Everything that happens asynchronously flows through it:

  • Disk — Append-only JSONL file at data/event-log/events.jsonl. Source of truth, survives crashes.
  • Memory — Ring buffer of the 500 most recent entries for fast queries. Rebuilt from disk on startup.
  • Subscriptions — Listeners subscribe by event type (or wildcard). Each append fans out synchronously to matching subscribers.
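
For concreteness, here is a minimal sketch of that dual-write plus fan-out shape (the class and method names are illustrative assumptions, not the actual source):

import { appendFileSync } from 'node:fs'

interface LogEntry { seq: number; type: string; payload: unknown; ts: number }

class EventLog {
  private buffer: LogEntry[] = []   // in-memory ring buffer, last 500 entries
  private subs = new Map<string, Array<(e: LogEntry) => void>>()
  private seq = 0
  constructor(private file = 'data/event-log/events.jsonl') {}

  append(type: string, payload: unknown): LogEntry {
    const entry: LogEntry = { seq: ++this.seq, type, payload, ts: Date.now() }
    appendFileSync(this.file, JSON.stringify(entry) + '\n')  // disk first: source of truth
    this.buffer.push(entry)
    if (this.buffer.length > 500) this.buffer.shift()        // cap the ring buffer
    const fns = [...(this.subs.get(type) ?? []), ...(this.subs.get('*') ?? [])]
    for (const fn of fns) fn(entry)   // synchronous fan-out to matching subscribers
    return entry
  }

  subscribe(type: string, fn: (e: LogEntry) => void): void {
    this.subs.set(type, [...(this.subs.get(type) ?? []), fn])
  }
}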

Typed Event System

Events are not free-form. OpenAlice has a typed event registry — AgentEventMap — where every event type has:

  • A TypeBox schema for runtime payload validation
  • An external flag marking whether outside callers can ingest it
  • A human-readable description surfaced in the UI

Adding a new event type means adding one entry to the AgentEvents registry. The schema, the external gate, and the description all live together.
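
For illustration, one entry might look like the sketch below; the three properties come from the list above, while the exact field names are assumptions:

import { Type } from '@sinclair/typebox'

const AgentEvents = {
  'task.requested': {
    schema: Type.Object({ prompt: Type.String() }),   // runtime payload validation
    external: true,                                   // outside callers may ingest this one
    description: 'External caller asked Alice to run a one-shot task',
  },
} as const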

Current Event Types

Event              External?   When it fires
cron.fire          no          Cron scheduler timer fired for a registered job
cron.done          no          Cron job routed through the AI and completed
cron.error         no          Cron job routing failed
heartbeat.done     no          Heartbeat produced content and attempted delivery
heartbeat.skip     no          Heartbeat fired but stayed quiet (HEARTBEAT_OK, duplicate, outside active hours, empty)
heartbeat.error    no          Heartbeat invocation errored
message.received   no          User message arrived on a connector
message.sent       no          Assistant reply dispatched on a connector
task.requested     yes         External caller asked Alice to run a one-shot task (see Webhooks)
task.done          no          Requested task completed, reply dispatched
task.error         no          Requested task failed

Payloads are validated at the boundary. An invalid payload causes the append to be rejected with a clear error.
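
Continuing the two sketches above, the boundary check could be a TypeBox Value.Check before the append; ingest() is a hypothetical name:

import { Value } from '@sinclair/typebox/value'

const eventLog = new EventLog()   // from the EventLog sketch above

function ingest(type: keyof typeof AgentEvents, payload: unknown): void {
  const def = AgentEvents[type]
  if (!Value.Check(def.schema, payload)) {
    throw new Error(`invalid payload for ${type}`)   // the append is rejected
  }
  eventLog.append(type, payload)                     // only valid payloads reach the log
}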

Listeners and Producers

Two roles interact with the bus:

Listener

A module that reacts to events. Each listener declares:

  • name — unique identifier
  • subscribes — event types it listens to (or '*' for wildcard)
  • emits — event types it may emit (or '*' for wildcard) — this is a constrained set; emitting anything else throws
  • handle(entry, ctx) — the reaction logic

The ListenerRegistry manages lifecycle centrally: every module hands its listener over, and start() / stop() bring all of them up or down together. Errors inside a handler are caught and logged — one bad listener doesn't take down the others.
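
A minimal sketch of declaring and registering a listener; the registry instance and its register() / start() methods are assumptions based on the description above, and LogEntry reuses the earlier sketch:

const helloListener = {
  name: 'hello-logger',
  subscribes: ['message.received'],   // react to user messages
  emits: [],                          // declares nothing, so any emit would throw
  async handle(entry: LogEntry): Promise<void> {
    console.log(`[seq ${entry.seq}] message.received`, entry.payload)
  },
}

registry.register(helloListener)      // hand the listener over...
await registry.start()                // ...and activate everything together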

Producer

A pure event source — something that emits events but doesn't react to any. Webhook ingest is a producer: it has no subscription but must be able to emit task.requested. Declaring it at registration time makes it visible in topology queries and gives it a constrained emit handle.
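
A producer declaration might look like this sketch; registerProducer() and the returned handle are assumptions based on the description above:

const webhookIngest = registry.registerProducer({
  name: 'webhook-ingest',
  emits: ['task.requested'],          // the constrained emit set
})
// the handle can only emit what it declared; anything else throws
await webhookIngest.fire('task.requested', { prompt: 'Summarize overnight fills' })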

Connector messages share one producer. Every connector (Web, Telegram, MCP Ask, future Discord/Slack/...) routes message.received and message.sent through ConnectorCenter's single connectors producer rather than declaring its own. Adding a new connector requires no producer wiring — just call connectorCenter.emitMessageReceived() / emitMessageSent(). The Flow graph stays clean (one connectors node instead of one per connector).

causedBy — Event Lineage

Every event produced by a listener carries a causedBy reference to the parent event that triggered it. This builds an implicit causal graph:

POST /api/events/ingest  →  task.requested (seq 42)
                                 │ causedBy: null
                                 ↓
task-router handles        →  task.done (seq 43)
                                 │ causedBy: 42
                                 ↓
connector delivers         →  message.sent (seq 44)
                                 │ causedBy: 43

You can trace any outcome back to its trigger by following causedBy. The UI Flow view uses this to render concrete edges between events.
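
Tracing is a walk up the parent links. A small sketch, assuming each entry carries seq and a nullable causedBy:

interface Traced { seq: number; type: string; causedBy: number | null }

function lineage(entries: Traced[], seq: number): Traced[] {
  const bySeq = new Map(entries.map((e) => [e.seq, e]))
  const chain: Traced[] = []
  let cur = bySeq.get(seq)
  while (cur) {
    chain.unshift(cur)   // walk parent links back toward the root trigger
    cur = cur.causedBy === null ? undefined : bySeq.get(cur.causedBy)
  }
  return chain
}
// lineage(log, 44) → task.requested #42, task.done #43, message.sent #44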

Three Autonomous Routers

Three listeners do the heavy lifting — each subscribes to its trigger event (cron.fire or task.requested) and routes it to AgentCenter:

cron-router — User-Defined Jobs

Subscribes to cron.fire events for non-internal jobs. Sends the payload to AgentCenter, delivers the reply via ConnectorCenter, emits cron.done / cron.error. Jobs run serially — if one is still processing when the next fires, the second is skipped.

heartbeat — Market Monitoring

Subscribes to cron.fire for the __heartbeat__ job specifically. Same basic flow as cron-router, but adds active-hours guarding, structured response parsing (HEARTBEAT_OK / CHAT_YES), and dedup before delivery. Emits heartbeat.done / heartbeat.skip / heartbeat.error. See Heartbeat.

task-router — External Tasks

Subscribes to task.requested — the only event type that can be ingested from outside the process. Runs the prompt through AgentCenter in a dedicated task/default session, delivers the reply, emits task.done / task.error. See Webhooks.

All three share the same shape: serial processing guard, AgentCenter integration, ConnectorCenter delivery, typed completion events. If you need a new autonomous behavior, write another listener following the same pattern.
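
Sketched as code below; the serial guard and typed completion events are the documented pattern, while the ctx surface is an assumption and LogEntry reuses the earlier sketch:

interface RouterCtx {
  agent: { run(payload: unknown): Promise<string> }
  connectors: { deliver(reply: string): Promise<void> }
  fire(type: string, payload: unknown): Promise<void>
}

let busy = false
const myRouter = {
  name: 'my-router',
  subscribes: ['cron.fire'],
  emits: ['cron.done', 'cron.error'],
  async handle(entry: LogEntry, ctx: RouterCtx): Promise<void> {
    if (busy) return                                   // serial guard: overlapping fire is skipped
    busy = true
    try {
      const reply = await ctx.agent.run(entry.payload) // AgentCenter integration
      await ctx.connectors.deliver(reply)              // ConnectorCenter delivery
      await ctx.fire('cron.done', { ok: true })        // typed completion event
    } catch (err) {
      await ctx.fire('cron.error', { message: String(err) })
    } finally {
      busy = false
    }
  },
}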

Internal timers vs user cron

Not every periodic action goes through cron.fire. The engine also runs internal timers that don't emit events at all — they just tick and call domain code directly. The most prominent is the broker catalog refresh: every 6 hours the engine calls refreshCatalog() on every UTA so newly listed assets surface in contract search. It uses a plain setInterval, not the cron registry, because there's nothing to schedule — it's product behavior, not user behavior.

Rule of thumb: user-visible periodic work (heartbeat, scheduled tasks, snapshots) goes through cron + event bus, so it shows up in the Flow graph and respects user config. Engine housekeeping (catalog refresh, file rotation, etc.) runs as a plain timer.
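
The catalog refresh, sketched as plain-timer housekeeping; refreshCatalog() is named above, while the surrounding wiring is assumed:

declare const brokerAccounts: Array<{ refreshCatalog(): Promise<void> }>

const SIX_HOURS = 6 * 60 * 60 * 1000
setInterval(async () => {
  for (const uta of brokerAccounts) {
    try {
      await uta.refreshCatalog()                      // newly listed assets surface in contract search
    } catch (err) {
      console.error('catalog refresh failed', err)    // logged; never crashes the engine
    }
  }
}, SIX_HOURS)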

Observers

Listeners can also be observers — subscribers that don't emit anything, just watch. The built-in event-metrics listener is a wildcard observer (subscribes: '*') that keeps per-type counts and last-seen timestamps in memory. Useful for cheap observability of the bus.
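
A sketch of that observer; subscribes: '*' and the no-emit contract come from the docs, the counts structure is an assumption, and LogEntry reuses the earlier sketch:

const counts = new Map<string, { n: number; lastSeen: number }>()

const eventMetrics = {
  name: 'event-metrics',
  subscribes: '*',                     // wildcard: sees every event on the bus
  emits: [],                           // observers never emit
  handle(entry: LogEntry): void {
    const prev = counts.get(entry.type) ?? { n: 0, lastSeen: 0 }
    counts.set(entry.type, { n: prev.n + 1, lastSeen: entry.ts })
  },
}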

fire() — In-Process Event Injection

Plugins and custom code can emit events through ctx.fire() instead of going through HTTP:

await ctx.fire('task.requested', { prompt: 'Check BTC price' })

Same pipeline as the webhook path, same listener fan-out, same validation. Use this when you want to poke Alice from inside the process without a network round-trip.

The Trading Lifecycle

Trading has its own async lifecycle that connects to this system through event hooks rather than through the cron path:

User/AI Decision
    ↓
stage operations → commit → push (requires approval)
    ↓                              ↓
    ↓                     Guard Pipeline runs
    ↓                              ↓
    ↓                     Broker executes orders
    ↓                              ↓
    ↓                     ┌──── Post-Push Hooks ────┐
    ↓                     │  • Snapshot (immediate)  │
    ↓                     │  • EventLog recording    │
    ↓                     └──────────────────────────┘
    ↓
tradingSync (async — exchanges settle later)
    ↓
Order filled / cancelled / expired → Sync commit recorded

Key hooks:

  • Post-push — Immediately after orders hit the broker, onPostPush fires and a snapshot captures account state. Event-driven, not cron-driven.
  • Post-reject — When you reject a commit, onPostReject fires and a snapshot records the state at rejection time.
  • Sync — Order settlement is asynchronous. tradingSync polls the broker for fills, which may happen seconds or hours after the push. Each sync produces a new commit on the trading git.
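
As a sketch, wiring the hooks might look like this; onPostPush and onPostReject are the documented hook names, everything else here is assumed:

declare const trading: {
  onPostPush(fn: (push: { accountId: string }) => Promise<void>): void
  onPostReject(fn: (commit: { accountId: string }) => Promise<void>): void
}
declare const snapshots: { capture(accountId: string): Promise<void> }

trading.onPostPush(async (push) => {
  await snapshots.capture(push.accountId)    // immediate snapshot of account state
})
trading.onPostReject(async (commit) => {
  await snapshots.capture(commit.accountId)  // records state at rejection time
})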

Topology & Flow Visualization

The /automation page in the Web UI renders a live graph of the event system:

  • Nodes — listeners, producers, event types
  • Edges — subscribes pulls events in (blue), emits pushes events out (green)
  • Wildcard aura — listeners declaring subscribes: '*' or emits: '*' get a breathing halo instead of N individual edges (which would explode the graph)
  • Metadata tooltips — hover any event node to see its description

Backed by GET /api/topology. Useful for understanding what's hooked up to what in a running instance.
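
For example, from a script (only the endpoint path is documented; host, port, and response handling are assumptions):

const res = await fetch('http://localhost:3000/api/topology')  // host and port are assumptions
const topology = await res.json()
console.log(JSON.stringify(topology, null, 2))                 // listeners, producers, event types, edges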

Startup Sequence

On boot, the async systems start in dependency order:

  1. EventLog — Created first. Everything depends on it.
  2. ListenerRegistry — Created around the EventLog.
  3. CronEngine — Loads persisted jobs from data/cron/jobs.json. Arms timers.
  4. Routers — cron-router, heartbeat, task-router, metrics register with the registry.
  5. Schedulers — SnapshotScheduler registers the __snapshot__ cron job and its handler.
  6. Plugins — Web, Telegram, MCP start. WebPlugin declares the webhook-ingest producer.
  7. Registry.start() — Activates all listeners simultaneously.

By the time plugins are up, the event bus is running and all subscribers are listening. The first cron fire after boot triggers the whole chain.
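
The same sequence, sketched in code; the class names appear above, but every constructor and method signature here is an assumption, and only the ordering is the documented part:

const log = new EventLog('data/event-log/events.jsonl')             // 1. everything depends on it
const registry = new ListenerRegistry(log)                          // 2. wraps the bus
const cron = new CronEngine(log)                                    // 3. loads jobs, arms timers
registry.register(cronRouter, heartbeat, taskRouter, eventMetrics)  // 4. routers
snapshotScheduler.register(cron)                                    // 5. __snapshot__ job
await startPlugins(registry)                                        // 6. Web, Telegram, MCP
await registry.start()                                              // 7. everything goes live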

Error Resilience

  • Listener errors — Caught and logged, don't affect other listeners.
  • Cron jobs — Failed jobs get exponential backoff: 30s → 1m → 5m → 15m → 1h. Reset on success.
  • Snapshots — Failed accounts get one retry. Failures are logged but don't crash the system.
  • Heartbeat — Errors logged as heartbeat.error. Next scheduled fire tries again fresh.
  • EventLog — Dual-write to disk + memory. If the process crashes, the disk log survives and the memory buffer is rebuilt on restart.