
Event-Driven Architecture

Every action in lim starts with an event. Bank transactions, invoices, manual entries, MCP tool calls — all are events that flow through a unified pipeline.

Event Flow

┌──────────────┐     ┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│  INGESTION   │ ──→ │ NORMALIZATION│ ──→ │  JUDGMENT    │ ──→ │   LEDGER     │
│              │     │              │     │              │     │              │
│ Bank API     │     │ Common event │     │ Rule match   │     │ Journal entry│
│ Webhook      │     │ format       │     │ History      │     │ Debit=Credit │
│ MCP tool     │     │              │     │ AI inference │     │ Audit log    │
│ CLI input    │     │              │     │ Escalate     │     │              │
└──────────────┘     └──────────────┘     └──────────────┘     └──────────────┘

                                                               ┌──────────────┐
                                                               │   QUERY      │
                                                               │              │
                                                               │ Trial balance│
                                                               │ P&L, BS, CF  │
                                                               │ Alerts       │
                                                               └──────────────┘

Ingestion Layer

Events enter lim through multiple channels:
Channel           How it works                                 Example
----------------  -------------------------------------------  -------------------------------
Bank API sync     Periodic polling of bank transaction APIs    New checking account debit
Webhooks          Push notifications from external services    Stripe payment received
MCP tool calls    AI agents calling lim tools                  Claude creating a journal entry
CLI manual entry  lim add with natural language                lim add "AWS 11000 yen"
File upload       Receipt/invoice image or CSV                 lim add --file receipt.jpg
Import            Bulk import from other accounting software   lim import freee export.csv

Each ingestion adapter normalizes the raw data into a common event format.

Event Normalization

All events are normalized to a common structure before processing:
{
  "id": "evt_01926f3a...",
  "companyId": "comp_01926f3a...",
  "type": "bank_transaction",
  "sourceType": "bank_sync",
  "sourceId": "tx_abc123",
  "data": {
    "amount": 11000,
    "direction": "outflow",
    "counterparty": "AWS",
    "description": "Amazon Web Services monthly bill",
    "transactionDate": "2026-03-16"
  },
  "occurredAt": "2026-03-16T09:00:00Z",
  "processedAt": null
}
Normalization ensures that regardless of how an event enters the system, the judgment engine receives a consistent input.
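To make the adapter step concrete, here is a minimal sketch of how a bank-sync adapter might map a raw provider record onto the common event structure above. The input field names (`tx_id`, `amount`, `merchant`, `memo`, `date`) are hypothetical; only the output shape follows the documented format.

```python
import uuid
from datetime import datetime, timezone

def normalize_bank_transaction(raw: dict, company_id: str) -> dict:
    """Illustrative adapter: map a raw bank-sync record onto the
    common event structure. Input field names are assumptions;
    the output keys mirror the documented event format."""
    return {
        "id": f"evt_{uuid.uuid4().hex}",
        "companyId": company_id,
        "type": "bank_transaction",
        "sourceType": "bank_sync",
        "sourceId": raw["tx_id"],
        "data": {
            # Signed provider amounts become an absolute value plus a direction
            "amount": abs(raw["amount"]),
            "direction": "outflow" if raw["amount"] < 0 else "inflow",
            "counterparty": raw.get("merchant", ""),
            "description": raw.get("memo", ""),
            "transactionDate": raw["date"],
        },
        "occurredAt": datetime.now(timezone.utc).isoformat(),
        "processedAt": None,  # filled in later by the judgment engine
    }

event = normalize_bank_transaction(
    {"tx_id": "tx_abc123", "amount": -11000, "merchant": "AWS",
     "memo": "Amazon Web Services monthly bill", "date": "2026-03-16"},
    company_id="comp_01926f3a",
)
```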

The Event Table

Every event is persisted in the event table before processing. This serves three purposes:
  1. Audit trail. Every business event is recorded permanently, even if it’s later reversed or modified.
  2. Replay capability. Events can be replayed to reconstruct the ledger state at any point in time.
  3. Debugging. When a journal entry looks wrong, you can trace it back to the original event.
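Replay is essentially a fold over the event stream up to a cutoff timestamp. The sketch below is deliberately simplified (a real replay would re-run the judgment engine to regenerate balanced journal entries); it only shows the shape of the idea.

```python
def replay(events, as_of):
    """Rebuild running balances by folding events up to a cutoff.
    Simplified sketch: tracks one balance per counterparty instead
    of regenerating full double-entry journal entries."""
    balances = {}
    # ISO-8601 UTC timestamps sort correctly as strings
    for ev in sorted(events, key=lambda e: e["occurredAt"]):
        if ev["occurredAt"] > as_of:
            break
        sign = -1 if ev["data"]["direction"] == "outflow" else 1
        cp = ev["data"]["counterparty"]
        balances[cp] = balances.get(cp, 0) + sign * ev["data"]["amount"]
    return balances

events = [
    {"occurredAt": "2026-03-01T00:00:00Z",
     "data": {"direction": "inflow", "amount": 50000, "counterparty": "Stripe"}},
    {"occurredAt": "2026-03-16T09:00:00Z",
     "data": {"direction": "outflow", "amount": 11000, "counterparty": "AWS"}},
]

replay(events, "2026-03-10T00:00:00Z")  # → {"Stripe": 50000}
```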

Event Lifecycle

created → processing → processed
                    → failed (retryable)
                    → escalated (needs human)
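The lifecycle above can be expressed as an explicit transition table, which is how an event processor would reject illegal state changes. Whether `failed` and `escalated` re-enter `processing` after a retry or human review is an assumption here, not stated by the lifecycle diagram.

```python
# Allowed state transitions, following the lifecycle diagram.
TRANSITIONS = {
    "created": {"processing"},
    "processing": {"processed", "failed", "escalated"},
    "failed": {"processing"},      # assumption: retry re-enters processing
    "escalated": {"processing"},   # assumption: resumes after human review
}

def advance(state: str, target: str) -> str:
    """Move an event to a new state, rejecting illegal transitions."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```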

Event Graph

Events can be linked to form a directed acyclic graph (DAG). This captures causal relationships:
Invoice issued (evt_001)
    └──→ Payment received (evt_002)
             └──→ Bank reconciliation (evt_003)
The event_graph table stores these edges. This enables:
  • Traceability. “Why does this journal entry exist?” Follow the event chain back to the originating business event.
  • Impact analysis. “If I reverse this invoice, what else is affected?”
  • Compliance. Auditors can see the complete chain from source document to financial statement.
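Answering "why does this journal entry exist?" is a walk up the event graph. A minimal sketch, assuming each event has at most one causal parent (the general DAG case would collect all ancestors):

```python
def trace_back(event_id, edges):
    """Follow event_graph edges from an event back to the originating
    business event. `edges` is a list of (parent, child) pairs; this
    sketch assumes a single parent per event."""
    parents = {child: parent for parent, child in edges}
    chain = [event_id]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    return chain

edges = [("evt_001", "evt_002"), ("evt_002", "evt_003")]
trace_back("evt_003", edges)  # → ["evt_003", "evt_002", "evt_001"]
```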

PG LISTEN/NOTIFY

lim uses PostgreSQL’s built-in LISTEN/NOTIFY mechanism for real-time event processing:
-- Publisher (on event insert)
NOTIFY lim_events, '{"eventId": "evt_01926f3a...", "companyId": "comp_..."}';

-- Subscriber (event processor)
LISTEN lim_events;
This provides:
  • Low latency. Events are processed within milliseconds of insertion.
  • No external dependencies. No Kafka, no Redis, no RabbitMQ. Just PostgreSQL.
  • Transactional consistency. NOTIFY is sent only when the transaction commits.

Polling Fallback

LISTEN/NOTIFY is not durable — if the subscriber is down when a notification fires, it’s lost. lim uses a polling fallback:
-- Every 5 seconds, check for unprocessed events
SELECT * FROM event
WHERE processed_at IS NULL
  AND created_at < NOW() - INTERVAL '5 seconds'
ORDER BY created_at
LIMIT 100;
This ensures no event is ever lost, even during deployments or crashes.
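The semantics of that polling query (unprocessed, older than the grace period, oldest first, batched) can be mirrored in memory, which is useful for testing a processor without a database. Everything below is an illustrative analogue, not lim's actual implementation:

```python
from datetime import datetime, timedelta, timezone

def poll_unprocessed(events, now, grace=timedelta(seconds=5), limit=100):
    """In-memory analogue of the polling fallback query:
    unprocessed events older than the grace period, oldest first."""
    due = [e for e in events
           if e["processed_at"] is None
           and e["created_at"] < now - grace]
    return sorted(due, key=lambda e: e["created_at"])[:limit]

now = datetime(2026, 3, 16, 9, 0, 10, tzinfo=timezone.utc)
events = [
    {"id": "evt_a", "created_at": now - timedelta(seconds=30), "processed_at": None},
    {"id": "evt_b", "created_at": now - timedelta(seconds=2), "processed_at": None},   # too fresh
    {"id": "evt_c", "created_at": now - timedelta(seconds=60), "processed_at": now},   # already done
]

poll_unprocessed(events, now)  # → only evt_a
```

The 5-second grace period gives the LISTEN/NOTIFY path first shot at every event; the poller only picks up what that path missed.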

Event Processing Guarantees

Guarantee                How
-----------------------  -----------------------------------------------------------------
At-least-once delivery   Polling fallback catches missed notifications
Idempotent processing    source_type + source_id uniqueness constraint prevents duplicates
Ordered within company   pg_advisory_xact_lock(company_id) ensures sequential processing
Audit logged             Every event persisted before processing begins

The combination of LISTEN/NOTIFY for speed and polling for reliability gives lim the responsiveness of a message queue with the simplicity of a single PostgreSQL database.
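At-least-once delivery only works because ingestion is idempotent: a re-delivered event must be a no-op. The uniqueness constraint on (source_type, source_id) can be mimicked with a set, as in this sketch (`ingest` and the in-memory `store` are illustrative, standing in for the database constraint):

```python
def ingest(store, event):
    """Idempotent ingestion sketch: mimic the (source_type, source_id)
    uniqueness constraint with a set, so re-delivered events are
    dropped and at-least-once delivery never double-books."""
    key = (event["sourceType"], event["sourceId"])
    if key in store:
        return False  # duplicate delivery, ignored
    store.add(key)
    return True

store = set()
ev = {"sourceType": "bank_sync", "sourceId": "tx_abc123"}
ingest(store, ev)  # → True (first delivery)
ingest(store, ev)  # → False (duplicate caught)
```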

Subscribing to Events

External systems can subscribe to lim events via webhooks or the MCP server:
# Webhook subscription (coming soon)
lim webhooks create --url https://your-app.com/hook --events journal_entry.created

# MCP resource subscription
# AI agents can poll the journal-entries resource for changes
Events that trigger notifications:
Event                       Description
--------------------------  ------------------------------------------
journal_entry.created       New journal entry posted
journal_entry.reversed      Journal entry reversed
alert.triggered             Alert rule condition met
scenario.applied            Scenario entries promoted to real entries
bank_transaction.unmatched  New bank transaction needs classification