# Event-Driven Architecture
Every action in lim starts with an event. Bank transactions, invoices, manual entries, MCP tool calls — all are events that flow through a unified pipeline.
## Event Flow
```
┌──────────────┐     ┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│  INGESTION   │ ──→ │NORMALIZATION │ ──→ │   JUDGMENT   │ ──→ │    LEDGER    │
│              │     │              │     │              │     │              │
│ Bank API     │     │ Common event │     │ Rule match   │     │ Journal entry│
│ Webhook      │     │ format       │     │ History      │     │ Debit=Credit │
│ MCP tool     │     │              │     │ AI inference │     │ Audit log    │
│ CLI input    │     │              │     │ Escalate     │     │              │
└──────────────┘     └──────────────┘     └──────────────┘     └──────────────┘
                                                                      │
                                                                      ▼
                                                               ┌──────────────┐
                                                               │    QUERY     │
                                                               │              │
                                                               │ Trial balance│
                                                               │ P&L, BS, CF  │
                                                               │ Alerts       │
                                                               └──────────────┘
```
## Ingestion Layer
Events enter lim through multiple channels:
| Channel | How it works | Example |
|---|---|---|
| Bank API sync | Periodic polling of bank transaction APIs | New checking account debit |
| Webhooks | Push notifications from external services | Stripe payment received |
| MCP tool calls | AI agents calling lim tools | Claude creating a journal entry |
| CLI manual entry | `lim add` with natural language | `lim add "AWS 11000 yen"` |
| File upload | Receipt/invoice image or CSV | `lim add --file receipt.jpg` |
| Import | Bulk import from other accounting software | `lim import freee export.csv` |
Each ingestion adapter normalizes the raw data into a common event format.
## Event Normalization
All events are normalized to a common structure before processing:
```json
{
  "id": "evt_01926f3a...",
  "companyId": "comp_01926f3a...",
  "type": "bank_transaction",
  "sourceType": "bank_sync",
  "sourceId": "tx_abc123",
  "data": {
    "amount": 11000,
    "direction": "outflow",
    "counterparty": "AWS",
    "description": "Amazon Web Services monthly bill",
    "transactionDate": "2026-03-16"
  },
  "occurredAt": "2026-03-16T09:00:00Z",
  "processedAt": null
}
```
Normalization ensures that regardless of how an event enters the system, the judgment engine receives a consistent input.
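As an illustrative sketch of such an adapter (the raw bank record's field names are assumptions, not a real bank API; `id`, `companyId`, and `processedAt` are assigned when the event is persisted), a bank-sync adapter might map its source data into this shape:

```typescript
// Hypothetical bank-sync adapter mapping a raw bank record into the common
// event shape. RawBankTx is an illustrative stand-in for real bank API data.
interface RawBankTx {
  id: string;
  amount: number; // signed: negative = money out
  payee: string;
  memo: string;
  date: string;   // "YYYY-MM-DD"
}

interface NormalizedEvent {
  type: string;
  sourceType: string;
  sourceId: string;
  data: Record<string, unknown>;
  occurredAt: string; // ISO 8601
}

function normalizeBankTx(tx: RawBankTx): NormalizedEvent {
  return {
    type: "bank_transaction",
    sourceType: "bank_sync",
    sourceId: tx.id,
    data: {
      amount: Math.abs(tx.amount),
      direction: tx.amount < 0 ? "outflow" : "inflow",
      counterparty: tx.payee,
      description: tx.memo,
      transactionDate: tx.date,
    },
    occurredAt: new Date(tx.date).toISOString(),
  };
}
```

The adapter keeps source-specific quirks (signed amounts, free-text memos) at the edge, so everything downstream sees only the normalized shape.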
## The Event Table
Every event is persisted in the `event` table before processing. This serves three purposes:
- Audit trail. Every business event is recorded permanently, even if it’s later reversed or modified.
- Replay capability. Events can be replayed to reconstruct the ledger state at any point in time.
- Debugging. When a journal entry looks wrong, you can trace it back to the original event.
## Event Lifecycle
```
created → processing → processed
                     → failed (retryable)
                     → escalated (needs human)
```
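A minimal sketch of this lifecycle as a transition table (the state names come from the diagram above; the guard function and the retry/resume edges back to `processing` are assumptions about lim's internals, not its actual API):

```typescript
// Event lifecycle as an explicit state machine. Transitions not listed here
// are rejected, which keeps illegal updates (e.g. created → processed
// without processing) out of the pipeline.
type EventState = "created" | "processing" | "processed" | "failed" | "escalated";

const transitions: Record<EventState, EventState[]> = {
  created: ["processing"],
  processing: ["processed", "failed", "escalated"],
  failed: ["processing"],    // retryable: re-enters processing (assumption)
  escalated: ["processing"], // resumes after human review (assumption)
  processed: [],             // terminal
};

function canTransition(from: EventState, to: EventState): boolean {
  return transitions[from].includes(to);
}
```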
## Event Graph
Events can be linked to form a directed acyclic graph (DAG). This captures causal relationships:
```
Invoice issued (evt_001)
  └──→ Payment received (evt_002)
         └──→ Bank reconciliation (evt_003)
```
The `event_graph` table stores these edges. This enables:
- Traceability. “Why does this journal entry exist?” Follow the event chain back to the originating business event.
- Impact analysis. “If I reverse this invoice, what else is affected?”
- Compliance. Auditors can see the complete chain from source document to financial statement.
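The traceability question above amounts to walking the graph's edges backwards. A pure-logic sketch, with the edges held in memory as cause/effect pairs (the names are illustrative; in lim the edges live in the `event_graph` table):

```typescript
// Walk event_graph edges from an effect back to its originating event.
// Assumes each event has at most one direct cause, as in the chain above.
type Edge = { from: string; to: string }; // from = cause, to = effect

function traceBack(eventId: string, edges: Edge[]): string[] {
  const chain: string[] = [];
  let current: string | undefined = eventId;
  while (current) {
    chain.push(current);
    current = edges.find((e) => e.to === current)?.from;
  }
  return chain; // effect first, originating business event last
}
```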
## PG LISTEN/NOTIFY
lim uses PostgreSQL’s built-in LISTEN/NOTIFY mechanism for real-time event processing:
```sql
-- Publisher (on event insert)
NOTIFY lim_events, '{"eventId": "evt_01926f3a...", "companyId": "comp_..."}';

-- Subscriber (event processor)
LISTEN lim_events;
```
This provides:
- Low latency. Events are processed within milliseconds of insertion.
- No external dependencies. No Kafka, no Redis, no RabbitMQ. Just PostgreSQL.
- Transactional consistency. NOTIFY is sent only when the transaction commits.
### Polling Fallback
LISTEN/NOTIFY is not durable — if the subscriber is down when a notification fires, it’s lost. lim uses a polling fallback:
```sql
-- Every 5 seconds, check for unprocessed events
SELECT * FROM event
WHERE processed_at IS NULL
  AND created_at < NOW() - INTERVAL '5 seconds'
ORDER BY created_at
LIMIT 100;
```
This ensures no event is ever lost, even during deployments or crashes.
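The eligibility check behind that query can be expressed as a small predicate (a pure-logic sketch; the type and function names are illustrative, not lim's API): an event is picked up only if it is unprocessed and older than the 5-second grace window that gives LISTEN/NOTIFY a chance to handle it first.

```typescript
// Pure-logic version of the polling fallback's WHERE clause.
interface StoredEvent {
  id: string;
  createdAt: Date;
  processedAt: Date | null;
}

function isDueForPolling(e: StoredEvent, now: Date, graceMs = 5000): boolean {
  return e.processedAt === null && now.getTime() - e.createdAt.getTime() > graceMs;
}
```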
## Event Processing Guarantees
| Guarantee | How |
|---|---|
| At-least-once delivery | Polling fallback catches missed notifications |
| Idempotent processing | `source_type` + `source_id` uniqueness constraint prevents duplicates |
| Ordered within company | `pg_advisory_xact_lock(company_id)` ensures sequential processing |
| Audit logged | Every event persisted before processing begins |
The combination of LISTEN/NOTIFY for speed and polling for reliability gives lim the responsiveness of a message queue with the simplicity of a single PostgreSQL database.
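At-least-once delivery and idempotency work as a pair: re-delivery is harmless because the uniqueness key makes the second insert a no-op. A sketch of that interaction, with a `Map` standing in for the database constraint (in lim this is enforced by the `source_type` + `source_id` constraint, not application code):

```typescript
// Idempotent ingestion under at-least-once delivery: the first insert for a
// (sourceType, sourceId) pair wins; duplicates return the original event id.
const seen = new Map<string, string>(); // "sourceType:sourceId" -> eventId

function ingestOnce(sourceType: string, sourceId: string, newId: string): string {
  const key = `${sourceType}:${sourceId}`;
  const existing = seen.get(key);
  if (existing) return existing; // duplicate delivery: no new event created
  seen.set(key, newId);
  return newId;
}
```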
## Subscribing to Events
External systems can subscribe to lim events via webhooks or the MCP server:
```bash
# Webhook subscription (coming soon)
lim webhooks create --url https://your-app.com/hook --events journal_entry.created

# MCP resource subscription
# AI agents can poll the journal-entries resource for changes
```
Events that trigger notifications:
| Event | Description |
|---|---|
| `journal_entry.created` | New journal entry posted |
| `journal_entry.reversed` | Journal entry reversed |
| `alert.triggered` | Alert rule condition met |
| `scenario.applied` | Scenario entries promoted to real entries |
| `bank_transaction.unmatched` | New bank transaction needs classification |