Accounting = fn(events)

lim is built on a simple idea: accounting is a function. Business events go in, journal entries come out. No batches, no month-end scrambles, no manual data entry.
journal_entries = f(business_events)
This is not a metaphor. It is the literal architecture of lim.

The Problem with Traditional Accounting

Traditional accounting software treats the ledger as a database you write to manually. A human looks at a bank statement, decides which account to post to, types the numbers, and clicks save. This process has several problems:
  1. It’s slow. Entries are recorded days or weeks after the event.
  2. It’s error-prone. Humans transpose digits, pick wrong accounts, forget entries.
  3. It’s batch-oriented. Financial data is only “current” after monthly close.
  4. It doesn’t scale. More transactions = more human hours.
  5. It can’t be automated. AI agents can’t operate legacy accounting software.

The Functional Model

lim inverts this. Every business event — a bank transaction, an invoice, a receipt — is an input to a function. The function’s output is a journal entry.
Event: { type: "bank_debit", counterparty: "AWS", amount: 11000, date: "2026-03-16" }

f(event)

Journal Entry:
  Debit  Communication     10,000
  Debit  Input VAT          1,000
  Credit Accounts payable  11,000
The function f is the judgment engine. It is deterministic when possible (rule match), probabilistic when necessary (AI inference), and always auditable.
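The rule-match path of f can be sketched in a few lines of Python. This is an illustration only: the `RULES` table, the `(side, account, amount)` line format, and the VAT-splitting logic are assumptions for the example above, not lim's actual data model.

```python
# Hypothetical sketch of the judgment function f. Rules are tried first;
# unmatched events would fall through to AI inference (not shown here).
RULES = {
    # (event_type, counterparty) -> (expense_account, vat_rate)
    ("bank_debit", "AWS"): ("Communication", 0.10),
}

def f(event: dict) -> list[tuple[str, str, int]]:
    """Map a business event to journal-entry lines: (side, account, amount)."""
    account, vat_rate = RULES[(event["type"], event["counterparty"])]
    total = event["amount"]
    net = round(total / (1 + vat_rate))  # split gross amount into net + VAT
    return [
        ("debit", account, net),
        ("debit", "Input VAT", total - net),
        ("credit", "Accounts payable", total),
    ]

entry = f({"type": "bank_debit", "counterparty": "AWS",
           "amount": 11000, "date": "2026-03-16"})
# entry == [("debit", "Communication", 10000),
#           ("debit", "Input VAT", 1000),
#           ("credit", "Accounts payable", 11000)]
```

Because the rule lookup is a plain dictionary match, the same event always produces the same entry, which is what makes the deterministic path cheap and auditable.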

Properties of the Function

  • Deterministic where possible. 95% of transactions hit a learned rule — same input always produces same output.
  • Idempotent. Processing the same event twice does not create duplicate entries.
  • Auditable. Every judgment decision is logged: which step resolved it, what confidence level, who confirmed it.
  • Learning. The function improves over time. Human corrections become rules.
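Idempotency in particular can be made concrete with a small sketch. The design shown here, using the event id as an idempotency key, is an assumption for illustration, not necessarily how lim implements it internally.

```python
# Sketch of idempotent posting: the event id acts as an idempotency key,
# so re-processing the same event is a no-op rather than a duplicate entry.
posted: dict[str, dict] = {}  # event_id -> journal entry

def post(event_id: str, entry: dict) -> bool:
    """Post an entry once; return False if the event was already processed."""
    if event_id in posted:
        return False
    posted[event_id] = entry
    return True

first = post("evt-001", {"debit": "Communication", "amount": 10000})
second = post("evt-001", {"debit": "Communication", "amount": 10000})
# first is True, second is False, and len(posted) == 1
```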

Event-Driven, Not Batch-Driven

In traditional accounting:
Day 1-30: Business happens
Day 31:   Accountant processes everything in a batch
Day 35:   Financial statements are "current" (already 5 days stale)
In lim:
Event occurs → Judgment engine fires → Journal entry created → Reports update
Latency: < 1 second
Financial reports (trial balance, P&L, balance sheet, cash flow) are always live. There is no “close” step required to see current numbers — though lim does support period close for formal reporting.
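The "always live" property follows from incremental aggregation: each posted entry updates running balances immediately, so there is nothing to recompute at month-end. A minimal sketch (the account names and line format are carried over from the example above, not lim's schema):

```python
# Event-driven reporting sketch: every posted entry immediately updates
# running account balances, so the trial balance is always current.
from collections import defaultdict

balances: dict[str, int] = defaultdict(int)

def post_entry(lines: list[tuple[str, str, int]]) -> None:
    for side, account, amount in lines:
        balances[account] += amount if side == "debit" else -amount

post_entry([("debit", "Communication", 10000),
            ("debit", "Input VAT", 1000),
            ("credit", "Accounts payable", 11000)])

# The trial balance is live right after the event:
# balances["Communication"] == 10000, and debits equal credits overall.
```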

The Learning Flywheel

The functional model enables a powerful feedback loop:
┌──────────────────────────────────────────────┐
│  1. Event arrives                            │
│  2. Judgment engine classifies it            │
│     - Rule match (95% of events) → auto-post │
│     - AI inference (5%) → suggest to human   │
│  3. Human confirms or corrects               │
│  4. Correction becomes a new rule            │
│  5. Next similar event → rule match          │
└──────────────────────────────────────────────┘
Over time:
  • Rule match rate increases (approaches 99%)
  • AI inference rate decreases (approaches 0%)
  • AI API costs approach zero
  • Human involvement approaches zero
After 6 months of typical usage, a company with 200-500 monthly transactions should see fewer than 5 transactions per month requiring human input.
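The correction-to-rule step is the heart of the flywheel, and it can be sketched directly. The rule keyed on counterparty and the `classify`/`correct` helpers are hypothetical simplifications; the point is only that a human correction is stored, so the next similar event skips AI inference entirely.

```python
# Sketch of the correction-to-rule loop: rules are tried first, AI
# inference is the fallback, and human corrections become new rules.
rules: dict[str, str] = {"AWS": "Communication"}

def classify(counterparty: str, ai_guess: str) -> tuple[str, str]:
    """Return (source, account): deterministic rule first, AI fallback."""
    if counterparty in rules:
        return ("rule", rules[counterparty])
    return ("ai", ai_guess)

def correct(counterparty: str, account: str) -> None:
    """A human correction becomes a new rule."""
    rules[counterparty] = account

source, account = classify("Stripe", ai_guess="Sales")   # ("ai", "Sales")
correct("Stripe", "Payment fees")        # human fixes the suggestion
rerun = classify("Stripe", ai_guess="Sales")
# rerun == ("rule", "Payment fees") — no AI call needed this time
```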

Why This Matters for AI Agents

The functional model makes lim a natural fit for AI agents:
  • MCP tools map directly to functions. create_journal_entry, match_bank_transaction, generate_scenario — each is a pure function call.
  • Resources are read-only views. Trial balance, accounts, journal entries — agents can read the current state without side effects.
  • No UI required. There is no “click this button, fill this form” workflow. Everything is an API call.
  • Agents improve the system. When an AI agent processes a transaction and a human confirms it, that becomes a rule. The agent is training the system.
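The "tools are pure function calls" point can be sketched abstractly. The tool name below comes from the list above, but the dispatch mechanism and payload shape are assumptions for illustration, not lim's MCP or REST API.

```python
# Sketch: each tool maps to a plain function an agent can invoke by name.
# There is no UI state; input goes in, a result comes back.
TOOLS = {
    "create_journal_entry": lambda args: {"status": "posted", **args},
}

def call_tool(name: str, args: dict) -> dict:
    """An agent invokes a tool by name, like any other function call."""
    return TOOLS[name](args)

result = call_tool("create_journal_entry",
                   {"debit": "Communication",
                    "credit": "Accounts payable",
                    "amount": 11000})
# result["status"] == "posted"
```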

Comparison

                   Traditional Accounting    lim
Input method       Manual data entry         Events (automatic)
Processing         Batch (monthly)           Real-time (< 1 second)
Classification     Human judgment            Rule → History → AI → Escalate
Learning           None                      Corrections become rules
Reports            Stale until close         Always live
AI agent support   None                      Native (MCP + REST)
Scaling            More humans               More rules (zero marginal cost)