Indexing Pipeline
Three ingestion pathways together provide comprehensive transaction coverage, each with a different speed/completeness tradeoff.
Pipeline Comparison

| Pipeline | Trigger | Latency | Coverage |
| --- | --- | --- | --- |
| Polling Indexer | Every 5 minutes | ~5 min | All exclusive contracts |
| Alchemy Webhooks | Real-time | ~2 sec | Monitored addresses |
| Site-Triggered | On user tx | Instant | Shared contracts (Uniswap) |
Polling Indexer
Source: server/services/indexer.ts
Runs every 5 minutes, scanning `eth_getLogs` for all exclusive contract addresses in 5,000-block chunks.
Process:
1. Query `eth_getLogs` for all exclusive Oeconomia contract addresses
2. Extract unique transaction hashes from matching logs
3. Fetch the full transaction and receipt from Alchemy
4. Decode via `ProtocolDecoder`
5. Store in PostgreSQL (skip duplicates via the unique `txHash` constraint)
6. Extract token transfers from `Transfer` event logs
7. Update the token balance cache for affected addresses
8. Broadcast via WebSocket to subscribed clients
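The steps above can be sketched as a single polling cycle. This is a minimal, testable outline with dependencies injected; names like `fetchLogs`, `fetchAndDecode`, and `store` are illustrative, not the actual `indexer.ts` API.

```typescript
interface Log { transactionHash: string }
interface DecodedTx { txHash: string; protocol: string }

interface Deps {
  fetchLogs(fromBlock: number, toBlock: number): Promise<Log[]>;
  fetchAndDecode(txHash: string): Promise<DecodedTx>;
  store(tx: DecodedTx): Promise<boolean>; // false if txHash already stored
  broadcast(tx: DecodedTx): void;
}

const CHUNK_SIZE = 5000; // blocks per eth_getLogs query

async function runCycle(deps: Deps, fromBlock: number, toBlock: number): Promise<number> {
  let storedCount = 0;
  for (let start = fromBlock; start <= toBlock; start += CHUNK_SIZE) {
    const end = Math.min(start + CHUNK_SIZE - 1, toBlock);
    const logs = await deps.fetchLogs(start, end);
    // One transaction can emit many matching logs, so deduplicate hashes first
    const hashes = [...new Set(logs.map((l) => l.transactionHash))];
    for (const hash of hashes) {
      const decoded = await deps.fetchAndDecode(hash);
      const isNew = await deps.store(decoded); // unique txHash constraint
      if (isNew) {
        deps.broadcast(decoded); // only newly stored transactions are broadcast
        storedCount++;
      }
    }
  }
  return storedCount;
}
```

Injecting the Alchemy and database calls keeps the cycle logic independently testable.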
Configuration:
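The original configuration block is not shown here; the constants below are an illustrative sketch inferred from the interval and chunk size described above, and the names may not match `server/services/indexer.ts`.

```typescript
// Illustrative values only; actual constant names in indexer.ts may differ.
const POLL_INTERVAL_MS = 5 * 60 * 1000; // run every 5 minutes
const BLOCK_CHUNK_SIZE = 5000;          // eth_getLogs block range per query
```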
Alchemy Webhooks
Source: server/routes/webhooks.ts
Receives real-time POST /api/webhooks/alchemy payloads from Alchemy Notify when monitored addresses are involved in transactions.
Payload:
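The payload shape below is a sketch of an Alchemy Notify address-activity event; the field names follow Alchemy's `ADDRESS_ACTIVITY` webhook format, but the example values are made up.

```typescript
// Sketch of the ADDRESS_ACTIVITY payload delivered to POST /api/webhooks/alchemy.
interface AddressActivityPayload {
  webhookId: string;
  id: string;
  createdAt: string;
  type: "ADDRESS_ACTIVITY";
  event: {
    network: string;
    activity: Array<{
      fromAddress: string;
      toAddress: string;
      hash: string;     // transaction hash
      blockNum: string; // hex-encoded block number
      category: string; // e.g. "external", "token"
      value?: number;
      asset?: string;
    }>;
  };
}

const examplePayload: AddressActivityPayload = {
  webhookId: "wh_example",
  id: "whevt_example",
  createdAt: "2024-01-01T00:00:00.000Z",
  type: "ADDRESS_ACTIVITY",
  event: {
    network: "ETH_MAINNET",
    activity: [
      {
        fromAddress: "0x1111111111111111111111111111111111111111",
        toAddress: "0x2222222222222222222222222222222222222222",
        hash: "0x3333333333333333333333333333333333333333333333333333333333333333",
        blockNum: "0x112a880",
        category: "external",
        value: 1.5,
        asset: "ETH",
      },
    ],
  },
};
```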
Each activity item goes through the same fetch → decode → store → broadcast pipeline.
Site-Triggered Tracking
Endpoint: POST /api/track-tx
Called by Oeconomia frontend apps when a user submits a transaction. This catches transactions on shared/public contracts (like Uniswap V3) that wouldn't be auto-indexed since they're not exclusive to Oeconomia.
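A frontend call might look like the sketch below. The request body shape (`{ txHash }`) is an assumption for illustration, not the documented schema of `POST /api/track-tx`.

```typescript
// Hypothetical helper; the real track-tx body may carry additional fields.
function buildTrackTxBody(txHash: string): string {
  return JSON.stringify({ txHash });
}

async function trackTransaction(txHash: string): Promise<void> {
  await fetch("/api/track-tx", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildTrackTxBody(txHash),
  });
}
```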
WebSocket Broadcasting
After a transaction is stored, it's broadcast to WebSocket clients:
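A minimal sketch of that fan-out is shown below. `Client` stands in for a `ws` WebSocket (only `readyState` and `send` are used), and the message shape (`{ type, data }`) is an assumption, not the server's actual wire format.

```typescript
interface Client { readyState: number; send(data: string): void }
const OPEN = 1; // matches WebSocket.OPEN

function broadcastTransaction(clients: Set<Client>, tx: { txHash: string }): number {
  const message = JSON.stringify({ type: "new_transaction", data: tx });
  let delivered = 0;
  for (const client of clients) {
    if (client.readyState === OPEN) { // skip closing/closed sockets
      client.send(message);
      delivered++;
    }
  }
  return delivered;
}
```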
Error Handling
- The indexer continues processing if individual transactions fail to decode
- Duplicate transactions are silently skipped via the unique `txHash` constraint
- Rate-limit errors from Alchemy are caught and retried on the next polling cycle
- Block range tracking ensures no gaps between indexer runs
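The gap-free guarantee follows from how the next range is derived: each cycle resumes at the last indexed block plus one, so a failed cycle simply retries the same range. The sketch below illustrates the idea; in practice `lastIndexedBlock` would be persisted (e.g. in PostgreSQL), and the function name is hypothetical.

```typescript
// Returns the next block range to scan, or null when caught up to the chain tip.
function nextRange(
  lastIndexedBlock: number,
  latestBlock: number,
  maxSpan = 5000, // cap each cycle at the eth_getLogs chunk size
): { fromBlock: number; toBlock: number } | null {
  const fromBlock = lastIndexedBlock + 1; // resume exactly where we left off
  if (fromBlock > latestBlock) return null;
  const toBlock = Math.min(fromBlock + maxSpan - 1, latestBlock);
  return { fromBlock, toBlock };
}
```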