
I don't think this design in the article works in practice.

A single `events` table falls apart as the system grows, and untyped JSONB in the `event_data` column just moves the mess into application code. Event payloads drift, handlers fill with branching logic, and replaying or migrating old events becomes slow and risky. The pattern promises clarity but eventually turns into a pile of conditionals trying to decode years of inconsistent data.
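A toy sketch of what that drift looks like in practice. The event shapes and field names here are invented for illustration: three generations of the same hypothetical "order_placed" event, each stored with a different untyped payload, and the branching a single handler accumulates to cope with all of them.

```python
# Hypothetical example: the same logical event written three different ways
# over the years, all living in one untyped `event_data` column.
def order_total_cents(event: dict) -> int:
    data = event["event_data"]
    if "total" in data:                 # v1 stored a float dollar amount
        return round(data["total"] * 100)
    if "total_cents" in data:           # v2 renamed the field, switched to cents
        return data["total_cents"]
    if "payment" in data:               # v3 nested the amount under "payment"
        return data["payment"]["amount_cents"]
    raise ValueError(f"unrecognized payload shape: {sorted(data)}")

events = [
    {"event_type": "order_placed", "event_data": {"total": 19.99}},
    {"event_type": "order_placed", "event_data": {"total_cents": 1999}},
    {"event_type": "order_placed", "event_data": {"payment": {"amount_cents": 1999}}},
]
print([order_total_cents(e) for e in events])  # [1999, 1999, 1999]
```

Every new payload variant adds another branch, and every replay has to run all of them against the full history.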

A simpler and more resilient approach is to use the database features already built for this. Stored procedures can record both business data and audit records in a controlled way. CDC provides a clean stream for the tables that actually need downstream consumers. And even carefully designed triggers give you consistent invariants and auditability without maintaining a separate projection system that can lag or break.
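A minimal sketch of the trigger-based alternative, using SQLite for portability (the same idea translates to Postgres trigger functions). Table and column names are made up for illustration: every update to the business table produces an audit row, enforced by the database itself rather than by application code.

```python
import sqlite3

# Assumed schema for illustration: an accounts table plus an audit table
# populated by a trigger on every balance change.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL);
CREATE TABLE accounts_audit (
    account_id  INTEGER NOT NULL,
    old_balance INTEGER NOT NULL,
    new_balance INTEGER NOT NULL,
    changed_at  TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TRIGGER accounts_balance_audit AFTER UPDATE OF balance ON accounts
BEGIN
    INSERT INTO accounts_audit (account_id, old_balance, new_balance)
    VALUES (NEW.id, OLD.balance, NEW.balance);
END;
""")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = 250 WHERE id = 1")
rows = conn.execute(
    "SELECT account_id, old_balance, new_balance FROM accounts_audit"
).fetchall()
print(rows)  # [(1, 100, 250)]
```

The audit trail can never disagree with the business data, because the same transaction writes both; there is no projection to rebuild or replay.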

Event sourcing works when the domain truly centers on events, but for most systems these database-driven tools stay cleaner, cheaper, and far more predictable over time.



The only place this kind of append-only event log consistently works well is clickstream-style workloads.

You rarely replay them to reconstruct business state; you just pump them into analytics or enrichment pipelines.
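The forward-only nature of that workload can be sketched in a few lines. Everything here is a toy stand-in: the list represents an append-only store (a log table or a Kafka topic), and the consumer only ever reads forward into an analytics aggregate, never reconstructing business state.

```python
from collections import Counter

log = []  # stand-in for an append-only clickstream store

def track(user: str, page: str) -> None:
    # Events are appended once and never updated or deleted.
    log.append({"user": user, "page": page})

track("u1", "/home")
track("u2", "/home")
track("u1", "/pricing")

# Downstream consumers fold the stream into aggregates; no event is
# ever replayed to rebuild an entity's current state.
page_views = Counter(e["page"] for e in log)
print(page_views["/home"])  # 2
```

Because no handler ever needs to reinterpret old events as business state, payload drift is an analytics annoyance rather than a correctness risk.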



