
Your database knows what happened. You just can’t ask it.
Every MySQL database with binary logging enabled already records every data change internally. But that history is locked inside binary files designed for replication, not for humans.
Try asking “what happened to customer 16791?” and you will hit a wall of raw binary events, undocumented offsets, and tools built for a different era.
The data exists. The tooling to make it useful does not. That is the gap that data change intelligence fills.
The gap nobody talks about

The database ecosystem has two well-established tool categories for dealing with data changes, and neither one answers the question above.
CDC tools move data. Debezium, Maxwell, and dozens of similar tools read the MySQL binlog and stream change events to Kafka, Snowflake, data lakes, or other downstream systems. They are excellent at what they do. But they are pipes. They move data from A to B. They do not answer questions. You cannot ask Debezium “what happened to row 16791” and get back an answer. You would need to set up Kafka, configure a consumer, write a query against whatever downstream store you chose, and hope the retention window still covers the event you are looking for.
Recovery tools restore databases. When something goes wrong, the traditional approach is point-in-time recovery: find a full backup, restore it to a temporary MySQL instance, replay binlogs to the exact second before the incident, extract the rows you need, and insert them back into production. This works. It also takes 3 to 5 hours, requires a spare server, and demands deep MySQL expertise. For one deleted row.
There is a third category that should exist between these two: tools that make your data change history queryable and actionable, without moving it to a warehouse and without restoring a full backup. Tools that let you ask “what changed?” and get an answer in seconds, then generate the exact SQL to undo it.
That category is what we are calling data change intelligence.
What data change intelligence means
Data change intelligence has three properties:
It captures every change with full context. Not just “row was updated” but the complete before-and-after state of every column. When someone changes a customer’s plan from “pro” to “free”, you see both values, the exact timestamp, and which table and row were affected.
It makes changes queryable. You can filter by table, schema, event type, time range, primary key, or even specific columns that were modified. You can ask “show me all DELETEs on the orders table in the last hour” or “when was the email column last changed for user 4821” and get structured results.
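As a sketch of what that queryability looks like in practice, here is a toy in-memory filter. The event fields (`table`, `type`, `pk`, `before`, `after`) are hypothetical and illustrative, not dbtrail's actual index schema:

```python
from datetime import datetime

# Hypothetical change-event records; field names are illustrative only.
events = [
    {"table": "orders", "type": "DELETE", "pk": 101,
     "ts": datetime(2024, 5, 1, 9, 30), "before": {"id": 101, "total": 40}},
    {"table": "orders", "type": "UPDATE", "pk": 102,
     "ts": datetime(2024, 5, 1, 10, 15),
     "before": {"status": "paid"}, "after": {"status": "refunded"}},
    {"table": "users", "type": "UPDATE", "pk": 4821,
     "ts": datetime(2024, 5, 1, 11, 0),
     "before": {"email": "old@example.com"}, "after": {"email": "new@example.com"}},
]

def query(events, table=None, event_type=None, since=None, pk=None, column=None):
    """Filter change events the way a change-intelligence index would."""
    out = []
    for e in events:
        if table and e["table"] != table:
            continue
        if event_type and e["type"] != event_type:
            continue
        if since and e["ts"] < since:
            continue
        if pk is not None and e["pk"] != pk:
            continue
        # Column filter: keep events that modified this column.
        if column and column not in e.get("after", {}):
            continue
        out.append(e)
    return out

# “Show me all DELETEs on the orders table in the last hour” becomes:
deletes = query(events, table="orders", event_type="DELETE",
                since=datetime(2024, 5, 1, 9, 0))

# “When was the email column last changed for user 4821?”
email_changes = query(events, table="users", pk=4821, column="email")
```

The point is not the filtering code, which is trivial, but that the index stores enough context (table, type, timestamp, primary key, changed columns) for these questions to be answerable at all.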
It generates recovery SQL. When you find the change that caused the problem, you can generate the exact SQL to reverse it. A DELETE becomes an INSERT with the original row values. A bad UPDATE becomes an UPDATE that reverts to the previous values. No full restore needed. No temporary server. No guessing.
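The third property can be sketched in a few lines: given a captured before-image, the reverse statement follows mechanically. This is a simplified illustration, not dbtrail's implementation; real tooling must handle identifier quoting, NULLs, binary columns, and character sets, and the event shape here is hypothetical:

```python
def sql_literal(v):
    """Naive SQL literal rendering; real tooling needs proper escaping."""
    if v is None:
        return "NULL"
    if isinstance(v, (int, float)):
        return str(v)
    return "'" + str(v).replace("'", "''") + "'"

def recovery_sql(event):
    """Generate the SQL that reverses a captured row change."""
    table = event["table"]
    if event["type"] == "DELETE":
        # A DELETE is undone by re-inserting the original row values.
        before = event["before"]
        cols = ", ".join(before)
        vals = ", ".join(sql_literal(v) for v in before.values())
        return f"INSERT INTO {table} ({cols}) VALUES ({vals});"
    if event["type"] == "UPDATE":
        # A bad UPDATE is undone by setting columns back to the before-image.
        sets = ", ".join(f"{c} = {sql_literal(v)}"
                         for c, v in event["before"].items())
        return (f"UPDATE {table} SET {sets} "
                f"WHERE {event['pk_col']} = {sql_literal(event['pk'])};")
    raise ValueError(f"unsupported event type: {event['type']}")

deleted = {"type": "DELETE", "table": "customers",
           "before": {"id": 16791, "plan": "pro", "email": "a@example.com"}}
print(recovery_sql(deleted))
# INSERT INTO customers (id, plan, email) VALUES (16791, 'pro', 'a@example.com');
```

None of this is possible without the full before-image, which is exactly the context that audit tables and plain binlog dumps tend to lose.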
The combination of these three properties creates something that neither CDC tools nor recovery tools provide on their own: the ability to investigate and fix data problems in seconds instead of hours.
Why existing tools don’t fill this gap
You might think you can cobble this together from existing tools. Here is why each approach falls short.
“We have audit tables.” Audit tables only capture what the application writes to them. Direct SQL, migrations, admin tools, replication events, stored procedures, and anything else that bypasses the application layer are invisible. Audit triggers also add write latency to every transaction, fill tables that grow without bound and are rarely well indexed, and break silently when someone changes the schema and forgets to update the trigger. And most audit tables do not capture the full before-and-after state at the column level. They just record “row was updated.”
“We can use mysqlbinlog.” You can. And if you have done it at 3am during an incident, you know exactly how painful it is. mysqlbinlog decodes binlog files into a verbose text format meant for replay, not reading: with row-based logging, row changes appear as base64-encoded BINLOG blocks unless you pass --verbose, and even then the pseudo-SQL refers to columns by position (@1, @2) rather than by name. Filtering by table, time range, or primary key requires grep chains, awk scripts, or purpose-built parsers. Generating recovery SQL from the output requires even more tooling. It works, but it is not a workflow anyone wants to repeat.
“We have Debezium streaming to Kafka.” Great for real-time data pipelines. Not great for ad-hoc investigation. To answer “what happened to row 16791” you need to query your Kafka consumer’s downstream store, which means maintaining a separate query layer, managing retention, and hoping the data has not been compacted or expired. Debezium is infrastructure for data movement. It is not an investigation tool.
“We use enterprise audit solutions.” Tools like IBM Guardium, Imperva, or DataSunrise are designed for compliance in regulated industries. They are powerful, but they come with enterprise pricing, long procurement cycles, and deployment complexity that makes them impractical for most teams. And their focus is compliance reporting and access control, not answering “what happened to this specific row and can I undo it.”
How dbtrail fills the gap
dbtrail is a data change intelligence tool for MySQL. It reads the binary log via MySQL’s replication protocol, indexes every row-level change with full before-and-after data, and makes the entire change history queryable and recoverable.
The architecture is straightforward. A lightweight Go agent runs on an EC2 instance (or any server that can reach your MySQL). It connects as a replication client, reads binlog events in real time, and indexes them into a local MySQL database on the same machine. The SaaS layer provides the API, dashboard, and MCP endpoint that query this index.
This means queries never hit your production database. There is no replica lag, no competition with application traffic, and no performance impact. The agent reads the binlog stream (which MySQL is already writing) and indexes locally. The “lag” is the time it takes to write an index row: milliseconds.
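The agent's core loop has a simple shape: consume replication events, write one index row per row-level change. The sketch below uses a fake event stream and an in-memory SQLite database as a stand-in for the agent's local MySQL index (the real agent is written in Go); the event fields are illustrative assumptions:

```python
import json
import sqlite3

# Stand-in for the agent's local index; dbtrail actually indexes into
# a local MySQL database, and this schema is illustrative only.
index = sqlite3.connect(":memory:")
index.execute("""
    CREATE TABLE changes (
        ts TEXT, tbl TEXT, event_type TEXT, pk TEXT,
        before_json TEXT, after_json TEXT
    )
""")

def fake_binlog_stream():
    """Stand-in for the replication-protocol event stream the agent reads."""
    yield {"ts": "2024-05-01T09:30:00Z", "table": "customers",
           "type": "UPDATE", "pk": "16791",
           "before": {"plan": "pro"}, "after": {"plan": "free"}}

for ev in fake_binlog_stream():
    # Indexing is one local write per row event: this write is the "lag".
    index.execute(
        "INSERT INTO changes VALUES (?, ?, ?, ?, ?, ?)",
        (ev["ts"], ev["table"], ev["type"], ev["pk"],
         json.dumps(ev["before"]), json.dumps(ev["after"])),
    )
index.commit()

# Investigations then query the local index, never production.
row = index.execute(
    "SELECT before_json, after_json FROM changes WHERE tbl = ? AND pk = ?",
    ("customers", "16791"),
).fetchone()
```

The design choice worth noticing is that the expensive part, parsing binlog events, happens once at ingest time, so investigation queries are ordinary indexed lookups.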
You can access your change history three ways:
Through Claude or any MCP client. dbtrail exposes a remote MCP server that lets you ask questions in natural language. “What happened to customer 16791?” or “Show me all DELETEs on the orders table since yesterday.” Claude calls the appropriate dbtrail tools and returns structured results with recovery SQL.
Through the dashboard. Filter by server, schema, table, event type, and time range. Browse changes, inspect before/after row states, select the events you want to reverse, and generate recovery SQL with one click.
Through the REST API. Integrate change queries and recovery into scripts, CI/CD pipelines, or custom tooling. Every operation available in the dashboard and MCP is also available as an API endpoint with API key authentication.
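As a sketch of scripted access, here is how such a query might be assembled with the Python standard library. The endpoint path, parameter names, and auth header are assumptions for illustration, not dbtrail's documented API; consult the actual docs before building against it:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical endpoint; the real path and parameter names may differ.
BASE = "https://api.dbtrail.com/v1/changes"

def build_change_query(api_key, **filters):
    """Build an authenticated GET request for a change-history query."""
    url = f"{BASE}?{urlencode(filters)}"
    return Request(url, headers={"Authorization": f"Bearer {api_key}"})

req = build_change_query("DBTRAIL_API_KEY",
                         table="orders", event_type="DELETE",
                         since="2024-05-01T00:00:00Z")
# urllib.request.urlopen(req) would execute the query; the response
# would carry the matching change events and their recovery SQL.
```

The same request shape works from a CI step that checks what a migration actually touched, which is where API access earns its keep.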
Daily tool, not just emergency tool
The most important shift in thinking about data change intelligence is that it is not just for emergencies.
Yes, when someone accidentally deletes a row or runs a bad UPDATE, dbtrail generates recovery SQL in seconds instead of the hours required for traditional point-in-time recovery. That capability alone justifies the tool.
But the daily value comes from investigation. After every deploy, you can verify that the migration touched the right rows and nothing else. When a support ticket comes in claiming “my account was changed”, you can see exactly what changed, when, and what the previous values were. When you suspect unauthorized changes to a sensitive table, you can query the full change history filtered by table and time range.
These are questions that every team with a production database asks regularly. Today the answer is usually “we don’t have a good way to check that” or “let me ask the DBA to look at the binlogs.” With data change intelligence, the answer is a 5-second query.
Try it
dbtrail is available today for MySQL 5.7, 8.0, and 8.4, including Amazon RDS, Aurora, and Percona Server. The free tier includes one server, 7 days of change history, and full MCP access.
Start free at dbtrail.com or read the docs to learn more.