WL#2749: Extend Logs: Add ability to log bad rows
Affects: Server-7.1
—
Status: Un-Assigned
One feature that could be added while we are extending
our logging capabilities is the ability to log "bad" rows,
generated during bulk data changes, to a table.
For example, from
http://tkyte.blogspot.com/2005/07/how-cool-is-this.html
regarding a new Oracle 10g feature:
"DML Error Logging, instead of the 100,000 row update/insert/whatever
failing because a single row doesn’t quite work out, we can have the
99,999 successful rows go through and have the one bad row logged to
a table! I mean, this isn’t just cool, this is “change my world,
rewrite the books, stop the presses, wow, knock me over with a feather”."
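For reference, the Oracle 10g feature Kyte is describing is driven by
the DBMS_ERRLOG package plus a LOG ERRORS clause on the DML statement.
A minimal sketch follows; the table and error-log names are illustrative,
not part of this worklog:

    -- Create the error-log table for ORDERS (Oracle 10g DBMS_ERRLOG package)
    EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('ORDERS', 'ERR$_ORDERS');

    -- Bad rows are written to ERR$_ORDERS instead of aborting the statement
    INSERT INTO orders
    SELECT * FROM staging_orders
    LOG ERRORS INTO err$_orders ('nightly load') REJECT LIMIT UNLIMITED;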
Trudy Pelzer's comment:
We already have INSERT IGNORE, which allows a bulk INSERT
to keep going when an error is encountered. We even have
the facility to determine whether all INSERTs were processed
(check the mysql_info() C API function). But logging *which*
statement failed would certainly be a nifty feature.
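For comparison, a sketch of what INSERT IGNORE gives us today: bad rows
are skipped and counted, but the rows themselves are not preserved
anywhere. Table and column names below are illustrative:

    CREATE TABLE t (id INT PRIMARY KEY, v VARCHAR(10));
    INSERT INTO t VALUES (1, 'a');

    -- The duplicate of id=1 is skipped; the other two rows are inserted
    INSERT IGNORE INTO t VALUES (1, 'dup'), (2, 'b'), (3, 'c');

    -- mysql_info() for the statement above returns roughly:
    --   Records: 3  Duplicates: 1  Warnings: 1
    SHOW WARNINGS;  -- reports the duplicate-key conflict, but not the lost row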