In Java, a transaction refers to a sequence of operations performed as a single logical unit of work. A transaction must either complete in full or have no effect at all; this all-or-nothing property (atomicity) preserves data integrity in applications that interact with databases. Transactions are particularly important in systems that require a high degree of reliability and consistency, such as banking and e-commerce applications.
Auto-commit is a mode of operation in which the database commits every individual SQL statement as its own transaction. When auto-commit is enabled, each SQL statement is treated as a separate transaction. This means that if an error occurs partway through a sequence of statements, the changes made by the earlier statements still persist, because each was committed as soon as it executed. Turning auto-commit off allows multiple statements to be grouped into a single transaction that is committed or rolled back as a whole.
Here's an example of how to work with transactions and auto-commit in Java using JDBC:
// Disable auto-commit so the two inserts form one transaction
connection.setAutoCommit(false);
try {
    // try-with-resources ensures each PreparedStatement is closed,
    // even if an exception is thrown
    try (PreparedStatement stmt1 = connection.prepareStatement(
            "INSERT INTO accounts (name, balance) VALUES (?, ?)")) {
        stmt1.setString(1, "Alice");
        stmt1.setDouble(2, 1000.00);
        stmt1.executeUpdate();
    }
    try (PreparedStatement stmt2 = connection.prepareStatement(
            "INSERT INTO accounts (name, balance) VALUES (?, ?)")) {
        stmt2.setString(1, "Bob");
        stmt2.setDouble(2, 1500.00);
        stmt2.executeUpdate();
    }
    // Both inserts succeeded: commit the transaction
    connection.commit();
} catch (SQLException e) {
    // Undo both inserts if either one failed
    connection.rollback();
    e.printStackTrace();
} finally {
    // Restore auto-commit mode
    connection.setAutoCommit(true);
}
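The disable/commit/rollback/restore boilerplate above can be factored into a reusable helper so that call sites only describe the work to run. The sketch below is a minimal illustration, not a standard JDBC API: the TransactionRunner class and SqlWork interface are hypothetical names, and the caller is assumed to supply an already-open java.sql.Connection.

```java
import java.sql.Connection;
import java.sql.SQLException;

public class TransactionRunner {

    /** Work to perform inside a single transaction. */
    @FunctionalInterface
    public interface SqlWork {
        void execute(Connection conn) throws SQLException;
    }

    /**
     * Runs the given work as one transaction: disables auto-commit,
     * commits on success, rolls back on failure, and restores the
     * connection's previous auto-commit setting afterwards.
     */
    public static void runInTransaction(Connection conn, SqlWork work) throws SQLException {
        boolean previousAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try {
            work.execute(conn);
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e; // let the caller decide how to handle the failure
        } finally {
            conn.setAutoCommit(previousAutoCommit);
        }
    }
}
```

A call site then shrinks to a single lambda, for example: TransactionRunner.runInTransaction(connection, c -> { /* execute statements on c */ });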