Handling transactions with PDO (PHP Data Objects) is essential for ensuring data integrity in your database operations. By using transactions, you can execute multiple queries as a single unit of work. If any query fails, you can roll back all changes, preserving the state of the database.
Below is a simple example demonstrating how to use transactions with PDO:
<?php
// Connection settings (replace with your own credentials)
$dsn = 'mysql:host=your_host;dbname=your_db;charset=utf8mb4';
$username = 'your_username';
$password = 'your_password';

try {
    // Create a new PDO instance and make it throw exceptions on error
    $pdo = new PDO($dsn, $username, $password);
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // Start a transaction
    $pdo->beginTransaction();

    // Prepare and execute queries
    $stmt1 = $pdo->prepare("INSERT INTO users (name, email) VALUES (?, ?)");
    $stmt1->execute(['John Doe', 'john@example.com']);

    // Use the id of the row we just inserted rather than a hard-coded value
    $userId = $pdo->lastInsertId();
    $stmt2 = $pdo->prepare("INSERT INTO orders (user_id, product_id) VALUES (?, ?)");
    $stmt2->execute([$userId, 2]);

    // Commit the transaction
    $pdo->commit();
    echo "Transaction completed successfully.";
} catch (Exception $e) {
    // Roll back only if a transaction is actually open; the connection
    // itself may have failed before beginTransaction() was ever reached,
    // in which case $pdo is undefined.
    if (isset($pdo) && $pdo->inTransaction()) {
        $pdo->rollBack();
    }
    echo "Failed to complete transaction: " . $e->getMessage();
}
?>
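To see the rollback actually happen, here is a minimal, self-contained sketch using an in-memory SQLite database (it assumes the `pdo_sqlite` extension is available; the table and column names are illustrative). The second insert deliberately violates a `NOT NULL` constraint, which throws, and the rollback undoes the first insert as well:

```php
<?php
// In-memory SQLite database: nothing to configure, nothing persisted.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)");

try {
    $pdo->beginTransaction();
    $stmt = $pdo->prepare("INSERT INTO users (name) VALUES (?)");
    $stmt->execute(['Alice']);      // succeeds
    $stmt->execute([null]);         // violates NOT NULL and throws
    $pdo->commit();
} catch (Exception $e) {
    if ($pdo->inTransaction()) {
        $pdo->rollBack();
    }
}

// Both inserts were undone, so the table is empty again.
$count = $pdo->query("SELECT COUNT(*) FROM users")->fetchColumn();
echo "Rows after rollback: " . (int) $count . "\n";
?>
```

Because the two `execute()` calls ran inside one transaction, the successful insert of `'Alice'` is discarded along with the failed one: the unit of work either applies completely or not at all.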