In Go, the pgx package's transaction support offers a robust way to keep data consistent across multiple database operations: either every statement commits together, or the whole batch rolls back. Below is an example of how to use transactions with pgx effectively.
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()

	// Connect to the database
	conn, err := pgx.Connect(ctx, "your_connection_string")
	if err != nil {
		log.Fatal("Unable to connect to database:", err)
	}
	defer conn.Close(ctx)

	// Begin the transaction
	tx, err := conn.Begin(ctx)
	if err != nil {
		log.Fatal("Unable to begin transaction:", err)
	}

	// Roll back on error, commit otherwise. This works because every
	// operation below assigns to the same err variable that this
	// deferred function inspects.
	defer func() {
		if err != nil {
			if rbErr := tx.Rollback(ctx); rbErr != nil {
				log.Fatalf("failed to rollback: %v", rbErr)
			}
			log.Println("Transaction rolled back due to error:", err)
		} else {
			// Commit the transaction if no errors occurred
			if err := tx.Commit(ctx); err != nil {
				log.Fatalf("failed to commit transaction: %v", err)
			}
			log.Println("Transaction committed successfully")
		}
	}()

	// Example database operations
	_, err = tx.Exec(ctx, "INSERT INTO users (name, age) VALUES ($1, $2)", "John Doe", 30)
	if err != nil {
		return // This will trigger a rollback
	}

	_, err = tx.Exec(ctx, "UPDATE accounts SET balance = balance - $1 WHERE user_id = $2", 100, 1)
	if err != nil {
		return // This will trigger a rollback
	}

	// More operations can be added here
}