In Go, database work with GORM often involves managing transactions. A transaction ensures that a series of operations either all succeed or all fail, preserving data integrity. GORM exposes manual transaction control through the `DB.Begin()`, `Commit()`, and `Rollback()` methods:
package main

import (
	"log"

	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type Product struct {
	ID    uint   `gorm:"primaryKey"`
	Code  string `gorm:"unique"`
	Price uint
}

func main() {
	db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	// Start a new transaction
	tx := db.Begin()
	if tx.Error != nil {
		log.Fatal(tx.Error)
	}

	// Create a product inside the transaction
	product := Product{Code: "L1212", Price: 1000}
	if err := tx.Create(&product).Error; err != nil {
		// Roll back the transaction if the insert fails
		tx.Rollback()
		log.Fatal(err)
	}

	// Uncomment to trigger a rollback: inserting a second product with the
	// same Code violates the unique constraint
	// if err := tx.Create(&Product{Code: "L1212", Price: 2000}).Error; err != nil {
	// 	tx.Rollback()
	// 	log.Fatal(err)
	// }

	// If everything succeeded, commit the transaction
	if err := tx.Commit().Error; err != nil {
		log.Fatal(err)
	}
	log.Println("Transaction committed successfully")
}
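Besides manual `Begin()`/`Commit()`/`Rollback()`, GORM also offers a closure-based helper, `db.Transaction`, which commits automatically when the closure returns nil and rolls back when it returns an error or panics, so a forgotten `Rollback()` is impossible. A minimal sketch of the same insert using that helper, reusing the `Product` model and `test.db` database from above (the product code "D4242" is just an illustrative value):

```go
package main

import (
	"log"

	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type Product struct {
	ID    uint   `gorm:"primaryKey"`
	Code  string `gorm:"unique"`
	Price uint
}

func main() {
	db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	// db.Transaction wraps Begin/Commit/Rollback: a non-nil error
	// returned from the closure rolls everything back automatically.
	err = db.Transaction(func(tx *gorm.DB) error {
		if err := tx.Create(&Product{Code: "D4242", Price: 500}).Error; err != nil {
			return err // any error triggers a rollback
		}
		return nil // nil triggers a commit
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Transaction committed successfully")
}
```

The closure form is generally preferable for straightforward cases, while manual `Begin()` remains useful when commit and rollback decisions span multiple functions.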