To paginate query results using GORM in Go, you can use the `Limit` and `Offset` methods. This allows you to control the number of results returned and to skip a certain number of records, effectively implementing pagination.
package main

import (
	"log"
	"strconv"

	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type User struct {
	gorm.Model
	Name  string
	Email string
}

func main() {
	db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	// Create the users table if it does not already exist.
	if err := db.AutoMigrate(&User{}); err != nil {
		log.Fatal(err)
	}

	// Seed some data.
	for i := 0; i < 100; i++ {
		db.Create(&User{Name: "User #" + strconv.Itoa(i), Email: "user" + strconv.Itoa(i) + "@example.com"})
	}

	// Pagination setup.
	var users []User
	page := 1      // current page number (1-based)
	pageSize := 10 // number of records per page

	// Fetch one page: skip the rows of earlier pages, then cap the result count.
	db.Offset((page - 1) * pageSize).Limit(pageSize).Find(&users)

	// Output the users.
	for _, user := range users {
		log.Println(user.Name, user.Email)
	}
}
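A paginated UI usually also needs the total page count, which you can derive from a record count (e.g. GORM's `Count`). Here is a minimal sketch of the offset arithmetic used above, factored into a plain helper; the `pageBounds` name and the clamping of out-of-range page numbers are illustrative choices, not part of GORM:

```go
package main

import "fmt"

// pageBounds computes the OFFSET for a 1-based page number and the total
// number of pages needed to cover `total` records. Hypothetical helper,
// not a GORM API.
func pageBounds(page, pageSize, total int) (offset, totalPages int) {
	if page < 1 {
		page = 1 // clamp: treat page numbers below 1 as page 1
	}
	offset = (page - 1) * pageSize
	totalPages = (total + pageSize - 1) / pageSize // ceiling division
	return offset, totalPages
}

func main() {
	offset, pages := pageBounds(3, 10, 100)
	fmt.Println(offset, pages) // page 3 of 10-row pages over 100 rows
}
```

You would then pass the computed offset to `db.Offset(offset).Limit(pageSize).Find(&users)` and use `totalPages` to render the pager.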