To paginate query results with Go's database/sql package and MySQL, you typically use the SQL LIMIT and OFFSET clauses. LIMIT caps the number of rows returned and OFFSET skips the rows belonging to earlier pages, so each query fetches exactly one page of results. Below is an example of how to implement pagination in Go using database/sql:
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Open a database connection.
	db, err := sql.Open("mysql", "user:password@/dbname")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Define pagination parameters.
	page := 1          // current page number (1-based)
	itemsPerPage := 10 // number of items per page
	offset := (page - 1) * itemsPerPage

	// Select exactly the columns that will be scanned, and use ?
	// placeholders so LIMIT and OFFSET are passed as query parameters
	// instead of being formatted into the SQL string.
	query := "SELECT id, name FROM items LIMIT ? OFFSET ?"

	// Execute the query.
	rows, err := db.Query(query, itemsPerPage, offset)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	// Iterate through the rows.
	for rows.Next() {
		var id int
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%d: %s\n", id, name)
	}

	// Check for errors encountered while iterating over the rows.
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
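The offset arithmetic above is easy to get wrong at the edges (page 0, negative pages, empty tables), so it can help to factor it into small helpers. The following is a minimal sketch; the names offsetFor and totalPages are illustrative, not from any library, and the row count would typically come from a separate SELECT COUNT(*) query:

```go
package main

import "fmt"

// offsetFor converts a 1-based page number and page size into the
// OFFSET value for the SQL query; pages below 1 are clamped to 1.
func offsetFor(page, perPage int) int {
	if page < 1 {
		page = 1
	}
	return (page - 1) * perPage
}

// totalPages computes how many pages are needed to display totalRows
// rows (e.g. the result of SELECT COUNT(*) FROM items).
func totalPages(totalRows, perPage int) int {
	if perPage <= 0 {
		return 0
	}
	return (totalRows + perPage - 1) / perPage // ceiling division
}

func main() {
	fmt.Println(offsetFor(3, 10))   // 20
	fmt.Println(totalPages(95, 10)) // 10
}
```

Keeping this logic out of the query-building code also makes it trivial to unit-test without a database connection.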