Paginating query results from a PostgreSQL database in Go with the `database/sql` package is most commonly done with the SQL `LIMIT` and `OFFSET` clauses: `LIMIT` caps the number of rows returned and `OFFSET` skips the rows belonging to earlier pages, which makes large datasets manageable to fetch and display page by page.
Here’s an example of how you can implement pagination in Go:
```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver, registered via side effect
)

func main() {
	db, err := sql.Open("postgres", "user=username dbname=mydb sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	page := 1      // current page number (1-based)
	pageSize := 10 // number of records per page
	offset := (page - 1) * pageSize

	rows, err := db.Query(
		"SELECT id, name FROM users ORDER BY id LIMIT $1 OFFSET $2",
		pageSize, offset,
	)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("ID: %d, Name: %s\n", id, name)
	}

	// Check for errors encountered during iteration.
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```

Note that the query must include an `ORDER BY` clause; without one, PostgreSQL does not guarantee a stable row order across queries, so pages could overlap or skip rows. (The original snippet also imported `net/http` without using it, which Go rejects at compile time; it has been removed here.)
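The offset arithmetic above generalizes into small helper functions. Here is a minimal sketch; the names `offsetFor` and `totalPages` are illustrative, not from any library:

```go
package main

import "fmt"

// offsetFor converts a 1-based page number and page size into the
// OFFSET value for the query; page numbers below 1 are clamped to 1.
func offsetFor(page, pageSize int) int {
	if page < 1 {
		page = 1
	}
	return (page - 1) * pageSize
}

// totalPages computes how many pages are needed to display totalRows
// records at pageSize records per page (ceiling division).
func totalPages(totalRows, pageSize int) int {
	return (totalRows + pageSize - 1) / pageSize
}

func main() {
	fmt.Println(offsetFor(3, 10))   // → 20 (skip the first two pages)
	fmt.Println(totalPages(95, 10)) // → 10 (9 full pages plus a partial one)
}
```

Keep in mind that `OFFSET` still scans and discards the skipped rows on the server, so very deep pages get progressively slower. For large tables, keyset pagination (e.g. `WHERE id > $1 ORDER BY id LIMIT $2`, passing the last seen `id`) is a common alternative that stays fast at any depth.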