Cursor-based pagination loads each page of data relative to a reference to the last item of the previous page, rather than a numeric offset. Because each page starts "after" a known row, inserts and deletes between requests don't cause rows to be skipped or duplicated the way they can with offset-based pagination, giving a more consistent experience when navigating large datasets. Here's how you can implement it in Go:
package main

import (
	"database/sql"
	"encoding/json"
	"fmt"
	"net/http"
	"strconv"

	_ "github.com/lib/pq" // PostgreSQL driver, registered with database/sql
)

const pageSize = 10

type Item struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

func getItems(db *sql.DB, cursor string) ([]Item, string, error) {
	// Validate the cursor and pass it as a bind parameter. Concatenating it
	// directly into the query string would be an SQL injection vulnerability.
	query := "SELECT id, name FROM items ORDER BY id ASC LIMIT $1"
	args := []interface{}{pageSize}
	if cursor != "" {
		id, err := strconv.Atoi(cursor)
		if err != nil {
			return nil, "", fmt.Errorf("invalid cursor %q: %w", cursor, err)
		}
		query = "SELECT id, name FROM items WHERE id > $1 ORDER BY id ASC LIMIT $2"
		args = []interface{}{id, pageSize}
	}

	rows, err := db.Query(query, args...)
	if err != nil {
		return nil, "", err
	}
	defer rows.Close()

	var items []Item
	var nextCursor string
	for rows.Next() {
		var item Item
		if err := rows.Scan(&item.ID, &item.Name); err != nil {
			return nil, "", err
		}
		items = append(items, item)
		nextCursor = fmt.Sprintf("%d", item.ID) // the last item's ID becomes the next cursor
	}
	if err := rows.Err(); err != nil {
		return nil, "", err
	}
	return items, nextCursor, nil
}

func itemsHandler(w http.ResponseWriter, r *http.Request, db *sql.DB) {
	cursor := r.URL.Query().Get("cursor")
	items, nextCursor, err := getItems(db, cursor)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Return the page together with the cursor for the next request.
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]interface{}{
		"items":       items,
		"next_cursor": nextCursor,
	})
}
func main() {
	// Replace the placeholder credentials with your own connection string.
	connStr := "user=username dbname=dbname sslmode=disable"
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	http.HandleFunc("/items", func(w http.ResponseWriter, r *http.Request) {
		itemsHandler(w, r, db)
	})
	if err := http.ListenAndServe(":8080", nil); err != nil {
		panic(err)
	}
}
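In production APIs the cursor is often returned as an opaque token (for example, base64-encoded) rather than a raw ID, so clients can't tamper with it or come to depend on its internal format. Here is a minimal, self-contained sketch of that idea; the `encodeCursor` and `decodeCursor` helpers are illustrative additions, not part of the handler above:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strconv"
)

// encodeCursor wraps the last-seen ID in a URL-safe base64 token so the
// client can treat it as an opaque string.
func encodeCursor(id int) string {
	return base64.URLEncoding.EncodeToString([]byte(strconv.Itoa(id)))
}

// decodeCursor recovers the ID from a token, rejecting malformed input.
func decodeCursor(token string) (int, error) {
	raw, err := base64.URLEncoding.DecodeString(token)
	if err != nil {
		return 0, fmt.Errorf("invalid cursor: %w", err)
	}
	return strconv.Atoi(string(raw))
}

func main() {
	token := encodeCursor(42)
	id, err := decodeCursor(token)
	fmt.Println(token, id, err) // prints "NDI= 42 <nil>"
}
```

With this in place, `getItems` would call `decodeCursor` on the incoming query parameter and `encodeCursor` when building `next_cursor`, leaving the wire format free to change later (e.g. to include a timestamp for tie-breaking) without breaking clients.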