Pagination in Echo can be implemented by reading `page` and `limit` query parameters, applying sensible defaults, and slicing the result set. The following example shows a complete Go program using Echo:
```go
package main

import (
	"net/http"
	"strconv"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	// Simulated database results: "Item 1" through "Item 100".
	items := make([]string, 100)
	for i := 0; i < 100; i++ {
		items[i] = "Item " + strconv.Itoa(i+1)
	}

	e.GET("/items", func(c echo.Context) error {
		// Missing or non-numeric parameters parse to 0 and
		// fall through to the defaults below.
		page, _ := strconv.Atoi(c.QueryParam("page"))
		limit, _ := strconv.Atoi(c.QueryParam("limit"))
		if page <= 0 {
			page = 1
		}
		if limit <= 0 {
			limit = 10
		}

		start := (page - 1) * limit
		end := start + limit
		if end > len(items) {
			end = len(items) // final page may be shorter than limit
		}
		// A page past the end of the data returns an empty list.
		if start >= len(items) {
			return c.JSON(http.StatusOK, []string{})
		}
		return c.JSON(http.StatusOK, items[start:end])
	})

	e.Logger.Fatal(e.Start(":8080"))
}
```
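The slicing arithmetic is independent of Echo, so it can be factored into a standalone helper, which makes the boundary cases (defaults, a short final page, an out-of-range page) easy to unit-test. This is a sketch under the same defaults as the handler above; the `paginate` name is an assumption for illustration, not part of Echo's API:

```go
package main

import "fmt"

// paginate returns the sub-slice of items for a 1-based page of the
// given size, applying the same defaults and clamping as the handler.
func paginate(items []string, page, limit int) []string {
	if page <= 0 {
		page = 1
	}
	if limit <= 0 {
		limit = 10
	}
	start := (page - 1) * limit
	if start >= len(items) {
		return []string{} // page past the end: empty result
	}
	end := start + limit
	if end > len(items) {
		end = len(items) // final page may be shorter than limit
	}
	return items[start:end]
}

func main() {
	items := []string{"a", "b", "c", "d", "e"}
	fmt.Println(paginate(items, 1, 2)) // first page: [a b]
	fmt.Println(paginate(items, 3, 2)) // short final page: [e]
	fmt.Println(paginate(items, 4, 2)) // out of range: []
}
```

With the server from the previous example running, a request such as `GET /items?page=2&limit=10` would correspond to `paginate(items, 2, 10)`, i.e. items 11 through 20.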