In Go, the pgx library makes it straightforward to map rows from a database query to structs. pgx is a native PostgreSQL driver, so you can execute queries and scan the results directly into your Go structs without going through database/sql.
To map rows to structs, first define a struct whose fields correspond to the columns your query selects. Then execute the query and scan each row into the struct.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v4/pgxpool"
)

type User struct {
	ID    int64
	Name  string
	Email string
}

func main() {
	// Connect using a pgx connection pool.
	db, err := pgxpool.Connect(context.Background(), "postgres://user:password@localhost:5432/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query(context.Background(), "SELECT id, name, email FROM users")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	var users []User
	for rows.Next() {
		var user User
		// Scan columns into struct fields in the same order as the SELECT list.
		if err := rows.Scan(&user.ID, &user.Name, &user.Email); err != nil {
			log.Fatal(err)
		}
		users = append(users, user)
	}
	// Check for errors encountered during iteration.
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}

	fmt.Println(users)
}
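If you can use pgx v5, the library can do the per-column scanning for you: pgx.CollectRows combined with pgx.RowToStructByName maps each row onto struct fields by column name (matched case-insensitively), replacing the manual rows.Next/rows.Scan loop. A minimal sketch, reusing the same assumed connection string and users table as above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

type User struct {
	ID    int64
	Name  string
	Email string
}

func main() {
	// pgxpool.New replaces pgxpool.Connect in v5.
	pool, err := pgxpool.New(context.Background(), "postgres://user:password@localhost:5432/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	rows, err := pool.Query(context.Background(), "SELECT id, name, email FROM users")
	if err != nil {
		log.Fatal(err)
	}

	// CollectRows drains and closes rows, mapping each one to a User
	// by matching column names to struct field names.
	users, err := pgx.CollectRows(rows, pgx.RowToStructByName[User])
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(users)
}
```

RowToStructByName errors if the selected columns and struct fields don't match one-to-one; use pgx.RowToStructByNameLax if you want extra struct fields to be tolerated.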