Using a connection pool with Redis in Go lets many goroutines share a bounded set of server connections, which avoids per-request connection setup and caps resource usage. The go-redis client maintains this pool internally, so a single *redis.Client can be shared safely across goroutines. Here's a quick guide on how to configure and use Redis connection pooling in your Go applications.
package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()

	// NewClient creates a client backed by an internal connection pool.
	rdb := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379", // Redis server address
		Password: "",               // no password set
		DB:       0,                // use default DB
		PoolSize: 10,               // maximum number of socket connections
	})
	defer rdb.Close() // release pooled connections on exit

	// Ping the Redis server to verify the connection works.
	pong, err := rdb.Ping(ctx).Result()
	if err != nil {
		fmt.Println("Could not connect to Redis:", err)
		return
	}
	fmt.Println(pong) // Output: PONG

	// Each command checks a connection out of the pool and returns it when done.
	if err := rdb.Set(ctx, "key", "value", 0).Err(); err != nil {
		fmt.Println("Could not set key:", err)
		return
	}
	val, err := rdb.Get(ctx, "key").Result()
	if err != nil {
		fmt.Println("Could not get key:", err)
		return
	}
	fmt.Println("key:", val) // Output: key: value
}