A cache stampede occurs when many concurrent requests miss the cache at the same time and all fall through to the underlying data source at once, overloading it. To handle this with Redis in Go, you can make sure only one process fetches and caches the fresh value while the other callers wait for it to appear.
Here's a simple approach using a Redis lock (SETNX) to serialize the rebuild:
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/go-redis/redis/v8"
)

var ctx = context.Background()

func getValueFromCache(client *redis.Client, key string) (string, error) {
	for {
		// Try to read the value from the cache.
		val, err := client.Get(ctx, key).Result()
		if err == nil {
			return val, nil // Cache hit
		}
		if err != redis.Nil {
			return "", err // Real error, not just a cache miss
		}

		// Cache miss: try to take a short-lived lock so only one caller
		// rebuilds the value. The TTL prevents a crashed holder from
		// blocking everyone forever.
		lockKey := key + ":lock"
		isLocked, err := client.SetNX(ctx, lockKey, "locked", 5*time.Second).Result()
		if err != nil {
			return "", err
		}

		if isLocked {
			// We hold the lock: fetch fresh data (simulated here).
			time.Sleep(2 * time.Second) // Simulating a slow data fetch
			freshData := "Fresh data from source"

			// Store the result with a TTL, then release the lock so
			// waiting callers see the fresh value on their next read.
			if err := client.Set(ctx, key, freshData, 10*time.Minute).Err(); err != nil {
				return "", err
			}
			client.Del(ctx, lockKey)
			return freshData, nil
		}

		// Someone else is rebuilding: wait briefly, then loop and re-read.
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	// Create the client once and reuse it rather than opening a
	// connection on every lookup.
	client := redis.NewClient(&redis.Options{
		Addr: "localhost:6379",
	})
	defer client.Close()

	value, err := getValueFromCache(client, "mydata")
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println("Value:", value)
}