This guide demonstrates how to use Go's http.Client with a request timeout and a simple retry loop, so that applications can recover from transient HTTP failures instead of hanging or giving up on the first error.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// createClient returns an http.Client whose Timeout bounds the entire
// request: connecting, following redirects, and reading the response body.
func createClient(timeout time.Duration) *http.Client {
	return &http.Client{
		Timeout: timeout,
	}
}

// makeRequest performs an HTTP GET, retrying on transport errors up to
// `retries` times with a fixed delay between attempts.
func makeRequest(client *http.Client, url string, retries int) (*http.Response, error) {
	var resp *http.Response
	var err error
	for i := 0; i < retries; i++ {
		resp, err = client.Get(url)
		if err == nil {
			return resp, nil
		}
		fmt.Printf("Attempt %d failed: %v\n", i+1, err)
		if i < retries-1 {
			time.Sleep(2 * time.Second) // wait before the next attempt, but not after the last one
		}
	}
	return nil, fmt.Errorf("failed after %d attempts: %w", retries, err)
}

func main() {
	client := createClient(5 * time.Second) // 5-second timeout per request
	url := "https://example.com"
	retries := 3

	resp, err := makeRequest(client, url, retries)
	if err != nil {
		fmt.Printf("Error making request: %v\n", err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("Response status: %s\n", resp.Status)
}