In Swift, `URLCache` offers a straightforward way to cache network responses, storing data for reuse so that subsequent requests avoid redundant network round trips. A well-configured caching strategy can significantly reduce load times and network usage.
To utilize `URLCache` effectively, follow these steps:
// Step 1: Create and configure the URLCache
let cacheSizeMemory = 512 * 1024 // 512 KB
let cacheSizeDisk = 20 * 1024 * 1024 // 20 MB
let urlCache = URLCache(memoryCapacity: cacheSizeMemory, diskCapacity: cacheSizeDisk, diskPath: nil)

// Step 2: Set up a URLSession with the cache
let sessionConfiguration = URLSessionConfiguration.default
sessionConfiguration.urlCache = urlCache
let session = URLSession(configuration: sessionConfiguration)

// Step 3: Make a network request
let url = URL(string: "https://api.example.com/data")!
let request = URLRequest(url: url)
session.dataTask(with: request) { data, response, error in
    guard let data = data,
          let httpResponse = response as? HTTPURLResponse else { return }
    // Cache successful responses for future use
    if httpResponse.statusCode == 200 {
        let cachedResponse = CachedURLResponse(response: httpResponse, data: data)
        urlCache.storeCachedResponse(cachedResponse, for: request)
    }
}.resume()

// Step 4: Retrieve the cached response. Note that the data task runs
// asynchronously, so this lookup only succeeds once the completion
// handler above has stored the response (e.g. on a later request).
if let cachedResponse = urlCache.cachedResponse(for: request) {
    // Use the cached data
    let cachedData = cachedResponse.data
    print("Cached Data: \(cachedData.count) bytes")
}