Implementing an LRU (Least Recently Used) cache in Swift is a useful way to bound memory usage in applications that keep a limited working set in memory. The example below stores values in a dictionary and tracks recency with an array: the most recently used key sits at the front of the array, and the key at the back is evicted when the cache reaches capacity.
class LRUCache {
    private var cache: [Int: Int]
    private var order: [Int]
    private let capacity: Int

    init(capacity: Int) {
        self.capacity = capacity
        self.cache = [:]
        self.order = []
    }

    func get(_ key: Int) -> Int {
        guard let value = cache[key] else {
            return -1
        }
        // Move key to the front to mark it as recently used
        order.removeAll { $0 == key }
        order.insert(key, at: 0)
        return value
    }

    func put(_ key: Int, _ value: Int) {
        if cache.keys.contains(key) {
            cache[key] = value
            // Move key to the front to mark it as recently used
            order.removeAll { $0 == key }
        } else {
            if cache.count >= capacity {
                // Remove the least recently used key
                if let lruKey = order.last {
                    cache.removeValue(forKey: lruKey)
                    order.removeLast()
                }
            }
            cache[key] = value
        }
        order.insert(key, at: 0)
    }
}
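A quick sanity check of the class above (the capacity of 2 and the key/value pairs here are arbitrary): after a third key is inserted into a full cache, the least recently used key is evicted.

```swift
let cache = LRUCache(capacity: 2)
cache.put(1, 100)
cache.put(2, 200)
print(cache.get(1))   // 100 — key 1 is now most recently used
cache.put(3, 300)     // cache is full, so key 2 (least recently used) is evicted
print(cache.get(2))   // -1 — key 2 is gone
print(cache.get(3))   // 300
```

Note that `order.removeAll` and `order.insert(_, at: 0)` each scan or shift the array, so `get` and `put` are O(n) in the cache size. That is fine for small capacities; for large caches, a common refinement is to pair the dictionary with a doubly linked list so both operations run in O(1).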