A ring buffer is a data structure that uses a single fixed-size buffer as though it were connected end-to-end. It is particularly useful when you want to implement a queue while avoiding repeated memory allocations. Below is an example of how to implement a ring buffer in Swift.
class RingBuffer<T> {
    private var buffer: [T?]
    private var head: Int = 0
    private var tail: Int = 0
    private var size: Int = 0

    init(capacity: Int) {
        buffer = Array(repeating: nil, count: capacity)
    }

    /// Appends an element. When the buffer is full, the oldest element is overwritten.
    func write(element: T) {
        buffer[tail] = element
        tail = (tail + 1) % buffer.count
        if size < buffer.count {
            size += 1
        } else {
            // Buffer was full: the write overwrote the oldest element, so advance head.
            head = (head + 1) % buffer.count
        }
    }

    /// Removes and returns the oldest element, or nil if the buffer is empty.
    func read() -> T? {
        guard size > 0 else { return nil }
        let element = buffer[head]
        buffer[head] = nil
        head = (head + 1) % buffer.count
        size -= 1
        return element
    }

    func isEmpty() -> Bool {
        return size == 0
    }

    func isFull() -> Bool {
        return size == buffer.count
    }
}
// Usage example:
let ringBuffer = RingBuffer<Int>(capacity: 5)
ringBuffer.write(element: 1)
ringBuffer.write(element: 2)
print(ringBuffer.read().map(String.init) ?? "Empty") // Output: 1
print(ringBuffer.isFull()) // Output: false
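The write(element:) method above overwrites the oldest element once the buffer is full rather than rejecting the write. A short sketch of that wrap-around behavior, assuming the RingBuffer class is generic over its element type as shown:

```swift
// Capacity 3, but four writes: the fourth write wraps around
// and overwrites the oldest element (1).
let rb = RingBuffer<Int>(capacity: 3)
for value in [1, 2, 3, 4] {
    rb.write(element: value)
}
print(rb.isFull())     // true  (size stays at capacity)
print(rb.read() ?? -1) // 2     (1 was overwritten)
print(rb.read() ?? -1) // 3
```

If you instead want writes to fail when the buffer is full, have write(element:) check isFull() first and return a Bool indicating success.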