In Swift, a generic ring buffer (also known as a circular buffer) is a fixed-capacity queue backed by an array: enqueue and dequeue both run in O(1), and when the buffer is full the oldest element is overwritten. Here’s an example implementation of a generic ring buffer in Swift:
struct RingBuffer<T> {
    private var buffer: [T?]
    private var head: Int = 0
    private var tail: Int = 0
    private var size: Int = 0

    var count: Int {
        return size
    }

    var isEmpty: Bool {
        return size == 0
    }

    init(capacity: Int) {
        buffer = Array<T?>(repeating: nil, count: capacity)
    }

    mutating func enqueue(_ element: T) {
        if size == buffer.count {
            // Buffer is full. Overwrite the oldest element by advancing head.
            head = (head + 1) % buffer.count
        } else {
            size += 1
        }
        buffer[tail] = element
        tail = (tail + 1) % buffer.count
    }

    mutating func dequeue() -> T? {
        guard !isEmpty else {
            return nil
        }
        let element = buffer[head]
        buffer[head] = nil
        head = (head + 1) % buffer.count
        size -= 1
        return element
    }

    func peek() -> T? {
        return isEmpty ? nil : buffer[head]
    }
}
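To see the overwrite-when-full behavior in action, here is a short usage sketch (the capacity of 3 and the integer values are arbitrary; it assumes the RingBuffer struct above is in scope):

```swift
var rb = RingBuffer<Int>(capacity: 3)
rb.enqueue(1)
rb.enqueue(2)
rb.enqueue(3)
rb.enqueue(4)                 // buffer full: 1 (the oldest element) is overwritten

print(rb.count)               // 3
print(rb.peek() ?? -1)        // 2 — oldest surviving element
print(rb.dequeue() ?? -1)     // 2
print(rb.dequeue() ?? -1)     // 3
print(rb.dequeue() ?? -1)     // 4
print(rb.dequeue() == nil)    // true — buffer is now empty
```

Note that once the buffer is full, every further enqueue silently drops the oldest element; if you want lossless behavior instead, enqueue could return a Bool or throw when `count == capacity`.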