A deque (double-ended queue) is an abstract data type that allows insertion and deletion of elements from both ends. In Go, we can implement a deque using slices or linked lists. Below is an example implementation of a simple deque using slices.
package main

import "fmt"

// Deque is a double-ended queue backed by a slice of ints.
type Deque struct {
	items []int
}

// PushFront adds an item to the front of the deque.
// Note: this is O(n), since every existing element is shifted right.
func (d *Deque) PushFront(item int) {
	d.items = append([]int{item}, d.items...)
}

// PushBack adds an item to the back of the deque (amortized O(1)).
func (d *Deque) PushBack(item int) {
	d.items = append(d.items, item)
}

// PopFront removes and returns the item at the front of the deque.
// The second return value is false if the deque is empty.
func (d *Deque) PopFront() (int, bool) {
	if len(d.items) == 0 {
		return 0, false
	}
	item := d.items[0]
	d.items = d.items[1:]
	return item, true
}

// PopBack removes and returns the item at the back of the deque.
// The second return value is false if the deque is empty.
func (d *Deque) PopBack() (int, bool) {
	if len(d.items) == 0 {
		return 0, false
	}
	index := len(d.items) - 1
	item := d.items[index]
	d.items = d.items[:index]
	return item, true
}

// main demonstrates the deque operations.
func main() {
	deque := Deque{}
	deque.PushBack(1)
	deque.PushBack(2)
	deque.PushFront(0)
	fmt.Println(deque.PopFront()) // Output: 0 true
	fmt.Println(deque.PopBack())  // Output: 2 true
}