In Go, you can implement a queue with several data structures. The most common approach is a slice: enqueue by appending to the back, and dequeue by re-slicing off the front. Below is a basic queue implemented with a slice:
```go
package main

import "fmt"

// Queue represents a simple queue structure.
type Queue struct {
    items []interface{}
}

// Enqueue adds an item to the end of the queue.
func (q *Queue) Enqueue(item interface{}) {
    q.items = append(q.items, item)
}

// Dequeue removes and returns the item at the front of the queue,
// or nil if the queue is empty.
func (q *Queue) Dequeue() interface{} {
    if len(q.items) == 0 {
        return nil
    }
    item := q.items[0]
    q.items = q.items[1:]
    return item
}

// IsEmpty reports whether the queue is empty.
func (q *Queue) IsEmpty() bool {
    return len(q.items) == 0
}

// Size returns the number of items in the queue.
func (q *Queue) Size() int {
    return len(q.items)
}

func main() {
    queue := Queue{}
    queue.Enqueue(1)
    queue.Enqueue(2)
    queue.Enqueue(3)
    fmt.Println(queue.Dequeue()) // Output: 1
    fmt.Println(queue.Size())    // Output: 2
    fmt.Println(queue.IsEmpty()) // Output: false
}
```
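Two caveats with the slice version above: returning nil from Dequeue is ambiguous when nil is itself a valid element, and re-slicing with `q.items[1:]` keeps the dequeued element reachable through the backing array, so it cannot be garbage-collected. If you are on Go 1.18 or later, a type-parameterized variant can address both; the sketch below is a hypothetical refinement, not part of the original example, using a comma-ok return and zeroing the vacated slot.

```go
package main

import "fmt"

// Queue is a generic FIFO queue backed by a slice.
type Queue[T any] struct {
    items []T
}

// Enqueue appends an item to the back of the queue.
func (q *Queue[T]) Enqueue(item T) {
    q.items = append(q.items, item)
}

// Dequeue removes the front item. The boolean reports whether the
// queue was non-empty, avoiding the ambiguity of a nil sentinel.
func (q *Queue[T]) Dequeue() (T, bool) {
    var zero T
    if len(q.items) == 0 {
        return zero, false
    }
    item := q.items[0]
    q.items[0] = zero // drop the reference so the value can be collected
    q.items = q.items[1:]
    return item, true
}

// Len returns the number of queued items.
func (q *Queue[T]) Len() int {
    return len(q.items)
}

func main() {
    q := Queue[string]{}
    q.Enqueue("a")
    q.Enqueue("b")
    if v, ok := q.Dequeue(); ok {
        fmt.Println(v) // a
    }
    fmt.Println(q.Len()) // 1
}
```

For long-lived queues with heavy traffic, a ring buffer or the standard library's `container/list` avoids the backing-array growth pattern entirely; the slice version remains the simplest choice for short-lived or small queues.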