In Go, you can implement a generic queue using a struct combined with Go's type parameters (generics), available since Go 1.18. This lets you create a queue that can hold any type of data while maintaining type safety.
Here’s an example of how to implement a simple generic queue in Go:
```go
package main

import (
	"fmt"
	"sync"
)

// Queue is a thread-safe FIFO queue holding elements of any type T.
type Queue[T any] struct {
	items []T
	lock  sync.Mutex
}

// Enqueue adds an item to the back of the queue.
func (q *Queue[T]) Enqueue(item T) {
	q.lock.Lock()
	defer q.lock.Unlock()
	q.items = append(q.items, item)
}

// Dequeue removes and returns the item at the front of the queue.
// The second return value is false if the queue was empty.
func (q *Queue[T]) Dequeue() (T, bool) {
	q.lock.Lock()
	defer q.lock.Unlock()
	if len(q.items) == 0 {
		var zero T
		return zero, false // return the zero value of T if the queue is empty
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, true
}

// Size returns the number of items currently in the queue.
func (q *Queue[T]) Size() int {
	q.lock.Lock()
	defer q.lock.Unlock()
	return len(q.items)
}

// Example usage
func main() {
	q := Queue[int]{}
	q.Enqueue(1)
	q.Enqueue(2)
	fmt.Println(q.Dequeue()) // Outputs: 1 true
	fmt.Println(q.Size())    // Outputs: 1
}
```