A priority queue in Go is usually built on the standard library's container/heap package: you define a slice type that satisfies heap.Interface, and the package supplies the heap operations on top of it. Storing values as interface{} makes the queue "generic" in the pre-Go-1.18 sense, at the cost of a type assertion on every Pop. Below is a simple example.
package main

import (
	"container/heap"
	"fmt"
)

// Item is something we manage in a priority queue.
type Item struct {
	value    interface{} // The value of the item; arbitrary.
	priority int         // The priority of the item in the queue.
}

// A PriorityQueue implements heap.Interface and holds Items.
type PriorityQueue []*Item

func (pq PriorityQueue) Len() int { return len(pq) }

func (pq PriorityQueue) Less(i, j int) bool {
	// We want Pop to give us the highest priority, so we use greater than here.
	return pq[i].priority > pq[j].priority
}

func (pq PriorityQueue) Swap(i, j int) {
	pq[i], pq[j] = pq[j], pq[i]
}

func (pq *PriorityQueue) Push(x interface{}) {
	item := x.(*Item)
	*pq = append(*pq, item)
}

func (pq *PriorityQueue) Pop() interface{} {
	old := *pq
	n := len(old)
	item := old[n-1]
	old[n-1] = nil // clear the slot so the popped Item can be garbage-collected
	*pq = old[:n-1]
	return item
}

func main() {
	// Create a PriorityQueue and add some items.
	pq := &PriorityQueue{}
	heap.Init(pq)
	heap.Push(pq, &Item{value: "low priority task", priority: 1})
	heap.Push(pq, &Item{value: "medium priority task", priority: 5})
	heap.Push(pq, &Item{value: "high priority task", priority: 10})

	// Pop items in priority order: highest priority first.
	for pq.Len() > 0 {
		item := heap.Pop(pq).(*Item)
		fmt.Printf("%v: %d\n", item.value, item.priority)
	}
}
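Since Go 1.18, the same structure can be expressed with type parameters, which keeps the interface{} plumbing of heap.Interface hidden behind a typed wrapper. Here is a minimal sketch under that assumption; the PriorityQueue[T] wrapper and its method names are illustrative, not from any library:

```go
package main

import (
	"container/heap"
	"fmt"
)

// item pairs a value of any type T with its priority.
type item[T any] struct {
	value    T
	priority int
}

// pqueue implements heap.Interface over a slice of items.
type pqueue[T any] []item[T]

func (pq pqueue[T]) Len() int           { return len(pq) }
func (pq pqueue[T]) Less(i, j int) bool { return pq[i].priority > pq[j].priority }
func (pq pqueue[T]) Swap(i, j int)      { pq[i], pq[j] = pq[j], pq[i] }
func (pq *pqueue[T]) Push(x any)        { *pq = append(*pq, x.(item[T])) }
func (pq *pqueue[T]) Pop() any {
	old := *pq
	n := len(old)
	it := old[n-1]
	*pq = old[:n-1]
	return it
}

// PriorityQueue wraps pqueue so callers work with plain T values
// and never see the any-typed heap.Interface methods.
type PriorityQueue[T any] struct{ h pqueue[T] }

func (q *PriorityQueue[T]) Len() int { return q.h.Len() }

func (q *PriorityQueue[T]) Push(v T, priority int) {
	heap.Push(&q.h, item[T]{value: v, priority: priority})
}

func (q *PriorityQueue[T]) Pop() T {
	return heap.Pop(&q.h).(item[T]).value
}

func main() {
	var q PriorityQueue[string]
	q.Push("low priority task", 1)
	q.Push("high priority task", 10)
	q.Push("medium priority task", 5)
	for q.Len() > 0 {
		fmt.Println(q.Pop()) // highest priority first
	}
}
```

The type assertion in Pop is now confined to the wrapper, so callers get compile-time type safety: a PriorityQueue[string] only accepts and returns strings.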