In Go (1.18 and later), generic data structures such as stacks and queues can be written using type parameters, so a single implementation works for any element type. Below is an example of how to implement a generic stack and queue in Go.
package main

import "fmt"

// Stack represents a generic stack data structure.
type Stack[T any] struct {
	items []T
}

// Push adds an item to the top of the stack.
func (s *Stack[T]) Push(item T) {
	s.items = append(s.items, item)
}

// Pop removes and returns the top item of the stack.
func (s *Stack[T]) Pop() T {
	if len(s.items) == 0 {
		var zero T
		return zero // return the zero value of T if the stack is empty
	}
	topItem := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return topItem
}

// Queue represents a generic queue data structure.
type Queue[T any] struct {
	items []T
}

// Enqueue adds an item to the back of the queue.
func (q *Queue[T]) Enqueue(item T) {
	q.items = append(q.items, item)
}

// Dequeue removes and returns the front item of the queue.
func (q *Queue[T]) Dequeue() T {
	if len(q.items) == 0 {
		var zero T
		return zero // return the zero value of T if the queue is empty
	}
	frontItem := q.items[0]
	q.items = q.items[1:]
	return frontItem
}

func main() {
	// Example usage of Stack
	stack := Stack[int]{}
	stack.Push(1)
	stack.Push(2)
	fmt.Println("Popped from stack:", stack.Pop()) // Output: Popped from stack: 2

	// Example usage of Queue
	queue := Queue[string]{}
	queue.Enqueue("first")
	queue.Enqueue("second")
	fmt.Println("Dequeued from queue:", queue.Dequeue()) // Output: Dequeued from queue: first
}
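One caveat with the implementation above: returning the zero value from an empty Pop or Dequeue makes "empty" indistinguishable from "a stored zero value" (for example, an `int` stack that legitimately contains 0). A common Go idiom is the comma-ok pattern, returning a second boolean result. The sketch below shows this variant for the stack; the `SafeStack` name is just illustrative, not part of the original code.

```go
package main

import "fmt"

// SafeStack is a stack whose Pop reports whether a value was actually present.
type SafeStack[T any] struct {
	items []T
}

// Push adds an item to the top of the stack.
func (s *SafeStack[T]) Push(item T) {
	s.items = append(s.items, item)
}

// Pop returns the top item and true, or the zero value of T and false
// when the stack is empty.
func (s *SafeStack[T]) Pop() (T, bool) {
	if len(s.items) == 0 {
		var zero T
		return zero, false
	}
	top := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return top, true
}

func main() {
	var s SafeStack[int]
	s.Push(0) // a legitimately stored zero value
	if v, ok := s.Pop(); ok {
		fmt.Println("popped:", v) // popped: 0 — distinguishable from "empty"
	}
	if _, ok := s.Pop(); !ok {
		fmt.Println("stack is empty")
	}
}
```

The same two-result signature can be applied to Dequeue. Callers then write `if v, ok := s.Pop(); ok { ... }`, matching how map lookups and type assertions already work in Go.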