A binary search tree (BST) is a data structure that supports fast lookup, insertion, and removal of items. A BST has the following properties:

- The left subtree of a node contains only nodes with values less than the node's value.
- The right subtree of a node contains only nodes with values greater than the node's value.
- Both the left and right subtrees are themselves binary search trees.
Below is an example of a binary search tree implementation in Go:
package main

import "fmt"

// Node is a single tree node holding an integer value.
type Node struct {
	Value int
	Left  *Node
	Right *Node
}

// BST wraps the root pointer so an empty tree is just the zero value.
type BST struct {
	Root *Node
}

// Insert adds value to the tree, preserving the BST ordering invariant.
func (bst *BST) Insert(value int) {
	bst.Root = insert(bst.Root, value)
}

func insert(node *Node, value int) *Node {
	if node == nil {
		return &Node{Value: value}
	}
	if value < node.Value {
		node.Left = insert(node.Left, value)
	} else {
		// Equal and larger values go to the right subtree.
		node.Right = insert(node.Right, value)
	}
	return node
}

// Search reports whether value is present in the tree.
func (bst *BST) Search(value int) bool {
	return search(bst.Root, value)
}

func search(node *Node, value int) bool {
	if node == nil {
		return false
	}
	if value == node.Value {
		return true
	}
	if value < node.Value {
		return search(node.Left, value)
	}
	return search(node.Right, value)
}

func main() {
	bst := &BST{}
	bst.Insert(5)
	bst.Insert(3)
	bst.Insert(7)
	bst.Insert(1)
	bst.Insert(4)
	fmt.Println(bst.Search(3)) // Output: true
	fmt.Println(bst.Search(8)) // Output: false
}
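The example above covers lookup and insertion but not removal, even though removal is one of the operations a BST supports. Below is a hedged sketch of deletion using the standard in-order-successor technique; the `Delete` and `deleteNode` names are illustrative choices, not part of the example above, and the `Node`/`BST` types are repeated so the snippet runs on its own:

```go
package main

import "fmt"

type Node struct {
	Value int
	Left  *Node
	Right *Node
}

type BST struct {
	Root *Node
}

func (bst *BST) Insert(value int) {
	bst.Root = insert(bst.Root, value)
}

func insert(node *Node, value int) *Node {
	if node == nil {
		return &Node{Value: value}
	}
	if value < node.Value {
		node.Left = insert(node.Left, value)
	} else {
		node.Right = insert(node.Right, value)
	}
	return node
}

func (bst *BST) Search(value int) bool {
	return search(bst.Root, value)
}

func search(node *Node, value int) bool {
	if node == nil {
		return false
	}
	if value == node.Value {
		return true
	}
	if value < node.Value {
		return search(node.Left, value)
	}
	return search(node.Right, value)
}

// Delete removes value from the tree, if present.
func (bst *BST) Delete(value int) {
	bst.Root = deleteNode(bst.Root, value)
}

func deleteNode(node *Node, value int) *Node {
	if node == nil {
		return nil
	}
	switch {
	case value < node.Value:
		node.Left = deleteNode(node.Left, value)
	case value > node.Value:
		node.Right = deleteNode(node.Right, value)
	default:
		// Found the node to remove.
		if node.Left == nil {
			return node.Right // zero or one child: splice it out
		}
		if node.Right == nil {
			return node.Left
		}
		// Two children: copy in the in-order successor (smallest
		// value in the right subtree), then delete that value there.
		succ := node.Right
		for succ.Left != nil {
			succ = succ.Left
		}
		node.Value = succ.Value
		node.Right = deleteNode(node.Right, succ.Value)
	}
	return node
}

func main() {
	bst := &BST{}
	for _, v := range []int{5, 3, 7, 1, 4} {
		bst.Insert(v)
	}
	bst.Delete(3)
	fmt.Println(bst.Search(3)) // Output: false (3 was removed)
	fmt.Println(bst.Search(4)) // Output: true (rest of the tree intact)
}
```

The successor trick keeps the ordering invariant without restructuring the whole tree: overwriting the deleted node's value with the smallest value from its right subtree, then removing that value from the subtree (where it has at most one child), leaves every remaining node correctly ordered.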