Merge sort is a classic divide-and-conquer algorithm that efficiently sorts an array or slice by dividing it into smaller sub-arrays, sorting those, and merging them back together. Below is an implementation of merge sort in Go.
```go
package main

import "fmt"

// merge combines two already-sorted slices into one sorted slice.
func merge(left, right []int) []int {
	result := make([]int, 0, len(left)+len(right))
	i, j := 0, 0
	// Take the smaller head element until one slice is exhausted.
	for i < len(left) && j < len(right) {
		if left[i] < right[j] {
			result = append(result, left[i])
			i++
		} else {
			result = append(result, right[j])
			j++
		}
	}
	// Append whatever remains (at most one of these is non-empty).
	result = append(result, left[i:]...)
	result = append(result, right[j:]...)
	return result
}

// mergeSort recursively splits the slice in half, sorts each half,
// and merges the sorted halves back together.
func mergeSort(slice []int) []int {
	if len(slice) < 2 {
		return slice
	}
	mid := len(slice) / 2
	left := mergeSort(slice[:mid])
	right := mergeSort(slice[mid:])
	return merge(left, right)
}

func main() {
	arr := []int{38, 27, 43, 3, 9, 82, 10}
	sortedArr := mergeSort(arr)
	fmt.Println("Sorted array:", sortedArr)
}
```