Dynamic programming is a powerful method for solving complex problems by breaking them down into simpler, overlapping subproblems. It is often used in optimization problems where a solution can be constructed efficiently from previously computed results.

In Go, you can implement dynamic programming with either memoization (top-down: recurse and cache each result) or tabulation (bottom-up: fill a table iteratively). Below are examples of both approaches, using the Fibonacci sequence as a classic dynamic programming problem.
package main

import "fmt"

// Memoization function to calculate Fibonacci numbers
func fibMemo(n int, memo map[int]int) int {
	if n <= 1 {
		return n
	}
	if val, found := memo[n]; found {
		return val
	}
	memo[n] = fibMemo(n-1, memo) + fibMemo(n-2, memo)
	return memo[n]
}

func main() {
	n := 10
	memo := make(map[int]int)
	result := fibMemo(n, memo)
	fmt.Println("Fibonacci of", n, "is", result)
}
package main

import "fmt"

// Tabulation function to calculate Fibonacci numbers
func fibTab(n int) int {
	if n <= 1 {
		return n
	}
	fibs := make([]int, n+1)
	fibs[0] = 0
	fibs[1] = 1
	for i := 2; i <= n; i++ {
		fibs[i] = fibs[i-1] + fibs[i-2]
	}
	return fibs[n]
}

func main() {
	n := 10
	result := fibTab(n)
	fmt.Println("Fibonacci of", n, "is", result)
}
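Because the tabulation loop only ever reads the two previous entries, fibs[i-1] and fibs[i-2], the full table is not actually needed. A minimal sketch of the space-optimized variant (the function name fibIter is ours, not part of the examples above):

```go
package main

import "fmt"

// fibIter computes the nth Fibonacci number bottom-up while keeping
// only the two most recent values, reducing space from O(n) to O(1).
func fibIter(n int) int {
	if n <= 1 {
		return n
	}
	prev, curr := 0, 1
	for i := 2; i <= n; i++ {
		prev, curr = curr, prev+curr
	}
	return curr
}

func main() {
	n := 10
	fmt.Println("Fibonacci of", n, "is", fibIter(n)) // prints: Fibonacci of 10 is 55
}
```

This trade-off generalizes: whenever a tabulation recurrence depends only on a fixed window of earlier states, the table can be replaced by that window.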