The 0/1 Knapsack problem is a classic optimization problem that can be solved using dynamic programming. In this problem, you are given a set of items, each with a weight and a value, and a knapsack that can hold a maximum weight. The objective is to find the maximum value that can be put in the knapsack without exceeding its weight capacity.
Here is a C++ implementation of the 0/1 Knapsack problem:
#include <iostream>
#include <vector>
using namespace std;

// Returns the maximum value achievable with capacity W using the first n items.
int knapSack(int W, const vector<int>& weights, const vector<int>& values, int n) {
    // K[i][w] = best value obtainable using the first i items with capacity w
    vector<vector<int>> K(n + 1, vector<int>(W + 1));
    for (int i = 0; i <= n; i++) {
        for (int w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                K[i][w] = 0; // no items or no capacity: value is 0
            else if (weights[i - 1] <= w)
                // Either take item i-1 or skip it, whichever yields more value
                K[i][w] = max(values[i - 1] + K[i - 1][w - weights[i - 1]], K[i - 1][w]);
            else
                K[i][w] = K[i - 1][w]; // item i-1 doesn't fit, skip it
        }
    }
    return K[n][W];
}

int main() {
    int W = 50; // maximum weight the knapsack can hold
    vector<int> values = {60, 100, 120};
    vector<int> weights = {10, 20, 30};
    int n = values.size();
    cout << "Maximum value in Knapsack = " << knapSack(W, weights, values, n) << endl; // prints 220
    return 0;
}