The Strategy Pattern is a behavioral design pattern that lets you select an algorithm's implementation at runtime. In embedded C++ systems, it allows an application's behavior to change dynamically without modifying its structure. The pattern defines a family of algorithms, encapsulates each one in its own class, and makes them interchangeable behind a common interface.
#include <iostream>
#include <memory>

// Strategy interface
class Strategy {
public:
    virtual ~Strategy() = default; // Virtual destructor: required for safe deletion through a base pointer
    virtual void execute() = 0;    // Pure virtual function each concrete strategy must implement
};

// Concrete Strategy A
class ConcreteStrategyA : public Strategy {
public:
    void execute() override {
        std::cout << "Executing strategy A" << std::endl;
    }
};

// Concrete Strategy B
class ConcreteStrategyB : public Strategy {
public:
    void execute() override {
        std::cout << "Executing strategy B" << std::endl;
    }
};

// Context: holds the current strategy and delegates calls to it
class Context {
private:
    std::unique_ptr<Strategy> strategy; // Owning pointer to the current strategy
public:
    explicit Context(std::unique_ptr<Strategy> strategy) : strategy(std::move(strategy)) {}
    void setStrategy(std::unique_ptr<Strategy> newStrategy) {
        strategy = std::move(newStrategy);
    }
    void executeStrategy() {
        strategy->execute();
    }
};

// Main function to demonstrate the pattern
int main() {
    Context context(std::make_unique<ConcreteStrategyA>());
    context.executeStrategy(); // Output: Executing strategy A
    context.setStrategy(std::make_unique<ConcreteStrategyB>());
    context.executeStrategy(); // Output: Executing strategy B
    return 0;
}
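On embedded targets, dynamic allocation via std::make_unique is often undesirable or unavailable. The same pattern works with a non-owning Context that borrows strategy objects placed in static or automatic storage. The sketch below illustrates this variant; the LedOn/LedOff strategy names are hypothetical placeholders, not part of the example above.

    #include <iostream>

    // Same interface shape as the example above
    class Strategy {
    public:
        virtual ~Strategy() = default;
        virtual void execute() = 0;
    };

    class LedOn : public Strategy {  // hypothetical concrete strategy
    public:
        void execute() override { std::cout << "LED on\n"; }
    };

    class LedOff : public Strategy { // hypothetical concrete strategy
    public:
        void execute() override { std::cout << "LED off\n"; }
    };

    // Context borrows the strategy instead of owning it: no heap use
    class Context {
        Strategy* strategy; // non-owning pointer; caller keeps the object alive
    public:
        explicit Context(Strategy& s) : strategy(&s) {}
        void setStrategy(Strategy& s) { strategy = &s; }
        void executeStrategy() { strategy->execute(); }
    };

    int main() {
        LedOn on;   // strategies live in automatic storage
        LedOff off;
        Context context(on);
        context.executeStrategy(); // Output: LED on
        context.setStrategy(off);
        context.executeStrategy(); // Output: LED off
        return 0;
    }

The trade-off is lifetime management: the caller must guarantee each strategy outlives the Context, which is usually easy to arrange in embedded code where objects have static storage duration.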