The Strategy Pattern is a behavioral design pattern that lets a program select an algorithm at runtime. In a web server, it can be used to swap in different request-handling strategies without changing the server's core logic.
The example below implements the pattern in C++: a simple server handles GET and POST requests through interchangeable handler strategies.
#include <iostream>
#include <memory>

// Strategy interface
class RequestHandler {
public:
    virtual ~RequestHandler() = default; // virtual destructor: safe deletion through a base pointer
    virtual void handleRequest() = 0;
};

// Concrete Strategy for handling GET requests
class GetRequestHandler : public RequestHandler {
public:
    void handleRequest() override {
        std::cout << "Handling GET request." << std::endl;
    }
};

// Concrete Strategy for handling POST requests
class PostRequestHandler : public RequestHandler {
public:
    void handleRequest() override {
        std::cout << "Handling POST request." << std::endl;
    }
};

// Context that uses the Strategy
class Server {
private:
    std::unique_ptr<RequestHandler> handler;

public:
    void setHandler(std::unique_ptr<RequestHandler> newHandler) {
        handler = std::move(newHandler);
    }

    void processRequest() {
        if (handler) {
            handler->handleRequest();
        } else {
            std::cout << "No handler set for this request." << std::endl;
        }
    }
};

int main() {
    Server server;

    // Set to handle GET requests
    server.setHandler(std::make_unique<GetRequestHandler>());
    server.processRequest();

    // Change to handle POST requests
    server.setHandler(std::make_unique<PostRequestHandler>());
    server.processRequest();

    return 0;
}