The std::shared_mutex class in C++ allows multiple threads to read shared data concurrently while ensuring exclusive access for writing. This is particularly useful for read-heavy workloads: many readers can proceed in parallel, and only writers need to block everyone else.
To manage access to a shared_mutex, you can use the std::unique_lock class for exclusive (write) access and std::shared_lock for shared (read) access. Both lock and unlock the mutex in a scoped, RAII manner, which helps prevent deadlocks and ensures the mutex is properly released when the lock goes out of scope, even if an exception is thrown.
#include <iostream>
#include <shared_mutex>
#include <thread>

std::shared_mutex sharedMutex;
int sharedCounter = 0;

void readCounter() {
    std::shared_lock<std::shared_mutex> lock(sharedMutex); // Shared lock for reading
    std::cout << "Current Counter: " << sharedCounter << std::endl;
}

void incrementCounter() {
    std::unique_lock<std::shared_mutex> lock(sharedMutex); // Unique lock for writing
    ++sharedCounter;
}

int main() {
    std::thread readers[5];
    for (int i = 0; i < 5; ++i) {
        readers[i] = std::thread(readCounter);
    }
    std::thread writer(incrementCounter);
    for (int i = 0; i < 5; ++i) {
        readers[i].join();
    }
    writer.join();
    return 0;
}