CountDownLatch is a synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes. This is useful when you need certain tasks to finish before proceeding. The following example shows a typical usage pattern, along with some best practices for using CountDownLatch in Java:
// Import necessary classes
import java.util.concurrent.CountDownLatch;

public class CountDownLatchExample {
    public static void main(String[] args) {
        // Create a CountDownLatch initialized to 3
        final CountDownLatch latch = new CountDownLatch(3);

        // Create and start three worker threads
        for (int i = 0; i < 3; i++) {
            new Thread(new Worker(latch)).start();
        }

        // Wait for all workers to finish
        try {
            latch.await();
        } catch (InterruptedException e) {
            // Restore the interrupt status instead of swallowing it
            Thread.currentThread().interrupt();
        }
        System.out.println("All workers have finished!");
    }

    static class Worker implements Runnable {
        private final CountDownLatch latch;

        Worker(CountDownLatch latch) {
            this.latch = latch;
        }

        @Override
        public void run() {
            try {
                // Simulate work
                System.out.println(Thread.currentThread().getName() + " is working...");
                Thread.sleep((long) (Math.random() * 1000));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                // Decrement the latch count even if the work was interrupted,
                // so the waiting thread is never blocked forever
                latch.countDown();
                System.out.println(Thread.currentThread().getName() + " finished working.");
            }
        }
    }
}
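One best practice worth highlighting: a plain `await()` blocks forever if a worker crashes before reaching `countDown()`. The overload `await(long, TimeUnit)` returns `false` on timeout instead, letting the caller recover. Here is a minimal sketch of that pattern; the class name `LatchTimeoutExample` and the chosen timeout are illustrative, not from the original article.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchTimeoutExample {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch latch = new CountDownLatch(2);

        for (int i = 0; i < 2; i++) {
            new Thread(() -> {
                try {
                    Thread.sleep(100); // simulate a short unit of work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    latch.countDown(); // always count down, even on failure paths
                }
            }).start();
        }

        // Wait at most 2 seconds instead of blocking indefinitely;
        // returns true if the count reached zero, false on timeout
        boolean finished = latch.await(2, TimeUnit.SECONDS);
        System.out.println(finished
                ? "All workers finished in time."
                : "Timed out waiting for workers.");
    }
}
```

Because the latch count can never be reset, a timed-out caller should treat the operation as failed rather than retry `await()` on the same latch; create a fresh CountDownLatch for each round of work.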