In Java, managing timeouts and retries is crucial for handling network failures and slow responses gracefully. The `java.util.concurrent` package provides the building blocks for this: an `ExecutorService` runs the task on a worker thread, and the returned `Future` lets you bound how long you wait for the result and retry when that bound is exceeded.
The following example submits a task to an `ExecutorService`, waits on the `Future` with a timeout, and retries up to a fixed number of times.
```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutRetryExample {
    private static final int MAX_RETRIES = 3;
    private static final long TIMEOUT_SECONDS = 5;

    public static void main(String[] args) {
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        try {
            String result = executeWithRetries(executorService, TimeoutRetryExample::task);
            System.out.println("Result: " + result);
        } finally {
            executorService.shutdown();
        }
    }

    private static String executeWithRetries(ExecutorService executorService, Callable<String> task) {
        for (int i = 0; i < MAX_RETRIES; i++) {
            Future<String> future = executorService.submit(task);
            try {
                return future.get(TIMEOUT_SECONDS, TimeUnit.SECONDS);
            } catch (TimeoutException e) {
                // Without this cancel, the timed-out task would keep occupying the
                // single worker thread and the retry would just queue behind it.
                future.cancel(true);
                System.out.println("Timeout on attempt " + (i + 1) + ", retrying...");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the interrupt status
                throw new RuntimeException("Interrupted while waiting for task", e);
            } catch (ExecutionException e) {
                System.out.println("Task failed on attempt " + (i + 1) + ": " + e.getCause());
            }
        }
        throw new RuntimeException("Failed after " + MAX_RETRIES + " attempts");
    }

    private static String task() throws InterruptedException {
        Thread.sleep(6000); // simulate a long-running task (longer than the timeout)
        return "Task Completed";
    }
}
```
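In practice, retries are usually spaced out rather than fired immediately, so a struggling service gets time to recover. The sketch below extends the same `ExecutorService`/`Future` pattern with exponential backoff between attempts; the class name `BackoffRetryExample`, the helper `retryWithBackoff`, and its parameters are illustrative choices, not part of the example above.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BackoffRetryExample {
    // Hypothetical helper: run a Callable with a per-attempt timeout and
    // exponentially increasing delays between retries.
    static <T> T retryWithBackoff(ExecutorService pool, Callable<T> task,
                                  int maxRetries, long timeoutMs, long baseDelayMs)
            throws Exception {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            Future<T> future = pool.submit(task);
            try {
                return future.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                future.cancel(true); // free the worker thread before retrying
                long delay = baseDelayMs << attempt; // e.g. 100 ms, 200 ms, 400 ms, ...
                Thread.sleep(delay);
            }
        }
        throw new RuntimeException("Failed after " + maxRetries + " attempts");
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // A task that finishes immediately, so the first attempt succeeds.
            String result = retryWithBackoff(pool, () -> "ok", 3, 500, 100);
            System.out.println(result); // prints "ok"
        } finally {
            pool.shutdown();
        }
    }
}
```

A capped delay (and some random jitter) is often added on top of this so many clients retrying at once do not hammer the service in lockstep.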