When working with cancellable operations in Swift, it's essential to expose them in a way that lets callers manage their lifetime effectively. That usually means giving callers a mechanism to request cancellation and a way to handle the result afterward.
One common approach is to return a type that represents a cancellable operation, such as `AnyCancellable` from the Combine framework. Callers can store the returned value and call `cancel()` when needed; note that an `AnyCancellable` also cancels automatically when it is deallocated, so it must be retained for as long as the operation should run. You may also want to provide completion callbacks for the success and failure cases.
Here’s an example of how to expose a cancellable operation:
// Example of exposing a cancellable operation in Swift
import Combine
import Foundation

class NetworkService {
    func fetchData(completion: @escaping (Result<Data, Error>) -> Void) -> AnyCancellable {
        let publisher = URLSession.shared.dataTaskPublisher(for: URL(string: "https://api.example.com/data")!)
            .map(\.data)
            .mapError { $0 as Error }
            .eraseToAnyPublisher()

        // Returning the subscription lets the caller cancel the request.
        return publisher
            .sink(receiveCompletion: { result in
                switch result {
                case .finished:
                    break // Completed successfully; receiveValue already fired
                case .failure(let error):
                    completion(.failure(error))
                }
            }, receiveValue: { data in
                completion(.success(data))
            })
    }
}
// Usage
let service = NetworkService()
let cancellable = service.fetchData { result in
    switch result {
    case .success(let data):
        print("Received data: \(data)")
    case .failure(let error):
        print("Error: \(error)")
    }
}

// Cancel the operation if needed
cancellable.cancel()
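Because an `AnyCancellable` tears down its subscription when it is deallocated, a caller that wants the request to keep running must hold on to the returned value. A minimal sketch of the usual pattern, using Combine's `store(in:)` to keep the subscription alive for the owner's lifetime (the `DataViewModel` type and its property names are illustrative, not part of the example above):

```swift
import Combine

final class DataViewModel {
    // Cancellables stored here live as long as the view model does;
    // they are cancelled automatically when the set is deallocated.
    private var cancellables = Set<AnyCancellable>()
    private let service = NetworkService()

    func load() {
        service.fetchData { result in
            // Handle the result here.
        }
        .store(in: &cancellables) // retain the subscription
    }
}
```

This avoids the common mistake of letting the `AnyCancellable` fall out of scope immediately, which silently cancels the request before it completes.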