# oneshot

Oneshot SPSC (single producer, single consumer) channel. Each channel instance can
transport only a single message. This has a few nice outcomes. One is that the
implementation can be very efficient, since it knows there will only ever be one
message. More importantly, it allows the API to be expressed in such a way that
certain edge cases you don't want to care about when sending a single message simply
do not exist. For example, the sender can't be copied or cloned, and the `send` method
takes ownership and consumes the sender, so you are guaranteed, at the type level,
that there can only be one message sent.

The sender's `send` method is non-blocking, and potentially lock- and wait-free.
See the documentation on [Sender::send] for situations where it might not be fully wait-free.
The receiver supports both a lock- and wait-free `try_recv` as well as indefinite and
time-limited thread-blocking receive operations. The receiver also implements `Future`
and supports asynchronously awaiting the message.

## Examples

This example sets up a background worker that processes requests coming in on a standard
mpsc channel and replies on a oneshot channel provided with each request. The worker can
be used from both sync and async contexts, since the oneshot receiver can receive both
by blocking and by awaiting.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

type Request = String;

// Starts a background thread performing some computation on requests sent to it.
// Delivers the response back over a oneshot channel.
fn spawn_processing_thread() -> mpsc::Sender<(Request, oneshot::Sender<usize>)> {
    let (request_sender, request_receiver) = mpsc::channel::<(Request, oneshot::Sender<usize>)>();
    thread::spawn(move || {
        for (request_data, response_sender) in request_receiver.iter() {
            let compute_operation = || request_data.len();
            let _ = response_sender.send(compute_operation()); // <- Send on the oneshot channel
        }
    });
    request_sender
}

let processor = spawn_processing_thread();

// If compiled with the `std` feature, the library can receive messages with a timeout on regular threads
#[cfg(feature = "std")] {
    let (response_sender, response_receiver) = oneshot::channel();
    let request = Request::from("data from sync thread");

    processor.send((request, response_sender)).expect("Processor down");
    match response_receiver.recv_timeout(Duration::from_secs(1)) { // <- Receive on the oneshot channel
        Ok(result) => println!("Processor returned {}", result),
        Err(oneshot::RecvTimeoutError::Timeout) => eprintln!("Processor was too slow"),
        Err(oneshot::RecvTimeoutError::Disconnected) => panic!("Processor exited"),
    }
}

// If compiled with the `async` feature, the `Receiver` can be awaited in an async context
#[cfg(feature = "async")] {
    tokio::runtime::Runtime::new()
        .unwrap()
        .block_on(async move {
            let (response_sender, response_receiver) = oneshot::channel();
            let request = Request::from("data from async task");

            processor.send((request, response_sender)).expect("Processor down");
            match response_receiver.await { // <- Receive on the oneshot channel asynchronously
                Ok(result) => println!("Processor returned {}", result),
                Err(_e) => panic!("Processor exited"),
            }
        });
}
```

## Sync vs async

The main motivation for writing this library was that there was no channel implementation
(known to me) that allowed seamlessly sending messages between a normal thread and an async
task, or the other way around. If message passing is the way you are communicating, it
should of course work smoothly between the sync and async parts of the program!

This library achieves that by having a fast and cheap send operation that can be used in
both sync threads and async tasks. The receiver has thread-blocking receive methods for
synchronous usage, and implements `Future` for asynchronous usage.

The receiving endpoint of this channel implements Rust's `Future` trait and can be awaited
in an asynchronous task. The implementation is completely executor/runtime agnostic; it
should be possible to use this library with any executor.
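
The example above sends from a regular thread. The opposite direction works as well: the
following is a minimal sketch of sending from an async task and receiving by blocking a
plain thread. It assumes the crate's `std` and `async` features are enabled and uses Tokio
purely as an example runtime; the channel itself does not depend on any particular executor.

```rust
use std::thread;

// A plain thread that blocks on the oneshot receiver.
let (sender, receiver) = oneshot::channel::<u64>();
let waiter = thread::spawn(move || {
    // `recv` blocks this regular thread until the async task below has sent.
    receiver.recv().expect("sender was dropped without sending")
});

// Send from inside an async task. `send` never blocks, so there is nothing to await.
tokio::runtime::Runtime::new()
    .unwrap()
    .block_on(async move {
        let _ = sender.send(42);
    });

assert_eq!(waiter.join().unwrap(), 42);
```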

License: MIT OR Apache-2.0