// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.

//! Represents an event that will happen on the GPU in the future.
//!
//! Whenever you ask the GPU to start an operation by using a function of the vulkano library (for
//! example executing a command buffer), this function will return a *future*. A future is an
//! object that implements [the `GpuFuture` trait](crate::sync::GpuFuture) and that represents the
//! point in time when this operation is over.
//!
//! No function in vulkano immediately sends an operation to the GPU (with the exception of some
//! unsafe low-level functions). Instead they return a future that is in the pending state. Before
//! the GPU actually starts doing anything, you have to *flush* the future by calling the `flush()`
//! method or one of its derivatives.
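//!
//! For example, flushing might look like this (a minimal sketch; `device`, `queue` and
//! `command_buffer` are assumed to already exist):
//!
//! ```ignore
//! let future = vulkano::sync::now(device.clone())
//!     .then_execute(queue.clone(), command_buffer)
//!     .unwrap();
//!
//! // Nothing has reached the GPU yet; `flush()` performs the actual submission.
//! future.flush().unwrap();
//! ```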
//!
//! Futures serve several roles:
//!
//! - Futures can be used to build dependencies between operations, making it possible to ensure
//!   that an operation starts only after a previous operation is finished.
//! - Submitting an operation to the GPU is a costly operation. By chaining multiple operations
//!   with futures you will submit them all at once instead of one by one, thereby reducing this
//!   cost.
//! - Futures keep alive the resources and objects used by the GPU so that they don't get destroyed
//!   while they are still in use.
//!
//! The last point means that you should keep futures alive in your program for as long as their
//! corresponding operation is potentially still being executed by the GPU. Dropping a future
//! earlier will block the current thread (after flushing, if necessary) until the GPU has finished
//! the operation, which is usually not what you want.
//!
//! If you write a function that submits an operation to the GPU in your program, you are
//! encouraged to let this function return the corresponding future and let the caller handle it.
//! This way the caller will be able to chain multiple futures together and decide when it wants to
//! keep the future alive or drop it.
//!
//! # Executing an operation after a future
//!
//! Respecting the order of operations on the GPU is important, as it is what *proves* to vulkano
//! that what you are doing is indeed safe. For example if you submit two operations that modify
//! the same buffer, then you need to execute one after the other instead of submitting them
//! independently. Failing to do so would mean that these two operations could potentially execute
//! simultaneously on the GPU, which would be unsafe.
//!
//! This is done by calling one of the methods of the `GpuFuture` trait. For example calling
//! `prev_future.then_execute(command_buffer)` takes ownership of `prev_future` and will make sure
//! to only start executing `command_buffer` after the moment corresponding to `prev_future`
//! happens. The object returned by the `then_execute` function is itself a future that corresponds
//! to the moment when the execution of `command_buffer` ends.
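//!
//! Chaining can be sketched as follows (a sketch; the queue and the hypothetical command buffers
//! `cb1` and `cb2` are assumed to already exist):
//!
//! ```ignore
//! let future = vulkano::sync::now(device.clone())
//!     .then_execute(queue.clone(), cb1)
//!     .unwrap()
//!     // `cb2` will only start executing once `cb1` is finished.
//!     .then_execute(queue.clone(), cb2)
//!     .unwrap();
//! future.flush().unwrap();
//! ```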
//!
//! ## Between two different GPU queues
//!
//! When you want to perform an operation after another operation on two different queues, you
//! **must** put a *semaphore* between them. Failure to do so would result in a runtime error.
//! Adding a semaphore is as simple as replacing `prev_future.then_execute(...)` with
//! `prev_future.then_signal_semaphore().then_execute(...)`.
//!
//! > **Note**: A common use-case is using a transfer queue (i.e. a queue that is only capable of
//! > performing transfer operations) to write data to a buffer, then read that data from the
//! > rendering queue.
//!
//! What happens when you do so is that the first queue will execute the first set of operations
//! (represented by `prev_future` in the example), then put a semaphore in the signalled state.
//! Meanwhile the second queue blocks (if necessary) until that same semaphore gets signalled, and
//! only then will execute the second set of operations.
//!
//! Since you want to avoid blocking the second queue as much as possible, you probably want to
//! flush the operation to the first queue as soon as possible. This can easily be done by calling
//! `then_signal_semaphore_and_flush()` instead of `then_signal_semaphore()`.
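//!
//! The transfer-then-render use-case above can be sketched like this (hypothetical
//! `transfer_queue`, `render_queue`, `upload_cb` and `draw_cb` are assumed to exist):
//!
//! ```ignore
//! let future = vulkano::sync::now(device.clone())
//!     .then_execute(transfer_queue.clone(), upload_cb)
//!     .unwrap()
//!     // Signal a semaphore and flush, so the transfer queue starts right away.
//!     .then_signal_semaphore_and_flush()
//!     .unwrap()
//!     // The render queue waits on the semaphore before executing `draw_cb`.
//!     .then_execute(render_queue.clone(), draw_cb)
//!     .unwrap();
//! ```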
//!
//! ## Between several different GPU queues
//!
//! The `then_signal_semaphore()` method is appropriate when you perform an operation in one queue,
//! and want to see the result in another queue. However in some situations you want to start
//! multiple operations on several different queues.
//!
//! TODO: this is not yet implemented
//!
//! # Fences
//!
//! A `Fence` is an object that is used to signal the CPU when an operation on the GPU is finished.
//!
//! Signalling a fence is done by calling `then_signal_fence()` on a future. Just like with
//! semaphores, you are encouraged to use `then_signal_fence_and_flush()` instead.
//!
//! Signalling a fence acts as a "terminator" to a chain of futures.
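//!
//! For example (a sketch; `prev_future` is assumed to be an existing future):
//!
//! ```ignore
//! let future = prev_future
//!     .then_signal_fence_and_flush()
//!     .unwrap();
//!
//! // Block the current thread until the GPU has finished every chained operation.
//! future.wait(None).unwrap();
//! ```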

pub use self::{
    fence_signal::{FenceSignalFuture, FenceSignalFutureBehavior},
    join::JoinFuture,
    now::{now, NowFuture},
    semaphore_signal::SemaphoreSignalFuture,
};
use super::{
    fence::{Fence, FenceError},
    semaphore::Semaphore,
};
use crate::{
    buffer::Buffer,
    command_buffer::{
        CommandBufferExecError, CommandBufferExecFuture, PrimaryCommandBufferAbstract,
        ResourceUseRef, SubmitInfo,
    },
    device::{DeviceOwned, Queue},
    image::{sys::Image, ImageLayout},
    memory::BindSparseInfo,
    swapchain::{self, PresentFuture, PresentInfo, Swapchain, SwapchainPresentInfo},
    DeviceSize, OomError, VulkanError,
};
use smallvec::SmallVec;
use std::{
    error::Error,
    fmt::{Display, Error as FmtError, Formatter},
    ops::Range,
    sync::Arc,
};

mod fence_signal;
mod join;
mod now;
mod semaphore_signal;

/// Represents an event that will happen on the GPU in the future.
///
/// See the documentation of the `sync` module for explanations about futures.
// TODO: consider switching all methods to take `&mut self` for optimization purposes
pub unsafe trait GpuFuture: DeviceOwned {
    /// If possible, checks whether the submission has finished. If so, gives up ownership of the
    /// resources used by these submissions.
    ///
    /// It is highly recommended to call `cleanup_finished` from time to time. Doing so will
    /// prevent memory usage from increasing over time, and will also destroy the locks on
    /// resources used by the GPU.
    fn cleanup_finished(&mut self);

    /// Builds a submission that, if submitted, makes sure that the event represented by this
    /// `GpuFuture` will happen, and possibly contains extra elements (e.g. a semaphore wait or an
    /// event wait) that make the dependency with subsequent operations work.
    ///
    /// It is the responsibility of the caller to ensure that the submission is going to be
    /// submitted only once. However keep in mind that this function can perfectly well be called
    /// multiple times (as long as the returned object is only submitted once).
    /// Also note that calling `flush()` on the future may change the value returned by
    /// `build_submission()`.
    ///
    /// It is however the responsibility of the implementation to not return the same submission
    /// from multiple different future objects. For example if you implement `GpuFuture` on
    /// `Arc<Foo>` then `build_submission()` must always return `SubmitAnyBuilder::Empty`,
    /// otherwise it would be possible for the user to clone the `Arc` and make the same
    /// submission be submitted multiple times.
    ///
    /// It is also the responsibility of the implementation to ensure that it works if you call
    /// `build_submission()` and submit the returned value without calling `flush()` first. In
    /// other words, `build_submission()` should perform an implicit flush if necessary.
    ///
    /// Once the caller has submitted the submission and has determined that the GPU has finished
    /// executing it, it should call `signal_finished`. Failure to do so will incur a large runtime
    /// overhead, as the future will have to block to make sure that it is finished.
    unsafe fn build_submission(&self) -> Result<SubmitAnyBuilder, FlushError>;

    /// Flushes the future and submits to the GPU the actions that will permit this future to
    /// occur.
    ///
    /// The implementation must remember that it was flushed. If the function is called multiple
    /// times, only the first time must result in a flush.
    fn flush(&self) -> Result<(), FlushError>;

    /// Sets the future to its "complete" state, meaning that it can safely be destroyed.
    ///
    /// This must only be done if you called `build_submission()`, submitted the returned
    /// submission, and determined that it was finished.
    ///
    /// The implementation must be aware that this function can be called multiple times on the
    /// same future.
    unsafe fn signal_finished(&self);

    /// Returns the queue that triggers the event. Returns `None` if unknown or irrelevant.
    ///
    /// If this function returns `None` and `queue_change_allowed` returns `false`, then a panic
    /// is likely to occur if you use this future. This is only a problem if you implement
    /// the `GpuFuture` trait yourself for a type outside of vulkano.
    fn queue(&self) -> Option<Arc<Queue>>;

    /// Returns `true` if elements submitted after this future can be submitted to a different
    /// queue than the one returned by `queue()`.
    fn queue_change_allowed(&self) -> bool;

    /// Checks whether submitting something after this future grants access (exclusive or shared,
    /// depending on the parameter) to the given buffer on the given queue.
    ///
    /// > **Note**: Returning `Ok` means "access granted", while returning `Err` means
    /// > "don't know". Therefore returning `Err` is never unsafe.
    fn check_buffer_access(
        &self,
        buffer: &Buffer,
        range: Range<DeviceSize>,
        exclusive: bool,
        queue: &Queue,
    ) -> Result<(), AccessCheckError>;

    /// Checks whether submitting something after this future grants access (exclusive or shared,
    /// depending on the parameter) to the given image on the given queue.
    ///
    /// Implementations must ensure that the image is in the given layout. However if
    /// `expected_layout` is `Undefined`, then the implementation should accept any actual layout.
    ///
    /// > **Note**: Returning `Ok` means "access granted", while returning `Err` means
    /// > "don't know". Therefore returning `Err` is never unsafe.
    ///
    /// > **Note**: Keep in mind that changing the layout of an image also requires exclusive
    /// > access.
    fn check_image_access(
        &self,
        image: &Image,
        range: Range<DeviceSize>,
        exclusive: bool,
        expected_layout: ImageLayout,
        queue: &Queue,
    ) -> Result<(), AccessCheckError>;

    /// Checks whether accessing a swapchain image is permitted.
    ///
    /// > **Note**: Setting `before` to `true` should skip checking the current future and always
    /// > forward the call to the future before.
    fn check_swapchain_image_acquired(
        &self,
        swapchain: &Swapchain,
        image_index: u32,
        before: bool,
    ) -> Result<(), AccessCheckError>;

    /// Joins this future with another one, representing the moment when both events have happened.
    // TODO: handle errors
    fn join<F>(self, other: F) -> JoinFuture<Self, F>
    where
        Self: Sized,
        F: GpuFuture,
    {
        join::join(self, other)
    }

    /// Executes a command buffer after this future.
    ///
    /// > **Note**: This is just a shortcut function. The actual implementation is in the
    /// > `CommandBuffer` trait.
    fn then_execute<Cb>(
        self,
        queue: Arc<Queue>,
        command_buffer: Cb,
    ) -> Result<CommandBufferExecFuture<Self>, CommandBufferExecError>
    where
        Self: Sized,
        Cb: PrimaryCommandBufferAbstract + 'static,
    {
        command_buffer.execute_after(self, queue)
    }

    /// Executes a command buffer after this future, on the same queue as the future.
    ///
    /// > **Note**: This is just a shortcut function. The actual implementation is in the
    /// > `CommandBuffer` trait.
    fn then_execute_same_queue<Cb>(
        self,
        command_buffer: Cb,
    ) -> Result<CommandBufferExecFuture<Self>, CommandBufferExecError>
    where
        Self: Sized,
        Cb: PrimaryCommandBufferAbstract + 'static,
    {
        let queue = self.queue().unwrap();
        command_buffer.execute_after(self, queue)
    }

    /// Signals a semaphore after this future. Returns another future that represents the signal.
    ///
    /// Call this function when you want to execute some operations on a queue and want to see the
    /// result on another queue.
    #[inline]
    fn then_signal_semaphore(self) -> SemaphoreSignalFuture<Self>
    where
        Self: Sized,
    {
        semaphore_signal::then_signal_semaphore(self)
    }

    /// Signals a semaphore after this future and flushes it. Returns another future that
    /// represents the moment when the semaphore is signalled.
    ///
    /// This is just a shortcut for `then_signal_semaphore()` followed by `flush()`.
    ///
    /// When you want to execute some operations A on a queue and some operations B on another
    /// queue that need to see the results of A, it can be a good idea to submit A as soon as
    /// possible while you're preparing B.
    ///
    /// If you ran A and B on the same queue, you would have to decide between submitting A then
    /// B, or A and B simultaneously. Both approaches have their trade-offs. But if A and B are
    /// on two different queues, then you would need two submits anyway and it is always
    /// advantageous to submit A as soon as possible.
    #[inline]
    fn then_signal_semaphore_and_flush(self) -> Result<SemaphoreSignalFuture<Self>, FlushError>
    where
        Self: Sized,
    {
        let f = self.then_signal_semaphore();
        f.flush()?;

        Ok(f)
    }

    /// Signals a fence after this future. Returns another future that represents the signal.
    ///
    /// > **Note**: More often than not you want to immediately flush the future after calling this
    /// > function. If so, consider using `then_signal_fence_and_flush`.
    #[inline]
    fn then_signal_fence(self) -> FenceSignalFuture<Self>
    where
        Self: Sized,
    {
        fence_signal::then_signal_fence(self, FenceSignalFutureBehavior::Continue)
    }

    /// Signals a fence after this future. Returns another future that represents the signal.
    ///
    /// This is just a shortcut for `then_signal_fence()` followed by `flush()`.
    #[inline]
    fn then_signal_fence_and_flush(self) -> Result<FenceSignalFuture<Self>, FlushError>
    where
        Self: Sized,
    {
        let f = self.then_signal_fence();
        f.flush()?;

        Ok(f)
    }

    /// Presents a swapchain image after this future.
    ///
    /// You should only ever do this indirectly after a `SwapchainAcquireFuture` of the same image,
    /// otherwise an error will occur when flushing.
    ///
    /// > **Note**: This is just a shortcut for the `Swapchain::present()` function.
    #[inline]
    fn then_swapchain_present(
        self,
        queue: Arc<Queue>,
        swapchain_info: SwapchainPresentInfo,
    ) -> PresentFuture<Self>
    where
        Self: Sized,
    {
        swapchain::present(self, queue, swapchain_info)
    }

    /// Turns the current future into a `Box<dyn GpuFuture>`.
    ///
    /// This is a helper function that calls `Box::new(your_future) as Box<dyn GpuFuture>`.
    #[inline]
    fn boxed(self) -> Box<dyn GpuFuture>
    where
        Self: Sized + 'static,
    {
        Box::new(self) as _
    }

    /// Turns the current future into a `Box<dyn GpuFuture + Send>`.
    ///
    /// This is a helper function that calls `Box::new(your_future) as Box<dyn GpuFuture + Send>`.
    #[inline]
    fn boxed_send(self) -> Box<dyn GpuFuture + Send>
    where
        Self: Sized + Send + 'static,
    {
        Box::new(self) as _
    }

    /// Turns the current future into a `Box<dyn GpuFuture + Sync>`.
    ///
    /// This is a helper function that calls `Box::new(your_future) as Box<dyn GpuFuture + Sync>`.
    #[inline]
    fn boxed_sync(self) -> Box<dyn GpuFuture + Sync>
    where
        Self: Sized + Sync + 'static,
    {
        Box::new(self) as _
    }

    /// Turns the current future into a `Box<dyn GpuFuture + Send + Sync>`.
    ///
    /// This is a helper function that calls `Box::new(your_future) as Box<dyn GpuFuture + Send +
    /// Sync>`.
    #[inline]
    fn boxed_send_sync(self) -> Box<dyn GpuFuture + Send + Sync>
    where
        Self: Sized + Send + Sync + 'static,
    {
        Box::new(self) as _
    }
}

unsafe impl<F: ?Sized> GpuFuture for Box<F>
where
    F: GpuFuture,
{
    fn cleanup_finished(&mut self) {
        (**self).cleanup_finished()
    }

    unsafe fn build_submission(&self) -> Result<SubmitAnyBuilder, FlushError> {
        (**self).build_submission()
    }

    fn flush(&self) -> Result<(), FlushError> {
        (**self).flush()
    }

    unsafe fn signal_finished(&self) {
        (**self).signal_finished()
    }

    fn queue_change_allowed(&self) -> bool {
        (**self).queue_change_allowed()
    }

    fn queue(&self) -> Option<Arc<Queue>> {
        (**self).queue()
    }

    fn check_buffer_access(
        &self,
        buffer: &Buffer,
        range: Range<DeviceSize>,
        exclusive: bool,
        queue: &Queue,
    ) -> Result<(), AccessCheckError> {
        (**self).check_buffer_access(buffer, range, exclusive, queue)
    }

    fn check_image_access(
        &self,
        image: &Image,
        range: Range<DeviceSize>,
        exclusive: bool,
        expected_layout: ImageLayout,
        queue: &Queue,
    ) -> Result<(), AccessCheckError> {
        (**self).check_image_access(image, range, exclusive, expected_layout, queue)
    }

    #[inline]
    fn check_swapchain_image_acquired(
        &self,
        swapchain: &Swapchain,
        image_index: u32,
        before: bool,
    ) -> Result<(), AccessCheckError> {
        (**self).check_swapchain_image_acquired(swapchain, image_index, before)
    }
}

/// Contains all the possible submission builders.
#[derive(Debug)]
pub enum SubmitAnyBuilder {
    Empty,
    SemaphoresWait(SmallVec<[Arc<Semaphore>; 8]>),
    CommandBuffer(SubmitInfo, Option<Arc<Fence>>),
    QueuePresent(PresentInfo),
    BindSparse(SmallVec<[BindSparseInfo; 1]>, Option<Arc<Fence>>),
}

impl SubmitAnyBuilder {
    /// Returns true if equal to `SubmitAnyBuilder::Empty`.
    #[inline]
    pub fn is_empty(&self) -> bool {
        matches!(self, SubmitAnyBuilder::Empty)
    }
}

/// Access to a resource was denied.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum AccessError {
    /// Exclusive access is denied.
    ExclusiveDenied,

    /// The resource is already in use, and there is no tracking of concurrent usages.
    AlreadyInUse,

    /// The image is in a layout that differs from the one that is allowed.
    UnexpectedImageLayout {
        /// The layout that the image is allowed to be in.
        allowed: ImageLayout,
        /// The layout that was requested for the image.
        requested: ImageLayout,
    },

    /// Trying to use an image without transitioning it from the "undefined" or "preinitialized"
    /// layouts first.
    ImageNotInitialized {
        /// The layout that was requested for the image.
        requested: ImageLayout,
    },

    /// Trying to use a buffer that still contains garbage data.
    BufferNotInitialized,

    /// Trying to use a swapchain image without depending on a corresponding acquire image future.
    SwapchainImageNotAcquired,
}

impl Error for AccessError {}

impl Display for AccessError {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), FmtError> {
        write!(
            f,
            "{}",
            match self {
                AccessError::ExclusiveDenied => "only shared access is allowed for this resource",
                AccessError::AlreadyInUse => {
                    "the resource is already in use, and there is no tracking of concurrent usages"
                }
                AccessError::UnexpectedImageLayout { .. } => {
                    "the image is in a layout that differs from the one that is allowed"
                }
                AccessError::ImageNotInitialized { .. } => {
                    "trying to use an image without transitioning it from the undefined or \
                    preinitialized layouts first"
                }
                AccessError::BufferNotInitialized => {
                    "trying to use a buffer that still contains garbage data"
                }
                AccessError::SwapchainImageNotAcquired => {
                    "trying to use a swapchain image without depending on a corresponding acquire \
                    image future"
                }
            }
        )
    }
}

/// Error that can happen when checking whether we have access to a resource.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum AccessCheckError {
    /// Access to the resource has been denied.
    Denied(AccessError),
    /// The resource is unknown, therefore we cannot possibly answer whether we have access or not.
    Unknown,
}

impl Error for AccessCheckError {}

impl Display for AccessCheckError {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), FmtError> {
        write!(
            f,
            "{}",
            match self {
                AccessCheckError::Denied(_) => "access to the resource has been denied",
                AccessCheckError::Unknown => "the resource is unknown",
            }
        )
    }
}

impl From<AccessError> for AccessCheckError {
    fn from(err: AccessError) -> AccessCheckError {
        AccessCheckError::Denied(err)
    }
}

/// Error that can happen when flushing a future.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum FlushError {
    /// Access to a resource has been denied.
    AccessError(AccessError),

    /// Not enough memory.
    OomError(OomError),

    /// The connection to the device has been lost.
    DeviceLost,

    /// The surface is no longer accessible and must be recreated.
    SurfaceLost,

    /// The surface has changed in a way that makes the swapchain unusable. You must query the
    /// surface's new properties and recreate a new swapchain if you want to continue drawing.
    OutOfDate,

    /// The swapchain has lost or doesn't have full screen exclusivity, possibly for
    /// implementation-specific reasons outside of the application's control.
    FullScreenExclusiveModeLost,

    /// The flush operation needed to block, but the timeout has elapsed.
    Timeout,

    /// A non-zero `present_id` must be greater than any non-zero `present_id` passed previously
    /// for the same swapchain.
    PresentIdLessThanOrEqual,

    /// Access to a resource has been denied.
    ResourceAccessError {
        error: AccessError,
        use_ref: Option<ResourceUseRef>,
    },

    /// The command buffer or one of the secondary command buffers it executes was created with the
    /// "one time submit" flag, but has already been submitted in the past.
    OneTimeSubmitAlreadySubmitted,

    /// The command buffer or one of the secondary command buffers it executes is already in use by
    /// the GPU and was not created with the "concurrent" flag.
    ExclusiveAlreadyInUse,
}

impl Error for FlushError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            FlushError::AccessError(err) => Some(err),
            FlushError::OomError(err) => Some(err),
            FlushError::ResourceAccessError { error, .. } => Some(error),
            _ => None,
        }
    }
}

impl Display for FlushError {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), FmtError> {
        write!(
            f,
            "{}",
            match self {
                FlushError::AccessError(_) => "access to a resource has been denied",
                FlushError::OomError(_) => "not enough memory",
                FlushError::DeviceLost => "the connection to the device has been lost",
                FlushError::SurfaceLost => "the surface of this swapchain is no longer valid",
                FlushError::OutOfDate => "the swapchain needs to be recreated",
                FlushError::FullScreenExclusiveModeLost => {
                    "the swapchain no longer has full screen exclusivity"
                }
                FlushError::Timeout => {
                    "the flush operation needed to block, but the timeout has elapsed"
                }
                FlushError::PresentIdLessThanOrEqual => {
                    "present id is less than or equal to previous"
                }
                FlushError::ResourceAccessError { .. } => "access to a resource has been denied",
                FlushError::OneTimeSubmitAlreadySubmitted => {
                    "the command buffer or one of the secondary command buffers it executes was \
                    created with the \"one time submit\" flag, but has already been submitted in \
                    the past"
                }
                FlushError::ExclusiveAlreadyInUse => {
                    "the command buffer or one of the secondary command buffers it executes is \
                    already in use and was not created with the \"concurrent\" flag"
                }
            }
        )
    }
}

impl From<AccessError> for FlushError {
    fn from(err: AccessError) -> FlushError {
        FlushError::AccessError(err)
    }
}

impl From<VulkanError> for FlushError {
    fn from(err: VulkanError) -> Self {
        match err {
            VulkanError::OutOfHostMemory | VulkanError::OutOfDeviceMemory => {
                Self::OomError(err.into())
            }
            VulkanError::DeviceLost => Self::DeviceLost,
            VulkanError::SurfaceLost => Self::SurfaceLost,
            VulkanError::OutOfDate => Self::OutOfDate,
            VulkanError::FullScreenExclusiveModeLost => Self::FullScreenExclusiveModeLost,
            _ => panic!("unexpected error: {:?}", err),
        }
    }
}

impl From<FenceError> for FlushError {
    fn from(err: FenceError) -> FlushError {
        match err {
            FenceError::OomError(err) => FlushError::OomError(err),
            FenceError::Timeout => FlushError::Timeout,
            FenceError::DeviceLost => FlushError::DeviceLost,
            _ => unreachable!(),
        }
    }
}