Add documentation and usability methods to GpuFuture

Pierre Krieger 2017-05-06 20:10:16 +02:00
parent 698e0cc9ae
commit 3213c9fef3
6 changed files with 131 additions and 15 deletions


@@ -189,8 +189,7 @@ fn main() {
     let future = future
         .then_execute(queue.clone(), cb)
         .then_swapchain_present(queue.clone(), swapchain.clone(), image_num)
-        .then_signal_fence();
-    future.flush().unwrap();
+        .then_signal_fence_and_flush().unwrap();
     submissions.push(Box::new(future) as Box<_>);

     for ev in window.window().poll_events() {


@@ -195,8 +195,7 @@ fn main() {
     let future = future
         .then_execute(queue.clone(), command_buffer)
         .then_swapchain_present(queue.clone(), swapchain.clone(), image_num)
-        .then_signal_fence();
-    future.flush().unwrap();
+        .then_signal_fence_and_flush().unwrap();
     submissions.push(Box::new(future) as Box<_>);

     for ev in window.window().poll_events() {


@@ -412,8 +412,7 @@ fn main() {
         // present command at the end of the queue. This means that it will only be presented once
         // the GPU has finished executing the command buffer that draws the triangle.
         .then_swapchain_present(queue.clone(), swapchain.clone(), image_num)
-        .then_signal_fence();
-    future.flush().unwrap();
+        .then_signal_fence_and_flush().unwrap();
     submissions.push(Box::new(future) as Box<_>);

     // Note that in more complex programs it is likely that one of `acquire_next_image`,


@@ -46,6 +46,15 @@ pub unsafe trait CommandBuffer: DeviceOwned {
     /// Executes this command buffer on a queue.
     ///
     /// This function returns an object that implements the `GpuFuture` trait. See the
     /// documentation of the `sync` module for more information.
+    ///
+    /// The command buffer is not actually executed until you call `flush()` on the object.
+    /// You are encouraged to chain together as many futures as possible before calling `flush()`,
+    /// and call `.then_signal_fence()` before doing so.
+    ///
+    /// > **Note**: In the future this function may return `-> impl GpuFuture` instead of a
+    /// > concrete type.
     ///
     /// > **Note**: This is just a shortcut for `execute_after`.
     ///
     /// # Panic
@@ -61,6 +70,15 @@ pub unsafe trait CommandBuffer: DeviceOwned {
     /// Executes the command buffer after an existing future.
     ///
     /// This function returns an object that implements the `GpuFuture` trait. See the
     /// documentation of the `sync` module for more information.
+    ///
+    /// The command buffer is not actually executed until you call `flush()` on the object.
+    /// You are encouraged to chain together as many futures as possible before calling `flush()`,
+    /// and call `.then_signal_fence()` before doing so.
+    ///
+    /// > **Note**: In the future this function may return `-> impl GpuFuture` instead of a
+    /// > concrete type.
     ///
     /// This function requires the `'static` lifetime to be on the command buffer. This is because
     /// this function returns a `CommandBufferExecFuture` whose job is to lock resources and keep
     /// them alive while they are in use by the GPU. If `'static` wasn't required, you could call
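The batching rationale in the doc comments above can be sketched with a toy model. The `Queue` and `ExecFuture` types below are invented stand-ins, not vulkano's actual API; they only illustrate that chaining executions defers work and a single `flush()` turns the whole chain into one submission:

```rust
use std::cell::Cell;

// Toy stand-in for a GPU queue that counts how many submissions it receives.
struct Queue {
    submissions: Cell<u32>,
}

// Toy stand-in for an execution future: work is queued, not yet submitted.
struct ExecFuture<'a> {
    queue: &'a Queue,
    pending: u32,
}

impl Queue {
    fn now(&self) -> ExecFuture<'_> {
        ExecFuture { queue: self, pending: 0 }
    }
}

impl<'a> ExecFuture<'a> {
    // Queues one more command buffer; nothing reaches the "GPU" yet.
    fn then_execute(mut self) -> Self {
        self.pending += 1;
        self
    }

    // A single flush submits the whole chain as one submission.
    fn flush(self) {
        if self.pending > 0 {
            self.queue.submissions.set(self.queue.submissions.get() + 1);
        }
    }
}

fn main() {
    let queue = Queue { submissions: Cell::new(0) };

    // Three chained executions, one flush: one submission instead of three.
    queue.now().then_execute().then_execute().then_execute().flush();
    assert_eq!(queue.submissions.get(), 1);
    println!("submissions: {}", queue.submissions.get());
}
```

This is why the docs encourage chaining as many futures as possible before flushing: each submission has a fixed driver cost, and chaining amortizes it.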


@@ -150,6 +150,9 @@ pub unsafe trait GpuFuture: DeviceOwned {
     }

     /// Signals a semaphore after this future. Returns another future that represents the signal.
+    ///
+    /// Call this function when you want to execute some operations on a queue and want to see the
+    /// result on another queue.
     #[inline]
     fn then_signal_semaphore(self) -> SemaphoreSignalFuture<Self> where Self: Sized {
         let device = self.device().clone();
@@ -164,7 +167,32 @@ pub unsafe trait GpuFuture: DeviceOwned {
         }
     }

+    /// Signals a semaphore after this future and flushes it. Returns another future that
+    /// represents the moment when the semaphore is signalled.
+    ///
+    /// This is just a shortcut for `then_signal_semaphore()` followed by `flush()`.
+    ///
+    /// When you want to execute some operations A on a queue and some operations B on another
+    /// queue that need to see the results of A, it can be a good idea to submit A as soon as
+    /// possible while you're preparing B.
+    ///
+    /// If you ran A and B on the same queue, you would have to decide between submitting A then
+    /// B, or A and B simultaneously. Both approaches have their trade-offs. But if A and B are
+    /// on two different queues, then you would need two submits anyway and it is always
+    /// advantageous to submit A as soon as possible.
+    #[inline]
+    fn then_signal_semaphore_and_flush(self) -> Result<SemaphoreSignalFuture<Self>, Box<Error>>
+        where Self: Sized
+    {
+        let f = self.then_signal_semaphore();
+        f.flush()?;
+        Ok(f)
+    }
+
     /// Signals a fence after this future. Returns another future that represents the signal.
+    ///
+    /// > **Note**: More often than not you want to immediately flush the future after calling this
+    /// > function. If so, consider using `then_signal_fence_and_flush`.
     #[inline]
     fn then_signal_fence(self) -> FenceSignalFuture<Self> where Self: Sized {
         let device = self.device().clone();
@@ -178,6 +206,18 @@ pub unsafe trait GpuFuture: DeviceOwned {
         }
     }

+    /// Signals a fence after this future and flushes it. Returns another future that represents
+    /// the moment when the fence is signalled.
+    ///
+    /// This is just a shortcut for `then_signal_fence()` followed by `flush()`.
+    #[inline]
+    fn then_signal_fence_and_flush(self) -> Result<FenceSignalFuture<Self>, Box<Error>>
+        where Self: Sized
+    {
+        let f = self.then_signal_fence();
+        f.flush()?;
+        Ok(f)
+    }
+
     /// Presents a swapchain image after this future.
     ///
     /// You should only ever do this indirectly after a `SwapchainAcquireFuture` of the same image,
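The `*_and_flush` helpers added here are plain combinators: wrap the previous future in a signal future, flush it, and return it only if the flush succeeded. A minimal self-contained sketch of that shape (the `GpuFutureLike`, `NowFuture`, and `FenceSignal` names below are invented for illustration, not vulkano's real types):

```rust
use std::error::Error;

// Toy stand-in for vulkano's GpuFuture: something that can be flushed
// (submitted to the "GPU") and extended with a fence signal.
trait GpuFutureLike: Sized {
    fn flush(&self) -> Result<(), Box<dyn Error>>;

    // Wraps `self` so that a fence is signalled once it completes.
    fn then_signal_fence(self) -> FenceSignal<Self> {
        FenceSignal { previous: self }
    }

    // The shortcut introduced by this commit: signal, then flush, returning
    // the future only if the flush succeeded.
    fn then_signal_fence_and_flush(self) -> Result<FenceSignal<Self>, Box<dyn Error>> {
        let f = self.then_signal_fence();
        f.flush()?;
        Ok(f)
    }
}

// A future that represents "now": flushing it is trivially successful.
struct NowFuture;

// A future that signals a fence after its predecessor finishes.
struct FenceSignal<P> {
    previous: P,
}

impl GpuFutureLike for NowFuture {
    fn flush(&self) -> Result<(), Box<dyn Error>> {
        Ok(())
    }
}

impl<P: GpuFutureLike> GpuFutureLike for FenceSignal<P> {
    fn flush(&self) -> Result<(), Box<dyn Error>> {
        self.previous.flush()
    }
}

fn main() {
    // One call replaces the old `then_signal_fence()` + `flush()` pair.
    let future = NowFuture.then_signal_fence_and_flush().unwrap();
    assert!(future.flush().is_ok());
    println!("flushed");
}
```

Returning `Result` rather than panicking lets callers decide how to handle a failed submission, which is why the examples switched to `.then_signal_fence_and_flush().unwrap()` rather than hiding the error.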


@@ -7,17 +7,78 @@
 // notice may not be copied, modified, or distributed except
 // according to those terms.

-//! Synchronization primitives for Vulkan objects.
-//!
-//! In Vulkan, you have to manually ensure two things:
-//!
-//! - That a buffer or an image are not read and written simultaneously (similarly to the CPU).
-//! - That writes to a buffer or an image are propagated to other queues by inserting memory
-//!   barriers.
-//!
-//! But don't worry ; this is automatically enforced by this library (as long as you don't use
-//! any unsafe function). See the `memory` module for more info.
+//! Synchronization on the GPU.
+//!
+//! Just like for CPU code, you have to ensure that buffers and images are not accessed mutably by
+//! multiple GPU queues simultaneously and that they are not accessed mutably by the CPU and by the
+//! GPU simultaneously.
+//!
+//! This safety is enforced at runtime by vulkano, but it is not magic and you will require some
+//! knowledge if you want to avoid errors.
+//!
+//! # Futures
+//!
+//! Whenever you want to ask the GPU to start an operation (for example executing a command
+//! buffer), you need to call a function from vulkano that returns a *future*. A future is an
+//! object that implements [the `GpuFuture` trait](trait.GpuFuture.html) and that represents the
+//! point in time when the operation is over.
+//!
+//! Futures serve several roles:
+//!
+//! - Futures can be used to build dependencies between submissions, so that you can ask that an
+//!   operation starts only after a previous operation is finished.
+//! - Submitting an operation to the GPU is a costly operation. By chaining multiple operations
+//!   with futures, you will submit them all at once instead of one by one, thereby reducing this
+//!   cost.
+//! - Futures keep alive the resources and objects used by the GPU so that they don't get
+//!   destroyed while they are still in use.
+//!
+//! The last point means that you should keep futures alive in your program for as long as their
+//! corresponding operation is potentially still being executed by the GPU. Dropping a future
+//! earlier will block the current thread until the GPU has finished the operation, which is
+//! usually not what you want.
+//!
+//! In other words, if you write a function in your program that submits an operation to the GPU,
+//! you should always make this function return the corresponding future and let the caller
+//! handle it.
+//!
+//! # Dependencies between futures
+//!
+//! Building dependencies between futures is important, as it is what *proves* to vulkano that
+//! an operation is indeed safe. For example, if you submit two operations that modify the same
+//! buffer, then you need to make sure that one of them gets executed after the other. Failing to
+//! add a dependency would mean that these two operations could potentially execute simultaneously
+//! on the GPU, which would be unsafe.
+//!
+//! Adding a dependency is done by calling one of the methods of the `GpuFuture` trait. For
+//! example, calling `prev_future.then_execute(command_buffer)` takes ownership of `prev_future`
+//! and returns a new future in which `command_buffer` starts executing after the moment
+//! corresponding to `prev_future` happens. The new future corresponds to the moment when the
+//! execution of `command_buffer` ends.
+//!
+//! ## Between different GPU queues
+//!
+//! When you want to perform an operation after another operation on two different queues, you
+//! **must** put a *semaphore* between them. Failure to do so would result in a runtime error.
+//!
+//! Adding a semaphore is as simple as replacing `prev_future.then_execute(...)` with
+//! `prev_future.then_signal_semaphore().then_execute(...)`.
+//!
+//! In practice you usually want to use `then_signal_semaphore_and_flush()` instead of
+//! `then_signal_semaphore()`, as the execution will start sooner.
+//!
+//! TODO: using semaphores to dispatch to multiple queues
+//!
+//! # Fences
+//!
+//! A `Fence` is an object that is used to signal the CPU when an operation on the GPU is
+//! finished.
+//!
+//! If you want to perform an operation on the CPU after an operation on the GPU is finished
+//! (e.g. if you want to read on the CPU data written by the GPU), then you need to ask the GPU
+//! to signal a fence after the operation and wait for that fence to be *signalled* on the CPU.
+//!
+//! TODO: talk about fence + semaphore simultaneously
+//! TODO: talk about using fences to clean up

 use std::ops;
 use std::sync::Arc;
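The cross-queue rule documented above (a semaphore is **required** between operations on different queues, and vulkano reports the omission at runtime) can be modeled with a small toy chain. The `QueueId` and `Chain` types below are invented for illustration and are not vulkano's API:

```rust
use std::error::Error;

// Toy queue identifier; stands in for a real device queue.
#[derive(Clone, Copy, PartialEq)]
struct QueueId(u32);

// Toy future chain that tracks which queue ran last and whether a
// semaphore separates it from the next submission.
struct Chain {
    last_queue: Option<QueueId>,
    semaphore_pending: bool,
}

impl Chain {
    fn new() -> Self {
        Chain { last_queue: None, semaphore_pending: false }
    }

    // Mirrors `then_signal_semaphore()`: the next submission may switch queues.
    fn then_signal_semaphore(mut self) -> Self {
        self.semaphore_pending = true;
        self
    }

    // Mirrors `then_execute(queue, ...)`: hopping to a different queue
    // without a semaphore is rejected, as described in the module docs.
    fn then_execute(mut self, queue: QueueId) -> Result<Self, Box<dyn Error>> {
        match self.last_queue {
            Some(prev) if prev != queue && !self.semaphore_pending => {
                Err("missing semaphore between queues".into())
            }
            _ => {
                self.last_queue = Some(queue);
                self.semaphore_pending = false;
                Ok(self)
            }
        }
    }
}

fn main() {
    let (gfx, transfer) = (QueueId(0), QueueId(1));

    // Same queue: no semaphore needed.
    let chain = Chain::new().then_execute(gfx).unwrap();

    // Hopping to another queue without a semaphore is rejected...
    assert!(Chain::new().then_execute(gfx).unwrap().then_execute(transfer).is_err());

    // ...but inserting one makes the dependency legal.
    assert!(chain.then_signal_semaphore().then_execute(transfer).is_ok());
    println!("ok");
}
```

The real library enforces the same idea with actual `Semaphore` objects; the toy only captures the "prove the dependency or fail at runtime" contract.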