Merge branch 'master' into moltenvk

Pierre Krieger 2017-05-16 13:01:40 +02:00
commit 0dddd6f7e5
169 changed files with 20096 additions and 13094 deletions

2
.gitignore vendored

@@ -1,3 +1,3 @@
target
Cargo.lock
.cargo
.cargo


@@ -21,6 +21,7 @@ addons:
script:
- cargo test -v --manifest-path glsl-to-spirv/Cargo.toml
- cargo test -v --manifest-path vulkano-shaders/Cargo.toml
- cargo test -v --manifest-path vulkano-shader-derive/Cargo.toml
# We run the projects that depend on vulkano with `-j 1`, otherwise we
# risk hitting travis' memory limit
- cargo test -j 1 --manifest-path vulkano-win/Cargo.toml
@@ -49,3 +50,7 @@ after_success:
[ $TRAVIS_BRANCH = master ] &&
[ $TRAVIS_PULL_REQUEST = false ] &&
cargo publish --token ${CRATESIO_TOKEN} --manifest-path vulkano-shaders/Cargo.toml
- |
[ $TRAVIS_BRANCH = master ] &&
[ $TRAVIS_PULL_REQUEST = false ] &&
cargo publish --token ${CRATESIO_TOKEN} --manifest-path vulkano-shader-derive/Cargo.toml


@@ -5,5 +5,6 @@ members = [
"vk-sys",
"vulkano",
"vulkano-shaders",
"vulkano-shader-derive",
"vulkano-win"
]

270
DESIGN.md Normal file

@@ -0,0 +1,270 @@
This document describes the global design decisions behind the vulkano library. It can also be a
good starting point if you want to contribute to the internals of vulkano and don't know how they
work. The document assumes that you're already familiar with Vulkan and does not introduce its
various concepts, although it can still be a good read if you are not.
If you notice a mistake or want to suggest an improvement, feel free to open a PR.
# The three kinds of objects
Vulkano provides wrappers around all objects of the Vulkan API. However, these objects are split
into three categories, depending on their access pattern:
- Objects that are not created often and in very small numbers.
- Objects that are typically created at initialization and which are often accessed without mutation
by performance-critical code.
- Objects that are created, destroyed or modified during performance-critical code, and that
usually require a synchronization strategy to avoid race conditions.
The first category consists of objects that are created rarely and in very small numbers:
Instances, Devices, Surfaces, Swapchains. In a typical application each of these objects is only
created once and destroyed when the application exits. Vulkano's API provides a struct that
corresponds to each of these objects, and this struct is typically wrapped in an `Arc`.
Their `new` method in fact returns an `Arc<T>` instead of just a `T` in order to encourage users to
use `Arc`s. You use these objects by cloning them around like you would use objects in a
garbage-collected language such as Java.
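As a rough sketch of this pattern, with a made-up `Device` stand-in (vulkano's real constructors take many more parameters):

```rust
use std::sync::Arc;

// Hypothetical stand-in for a first-category object such as a Device.
struct Device { /* the raw Vulkan handle would live here */ }

impl Device {
    // Returning `Arc<Device>` directly encourages callers to share the
    // object by cloning the `Arc`.
    fn new() -> Arc<Device> {
        Arc::new(Device {})
    }
}

fn main() {
    let device = Device::new();
    // Cloning the `Arc` is a cheap reference-count bump; both handles
    // refer to the same device, like references in a GC'd language.
    let device2 = device.clone();
    drop(device);
    // The device stays alive until the last clone is dropped.
    let _ = device2;
}
```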
The second category consists of objects like the GraphicsPipeline, ComputePipeline, PipelineLayout,
RenderPass and Framebuffer. They are usually created at initialization and don't perform any
operations themselves, but they describe to the Vulkan implementation operations that we are going
to perform and are thus frequently accessed in order to determine whether the operation that the
vulkano user requested is compliant with what was described. Just like the first category, each of
these objects has a struct that corresponds to them, but in order to make these checks as fast as
possible these structs have a template parameter that describes in a strongly-typed fashion the
operation on the CPU side. This makes it possible to move many checks to compile-time instead of
runtime. More information in another section of this document.
The third category consists of objects like CommandBuffers, CommandPools, DescriptorSets,
DescriptorPools, Buffers, Images, and memory pools (although a memory pool is not technically a
Vulkan object). The way they are implemented has a huge impact on the performance of the
application. Unlike the first two categories, each of these objects is represented in vulkano by
an unsafe trait (and not by a struct) that can be freely implemented by the user if they wish.
Vulkano provides unsafe structs
such as `UnsafeBuffer`, `UnsafeImage`, etc. which have zero overhead and do not perform any safety
checks, and are the tools used by the safe implementations of the traits. Vulkano also provides
some safe implementations for convenience such as `CpuAccessibleBuffer` or `AttachmentImage`.
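A minimal sketch of this layering, assuming made-up types and a heavily simplified trait (vulkano's real `Buffer` trait has many more methods):

```rust
// Zero-overhead wrapper with no safety checks; callers must uphold
// Vulkan's rules themselves, hence the `unsafe fn`.
struct UnsafeBuffer {
    size: usize, // the raw VkBuffer handle would also live here
}

impl UnsafeBuffer {
    unsafe fn new(size: usize) -> UnsafeBuffer {
        UnsafeBuffer { size }
    }
}

// `unsafe trait`: implementing it is a promise that the implementation
// correctly enforces the synchronization rules.
unsafe trait Buffer {
    fn inner(&self) -> &UnsafeBuffer;
    fn size(&self) -> usize {
        self.inner().size
    }
}

// A safe implementation built on top of the unsafe primitive, similar in
// spirit to `CpuAccessibleBuffer`.
struct MyBuffer {
    inner: UnsafeBuffer, // plus locking/synchronization state
}

unsafe impl Buffer for MyBuffer {
    fn inner(&self) -> &UnsafeBuffer {
        &self.inner
    }
}

fn main() {
    let buffer = MyBuffer { inner: unsafe { UnsafeBuffer::new(1024) } };
    assert_eq!(buffer.size(), 1024);
}
```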
# Runtime vs compile-time checks
The second category of objects described above are objects that describe to the Vulkan
implementation an operation that we are going to perform later. For example a `ComputePipeline`
object describes to the Vulkan implementation a compute operation and contains the shader's code
and the list of resources that we are going to bind and that are going to be accessed by the shader.
Since vulkano is a safe library, it needs to check whether the operation the user requests (e.g.
executing a compute operation) matches the corresponding `ComputePipeline` (for example, checking
that the list of resources passed by the user matches what the compute pipeline expects).
These checks can be expensive. For example when it comes to buffers, vulkano needs to check whether
the layout of the buffers passed by the user is the same as what is expected, by looping through all
the members and following several indirections. If you multiply this by several dozens or hundreds
of operations, it can become very expensive.
In order to reduce the cost of these checks, structs such as `ComputePipeline` have a
template parameter which describes the operation. Whenever vulkano performs a check, it queries
the templated object through a trait, and each safety check has its own trait. This means
that we can build strongly-typed objects at compile-time that describe a very precise operation and
whose method implementations are trivial. For example, we can create a `MyComputeOpDesc` type which
implements the `ResourcesListMatch<MyResourcesList>` trait (which was made up for the sake of the
example), and the user will only be able to pass a `MyResourcesList` object for the list of
resources. This moves the check to compile-time and totally eliminates any runtime check. The
compute pipeline is then expressed as `ComputePipeline<MyComputeOpDesc>`.
However this design has a drawback, which is that it can be difficult to explicitly express such a
type. A compute pipeline in the example above could be expressed as
`ComputePipeline<MyComputeOpDesc>`, but in practice these types (like `MyComputeOpDesc`) would be
built by builders and can become extremely long and annoying to put in a struct (just like, for
example, the type of `(10..).filter(|n| n % 2 == 0).skip(3).take(5)` can be very long and annoying to put
in a struct). This is especially problematic as it concerns objects that are usually created at
initialization and stay alive for a long time, in other words the kind of objects that you would
put in a struct.
In order to solve this naming problem, all the traits that are used to describe operations must be
boxable so that we can turn `ComputePipeline<Very<Long<And<Complicated, Type>>>>` into
`ComputePipeline<Box<ComputePipelineDesc>>`. This means that we can't use associated types and
templates for any of the trait methods. In principle it is a bit annoying to have to restrict
ourselves in what we can do just because the user needs to be able to write out the precise type,
but it's the only pragmatic solution for now.
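A toy illustration of this trade-off, using made-up traits and a single made-up check (the real checks are more numerous and more involved):

```rust
// One trait per safety check; object-safe so it can live behind a `Box`.
trait ComputePipelineDesc {
    fn check_resources(&self, num_buffers: usize) -> bool;
}

// A strongly-typed descriptor: the expected resource count is a property
// of the type, so the check is trivial and easily optimized away.
struct MyComputeOpDesc;

impl ComputePipelineDesc for MyComputeOpDesc {
    fn check_resources(&self, num_buffers: usize) -> bool {
        num_buffers == 1
    }
}

struct ComputePipeline<L> {
    layout: L,
}

impl<L: ComputePipelineDesc> ComputePipeline<L> {
    fn dispatch(&self, num_buffers: usize) {
        // With a concrete `L` this is resolved at compile time; with a
        // boxed `L` it falls back to a dynamic call.
        assert!(self.layout.check_resources(num_buffers));
    }
}

// The boxed form works because the trait avoids associated types and
// generic methods, as explained above.
impl ComputePipelineDesc for Box<dyn ComputePipelineDesc> {
    fn check_resources(&self, n: usize) -> bool {
        (**self).check_resources(n)
    }
}

fn main() {
    let precise = ComputePipeline { layout: MyComputeOpDesc };
    precise.dispatch(1);

    // The nameable, type-erased form for when the full type is impractical.
    let boxed: ComputePipeline<Box<dyn ComputePipelineDesc>> =
        ComputePipeline { layout: Box::new(MyComputeOpDesc) };
    boxed.dispatch(1);
}
```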
# Submissions
Any object that can be submitted to a GPU queue (for example a command buffer) implements
the `Submit` trait.
The `Submit` trait provides a function named `build` which returns a `Submission<Self>` object
(where `Self` is the type that implements the `Submit` trait). The `Submission` object must be kept
alive by the user for as long as the GPU hasn't finished executing the submission. Trying to
destroy a `Submission` will block until the GPU has finished. Since the `Submission` holds the object
that was submitted, this object is also kept alive for as long as the GPU hasn't finished executing
it.
For the moment submitting an object always creates a fence, which is how the `Submission` knows
whether the GPU has finished executing it. Eventually this will need to be modified for the sake of
performance.
In order to make the `Submit` trait safer to implement, the method that actually needs to be
implemented is not `build` but `append_submission`. This method uses an API/lifetime trick to
guarantee that the GPU only executes command buffers that outlive the struct that implements
`Submit`.
There is however a safety issue here: the user can use `mem::forget` on the `Submission` and then
drop the objects referenced by it. There are two solutions to this: either store a bunch of
`Arc<Fence>`s in every single object referenced by submissions (e.g. pipeline objects), or force
the user to either pass `Arc`s or give ownership of the object. The latter is preferred but not yet
implemented.
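The general shape of this API, as a simplified sketch (the real trait also deals with queues, semaphores, and the `append_submission` lifetime trick):

```rust
// Stand-in fence; a real one would wrap a VkFence.
struct Fence;

impl Fence {
    fn wait(&self) {
        // ... block until the GPU signals the fence ...
    }
}

trait Submit: Sized {
    // Submits `self` to the GPU and wraps it in a `Submission`, together
    // with the fence used to know when execution is over.
    fn build(self) -> Submission<Self> {
        // ... vkQueueSubmit would happen here ...
        Submission { fence: Fence, object: self }
    }
}

// Holds the submitted object so that it cannot be destroyed while the GPU
// is still executing it.
struct Submission<S> {
    fence: Fence,
    object: S,
}

impl<S> Drop for Submission<S> {
    fn drop(&mut self) {
        // Destroying a `Submission` blocks until the GPU is done.
        self.fence.wait();
    }
}

struct MyCommandBuffer;
impl Submit for MyCommandBuffer {}

fn main() {
    let submission = MyCommandBuffer.build();
    drop(submission); // would block here until execution finishes
}
```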
# Pools
There are three kinds of pools in vulkano: memory pools, descriptor pools, and command pools. Only
the last two are technically Vulkan concepts, but using a memory pool is also a very common
pattern that you are strongly encouraged to embrace when you write a Vulkan application.
These three kinds of pools are each represented in vulkano by a trait. When you use the Vulkan API,
you are expected to create multiple command pools and multiple descriptor pools for maximum
performance. In vulkano however, it is the implementation of the pool trait that is responsible
for managing multiple actual pool objects. In other words a pool in vulkano is just a trait that
provides a method to allocate or free some resource, and the advanced functionality of Vulkan
pools (like resetting a command buffer, resetting a pool, or managing the descriptor pool's
capacity) is handled internally by the implementation of the trait. For example freeing a
command buffer can be implemented by resetting it and reusing it, instead of actually freeing it.
One of the goals of vulkano is to be easy to use by default. Therefore vulkano provides a default
implementation for each of these pools, and the `new` constructors of types that need a pool (i.e.
buffers, images, descriptor sets, and command buffers) will use the default implementation. It is
possible for the user to use an alternative implementation of a pool by using an alternative
constructor, but the default implementations should be good for most usages. This is similar to
memory allocators in languages such as C++ and Rust, in the sense that some users want to be able
to use a custom allocator but most of the time it's not worth bothering with that.
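As an illustration, here is a sketch of what such a trait could look like (hypothetical, much simpler than vulkano's real pool traits), including the "free by recycling" strategy mentioned above:

```rust
struct CommandBuffer {
    id: u64,
}

// The trait only exposes allocate/free; the implementation is free to
// manage several underlying Vulkan pools, grow capacity, and so on.
trait CommandPool {
    fn alloc(&mut self) -> CommandBuffer;
    fn free(&mut self, cb: CommandBuffer);
}

// "Freeing" a command buffer just resets it and keeps it for reuse.
struct RecyclingPool {
    next_id: u64,
    recycled: Vec<CommandBuffer>,
}

impl CommandPool for RecyclingPool {
    fn alloc(&mut self) -> CommandBuffer {
        self.recycled.pop().unwrap_or_else(|| {
            self.next_id += 1;
            CommandBuffer { id: self.next_id }
        })
    }

    fn free(&mut self, cb: CommandBuffer) {
        // ... vkResetCommandBuffer would happen here ...
        self.recycled.push(cb);
    }
}

fn main() {
    let mut pool = RecyclingPool { next_id: 0, recycled: Vec::new() };
    let cb = pool.alloc();
    let id = cb.id;
    pool.free(cb);
    // The same buffer comes back instead of a freshly allocated one.
    assert_eq!(pool.alloc().id, id);
}
```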
# Command buffers
Command buffer objects belong to the last category of objects that were described above. They are
represented by an unsafe trait and can be implemented manually by the user if they wish.
However this poses a practical problem, which is that creating a command buffer in a safe way
is really complicated. There are tons of commands to implement, and each command has a ton of
safety requirements. If a user wants to create a custom command buffer type, it is just not an
option to ask them to reimplement these safety checks themselves.
The reason why users may want to create their own command buffer types is to implement
synchronization themselves. Vulkano's default implementation (which is `AutobarriersCommandBuffer`)
will automatically place pipeline barriers in order to handle cache flushes and image layout
transitions and avoid data races, but this automatic computation can be seen as expensive.
In order to make it possible to customize the synchronization story of command buffers, vulkano has
split the command buffer building process into two steps. First the user builds a list of commands
through an iterator-like API (and vulkano will check their validity), and then they are turned into
a command buffer through a trait. This means that the user can customize the synchronization
strategy (by customizing the second step) while still using the same command-building process
(the first step). Commands are not opinionated towards one strategy or another. The
command-building code is totally isolated from the synchronization strategy and only checks
whether the commands themselves are valid.
The fact that all the commands are added at once can be a little surprising for a user coming from
Vulkan. Vulkano's API looks very similar to Vulkan's API, but there is a major difference: in
Vulkan the cost of creating a command buffer is distributed between each function call, but in
vulkano it is done all at once. For example, creating a command buffer with 6 commands in Vulkan
requires 8 function calls (begin, the six commands, end) that take, say, 5µs each, while creating
the same command buffer with vulkano also requires 8 function calls, but the first 7 are almost
free and the last one takes 40µs. After some thinking, this was judged not to be a problem.
Creating a list of commands with an iterator-like API has the problem that the type of the list of
commands changes every time you add a new command to the list
(just like for example `let iterator = iterator.skip(1)` changes the type of `iterator`). This is
a problem in situations where we don't know at compile-time the number of commands that we are
going to add. In order to solve this, it is required that the `CommandsList` trait be boxable,
so that the user can use a `Box<CommandsList>`. This is unfortunately not optimal as you will need
a memory allocation for each command that is added to the list. The situation here could still be
improved.
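A toy model of this, with a made-up `CommandsList` trait reduced to a single method:

```rust
trait CommandsList {
    fn len(&self) -> usize;
}

struct Empty;
impl CommandsList for Empty {
    fn len(&self) -> usize { 0 }
}

// Every command wraps the previous list in a new type, so the type grows
// with each command, like iterator adaptors.
struct Draw<L> {
    previous: L,
}
impl<L: CommandsList> CommandsList for Draw<L> {
    fn len(&self) -> usize { self.previous.len() + 1 }
}

// The trait is object-safe, so the chain can be erased behind a `Box`
// when the number of commands is only known at runtime.
impl CommandsList for Box<dyn CommandsList> {
    fn len(&self) -> usize { (**self).len() }
}

fn main() {
    // Strongly typed: the type of `list` is `Draw<Draw<Empty>>`.
    let list = Draw { previous: Draw { previous: Empty } };
    assert_eq!(list.len(), 2);

    // Type-erased: one allocation per command, as noted above.
    let mut boxed: Box<dyn CommandsList> = Box::new(Empty);
    for _ in 0..3 {
        boxed = Box::new(Draw { previous: boxed });
    }
    assert_eq!(boxed.len(), 3);
}
```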
# The auto-barriers builder
As explained above, the default implementation of a command buffer provided by vulkano
automatically places pipeline barriers to avoid issues such as caches not being flushed, commands
being executed simultaneously when they shouldn't, or images having the wrong layout.
This is not an easy job, because Vulkan allows lots of weird access patterns that we want to make
available in vulkano. You can for example create a buffer object split into multiple sub-buffer
objects, or make some images and buffers share the same memory.
In order to make it possible to handle everything properly, the `Buffer` and `Image` traits help
us through their `conflicts` methods. Each buffer and image can be queried to know whether it
potentially uses the same memory as any other buffer or image. When two resources conflict, this
means that you can't write to one and read from the other one simultaneously or write to both
simultaneously.
But we don't want to test every single combination of buffers and images each time to determine
whether they conflict. So in order to improve performance, buffers and images also need to provide
a key that identifies them. Two resources that can potentially conflict must always return the
same key.
The regular `conflicts` functions are still necessary to handle the situation where buffers or
images accidentally return the same key but don't actually conflict.
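A simplified sketch of this two-level check, with a hypothetical trait and key (the real traits also deal with byte ranges, mipmap levels, and so on):

```rust
// Hypothetical resource trait reduced to the two methods discussed above.
trait Resource {
    // Cheap key: two resources that can potentially conflict must always
    // return the same key (e.g. derived from the memory allocation).
    fn conflict_key(&self) -> u64;
    // Precise check, only consulted when two keys collide.
    fn conflicts(&self, other: &dyn Resource) -> bool;
}

struct SubBuffer {
    allocation: u64, // id of the underlying memory allocation
}

impl Resource for SubBuffer {
    fn conflict_key(&self) -> u64 {
        self.allocation
    }
    fn conflicts(&self, other: &dyn Resource) -> bool {
        // A real implementation would compare exact byte ranges here.
        self.conflict_key() == other.conflict_key()
    }
}

fn has_conflict(resources: &[&dyn Resource]) -> bool {
    for (i, a) in resources.iter().enumerate() {
        for b in &resources[i + 1..] {
            // The cheap key comparison filters out almost every pair; the
            // expensive precise check only runs on key collisions.
            if a.conflict_key() == b.conflict_key() && a.conflicts(*b) {
                return true;
            }
        }
    }
    false
}

fn main() {
    let a = SubBuffer { allocation: 1 };
    let b = SubBuffer { allocation: 1 }; // shares memory with `a`
    let c = SubBuffer { allocation: 2 };
    assert!(has_conflict(&[&a, &b]));
    assert!(!has_conflict(&[&a, &c]));
}
```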
This conflict system is also used to make sure that the attachments of a framebuffer don't conflict
with each other or that the resources in a descriptor set don't conflict with each other (both
situations are forbidden).
# Image layouts
Tracking image layouts can be tedious. Vulkano uses a simple solution, which is that images must
always be in a specific layout at the beginning and the end of a command buffer. If a transition
is performed during a command buffer, the image must be transitioned back before the end of the
command buffer. The layout in question is queried with a method on the `Image` trait.
For example an `AttachmentImage` must always be in the `ColorAttachmentOptimal` layout for color
attachments, and in the `DepthStencilAttachmentOptimal` layout for depth-stencil attachments. If any
command switches the image to another layout, then it will need to be switched back before the end
of the command buffer.
This system works very nicely in practice, and unnecessary layout transitions almost never happen.
The only situation where unnecessary transitions tend to happen in practice is for swapchain images
that are transitioned from `PresentSrc` to `ColorAttachmentOptimal` before the start of the
render pass, because the initial layout of the render pass attachment is `ColorAttachmentOptimal`
by default for color attachments. Vulkano should make it clear in the documentation of render
passes that the user is encouraged to specify when an attachment is expected to be in the
`PresentSrc` layout.
The only problematic area concerns the first usage of an image, where it must be transitioned from
the `Undefined` or `Preinitialized` layout. This is done by making the user pass a command buffer
builder in the constructor of images, and the constructor adds a transition command to it. The
image implementation is responsible for making sure that the transition command has been submitted
before any further command that uses the image.
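A sketch of how such a rule can be encoded, with made-up names (the real `Image` trait spells this out in more detail, per aspect and usage):

```rust
#[derive(Debug, Clone, Copy)]
enum Layout {
    ColorAttachmentOptimal,
    TransferSrcOptimal,
}

// Hypothetical slice of the `Image` trait: each image reports the layout
// it is guaranteed to be in at the start and end of every command buffer.
trait Image {
    fn required_layout(&self) -> Layout;
}

struct AttachmentImage;

impl Image for AttachmentImage {
    fn required_layout(&self) -> Layout {
        Layout::ColorAttachmentOptimal
    }
}

// A command that needs another layout must transition the image back
// before the command buffer ends, keeping the invariant true.
fn record_copy_from(commands: &mut Vec<String>, image: &dyn Image) {
    let home = image.required_layout();
    commands.push(format!("transition {:?} -> {:?}", home, Layout::TransferSrcOptimal));
    commands.push("copy".to_owned());
    commands.push(format!("transition {:?} -> {:?}", Layout::TransferSrcOptimal, home));
}

fn main() {
    let mut commands = Vec::new();
    record_copy_from(&mut commands, &AttachmentImage);
    for command in &commands {
        println!("{}", command);
    }
}
```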
# Inter-queue synchronization
When users submit two command buffers to two different queues, they expect the two command buffers
to execute in parallel. However this is forbidden if doing so could result in a data race,
like for example if one command buffer writes to an image and the other one reads from that same
image.
In this situation, the only possible technical solution is to make the execution of the second
command buffer block until the first command buffer has finished executing.
This case is similar to spawning two threads that each access the same resource protected by
a `RwLock` or a `Mutex`. One of the two threads will need to block until the first one is finished.
This raises the question: should vulkano implicitly block command buffers to avoid data races,
or should it force the user to explicitly add wait operations? If we compare a CPU-side
multithreaded program with a GPU-side multithreaded program, the answer is to make it implicit,
as CPU code will also implicitly block when calling a function that happens to lock a `Mutex` or
a `RwLock`. In CPU code, these locking problems are always "fixed" by properly documenting the
behavior of the functions you call. Similarly, vulkano should precisely document its behavior.
More generally users are encouraged to avoid sharing resources between multiple queues unless these
resources are read-only, and in practice in a video game it is indeed rarely needed to share
resources between multiple queues. Just like for CPU-side multithreading, users are encouraged to
have a graph of the ways queues interact with each other.
However another problem arises. In order to make a command buffer wait for another, you need to
make the queue of the first command buffer submit a semaphore after execution, and the queue of
the second command buffer wait on that same semaphore before execution. Semaphores can only be used
once. This means that when you submit a command buffer to a queue, you must already know if any
other command buffers are going to wait on the one you are submitting, and if so how many. This is not
something that vulkano can automatically determine. The fact that there is therefore no optimal
algorithm for implicit synchronization is a good point in favor of explicit synchronization.
The decision was taken to encourage users to explicitly handle synchronization between multiple
queues, but if they forget to do so then vulkano will automatically fall back to a dumb
worst-case-scenario but safe behavior. Whenever this dumb behavior is triggered, vulkano outputs a
debug message with the `vkDebugReportMessageEXT` function. This message can easily be
caught by the user by registering a callback, or with a debugger.
It is yet to be determined what exactly the user needs to handle. The user will at least need to
specify an optional list of semaphores to signal at each submission, but perhaps not the list of
semaphores to wait upon, if those can be determined automatically.


@@ -13,7 +13,7 @@ What does vulkano do?
- Provides a low-levelish API around Vulkan. It doesn't hide what it does, but provides some
comfort types.
- Plans to prevents all invalid API usages, even the most obscure ones. The purpose of vulkano
- Plans to prevent all invalid API usages, even the most obscure ones. The purpose of vulkano
is not to draw a teapot, but to cover all possible usages of Vulkan and detect all the
possible problems. Invalid API usage is prevented thanks to both compile-time checks and
runtime checks.
@@ -26,6 +26,11 @@ What does vulkano do?
**Warning: this library breaks every five minutes for the moment.**
Note that vulkano does **not** require you to install the official Vulkan SDK. This is not
something specific to vulkano (you don't need the SDK to write programs that use Vulkan, even
without vulkano), but many people are unaware of that and install the SDK thinking that it is
required.
## [Documentation](https://docs.rs/vulkano)
[![](https://docs.rs/vulkano/badge.svg)](https://docs.rs/vulkano)
@@ -53,9 +58,8 @@ This crate uses the Cargo workspaces feature that is available only in nightly,
is to make several crates share the same `target/` directory. It is normal to get an error if you
try to run `cargo build` at the root of the directory.
In order to run tests, go to the `vulkano` subdirectory and run `cargo test`. On nVidia GPUs, you
will have to set the `RUST_TEST_THREADS` environment variable to `1` because of
[a bug](https://devtalk.nvidia.com/default/topic/938723/creating-destroying-several-vkdevices-concurrently-sometimes-crashes-or-deadlocks/).
In order to run tests, go to the `vulkano` subdirectory and run `cargo test`. Make sure your
Vulkan driver is up to date before doing so.
## License

BIN
apispec.pdf Normal file

Binary file not shown.


@@ -7,9 +7,9 @@ build = "build.rs"
[dependencies]
vulkano = { path = "../vulkano" }
vulkano-win = { path = "../vulkano-win" }
cgmath = "0.7.0"
cgmath = "0.12.0"
image = "0.6.1"
winit = "0.5.3"
winit = "0.6.4"
time = "0.1.35"
[build-dependencies]


@@ -16,7 +16,10 @@ extern crate vulkano;
extern crate vulkano_win;
use vulkano_win::VkSurfaceBuild;
use vulkano::command_buffer::CommandBufferBuilder;
use vulkano::sync::GpuFuture;
use std::sync::Arc;
use std::time::Duration;
fn main() {
@@ -30,7 +33,8 @@ fn main() {
.next().expect("no device available");
println!("Using device: {} (type: {:?})", physical.name(), physical.ty());
let window = winit::WindowBuilder::new().build_vk_surface(&instance).unwrap();
let events_loop = winit::EventsLoop::new();
let window = winit::WindowBuilder::new().build_vk_surface(&events_loop, &instance).unwrap();
let queue = physical.queue_families().find(|q| q.supports_graphics() &&
window.surface().is_supported(q).unwrap_or(false))
@@ -52,7 +56,7 @@ fn main() {
let present = caps.present_modes.iter().next().unwrap();
let usage = caps.supported_usage_flags;
vulkano::swapchain::Swapchain::new(&device, &window.surface(), 3,
vulkano::swapchain::Swapchain::new(&device, &window.surface(), caps.min_image_count,
vulkano::format::B8G8R8A8Srgb, dimensions, 1,
&usage, &queue, vulkano::swapchain::SurfaceTransform::Identity,
vulkano::swapchain::CompositeAlpha::Opaque,
@@ -60,52 +64,40 @@ fn main() {
};
let vertex_buffer = vulkano::buffer::cpu_access::CpuAccessibleBuffer::<[Vertex]>
::array(&device, 4, &vulkano::buffer::BufferUsage::all(),
Some(queue.family())).expect("failed to create buffer");
#[derive(Debug, Clone)]
struct Vertex { position: [f32; 2] }
impl_vertex!(Vertex, position);
// The buffer that we created contains uninitialized data.
// In order to fill it with data, we have to *map* it.
{
// The `write` function would return `Err` if the buffer was in use by the GPU. This
// obviously can't happen here, since we haven't ask the GPU to do anything yet.
let mut mapping = vertex_buffer.write(Duration::new(0, 0)).unwrap();
mapping[0].position = [-0.5, -0.5];
mapping[1].position = [-0.5, 0.5];
mapping[2].position = [ 0.5, -0.5];
mapping[3].position = [ 0.5, 0.5];
}
let vertex_buffer = vulkano::buffer::cpu_access::CpuAccessibleBuffer::<[Vertex]>
::from_iter(&device, &vulkano::buffer::BufferUsage::all(),
Some(queue.family()), [
Vertex { position: [-0.5, -0.5 ] },
Vertex { position: [-0.5, 0.5 ] },
Vertex { position: [ 0.5, -0.5 ] },
Vertex { position: [ 0.5, 0.5 ] },
].iter().cloned()).expect("failed to create buffer");
mod vs { include!{concat!(env!("OUT_DIR"), "/shaders/src/bin/image_vs.glsl")} }
let vs = vs::Shader::load(&device).expect("failed to create shader module");
mod fs { include!{concat!(env!("OUT_DIR"), "/shaders/src/bin/image_fs.glsl")} }
let fs = fs::Shader::load(&device).expect("failed to create shader module");
mod renderpass {
single_pass_renderpass!{
let renderpass = Arc::new(
single_pass_renderpass!(device.clone(),
attachments: {
color: {
load: Clear,
store: Store,
format: ::vulkano::format::B8G8R8A8Srgb,
format: images[0].format(),
samples: 1,
}
},
pass: {
color: [color],
depth_stencil: {}
}
}
}
let renderpass = renderpass::CustomRenderPass::new(&device, &renderpass::Formats {
color: (vulkano::format::B8G8R8A8Srgb, 1)
}).unwrap();
).unwrap()
);
let texture = vulkano::image::immutable::ImmutableImage::new(&device, vulkano::image::Dimensions::Dim2d { width: 93, height: 93 },
vulkano::format::R8G8B8A8Unorm, Some(queue.family())).unwrap();
@@ -116,22 +108,13 @@ fn main() {
image::ImageFormat::PNG).unwrap().to_rgba();
let image_data = image.into_raw().clone();
let image_data_chunks = image_data.chunks(4).map(|c| [c[0], c[1], c[2], c[3]]);
// TODO: staging buffer instead
let pixel_buffer = vulkano::buffer::cpu_access::CpuAccessibleBuffer::<[[u8; 4]]>
::array(&device, image_data.len(), &vulkano::buffer::BufferUsage::all(),
Some(queue.family())).expect("failed to create buffer");
{
let mut mapping = pixel_buffer.write(Duration::new(0, 0)).unwrap();
for (o, i) in mapping.iter_mut().zip(image_data.chunks(4)) {
o[0] = i[0];
o[1] = i[1];
o[2] = i[2];
o[3] = i[3];
}
}
pixel_buffer
vulkano::buffer::cpu_access::CpuAccessibleBuffer::<[[u8; 4]]>
::from_iter(&device, &vulkano::buffer::BufferUsage::all(),
Some(queue.family()), image_data_chunks)
.expect("failed to create buffer")
};
@@ -142,22 +125,7 @@ fn main() {
vulkano::sampler::SamplerAddressMode::Repeat,
0.0, 1.0, 0.0, 0.0).unwrap();
let descriptor_pool = vulkano::descriptor::descriptor_set::DescriptorPool::new(&device);
mod pipeline_layout {
pipeline_layout!{
set0: {
tex: CombinedImageSampler
}
}
}
let pipeline_layout = pipeline_layout::CustomPipeline::new(&device).unwrap();
let set = pipeline_layout::set0::Set::new(&descriptor_pool, &pipeline_layout, &pipeline_layout::set0::Descriptors {
tex: (&sampler, &texture)
});
let pipeline = vulkano::pipeline::GraphicsPipeline::new(&device, vulkano::pipeline::GraphicsPipelineParams {
let pipeline = Arc::new(vulkano::pipeline::GraphicsPipeline::new(&device, vulkano::pipeline::GraphicsPipelineParams {
vertex_input: vulkano::pipeline::vertex::SingleBufferDefinition::new(),
vertex_shader: vs.main_entry_point(),
input_assembly: vulkano::pipeline::input_assembly::InputAssembly {
@@ -181,43 +149,57 @@ fn main() {
fragment_shader: fs.main_entry_point(),
depth_stencil: vulkano::pipeline::depth_stencil::DepthStencil::disabled(),
blend: vulkano::pipeline::blend::Blend::pass_through(),
layout: &pipeline_layout,
render_pass: vulkano::framebuffer::Subpass::from(&renderpass, 0).unwrap(),
}).unwrap();
render_pass: vulkano::framebuffer::Subpass::from(renderpass.clone(), 0).unwrap(),
}).unwrap());
let set = Arc::new(simple_descriptor_set!(pipeline.clone(), 0, {
tex: (texture.clone(), sampler.clone())
}));
let framebuffers = images.iter().map(|image| {
let attachments = renderpass::AList {
color: &image,
};
let attachments = renderpass.desc().start_attachments()
.color(image.clone());
let dimensions = [image.dimensions()[0], image.dimensions()[1], 1];
vulkano::framebuffer::Framebuffer::new(&renderpass, [images[0].dimensions()[0], images[0].dimensions()[1], 1], attachments).unwrap()
vulkano::framebuffer::Framebuffer::new(renderpass.clone(), dimensions, attachments).unwrap()
}).collect::<Vec<_>>();
let command_buffers = framebuffers.iter().map(|framebuffer| {
vulkano::command_buffer::PrimaryCommandBufferBuilder::new(&device, queue.family())
.copy_buffer_to_color_image(&pixel_buffer, &texture, 0, 0 .. 1, [0, 0, 0],
[texture.dimensions().width(), texture.dimensions().height(), 1])
//.clear_color_image(&texture, [0.0, 1.0, 0.0, 1.0])
.draw_inline(&renderpass, &framebuffer, renderpass::ClearValues {
color: [0.0, 0.0, 1.0, 1.0]
})
.draw(&pipeline, &vertex_buffer, &vulkano::command_buffer::DynamicState::none(),
&set, &())
.draw_end()
.build()
}).collect::<Vec<_>>();
let mut submissions: Vec<Box<GpuFuture>> = Vec::new();
loop {
let image_num = swapchain.acquire_next_image(Duration::new(10, 0)).unwrap();
vulkano::command_buffer::submit(&command_buffers[image_num], &queue).unwrap();
swapchain.present(&queue, image_num).unwrap();
while submissions.len() >= 4 {
submissions.remove(0);
}
for ev in window.window().poll_events() {
let (image_num, future) = swapchain.acquire_next_image(Duration::new(10, 0)).unwrap();
let cb = vulkano::command_buffer::AutoCommandBufferBuilder::new(device.clone(), queue.family())
.unwrap()
.copy_buffer_to_image(pixel_buffer.clone(), texture.clone())
.unwrap()
//.clear_color_image(&texture, [0.0, 1.0, 0.0, 1.0])
.begin_render_pass(
framebuffers[image_num].clone(), false,
renderpass.desc().start_clear_values()
.color([0.0, 0.0, 1.0, 1.0])).unwrap()
.draw(pipeline.clone(), vulkano::command_buffer::DynamicState::none(), vertex_buffer.clone(),
set.clone(), ()).unwrap()
.end_render_pass().unwrap()
.build().unwrap();
let future = future
.then_execute(queue.clone(), cb).unwrap()
.then_swapchain_present(queue.clone(), swapchain.clone(), image_num)
.then_signal_fence_and_flush().unwrap();
submissions.push(Box::new(future) as Box<_>);
let mut done = false;
events_loop.poll_events(|ev| {
match ev {
winit::Event::Closed => return,
winit::Event::WindowEvent { event: winit::WindowEvent::Closed, .. } => done = true,
_ => ()
}
}
});
if done { return; }
}
}


@@ -10,7 +10,7 @@
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
#extension GL_ARB_shading_language_450pack : enable
layout(location = 0) in vec2 tex_coords;
layout(location = 0) out vec4 f_color;


@@ -10,7 +10,7 @@
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
#extension GL_ARB_shading_language_450pack : enable
layout(location = 0) in vec2 position;
layout(location = 0) out vec2 tex_coords;


@@ -17,9 +17,11 @@ extern crate vulkano;
extern crate vulkano_win;
use vulkano_win::VkSurfaceBuild;
use vulkano::command_buffer::CommandBufferBuilder;
use vulkano::sync::GpuFuture;
use vulkano::image::ImageView;
use std::sync::Arc;
use std::time::Duration;
mod vs { include!{concat!(env!("OUT_DIR"), "/shaders/src/bin/teapot_vs.glsl")} }
mod fs { include!{concat!(env!("OUT_DIR"), "/shaders/src/bin/teapot_fs.glsl")} }
@@ -35,7 +37,8 @@ fn main() {
.next().expect("no device available");
println!("Using device: {} (type: {:?})", physical.name(), physical.ty());
let window = winit::WindowBuilder::new().build_vk_surface(&instance).unwrap();
let events_loop = winit::EventsLoop::new();
let window = winit::WindowBuilder::new().build_vk_surface(&events_loop, &instance).unwrap();
let queue = physical.queue_families().find(|q| q.supports_graphics() &&
window.surface().is_supported(q).unwrap_or(false))
@@ -59,14 +62,14 @@ fn main() {
let usage = caps.supported_usage_flags;
let format = caps.supported_formats[0].0;
vulkano::swapchain::Swapchain::new(&device, &window.surface(), 3, format, dimensions, 1,
vulkano::swapchain::Swapchain::new(&device, &window.surface(), caps.min_image_count, format, dimensions, 1,
&usage, &queue, vulkano::swapchain::SurfaceTransform::Identity,
vulkano::swapchain::CompositeAlpha::Opaque,
present, true, None).expect("failed to create swapchain")
};
let depth_buffer = vulkano::image::attachment::AttachmentImage::transient(&device, images[0].dimensions(), vulkano::format::D16Unorm).unwrap();
let depth_buffer = vulkano::image::attachment::AttachmentImage::transient(&device, images[0].dimensions(), vulkano::format::D16Unorm).unwrap().access();
let vertex_buffer = vulkano::buffer::cpu_access::CpuAccessibleBuffer
::from_iter(&device, &vulkano::buffer::BufferUsage::all(), Some(queue.family()), examples::VERTICES.iter().cloned())
@@ -82,7 +85,7 @@ fn main() {
// note: this teapot was meant for OpenGL where the origin is at the lower left
// instead the origin is at the upper left in vulkan, so we reverse the Y axis
let proj = cgmath::perspective(cgmath::rad(3.141592 / 2.0), { let d = images[0].dimensions(); d[0] as f32 / d[1] as f32 }, 0.01, 100.0);
let proj = cgmath::perspective(cgmath::Rad(std::f32::consts::FRAC_PI_2), { let d = images[0].dimensions(); d[0] as f32 / d[1] as f32 }, 0.01, 100.0);
let view = cgmath::Matrix4::look_at(cgmath::Point3::new(0.3, 0.3, 1.0), cgmath::Point3::new(0.0, 0.0, 0.0), cgmath::Vector3::new(0.0, -1.0, 0.0));
let scale = cgmath::Matrix4::from_scale(0.01);
@@ -98,48 +101,30 @@ fn main() {
let vs = vs::Shader::load(&device).expect("failed to create shader module");
let fs = fs::Shader::load(&device).expect("failed to create shader module");
mod renderpass {
single_pass_renderpass!{
let renderpass = Arc::new(
single_pass_renderpass!(device.clone(),
attachments: {
color: {
load: Clear,
store: Store,
format: ::vulkano::format::Format,
format: images[0].format(),
samples: 1,
},
depth: {
load: Clear,
store: DontCare,
format: ::vulkano::format::D16Unorm,
format: vulkano::image::ImageAccess::format(&depth_buffer),
samples: 1,
}
},
pass: {
color: [color],
depth_stencil: {depth}
}
}
}
).unwrap()
);
let renderpass = renderpass::CustomRenderPass::new(&device, &renderpass::Formats {
color: (images[0].format(), 1),
depth: (vulkano::format::D16Unorm, 1)
}).unwrap();
let descriptor_pool = vulkano::descriptor::descriptor_set::DescriptorPool::new(&device);
mod pipeline_layout {
pipeline_layout!{
set0: {
uniforms: UniformBuffer<::vs::ty::Data>
}
}
}
let pipeline_layout = pipeline_layout::CustomPipeline::new(&device).unwrap();
let set = pipeline_layout::set0::Set::new(&descriptor_pool, &pipeline_layout, &pipeline_layout::set0::Descriptors {
uniforms: &uniform_buffer
});
let pipeline = vulkano::pipeline::GraphicsPipeline::new(&device, vulkano::pipeline::GraphicsPipelineParams {
let pipeline = Arc::new(vulkano::pipeline::GraphicsPipeline::new(&device, vulkano::pipeline::GraphicsPipelineParams {
vertex_input: vulkano::pipeline::vertex::TwoBuffersDefinition::new(),
vertex_shader: vs.main_entry_point(),
input_assembly: vulkano::pipeline::input_assembly::InputAssembly::triangle_list(),
@@ -160,58 +145,67 @@ fn main() {
fragment_shader: fs.main_entry_point(),
depth_stencil: vulkano::pipeline::depth_stencil::DepthStencil::simple_depth_test(),
blend: vulkano::pipeline::blend::Blend::pass_through(),
layout: &pipeline_layout,
render_pass: vulkano::framebuffer::Subpass::from(&renderpass, 0).unwrap(),
}).unwrap();
render_pass: vulkano::framebuffer::Subpass::from(renderpass.clone(), 0).unwrap(),
}).unwrap());
let set = Arc::new(simple_descriptor_set!(pipeline.clone(), 0, {
uniforms: uniform_buffer.clone()
}));
let framebuffers = images.iter().map(|image| {
let attachments = renderpass::AList {
color: &image,
depth: &depth_buffer,
};
let attachments = renderpass.desc().start_attachments()
.color(image.clone()).depth(depth_buffer.clone());
let dimensions = [image.dimensions()[0], image.dimensions()[1], 1];
vulkano::framebuffer::Framebuffer::new(&renderpass, [image.dimensions()[0], image.dimensions()[1], 1], attachments).unwrap()
vulkano::framebuffer::Framebuffer::new(renderpass.clone(), dimensions, attachments).unwrap()
}).collect::<Vec<_>>();
let command_buffers = framebuffers.iter().map(|framebuffer| {
vulkano::command_buffer::PrimaryCommandBufferBuilder::new(&device, queue.family())
.draw_inline(&renderpass, &framebuffer, renderpass::ClearValues {
color: [0.0, 0.0, 1.0, 1.0],
depth: 1.0,
})
.draw_indexed(&pipeline, (&vertex_buffer, &normals_buffer), &index_buffer,
&vulkano::command_buffer::DynamicState::none(), &set, &())
.draw_end()
.build()
}).collect::<Vec<_>>();
let mut submissions: Vec<Arc<vulkano::command_buffer::Submission>> = Vec::new();
let mut submissions: Vec<Box<GpuFuture>> = Vec::new();
loop {
submissions.retain(|s| s.destroying_would_block());
while submissions.len() >= 4 {
submissions.remove(0);
}
{
// acquiring the write lock for the uniform buffer
let mut buffer_content = uniform_buffer.write(Duration::new(1, 0)).unwrap();
let mut buffer_content = uniform_buffer.write().unwrap();
let rotation = cgmath::Matrix3::from_angle_y(cgmath::rad(time::precise_time_ns() as f32 * 0.000000001));
let rotation = cgmath::Matrix3::from_angle_y(cgmath::Rad(time::precise_time_ns() as f32 * 0.000000001));
// since the write lock implements the Deref and DerefMut traits,
// we can update content directly
buffer_content.world = cgmath::Matrix4::from(rotation).into();
}
let image_num = swapchain.acquire_next_image(Duration::from_millis(1)).unwrap();
submissions.push(vulkano::command_buffer::submit(&command_buffers[image_num], &queue).unwrap());
swapchain.present(&queue, image_num).unwrap();
let (image_num, future) = swapchain.acquire_next_image(std::time::Duration::new(1, 0)).unwrap();
for ev in window.window().poll_events() {
let command_buffer = vulkano::command_buffer::AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
.begin_render_pass(
framebuffers[image_num].clone(), false,
renderpass.desc().start_clear_values()
.color([0.0, 0.0, 1.0, 1.0]).depth((1f32))).unwrap()
.draw_indexed(
pipeline.clone(), vulkano::command_buffer::DynamicState::none(),
(vertex_buffer.clone(), normals_buffer.clone()),
index_buffer.clone(), set.clone(), ()).unwrap()
.end_render_pass().unwrap()
.build().unwrap();
let future = future
.then_execute(queue.clone(), command_buffer).unwrap()
.then_swapchain_present(queue.clone(), swapchain.clone(), image_num)
.then_signal_fence_and_flush().unwrap();
submissions.push(Box::new(future) as Box<_>);
let mut done = false;
events_loop.poll_events(|ev| {
match ev {
winit::Event::Closed => return,
winit::Event::WindowEvent { event: winit::WindowEvent::Closed, .. } => done = true,
_ => ()
}
}
});
if done { return; }
}
}


@@ -10,7 +10,7 @@
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
#extension GL_ARB_shading_language_450pack : enable
layout(location = 0) in vec3 v_normal;
layout(location = 0) out vec4 f_color;


@@ -10,7 +10,7 @@
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
#extension GL_ARB_shading_language_450pack : enable
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;


@@ -34,10 +34,11 @@ use vulkano_win::VkSurfaceBuild;
use vulkano::buffer::BufferUsage;
use vulkano::buffer::CpuAccessibleBuffer;
use vulkano::command_buffer;
use vulkano::command_buffer::AutoCommandBufferBuilder;
use vulkano::command_buffer::CommandBufferBuilder;
use vulkano::command_buffer::DynamicState;
use vulkano::command_buffer::PrimaryCommandBufferBuilder;
use vulkano::command_buffer::Submission;
use vulkano::descriptor::pipeline_layout::EmptyPipeline;
use vulkano::descriptor::pipeline_layout::PipelineLayout;
use vulkano::descriptor::pipeline_layout::EmptyPipelineDesc;
use vulkano::device::Device;
use vulkano::framebuffer::Framebuffer;
use vulkano::framebuffer::Subpass;
@@ -54,6 +55,7 @@ use vulkano::pipeline::viewport::Viewport;
use vulkano::pipeline::viewport::Scissor;
use vulkano::swapchain::SurfaceTransform;
use vulkano::swapchain::Swapchain;
use vulkano::sync::GpuFuture;
use std::sync::Arc;
use std::time::Duration;
@@ -91,6 +93,7 @@ fn main() {
// Some little debug infos.
println!("Using device: {} (type: {:?})", physical.name(), physical.ty());
// The objective of this example is to draw a triangle on a window. To do so, we first need to
// create the window.
//
@@ -101,7 +104,8 @@
//
// This returns a `vulkano_win::Window` object that contains both a cross-platform winit
// window and a cross-platform Vulkan surface that represents the surface of the window.
let window = winit::WindowBuilder::new().build_vk_surface(&instance).unwrap();
let events_loop = winit::EventsLoop::new();
let window = winit::WindowBuilder::new().build_vk_surface(&events_loop, &instance).unwrap();
// The next step is to choose which GPU queue will execute our draw commands.
//
@@ -179,7 +183,7 @@ fn main() {
let format = caps.supported_formats[0].0;
// Please take a look at the docs for the meaning of the parameters we didn't mention.
Swapchain::new(&device, &window.surface(), 2, format, dimensions, 1,
Swapchain::new(&device, &window.surface(), caps.min_image_count, format, dimensions, 1,
&caps.supported_usage_flags, &queue, SurfaceTransform::Identity, alpha,
present, true, None).expect("failed to create swapchain")
};
@@ -225,57 +229,37 @@ fn main() {
// The next step is to create a *render pass*, which is an object that describes where the
// output of the graphics pipeline will go. It describes the layout of the images
// where the colors, depth and/or stencil information will be written.
mod render_pass {
use vulkano::format::Format;
// Calling this macro creates multiple structs based on the macro's parameters:
//
// - `CustomRenderPass` is the main struct that represents the render pass.
// - `Formats` can be used to indicate the list of the formats of the attachments.
// - `AList` can be used to indicate the actual list of images that are attached.
//
// Render passes can also have multiple subpasses, the only restriction being that all
// the passes will use the same framebuffer dimensions. Here we only have one pass, so
// we use the appropriate macro.
single_pass_renderpass!{
attachments: {
// `color` is a custom name we give to the first and only attachment.
color: {
// `load: Clear` means that we ask the GPU to clear the content of this
// attachment at the start of the drawing.
load: Clear,
// `store: Store` means that we ask the GPU to store the output of the draw
// in the actual image. We could also ask it to discard the result.
store: Store,
// `format: <ty>` indicates the type of the format of the image. This has to
// be one of the types of the `vulkano::format` module (or alternatively one
// of your structs that implements the `FormatDesc` trait). Here we use the
// generic `vulkano::format::Format` enum because we don't know the format in
// advance.
format: Format,
}
},
pass: {
// We use the attachment named `color` as the one and only color attachment.
color: [color],
// No depth-stencil attachment is indicated with empty brackets.
depth_stencil: {}
let render_pass = Arc::new(single_pass_renderpass!(device.clone(),
attachments: {
// `color` is a custom name we give to the first and only attachment.
color: {
// `load: Clear` means that we ask the GPU to clear the content of this
// attachment at the start of the drawing.
load: Clear,
// `store: Store` means that we ask the GPU to store the output of the draw
// in the actual image. We could also ask it to discard the result.
store: Store,
// `format: <ty>` indicates the type of the format of the image. This has to
// be one of the types of the `vulkano::format` module (or alternatively one
// of your structs that implements the `FormatDesc` trait). Here we use the
// generic `vulkano::format::Format` enum because we don't know the format in
// advance.
format: images[0].format(),
// TODO:
samples: 1,
}
},
pass: {
// We use the attachment named `color` as the one and only color attachment.
color: [color],
// No depth-stencil attachment is indicated with empty brackets.
depth_stencil: {}
}
}
// The macro above only created the custom struct that represents our render pass. We also have
// to actually instanciate that struct.
//
// To do so, we have to pass the actual values of the formats of the attachments.
let render_pass = render_pass::CustomRenderPass::new(&device, &render_pass::Formats {
// Use the format of the images and one sample.
color: (images[0].format(), 1)
}).unwrap();
).unwrap());
// Before we draw we have to create what is called a pipeline. This is similar to an OpenGL
// program, but much more specific.
let pipeline = GraphicsPipeline::new(&device, GraphicsPipelineParams {
let pipeline = Arc::new(GraphicsPipeline::new(&device, GraphicsPipelineParams {
// We need to indicate the layout of the vertices.
// The type `SingleBufferDefinition` actually contains a template parameter corresponding
// to the type of each vertex. But in this code it is automatically inferred.
@@ -328,15 +312,10 @@ fn main() {
// attachments without any change.
blend: Blend::pass_through(),
// Shaders can usually access resources such as images or buffers. This parameters is here
// to indicate the layout of the accessed resources, which is also called the *pipeline
// layout*. Here we don't access anything, so we just create an `EmptyPipeline` object.
layout: &EmptyPipeline::new(&device).unwrap(),
// We have to indicate which subpass of which render pass this pipeline is going to be used
// in. The pipeline will only be usable from this particular subpass.
render_pass: Subpass::from(&render_pass, 0).unwrap(),
}).unwrap();
render_pass: Subpass::from(render_pass.clone(), 0).unwrap(),
}).unwrap());
// The render pass we created above only describes the layout of our framebuffers. Before we
// can draw we also need to create the actual framebuffers.
@@ -344,26 +323,40 @@ fn main() {
// Since we need to draw to multiple images, we are going to create a different framebuffer for
// each image.
let framebuffers = images.iter().map(|image| {
// When we create the framebuffer we need to pass the actual list of images for the
// framebuffer's attachments.
//
// The type of data that corresponds to this list depends on the way you created the
// render pass. With the `single_pass_renderpass!` macro you need to call
// `.desc().start_attachments()`. The returned object will have a method whose name is the
// name of the first attachment. When called, it returns an object that will have a method
// whose name is the name of the second attachment. And so on. Only the object returned
// by the method of the last attachment can be passed to `Framebuffer::new`.
let attachments = render_pass.desc().start_attachments().color(image.clone());
// Actually creating the framebuffer. Note that we have to pass the dimensions of the
// framebuffer. These dimensions must be inferior or equal to the intersection of the
// dimensions of all the attachments.
let dimensions = [image.dimensions()[0], image.dimensions()[1], 1];
Framebuffer::new(&render_pass, dimensions, render_pass::AList {
// The `AList` struct was generated by the render pass macro above, and contains one
// member for each attachment.
color: image
}).unwrap()
Framebuffer::new(render_pass.clone(), dimensions, attachments).unwrap()
}).collect::<Vec<_>>();
// Initialization is finally finished!
// In the loop below we are going to submit commands to the GPU. Submitting a command produces
// a `Submission` object which holds the resources for as long as they are in use by the GPU.
// an object that implements the `GpuFuture` trait, which holds the resources for as long as
// they are in use by the GPU.
//
// Destroying a `Submission` blocks until the GPU is finished executing it. In order to avoid
// Destroying the `GpuFuture` blocks until the GPU is finished executing it. In order to avoid
// that, we store them in a `Vec` and clean them from time to time.
let mut submissions: Vec<Arc<Submission>> = Vec::new();
let mut submissions: Vec<Box<GpuFuture>> = Vec::new();
loop {
// Clearing the old submissions by keeping alive only the ones whose destructor would block.
submissions.retain(|s| s.destroying_would_block());
// Clearing the old submissions by keeping alive only the ones which probably aren't
// finished.
while submissions.len() >= 4 {
submissions.remove(0);
}
// Before we can draw on the output, we have to *acquire* an image from the swapchain. If
// no image is available (which happens if you submit draw commands too quickly), then the
@@ -372,7 +365,7 @@ fn main() {
//
// This function can block if no image is available. The parameter is a timeout after
// which the function call will return an error.
let image_num = swapchain.acquire_next_image(Duration::new(1, 0)).unwrap();
let (image_num, future) = swapchain.acquire_next_image(Duration::new(1, 0)).unwrap();
// In order to draw, we have to build a *command buffer*. The command buffer object holds
// the list of commands that are going to be executed.
@@ -383,41 +376,46 @@ fn main() {
//
// Note that we have to pass a queue family when we create the command buffer. The command
// buffer will only be executable on that given queue family.
let command_buffer = PrimaryCommandBufferBuilder::new(&device, queue.family())
let command_buffer = AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
// Before we can draw, we have to *enter a render pass*. There are two methods to do
// this: `draw_inline` and `draw_secondary`. The latter is a bit more advanced and is
// not covered here.
//
// The third parameter contains the list of values to clear the attachments with. Only
// the attachments that use `load: Clear` appear in this struct.
.draw_inline(&render_pass, &framebuffers[image_num], render_pass::ClearValues {
color: [0.0, 0.0, 1.0, 1.0]
})
// The third parameter builds the list of values to clear the attachments with. The API
// is similar to the list of attachments when building the framebuffers, except that
// only the attachments that use `load: Clear` appear in the list.
.begin_render_pass(framebuffers[image_num].clone(), false,
render_pass.desc().start_clear_values().color([0.0, 0.0, 1.0, 1.0]))
.unwrap()
// We are now inside the first subpass of the render pass. We add a draw command.
//
// The last two parameters contain the list of resources to pass to the shaders.
// Since we used an `EmptyPipeline` object, the objects have to be `()`.
.draw(&pipeline, &vertex_buffer, &DynamicState::none(), (), &())
.draw(pipeline.clone(), DynamicState::none(), vertex_buffer.clone(), (), ())
.unwrap()
// We leave the render pass by calling `draw_end`. Note that if we had multiple
// subpasses we could have called `next_inline` (or `next_secondary`) to jump to the
// next subpass.
.draw_end()
.end_render_pass()
.unwrap()
// Finish building the command buffer by calling `build`.
.build();
.build().unwrap();
// Now all we need to do is submit the command buffer to the queue.
submissions.push(command_buffer::submit(&command_buffer, &queue).unwrap());
let future = future
.then_execute(queue.clone(), command_buffer).unwrap()
// The color output is now expected to contain our triangle. But in order to show it on
// the screen, we have to *present* the image by calling `present`.
//
// This function does not actually present the image immediately. Instead it submits a
// present command at the end of the queue. This means that it will only be presented once
// the GPU has finished executing the command buffer that draws the triangle.
swapchain.present(&queue, image_num).unwrap();
// The color output is now expected to contain our triangle. But in order to show it on
// the screen, we have to *present* the image by calling `present`.
//
// This function does not actually present the image immediately. Instead it submits a
// present command at the end of the queue. This means that it will only be presented once
// the GPU has finished executing the command buffer that draws the triangle.
.then_swapchain_present(queue.clone(), swapchain.clone(), image_num)
.then_signal_fence_and_flush().unwrap();
submissions.push(Box::new(future) as Box<_>);
// Note that in more complex programs it is likely that one of `acquire_next_image`,
// `command_buffer::submit`, or `present` will block for some time. This happens when the
@@ -429,11 +427,13 @@ fn main() {
// Handling the window events in order to close the program when the user wants to close
// it.
for ev in window.window().poll_events() {
let mut done = false;
events_loop.poll_events(|ev| {
match ev {
winit::Event::Closed => return,
winit::Event::WindowEvent { event: winit::WindowEvent::Closed, .. } => done = true,
_ => ()
}
}
});
if done { return; }
}
}


@@ -10,7 +10,7 @@
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
#extension GL_ARB_shading_language_450pack : enable
layout(location = 0) out vec4 f_color;


@@ -10,7 +10,7 @@
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
#extension GL_ARB_shading_language_450pack : enable
layout(location = 0) in vec2 position;


@@ -1,14 +1,15 @@
[package]
name = "glsl-to-spirv"
version = "0.1.1"
version = "0.1.2"
authors = ["Pierre Krieger <pierre.krieger1708@gmail.com>"]
repository = "https://github.com/tomaka/vulkano"
description = "Wrapper around the official GLSL to SPIR-V compiler"
license = "MIT/Apache-2.0"
build = "build/build.rs"
categories = ["rendering::graphics-api"]
[dependencies]
tempdir = "0.3.4"
tempdir = "0.3.5"
[build-dependencies]
cmake = "0.1.13"
cmake = "0.1.19"


@@ -1,7 +1,10 @@
[package]
name = "vk-sys"
version = "0.2.0"
version = "0.2.2"
authors = ["Pierre Krieger <pierre.krieger1708@gmail.com>"]
repository = "https://github.com/tomaka/vulkano"
description = "Bindings for the Vulkan graphics API"
license = "MIT/Apache-2.0"
documentation = "https://docs.rs/vk-sys"
keywords = ["vulkan", "bindings", "graphics", "gpu"]
categories = ["rendering::graphics-api"]


@@ -56,6 +56,7 @@ pub type SwapchainKHR = u64;
pub type DisplayKHR = u64;
pub type DisplayModeKHR = u64;
pub type DebugReportCallbackEXT = u64;
pub type DescriptorUpdateTemplateKHR = u64;
pub const LOD_CLAMP_NONE: f32 = 1000.0;
pub const REMAINING_MIP_LEVELS: u32 = 0xffffffff;
@@ -100,6 +101,7 @@ pub const SUBOPTIMAL_KHR: u32 = 1000001003;
pub const ERROR_OUT_OF_DATE_KHR: u32 = -1000001004i32 as u32;
pub const ERROR_INCOMPATIBLE_DISPLAY_KHR: u32 = -1000003001i32 as u32;
pub const ERROR_VALIDATION_FAILED_EXT: u32 = -1000011001i32 as u32;
pub const ERROR_OUT_OF_POOL_MEMORY_KHR: u32 = -1000069000i32 as u32;
pub type StructureType = u32;
pub const STRUCTURE_TYPE_APPLICATION_INFO: u32 = 0;
@@ -165,6 +167,18 @@ pub const STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR: u32 = 1000009000;
pub const STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT: u32 = 1000011000;
pub const STRUCTURE_TYPE_IOS_SURFACE_CREATE_INFO_MVK: u32 = 1000000000 + (52 * 1000);
pub const STRUCTURE_TYPE_MACOS_SURFACE_CREATE_INFO_MVK: u32 = 1000000000 + (53 * 1000);
pub const STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2_KHR: u32 = 1000059000;
pub const STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2_KHR: u32 = 1000059001;
pub const STRUCTURE_TYPE_FORMAT_PROPERTIES_2_KHR: u32 = 1000059002;
pub const STRUCTURE_TYPE_IMAGE_FORMAT_PROPERTIES_2_KHR: u32 = 1000059003;
pub const STRUCTURE_TYPE_PHYSICAL_DEVICE_IMAGE_FORMAT_INFO_2_KHR: u32 = 1000059004;
pub const STRUCTURE_TYPE_QUEUE_FAMILY_PROPERTIES_2_KHR: u32 = 1000059005;
pub const STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2_KHR: u32 = 1000059006;
pub const STRUCTURE_TYPE_SPARSE_IMAGE_FORMAT_PROPERTIES_2_KHR: u32 = 1000059007;
pub const STRUCTURE_TYPE_PHYSICAL_DEVICE_SPARSE_IMAGE_FORMAT_INFO_2_KHR: u32 = 1000059008;
pub const STRUCTURE_TYPE_VI_SURFACE_CREATE_INFO_NN: u32 = 1000062000;
pub const STRUCTURE_TYPE_PHYSICAL_DEVICE_PUSH_DESCRIPTOR_PROPERTIES_KHR: u32 = 1000080000;
pub const STRUCTURE_TYPE_DESCRIPTOR_UPDATE_TEMPLATE_CREATE_INFO_KHR: u32 = 1000085000;
pub type SystemAllocationScope = u32;
pub const SYSTEM_ALLOCATION_SCOPE_COMMAND: u32 = 0;
@@ -598,6 +612,8 @@ pub const FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT: u32 = 0x00000200;
pub const FORMAT_FEATURE_BLIT_SRC_BIT: u32 = 0x00000400;
pub const FORMAT_FEATURE_BLIT_DST_BIT: u32 = 0x00000800;
pub const FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT: u32 = 0x00001000;
pub const FORMAT_FEATURE_TRANSFER_SRC_BIT_KHR: u32 = 0x00004000;
pub const FORMAT_FEATURE_TRANSFER_DST_BIT_KHR: u32 = 0x00008000;
pub type FormatFeatureFlags = Flags;
@@ -619,6 +635,7 @@ pub const IMAGE_CREATE_SPARSE_RESIDENCY_BIT: u32 = 0x00000002;
pub const IMAGE_CREATE_SPARSE_ALIASED_BIT: u32 = 0x00000004;
pub const IMAGE_CREATE_MUTABLE_FORMAT_BIT: u32 = 0x00000008;
pub const IMAGE_CREATE_CUBE_COMPATIBLE_BIT: u32 = 0x00000010;
pub const IMAGE_CREATE_2D_ARRAY_COMPATIBLE_BIT_KHR: u32 = 0x00000020;
pub type ImageCreateFlags = Flags;
@ -853,6 +870,9 @@ pub const COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT: u32 = 0x00000001;
pub type CommandPoolResetFlags = Flags;
pub type CommandPoolTrimFlagsKHR = Flags;
pub type CommandBufferUsageFlagBits = u32;
pub const COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT: u32 = 0x00000001;
pub const COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT: u32 = 0x00000002;
@ -890,6 +910,18 @@ pub type ColorSpaceKHR = u32;
#[deprecated = "Renamed to COLOR_SPACE_SRGB_NONLINEAR_KHR"]
pub const COLORSPACE_SRGB_NONLINEAR_KHR: u32 = 0;
pub const COLOR_SPACE_SRGB_NONLINEAR_KHR: u32 = 0;
pub const COLOR_SPACE_DISPLAY_P3_LINEAR_EXT: u32 = 1000104001;
pub const COLOR_SPACE_DISPLAY_P3_NONLINEAR_EXT: u32 = 1000104002;
pub const COLOR_SPACE_SCRGB_LINEAR_EXT: u32 = 1000104003;
pub const COLOR_SPACE_SCRGB_NONLINEAR_EXT: u32 = 1000104004;
pub const COLOR_SPACE_DCI_P3_LINEAR_EXT: u32 = 1000104005;
pub const COLOR_SPACE_DCI_P3_NONLINEAR_EXT: u32 = 1000104006;
pub const COLOR_SPACE_BT709_LINEAR_EXT: u32 = 1000104007;
pub const COLOR_SPACE_BT709_NONLINEAR_EXT: u32 = 1000104008;
pub const COLOR_SPACE_BT2020_LINEAR_EXT: u32 = 1000104009;
pub const COLOR_SPACE_BT2020_NONLINEAR_EXT: u32 = 1000104010;
pub const COLOR_SPACE_ADOBERGB_LINEAR_EXT: u32 = 1000104011;
pub const COLOR_SPACE_ADOBERGB_NONLINEAR_EXT: u32 = 1000104012;
pub type PresentModeKHR = u32;
pub const PRESENT_MODE_IMMEDIATE_KHR: u32 = 0;
@ -963,6 +995,17 @@ pub type MacOSSurfaceCreateFlagsMVK = u32;
pub type IOSSurfaceCreateFlagsMVK = u32;
pub type DescriptorSetLayoutCreateFlagBits = u32;
pub const DESCRIPTOR_SET_LAYOUT_CREATE_PUSH_DESCRIPTOR_BIT_KHR: u32 = 0x00000001;
pub type DescriptorUpdateTemplateTypeKHR = u32;
pub const DESCRIPTOR_UPDATE_TEMPLATE_TYPE_DESCRIPTOR_SET_KHR: u32 = 0;
pub const DESCRIPTOR_UPDATE_TEMPLATE_TYPE_PUSH_DESCRIPTORS_KHR: u32 = 1;
pub const DESCRIPTOR_UPDATE_TEMPLATE_TYPE_BEGIN_RANGE_KHR: u32 = DESCRIPTOR_UPDATE_TEMPLATE_TYPE_DESCRIPTOR_SET_KHR;
pub const DESCRIPTOR_UPDATE_TEMPLATE_TYPE_END_RANGE_KHR: u32 = DESCRIPTOR_UPDATE_TEMPLATE_TYPE_PUSH_DESCRIPTORS_KHR;
pub const DESCRIPTOR_UPDATE_TEMPLATE_TYPE_RANGE_SIZE_KHR: u32 = (DESCRIPTOR_UPDATE_TEMPLATE_TYPE_PUSH_DESCRIPTORS_KHR - DESCRIPTOR_UPDATE_TEMPLATE_TYPE_DESCRIPTOR_SET_KHR + 1);
pub type DescriptorUpdateTemplateCreateFlagsKHR = Flags;
pub type PFN_vkAllocationFunction = extern "system" fn(*mut c_void, usize, usize, SystemAllocationScope) -> *mut c_void;
pub type PFN_vkReallocationFunction = extern "system" fn(*mut c_void, *mut c_void, usize, usize, SystemAllocationScope) -> *mut c_void;
pub type PFN_vkFreeFunction = extern "system" fn(*mut c_void, *mut c_void);
@ -2374,7 +2417,6 @@ pub struct DebugReportCallbackCreateInfoEXT {
pub pUserData: *mut c_void,
}
#[repr(C)]
pub struct IOSSurfaceCreateInfoMVK {
pub sType: StructureType,
@ -2420,6 +2462,119 @@ pub struct MVKSwapchainPerformance {
pub averageFramesPerSecond: c_double,
}
#[repr(C)]
pub struct PhysicalDeviceFeatures2KHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub features: PhysicalDeviceFeatures,
}
#[repr(C)]
pub struct PhysicalDeviceProperties2KHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub properties: PhysicalDeviceProperties,
}
#[repr(C)]
pub struct FormatProperties2KHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub formatProperties: FormatProperties,
}
#[repr(C)]
pub struct ImageFormatProperties2KHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub imageFormatProperties: ImageFormatProperties,
}
#[repr(C)]
pub struct PhysicalDeviceImageFormatInfo2KHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub format: Format,
pub imageType: ImageType,
pub tiling: ImageTiling,
pub usage: ImageUsageFlags,
pub flags: ImageCreateFlags,
}
#[repr(C)]
pub struct QueueFamilyProperties2KHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub queueFamilyProperties: QueueFamilyProperties,
}
#[repr(C)]
pub struct PhysicalDeviceMemoryProperties2KHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub memoryProperties: PhysicalDeviceMemoryProperties,
}
#[repr(C)]
pub struct SparseImageFormatProperties2KHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub properties: SparseImageFormatProperties,
}
#[repr(C)]
pub struct PhysicalDeviceSparseImageFormatInfo2KHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub format: Format,
pub imageType: ImageType,
pub samples: SampleCountFlagBits,
pub usage: ImageUsageFlags,
pub tiling: ImageTiling,
}
pub type ViSurfaceCreateFlagsNN = Flags;
#[repr(C)]
pub struct ViSurfaceCreateInfoNN {
pub sType: StructureType,
pub pNext: *const c_void,
pub flags: ViSurfaceCreateFlagsNN,
pub window: *const c_void,
}
#[repr(C)]
pub struct PhysicalDevicePushDescriptorPropertiesKHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub maxPushDescriptors: u32,
}
#[repr(C)]
pub struct DescriptorUpdateTemplateEntryKHR {
pub dstBinding: u32,
pub dstArrayElement: u32,
pub descriptorCount: u32,
pub descriptorType: DescriptorType,
pub offset: usize,
pub stride: usize,
}
#[repr(C)]
pub struct DescriptorUpdateTemplateCreateInfoKHR {
pub sType: StructureType,
pub pNext: *const c_void,
pub flags: DescriptorUpdateTemplateCreateFlagsKHR,
pub descriptorUpdateEntryCount: u32,
pub pDescriptorUpdateEntries: *const DescriptorUpdateTemplateEntryKHR,
pub templateType: DescriptorUpdateTemplateTypeKHR,
pub descriptorSetLayout: DescriptorSetLayout,
pub pipelineBindPoint: PipelineBindPoint,
pub pipelineLayout: PipelineLayout,
pub set: u32,
}
macro_rules! ptrs {
($struct_name:ident, { $($name:ident => ($($param_n:ident: $param_ty:ty),*) -> $ret:ty,)+ }) => (
pub struct $struct_name {
@ -2523,6 +2678,14 @@ ptrs!(InstancePointers, {
SetMoltenVKDeviceConfigurationMVK => (device: Device, pConfiguration: *mut MVKDeviceConfiguration) -> Result,
GetPhysicalDeviceMetalFeaturesMVK => (physicalDevice: PhysicalDevice, pMetalFeatures: *mut MVKPhysicalDeviceMetalFeatures) -> Result,
GetSwapchainPerformanceMVK => (device: Device, swapchain: SwapchainKHR, pSwapchainPerf: *mut MVKSwapchainPerformance) -> Result,
CreateViSurfaceNN => (instance: Instance, pCreateInfo: *const ViSurfaceCreateInfoNN, pAllocator: *const AllocationCallbacks, pSurface: *mut SurfaceKHR) -> Result,
GetPhysicalDeviceFeatures2KHR => (physicalDevice: PhysicalDevice, pFeatures: *mut PhysicalDeviceFeatures2KHR) -> (),
GetPhysicalDeviceProperties2KHR => (physicalDevice: PhysicalDevice, pProperties: *mut PhysicalDeviceProperties2KHR) -> (),
GetPhysicalDeviceFormatProperties2KHR => (physicalDevice: PhysicalDevice, pFormatProperties: *mut FormatProperties2KHR) -> (),
GetPhysicalDeviceImageFormatProperties2KHR => (physicalDevice: PhysicalDevice, pImageFormatInfo: *const PhysicalDeviceImageFormatInfo2KHR, pImageFormatProperties: *mut ImageFormatProperties2KHR) -> Result,
GetPhysicalDeviceQueueFamilyProperties2KHR => (physicalDevice: PhysicalDevice, pQueueFamilyPropertiesCount: *mut u32, pQueueFamilyProperties: *mut QueueFamilyProperties2KHR) -> (),
GetPhysicalDeviceMemoryProperties2KHR => (physicalDevice: PhysicalDevice, pMemoryProperties: *mut PhysicalDeviceMemoryProperties2KHR) -> (),
GetPhysicalDeviceSparseImageFormatProperties2KHR => (physicalDevice: PhysicalDevice, pFormatInfo: *const PhysicalDeviceSparseImageFormatInfo2KHR, pPropertyCount: *mut u32, pProperties: *mut SparseImageFormatProperties2KHR) -> (),
});
ptrs!(DevicePointers, {
@ -2597,6 +2760,7 @@ ptrs!(DevicePointers, {
CreateCommandPool => (device: Device, pCreateInfo: *const CommandPoolCreateInfo, pAllocator: *const AllocationCallbacks, pCommandPool: *mut CommandPool) -> Result,
DestroyCommandPool => (device: Device, commandPool: CommandPool, pAllocator: *const AllocationCallbacks) -> (),
ResetCommandPool => (device: Device, commandPool: CommandPool, flags: CommandPoolResetFlags) -> Result,
TrimCommandPoolKHR => (device: Device, commandPool: CommandPool, flags: CommandPoolTrimFlagsKHR) -> (),
AllocateCommandBuffers => (device: Device, pAllocateInfo: *const CommandBufferAllocateInfo, pCommandBuffers: *mut CommandBuffer) -> Result,
FreeCommandBuffers => (device: Device, commandPool: CommandPool, commandBufferCount: u32, pCommandBuffers: *const CommandBuffer) -> (),
BeginCommandBuffer => (commandBuffer: CommandBuffer, pBeginInfo: *const CommandBufferBeginInfo) -> Result,
@ -2652,4 +2816,9 @@ ptrs!(DevicePointers, {
AcquireNextImageKHR => (device: Device, swapchain: SwapchainKHR, timeout: u64, semaphore: Semaphore, fence: Fence, pImageIndex: *mut u32) -> Result,
QueuePresentKHR => (queue: Queue, pPresentInfo: *const PresentInfoKHR) -> Result,
CreateSharedSwapchainsKHR => (device: Device, swapchainCount: u32, pCreateInfos: *const SwapchainCreateInfoKHR, pAllocator: *const AllocationCallbacks, pSwapchains: *mut SwapchainKHR) -> Result,
CmdPushDescriptorSetKHR => (commandBuffer: CommandBuffer, pipelineBindPoint: PipelineBindPoint, layout: PipelineLayout, set: u32, descriptorWriteCount: u32, pDescriptorWrites: *const WriteDescriptorSet) -> (),
CreateDescriptorUpdateTemplateKHR => (device: Device, pCreateInfo: *const DescriptorUpdateTemplateCreateInfoKHR, pAllocator: *const AllocationCallbacks, pDescriptorUpdateTemplate: *mut DescriptorUpdateTemplateKHR) -> Result,
DestroyDescriptorUpdateTemplateKHR => (device: Device, descriptorUpdateTemplate: DescriptorUpdateTemplateKHR, pAllocator: *const AllocationCallbacks) -> (),
UpdateDescriptorSetWithTemplateKHR => (device: Device, descriptorSet: DescriptorSet, descriptorUpdateTemplate: DescriptorUpdateTemplateKHR, pData: *const c_void) -> (),
CmdPushDescriptorSetWithTemplateKHR => (commandBuffer: CommandBuffer, descriptorUpdateTemplate: DescriptorUpdateTemplateKHR, layout: PipelineLayout, set: u32, pData: *const c_void) -> (),
});
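Each `ptrs!` invocation generates a struct whose fields are the listed function pointers, filled in through a caller-supplied lookup closure. As a rough sketch of how the new entry points might be reached, assuming the `load` constructor that `ptrs!` generates and a `get_device_proc_addr` lookup obtained elsewhere (all names here are illustrative, not part of this diff):

```rust
// Hypothetical loading and use of a new extension entry point; the lookup
// closure and the `device`/`command_pool` handles are assumed to exist.
let vk = DevicePointers::load(|name| unsafe {
    mem::transmute(get_device_proc_addr(device, name.as_ptr()))
});
unsafe {
    // Release unused command-pool memory back to the driver (VK_KHR_maintenance1).
    vk.TrimCommandPoolKHR(device, command_pool, 0);
}
```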

Binary file not shown.

View File

@ -0,0 +1,18 @@
[package]
name = "vulkano-shader-derive"
version = "0.3.2"
authors = ["Pierre Krieger <pierre.krieger1708@gmail.com>"]
repository = "https://github.com/tomaka/vulkano"
description = "Safe wrapper for the Vulkan graphics API"
license = "MIT/Apache-2.0"
documentation = "https://docs.rs/vulkano"
categories = ["rendering::graphics-api"]
[lib]
name = "vulkano_shader_derive"
proc-macro = true
[dependencies]
glsl-to-spirv = { version = "0.1.2", path = "../glsl-to-spirv" }
syn = { version = "0.10", features = ["aster", "visit"] }
vulkano-shaders = { version = "0.3", path = "../vulkano-shaders" }

View File

@ -0,0 +1,27 @@
# Usage
This procedural-macro crate replaces `vulkano-shaders`: shader code generation happens through a custom derive instead of a build script.
```rust
#[macro_use]
extern crate vulkano_shader_derive;
mod fs {
#[derive(VulkanoShader)]
#[ty = "fragment"]
#[src = "
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_450pack : enable
layout(location = 0) out vec4 f_color;
void main() {
f_color = vec4(1.0, 0.0, 0.0, 1.0);
}"]
struct Dummy;
}
// Inside a function, with a `device: Arc<Device>` in scope:
let fs = fs::Shader::load(&device).expect("failed to create shader module");
```
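The `ty` attribute selects the shader stage to compile for; the derive accepts `vertex`, `fragment`, `geometry`, `tess_ctrl`, `tess_eval` and `compute`. For instance, a vertex shader would look like this (a minimal sketch; the GLSL body is illustrative):

```rust
mod vs {
    #[derive(VulkanoShader)]
    #[ty = "vertex"]
    #[src = "
#version 450
layout(location = 0) in vec2 position;
void main() {
    gl_Position = vec4(position, 0.0, 1.0);
}"]
    struct Dummy;
}
```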

View File

@ -0,0 +1,46 @@
extern crate glsl_to_spirv;
extern crate proc_macro;
extern crate syn;
extern crate vulkano_shaders;
use proc_macro::TokenStream;
#[proc_macro_derive(VulkanoShader, attributes(src, ty))]
pub fn derive(input: TokenStream) -> TokenStream {
let syn_item = syn::parse_macro_input(&input.to_string()).unwrap();
let src = syn_item.attrs.iter().filter_map(|attr| {
match attr.value {
syn::MetaItem::NameValue(ref i, syn::Lit::Str(ref val, _)) if i == "src" => {
Some(val.clone())
},
_ => None
}
}).next().expect("Can't find `src` attribute ; put #[src = \"...\"] for example.");
let ty_str = syn_item.attrs.iter().filter_map(|attr| {
match attr.value {
syn::MetaItem::NameValue(ref i, syn::Lit::Str(ref val, _)) if i == "ty" => {
Some(val.clone())
},
_ => None
}
}).next().expect("Can't find `ty` attribute ; put #[ty = \"vertex\"] for example.");
let ty = match &ty_str[..] {
"vertex" => glsl_to_spirv::ShaderType::Vertex,
"fragment" => glsl_to_spirv::ShaderType::Fragment,
"geometry" => glsl_to_spirv::ShaderType::Geometry,
"tess_ctrl" => glsl_to_spirv::ShaderType::TessellationControl,
"tess_eval" => glsl_to_spirv::ShaderType::TessellationEvaluation,
"compute" => glsl_to_spirv::ShaderType::Compute,
_ => panic!("Unexpected shader type ; valid values: vertex, fragment, geometry, tess_ctrl, tess_eval, compute")
};
let spirv_data = match glsl_to_spirv::compile(&src, ty) {
Ok(compiled) => compiled,
Err(message) => panic!("{}\nfailed to compile shader", message),
};
vulkano_shaders::reflect("Shader", spirv_data).unwrap().parse().unwrap()
}

View File

@ -1,11 +1,12 @@
[package]
name = "vulkano-shaders"
version = "0.3.1"
version = "0.3.2"
authors = ["Pierre Krieger <pierre.krieger1708@gmail.com>"]
repository = "https://github.com/tomaka/vulkano"
description = "Shaders "
license = "MIT/Apache-2.0"
documentation = "http://tomaka.github.io/vulkano/vulkano/index.html"
categories = ["rendering::graphics-api"]
[dependencies]
glsl-to-spirv = { version = "0.1.0", path = "../glsl-to-spirv" }
glsl-to-spirv = { version = "0.1.2", path = "../glsl-to-spirv" }

View File

@ -7,7 +7,7 @@
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::collections::HashSet;
use std::cmp;
use enums;
use parse;
@ -18,9 +18,11 @@ pub fn write_descriptor_sets(doc: &parse::Spirv) -> String {
// Finding all the descriptors.
let mut descriptors = Vec::new();
struct Descriptor {
name: String,
set: u32,
binding: u32,
desc_ty: String,
array_count: u64,
readonly: bool,
}
@ -49,68 +51,127 @@ pub fn write_descriptor_sets(doc: &parse::Spirv) -> String {
}).next().expect(&format!("Uniform `{}` is missing a binding", name));
// Find information about the kind of binding for this descriptor.
let (desc_ty, readonly) = descriptor_infos(doc, pointed_ty, false).expect(&format!("Couldn't find relevant type for uniform `{}` (type {}, maybe unimplemented)", name, pointed_ty));
let (desc_ty, readonly, array_count) = descriptor_infos(doc, pointed_ty, false).expect(&format!("Couldn't find relevant type for uniform `{}` (type {}, maybe unimplemented)", name, pointed_ty));
descriptors.push(Descriptor {
name: name,
desc_ty: desc_ty,
set: descriptor_set,
binding: binding,
array_count: array_count,
readonly: readonly,
});
}
// Sort the descriptors by binding so that they are processed in a deterministic order.
descriptors.sort_by(|a, b| a.binding.cmp(&b.binding));
// Looping to find all the push constant structs.
let mut push_constants_size = 0;
for instruction in doc.instructions.iter() {
let type_id = match instruction {
&parse::Instruction::TypePointer { type_id, storage_class: enums::StorageClass::StorageClassPushConstant, .. } => {
type_id
},
_ => continue
};
// Computing the list of sets that are needed.
let sets_list = descriptors.iter().map(|d| d.set).collect::<HashSet<u32>>();
let mut output = String::new();
// Iterate once per set.
for &set in sets_list.iter() {
let descr = descriptors.iter().enumerate().filter(|&(_, d)| d.set == set)
.map(|(_, d)| {
format!("DescriptorDesc {{
binding: {binding},
ty: {desc_ty},
array_count: 1,
stages: stages.clone(),
readonly: {readonly},
}}", binding = d.binding, desc_ty = d.desc_ty,
readonly = if d.readonly { "true" } else { "false" })
})
.collect::<Vec<_>>();
output.push_str(&format!(r#"
fn set{set}_layout(stages: ShaderStages) -> VecIntoIter<DescriptorDesc> {{
vec![
{descr}
].into_iter()
}}
"#, set = set, descr = descr.join(",")));
let (_, size, _) = ::structs::type_from_id(doc, type_id);
let size = size.expect("Found runtime-sized push constants");
push_constants_size = cmp::max(push_constants_size, size);
}
let max_set = sets_list.iter().cloned().max().map(|v| v + 1).unwrap_or(0);
// Writing the body of the `descriptor` method.
let descriptor_body = descriptors.iter().map(|d| {
format!("({set}, {binding}) => Some(DescriptorDesc {{
ty: {desc_ty},
array_count: {array_count},
stages: self.0.clone(),
readonly: {readonly},
}}),", set = d.set, binding = d.binding, desc_ty = d.desc_ty, array_count = d.array_count,
readonly = if d.readonly { "true" } else { "false" })
output.push_str(&format!(r#"
}).collect::<Vec<_>>().concat();
let num_sets = 1 + descriptors.iter().fold(0, |s, d| cmp::max(s, d.set));
// Writing the body of the `num_bindings_in_set` method.
let num_bindings_in_set_body = {
(0 .. num_sets).map(|set| {
let num = 1 + descriptors.iter().filter(|d| d.set == set)
.fold(0, |s, d| cmp::max(s, d.binding));
format!("{set} => Some({num}),", set = set, num = num)
}).collect::<Vec<_>>().concat()
};
// Writing the body of the `descriptor_by_name` method.
let descriptor_by_name_body = descriptors.iter().map(|d| {
format!(r#"{name:?} => Some(({set}, {binding})),"#,
name = d.name, set = d.set, binding = d.binding)
}).collect::<Vec<_>>().concat();
// Writing the body of the `num_push_constants_ranges` method.
let num_push_constants_ranges_body = {
if push_constants_size == 0 {
"0"
} else {
"1"
}
};
// Writing the body of the `push_constants_range` method.
let push_constants_range_body = format!(r#"
if num != 0 || {pc_size} == 0 {{ return None; }}
Some(PipelineLayoutDescPcRange {{
offset: 0,
size: {pc_size},
stages: ShaderStages::all(), // FIXME: wrong
}})
"#, pc_size = push_constants_size);
format!(r#"
#[derive(Debug, Clone)]
pub struct Layout(ShaderStages);
#[allow(unsafe_code)]
unsafe impl PipelineLayoutDesc for Layout {{
type SetsIter = VecIntoIter<Self::DescIter>;
type DescIter = VecIntoIter<DescriptorDesc>;
fn num_sets(&self) -> usize {{
{num_sets}
}}
fn descriptors_desc(&self) -> Self::SetsIter {{
vec![
{layouts}
].into_iter()
fn num_bindings_in_set(&self, set: usize) -> Option<usize> {{
match set {{
{num_bindings_in_set_body}
_ => None
}}
}}
fn descriptor(&self, set: usize, binding: usize) -> Option<DescriptorDesc> {{
match (set, binding) {{
{descriptor_body}
_ => None
}}
}}
fn num_push_constants_ranges(&self) -> usize {{
{num_push_constants_ranges_body}
}}
fn push_constants_range(&self, num: usize) -> Option<PipelineLayoutDescPcRange> {{
{push_constants_range_body}
}}
}}
"#, layouts = (0 .. max_set).map(|n| format!("set{}_layout(self.0)", n)).collect::<Vec<_>>().join(",")));
output
#[allow(unsafe_code)]
unsafe impl PipelineLayoutDescNames for Layout {{
fn descriptor_by_name(&self, name: &str) -> Option<(usize, usize)> {{
match name {{
{descriptor_by_name_body}
_ => None
}}
}}
}}
"#, num_sets = num_sets, num_bindings_in_set_body = num_bindings_in_set_body,
descriptor_by_name_body = descriptor_by_name_body, descriptor_body = descriptor_body,
num_push_constants_ranges_body = num_push_constants_ranges_body,
push_constants_range_body = push_constants_range_body)
}
/// Assumes that `variable` is a variable with a `TypePointer` and returns the id of the pointed
@ -135,12 +196,12 @@ fn pointer_variable_ty(doc: &parse::Spirv, variable: u32) -> u32 {
}).next().unwrap()
}
/// Returns a `DescriptorDescTy` constructor and a bool indicating whether the descriptor is
/// read-only.
/// Returns a `DescriptorDescTy` constructor, a bool indicating whether the descriptor is
/// read-only, and the number of array elements.
///
/// See also section 14.5.2 of the Vulkan specs: Descriptor Set Interface
fn descriptor_infos(doc: &parse::Spirv, pointed_ty: u32, force_combined_image_sampled: bool)
-> Option<(String, bool)>
-> Option<(String, bool, u64)>
{
doc.instructions.iter().filter_map(|i| {
match i {
@ -170,10 +231,11 @@ fn descriptor_infos(doc: &parse::Spirv, pointed_ty: u32, force_combined_image_sa
let desc = format!("DescriptorDescTy::Buffer(DescriptorBufferDesc {{
dynamic: Some(false),
storage: {}
storage: {},
content: DescriptorBufferContentDesc::F32, // FIXME: wrong
}})", if is_ssbo { "true" } else { "false "});
Some((desc, true))
Some((desc, true, 1))
},
&parse::Instruction::TypeImage { result_id, ref dim, arrayed, ms, sampled,
@ -203,7 +265,7 @@ fn descriptor_infos(doc: &parse::Spirv, pointed_ty: u32, force_combined_image_sa
array_layers: {}
}}", ms, arrayed);
Some((desc, true))
Some((desc, true, 1))
} else if let &enums::Dim::DimBuffer = dim {
// We are a texel buffer.
@ -212,7 +274,7 @@ fn descriptor_infos(doc: &parse::Spirv, pointed_ty: u32, force_combined_image_sa
format: None, // TODO: specify format if known
}}", !sampled);
Some((desc, true))
Some((desc, true, 1))
} else {
// We are a sampled or storage image.
@ -236,7 +298,7 @@ fn descriptor_infos(doc: &parse::Spirv, pointed_ty: u32, force_combined_image_sa
array_layers: {},
}})", ty, sampled, dim, ms, arrayed);
Some((desc, true))
Some((desc, true, 1))
}
},
@ -248,7 +310,20 @@ fn descriptor_infos(doc: &parse::Spirv, pointed_ty: u32, force_combined_image_sa
&parse::Instruction::TypeSampler { result_id } if result_id == pointed_ty => {
let desc = format!("DescriptorDescTy::Sampler");
Some((desc, true))
Some((desc, true, 1))
},
&parse::Instruction::TypeArray { result_id, type_id, length_id } if result_id == pointed_ty => {
let (desc, readonly, arr) = match descriptor_infos(doc, type_id, false) {
None => return None,
Some(v) => v,
};
assert_eq!(arr, 1); // TODO: implement?
let len = doc.instructions.iter().filter_map(|e| {
match e { &parse::Instruction::Constant { result_id, ref data, .. } if result_id == length_id => Some(data.clone()), _ => None }
}).next().expect("failed to find array length");
let len = len.iter().rev().fold(0u64, |a, &b| (a << 32) | b as u64);
Some((desc, readonly, len))
},
_ => None, // TODO: other types
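To make the generated code concrete, here is a rough sketch of what `write_descriptor_sets` might now emit for a shader with a single read-only uniform buffer named `ubo` at set 0, binding 0, and no push constants (simplified; the real output also carries the imports emitted by `reflect`, and the shader name is hypothetical):

```rust
#[derive(Debug, Clone)]
pub struct Layout(ShaderStages);

#[allow(unsafe_code)]
unsafe impl PipelineLayoutDesc for Layout {
    fn num_sets(&self) -> usize { 1 }
    fn num_bindings_in_set(&self, set: usize) -> Option<usize> {
        match set { 0 => Some(1), _ => None }
    }
    fn descriptor(&self, set: usize, binding: usize) -> Option<DescriptorDesc> {
        match (set, binding) {
            (0, 0) => Some(DescriptorDesc {
                ty: DescriptorDescTy::Buffer(DescriptorBufferDesc {
                    dynamic: Some(false),
                    storage: false,
                    content: DescriptorBufferContentDesc::F32, // FIXME: wrong
                }),
                array_count: 1,
                stages: self.0.clone(),
                readonly: true,
            }),
            _ => None
        }
    }
    fn num_push_constants_ranges(&self) -> usize { 0 }
    fn push_constants_range(&self, _num: usize) -> Option<PipelineLayoutDescPcRange> {
        None // this shader declares no push constants
    }
}

#[allow(unsafe_code)]
unsafe impl PipelineLayoutDescNames for Layout {
    fn descriptor_by_name(&self, name: &str) -> Option<(usize, usize)> {
        match name {
            "ubo" => Some((0, 0)), // the uniform's name is taken from the SPIR-V
            _ => None
        }
    }
}
```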

View File

@ -65,9 +65,6 @@ pub fn reflect<R>(name: &str, mut spirv: R) -> Result<String, Error>
// now parsing the document
let doc = try!(parse::parse_spirv(&data));
// TODO: remove
println!("{:#?}", doc);
let mut output = String::new();
output.push_str(r#"
#[allow(unused_imports)]
@ -84,6 +81,8 @@ pub fn reflect<R>(name: &str, mut spirv: R) -> Result<String, Error>
#[allow(unused_imports)]
use vulkano::descriptor::descriptor::DescriptorBufferDesc;
#[allow(unused_imports)]
use vulkano::descriptor::descriptor::DescriptorBufferContentDesc;
#[allow(unused_imports)]
use vulkano::descriptor::descriptor::DescriptorImageDesc;
#[allow(unused_imports)]
use vulkano::descriptor::descriptor::DescriptorImageDescDimensions;
@ -102,7 +101,9 @@ pub fn reflect<R>(name: &str, mut spirv: R) -> Result<String, Error>
#[allow(unused_imports)]
use vulkano::descriptor::pipeline_layout::PipelineLayoutDesc;
#[allow(unused_imports)]
use vulkano::descriptor::pipeline_layout::UnsafePipelineLayout;
use vulkano::descriptor::pipeline_layout::PipelineLayoutDescNames;
#[allow(unused_imports)]
use vulkano::descriptor::pipeline_layout::PipelineLayoutDescPcRange;
"#);
{

View File

@ -29,12 +29,33 @@ pub fn write_structs(doc: &parse::Spirv) -> String {
result
}
/// Represents a Rust struct member.
struct Member {
name: String,
value: String,
offset: Option<usize>
}
impl Member {
fn declaration_text(&self) -> String {
let offset = match self.offset {
Some(o) => format!("/* offset: {} */", o),
_ => "".to_owned(),
};
format!(" pub {}: {} {}", self.name, self.value, offset)
}
fn copy_text(&self) -> String {
format!(" {name}: self.{name}", name = self.name)
}
}
/// Writes a single struct.
fn write_struct(doc: &parse::Spirv, struct_id: u32, members: &[u32]) -> String {
let name = ::name_from_id(doc, struct_id);
// Strings of each member definition.
let mut members_defs = Vec::with_capacity(members.len());
// The members of this struct.
let mut rust_members = Vec::with_capacity(members.len());
// Padding members will be named `_dummyN`, where `N` is taken from this counter.
let mut next_padding_num = 0;
@ -90,7 +111,11 @@ fn write_struct(doc: &parse::Spirv, struct_id: u32, members: &[u32]) -> String {
if spirv_offset != *current_rust_offset {
let diff = spirv_offset.checked_sub(*current_rust_offset).unwrap();
let padding_num = next_padding_num; next_padding_num += 1;
members_defs.push(format!("pub _dummy{}: [u8; {}]", padding_num, diff));
rust_members.push(Member {
name: format!("_dummy{}", padding_num),
value: format!("[u8; {}]", diff),
offset: None,
});
*current_rust_offset += diff;
}
}
@ -102,8 +127,11 @@ fn write_struct(doc: &parse::Spirv, struct_id: u32, members: &[u32]) -> String {
current_rust_offset = None;
}
members_defs.push(format!("pub {name}: {ty} /* offset: {offset} */",
name = member_name, ty = ty, offset = spirv_offset));
rust_members.push(Member {
name: member_name.to_owned(),
value: ty,
offset: Some(spirv_offset),
});
}
// Try to determine the total size of the struct in order to add padding at the end of the struct.
@ -139,20 +167,28 @@ fn write_struct(doc: &parse::Spirv, struct_id: u32, members: &[u32]) -> String {
if let (Some(cur_size), Some(req_size)) = (current_rust_offset, spirv_req_total_size) {
let diff = req_size.checked_sub(cur_size as u32).unwrap();
if diff >= 1 {
members_defs.push(format!("pub _dummy{}: [u8; {}]", next_padding_num, diff));
rust_members.push(Member {
name: format!("_dummy{}", next_padding_num),
value: format!("[u8; {}]", diff),
offset: None,
});
}
}
// We can only derive common traits if there's no unsized member in the struct.
let derive = if current_rust_offset.is_some() {
"#[derive(Copy, Clone, Debug, Default)]\n"
// We can only implement Clone if there's no unsized member in the struct.
let (impl_text, derive_text) = if current_rust_offset.is_some() {
let i = format!("\nimpl Clone for {name} {{\n fn clone(&self) -> Self {{\n \
{name} {{\n{copies}\n }}\n }}\n}}\n", name = name,
copies = rust_members.iter().map(Member::copy_text).collect::<Vec<_>>().join(",\n"));
(i, "#[derive(Copy)]")
} else {
""
("".to_owned(), "")
};
format!("#[repr(C)]\n{derive}\
pub struct {name} {{\n\t{members}\n}} /* total_size: {t:?} */\n",
derive = derive, name = name, members = members_defs.join(",\n\t"), t = spirv_req_total_size)
format!("#[repr(C)]{derive_text}\npub struct {name} {{\n{members}\n}} /* total_size: {t:?} */\n{impl_text}",
name = name,
members = rust_members.iter().map(Member::declaration_text).collect::<Vec<_>>().join(",\n"),
t = spirv_req_total_size, impl_text = impl_text, derive_text = derive_text)
}
/// Returns true if a `BuiltIn` decorator is applied on a struct member.
@ -175,7 +211,7 @@ fn is_builtin_member(doc: &parse::Spirv, id: u32, member_id: u32) -> bool {
/// Returns the type name to put in the Rust struct, and its size and alignment.
///
/// The size can be `None` if it's only known at runtime.
fn type_from_id(doc: &parse::Spirv, searched: u32) -> (String, Option<usize>, usize) {
pub fn type_from_id(doc: &parse::Spirv, searched: u32) -> (String, Option<usize>, usize) {
for instruction in doc.instructions.iter() {
match instruction {
&parse::Instruction::TypeBool { result_id } if result_id == searched => {
@ -269,6 +305,7 @@ fn type_from_id(doc: &parse::Spirv, searched: u32) -> (String, Option<usize>, us
return (format!("[{}]", t), None, t_align);
},
&parse::Instruction::TypeStruct { result_id, ref member_types } if result_id == searched => {
// TODO: take the Offset member decoration into account?
let name = ::name_from_id(doc, result_id);
let size = member_types.iter().filter_map(|&t| type_from_id(doc, t).1).fold(0, |a, b| a + b);
let align = member_types.iter().map(|&t| type_from_id(doc, t).2).max().unwrap_or(1);
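As an illustration of the new output shape, a hypothetical SPIR-V struct holding a `vec2` at offset 0 and a `vec4` at offset 16 might now come out roughly like this (names are illustrative; padding members carry no offset comment, and `Clone` is written by hand, presumably because large `[u8; N]` padding arrays implement `Copy` but not, at the time, `Clone`):

```rust
#[repr(C)]#[derive(Copy)]
pub struct Data {
    pub dir: [f32; 2] /* offset: 0 */,
    pub _dummy0: [u8; 8],
    pub color: [f32; 4] /* offset: 16 */
} /* total_size: Some(32) */

impl Clone for Data {
    fn clone(&self) -> Self {
        Data {
            dir: self.dir,
            _dummy0: self._dummy0,
            color: self.color
        }
    }
}
```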

View File

@ -1,11 +1,12 @@
[package]
name = "vulkano-win"
version = "0.3.1"
version = "0.3.2"
authors = ["Pierre Krieger <pierre.krieger1708@gmail.com>"]
repository = "https://github.com/tomaka/vulkano"
description = "Link between vulkano and winit"
license = "MIT/Apache-2.0"
categories = ["rendering::graphics-api"]
[dependencies]
vulkano = { version = "0.3.0", path = "../vulkano" }
winit = "0.5.3"
winit = "0.6.4"

View File

@ -10,7 +10,7 @@ use vulkano::instance::Instance;
use vulkano::instance::InstanceExtensions;
use vulkano::swapchain::Surface;
use vulkano::swapchain::SurfaceCreationError;
use winit::WindowBuilder;
use winit::{EventsLoop, WindowBuilder};
use winit::CreationError as WindowCreationError;
pub fn required_extensions() -> InstanceExtensions {
@ -34,12 +34,12 @@ pub fn required_extensions() -> InstanceExtensions {
}
pub trait VkSurfaceBuild {
fn build_vk_surface(self, instance: &Arc<Instance>) -> Result<Window, CreationError>;
fn build_vk_surface(self, events_loop: &EventsLoop, instance: &Arc<Instance>) -> Result<Window, CreationError>;
}
impl VkSurfaceBuild for WindowBuilder {
fn build_vk_surface(self, instance: &Arc<Instance>) -> Result<Window, CreationError> {
let window = try!(self.build());
fn build_vk_surface(self, events_loop: &EventsLoop, instance: &Arc<Instance>) -> Result<Window, CreationError> {
let window = try!(self.build(events_loop));
let surface = try!(unsafe { winit_to_surface(instance, &window) });
Ok(Window {

View File

@ -1,17 +1,18 @@
[package]
name = "vulkano"
version = "0.3.1"
version = "0.3.2"
authors = ["Pierre Krieger <pierre.krieger1708@gmail.com>"]
repository = "https://github.com/tomaka/vulkano"
description = "Safe wrapper for the Vulkan graphics API"
license = "MIT/Apache-2.0"
documentation = "https://docs.rs/vulkano"
categories = ["rendering::graphics-api"]
build = "build.rs"
[dependencies]
crossbeam = "0.2.5"
fnv = "1.0.2"
shared_library = "0.1.4"
smallvec = "0.2.0"
lazy_static = "0.1.15"
vk-sys = { version = "0.2.0", path = "../vk-sys" }
crossbeam = "0.2.10"
fnv = "1.0.5"
shared_library = "0.1.5"
smallvec = "0.3.1"
lazy_static = "0.2.2"
vk-sys = { version = "0.2.2", path = "../vk-sys" }

View File

@ -20,33 +20,25 @@ use std::marker::PhantomData;
use std::mem;
use std::ops::Deref;
use std::ops::DerefMut;
use std::ops::Range;
use std::ptr;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::RwLock;
use std::sync::RwLockReadGuard;
use std::sync::RwLockWriteGuard;
use std::sync::Weak;
use std::time::Duration;
use std::sync::TryLockError;
use smallvec::SmallVec;
use buffer::sys::BufferCreationError;
use buffer::sys::SparseLevel;
use buffer::sys::UnsafeBuffer;
use buffer::sys::Usage;
use buffer::traits::AccessRange;
use buffer::traits::BufferAccess;
use buffer::traits::BufferInner;
use buffer::traits::Buffer;
use buffer::traits::CommandBufferState;
use buffer::traits::CommandListState;
use buffer::traits::GpuAccessResult;
use buffer::traits::SubmitInfos;
use buffer::traits::TrackedBuffer;
use buffer::traits::TypedBuffer;
use buffer::traits::PipelineBarrierRequest;
use buffer::traits::PipelineMemoryBarrierRequest;
use command_buffer::Submission;
use buffer::traits::TypedBufferAccess;
use device::Device;
use device::DeviceOwned;
use device::Queue;
use instance::QueueFamily;
use memory::Content;
@ -55,9 +47,7 @@ use memory::pool::AllocLayout;
use memory::pool::MemoryPool;
use memory::pool::MemoryPoolAlloc;
use memory::pool::StdMemoryPool;
use sync::FenceWaitError;
use sync::Sharing;
use sync::Fence;
use sync::AccessFlagBits;
use sync::PipelineStages;
@ -72,23 +62,17 @@ pub struct CpuAccessibleBuffer<T: ?Sized, A = Arc<StdMemoryPool>> where A: Memor
// The memory held by the buffer.
memory: A::Alloc,
// Access pattern of the buffer. Can be read-locked for a shared CPU access, or write-locked
// for either a write CPU access or a GPU access.
access: RwLock<()>,
// Queue families allowed to access this buffer.
queue_families: SmallVec<[u32; 4]>,
// Latest submission that uses this buffer.
// Also used to block any attempt to submit this buffer while it is accessed by the CPU.
latest_submission: RwLock<LatestSubmission>,
// Necessary to make it compile.
marker: PhantomData<Box<T>>,
}
#[derive(Debug)]
struct LatestSubmission {
read_submissions: Mutex<Vec<Weak<Submission>>>,
write_submission: Option<Weak<Submission>>, // TODO: can use `Weak::new()` once it's stabilized
}
impl<T> CpuAccessibleBuffer<T> {
/// Deprecated. Use `from_data` instead.
#[deprecated]
@ -118,7 +102,7 @@ impl<T> CpuAccessibleBuffer<T> {
// TODO: check whether that's true ^
{
let mut mapping = uninitialized.write(Duration::new(0, 0)).unwrap();
let mut mapping = uninitialized.write().unwrap();
ptr::write::<T>(&mut *mapping, data)
}
@ -155,7 +139,7 @@ impl<T> CpuAccessibleBuffer<[T]> {
// TODO: check whether that's true ^
{
let mut mapping = uninitialized.write(Duration::new(0, 0)).unwrap();
let mut mapping = uninitialized.write().unwrap();
for (i, o) in data.zip(mapping.iter_mut()) {
ptr::write(o, i);
@ -233,11 +217,8 @@ impl<T: ?Sized> CpuAccessibleBuffer<T> {
Ok(Arc::new(CpuAccessibleBuffer {
inner: buffer,
memory: mem,
access: RwLock::new(()),
queue_families: queue_families,
latest_submission: RwLock::new(LatestSubmission {
read_submissions: Mutex::new(vec![]),
write_submission: None,
}),
marker: PhantomData,
}))
}
@ -269,22 +250,16 @@ impl<T: ?Sized, A> CpuAccessibleBuffer<T, A> where T: Content + 'static, A: Memo
///
/// After this function successfully locks the buffer, any attempt to submit a command buffer
/// that uses it will block until you unlock it.
// TODO: remove timeout parameter since CPU-side locking can't use it
#[inline]
pub fn read(&self, timeout: Duration) -> Result<ReadLock<T>, FenceWaitError> {
let submission = self.latest_submission.read().unwrap();
// TODO: should that set the write_submission to None?
if let Some(submission) = submission.write_submission.clone().and_then(|s| s.upgrade()) {
try!(submission.wait(timeout));
}
pub fn read(&self) -> Result<ReadLock<T>, TryLockError<RwLockReadGuard<()>>> {
let lock = try!(self.access.try_read());
let offset = self.memory.offset();
let range = offset .. offset + self.inner.size();
Ok(ReadLock {
inner: unsafe { self.memory.mapped_memory().unwrap().read_write(range) },
lock: submission,
lock: lock,
})
}
@ -296,131 +271,82 @@ impl<T: ?Sized, A> CpuAccessibleBuffer<T, A> where T: Content + 'static, A: Memo
///
/// After this function successfully locks the buffer, any attempt to submit a command buffer
/// that uses it will block until you unlock it.
// TODO: remove timeout parameter since CPU-side locking can't use it
#[inline]
pub fn write(&self, timeout: Duration) -> Result<WriteLock<T>, FenceWaitError> {
let mut submission = self.latest_submission.write().unwrap();
{
let mut read_submissions = submission.read_submissions.get_mut().unwrap();
for submission in read_submissions.drain(..) {
if let Some(submission) = submission.upgrade() {
try!(submission.wait(timeout));
}
}
}
if let Some(submission) = submission.write_submission.take().and_then(|s| s.upgrade()) {
try!(submission.wait(timeout));
}
pub fn write(&self) -> Result<WriteLock<T>, TryLockError<RwLockWriteGuard<()>>> {
let lock = try!(self.access.try_write());
let offset = self.memory.offset();
let range = offset .. offset + self.inner.size();
Ok(WriteLock {
inner: unsafe { self.memory.mapped_memory().unwrap().read_write(range) },
lock: submission,
lock: lock,
})
}
}
unsafe impl<T: ?Sized, A> Buffer for CpuAccessibleBuffer<T, A>
// FIXME: wrong
unsafe impl<T: ?Sized, A> Buffer for Arc<CpuAccessibleBuffer<T, A>>
where T: 'static + Send + Sync, A: MemoryPool
{
type Access = Self;
#[inline]
fn inner(&self) -> &UnsafeBuffer {
&self.inner
}
#[inline]
fn blocks(&self, _: Range<usize>) -> Vec<usize> {
vec![0]
fn access(self) -> Self {
self
}
#[inline]
fn block_memory_range(&self, _: usize) -> Range<usize> {
0 .. self.size()
}
fn needs_fence(&self, _: bool, _: Range<usize>) -> Option<bool> {
Some(true)
}
#[inline]
fn host_accesses(&self, _: usize) -> bool {
true
}
unsafe fn gpu_access(&self, ranges: &mut Iterator<Item = AccessRange>,
submission: &Arc<Submission>) -> GpuAccessResult
{
let queue_id = submission.queue().family().id();
if self.queue_families.iter().find(|&&id| id == queue_id).is_none() {
panic!("Trying to submit to family {} a buffer suitable for families {:?}",
queue_id, self.queue_families);
}
let is_written = {
let mut written = false;
while let Some(r) = ranges.next() { if r.write { written = true; break; } }
written
};
let dependencies = if is_written {
let mut submissions = self.latest_submission.write().unwrap();
let write_dep = mem::replace(&mut submissions.write_submission,
Some(Arc::downgrade(submission)));
let mut read_submissions = submissions.read_submissions.get_mut().unwrap();
let read_submissions = mem::replace(&mut *read_submissions, Vec::new());
read_submissions.into_iter()
.chain(write_dep.into_iter())
.filter_map(|s| s.upgrade())
.collect::<Vec<_>>()
} else {
let submissions = self.latest_submission.read().unwrap();
let mut read_submissions = submissions.read_submissions.lock().unwrap();
read_submissions.push(Arc::downgrade(submission));
submissions.write_submission.clone().and_then(|s| s.upgrade()).into_iter().collect()
};
GpuAccessResult {
dependencies: dependencies,
additional_wait_semaphore: None,
additional_signal_semaphore: None,
}
fn size(&self) -> usize {
self.inner.size()
}
}
unsafe impl<T: ?Sized, A> TypedBuffer for CpuAccessibleBuffer<T, A>
unsafe impl<T: ?Sized, A> TypedBuffer for Arc<CpuAccessibleBuffer<T, A>>
where T: 'static + Send + Sync, A: MemoryPool
{
type Content = T;
}
unsafe impl<T: ?Sized, A> TrackedBuffer for CpuAccessibleBuffer<T, A>
unsafe impl<T: ?Sized, A> BufferAccess for CpuAccessibleBuffer<T, A>
where T: 'static + Send + Sync, A: MemoryPool
{
type CommandListState = CpuAccessibleBufferClState;
type FinishedState = CpuAccessibleBufferFinished;
#[inline]
fn inner(&self) -> BufferInner {
BufferInner {
buffer: &self.inner,
offset: 0,
}
}
#[inline]
fn initial_state(&self) -> Self::CommandListState {
// We don't know when the user is going to write to the buffer, so we just assume that it's
// all the time.
CpuAccessibleBufferClState {
size: self.size(),
stages: PipelineStages { host: true, .. PipelineStages::none() },
access: AccessFlagBits { host_write: true, .. AccessFlagBits::none() },
first_stages: None,
write: true,
earliest_previous_transition: 0,
needs_flush_at_the_end: false,
}
fn conflict_key(&self, self_offset: usize, self_size: usize) -> u64 {
self.inner.key()
}
#[inline]
fn try_gpu_lock(&self, exclusive_access: bool, queue: &Queue) -> bool {
true // FIXME:
}
#[inline]
unsafe fn increase_gpu_lock(&self) {
// FIXME:
}
}
unsafe impl<T: ?Sized, A> TypedBufferAccess for CpuAccessibleBuffer<T, A>
where T: 'static + Send + Sync, A: MemoryPool
{
type Content = T;
}
unsafe impl<T: ?Sized, A> DeviceOwned for CpuAccessibleBuffer<T, A>
where A: MemoryPool
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
@ -434,148 +360,18 @@ pub struct CpuAccessibleBufferClState {
needs_flush_at_the_end: bool,
}
impl CommandListState for CpuAccessibleBufferClState {
type FinishedState = CpuAccessibleBufferFinished;
fn transition(self, num_command: usize, _: &UnsafeBuffer, _: usize, _: usize, write: bool,
stage: PipelineStages, access: AccessFlagBits)
-> (Self, Option<PipelineBarrierRequest>)
{
debug_assert!(!stage.host);
debug_assert!(!access.host_read);
debug_assert!(!access.host_write);
if write {
// Write after read or write after write.
let new_state = CpuAccessibleBufferClState {
size: self.size,
stages: stage,
access: access,
first_stages: Some(self.first_stages.clone().unwrap_or(stage)),
write: true,
earliest_previous_transition: num_command,
needs_flush_at_the_end: true,
};
let barrier = PipelineBarrierRequest {
after_command_num: self.earliest_previous_transition,
source_stage: self.stages,
destination_stages: stage,
by_region: true,
memory_barrier: if self.write {
Some(PipelineMemoryBarrierRequest {
offset: 0,
size: self.size,
source_access: self.access,
destination_access: access,
})
} else {
None
},
};
(new_state, Some(barrier))
} else if self.write {
// Read after write.
let new_state = CpuAccessibleBufferClState {
size: self.size,
stages: stage,
access: access,
first_stages: Some(self.first_stages.clone().unwrap_or(stage)),
write: false,
earliest_previous_transition: num_command,
needs_flush_at_the_end: self.needs_flush_at_the_end,
};
let barrier = PipelineBarrierRequest {
after_command_num: self.earliest_previous_transition,
source_stage: self.stages,
destination_stages: stage,
by_region: true,
memory_barrier: Some(PipelineMemoryBarrierRequest {
offset: 0,
size: self.size,
source_access: self.access,
destination_access: access,
}),
};
(new_state, Some(barrier))
} else {
// Read after read.
let new_state = CpuAccessibleBufferClState {
size: self.size,
stages: self.stages | stage,
access: self.access | access,
first_stages: Some(self.first_stages.clone().unwrap_or(stage)),
write: false,
earliest_previous_transition: self.earliest_previous_transition,
needs_flush_at_the_end: self.needs_flush_at_the_end,
};
(new_state, None)
}
}
fn finish(self) -> (Self::FinishedState, Option<PipelineBarrierRequest>) {
let barrier = if self.needs_flush_at_the_end {
let barrier = PipelineBarrierRequest {
after_command_num: self.earliest_previous_transition,
source_stage: self.stages,
destination_stages: PipelineStages { host: true, .. PipelineStages::none() },
by_region: true,
memory_barrier: Some(PipelineMemoryBarrierRequest {
offset: 0,
size: self.size,
source_access: self.access,
destination_access: AccessFlagBits { host_read: true,
.. AccessFlagBits::none() },
}),
};
Some(barrier)
} else {
None
};
let finished = CpuAccessibleBufferFinished {
first_stages: self.first_stages.unwrap_or(PipelineStages::none()),
write: self.needs_flush_at_the_end,
};
(finished, barrier)
}
}
pub struct CpuAccessibleBufferFinished {
first_stages: PipelineStages,
write: bool,
}
impl CommandBufferState for CpuAccessibleBufferFinished {
fn on_submit<B, F>(&self, buffer: &B, queue: &Arc<Queue>, fence: F) -> SubmitInfos
where B: Buffer, F: FnOnce() -> Arc<Fence>
{
// FIXME: implement correctly
SubmitInfos {
pre_semaphore: None,
post_semaphore: None,
pre_barrier: None,
post_barrier: None,
}
}
}
/// Object that can be used to read or write the content of a `CpuAccessibleBuffer`.
///
/// Note that this object holds a rwlock read guard on the chunk. If another thread tries to access
/// this buffer's content or tries to submit a GPU command that uses this buffer, it will block.
pub struct ReadLock<'a, T: ?Sized + 'a> {
inner: MemCpuAccess<'a, T>,
lock: RwLockReadGuard<'a, LatestSubmission>,
lock: RwLockReadGuard<'a, ()>,
}
impl<'a, T: ?Sized + 'a> ReadLock<'a, T> {
@ -606,7 +402,7 @@ impl<'a, T: ?Sized + 'a> Deref for ReadLock<'a, T> {
/// this buffer's content or tries to submit a GPU command that uses this buffer, it will block.
pub struct WriteLock<'a, T: ?Sized + 'a> {
inner: MemCpuAccess<'a, T>,
lock: RwLockWriteGuard<'a, LatestSubmission>,
lock: RwLockWriteGuard<'a, ()>,
}
impl<'a, T: ?Sized + 'a> WriteLock<'a, T> {
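Taken together, the locking API now fails fast instead of waiting on fences: the `Duration` timeout parameter is gone, and contention surfaces as a `TryLockError`. A minimal usage sketch, assuming a `buffer: Arc<CpuAccessibleBuffer<u32>>` created earlier:

```rust
// Read access: fails with `TryLockError` if a write lock is held.
{
    let content = buffer.read().expect("buffer is locked for writing");
    println!("current value: {}", *content);
} // the read guard is released here

// Write access: exclusive; submitting a command buffer that uses this
// buffer will block until the guard is dropped.
{
    let mut content = buffer.write().expect("buffer is in use");
    *content += 1;
}
```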

View File

@ -0,0 +1,552 @@
// Copyright (c) 2017 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::iter;
use std::marker::PhantomData;
use std::mem;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::MutexGuard;
use smallvec::SmallVec;
use buffer::sys::BufferCreationError;
use buffer::sys::SparseLevel;
use buffer::sys::UnsafeBuffer;
use buffer::sys::Usage;
use buffer::traits::BufferAccess;
use buffer::traits::BufferInner;
use buffer::traits::Buffer;
use buffer::traits::TypedBuffer;
use buffer::traits::TypedBufferAccess;
use device::Device;
use device::DeviceOwned;
use device::Queue;
use instance::QueueFamily;
use memory::pool::AllocLayout;
use memory::pool::MemoryPool;
use memory::pool::MemoryPoolAlloc;
use memory::pool::StdMemoryPool;
use sync::Sharing;
use OomError;
/// Buffer from which "sub-buffers" of fixed size can be individually allocated.
///
/// This buffer is especially suitable when you need to upload or download some data every frame.
///
/// # Usage
///
/// A `CpuBufferPool` is a bit similar to a `Vec`. You start by creating an empty pool, then you
/// grab elements from the pool and use them, and if the pool is full it will automatically grow
/// in size.
///
/// Unlike a `Vec`, however, elements automatically free themselves when they are dropped (i.e.
/// usually when the GPU is done using them).
///
/// # Arc-like
///
/// The `CpuBufferPool` struct internally contains an `Arc`. Cloning a `CpuBufferPool` is cheap,
/// and all the clones share the same underlying buffer.
///
pub struct CpuBufferPool<T: ?Sized, A = Arc<StdMemoryPool>> where A: MemoryPool {
// The device of the pool.
device: Arc<Device>,
// The memory pool to use for allocations.
pool: A,
// Current buffer from which subbuffers are grabbed.
current_buffer: Mutex<Option<Arc<ActualBuffer<A>>>>,
// Size in bytes of one subbuffer.
one_size: usize,
// Buffer usage.
usage: Usage,
// Queue families allowed to access this buffer.
queue_families: SmallVec<[u32; 4]>,
// Necessary to make it compile.
marker: PhantomData<Box<T>>,
}
// One buffer of the pool.
struct ActualBuffer<A> where A: MemoryPool {
// Inner content.
inner: UnsafeBuffer,
// The memory held by the buffer.
memory: A::Alloc,
// Access pattern of the subbuffers.
subbuffers: Vec<ActualBufferSubbuffer>,
// The subbuffer that should be available next.
next_subbuffer: AtomicUsize,
// Number of subbuffers in the buffer.
capacity: usize,
}
// Access pattern of one subbuffer.
#[derive(Debug)]
struct ActualBufferSubbuffer {
// Number of `CpuBufferPoolSubbuffer` objects that point to this subbuffer.
num_cpu_accesses: AtomicUsize,
// Number of `CpuBufferPoolSubbuffer` objects that point to this subbuffer and that have been
// GPU-locked.
num_gpu_accesses: AtomicUsize,
}
/// A subbuffer allocated from a `CpuBufferPool`.
///
/// When this object is destroyed, the subbuffer is automatically reclaimed by the pool.
pub struct CpuBufferPoolSubbuffer<T: ?Sized, A> where A: MemoryPool {
buffer: Arc<ActualBuffer<A>>,
// Index of the subbuffer within `buffer`.
subbuffer_index: usize,
// Size in bytes of the subbuffer.
size: usize,
// Whether this subbuffer was locked on the GPU.
// If true, then num_gpu_accesses must be decreased.
gpu_locked: AtomicBool,
// Necessary to make it compile.
marker: PhantomData<Box<T>>,
}
impl<T> CpuBufferPool<T> {
#[inline]
pub fn new<'a, I>(device: Arc<Device>, usage: &Usage, queue_families: I)
-> CpuBufferPool<T>
where I: IntoIterator<Item = QueueFamily<'a>>
{
unsafe {
CpuBufferPool::raw(device, mem::size_of::<T>(), usage, queue_families)
}
}
/// Builds a `CpuBufferPool` meant for simple uploads.
///
/// Shortcut for a pool that can only be used as transfer sources and with exclusive queue
/// family accesses.
#[inline]
pub fn upload(device: Arc<Device>) -> CpuBufferPool<T> {
CpuBufferPool::new(device, &Usage::transfer_source(), iter::empty())
}
}
impl<T> CpuBufferPool<[T]> {
#[inline]
pub fn array<'a, I>(device: Arc<Device>, len: usize, usage: &Usage, queue_families: I)
-> CpuBufferPool<[T]>
where I: IntoIterator<Item = QueueFamily<'a>>
{
unsafe {
CpuBufferPool::raw(device, mem::size_of::<T>() * len, usage, queue_families)
}
}
}
impl<T: ?Sized> CpuBufferPool<T> {
pub unsafe fn raw<'a, I>(device: Arc<Device>, one_size: usize,
usage: &Usage, queue_families: I) -> CpuBufferPool<T>
where I: IntoIterator<Item = QueueFamily<'a>>
{
let queue_families = queue_families.into_iter().map(|f| f.id())
.collect::<SmallVec<[u32; 4]>>();
let pool = Device::standard_pool(&device);
CpuBufferPool {
device: device,
pool: pool,
current_buffer: Mutex::new(None),
one_size: one_size,
usage: usage.clone(),
queue_families: queue_families,
marker: PhantomData,
}
}
/// Returns the current capacity of the pool.
pub fn capacity(&self) -> usize {
match *self.current_buffer.lock().unwrap() {
None => 0,
Some(ref buf) => buf.capacity,
}
}
}
impl<T, A> CpuBufferPool<T, A> where A: MemoryPool {
/// Sets the capacity to `capacity`, or does nothing if the capacity is already higher.
///
/// Since this can involve a memory allocation, an `OomError` can happen.
pub fn reserve(&self, capacity: usize) -> Result<(), OomError> {
let mut cur_buf = self.current_buffer.lock().unwrap();
// Check current capacity.
match *cur_buf {
Some(ref buf) if buf.capacity >= capacity => {
return Ok(())
},
_ => ()
};
self.reset_buf(&mut cur_buf, capacity)
}
/// Grants access to a new subbuffer and puts `data` in it.
///
/// If no subbuffer is available (because they are still in use by the GPU), a new buffer will
/// automatically be allocated.
///
/// > **Note**: You can think of it like a `Vec`. If you insert an element and the `Vec` is not
/// > large enough, a new chunk of memory is automatically allocated.
pub fn next(&self, data: T) -> CpuBufferPoolSubbuffer<T, A> {
let mut mutex = self.current_buffer.lock().unwrap();
let data = match self.try_next_impl(&mut mutex, data) {
Ok(n) => return n,
Err(d) => d,
};
let next_capacity = match *mutex {
Some(ref b) => b.capacity * 2,
None => 3,
};
self.reset_buf(&mut mutex, next_capacity).unwrap(); /* FIXME: error */
match self.try_next_impl(&mut mutex, data) {
Ok(n) => n,
Err(_) => unreachable!()
}
}
/// Grants access to a new subbuffer and puts `data` in it.
///
/// Returns `None` if no subbuffer is available.
///
/// A `CpuBufferPool` is always empty the first time it is used, so `try_next` will always
/// return `None` on the first call.
#[inline]
pub fn try_next(&self, data: T) -> Option<CpuBufferPoolSubbuffer<T, A>> {
let mut mutex = self.current_buffer.lock().unwrap();
self.try_next_impl(&mut mutex, data).ok()
}
// Creates a new buffer and sets it as current.
fn reset_buf(&self, cur_buf_mutex: &mut MutexGuard<Option<Arc<ActualBuffer<A>>>>, capacity: usize) -> Result<(), OomError> {
unsafe {
let (buffer, mem_reqs) = {
let sharing = if self.queue_families.len() >= 2 {
Sharing::Concurrent(self.queue_families.iter().cloned())
} else {
Sharing::Exclusive
};
let total_size = match self.one_size.checked_mul(capacity) {
Some(s) => s,
None => return Err(OomError::OutOfDeviceMemory),
};
match UnsafeBuffer::new(&self.device, total_size, &self.usage, sharing, SparseLevel::none()) {
Ok(b) => b,
Err(BufferCreationError::OomError(err)) => return Err(err),
Err(_) => unreachable!() // We don't use sparse binding, therefore the other
// errors can't happen
}
};
let mem_ty = self.device.physical_device().memory_types()
.filter(|t| (mem_reqs.memory_type_bits & (1 << t.id())) != 0)
.filter(|t| t.is_host_visible())
.next().unwrap(); // Vk specs guarantee that this can't fail
let mem = try!(MemoryPool::alloc(&self.pool, mem_ty,
mem_reqs.size, mem_reqs.alignment, AllocLayout::Linear));
debug_assert!((mem.offset() % mem_reqs.alignment) == 0);
debug_assert!(mem.mapped_memory().is_some());
try!(buffer.bind_memory(mem.memory(), mem.offset()));
**cur_buf_mutex = Some(Arc::new(ActualBuffer {
inner: buffer,
memory: mem,
subbuffers: {
let mut v = Vec::with_capacity(capacity);
for _ in 0 .. capacity {
v.push(ActualBufferSubbuffer {
num_cpu_accesses: AtomicUsize::new(0),
num_gpu_accesses: AtomicUsize::new(0),
});
}
v
},
capacity: capacity,
next_subbuffer: AtomicUsize::new(0),
}));
Ok(())
}
}
// Tries to lock a subbuffer from the current buffer.
fn try_next_impl(&self, cur_buf_mutex: &mut MutexGuard<Option<Arc<ActualBuffer<A>>>>, data: T)
-> Result<CpuBufferPoolSubbuffer<T, A>, T>
{
// Grab the current buffer. Return `Err` if the pool wasn't "initialized" yet.
let current_buffer = match cur_buf_mutex.clone() {
Some(b) => b,
None => return Err(data)
};
// Grab the next subbuffer to use.
let next_subbuffer = {
// Since this is the only code that touches `next_subbuffer`, and since we hold a mutex
// lock on the buffer, `next_subbuffer` can't be accessed concurrently.
let val = current_buffer.next_subbuffer.fetch_add(1, Ordering::Relaxed);
// TODO: handle overflows?
// TODO: rewrite this in a proper way by holding an intermediary struct in the mutex instead of the Arc directly
val % current_buffer.capacity
};
// Check if subbuffer is already taken. If so, the pool is full.
if current_buffer.subbuffers[next_subbuffer].num_cpu_accesses.compare_and_swap(0, 1, Ordering::SeqCst) != 0 {
return Err(data);
}
// Reset num_gpu_accesses.
current_buffer.subbuffers[next_subbuffer].num_gpu_accesses.store(0, Ordering::SeqCst);
// Write `data` in the memory.
unsafe {
let range = (next_subbuffer * self.one_size) .. ((next_subbuffer + 1) * self.one_size);
let mut mapping = current_buffer.memory.mapped_memory().unwrap().read_write(range);
*mapping = data;
}
Ok(CpuBufferPoolSubbuffer {
buffer: current_buffer,
subbuffer_index: next_subbuffer,
gpu_locked: AtomicBool::new(false),
size: self.one_size,
marker: PhantomData,
})
}
}
// Can't automatically derive `Clone`, because deriving would add a `T: Clone` requirement.
impl<T: ?Sized, A> Clone for CpuBufferPool<T, A> where A: MemoryPool + Clone {
fn clone(&self) -> Self {
let buf = self.current_buffer.lock().unwrap();
CpuBufferPool {
device: self.device.clone(),
pool: self.pool.clone(),
current_buffer: Mutex::new(buf.clone()),
one_size: self.one_size,
usage: self.usage.clone(),
queue_families: self.queue_families.clone(),
marker: PhantomData,
}
}
}
unsafe impl<T: ?Sized, A> DeviceOwned for CpuBufferPool<T, A>
where A: MemoryPool
{
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
unsafe impl<T: ?Sized, A> Buffer for CpuBufferPoolSubbuffer<T, A>
where A: MemoryPool
{
type Access = Self;
#[inline]
fn access(self) -> Self {
self
}
#[inline]
fn size(&self) -> usize {
self.size
}
}
unsafe impl<T: ?Sized, A> TypedBuffer for CpuBufferPoolSubbuffer<T, A>
where A: MemoryPool
{
type Content = T;
}
impl<T: ?Sized, A> Clone for CpuBufferPoolSubbuffer<T, A> where A: MemoryPool {
fn clone(&self) -> CpuBufferPoolSubbuffer<T, A> {
let old_val = self.buffer.subbuffers[self.subbuffer_index].num_cpu_accesses.fetch_add(1, Ordering::SeqCst);
debug_assert!(old_val >= 1);
CpuBufferPoolSubbuffer {
buffer: self.buffer.clone(),
subbuffer_index: self.subbuffer_index,
gpu_locked: AtomicBool::new(false),
size: self.size,
marker: PhantomData,
}
}
}
unsafe impl<T: ?Sized, A> BufferAccess for CpuBufferPoolSubbuffer<T, A>
where A: MemoryPool
{
#[inline]
fn inner(&self) -> BufferInner {
BufferInner {
buffer: &self.buffer.inner,
offset: self.subbuffer_index * self.size,
}
}
#[inline]
fn size(&self) -> usize {
self.size
}
#[inline]
fn conflict_key(&self, self_offset: usize, self_size: usize) -> u64 {
self.buffer.inner.key() + self.subbuffer_index as u64
}
#[inline]
fn try_gpu_lock(&self, _: bool, _: &Queue) -> bool {
let in_use = &self.buffer.subbuffers[self.subbuffer_index].num_gpu_accesses;
if in_use.compare_and_swap(0, 1, Ordering::SeqCst) != 0 {
return false;
}
let was_locked = self.gpu_locked.swap(true, Ordering::SeqCst);
debug_assert!(!was_locked);
true
}
#[inline]
unsafe fn increase_gpu_lock(&self) {
let was_locked = self.gpu_locked.swap(true, Ordering::SeqCst);
debug_assert!(!was_locked);
let in_use = &self.buffer.subbuffers[self.subbuffer_index];
let num_usages = in_use.num_gpu_accesses.fetch_add(1, Ordering::SeqCst);
debug_assert!(num_usages >= 1);
debug_assert!(num_usages <= in_use.num_cpu_accesses.load(Ordering::SeqCst));
}
}
unsafe impl<T: ?Sized, A> TypedBufferAccess for CpuBufferPoolSubbuffer<T, A>
where A: MemoryPool
{
type Content = T;
}
unsafe impl<T: ?Sized, A> DeviceOwned for CpuBufferPoolSubbuffer<T, A>
where A: MemoryPool
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.buffer.inner.device()
}
}
impl<T: ?Sized, A> Drop for CpuBufferPoolSubbuffer<T, A>
where A: MemoryPool
{
#[inline]
fn drop(&mut self) {
let in_use = &self.buffer.subbuffers[self.subbuffer_index];
let prev_val = in_use.num_cpu_accesses.fetch_sub(1, Ordering::SeqCst);
debug_assert!(prev_val >= 1);
if self.gpu_locked.load(Ordering::SeqCst) {
let was_in_use = in_use.num_gpu_accesses.fetch_sub(1, Ordering::SeqCst);
debug_assert!(was_in_use >= 1);
}
}
}
#[cfg(test)]
mod tests {
use std::mem;
use buffer::CpuBufferPool;
#[test]
fn basic_create() {
let (device, _) = gfx_dev_and_queue!();
let _ = CpuBufferPool::<u8>::upload(device);
}
#[test]
fn reserve() {
let (device, _) = gfx_dev_and_queue!();
let pool = CpuBufferPool::<u8>::upload(device);
assert_eq!(pool.capacity(), 0);
pool.reserve(83).unwrap();
assert_eq!(pool.capacity(), 83);
}
#[test]
fn capacity_increase() {
let (device, _) = gfx_dev_and_queue!();
let pool = CpuBufferPool::upload(device);
assert_eq!(pool.capacity(), 0);
pool.next(12);
let first_cap = pool.capacity();
assert!(first_cap >= 1);
for _ in 0 .. first_cap + 5 {
mem::forget(pool.next(12));
}
assert!(pool.capacity() > first_cap);
}
#[test]
fn reuse_subbuffers() {
let (device, _) = gfx_dev_and_queue!();
let pool = CpuBufferPool::upload(device);
assert_eq!(pool.capacity(), 0);
let mut capacity = None;
for _ in 0 .. 64 {
pool.next(12);
let new_cap = pool.capacity();
assert!(new_cap >= 1);
match capacity {
None => capacity = Some(new_cap),
Some(c) => assert_eq!(c, new_cap),
}
}
}
}
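Putting the pool to use, a minimal sketch (assuming a `device: Arc<Device>` obtained during initialization):

```rust
use vulkano::buffer::CpuBufferPool;

// A pool for per-frame uploads of a single `f32`.
let pool = CpuBufferPool::<f32>::upload(device.clone());

// Optionally pre-allocate so the first frames don't trigger a growth.
pool.reserve(64).expect("failed to reserve pool memory");

// Each frame, grab a fresh subbuffer; if every subbuffer is still in use
// by the GPU, the pool allocates a larger buffer automatically.
let subbuffer = pool.next(1.0f32);
```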

View File

@ -15,22 +15,22 @@
use std::marker::PhantomData;
use std::mem;
use std::ops::Range;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::Weak;
use std::sync::atomic::AtomicUsize;
use smallvec::SmallVec;
use buffer::sys::BufferCreationError;
use buffer::sys::SparseLevel;
use buffer::sys::UnsafeBuffer;
use buffer::sys::Usage;
use buffer::traits::AccessRange;
use buffer::traits::BufferAccess;
use buffer::traits::BufferInner;
use buffer::traits::Buffer;
use buffer::traits::GpuAccessResult;
use buffer::traits::TypedBuffer;
use command_buffer::Submission;
use buffer::traits::TypedBufferAccess;
use device::Device;
use device::DeviceOwned;
use device::Queue;
use instance::QueueFamily;
use memory::pool::AllocLayout;
use memory::pool::MemoryPool;
@ -39,6 +39,7 @@ use memory::pool::StdMemoryPool;
use sync::Sharing;
use OomError;
use SafeDeref;
/// Buffer whose content is in device-local memory.
#[derive(Debug)]
@ -52,20 +53,13 @@ pub struct DeviceLocalBuffer<T: ?Sized, A = Arc<StdMemoryPool>> where A: MemoryP
// Queue families allowed to access this buffer.
queue_families: SmallVec<[u32; 4]>,
// Latest submission that uses this buffer.
// Also used to block any attempt to submit this buffer while it is accessed by the CPU.
latest_submission: Mutex<LatestSubmission>,
// Number of times this buffer is locked on the GPU side.
gpu_lock: AtomicUsize,
// Necessary to make it compile.
marker: PhantomData<Box<T>>,
}
#[derive(Debug)]
struct LatestSubmission {
read_submissions: SmallVec<[Weak<Submission>; 4]>,
write_submission: Option<Weak<Submission>>, // TODO: can use `Weak::new()` once it's stabilized
}
impl<T> DeviceLocalBuffer<T> {
/// Builds a new buffer. Only allowed for sized data.
#[inline]
@ -139,10 +133,7 @@ impl<T: ?Sized> DeviceLocalBuffer<T> {
inner: buffer,
memory: mem,
queue_families: queue_families,
latest_submission: Mutex::new(LatestSubmission {
read_submissions: SmallVec::new(),
write_submission: None,
}),
gpu_lock: AtomicUsize::new(0),
marker: PhantomData,
}))
}
@ -165,79 +156,89 @@ impl<T: ?Sized, A> DeviceLocalBuffer<T, A> where A: MemoryPool {
}
}
unsafe impl<T: ?Sized, A> Buffer for DeviceLocalBuffer<T, A>
where T: 'static + Send + Sync, A: MemoryPool
/// Access to a device local buffer.
// FIXME: add destructor
#[derive(Debug, Copy, Clone)]
pub struct DeviceLocalBufferAccess<P>(P);
unsafe impl<T: ?Sized, A> Buffer for Arc<DeviceLocalBuffer<T, A>>
where T: 'static + Send + Sync,
A: MemoryPool + 'static
{
type Access = DeviceLocalBufferAccess<Arc<DeviceLocalBuffer<T, A>>>;
#[inline]
fn inner(&self) -> &UnsafeBuffer {
&self.inner
}
#[inline]
fn blocks(&self, _: Range<usize>) -> Vec<usize> {
vec![0]
fn access(self) -> Self::Access {
DeviceLocalBufferAccess(self)
}
#[inline]
fn block_memory_range(&self, _: usize) -> Range<usize> {
0 .. self.size()
}
fn needs_fence(&self, _: bool, _: Range<usize>) -> Option<bool> {
Some(false)
}
#[inline]
fn host_accesses(&self, _: usize) -> bool {
false
}
unsafe fn gpu_access(&self, ranges: &mut Iterator<Item = AccessRange>,
submission: &Arc<Submission>) -> GpuAccessResult
{
let queue_id = submission.queue().family().id();
if self.queue_families.iter().find(|&&id| id == queue_id).is_none() {
panic!("Trying to submit to family {} a buffer suitable for families {:?}",
queue_id, self.queue_families);
}
let is_written = {
let mut written = false;
while let Some(r) = ranges.next() { if r.write { written = true; break; } }
written
};
let mut submissions = self.latest_submission.lock().unwrap();
let dependencies = if is_written {
let write_dep = mem::replace(&mut submissions.write_submission,
Some(Arc::downgrade(submission)));
let read_submissions = mem::replace(&mut submissions.read_submissions,
SmallVec::new());
// We use a temporary variable to bypass a lifetime error in rustc.
let list = read_submissions.into_iter()
.chain(write_dep.into_iter())
.filter_map(|s| s.upgrade())
.collect::<Vec<_>>();
list
} else {
submissions.read_submissions.push(Arc::downgrade(submission));
submissions.write_submission.clone().and_then(|s| s.upgrade()).into_iter().collect()
};
GpuAccessResult {
dependencies: dependencies,
additional_wait_semaphore: None,
additional_signal_semaphore: None,
}
fn size(&self) -> usize {
self.inner.size()
}
}
unsafe impl<T: ?Sized, A> TypedBuffer for DeviceLocalBuffer<T, A>
where T: 'static + Send + Sync, A: MemoryPool
unsafe impl<T: ?Sized, A> TypedBuffer for Arc<DeviceLocalBuffer<T, A>>
where T: 'static + Send + Sync,
A: MemoryPool + 'static
{
type Content = T;
}
unsafe impl<P, T: ?Sized, A> BufferAccess for DeviceLocalBufferAccess<P>
where P: SafeDeref<Target = DeviceLocalBuffer<T, A>>,
T: 'static + Send + Sync,
A: MemoryPool + 'static
{
#[inline]
fn inner(&self) -> BufferInner {
BufferInner {
buffer: &self.0.inner,
offset: 0,
}
}
#[inline]
fn conflict_key(&self, self_offset: usize, self_size: usize) -> u64 {
self.0.inner.key()
}
#[inline]
fn try_gpu_lock(&self, _: bool, _: &Queue) -> bool {
// FIXME: not implemented correctly
/*let val = self.0.gpu_lock.fetch_add(1, Ordering::SeqCst);
if val == 1 {
true
} else {
self.0.gpu_lock.fetch_sub(1, Ordering::SeqCst);
false
}*/
true
}
#[inline]
unsafe fn increase_gpu_lock(&self) {
// FIXME: not implemented correctly
/*let val = self.0.gpu_lock.fetch_add(1, Ordering::SeqCst);
debug_assert!(val >= 1);*/
}
}
unsafe impl<P, T: ?Sized, A> TypedBufferAccess for DeviceLocalBufferAccess<P>
where P: SafeDeref<Target = DeviceLocalBuffer<T, A>>,
T: 'static + Send + Sync,
A: MemoryPool + 'static
{
type Content = T;
}
unsafe impl<P, T: ?Sized, A> DeviceOwned for DeviceLocalBufferAccess<P>
where P: SafeDeref<Target = DeviceLocalBuffer<T, A>>,
T: 'static + Send + Sync,
A: MemoryPool + 'static
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.0.inner.device()
}
}

View File

@ -18,92 +18,247 @@
//! The buffer will be stored in device-local memory if possible
//!
use std::error;
use std::fmt;
use std::iter;
use std::marker::PhantomData;
use std::mem;
use std::ops::Range;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::Weak;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
use smallvec::SmallVec;
use buffer::CpuAccessibleBuffer;
use buffer::sys::BufferCreationError;
use buffer::sys::SparseLevel;
use buffer::sys::UnsafeBuffer;
use buffer::sys::Usage;
use buffer::traits::AccessRange;
use buffer::traits::BufferAccess;
use buffer::traits::BufferInner;
use buffer::traits::Buffer;
use buffer::traits::GpuAccessResult;
use buffer::traits::TypedBuffer;
use command_buffer::Submission;
use buffer::traits::TypedBufferAccess;
use command_buffer::cb::AddCommand;
use command_buffer::commands_raw::CmdCopyBuffer;
use command_buffer::commands_raw::CmdCopyBufferError;
use command_buffer::AutoCommandBufferBuilder;
use command_buffer::CommandBuffer;
use command_buffer::CommandBufferBuilder;
use command_buffer::CommandBufferBuilderError;
use command_buffer::CommandBufferExecFuture;
use device::Device;
use device::DeviceOwned;
use device::Queue;
use instance::QueueFamily;
use memory::pool::AllocLayout;
use memory::pool::MemoryPool;
use memory::pool::MemoryPoolAlloc;
use memory::pool::StdMemoryPool;
use memory::pool::StdMemoryPoolAlloc;
use sync::NowFuture;
use sync::Sharing;
use OomError;
/// Buffer that is written once then read for as long as it is alive.
pub struct ImmutableBuffer<T: ?Sized, A = Arc<StdMemoryPool>> where A: MemoryPool {
// TODO: implement Debug
pub struct ImmutableBuffer<T: ?Sized, A = StdMemoryPoolAlloc> {
// Inner content.
inner: UnsafeBuffer,
memory: A::Alloc,
// Memory allocated for the buffer.
memory: A,
// True if the `ImmutableBufferInitialization` object was used by the GPU then dropped.
// This means that the `ImmutableBuffer` can be used as much as we want without any restriction.
initialized: AtomicBool,
// Queue families allowed to access this buffer.
queue_families: SmallVec<[u32; 4]>,
latest_write_submission: Mutex<Option<Weak<Submission>>>, // TODO: can use `Weak::new()` once it's stabilized
started_reading: AtomicBool,
// Necessary to have the appropriate template parameter.
marker: PhantomData<Box<T>>,
}
impl<T> ImmutableBuffer<T> {
/// Builds a new buffer. Only allowed for sized data.
#[inline]
pub fn new<'a, I>(device: &Arc<Device>, usage: &Usage, queue_families: I)
-> Result<Arc<ImmutableBuffer<T>>, OomError>
where I: IntoIterator<Item = QueueFamily<'a>>
// TODO: make this prettier
type ImmutableBufferFromBufferFuture = CommandBufferExecFuture<NowFuture, ::command_buffer::cb::SubmitSyncLayer<::command_buffer::cb::AbstractStorageLayer<::command_buffer::cb::UnsafeCommandBuffer<Arc<::command_buffer::pool::standard::StandardCommandPool>>>>>;
impl<T: ?Sized> ImmutableBuffer<T> {
/// Builds an `ImmutableBuffer` from some data.
///
/// This function builds a memory-mapped intermediate buffer, writes the data to it, builds a
/// command buffer that copies from this intermediate buffer to the final buffer, and finally
/// submits that command buffer, returning a future that represents the end of the copy.
///
/// This function returns two objects: the newly-created buffer, and a future representing
/// the initial upload operation. In order to be allowed to use the `ImmutableBuffer`, you must
/// either submit your operation after this future, or execute this future and wait for it to
/// be finished before submitting your own operation.
pub fn from_data<'a, I>(data: T, usage: &Usage, queue_families: I, queue: Arc<Queue>)
-> Result<(Arc<ImmutableBuffer<T>>, ImmutableBufferFromBufferFuture), OomError>
where I: IntoIterator<Item = QueueFamily<'a>>,
T: 'static + Send + Sync + Sized,
{
let source = CpuAccessibleBuffer::from_data(queue.device(), &Usage::transfer_source(),
iter::once(queue.family()), data)?;
ImmutableBuffer::from_buffer(source, usage, queue_families, queue)
}
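A hedged usage sketch of the contract described above, matching the `from_data_working` test later in this file (`queue` comes from the usual `gfx_dev_and_queue!()` setup):
let (buffer, init_future) = ImmutableBuffer::from_data(12u32, &Usage::all(),
                                                       iter::once(queue.family()),
                                                       queue.clone()).unwrap();
// Anything that reads `buffer` must be sequenced after `init_future`; the
// simplest policy is to flush it behind a fence before recording other work.
let _ = init_future.then_signal_fence_and_flush().unwrap();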
/// Builds an `ImmutableBuffer` that copies its data from another buffer.
///
/// This function returns two objects: the newly-created buffer, and a future representing
/// the initial upload operation. In order to be allowed to use the `ImmutableBuffer`, you must
/// either submit your operation after this future, or execute this future and wait for it to
/// be finished before submitting your own operation.
pub fn from_buffer<'a, B, I>(source: B, usage: &Usage, queue_families: I, queue: Arc<Queue>)
-> Result<(Arc<ImmutableBuffer<T>>, ImmutableBufferFromBufferFuture), OomError>
where B: Buffer + TypedBuffer<Content = T> + DeviceOwned, // TODO: remove + DeviceOwned once Buffer requires it
B::Access: 'static + Clone + Send + Sync,
I: IntoIterator<Item = QueueFamily<'a>>,
T: 'static + Send + Sync,
{
let cb = AutoCommandBufferBuilder::new(source.device().clone(), queue.family())?;
let (buf, cb) = match ImmutableBuffer::from_buffer_with_builder(source, usage,
queue_families, cb)
{
Ok(v) => v,
Err(ImmutableBufferFromBufferWithBuilderError::OomError(err)) => return Err(err),
Err(ImmutableBufferFromBufferWithBuilderError::CommandBufferBuilderError(_)) => {
// Example errors that can trigger this: forbidden while inside render pass,
// ranges overlapping between buffers, missing usage in one of the buffers, etc.
// None of them can actually happen.
unreachable!()
},
};
let future = match cb.build()?.execute(queue) {
Ok(f) => f,
Err(_) => unreachable!()
};
Ok((buf, future))
}
/// Builds an `ImmutableBuffer` that copies its data from another buffer.
pub fn from_buffer_with_builder<'a, B, I, Cb, O>(source: B, usage: &Usage, queue_families: I,
builder: Cb)
-> Result<(Arc<ImmutableBuffer<T>>, O), ImmutableBufferFromBufferWithBuilderError>
where B: Buffer + TypedBuffer<Content = T> + DeviceOwned, // TODO: remove + DeviceOwned once Buffer requires it
I: IntoIterator<Item = QueueFamily<'a>>,
Cb: CommandBufferBuilder +
AddCommand<CmdCopyBuffer<B::Access, ImmutableBufferInitialization<T>>, Out = O>,
{
unsafe {
ImmutableBuffer::raw(device, mem::size_of::<T>(), usage, queue_families)
// We automatically set `transfer_dest` to true in order to avoid annoying errors.
let actual_usage = Usage {
transfer_dest: true,
.. *usage
};
let (buffer, init) = ImmutableBuffer::raw(source.device().clone(), source.size(),
&actual_usage, queue_families)?;
let builder = builder.copy_buffer(source, init)?;
Ok((buffer, builder))
}
}
}
impl<T> ImmutableBuffer<T> {
/// Builds a new buffer with uninitialized data. Only allowed for sized data.
///
/// Returns two things: the buffer, and a special access that should be used for the initial
/// upload to the buffer.
///
/// You will get an error if you try to use the buffer before using the initial upload access.
/// However this function doesn't check whether you actually used this initial upload to fill
/// the buffer like you're supposed to do.
///
/// You will also get an error if you try to get exclusive access to the final buffer.
///
/// # Safety
///
/// - The `ImmutableBufferInitialization` should be used to fill the buffer with some initial
/// data, otherwise the content is undefined.
///
#[inline]
pub unsafe fn uninitialized<'a, I>(device: Arc<Device>, usage: &Usage, queue_families: I)
-> Result<(Arc<ImmutableBuffer<T>>, ImmutableBufferInitialization<T>), OomError>
where I: IntoIterator<Item = QueueFamily<'a>>
{
ImmutableBuffer::raw(device, mem::size_of::<T>(), usage, queue_families)
}
}
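An editorial sketch of the two-handle pattern this function sets up; it mirrors the `init_then_read_same_cb` test at the end of this file and makes the same assumptions:
let (buffer, init) = unsafe {
    ImmutableBuffer::<u32>::uninitialized(device.clone(), &Usage::all(),
                                          iter::once(queue.family())).unwrap()
};
// Write through `init` exactly once; only afterwards is `buffer` readable.
let src = CpuAccessibleBuffer::from_data(&device, &Usage::all(),
                                         iter::once(queue.family()), 5u32).unwrap();
let _ = AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
    .copy_buffer(src, init).unwrap()
    .build().unwrap()
    .execute(queue.clone()).unwrap()
    .then_signal_fence_and_flush().unwrap();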
impl<T> ImmutableBuffer<[T]> {
/// Builds a new buffer. Can be used for arrays.
pub fn from_iter<'a, D, I>(data: D, usage: &Usage, queue_families: I, queue: Arc<Queue>)
-> Result<(Arc<ImmutableBuffer<[T]>>, ImmutableBufferFromBufferFuture), OomError>
where I: IntoIterator<Item = QueueFamily<'a>>,
D: ExactSizeIterator<Item = T>,
T: 'static + Send + Sync + Sized,
{
let source = CpuAccessibleBuffer::from_iter(queue.device(), &Usage::transfer_source(),
iter::once(queue.family()), data)?;
ImmutableBuffer::from_buffer(source, usage, queue_families, queue)
}
/// Builds a new buffer with uninitialized data. Can be used for arrays.
///
/// Returns two things: the buffer, and a special access that should be used for the initial
/// upload to the buffer.
///
/// You will get an error if you try to use the buffer before using the initial upload access.
/// However this function doesn't check whether you actually used this initial upload to fill
/// the buffer like you're supposed to do.
///
/// You will also get an error if you try to get exclusive access to the final buffer.
///
/// # Safety
///
/// - The `ImmutableBufferInitialization` should be used to fill the buffer with some initial
/// data, otherwise the content is undefined.
///
#[inline]
pub fn array<'a, I>(device: &Arc<Device>, len: usize, usage: &Usage, queue_families: I)
-> Result<Arc<ImmutableBuffer<[T]>>, OomError>
pub unsafe fn uninitialized_array<'a, I>(device: Arc<Device>, len: usize, usage: &Usage,
queue_families: I)
-> Result<(Arc<ImmutableBuffer<[T]>>, ImmutableBufferInitialization<[T]>), OomError>
where I: IntoIterator<Item = QueueFamily<'a>>
{
unsafe {
ImmutableBuffer::raw(device, len * mem::size_of::<T>(), usage, queue_families)
}
ImmutableBuffer::raw(device, len * mem::size_of::<T>(), usage, queue_families)
}
}
impl<T: ?Sized> ImmutableBuffer<T> {
/// Builds a new buffer without checking the size.
/// Builds a new buffer without checking the size, and returns a separate access for the
/// initial upload.
///
/// Returns two things: the buffer, and a special access that should be used for the initial
/// upload to the buffer.
/// You will get an error if you try to use the buffer before using the initial upload access.
/// However this function doesn't check whether you used this initial upload to fill the buffer.
/// You will also get an error if you try to get exclusive access to the final buffer.
///
/// # Safety
///
/// You must ensure that the size that you pass is correct for `T`.
/// - You must ensure that the size that you pass is correct for `T`.
/// - The `ImmutableBufferInitialization` should be used to fill the buffer with some initial
/// data.
///
pub unsafe fn raw<'a, I>(device: &Arc<Device>, size: usize, usage: &Usage, queue_families: I)
-> Result<Arc<ImmutableBuffer<T>>, OomError>
#[inline]
pub unsafe fn raw<'a, I>(device: Arc<Device>, size: usize, usage: &Usage, queue_families: I)
-> Result<(Arc<ImmutableBuffer<T>>, ImmutableBufferInitialization<T>), OomError>
where I: IntoIterator<Item = QueueFamily<'a>>
{
let queue_families = queue_families.into_iter().map(|f| f.id())
.collect::<SmallVec<[u32; 4]>>();
let queue_families = queue_families.into_iter().map(|f| f.id()).collect();
ImmutableBuffer::raw_impl(device, size, usage, queue_families)
}
// Internal implementation of `raw`. This is separated from `raw` so that it doesn't need to be
// inlined.
unsafe fn raw_impl(device: Arc<Device>, size: usize, usage: &Usage,
queue_families: SmallVec<[u32; 4]>)
-> Result<(Arc<ImmutableBuffer<T>>, ImmutableBufferInitialization<T>), OomError>
{
let (buffer, mem_reqs) = {
let sharing = if queue_families.len() >= 2 {
Sharing::Concurrent(queue_families.iter().cloned())
@ -111,7 +266,7 @@ impl<T: ?Sized> ImmutableBuffer<T> {
Sharing::Exclusive
};
match UnsafeBuffer::new(device, size, &usage, sharing, SparseLevel::none()) {
match UnsafeBuffer::new(&device, size, &usage, sharing, SparseLevel::none()) {
Ok(b) => b,
Err(BufferCreationError::OomError(err)) => return Err(err),
Err(_) => unreachable!() // We don't use sparse binding, therefore the other
@ -128,23 +283,29 @@ impl<T: ?Sized> ImmutableBuffer<T> {
device_local.chain(any).next().unwrap()
};
let mem = try!(MemoryPool::alloc(&Device::standard_pool(device), mem_ty,
let mem = try!(MemoryPool::alloc(&Device::standard_pool(&device), mem_ty,
mem_reqs.size, mem_reqs.alignment, AllocLayout::Linear));
debug_assert!((mem.offset() % mem_reqs.alignment) == 0);
try!(buffer.bind_memory(mem.memory(), mem.offset()));
Ok(Arc::new(ImmutableBuffer {
let final_buf = Arc::new(ImmutableBuffer {
inner: buffer,
memory: mem,
queue_families: queue_families,
latest_write_submission: Mutex::new(None),
started_reading: AtomicBool::new(false),
initialized: AtomicBool::new(false),
marker: PhantomData,
}))
});
let initialization = ImmutableBufferInitialization {
buffer: final_buf.clone(),
used: Arc::new(AtomicBool::new(false)),
};
Ok((final_buf, initialization))
}
}
impl<T: ?Sized, A> ImmutableBuffer<T, A> where A: MemoryPool {
impl<T: ?Sized, A> ImmutableBuffer<T, A> {
/// Returns the device used to create this buffer.
#[inline]
pub fn device(&self) -> &Arc<Device> {
@ -161,84 +322,334 @@ impl<T: ?Sized, A> ImmutableBuffer<T, A> where A: MemoryPool {
}
}
unsafe impl<T: ?Sized, A> Buffer for ImmutableBuffer<T, A>
where T: 'static + Send + Sync, A: MemoryPool
{
unsafe impl<T: ?Sized, A> Buffer for Arc<ImmutableBuffer<T, A>> {
type Access = Self;
#[inline]
fn inner(&self) -> &UnsafeBuffer {
&self.inner
}
#[inline]
fn blocks(&self, _: Range<usize>) -> Vec<usize> {
vec![0]
fn access(self) -> Self {
self
}
#[inline]
fn block_memory_range(&self, _: usize) -> Range<usize> {
0 .. self.size()
}
fn needs_fence(&self, _: bool, _: Range<usize>) -> Option<bool> {
Some(true)
}
#[inline]
fn host_accesses(&self, _: usize) -> bool {
false
}
unsafe fn gpu_access(&self, ranges: &mut Iterator<Item = AccessRange>,
submission: &Arc<Submission>) -> GpuAccessResult
{
let queue_id = submission.queue().family().id();
if self.queue_families.iter().find(|&&id| id == queue_id).is_none() {
panic!()
}
let write = {
let mut write = false;
while let Some(range) = ranges.next() {
if range.write { write = true; break; }
}
write
};
if write {
assert!(self.started_reading.load(Ordering::Acquire) == false);
}
let dependency = {
let mut latest_submission = self.latest_write_submission.lock().unwrap();
if write {
mem::replace(&mut *latest_submission, Some(Arc::downgrade(submission)))
} else {
latest_submission.clone()
}
};
let dependency = dependency.and_then(|d| d.upgrade());
if write {
assert!(self.started_reading.load(Ordering::Acquire) == false);
} else {
self.started_reading.store(true, Ordering::Release);
}
GpuAccessResult {
dependencies: if let Some(dependency) = dependency {
vec![dependency]
} else {
vec![]
},
additional_wait_semaphore: None,
additional_signal_semaphore: None,
}
fn size(&self) -> usize {
self.inner.size()
}
}
unsafe impl<T: ?Sized, A> TypedBuffer for ImmutableBuffer<T, A>
where T: 'static + Send + Sync, A: MemoryPool
{
unsafe impl<T: ?Sized, A> TypedBuffer for Arc<ImmutableBuffer<T, A>> {
type Content = T;
}
unsafe impl<T: ?Sized, A> BufferAccess for ImmutableBuffer<T, A> {
#[inline]
fn inner(&self) -> BufferInner {
BufferInner {
buffer: &self.inner,
offset: 0,
}
}
#[inline]
fn conflict_key(&self, self_offset: usize, self_size: usize) -> u64 {
self.inner.key()
}
#[inline]
fn try_gpu_lock(&self, exclusive_access: bool, queue: &Queue) -> bool {
if exclusive_access {
return false;
}
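// Reads are refused until the initialization handle has been submitted
// and dropped (see `ImmutableBufferInitialization::drop` below).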
if !self.initialized.load(Ordering::Relaxed) {
return false;
}
true
}
#[inline]
unsafe fn increase_gpu_lock(&self) {
}
}
unsafe impl<T: ?Sized, A> TypedBufferAccess for ImmutableBuffer<T, A> {
type Content = T;
}
unsafe impl<T: ?Sized, A> DeviceOwned for ImmutableBuffer<T, A> {
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
/// Access to the immutable buffer that can be used for the initial upload.
//#[derive(Debug)] // TODO:
pub struct ImmutableBufferInitialization<T: ?Sized, A = StdMemoryPoolAlloc> {
buffer: Arc<ImmutableBuffer<T, A>>,
used: Arc<AtomicBool>,
}
unsafe impl<T: ?Sized, A> BufferAccess for ImmutableBufferInitialization<T, A> {
#[inline]
fn inner(&self) -> BufferInner {
self.buffer.inner()
}
#[inline]
fn conflict_key(&self, self_offset: usize, self_size: usize) -> u64 {
self.buffer.inner.key()
}
#[inline]
fn try_gpu_lock(&self, exclusive_access: bool, queue: &Queue) -> bool {
!self.used.compare_and_swap(false, true, Ordering::Relaxed)
}
#[inline]
unsafe fn increase_gpu_lock(&self) {
debug_assert!(self.used.load(Ordering::Relaxed));
}
}
unsafe impl<T: ?Sized, A> TypedBufferAccess for ImmutableBufferInitialization<T, A> {
type Content = T;
}
unsafe impl<T: ?Sized, A> Buffer for ImmutableBufferInitialization<T, A> {
type Access = Self;
#[inline]
fn access(self) -> Self {
self
}
#[inline]
fn size(&self) -> usize {
self.buffer.inner.size()
}
}
unsafe impl<T: ?Sized, A> TypedBuffer for ImmutableBufferInitialization<T, A> {
type Content = T;
}
unsafe impl<T: ?Sized, A> DeviceOwned for ImmutableBufferInitialization<T, A> {
#[inline]
fn device(&self) -> &Arc<Device> {
self.buffer.inner.device()
}
}
impl<T: ?Sized, A> Clone for ImmutableBufferInitialization<T, A> {
#[inline]
fn clone(&self) -> ImmutableBufferInitialization<T, A> {
ImmutableBufferInitialization {
buffer: self.buffer.clone(),
used: self.used.clone(),
}
}
}
impl<T: ?Sized, A> Drop for ImmutableBufferInitialization<T, A> {
#[inline]
fn drop(&mut self) {
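// If `used` was flipped by `try_gpu_lock`, the initialization write was
// submitted at least once; flag the main buffer as initialized so that
// its own `try_gpu_lock` starts accepting non-exclusive accesses.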
if self.used.load(Ordering::Relaxed) {
self.buffer.initialized.store(true, Ordering::Relaxed);
}
}
}
/// Error that can happen when building an `ImmutableBuffer` from another buffer with a builder.
#[derive(Debug, Copy, Clone)]
pub enum ImmutableBufferFromBufferWithBuilderError {
/// Out of memory.
OomError(OomError),
/// Error while adding the command to the builder.
CommandBufferBuilderError(CommandBufferBuilderError<CmdCopyBufferError>),
}
impl error::Error for ImmutableBufferFromBufferWithBuilderError {
#[inline]
fn description(&self) -> &str {
match *self {
ImmutableBufferFromBufferWithBuilderError::OomError(_) => {
"out of memory"
},
ImmutableBufferFromBufferWithBuilderError::CommandBufferBuilderError(_) => {
"error while adding the command to the builder"
},
}
}
#[inline]
fn cause(&self) -> Option<&error::Error> {
match *self {
ImmutableBufferFromBufferWithBuilderError::OomError(ref err) => Some(err),
ImmutableBufferFromBufferWithBuilderError::CommandBufferBuilderError(ref err) => Some(err),
}
}
}
impl fmt::Display for ImmutableBufferFromBufferWithBuilderError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}
impl From<OomError> for ImmutableBufferFromBufferWithBuilderError {
#[inline]
fn from(err: OomError) -> ImmutableBufferFromBufferWithBuilderError {
ImmutableBufferFromBufferWithBuilderError::OomError(err)
}
}
impl From<CommandBufferBuilderError<CmdCopyBufferError>> for ImmutableBufferFromBufferWithBuilderError {
#[inline]
fn from(err: CommandBufferBuilderError<CmdCopyBufferError>) -> ImmutableBufferFromBufferWithBuilderError {
ImmutableBufferFromBufferWithBuilderError::CommandBufferBuilderError(err)
}
}
#[cfg(test)]
mod tests {
use std::iter;
use buffer::cpu_access::CpuAccessibleBuffer;
use buffer::immutable::ImmutableBuffer;
use buffer::sys::Usage;
use command_buffer::AutoCommandBufferBuilder;
use command_buffer::CommandBuffer;
use command_buffer::CommandBufferBuilder;
use sync::GpuFuture;
#[test]
fn from_data_working() {
let (device, queue) = gfx_dev_and_queue!();
let (buffer, _) = ImmutableBuffer::from_data(12u32, &Usage::all(),
iter::once(queue.family()),
queue.clone()).unwrap();
let dest = CpuAccessibleBuffer::from_data(&device, &Usage::all(),
iter::once(queue.family()), 0).unwrap();
let _ = AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
.copy_buffer(buffer, dest.clone()).unwrap()
.build().unwrap()
.execute(queue.clone()).unwrap()
.then_signal_fence_and_flush().unwrap();
let dest_content = dest.read().unwrap();
assert_eq!(*dest_content, 12);
}
#[test]
fn from_iter_working() {
let (device, queue) = gfx_dev_and_queue!();
let (buffer, _) = ImmutableBuffer::from_iter((0 .. 512u32).map(|n| n * 2), &Usage::all(),
iter::once(queue.family()),
queue.clone()).unwrap();
let dest = CpuAccessibleBuffer::from_iter(&device, &Usage::all(),
iter::once(queue.family()),
(0 .. 512).map(|_| 0u32)).unwrap();
let _ = AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
.copy_buffer(buffer, dest.clone()).unwrap()
.build().unwrap()
.execute(queue.clone()).unwrap()
.then_signal_fence_and_flush().unwrap();
let dest_content = dest.read().unwrap();
for (n, &v) in dest_content.iter().enumerate() {
assert_eq!(n * 2, v as usize);
}
}
#[test]
#[should_panic] // TODO: check Result error instead of panicking
fn writing_forbidden() {
let (device, queue) = gfx_dev_and_queue!();
let (buffer, _) = ImmutableBuffer::from_data(12u32, &Usage::all(),
iter::once(queue.family()),
queue.clone()).unwrap();
let _ = AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
.fill_buffer(buffer, 50).unwrap()
.build().unwrap()
.execute(queue.clone()).unwrap()
.then_signal_fence_and_flush().unwrap();
}
#[test]
#[should_panic] // TODO: check Result error instead of panicking
fn read_uninitialized_forbidden() {
let (device, queue) = gfx_dev_and_queue!();
let (buffer, _) = unsafe {
ImmutableBuffer::<u32>::uninitialized(device.clone(), &Usage::all(),
iter::once(queue.family())).unwrap()
};
let src = CpuAccessibleBuffer::from_data(&device, &Usage::all(),
iter::once(queue.family()), 0).unwrap();
let _ = AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
.copy_buffer(src, buffer).unwrap()
.build().unwrap()
.execute(queue.clone()).unwrap()
.then_signal_fence_and_flush().unwrap();
}
#[test]
fn init_then_read_same_cb() {
let (device, queue) = gfx_dev_and_queue!();
let (buffer, init) = unsafe {
ImmutableBuffer::<u32>::uninitialized(device.clone(), &Usage::all(),
iter::once(queue.family())).unwrap()
};
let src = CpuAccessibleBuffer::from_data(&device, &Usage::all(),
iter::once(queue.family()), 0).unwrap();
let _ = AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
.copy_buffer(src.clone(), init).unwrap()
.copy_buffer(buffer, src.clone()).unwrap()
.build().unwrap()
.execute(queue.clone()).unwrap()
.then_signal_fence_and_flush().unwrap();
}
#[test]
#[ignore] // TODO: doesn't work because the submit sync layer isn't properly implemented
fn init_then_read_same_future() {
let (device, queue) = gfx_dev_and_queue!();
let (buffer, init) = unsafe {
ImmutableBuffer::<u32>::uninitialized(device.clone(), &Usage::all(),
iter::once(queue.family())).unwrap()
};
let src = CpuAccessibleBuffer::from_data(&device, &Usage::all(),
iter::once(queue.family()), 0).unwrap();
let cb1 = AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
.copy_buffer(src.clone(), init).unwrap()
.build().unwrap();
let cb2 = AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
.copy_buffer(buffer, src.clone()).unwrap()
.build().unwrap();
let _ = cb1.execute(queue.clone()).unwrap()
.then_execute(queue.clone(), cb2).unwrap()
.then_signal_fence_and_flush().unwrap();
}
// TODO: write tons of tests that try to exploit loopholes
// this isn't possible yet because checks aren't correctly implemented yet
}

View File

@ -9,7 +9,10 @@
//! Location in memory that contains data.
//!
//! All buffers are guaranteed to be accessible from the GPU.
//! A Vulkan buffer is very similar to a buffer that you would use in programming languages in
//! general, in the sense that it is a location in memory that contains data. The difference
//! between a Vulkan buffer and a regular buffer is that the content of a Vulkan buffer is
//! accessible from the GPU.
//!
//! # High-level wrappers
//!
@ -58,216 +61,28 @@
//! Using uniform/storage texel buffers requires creating a *buffer view*. See the `view` module
//! for how to create a buffer view.
//!
use std::marker::PhantomData;
use std::mem;
use std::ops::Range;
use std::sync::Arc;
pub use self::cpu_access::CpuAccessibleBuffer;
pub use self::cpu_pool::CpuBufferPool;
pub use self::device_local::DeviceLocalBuffer;
pub use self::immutable::ImmutableBuffer;
pub use self::slice::BufferSlice;
pub use self::sys::BufferCreationError;
pub use self::sys::Usage as BufferUsage;
pub use self::traits::BufferAccess;
pub use self::traits::BufferInner;
pub use self::traits::Buffer;
pub use self::traits::TypedBuffer;
pub use self::traits::TypedBufferAccess;
pub use self::view::BufferView;
pub use self::view::BufferViewRef;
pub mod cpu_access;
pub mod cpu_pool;
pub mod device_local;
pub mod immutable;
pub mod sys;
pub mod traits;
pub mod view;
/// A subpart of a buffer.
///
/// This object doesn't correspond to any Vulkan object. It exists for API convenience.
///
/// # Example
///
/// Creating a slice:
///
/// ```no_run
/// use vulkano::buffer::BufferSlice;
/// # let buffer: std::sync::Arc<vulkano::buffer::DeviceLocalBuffer<[u8]>> =
/// # unsafe { std::mem::uninitialized() };
/// let _slice = BufferSlice::from(&buffer);
/// ```
///
/// Selecting a slice of a buffer that contains `[T]`:
///
/// ```no_run
/// use vulkano::buffer::BufferSlice;
/// # let buffer: std::sync::Arc<vulkano::buffer::DeviceLocalBuffer<[u8]>> =
/// # unsafe { std::mem::uninitialized() };
/// let _slice = BufferSlice::from(&buffer).slice(12 .. 14).unwrap();
/// ```
///
#[derive(Clone)]
pub struct BufferSlice<'a, T: ?Sized, B: 'a> {
marker: PhantomData<T>,
resource: &'a Arc<B>,
offset: usize,
size: usize,
}
impl<'a, T: ?Sized, B: 'a> BufferSlice<'a, T, B> {
/// Returns the buffer that this slice belongs to.
pub fn buffer(&self) -> &'a Arc<B> {
&self.resource
}
/// Returns the offset of that slice within the buffer.
#[inline]
pub fn offset(&self) -> usize {
self.offset
}
/// Returns the size of that slice in bytes.
#[inline]
pub fn size(&self) -> usize {
self.size
}
/// Builds a slice that contains an element from inside the buffer.
///
/// This method builds an object that represents a slice of the buffer. No actual operation
/// is performed.
///
/// # Example
///
/// TODO
///
/// # Safety
///
/// The object whose reference is passed to the closure is uninitialized. Therefore you
/// **must not** access the content of the object.
///
/// You **must** return a reference to an element from the parameter. The closure **must not**
/// panic.
#[inline]
pub unsafe fn slice_custom<F, R: ?Sized>(self, f: F) -> BufferSlice<'a, R, B>
where F: for<'r> FnOnce(&'r T) -> &'r R
// TODO: bounds on R
{
let data: &T = mem::zeroed();
let result = f(data);
let size = mem::size_of_val(result);
let result = result as *const R as *const () as usize;
assert!(result <= self.size());
assert!(result + size <= self.size());
BufferSlice {
marker: PhantomData,
resource: self.resource,
offset: self.offset + result,
size: size,
}
}
}
impl<'a, T, B: 'a> BufferSlice<'a, [T], B> {
/// Returns the number of elements in this slice.
#[inline]
pub fn len(&self) -> usize {
self.size() / mem::size_of::<T>()
}
/// Reduces the slice to just one element of the array.
///
/// Returns `None` if out of range.
#[inline]
pub fn index(self, index: usize) -> Option<BufferSlice<'a, T, B>> {
if index >= self.len() { return None; }
Some(BufferSlice {
marker: PhantomData,
resource: self.resource,
offset: self.offset + index * mem::size_of::<T>(),
size: mem::size_of::<T>(),
})
}
/// Reduces the slice to just a range of the array.
///
/// Returns `None` if out of range.
#[inline]
pub fn slice(self, range: Range<usize>) -> Option<BufferSlice<'a, [T], B>> {
if range.end > self.len() { return None; }
Some(BufferSlice {
marker: PhantomData,
resource: self.resource,
offset: self.offset + range.start * mem::size_of::<T>(),
size: (range.end - range.start) * mem::size_of::<T>(),
})
}
}
impl<'a, T: ?Sized, B: 'a> From<&'a Arc<B>> for BufferSlice<'a, T, B>
where B: TypedBuffer<Content = T>, T: 'static
{
#[inline]
fn from(r: &'a Arc<B>) -> BufferSlice<'a, T, B> {
BufferSlice {
marker: PhantomData,
resource: r,
offset: 0,
size: r.size(),
}
}
}
impl<'a, T, B: 'a> From<BufferSlice<'a, T, B>> for BufferSlice<'a, [T], B>
where T: 'static
{
#[inline]
fn from(r: BufferSlice<'a, T, B>) -> BufferSlice<'a, [T], B> {
BufferSlice {
marker: PhantomData,
resource: r.resource,
offset: r.offset,
size: r.size,
}
}
}
/// Takes a `BufferSlice` that points to a struct, and returns a `BufferSlice` that points to
/// a specific field of that struct.
#[macro_export]
macro_rules! buffer_slice_field {
($slice:expr, $field:ident) => (
// TODO: add #[allow(unsafe_code)] when that's allowed
unsafe { $slice.slice_custom(|s| &s.$field) }
)
}
#[cfg(test)]
mod tests {
// TODO: restore these tests
/*use std::mem;
use buffer::Usage;
use buffer::Buffer;
use buffer::BufferView;
use buffer::BufferViewCreationError;
use memory::DeviceLocal;
#[test]
fn create() {
let (device, queue) = gfx_dev_and_queue!();
let _ = Buffer::<[i8; 16], _>::new(&device, &Usage::all(), DeviceLocal, &queue).unwrap();
}
#[test]
fn array_len() {
let (device, queue) = gfx_dev_and_queue!();
let b = Buffer::<[i16], _>::array(&device, 12, &Usage::all(),
DeviceLocal, &queue).unwrap();
assert_eq!(b.len(), 12);
assert_eq!(b.size(), 12 * mem::size_of::<i16>());
}*/
}
mod slice;
mod traits;

287
vulkano/src/buffer/slice.rs Normal file
View File

@ -0,0 +1,287 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::marker::PhantomData;
use std::mem;
use std::ops::Range;
use std::sync::Arc;
use buffer::traits::BufferAccess;
use buffer::traits::BufferInner;
use buffer::traits::TypedBuffer;
use buffer::traits::TypedBufferAccess;
use buffer::traits::Buffer;
use device::Device;
use device::DeviceOwned;
use device::Queue;
/// A subpart of a buffer.
///
/// This object doesn't correspond to any Vulkan object. It exists for API convenience.
///
/// # Example
///
/// Creating a slice:
///
/// ```ignore // FIXME: unignore
/// use vulkano::buffer::BufferSlice;
/// # let buffer: std::sync::Arc<vulkano::buffer::DeviceLocalBuffer<[u8]>> = return;
/// let _slice = BufferSlice::from(&buffer);
/// ```
///
/// Selecting a slice of a buffer that contains `[T]`:
///
/// ```ignore // FIXME: unignore
/// use vulkano::buffer::BufferSlice;
/// # let buffer: std::sync::Arc<vulkano::buffer::DeviceLocalBuffer<[u8]>> = return;
/// let _slice = BufferSlice::from(&buffer).slice(12 .. 14).unwrap();
/// ```
///
pub struct BufferSlice<T: ?Sized, B> {
marker: PhantomData<T>,
resource: B,
offset: usize,
size: usize,
}
// We need to implement `Clone` manually, otherwise the derive adds a `T: Clone` requirement.
impl<T: ?Sized, B> Clone for BufferSlice<T, B>
where B: Clone
{
#[inline]
fn clone(&self) -> Self {
BufferSlice {
marker: PhantomData,
resource: self.resource.clone(),
offset: self.offset,
size: self.size,
}
}
}
impl<T: ?Sized, B> BufferSlice<T, B> {
#[inline]
pub fn from_typed_buffer(r: B) -> BufferSlice<T, B>
where B: TypedBuffer<Content = T>
{
let size = r.size();
BufferSlice {
marker: PhantomData,
resource: r,
offset: 0,
size: size,
}
}
#[inline]
pub fn from_typed_buffer_access(r: B) -> BufferSlice<T, B>
where B: TypedBufferAccess<Content = T>
{
let size = r.size();
BufferSlice {
marker: PhantomData,
resource: r,
offset: 0,
size: size,
}
}
/// Returns the buffer that this slice belongs to.
pub fn buffer(&self) -> &B {
&self.resource
}
/// Returns the offset of that slice within the buffer.
#[inline]
pub fn offset(&self) -> usize {
self.offset
}
/// Returns the size of that slice in bytes.
#[inline]
pub fn size(&self) -> usize {
self.size
}
/// Builds a slice that contains an element from inside the buffer.
///
/// This method builds an object that represents a slice of the buffer. No actual operation
/// is performed.
///
/// # Example
///
/// TODO
///
/// # Safety
///
/// The object whose reference is passed to the closure is uninitialized. Therefore you
/// **must not** access the content of the object.
///
/// You **must** return a reference to an element from the parameter. The closure **must not**
/// panic.
#[inline]
pub unsafe fn slice_custom<F, R: ?Sized>(self, f: F) -> BufferSlice<R, B>
where F: for<'r> FnOnce(&'r T) -> &'r R
// TODO: bounds on R
{
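// Offset-of trick: `f` is called on a zeroed value that is never read;
// the address of the returned reference, interpreted as an integer,
// is the byte offset of the selected field within `T`.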
let data: &T = mem::zeroed();
let result = f(data);
let size = mem::size_of_val(result);
let result = result as *const R as *const () as usize;
assert!(result <= self.size());
assert!(result + size <= self.size());
BufferSlice {
marker: PhantomData,
resource: self.resource,
offset: self.offset + result,
size: size,
}
}
}
impl<T, B> BufferSlice<[T], B> {
/// Returns the number of elements in this slice.
#[inline]
pub fn len(&self) -> usize {
debug_assert_eq!(self.size() % mem::size_of::<T>(), 0);
self.size() / mem::size_of::<T>()
}
/// Reduces the slice to just one element of the array.
///
/// Returns `None` if out of range.
#[inline]
pub fn index(self, index: usize) -> Option<BufferSlice<T, B>> {
if index >= self.len() { return None; }
Some(BufferSlice {
marker: PhantomData,
resource: self.resource,
offset: self.offset + index * mem::size_of::<T>(),
size: mem::size_of::<T>(),
})
}
/// Reduces the slice to just a range of the array.
///
/// Returns `None` if out of range.
#[inline]
pub fn slice(self, range: Range<usize>) -> Option<BufferSlice<[T], B>> {
if range.end > self.len() { return None; }
Some(BufferSlice {
marker: PhantomData,
resource: self.resource,
offset: self.offset + range.start * mem::size_of::<T>(),
size: (range.end - range.start) * mem::size_of::<T>(),
})
}
}
unsafe impl<T: ?Sized, B> Buffer for BufferSlice<T, B> where B: Buffer {
type Access = BufferSlice<T, B::Access>;
#[inline]
fn access(self) -> Self::Access {
BufferSlice {
marker: PhantomData,
resource: self.resource.access(),
offset: self.offset,
size: self.size,
}
}
#[inline]
fn size(&self) -> usize {
self.size
}
}
unsafe impl<T: ?Sized, B> BufferAccess for BufferSlice<T, B> where B: BufferAccess {
#[inline]
fn inner(&self) -> BufferInner {
let inner = self.resource.inner();
BufferInner {
buffer: inner.buffer,
offset: inner.offset + self.offset,
}
}
#[inline]
fn size(&self) -> usize {
self.size
}
#[inline]
fn conflicts_buffer(&self, self_offset: usize, self_size: usize,
other: &BufferAccess, other_offset: usize, other_size: usize) -> bool
{
let self_offset = self.offset + self_offset;
// FIXME: spurious failures; needs investigation
//debug_assert!(self_size + self_offset <= self.size);
self.resource.conflicts_buffer(self_offset, self_size, other, other_offset, other_size)
}
#[inline]
fn conflict_key(&self, self_offset: usize, self_size: usize) -> u64 {
let self_offset = self.offset + self_offset;
// FIXME: spurious failures; needs investigation
//debug_assert!(self_size + self_offset <= self.size);
self.resource.conflict_key(self_offset, self_size)
}
#[inline]
fn try_gpu_lock(&self, exclusive_access: bool, queue: &Queue) -> bool {
self.resource.try_gpu_lock(exclusive_access, queue)
}
#[inline]
unsafe fn increase_gpu_lock(&self) {
self.resource.increase_gpu_lock()
}
}
unsafe impl<T: ?Sized, B> TypedBufferAccess for BufferSlice<T, B> where B: BufferAccess, {
type Content = T;
}
unsafe impl<T: ?Sized, B> DeviceOwned for BufferSlice<T, B>
where B: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.resource.device()
}
}
impl<T, B> From<BufferSlice<T, B>> for BufferSlice<[T], B> {
#[inline]
fn from(r: BufferSlice<T, B>) -> BufferSlice<[T], B> {
BufferSlice {
marker: PhantomData,
resource: r.resource,
offset: r.offset,
size: r.size,
}
}
}
/// Takes a `BufferSlice` that points to a struct, and returns a `BufferSlice` that points to
/// a specific field of that struct.
#[macro_export]
macro_rules! buffer_slice_field {
($slice:expr, $field:ident) => (
// TODO: add #[allow(unsafe_code)] when that's allowed
unsafe { $slice.slice_custom(|s| &s.$field) }
)
}
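For illustration, a hypothetical use of this macro; `Light` and `light_buffer` are invented for the example (any `Arc<DeviceLocalBuffer<Light>>` would do):
#[repr(C)]
struct Light { position: [f32; 4], color: [f32; 4] }
// Narrow a whole-struct slice down to its `color` field:
let color_slice = buffer_slice_field!(light_buffer.into_buffer_slice(), color);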

View File

@ -32,6 +32,7 @@ use std::sync::Arc;
use smallvec::SmallVec;
use device::Device;
use device::DeviceOwned;
use memory::DeviceMemory;
use memory::MemoryRequirements;
use sync::Sharing;
@ -44,7 +45,6 @@ use VulkanPointers;
use vk;
/// Data storage in a GPU-accessible location.
#[derive(Debug)]
pub struct UnsafeBuffer {
buffer: vk::Buffer,
device: Arc<Device>,
@ -183,12 +183,6 @@ impl UnsafeBuffer {
Ok(())
}
/// Returns the device used to create this buffer.
#[inline]
pub fn device(&self) -> &Arc<Device> {
&self.device
}
/// Returns the size of the buffer in bytes.
#[inline]
pub fn size(&self) -> usize {
@ -239,6 +233,12 @@ impl UnsafeBuffer {
pub fn usage_indirect_buffer(&self) -> bool {
(self.usage & vk::BUFFER_USAGE_INDIRECT_BUFFER_BIT) != 0
}
/// Returns a key unique to each `UnsafeBuffer`. Can be used for the `conflict_key` method.
#[inline]
pub fn key(&self) -> u64 {
self.buffer
}
}
unsafe impl VulkanObject for UnsafeBuffer {
@ -250,6 +250,20 @@ unsafe impl VulkanObject for UnsafeBuffer {
}
}
unsafe impl DeviceOwned for UnsafeBuffer {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
impl fmt::Debug for UnsafeBuffer {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "<Vulkan buffer {:?}>", self.buffer)
}
}
impl Drop for UnsafeBuffer {
#[inline]
fn drop(&mut self) {
@ -525,6 +539,7 @@ mod tests {
use super::Usage;
use device::Device;
use device::DeviceOwned;
use sync::Sharing;
#[test]
@ -541,7 +556,7 @@ mod tests {
}
#[test]
#[should_panic = "Can't enable sparse residency without enabling sparse binding as well"]
#[should_panic(expected = "Can't enable sparse residency without enabling sparse binding as well")]
fn panic_wrong_sparse_residency() {
let (device, _) = gfx_dev_and_queue!();
let sparse = SparseLevel { sparse: false, sparse_residency: true, sparse_aliased: false };
@ -552,7 +567,7 @@ mod tests {
}
#[test]
#[should_panic = "Can't enable sparse aliasing without enabling sparse binding as well"]
#[should_panic(expected = "Can't enable sparse aliasing without enabling sparse binding as well")]
fn panic_wrong_sparse_aliased() {
let (device, _) = gfx_dev_and_queue!();
let sparse = SparseLevel { sparse: false, sparse_residency: false, sparse_aliased: true };

View File

@ -7,238 +7,305 @@
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::any::Any;
use std::ops::Range;
use std::sync::Arc;
use buffer::BufferSlice;
use buffer::sys::UnsafeBuffer;
use command_buffer::Submission;
use device::DeviceOwned;
use device::Queue;
use image::Image;
use image::ImageAccess;
use memory::Content;
use sync::AccessFlagBits;
use sync::Fence;
use sync::PipelineStages;
use sync::Semaphore;
use SafeDeref;
use VulkanObject;
pub unsafe trait Buffer: 'static + Send + Sync {
/// Returns the inner buffer.
fn inner(&self) -> &UnsafeBuffer;
/// Trait for objects that represent either a buffer or a slice of a buffer.
///
/// See also `TypedBuffer`.
// TODO: require `DeviceOwned`
pub unsafe trait Buffer {
/// Object that represents a GPU access to the buffer.
type Access: BufferAccess;
/// Returns whether accessing a range of this buffer should signal a fence.
fn needs_fence(&self, write: bool, Range<usize>) -> Option<bool>;
/// Builds an object that represents a GPU access to the buffer.
fn access(self) -> Self::Access;
/// Called when a command buffer that uses this buffer is being built.
/// Returns the size of the buffer in bytes.
fn size(&self) -> usize;
/// Returns the length of the buffer in number of elements.
///
/// Must return true if the command buffer should include a pipeline barrier at the start,
/// to read from what the host wrote, and a pipeline barrier at the end, to flush caches and
/// allow the host to read the data.
fn host_accesses(&self, block: usize) -> bool;
/// This method can only be called for buffers whose type is known to be an array.
#[inline]
fn len(&self) -> usize where Self: TypedBuffer, Self::Content: Content {
self.size() / <Self::Content as Content>::indiv_size()
}
/// Given a range, returns the list of blocks which each range is contained in.
/// Builds a `BufferSlice` object holding part of the buffer.
///
/// Each block must have a unique number. Hint: it can simply be the offset of the start of the
/// block.
/// Calling this function multiple times with the same parameter must always return the same
/// value.
/// The return value must not be empty.
fn blocks(&self, range: Range<usize>) -> Vec<usize>;
/// This method can only be called for buffers whose type is known to be an array.
///
/// This method can be used when you want to perform an operation on some part of the buffer
/// and not on the whole buffer.
///
/// Returns `None` if out of range.
#[inline]
fn slice<T>(self, range: Range<usize>) -> Option<BufferSlice<[T], Self>>
where Self: Sized + TypedBuffer<Content = [T]>
{
BufferSlice::slice(self.into_buffer_slice(), range)
}
/// Returns the range of bytes of the buffer slice used by a block.
fn block_memory_range(&self, block: usize) -> Range<usize>;
/// Builds a `BufferSlice` object holding the buffer by value.
#[inline]
fn into_buffer_slice(self) -> BufferSlice<Self::Content, Self>
where Self: Sized + TypedBuffer
{
BufferSlice::from_typed_buffer(self)
}
/// Builds a `BufferSlice` object holding part of the buffer.
///
/// This method can only be called for buffers whose type is known to be an array.
///
/// If the host is still accessing the buffer, this function implementation should block
/// until it is no longer the case.
/// This method can be used when you want to perform an operation on a specific element of the
/// buffer and not on the whole buffer.
///
/// **Important**: The `Submission` object likely holds an `Arc` to `self`. Therefore you
/// should store the `Submission` in the form of a `Weak<Submission>` and not
/// of an `Arc<Submission>` to avoid cyclic references.
unsafe fn gpu_access(&self, ranges: &mut Iterator<Item = AccessRange>,
submission: &Arc<Submission>) -> GpuAccessResult;
/// Returns `None` if out of range.
#[inline]
fn index<T>(self, index: usize) -> Option<BufferSlice<[T], Self>>
where Self: Sized + TypedBuffer<Content = [T]>
{
self.slice(index .. (index + 1))
}
}
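To make the slicing helpers above concrete, a hedged sketch; it assumes a `DeviceLocalBuffer::array` constructor (not shown in this diff) and the `gfx_dev_and_queue!` harness used elsewhere in this commit:
let (device, queue) = gfx_dev_and_queue!();
let buf = DeviceLocalBuffer::<[u32]>::array(&device, 16, &Usage::all(),
                                            iter::once(queue.family())).unwrap();
let _tail = buf.clone().slice(8 .. 16).unwrap(); // BufferSlice over half the array
let _one = buf.index(2).unwrap();                // single-element slice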
/// Extension trait for `Buffer`. Indicates the type of the content of the buffer.
pub unsafe trait TypedBuffer: Buffer {
/// The type of the content of the buffer.
type Content: ?Sized;
}
/// Trait for objects that represent a way for the GPU to have access to a buffer or a slice of a
/// buffer.
///
/// See also `TypedBufferAccess`.
pub unsafe trait BufferAccess: DeviceOwned {
/// Returns the inner information about this buffer.
fn inner(&self) -> BufferInner;
/// Returns the size of the buffer in bytes.
// FIXME: don't provide by default, because can be wrong
#[inline]
fn size(&self) -> usize {
self.inner().size()
self.inner().buffer.size()
}
}
/// Extension trait for `Buffer`. Types that implement this can be used in a `StdCommandBuffer`.
///
/// Each buffer and image used in a `StdCommandBuffer` have an associated state which is
/// represented by the `CommandListState` associated type of this trait. You can make multiple
/// buffers or images share the same state by making `is_same` return true.
pub unsafe trait TrackedBuffer: Buffer {
/// State of the buffer in a list of commands.
/// Returns the length of the buffer in number of elements.
///
/// The `Any` bound is here for stupid reasons, sorry.
// TODO: remove Any bound
type CommandListState: Any + CommandListState<FinishedState = Self::FinishedState>;
/// State of the buffer in a finished list of commands.
type FinishedState: CommandBufferState;
/// Returns true if TODO.
///
/// If `is_same` returns true, then the type of `CommandListState` must be the same as for the
/// other buffer. Otherwise a panic will occur.
/// This method can only be called for buffers whose type is known to be an array.
#[inline]
fn is_same_buffer<B>(&self, other: &B) -> bool where B: Buffer {
self.inner().internal_object() == other.inner().internal_object()
fn len(&self) -> usize where Self: TypedBufferAccess, Self::Content: Content {
self.size() / <Self::Content as Content>::indiv_size()
}
/// Returns true if TODO.
///
/// If `is_same` returns true, then the type of `CommandListState` must be the same as for the
/// other image. Otherwise a panic will occur.
/// Builds a `BufferSlice` object holding the buffer by reference.
#[inline]
fn is_same_image<I>(&self, other: &I) -> bool where I: Image {
false
fn as_buffer_slice(&self) -> BufferSlice<Self::Content, &Self>
where Self: Sized + TypedBufferAccess
{
BufferSlice::from_typed_buffer_access(self)
}
/// Returns the state of the buffer when it has not yet been used.
fn initial_state(&self) -> Self::CommandListState;
}
/// Trait for objects that represent the state of a slice of the buffer in a list of commands.
pub trait CommandListState {
type FinishedState: CommandBufferState;
/// Returns a new state that corresponds to the moment after a slice of the buffer has been
/// used in the pipeline. The parameters indicate in which way it has been used.
/// Builds a `BufferSlice` object holding part of the buffer by reference.
///
/// If the transition should result in a pipeline barrier, then it must be returned by this
/// function.
fn transition(self, num_command: usize, buffer: &UnsafeBuffer, offset: usize, size: usize,
write: bool, stage: PipelineStages, access: AccessFlagBits)
-> (Self, Option<PipelineBarrierRequest>)
where Self: Sized;
/// Function called when the command buffer builder is turned into a real command buffer.
/// This method can only be called for buffers whose type is known to be an array.
///
/// This function can return an additional pipeline barrier that will be applied at the end
/// of the command buffer.
fn finish(self) -> (Self::FinishedState, Option<PipelineBarrierRequest>);
/// This method can be used when you want to perform an operation on some part of the buffer
/// and not on the whole buffer.
///
/// Returns `None` if out of range.
#[inline]
fn slice<T>(&self, range: Range<usize>) -> Option<BufferSlice<[T], &Self>>
where Self: Sized + TypedBufferAccess<Content = [T]>
{
BufferSlice::slice(self.as_buffer_slice(), range)
}
/// Builds a `BufferSlice` object holding the buffer by value.
#[inline]
fn into_buffer_slice(self) -> BufferSlice<Self::Content, Self>
where Self: Sized + TypedBufferAccess
{
BufferSlice::from_typed_buffer_access(self)
}
/// Builds a `BufferSlice` object holding part of the buffer by reference.
///
/// This method can only be called for buffers whose type is known to be an array.
///
/// This method can be used when you want to perform an operation on a specific element of the
/// buffer and not on the whole buffer.
///
/// Returns `None` if out of range.
#[inline]
fn index<T>(&self, index: usize) -> Option<BufferSlice<[T], &Self>>
where Self: Sized + TypedBufferAccess<Content = [T]>
{
self.slice(index .. (index + 1))
}
/// Returns true if an access to `self` (as defined by `self_offset` and `self_size`)
/// potentially overlaps the same memory as an access to `other` (as defined by `other_offset`
/// and `other_size`).
///
/// If this function returns `false`, this means that we are allowed to access the offset/size
/// of `self` at the same time as the offset/size of `other` without causing a data race.
fn conflicts_buffer(&self, self_offset: usize, self_size: usize,
other: &BufferAccess, other_offset: usize, other_size: usize)
-> bool
{
// TODO: should we really provide a default implementation?
debug_assert!(self_size <= self.size());
if self.inner().buffer.internal_object() != other.inner().buffer.internal_object() {
return false;
}
let self_offset = self_offset + self.inner().offset;
let other_offset = other_offset + other.inner().offset;
if self_offset < other_offset && self_offset + self_size <= other_offset {
return false;
}
if other_offset < self_offset && other_offset + other_size <= self_offset {
return false;
}
true
}
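// Worked example (editorial note): two accesses to the same `UnsafeBuffer`
// with (offset 0, size 4) and (offset 4, size 4) hit the first early
// return (0 < 4 && 0 + 4 <= 4), so the byte ranges [0, 4) and [4, 8)
// do not conflict; (offset 0, size 5) against (offset 4, size 4) falls
// through both checks and does conflict.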
/// Returns true if an access to `self` (as defined by `self_offset` and `self_size`)
/// potentially overlaps the same memory as an access to `other` (as defined by
/// `other_first_layer`, `other_num_layers`, `other_first_mipmap` and `other_num_mipmaps`).
///
/// If this function returns `false`, this means that we are allowed to access the offset/size
/// of `self` at the same time as the offset/size of `other` without causing a data race.
fn conflicts_image(&self, self_offset: usize, self_size: usize, other: &ImageAccess,
other_first_layer: u32, other_num_layers: u32, other_first_mipmap: u32,
other_num_mipmaps: u32) -> bool
{
let other_key = other.conflict_key(other_first_layer, other_num_layers, other_first_mipmap,
other_num_mipmaps);
self.conflict_key(self_offset, self_size) == other_key
}
/// Returns a key that uniquely identifies the range given by offset/size.
///
/// Two ranges that potentially overlap in memory should return the same key.
///
/// The key is shared amongst all buffers and images, which means that you can make several
/// different buffer objects share the same memory, or make some buffer objects share memory
/// with images, as long as they return the same key.
///
/// Since it is possible to accidentally return the same key for memory ranges that don't
/// overlap, the `conflicts_buffer` or `conflicts_image` function should always be called to
/// verify whether they actually overlap.
fn conflict_key(&self, self_offset: usize, self_size: usize) -> u64 {
// FIXME: remove implementation
unimplemented!()
}
/// Shortcut for `conflicts_buffer` that compares the whole buffer to another.
#[inline]
fn conflicts_buffer_all(&self, other: &BufferAccess) -> bool {
self.conflicts_buffer(0, self.size(), other, 0, other.size())
}
/// Shortcut for `conflicts_image` that compares the whole buffer to a whole image.
#[inline]
fn conflicts_image_all(&self, other: &ImageAccess) -> bool {
self.conflicts_image(0, self.size(), other, 0, other.dimensions().array_layers(), 0,
other.mipmap_levels())
}
/// Shortcut for `conflict_key` that grabs the key of the whole buffer.
#[inline]
fn conflict_key_all(&self) -> u64 {
self.conflict_key(0, self.size())
}
/// Locks the resource for usage on the GPU. Returns `false` if the lock was already acquired.
///
/// This function implementation should remember that it has been called and return `false` if
/// it gets called a second time.
///
/// The only way to know that the GPU has stopped accessing the buffer is when the buffer object
/// gets destroyed. Therefore you are encouraged to use temporary objects or handles (similar
/// to a lock) in order to represent a GPU access.
// TODO: return Result?
fn try_gpu_lock(&self, exclusive_access: bool, queue: &Queue) -> bool;
/// Locks the resource for usage on the GPU. Supposes that the resource is already locked, and
/// simply increases the lock by one.
///
/// Must only be called after `try_gpu_lock()` succeeded.
unsafe fn increase_gpu_lock(&self);
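// Intended locking discipline (editorial sketch): the first GPU use of a
// resource in a submission calls `try_gpu_lock`, every further concurrent
// use calls `increase_gpu_lock`, and the implementor releases the lock
// when its access object goes away (as `CpuBufferPoolSubbuffer::drop`
// does earlier in this commit).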
}
/// Requests that a pipeline barrier is created.
pub struct PipelineBarrierRequest {
/// The number of the command after which the barrier should be placed. Must usually match
/// the number that was passed to the previous call to `transition`, or 0 if the buffer hasn't
/// been used yet.
pub after_command_num: usize,
/// The source pipeline stages of the transition.
pub source_stage: PipelineStages,
/// The destination pipeline stages of the transition.
pub destination_stages: PipelineStages,
/// If true, the pipeline barrier is by region. There is literally no reason to pass `false`
/// here, but it is included just in case.
pub by_region: bool,
/// An optional memory barrier. See the docs of `PipelineMemoryBarrierRequest`.
pub memory_barrier: Option<PipelineMemoryBarrierRequest>,
}
/// Requests that a memory barrier is created as part of the pipeline barrier.
///
/// By default, a pipeline barrier only guarantees that the source operations are executed before
/// the destination operations, but it doesn't make memory writes made by source operations visible
/// to the destination operations. In order to do so, you have to add a memory barrier.
///
/// The memory barrier always concerns the buffer that is currently being processed. You can't add
/// a memory barrier that concerns another resource.
pub struct PipelineMemoryBarrierRequest {
    /// Offset of start of the range to flush.
    pub offset: usize,
    /// Size of the range to flush.
    pub size: usize,
    /// Source accesses.
    pub source_access: AccessFlagBits,
    /// Destination accesses.
    pub destination_access: AccessFlagBits,
}

/// Inner information about a buffer.
#[derive(Copy, Clone, Debug)]
pub struct BufferInner<'a> {
    /// The underlying buffer object.
    pub buffer: &'a UnsafeBuffer,
    /// The offset in bytes from the start of the underlying buffer object to the start of the
    /// buffer we're describing.
    pub offset: usize,
}
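// A sketch (not part of the diff) of filling out a barrier request for a
// transfer-write to vertex-shader-read dependency on the current buffer; the
// command number, range, and the exact stage/access flags are illustrative
// assumptions only.
fn example_barrier_request() -> PipelineBarrierRequest {
    PipelineBarrierRequest {
        after_command_num: 0,       // the buffer hasn't been used yet
        source_stage: PipelineStages { transfer: true, .. PipelineStages::none() },
        destination_stages: PipelineStages { vertex_shader: true, .. PipelineStages::none() },
        by_region: true,            // no reason to pass `false`
        memory_barrier: Some(PipelineMemoryBarrierRequest {
            offset: 0,
            size: 1024,
            source_access: AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() },
            destination_access: AccessFlagBits { shader_read: true, .. AccessFlagBits::none() },
        }),
    }
}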
/// Trait for objects that represent the state of the buffer in a command buffer.
pub trait CommandBufferState {
/// Called right before the command buffer is submitted.
// TODO: function should be unsafe because it must be guaranteed that a cb is submitted
fn on_submit<B, F>(&self, buffer: &B, queue: &Arc<Queue>, fence: F) -> SubmitInfos
where B: Buffer, F: FnOnce() -> Arc<Fence>;
}
pub struct SubmitInfos {
pub pre_semaphore: Option<(Arc<Semaphore>, PipelineStages)>,
pub post_semaphore: Option<Arc<Semaphore>>,
pub pre_barrier: Option<PipelineBarrierRequest>,
pub post_barrier: Option<PipelineBarrierRequest>,
}
unsafe impl<T> BufferAccess for T where T: SafeDeref, T::Target: BufferAccess {
    #[inline]
    fn inner(&self) -> BufferInner {
        (**self).inner()
    }

    #[inline]
    fn size(&self) -> usize {
        (**self).size()
    }

    #[inline]
    fn conflicts_buffer(&self, self_offset: usize, self_size: usize,
                        other: &BufferAccess, other_offset: usize, other_size: usize) -> bool
    {
        (**self).conflicts_buffer(self_offset, self_size, other, other_offset, other_size)
    }

    #[inline]
    fn conflict_key(&self, self_offset: usize, self_size: usize) -> u64 {
        (**self).conflict_key(self_offset, self_size)
    }

    #[inline]
    fn try_gpu_lock(&self, exclusive_access: bool, queue: &Queue) -> bool {
        (**self).try_gpu_lock(exclusive_access, queue)
    }

    #[inline]
    unsafe fn increase_gpu_lock(&self) {
        (**self).increase_gpu_lock()
    }
}

/// Extension trait for `Buffer`. Indicates the type of the content of the buffer.
pub unsafe trait TypedBuffer: Buffer {
    /// The type of the content.
    type Content: ?Sized + 'static;

    /// Returns the number of elements in the buffer.
    #[inline]
    fn len(&self) -> usize where Self::Content: Content {
        self.size() / <Self::Content as Content>::indiv_size()
    }
}

/// Extension trait for `BufferAccess`. Indicates the type of the content of the buffer.
pub unsafe trait TypedBufferAccess: BufferAccess {
    /// The type of the content.
    type Content: ?Sized;
}

unsafe impl<T> TypedBufferAccess for T where T: SafeDeref, T::Target: TypedBufferAccess {
    type Content = <T::Target as TypedBufferAccess>::Content;
}
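// A sketch (not part of the diff) of what the blanket impls above provide: if a
// hypothetical `MyBuffer` implements `BufferAccess`, then `Arc<MyBuffer>` (which
// is `SafeDeref<Target = MyBuffer>`) implements it too, so smart pointers can be
// passed wherever a buffer access is expected.
//
//     fn takes_buffer_access<B: BufferAccess>(buffer: &B) { /* ... */ }
//
//     let buffer: Arc<MyBuffer> = /* ... */;
//     takes_buffer_access(&buffer);   // works through the blanket impl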


@ -17,23 +17,24 @@
//!
//! # Example
//!
//! ```
//! # use std::sync::Arc;
//! use vulkano::buffer::immutable::ImmutableBuffer;
//! use vulkano::buffer::sys::Usage;
//! use vulkano::buffer::BufferView;
//! use vulkano::format;
//!
//! # let device: Arc<vulkano::device::Device> = return;
//! # let queue: Arc<vulkano::device::Queue> = return;
//! let usage = Usage {
//! storage_texel_buffer: true,
//! .. Usage::none()
//! };
//!
//! let (buffer, _future) = ImmutableBuffer::<[u32]>::from_iter((0..128).map(|n| n), &usage,
//!                                                             Some(queue.family()),
//!                                                             queue.clone()).unwrap();
//! let _view = BufferView::new(buffer, format::R32Uint).unwrap();
//! ```
use std::marker::PhantomData;
@ -44,12 +45,18 @@ use std::ptr;
use std::sync::Arc;
use buffer::Buffer;
use buffer::BufferSlice;
use buffer::BufferAccess;
use buffer::BufferInner;
use buffer::TypedBuffer;
use buffer::TypedBufferAccess;
use device::Device;
use device::DeviceOwned;
use format::FormatDesc;
use format::StrongStorage;
use Error;
use OomError;
use SafeDeref;
use VulkanObject;
use VulkanPointers;
use check_errors;
@ -57,19 +64,30 @@ use vk;
/// Represents a way for the GPU to interpret buffer data. See the documentation of the
/// `view` module.
pub struct BufferView<F, B> where B: BufferAccess {
    view: vk::BufferView,
    buffer: B,
    marker: PhantomData<F>,
    atomic_accesses: bool,
}
impl<F, B> BufferView<F, B> where B: BufferAccess {
    /// Builds a new buffer view.
    #[inline]
    pub fn new<P>(buffer: P, format: F) -> Result<Arc<BufferView<F, B>>, BufferViewCreationError>
        where P: TypedBuffer<Content = [F::Pixel]> + Buffer<Access = B>,
              B: BufferAccess,
              F: StrongStorage + 'static
    {
        unsafe {
            BufferView::unchecked(buffer.access(), format)
        }
    }

    /// Builds a new buffer view from a `BufferAccess` object.
    #[inline]
    pub fn from_access(buffer: B, format: F) -> Result<Arc<BufferView<F, B>>, BufferViewCreationError>
        where B: TypedBufferAccess<Content = [F::Pixel]>, F: StrongStorage + 'static
    {
        unsafe {
            BufferView::unchecked(buffer, format)
@ -77,72 +95,71 @@ impl<F, B> BufferView<F, B> where B: Buffer {
}
    /// Builds a new buffer view without checking that the format is correct.
    pub unsafe fn unchecked(org_buffer: B, format: F)
                            -> Result<Arc<BufferView<F, B>>, BufferViewCreationError>
        where B: BufferAccess, F: FormatDesc + 'static
    {
        let (view, format_props) = {
            let size = org_buffer.size();
            let BufferInner { buffer, offset } = org_buffer.inner();

            let device = buffer.device();
            let format = format.format();

            // TODO: check minTexelBufferOffsetAlignment

            if !buffer.usage_uniform_texel_buffer() && !buffer.usage_storage_texel_buffer() {
                return Err(BufferViewCreationError::WrongBufferUsage);
            }

            {
                let nb = size / format.size().expect("Can't use a compressed format for buffer views");
                let l = device.physical_device().limits().max_texel_buffer_elements();
                if nb > l as usize {
                    return Err(BufferViewCreationError::MaxTexelBufferElementsExceeded);
                }
            }

            let format_props = {
                let vk_i = device.instance().pointers();
                let mut output = mem::uninitialized();
                vk_i.GetPhysicalDeviceFormatProperties(device.physical_device().internal_object(),
                                                       format as u32, &mut output);
                output.bufferFeatures
            };

            if buffer.usage_uniform_texel_buffer() {
                if (format_props & vk::FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT) == 0 {
                    return Err(BufferViewCreationError::UnsupportedFormat);
                }
            }

            if buffer.usage_storage_texel_buffer() {
                if (format_props & vk::FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT) == 0 {
                    return Err(BufferViewCreationError::UnsupportedFormat);
                }
            }

            let infos = vk::BufferViewCreateInfo {
                sType: vk::STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO,
                pNext: ptr::null(),
                flags: 0,   // reserved
                buffer: buffer.internal_object(),
                format: format as u32,
                offset: offset as u64,
                range: size as u64,
            };

            let vk = device.pointers();
            let mut output = mem::uninitialized();
            try!(check_errors(vk.CreateBufferView(device.internal_object(), &infos,
                                                  ptr::null(), &mut output)));
            (output, format_props)
        };

        Ok(Arc::new(BufferView {
            view: view,
            buffer: org_buffer,
            marker: PhantomData,
            atomic_accesses: (format_props &
                              vk::FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT) != 0,
@ -151,20 +168,20 @@ impl<F, B> BufferView<F, B> where B: Buffer {
    /// Returns the buffer associated to this view.
    #[inline]
    pub fn buffer(&self) -> &B {
        &self.buffer
    }

    /// Returns true if the buffer view can be used as a uniform texel buffer.
    #[inline]
    pub fn uniform_texel_buffer(&self) -> bool {
        self.buffer.inner().buffer.usage_uniform_texel_buffer()
    }

    /// Returns true if the buffer view can be used as a storage texel buffer.
    #[inline]
    pub fn storage_texel_buffer(&self) -> bool {
        self.buffer.inner().buffer.usage_storage_texel_buffer()
}
/// Returns true if the buffer view can be used as a storage texel buffer with atomic accesses.
@ -174,7 +191,7 @@ impl<F, B> BufferView<F, B> where B: Buffer {
}
}
unsafe impl<F, B> VulkanObject for BufferView<F, B> where B: BufferAccess {
type Object = vk::BufferView;
#[inline]
@ -183,17 +200,62 @@ unsafe impl<F, B> VulkanObject for BufferView<F, B> where B: Buffer {
}
}
unsafe impl<F, B> DeviceOwned for BufferView<F, B>
where B: BufferAccess
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.buffer.device()
}
}
impl<F, B> fmt::Debug for BufferView<F, B> where B: BufferAccess + fmt::Debug {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
fmt.debug_struct("BufferView")
.field("raw", &self.view)
.field("buffer", &self.buffer)
.finish()
}
}
impl<F, B> Drop for BufferView<F, B> where B: BufferAccess {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.buffer.inner().buffer.device().pointers();
vk.DestroyBufferView(self.buffer.inner().buffer.device().internal_object(), self.view,
ptr::null());
}
}
}
pub unsafe trait BufferViewRef {
type BufferAccess: BufferAccess;
type Format;
fn view(&self) -> &BufferView<Self::Format, Self::BufferAccess>;
}
unsafe impl<F, B> BufferViewRef for BufferView<F, B> where B: BufferAccess {
type BufferAccess = B;
type Format = F;
#[inline]
fn view(&self) -> &BufferView<F, B> {
self
}
}
unsafe impl<T, F, B> BufferViewRef for T where T: SafeDeref<Target = BufferView<F, B>>, B: BufferAccess {
type BufferAccess = B;
type Format = F;
#[inline]
fn view(&self) -> &BufferView<F, B> {
&**self
}
}
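// A sketch (not part of the diff) of what `BufferViewRef` buys you: generic code
// can accept a `BufferView` either directly or behind any `SafeDeref` pointer
// such as `Arc`, and uniformly reach the underlying view.
fn with_view<V>(v: &V) -> &BufferView<V::Format, V::BufferAccess>
    where V: BufferViewRef
{
    v.view()
}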
/// Error that can happen when creating a buffer view.
#[derive(Debug, Copy, Clone)]
pub enum BufferViewCreationError {
@ -274,9 +336,9 @@ mod tests {
.. Usage::none()
};
let (buffer, _) = ImmutableBuffer::<[[u8; 4]]>::from_iter((0..128).map(|_| [0; 4]), &usage,
                                                          Some(queue.family()), queue.clone()).unwrap();
let view = BufferView::new(buffer, format::R8G8B8A8Unorm).unwrap();
assert!(view.uniform_texel_buffer());
}
@ -291,9 +353,10 @@ mod tests {
.. Usage::none()
};
let (buffer, _) = ImmutableBuffer::<[[u8; 4]]>::from_iter((0..128).map(|_| [0; 4]), &usage,
                                                          Some(queue.family()),
                                                          queue.clone()).unwrap();
let view = BufferView::new(buffer, format::R8G8B8A8Unorm).unwrap();
assert!(view.storage_texel_buffer());
}
@ -308,9 +371,10 @@ mod tests {
.. Usage::none()
};
let (buffer, _) = ImmutableBuffer::<[u32]>::from_iter((0..128).map(|_| 0), &usage,
                                                      Some(queue.family()),
                                                      queue.clone()).unwrap();
let view = BufferView::new(buffer, format::R32Uint).unwrap();
assert!(view.storage_texel_buffer());
assert!(view.storage_texel_buffer_atomic());
@ -321,10 +385,12 @@ mod tests {
// `VK_FORMAT_R8G8B8A8_UNORM` guaranteed to be a supported format
let (device, queue) = gfx_dev_and_queue!();
let (buffer, _) = ImmutableBuffer::<[[u8; 4]]>::from_iter((0..128).map(|_| [0; 4]),
                                                          &Usage::none(),
                                                          Some(queue.family()),
                                                          queue.clone()).unwrap();

match BufferView::new(buffer, format::R8G8B8A8Unorm) {
Err(BufferViewCreationError::WrongBufferUsage) => (),
_ => panic!()
}
@ -340,11 +406,12 @@ mod tests {
.. Usage::none()
};
let (buffer, _) = ImmutableBuffer::<[[f64; 4]]>::from_iter((0..128).map(|_| [0.0; 4]),
                                                           &usage, Some(queue.family()),
                                                           queue.clone()).unwrap();

// TODO: what if R64G64B64A64Sfloat is supported?
match BufferView::new(buffer, format::R64G64B64A64Sfloat) {
Err(BufferViewCreationError::UnsupportedFormat) => (),
_ => panic!()
}


@ -0,0 +1,173 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use buffer::BufferAccess;
use command_buffer::cb;
use command_buffer::commands_raw;
use command_buffer::cb::AddCommand;
use command_buffer::cb::CommandBufferBuild;
use command_buffer::cb::UnsafeCommandBuffer;
use command_buffer::CommandAddError;
use command_buffer::CommandBuffer;
use command_buffer::CommandBufferBuilder;
use command_buffer::CommandBufferExecError;
use command_buffer::pool::CommandPool;
use command_buffer::pool::StandardCommandPool;
use device::Device;
use device::DeviceOwned;
use device::Queue;
use image::Layout;
use image::ImageAccess;
use instance::QueueFamily;
use sync::AccessCheckError;
use sync::AccessFlagBits;
use sync::PipelineStages;
use sync::GpuFuture;
use OomError;
type Cb<P> = cb::DeviceCheckLayer<cb::QueueTyCheckLayer<cb::ContextCheckLayer<cb::StateCacheLayer<cb::SubmitSyncBuilderLayer<cb::AutoPipelineBarriersLayer<cb::AbstractStorageLayer<cb::UnsafeCommandBufferBuilder<P>>>>>>>>;
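// The same layer stack, unrolled for readability (a sketch, not part of the diff);
// commands added to the builder traverse the checks from the outermost layer
// inwards before reaching the raw Vulkan builder:
//
//     DeviceCheckLayer                      same-device check
//       QueueTyCheckLayer                   graphics/compute capability check
//         ContextCheckLayer                 inside/outside render pass check
//           StateCacheLayer                 skips redundant state changes
//             SubmitSyncBuilderLayer        synchronization at submit time
//               AutoPipelineBarriersLayer   automatic pipeline barriers
//                 AbstractStorageLayer      keeps command objects alive
//                   UnsafeCommandBufferBuilder<P>   raw Vulkan commands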
/// A command buffer builder that performs the appropriate safety checks automatically.
///
/// Note that command buffers allocated from the default command pool (`Arc<StandardCommandPool>`)
/// don't implement the `Send` and `Sync` traits. If you use this pool, then the
/// `AutoCommandBufferBuilder` will not implement `Send` and `Sync` either. Once a command buffer
/// is built, however, it *does* implement `Send` and `Sync`.
///
pub struct AutoCommandBufferBuilder<P = Arc<StandardCommandPool>> where P: CommandPool {
inner: Cb<P>
}
impl AutoCommandBufferBuilder<Arc<StandardCommandPool>> {
pub fn new(device: Arc<Device>, queue_family: QueueFamily)
-> Result<AutoCommandBufferBuilder<Arc<StandardCommandPool>>, OomError>
{
let pool = Device::standard_command_pool(&device, queue_family);
let cmd = unsafe {
let c = try!(cb::UnsafeCommandBufferBuilder::new(&pool, cb::Kind::primary(), cb::Flags::SimultaneousUse /* TODO: */));
let c = cb::AbstractStorageLayer::new(c);
let c = cb::AutoPipelineBarriersLayer::new(c);
let c = cb::SubmitSyncBuilderLayer::new(c, cb::SubmitSyncBuilderLayerBehavior::UseLayoutHint);
let c = cb::StateCacheLayer::new(c);
let c = cb::ContextCheckLayer::new(c, false, true);
let c = cb::QueueTyCheckLayer::new(c);
let c = cb::DeviceCheckLayer::new(c);
c
};
Ok(AutoCommandBufferBuilder {
inner: cmd,
})
}
}
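// A usage sketch (not part of the diff), assuming `device: Arc<Device>`, a
// `queue: Arc<Queue>` and a hypothetical buffer `buf` implementing `Buffer`:
//
//     let cb = AutoCommandBufferBuilder::new(device.clone(), queue.family())?
//         .fill_buffer(buf, 0)?       // record a command
//         .build()?;                  // turn the builder into a command buffer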
unsafe impl<P, O, E> CommandBufferBuild for AutoCommandBufferBuilder<P>
where Cb<P>: CommandBufferBuild<Out = O, Err = E>,
P: CommandPool
{
type Out = O;
type Err = E;
#[inline]
fn build(self) -> Result<O, E> {
// TODO: wrap around?
CommandBufferBuild::build(self.inner)
}
}
unsafe impl<P> CommandBuffer for AutoCommandBufferBuilder<P>
where Cb<P>: CommandBuffer,
P: CommandPool
{
type Pool = <Cb<P> as CommandBuffer>::Pool;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> {
self.inner.inner()
}
#[inline]
fn prepare_submit(&self, future: &GpuFuture, queue: &Queue) -> Result<(), CommandBufferExecError> {
self.inner.prepare_submit(future, queue)
}
#[inline]
fn check_buffer_access(&self, buffer: &BufferAccess, exclusive: bool, queue: &Queue)
-> Result<Option<(PipelineStages, AccessFlagBits)>, AccessCheckError>
{
self.inner.check_buffer_access(buffer, exclusive, queue)
}
#[inline]
fn check_image_access(&self, image: &ImageAccess, layout: Layout, exclusive: bool, queue: &Queue)
-> Result<Option<(PipelineStages, AccessFlagBits)>, AccessCheckError>
{
self.inner.check_image_access(image, layout, exclusive, queue)
}
}
unsafe impl<P> DeviceOwned for AutoCommandBufferBuilder<P>
where Cb<P>: DeviceOwned,
P: CommandPool
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
unsafe impl<P> CommandBufferBuilder for AutoCommandBufferBuilder<P>
where Cb<P>: CommandBufferBuilder,
P: CommandPool
{
#[inline]
fn queue_family(&self) -> QueueFamily {
self.inner.queue_family()
}
}
macro_rules! pass_through {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<P $(, $param)*> AddCommand<$cmd> for AutoCommandBufferBuilder<P>
where P: CommandPool,
Cb<P>: AddCommand<$cmd, Out = Cb<P>>
{
type Out = AutoCommandBufferBuilder<P>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
Ok(AutoCommandBufferBuilder {
inner: self.inner.add(command)?,
})
}
}
}
}
pass_through!((Rp, F), commands_raw::CmdBeginRenderPass<Rp, F>);
pass_through!((S, Pl), commands_raw::CmdBindDescriptorSets<S, Pl>);
pass_through!((B), commands_raw::CmdBindIndexBuffer<B>);
pass_through!((Pl), commands_raw::CmdBindPipeline<Pl>);
pass_through!((V), commands_raw::CmdBindVertexBuffers<V>);
pass_through!((), commands_raw::CmdClearAttachments);
pass_through!((S, D), commands_raw::CmdCopyBuffer<S, D>);
pass_through!((S, D), commands_raw::CmdCopyBufferToImage<S, D>);
pass_through!((), commands_raw::CmdDrawRaw);
pass_through!((), commands_raw::CmdDrawIndexedRaw);
pass_through!((B), commands_raw::CmdDrawIndirectRaw<B>);
pass_through!((), commands_raw::CmdEndRenderPass);
pass_through!((C), commands_raw::CmdExecuteCommands<C>);
pass_through!((B), commands_raw::CmdFillBuffer<B>);
pass_through!((), commands_raw::CmdNextSubpass);
pass_through!((Pc, Pl), commands_raw::CmdPushConstants<Pc, Pl>);
pass_through!((), commands_raw::CmdSetState);
pass_through!((B, D), commands_raw::CmdUpdateBuffer<B, D>);


@ -0,0 +1,371 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::sync::Arc;
use buffer::Buffer;
use buffer::TypedBuffer;
use buffer::TypedBufferAccess;
use device::DeviceOwned;
use command_buffer::DrawIndirectCommand;
use command_buffer::DynamicState;
use command_buffer::cb::AddCommand;
use command_buffer::cb::CommandBufferBuild;
use command_buffer::commands_extra;
use command_buffer::commands_raw;
use descriptor::descriptor_set::DescriptorSetsCollection;
use framebuffer::FramebufferAbstract;
use framebuffer::RenderPassAbstract;
use framebuffer::RenderPassDescClearValues;
use image::Image;
use instance::QueueFamily;
use pipeline::ComputePipelineAbstract;
use pipeline::GraphicsPipelineAbstract;
use pipeline::vertex::VertexSource;
use pipeline::input_assembly::Index;
/// Convenience methods for adding commands to a command buffer builder.
///
/// > **Note**: This trait is just a utility trait. Do not implement it yourself. Instead
/// > implement the `AddCommand` and `CommandBufferBuild` traits.
pub unsafe trait CommandBufferBuilder: DeviceOwned {
/// Adds a command that writes the content of a buffer.
///
/// This function is similar to the `memset` function in C. The `data` parameter is a number
/// that will be repeatedly written through the entire buffer.
///
/// > **Note**: This function is technically safe because buffers can only contain integers or
/// > floating point numbers, which are always valid whatever their memory representation is.
/// > But unless your buffer actually contains only 32-bit integers, you are encouraged to use
/// > this function only for zeroing the content of a buffer by passing `0` for the data.
// TODO: not safe because of signalling NaNs
#[inline]
fn fill_buffer<B, O>(self, buffer: B, data: u32) -> Result<O, CommandBufferBuilderError<commands_raw::CmdFillBufferError>>
where Self: Sized + AddCommand<commands_raw::CmdFillBuffer<B::Access>, Out = O>,
B: Buffer
{
let cmd = match commands_raw::CmdFillBuffer::new(buffer.access(), data) {
Ok(cmd) => cmd,
Err(err) => return Err(CommandBufferBuilderError::CommandBuildError(err)),
};
Ok(self.add(cmd)?)
}
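    // A usage sketch (not part of the diff), assuming `builder` implements this
    // trait and `buf` is a hypothetical `Buffer` whose content is `[u32]`:
    //
    //     let builder = builder.fill_buffer(buf, 0)?;   // zero the whole buffer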
/// Adds a command that writes data to a buffer.
#[inline]
fn update_buffer<B, D, O>(self, buffer: B, data: D) -> Result<O, CommandBufferBuilderError<commands_raw::CmdUpdateBufferError>>
where Self: Sized + AddCommand<commands_raw::CmdUpdateBuffer<B::Access, D>, Out = O>,
B: Buffer + TypedBuffer<Content = D>,
D: 'static
{
let cmd = match commands_raw::CmdUpdateBuffer::new(buffer, data) {
Ok(cmd) => cmd,
Err(err) => return Err(CommandBufferBuilderError::CommandBuildError(err)),
};
Ok(self.add(cmd)?)
}
/// Adds a command that copies from a buffer to another.
#[inline]
fn copy_buffer<S, D, O>(self, src: S, dest: D) -> Result<O, CommandBufferBuilderError<commands_raw::CmdCopyBufferError>>
where Self: Sized + AddCommand<commands_raw::CmdCopyBuffer<S::Access, D::Access>, Out = O>,
S: Buffer,
D: Buffer
{
let cmd = match commands_raw::CmdCopyBuffer::new(src.access(), dest.access()) {
Ok(cmd) => cmd,
Err(err) => return Err(CommandBufferBuilderError::CommandBuildError(err)),
};
Ok(self.add(cmd)?)
}
/// Adds a command that copies the content of a buffer to an image.
///
/// For color images (i.e. all formats except depth and/or stencil formats) this command does
/// not perform any conversion. The data inside the buffer must already have the right format.
/// TODO: talk about depth/stencil
// TODO: not safe because of signalling NaNs
#[inline]
fn copy_buffer_to_image<B, I, O>(self, buffer: B, image: I)
-> Result<O, CommandBufferBuilderError<commands_raw::CmdCopyBufferToImageError>>
where Self: Sized + AddCommand<commands_raw::CmdCopyBufferToImage<B::Access, I::Access>, Out = O>,
B: Buffer, I: Image
{
let cmd = match commands_raw::CmdCopyBufferToImage::new(buffer.access(), image.access()) {
Ok(cmd) => cmd,
Err(err) => return Err(CommandBufferBuilderError::CommandBuildError(err)),
};
Ok(self.add(cmd)?)
}
/// Same as `copy_buffer_to_image` but lets you specify a range for the destination image.
#[inline]
fn copy_buffer_to_image_dimensions<B, I, O>(self, buffer: B, image: I, offset: [u32; 3],
size: [u32; 3], first_layer: u32, num_layers: u32,
mipmap: u32) -> Result<O, CommandBufferBuilderError<commands_raw::CmdCopyBufferToImageError>>
where Self: Sized + AddCommand<commands_raw::CmdCopyBufferToImage<B::Access, I::Access>, Out = O>,
B: Buffer, I: Image
{
let cmd = match commands_raw::CmdCopyBufferToImage::with_dimensions(buffer.access(),
image.access(), offset, size,
first_layer, num_layers, mipmap)
{
Ok(cmd) => cmd,
Err(err) => return Err(CommandBufferBuilderError::CommandBuildError(err)),
};
Ok(self.add(cmd)?)
}
/// Adds a command that starts a render pass.
///
/// If `secondary` is true, then you will only be able to add secondary command buffers while
/// you're inside the first subpass of the render pass. If `secondary` is false, you will only
/// be able to add inline draw commands and not secondary command buffers.
///
/// You must call this before you can add draw commands.
#[inline]
fn begin_render_pass<F, C, O>(self, framebuffer: F, secondary: bool, clear_values: C)
-> Result<O, CommandAddError>
where Self: Sized + AddCommand<commands_raw::CmdBeginRenderPass<Arc<RenderPassAbstract + Send + Sync>, F>, Out = O>,
F: FramebufferAbstract + RenderPassDescClearValues<C>
{
let cmd = commands_raw::CmdBeginRenderPass::new(framebuffer, secondary, clear_values);
self.add(cmd)
}
/// Adds a command that jumps to the next subpass of the current render pass.
#[inline]
fn next_subpass<O>(self, secondary: bool) -> Result<O, CommandAddError>
where Self: Sized + AddCommand<commands_raw::CmdNextSubpass, Out = O>
{
let cmd = commands_raw::CmdNextSubpass::new(secondary);
self.add(cmd)
}
/// Adds a command that ends the current render pass.
///
/// This must be called after you have gone through all the subpasses and before you can build
/// the command buffer or add further commands.
#[inline]
fn end_render_pass<O>(self) -> Result<O, CommandAddError>
where Self: Sized + AddCommand<commands_raw::CmdEndRenderPass, Out = O>
{
let cmd = commands_raw::CmdEndRenderPass::new();
self.add(cmd)
}
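    // A sketch (not part of the diff) of the expected render pass flow, with
    // hypothetical `framebuffer`, `clear_values` and draw arguments:
    //
    //     let builder = builder
    //         .begin_render_pass(framebuffer, false, clear_values)?
    //         // ... draw commands for subpass 0 ...
    //         .next_subpass(false)?
    //         // ... draw commands for subpass 1 ...
    //         .end_render_pass()?;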
/// Adds a command that draws.
///
/// Can only be used from inside a render pass.
#[inline]
fn draw<P, S, Pc, V, O>(self, pipeline: P, dynamic: DynamicState, vertices: V, sets: S,
push_constants: Pc) -> Result<O, CommandAddError>
where Self: Sized + AddCommand<commands_extra::CmdDraw<V, P, S, Pc>, Out = O>,
S: DescriptorSetsCollection,
P: VertexSource<V> + GraphicsPipelineAbstract + Clone
{
let cmd = commands_extra::CmdDraw::new(pipeline, dynamic, vertices, sets, push_constants);
self.add(cmd)
}
/// Adds a command that draws indexed vertices.
///
/// Can only be used from inside a render pass.
#[inline]
fn draw_indexed<P, S, Pc, V, Ib, I, O>(self, pipeline: P, dynamic: DynamicState,
vertices: V, index_buffer: Ib, sets: S, push_constants: Pc) -> Result<O, CommandAddError>
where Self: Sized + AddCommand<commands_extra::CmdDrawIndexed<V, Ib::Access, P, S, Pc>, Out = O>,
S: DescriptorSetsCollection,
P: VertexSource<V> + GraphicsPipelineAbstract + Clone,
Ib: Buffer,
Ib::Access: TypedBufferAccess<Content = [I]>,
I: Index + 'static
{
let cmd = commands_extra::CmdDrawIndexed::new(pipeline, dynamic, vertices, index_buffer.access(),
sets, push_constants);
self.add(cmd)
}
/// Adds an indirect draw command.
///
/// Can only be used from inside a render pass.
#[inline]
fn draw_indirect<P, S, Pc, V, B, O>(self, pipeline: P, dynamic: DynamicState,
vertices: V, indirect_buffer: B, sets: S, push_constants: Pc) -> Result<O, CommandAddError>
where Self: Sized + AddCommand<commands_extra::CmdDrawIndirect<V, B::Access, P, S, Pc>, Out = O>,
S: DescriptorSetsCollection,
P: VertexSource<V> + GraphicsPipelineAbstract + Clone,
B: Buffer,
B::Access: TypedBufferAccess<Content = [DrawIndirectCommand]>
{
let cmd = commands_extra::CmdDrawIndirect::new(pipeline, dynamic, vertices, indirect_buffer.access(),
sets, push_constants);
self.add(cmd)
}
/// Executes a compute shader.
fn dispatch<P, S, Pc, O>(self, dimensions: [u32; 3], pipeline: P, sets: S, push_constants: Pc)
-> Result<O, CommandBufferBuilderError<commands_extra::CmdDispatchError>>
where Self: Sized + AddCommand<commands_extra::CmdDispatch<P, S, Pc>, Out = O>,
S: DescriptorSetsCollection,
P: Clone + ComputePipelineAbstract,
{
let cmd = match commands_extra::CmdDispatch::new(dimensions, pipeline, sets, push_constants) {
Ok(cmd) => cmd,
Err(err) => return Err(CommandBufferBuilderError::CommandBuildError(err)),
};
Ok(self.add(cmd)?)
}
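    // A usage sketch (not part of the diff) dispatching a 64x1x1 grid of workgroups,
    // with a hypothetical compute `pipeline`, descriptor `sets` and no push constants:
    //
    //     let builder = builder.dispatch([64, 1, 1], pipeline.clone(), sets, ())?;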
/// Builds the actual command buffer.
///
/// You must call this function after you have finished adding commands to the command buffer
/// builder. A command buffer will be returned, which you can then submit or use in an "execute
/// commands" command.
#[inline]
fn build(self) -> Result<Self::Out, Self::Err>
where Self: Sized + CommandBufferBuild
{
CommandBufferBuild::build(self)
}
/// Returns true if the pool of the builder supports graphics operations.
#[inline]
fn supports_graphics(&self) -> bool {
self.queue_family().supports_graphics()
}
/// Returns true if the pool of the builder supports compute operations.
#[inline]
fn supports_compute(&self) -> bool {
self.queue_family().supports_compute()
}
/// Returns the queue family of the command buffer builder.
fn queue_family(&self) -> QueueFamily;
}
/// Error that can happen when adding a command to a command buffer builder.
#[derive(Debug, Copy, Clone)]
pub enum CommandBufferBuilderError<E> {
/// Error while creating the command.
CommandBuildError(E),
/// Error while adding the command to the builder.
CommandAddError(CommandAddError),
}
impl<E> From<CommandAddError> for CommandBufferBuilderError<E> {
#[inline]
fn from(err: CommandAddError) -> CommandBufferBuilderError<E> {
CommandBufferBuilderError::CommandAddError(err)
}
}
impl<E> error::Error for CommandBufferBuilderError<E> where E: error::Error {
#[inline]
fn description(&self) -> &str {
match *self {
CommandBufferBuilderError::CommandBuildError(_) => {
"error while creating a command to add to a builder"
},
CommandBufferBuilderError::CommandAddError(_) => {
"error while adding a command to the builder"
},
}
}
#[inline]
fn cause(&self) -> Option<&error::Error> {
match *self {
CommandBufferBuilderError::CommandBuildError(ref err) => {
Some(err)
},
CommandBufferBuilderError::CommandAddError(ref err) => {
Some(err)
},
}
}
}
impl<E> fmt::Display for CommandBufferBuilderError<E> where E: error::Error {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}
/// Error that can happen when adding a command to a command buffer builder.
#[derive(Debug, Copy, Clone)]
pub enum CommandAddError {
/// This command is forbidden when inside a render pass.
ForbiddenInsideRenderPass,
/// This command is forbidden when outside of a render pass.
ForbiddenOutsideRenderPass,
/// This command is forbidden in a secondary command buffer.
ForbiddenInSecondaryCommandBuffer,
/// The queue family doesn't support graphics operations.
GraphicsOperationsNotSupported,
/// The queue family doesn't support compute operations.
ComputeOperationsNotSupported,
/// Trying to execute a secondary command buffer in a primary command buffer of a different
/// queue family.
QueueFamilyMismatch,
}
impl error::Error for CommandAddError {
#[inline]
fn description(&self) -> &str {
match *self {
CommandAddError::ForbiddenInsideRenderPass => {
"this command is forbidden when inside a render pass"
},
CommandAddError::ForbiddenOutsideRenderPass => {
"this command is forbidden when outside of a render pass"
},
CommandAddError::ForbiddenInSecondaryCommandBuffer => {
"this command is forbidden in a secondary command buffer"
},
CommandAddError::GraphicsOperationsNotSupported => {
"the queue family doesn't support graphics operations"
},
CommandAddError::ComputeOperationsNotSupported => {
"the queue family doesn't support compute operations"
},
CommandAddError::QueueFamilyMismatch => {
"trying to execute a secondary command buffer in a primary command buffer of a \
different queue family"
},
}
}
}
impl fmt::Display for CommandAddError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}


@ -0,0 +1,153 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::any::Any;
use std::sync::Arc;
use buffer::BufferAccess;
use command_buffer::cb::AddCommand;
use command_buffer::cb::CommandBufferBuild;
use command_buffer::cb::UnsafeCommandBuffer;
use command_buffer::commands_raw;
use command_buffer::CommandAddError;
use command_buffer::CommandBuffer;
use command_buffer::CommandBufferBuilder;
use command_buffer::CommandBufferExecError;
use device::Device;
use device::DeviceOwned;
use device::Queue;
use image::Layout;
use image::ImageAccess;
use instance::QueueFamily;
use sync::AccessCheckError;
use sync::AccessFlagBits;
use sync::GpuFuture;
use sync::PipelineStages;
/// Layer that stores commands in an abstract way.
pub struct AbstractStorageLayer<I> {
inner: I,
commands: Vec<Box<Any + Send + Sync>>,
}
impl<I> AbstractStorageLayer<I> {
/// Builds a new `AbstractStorageLayer`.
#[inline]
pub fn new(inner: I) -> AbstractStorageLayer<I> {
AbstractStorageLayer {
inner: inner,
commands: Vec::new(),
}
}
}
unsafe impl<I> CommandBuffer for AbstractStorageLayer<I> where I: CommandBuffer {
type Pool = I::Pool;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<I::Pool> {
self.inner.inner()
}
#[inline]
fn prepare_submit(&self, future: &GpuFuture, queue: &Queue) -> Result<(), CommandBufferExecError> {
self.inner.prepare_submit(future, queue)
}
#[inline]
fn check_buffer_access(&self, buffer: &BufferAccess, exclusive: bool, queue: &Queue)
-> Result<Option<(PipelineStages, AccessFlagBits)>, AccessCheckError>
{
self.inner.check_buffer_access(buffer, exclusive, queue)
}
#[inline]
fn check_image_access(&self, image: &ImageAccess, layout: Layout, exclusive: bool, queue: &Queue)
-> Result<Option<(PipelineStages, AccessFlagBits)>, AccessCheckError>
{
self.inner.check_image_access(image, layout, exclusive, queue)
}
}
unsafe impl<I> DeviceOwned for AbstractStorageLayer<I> where I: DeviceOwned {
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
unsafe impl<I, O, E> CommandBufferBuild for AbstractStorageLayer<I>
where I: CommandBufferBuild<Out = O, Err = E>
{
type Out = AbstractStorageLayer<O>;
type Err = E;
#[inline]
fn build(self) -> Result<Self::Out, E> {
let inner = try!(self.inner.build());
Ok(AbstractStorageLayer {
inner: inner,
commands: self.commands,
})
}
}
unsafe impl<I> CommandBufferBuilder for AbstractStorageLayer<I> where I: CommandBufferBuilder {
#[inline]
fn queue_family(&self) -> QueueFamily {
self.inner.queue_family()
}
}
macro_rules! pass_through {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<I $(, $param)*> AddCommand<$cmd> for AbstractStorageLayer<I>
where I: for<'r> AddCommand<&'r $cmd, Out = I>, $cmd: Send + Sync + 'static
{
type Out = AbstractStorageLayer<I>;
#[inline]
fn add(mut self, command: $cmd) -> Result<Self::Out, CommandAddError> {
let new_inner = AddCommand::add(self.inner, &command)?;
// TODO: should store a lightweight version of the command
self.commands.push(Box::new(command) as Box<_>);
Ok(AbstractStorageLayer {
inner: new_inner,
commands: self.commands,
})
}
}
}
}
pass_through!((Rp, F), commands_raw::CmdBeginRenderPass<Rp, F>);
pass_through!((S, Pl), commands_raw::CmdBindDescriptorSets<S, Pl>);
pass_through!((B), commands_raw::CmdBindIndexBuffer<B>);
pass_through!((Pl), commands_raw::CmdBindPipeline<Pl>);
pass_through!((V), commands_raw::CmdBindVertexBuffers<V>);
pass_through!((S, D), commands_raw::CmdBlitImage<S, D>);
pass_through!((), commands_raw::CmdClearAttachments);
pass_through!((S, D), commands_raw::CmdCopyBuffer<S, D>);
pass_through!((S, D), commands_raw::CmdCopyBufferToImage<S, D>);
pass_through!((S, D), commands_raw::CmdCopyImage<S, D>);
pass_through!((), commands_raw::CmdDispatchRaw);
pass_through!((), commands_raw::CmdDrawIndexedRaw);
pass_through!((B), commands_raw::CmdDrawIndirectRaw<B>);
pass_through!((), commands_raw::CmdDrawRaw);
pass_through!((), commands_raw::CmdEndRenderPass);
pass_through!((C), commands_raw::CmdExecuteCommands<C>);
pass_through!((B), commands_raw::CmdFillBuffer<B>);
pass_through!((), commands_raw::CmdNextSubpass);
pass_through!((Pc, Pl), commands_raw::CmdPushConstants<Pc, Pl>);
pass_through!((S, D), commands_raw::CmdResolveImage<S, D>);
pass_through!((), commands_raw::CmdSetEvent);
pass_through!((), commands_raw::CmdSetState);
pass_through!((B, D), commands_raw::CmdUpdateBuffer<B, D>);


@ -0,0 +1,116 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use command_buffer::cb::AddCommand;
use command_buffer::cb::CommandBufferBuild;
use command_buffer::CommandAddError;
use command_buffer::CommandBufferBuilder;
use command_buffer::commands_raw;
use device::Device;
use device::DeviceOwned;
use instance::QueueFamily;
pub struct AutoPipelineBarriersLayer<I> {
inner: I,
}
impl<I> AutoPipelineBarriersLayer<I> {
#[inline]
pub fn new(inner: I) -> AutoPipelineBarriersLayer<I> {
AutoPipelineBarriersLayer {
inner: inner,
}
}
}
/*unsafe impl<C, I, L> AddCommand<C> for AutoPipelineBarriersLayer<I, L>
where I: for<'r> AddCommand<&'r C, Out = I>
{
type Out = AutoPipelineBarriersLayer<I, (L, C)>;
#[inline]
fn add(self, command: C) -> Self::Out {
AutoPipelineBarriersLayer {
inner: AddCommand::add(self.inner, command),
}
}
}*/
unsafe impl<I, O, E> CommandBufferBuild for AutoPipelineBarriersLayer<I>
where I: CommandBufferBuild<Out = O, Err = E>
{
type Out = O;
type Err = E;
#[inline]
fn build(self) -> Result<O, E> {
self.inner.build()
}
}
unsafe impl<I> DeviceOwned for AutoPipelineBarriersLayer<I>
where I: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
unsafe impl<I> CommandBufferBuilder for AutoPipelineBarriersLayer<I>
where I: CommandBufferBuilder
{
#[inline]
fn queue_family(&self) -> QueueFamily {
self.inner.queue_family()
}
}
macro_rules! pass_through {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<I, O $(, $param)*> AddCommand<$cmd> for AutoPipelineBarriersLayer<I>
where I: for<'r> AddCommand<$cmd, Out = O>
{
type Out = AutoPipelineBarriersLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
Ok(AutoPipelineBarriersLayer {
inner: AddCommand::add(self.inner, command)?,
})
}
}
}
}
pass_through!((Rp, F), commands_raw::CmdBeginRenderPass<Rp, F>);
pass_through!((S, Pl), commands_raw::CmdBindDescriptorSets<S, Pl>);
pass_through!((B), commands_raw::CmdBindIndexBuffer<B>);
pass_through!((Pl), commands_raw::CmdBindPipeline<Pl>);
pass_through!((V), commands_raw::CmdBindVertexBuffers<V>);
pass_through!((S, D), commands_raw::CmdBlitImage<S, D>);
pass_through!((), commands_raw::CmdClearAttachments);
pass_through!((S, D), commands_raw::CmdCopyBuffer<S, D>);
pass_through!((S, D), commands_raw::CmdCopyBufferToImage<S, D>);
pass_through!((S, D), commands_raw::CmdCopyImage<S, D>);
pass_through!((), commands_raw::CmdDispatchRaw);
pass_through!((), commands_raw::CmdDrawRaw);
pass_through!((), commands_raw::CmdDrawIndexedRaw);
pass_through!((B), commands_raw::CmdDrawIndirectRaw<B>);
pass_through!((), commands_raw::CmdEndRenderPass);
pass_through!((C), commands_raw::CmdExecuteCommands<C>);
pass_through!((B), commands_raw::CmdFillBuffer<B>);
pass_through!((), commands_raw::CmdNextSubpass);
pass_through!((Pc, Pl), commands_raw::CmdPushConstants<Pc, Pl>);
pass_through!((S, D), commands_raw::CmdResolveImage<S, D>);
pass_through!((), commands_raw::CmdSetEvent);
pass_through!((), commands_raw::CmdSetState);
pass_through!((B, D), commands_raw::CmdUpdateBuffer<B, D>);


@ -0,0 +1,269 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use command_buffer::cb::AddCommand;
use command_buffer::cb::CommandBufferBuild;
use command_buffer::CommandAddError;
use command_buffer::CommandBufferBuilder;
use command_buffer::commands_raw;
use device::Device;
use device::DeviceOwned;
use instance::QueueFamily;
/// Layer around a command buffer builder that checks whether the commands can be executed in the
/// given context related to render passes.
///
/// What is checked exactly:
///
/// - When adding a command that can only be executed within a render pass or outside of a render
/// pass, checks that we are within or outside of a render pass.
/// - When leaving the render pass or going to the next subpass, makes sure that the number of
/// subpasses of the current render pass is respected.
/// - When binding a graphics pipeline or drawing, makes sure that the pipeline is valid for the
/// current render pass.
///
pub struct ContextCheckLayer<I> {
// Inner command buffer builder.
inner: I,
// True if we are currently inside a render pass.
inside_render_pass: bool,
// True if entering/leaving a render pass or going to the next subpass is allowed.
allow_render_pass_ops: bool,
}
impl<I> ContextCheckLayer<I> {
/// Builds a new `ContextCheckLayer`.
///
/// If `allow_render_pass_ops` is true, then entering/leaving a render pass or going to the
/// next subpass is allowed by the layer.
///
/// If `inside_render_pass` is true, then the builder is currently inside a render pass.
///
/// Note that this layer will only protect you if you pass correct values in this constructor.
    /// It is not unsafe to pass wrong values, but if you do so then the layer will be ineffective
    /// as a safety tool.
#[inline]
pub fn new(inner: I, inside_render_pass: bool, allow_render_pass_ops: bool)
-> ContextCheckLayer<I>
{
ContextCheckLayer {
inner: inner,
inside_render_pass: inside_render_pass,
allow_render_pass_ops: allow_render_pass_ops,
}
}
/// Destroys the layer and returns the underlying command buffer.
#[inline]
pub fn into_inner(self) -> I {
self.inner
}
}
unsafe impl<I, O, E> CommandBufferBuild for ContextCheckLayer<I>
where I: CommandBufferBuild<Out = O, Err = E>
{
type Out = O;
type Err = E;
#[inline]
fn build(self) -> Result<O, E> {
self.inner.build()
}
}
unsafe impl<I> DeviceOwned for ContextCheckLayer<I>
where I: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
unsafe impl<I> CommandBufferBuilder for ContextCheckLayer<I>
where I: CommandBufferBuilder
{
#[inline]
fn queue_family(&self) -> QueueFamily {
self.inner.queue_family()
}
}
// TODO:
// impl!((C), commands_raw::CmdExecuteCommands<C>);
// FIXME: must also check that a pipeline's render pass matches the render pass
// FIXME:
// > If the variable multisample rate feature is not supported, pipeline is a graphics pipeline,
// > the current subpass has no attachments, and this is not the first call to this function with
// > a graphics pipeline after transitioning to the current subpass, then the sample count
// > specified by this pipeline must match that set in the previous pipeline
macro_rules! impl_always {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for ContextCheckLayer<I>
where I: AddCommand<$cmd, Out = O>
{
type Out = ContextCheckLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
Ok(ContextCheckLayer {
inner: self.inner.add(command)?,
inside_render_pass: self.inside_render_pass,
allow_render_pass_ops: self.allow_render_pass_ops,
})
}
}
}
}
impl_always!((S, Pl), commands_raw::CmdBindDescriptorSets<S, Pl>);
impl_always!((B), commands_raw::CmdBindIndexBuffer<B>);
impl_always!((Pl), commands_raw::CmdBindPipeline<Pl>);
impl_always!((V), commands_raw::CmdBindVertexBuffers<V>);
impl_always!((Pc, Pl), commands_raw::CmdPushConstants<Pc, Pl>);
impl_always!((), commands_raw::CmdSetState);
macro_rules! impl_inside_only {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for ContextCheckLayer<I>
where I: AddCommand<$cmd, Out = O>
{
type Out = ContextCheckLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
if !self.inside_render_pass {
return Err(CommandAddError::ForbiddenOutsideRenderPass);
}
Ok(ContextCheckLayer {
inner: self.inner.add(command)?,
inside_render_pass: self.inside_render_pass,
allow_render_pass_ops: self.allow_render_pass_ops,
})
}
}
}
}
impl_inside_only!((), commands_raw::CmdClearAttachments);
impl_inside_only!((), commands_raw::CmdDrawIndexedRaw);
impl_inside_only!((B), commands_raw::CmdDrawIndirectRaw<B>);
impl_inside_only!((), commands_raw::CmdDrawRaw);
macro_rules! impl_outside_only {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for ContextCheckLayer<I>
where I: AddCommand<$cmd, Out = O>
{
type Out = ContextCheckLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
if self.inside_render_pass {
return Err(CommandAddError::ForbiddenInsideRenderPass);
}
Ok(ContextCheckLayer {
inner: self.inner.add(command)?,
inside_render_pass: self.inside_render_pass,
allow_render_pass_ops: self.allow_render_pass_ops,
})
}
}
}
}
impl_outside_only!((S, D), commands_raw::CmdBlitImage<S, D>);
impl_outside_only!((S, D), commands_raw::CmdCopyBuffer<S, D>);
impl_outside_only!((S, D), commands_raw::CmdCopyBufferToImage<S, D>);
impl_outside_only!((S, D), commands_raw::CmdCopyImage<S, D>);
impl_outside_only!((), commands_raw::CmdDispatchRaw);
impl_outside_only!((B), commands_raw::CmdFillBuffer<B>);
impl_outside_only!((S, D), commands_raw::CmdResolveImage<S, D>);
impl_outside_only!((), commands_raw::CmdSetEvent);
impl_outside_only!((B, D), commands_raw::CmdUpdateBuffer<B, D>);
unsafe impl<'a, I, O, Rp, F> AddCommand<commands_raw::CmdBeginRenderPass<Rp, F>> for ContextCheckLayer<I>
where I: AddCommand<commands_raw::CmdBeginRenderPass<Rp, F>, Out = O>
{
type Out = ContextCheckLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdBeginRenderPass<Rp, F>) -> Result<Self::Out, CommandAddError> {
if self.inside_render_pass {
return Err(CommandAddError::ForbiddenInsideRenderPass);
}
if !self.allow_render_pass_ops {
return Err(CommandAddError::ForbiddenInSecondaryCommandBuffer);
}
Ok(ContextCheckLayer {
inner: self.inner.add(command)?,
inside_render_pass: true,
allow_render_pass_ops: true,
})
}
}
unsafe impl<'a, I, O> AddCommand<commands_raw::CmdNextSubpass> for ContextCheckLayer<I>
where I: AddCommand<commands_raw::CmdNextSubpass, Out = O>
{
type Out = ContextCheckLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdNextSubpass) -> Result<Self::Out, CommandAddError> {
if !self.inside_render_pass {
return Err(CommandAddError::ForbiddenOutsideRenderPass);
}
if !self.allow_render_pass_ops {
return Err(CommandAddError::ForbiddenInSecondaryCommandBuffer);
}
// FIXME: check number of subpasses
Ok(ContextCheckLayer {
inner: self.inner.add(command)?,
inside_render_pass: true,
allow_render_pass_ops: true,
})
}
}
unsafe impl<'a, I, O> AddCommand<commands_raw::CmdEndRenderPass> for ContextCheckLayer<I>
where I: AddCommand<commands_raw::CmdEndRenderPass, Out = O>
{
type Out = ContextCheckLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdEndRenderPass) -> Result<Self::Out, CommandAddError> {
if !self.inside_render_pass {
return Err(CommandAddError::ForbiddenOutsideRenderPass);
}
if !self.allow_render_pass_ops {
return Err(CommandAddError::ForbiddenInSecondaryCommandBuffer);
}
// FIXME: check number of subpasses
Ok(ContextCheckLayer {
inner: self.inner.add(command)?,
inside_render_pass: false,
allow_render_pass_ops: true,
})
}
}


@ -0,0 +1,131 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use command_buffer::cb::AddCommand;
use command_buffer::cb::CommandBufferBuild;
use command_buffer::CommandAddError;
use command_buffer::CommandBufferBuilder;
use command_buffer::commands_raw;
use device::Device;
use device::DeviceOwned;
use instance::QueueFamily;
use VulkanObject;
/// Layer around a command buffer builder that checks whether the commands added to it belong to
/// the same device as the command buffer.
pub struct DeviceCheckLayer<I> {
inner: I,
}
impl<I> DeviceCheckLayer<I> {
/// Builds a new `DeviceCheckLayer`.
#[inline]
pub fn new(inner: I) -> DeviceCheckLayer<I> {
DeviceCheckLayer {
inner: inner,
}
}
/// Destroys the layer and returns the underlying command buffer.
#[inline]
pub fn into_inner(self) -> I {
self.inner
}
}
unsafe impl<I> DeviceOwned for DeviceCheckLayer<I>
where I: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
unsafe impl<I> CommandBufferBuilder for DeviceCheckLayer<I>
where I: CommandBufferBuilder
{
#[inline]
fn queue_family(&self) -> QueueFamily {
self.inner.queue_family()
}
}
unsafe impl<I, O, E> CommandBufferBuild for DeviceCheckLayer<I>
where I: CommandBufferBuild<Out = O, Err = E>
{
type Out = O;
type Err = E;
#[inline]
fn build(self) -> Result<O, E> {
self.inner.build()
}
}
macro_rules! pass_through {
(($($param:ident),*), $cmd:ty) => (
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for DeviceCheckLayer<I>
where I: AddCommand<$cmd, Out = O> + DeviceOwned, $cmd: DeviceOwned
{
type Out = DeviceCheckLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
let inner_device = self.inner.device().internal_object();
let cmd_device = command.device().internal_object();
assert_eq!(inner_device, cmd_device);
Ok(DeviceCheckLayer {
inner: self.inner.add(command)?,
})
}
}
);
(($($param:ident),*), $cmd:ty, no-device) => (
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for DeviceCheckLayer<I>
where I: AddCommand<$cmd, Out = O>
{
type Out = DeviceCheckLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
Ok(DeviceCheckLayer {
inner: self.inner.add(command)?,
})
}
}
);
}
pass_through!((Rp, F), commands_raw::CmdBeginRenderPass<Rp, F>);
pass_through!((S, Pl), commands_raw::CmdBindDescriptorSets<S, Pl>);
pass_through!((B), commands_raw::CmdBindIndexBuffer<B>);
pass_through!((Pl), commands_raw::CmdBindPipeline<Pl>);
pass_through!((V), commands_raw::CmdBindVertexBuffers<V>);
pass_through!((S, D), commands_raw::CmdBlitImage<S, D>);
pass_through!((), commands_raw::CmdClearAttachments, no-device);
pass_through!((S, D), commands_raw::CmdCopyBuffer<S, D>);
pass_through!((S, D), commands_raw::CmdCopyBufferToImage<S, D>);
pass_through!((S, D), commands_raw::CmdCopyImage<S, D>);
pass_through!((), commands_raw::CmdDispatchRaw);
pass_through!((), commands_raw::CmdDrawIndexedRaw, no-device);
pass_through!((B), commands_raw::CmdDrawIndirectRaw<B>);
pass_through!((), commands_raw::CmdDrawRaw, no-device);
pass_through!((), commands_raw::CmdEndRenderPass, no-device);
pass_through!((C), commands_raw::CmdExecuteCommands<C>);
pass_through!((B), commands_raw::CmdFillBuffer<B>);
pass_through!((), commands_raw::CmdNextSubpass, no-device);
pass_through!((Pc, Pl), commands_raw::CmdPushConstants<Pc, Pl>);
pass_through!((S, D), commands_raw::CmdResolveImage<S, D>);
pass_through!((), commands_raw::CmdSetEvent);
pass_through!((), commands_raw::CmdSetState);
pass_through!((B, D), commands_raw::CmdUpdateBuffer<B, D>);


@ -0,0 +1,100 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
//! Internals of vulkano's command buffers building.
//!
//! You probably don't need to look inside this module if you're a beginner. The
//! `AutoCommandBufferBuilder` provided in the parent module should be good for most needs.
//!
//! # Builder basics
//!
//! The lowest-level command buffer types are `UnsafeCommandBufferBuilder` and
//! `UnsafeCommandBuffer`. These two types have zero overhead over Vulkan command buffers but are
//! very unsafe to use.
//!
//! Before you can add a command to an unsafe command buffer builder, you should:
//!
//! - Make sure that the buffers or images used by the command stay alive for the duration of the
//! command buffer.
//! - Check that the device used by the buffers or images of the command is the same as the device
//! of the command buffer.
//! - If the command buffer is inside/outside a render pass, check that the command can be executed
//! inside/outside a render pass. Same for secondary command buffers.
//! - Check that the command can be executed on the queue family of the command buffer. Some queue
//!   families don't support graphics and/or compute operations.
//! - Make sure that when the command buffer is submitted the buffers and images of the command
//! will be properly synchronized.
//! - Make sure that pipeline barriers are correctly inserted in order to avoid race conditions.
//!
//! In order to allow you to customize which checks are performed, vulkano provides *layers*. They
//! are structs that can be put around a command buffer builder and that perform these checks. Keep
//! in mind that all the conditions above must be respected, but if you somehow make sure at
//! compile-time that some requirements are always correct, you can avoid paying some runtime cost
//! by not using all layers.
//!
//! Adding a command to a command buffer builder is done in two steps:
//!
//! - First you must build a struct that represents the command to add. The struct's constructor
//! can perform various checks to make sure that the command itself is valid, or it can provide
//! an unsafe constructor that doesn't perform any check.
//! - Then use the `AddCommand` trait to add it. The trait is implemented on the command buffer
//! builder and on the various layers, and its template parameter is the struct representing
//! the command.
//!
//! Since the `UnsafeCommandBufferBuilder` doesn't keep the command structs alive (as it would
//! incur an overhead), it implements `AddCommand<&T>`.
//!
//! The role of the `CommandsListLayer` and `BufferedCommandsListLayer` layers is to keep the
//! commands alive. They implement `AddCommand<T>` if the builder they wrap around implements
//! `AddCommand<&T>`. In other words they are the lowest level that you should put around an
//! `UnsafeCommandBufferBuilder`.
//!
//! The other layers of this module implement `AddCommand<T>` if the builder they wrap around
//! implements `AddCommand<T>`.
//!
//! # Building a command buffer
//!
//! Once you are satisfied with the commands you added to a builder, use the `CommandBufferBuild`
//! trait to build it.
//!
//! This trait is implemented on the `UnsafeCommandBufferBuilder` but also on all the layers.
//! The builder's layers can choose to add layers around the finished command buffer.
//!
//! # The `CommandsList` trait
//!
//! The `CommandsList` trait is implemented on any command buffer or command buffer builder that
//! exposes a list of commands. It is required by some of the layers.
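//!
//! A minimal sketch of stacking layers around the unsafe builder (illustrative only; it assumes
//! that a `pool` and a command `cmd` already exist, and in real code a layer that keeps the
//! commands alive must sit directly around the unsafe builder, as explained above):
//!
//! ```ignore
//! let builder = unsafe {
//!     UnsafeCommandBufferBuilder::new(&pool, Kind::primary(), Flags::OneTimeSubmit)?
//! };
//! // Each layer wraps the previous one and adds its own checks when `add` is called.
//! let builder = StateCacheLayer::new(QueueTyCheckLayer::new(DeviceCheckLayer::new(builder)));
//! let cb = builder.add(cmd)?.build()?;
//! ```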
pub use self::abstract_storage::AbstractStorageLayer;
pub use self::auto_barriers::AutoPipelineBarriersLayer;
pub use self::context_check::ContextCheckLayer;
pub use self::device_check::DeviceCheckLayer;
pub use self::queue_ty_check::QueueTyCheckLayer;
pub use self::state_cache::StateCacheLayer;
pub use self::submit_sync::SubmitSyncBuilderLayer;
pub use self::submit_sync::SubmitSyncBuilderLayerBehavior;
pub use self::submit_sync::SubmitSyncLayer;
pub use self::sys::Kind;
pub use self::sys::Flags;
pub use self::sys::UnsafeCommandBufferBuilder;
pub use self::sys::UnsafeCommandBuffer;
pub use self::traits::AddCommand;
// TODO: remove this line
pub use command_buffer::traits::CommandBufferBuild;
mod abstract_storage;
mod auto_barriers;
mod device_check;
mod context_check;
mod queue_ty_check;
mod state_cache;
mod submit_sync;
mod sys;
mod traits;

View File

@ -0,0 +1,245 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use command_buffer::cb::AddCommand;
use command_buffer::cb::CommandBufferBuild;
use command_buffer::CommandAddError;
use command_buffer::CommandBufferBuilder;
use command_buffer::CommandBuffer;
use command_buffer::commands_raw;
use device::Device;
use device::DeviceOwned;
use instance::QueueFamily;
use VulkanObject;
/// Layer around a command buffer builder that checks whether the commands added to it match the
/// type of the queue family of the underlying builder.
///
/// Commands that perform graphical or compute operations can only be executed on queue families
/// that support graphical or compute operations. This is what this layer verifies.
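///
/// A sketch of the behavior (hypothetical `builder` whose queue family only supports compute
/// operations, and a hypothetical draw command `draw_cmd`):
///
/// ```ignore
/// let layer = QueueTyCheckLayer::new(builder);
/// // Adding a graphics command on a compute-only queue family is refused.
/// match layer.add(draw_cmd) {
///     Err(CommandAddError::GraphicsOperationsNotSupported) => { /* expected */ },
///     _ => unreachable!(),
/// }
/// ```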
pub struct QueueTyCheckLayer<I> {
inner: I,
}
impl<I> QueueTyCheckLayer<I> {
/// Builds a new `QueueTyCheckLayer`.
#[inline]
pub fn new(inner: I) -> QueueTyCheckLayer<I> {
QueueTyCheckLayer {
inner: inner,
}
}
/// Destroys the layer and returns the underlying command buffer.
#[inline]
pub fn into_inner(self) -> I {
self.inner
}
}
unsafe impl<I> DeviceOwned for QueueTyCheckLayer<I>
where I: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
unsafe impl<I> CommandBufferBuilder for QueueTyCheckLayer<I>
where I: CommandBufferBuilder
{
#[inline]
fn queue_family(&self) -> QueueFamily {
self.inner.queue_family()
}
}
unsafe impl<I, O, E> CommandBufferBuild for QueueTyCheckLayer<I>
where I: CommandBufferBuild<Out = O, Err = E>
{
type Out = O;
type Err = E;
#[inline]
fn build(self) -> Result<O, E> {
self.inner.build()
}
}
macro_rules! q_ty_impl_always {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for QueueTyCheckLayer<I>
where I: CommandBufferBuilder + AddCommand<$cmd, Out = O>
{
type Out = QueueTyCheckLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
Ok(QueueTyCheckLayer {
inner: self.inner.add(command)?,
})
}
}
}
}
q_ty_impl_always!((S, D), commands_raw::CmdCopyBuffer<S, D>);
q_ty_impl_always!((S, D), commands_raw::CmdCopyBufferToImage<S, D>);
q_ty_impl_always!((S, D), commands_raw::CmdCopyImage<S, D>);
q_ty_impl_always!((B), commands_raw::CmdFillBuffer<B>);
q_ty_impl_always!((B, D), commands_raw::CmdUpdateBuffer<B, D>);
macro_rules! q_ty_impl_graphics {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for QueueTyCheckLayer<I>
where I: CommandBufferBuilder + AddCommand<$cmd, Out = O>
{
type Out = QueueTyCheckLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
if !self.supports_graphics() {
return Err(CommandAddError::GraphicsOperationsNotSupported);
}
Ok(QueueTyCheckLayer {
inner: self.inner.add(command)?,
})
}
}
}
}
q_ty_impl_graphics!((Rp, F), commands_raw::CmdBeginRenderPass<Rp, F>);
q_ty_impl_graphics!((B), commands_raw::CmdBindIndexBuffer<B>);
q_ty_impl_graphics!((V), commands_raw::CmdBindVertexBuffers<V>);
q_ty_impl_graphics!((S, D), commands_raw::CmdBlitImage<S, D>);
q_ty_impl_graphics!((), commands_raw::CmdClearAttachments);
q_ty_impl_graphics!((), commands_raw::CmdDrawIndexedRaw);
q_ty_impl_graphics!((B), commands_raw::CmdDrawIndirectRaw<B>);
q_ty_impl_graphics!((), commands_raw::CmdDrawRaw);
q_ty_impl_graphics!((), commands_raw::CmdEndRenderPass);
q_ty_impl_graphics!((), commands_raw::CmdNextSubpass);
q_ty_impl_graphics!((S, D), commands_raw::CmdResolveImage<S, D>);
macro_rules! q_ty_impl_compute {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for QueueTyCheckLayer<I>
where I: CommandBufferBuilder + AddCommand<$cmd, Out = O>
{
type Out = QueueTyCheckLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
if !self.supports_compute() {
return Err(CommandAddError::ComputeOperationsNotSupported);
}
Ok(QueueTyCheckLayer {
inner: self.inner.add(command)?,
})
}
}
}
}
q_ty_impl_compute!((), commands_raw::CmdDispatchRaw);
macro_rules! q_ty_impl_graphics_or_compute {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for QueueTyCheckLayer<I>
where I: CommandBufferBuilder + AddCommand<$cmd, Out = O>
{
type Out = QueueTyCheckLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
assert!(self.supports_graphics() || self.supports_compute()); // TODO: proper error?
Ok(QueueTyCheckLayer {
inner: self.inner.add(command)?,
})
}
}
}
}
q_ty_impl_graphics_or_compute!((Pc, Pl), commands_raw::CmdPushConstants<Pc, Pl>);
q_ty_impl_graphics_or_compute!((), commands_raw::CmdSetEvent);
q_ty_impl_graphics_or_compute!((), commands_raw::CmdSetState);
unsafe impl<I, O, Pl> AddCommand<commands_raw::CmdBindPipeline<Pl>> for QueueTyCheckLayer<I>
where I: CommandBufferBuilder + AddCommand<commands_raw::CmdBindPipeline<Pl>, Out = O>
{
type Out = QueueTyCheckLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdBindPipeline<Pl>) -> Result<Self::Out, CommandAddError> {
if command.is_graphics() {
if !self.supports_graphics() {
return Err(CommandAddError::GraphicsOperationsNotSupported);
}
} else {
if !self.supports_compute() {
return Err(CommandAddError::ComputeOperationsNotSupported);
}
}
Ok(QueueTyCheckLayer {
inner: self.inner.add(command)?,
})
}
}
unsafe impl<I, O, S, Pl> AddCommand<commands_raw::CmdBindDescriptorSets<S, Pl>> for QueueTyCheckLayer<I>
where I: CommandBufferBuilder + AddCommand<commands_raw::CmdBindDescriptorSets<S, Pl>, Out = O>
{
type Out = QueueTyCheckLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdBindDescriptorSets<S, Pl>) -> Result<Self::Out, CommandAddError> {
if command.is_graphics() {
if !self.supports_graphics() {
return Err(CommandAddError::GraphicsOperationsNotSupported);
}
} else {
if !self.supports_compute() {
return Err(CommandAddError::ComputeOperationsNotSupported);
}
}
Ok(QueueTyCheckLayer {
inner: self.inner.add(command)?,
})
}
}
unsafe impl<I, O, C> AddCommand<commands_raw::CmdExecuteCommands<C>> for QueueTyCheckLayer<I>
where I: CommandBufferBuilder + AddCommand<commands_raw::CmdExecuteCommands<C>, Out = O>,
C: CommandBuffer
{
type Out = QueueTyCheckLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdExecuteCommands<C>) -> Result<Self::Out, CommandAddError> {
// Note that safety rules guarantee that the secondary command buffer belongs to the same
// device as ourselves. Therefore this assert is only a debug assert.
debug_assert_eq!(command.command_buffer().queue_family().physical_device().internal_object(),
self.queue_family().physical_device().internal_object());
if command.command_buffer().queue_family().id() != self.queue_family().id() {
return Err(CommandAddError::QueueFamilyMismatch);
}
Ok(QueueTyCheckLayer {
inner: self.inner.add(command)?,
})
}
}

View File

@ -0,0 +1,266 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use command_buffer::cb::AddCommand;
use command_buffer::cb::CommandBufferBuild;
use command_buffer::CommandAddError;
use command_buffer::CommandBufferBuilder;
use command_buffer::commands_raw;
use command_buffer::DynamicState;
use device::Device;
use device::DeviceOwned;
use instance::QueueFamily;
use VulkanObject;
use vk;
/// Layer around a command buffer builder that caches the current state of the command buffer and
/// avoids redundant state changes.
///
/// For example if you add a command that sets the current vertex buffer, then later another
/// command that sets the current vertex buffer to the same value, then the second one will be
/// discarded by this layer.
///
/// As a general rule there's no reason not to use this layer unless you know that your commands
/// are already optimized in this regard.
///
/// # Safety
///
/// This layer expects that the commands passed to it all belong to the same device.
///
/// Since this layer can potentially optimize out some commands, a mismatch between devices could
/// go undetected even if the check is performed in a lower layer, because the lower layer would
/// never see the optimized-out command.
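///
/// A sketch of the effect (hypothetical `builder` and compute `pipeline`):
///
/// ```ignore
/// let layer = StateCacheLayer::new(builder);
/// let layer = layer.add(CmdBindPipeline::bind_compute_pipeline(pipeline.clone()))?;
/// // Binding the same pipeline again is turned into a disabled (no-op) command by the cache.
/// let layer = layer.add(CmdBindPipeline::bind_compute_pipeline(pipeline))?;
/// ```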
pub struct StateCacheLayer<I> {
// The inner builder that will actually execute the stuff.
inner: I,
// The dynamic state to synchronize with `CmdSetState`.
dynamic_state: DynamicState,
// The compute pipeline currently bound. 0 if nothing bound.
compute_pipeline: vk::Pipeline,
// The graphics pipeline currently bound. 0 if nothing bound.
graphics_pipeline: vk::Pipeline,
// The latest bind vertex buffers command.
vertex_buffers: Option<commands_raw::CmdBindVertexBuffersHash>,
}
impl<I> StateCacheLayer<I> {
/// Builds a new `StateCacheLayer`.
///
/// It is safe to start caching at any point of the construction of a command buffer.
#[inline]
pub fn new(inner: I) -> StateCacheLayer<I> {
StateCacheLayer {
inner: inner,
dynamic_state: DynamicState::none(),
compute_pipeline: 0,
graphics_pipeline: 0,
vertex_buffers: None,
}
}
/// Destroys the layer and returns the underlying command buffer.
#[inline]
pub fn into_inner(self) -> I {
self.inner
}
}
unsafe impl<I> DeviceOwned for StateCacheLayer<I>
where I: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
unsafe impl<I> CommandBufferBuilder for StateCacheLayer<I>
where I: CommandBufferBuilder
{
#[inline]
fn queue_family(&self) -> QueueFamily {
self.inner.queue_family()
}
}
unsafe impl<Pl, I, O> AddCommand<commands_raw::CmdBindPipeline<Pl>> for StateCacheLayer<I>
where I: AddCommand<commands_raw::CmdBindPipeline<Pl>, Out = O>
{
type Out = StateCacheLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdBindPipeline<Pl>) -> Result<Self::Out, CommandAddError> {
let raw_pipeline = command.sys().internal_object();
let new_command = {
if command.is_graphics() {
if raw_pipeline == self.graphics_pipeline {
command.disabled()
} else {
self.graphics_pipeline = raw_pipeline;
command
}
} else {
if raw_pipeline == self.compute_pipeline {
command.disabled()
} else {
self.compute_pipeline = raw_pipeline;
command
}
}
};
Ok(StateCacheLayer {
inner: self.inner.add(new_command)?,
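// Conservatively forget the cached dynamic state: binding a pipeline can disturb
// dynamic state that the new pipeline declares as static.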
dynamic_state: DynamicState::none(),
graphics_pipeline: self.graphics_pipeline,
compute_pipeline: self.compute_pipeline,
vertex_buffers: self.vertex_buffers,
})
}
}
unsafe impl<Cb, I, O> AddCommand<commands_raw::CmdExecuteCommands<Cb>> for StateCacheLayer<I>
where I: AddCommand<commands_raw::CmdExecuteCommands<Cb>, Out = O>
{
type Out = StateCacheLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdExecuteCommands<Cb>) -> Result<Self::Out, CommandAddError> {
// After a secondary command buffer is added, all states are reset to the "unknown" state.
let new_inner = self.inner.add(command)?;
Ok(StateCacheLayer {
inner: new_inner,
dynamic_state: DynamicState::none(),
compute_pipeline: 0,
graphics_pipeline: 0,
vertex_buffers: None,
})
}
}
unsafe impl<I, O> AddCommand<commands_raw::CmdSetState> for StateCacheLayer<I>
where I: AddCommand<commands_raw::CmdSetState, Out = O>
{
type Out = StateCacheLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdSetState) -> Result<Self::Out, CommandAddError> {
// We need to synchronize `self.dynamic_state` with the state in `command`.
// While doing so, we tweak `command` to erase the states that are the same as what's
// already in `self.dynamic_state`.
let mut command_state = command.state().clone();
// Handle line width.
if let Some(new_val) = command_state.line_width {
if self.dynamic_state.line_width == Some(new_val) {
command_state.line_width = None;
} else {
self.dynamic_state.line_width = Some(new_val);
}
}
// TODO: missing implementations
Ok(StateCacheLayer {
inner: self.inner.add(commands_raw::CmdSetState::new(command.device().clone(), command_state))?,
dynamic_state: self.dynamic_state,
graphics_pipeline: self.graphics_pipeline,
compute_pipeline: self.compute_pipeline,
vertex_buffers: self.vertex_buffers,
})
}
}
unsafe impl<I, O, B> AddCommand<commands_raw::CmdBindVertexBuffers<B>> for StateCacheLayer<I>
where I: AddCommand<commands_raw::CmdBindVertexBuffers<B>, Out = O>
{
type Out = StateCacheLayer<O>;
#[inline]
fn add(mut self, mut command: commands_raw::CmdBindVertexBuffers<B>)
-> Result<Self::Out, CommandAddError>
{
match &mut self.vertex_buffers {
&mut Some(ref mut curr) => {
if *curr != *command.hash() {
let new_hash = command.hash().clone();
command.diff(curr);
*curr = new_hash;
}
},
curr @ &mut None => {
*curr = Some(command.hash().clone());
}
};
Ok(StateCacheLayer {
inner: self.inner.add(command)?,
dynamic_state: self.dynamic_state,
graphics_pipeline: self.graphics_pipeline,
compute_pipeline: self.compute_pipeline,
vertex_buffers: self.vertex_buffers,
})
}
}
unsafe impl<I, O, E> CommandBufferBuild for StateCacheLayer<I>
where I: CommandBufferBuild<Out = O, Err = E>
{
type Out = O;
type Err = E;
#[inline]
fn build(self) -> Result<O, E> {
self.inner.build()
}
}
macro_rules! pass_through {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for StateCacheLayer<I>
where I: AddCommand<$cmd, Out = O>
{
type Out = StateCacheLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
Ok(StateCacheLayer {
inner: self.inner.add(command)?,
dynamic_state: self.dynamic_state,
graphics_pipeline: self.graphics_pipeline,
compute_pipeline: self.compute_pipeline,
vertex_buffers: self.vertex_buffers,
})
}
}
}
}
pass_through!((Rp, F), commands_raw::CmdBeginRenderPass<Rp, F>);
pass_through!((S, Pl), commands_raw::CmdBindDescriptorSets<S, Pl>);
pass_through!((B), commands_raw::CmdBindIndexBuffer<B>);
pass_through!((S, D), commands_raw::CmdBlitImage<S, D>);
pass_through!((), commands_raw::CmdClearAttachments);
pass_through!((S, D), commands_raw::CmdCopyBuffer<S, D>);
pass_through!((S, D), commands_raw::CmdCopyBufferToImage<S, D>);
pass_through!((S, D), commands_raw::CmdCopyImage<S, D>);
pass_through!((), commands_raw::CmdDispatchRaw);
pass_through!((), commands_raw::CmdDrawIndexedRaw);
pass_through!((B), commands_raw::CmdDrawIndirectRaw<B>);
pass_through!((), commands_raw::CmdDrawRaw);
pass_through!((), commands_raw::CmdEndRenderPass);
pass_through!((B), commands_raw::CmdFillBuffer<B>);
pass_through!((), commands_raw::CmdNextSubpass);
pass_through!((Pc, Pl), commands_raw::CmdPushConstants<Pc, Pl>);
pass_through!((S, D), commands_raw::CmdResolveImage<S, D>);
pass_through!((), commands_raw::CmdSetEvent);
pass_through!((B, D), commands_raw::CmdUpdateBuffer<B, D>);

View File

@ -0,0 +1,870 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::collections::hash_map::Entry;
use std::hash::{Hash, Hasher};
use std::sync::Arc;
use fnv::FnvHashMap;
use buffer::BufferAccess;
use command_buffer::cb::AddCommand;
use command_buffer::cb::CommandBufferBuild;
use command_buffer::cb::UnsafeCommandBuffer;
use command_buffer::CommandAddError;
use command_buffer::CommandBuffer;
use command_buffer::CommandBufferBuilder;
use command_buffer::CommandBufferExecError;
use command_buffer::commands_raw;
use framebuffer::FramebufferAbstract;
use image::Layout;
use image::ImageAccess;
use instance::QueueFamily;
use device::Device;
use device::DeviceOwned;
use device::Queue;
use sync::AccessCheckError;
use sync::AccessError;
use sync::AccessFlagBits;
use sync::PipelineStages;
use sync::GpuFuture;
/// Layer that ensures that synchronization of buffers and images between command buffers is
/// properly handled.
///
/// The following are handled:
///
/// - Return an error when submitting if the user didn't provide the guarantees for proper
/// synchronization.
///
/// - Automatically generate pipeline barriers between command buffers if necessary to handle
/// the transition between command buffers.
/// TODO: ^ this is not the case yet
///
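/// A sketch of wrapping a builder (hypothetical `inner` builder):
///
/// ```ignore
/// let layer = SubmitSyncBuilderLayer::new(inner, SubmitSyncBuilderLayerBehavior::UseLayoutHint);
/// // Commands added through `layer` record the buffers and images they touch, so that
/// // `prepare_submit` can later verify the caller's synchronization guarantees.
/// ```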
pub struct SubmitSyncBuilderLayer<I> {
inner: I,
resources: FnvHashMap<Key, ResourceEntry>,
behavior: SubmitSyncBuilderLayerBehavior,
}
/// How the layer behaves when it comes to image layouts.
#[derive(Debug, Copy, Clone)]
pub enum SubmitSyncBuilderLayerBehavior {
/// When an image is added for the first time to the builder, the layer will suppose that the
/// image is already in the layout that is required by this operation. When submitting the
/// command buffer, the layer will then check whether it is truly the case.
///
/// For example if you create a command buffer with an image copy operation that uses the
/// TRANSFER_DEST layout, then at submission time the layer will make sure that the image is
/// in the TRANSFER_DEST layout.
Explicit,
/// The layer will call the `ImageAccess::initial_layout_requirement()` and
/// `ImageAccess::final_layout_requirement()` methods, and assume that images respectively
/// enter and leave the builder in these two layouts.
///
/// This supposes that an inner layer (that the submit sync layer is not aware of)
/// automatically performs the required transition if necessary.
///
/// For example if you create a command buffer with an image copy operation that uses the
/// TRANSFER_DEST layout, then the submit sync layer will suppose that an inner layer
/// automatically performs a transition from the layout returned by
/// `initial_layout_requirement()` to the TRANSFER_DEST layout. At submission time the layer
/// will make sure that the image is in the layout returned by `initial_layout_requirement()`.
///
/// There is only one exception: if the layout of the first usage of the image is `Undefined`
/// or `Preinitialized`, then the layer will not use the hint. This can only happen when
/// entering a render pass, as it is the only command for which these layouts are legal (except
/// for pipeline barriers which are not supported by this layer).
///
/// > **Note**: The exception above is not an optimization. If the initial layout hint of an
/// > image is a layout other than `Preinitialized`, and this image is used for the first time
/// > as `Preinitialized`, then we have a problem. But since it is forbidden to perform a
/// > transition *to* the `Preinitialized` layout (and it wouldn't make any sense to do so),
/// > there is no way to resolve this conflict in an inner layer. That's why we must
/// > assume that the image is in the `Preinitialized` layout in the first place. When it
/// > comes to `Undefined`, however, this is purely an optimization as it is possible to
/// > "transition" to `Undefined` by not doing anything.
UseLayoutHint,
}
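// Key of the `resources` hash map below. Equality is deliberately defined as "conflicts
// with", so that two accesses to overlapping resources land in the same entry. This relies
// on `conflict_key_all()` returning the same value for conflicting resources; otherwise the
// `Hash`/`Eq` contract of the map would be broken.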
enum Key {
Buffer(Box<BufferAccess + Send + Sync>),
Image(Box<ImageAccess + Send + Sync>),
FramebufferAttachment(Box<FramebufferAbstract + Send + Sync>, u32),
}
impl Key {
#[inline]
fn conflicts_buffer_all(&self, buf: &BufferAccess) -> bool {
match self {
&Key::Buffer(ref a) => a.conflicts_buffer_all(buf),
&Key::Image(ref a) => a.conflicts_buffer_all(buf),
&Key::FramebufferAttachment(ref b, idx) => {
let img = b.attachments()[idx as usize].parent();
img.conflicts_buffer_all(buf)
},
}
}
#[inline]
fn conflicts_image_all(&self, img: &ImageAccess) -> bool {
match self {
&Key::Buffer(ref a) => a.conflicts_image_all(img),
&Key::Image(ref a) => a.conflicts_image_all(img),
&Key::FramebufferAttachment(ref b, idx) => {
let b = b.attachments()[idx as usize].parent();
b.conflicts_image_all(img)
},
}
}
}
impl PartialEq for Key {
#[inline]
fn eq(&self, other: &Key) -> bool {
match other {
&Key::Buffer(ref b) => self.conflicts_buffer_all(b),
&Key::Image(ref b) => self.conflicts_image_all(b),
&Key::FramebufferAttachment(ref b, idx) => {
self.conflicts_image_all(b.attachments()[idx as usize].parent())
},
}
}
}
impl Eq for Key {
}
impl Hash for Key {
#[inline]
fn hash<H: Hasher>(&self, state: &mut H) {
match self {
&Key::Buffer(ref buf) => buf.conflict_key_all().hash(state),
&Key::Image(ref img) => img.conflict_key_all().hash(state),
&Key::FramebufferAttachment(ref fb, idx) => {
let img = fb.attachments()[idx as usize].parent();
img.conflict_key_all().hash(state)
},
}
}
}
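// What the layer knows about the usage of one resource inside the command buffer: the union
// of the stages and access types of its uses, whether any use was exclusive (a write), and
// the layouts the resource is expected to be in when the command buffer starts and finishes.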
struct ResourceEntry {
final_stages: PipelineStages,
final_access: AccessFlagBits,
exclusive: bool,
initial_layout: Layout,
final_layout: Layout,
}
impl<I> SubmitSyncBuilderLayer<I> {
/// Builds a new layer that wraps around an existing builder.
#[inline]
pub fn new(inner: I, behavior: SubmitSyncBuilderLayerBehavior) -> SubmitSyncBuilderLayer<I> {
SubmitSyncBuilderLayer {
inner: inner,
resources: FnvHashMap::default(),
behavior: behavior,
}
}
// Adds a buffer to the list.
fn add_buffer<B>(&mut self, buffer: &B, exclusive: bool, stages: PipelineStages,
access: AccessFlagBits)
where B: BufferAccess + Send + Sync + Clone + 'static
{
// TODO: don't create the key every time ; https://github.com/rust-lang/rfcs/pull/1769
let key = Key::Buffer(Box::new(buffer.clone()));
match self.resources.entry(key) {
Entry::Vacant(entry) => {
entry.insert(ResourceEntry {
final_stages: stages,
final_access: access,
exclusive: exclusive,
initial_layout: Layout::Undefined,
final_layout: Layout::Undefined,
});
},
Entry::Occupied(mut entry) => {
let entry = entry.get_mut();
// TODO: remove some stages and accesses when there's an "overflow"?
entry.final_stages = entry.final_stages | stages;
entry.final_access = entry.final_access | access;
entry.exclusive = entry.exclusive || exclusive;
entry.final_layout = Layout::Undefined;
},
}
}
// Adds an image to the list.
fn add_image<T>(&mut self, image: &T, exclusive: bool, stages: PipelineStages,
access: AccessFlagBits)
where T: ImageAccess + Send + Sync + Clone + 'static
{
let key = Key::Image(Box::new(image.clone()));
let initial_layout = match self.behavior {
SubmitSyncBuilderLayerBehavior::Explicit => unimplemented!(), // FIXME:
SubmitSyncBuilderLayerBehavior::UseLayoutHint => image.initial_layout_requirement(),
};
let final_layout = match self.behavior {
SubmitSyncBuilderLayerBehavior::Explicit => unimplemented!(), // FIXME:
SubmitSyncBuilderLayerBehavior::UseLayoutHint => image.final_layout_requirement(),
};
match self.resources.entry(key) {
Entry::Vacant(entry) => {
entry.insert(ResourceEntry {
final_stages: stages,
final_access: access,
exclusive: exclusive,
initial_layout: initial_layout,
final_layout: final_layout,
});
},
Entry::Occupied(mut entry) => {
let entry = entry.get_mut();
// TODO: exclusive access if transition required?
entry.exclusive = entry.exclusive || exclusive;
// TODO: remove some stages and accesses when there's an "overflow"?
entry.final_stages = entry.final_stages | stages;
entry.final_access = entry.final_access | access;
entry.final_layout = final_layout;
},
}
}
// Adds a framebuffer to the list.
fn add_framebuffer<F>(&mut self, framebuffer: &F)
where F: FramebufferAbstract + Send + Sync + Clone + 'static
{
// TODO: slow
for index in 0 .. FramebufferAbstract::attachments(framebuffer).len() {
let key = Key::FramebufferAttachment(Box::new(framebuffer.clone()), index as u32);
let desc = framebuffer.attachment(index).expect("Wrong implementation of FramebufferAbstract trait");
let image = FramebufferAbstract::attachments(framebuffer)[index];
let initial_layout = match self.behavior {
SubmitSyncBuilderLayerBehavior::Explicit => desc.initial_layout,
SubmitSyncBuilderLayerBehavior::UseLayoutHint => {
match desc.initial_layout {
Layout::Undefined | Layout::Preinitialized => desc.initial_layout,
_ => image.parent().initial_layout_requirement(),
}
},
};
let final_layout = match self.behavior {
SubmitSyncBuilderLayerBehavior::Explicit => desc.final_layout,
SubmitSyncBuilderLayerBehavior::UseLayoutHint => {
match desc.final_layout {
Layout::Undefined | Layout::Preinitialized => desc.final_layout,
_ => image.parent().final_layout_requirement(),
}
},
};
match self.resources.entry(key) {
Entry::Vacant(entry) => {
entry.insert(ResourceEntry {
final_stages: PipelineStages { all_commands: true, ..PipelineStages::none() }, // FIXME:
final_access: AccessFlagBits::all(), // FIXME:
exclusive: true, // FIXME:
initial_layout: initial_layout,
final_layout: final_layout,
});
},
Entry::Occupied(mut entry) => {
let entry = entry.get_mut();
// TODO: update stages and access
entry.exclusive = true; // FIXME:
entry.final_layout = final_layout;
},
}
}
}
}
unsafe impl<I, O, E> CommandBufferBuild for SubmitSyncBuilderLayer<I>
where I: CommandBufferBuild<Out = O, Err = E>
{
type Out = SubmitSyncLayer<O>;
type Err = E;
#[inline]
fn build(self) -> Result<Self::Out, E> {
Ok(SubmitSyncLayer {
inner: try!(self.inner.build()),
resources: self.resources,
})
}
}
unsafe impl<I> DeviceOwned for SubmitSyncBuilderLayer<I>
where I: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}
unsafe impl<I> CommandBufferBuilder for SubmitSyncBuilderLayer<I>
where I: CommandBufferBuilder
{
#[inline]
fn queue_family(&self) -> QueueFamily {
self.inner.queue_family()
}
}
// FIXME: implement manually
macro_rules! pass_through {
(($($param:ident),*), $cmd:ty) => {
unsafe impl<'a, I, O $(, $param)*> AddCommand<$cmd> for SubmitSyncBuilderLayer<I>
where I: AddCommand<$cmd, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: $cmd) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
}
}
// FIXME: implement manually
pass_through!((S, Pl), commands_raw::CmdBindDescriptorSets<S, Pl>);
pass_through!((V), commands_raw::CmdBindVertexBuffers<V>);
pass_through!((C), commands_raw::CmdExecuteCommands<C>);
unsafe impl<I, O, Rp, F> AddCommand<commands_raw::CmdBeginRenderPass<Rp, F>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdBeginRenderPass<Rp, F>, Out = O>,
F: FramebufferAbstract + Send + Sync + Clone + 'static
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdBeginRenderPass<Rp, F>) -> Result<Self::Out, CommandAddError> {
self.add_framebuffer(command.framebuffer());
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, B> AddCommand<commands_raw::CmdBindIndexBuffer<B>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdBindIndexBuffer<B>, Out = O>,
B: BufferAccess + Send + Sync + Clone + 'static
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdBindIndexBuffer<B>) -> Result<Self::Out, CommandAddError> {
self.add_buffer(command.buffer(), false,
PipelineStages { vertex_input: true, .. PipelineStages::none() },
AccessFlagBits { index_read: true, .. AccessFlagBits::none() });
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, P> AddCommand<commands_raw::CmdBindPipeline<P>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdBindPipeline<P>, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdBindPipeline<P>) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, S, D> AddCommand<commands_raw::CmdBlitImage<S, D>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdBlitImage<S, D>, Out = O>,
S: ImageAccess + Send + Sync + Clone + 'static,
D: ImageAccess + Send + Sync + Clone + 'static
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdBlitImage<S, D>) -> Result<Self::Out, CommandAddError> {
self.add_image(command.source(), false,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_read: true, .. AccessFlagBits::none() });
self.add_image(command.destination(), true,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() });
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O> AddCommand<commands_raw::CmdClearAttachments> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdClearAttachments, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdClearAttachments) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, S, D> AddCommand<commands_raw::CmdCopyBuffer<S, D>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdCopyBuffer<S, D>, Out = O>,
S: BufferAccess + Send + Sync + Clone + 'static,
D: BufferAccess + Send + Sync + Clone + 'static
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdCopyBuffer<S, D>) -> Result<Self::Out, CommandAddError> {
self.add_buffer(command.source(), false,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_read: true, .. AccessFlagBits::none() });
self.add_buffer(command.destination(), true,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() });
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, S, D> AddCommand<commands_raw::CmdCopyBufferToImage<S, D>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdCopyBufferToImage<S, D>, Out = O>,
S: BufferAccess + Send + Sync + Clone + 'static,
D: ImageAccess + Send + Sync + Clone + 'static
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdCopyBufferToImage<S, D>) -> Result<Self::Out, CommandAddError> {
self.add_buffer(command.source(), false,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_read: true, .. AccessFlagBits::none() });
self.add_image(command.destination(), true,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() });
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, S, D> AddCommand<commands_raw::CmdCopyImage<S, D>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdCopyImage<S, D>, Out = O>,
S: ImageAccess + Send + Sync + Clone + 'static,
D: ImageAccess + Send + Sync + Clone + 'static
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdCopyImage<S, D>) -> Result<Self::Out, CommandAddError> {
self.add_image(command.source(), false,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_read: true, .. AccessFlagBits::none() });
self.add_image(command.destination(), true,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() });
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O> AddCommand<commands_raw::CmdDispatchRaw> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdDispatchRaw, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdDispatchRaw) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O> AddCommand<commands_raw::CmdDrawRaw> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdDrawRaw, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdDrawRaw) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O> AddCommand<commands_raw::CmdDrawIndexedRaw> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdDrawIndexedRaw, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdDrawIndexedRaw) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, B> AddCommand<commands_raw::CmdDrawIndirectRaw<B>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdDrawIndirectRaw<B>, Out = O>,
B: BufferAccess + Send + Sync + Clone + 'static
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdDrawIndirectRaw<B>) -> Result<Self::Out, CommandAddError> {
self.add_buffer(command.buffer(), true,
PipelineStages { draw_indirect: true, .. PipelineStages::none() },
AccessFlagBits { indirect_command_read: true, .. AccessFlagBits::none() });
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O> AddCommand<commands_raw::CmdEndRenderPass> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdEndRenderPass, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdEndRenderPass) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, B> AddCommand<commands_raw::CmdFillBuffer<B>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdFillBuffer<B>, Out = O>,
B: BufferAccess + Send + Sync + Clone + 'static
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdFillBuffer<B>) -> Result<Self::Out, CommandAddError> {
self.add_buffer(command.buffer(), true,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() });
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O> AddCommand<commands_raw::CmdNextSubpass> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdNextSubpass, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdNextSubpass) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, Pc, Pl> AddCommand<commands_raw::CmdPushConstants<Pc, Pl>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdPushConstants<Pc, Pl>, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdPushConstants<Pc, Pl>) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, S, D> AddCommand<commands_raw::CmdResolveImage<S, D>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdResolveImage<S, D>, Out = O>,
S: ImageAccess + Send + Sync + Clone + 'static,
D: ImageAccess + Send + Sync + Clone + 'static
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdResolveImage<S, D>) -> Result<Self::Out, CommandAddError> {
self.add_image(command.source(), false,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_read: true, .. AccessFlagBits::none() });
self.add_image(command.destination(), true,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() });
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O> AddCommand<commands_raw::CmdSetEvent> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdSetEvent, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdSetEvent) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O> AddCommand<commands_raw::CmdSetState> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdSetState, Out = O>
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(self, command: commands_raw::CmdSetState) -> Result<Self::Out, CommandAddError> {
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
unsafe impl<I, O, B, D> AddCommand<commands_raw::CmdUpdateBuffer<B, D>> for SubmitSyncBuilderLayer<I>
where I: AddCommand<commands_raw::CmdUpdateBuffer<B, D>, Out = O>,
B: BufferAccess + Send + Sync + Clone + 'static
{
type Out = SubmitSyncBuilderLayer<O>;
#[inline]
fn add(mut self, command: commands_raw::CmdUpdateBuffer<B, D>) -> Result<Self::Out, CommandAddError> {
self.add_buffer(command.buffer(), true,
PipelineStages { transfer: true, .. PipelineStages::none() },
AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() });
Ok(SubmitSyncBuilderLayer {
inner: AddCommand::add(self.inner, command)?,
resources: self.resources,
behavior: self.behavior,
})
}
}
/// Layer around a command buffer that handles synchronization between command buffers.
pub struct SubmitSyncLayer<I> {
inner: I,
resources: FnvHashMap<Key, ResourceEntry>,
}
unsafe impl<I> CommandBuffer for SubmitSyncLayer<I> where I: CommandBuffer {
type Pool = I::Pool;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<I::Pool> {
self.inner.inner()
}
fn prepare_submit(&self, future: &GpuFuture, queue: &Queue) -> Result<(), CommandBufferExecError> {
// TODO: if at any point we return an error, we can't recover
for (key, entry) in self.resources.iter() {
match key {
&Key::Buffer(ref buf) => {
let err = match future.check_buffer_access(&buf, entry.exclusive, queue) {
Ok(_) => {
unsafe { buf.increase_gpu_lock(); }
continue;
},
Err(err) => err
};
if !buf.try_gpu_lock(entry.exclusive, queue) {
match err {
AccessCheckError::Unknown => panic!(), // TODO: use the err returned by try_gpu_lock
AccessCheckError::Denied(err) => return Err(err.into()),
}
}
},
&Key::Image(ref img) => {
let err = match future.check_image_access(img, entry.initial_layout,
entry.exclusive, queue)
{
Ok(_) => {
unsafe { img.increase_gpu_lock(); }
continue;
},
Err(err) => err
};
if !img.try_gpu_lock(entry.exclusive, queue) {
match err {
AccessCheckError::Unknown => panic!(), // TODO: use the err returned by try_gpu_lock
AccessCheckError::Denied(err) => return Err(err.into()),
}
}
},
&Key::FramebufferAttachment(ref fb, idx) => {
let img = fb.attachments()[idx as usize].parent();
let err = match future.check_image_access(img, entry.initial_layout,
entry.exclusive, queue)
{
Ok(_) => {
unsafe { img.increase_gpu_lock(); }
continue;
},
Err(err) => err
};
if !img.try_gpu_lock(entry.exclusive, queue) {
match err {
AccessCheckError::Unknown => panic!(), // TODO: use the err returned by try_gpu_lock
AccessCheckError::Denied(err) => return Err(err.into()),
}
}
},
}
}
// FIXME: pipeline barriers if necessary?
Ok(())
}
#[inline]
fn check_buffer_access(&self, buffer: &BufferAccess, exclusive: bool, queue: &Queue)
-> Result<Option<(PipelineStages, AccessFlagBits)>, AccessCheckError>
{
// TODO: check the queue family
// We can't call `.get()` on the HashMap because of the `Borrow` requirement that's
// unimplementable on our key type.
// TODO:
for (key, value) in self.resources.iter() {
if !key.conflicts_buffer_all(buffer) {
continue;
}
if !value.exclusive && exclusive {
return Err(AccessCheckError::Denied(AccessError::ExclusiveDenied));
}
return Ok(Some((value.final_stages, value.final_access)));
}
Err(AccessCheckError::Unknown)
}
#[inline]
fn check_image_access(&self, image: &ImageAccess, layout: Layout, exclusive: bool, queue: &Queue)
-> Result<Option<(PipelineStages, AccessFlagBits)>, AccessCheckError>
{
// TODO: check the queue family
// We can't call `.get()` on the HashMap because of the `Borrow` requirement that's
// unimplementable on our key type.
// TODO:
for (key, value) in self.resources.iter() {
if !key.conflicts_image_all(image) {
continue;
}
if layout != Layout::Undefined && value.final_layout != layout {
return Err(AccessCheckError::Denied(AccessError::UnexpectedImageLayout {
allowed: value.final_layout,
requested: layout,
}));
}
if !value.exclusive && exclusive {
return Err(AccessCheckError::Denied(AccessError::ExclusiveDenied));
}
return Ok(Some((value.final_stages, value.final_access)));
}
Err(AccessCheckError::Unknown)
}
}
unsafe impl<I> DeviceOwned for SubmitSyncLayer<I> where I: DeviceOwned {
#[inline]
fn device(&self) -> &Arc<Device> {
self.inner.device()
}
}

View File

@ -0,0 +1,341 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::ptr;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use buffer::BufferAccess;
use command_buffer::CommandBuffer;
use command_buffer::CommandBufferBuilder;
use command_buffer::CommandBufferExecError;
use command_buffer::cb::CommandBufferBuild;
use command_buffer::pool::CommandPool;
use command_buffer::pool::CommandPoolBuilderAlloc;
use command_buffer::pool::CommandPoolAlloc;
use device::Device;
use device::DeviceOwned;
use device::Queue;
use framebuffer::EmptySinglePassRenderPassDesc;
use framebuffer::Framebuffer;
use framebuffer::FramebufferAbstract;
use framebuffer::RenderPass;
use framebuffer::RenderPassAbstract;
use framebuffer::Subpass;
use image::Layout;
use image::ImageAccess;
use instance::QueueFamily;
use sync::AccessCheckError;
use sync::AccessFlagBits;
use sync::PipelineStages;
use sync::GpuFuture;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// Determines the kind of command buffer that we want to create.
#[derive(Debug, Clone)]
pub enum Kind<R, F> {
/// A primary command buffer can execute all commands and can call secondary command buffers.
Primary,
/// A secondary command buffer can execute all dispatch and transfer operations, but not
/// drawing operations.
Secondary,
/// A secondary command buffer within a render pass can only call draw operations that can
/// be executed from within a specific subpass.
SecondaryRenderPass {
/// Which subpass this secondary command buffer can be called from.
subpass: Subpass<R>,
/// The framebuffer object that will be used when calling the command buffer.
/// This parameter is optional and is an optimization hint for the implementation.
framebuffer: Option<F>,
},
}
impl Kind<RenderPass<EmptySinglePassRenderPassDesc>, Framebuffer<RenderPass<EmptySinglePassRenderPassDesc>, ()>> {
/// Equivalent to `Kind::Primary`.
///
/// > **Note**: If you use `let kind = Kind::Primary;` in your code, you will probably get a
/// > compilation error because the Rust compiler couldn't determine the template parameters
/// > of `Kind`. To solve that problem in an easy way you can use this function instead.
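///
/// ```ignore
/// // Type inference succeeds because this constructor has a concrete return type.
/// let kind = Kind::primary();
/// ```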
#[inline]
pub fn primary() -> Kind<RenderPass<EmptySinglePassRenderPassDesc>, Framebuffer<RenderPass<EmptySinglePassRenderPassDesc>, ()>> {
Kind::Primary
}
}
/// Flags to pass when creating a command buffer.
///
/// The safest option is `SimultaneousUse`, but it may be slower than the other two.
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum Flags {
/// The command buffer can be used multiple times, but must not execute more than once
/// simultaneously.
None,
/// The command buffer can be executed multiple times in parallel.
SimultaneousUse,
/// The command buffer can only be submitted once. Any further submit is forbidden.
OneTimeSubmit,
}
/// Command buffer being built.
///
/// You can add commands to an `UnsafeCommandBufferBuilder` by using the `AddCommand` trait.
/// The `AddCommand<&Cmd>` trait is implemented on the `UnsafeCommandBufferBuilder` for any `Cmd`
/// that is a raw Vulkan command.
///
/// When you are finished adding commands, you can use the `CommandBufferBuild` trait to turn this
/// builder into an `UnsafeCommandBuffer`.
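///
/// A minimal sketch (assuming a `pool` and a raw command `raw_cmd` already exist; the caller
/// must uphold the safety rules described in the module documentation):
///
/// ```ignore
/// let builder = unsafe {
///     UnsafeCommandBufferBuilder::new(&pool, Kind::primary(), Flags::None)?
/// };
/// let builder = builder.add(&raw_cmd)?; // `AddCommand<&Cmd>`: commands are added by reference
/// let cb = builder.build()?;
/// ```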
// TODO: change P parameter to be a CommandPoolBuilderAlloc
pub struct UnsafeCommandBufferBuilder<P> where P: CommandPool {
// The command buffer obtained from the pool. Contains `None` if `build()` has been called.
cmd: Option<P::Builder>,
// Device that owns the command buffer.
// TODO: necessary?
device: Arc<Device>,
// Flags that were used at creation.
// TODO: necessary?
flags: Flags,
// True if we are a secondary command buffer.
// TODO: necessary?
secondary_cb: bool,
}
impl<P> UnsafeCommandBufferBuilder<P> where P: CommandPool {
/// Creates a new builder.
///
/// # Safety
///
/// Creating and destroying an unsafe command buffer is not unsafe per se, but the commands
/// that you add to it are unchecked, do not have any synchronization, and are not kept alive.
///
/// In other words, it is your job to make sure that the commands you add are valid, that they
/// don't use resources that have been destroyed, and that they do not introduce any race
/// condition.
///
/// > **Note**: Some checks are still made with `debug_assert!`. Do not expect to be able to
/// > submit invalid commands.
pub unsafe fn new<R, F>(pool: &P, kind: Kind<R, F>, flags: Flags)
-> Result<UnsafeCommandBufferBuilder<P>, OomError>
where R: RenderPassAbstract, F: FramebufferAbstract
{
let secondary = match kind {
Kind::Primary => false,
Kind::Secondary | Kind::SecondaryRenderPass { .. } => true,
};
let cmd = try!(pool.alloc(secondary, 1)).next().expect("Requested one command buffer from \
the command pool, but got zero.");
UnsafeCommandBufferBuilder::already_allocated(cmd, kind, flags)
}
/// Creates a new command buffer builder from an already-allocated command buffer.
///
/// # Safety
///
/// See the `new` method.
///
/// The kind must match how the command buffer was allocated.
///
pub unsafe fn already_allocated<R, F>(alloc: P::Builder, kind: Kind<R, F>, flags: Flags)
-> Result<UnsafeCommandBufferBuilder<P>, OomError>
where R: RenderPassAbstract, F: FramebufferAbstract
{
let device = alloc.device().clone();
let vk = device.pointers();
let cmd = alloc.inner().internal_object();
let vk_flags = {
let a = match flags {
Flags::None => 0,
Flags::SimultaneousUse => vk::COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT,
Flags::OneTimeSubmit => vk::COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT,
};
let b = match kind {
Kind::Primary | Kind::Secondary => 0,
Kind::SecondaryRenderPass { .. } => {
vk::COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT
},
};
a | b
};
let (rp, sp) = if let Kind::SecondaryRenderPass { ref subpass, .. } = kind {
(subpass.render_pass().inner().internal_object(), subpass.index())
} else {
(0, 0)
};
let framebuffer = if let Kind::SecondaryRenderPass { ref subpass, framebuffer: Some(ref framebuffer) } = kind {
// TODO: restore check
//assert!(framebuffer.is_compatible_with(subpass.render_pass())); // TODO: proper error
FramebufferAbstract::inner(&framebuffer).internal_object()
} else {
0
};
let inheritance = vk::CommandBufferInheritanceInfo {
sType: vk::STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO,
pNext: ptr::null(),
renderPass: rp,
subpass: sp,
framebuffer: framebuffer,
occlusionQueryEnable: 0, // TODO:
queryFlags: 0, // TODO:
pipelineStatistics: 0, // TODO:
};
let infos = vk::CommandBufferBeginInfo {
sType: vk::STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
pNext: ptr::null(),
flags: vk_flags,
pInheritanceInfo: &inheritance,
};
try!(check_errors(vk.BeginCommandBuffer(cmd, &infos)));
Ok(UnsafeCommandBufferBuilder {
cmd: Some(alloc),
device: device.clone(),
flags: flags,
secondary_cb: match kind {
Kind::Primary => false,
Kind::Secondary | Kind::SecondaryRenderPass { .. } => true,
},
})
}
}
unsafe impl<P> DeviceOwned for UnsafeCommandBufferBuilder<P> where P: CommandPool {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
unsafe impl<P> CommandBufferBuilder for UnsafeCommandBufferBuilder<P> where P: CommandPool {
#[inline]
fn queue_family(&self) -> QueueFamily {
self.cmd.as_ref().unwrap().queue_family()
}
}
unsafe impl<P> VulkanObject for UnsafeCommandBufferBuilder<P> where P: CommandPool {
type Object = vk::CommandBuffer;
#[inline]
fn internal_object(&self) -> vk::CommandBuffer {
self.cmd.as_ref().unwrap().inner().internal_object()
}
}
unsafe impl<P> CommandBufferBuild for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBuffer<P>;
type Err = OomError;
#[inline]
fn build(mut self) -> Result<Self::Out, OomError> {
unsafe {
let cmd = self.cmd.take().unwrap();
let vk = self.device.pointers();
try!(check_errors(vk.EndCommandBuffer(cmd.inner().internal_object())));
Ok(UnsafeCommandBuffer {
cmd: cmd.into_alloc(),
device: self.device.clone(),
flags: self.flags,
already_submitted: AtomicBool::new(false),
secondary_cb: self.secondary_cb
})
}
}
}
/// Command buffer that has been built.
///
/// Doesn't perform any synchronization and doesn't keep the objects it uses alive.
// TODO: change P parameter to be a CommandPoolAlloc
pub struct UnsafeCommandBuffer<P> where P: CommandPool {
// The Vulkan command buffer.
cmd: P::Alloc,
// Device that owns the command buffer.
// TODO: necessary?
device: Arc<Device>,
// Flags that were used at creation.
// TODO: necessary?
flags: Flags,
// True if the command buffer has already been submitted once. Only relevant if `flags` is
// `OneTimeSubmit`.
already_submitted: AtomicBool,
// True if this command buffer belongs to a secondary pool - needed for Drop
secondary_cb: bool
}
unsafe impl<P> CommandBuffer for UnsafeCommandBuffer<P> where P: CommandPool {
type Pool = P;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<P> {
self
}
#[inline]
fn prepare_submit(&self, _: &GpuFuture, _: &Queue) -> Result<(), CommandBufferExecError> {
// Not our job to check.
Ok(())
}
#[inline]
fn check_buffer_access(&self, buffer: &BufferAccess, exclusive: bool, queue: &Queue)
-> Result<Option<(PipelineStages, AccessFlagBits)>, AccessCheckError>
{
Err(AccessCheckError::Unknown)
}
#[inline]
fn check_image_access(&self, image: &ImageAccess, layout: Layout, exclusive: bool, queue: &Queue)
-> Result<Option<(PipelineStages, AccessFlagBits)>, AccessCheckError>
{
Err(AccessCheckError::Unknown)
}
}
unsafe impl<P> DeviceOwned for UnsafeCommandBuffer<P> where P: CommandPool {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
unsafe impl<P> VulkanObject for UnsafeCommandBuffer<P> where P: CommandPool {
type Object = vk::CommandBuffer;
#[inline]
fn internal_object(&self) -> vk::CommandBuffer {
self.cmd.inner().internal_object()
}
}

View File

@ -0,0 +1,20 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use command_buffer::CommandAddError;
/// Adds a command to a command buffer builder.
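///
/// A sketch of a pass-through implementation for a hypothetical `NoopLayer` wrapper:
///
/// ```ignore
/// unsafe impl<I, C> AddCommand<C> for NoopLayer<I>
///     where I: AddCommand<C>
/// {
///     type Out = NoopLayer<I::Out>;
///     fn add(self, cmd: C) -> Result<Self::Out, CommandAddError> {
///         // Forward the command to the wrapped builder and re-wrap the result.
///         Ok(NoopLayer { inner: self.inner.add(cmd)? })
///     }
/// }
/// ```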
pub unsafe trait AddCommand<C> {
/// The new command buffer builder type.
type Out;
/// Adds the command. This takes ownership of the builder and returns a new builder with the
/// command appended at the end of it.
fn add(self, cmd: C) -> Result<Self::Out, CommandAddError>;
}

View File

@ -0,0 +1,135 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use command_buffer::cb::AddCommand;
use command_buffer::CommandAddError;
use command_buffer::commands_raw::CmdBindDescriptorSets;
use command_buffer::commands_raw::CmdBindDescriptorSetsError;
use command_buffer::commands_raw::CmdBindPipeline;
use command_buffer::commands_raw::CmdDispatchRaw;
use command_buffer::commands_raw::CmdDispatchRawError;
use command_buffer::commands_raw::CmdPushConstants;
use command_buffer::commands_raw::CmdPushConstantsError;
use descriptor::descriptor_set::DescriptorSetsCollection;
use pipeline::ComputePipelineAbstract;
/// Command that executes a compute shader.
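///
/// A sketch of usage (hypothetical `pipeline`, `sets`, `constants` and `builder` values):
///
/// ```ignore
/// let dispatch = CmdDispatch::new([64, 1, 1], pipeline, sets, constants)?;
/// // Adding the command expands into the four sub-commands in order.
/// let builder = builder.add(dispatch)?;
/// ```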
pub struct CmdDispatch<P, S, Pc> {
push_constants: CmdPushConstants<Pc, P>,
descriptor_sets: CmdBindDescriptorSets<S, P>,
bind_pipeline: CmdBindPipeline<P>,
dispatch_raw: CmdDispatchRaw,
}
impl<P, S, Pc> CmdDispatch<P, S, Pc>
where P: ComputePipelineAbstract, S: DescriptorSetsCollection
{
/// See the documentation of the `dispatch` method.
pub fn new(dimensions: [u32; 3], pipeline: P, sets: S, push_constants: Pc)
-> Result<CmdDispatch<P, S, Pc>, CmdDispatchError>
where P: Clone
{
let bind_pipeline = CmdBindPipeline::bind_compute_pipeline(pipeline.clone());
let descriptor_sets = try!(CmdBindDescriptorSets::new(true, pipeline.clone(), sets));
let push_constants = try!(CmdPushConstants::new(pipeline.clone(), push_constants));
let dispatch_raw = try!(unsafe { CmdDispatchRaw::new(pipeline.device().clone(), dimensions) });
Ok(CmdDispatch {
push_constants: push_constants,
descriptor_sets: descriptor_sets,
bind_pipeline: bind_pipeline,
dispatch_raw: dispatch_raw,
})
}
}
unsafe impl<Cb, P, S, Pc, O, O1, O2, O3> AddCommand<CmdDispatch<P, S, Pc>> for Cb
where Cb: AddCommand<CmdPushConstants<Pc, P>, Out = O1>,
O1: AddCommand<CmdBindDescriptorSets<S, P>, Out = O2>,
O2: AddCommand<CmdBindPipeline<P>, Out = O3>,
O3: AddCommand<CmdDispatchRaw, Out = O>
{
type Out = O;
#[inline]
fn add(self, command: CmdDispatch<P, S, Pc>) -> Result<Self::Out, CommandAddError> {
Ok(self.add(command.push_constants)?
.add(command.descriptor_sets)?
.add(command.bind_pipeline)?
.add(command.dispatch_raw)?)
}
}
/// Error that can happen when creating a `CmdDispatch`.
#[derive(Debug, Copy, Clone)]
pub enum CmdDispatchError {
/// The dispatch dimensions are larger than the hardware limits.
DispatchRawError(CmdDispatchRawError),
/// Error while binding descriptor sets.
BindDescriptorSetsError(CmdBindDescriptorSetsError),
/// Error while setting push constants.
PushConstantsError(CmdPushConstantsError),
}
impl From<CmdDispatchRawError> for CmdDispatchError {
#[inline]
fn from(err: CmdDispatchRawError) -> CmdDispatchError {
CmdDispatchError::DispatchRawError(err)
}
}
impl From<CmdBindDescriptorSetsError> for CmdDispatchError {
#[inline]
fn from(err: CmdBindDescriptorSetsError) -> CmdDispatchError {
CmdDispatchError::BindDescriptorSetsError(err)
}
}
impl From<CmdPushConstantsError> for CmdDispatchError {
#[inline]
fn from(err: CmdPushConstantsError) -> CmdDispatchError {
CmdDispatchError::PushConstantsError(err)
}
}
impl error::Error for CmdDispatchError {
#[inline]
fn description(&self) -> &str {
match *self {
CmdDispatchError::DispatchRawError(_) => {
"the dispatch dimensions are larger than the hardware limits"
},
CmdDispatchError::BindDescriptorSetsError(_) => {
"error while binding descriptor sets"
},
CmdDispatchError::PushConstantsError(_) => {
"error while setting push constants"
},
}
}
#[inline]
fn cause(&self) -> Option<&error::Error> {
match *self {
CmdDispatchError::DispatchRawError(ref err) => Some(err),
CmdDispatchError::BindDescriptorSetsError(ref err) => Some(err),
CmdDispatchError::PushConstantsError(ref err) => Some(err),
}
}
}
impl fmt::Display for CmdDispatchError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}

View File

@ -0,0 +1,183 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::mem;
use std::sync::Arc;
use buffer::BufferAccess;
use buffer::TypedBufferAccess;
use command_buffer::commands_raw::CmdBindDescriptorSets;
use command_buffer::commands_raw::CmdBindDescriptorSetsError;
use command_buffer::commands_raw::CmdBindPipeline;
use command_buffer::commands_raw::CmdPushConstants;
use command_buffer::commands_raw::CmdPushConstantsError;
use command_buffer::CommandAddError;
use command_buffer::DispatchIndirectCommand;
use command_buffer::RawCommandBufferPrototype;
use command_buffer::CommandsList;
use command_buffer::CommandsListSink;
use descriptor::PipelineLayoutAbstract;
use descriptor::descriptor_set::collection::TrackedDescriptorSetsCollection;
use device::DeviceOwned;
use pipeline::ComputePipeline;
use sync::AccessFlagBits;
use sync::PipelineStages;
use VulkanObject;
use VulkanPointers;
use vk;
/// Wraps around a commands list and adds an indirect dispatch command at the end of it.
pub struct CmdDispatchIndirect<L, B, Pl, S, Pc>
where L: CommandsList, Pl: PipelineLayoutAbstract, S: TrackedDescriptorSetsCollection
{
// Parent commands list.
previous: CmdPushConstants<
CmdBindDescriptorSets<
CmdBindPipeline<L, Arc<ComputePipeline<Pl>>>,
S, Arc<ComputePipeline<Pl>>
>,
Pc, Arc<ComputePipeline<Pl>>
>,
raw_buffer: vk::Buffer,
raw_offset: vk::DeviceSize,
// The buffer.
buffer: B,
}
impl<L, B, Pl, S, Pc> CmdDispatchIndirect<L, B, Pl, S, Pc>
where L: CommandsList, Pl: PipelineLayoutAbstract, S: TrackedDescriptorSetsCollection
{
/// This function is unsafe because the values in the buffer must be less than or equal to
/// `VkPhysicalDeviceLimits::maxComputeWorkGroupCount`.
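///
/// # Example
///
/// A hedged sketch; `list`, `pipeline`, `sets`, `push_constants` and `buffer` are
/// hypothetical values, and the caller is responsible for having checked the buffer
/// contents against the device limits:
///
/// ```ignore
/// let list = unsafe {
///     CmdDispatchIndirect::new(list, pipeline, sets, push_constants, buffer)?
/// };
/// ```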
pub unsafe fn new(previous: L, pipeline: Arc<ComputePipeline<Pl>>, sets: S, push_constants: Pc,
buffer: B)
-> Result<CmdDispatchIndirect<L, B, Pl, S, Pc>, CmdDispatchIndirectError>
where B: TypedBufferAccess<Content = DispatchIndirectCommand>
{
let previous = CmdBindPipeline::bind_compute_pipeline(previous, pipeline.clone());
let device = previous.device().clone();
let previous = CmdBindDescriptorSets::new(previous, false, pipeline.clone(), sets)?;
let previous = CmdPushConstants::new(previous, pipeline.clone(), push_constants)?;
let (raw_buffer, raw_offset) = {
let inner = buffer.inner();
if !inner.buffer.usage_indirect_buffer() {
return Err(CmdDispatchIndirectError::MissingBufferUsage);
}
if inner.offset % 4 != 0 {
return Err(CmdDispatchIndirectError::WrongAlignment);
}
(inner.buffer.internal_object(), inner.offset as vk::DeviceSize)
};
Ok(CmdDispatchIndirect {
previous: previous,
raw_buffer: raw_buffer,
raw_offset: raw_offset,
buffer: buffer,
})
}
}
unsafe impl<L, B, Pl, S, Pc> CommandsList for CmdDispatchIndirect<L, B, Pl, S, Pc>
where L: CommandsList, B: BufferAccess,
Pl: PipelineLayoutAbstract, S: TrackedDescriptorSetsCollection
{
#[inline]
fn append<'a>(&'a self, builder: &mut CommandsListSink<'a>) {
self.previous.append(builder);
{
let stages = PipelineStages { compute_shader: true, .. PipelineStages::none() };
let access = AccessFlagBits { indirect_command_read: true, .. AccessFlagBits::none() };
builder.add_buffer_transition(&self.buffer, 0,
mem::size_of::<DispatchIndirectCommand>(), false,
stages, access);
}
builder.add_command(Box::new(move |raw: &mut RawCommandBufferPrototype| {
unsafe {
let vk = raw.device.pointers();
let cmd = raw.command_buffer.clone().take().unwrap();
vk.CmdDispatchIndirect(cmd, self.raw_buffer, self.raw_offset);
}
}));
}
}
/// Error that can happen when creating a `CmdDispatchIndirect`.
#[derive(Debug, Copy, Clone)]
pub enum CmdDispatchIndirectError {
/// The buffer must have the "indirect" usage.
MissingBufferUsage,
/// The buffer must be 4-byte-aligned.
WrongAlignment,
/// Error while binding descriptor sets.
BindDescriptorSetsError(CmdBindDescriptorSetsError),
/// Error while setting push constants.
PushConstantsError(CmdPushConstantsError),
}
impl From<CmdBindDescriptorSetsError> for CmdDispatchIndirectError {
#[inline]
fn from(err: CmdBindDescriptorSetsError) -> CmdDispatchIndirectError {
CmdDispatchIndirectError::BindDescriptorSetsError(err)
}
}
impl From<CmdPushConstantsError> for CmdDispatchIndirectError {
#[inline]
fn from(err: CmdPushConstantsError) -> CmdDispatchIndirectError {
CmdDispatchIndirectError::PushConstantsError(err)
}
}
impl error::Error for CmdDispatchIndirectError {
#[inline]
fn description(&self) -> &str {
match *self {
CmdDispatchIndirectError::MissingBufferUsage => {
"the buffer must have the indirect usage"
},
CmdDispatchIndirectError::WrongAlignment => {
"the buffer must be 4-byte-aligned"
},
CmdDispatchIndirectError::BindDescriptorSetsError(_) => {
"error while binding descriptor sets"
},
CmdDispatchIndirectError::PushConstantsError(_) => {
"error while setting push constants"
},
}
}
#[inline]
fn cause(&self) -> Option<&error::Error> {
match *self {
CmdDispatchIndirectError::MissingBufferUsage => None,
CmdDispatchIndirectError::WrongAlignment => None,
CmdDispatchIndirectError::BindDescriptorSetsError(ref err) => Some(err),
CmdDispatchIndirectError::PushConstantsError(ref err) => Some(err),
}
}
}
impl fmt::Display for CmdDispatchIndirectError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}

View File

@ -0,0 +1,81 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use command_buffer::CommandAddError;
use command_buffer::DynamicState;
use command_buffer::cb::AddCommand;
use command_buffer::commands_raw::CmdBindDescriptorSets;
use command_buffer::commands_raw::CmdBindPipeline;
use command_buffer::commands_raw::CmdBindVertexBuffers;
use command_buffer::commands_raw::CmdDrawRaw;
use command_buffer::commands_raw::CmdPushConstants;
use command_buffer::commands_raw::CmdSetState;
use descriptor::descriptor_set::DescriptorSetsCollection;
use pipeline::GraphicsPipelineAbstract;
use pipeline::vertex::VertexSource;
/// Command that draws non-indexed vertices.
pub struct CmdDraw<V, P, S, Pc> {
vertex_buffers: CmdBindVertexBuffers<V>,
push_constants: CmdPushConstants<Pc, P>,
descriptor_sets: CmdBindDescriptorSets<S, P>,
set_state: CmdSetState,
bind_pipeline: CmdBindPipeline<P>,
draw_raw: CmdDrawRaw,
}
impl<V, P, S, Pc> CmdDraw<V, P, S, Pc>
where P: GraphicsPipelineAbstract, S: DescriptorSetsCollection
{
/// See the documentation of the `draw` method.
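///
/// # Example
///
/// A minimal sketch, assuming hypothetical `pipeline`, `vertices`, `sets` and
/// `push_constants` values created elsewhere:
///
/// ```ignore
/// let draw = CmdDraw::new(pipeline, DynamicState::none(), vertices, sets, push_constants);
/// ```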
pub fn new(pipeline: P, dynamic: DynamicState, vertices: V, sets: S, push_constants: Pc)
-> CmdDraw<V, P, S, Pc>
where P: VertexSource<V> + Clone
{
let (_, vertex_count, instance_count) = pipeline.decode(&vertices);
let bind_pipeline = CmdBindPipeline::bind_graphics_pipeline(pipeline.clone());
let device = bind_pipeline.device().clone();
let set_state = CmdSetState::new(device, dynamic);
let descriptor_sets = CmdBindDescriptorSets::new(true, pipeline.clone(), sets).unwrap() /* TODO: error */;
let push_constants = CmdPushConstants::new(pipeline.clone(), push_constants).unwrap() /* TODO: error */;
let vertex_buffers = CmdBindVertexBuffers::new(&pipeline, vertices);
let draw_raw = unsafe { CmdDrawRaw::new(vertex_count as u32, instance_count as u32, 0, 0) };
CmdDraw {
vertex_buffers: vertex_buffers,
push_constants: push_constants,
descriptor_sets: descriptor_sets,
set_state: set_state,
bind_pipeline: bind_pipeline,
draw_raw: draw_raw,
}
}
}
unsafe impl<Cb, V, P, S, Pc, O, O1, O2, O3, O4, O5> AddCommand<CmdDraw<V, P, S, Pc>> for Cb
where Cb: AddCommand<CmdBindVertexBuffers<V>, Out = O1>,
O1: AddCommand<CmdPushConstants<Pc, P>, Out = O2>,
O2: AddCommand<CmdBindDescriptorSets<S, P>, Out = O3>,
O3: AddCommand<CmdSetState, Out = O4>,
O4: AddCommand<CmdBindPipeline<P>, Out = O5>,
O5: AddCommand<CmdDrawRaw, Out = O>
{
type Out = O;
#[inline]
fn add(self, command: CmdDraw<V, P, S, Pc>) -> Result<Self::Out, CommandAddError> {
Ok(self.add(command.vertex_buffers)?
.add(command.push_constants)?
.add(command.descriptor_sets)?
.add(command.set_state)?
.add(command.bind_pipeline)?
.add(command.draw_raw)?)
}
}

View File

@ -0,0 +1,101 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use buffer::BufferAccess;
use buffer::TypedBufferAccess;
use command_buffer::CommandAddError;
use command_buffer::DynamicState;
use command_buffer::cb::AddCommand;
use command_buffer::commands_raw::CmdBindDescriptorSets;
use command_buffer::commands_raw::CmdBindIndexBuffer;
use command_buffer::commands_raw::CmdBindPipeline;
use command_buffer::commands_raw::CmdBindVertexBuffers;
use command_buffer::commands_raw::CmdPushConstants;
use command_buffer::commands_raw::CmdSetState;
use command_buffer::commands_raw::CmdDrawIndexedRaw;
use descriptor::descriptor_set::DescriptorSetsCollection;
use pipeline::GraphicsPipelineAbstract;
use pipeline::input_assembly::Index;
use pipeline::vertex::VertexSource;
/// Command that draws indexed vertices.
pub struct CmdDrawIndexed<V, Ib, P, S, Pc>
{
vertex_buffers: CmdBindVertexBuffers<V>,
index_buffer: CmdBindIndexBuffer<Ib>,
push_constants: CmdPushConstants<Pc, P>,
descriptor_sets: CmdBindDescriptorSets<S, P>,
set_state: CmdSetState,
bind_pipeline: CmdBindPipeline<P>,
draw_indexed_raw: CmdDrawIndexedRaw,
}
impl<V, Ib, I, P, S, Pc> CmdDrawIndexed<V, Ib, P, S, Pc>
where P: GraphicsPipelineAbstract,
S: DescriptorSetsCollection,
Ib: BufferAccess + TypedBufferAccess<Content = [I]>,
I: Index + 'static
{
/// See the documentation of the `draw_indexed` method.
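///
/// # Example
///
/// A minimal sketch, assuming hypothetical `pipeline`, `vertices`, `index_buffer`, `sets`
/// and `push_constants` values created elsewhere:
///
/// ```ignore
/// let cmd = CmdDrawIndexed::new(pipeline, DynamicState::none(), vertices, index_buffer,
///                               sets, push_constants);
/// ```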
pub fn new(pipeline: P, dynamic: DynamicState,
vertices: V, index_buffer: Ib, sets: S, push_constants: Pc)
-> CmdDrawIndexed<V, Ib, P, S, Pc>
where P: VertexSource<V> + Clone
{
let index_count = index_buffer.len();
let (_, _, instance_count) = pipeline.decode(&vertices);
let bind_pipeline = CmdBindPipeline::bind_graphics_pipeline(pipeline.clone());
let device = bind_pipeline.device().clone();
let set_state = CmdSetState::new(device, dynamic);
let descriptor_sets = CmdBindDescriptorSets::new(true, pipeline.clone(), sets).unwrap() /* TODO: error */;
let push_constants = CmdPushConstants::new(pipeline.clone(), push_constants).unwrap() /* TODO: error */;
let vertex_buffers = CmdBindVertexBuffers::new(&pipeline, vertices);
let index_buffer = CmdBindIndexBuffer::new(index_buffer);
let draw_indexed_raw = unsafe {
CmdDrawIndexedRaw::new(
index_count as u32, instance_count as u32,
0, 0, 0
)
};
// TODO: check that dynamic state is not missing some elements required by the pipeline
CmdDrawIndexed {
vertex_buffers: vertex_buffers,
index_buffer: index_buffer,
push_constants: push_constants,
descriptor_sets: descriptor_sets,
set_state: set_state,
bind_pipeline: bind_pipeline,
draw_indexed_raw: draw_indexed_raw,
}
}
}
unsafe impl<Cb, V, Ib, P, S, Pc, O, O1, O2, O3, O4, O5, O6> AddCommand<CmdDrawIndexed<V, Ib, P, S, Pc>> for Cb
where Cb: AddCommand<CmdBindVertexBuffers<V>, Out = O1>,
O1: AddCommand<CmdBindIndexBuffer<Ib>, Out = O2>,
O2: AddCommand<CmdPushConstants<Pc, P>, Out = O3>,
O3: AddCommand<CmdBindDescriptorSets<S, P>, Out = O4>,
O4: AddCommand<CmdSetState, Out = O5>,
O5: AddCommand<CmdBindPipeline<P>, Out = O6>,
O6: AddCommand<CmdDrawIndexedRaw, Out = O>
{
type Out = O;
#[inline]
fn add(self, command: CmdDrawIndexed<V, Ib, P, S, Pc>) -> Result<Self::Out, CommandAddError> {
Ok(self.add(command.vertex_buffers)?
.add(command.index_buffer)?
.add(command.push_constants)?
.add(command.descriptor_sets)?
.add(command.set_state)?
.add(command.bind_pipeline)?
.add(command.draw_indexed_raw)?)
}
}

View File

@ -0,0 +1,87 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use buffer::BufferAccess;
use buffer::TypedBufferAccess;
use command_buffer::CommandAddError;
use command_buffer::DynamicState;
use command_buffer::DrawIndirectCommand;
use command_buffer::cb::AddCommand;
use command_buffer::commands_raw::CmdBindDescriptorSets;
use command_buffer::commands_raw::CmdBindPipeline;
use command_buffer::commands_raw::CmdBindVertexBuffers;
use command_buffer::commands_raw::CmdDrawIndirectRaw;
use command_buffer::commands_raw::CmdPushConstants;
use command_buffer::commands_raw::CmdSetState;
use descriptor::descriptor_set::DescriptorSetsCollection;
use pipeline::GraphicsPipelineAbstract;
use pipeline::vertex::VertexSource;
/// Command that performs a non-indexed indirect draw.
pub struct CmdDrawIndirect<V, I, P, S, Pc> {
vertex_buffers: CmdBindVertexBuffers<V>,
push_constants: CmdPushConstants<Pc, P>,
descriptor_sets: CmdBindDescriptorSets<S, P>,
set_state: CmdSetState,
bind_pipeline: CmdBindPipeline<P>,
draw_raw: CmdDrawIndirectRaw<I>,
}
impl<V, I, P, S, Pc> CmdDrawIndirect<V, I, P, S, Pc>
where P: GraphicsPipelineAbstract, S: DescriptorSetsCollection,
I: BufferAccess + TypedBufferAccess<Content = [DrawIndirectCommand]>
{
/// See the documentation of the `draw_indirect` method.
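///
/// # Example
///
/// A minimal sketch, assuming hypothetical `pipeline`, `vertices`, `indirect_buffer`,
/// `sets` and `push_constants` values created elsewhere:
///
/// ```ignore
/// let cmd = CmdDrawIndirect::new(pipeline, DynamicState::none(), vertices,
///                                indirect_buffer, sets, push_constants);
/// ```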
pub fn new(pipeline: P, dynamic: DynamicState, vertices: V, indirect_buffer: I, sets: S,
push_constants: Pc) -> CmdDrawIndirect<V, I, P, S, Pc>
where P: VertexSource<V> + Clone
{
let draw_count = indirect_buffer.len() as u32;
// TODO: err, how to ensure safety for ranges in the command?
let bind_pipeline = CmdBindPipeline::bind_graphics_pipeline(pipeline.clone());
let device = bind_pipeline.device().clone();
let set_state = CmdSetState::new(device, dynamic);
let descriptor_sets = CmdBindDescriptorSets::new(true, pipeline.clone(), sets).unwrap() /* TODO: error */;
let push_constants = CmdPushConstants::new(pipeline.clone(), push_constants).unwrap() /* TODO: error */;
let vertex_buffers = CmdBindVertexBuffers::new(&pipeline, vertices);
let draw_raw = unsafe { CmdDrawIndirectRaw::new(indirect_buffer, draw_count) };
CmdDrawIndirect {
vertex_buffers: vertex_buffers,
push_constants: push_constants,
descriptor_sets: descriptor_sets,
set_state: set_state,
bind_pipeline: bind_pipeline,
draw_raw: draw_raw,
}
}
}
unsafe impl<Cb, V, I, P, S, Pc, O, O1, O2, O3, O4, O5> AddCommand<CmdDrawIndirect<V, I, P, S, Pc>> for Cb
where Cb: AddCommand<CmdBindVertexBuffers<V>, Out = O1>,
O1: AddCommand<CmdPushConstants<Pc, P>, Out = O2>,
O2: AddCommand<CmdBindDescriptorSets<S, P>, Out = O3>,
O3: AddCommand<CmdSetState, Out = O4>,
O4: AddCommand<CmdBindPipeline<P>, Out = O5>,
O5: AddCommand<CmdDrawIndirectRaw<I>, Out = O>
{
type Out = O;
#[inline]
fn add(self, command: CmdDrawIndirect<V, I, P, S, Pc>) -> Result<Self::Out, CommandAddError> {
Ok(self.add(command.vertex_buffers)?
.add(command.push_constants)?
.add(command.descriptor_sets)?
.add(command.set_state)?
.add(command.bind_pipeline)?
.add(command.draw_raw)?)
}
}

View File

@ -0,0 +1,24 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
//! Additional commands built on top of core commands.
//!
//! These commands are specific to vulkano and make it easier to perform common operations.
pub use self::dispatch::{CmdDispatch, CmdDispatchError};
//pub use self::dispatch_indirect::{CmdDispatchIndirect, CmdDispatchIndirectError};
pub use self::draw::CmdDraw;
pub use self::draw_indexed::CmdDrawIndexed;
pub use self::draw_indirect::CmdDrawIndirect;
mod dispatch;
//mod dispatch_indirect;
mod draw;
mod draw_indexed;
mod draw_indirect;

View File

@ -0,0 +1,165 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use std::ops::Range;
use std::ptr;
use smallvec::SmallVec;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use format::ClearValue;
use framebuffer::FramebufferAbstract;
use framebuffer::RenderPassDescClearValues;
use framebuffer::RenderPassAbstract;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that makes the command buffer enter a render pass.
pub struct CmdBeginRenderPass<Rp, F> {
// Inline or secondary.
contents: vk::SubpassContents,
// The draw area.
rect: [Range<u32>; 2],
// The clear values for the clear attachments.
clear_values: SmallVec<[vk::ClearValue; 6]>,
// The raw render pass handle to bind.
raw_render_pass: vk::RenderPass,
// The raw framebuffer handle to bind.
raw_framebuffer: vk::Framebuffer,
// The device.
device: Arc<Device>,
// The render pass. Can be `None` if same as framebuffer.
render_pass: Option<Rp>,
// The framebuffer.
framebuffer: F,
}
impl<F> CmdBeginRenderPass<Arc<RenderPassAbstract + Send + Sync>, F>
where F: FramebufferAbstract
{
/// See the documentation of the `begin_render_pass` method.
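///
/// # Example
///
/// A minimal sketch, assuming a hypothetical `framebuffer` with a single color attachment;
/// the clear value conversion relies on the `From` implementations of `ClearValue`:
///
/// ```ignore
/// let cmd = CmdBeginRenderPass::new(framebuffer, false, vec![[0.0, 0.0, 0.0, 1.0].into()]);
/// ```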
// TODO: allow setting more parameters
pub fn new<C>(framebuffer: F, secondary: bool, clear_values: C)
-> CmdBeginRenderPass<Arc<RenderPassAbstract + Send + Sync>, F>
where F: RenderPassDescClearValues<C>
{
let raw_render_pass = RenderPassAbstract::inner(&framebuffer).internal_object();
let device = framebuffer.device().clone();
let raw_framebuffer = FramebufferAbstract::inner(&framebuffer).internal_object();
let clear_values = {
framebuffer.convert_clear_values(clear_values).map(|clear_value| {
match clear_value {
ClearValue::None => {
vk::ClearValue::color(vk::ClearColorValue::float32([0.0; 4]))
},
ClearValue::Float(val) => {
vk::ClearValue::color(vk::ClearColorValue::float32(val))
},
ClearValue::Int(val) => {
vk::ClearValue::color(vk::ClearColorValue::int32(val))
},
ClearValue::Uint(val) => {
vk::ClearValue::color(vk::ClearColorValue::uint32(val))
},
ClearValue::Depth(val) => {
vk::ClearValue::depth_stencil(vk::ClearDepthStencilValue {
depth: val, stencil: 0
})
},
ClearValue::Stencil(val) => {
vk::ClearValue::depth_stencil(vk::ClearDepthStencilValue {
depth: 0.0, stencil: val
})
},
ClearValue::DepthStencil((depth, stencil)) => {
vk::ClearValue::depth_stencil(vk::ClearDepthStencilValue {
depth: depth, stencil: stencil,
})
},
}
}).collect()
};
let rect = [0 .. framebuffer.dimensions()[0],
0 .. framebuffer.dimensions()[1]];
CmdBeginRenderPass {
contents: if secondary { vk::SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS }
else { vk::SUBPASS_CONTENTS_INLINE },
rect: rect,
clear_values: clear_values,
raw_render_pass: raw_render_pass,
raw_framebuffer: raw_framebuffer,
device: device,
render_pass: None,
framebuffer: framebuffer,
}
}
}
impl<Rp, F> CmdBeginRenderPass<Rp, F> {
/// Returns the framebuffer of the command.
#[inline]
pub fn framebuffer(&self) -> &F {
&self.framebuffer
}
}
unsafe impl<Rp, F> DeviceOwned for CmdBeginRenderPass<Rp, F>
where F: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.framebuffer.device()
}
}
unsafe impl<'a, P, Rp, F> AddCommand<&'a CmdBeginRenderPass<Rp, F>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdBeginRenderPass<Rp, F>) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
let begin = vk::RenderPassBeginInfo {
sType: vk::STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
pNext: ptr::null(),
renderPass: command.raw_render_pass,
framebuffer: command.raw_framebuffer,
renderArea: vk::Rect2D {
offset: vk::Offset2D {
x: command.rect[0].start as i32,
y: command.rect[1].start as i32,
},
extent: vk::Extent2D {
width: command.rect[0].end - command.rect[0].start,
height: command.rect[1].end - command.rect[1].start,
},
},
clearValueCount: command.clear_values.len() as u32,
pClearValues: command.clear_values.as_ptr(),
};
vk.CmdBeginRenderPass(cmd, &begin, command.contents);
}
Ok(self)
}
}

View File

@ -0,0 +1,163 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::ptr;
use std::sync::Arc;
use smallvec::SmallVec;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use descriptor::descriptor_set::DescriptorSetsCollection;
use descriptor::pipeline_layout::PipelineLayoutAbstract;
use descriptor::pipeline_layout::PipelineLayoutSetsCompatible;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that binds descriptor sets to the command buffer.
pub struct CmdBindDescriptorSets<S, P> {
// The raw Vulkan enum representing the kind of pipeline.
pipeline_ty: vk::PipelineBindPoint,
// The raw pipeline layout that the sets are bound to.
raw_pipeline_layout: vk::PipelineLayout,
// The raw sets to bind, grouped into contiguous runs. Each element is a tuple of the first
// set number of a run and the raw handles of the sets in that run. For example, binding
// sets 0, 1 and 3 produces `[(0, [set0, set1]), (3, [set3])]`.
raw_sets: SmallVec<[(u32, SmallVec<[vk::DescriptorSet; 8]>); 4]>,
// The device of the pipeline object, so that we can compare it with the command buffer's
// device.
device: Arc<Device>,
// The sets to bind. Unused, but we need to keep them alive.
sets: S,
// The pipeline layout. Unused, but we need to keep it alive.
pipeline_layout: P,
}
impl<S, P> CmdBindDescriptorSets<S, P>
where P: PipelineLayoutAbstract, S: DescriptorSetsCollection
{
/// Builds the command.
///
/// If `graphics` is true, the sets will be bound to the graphics bind point. If false, they
/// will be bound to the compute bind point.
///
/// Returns an error if the sets are not compatible with the pipeline layout.
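///
/// # Example
///
/// A minimal sketch, assuming hypothetical `pipeline_layout` and `sets` values that are
/// compatible with each other:
///
/// ```ignore
/// let cmd = CmdBindDescriptorSets::new(true, pipeline_layout, sets)?;
/// ```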
#[inline]
pub fn new(graphics: bool, pipeline_layout: P, sets: S)
-> Result<CmdBindDescriptorSets<S, P>, CmdBindDescriptorSetsError>
{
if !PipelineLayoutSetsCompatible::is_compatible(&pipeline_layout, &sets) {
return Err(CmdBindDescriptorSetsError::IncompatibleSets);
}
let raw_pipeline_layout = pipeline_layout.sys().internal_object();
let device = pipeline_layout.device().clone();
let raw_sets = {
let mut raw_sets: SmallVec<[(u32, SmallVec<[_; 8]>); 4]> = SmallVec::new();
let mut add_new = true;
for set_num in 0 .. sets.num_sets() {
let set = match sets.descriptor_set(set_num) {
Some(set) => set.internal_object(),
None => { add_new = true; continue; },
};
if add_new {
let mut v = SmallVec::new(); v.push(set);
raw_sets.push((set_num as u32, v));
add_new = false;
} else {
raw_sets.last_mut().unwrap().1.push(set);
}
}
raw_sets
};
Ok(CmdBindDescriptorSets {
raw_pipeline_layout: raw_pipeline_layout,
raw_sets: raw_sets,
pipeline_ty: if graphics { vk::PIPELINE_BIND_POINT_GRAPHICS }
else { vk::PIPELINE_BIND_POINT_COMPUTE },
device: device,
sets: sets,
pipeline_layout: pipeline_layout,
})
}
}
impl<S, P> CmdBindDescriptorSets<S, P> {
/// Returns true if the sets are bound to the graphics bind point, and false if they are
/// bound to the compute bind point.
// TODO: should be an enum?
#[inline]
pub fn is_graphics(&self) -> bool {
self.pipeline_ty == vk::PIPELINE_BIND_POINT_GRAPHICS
}
}
unsafe impl<S, Pl> DeviceOwned for CmdBindDescriptorSets<S, Pl>
where Pl: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.pipeline_layout.device()
}
}
unsafe impl<'a, P, Pl, S> AddCommand<&'a CmdBindDescriptorSets<S, Pl>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdBindDescriptorSets<S, Pl>) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
for &(first_set, ref sets) in command.raw_sets.iter() {
vk.CmdBindDescriptorSets(cmd, command.pipeline_ty, command.raw_pipeline_layout,
first_set, sets.len() as u32, sets.as_ptr(),
0, ptr::null()); // TODO: dynamic offset not supported
}
}
Ok(self)
}
}
/// Error that can happen when creating a `CmdBindDescriptorSets`.
#[derive(Debug, Copy, Clone)]
pub enum CmdBindDescriptorSetsError {
/// The sets are not compatible with the pipeline layout.
// TODO: inner error
IncompatibleSets,
}
impl error::Error for CmdBindDescriptorSetsError {
#[inline]
fn description(&self) -> &str {
match *self {
CmdBindDescriptorSetsError::IncompatibleSets => {
"the sets are not compatible with the pipeline layout"
},
}
}
}
impl fmt::Display for CmdBindDescriptorSetsError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}

View File

@ -0,0 +1,102 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use buffer::BufferAccess;
use buffer::TypedBufferAccess;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use pipeline::input_assembly::Index;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that binds an index buffer to a command buffer.
pub struct CmdBindIndexBuffer<B> {
// Raw handle of the buffer to bind.
raw_buffer: vk::Buffer,
// Raw offset of the buffer to bind.
offset: vk::DeviceSize,
// Type of index.
index_type: vk::IndexType,
// The device of the buffer, so that we can compare it with the command buffer's device.
device: Arc<Device>,
// The buffer to bind. Unused, but we need to keep it alive.
buffer: B,
}
impl<B, I> CmdBindIndexBuffer<B>
where B: BufferAccess + TypedBufferAccess<Content = [I]>,
I: Index + 'static
{
/// Builds the command.
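///
/// # Example
///
/// A minimal sketch, assuming a hypothetical `index_buffer` whose content type implements
/// the `Index` trait (for example a buffer of `u16`s):
///
/// ```ignore
/// let cmd = CmdBindIndexBuffer::new(index_buffer);
/// ```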
#[inline]
pub fn new(buffer: B) -> CmdBindIndexBuffer<B> {
let device;
let raw_buffer;
let offset;
{
let inner = buffer.inner();
debug_assert!(inner.offset < inner.buffer.size());
// TODO: check > The sum of offset and the address of the range of VkDeviceMemory object that is backing buffer, must be a multiple of the type indicated by indexType
assert!(inner.buffer.usage_index_buffer()); // TODO: error
device = inner.buffer.device().clone();
raw_buffer = inner.buffer.internal_object();
offset = inner.offset as vk::DeviceSize;
}
CmdBindIndexBuffer {
raw_buffer: raw_buffer,
offset: offset,
index_type: I::ty() as vk::IndexType,
device: device,
buffer: buffer,
}
}
}
impl<B> CmdBindIndexBuffer<B> {
/// Returns the index buffer to bind.
#[inline]
pub fn buffer(&self) -> &B {
&self.buffer
}
}
unsafe impl<B> DeviceOwned for CmdBindIndexBuffer<B>
where B: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.buffer.device()
}
}
unsafe impl<'a, P, B> AddCommand<&'a CmdBindIndexBuffer<B>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdBindIndexBuffer<B>) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdBindIndexBuffer(cmd, command.raw_buffer, command.offset, command.index_type);
}
Ok(self)
}
}

View File

@ -0,0 +1,140 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::marker::PhantomData;
use std::sync::Arc;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use pipeline::ComputePipelineAbstract;
use pipeline::GraphicsPipelineAbstract;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that binds a pipeline to a command buffer.
pub struct CmdBindPipeline<P> {
// The raw pipeline object to bind.
raw_pipeline: vk::Pipeline,
// The raw Vulkan enum representing the kind of pipeline.
pipeline_ty: vk::PipelineBindPoint,
// The device of the pipeline object, so that we can compare it with the command buffer's
// device.
device: Arc<Device>,
// The pipeline object to bind. Unused, but we need to keep it alive.
pipeline: P,
}
impl<P> CmdBindPipeline<P> {
/// Builds a command that binds a compute pipeline to the compute pipeline bind point.
///
/// Use this command right before a compute dispatch.
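///
/// # Example
///
/// A minimal sketch, assuming a hypothetical `pipeline` that implements
/// `ComputePipelineAbstract`:
///
/// ```ignore
/// let cmd = CmdBindPipeline::bind_compute_pipeline(pipeline.clone());
/// ```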
#[inline]
pub fn bind_compute_pipeline(pipeline: P) -> CmdBindPipeline<P>
where P: ComputePipelineAbstract
{
let raw_pipeline = pipeline.inner().internal_object();
let device = pipeline.device().clone();
CmdBindPipeline {
raw_pipeline: raw_pipeline,
pipeline_ty: vk::PIPELINE_BIND_POINT_COMPUTE,
device: device,
pipeline: pipeline,
}
}
/// Builds a command that binds a graphics pipeline to the graphics pipeline bind point.
///
/// Use this command right before a draw command.
#[inline]
pub fn bind_graphics_pipeline(pipeline: P) -> CmdBindPipeline<P>
where P: GraphicsPipelineAbstract
{
let raw_pipeline = GraphicsPipelineAbstract::inner(&pipeline).internal_object();
let device = pipeline.device().clone();
CmdBindPipeline {
raw_pipeline: raw_pipeline,
pipeline_ty: vk::PIPELINE_BIND_POINT_GRAPHICS,
device: device,
pipeline: pipeline,
}
}
/// This disables the command but keeps it alive. All getters still return the same value, but
/// executing the command will not do anything.
#[inline]
pub fn disabled(mut self) -> CmdBindPipeline<P> {
self.raw_pipeline = 0;
self
}
/// Returns the device the pipeline is associated with.
#[inline]
pub fn device(&self) -> &Arc<Device> {
&self.device
}
/// Returns true if this binds a graphics pipeline, and false if it binds a compute pipeline.
// TODO: should be an enum?
#[inline]
pub fn is_graphics(&self) -> bool {
self.pipeline_ty == vk::PIPELINE_BIND_POINT_GRAPHICS
}
/// Returns an object giving access to the pipeline object that will be bound.
#[inline]
pub fn sys(&self) -> CmdBindPipelineSys {
CmdBindPipelineSys(self.raw_pipeline, PhantomData)
}
}
unsafe impl<'a, P, Pl> AddCommand<&'a CmdBindPipeline<Pl>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdBindPipeline<Pl>) -> Result<Self::Out, CommandAddError> {
if command.raw_pipeline != 0 {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdBindPipeline(cmd, command.pipeline_ty, command.raw_pipeline);
}
}
Ok(self)
}
}
unsafe impl<Pl> DeviceOwned for CmdBindPipeline<Pl> {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
/// Object that represents the internals of the bind pipeline command.
#[derive(Debug, Copy, Clone)]
pub struct CmdBindPipelineSys<'a>(vk::Pipeline, PhantomData<&'a ()>);
unsafe impl<'a> VulkanObject for CmdBindPipelineSys<'a> {
type Object = vk::Pipeline;
#[inline]
fn internal_object(&self) -> vk::Pipeline {
self.0
}
}

View File

@ -0,0 +1,154 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use smallvec::SmallVec;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use pipeline::vertex::VertexSource;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that binds vertex buffers to a command buffer.
pub struct CmdBindVertexBuffers<B> {
// Actual raw state of the command.
state: CmdBindVertexBuffersHash,
// Offset within `state` to start binding.
first_binding: u32,
// Number of bindings to pass to the command.
num_bindings: u32,
// The device of the buffer, so that we can compare it with the command buffer's device.
device: Arc<Device>,
// The buffers to bind. Unused, but we need to keep it alive.
buffers: B,
}
/// A "hash" of the bind vertex buffers command. Can be compared with a previous hash to determine
/// if two commands are identical.
///
/// > **Note**: This is not *actually* a hash, because there are no collisions. If two objects
/// > are equal, then the commands are always identical.
#[derive(Clone, PartialEq, Eq)]
pub struct CmdBindVertexBuffersHash {
// Raw handles of the buffers to bind.
raw_buffers: SmallVec<[vk::Buffer; 4]>,
// Raw offsets of the buffers to bind.
offsets: SmallVec<[vk::DeviceSize; 4]>,
}
impl<B> CmdBindVertexBuffers<B> {
/// Builds the command.
#[inline]
pub fn new<S>(source_def: &S, buffers: B) -> CmdBindVertexBuffers<B>
where S: VertexSource<B>
{
let (device, raw_buffers, offsets) = {
let (buffers, _, _) = source_def.decode(&buffers);
let device = buffers.first().unwrap().buffer.device().clone();
let raw_buffers: SmallVec<_> = buffers.iter().map(|b| b.buffer.internal_object()).collect();
let offsets = buffers.iter().map(|b| b.offset as vk::DeviceSize).collect();
(device, raw_buffers, offsets)
};
let num_bindings = raw_buffers.len() as u32;
CmdBindVertexBuffers {
state: CmdBindVertexBuffersHash {
raw_buffers: raw_buffers,
offsets: offsets,
},
first_binding: 0,
num_bindings: num_bindings,
device: device,
buffers: buffers,
}
}
/// Returns a hash that represents the command.
#[inline]
pub fn hash(&self) -> &CmdBindVertexBuffersHash {
&self.state
}
/// Modifies the command so that it doesn't bind vertex buffers that were already bound by a
/// previous command with the given hash.
///
/// Note that this doesn't modify the hash of the command.
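///
/// # Example
///
/// A sketch of the intent, with hypothetical commands `a` and `b` that bind the same
/// buffers at the same offsets; after the call, executing `b` re-binds nothing:
///
/// ```ignore
/// let previous = a.hash().clone();
/// b.diff(&previous);
/// ```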
pub fn diff(&mut self, previous_hash: &CmdBindVertexBuffersHash) {
// We don't want to split the command into multiple ones, so we just trim the list of
// vertex buffers at the start and at the end.
let left_trim = self.state.raw_buffers
.iter()
.zip(self.state.offsets.iter())
.zip(previous_hash.raw_buffers.iter())
.zip(previous_hash.offsets.iter())
.position(|(((&cur_buf, &cur_off), &prev_buf), &prev_off)| {
cur_buf != prev_buf || cur_off != prev_off
})
.map(|p| p as u32)
.unwrap_or(self.num_bindings);
let right_trim = self.state.raw_buffers
.iter().rev()
.zip(self.state.offsets.iter().rev())
.zip(previous_hash.raw_buffers.iter().rev())
.zip(previous_hash.offsets.iter().rev())
.position(|(((&cur_buf, &cur_off), &prev_buf), &prev_off)| {
cur_buf != prev_buf || cur_off != prev_off
})
.map(|p| p as u32)
.unwrap_or(self.num_bindings);
self.first_binding = left_trim;
debug_assert!(left_trim <= self.state.raw_buffers.len() as u32);
self.num_bindings = (self.state.raw_buffers.len() as u32 - left_trim).saturating_sub(right_trim);
}
}
unsafe impl<B> DeviceOwned for CmdBindVertexBuffers<B> {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
unsafe impl<'a, P, B> AddCommand<&'a CmdBindVertexBuffers<B>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdBindVertexBuffers<B>) -> Result<Self::Out, CommandAddError> {
unsafe {
debug_assert_eq!(command.state.offsets.len(), command.state.raw_buffers.len());
debug_assert!(command.num_bindings <= command.state.raw_buffers.len() as u32);
if command.num_bindings == 0 {
return Ok(self);
}
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdBindVertexBuffers(cmd, command.first_binding,
command.num_bindings,
command.state.raw_buffers[command.first_binding as usize..].as_ptr(),
command.state.offsets[command.first_binding as usize..].as_ptr());
}
Ok(self)
}
}

View File

@ -0,0 +1,159 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::sync::Arc;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that blits from an image to another image.
#[derive(Debug, Clone)]
pub struct CmdBlitImage<S, D> {
// The source image.
source: S,
// Raw source image.
source_raw: vk::Image,
// Layout of the source image.
source_layout: vk::ImageLayout,
// Offset in the source.
source_offset1: [i32; 3],
source_offset2: [i32; 3],
source_aspect_mask: vk::ImageAspectFlags,
source_mip_level: u32,
source_base_array_layer: u32,
source_layer_count: u32,
// The destination image.
destination: D,
// Raw destination image.
destination_raw: vk::Image,
// Layout of the destination image.
destination_layout: vk::ImageLayout,
// Offset in the destination.
destination_offset1: [i32; 3],
destination_offset2: [i32; 3],
destination_aspect_mask: vk::ImageAspectFlags,
destination_mip_level: u32,
destination_base_array_layer: u32,
destination_layer_count: u32,
filter: vk::Filter,
}
// TODO: add constructor
impl<S, D> CmdBlitImage<S, D> {
/// Returns the source image.
#[inline]
pub fn source(&self) -> &S {
&self.source
}
/// Returns the destination image.
#[inline]
pub fn destination(&self) -> &D {
&self.destination
}
}
unsafe impl<S, D> DeviceOwned for CmdBlitImage<S, D> where S: DeviceOwned {
#[inline]
fn device(&self) -> &Arc<Device> {
self.source.device()
}
}
unsafe impl<'a, P, S, D> AddCommand<&'a CmdBlitImage<S, D>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdBlitImage<S, D>) -> Result<Self::Out, CommandAddError> {
unsafe {
debug_assert!(command.source_layout == vk::IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL ||
command.source_layout == vk::IMAGE_LAYOUT_GENERAL);
debug_assert!(command.destination_layout == vk::IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL ||
command.destination_layout == vk::IMAGE_LAYOUT_GENERAL);
let region = vk::ImageBlit {
srcSubresource: vk::ImageSubresourceLayers {
aspectMask: command.source_aspect_mask,
mipLevel: command.source_mip_level,
baseArrayLayer: command.source_base_array_layer,
layerCount: command.source_layer_count,
},
srcOffsets: [
vk::Offset3D {
x: command.source_offset1[0],
y: command.source_offset1[1],
z: command.source_offset1[2],
},
vk::Offset3D {
x: command.source_offset2[0],
y: command.source_offset2[1],
z: command.source_offset2[2],
},
],
dstSubresource: vk::ImageSubresourceLayers {
aspectMask: command.destination_aspect_mask,
mipLevel: command.destination_mip_level,
baseArrayLayer: command.destination_base_array_layer,
layerCount: command.destination_layer_count,
},
dstOffsets: [
vk::Offset3D {
x: command.destination_offset1[0],
y: command.destination_offset1[1],
z: command.destination_offset1[2],
},
vk::Offset3D {
x: command.destination_offset2[0],
y: command.destination_offset2[1],
z: command.destination_offset2[2],
},
],
};
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdBlitImage(cmd, command.source_raw, command.source_layout,
command.destination_raw, command.destination_layout,
1, &region as *const _, command.filter);
}
Ok(self)
}
}
/// Error that can happen when creating a `CmdBlitImage`.
#[derive(Debug, Copy, Clone)]
pub enum CmdBlitImageError {
}
impl error::Error for CmdBlitImageError {
#[inline]
fn description(&self) -> &str {
match *self {
}
}
}
impl fmt::Display for CmdBlitImageError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}

View File

@ -0,0 +1,49 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use smallvec::SmallVec;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that clears framebuffer attachments of the current render pass.
pub struct CmdClearAttachments {
// The attachments to clear.
attachments: SmallVec<[vk::ClearAttachment; 8]>,
// The rectangles to clear.
rects: SmallVec<[vk::ClearRect; 4]>,
}
// TODO: add constructor
unsafe impl<'a, P> AddCommand<&'a CmdClearAttachments> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdClearAttachments) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdClearAttachments(cmd, command.attachments.len() as u32,
command.attachments.as_ptr(), command.rects.len() as u32,
command.rects.as_ptr());
}
Ok(self)
}
}

View File

@ -0,0 +1,175 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::cmp;
use std::error;
use std::fmt;
use std::sync::Arc;
use buffer::BufferAccess;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that copies from a buffer to another.
pub struct CmdCopyBuffer<S, D> {
source: S,
source_raw: vk::Buffer,
destination: D,
destination_raw: vk::Buffer,
src_offset: vk::DeviceSize,
dst_offset: vk::DeviceSize,
size: vk::DeviceSize,
}
impl<S, D> CmdCopyBuffer<S, D>
where S: BufferAccess, D: BufferAccess
{
/// Builds a new command.
///
/// This command will copy from the source to the destination. If their sizes are not equal,
/// then the amount of data copied is equal to the smaller of the two.
///
/// # Panic
///
/// - Panics if the source and destination were not created with the same device.
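///
/// # Example
///
/// A minimal sketch, assuming hypothetical `src` and `dst` buffers created on the same
/// device with the transfer source and transfer destination usages respectively:
///
/// ```ignore
/// let copy = CmdCopyBuffer::new(src, dst)?;
/// ```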
// FIXME: type safety
pub fn new(source: S, destination: D)
-> Result<CmdCopyBuffer<S, D>, CmdCopyBufferError>
{
// TODO:
//assert!(previous.is_outside_render_pass()); // TODO: error
assert_eq!(source.inner().buffer.device().internal_object(),
destination.inner().buffer.device().internal_object());
let (source_raw, src_offset) = {
let inner = source.inner();
if !inner.buffer.usage_transfer_src() {
return Err(CmdCopyBufferError::SourceMissingTransferUsage);
}
(inner.buffer.internal_object(), inner.offset)
};
let (destination_raw, dst_offset) = {
let inner = destination.inner();
if !inner.buffer.usage_transfer_dest() {
return Err(CmdCopyBufferError::DestinationMissingTransferUsage);
}
(inner.buffer.internal_object(), inner.offset)
};
let size = cmp::min(source.size(), destination.size());
if source.conflicts_buffer(0, size, &destination, 0, size) {
return Err(CmdCopyBufferError::OverlappingRanges);
} else {
debug_assert!(!destination.conflicts_buffer(0, size, &source, 0, size));
}
Ok(CmdCopyBuffer {
source: source,
source_raw: source_raw,
destination: destination,
destination_raw: destination_raw,
src_offset: src_offset as u64,
dst_offset: dst_offset as u64,
size: size as u64,
})
}
}
impl<S, D> CmdCopyBuffer<S, D> {
/// Returns the source buffer.
#[inline]
pub fn source(&self) -> &S {
&self.source
}
/// Returns the destination buffer.
#[inline]
pub fn destination(&self) -> &D {
&self.destination
}
}
unsafe impl<S, D> DeviceOwned for CmdCopyBuffer<S, D>
where S: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.source.device()
}
}
unsafe impl<'a, P, S, D> AddCommand<&'a CmdCopyBuffer<S, D>> for UnsafeCommandBufferBuilder<P>
where S: BufferAccess,
D: BufferAccess,
P: CommandPool,
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdCopyBuffer<S, D>) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
let region = vk::BufferCopy {
srcOffset: command.src_offset,
dstOffset: command.dst_offset,
size: command.size,
};
vk.CmdCopyBuffer(cmd, command.source_raw, command.destination_raw, 1, &region);
}
Ok(self)
}
}
/// Error that can happen when creating a `CmdCopyBuffer`.
#[derive(Debug, Copy, Clone)]
pub enum CmdCopyBufferError {
/// The source buffer is missing the transfer source usage.
SourceMissingTransferUsage,
/// The destination buffer is missing the transfer destination usage.
DestinationMissingTransferUsage,
/// The source and destination are overlapping.
OverlappingRanges,
}
impl error::Error for CmdCopyBufferError {
#[inline]
fn description(&self) -> &str {
match *self {
CmdCopyBufferError::SourceMissingTransferUsage => {
"the source buffer is missing the transfer source usage"
},
CmdCopyBufferError::DestinationMissingTransferUsage => {
"the destination buffer is missing the transfer destination usage"
},
CmdCopyBufferError::OverlappingRanges => {
"the source and destination are overlapping"
},
}
}
}
impl fmt::Display for CmdCopyBufferError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}

View File

@ -0,0 +1,233 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::sync::Arc;
use buffer::BufferAccess;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use image::ImageAccess;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that copies from a buffer to an image.
#[derive(Debug, Clone)]
pub struct CmdCopyBufferToImage<S, D> {
// The source buffer.
buffer: S,
// Raw source buffer.
buffer_raw: vk::Buffer,
// Offset in the source.
buffer_offset: vk::DeviceSize,
buffer_row_length: u32,
buffer_image_height: u32,
// The destination image.
destination: D,
// Raw destination image.
destination_raw: vk::Image,
// Layout of the destination image.
destination_layout: vk::ImageLayout,
// Offset in the destination.
destination_offset: [i32; 3],
destination_aspect_mask: vk::ImageAspectFlags,
destination_mip_level: u32,
destination_base_array_layer: u32,
destination_layer_count: u32,
// Size.
extent: [u32; 3],
}
impl<S, D> CmdCopyBufferToImage<S, D> where S: BufferAccess, D: ImageAccess {
#[inline]
pub fn new(source: S, destination: D)
-> Result<CmdCopyBufferToImage<S, D>, CmdCopyBufferToImageError>
{
let dims = destination.dimensions().width_height_depth();
CmdCopyBufferToImage::with_dimensions(source, destination, [0, 0, 0], dims, 0, 1, 0)
}
pub fn with_dimensions(source: S, destination: D, offset: [u32; 3], size: [u32; 3],
first_layer: u32, num_layers: u32, mipmap: u32)
-> Result<CmdCopyBufferToImage<S, D>, CmdCopyBufferToImageError>
{
// FIXME: check buffer content format
// FIXME: check that the buffer is large enough
// FIXME: check image dimensions
assert_eq!(source.inner().buffer.device().internal_object(),
destination.inner().device().internal_object());
let (source_raw, src_offset) = {
let inner = source.inner();
if !inner.buffer.usage_transfer_src() {
return Err(CmdCopyBufferToImageError::SourceMissingTransferUsage);
}
(inner.buffer.internal_object(), inner.offset)
};
if destination.samples() != 1 {
return Err(CmdCopyBufferToImageError::DestinationMultisampled);
}
let destination_raw = {
let inner = destination.inner();
if !inner.usage_transfer_dest() {
return Err(CmdCopyBufferToImageError::DestinationMissingTransferUsage);
}
inner.internal_object()
};
if source.conflicts_image(0, source.size(), &destination, first_layer, num_layers,
mipmap, 1)
{
return Err(CmdCopyBufferToImageError::OverlappingRanges);
} else {
debug_assert!(!destination.conflicts_buffer(first_layer, num_layers, mipmap,
1, &source, 0, source.size()));
}
let aspect_mask = if destination.has_color() {
vk::IMAGE_ASPECT_COLOR_BIT
} else {
unimplemented!() // TODO:
};
Ok(CmdCopyBufferToImage {
buffer: source,
buffer_raw: source_raw,
buffer_offset: src_offset as vk::DeviceSize,
buffer_row_length: 0,
buffer_image_height: 0,
destination: destination,
destination_raw: destination_raw,
destination_layout: vk::IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, // FIXME:
destination_offset: [offset[0] as i32, offset[1] as i32, offset[2] as i32],
destination_aspect_mask: aspect_mask,
destination_mip_level: mipmap,
destination_base_array_layer: first_layer,
destination_layer_count: num_layers,
extent: size,
})
}
}
impl<S, D> CmdCopyBufferToImage<S, D> {
/// Returns the source buffer.
#[inline]
pub fn source(&self) -> &S {
&self.buffer
}
/// Returns the destination image.
#[inline]
pub fn destination(&self) -> &D {
&self.destination
}
}
unsafe impl<S, D> DeviceOwned for CmdCopyBufferToImage<S, D> where S: DeviceOwned {
#[inline]
fn device(&self) -> &Arc<Device> {
self.buffer.device()
}
}
unsafe impl<'a, P, S, D> AddCommand<&'a CmdCopyBufferToImage<S, D>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdCopyBufferToImage<S, D>) -> Result<Self::Out, CommandAddError> {
unsafe {
debug_assert!(command.destination_layout == vk::IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL ||
command.destination_layout == vk::IMAGE_LAYOUT_GENERAL);
let region = vk::BufferImageCopy {
bufferOffset: command.buffer_offset,
bufferRowLength: command.buffer_row_length,
bufferImageHeight: command.buffer_image_height,
imageSubresource: vk::ImageSubresourceLayers {
aspectMask: command.destination_aspect_mask,
mipLevel: command.destination_mip_level,
baseArrayLayer: command.destination_base_array_layer,
layerCount: command.destination_layer_count,
},
imageOffset: vk::Offset3D {
x: command.destination_offset[0],
y: command.destination_offset[1],
z: command.destination_offset[2],
},
imageExtent: vk::Extent3D {
width: command.extent[0],
height: command.extent[1],
depth: command.extent[2],
},
};
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdCopyBufferToImage(cmd, command.buffer_raw, command.destination_raw,
command.destination_layout, 1, &region as *const _);
}
Ok(self)
}
}
/// Error that can happen when creating a `CmdCopyBufferToImage`.
#[derive(Debug, Copy, Clone)]
pub enum CmdCopyBufferToImageError {
/// The source buffer is missing the transfer source usage.
SourceMissingTransferUsage,
/// The destination image is missing the transfer destination usage.
DestinationMissingTransferUsage,
/// The destination image has more than one sample per pixel.
DestinationMultisampled,
/// The dimensions are out of range of the image.
OutOfImageRange,
/// The source and destination are overlapping in memory.
OverlappingRanges,
}
impl error::Error for CmdCopyBufferToImageError {
#[inline]
fn description(&self) -> &str {
match *self {
CmdCopyBufferToImageError::SourceMissingTransferUsage => {
"the source buffer is missing the transfer source usage"
},
CmdCopyBufferToImageError::DestinationMissingTransferUsage => {
"the destination image is missing the transfer destination usage"
},
CmdCopyBufferToImageError::DestinationMultisampled => {
"the destination image has more than one sample per pixel"
},
CmdCopyBufferToImageError::OutOfImageRange => {
"the dimensions are out of range of the image"
},
CmdCopyBufferToImageError::OverlappingRanges => {
"the source and destination are overlapping in memory"
},
}
}
}
impl fmt::Display for CmdCopyBufferToImageError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}

View File

@ -0,0 +1,149 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::sync::Arc;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that copies from one image to another.
#[derive(Debug, Clone)]
pub struct CmdCopyImage<S, D> {
// The source image.
source: S,
// Raw source image.
source_raw: vk::Image,
// Layout of the source image.
source_layout: vk::ImageLayout,
// Offset in the source.
source_offset: [i32; 3],
source_aspect_mask: vk::ImageAspectFlags,
source_mip_level: u32,
source_base_array_layer: u32,
source_layer_count: u32,
// The destination image.
destination: D,
// Raw destination image.
destination_raw: vk::Image,
// Layout of the destination image.
destination_layout: vk::ImageLayout,
// Offset in the destination.
destination_offset: [i32; 3],
destination_aspect_mask: vk::ImageAspectFlags,
destination_mip_level: u32,
destination_base_array_layer: u32,
destination_layer_count: u32,
// Size.
extent: [u32; 3],
}
// TODO: add constructor
impl<S, D> CmdCopyImage<S, D> {
/// Returns the source image.
#[inline]
pub fn source(&self) -> &S {
&self.source
}
/// Returns the destination image.
#[inline]
pub fn destination(&self) -> &D {
&self.destination
}
}
unsafe impl<S, D> DeviceOwned for CmdCopyImage<S, D> where S: DeviceOwned {
#[inline]
fn device(&self) -> &Arc<Device> {
self.source.device()
}
}
unsafe impl<'a, P, S, D> AddCommand<&'a CmdCopyImage<S, D>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdCopyImage<S, D>) -> Result<Self::Out, CommandAddError> {
unsafe {
debug_assert!(command.source_layout == vk::IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL ||
command.source_layout == vk::IMAGE_LAYOUT_GENERAL);
debug_assert!(command.destination_layout == vk::IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL ||
command.destination_layout == vk::IMAGE_LAYOUT_GENERAL);
let region = vk::ImageCopy {
srcSubresource: vk::ImageSubresourceLayers {
aspectMask: command.source_aspect_mask,
mipLevel: command.source_mip_level,
baseArrayLayer: command.source_base_array_layer,
layerCount: command.source_layer_count,
},
srcOffset: vk::Offset3D {
x: command.source_offset[0],
y: command.source_offset[1],
z: command.source_offset[2],
},
dstSubresource: vk::ImageSubresourceLayers {
aspectMask: command.destination_aspect_mask,
mipLevel: command.destination_mip_level,
baseArrayLayer: command.destination_base_array_layer,
layerCount: command.destination_layer_count,
},
dstOffset: vk::Offset3D {
x: command.destination_offset[0],
y: command.destination_offset[1],
z: command.destination_offset[2],
},
extent: vk::Extent3D {
width: command.extent[0],
height: command.extent[1],
depth: command.extent[2],
},
};
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdCopyImage(cmd, command.source_raw, command.source_layout,
command.destination_raw, command.destination_layout,
1, &region as *const _);
}
Ok(self)
}
}
/// Error that can happen when creating a `CmdCopyImage`.
#[derive(Debug, Copy, Clone)]
pub enum CmdCopyImageError {
}
impl error::Error for CmdCopyImageError {
#[inline]
fn description(&self) -> &str {
match *self {
}
}
}
impl fmt::Display for CmdCopyImageError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}

View File

@ -0,0 +1,169 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::sync::Arc;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
/// Command that executes a compute shader.
///
/// > **Note**: Unless you are writing a custom implementation of a command buffer, you are
/// > encouraged to ignore this struct and use a `CmdDispatch` instead.
pub struct CmdDispatchRaw {
dimensions: [u32; 3],
device: Arc<Device>,
}
impl CmdDispatchRaw {
/// Builds a new command that executes a compute shader.
///
/// The command will use the descriptor sets, push constants, and pipeline currently bound.
///
/// This function checks whether the dimensions are supported by the device. It returns an
/// error if they are not.
///
/// # Safety
///
/// While building the command is always safe, care must be taken when it is added to a command
/// buffer. A correct combination of compute pipeline, descriptor set and push constants must
/// have been bound beforehand.
///
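/// # Examples
///
/// A minimal sketch (the numbers are illustrative): dispatching a 64 x 64 x 1
/// grid of work groups, which is well within the minimum limits guaranteed by
/// the Vulkan specification.
///
/// ```
/// use vulkano::command_buffer::commands_raw::CmdDispatchRaw;
///
/// # let device: std::sync::Arc<vulkano::device::Device> = return;
/// let dispatch = unsafe { CmdDispatchRaw::new(device, [64, 64, 1]).unwrap() };
/// ```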
#[inline]
pub unsafe fn new(device: Arc<Device>, dimensions: [u32; 3])
-> Result<CmdDispatchRaw, CmdDispatchRawError>
{
let max_dims = device.physical_device().limits().max_compute_work_group_count();
if dimensions[0] > max_dims[0] || dimensions[1] > max_dims[1] ||
dimensions[2] > max_dims[2]
{
return Err(CmdDispatchRawError::DimensionsTooLarge);
}
Ok(CmdDispatchRaw {
dimensions: dimensions,
device: device,
})
}
/// Builds a new command that executes a compute shader.
///
/// The command will use the descriptor sets, push constants, and pipeline currently bound.
///
/// Unlike `new`, this function doesn't check whether the dimensions are supported by the
/// device. It always succeeds.
///
/// # Safety
///
/// See the documentation of `new`. Unlike `new`, the dimensions are not checked by
/// this function. It is illegal to build a command with dimensions that are not supported by
/// the device.
///
#[inline]
pub unsafe fn unchecked_dimensions(device: Arc<Device>, dimensions: [u32; 3])
-> Result<CmdDispatchRaw, CmdDispatchRawError>
{
Ok(CmdDispatchRaw {
dimensions: dimensions,
device: device,
})
}
}
unsafe impl DeviceOwned for CmdDispatchRaw {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
unsafe impl<'a, P> AddCommand<&'a CmdDispatchRaw> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdDispatchRaw) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdDispatch(cmd, command.dimensions[0], command.dimensions[1],
command.dimensions[2]);
}
Ok(self)
}
}
/// Error that can happen when creating a `CmdDispatch`.
#[derive(Debug, Copy, Clone)]
pub enum CmdDispatchRawError {
/// The dispatch dimensions are larger than the hardware limits.
DimensionsTooLarge,
}
impl error::Error for CmdDispatchRawError {
#[inline]
fn description(&self) -> &str {
match *self {
CmdDispatchRawError::DimensionsTooLarge => {
"the dispatch dimensions are larger than the hardware limits"
},
}
}
}
impl fmt::Display for CmdDispatchRawError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}
#[cfg(test)]
mod tests {
use command_buffer::commands_raw::CmdDispatchRaw;
use command_buffer::commands_raw::CmdDispatchRawError;
#[test]
fn basic_create() {
let (device, _) = gfx_dev_and_queue!();
// The minimum value of `max_compute_work_group_count` required by the spec is 65535 per dimension.
match unsafe { CmdDispatchRaw::new(device, [128, 128, 128]) } {
Ok(_) => (),
_ => panic!()
}
}
#[test]
fn limit_checked() {
let (device, _) = gfx_dev_and_queue!();
let limit = device.physical_device().limits().max_compute_work_group_count();
let x = match limit[0].checked_add(2) {
None => return,
Some(x) => x,
};
match unsafe { CmdDispatchRaw::new(device, [x, limit[1], limit[2]]) } {
Err(CmdDispatchRawError::DimensionsTooLarge) => (),
_ => panic!()
}
}
}

View File

@ -0,0 +1,80 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
/// Command that draws indexed vertices.
///
/// > **Note**: Unless you are writing a custom implementation of a command buffer, you are
/// > encouraged to ignore this struct and use a `CmdDrawIndexed` instead.
pub struct CmdDrawIndexedRaw {
index_count: u32,
instance_count: u32,
first_index: u32,
vertex_offset: i32,
first_instance: u32,
}
impl CmdDrawIndexedRaw {
/// Builds a new command that executes an indexed draw command.
///
/// The command will use the vertex buffers, index buffer, dynamic states, descriptor sets,
/// push constants, and graphics pipeline currently bound.
///
/// This command corresponds to the `vkCmdDrawIndexed` function in Vulkan. It takes the first
/// `index_count` indices in the index buffer starting at `first_index`, and adds the value of
/// `vertex_offset` to each index. `instance_count` and `first_instance` are related to
/// instancing and serve the same purpose as in other drawing commands.
///
/// # Safety
///
/// While building the command is always safe, care must be taken when it is added to a command
/// buffer. A correct combination of graphics pipeline, descriptor set, push constants, vertex
/// buffers, index buffer, and dynamic state must have been bound beforehand.
///
/// There is no limit to the values of the parameters, but they must be in range of the index
/// buffer and vertex buffer.
///
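/// # Examples
///
/// A minimal sketch, assuming a compatible graphics pipeline, vertex buffers
/// and an index buffer with at least 36 indices are bound when the command is
/// executed:
///
/// ```
/// use vulkano::command_buffer::commands_raw::CmdDrawIndexedRaw;
///
/// // 36 indices, 1 instance, first index 0, no vertex offset, first instance 0.
/// let draw = unsafe { CmdDrawIndexedRaw::new(36, 1, 0, 0, 0) };
/// ```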
#[inline]
pub unsafe fn new(index_count: u32, instance_count: u32, first_index: u32,
vertex_offset: i32, first_instance: u32) -> CmdDrawIndexedRaw
{
CmdDrawIndexedRaw {
index_count: index_count,
instance_count: instance_count,
first_index: first_index,
vertex_offset: vertex_offset,
first_instance: first_instance,
}
}
}
unsafe impl<'a, P> AddCommand<&'a CmdDrawIndexedRaw> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdDrawIndexedRaw) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdDrawIndexed(cmd, command.index_count, command.instance_count,
command.first_index, command.vertex_offset, command.first_instance);
}
Ok(self)
}
}

View File

@ -0,0 +1,78 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use buffer::BufferAccess;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
pub struct CmdDrawIndirectRaw<B> {
buffer: B,
draw_count: u32,
stride: u32,
}
impl<B> CmdDrawIndirectRaw<B> where B: BufferAccess {
#[inline]
pub unsafe fn new(buffer: B, draw_count: u32) -> CmdDrawIndirectRaw<B> {
assert_eq!(buffer.inner().offset % 4, 0);
// FIXME: all checks are missing here
CmdDrawIndirectRaw {
buffer: buffer,
draw_count: draw_count,
stride: 16, // TODO:
}
}
}
impl<B> CmdDrawIndirectRaw<B> {
/// Returns the buffer that contains the indirect command.
#[inline]
pub fn buffer(&self) -> &B {
&self.buffer
}
}
unsafe impl<B> DeviceOwned for CmdDrawIndirectRaw<B>
where B: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.buffer.device()
}
}
unsafe impl<'a, B, P> AddCommand<&'a CmdDrawIndirectRaw<B>> for UnsafeCommandBufferBuilder<P>
where B: BufferAccess,
P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdDrawIndirectRaw<B>) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdDrawIndirect(cmd, command.buffer.inner().buffer.internal_object(),
command.buffer.inner().offset as vk::DeviceSize,
command.draw_count, command.stride);
}
Ok(self)
}
}

View File

@ -0,0 +1,75 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
/// Command that draws non-indexed vertices.
///
/// > **Note**: Unless you are writing a custom implementation of a command buffer, you are
/// > encouraged to ignore this struct and use a `CmdDraw` instead.
pub struct CmdDrawRaw {
vertex_count: u32,
instance_count: u32,
first_vertex: u32,
first_instance: u32,
}
impl CmdDrawRaw {
/// Builds a new command that executes a non-indexed draw command.
///
/// The command will use the vertex buffers, dynamic states, descriptor sets, push constants,
/// and graphics pipeline currently bound.
///
/// This command corresponds to the `vkCmdDraw` function in Vulkan.
///
/// # Safety
///
/// While building the command is always safe, care must be taken when it is added to a command
/// buffer. A correct combination of graphics pipeline, descriptor set, push constants, vertex
/// buffers, and dynamic state must have been bound beforehand.
///
/// There is no limit to the values of the parameters, but they must be in range of the vertex
/// buffer.
///
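/// # Examples
///
/// A minimal sketch, assuming a compatible graphics pipeline and a vertex
/// buffer holding at least three vertices are bound when the command is
/// executed:
///
/// ```
/// use vulkano::command_buffer::commands_raw::CmdDrawRaw;
///
/// // 3 vertices, 1 instance, first vertex 0, first instance 0.
/// let draw = unsafe { CmdDrawRaw::new(3, 1, 0, 0) };
/// ```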
#[inline]
pub unsafe fn new(vertex_count: u32, instance_count: u32, first_vertex: u32,
first_instance: u32) -> CmdDrawRaw
{
CmdDrawRaw {
vertex_count: vertex_count,
instance_count: instance_count,
first_vertex: first_vertex,
first_instance: first_instance,
}
}
}
unsafe impl<'a, P> AddCommand<&'a CmdDrawRaw> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdDrawRaw) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdDraw(cmd, command.vertex_count, command.instance_count, command.first_vertex,
command.first_instance);
}
Ok(self)
}
}

View File

@ -0,0 +1,45 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
/// Command that exits the current render pass.
#[derive(Debug, Copy, Clone)]
pub struct CmdEndRenderPass;
impl CmdEndRenderPass {
/// See the documentation of the `end_render_pass` method.
#[inline]
pub fn new() -> CmdEndRenderPass {
CmdEndRenderPass
}
}
unsafe impl<'a, P> AddCommand<&'a CmdEndRenderPass> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdEndRenderPass) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdEndRenderPass(cmd);
}
Ok(self)
}
}

View File

@ -0,0 +1,79 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use smallvec::SmallVec;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that executes a secondary command buffer.
pub struct CmdExecuteCommands<Cb> {
// Raw list of command buffers to execute.
raw_list: SmallVec<[vk::CommandBuffer; 4]>,
// Command buffer to execute.
command_buffer: Cb,
}
impl<Cb> CmdExecuteCommands<Cb> {
/// See the documentation of the `execute_commands` method.
#[inline]
pub fn new(command_buffer: Cb) -> CmdExecuteCommands<Cb> {
unimplemented!() // TODO:
/*let raw_list = {
let mut l = SmallVec::new();
l.push(command_buffer.inner());
l
};
CmdExecuteCommands {
raw_list: raw_list,
command_buffer: command_buffer,
}*/
}
/// Returns the command buffer to be executed.
#[inline]
pub fn command_buffer(&self) -> &Cb {
&self.command_buffer
}
}
unsafe impl<Cb> DeviceOwned for CmdExecuteCommands<Cb>
where Cb: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.command_buffer.device()
}
}
unsafe impl<'a, P, Cb> AddCommand<&'a CmdExecuteCommands<Cb>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdExecuteCommands<Cb>) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdExecuteCommands(cmd, command.raw_list.len() as u32, command.raw_list.as_ptr());
}
Ok(self)
}
}

View File

@ -0,0 +1,158 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::sync::Arc;
use buffer::BufferAccess;
use buffer::BufferInner;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
pub struct CmdFillBuffer<B> {
// The buffer to update.
buffer: B,
// Raw buffer handle.
buffer_handle: vk::Buffer,
// Offset of the update.
offset: vk::DeviceSize,
// Size of the update.
size: vk::DeviceSize,
// The data to write to the buffer.
data: u32,
}
unsafe impl<B> DeviceOwned for CmdFillBuffer<B>
where B: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.buffer.device()
}
}
impl<B> CmdFillBuffer<B>
where B: BufferAccess
{
/// Builds a command that writes data to a buffer.
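///
/// # Examples
///
/// A minimal sketch of a helper (the function name is illustrative) that
/// zeroes a whole buffer; `B` can be any buffer with the transfer destination
/// usage enabled:
///
/// ```
/// use vulkano::buffer::BufferAccess;
/// use vulkano::command_buffer::commands_raw::CmdFillBuffer;
///
/// fn zero_buffer<B>(buffer: B) -> CmdFillBuffer<B>
///     where B: BufferAccess
/// {
///     // Every 32-bit word of the buffer is set to 0.
///     CmdFillBuffer::new(buffer, 0).unwrap()
/// }
/// ```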
// TODO: not safe because of signalling NaNs
pub fn new(buffer: B, data: u32) -> Result<CmdFillBuffer<B>, CmdFillBufferError> {
let size = buffer.size();
let (buffer_handle, offset) = {
let BufferInner { buffer: buffer_inner, offset } = buffer.inner();
if !buffer_inner.usage_transfer_dest() {
return Err(CmdFillBufferError::BufferMissingUsage);
}
if offset % 4 != 0 {
return Err(CmdFillBufferError::WrongAlignment);
}
(buffer_inner.internal_object(), offset)
};
Ok(CmdFillBuffer {
buffer: buffer,
buffer_handle: buffer_handle,
offset: offset as vk::DeviceSize,
size: size as vk::DeviceSize,
data: data,
})
}
}
impl<B> CmdFillBuffer<B> {
/// Returns the buffer that is going to be filled.
#[inline]
pub fn buffer(&self) -> &B {
&self.buffer
}
}
unsafe impl<'a, P, B> AddCommand<&'a CmdFillBuffer<B>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdFillBuffer<B>) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdFillBuffer(cmd, command.buffer_handle, command.offset,
command.size, command.data);
}
Ok(self)
}
}
/// Error that can happen when creating a `CmdFillBuffer`.
#[derive(Debug, Copy, Clone)]
pub enum CmdFillBufferError {
/// The "transfer destination" usage must be enabled on the buffer.
BufferMissingUsage,
/// The offset or size must be aligned to 4 bytes.
WrongAlignment,
}
impl error::Error for CmdFillBufferError {
#[inline]
fn description(&self) -> &str {
match *self {
CmdFillBufferError::BufferMissingUsage => {
"the transfer destination usage must be enabled on the buffer"
},
CmdFillBufferError::WrongAlignment => {
"the offset or size are not aligned to 4 bytes"
},
}
}
}
impl fmt::Display for CmdFillBufferError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}
/* TODO: restore
#[cfg(test)]
mod tests {
use std::time::Duration;
use buffer::BufferUsage;
use buffer::CpuAccessibleBuffer;
use command_buffer::commands_raw::PrimaryCbBuilder;
use command_buffer::commands_raw::CommandsList;
use command_buffer::submit::CommandBuffer;
#[test]
fn basic() {
let (device, queue) = gfx_dev_and_queue!();
let buffer = CpuAccessibleBuffer::from_data(&device, &BufferUsage::transfer_dest(),
Some(queue.family()), 0u32).unwrap();
let _ = PrimaryCbBuilder::new(&device, queue.family())
.fill_buffer(buffer.clone(), 128u32)
.build()
.submit(&queue);
let content = buffer.read(Duration::from_secs(0)).unwrap();
assert_eq!(*content, 128);
}
}*/

View File

@ -0,0 +1,62 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
//! All the commands used in the internals of vulkano.
//!
//! This module only contains the base commands that have direct equivalents in the Vulkan API.
pub use self::begin_render_pass::CmdBeginRenderPass;
pub use self::bind_index_buffer::CmdBindIndexBuffer;
pub use self::bind_descriptor_sets::{CmdBindDescriptorSets, CmdBindDescriptorSetsError};
pub use self::bind_pipeline::{CmdBindPipeline, CmdBindPipelineSys};
pub use self::bind_vertex_buffers::{CmdBindVertexBuffers, CmdBindVertexBuffersHash};
pub use self::blit_image::{CmdBlitImage, CmdBlitImageError};
pub use self::clear_attachments::CmdClearAttachments;
pub use self::copy_buffer::{CmdCopyBuffer, CmdCopyBufferError};
pub use self::copy_buffer_to_image::{CmdCopyBufferToImage, CmdCopyBufferToImageError};
pub use self::copy_image::{CmdCopyImage, CmdCopyImageError};
pub use self::dispatch_raw::{CmdDispatchRaw, CmdDispatchRawError};
pub use self::draw_indexed_raw::CmdDrawIndexedRaw;
pub use self::draw_indirect_raw::CmdDrawIndirectRaw;
pub use self::draw_raw::CmdDrawRaw;
pub use self::end_render_pass::CmdEndRenderPass;
pub use self::execute::CmdExecuteCommands;
pub use self::fill_buffer::{CmdFillBuffer, CmdFillBufferError};
pub use self::next_subpass::CmdNextSubpass;
pub use self::pipeline_barrier::CmdPipelineBarrier;
pub use self::push_constants::{CmdPushConstants, CmdPushConstantsError};
pub use self::resolve_image::{CmdResolveImage, CmdResolveImageError};
pub use self::set_event::CmdSetEvent;
pub use self::set_state::CmdSetState;
pub use self::update_buffer::{CmdUpdateBuffer, CmdUpdateBufferError};
mod begin_render_pass;
mod bind_descriptor_sets;
mod bind_index_buffer;
mod bind_pipeline;
mod bind_vertex_buffers;
mod blit_image;
mod clear_attachments;
mod copy_buffer;
mod copy_buffer_to_image;
mod copy_image;
mod dispatch_raw;
mod draw_indexed_raw;
mod draw_indirect_raw;
mod draw_raw;
mod end_render_pass;
mod execute;
mod fill_buffer;
mod next_subpass;
mod pipeline_barrier;
mod push_constants;
mod resolve_image;
mod set_event;
mod set_state;
mod update_buffer;

View File

@ -0,0 +1,52 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that goes to the next subpass of the current render pass.
#[derive(Debug, Copy, Clone)]
pub struct CmdNextSubpass {
// The parameter for vkCmdNextSubpass.
contents: vk::SubpassContents,
}
impl CmdNextSubpass {
/// See the documentation of the `next_subpass` method.
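///
/// A minimal sketch: advance to the next subpass and record the following
/// commands inline rather than in secondary command buffers.
///
/// ```
/// use vulkano::command_buffer::commands_raw::CmdNextSubpass;
///
/// let next_subpass = CmdNextSubpass::new(false);
/// ```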
#[inline]
pub fn new(secondary: bool) -> CmdNextSubpass {
CmdNextSubpass {
contents: if secondary { vk::SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS }
else { vk::SUBPASS_CONTENTS_INLINE }
}
}
}
unsafe impl<'a, P> AddCommand<&'a CmdNextSubpass> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdNextSubpass) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdNextSubpass(cmd, command.contents);
}
Ok(self)
}
}

View File

@ -0,0 +1,268 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::marker::PhantomData;
use std::ops::Range;
use std::ptr;
use std::u32;
use smallvec::SmallVec;
use buffer::BufferAccess;
use buffer::BufferInner;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use image::ImageAccess;
use image::Layout;
use sync::AccessFlagBits;
use sync::PipelineStages;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that adds a pipeline barrier to a command buffer builder.
///
/// A pipeline barrier is a low-level synchronization command that is often necessary for safety. By
/// default all commands that you add to a command buffer can potentially run simultaneously.
/// Adding a pipeline barrier separates commands before the barrier from commands after the barrier
/// and prevents them from running simultaneously.
///
/// Please take a look at the Vulkan specifications for more information. Pipeline barriers are a
/// complex topic and explaining them in this documentation would be redundant.
///
/// > **Note**: We use a builder-like API here so that users can pass multiple buffers or images of
/// > multiple different types. Doing so with a single function would be very tedious in terms of
/// > API.
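///
/// # Examples
///
/// A minimal sketch (the exact stage and access flags are illustrative): a
/// global memory barrier that makes transfer writes visible to subsequent
/// compute shader reads.
///
/// ```
/// use vulkano::command_buffer::commands_raw::CmdPipelineBarrier;
/// use vulkano::sync::{AccessFlagBits, PipelineStages};
///
/// let mut barrier = CmdPipelineBarrier::new();
/// unsafe {
///     barrier.add_memory_barrier(
///         PipelineStages { transfer: true, .. PipelineStages::none() },
///         AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() },
///         PipelineStages { compute_shader: true, .. PipelineStages::none() },
///         AccessFlagBits { shader_read: true, .. AccessFlagBits::none() },
///         false, // not by-region
///     );
/// }
/// ```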
pub struct CmdPipelineBarrier<'a> {
src_stage_mask: vk::PipelineStageFlags,
dst_stage_mask: vk::PipelineStageFlags,
dependency_flags: vk::DependencyFlags,
memory_barriers: SmallVec<[vk::MemoryBarrier; 2]>,
buffer_barriers: SmallVec<[vk::BufferMemoryBarrier; 8]>,
image_barriers: SmallVec<[vk::ImageMemoryBarrier; 8]>,
marker: PhantomData<&'a ()>,
}
impl<'a> CmdPipelineBarrier<'a> {
/// Creates a new empty pipeline barrier command.
#[inline]
pub fn new() -> CmdPipelineBarrier<'a> {
CmdPipelineBarrier {
src_stage_mask: 0,
dst_stage_mask: 0,
dependency_flags: vk::DEPENDENCY_BY_REGION_BIT,
memory_barriers: SmallVec::new(),
buffer_barriers: SmallVec::new(),
image_barriers: SmallVec::new(),
marker: PhantomData,
}
}
/// Returns true if no barrier or execution dependency has been added yet.
#[inline]
pub fn is_empty(&self) -> bool {
self.src_stage_mask == 0 || self.dst_stage_mask == 0
}
/// Merges another pipeline builder into this one.
#[inline]
pub fn merge(&mut self, other: CmdPipelineBarrier<'a>) {
self.src_stage_mask |= other.src_stage_mask;
self.dst_stage_mask |= other.dst_stage_mask;
self.dependency_flags &= other.dependency_flags;
self.memory_barriers.extend(other.memory_barriers.into_iter());
self.buffer_barriers.extend(other.buffer_barriers.into_iter());
self.image_barriers.extend(other.image_barriers.into_iter());
}
/// Adds an execution dependency. This means that all the stages in `source` of the previous
/// commands must finish before any of the stages in `dest` of the following commands can start.
///
/// # Safety
///
/// - If the pipeline stages include geometry or tessellation stages, then the corresponding
/// features must have been enabled in the device.
/// - There are certain rules regarding the pipeline barriers inside render passes.
///
#[inline]
pub unsafe fn add_execution_dependency(&mut self, source: PipelineStages, dest: PipelineStages,
by_region: bool)
{
if !by_region {
self.dependency_flags = 0;
}
self.src_stage_mask |= source.into();
self.dst_stage_mask |= dest.into();
}
/// Adds a memory barrier. This means that all the memory writes by the given source stages
/// for the given source accesses must be visible by the given dest stages for the given dest
/// accesses.
///
/// Also adds an execution dependency similar to `add_execution_dependency`.
///
/// # Safety
///
/// - Same as `add_execution_dependency`.
///
pub unsafe fn add_memory_barrier(&mut self, source_stage: PipelineStages,
source_access: AccessFlagBits, dest_stage: PipelineStages,
dest_access: AccessFlagBits, by_region: bool)
{
self.add_execution_dependency(source_stage, dest_stage, by_region);
self.memory_barriers.push(vk::MemoryBarrier {
sType: vk::STRUCTURE_TYPE_MEMORY_BARRIER,
pNext: ptr::null(),
srcAccessMask: source_access.into(),
dstAccessMask: dest_access.into(),
});
}
/// Adds a buffer memory barrier. This means that all the memory writes to the given buffer by
/// the given source stages for the given source accesses must be visible by the given dest
/// stages for the given dest accesses.
///
/// Also adds an execution dependency similar to `add_execution_dependency`.
///
/// Also allows transferring buffer ownership between queues.
///
/// # Safety
///
/// - Same as `add_execution_dependency`.
/// - The buffer must be alive for at least as long as the command buffer to which this barrier
/// is added.
/// - Queue ownership transfers must be correct.
///
pub unsafe fn add_buffer_memory_barrier<B: ?Sized>
(&mut self, buffer: &'a B, source_stage: PipelineStages,
source_access: AccessFlagBits, dest_stage: PipelineStages,
dest_access: AccessFlagBits, by_region: bool,
queue_transfer: Option<(u32, u32)>, offset: usize, size: usize)
where B: BufferAccess
{
self.add_execution_dependency(source_stage, dest_stage, by_region);
debug_assert!(size <= buffer.size());
let BufferInner { buffer, offset: org_offset } = buffer.inner();
let offset = offset + org_offset;
let (src_queue, dest_queue) = if let Some((src_queue, dest_queue)) = queue_transfer {
(src_queue, dest_queue)
} else {
(vk::QUEUE_FAMILY_IGNORED, vk::QUEUE_FAMILY_IGNORED)
};
self.buffer_barriers.push(vk::BufferMemoryBarrier {
sType: vk::STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
pNext: ptr::null(),
srcAccessMask: source_access.into(),
dstAccessMask: dest_access.into(),
srcQueueFamilyIndex: src_queue,
dstQueueFamilyIndex: dest_queue,
buffer: buffer.internal_object(),
offset: offset as vk::DeviceSize,
size: size as vk::DeviceSize,
});
}
/// Adds an image memory barrier. This is the equivalent of `add_buffer_memory_barrier` but
/// for images.
///
/// In addition to transferring image ownership between queues, it also allows changing the
/// layout of images.
///
/// Also adds an execution dependency similar to `add_execution_dependency`.
///
/// # Safety
///
/// - Same as `add_execution_dependency`.
/// - The image must be alive for at least as long as the command buffer to which this barrier
/// is added.
/// - Queue ownership transfers must be correct.
/// - Image layouts transfers must be correct.
/// - Access flags must be compatible with the image usage flags passed at image creation.
///
pub unsafe fn add_image_memory_barrier<I: ?Sized>(&mut self, image: &'a I, mipmaps: Range<u32>,
layers: Range<u32>, source_stage: PipelineStages, source_access: AccessFlagBits,
dest_stage: PipelineStages, dest_access: AccessFlagBits, by_region: bool,
queue_transfer: Option<(u32, u32)>, current_layout: Layout, new_layout: Layout)
where I: ImageAccess
{
self.add_execution_dependency(source_stage, dest_stage, by_region);
debug_assert!(mipmaps.start < mipmaps.end);
// TODO: debug_assert!(mipmaps.end <= image.mipmap_levels());
debug_assert!(layers.start < layers.end);
debug_assert!(layers.end <= image.dimensions().array_layers());
let (src_queue, dest_queue) = if let Some((src_queue, dest_queue)) = queue_transfer {
(src_queue, dest_queue)
} else {
(vk::QUEUE_FAMILY_IGNORED, vk::QUEUE_FAMILY_IGNORED)
};
self.image_barriers.push(vk::ImageMemoryBarrier {
sType: vk::STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
pNext: ptr::null(),
srcAccessMask: source_access.into(),
dstAccessMask: dest_access.into(),
oldLayout: current_layout as u32,
newLayout: new_layout as u32,
srcQueueFamilyIndex: src_queue,
dstQueueFamilyIndex: dest_queue,
image: image.inner().internal_object(),
subresourceRange: vk::ImageSubresourceRange {
aspectMask: 1 | 2 | 4 | 8, // FIXME: wrong
baseMipLevel: mipmaps.start,
levelCount: mipmaps.end - mipmaps.start,
baseArrayLayer: layers.start,
layerCount: layers.end - layers.start,
},
});
}
}
unsafe impl<'a, P> AddCommand<&'a CmdPipelineBarrier<'a>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdPipelineBarrier<'a>) -> Result<Self::Out, CommandAddError> {
// If barrier is empty, don't do anything.
if command.src_stage_mask == 0 || command.dst_stage_mask == 0 {
debug_assert!(command.src_stage_mask == 0 && command.dst_stage_mask == 0);
debug_assert!(command.memory_barriers.is_empty());
debug_assert!(command.buffer_barriers.is_empty());
debug_assert!(command.image_barriers.is_empty());
return Ok(self);
}
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdPipelineBarrier(cmd, command.src_stage_mask, command.dst_stage_mask,
command.dependency_flags, command.memory_barriers.len() as u32,
command.memory_barriers.as_ptr(),
command.buffer_barriers.len() as u32,
command.buffer_barriers.as_ptr(),
command.image_barriers.len() as u32,
command.image_barriers.as_ptr());
}
Ok(self)
}
}

View File

@ -0,0 +1,124 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::sync::Arc;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use descriptor::pipeline_layout::PipelineLayoutAbstract;
use descriptor::pipeline_layout::PipelineLayoutPushConstantsCompatible;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
/// Command that sets the current push constants.
pub struct CmdPushConstants<Pc, Pl> {
// The device of the pipeline object, so that we can compare it with the command buffer's
// device.
device: Arc<Device>,
// The push constant data.
push_constants: Pc,
// The pipeline layout.
pipeline_layout: Pl,
}
impl<Pc, Pl> CmdPushConstants<Pc, Pl>
where Pl: PipelineLayoutAbstract
{
/// Builds the command.
///
/// Returns an error if the push constants are not compatible with the pipeline layout.
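///
/// A minimal sketch; `MyPushConstants` is a hypothetical type that must match
/// the push constant block declared in the pipeline layout:
///
/// ```
/// use vulkano::command_buffer::commands_raw::CmdPushConstants;
/// use vulkano::descriptor::pipeline_layout::PipelineLayoutAbstract;
///
/// # #[derive(Clone, Copy)]
/// # struct MyPushConstants { scale: f32 }
/// fn push_scale<Pl>(layout: Pl, scale: f32) -> CmdPushConstants<MyPushConstants, Pl>
///     where Pl: PipelineLayoutAbstract
/// {
///     CmdPushConstants::new(layout, MyPushConstants { scale: scale }).unwrap()
/// }
/// ```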
#[inline]
pub fn new(pipeline_layout: Pl, push_constants: Pc)
-> Result<CmdPushConstants<Pc, Pl>, CmdPushConstantsError>
{
if !PipelineLayoutPushConstantsCompatible::is_compatible(&pipeline_layout, &push_constants) {
return Err(CmdPushConstantsError::IncompatibleData);
}
let device = pipeline_layout.device().clone();
Ok(CmdPushConstants {
device: device,
push_constants: push_constants,
pipeline_layout: pipeline_layout,
})
}
}
unsafe impl<Pc, Pl> DeviceOwned for CmdPushConstants<Pc, Pl> {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
unsafe impl<'a, P, Pc, Pl> AddCommand<&'a CmdPushConstants<Pc, Pl>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool,
Pl: PipelineLayoutAbstract
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdPushConstants<Pc, Pl>) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
let data_raw = &command.push_constants as *const Pc as *const u8;
for num_range in 0 .. command.pipeline_layout.num_push_constants_ranges() {
let range = match command.pipeline_layout.push_constants_range(num_range) {
Some(r) => r,
None => continue
};
debug_assert_eq!(range.offset % 4, 0);
debug_assert_eq!(range.size % 4, 0);
vk.CmdPushConstants(cmd, command.pipeline_layout.sys().internal_object(),
range.stages.into(), range.offset as u32, range.size as u32,
data_raw.offset(range.offset as isize) as *const _);
}
}
Ok(self)
}
}
/// Error that can happen when creating a `CmdPushConstants`.
#[derive(Debug, Copy, Clone)]
pub enum CmdPushConstantsError {
/// The push constants are not compatible with the pipeline layout.
// TODO: inner error
IncompatibleData,
}
impl error::Error for CmdPushConstantsError {
#[inline]
fn description(&self) -> &str {
match *self {
CmdPushConstantsError::IncompatibleData => {
"the push constants are not compatible with the pipeline layout"
},
}
}
}
impl fmt::Display for CmdPushConstantsError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}

View File

@ -0,0 +1,149 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::sync::Arc;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that resolves a multisample image into a non-multisample one.
#[derive(Debug, Clone)]
pub struct CmdResolveImage<S, D> {
// The source image.
source: S,
// Raw source image.
source_raw: vk::Image,
// Layout of the source image.
source_layout: vk::ImageLayout,
// Offset in the source.
source_offset: [i32; 3],
source_aspect_mask: vk::ImageAspectFlags,
source_mip_level: u32,
source_base_array_layer: u32,
source_layer_count: u32,
// The destination image.
destination: D,
// Raw destination image.
destination_raw: vk::Image,
// Layout of the destination image.
destination_layout: vk::ImageLayout,
// Offset in the destination.
destination_offset: [i32; 3],
destination_aspect_mask: vk::ImageAspectFlags,
destination_mip_level: u32,
destination_base_array_layer: u32,
destination_layer_count: u32,
// Size.
extent: [u32; 3],
}
// TODO: add constructor
impl<S, D> CmdResolveImage<S, D> {
/// Returns the source image.
#[inline]
pub fn source(&self) -> &S {
&self.source
}
/// Returns the destination image.
#[inline]
pub fn destination(&self) -> &D {
&self.destination
}
}
unsafe impl<S, D> DeviceOwned for CmdResolveImage<S, D> where S: DeviceOwned {
#[inline]
fn device(&self) -> &Arc<Device> {
self.source.device()
}
}
unsafe impl<'a, P, S, D> AddCommand<&'a CmdResolveImage<S, D>> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdResolveImage<S, D>) -> Result<Self::Out, CommandAddError> {
unsafe {
debug_assert!(command.source_layout == vk::IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL ||
command.source_layout == vk::IMAGE_LAYOUT_GENERAL);
debug_assert!(command.destination_layout == vk::IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL ||
command.destination_layout == vk::IMAGE_LAYOUT_GENERAL);
let region = vk::ImageResolve {
srcSubresource: vk::ImageSubresourceLayers {
aspectMask: command.source_aspect_mask,
mipLevel: command.source_mip_level,
baseArrayLayer: command.source_base_array_layer,
layerCount: command.source_layer_count,
},
srcOffset: vk::Offset3D {
x: command.source_offset[0],
y: command.source_offset[1],
z: command.source_offset[2],
},
dstSubresource: vk::ImageSubresourceLayers {
aspectMask: command.destination_aspect_mask,
mipLevel: command.destination_mip_level,
baseArrayLayer: command.destination_base_array_layer,
layerCount: command.destination_layer_count,
},
dstOffset: vk::Offset3D {
x: command.destination_offset[0],
y: command.destination_offset[1],
z: command.destination_offset[2],
},
extent: vk::Extent3D {
width: command.extent[0],
height: command.extent[1],
depth: command.extent[2],
},
};
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdResolveImage(cmd, command.source_raw, command.source_layout,
command.destination_raw, command.destination_layout,
1, &region as *const _);
}
Ok(self)
}
}
/// Error that can happen when creating a `CmdResolveImage`.
#[derive(Debug, Copy, Clone)]
pub enum CmdResolveImageError {
}
impl error::Error for CmdResolveImageError {
#[inline]
fn description(&self) -> &str {
match *self {
}
}
}
impl fmt::Display for CmdResolveImageError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}

View File

@ -0,0 +1,61 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use sync::Event;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that sets or resets an event.
#[derive(Debug, Clone)]
pub struct CmdSetEvent {
// The event to set or reset.
event: Arc<Event>,
// The pipeline stages after which the event should be set or reset.
stages: vk::PipelineStageFlags,
// If true calls `vkCmdSetEvent`, otherwise `vkCmdResetEvent`.
set: bool,
}
// TODO: add constructor
unsafe impl DeviceOwned for CmdSetEvent {
#[inline]
fn device(&self) -> &Arc<Device> {
self.event.device()
}
}
unsafe impl<'a, P> AddCommand<&'a CmdSetEvent> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdSetEvent) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
if command.set {
vk.CmdSetEvent(cmd, command.event.internal_object(), command.stages);
} else {
vk.CmdResetEvent(cmd, command.event.internal_object(), command.stages);
}
}
Ok(self)
}
}

View File

@ -0,0 +1,99 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use smallvec::SmallVec;
use command_buffer::CommandAddError;
use command_buffer::DynamicState;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
/// Command that sets the state of the pipeline to the given one.
///
/// Only the values that are `Some` are modified. Parameters that are `None` are left untouched.
pub struct CmdSetState {
// The device.
device: Arc<Device>,
// The state to set.
dynamic_state: DynamicState,
}
impl CmdSetState {
/// Builds a command.
///
/// Since this command checks whether the dynamic state is supported by the device, you have
/// to pass the device as well when building the command.
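///
/// A minimal sketch, assuming a `device` is available: only the line width is
/// overridden, the viewports and scissors are left untouched.
///
/// ```
/// use vulkano::command_buffer::DynamicState;
/// use vulkano::command_buffer::commands_raw::CmdSetState;
///
/// # let device: std::sync::Arc<vulkano::device::Device> = return;
/// let state = DynamicState {
///     line_width: Some(2.0),
///     viewports: None,
///     scissors: None,
/// };
/// let cmd = CmdSetState::new(device, state);
/// ```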
// TODO: should check the limits and features of the device
pub fn new(device: Arc<Device>, state: DynamicState) -> CmdSetState {
CmdSetState {
device: device,
dynamic_state: DynamicState {
// This constructor is explicitly laid out so that we don't forget to
// modify the code of this module if we add a new member to `DynamicState`.
line_width: state.line_width,
viewports: state.viewports,
scissors: state.scissors,
},
}
}
#[inline]
pub fn device(&self) -> &Arc<Device> {
&self.device
}
/// Returns the state that is going to be set.
#[inline]
pub fn state(&self) -> &DynamicState {
&self.dynamic_state
}
}
unsafe impl DeviceOwned for CmdSetState {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
unsafe impl<'a, P> AddCommand<&'a CmdSetState> for UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdSetState) -> Result<Self::Out, CommandAddError> {
unsafe {
let vk = self.device().pointers();
let cmd = self.internal_object();
if let Some(line_width) = command.dynamic_state.line_width {
vk.CmdSetLineWidth(cmd, line_width);
}
if let Some(ref viewports) = command.dynamic_state.viewports {
let viewports = viewports.iter().map(|v| v.clone().into()).collect::<SmallVec<[_; 16]>>();
vk.CmdSetViewport(cmd, 0, viewports.len() as u32, viewports.as_ptr());
}
if let Some(ref scissors) = command.dynamic_state.scissors {
let scissors = scissors.iter().map(|v| v.clone().into()).collect::<SmallVec<[_; 16]>>();
vk.CmdSetScissor(cmd, 0, scissors.len() as u32, scissors.as_ptr());
}
}
Ok(self)
}
}

View File

@ -0,0 +1,185 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::sync::Arc;
use std::os::raw::c_void;
use std::ptr;
use buffer::Buffer;
use buffer::BufferAccess;
use buffer::TypedBuffer;
use buffer::TypedBufferAccess;
use buffer::BufferInner;
use command_buffer::CommandAddError;
use command_buffer::cb::AddCommand;
use command_buffer::cb::UnsafeCommandBufferBuilder;
use command_buffer::pool::CommandPool;
use device::Device;
use device::DeviceOwned;
use VulkanObject;
use VulkanPointers;
use vk;
/// Command that sets the content of a buffer to some data.
pub struct CmdUpdateBuffer<B, D> {
// The buffer to update.
buffer: B,
// Raw buffer handle.
buffer_handle: vk::Buffer,
// Offset of the update.
offset: vk::DeviceSize,
// Size of the update.
size: vk::DeviceSize,
// If not null, points to the raw data to write. If null, the data to write is the `data`
// field.
data_ptr: *const c_void,
// The data to write to the buffer or a reference to it.
data: D,
}
impl<B, D> CmdUpdateBuffer<B, D> {
/// Builds a command that writes data to a buffer.
///
/// If the size of the data and the size of the buffer mismatch, then only the intersection
/// of both will be written.
///
/// The size of the modification must not exceed 65536 bytes. The offset and size must be
/// multiples of four.
#[inline]
pub fn new<P>(buffer: P, data: D) -> Result<CmdUpdateBuffer<B, D>, CmdUpdateBufferError>
where P: Buffer<Access = B> + TypedBuffer<Content = D>,
B: BufferAccess,
D: 'static
{
unsafe {
CmdUpdateBuffer::unchecked_type(buffer.access(), data)
}
}
/// Same as `new`, except that the parameter is a `BufferAccess` instead of a `Buffer`.
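///
/// A minimal sketch of a helper (the function name is illustrative) that
/// writes a single `u32` value into a 4-byte buffer:
///
/// ```
/// use vulkano::buffer::{BufferAccess, TypedBufferAccess};
/// use vulkano::command_buffer::commands_raw::CmdUpdateBuffer;
///
/// fn write_counter<B>(buffer: B) -> CmdUpdateBuffer<B, u32>
///     where B: BufferAccess + TypedBufferAccess<Content = u32>
/// {
///     CmdUpdateBuffer::from_access(buffer, 42).unwrap()
/// }
/// ```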
#[inline]
pub fn from_access(buffer: B, data: D) -> Result<CmdUpdateBuffer<B, D>, CmdUpdateBufferError>
where B: BufferAccess + TypedBufferAccess<Content = D>,
D: 'static
{
unsafe {
CmdUpdateBuffer::unchecked_type(buffer, data)
}
}
/// Same as `from_access`, except that type safety is not enforced.
pub unsafe fn unchecked_type(buffer: B, data: D)
-> Result<CmdUpdateBuffer<B, D>, CmdUpdateBufferError>
where B: BufferAccess
{
let size = buffer.size();
let (buffer_handle, offset) = {
let BufferInner { buffer: buffer_inner, offset } = buffer.inner();
if !buffer_inner.usage_transfer_dest() {
return Err(CmdUpdateBufferError::BufferMissingUsage);
}
if offset % 4 != 0 {
return Err(CmdUpdateBufferError::WrongAlignment);
}
(buffer_inner.internal_object(), offset)
};
if size % 4 != 0 {
return Err(CmdUpdateBufferError::WrongAlignment);
}
if size > 65536 {
return Err(CmdUpdateBufferError::DataTooLarge);
}
Ok(CmdUpdateBuffer {
buffer: buffer,
buffer_handle: buffer_handle,
offset: offset as vk::DeviceSize,
size: size as vk::DeviceSize,
data_ptr: ptr::null(),
data: data,
})
}
/// Returns the buffer that is going to be written.
#[inline]
pub fn buffer(&self) -> &B {
&self.buffer
}
}
unsafe impl<B, D> DeviceOwned for CmdUpdateBuffer<B, D>
where B: DeviceOwned
{
#[inline]
fn device(&self) -> &Arc<Device> {
self.buffer.device()
}
}
unsafe impl<'a, P, B, D> AddCommand<&'a CmdUpdateBuffer<B, D>> for UnsafeCommandBufferBuilder<P>
where B: BufferAccess,
P: CommandPool,
{
type Out = UnsafeCommandBufferBuilder<P>;
#[inline]
fn add(self, command: &'a CmdUpdateBuffer<B, D>) -> Result<Self::Out, CommandAddError> {
unsafe {
let data = if command.data_ptr.is_null() {
&command.data as *const D as *const _
} else {
command.data_ptr as *const _
};
let vk = self.device().pointers();
let cmd = self.internal_object();
vk.CmdUpdateBuffer(cmd, command.buffer_handle, command.offset, command.size, data);
}
Ok(self)
}
}
/// Error that can happen when creating a `CmdUpdateBuffer`.
#[derive(Debug, Copy, Clone)]
pub enum CmdUpdateBufferError {
/// The "transfer destination" usage must be enabled on the buffer.
BufferMissingUsage,
/// The offset or size must be aligned to 4 bytes.
WrongAlignment,
/// The data must not be larger than 65536 bytes.
DataTooLarge,
}
impl error::Error for CmdUpdateBufferError {
#[inline]
fn description(&self) -> &str {
match *self {
CmdUpdateBufferError::BufferMissingUsage => {
"the transfer destination usage must be enabled on the buffer"
},
CmdUpdateBufferError::WrongAlignment => {
"the offset or size are not aligned to 4 bytes"
},
CmdUpdateBufferError::DataTooLarge => "data is too large",
}
}
}
impl fmt::Display for CmdUpdateBufferError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}

File diff suppressed because it is too large

View File

@ -11,15 +11,7 @@
//!
//! With Vulkan, before the GPU can do anything you must create a `CommandBuffer`. A command buffer
//! is a list of commands that will be executed by the GPU. Once a command buffer is created, you can
//! execute it. A command buffer must be created even for the most simple tasks.
//!
//! # Pools
//!
//! Command buffers are allocated from pools. You must first create a command buffer pool which
//! you will create command buffers from.
//!
//! A pool is linked to a queue family. Command buffers that are created from a certain pool can
//! only be submitted to queues that belong to that specific family.
//! execute it. A command buffer must always be created even for the most simple tasks.
//!
//! # Primary and secondary command buffers.
//!
@ -27,43 +19,82 @@
//!
//! - **Primary command buffers**. They can contain any command. They are the only type of command
//! buffer that can be submitted to a queue.
//! - **Secondary "graphics" command buffers**. They contain draw and clear commands. They can be
//! called from a primary command buffer once a framebuffer has been selected.
//! - **Secondary "compute" command buffers**. They can contain non-draw and non-clear commands
//! (eg. copying between buffers) and can be called from a primary command buffer outside of a
//! render pass.
//! - **Secondary "graphics" command buffers**. They can only contain draw and clear commands.
//! They can only be called from a primary command buffer when inside a render pass.
//! - **Secondary "compute" command buffers**. They can only contain non-render-pass-related
//! commands (i.e. everything but drawing, clearing, etc.) and cannot enter a render pass. They
//! can only be called from a primary command buffer outside of a render pass.
//!
//! Note that secondary command buffers cannot call other command buffers.
//! Using secondary command buffers leads to slightly lower performance on the GPU, but they have
//! two advantages on the CPU side:
//!
//! - Building a command buffer is a single-threaded operation, but by using secondary command
//! buffers you can build multiple secondary command buffers in multiple threads simultaneously.
//! - Secondary command buffers can be kept alive between frames. When you always repeat the same
//! operations, it might be a good idea to build a secondary command buffer once at
//! initialization and then reuse it afterwards.
//!
//! # The `AutoCommandBufferBuilder`
//!
//! The most basic (and recommended) way to create a command buffer is to create a
//! [`AutoCommandBufferBuilder`](struct.AutoCommandBufferBuilder.html). Then use the
//! [`CommandBufferBuilder` trait](trait.CommandBufferBuilder.html) to add commands to it.
//! When you are done adding commands, use
//! [the `CommandBufferBuild` trait](trait.CommandBufferBuild.html) to obtain a
//! `AutoCommandBuffer`.
//!
//! Once built, use [the `CommandBuffer` trait](trait.CommandBuffer.html) to submit the command
//! buffer. Submitting a command buffer returns an object that implements the `GpuFuture` trait and
//! that represents the moment when the execution will end on the GPU.
//!
//! ```
//! use vulkano::command_buffer::AutoCommandBufferBuilder;
//! use vulkano::command_buffer::CommandBufferBuilder;
//! use vulkano::command_buffer::CommandBuffer;
//!
//! # let device: std::sync::Arc<vulkano::device::Device> = return;
//! # let queue: std::sync::Arc<vulkano::device::Queue> = return;
//! let cb = AutoCommandBufferBuilder::new(device.clone(), queue.family()).unwrap()
//! // TODO: add an actual command to this example
//! .build().unwrap();
//!
//! let _future = cb.execute(queue.clone());
//! ```
//!
//! # Internal architecture of vulkano
//!
//! The `commands_raw` and `commands_extra` modules contain structs that correspond to various
//! commands that can be added to command buffer builders. A command can be added to a command
//! buffer builder by using the `AddCommand<C>` trait, where `C` is the command struct.
//!
//! The `AutoCommandBufferBuilder` internally uses a `UnsafeCommandBufferBuilder` wrapped around
//! multiple layers. See the `cb` module for more information.
//!
//! Command pools are automatically handled by default, but vulkano also allows you to provide
//! and use alternative command pool implementations. See the `pool` module for more
//! information.
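//!
//! A minimal sketch of the `AddCommand` pattern (illustrative; any raw command works the same
//! way): appending a `CmdEndRenderPass` to an `UnsafeCommandBufferBuilder`.
//!
//! ```
//! use vulkano::command_buffer::cb::{AddCommand, UnsafeCommandBufferBuilder};
//! use vulkano::command_buffer::commands_raw::CmdEndRenderPass;
//! use vulkano::command_buffer::pool::CommandPool;
//!
//! fn end_render_pass<P>(builder: UnsafeCommandBufferBuilder<P>)
//!                       -> UnsafeCommandBufferBuilder<P>
//!     where P: CommandPool
//! {
//!     // `add` consumes the builder and returns it on success.
//!     builder.add(&CmdEndRenderPass::new()).unwrap()
//! }
//! ```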
// Implementation note.
// There are various restrictions about which command can be used at which moment. Therefore the
// API has several different command buffer wrappers, but they all use the same internal
// struct. The restrictions are enforced only in the public types.
pub use self::inner::Submission;
pub use self::outer::submit;
pub use self::outer::PrimaryCommandBufferBuilder;
pub use self::outer::PrimaryCommandBufferBuilderInlineDraw;
pub use self::outer::PrimaryCommandBufferBuilderSecondaryDraw;
pub use self::outer::PrimaryCommandBuffer;
pub use self::outer::SecondaryGraphicsCommandBufferBuilder;
pub use self::outer::SecondaryGraphicsCommandBuffer;
pub use self::outer::SecondaryComputeCommandBufferBuilder;
pub use self::outer::SecondaryComputeCommandBuffer;
pub use self::submit::CommandBuffer;
pub use self::submit::Submit;
pub use self::auto::AutoCommandBufferBuilder;
pub use self::builder::CommandAddError;
pub use self::builder::CommandBufferBuilder;
pub use self::builder::CommandBufferBuilderError;
pub use self::traits::CommandBuffer;
pub use self::traits::CommandBufferBuild;
pub use self::traits::CommandBufferExecError;
pub use self::traits::CommandBufferExecFuture;
use pipeline::viewport::Viewport;
use pipeline::viewport::Scissor;
mod inner;
mod outer;
pub mod cb;
pub mod commands_extra;
pub mod commands_raw;
pub mod pool;
pub mod std;
pub mod submit;
pub mod sys;
mod auto;
mod builder;
mod traits;
#[repr(C)]
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
@ -84,6 +115,14 @@ pub struct DrawIndexedIndirectCommand {
pub first_instance: u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub struct DispatchIndirectCommand {
pub x: u32,
pub y: u32,
pub z: u32,
}
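// As a reference point, dispatching `n` work items with a compute shader whose local
// workgroup size is 64 along X could fill this structure as follows (a sketch; `n` is a
// placeholder value):
//
// let n: u32 = 10_000;
// let cmd = DispatchIndirectCommand {
//     x: (n + 63) / 64, // ceil(n / 64) workgroups along X
//     y: 1,
//     z: 1,
// };
//
// Being `#[repr(C)]`, the structure can be written as-is into a buffer that is later
// used as the source of an indirect dispatch.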
/// The dynamic state to use for a draw command.
#[derive(Debug, Clone)]
pub struct DynamicState {


@ -1,815 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::ops::Range;
use std::sync::Arc;
use smallvec::SmallVec;
use buffer::Buffer;
use buffer::BufferSlice;
use buffer::TypedBuffer;
use command_buffer::DrawIndirectCommand;
use command_buffer::DynamicState;
use command_buffer::inner::InnerCommandBufferBuilder;
use command_buffer::inner::InnerCommandBuffer;
use command_buffer::inner::Submission;
use command_buffer::inner::submit as inner_submit;
use command_buffer::pool::CommandPool;
use command_buffer::pool::StandardCommandPool;
use descriptor::descriptor_set::DescriptorSetsCollection;
use descriptor::PipelineLayout;
use device::Device;
use device::Queue;
use framebuffer::Framebuffer;
use framebuffer::UnsafeRenderPass;
use framebuffer::RenderPassCompatible;
use framebuffer::RenderPass;
use framebuffer::RenderPassDesc;
use framebuffer::RenderPassClearValues;
use framebuffer::Subpass;
use image::traits::Image;
use image::traits::ImageClearValue;
use image::traits::ImageContent;
use instance::QueueFamily;
use pipeline::ComputePipeline;
use pipeline::GraphicsPipeline;
use pipeline::input_assembly::Index;
use pipeline::vertex::Source as VertexSource;
use OomError;
/// A prototype of a primary command buffer.
///
/// # Usage
///
/// ```ignore // TODO: change that
/// let commands_buffer =
/// PrimaryCommandBufferBuilder::new(&device)
/// .copy_memory(..., ...)
/// .draw(...)
/// .build();
///
/// ```
///
pub struct PrimaryCommandBufferBuilder<P = Arc<StandardCommandPool>> where P: CommandPool {
inner: InnerCommandBufferBuilder<P>,
}
impl PrimaryCommandBufferBuilder<Arc<StandardCommandPool>> {
/// Builds a new primary command buffer and start recording commands in it.
///
/// # Panic
///
/// - Panics if the device or host ran out of memory.
/// - Panics if the device and queue family do not belong to the same physical device.
///
#[inline]
pub fn new(device: &Arc<Device>, queue_family: QueueFamily)
-> PrimaryCommandBufferBuilder<Arc<StandardCommandPool>>
{
PrimaryCommandBufferBuilder::raw(Device::standard_command_pool(device, queue_family)).unwrap()
}
}
impl<P> PrimaryCommandBufferBuilder<P> where P: CommandPool {
/// See the docs of new().
#[inline]
pub fn raw(pool: P) -> Result<PrimaryCommandBufferBuilder<P>, OomError> {
let inner = try!(InnerCommandBufferBuilder::new::<UnsafeRenderPass>(pool, false, None, None));
Ok(PrimaryCommandBufferBuilder { inner: inner })
}
/// Writes data to a buffer.
///
/// The data is stored inside the command buffer and written to the given buffer slice.
/// This function is intended to be used for small amounts of data (only 64kB is allowed). If
/// you want to transfer large amounts of data, use copies between buffers.
///
/// # Panic
///
/// - Panics if the size of `data` is not the same as the size of the buffer slice.
/// - Panics if the size of `data` is larger than 65536 bytes.
/// - Panics if the offset or size is not a multiple of 4.
/// - Panics if the buffer wasn't created with the right usage.
/// - Panics if the queue family doesn't support transfer operations.
///
#[inline]
pub fn update_buffer<'a, B, T, Bb>(self, buffer: B, data: &T) -> PrimaryCommandBufferBuilder<P>
where B: Into<BufferSlice<'a, T, Bb>>, Bb: Buffer + 'static, T: Clone + 'static + Send + Sync
{
unsafe {
PrimaryCommandBufferBuilder {
inner: self.inner.update_buffer(buffer, data)
}
}
}
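// A sketch of typical usage for small pieces of data such as uniforms (`uniform_buffer`
// and `MyUniforms` are hypothetical names):
//
// let builder = builder.update_buffer(&uniform_buffer, &MyUniforms { time: 0.0 });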
/// Fills a buffer with data.
///
/// The data is repeated until it fills the range from `offset` to `offset + size`.
/// Since the data is a u32, the offset and the size must be multiples of 4.
///
/// # Panic
///
/// - Panics if `offset + size` is larger than the size of the buffer.
/// - Panics if the offset or size is not a multiple of 4.
/// - Panics if the buffer wasn't created with the right usage.
/// - Panics if the queue family doesn't support transfer operations.
///
/// # Safety
///
/// - Type safety is not enforced by the API.
///
pub unsafe fn fill_buffer<B>(self, buffer: &Arc<B>, offset: usize,
size: usize, data: u32) -> PrimaryCommandBufferBuilder<P>
where B: Buffer + 'static
{
PrimaryCommandBufferBuilder {
inner: self.inner.fill_buffer(buffer, offset, size, data)
}
}
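// For instance, zeroing the first 256 bytes of `buffer` writes the `u32` value 0 into
// 64 consecutive words (256 / 4); both the offset (0) and the size (256) are multiples
// of 4, as required (a sketch; `buffer` is a placeholder):
//
// let builder = unsafe { builder.fill_buffer(&buffer, 0, 256, 0) };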
pub fn copy_buffer<T: ?Sized + 'static, Bs, Bd>(self, source: &Arc<Bs>, destination: &Arc<Bd>)
-> PrimaryCommandBufferBuilder<P>
where Bs: TypedBuffer<Content = T> + 'static, Bd: TypedBuffer<Content = T> + 'static
{
unsafe {
PrimaryCommandBufferBuilder {
inner: self.inner.copy_buffer(source, destination),
}
}
}
pub fn copy_buffer_to_color_image<'a, Pi, S, Img, Sb>(self, source: S, destination: &Arc<Img>, mip_level: u32, array_layers_range: Range<u32>,
offset: [u32; 3], extent: [u32; 3])
-> PrimaryCommandBufferBuilder<P>
where S: Into<BufferSlice<'a, [Pi], Sb>>, Sb: Buffer + 'static,
Img: ImageContent<Pi> + 'static
{
unsafe {
PrimaryCommandBufferBuilder {
inner: self.inner.copy_buffer_to_color_image(source, destination, mip_level,
array_layers_range, offset, extent),
}
}
}
pub fn copy_color_image_to_buffer<'a, Pi, S, Img, Sb>(self, dest: S, destination: &Arc<Img>, mip_level: u32, array_layers_range: Range<u32>,
offset: [u32; 3], extent: [u32; 3])
-> PrimaryCommandBufferBuilder<P>
where S: Into<BufferSlice<'a, [Pi], Sb>>, Sb: Buffer + 'static,
Img: ImageContent<Pi> + 'static
{
unsafe {
PrimaryCommandBufferBuilder {
inner: self.inner.copy_color_image_to_buffer(dest, destination, mip_level,
array_layers_range, offset, extent),
}
}
}
pub fn blit<Si, Di>(self, source: &Arc<Si>, source_mip_level: u32,
source_array_layers: Range<u32>, src_coords: [Range<i32>; 3],
destination: &Arc<Di>, dest_mip_level: u32,
dest_array_layers: Range<u32>, dest_coords: [Range<i32>; 3])
-> PrimaryCommandBufferBuilder<P>
where Si: Image + 'static, Di: Image + 'static
{
unsafe {
PrimaryCommandBufferBuilder {
inner: self.inner.blit(source, source_mip_level, source_array_layers, src_coords,
destination, dest_mip_level, dest_array_layers, dest_coords),
}
}
}
/// Clears a color image with a specific value.
///
/// Note that compressed formats are not supported.
pub fn clear_color_image<'a, I, V>(self, image: &Arc<I>, color: V)
-> PrimaryCommandBufferBuilder<P>
where I: ImageClearValue<V> + 'static
{
unsafe {
PrimaryCommandBufferBuilder {
inner: self.inner.clear_color_image(image, color),
}
}
}
/// Executes secondary compute command buffers within this primary command buffer.
#[inline]
pub fn execute_commands<S>(self, cb: &Arc<SecondaryComputeCommandBuffer<S>>)
-> PrimaryCommandBufferBuilder<P>
where S: CommandPool + 'static,
S::Finished: Send + Sync + 'static,
{
unsafe {
PrimaryCommandBufferBuilder {
inner: self.inner.execute_commands(cb.clone() as Arc<_>, &cb.inner)
}
}
}
/// Executes a compute pipeline.
#[inline]
pub fn dispatch<Pl, L, Pc>(self, pipeline: &Arc<ComputePipeline<Pl>>, sets: L,
dimensions: [u32; 3], push_constants: &Pc) -> PrimaryCommandBufferBuilder<P>
where L: DescriptorSetsCollection + Send + Sync,
Pl: 'static + PipelineLayout + Send + Sync,
Pc: 'static + Clone + Send + Sync
{
unsafe {
PrimaryCommandBufferBuilder {
inner: self.inner.dispatch(pipeline, sets, dimensions, push_constants)
}
}
}
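// A sketch of a dispatch call (all names are placeholders): this launches the compute
// pipeline over a 64x1x1 grid of workgroups with the given sets and push constants:
//
// let builder = builder.dispatch(&compute_pipeline, sets, [64, 1, 1], &push_constants);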
/// Start drawing on a framebuffer.
///
/// This function returns an object that can be used to submit draw commands on the first
/// subpass of the renderpass.
///
/// # Panic
///
/// - Panics if the framebuffer is not compatible with the renderpass.
///
// FIXME: rest of the parameters (render area and clear attachment values)
#[inline]
pub fn draw_inline<R, F, C>(self, renderpass: &Arc<R>,
framebuffer: &Arc<Framebuffer<F>>, clear_values: C)
-> PrimaryCommandBufferBuilderInlineDraw<P>
where F: RenderPass + RenderPassDesc + RenderPassClearValues<C> + 'static,
R: RenderPass + RenderPassDesc + 'static
{
// FIXME: check for compatibility
// TODO: allocate on stack instead (https://github.com/rust-lang/rfcs/issues/618)
let clear_values = framebuffer.render_pass().convert_clear_values(clear_values)
.collect::<SmallVec<[_; 16]>>();
unsafe {
let inner = self.inner.begin_renderpass(renderpass, framebuffer, false, &clear_values);
PrimaryCommandBufferBuilderInlineDraw {
inner: inner,
current_subpass: 0,
num_subpasses: framebuffer.render_pass().num_subpasses(),
}
}
}
/// Start drawing on a framebuffer.
///
/// This function returns an object that can be used to submit secondary graphics command
/// buffers that will operate on the first subpass of the renderpass.
///
/// # Panic
///
/// - Panics if the framebuffer is not compatible with the renderpass.
///
// FIXME: rest of the parameters (render area and clear attachment values)
#[inline]
pub fn draw_secondary<R, F, C>(self, renderpass: &Arc<R>,
framebuffer: &Arc<Framebuffer<F>>, clear_values: C)
-> PrimaryCommandBufferBuilderSecondaryDraw<P>
where F: RenderPass + RenderPassDesc + RenderPassClearValues<C> + 'static,
R: RenderPass + RenderPassDesc + 'static
{
// FIXME: check for compatibility
let clear_values = framebuffer.render_pass().convert_clear_values(clear_values)
.collect::<SmallVec<[_; 16]>>();
unsafe {
let inner = self.inner.begin_renderpass(renderpass, framebuffer, true, &clear_values);
PrimaryCommandBufferBuilderSecondaryDraw {
inner: inner,
current_subpass: 0,
num_subpasses: framebuffer.render_pass().num_subpasses(),
}
}
}
/// See the docs of build().
#[inline]
pub fn build_raw(self) -> Result<PrimaryCommandBuffer<P>, OomError> {
let inner = try!(self.inner.build());
Ok(PrimaryCommandBuffer { inner: inner })
}
/// Finish recording commands and build the command buffer.
///
/// # Panic
///
/// - Panics if the device or host ran out of memory.
///
#[inline]
pub fn build(self) -> Arc<PrimaryCommandBuffer<P>> {
Arc::new(self.build_raw().unwrap())
}
}
/// Object that you obtain when calling `draw_inline` or `next_subpass_inline`.
pub struct PrimaryCommandBufferBuilderInlineDraw<P = Arc<StandardCommandPool>>
where P: CommandPool
{
inner: InnerCommandBufferBuilder<P>,
current_subpass: u32,
num_subpasses: u32,
}
impl<P> PrimaryCommandBufferBuilderInlineDraw<P> where P: CommandPool {
/// Calls `vkCmdDraw`.
// FIXME: push constants
pub fn draw<V, L, Pv, Pl, Rp, Pc>(self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Rp>>,
vertices: V, dynamic: &DynamicState, sets: L, push_constants: &Pc)
-> PrimaryCommandBufferBuilderInlineDraw<P>
where Pv: VertexSource<V> + 'static, Pl: PipelineLayout + 'static + Send + Sync, Rp: 'static + Send + Sync,
L: DescriptorSetsCollection + Send + Sync, Pc: 'static + Clone + Send + Sync
{
// FIXME: check subpass
unsafe {
PrimaryCommandBufferBuilderInlineDraw {
inner: self.inner.draw(pipeline, vertices, dynamic, sets, push_constants),
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass,
}
}
}
/// Calls `vkCmdDrawIndexed`.
pub fn draw_indexed<'a, V, L, Pv, Pl, Rp, I, Ib, Ibb, Pc>(self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Rp>>,
vertices: V, indices: Ib, dynamic: &DynamicState,
sets: L, push_constants: &Pc) -> PrimaryCommandBufferBuilderInlineDraw<P>
where Pv: 'static + VertexSource<V> + Send + Sync, Pl: 'static + PipelineLayout + Send + Sync, Rp: 'static + Send + Sync,
Ib: Into<BufferSlice<'a, [I], Ibb>>, I: 'static + Index, Ibb: Buffer + 'static + Send + Sync,
L: DescriptorSetsCollection + Send + Sync, Pc: 'static + Clone + Send + Sync
{
// FIXME: check subpass
unsafe {
PrimaryCommandBufferBuilderInlineDraw {
inner: self.inner.draw_indexed(pipeline, vertices, indices, dynamic, sets, push_constants),
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass,
}
}
}
/// Switches to the next subpass of the current renderpass.
///
/// This function is similar to `draw_inline` on the builder.
///
/// # Panic
///
/// - Panics if no more subpasses remain.
///
#[inline]
pub fn next_subpass_inline(self) -> PrimaryCommandBufferBuilderInlineDraw<P> {
assert!(self.current_subpass + 1 < self.num_subpasses);
unsafe {
let inner = self.inner.next_subpass(false);
PrimaryCommandBufferBuilderInlineDraw {
inner: inner,
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass + 1,
}
}
}
/// Switches to the next subpass of the current renderpass.
///
/// This function is similar to `draw_secondary` on the builder.
///
/// # Panic
///
/// - Panics if no more subpasses remain.
///
#[inline]
pub fn next_subpass_secondary(self) -> PrimaryCommandBufferBuilderSecondaryDraw<P> {
assert!(self.current_subpass + 1 < self.num_subpasses);
unsafe {
let inner = self.inner.next_subpass(true);
PrimaryCommandBufferBuilderSecondaryDraw {
inner: inner,
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass + 1,
}
}
}
/// Finish drawing this renderpass and get back the builder.
///
/// # Panic
///
/// - Panics if not at the last subpass.
///
#[inline]
pub fn draw_end(self) -> PrimaryCommandBufferBuilder<P> {
assert!(self.current_subpass + 1 == self.num_subpasses);
unsafe {
let inner = self.inner.end_renderpass();
PrimaryCommandBufferBuilder {
inner: inner,
}
}
}
}
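// Putting the methods above together, recording a single-subpass render pass with this
// builder API looks roughly like the following (a sketch; `renderpass`, `framebuffer`,
// `clear_values`, `pipeline`, `vertices`, `dynamic`, `sets` and `push_constants` are all
// placeholders):
//
// let cb = PrimaryCommandBufferBuilder::new(&device, queue.family())
//     .draw_inline(&renderpass, &framebuffer, clear_values)
//     .draw(&pipeline, vertices, &dynamic, sets, &push_constants)
//     .draw_end()
//     .build();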
/// Object that you obtain when calling `draw_secondary` or `next_subpass_secondary`.
pub struct PrimaryCommandBufferBuilderSecondaryDraw<P = Arc<StandardCommandPool>>
where P: CommandPool
{
inner: InnerCommandBufferBuilder<P>,
current_subpass: u32,
num_subpasses: u32,
}
impl<P> PrimaryCommandBufferBuilderSecondaryDraw<P> where P: CommandPool {
/// Switches to the next subpass of the current renderpass.
///
/// This function is similar to `draw_inline` on the builder.
///
/// # Panic
///
/// - Panics if no more subpasses remain.
///
#[inline]
pub fn next_subpass_inline(self) -> PrimaryCommandBufferBuilderInlineDraw<P> {
assert!(self.current_subpass + 1 < self.num_subpasses);
unsafe {
let inner = self.inner.next_subpass(false);
PrimaryCommandBufferBuilderInlineDraw {
inner: inner,
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass + 1,
}
}
}
/// Switches to the next subpass of the current renderpass.
///
/// This function is similar to `draw_secondary` on the builder.
///
/// # Panic
///
/// - Panics if no more subpasses remain.
///
#[inline]
pub fn next_subpass_secondary(self) -> PrimaryCommandBufferBuilderSecondaryDraw<P> {
assert!(self.current_subpass + 1 < self.num_subpasses);
unsafe {
let inner = self.inner.next_subpass(true);
PrimaryCommandBufferBuilderSecondaryDraw {
inner: inner,
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass + 1,
}
}
}
/// Executes secondary graphics command buffers within this primary command buffer.
///
/// # Panic
///
/// - Panics if the secondary command buffers wasn't created with a compatible
/// renderpass or is using the wrong subpass.
#[inline]
pub fn execute_commands<R, Ps>(mut self, cb: &Arc<SecondaryGraphicsCommandBuffer<R, Ps>>)
-> PrimaryCommandBufferBuilderSecondaryDraw<P>
where R: 'static + Send + Sync,
Ps: CommandPool + 'static,
Ps::Finished: Send + Sync + 'static,
{
// FIXME: check renderpass, subpass and framebuffer
unsafe {
self.inner = self.inner.execute_commands(cb.clone() as Arc<_>, &cb.inner);
self
}
}
/// Finish drawing this renderpass and get back the builder.
///
/// # Panic
///
/// - Panics if not at the last subpass.
///
#[inline]
pub fn draw_end(self) -> PrimaryCommandBufferBuilder<P> {
assert!(self.current_subpass + 1 == self.num_subpasses);
unsafe {
let inner = self.inner.end_renderpass();
PrimaryCommandBufferBuilder {
inner: inner,
}
}
}
}
/// Represents a collection of commands to be executed by the GPU.
///
/// A primary command buffer can contain any command.
pub struct PrimaryCommandBuffer<P = Arc<StandardCommandPool>> where P: CommandPool {
inner: InnerCommandBuffer<P>,
}
/// Submits the command buffer to a queue so that it is executed.
///
/// Fences and semaphores are automatically handled.
///
/// # Panic
///
/// - Panics if the queue doesn't belong to the device this command buffer was created with.
/// - Panics if the queue doesn't belong to the family the pool was created with.
///
#[inline]
pub fn submit<P>(cmd: &Arc<PrimaryCommandBuffer<P>>, queue: &Arc<Queue>)
-> Result<Arc<Submission>, OomError>
where P: CommandPool + 'static,
P::Finished: Send + Sync + 'static
{ // TODO: wrong error type
inner_submit(&cmd.inner, cmd.clone() as Arc<_>, queue)
}
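// A sketch of a typical call (`cb` and `queue` are placeholders); the returned
// `Submission` keeps the command buffer and its resources alive until the GPU has
// finished executing:
//
// let _submission = submit(&cb, &queue).unwrap();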
/// A prototype of a secondary graphics command buffer.
pub struct SecondaryGraphicsCommandBufferBuilder<R, P = Arc<StandardCommandPool>>
where P: CommandPool
{
inner: InnerCommandBufferBuilder<P>,
render_pass: Arc<R>,
render_pass_subpass: u32,
framebuffer: Option<Arc<Framebuffer<R>>>,
}
impl<R> SecondaryGraphicsCommandBufferBuilder<R, Arc<StandardCommandPool>>
where R: RenderPass + RenderPassDesc + 'static
{
/// Builds a new secondary command buffer and start recording commands in it.
///
/// The `framebuffer` parameter is optional and can be used as an optimisation.
///
/// # Panic
///
/// - Panics if the device or host ran out of memory.
/// - Panics if the device and queue family do not belong to the same physical device.
///
#[inline]
pub fn new(device: &Arc<Device>, queue_family: QueueFamily, subpass: Subpass<R>,
framebuffer: Option<&Arc<Framebuffer<R>>>)
-> SecondaryGraphicsCommandBufferBuilder<R, Arc<StandardCommandPool>>
where R: 'static + Send + Sync
{
SecondaryGraphicsCommandBufferBuilder::raw(Device::standard_command_pool(device,
queue_family), subpass, framebuffer).unwrap()
}
}
impl<R, P> SecondaryGraphicsCommandBufferBuilder<R, P>
where R: RenderPass + RenderPassDesc + 'static,
P: CommandPool
{
/// See the docs of new().
#[inline]
pub fn raw(pool: P, subpass: Subpass<R>, framebuffer: Option<&Arc<Framebuffer<R>>>)
-> Result<SecondaryGraphicsCommandBufferBuilder<R, P>, OomError>
where R: 'static + Send + Sync
{
let inner = try!(InnerCommandBufferBuilder::new(pool, true, Some(subpass), framebuffer.clone()));
Ok(SecondaryGraphicsCommandBufferBuilder {
inner: inner,
render_pass: subpass.render_pass().clone(),
render_pass_subpass: subpass.index(),
framebuffer: framebuffer.map(|fb| fb.clone()),
})
}
/// Calls `vkCmdDraw`.
// FIXME: push constants
pub fn draw<V, L, Pv, Pl, Rp, Pc>(self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Rp>>,
vertices: V, dynamic: &DynamicState, sets: L, push_constants: &Pc)
-> SecondaryGraphicsCommandBufferBuilder<R, P>
where Pv: VertexSource<V> + 'static, Pl: PipelineLayout + 'static + Send + Sync,
Rp: RenderPass + RenderPassDesc + 'static + Send + Sync, L: DescriptorSetsCollection + Send + Sync,
R: RenderPassCompatible<Rp>, Pc: 'static + Clone + Send + Sync
{
assert!(self.render_pass.is_compatible_with(pipeline.subpass().render_pass()));
assert_eq!(self.render_pass_subpass, pipeline.subpass().index());
unsafe {
SecondaryGraphicsCommandBufferBuilder {
inner: self.inner.draw(pipeline, vertices, dynamic, sets, push_constants),
render_pass: self.render_pass,
render_pass_subpass: self.render_pass_subpass,
framebuffer: self.framebuffer,
}
}
}
/// Calls `vkCmdDrawIndexed`.
pub fn draw_indexed<'a, V, L, Pv, Pl, Rp, I, Ib, Ibb, Pc>(self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Rp>>,
vertices: V, indices: Ib, dynamic: &DynamicState,
sets: L, push_constants: &Pc) -> SecondaryGraphicsCommandBufferBuilder<R, P>
where Pv: 'static + VertexSource<V>, Pl: 'static + PipelineLayout + Send + Sync,
Rp: RenderPass + RenderPassDesc + 'static + Send + Sync,
Ib: Into<BufferSlice<'a, [I], Ibb>>, I: 'static + Index, Ibb: Buffer + 'static,
L: DescriptorSetsCollection + Send + Sync, Pc: 'static + Clone + Send + Sync
{
assert!(self.render_pass.is_compatible_with(pipeline.subpass().render_pass()));
assert_eq!(self.render_pass_subpass, pipeline.subpass().index());
unsafe {
SecondaryGraphicsCommandBufferBuilder {
inner: self.inner.draw_indexed(pipeline, vertices, indices, dynamic, sets, push_constants),
render_pass: self.render_pass,
render_pass_subpass: self.render_pass_subpass,
framebuffer: self.framebuffer,
}
}
}
/// Calls `vkCmdDrawIndirect`.
pub fn draw_indirect<I, V, Pv, Pl, L, Rp, Pc>(self, buffer: &Arc<I>, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Rp>>,
vertices: V, dynamic: &DynamicState,
sets: L, push_constants: &Pc) -> SecondaryGraphicsCommandBufferBuilder<R, P>
where Pv: 'static + VertexSource<V>, L: DescriptorSetsCollection + Send + Sync,
Pl: 'static + PipelineLayout + Send + Sync, Rp: RenderPass + RenderPassDesc + 'static + Send + Sync,
Pc: 'static + Clone + Send + Sync,
I: 'static + TypedBuffer<Content = [DrawIndirectCommand]>
{
assert!(self.render_pass.is_compatible_with(pipeline.subpass().render_pass()));
assert_eq!(self.render_pass_subpass, pipeline.subpass().index());
unsafe {
SecondaryGraphicsCommandBufferBuilder {
inner: self.inner.draw_indirect(buffer, pipeline, vertices, dynamic, sets, push_constants),
render_pass: self.render_pass,
render_pass_subpass: self.render_pass_subpass,
framebuffer: self.framebuffer,
}
}
}
/// See the docs of build().
#[inline]
pub fn build_raw(self) -> Result<SecondaryGraphicsCommandBuffer<R, P>, OomError> {
let inner = try!(self.inner.build());
Ok(SecondaryGraphicsCommandBuffer {
inner: inner,
render_pass: self.render_pass,
render_pass_subpass: self.render_pass_subpass,
})
}
/// Finish recording commands and build the command buffer.
///
/// # Panic
///
/// - Panics if the device or host ran out of memory.
///
#[inline]
pub fn build(self) -> Arc<SecondaryGraphicsCommandBuffer<R, P>> {
Arc::new(self.build_raw().unwrap())
}
}
/// Represents a collection of commands to be executed by the GPU.
///
/// A secondary graphics command buffer contains draw commands and non-draw commands. Secondary
/// command buffers can't specify which framebuffer they are drawing to. Instead you must create
/// a primary command buffer, specify a framebuffer, and then call the secondary command buffer.
///
/// A secondary graphics command buffer can't be called outside of a renderpass.
pub struct SecondaryGraphicsCommandBuffer<R, P = Arc<StandardCommandPool>> where P: CommandPool {
inner: InnerCommandBuffer<P>,
render_pass: Arc<R>,
render_pass_subpass: u32,
}
/// A prototype of a secondary compute command buffer.
pub struct SecondaryComputeCommandBufferBuilder<P = Arc<StandardCommandPool>> where P: CommandPool {
inner: InnerCommandBufferBuilder<P>,
}
impl SecondaryComputeCommandBufferBuilder<Arc<StandardCommandPool>> {
/// Builds a new secondary command buffer and start recording commands in it.
///
/// # Panic
///
/// - Panics if the device or host ran out of memory.
/// - Panics if the device and queue family do not belong to the same physical device.
///
#[inline]
pub fn new(device: &Arc<Device>, queue_family: QueueFamily)
-> SecondaryComputeCommandBufferBuilder<Arc<StandardCommandPool>>
{
SecondaryComputeCommandBufferBuilder::raw(Device::standard_command_pool(device,
queue_family)).unwrap()
}
}
impl<P> SecondaryComputeCommandBufferBuilder<P> where P: CommandPool {
/// See the docs of new().
#[inline]
pub fn raw(pool: P) -> Result<SecondaryComputeCommandBufferBuilder<P>, OomError> {
let inner = try!(InnerCommandBufferBuilder::new::<UnsafeRenderPass>(pool, true, None, None));
Ok(SecondaryComputeCommandBufferBuilder { inner: inner })
}
/// Writes data to a buffer.
///
/// The data is stored inside the command buffer and written to the given buffer slice.
/// This function is intended to be used for small amounts of data (only 64kB is allowed). If
/// you want to transfer large amounts of data, use copies between buffers.
///
/// # Panic
///
/// - Panics if the size of `data` is not the same as the size of the buffer slice.
/// - Panics if the size of `data` is larger than 65536 bytes.
/// - Panics if the offset or size is not a multiple of 4.
/// - Panics if the buffer wasn't created with the right usage.
/// - Panics if the queue family doesn't support transfer operations.
///
#[inline]
pub fn update_buffer<'a, B, T, Bb>(self, buffer: B, data: &T) -> SecondaryComputeCommandBufferBuilder<P>
where B: Into<BufferSlice<'a, T, Bb>>, Bb: Buffer + 'static, T: Clone + 'static + Send + Sync
{
unsafe {
SecondaryComputeCommandBufferBuilder {
inner: self.inner.update_buffer(buffer, data)
}
}
}
/// Fills a buffer with data.
///
/// The data is repeated until it fills the range from `offset` to `offset + size`.
/// Since the data is a u32, the offset and the size must be multiples of 4.
///
/// # Panic
///
/// - Panics if `offset + size` is larger than the size of the buffer.
/// - Panics if the offset or size is not a multiple of 4.
/// - Panics if the buffer wasn't created with the right usage.
/// - Panics if the queue family doesn't support transfer operations.
///
/// # Safety
///
/// - Type safety is not enforced by the API.
pub unsafe fn fill_buffer<B>(self, buffer: &Arc<B>, offset: usize, size: usize, data: u32)
-> SecondaryComputeCommandBufferBuilder<P>
where B: Buffer + 'static
{
SecondaryComputeCommandBufferBuilder {
inner: self.inner.fill_buffer(buffer, offset, size, data)
}
}
/// See the docs of build().
#[inline]
pub fn build_raw(self) -> Result<SecondaryComputeCommandBuffer<P>, OomError> {
let inner = try!(self.inner.build());
Ok(SecondaryComputeCommandBuffer { inner: inner })
}
/// Finish recording commands and build the command buffer.
///
/// # Panic
///
/// - Panics if the device or host ran out of memory.
///
#[inline]
pub fn build(self) -> Arc<SecondaryComputeCommandBuffer<P>> {
Arc::new(self.build_raw().unwrap())
}
}
/// Represents a collection of commands to be executed by the GPU.
///
/// A secondary compute command buffer contains non-draw commands (like copy commands, compute
/// shader execution, etc.). It can only be called outside of a renderpass.
pub struct SecondaryComputeCommandBuffer<P = Arc<StandardCommandPool>>
where P: CommandPool
{
inner: InnerCommandBuffer<P>,
}


@ -16,106 +16,88 @@
//! trait. By default vulkano will use the `StandardCommandPool` struct, but you can implement
//! this trait yourself by wrapping around the `UnsafeCommandPool` type.
use std::sync::Arc;
use instance::QueueFamily;
use device::Device;
use device::DeviceOwned;
use OomError;
use VulkanObject;
use vk;
pub use self::standard::StandardCommandPool;
pub use self::standard::StandardCommandPoolFinished;
pub use self::sys::UnsafeCommandPool;
pub use self::sys::UnsafeCommandPoolAlloc;
pub use self::sys::UnsafeCommandPoolAllocIter;
pub use self::sys::CommandPoolTrimError;
mod standard;
pub mod standard;
mod sys;
/// Types that manage the memory of command buffers.
pub unsafe trait CommandPool {
///
/// # Safety
///
/// A Vulkan command pool must be externally synchronized as if it owned the command buffers that
/// were allocated from it. This includes allocating from the pool, freeing from the pool,
/// resetting the pool or individual command buffers, and most importantly recording commands to
/// command buffers.
///
/// The implementation of `CommandPool` is expected to manage this. For as long as a `Builder`
/// is alive, the trait implementation is expected to lock the pool that allocated the `Builder`
/// for the current thread.
///
/// > **Note**: This may be modified in the future to allow different implementation strategies.
///
/// The destructors of the `CommandPoolBuilderAlloc` and the `CommandPoolAlloc` are expected to
/// free the command buffer, reset it, or return it to a pool so that it can be reused.
/// If the implementation frees or resets the command buffer, it must keep in mind that
/// doing so requires locking the pool.
///
pub unsafe trait CommandPool: DeviceOwned {
/// See `alloc()`.
type Iter: Iterator<Item = AllocatedCommandBuffer>;
/// See `lock()`.
type Lock;
/// See `finish()`.
type Finished: CommandPoolFinished;
type Iter: Iterator<Item = Self::Builder>;
/// Represents a command buffer that has been allocated and that is currently being built.
type Builder: CommandPoolBuilderAlloc<Alloc = Self::Alloc>;
/// Represents a command buffer that has been allocated and that is pending execution or is
/// being executed.
type Alloc: CommandPoolAlloc;
/// Allocates command buffers from this pool.
///
/// Returns an iterator over the newly-allocated command buffers.
fn alloc(&self, secondary: bool, count: u32) -> Result<Self::Iter, OomError>;
/// Frees command buffers from this pool.
///
/// # Safety
///
/// - The command buffers must have been allocated from this pool.
/// - `secondary` must have the same value as what was passed to `alloc`.
///
unsafe fn free<I>(&self, secondary: bool, command_buffers: I)
where I: Iterator<Item = AllocatedCommandBuffer>;
/// Once a command buffer has finished being built, it should call this method in order to
/// produce a `Finished` object.
///
/// The `Finished` object must hold the pool alive.
///
/// The point of this object is to change the Send/Sync strategy after a command buffer has
/// finished being built compared to before.
fn finish(self) -> Self::Finished;
/// Before any command buffer allocated from this pool can be modified, the pool itself must
/// be locked by calling this method.
///
/// All the operations are atomic at the thread level, so the point of this lock is to
/// prevent the pool from being accessed from multiple threads in parallel.
fn lock(&self) -> Self::Lock;
/// Returns true if command buffers can be reset individually. In other words, if the pool
/// was created with `reset_cb` set to true.
fn can_reset_invidual_command_buffers(&self) -> bool;
/// Returns the device used to create this pool.
fn device(&self) -> &Arc<Device>;
/// Returns the queue family that this pool targets.
fn queue_family(&self) -> QueueFamily;
}
/// See `CommandPool::finish()`.
pub unsafe trait CommandPoolFinished {
/// Frees command buffers.
///
/// # Safety
///
/// - The command buffers must have been allocated from this pool.
/// - `secondary` must have the same value as what was passed to `alloc`.
///
unsafe fn free<I>(&self, secondary: bool, command_buffers: I)
where I: Iterator<Item = AllocatedCommandBuffer>;
/// A command buffer allocated from a pool and that can be recorded.
///
/// # Safety
///
/// See `CommandPool` for information about safety.
///
pub unsafe trait CommandPoolBuilderAlloc: DeviceOwned {
/// Return type of `into_alloc`.
type Alloc: CommandPoolAlloc;
/// Returns the device used to create this pool.
fn device(&self) -> &Arc<Device>;
/// Returns the internal object that contains the command buffer.
fn inner(&self) -> &UnsafeCommandPoolAlloc;
/// Returns the queue family that this pool targets.
/// Turns this builder into a command buffer that is pending execution.
fn into_alloc(self) -> Self::Alloc;
/// Returns the queue family that the pool targets.
fn queue_family(&self) -> QueueFamily;
}
/// Opaque type that represents a command buffer allocated from a pool.
pub struct AllocatedCommandBuffer(vk::CommandBuffer);
/// A command buffer allocated from a pool that has finished being recorded.
///
/// # Safety
///
/// See `CommandPool` for information about safety.
///
pub unsafe trait CommandPoolAlloc: DeviceOwned {
/// Returns the internal object that contains the command buffer.
fn inner(&self) -> &UnsafeCommandPoolAlloc;
impl From<vk::CommandBuffer> for AllocatedCommandBuffer {
#[inline]
fn from(cmd: vk::CommandBuffer) -> AllocatedCommandBuffer {
AllocatedCommandBuffer(cmd)
}
}
unsafe impl VulkanObject for AllocatedCommandBuffer {
type Object = vk::CommandBuffer;
#[inline]
fn internal_object(&self) -> vk::CommandBuffer {
self.0
}
/// Returns the queue family that the pool targets.
fn queue_family(&self) -> QueueFamily;
}
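// A minimal sketch of the allocation lifecycle defined by the traits above, assuming
// that `pool` is some type implementing `CommandPool`:
//
// // `alloc` returns an iterator of `Builder`s; here we allocate a single primary
// // (non-secondary) command buffer.
// let builder = pool.alloc(false, 1).unwrap().next().unwrap();
// // ... record commands; the builder is not necessarily `Send`, so it stays on
// // this thread ...
// // Once recording is finished, turn the builder into an `Alloc` that is pending
// // execution and can cross threads if the implementation allows it.
// let alloc = builder.into_alloc();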


@ -8,24 +8,23 @@
// according to those terms.
use std::cmp;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::hash::BuildHasherDefault;
use std::iter::Chain;
use std::marker::PhantomData;
use std::sync::Arc;
use std::sync::Mutex;
use std::vec::IntoIter as VecIntoIter;
use std::sync::Weak;
use fnv::FnvHasher;
use command_buffer::pool::AllocatedCommandBuffer;
use command_buffer::pool::CommandPool;
use command_buffer::pool::CommandPoolFinished;
use command_buffer::pool::CommandPoolAlloc;
use command_buffer::pool::CommandPoolBuilderAlloc;
use command_buffer::pool::UnsafeCommandPool;
use command_buffer::pool::UnsafeCommandPoolAllocIter;
use command_buffer::pool::UnsafeCommandPoolAlloc;
use instance::QueueFamily;
use device::Device;
use device::DeviceOwned;
use OomError;
use VulkanObject;
@ -38,7 +37,8 @@ fn curr_thread_id() -> usize { THREAD_ID.with(|data| &**data as *const u8 as usi
/// Standard implementation of a command pool.
///
/// Will use one Vulkan pool per thread in order to avoid locking. Will try to reuse command
/// buffers. Locking is required only when allocating/freeing command buffers.
/// buffers. Command buffers can't be moved between threads during the building process, but
/// finished command buffers can.
pub struct StandardCommandPool {
// The device.
device: Arc<Device>,
@ -47,17 +47,20 @@ pub struct StandardCommandPool {
queue_family: u32,
// For each "thread id" (see `THREAD_ID` above), we store thread-specific info.
per_thread: Mutex<HashMap<usize, StandardCommandPoolPerThread, BuildHasherDefault<FnvHasher>>>,
per_thread: Mutex<HashMap<usize, Weak<Mutex<StandardCommandPoolPerThread>>,
BuildHasherDefault<FnvHasher>>>,
}
// Dummy marker in order to not implement `Send` and `Sync`.
//
// Since `StandardCommandPool` isn't Send/Sync, then the command buffers that use this pool
// won't be Send/Sync either, which means that we don't need to lock the pool while the CB
// is being built.
//
// However `StandardCommandPoolFinished` *is* Send/Sync because the only operation that can
// be called on `StandardCommandPoolFinished` is freeing, and freeing does actually lock.
dummy_avoid_send_sync: PhantomData<*const u8>,
unsafe impl Send for StandardCommandPool {}
unsafe impl Sync for StandardCommandPool {}
struct StandardCommandPoolPerThread {
// The Vulkan pool of this thread.
pool: UnsafeCommandPool,
// List of existing primary command buffers that are available for reuse.
available_primary_command_buffers: Vec<UnsafeCommandPoolAlloc>,
// List of existing secondary command buffers that are available for reuse.
available_secondary_command_buffers: Vec<UnsafeCommandPoolAlloc>,
}
impl StandardCommandPool {
@ -67,108 +70,80 @@ impl StandardCommandPool {
///
/// - Panics if the device and the queue family don't belong to the same physical device.
///
pub fn new(device: &Arc<Device>, queue_family: QueueFamily) -> StandardCommandPool {
pub fn new(device: Arc<Device>, queue_family: QueueFamily) -> StandardCommandPool {
assert_eq!(device.physical_device().internal_object(),
queue_family.physical_device().internal_object());
StandardCommandPool {
device: device.clone(),
device: device,
queue_family: queue_family.id(),
per_thread: Mutex::new(Default::default()),
dummy_avoid_send_sync: PhantomData,
}
}
}
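// A sketch of creating a pool with the new signature, which takes the `Arc<Device>` by
// value (`device` and `queue` are placeholders):
//
// let pool = StandardCommandPool::new(device.clone(), queue.family());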
struct StandardCommandPoolPerThread {
// The Vulkan pool of this thread.
pool: UnsafeCommandPool,
// List of existing primary command buffers that are available for reuse.
available_primary_command_buffers: Vec<AllocatedCommandBuffer>,
// List of existing secondary command buffers that are available for reuse.
available_secondary_command_buffers: Vec<AllocatedCommandBuffer>,
}
unsafe impl CommandPool for Arc<StandardCommandPool> {
type Iter = Chain<VecIntoIter<AllocatedCommandBuffer>, UnsafeCommandPoolAllocIter>;
type Lock = ();
type Finished = StandardCommandPoolFinished;
type Iter = Box<Iterator<Item = StandardCommandPoolBuilder>>; // TODO: meh for Box
type Builder = StandardCommandPoolBuilder;
type Alloc = StandardCommandPoolAlloc;
fn alloc(&self, secondary: bool, count: u32) -> Result<Self::Iter, OomError> {
// Find the correct `StandardCommandPoolPerThread` structure.
let mut per_thread = self.per_thread.lock().unwrap();
let mut per_thread = match per_thread.entry(curr_thread_id()) {
Entry::Occupied(entry) => entry.into_mut(),
Entry::Vacant(entry) => {
let new_pool = try!(UnsafeCommandPool::new(&self.device, self.queue_family(),
false, true));
let mut hashmap = self.per_thread.lock().unwrap();
//hashmap.retain(|_, w| w.upgrade().is_some()); // TODO: unstable // TODO: meh for iterating everything every time
entry.insert(StandardCommandPoolPerThread {
// TODO: this hashmap lookup can probably be optimized
let curr_thread_id = curr_thread_id();
let per_thread = hashmap.get(&curr_thread_id).and_then(|p| p.upgrade());
let per_thread = match per_thread {
Some(pt) => pt,
None => {
let new_pool = try!(UnsafeCommandPool::new(self.device.clone(), self.queue_family(),
false, true));
let pt = Arc::new(Mutex::new(StandardCommandPoolPerThread {
pool: new_pool,
available_primary_command_buffers: Vec::new(),
available_secondary_command_buffers: Vec::new(),
})
}));
hashmap.insert(curr_thread_id, Arc::downgrade(&pt));
pt
},
};
// Which list of already-existing command buffers we are going to pick CBs from.
let mut existing = if secondary { &mut per_thread.available_secondary_command_buffers }
else { &mut per_thread.available_primary_command_buffers };
let mut pt_lock = per_thread.lock().unwrap();
// Build an iterator to pick from already-existing command buffers.
let num_from_existing = cmp::min(count as usize, existing.len());
let from_existing = existing.drain(0 .. num_from_existing).collect::<Vec<_>>().into_iter();
let (num_from_existing, from_existing) = {
// Which list of already-existing command buffers we are going to pick CBs from.
let mut existing = if secondary { &mut pt_lock.available_secondary_command_buffers }
else { &mut pt_lock.available_primary_command_buffers };
let num_from_existing = cmp::min(count as usize, existing.len());
let from_existing = existing.drain(0 .. num_from_existing).collect::<Vec<_>>().into_iter();
(num_from_existing, from_existing)
};
// Build an iterator to construct the missing command buffers from the Vulkan pool.
let num_new = count as usize - num_from_existing;
debug_assert!(num_new <= count as usize); // Check overflows.
let newly_allocated = try!(per_thread.pool.alloc_command_buffers(secondary, num_new));
let newly_allocated = try!(pt_lock.pool.alloc_command_buffers(secondary, num_new));
// Returning them as a chain.
Ok(from_existing.chain(newly_allocated))
}
unsafe fn free<I>(&self, secondary: bool, command_buffers: I)
where I: Iterator<Item = AllocatedCommandBuffer>
{
// Do not actually free the command buffers. Instead, add them to the list of command
// buffers available for reuse.
let mut per_thread = self.per_thread.lock().unwrap();
let mut per_thread = per_thread.get_mut(&curr_thread_id()).unwrap();
if secondary {
for cb in command_buffers {
per_thread.available_secondary_command_buffers.push(cb);
let device = self.device.clone();
let queue_family_id = self.queue_family;
let per_thread = per_thread.clone();
let final_iter = from_existing.chain(newly_allocated).map(move |cmd| {
StandardCommandPoolBuilder {
cmd: Some(cmd),
pool: per_thread.clone(),
secondary: secondary,
device: device.clone(),
queue_family_id: queue_family_id,
dummy_avoid_send_sync: PhantomData,
}
} else {
for cb in command_buffers {
per_thread.available_primary_command_buffers.push(cb);
}
}
}
}).collect::<Vec<_>>();
#[inline]
fn finish(self) -> Self::Finished {
StandardCommandPoolFinished {
pool: self,
thread_id: curr_thread_id(),
}
}
#[inline]
fn lock(&self) -> Self::Lock {
()
}
#[inline]
fn can_reset_invidual_command_buffers(&self) -> bool {
true
}
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
Ok(Box::new(final_iter.into_iter()))
}
#[inline]
@ -177,40 +152,106 @@ unsafe impl CommandPool for Arc<StandardCommandPool> {
}
}
pub struct StandardCommandPoolFinished {
pool: Arc<StandardCommandPool>,
thread_id: usize,
unsafe impl DeviceOwned for StandardCommandPool {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
unsafe impl CommandPoolFinished for StandardCommandPoolFinished {
unsafe fn free<I>(&self, secondary: bool, command_buffers: I)
where I: Iterator<Item = AllocatedCommandBuffer>
{
let mut per_thread = self.pool.per_thread.lock().unwrap();
let mut per_thread = per_thread.get_mut(&curr_thread_id()).unwrap();
pub struct StandardCommandPoolBuilder {
cmd: Option<UnsafeCommandPoolAlloc>,
pool: Arc<Mutex<StandardCommandPoolPerThread>>,
secondary: bool,
device: Arc<Device>,
queue_family_id: u32,
dummy_avoid_send_sync: PhantomData<*const u8>,
}
if secondary {
for cb in command_buffers {
per_thread.available_secondary_command_buffers.push(cb);
}
} else {
for cb in command_buffers {
per_thread.available_primary_command_buffers.push(cb);
}
unsafe impl CommandPoolBuilderAlloc for StandardCommandPoolBuilder {
type Alloc = StandardCommandPoolAlloc;
#[inline]
fn inner(&self) -> &UnsafeCommandPoolAlloc {
self.cmd.as_ref().unwrap()
}
#[inline]
fn into_alloc(mut self) -> Self::Alloc {
StandardCommandPoolAlloc {
cmd: Some(self.cmd.take().unwrap()),
pool: self.pool.clone(),
secondary: self.secondary,
device: self.device.clone(),
queue_family_id: self.queue_family_id,
}
}
#[inline]
fn queue_family(&self) -> QueueFamily {
self.device.physical_device().queue_family_by_id(self.queue_family_id).unwrap()
}
}
unsafe impl DeviceOwned for StandardCommandPoolBuilder {
#[inline]
fn device(&self) -> &Arc<Device> {
self.pool.device()
&self.device
}
}
impl Drop for StandardCommandPoolBuilder {
fn drop(&mut self) {
if let Some(cmd) = self.cmd.take() {
let mut pool = self.pool.lock().unwrap();
if self.secondary {
pool.available_secondary_command_buffers.push(cmd);
} else {
pool.available_primary_command_buffers.push(cmd);
}
}
}
}
pub struct StandardCommandPoolAlloc {
cmd: Option<UnsafeCommandPoolAlloc>,
pool: Arc<Mutex<StandardCommandPoolPerThread>>,
secondary: bool,
device: Arc<Device>,
queue_family_id: u32,
}
unsafe impl Send for StandardCommandPoolAlloc {}
unsafe impl Sync for StandardCommandPoolAlloc {}
unsafe impl CommandPoolAlloc for StandardCommandPoolAlloc {
#[inline]
fn inner(&self) -> &UnsafeCommandPoolAlloc {
self.cmd.as_ref().unwrap()
}
#[inline]
fn queue_family(&self) -> QueueFamily {
self.pool.queue_family()
self.device.physical_device().queue_family_by_id(self.queue_family_id).unwrap()
}
}
// See `StandardCommandPool` for comments about this.
unsafe impl Send for StandardCommandPoolFinished {}
unsafe impl Sync for StandardCommandPoolFinished {}
unsafe impl DeviceOwned for StandardCommandPoolAlloc {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
impl Drop for StandardCommandPoolAlloc {
fn drop(&mut self) {
let mut pool = self.pool.lock().unwrap();
if self.secondary {
pool.available_secondary_command_buffers.push(self.cmd.take().unwrap());
} else {
pool.available_primary_command_buffers.push(self.cmd.take().unwrap());
}
}
}


@ -10,29 +10,39 @@
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::error;
use std::fmt;
use std::sync::Arc;
use std::vec::IntoIter as VecIntoIter;
use smallvec::SmallVec;
use command_buffer::pool::AllocatedCommandBuffer;
use instance::QueueFamily;
use device::Device;
use device::DeviceOwned;
use OomError;
use VulkanObject;
use VulkanPointers;
use Error;
use check_errors;
use vk;
/// Low-level implementation of a command pool.
///
/// A command pool is always tied to a specific queue family. Command buffers allocated from a pool
/// can only be executed on the corresponding queue family.
///
/// This struct doesn't implement the `Sync` trait because Vulkan command pools are not thread
/// safe. In other words, you can only use a pool from one thread at a time.
#[derive(Debug)]
pub struct UnsafeCommandPool {
pool: vk::CommandPool,
device: Arc<Device>,
// Index of the associated queue family in the physical device.
queue_family_index: u32,
// We don't want `UnsafeCommandPool` to implement Sync, since the Vulkan command pool isn't
// thread safe.
//
// We don't want `UnsafeCommandPool` to implement Sync.
// This marker unimplements both Send and Sync, but we reimplement Send manually right under.
dummy_avoid_sync: PhantomData<*const u8>,
}
@ -53,11 +63,12 @@ impl UnsafeCommandPool {
///
/// - Panics if the queue family doesn't belong to the same physical device as `device`.
///
pub fn new(device: &Arc<Device>, queue_family: QueueFamily, transient: bool,
pub fn new(device: Arc<Device>, queue_family: QueueFamily, transient: bool,
reset_cb: bool) -> Result<UnsafeCommandPool, OomError>
{
assert_eq!(device.physical_device().internal_object(),
queue_family.physical_device().internal_object());
queue_family.physical_device().internal_object(),
"Device doesn't match physical device when creating a command pool");
let vk = device.pointers();
@ -92,11 +103,13 @@ impl UnsafeCommandPool {
/// Resets the pool, which resets all the command buffers that were allocated from it.
///
/// If `release_resources` is true, it is a hint to the implementation that it should free all
/// the memory internally allocated for this pool.
///
/// # Safety
///
/// The command buffers allocated from this pool jump to the initial state.
///
#[inline]
pub unsafe fn reset(&self, release_resources: bool) -> Result<(), OomError> {
let flags = if release_resources { vk::COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT }
else { 0 };
@ -106,6 +119,27 @@ impl UnsafeCommandPool {
Ok(())
}
/// Trims a command pool, which recycles unused internal memory from the command pool back to
/// the system.
///
/// Command buffers allocated from the pool are not affected by trimming.
///
/// This function is supported only if the `VK_KHR_maintenance1` extension was enabled at
/// device creation. Otherwise an error is returned.
///
/// Since this operation is purely an optimization, it is legitimate to call this function and
/// simply ignore any possible error.
pub fn trim(&self) -> Result<(), CommandPoolTrimError> {
unsafe {
if !self.device.loaded_extensions().khr_maintenance1 {
return Err(CommandPoolTrimError::Maintenance1ExtensionNotEnabled);
}
let vk = self.device.pointers();
vk.TrimCommandPoolKHR(self.device.internal_object(), self.pool, 0 /* reserved */);
Ok(())
}
}
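// Because trimming is purely an optimization, a typical call site can ignore the result
// entirely, e.g.:
//
// let _ = pool.trim();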
/// Allocates `count` command buffers.
///
/// If `secondary` is true, allocates secondary command buffers. Otherwise, allocates primary
@ -114,7 +148,9 @@ impl UnsafeCommandPool {
-> Result<UnsafeCommandPoolAllocIter, OomError>
{
if count == 0 {
return Ok(UnsafeCommandPoolAllocIter(None));
return Ok(UnsafeCommandPoolAllocIter {
list: None
});
}
let infos = vk::CommandBufferAllocateInfo {
@ -134,7 +170,9 @@ impl UnsafeCommandPool {
out.set_len(count);
Ok(UnsafeCommandPoolAllocIter(Some(out.into_iter())))
Ok(UnsafeCommandPoolAllocIter {
list: Some(out.into_iter())
})
}
}
@ -142,10 +180,10 @@ impl UnsafeCommandPool {
///
/// # Safety
///
/// The command buffers must have been allocated from this pool.
/// The command buffers must have been allocated from this pool. They must not be in use.
///
pub unsafe fn free_command_buffers<I>(&self, command_buffers: I)
where I: Iterator<Item = AllocatedCommandBuffer>
where I: Iterator<Item = UnsafeCommandPoolAlloc>
{
let command_buffers: SmallVec<[_; 4]> = command_buffers.map(|cb| cb.0).collect();
let vk = self.device.pointers();
@ -153,12 +191,6 @@ impl UnsafeCommandPool {
command_buffers.len() as u32, command_buffers.as_ptr())
}
/// Returns the device this command pool was created with.
#[inline]
pub fn device(&self) -> &Arc<Device> {
&self.device
}
/// Returns the queue family on which command buffers of this pool can be executed.
#[inline]
pub fn queue_family(&self) -> QueueFamily {
@ -166,6 +198,13 @@ impl UnsafeCommandPool {
}
}
unsafe impl DeviceOwned for UnsafeCommandPool {
#[inline]
fn device(&self) -> &Arc<Device> {
&self.device
}
}
unsafe impl VulkanObject for UnsafeCommandPool {
type Object = vk::CommandPool;
@ -185,21 +224,116 @@ impl Drop for UnsafeCommandPool {
}
}
/// Iterator for newly-allocated command buffers.
pub struct UnsafeCommandPoolAllocIter(Option<VecIntoIter<vk::CommandBuffer>>);
/// Opaque type that represents a command buffer allocated from a pool.
pub struct UnsafeCommandPoolAlloc(vk::CommandBuffer);
impl Iterator for UnsafeCommandPoolAllocIter {
type Item = AllocatedCommandBuffer;
unsafe impl VulkanObject for UnsafeCommandPoolAlloc {
type Object = vk::CommandBuffer;
#[inline]
fn next(&mut self) -> Option<AllocatedCommandBuffer> {
self.0.as_mut().and_then(|i| i.next()).map(|cb| AllocatedCommandBuffer(cb))
fn internal_object(&self) -> vk::CommandBuffer {
self.0
}
}
/// Iterator for newly-allocated command buffers.
#[derive(Debug)]
pub struct UnsafeCommandPoolAllocIter {
list: Option<VecIntoIter<vk::CommandBuffer>>
}
impl Iterator for UnsafeCommandPoolAllocIter {
type Item = UnsafeCommandPoolAlloc;
#[inline]
fn next(&mut self) -> Option<UnsafeCommandPoolAlloc> {
self.list.as_mut().and_then(|i| i.next()).map(|cb| UnsafeCommandPoolAlloc(cb))
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
self.0.as_ref().map(|i| i.size_hint()).unwrap_or((0, Some(0)))
self.list.as_ref().map(|i| i.size_hint()).unwrap_or((0, Some(0)))
}
}
impl ExactSizeIterator for UnsafeCommandPoolAllocIter {}
/// Error that can happen when trimming command pools.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum CommandPoolTrimError {
/// The `KHR_maintenance1` extension was not enabled.
Maintenance1ExtensionNotEnabled,
}
impl error::Error for CommandPoolTrimError {
#[inline]
fn description(&self) -> &str {
match *self {
CommandPoolTrimError::Maintenance1ExtensionNotEnabled => "the `KHR_maintenance1` \
extension was not enabled",
}
}
}
impl fmt::Display for CommandPoolTrimError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}
impl From<Error> for CommandPoolTrimError {
#[inline]
fn from(err: Error) -> CommandPoolTrimError {
panic!("unexpected error: {:?}", err)
}
}
#[cfg(test)]
mod tests {
use command_buffer::pool::UnsafeCommandPool;
use command_buffer::pool::CommandPoolTrimError;
#[test]
fn basic_create() {
let (device, queue) = gfx_dev_and_queue!();
let _ = UnsafeCommandPool::new(device, queue.family(), false, false).unwrap();
}
#[test]
fn queue_family_getter() {
let (device, queue) = gfx_dev_and_queue!();
let pool = UnsafeCommandPool::new(device, queue.family(), false, false).unwrap();
assert_eq!(pool.queue_family().id(), queue.family().id());
}
#[test]
#[should_panic(expected = "Device doesn't match physical device when creating a command pool")]
fn panic_if_not_match_family() {
let (device, _) = gfx_dev_and_queue!();
let (_, queue) = gfx_dev_and_queue!();
let _ = UnsafeCommandPool::new(device, queue.family(), false, false);
}
#[test]
fn check_maintenance_when_trim() {
let (device, queue) = gfx_dev_and_queue!();
let pool = UnsafeCommandPool::new(device, queue.family(), false, false).unwrap();
match pool.trim() {
Err(CommandPoolTrimError::Maintenance1ExtensionNotEnabled) => (),
_ => panic!()
}
}
// TODO: test that trim works if VK_KHR_maintenance1 is enabled; the test macro doesn't
// support enabling extensions yet
#[test]
fn basic_alloc() {
let (device, queue) = gfx_dev_and_queue!();
let pool = UnsafeCommandPool::new(device, queue.family(), false, false).unwrap();
let iter = pool.alloc_command_buffers(false, 12).unwrap();
assert_eq!(iter.count(), 12);
}
}


@ -1,76 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::any::Any;
pub struct CopyCommand<L, B> where B: Buffer, L: CommandsList {
previous: L,
buffer: B,
buffer_state: B::CommandBufferState,
transition: Option<>,
}
impl<L, B> CommandsList for CopyCommand<L, B> where B: Buffer, L: CommandsList {
pub fn new(previous: L, buffer: B) -> CopyCommand<L, B> {
let (state, transition) = previous.current_buffer_state(buffer)
.transition(false, self.num_commands(), ShaderStages, AccessFlagBits);
assert!(transition.after_command_num < self.num_commands());
}
}
unsafe impl<L, B> CommandsList for CopyCommand<L, B> where B: Buffer, L: CommandsList {
fn num_commands(&self) -> usize {
self.previous.num_commands() + 1
}
#[inline]
fn requires_graphics_queue(&self) -> bool {
self.previous.requires_graphics_queue()
}
#[inline]
fn requires_compute_queue(&self) -> bool {
self.previous.requires_compute_queue()
}
fn current_buffer_state<Ob>(&self, other: &Ob) -> Ob::CommandBufferState
where Ob: Buffer
{
if self.buffer.is_same(b) {
let s: &Ob::CommandBufferState = (&self.buffer_state as &Any).downcast_ref().unwrap();
s.clone()
} else {
self.previous.current_buffer_state(b)
}
}
unsafe fn build_unsafe_command_buffer<P, I>(&self, pool: P, transitions: I) -> UnsafeCommandBufferBuilder<P>
where P: CommandPool
{
let my_command_num = self.num_commands();
let mut transitions_to_apply = PipelineBarrierBuilder::new();
let transitions = transitions.filter_map(|transition| {
if transition.after_command_num >= my_command_num {
transitions_to_apply.push(transition);
None
} else {
Some(transition)
}
}).chain(self.transition.clone().into_iter());
let mut parent_cb = self.previous.build_unsafe_command_buffer(pool, transitions.clone().chain());
parent_cb.copy_buffer();
parent_cb.pipeline_barrier(transitions_to_apply);
parent_cb
}
}
unsafe impl StdPrimaryCommandsList


@ -1,267 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::iter;
use std::iter::Chain;
use std::sync::Arc;
use smallvec::SmallVec;
use buffer::traits::TrackedBuffer;
use command_buffer::std::OutsideRenderPass;
use command_buffer::std::ResourcesStates;
use command_buffer::std::StdCommandsList;
use command_buffer::submit::CommandBuffer;
use command_buffer::submit::SubmitInfo;
use command_buffer::sys::PipelineBarrierBuilder;
use command_buffer::sys::UnsafeCommandBuffer;
use command_buffer::sys::UnsafeCommandBufferBuilder;
use descriptor::PipelineLayout;
use descriptor::descriptor::ShaderStages;
use descriptor::descriptor_set::collection::TrackedDescriptorSetsCollection;
use descriptor::descriptor_set::collection::TrackedDescriptorSetsCollectionState;
use descriptor::descriptor_set::collection::TrackedDescriptorSetsCollectionFinished;
use device::Queue;
use image::traits::TrackedImage;
use instance::QueueFamily;
use pipeline::ComputePipeline;
use pipeline::GraphicsPipeline;
use sync::Fence;
use VulkanObject;
/// Wraps around a commands list and adds a dispatch command at the end of it.
pub struct DispatchCommand<'a, L, Pl, S, Pc>
where L: StdCommandsList, Pl: PipelineLayout, S: TrackedDescriptorSetsCollection, Pc: 'a
{
// Parent commands list.
previous: L,
// The compute pipeline.
pipeline: Arc<ComputePipeline<Pl>>,
// The descriptor sets to bind.
sets: S,
// The state of the descriptor sets.
sets_state: S::State,
// Pipeline barrier to inject in the final command buffer.
pipeline_barrier: (usize, PipelineBarrierBuilder),
// The push constants. TODO: use Cow
push_constants: &'a Pc,
// Dispatch dimensions.
dimensions: [u32; 3],
}
impl<'a, L, Pl, S, Pc> DispatchCommand<'a, L, Pl, S, Pc>
where L: StdCommandsList + OutsideRenderPass, Pl: PipelineLayout,
S: TrackedDescriptorSetsCollection, Pc: 'a
{
/// See the documentation of the `dispatch` method.
pub fn new(mut previous: L, pipeline: Arc<ComputePipeline<Pl>>, sets: S, dimensions: [u32; 3],
push_constants: &'a Pc) -> DispatchCommand<'a, L, Pl, S, Pc>
{
let (sets_state, barrier_loc, barrier) = unsafe {
sets.extract_states_and_transition(&mut previous)
};
DispatchCommand {
previous: previous,
pipeline: pipeline,
sets: sets,
sets_state: sets_state,
pipeline_barrier: (barrier_loc, barrier),
push_constants: push_constants,
dimensions: dimensions,
}
}
}
unsafe impl<'a, L, Pl, S, Pc> StdCommandsList for DispatchCommand<'a, L, Pl, S, Pc>
where L: StdCommandsList, Pl: PipelineLayout, S: TrackedDescriptorSetsCollection, Pc: 'a
{
type Pool = L::Pool;
type Output = DispatchCommandCb<L::Output, Pl, S>;
#[inline]
fn num_commands(&self) -> usize {
self.previous.num_commands() + 1
}
#[inline]
fn check_queue_validity(&self, queue: QueueFamily) -> Result<(), ()> {
if !queue.supports_compute() {
return Err(());
}
self.previous.check_queue_validity(queue)
}
#[inline]
fn is_compute_pipeline_bound<OPl>(&self, pipeline: &Arc<ComputePipeline<OPl>>) -> bool {
pipeline.internal_object() == self.pipeline.internal_object()
}
#[inline]
fn is_graphics_pipeline_bound<Pv, OPl, Prp>(&self, pipeline: &Arc<GraphicsPipeline<Pv, OPl, Prp>>)
-> bool
{
self.previous.is_graphics_pipeline_bound(pipeline)
}
#[inline]
fn buildable_state(&self) -> bool {
true
}
unsafe fn raw_build<I, F>(self, additional_elements: F, barriers: I,
mut final_barrier: PipelineBarrierBuilder) -> Self::Output
where F: FnOnce(&mut UnsafeCommandBufferBuilder<L::Pool>),
I: Iterator<Item = (usize, PipelineBarrierBuilder)>
{
let my_command_num = self.num_commands();
// Computing the finished state of the sets.
let (finished_state, fb) = self.sets_state.finish();
final_barrier.merge(fb);
// We split the barriers in two: those to apply after our command, and those to
// transfer to the parent so that they are applied before our command.
// The transitions to apply immediately after our command.
let mut transitions_to_apply = PipelineBarrierBuilder::new();
// The barriers to transfer to the parent.
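        // Note: once one barrier has been kept locally (i.e. `transitions_to_apply` is
        // no longer empty), every later barrier is also kept locally, presumably so that
        // the relative order of the barriers is preserved.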
let barriers = barriers.filter_map(|(after_command_num, barrier)| {
if after_command_num >= my_command_num || !transitions_to_apply.is_empty() {
transitions_to_apply.merge(barrier);
None
} else {
Some((after_command_num, barrier))
}
}).collect::<SmallVec<[_; 8]>>();
// Moving out some values, otherwise Rust complains that the closure below uses `self`
// while it's partially moved out.
let my_barrier = self.pipeline_barrier;
let my_pipeline = self.pipeline;
let my_sets = self.sets;
let my_push_constants = self.push_constants;
let my_dimensions = self.dimensions;
let bind_pipeline = !self.previous.is_compute_pipeline_bound(&my_pipeline);
// Passing to the parent.
let parent = self.previous.raw_build(|cb| {
// TODO: is the pipeline layout always the same as in the compute pipeline?
if bind_pipeline {
cb.bind_pipeline_compute(&my_pipeline);
}
let sets: SmallVec<[_; 8]> = my_sets.list().collect(); // TODO: ideally shouldn't collect, but there are lifetime problems
cb.bind_descriptor_sets(false, &**my_pipeline.layout(), 0,
sets.iter().map(|s| s.inner()), iter::empty()); // TODO: dynamic ranges, and don't bind if not necessary
cb.push_constants(&**my_pipeline.layout(), ShaderStages::all(), 0, // TODO: stages
&my_push_constants);
cb.dispatch(my_dimensions[0], my_dimensions[1], my_dimensions[2]);
cb.pipeline_barrier(transitions_to_apply);
additional_elements(cb);
}, Some(my_barrier).into_iter().chain(barriers.into_iter()), final_barrier);
DispatchCommandCb {
previous: parent,
pipeline: my_pipeline,
sets: my_sets,
sets_state: finished_state,
}
}
}
unsafe impl<'a, L, Pl, S, Pc> ResourcesStates for DispatchCommand<'a, L, Pl, S, Pc>
where L: StdCommandsList, Pl: PipelineLayout, S: TrackedDescriptorSetsCollection, Pc: 'a
{
unsafe fn extract_buffer_state<Ob>(&mut self, buffer: &Ob)
-> Option<Ob::CommandListState>
where Ob: TrackedBuffer
{
if let Some(s) = self.sets_state.extract_buffer_state(buffer) {
return Some(s);
}
self.previous.extract_buffer_state(buffer)
}
unsafe fn extract_image_state<I>(&mut self, image: &I) -> Option<I::CommandListState>
where I: TrackedImage
{
if let Some(s) = self.sets_state.extract_image_state(image) {
return Some(s);
}
self.previous.extract_image_state(image)
}
}
unsafe impl<'a, L, Pl, S, Pc> OutsideRenderPass for DispatchCommand<'a, L, Pl, S, Pc>
where L: StdCommandsList, Pl: PipelineLayout, S: TrackedDescriptorSetsCollection, Pc: 'a
{
}
/// Wraps around a command buffer and adds a dispatch command at the end of it.
pub struct DispatchCommandCb<L, Pl, S>
where L: CommandBuffer, Pl: PipelineLayout, S: TrackedDescriptorSetsCollection
{
// The previous commands.
previous: L,
    // The compute pipeline. We store it here to keep it alive.
pipeline: Arc<ComputePipeline<Pl>>,
// The descriptor sets. Stored here to keep them alive.
sets: S,
// State of the descriptor sets.
sets_state: S::Finished,
}
unsafe impl<L, Pl, S> CommandBuffer for DispatchCommandCb<L, Pl, S>
where L: CommandBuffer, Pl: PipelineLayout, S: TrackedDescriptorSetsCollection
{
type Pool = L::Pool;
type SemaphoresWaitIterator = Chain<L::SemaphoresWaitIterator,
<S::Finished as TrackedDescriptorSetsCollectionFinished>::
SemaphoresWaitIterator>;
type SemaphoresSignalIterator = Chain<L::SemaphoresSignalIterator,
<S::Finished as TrackedDescriptorSetsCollectionFinished>::
SemaphoresSignalIterator>;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> {
self.previous.inner()
}
unsafe fn on_submit<F>(&self, queue: &Arc<Queue>, mut fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>
{
// We query the parent.
let parent = self.previous.on_submit(queue, &mut fence);
// We query our sets.
let my_infos = self.sets_state.on_submit(queue, fence);
// We merge the two.
SubmitInfo {
semaphores_wait: parent.semaphores_wait.chain(my_infos.semaphores_wait),
semaphores_signal: parent.semaphores_signal.chain(my_infos.semaphores_signal),
pre_pipeline_barrier: {
let mut b = parent.pre_pipeline_barrier;
b.merge(my_infos.pre_pipeline_barrier);
b
},
post_pipeline_barrier: {
let mut b = parent.post_pipeline_barrier;
b.merge(my_infos.post_pipeline_barrier);
b
},
}
}
}


@ -1,349 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::iter;
use std::iter::Chain;
use std::sync::Arc;
use smallvec::SmallVec;
use buffer::traits::Buffer;
use buffer::traits::TrackedBuffer;
use command_buffer::DynamicState;
use command_buffer::std::InsideRenderPass;
use command_buffer::std::ResourcesStates;
use command_buffer::std::StdCommandsList;
use command_buffer::submit::CommandBuffer;
use command_buffer::submit::SubmitInfo;
use command_buffer::sys::PipelineBarrierBuilder;
use command_buffer::sys::UnsafeCommandBuffer;
use command_buffer::sys::UnsafeCommandBufferBuilder;
use descriptor::PipelineLayout;
use descriptor::descriptor::ShaderStages;
use descriptor::descriptor_set::collection::TrackedDescriptorSetsCollection;
use descriptor::descriptor_set::collection::TrackedDescriptorSetsCollectionState;
use descriptor::descriptor_set::collection::TrackedDescriptorSetsCollectionFinished;
use device::Queue;
use image::traits::TrackedImage;
use instance::QueueFamily;
use pipeline::ComputePipeline;
use pipeline::GraphicsPipeline;
use pipeline::vertex::Source;
use sync::Fence;
use VulkanObject;
/// Wraps around a commands list and adds a draw command at the end of it.
pub struct DrawCommand<'a, L, Pv, Pl, Prp, S, Pc>
where L: StdCommandsList, Pl: PipelineLayout, S: TrackedDescriptorSetsCollection, Pc: 'a
{
// Parent commands list.
previous: L,
// The graphics pipeline.
pipeline: Arc<GraphicsPipeline<Pv, Pl, Prp>>,
// The descriptor sets to bind.
sets: S,
// The state of the descriptor sets.
sets_state: S::State,
// Pipeline barrier to inject in the final command buffer.
pipeline_barrier: (usize, PipelineBarrierBuilder),
// The push constants. TODO: use Cow
push_constants: &'a Pc,
// FIXME: strong typing and state transitions
vertex_buffers: SmallVec<[Arc<Buffer>; 4]>,
// Actual type of draw.
inner: DrawInner,
}
enum DrawInner {
Regular {
vertex_count: u32,
instance_count: u32,
first_vertex: u32,
first_instance: u32,
},
Indexed {
vertex_count: u32,
instance_count: u32,
first_index: u32,
vertex_offset: i32,
first_instance: u32,
},
// TODO: indirect rendering
}
impl<'a, L, Pv, Pl, Prp, S, Pc> DrawCommand<'a, L, Pv, Pl, Prp, S, Pc>
where L: StdCommandsList + InsideRenderPass, Pl: PipelineLayout,
S: TrackedDescriptorSetsCollection, Pc: 'a
{
/// See the documentation of the `draw` method.
pub fn regular<V>(mut previous: L, pipeline: Arc<GraphicsPipeline<Pv, Pl, Prp>>,
dynamic: &DynamicState, vertices: V, sets: S, push_constants: &'a Pc)
-> DrawCommand<'a, L, Pv, Pl, Prp, S, Pc>
where Pv: Source<V>
{
let (sets_state, barrier_loc, barrier) = unsafe {
sets.extract_states_and_transition(&mut previous)
};
// FIXME: lot of stuff missing here
let (buffers, num_vertices, num_instances) = pipeline.vertex_definition().decode(vertices);
let buffers = buffers.collect();
DrawCommand {
previous: previous,
pipeline: pipeline,
sets: sets,
sets_state: sets_state,
pipeline_barrier: (barrier_loc, barrier),
push_constants: push_constants,
vertex_buffers: buffers,
inner: DrawInner::Regular {
vertex_count: num_vertices as u32,
instance_count: num_instances as u32,
first_vertex: 0,
first_instance: 0,
},
}
}
}
unsafe impl<'a, L, Pv, Pl, Prp, S, Pc> StdCommandsList for DrawCommand<'a, L, Pv, Pl, Prp, S, Pc>
where L: StdCommandsList, Pl: PipelineLayout, S: TrackedDescriptorSetsCollection, Pc: 'a
{
type Pool = L::Pool;
type Output = DrawCommandCb<L::Output, Pv, Pl, Prp, S>;
#[inline]
fn num_commands(&self) -> usize {
self.previous.num_commands() + 1
}
#[inline]
fn check_queue_validity(&self, queue: QueueFamily) -> Result<(), ()> {
if !queue.supports_graphics() {
return Err(());
}
self.previous.check_queue_validity(queue)
}
#[inline]
fn is_compute_pipeline_bound<OPl>(&self, pipeline: &Arc<ComputePipeline<OPl>>) -> bool {
self.previous.is_compute_pipeline_bound(pipeline)
}
#[inline]
fn is_graphics_pipeline_bound<OPv, OPl, OPrp>(&self, pipeline: &Arc<GraphicsPipeline<OPv, OPl, OPrp>>)
-> bool
{
pipeline.internal_object() == self.pipeline.internal_object()
}
#[inline]
fn buildable_state(&self) -> bool {
self.previous.buildable_state()
}
unsafe fn raw_build<I, F>(self, additional_elements: F, barriers: I,
mut final_barrier: PipelineBarrierBuilder) -> Self::Output
where F: FnOnce(&mut UnsafeCommandBufferBuilder<L::Pool>),
I: Iterator<Item = (usize, PipelineBarrierBuilder)>
{
let my_command_num = self.num_commands();
// Computing the finished state of the sets.
let (finished_state, fb) = self.sets_state.finish();
final_barrier.merge(fb);
// We split the barriers in two: those to apply after our command, and those to
// transfer to the parent so that they are applied before our command.
// The transitions to apply immediately after our command.
let mut transitions_to_apply = PipelineBarrierBuilder::new();
// The barriers to transfer to the parent.
let barriers = barriers.filter_map(|(after_command_num, barrier)| {
if after_command_num >= my_command_num || !transitions_to_apply.is_empty() {
transitions_to_apply.merge(barrier);
None
} else {
Some((after_command_num, barrier))
}
}).collect::<SmallVec<[_; 8]>>();
// Moving out some values, otherwise Rust complains that the closure below uses `self`
// while it's partially moved out.
let my_barrier = self.pipeline_barrier;
let my_pipeline = self.pipeline;
let bind_pipeline = !self.previous.is_graphics_pipeline_bound(&my_pipeline);
let my_sets = self.sets;
let my_push_constants = self.push_constants;
let my_vertex_buffers = self.vertex_buffers;
let my_inner = self.inner;
// Passing to the parent.
let parent = self.previous.raw_build(|cb| {
// TODO: is the pipeline layout always the same as in the graphics pipeline?
if bind_pipeline {
cb.bind_pipeline_graphics(&my_pipeline);
}
let sets: SmallVec<[_; 8]> = my_sets.list().collect(); // TODO: ideally shouldn't collect, but there are lifetime problems
cb.bind_descriptor_sets(true, &**my_pipeline.layout(), 0,
sets.iter().map(|s| s.inner()), iter::empty()); // TODO: dynamic ranges, and don't bind if not necessary
cb.push_constants(&**my_pipeline.layout(), ShaderStages::all(), 0, // TODO: stages
&my_push_constants);
cb.bind_vertex_buffers(0, my_vertex_buffers.iter().map(|buf| (buf.inner(), 0)));
match my_inner {
DrawInner::Regular { vertex_count, instance_count,
first_vertex, first_instance } =>
{
cb.draw(vertex_count, instance_count, first_vertex, first_instance);
},
DrawInner::Indexed { vertex_count, instance_count, first_index,
vertex_offset, first_instance } =>
{
cb.draw_indexed(vertex_count, instance_count, first_index,
vertex_offset, first_instance);
},
}
cb.pipeline_barrier(transitions_to_apply);
additional_elements(cb);
}, Some(my_barrier).into_iter().chain(barriers.into_iter()), final_barrier);
DrawCommandCb {
previous: parent,
pipeline: my_pipeline,
sets: my_sets,
sets_state: finished_state,
vertex_buffers: my_vertex_buffers,
}
}
}
unsafe impl<'a, L, Pv, Pl, Prp, S, Pc> ResourcesStates for DrawCommand<'a, L, Pv, Pl, Prp, S, Pc>
where L: StdCommandsList, Pl: PipelineLayout,
S: TrackedDescriptorSetsCollection, Pc: 'a
{
unsafe fn extract_buffer_state<Ob>(&mut self, buffer: &Ob)
-> Option<Ob::CommandListState>
where Ob: TrackedBuffer
{
if let Some(s) = self.sets_state.extract_buffer_state(buffer) {
return Some(s);
}
self.previous.extract_buffer_state(buffer)
}
unsafe fn extract_image_state<I>(&mut self, image: &I) -> Option<I::CommandListState>
where I: TrackedImage
{
if let Some(s) = self.sets_state.extract_image_state(image) {
return Some(s);
}
self.previous.extract_image_state(image)
}
}
unsafe impl<'a, L, Pv, Pl, Prp, S, Pc> InsideRenderPass for DrawCommand<'a, L, Pv, Pl, Prp, S, Pc>
where L: StdCommandsList + InsideRenderPass, Pl: PipelineLayout,
S: TrackedDescriptorSetsCollection, Pc: 'a
{
type RenderPass = L::RenderPass;
type Framebuffer = L::Framebuffer;
#[inline]
fn current_subpass(&self) -> u32 {
self.previous.current_subpass()
}
#[inline]
fn secondary_subpass(&self) -> bool {
self.previous.secondary_subpass()
}
#[inline]
fn render_pass(&self) -> &Arc<Self::RenderPass> {
self.previous.render_pass()
}
#[inline]
fn framebuffer(&self) -> &Self::Framebuffer {
self.previous.framebuffer()
}
}
/// Wraps around a command buffer and adds a draw command at the end of it.
pub struct DrawCommandCb<L, Pv, Pl, Prp, S>
where L: CommandBuffer, Pl: PipelineLayout, S: TrackedDescriptorSetsCollection
{
// The previous commands.
previous: L,
    // The graphics pipeline. We store it here to keep it alive.
pipeline: Arc<GraphicsPipeline<Pv, Pl, Prp>>,
// The descriptor sets. Stored here to keep them alive.
sets: S,
// State of the descriptor sets.
sets_state: S::Finished,
// FIXME: strong typing and state transitions
vertex_buffers: SmallVec<[Arc<Buffer>; 4]>,
}
unsafe impl<L, Pv, Pl, Prp, S> CommandBuffer for DrawCommandCb<L, Pv, Pl, Prp, S>
where L: CommandBuffer, Pl: PipelineLayout, S: TrackedDescriptorSetsCollection
{
type Pool = L::Pool;
type SemaphoresWaitIterator = Chain<L::SemaphoresWaitIterator,
<S::Finished as TrackedDescriptorSetsCollectionFinished>::
SemaphoresWaitIterator>;
type SemaphoresSignalIterator = Chain<L::SemaphoresSignalIterator,
<S::Finished as TrackedDescriptorSetsCollectionFinished>::
SemaphoresSignalIterator>;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> {
self.previous.inner()
}
unsafe fn on_submit<F>(&self, queue: &Arc<Queue>, mut fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>
{
// We query the parent.
let parent = self.previous.on_submit(queue, &mut fence);
// We query our sets.
let my_infos = self.sets_state.on_submit(queue, fence);
// We merge the two.
SubmitInfo {
semaphores_wait: parent.semaphores_wait.chain(my_infos.semaphores_wait),
semaphores_signal: parent.semaphores_signal.chain(my_infos.semaphores_signal),
pre_pipeline_barrier: {
let mut b = parent.pre_pipeline_barrier;
b.merge(my_infos.pre_pipeline_barrier);
b
},
post_pipeline_barrier: {
let mut b = parent.post_pipeline_barrier;
b.merge(my_infos.post_pipeline_barrier);
b
},
}
}
}


@ -1,184 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::iter;
use std::iter::Empty;
use std::sync::Arc;
use buffer::traits::TrackedBuffer;
use command_buffer::pool::CommandPool;
use command_buffer::pool::StandardCommandPool;
use command_buffer::std::OutsideRenderPass;
use command_buffer::std::ResourcesStates;
use command_buffer::std::StdCommandsList;
use command_buffer::submit::CommandBuffer;
use command_buffer::submit::SubmitInfo;
use command_buffer::sys::PipelineBarrierBuilder;
use command_buffer::sys::UnsafeCommandBuffer;
use command_buffer::sys::UnsafeCommandBufferBuilder;
use command_buffer::sys::Flags;
use command_buffer::sys::Kind;
use device::Device;
use device::Queue;
use framebuffer::EmptySinglePassRenderPass;
use framebuffer::Framebuffer as OldFramebuffer;
use image::traits::TrackedImage;
use instance::QueueFamily;
use pipeline::ComputePipeline;
use pipeline::GraphicsPipeline;
use sync::Fence;
use sync::PipelineStages;
use sync::Semaphore;
pub struct PrimaryCbBuilder<P = Arc<StandardCommandPool>> where P: CommandPool {
pool: P,
flags: Flags,
}
impl PrimaryCbBuilder<Arc<StandardCommandPool>> {
/// Builds a new primary command buffer builder.
#[inline]
pub fn new(device: &Arc<Device>, family: QueueFamily)
-> PrimaryCbBuilder<Arc<StandardCommandPool>>
{
PrimaryCbBuilder::with_pool(Device::standard_command_pool(device, family))
}
}
impl<P> PrimaryCbBuilder<P> where P: CommandPool {
/// Builds a new primary command buffer builder that uses a specific pool.
pub fn with_pool(pool: P) -> PrimaryCbBuilder<P> {
PrimaryCbBuilder {
pool: pool,
flags: Flags::SimultaneousUse, // TODO: allow customization
}
}
}
unsafe impl<P> StdCommandsList for PrimaryCbBuilder<P> where P: CommandPool {
type Pool = P;
type Output = PrimaryCb<P>;
#[inline]
fn num_commands(&self) -> usize {
0
}
#[inline]
fn check_queue_validity(&self, queue: QueueFamily) -> Result<(), ()> {
Ok(())
}
#[inline]
fn is_compute_pipeline_bound<Pl>(&self, pipeline: &Arc<ComputePipeline<Pl>>) -> bool {
false
}
#[inline]
fn is_graphics_pipeline_bound<Pv, Pl, Prp>(&self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Prp>>)
-> bool
{
false
}
#[inline]
fn buildable_state(&self) -> bool {
true
}
unsafe fn raw_build<I, F>(self, additional_elements: F, barriers: I,
final_barrier: PipelineBarrierBuilder) -> Self::Output
where F: FnOnce(&mut UnsafeCommandBufferBuilder<Self::Pool>),
I: Iterator<Item = (usize, PipelineBarrierBuilder)>
{
let kind = Kind::Primary::<EmptySinglePassRenderPass, OldFramebuffer<EmptySinglePassRenderPass>>;
let mut cb = UnsafeCommandBufferBuilder::new(self.pool, kind,
self.flags).unwrap(); // TODO: handle error
        // Since we're at the start of the command buffer, there's no need to wonder when to add the
// barriers. We have no choice but to add them immediately.
let mut pipeline_barrier = PipelineBarrierBuilder::new();
for (_, barrier) in barriers {
pipeline_barrier.merge(barrier);
}
cb.pipeline_barrier(pipeline_barrier);
// Then add the rest.
additional_elements(&mut cb);
cb.pipeline_barrier(final_barrier);
PrimaryCb {
cb: cb.build().unwrap(), // TODO: handle error
}
}
}
unsafe impl<P> ResourcesStates for PrimaryCbBuilder<P> where P: CommandPool {
#[inline]
unsafe fn extract_buffer_state<B>(&mut self, buffer: &B) -> Option<B::CommandListState>
where B: TrackedBuffer
{
None
}
#[inline]
unsafe fn extract_image_state<I>(&mut self, image: &I) -> Option<I::CommandListState>
where I: TrackedImage
{
None
}
}
unsafe impl<P> OutsideRenderPass for PrimaryCbBuilder<P> where P: CommandPool {}
pub struct PrimaryCb<P = Arc<StandardCommandPool>> where P: CommandPool {
cb: UnsafeCommandBuffer<P>,
}
unsafe impl<P> CommandBuffer for PrimaryCb<P> where P: CommandPool {
type Pool = P;
type SemaphoresWaitIterator = Empty<(Arc<Semaphore>, PipelineStages)>;
type SemaphoresSignalIterator = Empty<Arc<Semaphore>>;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> {
&self.cb
}
unsafe fn on_submit<F>(&self, queue: &Arc<Queue>, fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>
{
        // TODO: Must handle non-SimultaneousUse and Once flags; for now the `SimultaneousUse`
// flag is mandatory, so there's no safety issue. However it will need to be handled
// before allowing other flags to be used.
SubmitInfo {
semaphores_wait: iter::empty(),
semaphores_signal: iter::empty(),
pre_pipeline_barrier: PipelineBarrierBuilder::new(),
post_pipeline_barrier: PipelineBarrierBuilder::new(),
}
}
}
#[cfg(test)]
mod tests {
use command_buffer::std::PrimaryCbBuilder;
use command_buffer::std::StdCommandsList;
use command_buffer::submit::CommandBuffer;
#[test]
fn basic_submit() {
let (device, queue) = gfx_dev_and_queue!();
let _ = PrimaryCbBuilder::new(&device, queue.family()).build().submit(&queue);
}
}


@ -1,213 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::sync::Arc;
use smallvec::SmallVec;
use buffer::traits::TrackedBuffer;
use command_buffer::std::InsideRenderPass;
use command_buffer::std::OutsideRenderPass;
use command_buffer::std::ResourcesStates;
use command_buffer::std::StdCommandsList;
use command_buffer::submit::CommandBuffer;
use command_buffer::submit::SubmitInfo;
use command_buffer::sys::PipelineBarrierBuilder;
use command_buffer::sys::UnsafeCommandBuffer;
use command_buffer::sys::UnsafeCommandBufferBuilder;
use device::Queue;
use image::traits::TrackedImage;
use instance::QueueFamily;
use pipeline::ComputePipeline;
use pipeline::GraphicsPipeline;
use sync::Fence;
/// Wraps around a commands list and adds a command at the end of it that executes a secondary
/// command buffer.
pub struct ExecuteCommand<Cb, L> where Cb: CommandBuffer, L: StdCommandsList {
// Parent commands list.
previous: L,
// Command buffer to execute.
command_buffer: Cb,
}
impl<Cb, L> ExecuteCommand<Cb, L>
where Cb: CommandBuffer, L: StdCommandsList
{
/// See the documentation of the `execute` method.
#[inline]
pub fn new(previous: L, command_buffer: Cb) -> ExecuteCommand<Cb, L> {
// FIXME: check that the number of subpasses is correct
ExecuteCommand {
previous: previous,
command_buffer: command_buffer,
}
}
}
// TODO: specialize `execute()` so that multiple calls to `execute` are grouped together
unsafe impl<Cb, L> StdCommandsList for ExecuteCommand<Cb, L>
where Cb: CommandBuffer, L: StdCommandsList
{
type Pool = L::Pool;
type Output = ExecuteCommandCb<Cb, L::Output>;
#[inline]
fn num_commands(&self) -> usize {
self.previous.num_commands() + 1
}
#[inline]
fn check_queue_validity(&self, queue: QueueFamily) -> Result<(), ()> {
// FIXME: check the secondary cb's queue validity
self.previous.check_queue_validity(queue)
}
#[inline]
fn buildable_state(&self) -> bool {
self.previous.buildable_state()
}
#[inline]
fn is_compute_pipeline_bound<Pl>(&self, pipeline: &Arc<ComputePipeline<Pl>>) -> bool {
        // Bindings are always invalidated after an execute command ends.
false
}
#[inline]
fn is_graphics_pipeline_bound<Pv, Pl, Prp>(&self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Prp>>)
-> bool
{
        // Bindings are always invalidated after an execute command ends.
false
}
unsafe fn raw_build<I, F>(self, additional_elements: F, barriers: I,
final_barrier: PipelineBarrierBuilder) -> Self::Output
where F: FnOnce(&mut UnsafeCommandBufferBuilder<L::Pool>),
I: Iterator<Item = (usize, PipelineBarrierBuilder)>
{
// We split the barriers in two: those to apply after our command, and those to
// transfer to the parent so that they are applied before our command.
let my_command_num = self.num_commands();
// The transitions to apply immediately after our command.
let mut transitions_to_apply = PipelineBarrierBuilder::new();
// The barriers to transfer to the parent.
let barriers = barriers.filter_map(|(after_command_num, barrier)| {
if after_command_num >= my_command_num || !transitions_to_apply.is_empty() {
transitions_to_apply.merge(barrier);
None
} else {
Some((after_command_num, barrier))
}
}).collect::<SmallVec<[_; 8]>>();
// Passing to the parent.
let parent = {
let local_cb_to_exec = self.command_buffer.inner();
self.previous.raw_build(|cb| {
cb.execute_commands(Some(local_cb_to_exec));
cb.pipeline_barrier(transitions_to_apply);
additional_elements(cb);
}, barriers.into_iter(), final_barrier)
};
ExecuteCommandCb {
previous: parent,
command_buffer: self.command_buffer,
}
}
}
unsafe impl<Cb, L> ResourcesStates for ExecuteCommand<Cb, L>
where Cb: CommandBuffer, L: StdCommandsList
{
#[inline]
unsafe fn extract_buffer_state<Ob>(&mut self, buffer: &Ob)
-> Option<Ob::CommandListState>
where Ob: TrackedBuffer
{
// FIXME:
self.previous.extract_buffer_state(buffer)
}
#[inline]
unsafe fn extract_image_state<I>(&mut self, image: &I) -> Option<I::CommandListState>
where I: TrackedImage
{
// FIXME:
self.previous.extract_image_state(image)
}
}
unsafe impl<Cb, L> InsideRenderPass for ExecuteCommand<Cb, L>
where Cb: CommandBuffer, L: InsideRenderPass
{
type RenderPass = L::RenderPass;
type Framebuffer = L::Framebuffer;
#[inline]
fn current_subpass(&self) -> u32 {
self.previous.current_subpass()
}
#[inline]
fn secondary_subpass(&self) -> bool {
debug_assert!(self.previous.secondary_subpass());
true
}
#[inline]
fn render_pass(&self) -> &Arc<Self::RenderPass> {
self.previous.render_pass()
}
#[inline]
fn framebuffer(&self) -> &Self::Framebuffer {
self.previous.framebuffer()
}
}
unsafe impl<Cb, L> OutsideRenderPass for ExecuteCommand<Cb, L>
where Cb: CommandBuffer, L: OutsideRenderPass
{
}
/// Wraps around a command buffer and adds an execute command at the end of it.
pub struct ExecuteCommandCb<Cb, L> where Cb: CommandBuffer, L: CommandBuffer {
// The previous commands.
previous: L,
// The secondary command buffer to execute.
command_buffer: Cb,
}
unsafe impl<Cb, L> CommandBuffer for ExecuteCommandCb<Cb, L>
where Cb: CommandBuffer, L: CommandBuffer
{
type Pool = L::Pool;
type SemaphoresWaitIterator = L::SemaphoresWaitIterator;
type SemaphoresSignalIterator = L::SemaphoresSignalIterator;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> {
self.previous.inner()
}
#[inline]
unsafe fn on_submit<F>(&self, queue: &Arc<Queue>, mut fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>
{
self.previous.on_submit(queue, &mut fence)
}
}


@ -1,312 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::any::Any;
use std::iter::Chain;
use std::option::IntoIter as OptionIntoIter;
use std::sync::Arc;
use smallvec::SmallVec;
use buffer::traits::CommandBufferState;
use buffer::traits::CommandListState;
use buffer::traits::PipelineBarrierRequest;
use buffer::traits::TrackedBuffer;
use command_buffer::std::OutsideRenderPass;
use command_buffer::std::ResourcesStates;
use command_buffer::std::StdCommandsList;
use command_buffer::submit::CommandBuffer;
use command_buffer::submit::SubmitInfo;
use command_buffer::sys::PipelineBarrierBuilder;
use command_buffer::sys::UnsafeCommandBuffer;
use command_buffer::sys::UnsafeCommandBufferBuilder;
use device::Queue;
use image::traits::TrackedImage;
use instance::QueueFamily;
use pipeline::ComputePipeline;
use pipeline::GraphicsPipeline;
use sync::AccessFlagBits;
use sync::Fence;
use sync::PipelineStages;
use sync::Semaphore;
/// Wraps around a commands list and adds a fill buffer command at the end of it.
pub struct FillCommand<L, B>
where B: TrackedBuffer, L: StdCommandsList
{
// Parent commands list.
previous: L,
// The buffer to fill.
buffer: B,
// Current state of the buffer to fill, or `None` if it has been extracted.
buffer_state: Option<B::CommandListState>,
// The data to fill the buffer with.
data: u32,
// Pipeline barrier to perform before this command.
barrier: Option<PipelineBarrierRequest>,
}
impl<L, B> FillCommand<L, B>
where B: TrackedBuffer,
L: StdCommandsList + OutsideRenderPass,
{
/// See the documentation of the `fill_buffer` method.
pub fn new(mut previous: L, buffer: B, data: u32) -> FillCommand<L, B> {
// Determining the new state of the buffer, and the optional pipeline barrier to add
// before our command in the final output.
let (state, barrier) = unsafe {
let stage = PipelineStages { transfer: true, .. PipelineStages::none() };
let access = AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() };
previous.extract_buffer_state(&buffer)
.unwrap_or(buffer.initial_state())
.transition(previous.num_commands() + 1, buffer.inner(),
0, buffer.size(), true, stage, access)
};
// Minor safety check.
if let Some(ref barrier) = barrier {
assert!(barrier.after_command_num <= previous.num_commands());
}
FillCommand {
previous: previous,
buffer: buffer,
buffer_state: Some(state),
data: data,
barrier: barrier,
}
}
}
unsafe impl<L, B> StdCommandsList for FillCommand<L, B>
where B: TrackedBuffer,
L: StdCommandsList,
{
type Pool = L::Pool;
type Output = FillCommandCb<L::Output, B>;
#[inline]
fn num_commands(&self) -> usize {
self.previous.num_commands() + 1
}
#[inline]
fn check_queue_validity(&self, queue: QueueFamily) -> Result<(), ()> {
// No restriction
self.previous.check_queue_validity(queue)
}
#[inline]
fn buildable_state(&self) -> bool {
true
}
#[inline]
fn is_compute_pipeline_bound<Pl>(&self, pipeline: &Arc<ComputePipeline<Pl>>) -> bool {
self.previous.is_compute_pipeline_bound(pipeline)
}
#[inline]
fn is_graphics_pipeline_bound<Pv, Pl, Prp>(&self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Prp>>)
-> bool
{
self.previous.is_graphics_pipeline_bound(pipeline)
}
unsafe fn raw_build<I, F>(mut self, additional_elements: F, barriers: I,
mut final_barrier: PipelineBarrierBuilder) -> Self::Output
where F: FnOnce(&mut UnsafeCommandBufferBuilder<L::Pool>),
I: Iterator<Item = (usize, PipelineBarrierBuilder)>
{
// Computing the finished state, or `None` if we don't have to manage it.
let finished_state = match self.buffer_state.take().map(|s| s.finish()) {
Some((s, t)) => {
if let Some(t) = t {
final_barrier.add_buffer_barrier_request(self.buffer.inner(), t);
}
Some(s)
},
None => None,
};
// We split the barriers in two: those to apply after our command, and those to
// transfer to the parent so that they are applied before our command.
let my_command_num = self.num_commands();
// The transitions to apply immediately after our command.
let mut transitions_to_apply = PipelineBarrierBuilder::new();
// The barriers to transfer to the parent.
let barriers = barriers.filter_map(|(after_command_num, barrier)| {
if after_command_num >= my_command_num || !transitions_to_apply.is_empty() {
transitions_to_apply.merge(barrier);
None
} else {
Some((after_command_num, barrier))
}
}).collect::<SmallVec<[_; 8]>>();
// The local barrier requested by this command, or `None` if no barrier requested.
let my_barrier = if let Some(my_barrier) = self.barrier.take() {
let mut t = PipelineBarrierBuilder::new();
let c_num = my_barrier.after_command_num;
t.add_buffer_barrier_request(self.buffer.inner(), my_barrier);
Some((c_num, t))
} else {
None
};
// Passing to the parent.
let my_buffer = self.buffer;
let my_data = self.data;
let parent = self.previous.raw_build(|cb| {
cb.fill_buffer(my_buffer.inner(), 0, my_buffer.size(), my_data);
cb.pipeline_barrier(transitions_to_apply);
additional_elements(cb);
}, my_barrier.into_iter().chain(barriers.into_iter()), final_barrier);
FillCommandCb {
previous: parent,
buffer: my_buffer,
buffer_state: finished_state,
}
}
}
unsafe impl<L, B> ResourcesStates for FillCommand<L, B>
where B: TrackedBuffer,
L: StdCommandsList,
{
unsafe fn extract_buffer_state<Ob>(&mut self, buffer: &Ob)
-> Option<Ob::CommandListState>
where Ob: TrackedBuffer
{
if self.buffer.is_same_buffer(buffer) {
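            // `is_same_buffer` returning true implies that `Ob` is the same concrete type
            // as `B`, so this `Any`-based downcast should never fail. The `take().unwrap()`
            // panics if the state was already extracted, which matches the panic documented
            // on `extract_buffer_state`.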
let s: &mut Option<Ob::CommandListState> = (&mut self.buffer_state as &mut Any)
.downcast_mut().unwrap();
Some(s.take().unwrap())
} else {
self.previous.extract_buffer_state(buffer)
}
}
unsafe fn extract_image_state<I>(&mut self, image: &I) -> Option<I::CommandListState>
where I: TrackedImage
{
if self.buffer.is_same_image(image) {
let s: &mut Option<I::CommandListState> = (&mut self.buffer_state as &mut Any)
.downcast_mut().unwrap();
Some(s.take().unwrap())
} else {
self.previous.extract_image_state(image)
}
}
}
unsafe impl<L, B> OutsideRenderPass for FillCommand<L, B>
where B: TrackedBuffer,
L: StdCommandsList,
{
}
/// Wraps around a command buffer and adds a fill buffer command at the end of it.
pub struct FillCommandCb<L, B> where B: TrackedBuffer, L: CommandBuffer {
// The previous commands.
previous: L,
    // The buffer to fill.
    buffer: B,
    // The state of the buffer to fill, or `None` if we don't manage it. Will be used to
    // determine which semaphores or barriers to add when submitting.
buffer_state: Option<B::FinishedState>,
}
unsafe impl<L, B> CommandBuffer for FillCommandCb<L, B>
where B: TrackedBuffer, L: CommandBuffer
{
type Pool = L::Pool;
type SemaphoresWaitIterator = Chain<L::SemaphoresWaitIterator,
OptionIntoIter<(Arc<Semaphore>, PipelineStages)>>;
type SemaphoresSignalIterator = Chain<L::SemaphoresSignalIterator,
OptionIntoIter<Arc<Semaphore>>>;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> {
self.previous.inner()
}
unsafe fn on_submit<F>(&self, queue: &Arc<Queue>, mut fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>
{
// We query the parent.
let parent = self.previous.on_submit(queue, &mut fence);
// Then build our own output that modifies the parent's.
if let Some(ref buffer_state) = self.buffer_state {
let submit_infos = buffer_state.on_submit(&self.buffer, queue, fence);
let mut out = SubmitInfo {
semaphores_wait: parent.semaphores_wait.chain(submit_infos.pre_semaphore.into_iter()),
semaphores_signal: parent.semaphores_signal.chain(submit_infos.post_semaphore.into_iter()),
pre_pipeline_barrier: parent.pre_pipeline_barrier,
post_pipeline_barrier: parent.post_pipeline_barrier,
};
if let Some(pre) = submit_infos.pre_barrier {
out.pre_pipeline_barrier.add_buffer_barrier_request(self.buffer.inner(), pre);
}
if let Some(post) = submit_infos.post_barrier {
out.post_pipeline_barrier.add_buffer_barrier_request(self.buffer.inner(), post);
}
out
} else {
SubmitInfo {
semaphores_wait: parent.semaphores_wait.chain(None.into_iter()),
semaphores_signal: parent.semaphores_signal.chain(None.into_iter()),
pre_pipeline_barrier: parent.pre_pipeline_barrier,
post_pipeline_barrier: parent.post_pipeline_barrier,
}
}
}
}
#[cfg(test)]
mod tests {
use std::time::Duration;
use buffer::BufferUsage;
use buffer::CpuAccessibleBuffer;
use command_buffer::std::PrimaryCbBuilder;
use command_buffer::std::StdCommandsList;
use command_buffer::submit::CommandBuffer;
#[test]
fn basic() {
let (device, queue) = gfx_dev_and_queue!();
let buffer = CpuAccessibleBuffer::from_data(&device, &BufferUsage::transfer_dest(),
Some(queue.family()), 0u32).unwrap();
let _ = PrimaryCbBuilder::new(&device, queue.family())
.fill_buffer(buffer.clone(), 128u32)
.build()
.submit(&queue);
let content = buffer.read(Duration::from_secs(0)).unwrap();
assert_eq!(*content, 128);
}
}


@ -1,247 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::iter;
use std::sync::Arc;
use buffer::traits::TrackedBuffer;
use command_buffer::DynamicState;
use command_buffer::pool::CommandPool;
use command_buffer::submit::CommandBuffer;
use command_buffer::sys::PipelineBarrierBuilder;
use command_buffer::sys::UnsafeCommandBufferBuilder;
use descriptor::PipelineLayout;
use descriptor::descriptor_set::collection::TrackedDescriptorSetsCollection;
use framebuffer::traits::Framebuffer;
use framebuffer::RenderPass;
use framebuffer::RenderPassClearValues;
use image::traits::TrackedImage;
use instance::QueueFamily;
use pipeline::ComputePipeline;
use pipeline::GraphicsPipeline;
use pipeline::vertex::Source;
pub use self::empty::PrimaryCb;
pub use self::empty::PrimaryCbBuilder;
pub mod dispatch;
pub mod draw;
pub mod empty;
pub mod execute;
pub mod fill_buffer;
pub mod render_pass;
pub mod update_buffer;
/// A list of commands that can be turned into a command buffer.
pub unsafe trait StdCommandsList: ResourcesStates {
/// The type of the pool that will be used to create the command buffer.
type Pool: CommandPool;
/// The type of the command buffer that will be generated.
type Output: CommandBuffer<Pool = Self::Pool>;
/// Adds a command that writes the content of a buffer.
///
/// After this command is executed, the content of `buffer` will become `data`.
#[inline]
fn update_buffer<'a, B, D: ?Sized>(self, buffer: B, data: &'a D)
-> update_buffer::UpdateCommand<'a, Self, B, D>
where Self: Sized + OutsideRenderPass, B: TrackedBuffer, D: Copy + 'static
{
update_buffer::UpdateCommand::new(self, buffer, data)
}
/// Adds a command that writes the content of a buffer.
#[inline]
fn fill_buffer<B>(self, buffer: B, data: u32) -> fill_buffer::FillCommand<Self, B>
where Self: Sized + OutsideRenderPass, B: TrackedBuffer
{
fill_buffer::FillCommand::new(self, buffer, data)
}
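    // A hypothetical chaining sketch (assuming `device`, `queue` and a tracked `buffer`
    // such as a `CpuAccessibleBuffer<u32>` already exist): both commands require being
    // outside a render pass, and each call wraps the list in a new type:
    //
    //     let l = PrimaryCbBuilder::new(&device, queue.family())
    //         .update_buffer(buffer.clone(), &12u32)
    //         .fill_buffer(buffer.clone(), 12u32);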
/// Adds a command that executes a secondary command buffer.
///
    /// When you create a command buffer, you can create either a primary command buffer
    /// or a secondary command buffer. Secondary command buffers can't be executed
/// directly, but can be executed from a primary command buffer.
///
/// A secondary command buffer can't execute another secondary command buffer. The only way
/// you can use `execute` is to make a primary command buffer call a secondary command buffer.
#[inline]
fn execute<Cb>(self, command_buffer: Cb) -> execute::ExecuteCommand<Cb, Self>
where Self: Sized, Cb: CommandBuffer
{
execute::ExecuteCommand::new(self, command_buffer)
}
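    // Sketch (assuming `secondary_cb` is a secondary command buffer recorded for a
    // compatible queue family):
    //
    //     let l = l.execute(secondary_cb);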
/// Adds a command that executes a compute shader.
///
    /// The `dimensions` are the number of work groups to start. The GPU will execute the
/// compute shader `dimensions[0] * dimensions[1] * dimensions[2]` times.
///
/// The `pipeline` is the compute pipeline that will be executed, and the sets and push
/// constants will be accessible to all the invocations.
#[inline]
fn dispatch<'a, Pl, S, Pc>(self, pipeline: Arc<ComputePipeline<Pl>>, sets: S,
dimensions: [u32; 3], push_constants: &'a Pc)
-> dispatch::DispatchCommand<'a, Self, Pl, S, Pc>
where Self: Sized + StdCommandsList + OutsideRenderPass, Pl: PipelineLayout,
S: TrackedDescriptorSetsCollection, Pc: 'a
{
dispatch::DispatchCommand::new(self, pipeline, sets, dimensions, push_constants)
}
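    // Sketch (the pipeline, sets and push constants are assumed to exist): this starts
    // 64 * 64 * 1 = 4096 work groups of the compute shader:
    //
    //     let l = l.dispatch(pipeline.clone(), sets, [64, 64, 1], &push_constants);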
/// Adds a command that starts a render pass.
///
/// If `secondary` is true, then you will only be able to add secondary command buffers while
    /// you're inside the first subpass of the render pass. If `secondary` is false, you will only
/// be able to add inline draw commands and not secondary command buffers.
///
/// You must call this before you can add draw commands.
#[inline]
fn begin_render_pass<F, C>(self, framebuffer: F, secondary: bool, clear_values: C)
-> render_pass::BeginRenderPassCommand<Self, F::RenderPass, F>
where Self: Sized + OutsideRenderPass,
F: Framebuffer, F::RenderPass: RenderPass + RenderPassClearValues<C>
{
render_pass::BeginRenderPassCommand::new(self, framebuffer, secondary, clear_values)
}
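    // Sketch of a single-subpass pass recorded inline (`secondary` set to false;
    // the framebuffer, pipeline, vertex buffer, sets and push constants are assumed
    // to exist):
    //
    //     let l = l.begin_render_pass(framebuffer.clone(), false, clear_values)
    //         .draw(pipeline.clone(), &dynamic_state, vertex_buffer.clone(), sets, &pc)
    //         .end_render_pass();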
/// Adds a command that jumps to the next subpass of the current render pass.
fn next_subpass(self, secondary: bool) -> render_pass::NextSubpassCommand<Self>
where Self: Sized + InsideRenderPass
{
render_pass::NextSubpassCommand::new(self, secondary)
}
/// Adds a command that ends the current render pass.
///
/// This must be called after you went through all the subpasses and before you can build
/// the command buffer or add further commands.
fn end_render_pass(self) -> render_pass::EndRenderPassCommand<Self>
where Self: Sized + InsideRenderPass
{
render_pass::EndRenderPassCommand::new(self)
}
/// Adds a command that draws.
///
/// Can only be used from inside a render pass.
#[inline]
fn draw<'a, Pv, Pl, Prp, S, Pc, V>(self, pipeline: Arc<GraphicsPipeline<Pv, Pl, Prp>>,
dynamic: &DynamicState, vertices: V, sets: S,
push_constants: &'a Pc)
-> draw::DrawCommand<'a, Self, Pv, Pl, Prp, S, Pc>
where Self: Sized + StdCommandsList + InsideRenderPass, Pl: PipelineLayout,
S: TrackedDescriptorSetsCollection, Pc: 'a, Pv: Source<V>
{
draw::DrawCommand::regular(self, pipeline, dynamic, vertices, sets, push_constants)
}
/// Returns true if the command buffer can be built. This function should always return true,
/// except when we're building a primary command buffer that is inside a render pass.
fn buildable_state(&self) -> bool;
/// Turns the commands list into a command buffer that can be submitted.
fn build(self) -> Self::Output where Self: Sized {
assert!(self.buildable_state(), "Tried to build a command buffer still inside a \
render pass");
unsafe {
self.raw_build(|_| {}, iter::empty(), PipelineBarrierBuilder::new())
}
}
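    // A minimal end-to-end sketch (assuming `device`, `queue` and a tracked `buffer`
    // were created as in the tests of this module): each command consumes the list and
    // returns a new wrapper type, and `build()` finally walks the whole chain through
    // `raw_build`:
    //
    //     let cb = PrimaryCbBuilder::new(&device, queue.family())
    //         .fill_buffer(buffer.clone(), 0u32)
    //         .build();           // panics if still inside a render pass
    //     cb.submit(&queue);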
/// Returns the number of commands in the commands list.
///
    /// Note that multiple actual commands may count as just one.
fn num_commands(&self) -> usize;
/// Checks whether the command can be executed on the given queue family.
// TODO: error type?
fn check_queue_validity(&self, queue: QueueFamily) -> Result<(), ()>;
    /// Returns true if the given compute pipeline is currently bound in the commands list.
fn is_compute_pipeline_bound<Pl>(&self, pipeline: &Arc<ComputePipeline<Pl>>) -> bool;
    /// Returns true if the given graphics pipeline is currently bound in the commands list.
fn is_graphics_pipeline_bound<Pv, Pl, Prp>(&self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Prp>>)
-> bool;
/// Turns the commands list into a command buffer.
///
/// This function accepts additional arguments that will customize the output:
///
/// - `additional_elements` is a closure that must be called on the command buffer builder
    ///   after it has finished building and before `final_barrier` is added.
/// - `barriers` is a list of pipeline barriers accompanied by a command number. The
    ///   pipeline barrier must happen after the given command number. Usually you want all
    ///   the command numbers to be less than `num_commands`.
/// - `final_barrier` is a pipeline barrier that must be added at the end of the
/// command buffer builder.
///
/// This function doesn't check that `buildable_state` returns true.
unsafe fn raw_build<I, F>(self, additional_elements: F, barriers: I,
final_barrier: PipelineBarrierBuilder) -> Self::Output
where F: FnOnce(&mut UnsafeCommandBufferBuilder<Self::Pool>),
I: Iterator<Item = (usize, PipelineBarrierBuilder)>;
}
/// Extension trait for `StdCommandsList` that indicates that we're outside a render pass.
pub unsafe trait OutsideRenderPass: StdCommandsList {}
/// Extension trait for `StdCommandsList` that indicates that we're inside a render pass.
pub unsafe trait InsideRenderPass: StdCommandsList {
type RenderPass: RenderPass;
type Framebuffer: Framebuffer;
/// Returns the number of the subpass we're in. The value is 0-indexed, so immediately after
/// calling `begin_render_pass` the value will be `0`.
///
    /// The value should always be strictly less than the number of subpasses in the render pass.
fn current_subpass(&self) -> u32;
/// If true, only secondary command buffers can be added inside the subpass. If false, only
/// inline draw commands can be added.
fn secondary_subpass(&self) -> bool;
// TODO: don't use Arc
fn render_pass(&self) -> &Arc<Self::RenderPass>;
fn framebuffer(&self) -> &Self::Framebuffer;
}
/// Trait for objects that hold states of buffers and images.
pub unsafe trait ResourcesStates {
/// Returns the state of a buffer, or `None` if the buffer hasn't been used yet.
///
/// Whether the buffer passed as parameter is the same as the one in the commands list must be
/// determined with the `is_same` method of `TrackedBuffer`.
///
/// Calling this function extracts the state from the list, meaning that the state will be
/// managed by the code that called this function instead of being managed by the object
    /// itself. This is why the function is unsafe.
///
/// # Panic
///
/// - Panics if the state of that buffer has already been previously extracted.
///
unsafe fn extract_buffer_state<B>(&mut self, buffer: &B) -> Option<B::CommandListState>
where B: TrackedBuffer;
/// Returns the state of an image, or `None` if the image hasn't been used yet.
///
/// See the description of `extract_buffer_state`.
///
/// # Panic
///
/// - Panics if the state of that image has already been previously extracted.
///
unsafe fn extract_image_state<I>(&mut self, image: &I) -> Option<I::CommandListState>
where I: TrackedImage;
}
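// Typical usage sketch from a command constructor (this mirrors `FillCommand::new`):
// extract the buffer's state from the parent list, or fall back to its initial state,
// then record the transition for the command being added:
//
//     let (state, barrier) = unsafe {
//         let stage = PipelineStages { transfer: true, .. PipelineStages::none() };
//         let access = AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() };
//         previous.extract_buffer_state(&buffer)
//                 .unwrap_or(buffer.initial_state())
//                 .transition(previous.num_commands() + 1, buffer.inner(),
//                             0, buffer.size(), true, stage, access)
//     };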


@ -1,497 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::iter;
use std::sync::Arc;
use std::ops::Range;
use smallvec::SmallVec;
use buffer::traits::TrackedBuffer;
use command_buffer::std::InsideRenderPass;
use command_buffer::std::OutsideRenderPass;
use command_buffer::std::ResourcesStates;
use command_buffer::std::StdCommandsList;
use command_buffer::submit::CommandBuffer;
use command_buffer::submit::SubmitInfo;
use command_buffer::sys::PipelineBarrierBuilder;
use command_buffer::sys::UnsafeCommandBuffer;
use command_buffer::sys::UnsafeCommandBufferBuilder;
use device::Queue;
use format::ClearValue;
use framebuffer::traits::Framebuffer;
use framebuffer::RenderPass;
use framebuffer::RenderPassClearValues;
use image::traits::TrackedImage;
use instance::QueueFamily;
use pipeline::ComputePipeline;
use pipeline::GraphicsPipeline;
use sync::Fence;
/// Wraps around a commands list and adds a begin render pass command at the end of it.
pub struct BeginRenderPassCommand<L, Rp, F>
where L: StdCommandsList, Rp: RenderPass, F: Framebuffer
{
// Parent commands list.
previous: L,
// True if only secondary command buffers can be added.
secondary: bool,
rect: [Range<u32>; 2],
clear_values: SmallVec<[ClearValue; 6]>,
render_pass: Arc<Rp>,
framebuffer: F,
}
impl<L, F> BeginRenderPassCommand<L, F::RenderPass, F>
where L: StdCommandsList + OutsideRenderPass, F: Framebuffer
{
/// See the documentation of the `begin_render_pass` method.
// TODO: allow setting more parameters
pub fn new<C>(previous: L, framebuffer: F, secondary: bool, clear_values: C)
-> BeginRenderPassCommand<L, F::RenderPass, F>
where F::RenderPass: RenderPassClearValues<C>
{
// FIXME: transition states of the images in the framebuffer
let clear_values = framebuffer.render_pass().convert_clear_values(clear_values)
.collect();
let rect = [0 .. framebuffer.dimensions()[0], 0 .. framebuffer.dimensions()[1]];
let render_pass = framebuffer.render_pass().clone();
BeginRenderPassCommand {
previous: previous,
secondary: secondary,
rect: rect,
clear_values: clear_values,
render_pass: render_pass,
framebuffer: framebuffer,
}
}
}
unsafe impl<L, Rp, Fb> StdCommandsList for BeginRenderPassCommand<L, Rp, Fb>
where L: StdCommandsList, Rp: RenderPass, Fb: Framebuffer
{
type Pool = L::Pool;
type Output = BeginRenderPassCommandCb<L::Output, Rp, Fb>;
#[inline]
fn num_commands(&self) -> usize {
self.previous.num_commands() + 1
}
#[inline]
fn check_queue_validity(&self, queue: QueueFamily) -> Result<(), ()> {
if !queue.supports_graphics() {
return Err(());
}
self.previous.check_queue_validity(queue)
}
#[inline]
fn buildable_state(&self) -> bool {
// We are no longer in a buildable state after entering a render pass.
false
}
#[inline]
fn is_compute_pipeline_bound<Pl>(&self, pipeline: &Arc<ComputePipeline<Pl>>) -> bool {
self.previous.is_compute_pipeline_bound(pipeline)
}
#[inline]
fn is_graphics_pipeline_bound<Pv, Pl, Prp>(&self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Prp>>)
-> bool
{
self.previous.is_graphics_pipeline_bound(pipeline)
}
unsafe fn raw_build<I, F>(self, additional_elements: F, barriers: I,
final_barrier: PipelineBarrierBuilder) -> Self::Output
where F: FnOnce(&mut UnsafeCommandBufferBuilder<L::Pool>),
I: Iterator<Item = (usize, PipelineBarrierBuilder)>
{
let my_command_num = self.num_commands();
let barriers = barriers.map(move |(n, b)| { assert!(n < my_command_num); (n, b) });
let my_render_pass = self.render_pass;
let my_framebuffer = self.framebuffer;
let my_clear_values = self.clear_values;
let my_rect = self.rect;
let my_secondary = self.secondary;
let parent = self.previous.raw_build(|cb| {
cb.begin_render_pass(my_render_pass.inner(), &my_framebuffer,
my_clear_values.into_iter(), my_rect, my_secondary);
additional_elements(cb);
}, barriers, final_barrier);
BeginRenderPassCommandCb {
previous: parent,
render_pass: my_render_pass,
framebuffer: my_framebuffer,
}
}
}
unsafe impl<L, Rp, F> ResourcesStates for BeginRenderPassCommand<L, Rp, F>
where L: StdCommandsList, Rp: RenderPass, F: Framebuffer
{
unsafe fn extract_buffer_state<Ob>(&mut self, buffer: &Ob)
-> Option<Ob::CommandListState>
where Ob: TrackedBuffer
{
// FIXME: state of images in the framebuffer
self.previous.extract_buffer_state(buffer)
}
unsafe fn extract_image_state<I>(&mut self, image: &I) -> Option<I::CommandListState>
where I: TrackedImage
{
// FIXME: state of images in the framebuffer
self.previous.extract_image_state(image)
}
}
unsafe impl<L, Rp, F> InsideRenderPass for BeginRenderPassCommand<L, Rp, F>
where L: StdCommandsList, Rp: RenderPass, F: Framebuffer
{
type RenderPass = Rp;
type Framebuffer = F;
#[inline]
fn current_subpass(&self) -> u32 {
0
}
#[inline]
fn secondary_subpass(&self) -> bool {
self.secondary
}
#[inline]
fn render_pass(&self) -> &Arc<Self::RenderPass> {
&self.render_pass
}
#[inline]
fn framebuffer(&self) -> &Self::Framebuffer {
&self.framebuffer
}
}
/// Wraps around a command buffer and adds a begin render pass command at the end of it.
pub struct BeginRenderPassCommandCb<L, Rp, F>
where L: CommandBuffer, Rp: RenderPass, F: Framebuffer
{
// The previous commands.
previous: L,
render_pass: Arc<Rp>,
framebuffer: F,
}
unsafe impl<L, Rp, Fb> CommandBuffer for BeginRenderPassCommandCb<L, Rp, Fb>
where L: CommandBuffer, Rp: RenderPass, Fb: Framebuffer
{
type Pool = L::Pool;
type SemaphoresWaitIterator = L::SemaphoresWaitIterator;
type SemaphoresSignalIterator = L::SemaphoresSignalIterator;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> {
self.previous.inner()
}
#[inline]
unsafe fn on_submit<F>(&self, queue: &Arc<Queue>, mut fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>
{
self.previous.on_submit(queue, &mut fence)
}
}
/// Wraps around a commands list and adds a command at the end of it that jumps to the next subpass.
pub struct NextSubpassCommand<L> where L: StdCommandsList {
// Parent commands list.
previous: L,
// True if only secondary command buffers can be added.
secondary: bool,
}
impl<L> NextSubpassCommand<L> where L: StdCommandsList + InsideRenderPass {
/// See the documentation of the `next_subpass` method.
#[inline]
pub fn new(previous: L, secondary: bool) -> NextSubpassCommand<L> {
// FIXME: put this check
//assert!(previous.current_subpass() + 1 < previous.render_pass().num_subpasses()); // TODO: error instead
NextSubpassCommand {
previous: previous,
secondary: secondary,
}
}
}
unsafe impl<L> StdCommandsList for NextSubpassCommand<L>
where L: StdCommandsList + InsideRenderPass
{
type Pool = L::Pool;
type Output = NextSubpassCommandCb<L::Output>;
#[inline]
fn num_commands(&self) -> usize {
self.previous.num_commands() + 1
}
#[inline]
fn check_queue_validity(&self, queue: QueueFamily) -> Result<(), ()> {
if !queue.supports_graphics() {
return Err(());
}
self.previous.check_queue_validity(queue)
}
#[inline]
fn buildable_state(&self) -> bool {
false
}
#[inline]
fn is_compute_pipeline_bound<Pl>(&self, pipeline: &Arc<ComputePipeline<Pl>>) -> bool {
self.previous.is_compute_pipeline_bound(pipeline)
}
#[inline]
fn is_graphics_pipeline_bound<Pv, Pl, Prp>(&self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Prp>>)
-> bool
{
self.previous.is_graphics_pipeline_bound(pipeline)
}
unsafe fn raw_build<I, F>(self, additional_elements: F, barriers: I,
final_barrier: PipelineBarrierBuilder) -> Self::Output
where F: FnOnce(&mut UnsafeCommandBufferBuilder<L::Pool>),
I: Iterator<Item = (usize, PipelineBarrierBuilder)>
{
let secondary = self.secondary;
let parent = self.previous.raw_build(|cb| {
cb.next_subpass(secondary);
additional_elements(cb);
}, barriers, final_barrier);
NextSubpassCommandCb {
previous: parent,
}
}
}
unsafe impl<L> ResourcesStates for NextSubpassCommand<L>
where L: StdCommandsList + InsideRenderPass
{
#[inline]
unsafe fn extract_buffer_state<Ob>(&mut self, buffer: &Ob)
-> Option<Ob::CommandListState>
where Ob: TrackedBuffer
{
self.previous.extract_buffer_state(buffer)
}
#[inline]
unsafe fn extract_image_state<I>(&mut self, image: &I) -> Option<I::CommandListState>
where I: TrackedImage
{
self.previous.extract_image_state(image)
}
}
unsafe impl<L> InsideRenderPass for NextSubpassCommand<L>
where L: StdCommandsList + InsideRenderPass
{
type RenderPass = L::RenderPass;
type Framebuffer = L::Framebuffer;
#[inline]
fn current_subpass(&self) -> u32 {
self.previous.current_subpass() + 1
}
#[inline]
fn secondary_subpass(&self) -> bool {
self.secondary
}
#[inline]
fn render_pass(&self) -> &Arc<Self::RenderPass> {
self.previous.render_pass()
}
#[inline]
fn framebuffer(&self) -> &Self::Framebuffer {
self.previous.framebuffer()
}
}
/// Wraps around a command buffer and adds a next subpass command at the end of it.
pub struct NextSubpassCommandCb<L> where L: CommandBuffer {
// The previous commands.
previous: L,
}
unsafe impl<L> CommandBuffer for NextSubpassCommandCb<L> where L: CommandBuffer {
type Pool = L::Pool;
type SemaphoresWaitIterator = L::SemaphoresWaitIterator;
type SemaphoresSignalIterator = L::SemaphoresSignalIterator;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> {
self.previous.inner()
}
#[inline]
unsafe fn on_submit<F>(&self, queue: &Arc<Queue>, mut fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>
{
self.previous.on_submit(queue, &mut fence)
}
}
/// Wraps around a commands list and adds an end render pass command at the end of it.
pub struct EndRenderPassCommand<L> where L: StdCommandsList {
// Parent commands list.
previous: L,
}
impl<L> EndRenderPassCommand<L> where L: StdCommandsList + InsideRenderPass {
/// See the documentation of the `end_render_pass` method.
#[inline]
pub fn new(previous: L) -> EndRenderPassCommand<L> {
// FIXME: check that the number of subpasses is correct
EndRenderPassCommand {
previous: previous,
}
}
}
unsafe impl<L> StdCommandsList for EndRenderPassCommand<L> where L: StdCommandsList {
type Pool = L::Pool;
type Output = EndRenderPassCommandCb<L::Output>;
#[inline]
fn num_commands(&self) -> usize {
self.previous.num_commands() + 1
}
#[inline]
fn check_queue_validity(&self, queue: QueueFamily) -> Result<(), ()> {
if !queue.supports_graphics() {
return Err(());
}
self.previous.check_queue_validity(queue)
}
#[inline]
fn buildable_state(&self) -> bool {
true
}
#[inline]
fn is_compute_pipeline_bound<Pl>(&self, pipeline: &Arc<ComputePipeline<Pl>>) -> bool {
self.previous.is_compute_pipeline_bound(pipeline)
}
#[inline]
fn is_graphics_pipeline_bound<Pv, Pl, Prp>(&self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Prp>>)
-> bool
{
self.previous.is_graphics_pipeline_bound(pipeline)
}
unsafe fn raw_build<I, F>(self, additional_elements: F, barriers: I,
final_barrier: PipelineBarrierBuilder) -> Self::Output
where F: FnOnce(&mut UnsafeCommandBufferBuilder<L::Pool>),
I: Iterator<Item = (usize, PipelineBarrierBuilder)>
{
// We need to flush all the barriers because regular (i.e. non-self-referencing) barriers
// aren't allowed inside render passes.
let mut pipeline_barrier = PipelineBarrierBuilder::new();
for (num, barrier) in barriers {
debug_assert!(num <= self.num_commands());
pipeline_barrier.merge(barrier);
}
let parent = self.previous.raw_build(|cb| {
cb.end_render_pass();
cb.pipeline_barrier(pipeline_barrier);
additional_elements(cb);
}, iter::empty(), final_barrier);
EndRenderPassCommandCb {
previous: parent,
}
}
}
unsafe impl<L> ResourcesStates for EndRenderPassCommand<L> where L: StdCommandsList {
#[inline]
unsafe fn extract_buffer_state<Ob>(&mut self, buffer: &Ob)
-> Option<Ob::CommandListState>
where Ob: TrackedBuffer
{
self.previous.extract_buffer_state(buffer)
}
#[inline]
unsafe fn extract_image_state<I>(&mut self, image: &I) -> Option<I::CommandListState>
where I: TrackedImage
{
self.previous.extract_image_state(image)
}
}
unsafe impl<L> OutsideRenderPass for EndRenderPassCommand<L> where L: StdCommandsList {
}
/// Wraps around a command buffer and adds an end render pass command at the end of it.
pub struct EndRenderPassCommandCb<L> where L: CommandBuffer {
// The previous commands.
previous: L,
}
unsafe impl<L> CommandBuffer for EndRenderPassCommandCb<L> where L: CommandBuffer {
type Pool = L::Pool;
type SemaphoresWaitIterator = L::SemaphoresWaitIterator;
type SemaphoresSignalIterator = L::SemaphoresSignalIterator;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> {
self.previous.inner()
}
#[inline]
unsafe fn on_submit<F>(&self, queue: &Arc<Queue>, mut fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>
{
self.previous.on_submit(queue, &mut fence)
}
}


@ -1,316 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::any::Any;
use std::iter::Chain;
use std::option::IntoIter as OptionIntoIter;
use std::sync::Arc;
use smallvec::SmallVec;
use buffer::traits::CommandBufferState;
use buffer::traits::CommandListState;
use buffer::traits::PipelineBarrierRequest;
use buffer::traits::TrackedBuffer;
use command_buffer::std::OutsideRenderPass;
use command_buffer::std::ResourcesStates;
use command_buffer::std::StdCommandsList;
use command_buffer::submit::CommandBuffer;
use command_buffer::submit::SubmitInfo;
use command_buffer::sys::PipelineBarrierBuilder;
use command_buffer::sys::UnsafeCommandBuffer;
use command_buffer::sys::UnsafeCommandBufferBuilder;
use device::Queue;
use image::traits::TrackedImage;
use instance::QueueFamily;
use pipeline::ComputePipeline;
use pipeline::GraphicsPipeline;
use sync::AccessFlagBits;
use sync::Fence;
use sync::PipelineStages;
use sync::Semaphore;
/// Wraps around a commands list and adds an update buffer command at the end of it.
pub struct UpdateCommand<'a, L, B, D: ?Sized>
where B: TrackedBuffer, L: StdCommandsList, D: 'static
{
// Parent commands list.
previous: L,
// The buffer to update.
buffer: B,
// Current state of the buffer to update, or `None` if it has been extracted.
buffer_state: Option<B::CommandListState>,
// The data to write to the buffer.
data: &'a D,
// Pipeline barrier to perform before this command.
barrier: Option<PipelineBarrierRequest>,
}
impl<'a, L, B, D: ?Sized> UpdateCommand<'a, L, B, D>
where B: TrackedBuffer,
L: StdCommandsList + OutsideRenderPass,
D: Copy + 'static,
{
/// See the documentation of the `update_buffer` method.
pub fn new(mut previous: L, buffer: B, data: &'a D) -> UpdateCommand<'a, L, B, D> {
// Determining the new state of the buffer, and the optional pipeline barrier to add
// before our command in the final output.
let (state, barrier) = unsafe {
let stage = PipelineStages { transfer: true, .. PipelineStages::none() };
let access = AccessFlagBits { transfer_write: true, .. AccessFlagBits::none() };
previous.extract_buffer_state(&buffer)
.unwrap_or(buffer.initial_state())
.transition(previous.num_commands() + 1, buffer.inner(),
0, buffer.size(), true, stage, access)
};
// Minor safety check.
if let Some(ref barrier) = barrier {
assert!(barrier.after_command_num <= previous.num_commands());
}
UpdateCommand {
previous: previous,
buffer: buffer,
buffer_state: Some(state),
data: data,
barrier: barrier,
}
}
}
unsafe impl<'a, L, B, D: ?Sized> StdCommandsList for UpdateCommand<'a, L, B, D>
where B: TrackedBuffer,
L: StdCommandsList,
D: Copy + 'static,
{
type Pool = L::Pool;
type Output = UpdateCommandCb<L::Output, B>;
#[inline]
fn num_commands(&self) -> usize {
self.previous.num_commands() + 1
}
#[inline]
fn check_queue_validity(&self, queue: QueueFamily) -> Result<(), ()> {
// No restriction
self.previous.check_queue_validity(queue)
}
#[inline]
fn buildable_state(&self) -> bool {
true
}
#[inline]
fn is_compute_pipeline_bound<Pl>(&self, pipeline: &Arc<ComputePipeline<Pl>>) -> bool {
self.previous.is_compute_pipeline_bound(pipeline)
}
#[inline]
fn is_graphics_pipeline_bound<Pv, Pl, Prp>(&self, pipeline: &Arc<GraphicsPipeline<Pv, Pl, Prp>>)
-> bool
{
self.previous.is_graphics_pipeline_bound(pipeline)
}
unsafe fn raw_build<I, F>(mut self, additional_elements: F, barriers: I,
mut final_barrier: PipelineBarrierBuilder) -> Self::Output
where F: FnOnce(&mut UnsafeCommandBufferBuilder<L::Pool>),
I: Iterator<Item = (usize, PipelineBarrierBuilder)>
{
// Computing the finished state, or `None` if we don't have to manage it.
let finished_state = match self.buffer_state.take().map(|s| s.finish()) {
Some((s, t)) => {
if let Some(t) = t {
final_barrier.add_buffer_barrier_request(self.buffer.inner(), t);
}
Some(s)
},
None => None,
};
// We split the barriers in two: those to apply after our command, and those to
// transfer to the parent so that they are applied before our command.
let my_command_num = self.num_commands();
// The transitions to apply immediately after our command.
let mut transitions_to_apply = PipelineBarrierBuilder::new();
// The barriers to transfer to the parent.
let barriers = barriers.filter_map(|(after_command_num, barrier)| {
if after_command_num >= my_command_num || !transitions_to_apply.is_empty() {
transitions_to_apply.merge(barrier);
None
} else {
Some((after_command_num, barrier))
}
}).collect::<SmallVec<[_; 8]>>();
// The local barrier requested by this command, or `None` if no barrier requested.
let my_barrier = if let Some(my_barrier) = self.barrier.take() {
let mut t = PipelineBarrierBuilder::new();
let c_num = my_barrier.after_command_num;
t.add_buffer_barrier_request(self.buffer.inner(), my_barrier);
Some((c_num, t))
} else {
None
};
// Passing to the parent.
let my_buffer = self.buffer;
let my_data = self.data;
let parent = self.previous.raw_build(|cb| {
cb.update_buffer(my_buffer.inner(), 0, my_buffer.size(), my_data);
cb.pipeline_barrier(transitions_to_apply);
additional_elements(cb);
}, my_barrier.into_iter().chain(barriers.into_iter()), final_barrier);
UpdateCommandCb {
previous: parent,
buffer: my_buffer,
buffer_state: finished_state,
}
}
}
unsafe impl<'a, L, B, D: ?Sized> ResourcesStates for UpdateCommand<'a, L, B, D>
where B: TrackedBuffer,
L: StdCommandsList,
D: Copy + 'static,
{
unsafe fn extract_buffer_state<Ob>(&mut self, buffer: &Ob)
-> Option<Ob::CommandListState>
where Ob: TrackedBuffer
{
if self.buffer.is_same_buffer(buffer) {
let s: &mut Option<Ob::CommandListState> = (&mut self.buffer_state as &mut Any)
.downcast_mut().unwrap();
Some(s.take().unwrap())
} else {
self.previous.extract_buffer_state(buffer)
}
}
unsafe fn extract_image_state<I>(&mut self, image: &I) -> Option<I::CommandListState>
where I: TrackedImage
{
if self.buffer.is_same_image(image) {
let s: &mut Option<I::CommandListState> = (&mut self.buffer_state as &mut Any)
.downcast_mut().unwrap();
Some(s.take().unwrap())
} else {
self.previous.extract_image_state(image)
}
}
}
unsafe impl<'a, L, B, D: ?Sized> OutsideRenderPass for UpdateCommand<'a, L, B, D>
where B: TrackedBuffer,
L: StdCommandsList,
D: Copy + 'static,
{
}
/// Wraps around a command buffer and adds an update buffer command at the end of it.
pub struct UpdateCommandCb<L, B> where B: TrackedBuffer, L: CommandBuffer {
// The previous commands.
previous: L,
// The buffer to update.
buffer: B,
// The state of the buffer to update, or `None` if we don't manage it. Will be used to
// determine which semaphores or barriers to add when submitting.
buffer_state: Option<B::FinishedState>,
}
unsafe impl<L, B> CommandBuffer for UpdateCommandCb<L, B>
where B: TrackedBuffer, L: CommandBuffer
{
type Pool = L::Pool;
type SemaphoresWaitIterator = Chain<L::SemaphoresWaitIterator,
OptionIntoIter<(Arc<Semaphore>, PipelineStages)>>;
type SemaphoresSignalIterator = Chain<L::SemaphoresSignalIterator,
OptionIntoIter<Arc<Semaphore>>>;
#[inline]
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> {
self.previous.inner()
}
unsafe fn on_submit<F>(&self, queue: &Arc<Queue>, mut fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>
{
// We query the parent.
let parent = self.previous.on_submit(queue, &mut fence);
// Then build our own output that modifies the parent's.
if let Some(ref buffer_state) = self.buffer_state {
let submit_infos = buffer_state.on_submit(&self.buffer, queue, fence);
let mut out = SubmitInfo {
semaphores_wait: parent.semaphores_wait.chain(submit_infos.pre_semaphore.into_iter()),
semaphores_signal: parent.semaphores_signal.chain(submit_infos.post_semaphore.into_iter()),
pre_pipeline_barrier: parent.pre_pipeline_barrier,
post_pipeline_barrier: parent.post_pipeline_barrier,
};
if let Some(pre) = submit_infos.pre_barrier {
out.pre_pipeline_barrier.add_buffer_barrier_request(self.buffer.inner(), pre);
}
if let Some(post) = submit_infos.post_barrier {
out.post_pipeline_barrier.add_buffer_barrier_request(self.buffer.inner(), post);
}
out
} else {
SubmitInfo {
semaphores_wait: parent.semaphores_wait.chain(None.into_iter()),
semaphores_signal: parent.semaphores_signal.chain(None.into_iter()),
pre_pipeline_barrier: parent.pre_pipeline_barrier,
post_pipeline_barrier: parent.post_pipeline_barrier,
}
}
}
}
#[cfg(test)]
mod tests {
use std::time::Duration;
use buffer::BufferUsage;
use buffer::CpuAccessibleBuffer;
use command_buffer::std::PrimaryCbBuilder;
use command_buffer::std::StdCommandsList;
use command_buffer::submit::CommandBuffer;
#[test]
fn basic_submit() {
let (device, queue) = gfx_dev_and_queue!();
let buffer = CpuAccessibleBuffer::from_data(&device, &BufferUsage::transfer_dest(),
Some(queue.family()), 0u32).unwrap();
let _ = PrimaryCbBuilder::new(&device, queue.family())
.update_buffer(buffer.clone(), &128u32)
.build()
.submit(&queue);
let content = buffer.read(Duration::from_secs(0)).unwrap();
assert_eq!(*content, 128);
}
}


@ -1,392 +0,0 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::ptr;
use std::sync::Arc;
use std::time::Duration;
use smallvec::SmallVec;
use command_buffer::pool::CommandPool;
use command_buffer::sys::Kind;
use command_buffer::sys::Flags;
use command_buffer::sys::PipelineBarrierBuilder;
use command_buffer::sys::UnsafeCommandBufferBuilder;
use command_buffer::sys::UnsafeCommandBuffer;
use device::Device;
use device::Queue;
use framebuffer::EmptySinglePassRenderPass;
use framebuffer::Framebuffer as OldFramebuffer;
use sync::Fence;
use sync::PipelineStages;
use sync::Semaphore;
use check_errors;
use vk;
use VulkanObject;
use VulkanPointers;
use SynchronizedVulkanObject;
/// Trait for objects that represent commands ready to be executed by the GPU.
pub unsafe trait CommandBuffer {
/// Submits the command buffer.
///
/// Note that since submitting has a fixed overhead, you should try, if possible, to submit
/// multiple command buffers at once instead.
///
/// This is a simple shortcut for creating a `Submit` object.
// TODO: remove 'static
#[inline]
fn submit(self, queue: &Arc<Queue>) -> Submission where Self: Sized + 'static {
Submit::new().add(self).submit(queue)
}
/// Type of the pool that was used to allocate the command buffer.
type Pool: CommandPool;
/// Iterator that returns the list of semaphores to wait upon before the command buffer is
/// submitted.
type SemaphoresWaitIterator: Iterator<Item = (Arc<Semaphore>, PipelineStages)>;
/// Iterator that returns the list of semaphores to signal after the command buffer has
/// finished execution.
type SemaphoresSignalIterator: Iterator<Item = Arc<Semaphore>>;
/// Returns the inner object.
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool>;
/// Called slightly before the command buffer is submitted. Signals the command buffer that it
/// is going to be submitted on the given queue. The function must return the list of
/// semaphores to wait upon and transitions to perform.
///
/// The `fence` parameter is a closure that can be used to pull a fence if required. If a fence
/// is pulled, it is guaranteed that it will be signaled after the command buffer ends.
///
/// # Safety for the caller
///
/// This function must only be called if there's actually a submission that follows. If a
/// fence is pulled, then it must eventually be signaled. All the semaphores that are waited
/// upon must become unsignaled, and all the semaphores that are supposed to be signaled must
/// become signaled.
///
/// This function is supposed to be called only by vulkano's internals. It is recommended
/// that you never call it.
///
/// # Safety for the implementation
///
/// The implementation must ensure that the command buffer doesn't get destroyed before the
/// fence is signaled, or before a fence of a later submission to the same queue is signaled.
///
unsafe fn on_submit<F>(&self, queue: &Arc<Queue>, fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>;
}
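// A hedged sketch (not part of the original file) of an `on_submit` implementation that
// actually pulls the fence: the closure is only invoked if a fence is needed, and pulling
// one guarantees it gets signaled after the command buffer finishes executing. The
// `last_fence` field is hypothetical.
//
//     unsafe fn on_submit<F>(&self, _queue: &Arc<Queue>, mut fence: F)
//                            -> SubmitInfo<Self::SemaphoresWaitIterator,
//                                          Self::SemaphoresSignalIterator>
//         where F: FnMut() -> Arc<Fence>
//     {
//         // Keep the fence around so that we can later tell whether the GPU is done
//         // with our resources.
//         *self.last_fence.lock().unwrap() = Some(fence());
//         SubmitInfo {
//             semaphores_wait: iter::empty(),
//             semaphores_signal: iter::empty(),
//             pre_pipeline_barrier: PipelineBarrierBuilder::new(),
//             post_pipeline_barrier: PipelineBarrierBuilder::new(),
//         }
//     }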
/// Information about how the submitting function should synchronize the submission.
pub struct SubmitInfo<Swi, Ssi> {
/// List of semaphores to wait upon before the command buffer starts execution.
pub semaphores_wait: Swi,
/// List of semaphores to signal after the command buffer has finished.
pub semaphores_signal: Ssi,
/// Pipeline barrier to execute on the queue and immediately before the command buffer.
/// Ignored if empty.
pub pre_pipeline_barrier: PipelineBarrierBuilder,
/// Pipeline barrier to execute on the queue and immediately after the command buffer.
/// Ignored if empty.
pub post_pipeline_barrier: PipelineBarrierBuilder,
}
/// Returned when you submit one or multiple command buffers.
///
/// This object holds the resources that are used by the GPU and that must be kept alive for at
/// least as long as the GPU is executing the submission. Therefore destroying a `Submission`
/// object will block until the GPU is finished executing.
///
/// Whenever you submit a command buffer, you are encouraged to store the returned `Submission`
/// in a long-living container such as a `Vec`. From time to time, you can clean the obsolete
/// objects by checking whether `destroying_would_block()` returns false. For example, if you use
/// a `Vec` you can do `vec.retain(|s| s.destroying_would_block())`.
// TODO: docs
// # Leak safety
//
// The `Submission` object can hold borrows of command buffers. In order for it to be safe to leak
// a `Submission`, the borrowed objects themselves must be protected by a fence.
#[must_use]
pub struct Submission {
fence: Arc<Fence>, // TODO: make optional
keep_alive: SmallVec<[Arc<KeepAlive>; 4]>,
}
impl Submission {
/// Returns `true` if destroying this `Submission` object would block the CPU for some time.
#[inline]
pub fn destroying_would_block(&self) -> bool {
!self.finished()
}
/// Returns `true` if the GPU has finished executing this submission.
#[inline]
pub fn finished(&self) -> bool {
self.fence.ready().unwrap_or(false) // TODO: what to do in case of error?
}
}
impl Drop for Submission {
fn drop(&mut self) {
self.fence.wait(Duration::from_secs(10)).unwrap(); // TODO: handle some errors
}
}
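// A usage sketch of the cleanup pattern described in the documentation above (assumes a
// long-living `submissions: Vec<Submission>`):
//
//     submissions.push(command_buffer.submit(&queue));
//     // From time to time, drop the submissions whose execution the GPU has finished;
//     // destroying those doesn't block.
//     submissions.retain(|s| s.destroying_would_block());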
trait KeepAlive {}
impl<T> KeepAlive for T {}
#[derive(Debug, Copy, Clone)]
pub struct Submit<L> {
list: L,
}
impl Submit<()> {
/// Builds an empty submission list.
#[inline]
pub fn new() -> Submit<()> {
Submit { list: () }
}
}
impl<L> Submit<L> where L: SubmitList {
/// Adds a command buffer to submit to the list.
///
/// In the Vulkan API, a submission is divided into batches that each contain one or more
/// command buffers. Vulkano will automatically determine which command buffers can be grouped
/// into the same batch.
// TODO: remove 'static
#[inline]
pub fn add<C>(self, command_buffer: C) -> Submit<(C, L)> where C: CommandBuffer + 'static {
Submit { list: (command_buffer, self.list) }
}
/// Submits the list of command buffers.
pub fn submit(self, queue: &Arc<Queue>) -> Submission {
let SubmitListOpaque { fence, wait_semaphores, wait_stages, command_buffers,
signal_semaphores, mut submits, keep_alive }
= self.list.infos(queue);
// TODO: for now we always create a Fence in order to put it in the submission
let fence = fence.unwrap_or_else(|| Fence::new(queue.device().clone()));
// Filling the pointers inside `submits`.
unsafe {
debug_assert_eq!(wait_semaphores.len(), wait_stages.len());
let mut next_wait = 0;
let mut next_cb = 0;
let mut next_signal = 0;
for submit in submits.iter_mut() {
debug_assert!(submit.waitSemaphoreCount as usize + next_wait as usize <=
wait_semaphores.len());
debug_assert!(submit.commandBufferCount as usize + next_cb as usize <=
command_buffers.len());
debug_assert!(submit.signalSemaphoreCount as usize + next_signal as usize <=
signal_semaphores.len());
submit.pWaitSemaphores = wait_semaphores.as_ptr().offset(next_wait);
submit.pWaitDstStageMask = wait_stages.as_ptr().offset(next_wait);
submit.pCommandBuffers = command_buffers.as_ptr().offset(next_cb);
submit.pSignalSemaphores = signal_semaphores.as_ptr().offset(next_signal);
next_wait += submit.waitSemaphoreCount as isize;
next_cb += submit.commandBufferCount as isize;
next_signal += submit.signalSemaphoreCount as isize;
}
debug_assert_eq!(next_wait as usize, wait_semaphores.len());
debug_assert_eq!(next_wait as usize, wait_stages.len());
debug_assert_eq!(next_cb as usize, command_buffers.len());
debug_assert_eq!(next_signal as usize, signal_semaphores.len());
}
unsafe {
let vk = queue.device().pointers();
let queue = queue.internal_object_guard();
//let fence = fence.as_ref().map(|f| f.internal_object()).unwrap_or(0);
let fence = fence.internal_object();
check_errors(vk.QueueSubmit(*queue, submits.len() as u32, submits.as_ptr(),
fence)).unwrap(); // TODO: handle errors (trickier than it looks)
}
Submission {
keep_alive: keep_alive,
fence: fence,
}
}
}
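// A minimal usage sketch (assumes `cb1` and `cb2` are values implementing `CommandBuffer`
// and `queue` is an `Arc<Queue>`): each `add` prepends to a type-level list, and `submit`
// turns the whole list into a single `vkQueueSubmit` call.
//
//     let submission = Submit::new()
//         .add(cb1)
//         .add(cb2)
//         .submit(&queue);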
/* TODO: All that stuff below is undocumented */
pub struct SubmitListOpaque {
fence: Option<Arc<Fence>>,
wait_semaphores: SmallVec<[vk::Semaphore; 16]>,
wait_stages: SmallVec<[vk::PipelineStageFlags; 16]>,
command_buffers: SmallVec<[vk::CommandBuffer; 16]>,
signal_semaphores: SmallVec<[vk::Semaphore; 16]>,
submits: SmallVec<[vk::SubmitInfo; 8]>,
keep_alive: SmallVec<[Arc<KeepAlive>; 4]>,
}
pub unsafe trait SubmitList {
fn infos(self, queue: &Arc<Queue>) -> SubmitListOpaque;
}
unsafe impl SubmitList for () {
fn infos(self, queue: &Arc<Queue>) -> SubmitListOpaque {
SubmitListOpaque {
fence: None,
wait_semaphores: SmallVec::new(),
wait_stages: SmallVec::new(),
command_buffers: SmallVec::new(),
signal_semaphores: SmallVec::new(),
submits: SmallVec::new(),
keep_alive: SmallVec::new(),
}
}
}
// TODO: remove 'static
unsafe impl<C, R> SubmitList for (C, R) where C: CommandBuffer + 'static, R: SubmitList {
fn infos(self, queue: &Arc<Queue>) -> SubmitListOpaque {
// TODO: attempt to group multiple submits into one when possible
let (current, rest) = self;
let mut infos = rest.infos(queue);
let device = current.inner().device().clone();
let current_infos = unsafe { current.on_submit(queue, || {
if let Some(fence) = infos.fence.as_ref() {
return fence.clone();
}
let new_fence = Fence::new(device.clone());
infos.fence = Some(new_fence.clone());
new_fence
})};
let mut new_submit = vk::SubmitInfo {
sType: vk::STRUCTURE_TYPE_SUBMIT_INFO,
pNext: ptr::null(),
waitSemaphoreCount: 0,
pWaitSemaphores: ptr::null(),
pWaitDstStageMask: ptr::null(),
commandBufferCount: 1,
pCommandBuffers: ptr::null(),
signalSemaphoreCount: 0,
pSignalSemaphores: ptr::null(),
};
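// If the command buffer requests a pipeline barrier before its execution, we can't inject
// it into the already-built command buffer; instead we record the barrier into its own
// one-time-submit command buffer that is placed just before ours in the batch.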
if !current_infos.pre_pipeline_barrier.is_empty() {
let mut cb = UnsafeCommandBufferBuilder::new(Device::standard_command_pool(&device, queue.family()),
Kind::Primary::<EmptySinglePassRenderPass,
OldFramebuffer<EmptySinglePassRenderPass>>,
Flags::OneTimeSubmit).unwrap();
cb.pipeline_barrier(current_infos.pre_pipeline_barrier);
new_submit.commandBufferCount += 1;
infos.command_buffers.push(cb.internal_object());
infos.keep_alive.push(Arc::new(cb) as Arc<_>);
}
infos.command_buffers.push(current.inner().internal_object());
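// Same technique for a barrier requested after execution: it gets its own
// one-time-submit command buffer placed right after ours.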
if !current_infos.post_pipeline_barrier.is_empty() {
let mut cb = UnsafeCommandBufferBuilder::new(Device::standard_command_pool(&device, queue.family()),
Kind::Primary::<EmptySinglePassRenderPass,
OldFramebuffer<EmptySinglePassRenderPass>>,
Flags::OneTimeSubmit).unwrap();
cb.pipeline_barrier(current_infos.post_pipeline_barrier);
new_submit.commandBufferCount += 1;
infos.command_buffers.push(cb.internal_object());
infos.keep_alive.push(Arc::new(cb) as Arc<_>);
}
for (semaphore, stage) in current_infos.semaphores_wait {
infos.wait_semaphores.push(semaphore.internal_object());
infos.wait_stages.push(stage.into());
infos.keep_alive.push(semaphore);
new_submit.waitSemaphoreCount += 1;
}
for semaphore in current_infos.semaphores_signal {
infos.signal_semaphores.push(semaphore.internal_object());
infos.keep_alive.push(semaphore);
new_submit.signalSemaphoreCount += 1;
}
infos.submits.push(new_submit);
infos.keep_alive.push(Arc::new(current) as Arc<_>);
infos
}
}
#[cfg(test)]
mod tests {
use std::iter;
use std::iter::Empty;
use std::sync::Arc;
use command_buffer::pool::StandardCommandPool;
use command_buffer::submit::CommandBuffer;
use command_buffer::submit::SubmitInfo;
use command_buffer::sys::Kind;
use command_buffer::sys::Flags;
use command_buffer::sys::PipelineBarrierBuilder;
use command_buffer::sys::UnsafeCommandBuffer;
use command_buffer::sys::UnsafeCommandBufferBuilder;
use device::Device;
use device::Queue;
use framebuffer::EmptySinglePassRenderPass;
use framebuffer::Framebuffer as OldFramebuffer;
use sync::Fence;
use sync::PipelineStages;
use sync::Semaphore;
#[test]
fn basic_submit() {
struct Basic { inner: UnsafeCommandBuffer<Arc<StandardCommandPool>> }
unsafe impl CommandBuffer for Basic {
type Pool = Arc<StandardCommandPool>;
type SemaphoresWaitIterator = Empty<(Arc<Semaphore>, PipelineStages)>;
type SemaphoresSignalIterator = Empty<Arc<Semaphore>>;
fn inner(&self) -> &UnsafeCommandBuffer<Self::Pool> { &self.inner }
unsafe fn on_submit<F>(&self, _: &Arc<Queue>, fence: F)
-> SubmitInfo<Self::SemaphoresWaitIterator,
Self::SemaphoresSignalIterator>
where F: FnMut() -> Arc<Fence>
{
SubmitInfo {
semaphores_wait: iter::empty(),
semaphores_signal: iter::empty(),
pre_pipeline_barrier: PipelineBarrierBuilder::new(),
post_pipeline_barrier: PipelineBarrierBuilder::new(),
}
}
}
let (device, queue) = gfx_dev_and_queue!();
let pool = Device::standard_command_pool(&device, queue.family());
let kind = Kind::Primary::<EmptySinglePassRenderPass, OldFramebuffer<EmptySinglePassRenderPass>>;
let cb = UnsafeCommandBufferBuilder::new(pool, kind, Flags::OneTimeSubmit).unwrap();
let cb = Basic { inner: cb.build().unwrap() };
let _s = cb.submit(&queue);
}
}


@ -0,0 +1,44 @@
// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
//! Low-level builders that allow submitting an operation to a queue.
//!
//! In order to submit an operation to the GPU, you must use one of the builder structs of this
//! module. These structs are low-level and unsafe, and are mostly used to implement other parts
/// of vulkano, so you are encouraged not to use them directly.
pub use self::queue_present::SubmitPresentBuilder;
pub use self::queue_present::SubmitPresentError;
pub use self::queue_submit::SubmitCommandBufferBuilder;
pub use self::queue_submit::SubmitCommandBufferError;
pub use self::semaphores_wait::SubmitSemaphoresWaitBuilder;
mod queue_present;
mod queue_submit;
mod semaphores_wait;
/// Contains all the possible submission builders.
#[derive(Debug)]
pub enum SubmitAnyBuilder<'a> {
Empty,
SemaphoresWait(SubmitSemaphoresWaitBuilder<'a>),
CommandBuffer(SubmitCommandBufferBuilder<'a>),
QueuePresent(SubmitPresentBuilder<'a>),
}
impl<'a> SubmitAnyBuilder<'a> {
/// Returns true if equal to `SubmitAnyBuilder::Empty`.
#[inline]
pub fn is_empty(&self) -> bool {
match self {
&SubmitAnyBuilder::Empty => true,
_ => false,
}
}
}
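// A hedged sketch of how a caller might dispatch on this enum (the handling in each arm is
// an assumption; only `SubmitCommandBufferBuilder::submit` and `SubmitPresentBuilder::submit`
// are actual entry points of this module):
//
//     match any_builder {
//         SubmitAnyBuilder::Empty => { /* nothing to do */ },
//         SubmitAnyBuilder::SemaphoresWait(_sem) => {
//             /* merge the waits into a later submission */
//         },
//         SubmitAnyBuilder::CommandBuffer(cb) => cb.submit(&queue).unwrap(),
//         SubmitAnyBuilder::QueuePresent(present) => present.submit(&queue).unwrap(),
//     }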


@ -0,0 +1,198 @@
// Copyright (c) 2017 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or http://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.
use std::error;
use std::fmt;
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use smallvec::SmallVec;
use device::Queue;
use swapchain::Swapchain;
use sync::Semaphore;
use check_errors;
use vk;
use Error;
use OomError;
use VulkanObject;
use VulkanPointers;
use SynchronizedVulkanObject;
/// Prototype for a submission that presents a swapchain on the screen.
// TODO: example here
#[derive(Debug)]
pub struct SubmitPresentBuilder<'a> {
wait_semaphores: SmallVec<[vk::Semaphore; 8]>,
swapchains: SmallVec<[vk::SwapchainKHR; 4]>,
image_indices: SmallVec<[u32; 4]>,
marker: PhantomData<&'a ()>,
}
impl<'a> SubmitPresentBuilder<'a> {
/// Builds a new empty `SubmitPresentBuilder`.
#[inline]
pub fn new() -> SubmitPresentBuilder<'a> {
SubmitPresentBuilder {
wait_semaphores: SmallVec::new(),
swapchains: SmallVec::new(),
image_indices: SmallVec::new(),
marker: PhantomData,
}
}
/// Adds a semaphore to be waited upon before the presents are executed.
///
/// # Safety
///
/// - If you submit this builder, the semaphore must be kept alive until you are guaranteed
/// that the GPU has presented the swapchains.
///
/// - If you submit this builder, these semaphores must not be waited upon by any other
/// queue. In other words, each semaphore signal can only correspond to one semaphore wait.
///
/// - If you submit this builder, the semaphores must be signaled when the queue execution
/// reaches this submission, or there must be one or more submissions in queues that are
/// going to signal these semaphores. In other words, you must not block the queue with
/// semaphores that can't get signaled.
///
/// - The swapchains and semaphores must all belong to the same device.
///
#[inline]
pub unsafe fn add_wait_semaphore(&mut self, semaphore: &'a Semaphore) {
self.wait_semaphores.push(semaphore.internal_object());
}
/// Adds an image of a swapchain to be presented.
///
/// # Safety
///
/// - If you submit this builder, the swapchain must be kept alive until you are
/// guaranteed that the GPU has finished presenting.
///
/// - The swapchains and semaphores must all belong to the same device.
///
#[inline]
pub unsafe fn add_swapchain(&mut self, swapchain: &'a Swapchain, image_num: u32) {
debug_assert!(image_num < swapchain.num_images());
self.swapchains.push(swapchain.internal_object());
self.image_indices.push(image_num);
}
/// Submits the command. Calls `vkQueuePresentKHR`.
///
/// # Panic
///
/// Panics if no swapchain image has been added to the builder.
///
pub fn submit(self, queue: &Queue) -> Result<(), SubmitPresentError> {
unsafe {
debug_assert_eq!(self.swapchains.len(), self.image_indices.len());
assert!(!self.swapchains.is_empty(),
"Tried to submit a present command without any swapchain");
let vk = queue.device().pointers();
let queue = queue.internal_object_guard();
let mut results = vec![mem::uninitialized(); self.swapchains.len()]; // TODO: alloca
let infos = vk::PresentInfoKHR {
sType: vk::STRUCTURE_TYPE_PRESENT_INFO_KHR,
pNext: ptr::null(),
waitSemaphoreCount: self.wait_semaphores.len() as u32,
pWaitSemaphores: self.wait_semaphores.as_ptr(),
swapchainCount: self.swapchains.len() as u32,
pSwapchains: self.swapchains.as_ptr(),
pImageIndices: self.image_indices.as_ptr(),
pResults: results.as_mut_ptr(),
};
try!(check_errors(vk.QueuePresentKHR(*queue, &infos)));
for _result in results {
// TODO: AMD driver initially didn't write the results; check that it's been fixed
//try!(check_errors(_result));
}
Ok(())
}
}
}
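// A hedged usage sketch for the builder above (`swapchain`, `image_num` and `semaphore`
// are assumed to come from a prior swapchain image acquisition):
//
//     let mut builder = SubmitPresentBuilder::new();
//     unsafe {
//         // Safety: the semaphore is signaled by the submission that renders to the
//         // image, and both objects are kept alive until the present has finished.
//         builder.add_wait_semaphore(&semaphore);
//         builder.add_swapchain(&swapchain, image_num);
//     }
//     builder.submit(&queue).unwrap();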
/// Error that can happen when submitting the present prototype.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[repr(u32)]
pub enum SubmitPresentError {
/// Not enough memory.
OomError(OomError),
/// The connection to the device has been lost.
DeviceLost,
/// The surface is no longer accessible and must be recreated.
SurfaceLost,
/// The surface has changed in a way that makes the swapchain unusable. You must query the
/// surface's new properties and recreate a new swapchain if you want to continue drawing.
OutOfDate,
}
impl error::Error for SubmitPresentError {
#[inline]
fn description(&self) -> &str {
match *self {
SubmitPresentError::OomError(_) => "not enough memory",
SubmitPresentError::DeviceLost => "the connection to the device has been lost",
SubmitPresentError::SurfaceLost => "the surface of this swapchain is no longer valid",
SubmitPresentError::OutOfDate => "the swapchain needs to be recreated",
}
}
#[inline]
fn cause(&self) -> Option<&error::Error> {
match *self {
SubmitPresentError::OomError(ref err) => Some(err),
_ => None
}
}
}
impl fmt::Display for SubmitPresentError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}
impl From<Error> for SubmitPresentError {
#[inline]
fn from(err: Error) -> SubmitPresentError {
match err {
err @ Error::OutOfHostMemory => SubmitPresentError::OomError(OomError::from(err)),
err @ Error::OutOfDeviceMemory => SubmitPresentError::OomError(OomError::from(err)),
Error::DeviceLost => SubmitPresentError::DeviceLost,
Error::SurfaceLost => SubmitPresentError::SurfaceLost,
Error::OutOfDate => SubmitPresentError::OutOfDate,
_ => panic!("unexpected error: {:?}", err)
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
#[should_panic(expected = "Tried to submit a present command without any swapchain")]
fn no_swapchain_added() {
let (_, queue) = gfx_dev_and_queue!();
let _ = SubmitPresentBuilder::new().submit(&queue);
}
}
