Initial wrapper

Pierre Krieger 2016-02-18 09:33:06 +01:00
parent c924811fde
commit ce5433d7d5
47 changed files with 8984 additions and 0 deletions

@ -8,3 +8,4 @@ rust:
script:
- cargo test -v --manifest-path glsl-to-spirv/Cargo.toml
- cargo test -v --manifest-path vulkano-shaders/Cargo.toml
- cargo test -v --manifest-path vulkano/Cargo.toml

README.md Normal file

@ -0,0 +1,7 @@
# Vulkano
This repository contains three libraries:
- `vulkano` is the main one.
- `vulkano-shaders` can analyse SPIR-V shaders at compile-time.
- `glsl-to-spirv` can compile GLSL to SPIR-V.

vulkano/.gitignore vendored Normal file

@ -0,0 +1,2 @@
target
Cargo.lock

vulkano/CONTRIBUTING.md Normal file

@ -0,0 +1,26 @@
# Contributing
The project is in its initial development phase. All code is potentially a draft.
If you want to contribute, you are encouraged to ask whether it's OK to implement something before
starting to do so. Otherwise your work could end up being useless or going against the intended design.
For each module below, the box is checked if the code inside it is in an "acceptable" state:
- [ ] Buffer
- [ ] Command buffer
- [ ] Descriptor set
- [ ] Device
- [x] Features
- [ ] Formats
- [ ] Framebuffer
- [ ] Image
- [ ] Instance
- [ ] Lib
- [ ] Memory
- [ ] Pipeline
- [ ] Query
- [ ] Sampler
- [ ] Shader
- [ ] Swapchain
- [x] Sync
- [x] Version

vulkano/Cargo.toml Normal file

@ -0,0 +1,21 @@
[package]
name = "vulkano"
version = "0.1.0"
authors = ["Pierre Krieger <pierre.krieger1708@gmail.com>"]
build = "build.rs"
description = "Safe wrapper for the Vulkan graphics API"
[dependencies]
shared_library = "0.1"
lazy_static = "0.1"
[build-dependencies]
vk-sys = { path = "../vk-sys" }
glsl-to-spirv = { path = "../glsl-to-spirv" }
vulkano-shaders = { path = "../vulkano-shaders" }
[dev-dependencies]
gdi32-sys = "*"
kernel32-sys = "*"
user32-sys = "*"
winapi = "*"

vulkano/README.md Normal file

@ -0,0 +1,44 @@
# Vulkano
Safe Rust wrapper around Vulkan.
- Much easier to use than raw Vulkan.
- Any error that the validation layer would trigger is avoided in the first
place. This is done through a lot of compile-time checks and a few runtime
checks.
- Anything that is possible to do with Vulkan should be possible with vulkano
as well. Please open an issue if this is not the case.
- Safety is favored over performance. In particular, compared to raw Vulkan,
  vulkano does some runtime checks and wraps most objects in
  reference-counted pointers.
## Usage
Add to your Cargo.toml:
```toml
[dependencies]
vulkano = "0.1"
```
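Then import the crate from your code. The examples in this repository use `#[macro_use]` in
order to access macros such as `impl_vertex!`:

```rust
#[macro_use]
extern crate vulkano;
```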
Note that this library doesn't handle creating and managing windows. In order
to render to a window, you will have to create that window separately and use
an unsafe function of this library to link to it.
This limitation only concerns windowed rendering; safe fullscreen rendering is possible
with this library alone.
## Shaders handling
The API of vulkano related to shader modules is entirely unsafe. This is
because you're not supposed to use it directly.
Instead, you are encouraged to use the `vulkano-shaders` crate which compiles
and analyses your shader, and generates Rust code that wraps around vulkano's
API.
Thanks to this, vulkano can provide compile-time guarantees about your
specialization constants, push constants, descriptor sets, vertex layouts, and
so on.
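For example, here is a sketch of a build script in the spirit of this repository's own
`build.rs` (the shader path and the `MyShader` name are placeholders):

```rust
extern crate glsl_to_spirv;
extern crate vulkano_shaders;

use std::env;
use std::fs::File;
use std::io::Write;
use std::path::Path;

fn main() {
    let dest = env::var("OUT_DIR").unwrap();
    let dest = Path::new(&dest);

    // Compile the GLSL source to SPIR-V, then generate the Rust wrapper from it.
    let spirv = glsl_to_spirv::compile(include_str!("shaders/my_vs.glsl"),
                                       glsl_to_spirv::ShaderType::Vertex).unwrap();
    let wrapper = vulkano_shaders::reflect("MyShader", spirv).unwrap();

    // Write the generated code to `OUT_DIR` so that it can be pulled in with `include!`.
    let mut output = File::create(&dest.join("my_vs.rs")).unwrap();
    write!(output, "{}", wrapper).unwrap();
}
```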

vulkano/build.rs Normal file

@ -0,0 +1,38 @@
extern crate glsl_to_spirv;
extern crate vulkano_shaders;
extern crate vk_sys;
use std::env;
use std::fs::File;
use std::path::Path;
use std::io::Write;
fn main() {
// tell Cargo to rerun this build script only when the script itself (or the shaders below) changes
println!("cargo:rerun-if-changed=build.rs");
let dest = env::var("OUT_DIR").unwrap();
let dest = Path::new(&dest);
let mut file_output = File::create(&dest.join("vk_bindings.rs")).unwrap();
vk_sys::write_bindings(&mut file_output).unwrap();
write_examples();
}
fn write_examples() {
let dest = env::var("OUT_DIR").unwrap();
let dest = Path::new(&dest);
let mut file_output = File::create(&dest.join("examples-triangle_vs.rs")).unwrap();
println!("cargo:rerun-if-changed=examples/triangle_vs.glsl");
let content = glsl_to_spirv::compile(include_str!("examples/triangle_vs.glsl"), glsl_to_spirv::ShaderType::Vertex).unwrap();
let output = vulkano_shaders::reflect("TriangleShader", content).unwrap();
write!(file_output, "{}", output).unwrap();
let mut file_output = File::create(&dest.join("examples-triangle_fs.rs")).unwrap();
println!("cargo:rerun-if-changed=examples/triangle_fs.glsl");
let content = glsl_to_spirv::compile(include_str!("examples/triangle_fs.glsl"), glsl_to_spirv::ShaderType::Fragment).unwrap();
let output = vulkano_shaders::reflect("TriangleShader", content).unwrap();
write!(file_output, "{}", output).unwrap();
}

vulkano/examples/test.rs Normal file

@ -0,0 +1,280 @@
extern crate kernel32;
extern crate gdi32;
extern crate user32;
extern crate winapi;
#[macro_use]
extern crate vulkano;
use std::sync::Arc;
use std::ffi::OsStr;
use std::os::windows::ffi::OsStrExt;
use std::mem;
use std::ptr;
fn main() {
// The first step of any vulkan program is to create an instance.
let app = vulkano::instance::ApplicationInfo { application_name: "test", application_version: 1, engine_name: "test", engine_version: 1 };
let instance = vulkano::instance::Instance::new(Some(&app), None).expect("failed to create instance");
// We then choose which physical device to use.
//
// In a real application, there are three things to take into consideration:
//
// - Some devices support some optional features that may be required by your application.
// You should filter out the devices that don't support your app.
//
// - Not all devices can draw to a certain surface. Once you create your window, you have to
// choose a device that is capable of drawing to it.
//
// - You probably want to leave the choice between the remaining devices to the user.
//
// Here we are just going to use the first device.
let physical = vulkano::instance::PhysicalDevice::enumerate(&instance)
.next().expect("no device available");
println!("Using device: {} (type: {:?})", physical.name(), physical.ty());
// Vulkan also provides a cross-platform API to draw on the whole monitor, but in this example
// we create a window and draw to its surface instead.
let window = unsafe { create_window() };
let surface = unsafe { vulkano::swapchain::Surface::from_hwnd(&instance, kernel32::GetModuleHandleW(ptr::null()), window).unwrap() };
/* Alternative that renders to the whole monitor instead of a window:
let display = vulkano::swapchain::Display::enumerate(&physical).unwrap().next().unwrap();
let display_mode = display.display_modes().unwrap().next().unwrap();
let plane = vulkano::swapchain::DisplayPlane::enumerate(&physical).unwrap().next().unwrap();
vulkano::swapchain::Surface::from_display_mode(&display_mode, &plane).unwrap()
*/
// The next step is to choose which queue will execute our draw commands.
//
// Devices can provide multiple queues to run things in parallel (for example a draw queue and
// a compute queue). This is something you have to manage manually in Vulkan.
//
// We have to specify which queues we are going to use when we create the device, therefore
// we need to choose that now.
let queue = physical.queue_families().find(|q| q.supports_graphics() &&
surface.is_supported(q).unwrap_or(false))
.expect("couldn't find a graphical queue family");
// Now initializing the device.
//
// We have to pass a list of optional Vulkan features that must be enabled. Here we don't need
// any of them.
//
// We also have to pass a list of queues to create and their priorities relative to each other.
// Since we create one queue, we don't really care about the priority and just pass `0.5`.
// The list of created queues is returned by the function alongside the device.
let (device, queues) = vulkano::device::Device::new(&physical, physical.supported_features(),
[(queue, 0.5)].iter().cloned())
.expect("failed to create device");
// Since we can request multiple queues, the `queues` variable is a `Vec`. Our actual queue
// is the first element.
let queue = queues.into_iter().next().unwrap();
// Before we can draw on the surface, we have to create what is called a swapchain. Creating
// a swapchain allocates the color buffers that will contain the image that will be visible
// on the screen.
let (swapchain, images) = {
let caps = surface.get_capabilities(&physical).expect("failed to get surface capabilities");
let dimensions = caps.current_extent.unwrap_or([1280, 1024]);
let present = caps.present_modes[0];
let usage = caps.supported_usage_flags;
vulkano::swapchain::Swapchain::new(&device, &surface, 3,
vulkano::formats::B8G8R8A8Srgb, dimensions, 1,
&usage, vulkano::swapchain::SurfaceTransform::Identity,
vulkano::swapchain::CompositeAlpha::Opaque,
present, true).expect("failed to create swapchain")
};
let images = images.into_iter().map(|image| {
vulkano::image::ImageView::new(&image).expect("failed to create image view")
}).collect::<Vec<_>>();
// We create a buffer that will store the shape of our triangle.
//
// The first parameter is the device to use, and the second parameter is where to get the
// memory where the buffer will be stored. The latter is very important as it determines the
// way you are going to access and modify your buffer. Here we just ask for a basic host
// visible memory.
//
// Note that to store immutable data, the best way is to create two buffers. One buffer on
// the CPU and one buffer on the GPU. We then write our data to the buffer on the CPU and
// ask the GPU to copy it to the real buffer. This way the data is located on the most
// efficient memory possible.
let vertex_buffer: Arc<vulkano::buffer::Buffer<[Vertex; 3], _>> =
vulkano::buffer::Buffer::new(&device, &vulkano::buffer::Usage::all(),
vulkano::memory::HostVisible)
.expect("failed to create buffer");
struct Vertex { position: [f32; 2] }
impl_vertex!(Vertex, position);
// The buffer that we created contains uninitialized data.
// In order to fill it with data, we have to *map* it.
{
// The `try_write` function would return `None` if the buffer was in use by the GPU. This
// obviously can't happen here, since we haven't asked the GPU to do anything yet.
let mut mapping = vertex_buffer.try_write().unwrap();
mapping[0].position = [-0.5, -0.25];
mapping[1].position = [0.0, 0.5];
mapping[2].position = [0.25, -0.1];
}
// The next step is to create the shader.
//
// The shader creation API provided by the vulkano library is unsafe, for various reasons.
//
// Instead, in our build script we used the `vulkano-shaders` crate to parse our shader at
// compile time and provide a safe wrapper over vulkano's API. You can find the shader's
// source code in the `triangle.glsl` file.
//
// The code generated by the build script created a struct named `TriangleShader`, which we
// can now use to load the shader.
//
// Because of some restrictions with the `include!` macro, we need to use a module.
mod vs { include!{concat!(env!("OUT_DIR"), "/examples-triangle_vs.rs")} }
let vs = vs::TriangleShader::load(&device);
mod fs { include!{concat!(env!("OUT_DIR"), "/examples-triangle_fs.rs")} }
let fs = fs::TriangleShader::load(&device);
// At this point, OpenGL initialization would be finished. However in Vulkan it is not. OpenGL
// implicitly does a lot of computation whenever you draw. In Vulkan, you have to do all this
// manually.
// The next step is to create a *renderpass*, which is an object that describes where the
// output of the graphics pipeline will go. It describes the layout of the images
// where the colors, depth and/or stencil information will be written.
let renderpass = renderpass!{
device: &device,
attachments: {
color [Clear]
}
}.unwrap();
let framebuffers = images.iter().map(|image| {
vulkano::framebuffer::Framebuffer::new(&renderpass, (1244, 699, 1), image).unwrap()
}).collect::<Vec<_>>();
let pipeline: Arc<vulkano::pipeline::GraphicsPipeline<Arc<vulkano::buffer::Buffer<[Vertex; 3], _>>>> = {
let ia = vulkano::pipeline::input_assembly::InputAssembly {
topology: vulkano::pipeline::input_assembly::PrimitiveTopology::TriangleList,
primitive_restart_enable: false,
};
let raster = Default::default();
let ms = vulkano::pipeline::multisample::Multisample::disabled();
let blend = vulkano::pipeline::blend::Blend {
logic_op: None,
blend_constants: Some([0.0; 4]),
};
vulkano::pipeline::GraphicsPipeline::new(&device, &vs.main_entry_point(), &ia, &raster,
&ms, &blend, &fs.main_entry_point(),
&renderpass.subpass(0).unwrap()).unwrap()
};
// We are going to create a command buffer right below. Command buffers need to be allocated
// from a *command buffer pool*, so we create the pool.
let cb_pool = vulkano::command_buffer::CommandBufferPool::new(&device, &queue.lock().unwrap().family())
.expect("failed to create command buffer pool");
// The final initialization step is to create a command buffer.
//
// A command buffer contains a list of commands that the GPU must execute. This can include
// transfers between buffers, clearing images or attachments, etc. and draw commands. Here we
// create a command buffer with two commands: clearing the attachment and drawing the triangle.
let command_buffers = framebuffers.iter().map(|framebuffer| {
vulkano::command_buffer::PrimaryCommandBufferBuilder::new(&cb_pool).unwrap()
.draw_inline(&renderpass, &framebuffer, [0.0, 0.0, 1.0, 1.0])
.draw(&pipeline, vertex_buffer.clone(), &vulkano::command_buffer::DynamicState::none())
.draw_end()
.build().unwrap()
}).collect::<Vec<_>>();
// Initialization is finally finished!
// Note that the only thing we need now is the `command_buffers` variable. Everything else is
// kept alive internally with `Arc`s (even the vertex buffer), so the only variable that we
// need is this one.
loop {
let image_num = swapchain.acquire_next_image().unwrap();
// Our queue is wrapped in a `Mutex`, so we have to lock it.
let mut queue = queue.lock().unwrap();
// In order to draw, all we need to do is submit the command buffer to the queue.
command_buffers[image_num].submit(&mut queue).unwrap();
// The color output should now contain our triangle. But in order to show it on the
// screen, we have to *present* the swapchain. This is required because the swapchain
// usually uses double-buffering or triple-buffering.
swapchain.present(&mut queue, image_num).unwrap();
// In a real application we want to submit things to the same queue in parallel, so we
// shouldn't keep it locked too long.
drop(queue);
unsafe {
let mut msg = mem::uninitialized();
if user32::GetMessageW(&mut msg, ptr::null_mut(), 0, 0) == 0 {
break;
}
user32::TranslateMessage(&msg);
user32::DispatchMessageW(&msg);
}
}
}
unsafe fn create_window() -> winapi::HWND {
let class_name = register_window_class();
let title: Vec<u16> = vec![b'V' as u16, b'u' as u16, b'l' as u16, b'k' as u16,
b'a' as u16, b'n' as u16, 0];
user32::CreateWindowExW(winapi::WS_EX_APPWINDOW | winapi::WS_EX_WINDOWEDGE, class_name.as_ptr(),
title.as_ptr() as winapi::LPCWSTR,
winapi::WS_OVERLAPPEDWINDOW | winapi::WS_CLIPSIBLINGS |
winapi::WS_VISIBLE,
winapi::CW_USEDEFAULT, winapi::CW_USEDEFAULT,
winapi::CW_USEDEFAULT, winapi::CW_USEDEFAULT,
ptr::null_mut(), ptr::null_mut(),
kernel32::GetModuleHandleW(ptr::null()),
ptr::null_mut())
}
unsafe fn register_window_class() -> Vec<u16> {
let class_name: Vec<u16> = OsStr::new("Window Class").encode_wide().chain(Some(0).into_iter())
.collect::<Vec<u16>>();
let class = winapi::WNDCLASSEXW {
cbSize: mem::size_of::<winapi::WNDCLASSEXW>() as winapi::UINT,
style: winapi::CS_HREDRAW | winapi::CS_VREDRAW | winapi::CS_OWNDC,
lpfnWndProc: Some(callback),
cbClsExtra: 0,
cbWndExtra: 0,
hInstance: kernel32::GetModuleHandleW(ptr::null()),
hIcon: ptr::null_mut(),
hCursor: ptr::null_mut(),
hbrBackground: ptr::null_mut(),
lpszMenuName: ptr::null(),
lpszClassName: class_name.as_ptr(),
hIconSm: ptr::null_mut(),
};
user32::RegisterClassExW(&class);
class_name
}
unsafe extern "system" fn callback(window: winapi::HWND, msg: winapi::UINT,
wparam: winapi::WPARAM, lparam: winapi::LPARAM)
-> winapi::LRESULT
{
user32::DefWindowProcW(window, msg, wparam, lparam)
}

@ -0,0 +1,203 @@
extern crate vulkano;
use std::sync::Arc;
use std::thread;
fn main() {
// The first step of any vulkan program is to create an instance.
let instance = vulkano::instance::Instance::new(None, None).expect("failed to create instance");
// We then choose which physical device to use.
//
// In a real application, there are three things to take into consideration:
//
// - Some devices support some optional features that may be required by your application.
// You should filter out the devices that don't support your app.
//
// - Not all devices can draw to a certain surface. Once you create your window, you have to
// choose a device that is capable of drawing to it.
//
// - You probably want to leave the choice between the remaining devices to the user.
//
// Here we are just going to use the first device.
let physical = vulkano::instance::PhysicalDevice::enumerate(&instance)
.next().expect("no device available");
println!("Using device: {} (type: {:?})", physical.name(), physical.ty());
// Vulkan provides a cross-platform API to draw on the whole monitor. In order to avoid the
// boilerplate of creating a window, we are going to use it.
let surface = {
let display = vulkano::swapchain::Display::enumerate(&physical).unwrap().next().unwrap();
let display_mode = display.display_modes().unwrap().next().unwrap();
let plane = vulkano::swapchain::DisplayPlane::enumerate(&physical).unwrap().next().unwrap();
vulkano::swapchain::Surface::from_display_mode(&display_mode, &plane).unwrap()
};
// The next step is to choose which queue will execute our draw commands.
//
// Devices can provide multiple queues to run things in parallel (for example a draw queue and
// a compute queue). This is something you have to manage manually in Vulkan.
//
// We have to specify which queues we are going to use when we create the device, therefore
// we need to choose that now.
let queue = physical.queue_families().find(|q| q.supports_graphics() &&
surface.is_supported(q).unwrap_or(false))
.expect("couldn't find a graphical queue family");
// Now initializing the device.
//
// We have to pass a list of optional Vulkan features that must be enabled. Here we don't need
// any of them.
//
// We also have to pass a list of queues to create and their priorities relative to each other.
// Since we create one queue, we don't really care about the priority and just pass `0.5`.
// The list of created queues is returned by the function alongside the device.
let (device, queues) = vulkano::device::Device::new(&physical,
&vulkano::instance::Features::none(),
[(queue, 0.5)].iter().cloned())
.expect("failed to create device");
// Since we can request multiple queues, the `queues` variable is a `Vec`. Our actual queue
// is the first element.
let queue = queues.into_iter().next().unwrap();
// Before we can draw on the surface, we have to create what is called a swapchain. Creating
// a swapchain allocates the color buffers that will contain the image that will be visible
// on the screen.
let (swapchain, images) = {
let caps = surface.get_capabilities(&physical).expect("failed to get surface capabilities");
println!("{:?}", caps);
let dimensions = caps.current_extent.unwrap_or([1280, 1024]);
let present = caps.present_modes[0];
let usage = caps.supported_usage_flags;
vulkano::swapchain::Swapchain::new(&device, &surface, 3,
vulkano::formats::B8G8R8A8Srgb, dimensions, 1,
&usage, vulkano::swapchain::SurfaceTransform::Identity,
vulkano::swapchain::CompositeAlpha::Opaque,
present, true).expect("failed to create swapchain")
};
// We create a buffer that will store the shape of our triangle.
//
// The first parameter is the device to use, and the second parameter is where to get the
// memory where the buffer will be stored. The latter is very important as it determines the
// way you are going to access and modify your buffer. Here we just ask for a basic host
// visible memory.
//
// Note that to store immutable data, the best way is to create two buffers. One buffer on
// the CPU and one buffer on the GPU. We then write our data to the buffer on the CPU and
// ask the GPU to copy it to the real buffer. This way the data is located on the most
// efficient memory possible.
let vertex_buffer: Arc<vulkano::buffer::Buffer<[Vertex; 3], _>> =
vulkano::buffer::Buffer::new(&device, &vulkano::buffer::Usage::all(),
vulkano::memory::HostVisible)
.expect("failed to create buffer");
struct Vertex { position: [f32; 2] }
// The buffer that we created contains uninitialized data.
// In order to fill it with data, we have to *map* it.
{
// The `try_write` function would return `None` if the buffer was in use by the GPU. This
// obviously can't happen here, since we haven't asked the GPU to do anything yet.
let mut mapping = vertex_buffer.try_write().unwrap();
mapping[0].position = [-0.5, -0.25];
mapping[1].position = [0.0, 0.5];
mapping[2].position = [0.25, -0.1];
}
// The next step is to create the shader.
//
// The shader creation API provided by the vulkano library is unsafe, for various reasons.
//
// Instead, in our build script we used the `vulkano-shaders` crate to parse our shader at
// compile time and provide a safe wrapper over vulkano's API. You can find the shader's
// source code in the `triangle.glsl` file.
//
// The code generated by the build script created a struct named `TriangleShader`, which we
// can now use to load the shader.
//
// Because of some restrictions with the `include!` macro, we need to use a module.
mod shader { include!{concat!(env!("OUT_DIR"), "/examples-triangle.rs")} }
let shader = shader::TriangleShader::load(&device);
// At this point, OpenGL initialization would be finished. However in Vulkan it is not. OpenGL
// implicitly does a lot of computation whenever you draw. In Vulkan, you have to do all this
// manually.
// The next step is to create a *renderpass*, which is an object that describes where the
// output of the graphics pipeline will go. It describes the layout of the images
// where the colors, depth and/or stencil information will be written.
let renderpass = renderpass!{
device: &device,
attachments: {
color [Clear]
}
}.unwrap();
// However the renderpass doesn't contain the *actual* attachments. It only describes the
// layout of the output. In order to describe the actual attachments, we have to create a
// *framebuffer*.
//
// A framebuffer is built upon a renderpass, but you can use a framebuffer with any other
// renderpass as long as the layout is the same.
//
// In our situation we want to draw on the swapchain we created above. To do so, we extract
// images from it.
let framebuffers = images.iter().map(|image| {
vulkano::framebuffer::Framebuffer::new(&renderpass, (1244, 699, 1), image).unwrap()
}).collect::<Vec<_>>();
// Don't worry, it's almost over!
//
// The next step would be to create a *graphics pipeline*. This describes the state the GPU
// must be in in order to draw our triangle. It contains various information like the list of
// shaders, depth function, primitive types, etc. Since this example only clears the screen
// and never issues a draw command, we don't actually need a pipeline object here; see the
// other example for how one is built with `vulkano::pipeline::GraphicsPipeline::new`.
// We are going to create a command buffer right below. Command buffers need to be allocated
// from a *command buffer pool*, so we create the pool.
let cb_pool = vulkano::command_buffer::CommandBufferPool::new(&device, &queue.lock().unwrap().family())
.expect("failed to create command buffer pool");
// The final initialization step is to create a command buffer.
//
// A command buffer contains a list of commands that the GPU must execute. This can include
// transfers between buffers, clearing images or attachments, etc. and draw commands. Here we
// create a command buffer with two commands: clearing the attachment and drawing the triangle.
let command_buffers = framebuffers.iter().map(|framebuffer| {
vulkano::command_buffer::PrimaryCommandBufferBuilder::new(&cb_pool).unwrap()
.draw_inline(&renderpass, &framebuffer, [0.0, 0.0, 1.0, 1.0])
.draw_end()
.build().unwrap()
}).collect::<Vec<_>>();
// Initialization is finally finished!
// Note that the only thing we need now is the `command_buffers` variable. Everything else is
// kept alive internally with `Arc`s (even the vertex buffer), so the only variable that we
// need is this one.
loop {
// Before we can draw on the output, we have to *acquire* an image from the swapchain.
// This operation returns the index of the image that we are allowed to draw upon.
let image_num = swapchain.acquire_next_image().unwrap();
// Our queue is wrapped in a `Mutex`, so we have to lock it.
let mut queue = queue.lock().unwrap();
// In order to draw, all we need to do is submit the command buffer to the queue.
command_buffers[image_num].submit(&mut queue).unwrap();
// The color output should now contain our triangle. But in order to show it on the
// screen, we have to *present* the image. Depending on the presentation mode, this may
// be shown immediately or on the next redraw.
swapchain.present(&mut queue, image_num).unwrap();
// In a real application we want to submit things to the same queue in parallel, so we
// shouldn't keep it locked too long.
drop(queue);
// Sleep a bit in order not to take up too much CPU.
thread::sleep_ms(16);
}
}

@ -0,0 +1,10 @@
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
layout(location = 0) out vec4 f_color;
void main() {
f_color = vec4(1.0, 0.0, 0.0, 1.0);
}

@ -0,0 +1,10 @@
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable
in vec2 position;
void main() {
gl_Position = vec4(position, 0.0, 1.0);
}

vulkano/src/alloc.rs Normal file

@ -0,0 +1,14 @@
use std::os::raw::c_void;
/// Types that can be used as a custom memory allocator for the Vulkan implementation.
pub unsafe trait Alloc {
fn alloc(&self, size: usize, alignment: usize) -> Result<*mut c_void, ()>;
fn realloc(&self, original: *mut c_void, size: usize, alignment: usize) -> Result<*mut c_void, ()>;
fn free(&self, ptr: *mut c_void);
fn internal_free_notification(&self, size: usize);
fn internal_allocation_notification(&self, size: usize);
}

vulkano/src/buffer.rs Normal file

@ -0,0 +1,463 @@
//! Location in memory that contains data.
//!
//! All buffers are guaranteed to be accessible from the GPU.
//!
//! # Strong typing
//!
//! All buffers take a template parameter that indicates their content.
//!
//! # Memory
//!
//! Creating a buffer requires passing an object that will be used by this library to provide
//! memory to the buffer.
//!
//! All accesses to the memory are done through the `Buffer` object.
//!
//! TODO: proof read this section
//!
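//! # Example
//!
//! A sketch of creating a buffer and filling it from the CPU, based on the examples in this
//! repository (the element type and the `HostVisible` memory source are arbitrary choices):
//!
//! ```ignore
//! use std::sync::Arc;
//! use vulkano::buffer::{Buffer, Usage};
//! use vulkano::memory::HostVisible;
//!
//! // Create a buffer of three `f32`s backed by host-visible memory.
//! let buffer: Arc<Buffer<[f32; 3], _>> = Buffer::new(&device, &Usage::all(), HostVisible)
//!                                               .expect("failed to create buffer");
//!
//! // Map the buffer and write to it. `try_write` returns `None` if the GPU is using it.
//! let mut mapping = buffer.try_write().unwrap();
//! mapping[0] = 1.0;
//! ```
//!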
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::sync::Arc;
use device::Device;
use memory::CpuAccessible;
use memory::CpuWriteAccessible;
use memory::ChunkProperties;
use memory::MemorySource;
use memory::MemorySourceChunk;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
pub struct Buffer<T: ?Sized, M> {
marker: PhantomData<T>,
inner: Inner<M>,
}
struct Inner<M> {
device: Arc<Device>,
memory: M,
buffer: vk::Buffer,
size: usize,
usage: vk::BufferUsageFlags,
queue_families: Vec<u32>, // TODO: use smallvec instead
}
impl<T, M> Buffer<T, M> where M: MemorySourceChunk {
/// Creates a new buffer.
pub fn new<S>(device: &Arc<Device>, usage: &Usage, memory: S)
-> Result<Arc<Buffer<T, M>>, OomError>
where S: MemorySource<Chunk = M>
{
let vk = device.pointers();
let usage = usage.to_usage_bits();
let queue_families = vec![0]; // TODO: let user choose
assert!(!memory.is_sparse()); // not implemented
let buffer = unsafe {
let infos = vk::BufferCreateInfo {
sType: vk::STRUCTURE_TYPE_BUFFER_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // TODO: sparse resources binding
size: mem::size_of::<T>() as u64,
usage: usage,
sharingMode: if queue_families.len() >= 2 { vk::SHARING_MODE_CONCURRENT } else { vk::SHARING_MODE_EXCLUSIVE },
queueFamilyIndexCount: if queue_families.len() >= 2 { queue_families.len() as u32 } else { 0 },
pQueueFamilyIndices: if queue_families.len() >= 2 { queue_families.as_ptr() } else { ptr::null() },
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateBuffer(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
let mem_reqs: vk::MemoryRequirements = unsafe {
let mut output = mem::uninitialized();
vk.GetBufferMemoryRequirements(device.internal_object(), buffer, &mut output);
output
};
let memory = memory.allocate(device, mem_reqs.size as usize, mem_reqs.alignment as usize,
mem_reqs.memoryTypeBits)
.expect("failed to allocate"); // TODO: use try!() instead
unsafe {
match memory.properties() {
ChunkProperties::Regular { memory, offset, .. } => {
try!(check_errors(vk.BindBufferMemory(device.internal_object(), buffer,
memory.internal_object(),
offset as vk::DeviceSize)));
},
_ => unimplemented!()
}
}
Ok(Arc::new(Buffer {
marker: PhantomData,
inner: Inner {
device: device.clone(),
memory: memory,
buffer: buffer,
size: mem_reqs.size as usize,
usage: usage,
queue_families: queue_families,
}
}))
}
}
impl<T: ?Sized, M> Buffer<T, M> {
/// Returns the size of the buffer in bytes.
#[inline]
pub fn size(&self) -> usize {
self.inner.size
}
/// True if the buffer can be used as a source for buffer transfers.
#[inline]
pub fn usage_transfer_src(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_TRANSFER_SRC_BIT) != 0
}
/// True if the buffer can be used as a destination for buffer transfers.
#[inline]
pub fn usage_transfer_dest(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_TRANSFER_DST_BIT) != 0
}
/// True if the buffer can be used as a uniform texel buffer.
#[inline]
pub fn usage_uniform_texel_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) != 0
}
/// True if the buffer can be used as a storage texel buffer.
#[inline]
pub fn usage_storage_texel_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT) != 0
}
/// True if the buffer can be used as a uniform buffer.
#[inline]
pub fn usage_uniform_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_UNIFORM_BUFFER_BIT) != 0
}
/// True if the buffer can be used as a storage buffer.
#[inline]
pub fn usage_storage_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_STORAGE_BUFFER_BIT) != 0
}
/// True if the buffer can be used as a source for index data.
#[inline]
pub fn usage_index_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_INDEX_BUFFER_BIT) != 0
}
/// True if the buffer can be used as a source for vertex data.
#[inline]
pub fn usage_vertex_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_VERTEX_BUFFER_BIT) != 0
}
/// True if the buffer can be used as an indirect buffer.
#[inline]
pub fn usage_indirect_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_INDIRECT_BUFFER_BIT) != 0
}
/*pub fn try_read(&self) -> Option<Read> {
}
pub fn read(&self, timeout_ns: u64) -> Result<Read, > {
}
pub fn try_write(&self) -> Option<ReadWrite> {
}
pub fn write(&self) -> Result<ReadWrite, > {
}*/
}
impl<T, M> Buffer<[T], M> {
/// Returns the number of elements in the buffer.
#[inline]
pub fn len(&self) -> usize {
self.size() / mem::size_of::<T>()
}
}
impl<'a, T: ?Sized, M> Buffer<T, M> where M: CpuAccessible<'a, T> {
/// Gives a read access to the content of the buffer.
///
/// If the buffer is in use by the GPU, blocks until it is available.
#[inline]
pub fn read(&'a self, timeout_ns: u64) -> M::Read {
self.inner.memory.read(timeout_ns)
}
/// Tries to give a read access to the content of the buffer.
///
/// If the buffer is in use by the GPU, returns `None`.
#[inline]
pub fn try_read(&'a self) -> Option<M::Read> {
self.inner.memory.try_read()
}
}
impl<'a, T: ?Sized, M> Buffer<T, M> where M: CpuWriteAccessible<'a, T> {
/// Gives a write access to the content of the buffer.
///
/// If the buffer is in use by the GPU, blocks until it is available.
#[inline]
pub fn write(&'a self, timeout_ns: u64) -> M::Write {
self.inner.memory.write(timeout_ns)
}
/// Tries to give a write access to the content of the buffer.
///
/// If the buffer is in use by the GPU, returns `None`.
#[inline]
pub fn try_write(&'a self) -> Option<M::Write> {
self.inner.memory.try_write()
}
}
unsafe impl<'a, T: ?Sized, M> CpuAccessible<'a, T> for Buffer<T, M>
where M: CpuAccessible<'a, T>
{
type Read = M::Read;
#[inline]
fn read(&'a self, timeout_ns: u64) -> M::Read {
self.read(timeout_ns)
}
#[inline]
fn try_read(&'a self) -> Option<M::Read> {
self.try_read()
}
}
unsafe impl<'a, T: ?Sized, M> CpuWriteAccessible<'a, T> for Buffer<T, M>
where M: CpuWriteAccessible<'a, T>
{
type Write = M::Write;
#[inline]
fn write(&'a self, timeout_ns: u64) -> M::Write {
self.write(timeout_ns)
}
#[inline]
fn try_write(&'a self) -> Option<M::Write> {
self.try_write()
}
}
impl<T: ?Sized, M> VulkanObject for Buffer<T, M> {
type Object = vk::Buffer;
#[inline]
fn internal_object(&self) -> vk::Buffer {
self.inner.buffer
}
}
impl<T: ?Sized, M> Drop for Buffer<T, M> {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.inner.device.pointers();
vk.DestroyBuffer(self.inner.device.internal_object(), self.inner.buffer, ptr::null());
}
}
}
/// Describes how a buffer is going to be used. This is **not** an optimization.
///
/// If you try to use a buffer in a way that you didn't declare, a panic will happen.
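///
/// # Example
///
/// A sketch; each field corresponds to one Vulkan buffer usage flag:
///
/// ```ignore
/// // Declare only the usages that are actually needed.
/// let usage = Usage {
///     transfer_source: false,
///     transfer_dest: true,
///     uniform_texel_buffer: false,
///     storage_texel_buffer: false,
///     uniform_buffer: false,
///     storage_buffer: false,
///     index_buffer: false,
///     vertex_buffer: true,
///     indirect_buffer: false,
/// };
/// ```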
#[derive(Debug, Copy, Clone)]
pub struct Usage {
pub transfer_source: bool,
pub transfer_dest: bool,
pub uniform_texel_buffer: bool,
pub storage_texel_buffer: bool,
pub uniform_buffer: bool,
pub storage_buffer: bool,
pub index_buffer: bool,
pub vertex_buffer: bool,
pub indirect_buffer: bool,
}
impl Usage {
/// Builds a `Usage` with all values set to true. Can be used for quick prototyping.
#[inline]
pub fn all() -> Usage {
Usage {
transfer_source: true,
transfer_dest: true,
uniform_texel_buffer: true,
storage_texel_buffer: true,
uniform_buffer: true,
storage_buffer: true,
index_buffer: true,
vertex_buffer: true,
indirect_buffer: true,
}
}
#[inline]
fn to_usage_bits(&self) -> vk::BufferUsageFlagBits {
let mut result = 0;
if self.transfer_source { result |= vk::BUFFER_USAGE_TRANSFER_SRC_BIT; }
if self.transfer_dest { result |= vk::BUFFER_USAGE_TRANSFER_DST_BIT; }
if self.uniform_texel_buffer { result |= vk::BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT; }
if self.storage_texel_buffer { result |= vk::BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT; }
if self.uniform_buffer { result |= vk::BUFFER_USAGE_UNIFORM_BUFFER_BIT; }
if self.storage_buffer { result |= vk::BUFFER_USAGE_STORAGE_BUFFER_BIT; }
if self.index_buffer { result |= vk::BUFFER_USAGE_INDEX_BUFFER_BIT; }
if self.vertex_buffer { result |= vk::BUFFER_USAGE_VERTEX_BUFFER_BIT; }
if self.indirect_buffer { result |= vk::BUFFER_USAGE_INDIRECT_BUFFER_BIT; }
result
}
}
/// A subpart of a buffer.
///
/// This object doesn't correspond to any Vulkan object. It exists for the programmer's
/// convenience.
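///
/// # Example
///
/// A slice covering a whole buffer can be obtained through the `From` implementation below:
///
/// ```ignore
/// let slice = BufferSlice::from(&buffer);
/// assert_eq!(slice.offset(), 0);
/// assert_eq!(slice.size(), buffer.size());
/// ```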
#[derive(Copy, Clone)]
pub struct BufferSlice<'a, T: ?Sized + 'a, M: 'a> {
marker: PhantomData<T>,
inner: &'a Inner<M>,
offset: usize,
size: usize,
}
impl<'a, T: ?Sized + 'a, M: 'a> BufferSlice<'a, T, M> {
/// Returns the offset of that slice within the buffer.
#[inline]
pub fn offset(&self) -> usize {
self.offset
}
/// Returns the size of that slice in bytes.
#[inline]
pub fn size(&self) -> usize {
self.size
}
/// True if the buffer can be used as a source for buffer transfers.
#[inline]
pub fn usage_transfer_src(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_TRANSFER_SRC_BIT) != 0
}
/// True if the buffer can be used as a destination for buffer transfers.
#[inline]
pub fn usage_transfer_dest(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_TRANSFER_DST_BIT) != 0
}
/// True if the buffer can be used as a uniform texel buffer.
#[inline]
pub fn usage_uniform_texel_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) != 0
}
/// True if the buffer can be used as a storage texel buffer.
#[inline]
pub fn usage_storage_texel_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT) != 0
}
/// True if the buffer can be used as a uniform buffer.
#[inline]
pub fn usage_uniform_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_UNIFORM_BUFFER_BIT) != 0
}
/// True if the buffer can be used as a storage buffer.
#[inline]
pub fn usage_storage_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_STORAGE_BUFFER_BIT) != 0
}
/// True if the buffer can be used as a source for index data.
#[inline]
pub fn usage_index_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_INDEX_BUFFER_BIT) != 0
}
/// True if the buffer can be used as a source for vertex data.
#[inline]
pub fn usage_vertex_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_VERTEX_BUFFER_BIT) != 0
}
/// True if the buffer can be used as an indirect buffer.
#[inline]
pub fn usage_indirect_buffer(&self) -> bool {
(self.inner.usage & vk::BUFFER_USAGE_INDIRECT_BUFFER_BIT) != 0
}
}
impl<'a, T: 'a, M: 'a> BufferSlice<'a, [T], M> {
/// Returns the number of elements in this slice.
#[inline]
pub fn len(&self) -> usize {
self.size() / mem::size_of::<T>()
}
}
impl<'a, T: ?Sized, M> VulkanObject for BufferSlice<'a, T, M> {
type Object = vk::Buffer;
#[inline]
fn internal_object(&self) -> vk::Buffer {
self.inner.buffer
}
}
impl<'a, T: ?Sized + 'a, M: 'a> From<&'a Arc<Buffer<T, M>>> for BufferSlice<'a, T, M> {
#[inline]
fn from(r: &'a Arc<Buffer<T, M>>) -> BufferSlice<'a, T, M> {
BufferSlice {
marker: PhantomData,
inner: &r.inner,
offset: 0,
size: r.inner.size,
}
}
}
impl<'a, T: 'a, M: 'a> From<BufferSlice<'a, T, M>> for BufferSlice<'a, [T], M> {
#[inline]
fn from(r: BufferSlice<'a, T, M>) -> BufferSlice<'a, [T], M> {
BufferSlice {
marker: PhantomData,
inner: r.inner,
offset: r.offset,
size: r.size,
}
}
}
/// Represents a way for the GPU to interpret buffer data.
///
/// Note that a buffer view is only required for some operations. For example using a buffer as a
/// uniform buffer doesn't require creating a `BufferView`.
pub struct BufferView<T: ?Sized, M> {
buffer: Arc<Buffer<T, M>>,
}

@ -0,0 +1,478 @@
use std::mem;
use std::ptr;
use std::sync::Arc;
use buffer::Buffer;
use buffer::BufferSlice;
use command_buffer::CommandBufferPool;
use command_buffer::DynamicState;
use device::Queue;
use framebuffer::ClearValue;
use framebuffer::Framebuffer;
use framebuffer::RenderPass;
use framebuffer::RenderPassLayout;
use memory::MemorySourceChunk;
use pipeline::GraphicsPipeline;
use pipeline::vertex::MultiVertex;
use device::Device;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// Actual implementation of all command buffer builders.
///
/// Doesn't check whether the command type is appropriate for the command buffer type.
pub struct InnerCommandBufferBuilder {
device: Arc<Device>,
pool: Arc<CommandBufferPool>,
cmd: Option<vk::CommandBuffer>,
resources: Vec<Arc<MemorySourceChunk>>,
// Current pipeline object bound to the graphics bind point.
graphics_pipeline: Option<vk::Pipeline>,
// Current pipeline object bound to the compute bind point.
compute_pipeline: Option<vk::Pipeline>,
// Current state of the dynamic state within the command buffer.
dynamic_state: DynamicState,
// When we use a buffer whose sharing mode is exclusive in a different queue family, we have
// to transfer back ownership to the original queue family. To do so, we store the list of
// barriers that must be queued before calling `vkEndCommandBuffer`.
buffer_restore_queue_family: Vec<vk::BufferMemoryBarrier>,
}
impl InnerCommandBufferBuilder {
/// Creates a new builder.
pub fn new(pool: &Arc<CommandBufferPool>, secondary: bool)
-> Result<InnerCommandBufferBuilder, OomError>
{
let device = pool.device();
let vk = device.pointers();
let cmd = unsafe {
let infos = vk::CommandBufferAllocateInfo {
sType: vk::STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
pNext: ptr::null(),
commandPool: pool.internal_object(),
level: if secondary {
vk::COMMAND_BUFFER_LEVEL_SECONDARY
} else {
vk::COMMAND_BUFFER_LEVEL_PRIMARY
},
// vulkan can allocate multiple command buffers at once, hence the 1
commandBufferCount: 1,
};
let mut output = mem::uninitialized();
try!(check_errors(vk.AllocateCommandBuffers(device.internal_object(), &infos,
&mut output)));
output
};
unsafe {
let infos = vk::CommandBufferBeginInfo {
sType: vk::STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
pNext: ptr::null(),
flags: vk::COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT, // TODO:
pInheritanceInfo: ptr::null(), // TODO:
};
try!(check_errors(vk.BeginCommandBuffer(cmd, &infos)));
}
Ok(InnerCommandBufferBuilder {
device: device.clone(),
pool: pool.clone(),
cmd: Some(cmd),
resources: Vec::new(),
graphics_pipeline: None,
compute_pipeline: None,
dynamic_state: DynamicState::none(),
buffer_restore_queue_family: Vec::new(),
})
}
/// Executes the content of another command buffer.
///
/// # Safety
///
/// Care must be taken to respect the rules about secondary command buffers.
pub unsafe fn execute_commands<'a, I>(self, iter: I)
-> InnerCommandBufferBuilder
where I: Iterator<Item = &'a InnerCommandBuffer>
{
{
let mut command_buffers = Vec::with_capacity(iter.size_hint().0);
for cb in iter {
command_buffers.push(cb.cmd);
// FIXME: push resources
}
let vk = self.device.pointers();
vk.CmdExecuteCommands(self.cmd.unwrap(), command_buffers.len() as u32,
command_buffers.as_ptr());
}
self
}
/// Writes data to a buffer.
///
/// # Panic
///
/// - Panics if the size of `data` is not the same as the size of the buffer slice.
/// - Panics if the size of `data` is greater than 65536 bytes.
/// - Panics if the offset or size is not a multiple of 4.
/// - Panics if the buffer wasn't created with the right usage.
/// - Panics if the queue family doesn't support transfer operations.
///
/// # Safety
///
/// - Care must be taken to respect the rules about secondary command buffers.
///
pub unsafe fn update_buffer<'a, B, T: 'a, M: 'a>(self, buffer: B, data: &T)
-> InnerCommandBufferBuilder
where B: Into<BufferSlice<'a, T, M>>
{
{
let vk = self.device.pointers();
let buffer = buffer.into();
assert!(self.pool.queue_family().supports_transfers());
assert_eq!(buffer.size(), mem::size_of_val(data));
assert!(buffer.size() <= 65536);
assert!(buffer.offset() % 4 == 0);
assert!(buffer.size() % 4 == 0);
assert!(buffer.usage_transfer_dest());
// FIXME: check that the queue family supports transfers
// FIXME: add the buffer to the list of resources
// FIXME: check queue family of the buffer
vk.CmdUpdateBuffer(self.cmd.unwrap(), buffer.internal_object(),
buffer.offset() as vk::DeviceSize,
buffer.size() as vk::DeviceSize, data as *const T as *const _);
}
self
}
/// Fills a buffer with data.
///
/// # Panic
///
/// - Panics if `offset + size` is greater than the size of the buffer.
/// - Panics if the offset or size is not a multiple of 4.
/// - Panics if the buffer wasn't created with the right usage.
/// - Panics if the queue family doesn't support transfer operations.
///
/// # Safety
///
/// - Type safety is not enforced by the API.
/// - Care must be taken to respect the rules about secondary command buffers.
///
pub unsafe fn fill_buffer<'a, T: 'a, M: 'a>(self, buffer: &Arc<Buffer<T, M>>, offset: usize,
size: usize, data: u32) -> InnerCommandBufferBuilder
{
{
let vk = self.device.pointers();
assert!(self.pool.queue_family().supports_transfers());
assert!(offset + size <= buffer.size());
assert!(offset % 4 == 0);
assert!(size % 4 == 0);
assert!(buffer.usage_transfer_dest());
// FIXME: check that the queue family supports transfers
// FIXME: add the buffer to the list of resources
// FIXME: check queue family of the buffer
vk.CmdFillBuffer(self.cmd.unwrap(), buffer.internal_object(),
offset as vk::DeviceSize, size as vk::DeviceSize, data);
}
self
}
/*fn copy_buffer<I>(source: &Arc<Buffer>, destination: &Arc<Buffer>, copies: I)
-> InnerCommandBufferBuilder
where I: IntoIter<Item = CopyCommand>
{
assert!(self.pool.queue_family().supports_transfers());
// TODO: check values
let copies = copies.into_iter().map(|command| {
vk::BufferCopy {
srcOffset: command.source_offset,
dstOffset: command.destination_offset,
size: command.size,
}
}).collect::<Vec<_>>();
vk.CmdCopyBuffer(self.cmd.unwrap(), source.internal_object(), destination.internal_object(),
copies.len(), copies.as_ptr());
}*/
/// Calls `vkCmdDraw`.
// FIXME: push constants
pub unsafe fn draw<V>(mut self, pipeline: &Arc<GraphicsPipeline<V>>,
vertices: V, dynamic: &DynamicState)
-> InnerCommandBufferBuilder
where V: MultiVertex
{
{
self.bind_gfx_pipeline_state(pipeline, dynamic);
let vk = self.device.pointers();
let ids = vertices.ids();
let offsets = (0 .. ids.len()).map(|_| 0).collect::<Vec<_>>();
vk.CmdBindVertexBuffers(self.cmd.unwrap(), 0, ids.len() as u32, ids.as_ptr(),
offsets.as_ptr());
vk.CmdDraw(self.cmd.unwrap(), 3, 1, 0, 0); // FIXME: params
}
self
}
fn bind_gfx_pipeline_state<V>(&mut self, pipeline: &Arc<GraphicsPipeline<V>>,
dynamic: &DynamicState)
{
let vk = self.device.pointers();
if self.graphics_pipeline != Some(pipeline.internal_object()) {
// FIXME: add pipeline to resources list
unsafe {
vk.CmdBindPipeline(self.cmd.unwrap(), vk::PIPELINE_BIND_POINT_GRAPHICS,
pipeline.internal_object());
}
self.graphics_pipeline = Some(pipeline.internal_object());
}
if let Some(line_width) = dynamic.line_width {
assert!(pipeline.has_dynamic_line_width());
// TODO: check limits
if self.dynamic_state.line_width != Some(line_width) {
unsafe { vk.CmdSetLineWidth(self.cmd.unwrap(), line_width) };
self.dynamic_state.line_width = Some(line_width);
}
} else {
assert!(!pipeline.has_dynamic_line_width());
}
}
/// Calls `vkCmdBeginRenderPass`.
///
/// # Panic
///
/// - Panics if the framebuffer is not compatible with the renderpass.
///
/// # Safety
///
/// - Care must be taken to respect the rules about secondary command buffers.
///
#[inline]
pub unsafe fn begin_renderpass<R, F>(self, renderpass: &Arc<RenderPass<R>>,
framebuffer: &Arc<Framebuffer<F>>,
secondary_cmd_buffers: bool,
clear_values: &[ClearValue]) -> InnerCommandBufferBuilder
where R: RenderPassLayout
{
// FIXME: framebuffer synchronization
assert!(framebuffer.is_compatible_with(renderpass));
let clear_values = clear_values.iter().map(|value| {
match *value {
ClearValue::None => vk::ClearValue::color({
vk::ClearColorValue::float32([0.0, 0.0, 0.0, 0.0])
}),
ClearValue::Float(data) => vk::ClearValue::color(vk::ClearColorValue::float32(data)),
ClearValue::Int(data) => vk::ClearValue::color(vk::ClearColorValue::int32(data)),
ClearValue::Uint(data) => vk::ClearValue::color(vk::ClearColorValue::uint32(data)),
ClearValue::Depth(d) => vk::ClearValue::depth_stencil({
vk::ClearDepthStencilValue { depth: d, stencil: 0 }
}),
ClearValue::Stencil(s) => vk::ClearValue::depth_stencil({
vk::ClearDepthStencilValue { depth: 0.0, stencil: s }
}),
ClearValue::DepthStencil((d, s)) => vk::ClearValue::depth_stencil({
vk::ClearDepthStencilValue { depth: d, stencil: s }
}),
}
}).collect::<Vec<_>>();
// TODO: change attachment image layouts if necessary, for both initial and final
for attachment in R::attachments() {
}
{
let vk = self.device.pointers();
let infos = vk::RenderPassBeginInfo {
sType: vk::STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
pNext: ptr::null(),
renderPass: renderpass.internal_object(),
framebuffer: framebuffer.internal_object(),
renderArea: vk::Rect2D { // TODO: let user customize
offset: vk::Offset2D { x: 0, y: 0 },
extent: vk::Extent2D {
width: framebuffer.width(),
height: framebuffer.height(),
},
},
clearValueCount: clear_values.len() as u32,
pClearValues: clear_values.as_ptr(),
};
let content = if secondary_cmd_buffers {
vk::SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS
} else {
vk::SUBPASS_CONTENTS_INLINE
};
vk.CmdBeginRenderPass(self.cmd.unwrap(), &infos, content);
}
self
}
#[inline]
pub unsafe fn next_subpass(self, secondary_cmd_buffers: bool) -> InnerCommandBufferBuilder {
{
let vk = self.device.pointers();
let content = if secondary_cmd_buffers {
vk::SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS
} else {
vk::SUBPASS_CONTENTS_INLINE
};
vk.CmdNextSubpass(self.cmd.unwrap(), content);
}
self
}
#[inline]
pub unsafe fn end_renderpass(self) -> InnerCommandBufferBuilder {
{
let vk = self.device.pointers();
vk.CmdEndRenderPass(self.cmd.unwrap());
}
self
}
/// Finishes building the command buffer.
pub fn build(mut self) -> Result<InnerCommandBuffer, OomError> {
unsafe {
let vk = self.device.pointers();
let cmd = self.cmd.take().unwrap();
// committing the necessary barriers
if !self.buffer_restore_queue_family.is_empty() {
vk.CmdPipelineBarrier(cmd, vk::PIPELINE_STAGE_ALL_COMMANDS_BIT,
vk::PIPELINE_STAGE_TOP_OF_PIPE_BIT, 0,
0, ptr::null(),
self.buffer_restore_queue_family.len() as u32,
self.buffer_restore_queue_family.as_ptr(),
0, ptr::null());
}
// ending the commands recording
try!(check_errors(vk.EndCommandBuffer(cmd)));
Ok(InnerCommandBuffer {
device: self.device.clone(),
pool: self.pool.clone(),
cmd: cmd,
resources: mem::replace(&mut self.resources, Vec::new()),
})
}
}
}
impl Drop for InnerCommandBufferBuilder {
#[inline]
fn drop(&mut self) {
if let Some(cmd) = self.cmd {
unsafe {
let vk = self.device.pointers();
vk.EndCommandBuffer(cmd);
vk.FreeCommandBuffers(self.device.internal_object(), self.pool.internal_object(),
1, &cmd);
}
}
}
}
/// Actual implementation of all command buffers.
pub struct InnerCommandBuffer {
device: Arc<Device>,
pool: Arc<CommandBufferPool>,
cmd: vk::CommandBuffer,
resources: Vec<Arc<MemorySourceChunk>>,
}
impl InnerCommandBuffer {
/// Submits the command buffer to a queue.
///
/// Queues are not thread-safe, therefore we need to get a `&mut`.
///
/// # Panic
///
/// - Panics if the queue doesn't belong to the device this command buffer was created with.
/// - Panics if the queue doesn't belong to the family the pool was created with.
///
pub fn submit(&self, queue: &mut Queue) -> Result<(), OomError> { // TODO: wrong error type
// FIXME: the whole function should be checked
let vk = self.device.pointers();
assert_eq!(queue.device().internal_object(), self.pool.device().internal_object());
assert_eq!(queue.family().id(), self.pool.queue_family().id());
// FIXME: call resources access controllers
let infos = vk::SubmitInfo {
sType: vk::STRUCTURE_TYPE_SUBMIT_INFO,
pNext: ptr::null(),
waitSemaphoreCount: 0, // TODO:
pWaitSemaphores: ptr::null(), // TODO:
pWaitDstStageMask: ptr::null(), // TODO:
commandBufferCount: 1,
pCommandBuffers: &self.cmd,
signalSemaphoreCount: 0, // TODO:
pSignalSemaphores: ptr::null(), // TODO:
};
unsafe {
try!(check_errors(vk.QueueSubmit(queue.internal_object(), 1,
&infos, mem::transmute(0u64) /*vk::NULL_HANDLE*/)));
}
Ok(())
}
/* TODO:
fn reset() -> InnerCommandBufferBuilder {
}*/
}
impl Drop for InnerCommandBuffer {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.FreeCommandBuffers(self.device.internal_object(), self.pool.internal_object(),
1, &self.cmd);
}
}
}

@ -0,0 +1,48 @@
//! Commands that the GPU will execute (includes draw commands).
//!
//! With Vulkan, before the GPU can do anything you must create a `CommandBuffer`. A command buffer
//! is a list of commands that will be executed by the GPU. Once a command buffer is created, you can
//! execute it. A command buffer must be created even for the most simple tasks.
//!
//! # Pools
//!
//! Command buffers are allocated from pools. You must first create a command buffer pool which
//! you will create command buffers from.
//!
//! A pool is linked to a queue family. Command buffers that are created from a certain pool can
//! only be submitted to queues that belong to that specific family.
//!
//! # Primary and secondary command buffers
//!
//! There are three types of command buffers:
//!
//! - **Primary command buffers**. They can contain any command. They are the only type of command
//! buffer that can be submitted to a queue.
//! - **Secondary "graphics" command buffers**. They contain draw and clear commands. They can be
//! called from a primary command buffer once a framebuffer has been selected.
//! - **Secondary "compute" command buffers**. They can contain non-draw and non-clear commands
//! (eg. copying between buffers) and can be called from a primary command buffer outside of a
//! render pass.
//!
//! Note that secondary command buffers cannot call other command buffers.
//!
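//! # Example
//!
//! A sketch of the typical flow, condensed from the examples in this repository (`device`,
//! `queue`, `renderpass`, `framebuffer` and `clear_values` are assumed to exist):
//!
//! ```ignore
//! // Command buffers are allocated from a pool that is tied to a queue family.
//! let pool = CommandBufferPool::new(&device, &queue.family()).unwrap();
//!
//! // Record a primary command buffer, then submit it to a queue of that family.
//! let cmd = PrimaryCommandBufferBuilder::new(&pool).unwrap()
//!     .draw_inline(&renderpass, &framebuffer, clear_values)
//!     .draw_end()
//!     .build().unwrap();
//! cmd.submit(&mut queue).unwrap();
//! ```
//!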
// Implementation note.
// There are various restrictions about which command can be used at which moment. Therefore the
// API has several different command buffer wrappers, but they all use the same internal
// struct. The restrictions are enforced only in the public types.
pub use self::outer::DynamicState;
pub use self::outer::PrimaryCommandBufferBuilder;
pub use self::outer::PrimaryCommandBufferBuilderInlineDraw;
pub use self::outer::PrimaryCommandBufferBuilderSecondaryDraw;
pub use self::outer::PrimaryCommandBuffer;
pub use self::outer::SecondaryGraphicsCommandBufferBuilder;
pub use self::outer::SecondaryGraphicsCommandBuffer;
pub use self::outer::SecondaryComputeCommandBufferBuilder;
pub use self::outer::SecondaryComputeCommandBuffer;
pub use self::pool::CommandBufferPool;
mod inner;
mod outer;
mod pool;

@ -0,0 +1,512 @@
use std::sync::Arc;
use buffer::Buffer;
use buffer::BufferSlice;
use command_buffer::CommandBufferPool;
use command_buffer::inner::InnerCommandBufferBuilder;
use command_buffer::inner::InnerCommandBuffer;
use device::Queue;
use framebuffer::Framebuffer;
use framebuffer::RenderPass;
use framebuffer::RenderPassLayout;
use pipeline::GraphicsPipeline;
use pipeline::vertex::MultiVertex;
use OomError;
/// A prototype of a primary command buffer.
///
/// # Usage
///
/// ```ignore // TODO: change that
/// let commands_buffer =
/// PrimaryCommandBufferBuilder::new(&device)
/// .copy_memory(..., ...)
/// .draw(...)
/// .build();
///
/// ```
///
pub struct PrimaryCommandBufferBuilder {
inner: InnerCommandBufferBuilder,
}
impl PrimaryCommandBufferBuilder {
/// Builds a new primary command buffer and start recording commands in it.
#[inline]
pub fn new(pool: &Arc<CommandBufferPool>)
-> Result<PrimaryCommandBufferBuilder, OomError>
{
let inner = try!(InnerCommandBufferBuilder::new(pool, false));
Ok(PrimaryCommandBufferBuilder { inner: inner })
}
/// Writes data to a buffer.
///
/// The data is stored inside the command buffer and written to the given buffer slice.
/// This function is intended to be used for small amounts of data (at most 64 kB is allowed). If
/// you want to transfer large amounts of data, use copies between buffers.
///
/// # Panic
///
/// - Panics if the size of `data` is not the same as the size of the buffer slice.
/// - Panics if the size of `data` is greater than 65536 bytes.
/// - Panics if the offset or size is not a multiple of 4.
/// - Panics if the buffer wasn't created with the right usage.
/// - Panics if the queue family doesn't support transfer operations.
///
#[inline]
pub fn update_buffer<'a, B, T: 'a, M: 'a>(self, buffer: B, data: &T)
-> PrimaryCommandBufferBuilder
where B: Into<BufferSlice<'a, T, M>>
{
unsafe {
PrimaryCommandBufferBuilder {
inner: self.inner.update_buffer(buffer, data)
}
}
}
/// Fills a buffer with data.
///
/// The data is repeated until it fills the range from `offset` to `offset + size`.
/// Since the data is a u32, the offset and the size must be multiples of 4.
///
/// # Panic
///
/// - Panics if `offset + size` is greater than the size of the buffer.
/// - Panics if the offset or size is not a multiple of 4.
/// - Panics if the buffer wasn't created with the right usage.
/// - Panics if the queue family doesn't support transfer operations.
///
/// # Safety
///
/// - Type safety is not enforced by the API.
///
pub unsafe fn fill_buffer<'a, T: 'a, M: 'a>(self, buffer: &Arc<Buffer<T, M>>, offset: usize,
size: usize, data: u32)
-> PrimaryCommandBufferBuilder
{
PrimaryCommandBufferBuilder {
inner: self.inner.fill_buffer(buffer, offset, size, data)
}
}
/// Executes secondary compute command buffers within this primary command buffer.
#[inline]
pub fn execute_commands<'a, I>(self, iter: I) -> PrimaryCommandBufferBuilder
where I: Iterator<Item = &'a SecondaryComputeCommandBuffer>
{
unsafe {
PrimaryCommandBufferBuilder {
inner: self.inner.execute_commands(iter.map(|cb| &cb.inner))
}
}
}
/// Start drawing on a framebuffer.
///
/// This function returns an object that can be used to submit draw commands on the first
/// subpass of the renderpass.
///
/// # Panic
///
/// - Panics if the framebuffer is not compatible with the renderpass.
///
// FIXME: rest of the parameters (render area and clear attachment values)
#[inline]
pub fn draw_inline<R, F>(self, renderpass: &Arc<RenderPass<R>>,
framebuffer: &Arc<Framebuffer<F>>, clear_values: F::ClearValues)
-> PrimaryCommandBufferBuilderInlineDraw
where F: RenderPassLayout, R: RenderPassLayout
{
// FIXME: check for compatibility
let clear_values = F::convert_clear_values(clear_values);
unsafe {
let inner = self.inner.begin_renderpass(renderpass, framebuffer, false, &clear_values);
PrimaryCommandBufferBuilderInlineDraw {
inner: inner,
current_subpass: 0,
num_subpasses: 1, // FIXME:
}
}
}
/// Start drawing on a framebuffer.
///
/// This function returns an object that can be used to submit secondary graphics command
/// buffers that will operate on the first subpass of the renderpass.
///
/// # Panic
///
/// - Panics if the framebuffer is not compatible with the renderpass.
///
// FIXME: rest of the parameters (render area and clear attachment values)
#[inline]
pub fn draw_secondary<R, F>(self, renderpass: &Arc<RenderPass<R>>,
framebuffer: &Arc<Framebuffer<F>>, clear_values: F::ClearValues)
-> PrimaryCommandBufferBuilderSecondaryDraw
where F: RenderPassLayout, R: RenderPassLayout
{
// FIXME: check for compatibility
let clear_values = F::convert_clear_values(clear_values);
unsafe {
let inner = self.inner.begin_renderpass(renderpass, framebuffer, true, &clear_values);
PrimaryCommandBufferBuilderSecondaryDraw {
inner: inner,
current_subpass: 0,
num_subpasses: 1, // FIXME:
}
}
}
/// Finish recording commands and build the command buffer.
#[inline]
pub fn build(self) -> Result<Arc<PrimaryCommandBuffer>, OomError> {
let inner = try!(self.inner.build());
Ok(Arc::new(PrimaryCommandBuffer { inner: inner }))
}
}
/// Object that you obtain when calling `draw_inline` or `next_subpass_inline`.
pub struct PrimaryCommandBufferBuilderInlineDraw {
inner: InnerCommandBufferBuilder,
current_subpass: u32,
num_subpasses: u32,
}
impl PrimaryCommandBufferBuilderInlineDraw {
/// Calls `vkCmdDraw`.
// FIXME: push constants
pub fn draw<V>(self, pipeline: &Arc<GraphicsPipeline<V>>,
vertices: V, dynamic: &DynamicState) -> PrimaryCommandBufferBuilderInlineDraw
where V: MultiVertex
{
unsafe {
PrimaryCommandBufferBuilderInlineDraw {
inner: self.inner.draw(pipeline, vertices, dynamic),
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass,
}
}
}
/// Switches to the next subpass of the current renderpass.
///
/// This function is similar to `draw_inline` on the builder.
///
/// # Panic
///
/// - Panics if no more subpasses remain.
///
#[inline]
pub fn next_subpass_inline(self) -> PrimaryCommandBufferBuilderInlineDraw {
assert!(self.current_subpass + 1 < self.num_subpasses);
unsafe {
let inner = self.inner.next_subpass(false);
PrimaryCommandBufferBuilderInlineDraw {
inner: inner,
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass + 1,
}
}
}
/// Switches to the next subpass of the current renderpass.
///
/// This function is similar to `draw_secondary` on the builder.
///
/// # Panic
///
/// - Panics if no more subpasses remain.
///
#[inline]
pub fn next_subpass_secondary(self) -> PrimaryCommandBufferBuilderSecondaryDraw {
assert!(self.current_subpass + 1 < self.num_subpasses);
unsafe {
let inner = self.inner.next_subpass(true);
PrimaryCommandBufferBuilderSecondaryDraw {
inner: inner,
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass + 1,
}
}
}
/// Finish drawing this renderpass and get back the builder.
#[inline]
pub fn draw_end(mut self) -> PrimaryCommandBufferBuilder {
unsafe {
// skipping the remaining subpasses
for _ in 0 .. (self.num_subpasses - self.current_subpass - 1) {
self.inner = self.inner.next_subpass(false);
}
let inner = self.inner.end_renderpass();
PrimaryCommandBufferBuilder {
inner: inner,
}
}
}
}
/// Object that you obtain when calling `draw_secondary` or `next_subpass_secondary`.
pub struct PrimaryCommandBufferBuilderSecondaryDraw {
inner: InnerCommandBufferBuilder,
num_subpasses: u32,
current_subpass: u32,
}
impl PrimaryCommandBufferBuilderSecondaryDraw {
/// Switches to the next subpass of the current renderpass.
///
/// This function is similar to `draw_inline` on the builder.
///
/// # Panic
///
/// - Panics if no more subpasses remain.
///
#[inline]
pub fn next_subpass_inline(self) -> PrimaryCommandBufferBuilderInlineDraw {
assert!(self.current_subpass + 1 < self.num_subpasses);
unsafe {
let inner = self.inner.next_subpass(false);
PrimaryCommandBufferBuilderInlineDraw {
inner: inner,
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass + 1,
}
}
}
/// Switches to the next subpass of the current renderpass.
///
/// This function is similar to `draw_secondary` on the builder.
///
/// # Panic
///
/// - Panics if no more subpasses remain.
///
#[inline]
pub fn next_subpass_secondary(self) -> PrimaryCommandBufferBuilderSecondaryDraw {
assert!(self.current_subpass + 1 < self.num_subpasses);
unsafe {
let inner = self.inner.next_subpass(true);
PrimaryCommandBufferBuilderSecondaryDraw {
inner: inner,
num_subpasses: self.num_subpasses,
current_subpass: self.current_subpass + 1,
}
}
}
/// Executes secondary graphics command buffers within this primary command buffer.
///
/// # Panic
///
/// - Panics if one of the secondary command buffers wasn't created with a compatible
/// renderpass or is using the wrong subpass.
#[inline]
pub fn execute_commands<'a, I>(mut self, iter: I) -> PrimaryCommandBufferBuilderSecondaryDraw
where I: Iterator<Item = &'a SecondaryGraphicsCommandBuffer>
{
// FIXME: check renderpass and subpass
unsafe {
self.inner = self.inner.execute_commands(iter.map(|cb| &cb.inner));
self
}
}
/// Finish drawing this renderpass and get back the builder.
#[inline]
pub fn draw_end(mut self) -> PrimaryCommandBufferBuilder {
unsafe {
// skipping the remaining subpasses
for _ in 0 .. (self.num_subpasses - self.current_subpass - 1) {
self.inner = self.inner.next_subpass(false);
}
let inner = self.inner.end_renderpass();
PrimaryCommandBufferBuilder {
inner: inner,
}
}
}
}
/// Represents a collection of commands to be executed by the GPU.
///
/// A primary command buffer can contain any command.
pub struct PrimaryCommandBuffer {
inner: InnerCommandBuffer,
}
impl PrimaryCommandBuffer {
/// Submits the command buffer to a queue so that it is executed.
///
/// Fences and semaphores are automatically handled.
///
/// # Panic
///
/// - Panics if the queue doesn't belong to the device this command buffer was created with.
/// - Panics if the queue doesn't belong to the family the pool was created with.
///
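/// # Example
///
/// A short sketch; `queues` is assumed to be the list of queues returned by `Device::new`.
///
/// ```ignore
/// let mut queue = queues[0].lock().unwrap();
/// command_buffer.submit(&mut queue).unwrap();
/// ```
///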
#[inline]
pub fn submit(&self, queue: &mut Queue) -> Result<(), OomError> { // TODO: wrong error type
self.inner.submit(queue)
}
}
/// A prototype of a secondary graphics command buffer.
pub struct SecondaryGraphicsCommandBufferBuilder {
inner: InnerCommandBufferBuilder,
}
impl SecondaryGraphicsCommandBufferBuilder {
/// Builds a new secondary command buffer and starts recording commands in it.
#[inline]
pub fn new(pool: &Arc<CommandBufferPool>)
-> Result<SecondaryGraphicsCommandBufferBuilder, OomError>
{
let inner = try!(InnerCommandBufferBuilder::new(pool, true));
Ok(SecondaryGraphicsCommandBufferBuilder { inner: inner })
}
/// Finish recording commands and build the command buffer.
#[inline]
pub fn build(self) -> Result<Arc<SecondaryGraphicsCommandBuffer>, OomError> {
let inner = try!(self.inner.build());
Ok(Arc::new(SecondaryGraphicsCommandBuffer { inner: inner }))
}
}
/// Represents a collection of commands to be executed by the GPU.
///
/// A secondary graphics command buffer contains draw commands and non-draw commands. Secondary
/// command buffers can't specify which framebuffer they are drawing to. Instead you must create
/// a primary command buffer, specify a framebuffer, and then call the secondary command buffer.
///
/// A secondary graphics command buffer can't be called outside of a renderpass.
pub struct SecondaryGraphicsCommandBuffer {
inner: InnerCommandBuffer,
}
/// A prototype of a secondary compute command buffer.
pub struct SecondaryComputeCommandBufferBuilder {
inner: InnerCommandBufferBuilder,
}
impl SecondaryComputeCommandBufferBuilder {
/// Builds a new secondary command buffer and starts recording commands in it.
#[inline]
pub fn new(pool: &Arc<CommandBufferPool>)
-> Result<SecondaryComputeCommandBufferBuilder, OomError>
{
let inner = try!(InnerCommandBufferBuilder::new(pool, true));
Ok(SecondaryComputeCommandBufferBuilder { inner: inner })
}
/// Writes data to a buffer.
///
/// The data is stored inside the command buffer and written to the given buffer slice.
/// This function is intended to be used for small amounts of data (at most 64 kB is allowed).
/// If you want to transfer large amounts of data, use copies between buffers.
///
/// # Panic
///
/// - Panics if the size of `data` is not the same as the size of the buffer slice.
/// - Panics if the size of `data` is greater than 65536 bytes.
/// - Panics if the offset or size is not a multiple of 4.
/// - Panics if the buffer wasn't created with the right usage.
/// - Panics if the queue family doesn't support transfer operations.
///
#[inline]
pub fn update_buffer<'a, B, T: 'a, M: 'a>(self, buffer: B, data: &T)
-> SecondaryComputeCommandBufferBuilder
where B: Into<BufferSlice<'a, T, M>>
{
unsafe {
SecondaryComputeCommandBufferBuilder {
inner: self.inner.update_buffer(buffer, data)
}
}
}
/// Fills a buffer with data.
///
/// The data is repeated until it fills the range from `offset` to `offset + size`.
/// Since the data is a u32, the offset and the size must be multiples of 4.
///
/// # Panic
///
/// - Panics if `offset + size` is greater than the size of the buffer.
/// - Panics if the offset or size is not a multiple of 4.
/// - Panics if the buffer wasn't created with the right usage.
/// - Panics if the queue family doesn't support transfer operations.
///
/// # Safety
///
/// - Type safety is not enforced by the API.
pub unsafe fn fill_buffer<'a, T: 'a, M: 'a>(self, buffer: &Arc<Buffer<T, M>>, offset: usize,
size: usize, data: u32)
-> SecondaryComputeCommandBufferBuilder
{
SecondaryComputeCommandBufferBuilder {
inner: self.inner.fill_buffer(buffer, offset, size, data)
}
}
/// Finish recording commands and build the command buffer.
#[inline]
pub fn build(self) -> Result<Arc<SecondaryComputeCommandBuffer>, OomError> {
let inner = try!(self.inner.build());
Ok(Arc::new(SecondaryComputeCommandBuffer { inner: inner }))
}
}
/// Represents a collection of commands to be executed by the GPU.
///
/// A secondary compute command buffer contains non-draw commands (like copy commands, compute
/// shader execution, etc.). It can only be called outside of a renderpass.
pub struct SecondaryComputeCommandBuffer {
inner: InnerCommandBuffer,
}
/// The dynamic state to use for a draw command.
#[derive(Debug, Copy, Clone)]
pub struct DynamicState {
pub line_width: Option<f32>,
}
impl DynamicState {
#[inline]
pub fn none() -> DynamicState {
DynamicState {
line_width: None,
}
}
}
impl Default for DynamicState {
#[inline]
fn default() -> DynamicState {
DynamicState::none()
}
}


@ -0,0 +1,91 @@
use std::mem;
use std::ptr;
use std::sync::Arc;
use instance::QueueFamily;
use device::Device;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// A pool from which command buffers are created.
pub struct CommandBufferPool {
device: Arc<Device>,
pool: vk::CommandPool,
queue_family_index: u32,
}
impl CommandBufferPool {
/// Creates a new pool.
///
/// The command buffers created with this pool can only be executed on queues of the given
/// family.
///
/// # Panic
///
/// Panics if the queue family doesn't belong to the same physical device as `device`.
///
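/// # Example
///
/// A minimal sketch, assuming `device` and `queue_family` come from device creation:
///
/// ```ignore
/// let pool = CommandBufferPool::new(&device, &queue_family).unwrap();
/// ```
///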
#[inline]
pub fn new(device: &Arc<Device>, queue_family: &QueueFamily)
-> Result<Arc<CommandBufferPool>, OomError>
{
assert_eq!(device.physical_device().internal_object(),
queue_family.physical_device().internal_object());
let vk = device.pointers();
let pool = unsafe {
let infos = vk::CommandPoolCreateInfo {
sType: vk::STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // TODO:
queueFamilyIndex: queue_family.id(),
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateCommandPool(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(CommandBufferPool {
device: device.clone(),
pool: pool,
queue_family_index: queue_family.id(),
}))
}
/// Returns the device this command pool was created with.
#[inline]
pub fn device(&self) -> &Arc<Device> {
&self.device
}
/// Returns the queue family on which command buffers of this pool can be executed.
#[inline]
pub fn queue_family(&self) -> QueueFamily {
self.device.physical_device().queue_family_by_id(self.queue_family_index).unwrap()
}
}
impl VulkanObject for CommandBufferPool {
type Object = vk::CommandPool;
#[inline]
fn internal_object(&self) -> vk::CommandPool {
self.pool
}
}
impl Drop for CommandBufferPool {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroyCommandPool(self.device.internal_object(), self.pool, ptr::null());
}
}
}


@ -0,0 +1,197 @@
use std::iter;
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::sync::Arc;
use buffer::Buffer;
use device::Device;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// Trait implemented on structs that describe a buffer layout.
///
/// The API will accept any buffer whose content implements `Layout` where the `RawLayout` matches
/// what is expected. This way you can create multiple structs compatible with each other.
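///
/// A hypothetical sketch (the struct and field names below are made up for illustration):
///
/// ```ignore
/// struct Matrices { view: [f32; 16], proj: [f32; 16] }
/// unsafe impl Layout for Matrices { type RawLayout = ([f32; 16], [f32; 16]); }
/// ```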
pub unsafe trait Layout {
type RawLayout;
}
/// Represents the layout of the resources and data that can be bound before drawing and that
/// will be accessible from the shaders.
///
/// The template parameter represents the descriptor sets.
// TODO: push constants.
pub struct PipelineLayout<DescriptorSets> {
device: Arc<Device>,
layout: vk::PipelineLayout,
marker: PhantomData<DescriptorSets>,
}
impl<DescriptorSets> PipelineLayout<DescriptorSets> {
/// Creates a new `PipelineLayout`.
pub fn new(device: &Arc<Device>) -> Result<Arc<PipelineLayout<DescriptorSets>>, OomError> {
let vk = device.pointers();
let layout = unsafe {
let infos = vk::PipelineLayoutCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
setLayoutCount: 0, // TODO:
pSetLayouts: ptr::null(), // TODO:
pushConstantRangeCount: 0, // TODO:
pPushConstantRanges: ptr::null(), // TODO:
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreatePipelineLayout(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(PipelineLayout {
device: device.clone(),
layout: layout,
marker: PhantomData,
}))
}
}
pub trait DescriptorDef {
type BindData;
}
pub struct StorageImageMarker;
pub struct SamplerMarker;
pub struct SampledImageMarker;
pub struct CombinedImageSamplerMarker;
pub struct UniformTexelBufferMarker<T: ?Sized, M>(Buffer<T, M>);
pub struct StorageTexelBufferMarker<T: ?Sized, M>(Buffer<T, M>);
pub struct UniformBufferMarker<T: ?Sized, M>(Buffer<T, M>);
pub struct StorageBufferMarker<T: ?Sized, M>(Buffer<T, M>);
pub struct DynamicUniformBufferMarker<T: ?Sized, M>(Buffer<T, M>);
pub struct DynamicStorageBufferMarker<T: ?Sized, M>(Buffer<T, M>);
pub struct InputAttachmentMarker;
pub trait DescriptorSetDefinition {
type Raw;
fn into_raw(self) -> Self::Raw;
}
pub struct DescriptorSetLayout<D> {
device: Arc<Device>,
layout: vk::DescriptorSetLayout,
marker: PhantomData<D>,
}
impl<D> DescriptorSetLayout<D> where D: DescriptorSetDefinition {
pub fn new(device: &Arc<Device>) -> Result<Arc<DescriptorSetLayout<D>>, OomError> {
let vk = device.pointers();
let bindings: Vec<vk::DescriptorSetLayoutBinding> = Vec::new(); // TODO:
let layout = unsafe {
let infos = vk::DescriptorSetLayoutCreateInfo {
sType: vk::STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
bindingCount: bindings.len() as u32,
pBindings: bindings.as_ptr(),
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateDescriptorSetLayout(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(DescriptorSetLayout {
device: device.clone(),
layout: layout,
marker: PhantomData,
}))
}
}
// TODO: draft; this placeholder will eventually wrap a `vk::DescriptorSet`
pub struct DescriptorSet;
impl DescriptorSet {
pub fn new<I>(device: &Arc<Device>, descriptors: I) -> Arc<DescriptorSet>
where I: IntoIterator<Item = Descriptor>
{
let descriptors: Vec<_> = descriptors.into_iter().map(|descriptor| {
let stage_flags =
if descriptor.stages.vertex { vk::SHADER_STAGE_VERTEX_BIT } else { 0 } |
if descriptor.stages.tessellation_control { vk::SHADER_STAGE_TESSELLATION_CONTROL_BIT }
else { 0 } |
if descriptor.stages.tessellation_evaluation { vk::SHADER_STAGE_TESSELLATION_EVALUATION_BIT }
else { 0 } |
if descriptor.stages.geometry { vk::SHADER_STAGE_GEOMETRY_BIT } else { 0 } |
if descriptor.stages.fragment { vk::SHADER_STAGE_FRAGMENT_BIT } else { 0 } |
if descriptor.stages.compute { vk::SHADER_STAGE_COMPUTE_BIT } else { 0 };
// TODO: check array size limits
vk::DescriptorSetLayoutBinding {
descriptorType: 0, // TODO:
arraySize: descriptor.array_size,
stageFlags: stage_flags,
pImmutableSamplers: ptr::null(), // TODO:
}
}).collect();
unimplemented!() // TODO: call vk.CreateDescriptorSetLayout and wrap the result
}
#[inline]
pub fn write<W>(&self, write: W) {
DescriptorSet::update(Some(write), iter::empty::<()>())
}
#[inline]
pub fn copy<C>(&self, copy: C) {
DescriptorSet::update(iter::empty::<()>(), Some(copy))
}
#[inline]
pub fn multi_write<I>(writes: I) {
DescriptorSet::update(writes, iter::empty::<()>())
}
#[inline]
pub fn multi_copy<I>(copies: I) {
DescriptorSet::update(iter::empty::<()>(), copies)
}
pub fn update<I, J>(_writes: I, _copies: J) {
// TODO: not implemented yet
}
}
pub struct Descriptor {
pub ty: DescriptorType,
pub array_size: u32,
pub stages: ShaderStages,
}
pub enum DescriptorType {
Sampler,
CombinedImageSampler,
SampledImage,
StorageImage,
UniformTexelBuffer,
StorageTexelBuffer,
UniformBuffer,
StorageBuffer,
UniformBufferDynamic,
StorageBufferDynamic,
InputAttachment,
}
pub struct ShaderStages {
pub vertex: bool,
pub tessellation_control: bool,
pub tessellation_evaluation: bool,
pub geometry: bool,
pub fragment: bool,
pub compute: bool,
}

294 vulkano/src/device.rs Normal file

@ -0,0 +1,294 @@
//! Communication channel with a physical device.
//!
//! The `Device` is one of the most important objects of Vulkan. Creating a `Device` is required
//! before you can create buffers, textures, shaders, etc.
//!
use std::fmt;
use std::error;
use std::mem;
use std::ptr;
use std::sync::Arc;
use std::sync::Mutex;
use instance::Features;
use instance::Instance;
use instance::PhysicalDevice;
use instance::QueueFamily;
use Error;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// Represents a Vulkan context.
pub struct Device {
instance: Arc<Instance>,
physical_device: PhysicalDevice,
device: vk::Device,
vk: vk::DevicePointers,
features: Features,
}
impl Device {
/// Builds a new Vulkan device for the given physical device.
///
/// You must pass two things when creating a logical device:
///
/// - A list of optional Vulkan features that must be enabled on the device. Note that if a
/// feature is not enabled at device creation, you can't use it later even if it's supported
/// by the physical device.
///
/// - An iterator over a list of queues to create. Each element of the iterator must indicate
/// the family the queue belongs to and a priority between 0.0 and 1.0 to assign to it.
/// A queue with a higher priority may execute commands faster than a queue with a lower
/// priority. Note however that no guarantee can be made on the way the priority value is
/// handled by the implementation.
///
/// # Panic
///
/// - Panics if one of the requested features is not supported by the physical device.
/// - Panics if one of the queue families doesn't belong to the given device.
/// - Panics if you request more queues from a family than available.
/// - Panics if one of the priorities is outside of the `[0.0, 1.0]` range.
///
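/// # Example
///
/// A minimal sketch, assuming `physical_device` and `queue_family` were obtained from the
/// instance beforehand:
///
/// ```ignore
/// let (device, queues) = Device::new(&physical_device, &Features::none(),
///                                    Some((queue_family, 0.5))).unwrap();
/// ```
///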
pub fn new<'a, I>(phys: &'a PhysicalDevice, requested_features: &Features, queue_families: I)
-> Result<(Arc<Device>, Vec<Arc<Mutex<Queue>>>), DeviceCreationError>
where I: IntoIterator<Item = (QueueFamily<'a>, f32)>
{
let queue_families = queue_families.into_iter();
assert!(phys.supported_features().superset_of(&requested_features));
let vk_i = phys.instance().pointers();
// this variable will contain the queue family ID and queue ID of each requested queue
let mut output_queues: Vec<(u32, u32)> = Vec::with_capacity(queue_families.size_hint().0);
// device creation
let device = unsafe {
// each element of `queues` is a `(queue_family, priorities)`
// each queue family must only have one entry in `queues`
let mut queues: Vec<(u32, Vec<f32>)> = Vec::with_capacity(phys.queue_families().len());
for (queue_family, priority) in queue_families {
// checking the parameters
assert_eq!(queue_family.physical_device().internal_object(),
phys.internal_object());
assert!(priority >= 0.0 && priority <= 1.0);
// adding to `queues` and `output_queues`
if let Some(q) = queues.iter_mut().find(|q| q.0 == queue_family.id()) {
output_queues.push((queue_family.id(), q.1.len() as u32));
q.1.push(priority);
assert!(q.1.len() <= queue_family.queues_count());
continue;
}
queues.push((queue_family.id(), vec![priority]));
output_queues.push((queue_family.id(), 0));
}
// turning `queues` into an array of `vk::DeviceQueueCreateInfo` suitable for Vulkan
let queues = queues.iter().map(|&(queue_id, ref priorities)| {
vk::DeviceQueueCreateInfo {
sType: vk::STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
queueFamilyIndex: queue_id,
queueCount: priorities.len() as u32,
pQueuePriorities: priorities.as_ptr()
}
}).collect::<Vec<_>>();
let features: vk::PhysicalDeviceFeatures = requested_features.clone().into();
let infos = vk::DeviceCreateInfo {
sType: vk::STRUCTURE_TYPE_DEVICE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
queueCreateInfoCount: queues.len() as u32,
pQueueCreateInfos: queues.as_ptr(),
enabledLayerCount: 0, // TODO:
ppEnabledLayerNames: ptr::null(), // TODO:
enabledExtensionCount: 0, // TODO:
ppEnabledExtensionNames: ptr::null(), // TODO:
pEnabledFeatures: &features,
};
let mut output = mem::uninitialized();
try!(check_errors(vk_i.CreateDevice(phys.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
// loading the function pointers of the newly-created device
let vk = vk::DevicePointers::load(|name| {
unsafe { vk_i.GetDeviceProcAddr(device, name.as_ptr()) as *const _ }
});
let device = Arc::new(Device {
instance: phys.instance().clone(),
physical_device: phys.clone(),
device: device,
vk: vk,
features: requested_features.clone(),
});
// querying the queues
let output_queues = output_queues.into_iter().map(|(family, id)| {
unsafe {
let mut output = mem::uninitialized();
device.vk.GetDeviceQueue(device.device, family, id, &mut output);
Arc::new(Mutex::new(Queue {
device: device.clone(),
queue: output,
family: family,
id: id,
}))
}
}).collect();
Ok((device, output_queues))
}
/// Waits until all work on this device has finished. You should never need to call
/// this function, but it can be useful for debugging or benchmarking purposes.
///
/// This is the Vulkan equivalent of `glFinish`.
#[inline]
pub fn wait(&self) -> Result<(), OomError> {
unsafe {
try!(check_errors(self.vk.DeviceWaitIdle(self.device)));
Ok(())
}
}
/// Returns the physical device that was used to create this device.
#[inline]
pub fn physical_device(&self) -> &PhysicalDevice {
&self.physical_device
}
/// Returns the features that are enabled in the device.
#[inline]
pub fn enabled_features(&self) -> &Features {
&self.features
}
}
impl fmt::Debug for Device {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "<Vulkan device>")
}
}
impl VulkanObject for Device {
type Object = vk::Device;
#[inline]
fn internal_object(&self) -> vk::Device {
self.device
}
}
impl VulkanPointers for Device {
type Pointers = vk::DevicePointers;
#[inline]
fn pointers(&self) -> &vk::DevicePointers {
&self.vk
}
}
impl Drop for Device {
#[inline]
fn drop(&mut self) {
unsafe {
self.vk.DeviceWaitIdle(self.device);
self.vk.DestroyDevice(self.device, ptr::null());
}
}
}
/// Error that can be returned when creating a device.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum DeviceCreationError {
/// There is no memory available on the host (i.e. the CPU, RAM, etc.).
OutOfHostMemory,
/// There is no memory available on the device (i.e. video memory).
OutOfDeviceMemory,
// FIXME: other values
}
impl error::Error for DeviceCreationError {
#[inline]
fn description(&self) -> &str {
match *self {
DeviceCreationError::OutOfHostMemory => "no memory available on the host",
DeviceCreationError::OutOfDeviceMemory => "no memory available on the graphical device",
}
}
}
impl fmt::Display for DeviceCreationError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}
impl From<Error> for DeviceCreationError {
#[inline]
fn from(err: Error) -> DeviceCreationError {
match err {
Error::OutOfHostMemory => DeviceCreationError::OutOfHostMemory,
Error::OutOfDeviceMemory => DeviceCreationError::OutOfDeviceMemory,
_ => panic!("Unexpected error value: {}", err as i32)
}
}
}
/// Represents a queue where commands can be submitted.
pub struct Queue {
device: Arc<Device>,
queue: vk::Queue,
family: u32,
id: u32, // id within family
}
impl Queue {
/// Returns the device this queue belongs to.
#[inline]
pub fn device(&self) -> &Arc<Device> {
&self.device
}
/// Returns the family this queue belongs to.
#[inline]
pub fn family(&self) -> QueueFamily {
self.device.physical_device().queue_family_by_id(self.family).unwrap()
}
/// Waits until all work on this queue has finished.
///
/// Just like `Device::wait()`, you shouldn't have to call this function.
#[inline]
pub fn wait(&mut self) -> Result<(), OomError> {
unsafe {
let vk = self.device.pointers();
try!(check_errors(vk.QueueWaitIdle(self.queue)));
Ok(())
}
}
}
impl VulkanObject for Queue {
type Object = vk::Queue;
#[inline]
fn internal_object(&self) -> vk::Queue {
self.queue
}
}

153 vulkano/src/features.rs Normal file

@ -0,0 +1,153 @@
use vk;
macro_rules! features {
($($name:ident => $vk:ident,)+) => (
/// Represents all the features that are available on a physical device or enabled on
/// a logical device.
///
/// Note that the `robust_buffer_access` feature is guaranteed to be supported by all Vulkan
/// implementations.
///
/// # Example
///
/// ```no_run
/// # let physical_device: vulkano::instance::PhysicalDevice = unsafe { ::std::mem::uninitialized() };
/// const MINIMAL_FEATURES: vulkano::instance::Features = vulkano::instance::Features {
/// geometry_shader: true,
/// .. vulkano::instance::Features::none()
/// };
///
/// const OPTIMAL_FEATURES: vulkano::instance::Features = vulkano::instance::Features {
/// geometry_shader: true,
/// tessellation_shader: true,
/// .. vulkano::instance::Features::none()
/// };
///
/// if !physical_device.supported_features().superset_of(&MINIMAL_FEATURES) {
/// panic!("The physical device is not good enough for this application.");
/// }
///
/// assert!(OPTIMAL_FEATURES.superset_of(&MINIMAL_FEATURES));
/// let features_to_request = OPTIMAL_FEATURES.intersection(physical_device.supported_features());
/// ```
///
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
#[allow(missing_docs)]
pub struct Features {
$(
pub $name: bool,
)+
}
impl Features {
/// Builds a `Features` object with all values set to false.
pub fn none() -> Features {
Features {
$(
$name: false,
)+
}
}
/// Returns true if `self` is a superset of the parameter.
///
/// That is, for each feature of the parameter that is true, the corresponding value
/// in self is true as well.
pub fn superset_of(&self, other: &Features) -> bool {
$((self.$name == true || other.$name == false))&&+
}
/// Builds a `Features` that is the intersection of `self` and another `Features`
/// object.
///
/// Each field of the result is true only if the corresponding field is true in both `self`
/// and `other`.
pub fn intersection(&self, other: &Features) -> Features {
Features {
$(
$name: self.$name && other.$name,
)+
}
}
}
#[doc(hidden)]
impl From<vk::PhysicalDeviceFeatures> for Features {
fn from(features: vk::PhysicalDeviceFeatures) -> Features {
Features {
$(
$name: features.$vk != 0,
)+
}
}
}
#[doc(hidden)]
impl Into<vk::PhysicalDeviceFeatures> for Features {
fn into(self) -> vk::PhysicalDeviceFeatures {
vk::PhysicalDeviceFeatures {
$(
$vk: if self.$name { vk::TRUE } else { vk::FALSE },
)+
}
}
}
)
}
features!{
robust_buffer_access => robustBufferAccess,
full_draw_index_uint32 => fullDrawIndexUint32,
image_cube_array => imageCubeArray,
independent_blend => independentBlend,
geometry_shader => geometryShader,
tessellation_shader => tessellationShader,
sample_rate_shading => sampleRateShading,
dual_src_blend => dualSrcBlend,
logic_op => logicOp,
multi_draw_indirect => multiDrawIndirect,
draw_indirect_first_instance => drawIndirectFirstInstance,
depth_clamp => depthClamp,
depth_bias_clamp => depthBiasClamp,
fill_mode_non_solid => fillModeNonSolid,
depth_bounds => depthBounds,
wide_lines => wideLines,
large_points => largePoints,
alpha_to_one => alphaToOne,
multi_viewport => multiViewport,
sampler_anisotropy => samplerAnisotropy,
texture_compression_etc2 => textureCompressionETC2,
texture_compression_astc_ldr => textureCompressionASTC_LDR,
texture_compression_bc => textureCompressionBC,
occlusion_query_precise => occlusionQueryPrecise,
pipeline_statistics_query => pipelineStatisticsQuery,
vertex_pipeline_stores_and_atomics => vertexPipelineStoresAndAtomics,
fragment_stores_and_atomics => fragmentStoresAndAtomics,
shader_tessellation_and_geometry_point_size => shaderTessellationAndGeometryPointSize,
shader_image_gather_extended => shaderImageGatherExtended,
shader_storage_image_extended_formats => shaderStorageImageExtendedFormats,
shader_storage_image_multisample => shaderStorageImageMultisample,
shader_storage_image_read_without_format => shaderStorageImageReadWithoutFormat,
shader_storage_image_write_without_format => shaderStorageImageWriteWithoutFormat,
shader_uniform_buffer_array_dynamic_indexing => shaderUniformBufferArrayDynamicIndexing,
shader_sampled_image_array_dynamic_indexing => shaderSampledImageArrayDynamicIndexing,
shader_storage_buffer_array_dynamic_indexing => shaderStorageBufferArrayDynamicIndexing,
shader_storage_image_array_dynamic_indexing => shaderStorageImageArrayDynamicIndexing,
shader_clip_distance => shaderClipDistance,
shader_cull_distance => shaderCullDistance,
shader_f64 => shaderFloat64,
shader_int64 => shaderInt64,
shader_int16 => shaderInt16,
shader_resource_residency => shaderResourceResidency,
shader_resource_min_lod => shaderResourceMinLod,
sparse_binding => sparseBinding,
sparse_residency_buffer => sparseResidencyBuffer,
sparse_residency_image2d => sparseResidencyImage2D,
sparse_residency_image3d => sparseResidencyImage3D,
sparse_residency2_samples => sparseResidency2Samples,
sparse_residency4_samples => sparseResidency4Samples,
sparse_residency8_samples => sparseResidency8Samples,
sparse_residency16_samples => sparseResidency16Samples,
sparse_residency_aliased => sparseResidencyAliased,
variable_multisample_rate => variableMultisampleRate,
inherited_queries => inheritedQueries,
}

298 vulkano/src/formats.rs Normal file

@ -0,0 +1,298 @@
//! Declares all the formats of data and images supported by Vulkan.
//!
//! # Content of this module
//!
//! This module contains three things:
//!
//! - The `Format` enumeration, which contains all the available formats.
//! - The `FormatMarker` trait.
//! - One struct for each format.
//!
//! # Formats
//!
//! List of suffixes:
//!
//! - `Unorm` means that the values are unsigned integers that are converted into floating points.
//! The maximum possible representable value becomes `1.0`, and the minimum representable value
//! becomes `0.0`. For example the value `255` in a `R8Unorm` will be interpreted as `1.0`.
//!
//! - `Snorm` is the same as `Unorm`, but the integers are signed and the range is from `-1.0` to
//! `1.0` instead.
//!
//! - `Uscaled` means that the values are unsigned integers that are converted into floating points.
//! No change in the value is done. For example the value `255` in a `R8Uscaled` will be
//! interpreted as `255.0`.
//!
//! - `Sscaled` is the same as `Uscaled` except that the integers are signed.
//!
//! - `Uint` means that the values are unsigned integers. No conversion is performed.
//!
//! - `Sint` means that the values are signed integers. No conversion is performed.
//!
//! - `Ufloat` means that the values are unsigned floating points. No conversion is performed. This
//! format is very unusual.
//!
//! - `Sfloat` means that the values are regular floating points. No conversion is performed.
//!
//! - `Srgb` is the same as `Unorm`, except that the value is interpreted as being in the sRGB
//! color space. This means that its value will be converted from sRGB to linear color space
//! when it is read. The fourth channel (usually used for alpha), if present, is not concerned
//! by the conversion.
//!
use vk;
/// Some data whose type must be known by the library.
///
/// This trait is unsafe to implement because bad things will happen if `ty()` returns a wrong
/// value.
pub unsafe trait Data {
/// Returns the type of the data from an enum.
fn ty() -> Format;
// TODO "is_supported" functions that redirect to `Self::ty().is_supported()`
}
// TODO: that's just an example ; implement for all common data types
unsafe impl Data for u8 {
#[inline]
fn ty() -> Format { Format::R8Uint }
}
macro_rules! formats {
($($name:ident => $vk:ident $(as $t:ty)*,)+) => (
/// An enumeration of all the possible formats.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[repr(u32)]
#[allow(missing_docs)]
#[allow(non_camel_case_types)]
pub enum Format {
$($name = vk::$vk,)+
}
impl Format {
/*pub fn is_supported_for_vertex_attributes(&self) -> bool {
}
.. other functions ..
*/
/// Returns the `Format` corresponding to a Vulkan constant.
#[doc(hidden)]
pub fn from_num(val: u32) -> Option<Format> {
match val {
$(
vk::$vk => Some(Format::$name),
)+
_ => None,
}
}
}
$(
#[derive(Debug, Copy, Clone)]
#[allow(missing_docs)]
#[allow(non_camel_case_types)]
pub struct $name;
impl FormatMarker for $name {
#[inline]
fn format() -> Format {
Format::$name
}
}
)+
);
}
formats! {
Undefined => FORMAT_UNDEFINED,
R4G4UnormPack8 => FORMAT_R4G4_UNORM_PACK8,
R4G4B4A4UnormPack16 => FORMAT_R4G4B4A4_UNORM_PACK16,
B4G4R4A4UnormPack16 => FORMAT_B4G4R4A4_UNORM_PACK16,
R5G6B5UnormPack16 => FORMAT_R5G6B5_UNORM_PACK16,
B5G6R5UnormPack16 => FORMAT_B5G6R5_UNORM_PACK16,
R5G5B5A1UnormPack16 => FORMAT_R5G5B5A1_UNORM_PACK16,
B5G5R5A1UnormPack16 => FORMAT_B5G5R5A1_UNORM_PACK16,
A1R5G5B5UnormPack16 => FORMAT_A1R5G5B5_UNORM_PACK16,
R8Unorm => FORMAT_R8_UNORM,
R8Snorm => FORMAT_R8_SNORM,
R8Uscaled => FORMAT_R8_USCALED,
R8Sscaled => FORMAT_R8_SSCALED,
R8Uint => FORMAT_R8_UINT,
R8Sint => FORMAT_R8_SINT,
R8Srgb => FORMAT_R8_SRGB,
R8G8Unorm => FORMAT_R8G8_UNORM,
R8G8Snorm => FORMAT_R8G8_SNORM,
R8G8Uscaled => FORMAT_R8G8_USCALED,
R8G8Sscaled => FORMAT_R8G8_SSCALED,
R8G8Uint => FORMAT_R8G8_UINT,
R8G8Sint => FORMAT_R8G8_SINT,
R8G8Srgb => FORMAT_R8G8_SRGB,
R8G8B8Unorm => FORMAT_R8G8B8_UNORM,
R8G8B8Snorm => FORMAT_R8G8B8_SNORM,
R8G8B8Uscaled => FORMAT_R8G8B8_USCALED,
R8G8B8Sscaled => FORMAT_R8G8B8_SSCALED,
R8G8B8Uint => FORMAT_R8G8B8_UINT,
R8G8B8Sint => FORMAT_R8G8B8_SINT,
R8G8B8Srgb => FORMAT_R8G8B8_SRGB,
B8G8R8Unorm => FORMAT_B8G8R8_UNORM,
B8G8R8Snorm => FORMAT_B8G8R8_SNORM,
B8G8R8Uscaled => FORMAT_B8G8R8_USCALED,
B8G8R8Sscaled => FORMAT_B8G8R8_SSCALED,
B8G8R8Uint => FORMAT_B8G8R8_UINT,
B8G8R8Sint => FORMAT_B8G8R8_SINT,
B8G8R8Srgb => FORMAT_B8G8R8_SRGB,
R8G8B8A8Unorm => FORMAT_R8G8B8A8_UNORM,
R8G8B8A8Snorm => FORMAT_R8G8B8A8_SNORM,
R8G8B8A8Uscaled => FORMAT_R8G8B8A8_USCALED,
R8G8B8A8Sscaled => FORMAT_R8G8B8A8_SSCALED,
R8G8B8A8Uint => FORMAT_R8G8B8A8_UINT,
R8G8B8A8Sint => FORMAT_R8G8B8A8_SINT,
R8G8B8A8Srgb => FORMAT_R8G8B8A8_SRGB,
B8G8R8A8Unorm => FORMAT_B8G8R8A8_UNORM,
B8G8R8A8Snorm => FORMAT_B8G8R8A8_SNORM,
B8G8R8A8Uscaled => FORMAT_B8G8R8A8_USCALED,
B8G8R8A8Sscaled => FORMAT_B8G8R8A8_SSCALED,
B8G8R8A8Uint => FORMAT_B8G8R8A8_UINT,
B8G8R8A8Sint => FORMAT_B8G8R8A8_SINT,
B8G8R8A8Srgb => FORMAT_B8G8R8A8_SRGB,
A8B8G8R8UnormPack32 => FORMAT_A8B8G8R8_UNORM_PACK32,
A8B8G8R8SnormPack32 => FORMAT_A8B8G8R8_SNORM_PACK32,
A8B8G8R8UscaledPack32 => FORMAT_A8B8G8R8_USCALED_PACK32,
A8B8G8R8SscaledPack32 => FORMAT_A8B8G8R8_SSCALED_PACK32,
A8B8G8R8UintPack32 => FORMAT_A8B8G8R8_UINT_PACK32,
A8B8G8R8SintPack32 => FORMAT_A8B8G8R8_SINT_PACK32,
A8B8G8R8SrgbPack32 => FORMAT_A8B8G8R8_SRGB_PACK32,
A2R10G10B10UnormPack32 => FORMAT_A2R10G10B10_UNORM_PACK32,
A2R10G10B10SnormPack32 => FORMAT_A2R10G10B10_SNORM_PACK32,
A2R10G10B10UscaledPack32 => FORMAT_A2R10G10B10_USCALED_PACK32,
A2R10G10B10SscaledPack32 => FORMAT_A2R10G10B10_SSCALED_PACK32,
A2R10G10B10UintPack32 => FORMAT_A2R10G10B10_UINT_PACK32,
A2R10G10B10SintPack32 => FORMAT_A2R10G10B10_SINT_PACK32,
A2B10G10R10UnormPack32 => FORMAT_A2B10G10R10_UNORM_PACK32,
A2B10G10R10SnormPack32 => FORMAT_A2B10G10R10_SNORM_PACK32,
A2B10G10R10UscaledPack32 => FORMAT_A2B10G10R10_USCALED_PACK32,
A2B10G10R10SscaledPack32 => FORMAT_A2B10G10R10_SSCALED_PACK32,
A2B10G10R10UintPack32 => FORMAT_A2B10G10R10_UINT_PACK32,
A2B10G10R10SintPack32 => FORMAT_A2B10G10R10_SINT_PACK32,
R16Unorm => FORMAT_R16_UNORM,
R16Snorm => FORMAT_R16_SNORM,
R16Uscaled => FORMAT_R16_USCALED,
R16Sscaled => FORMAT_R16_SSCALED,
R16Uint => FORMAT_R16_UINT,
R16Sint => FORMAT_R16_SINT,
R16Sfloat => FORMAT_R16_SFLOAT,
R16G16Unorm => FORMAT_R16G16_UNORM,
R16G16Snorm => FORMAT_R16G16_SNORM,
R16G16Uscaled => FORMAT_R16G16_USCALED,
R16G16Sscaled => FORMAT_R16G16_SSCALED,
R16G16Uint => FORMAT_R16G16_UINT,
R16G16Sint => FORMAT_R16G16_SINT,
R16G16Sfloat => FORMAT_R16G16_SFLOAT,
R16G16B16Unorm => FORMAT_R16G16B16_UNORM,
R16G16B16Snorm => FORMAT_R16G16B16_SNORM,
R16G16B16Uscaled => FORMAT_R16G16B16_USCALED,
R16G16B16Sscaled => FORMAT_R16G16B16_SSCALED,
R16G16B16Uint => FORMAT_R16G16B16_UINT,
R16G16B16Sint => FORMAT_R16G16B16_SINT,
R16G16B16Sfloat => FORMAT_R16G16B16_SFLOAT,
R16G16B16A16Unorm => FORMAT_R16G16B16A16_UNORM,
R16G16B16A16Snorm => FORMAT_R16G16B16A16_SNORM,
R16G16B16A16Uscaled => FORMAT_R16G16B16A16_USCALED,
R16G16B16A16Sscaled => FORMAT_R16G16B16A16_SSCALED,
R16G16B16A16Uint => FORMAT_R16G16B16A16_UINT,
R16G16B16A16Sint => FORMAT_R16G16B16A16_SINT,
R16G16B16A16Sfloat => FORMAT_R16G16B16A16_SFLOAT,
R32Uint => FORMAT_R32_UINT,
R32Sint => FORMAT_R32_SINT,
R32Sfloat => FORMAT_R32_SFLOAT,
R32G32Uint => FORMAT_R32G32_UINT,
R32G32Sint => FORMAT_R32G32_SINT,
R32G32Sfloat => FORMAT_R32G32_SFLOAT,
R32G32B32Uint => FORMAT_R32G32B32_UINT,
R32G32B32Sint => FORMAT_R32G32B32_SINT,
R32G32B32Sfloat => FORMAT_R32G32B32_SFLOAT,
R32G32B32A32Uint => FORMAT_R32G32B32A32_UINT,
R32G32B32A32Sint => FORMAT_R32G32B32A32_SINT,
R32G32B32A32Sfloat => FORMAT_R32G32B32A32_SFLOAT,
R64Uint => FORMAT_R64_UINT,
R64Sint => FORMAT_R64_SINT,
R64Sfloat => FORMAT_R64_SFLOAT,
R64G64Uint => FORMAT_R64G64_UINT,
R64G64Sint => FORMAT_R64G64_SINT,
R64G64Sfloat => FORMAT_R64G64_SFLOAT,
R64G64B64Uint => FORMAT_R64G64B64_UINT,
R64G64B64Sint => FORMAT_R64G64B64_SINT,
R64G64B64Sfloat => FORMAT_R64G64B64_SFLOAT,
R64G64B64A64Uint => FORMAT_R64G64B64A64_UINT,
R64G64B64A64Sint => FORMAT_R64G64B64A64_SINT,
R64G64B64A64Sfloat => FORMAT_R64G64B64A64_SFLOAT,
B10G11R11UfloatPack32 => FORMAT_B10G11R11_UFLOAT_PACK32,
E5B9G9R9UfloatPack32 => FORMAT_E5B9G9R9_UFLOAT_PACK32,
D16Unorm => FORMAT_D16_UNORM,
X8_D24UnormPack32 => FORMAT_X8_D24_UNORM_PACK32,
D32Sfloat => FORMAT_D32_SFLOAT,
S8Uint => FORMAT_S8_UINT,
D16Unorm_S8Uint => FORMAT_D16_UNORM_S8_UINT,
D24Unorm_S8Uint => FORMAT_D24_UNORM_S8_UINT,
D32Sfloat_S8Uint => FORMAT_D32_SFLOAT_S8_UINT,
BC1_RGBUnormBlock => FORMAT_BC1_RGB_UNORM_BLOCK,
BC1_RGBSrgbBlock => FORMAT_BC1_RGB_SRGB_BLOCK,
BC1_RGBAUnormBlock => FORMAT_BC1_RGBA_UNORM_BLOCK,
BC1_RGBASrgbBlock => FORMAT_BC1_RGBA_SRGB_BLOCK,
BC2UnormBlock => FORMAT_BC2_UNORM_BLOCK,
BC2SrgbBlock => FORMAT_BC2_SRGB_BLOCK,
BC3UnormBlock => FORMAT_BC3_UNORM_BLOCK,
BC3SrgbBlock => FORMAT_BC3_SRGB_BLOCK,
BC4UnormBlock => FORMAT_BC4_UNORM_BLOCK,
BC4SnormBlock => FORMAT_BC4_SNORM_BLOCK,
BC5UnormBlock => FORMAT_BC5_UNORM_BLOCK,
BC5SnormBlock => FORMAT_BC5_SNORM_BLOCK,
BC6HUfloatBlock => FORMAT_BC6H_UFLOAT_BLOCK,
BC6HSfloatBlock => FORMAT_BC6H_SFLOAT_BLOCK,
BC7UnormBlock => FORMAT_BC7_UNORM_BLOCK,
BC7SrgbBlock => FORMAT_BC7_SRGB_BLOCK,
ETC2_R8G8B8UnormBlock => FORMAT_ETC2_R8G8B8_UNORM_BLOCK,
ETC2_R8G8B8SrgbBlock => FORMAT_ETC2_R8G8B8_SRGB_BLOCK,
ETC2_R8G8B8A1UnormBlock => FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK,
ETC2_R8G8B8A1SrgbBlock => FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK,
ETC2_R8G8B8A8UnormBlock => FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK,
ETC2_R8G8B8A8SrgbBlock => FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK,
EAC_R11UnormBlock => FORMAT_EAC_R11_UNORM_BLOCK,
EAC_R11SnormBlock => FORMAT_EAC_R11_SNORM_BLOCK,
EAC_R11G11UnormBlock => FORMAT_EAC_R11G11_UNORM_BLOCK,
EAC_R11G11SnormBlock => FORMAT_EAC_R11G11_SNORM_BLOCK,
ASTC_4x4UnormBlock => FORMAT_ASTC_4x4_UNORM_BLOCK,
ASTC_4x4SrgbBlock => FORMAT_ASTC_4x4_SRGB_BLOCK,
ASTC_5x4UnormBlock => FORMAT_ASTC_5x4_UNORM_BLOCK,
ASTC_5x4SrgbBlock => FORMAT_ASTC_5x4_SRGB_BLOCK,
ASTC_5x5UnormBlock => FORMAT_ASTC_5x5_UNORM_BLOCK,
ASTC_5x5SrgbBlock => FORMAT_ASTC_5x5_SRGB_BLOCK,
ASTC_6x5UnormBlock => FORMAT_ASTC_6x5_UNORM_BLOCK,
ASTC_6x5SrgbBlock => FORMAT_ASTC_6x5_SRGB_BLOCK,
ASTC_6x6UnormBlock => FORMAT_ASTC_6x6_UNORM_BLOCK,
ASTC_6x6SrgbBlock => FORMAT_ASTC_6x6_SRGB_BLOCK,
ASTC_8x5UnormBlock => FORMAT_ASTC_8x5_UNORM_BLOCK,
ASTC_8x5SrgbBlock => FORMAT_ASTC_8x5_SRGB_BLOCK,
ASTC_8x6UnormBlock => FORMAT_ASTC_8x6_UNORM_BLOCK,
ASTC_8x6SrgbBlock => FORMAT_ASTC_8x6_SRGB_BLOCK,
ASTC_8x8UnormBlock => FORMAT_ASTC_8x8_UNORM_BLOCK,
ASTC_8x8SrgbBlock => FORMAT_ASTC_8x8_SRGB_BLOCK,
ASTC_10x5UnormBlock => FORMAT_ASTC_10x5_UNORM_BLOCK,
ASTC_10x5SrgbBlock => FORMAT_ASTC_10x5_SRGB_BLOCK,
ASTC_10x6UnormBlock => FORMAT_ASTC_10x6_UNORM_BLOCK,
ASTC_10x6SrgbBlock => FORMAT_ASTC_10x6_SRGB_BLOCK,
ASTC_10x8UnormBlock => FORMAT_ASTC_10x8_UNORM_BLOCK,
ASTC_10x8SrgbBlock => FORMAT_ASTC_10x8_SRGB_BLOCK,
ASTC_10x10UnormBlock => FORMAT_ASTC_10x10_UNORM_BLOCK,
ASTC_10x10SrgbBlock => FORMAT_ASTC_10x10_SRGB_BLOCK,
ASTC_12x10UnormBlock => FORMAT_ASTC_12x10_UNORM_BLOCK,
ASTC_12x10SrgbBlock => FORMAT_ASTC_12x10_SRGB_BLOCK,
ASTC_12x12UnormBlock => FORMAT_ASTC_12x12_UNORM_BLOCK,
ASTC_12x12SrgbBlock => FORMAT_ASTC_12x12_SRGB_BLOCK,
}
pub trait FormatMarker {
fn format() -> Format;
}

439 vulkano/src/framebuffer.rs Normal file

@ -0,0 +1,439 @@
//! Targets on which your draw commands are executed.
//!
//! There are two concepts in Vulkan:
//!
//! - A `RenderPass` is a collection of rendering passes (called subpasses). Each subpass contains
//! the list of attachments that are written to when drawing. The `RenderPass` only defines the
//! formats and sample counts of all the attachments of the multiple subpasses.
//! - A `Framebuffer` defines the actual images that are attached. The format of the images must
//! match what the `RenderPass` expects.
//!
//! Creating a `RenderPass` is necessary before you create a graphics pipeline.
//! A `Framebuffer`, however, is only needed when you build the command buffer.
//!
//! # Creating a RenderPass
//!
//! Creating a `RenderPass` in the vulkano library is best done with the `renderpass!` macro.
//!
//! This macro creates an inaccessible struct which implements the `RenderPassLayout` trait. This
//! trait tells vulkano what the characteristics of the renderpass are, and is also used to
//! determine the types of the various parameters later on.
//!
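//! For example, a minimal sketch (the attachment name and attributes below are placeholders,
//! and the exact macro syntax is still in flux):
//!
//! ```ignore
//! let renderpass = renderpass!{
//!     device: &device,
//!     attachments: { color [Clear] }
//! }.unwrap();
//!
//! let framebuffer = Framebuffer::new(&renderpass, (1024, 768, 1), &image_view).unwrap();
//! ```
//!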
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::sync::Arc;
use device::Device;
use formats::Format;
use formats::FormatMarker;
use image::Layout as ImageLayout;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// Types that describe the characteristics of a renderpass.
// TODO: should that take `&self`?
pub unsafe trait RenderPassLayout {
/// The list of clear values to use when beginning to draw on this renderpass.
type ClearValues;
/// Decodes a `ClearValues` into a list of clear values where each element corresponds
/// to an attachment. The size of the returned array must be the same as the number of
/// attachments.
///
/// The format of each clear value **must** match the format of its attachment. Attachments
/// that are not loaded with `LoadOp::Clear` can use the `ClearValue::None` entry.
fn convert_clear_values(Self::ClearValues) -> Vec<ClearValue>;
/// Returns the descriptions of the attachments.
fn attachments() -> Vec<AttachmentDescription>; // TODO: static array?
}
pub unsafe trait RenderPassLayoutExt<'a, M: 'a>: RenderPassLayout {
type AttachmentsList;
fn ids(&Self::AttachmentsList) -> Vec<u64>;
}
/// Describes a uniform value that will be used to fill an attachment at the start of the
/// renderpass.
// TODO: should have the same layout as `vk::ClearValue` for performance
#[derive(Debug, Copy, Clone, PartialEq)]
pub enum ClearValue {
/// Entry for attachments that aren't cleared.
None,
/// Value for floating-point attachments, including `Unorm`, `Snorm`, `Sfloat`.
Float([f32; 4]),
/// Value for integer attachments, including `Int`.
Int([i32; 4]),
/// Value for unsigned integer attachments, including `Uint`.
Uint([u32; 4]),
/// Value for depth attachments.
Depth(f32),
/// Value for stencil attachments.
Stencil(u32),
/// Value for depth and stencil attachments.
DepthStencil((f32, u32)),
}
pub struct AttachmentDescription {
pub format: Format,
pub samples: u32,
pub load: LoadOp,
pub store: StoreOp,
pub initial_layout: ImageLayout,
pub final_layout: ImageLayout,
}
/// Builds a `RenderPass` object.
#[macro_export]
macro_rules! renderpass {
(
device: $device:expr,
attachments: { $($atch_name:ident [$($attrs:ident),*]),+ }
) => (
{
use std::sync::Arc;
struct Layout;
unsafe impl $crate::framebuffer::RenderPassLayout for Layout {
type ClearValues = [f32; 4]; // FIXME:
#[inline]
fn convert_clear_values(val: Self::ClearValues) -> Vec<$crate::framebuffer::ClearValue> {
vec![$crate::framebuffer::ClearValue::Float(val)]
}
#[inline]
fn attachments() -> Vec<$crate::framebuffer::AttachmentDescription> {
vec![
$(
$crate::framebuffer::AttachmentDescription {
format: $crate::formats::Format::B8G8R8A8Srgb, // FIXME:
samples: 1, // FIXME:
load: renderpass!(__load_op__ $($attrs),*),
store: $crate::framebuffer::StoreOp::Store, // FIXME:
initial_layout: $crate::image::Layout::ColorAttachmentOptimal, // FIXME:
final_layout: $crate::image::Layout::ColorAttachmentOptimal, // FIXME:
},
)*
]
}
}
unsafe impl<'a, M: 'a> $crate::framebuffer::RenderPassLayoutExt<'a, M> for Layout {
type AttachmentsList = (&'a Arc<$crate::image::ImageView<$crate::image::Type2d, $crate::formats::B8G8R8A8Srgb, M>>); // FIXME:
fn ids(l: &Self::AttachmentsList) -> Vec<u64> {
vec![l.id()]
}
}
$crate::framebuffer::RenderPass::<Layout>::new($device)
}
);
// Gets the load operation to use for an attachment from the list of attributes.
(__load_op__ LoadDontCare $($rest:tt)*) => (
$crate::framebuffer::LoadOp::DontCare
);
(__load_op__ Clear $($rest:tt)*) => (
$crate::framebuffer::LoadOp::Clear
);
(__load_op__ $first:ident $(, $attrs:ident)*) => (
renderpass!(__load_op__ $($attrs),*)
);
(__load_op__) => (
$crate::framebuffer::LoadOp::Load
);
}
/// Describes what the implementation should do with an attachment after all the subpasses have
/// completed.
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
#[repr(u32)]
pub enum StoreOp {
/// The attachment will be stored. This is what you usually want.
Store = vk::ATTACHMENT_STORE_OP_STORE,
/// What happens is implementation-specific.
///
/// This is purely an optimization compared to `Store`. The implementation doesn't need to copy
/// from the internal cache to the memory, which saves bandwidth.
///
/// This doesn't mean that the data won't be copied, as an implementation is also free to not
/// use a cache and write the output directly in memory.
DontCare = vk::ATTACHMENT_STORE_OP_DONT_CARE,
}
/// Describes what the implementation should do with an attachment at the start of the subpass.
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
#[repr(u32)]
pub enum LoadOp {
/// The attachment will be loaded. This is what you want if you want to draw over
/// something existing.
Load = vk::ATTACHMENT_LOAD_OP_LOAD,
/// The attachment will be cleared by the implementation with a uniform value that you must
/// provide when you start drawing.
///
/// This is what you usually use at the start of a frame, in order to reset the content of
/// the color, depth and/or stencil buffers.
///
/// See the `draw_inline` and `draw_secondary` methods of `PrimaryCommandBufferBuilder`.
Clear = vk::ATTACHMENT_LOAD_OP_CLEAR,
/// The attachment will have undefined content.
///
/// This is what you should use for attachments that you intend to overwrite entirely.
DontCare = vk::ATTACHMENT_LOAD_OP_DONT_CARE,
}
/// Defines the layout of multiple subpasses.
pub struct RenderPass<L> {
device: Arc<Device>,
renderpass: vk::RenderPass,
num_passes: u32,
marker: PhantomData<L>,
}
impl<L> RenderPass<L> where L: RenderPassLayout {
pub fn new(device: &Arc<Device>) -> Result<Arc<RenderPass<L>>, OomError> {
let vk = device.pointers();
let attachments = L::attachments().iter().map(|attachment| {
vk::AttachmentDescription {
flags: 0, // FIXME: may alias flag
format: attachment.format as u32,
samples: attachment.samples,
loadOp: attachment.load as u32,
storeOp: attachment.store as u32,
stencilLoadOp: 0, // FIXME:
stencilStoreOp: 0, // FIXME:
initialLayout: attachment.initial_layout as u32,
finalLayout: attachment.final_layout as u32,
}
}).collect::<Vec<_>>();
// FIXME: totally hacky
let color_attachment_references = L::attachments().iter().map(|attachment| {
vk::AttachmentReference {
attachment: 0,
layout: vk::IMAGE_LAYOUT_GENERAL,
}
}).collect::<Vec<_>>();
let passes = (0 .. 1).map(|_| {
vk::SubpassDescription {
flags: 0, // reserved
pipelineBindPoint: vk::PIPELINE_BIND_POINT_GRAPHICS,
inputAttachmentCount: 0, // FIXME:
pInputAttachments: ptr::null(), // FIXME:
colorAttachmentCount: color_attachment_references.len() as u32, // FIXME:
pColorAttachments: color_attachment_references.as_ptr(), // FIXME:
pResolveAttachments: ptr::null(), // FIXME:
pDepthStencilAttachment: ptr::null(), // FIXME:
preserveAttachmentCount: 0, // FIXME:
pPreserveAttachments: ptr::null(), // FIXME:
}
}).collect::<Vec<_>>();
let renderpass = unsafe {
let infos = vk::RenderPassCreateInfo {
sType: vk::STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
attachmentCount: attachments.len() as u32,
pAttachments: attachments.as_ptr(),
subpassCount: passes.len() as u32,
pSubpasses: passes.as_ptr(),
dependencyCount: 0, // FIXME:
pDependencies: ptr::null(), // FIXME:
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateRenderPass(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(RenderPass {
device: device.clone(),
renderpass: renderpass,
num_passes: passes.len() as u32,
marker: PhantomData,
}))
}
/// Returns the device that was used to create this renderpass.
#[inline]
pub fn device(&self) -> &Arc<Device> {
&self.device
}
/// Returns the number of subpasses.
#[inline]
pub fn num_subpasses(&self) -> u32 {
self.num_passes
}
/// Returns a handle that represents a subpass of this renderpass.
#[inline]
pub fn subpass(&self, id: u32) -> Option<Subpass<L>> {
if id < self.num_passes {
Some(Subpass {
renderpass: self,
subpass_id: id,
})
} else {
None
}
}
/// Returns true if this renderpass is compatible with another one.
///
/// This means that framebuffers created with this renderpass can also be used alongside
/// the other renderpass.
pub fn is_compatible_with<R2>(&self, other: &RenderPass<R2>) -> bool {
true // FIXME:
}
}
impl<L> VulkanObject for RenderPass<L> {
type Object = vk::RenderPass;
#[inline]
fn internal_object(&self) -> vk::RenderPass {
self.renderpass
}
}
impl<L> Drop for RenderPass<L> {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroyRenderPass(self.device.internal_object(), self.renderpass, ptr::null());
}
}
}
/// Represents a subpass within a `RenderPass`.
///
/// This struct doesn't correspond to anything in Vulkan. It is simply an equivalent to a
/// combination of a render pass and subpass ID.
#[derive(Copy, Clone)]
pub struct Subpass<'a, L: 'a> {
renderpass: &'a RenderPass<L>,
subpass_id: u32,
}
impl<'a, L: 'a> Subpass<'a, L> {
/// Returns the renderpass of this subpass.
#[inline]
pub fn renderpass(&self) -> &'a RenderPass<L> {
self.renderpass
}
/// Returns the index of this subpass within the renderpass.
#[inline]
pub fn index(&self) -> u32 {
self.subpass_id
}
}
pub struct Framebuffer<L> {
device: Arc<Device>,
renderpass: Arc<RenderPass<L>>,
framebuffer: vk::Framebuffer,
dimensions: (u32, u32, u32),
}
impl<L> Framebuffer<L> {
pub fn new<'a, M>(renderpass: &Arc<RenderPass<L>>, dimensions: (u32, u32, u32),
attachments: L::AttachmentsList) -> Result<Arc<Framebuffer<L>>, OomError>
where L: RenderPassLayoutExt<'a, M>
{
let vk = renderpass.device.pointers();
let device = renderpass.device.clone();
let framebuffer = unsafe {
let ids = L::ids(&attachments);
let infos = vk::FramebufferCreateInfo {
sType: vk::STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
renderPass: renderpass.internal_object(),
attachmentCount: ids.len() as u32,
pAttachments: ids.as_ptr(),
width: dimensions.0,
height: dimensions.1,
layers: dimensions.2,
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateFramebuffer(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(Framebuffer {
device: device,
renderpass: renderpass.clone(),
framebuffer: framebuffer,
dimensions: dimensions,
}))
}
/// Returns true if this framebuffer can be used with the specified renderpass.
#[inline]
pub fn is_compatible_with<R>(&self, renderpass: &Arc<RenderPass<R>>) -> bool {
true // FIXME:
//(&*self.renderpass as *const RenderPass<L> as usize == &**renderpass as *const _ as usize)
//|| self.renderpass.is_compatible_with(renderpass)
}
/// Returns the width of the framebuffer in pixels.
#[inline]
pub fn width(&self) -> u32 {
self.dimensions.0
}
/// Returns the height of the framebuffer in pixels.
#[inline]
pub fn height(&self) -> u32 {
self.dimensions.1
}
/// Returns the number of layers (or depth) of the framebuffer.
#[inline]
pub fn layers(&self) -> u32 {
self.dimensions.2
}
}
impl<L> VulkanObject for Framebuffer<L> {
type Object = vk::Framebuffer;
#[inline]
fn internal_object(&self) -> vk::Framebuffer {
self.framebuffer
}
}
impl<L> Drop for Framebuffer<L> {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroyFramebuffer(self.device.internal_object(), self.framebuffer, ptr::null());
}
}
}

839 vulkano/src/image.rs Normal file

@ -0,0 +1,839 @@
//! Image storage (1D, 2D, 3D, arrays, etc.).
//!
//! # Strong typing
//!
//! Images in vulkano are strong-typed. Their signature is `Image<Ty, F, M>`.
//!
//! The `Ty` parameter describes the type of image: 1D, 2D, 3D, 1D array, 2D array. All these come
//! in two variants: with or without multisampling. The actual type of `Ty` must be one of the
//! marker structs of this module that start with the `Ty` prefix.
//!
//! The `F` parameter describes the format of each pixel of the image. It must be one of the marker
//! structs of the `formats` module.
//!
//! The `M` parameter describes where the image's memory was allocated from. It is similar to
//! buffers.
//!
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::sync::Arc;
use device::Device;
use formats::FormatMarker;
use memory::ChunkProperties;
use memory::MemorySource;
use memory::MemorySourceChunk;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
pub unsafe trait ImageGpuAccess {
/// All images in vulkano must have a *default layout*. Whenever this image is used in a
/// command buffer, it is switched from this default layout to something else (if necessary),
/// then back again to the default.
fn default_layout(&self) -> Layout;
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
#[repr(u32)]
pub enum Layout {
Undefined = vk::IMAGE_LAYOUT_UNDEFINED,
General = vk::IMAGE_LAYOUT_GENERAL,
ColorAttachmentOptimal = vk::IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
DepthStencilAttachmentOptimal = vk::IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
DepthStencilReadOnlyOptimal = vk::IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL,
ShaderReadOnlyOptimal = vk::IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
TransferSrcOptimal = vk::IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
TransferDstOptimal = vk::IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
Preinitialized = vk::IMAGE_LAYOUT_PREINITIALIZED,
PresentSrc = vk::IMAGE_LAYOUT_PRESENT_SRC_KHR,
}
pub unsafe trait TypeMarker {}
pub unsafe trait ImageTypeMarker: TypeMarker {
type Dimensions: Copy + Clone;
type NumSamples: Copy + Clone;
fn extent(Self::Dimensions) -> [u32; 3];
fn array_layers(Self::Dimensions) -> u32;
/// Must return `1` for non-multisampled types.
fn num_samples(Self::NumSamples) -> u32;
fn ty() -> ImageType;
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
#[repr(u32)]
pub enum ImageType {
Type1d = vk::IMAGE_TYPE_1D,
Type2d = vk::IMAGE_TYPE_2D,
Type3d = vk::IMAGE_TYPE_3D,
}
pub unsafe trait ImageViewTypeMarker: TypeMarker {}
pub unsafe trait CanCreateView<Dest>: ImageTypeMarker where Dest: ImageViewTypeMarker {}
unsafe impl<T> CanCreateView<T> for T where T: ImageTypeMarker + ImageViewTypeMarker {}
pub unsafe trait MultisampleType: TypeMarker {}
/// A storage for pixels or arbitrary data.
pub struct Image<Ty, F, M> where Ty: ImageTypeMarker {
device: Arc<Device>,
image: vk::Image,
memory: M,
usage: vk::ImageUsageFlagBits,
dimensions: Ty::Dimensions,
samples: Ty::NumSamples,
mipmaps: u32,
needs_destruction: bool,
default_layout: Layout,
marker: PhantomData<F>,
}
/// Specifies how many mipmaps must be allocated.
///
/// Note that at least one mipmap must be allocated, to store the main level of the image.
#[derive(Debug, Copy, Clone)]
pub enum MipmapsCount {
/// Allocate the given number of mipmaps.
Specific(u32),
/// Allocates the number of mipmaps required for a full mipmap chain, where each level is
/// half the dimensions of the previous one.
Log2,
/// Allocate one mipmap (ie. just the main level).
One,
}
impl From<u32> for MipmapsCount {
#[inline]
fn from(num: u32) -> MipmapsCount {
MipmapsCount::Specific(num)
}
}
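// Thanks to this impl, a plain `u32` can be passed for the `mipmaps` parameter of
// `Image::new` below, which is generic over `Into<MipmapsCount>` (e.g. `1` instead of
// `MipmapsCount::One`).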
impl<Ty, F, M> Image<Ty, F, M>
where M: MemorySourceChunk, Ty: ImageTypeMarker, F: FormatMarker
{
/// Creates a new image and allocates memory for it.
///
/// # Panic
///
/// - Panics if one of the dimensions is 0.
/// - Panics if the number of mipmaps is 0.
/// - Panics if the number of samples is 0.
///
pub fn new<S, Mi>(device: &Arc<Device>, usage: &Usage, memory: S, dimensions: Ty::Dimensions,
num_samples: Ty::NumSamples, mipmaps: Mi)
-> Result<Arc<Image<Ty, F, M>>, OomError>
where S: MemorySource<Chunk = M>, Mi: Into<MipmapsCount>
{
let vk = device.pointers();
let usage = usage.to_usage_bits();
assert!(!memory.is_sparse()); // not implemented
let samples = Ty::num_samples(num_samples);
assert!(samples >= 1);
// compute the number of mipmaps
let mipmaps = match mipmaps.into() {
MipmapsCount::Specific(num) => {
assert!(num >= 1);
num
},
MipmapsCount::Log2 => {
let dims = Ty::extent(dimensions);
let dim: u32 = match Ty::ty() {
ImageType::Type1d => dims[0],
// a full mip chain shrinks every dimension down to 1, so its length is
// determined by the largest dimension, not the smallest
ImageType::Type2d => [dims[0], dims[1]].iter().cloned().max().unwrap(),
ImageType::Type3d => [dims[0], dims[1], dims[2]].iter().cloned().max().unwrap(),
};
assert!(dim >= 1);
32 - dim.leading_zeros()
},
MipmapsCount::One => 1,
};
let image = unsafe {
let infos = vk::ImageCreateInfo {
sType: vk::STRUCTURE_TYPE_IMAGE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // TODO:
imageType: Ty::ty() as u32,
format: F::format() as u32,
extent: {
let dims = Ty::extent(dimensions);
assert!(dims[0] >= 1); assert!(dims[1] >= 1); assert!(dims[2] >= 1);
vk::Extent3D { width: dims[0], height: dims[1], depth: dims[2] }
},
mipLevels: mipmaps,
arrayLayers: Ty::array_layers(dimensions),
samples: samples,
tiling: vk::IMAGE_TILING_OPTIMAL, // TODO:
usage: usage,
sharingMode: vk::SHARING_MODE_EXCLUSIVE, // TODO:
queueFamilyIndexCount: 0, // TODO:
pQueueFamilyIndices: ptr::null(), // TODO:
initialLayout: vk::IMAGE_LAYOUT_UNDEFINED, // TODO:
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateImage(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
let mem_reqs: vk::MemoryRequirements = unsafe {
let mut output = mem::uninitialized();
vk.GetImageMemoryRequirements(device.internal_object(), image, &mut output);
output
};
let memory = memory.allocate(device, mem_reqs.size as usize, mem_reqs.alignment as usize,
mem_reqs.memoryTypeBits)
.expect("failed to allocate"); // TODO: use try!() instead
unsafe {
match memory.properties() {
ChunkProperties::Regular { memory, offset, .. } => {
try!(check_errors(vk.BindImageMemory(device.internal_object(), image,
memory.internal_object(),
offset as vk::DeviceSize)));
},
_ => unimplemented!()
}
}
Ok(Arc::new(Image {
device: device.clone(),
image: image,
memory: memory,
usage: usage,
dimensions: dimensions.clone(),
samples: num_samples,
mipmaps: mipmaps,
needs_destruction: true,
default_layout: Layout::General, // FIXME:
marker: PhantomData,
}))
}
/// Creates an image from a raw handle. The image won't be destroyed when this object is dropped.
///
/// This function is used, for example, during the swapchain's initialization.
pub unsafe fn from_raw_unowned(device: &Arc<Device>, handle: u64, memory: M, usage: u32,
dimensions: Ty::Dimensions, samples: Ty::NumSamples, mipmaps: u32)
-> Arc<Image<Ty, F, M>>
{
Arc::new(Image {
device: device.clone(),
image: handle,
memory: memory,
usage: usage,
dimensions: dimensions.clone(),
samples: samples,
mipmaps: mipmaps,
needs_destruction: false,
default_layout: Layout::General, // FIXME:
marker: PhantomData,
})
}
/// Returns the dimensions of this image.
#[inline]
pub fn dimensions(&self) -> Ty::Dimensions {
self.dimensions
}
/// Returns the number of array layers of this image.
#[inline]
pub fn array_layers(&self) -> u32 {
Ty::array_layers(self.dimensions)
}
/// Returns the number of mipmap levels of this image.
#[inline]
pub fn mipmap_levels(&self) -> u32 {
self.mipmaps
}
/// Returns the number of samples of each pixel of this image.
///
/// Returns `1` if the image is not multisampled.
#[inline]
pub fn num_samples(&self) -> u32 {
Ty::num_samples(self.samples)
}
/// True if the image can be used as a source for transfers.
#[inline]
pub fn usage_transfer_src(&self) -> bool {
(self.usage & vk::IMAGE_USAGE_TRANSFER_SRC_BIT) != 0
}
/// True if the image can be used as a destination for transfers.
#[inline]
pub fn usage_transfer_dest(&self) -> bool {
(self.usage & vk::IMAGE_USAGE_TRANSFER_DST_BIT) != 0
}
/// True if the image can be sampled from a shader.
#[inline]
pub fn usage_sampled(&self) -> bool {
(self.usage & vk::IMAGE_USAGE_SAMPLED_BIT) != 0
}
/// True if the image can be used for image loads/stores in shaders.
#[inline]
pub fn usage_storage(&self) -> bool {
(self.usage & vk::IMAGE_USAGE_STORAGE_BIT) != 0
}
/// True if the image can be used as a color attachment in a framebuffer.
#[inline]
pub fn usage_color_attachment(&self) -> bool {
(self.usage & vk::IMAGE_USAGE_COLOR_ATTACHMENT_BIT) != 0
}
/// True if the image can be used as a depth and/or stencil attachment in a framebuffer.
#[inline]
pub fn usage_depth_stencil_attachment(&self) -> bool {
(self.usage & vk::IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT) != 0
}
/// True if the image can be used as a transient attachment in a framebuffer.
#[inline]
pub fn usage_transient_attachment(&self) -> bool {
(self.usage & vk::IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT) != 0
}
/// True if the image can be used as an input attachment in a framebuffer.
#[inline]
pub fn usage_input_attachment(&self) -> bool {
(self.usage & vk::IMAGE_USAGE_INPUT_ATTACHMENT_BIT) != 0
}
}
unsafe impl<Ty, F, M> ImageGpuAccess for Image<Ty, F, M>
where Ty: ImageTypeMarker
{
#[inline]
fn default_layout(&self) -> Layout {
self.default_layout
}
}
impl<Ty, F, M> Drop for Image<Ty, F, M>
where Ty: ImageTypeMarker
{
#[inline]
fn drop(&mut self) {
if !self.needs_destruction {
return;
}
unsafe {
let vk = self.device.pointers();
vk.DestroyImage(self.device.internal_object(), self.image, ptr::null());
}
}
}
/// Describes how an image is going to be used. This is **not** an optimization.
///
/// If you try to use an image in a way that you didn't declare, a panic will happen.
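///
/// # Example
///
/// A sketch: a texture that is uploaded once and then only sampled. Every field must be
/// written out, since `Usage` has no `Default` impl at this point.
///
/// ```
/// use vulkano::image::Usage;
///
/// let usage = Usage {
///     transfer_source: false,
///     transfer_dest: true,
///     sampled: true,
///     storage: false,
///     color_attachment: false,
///     depth_stencil_attachment: false,
///     transient_attachment: false,
///     input_attachment: false,
/// };
/// assert!(Usage::from_bits(usage.to_usage_bits()).sampled);
/// ```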
#[derive(Debug, Copy, Clone)]
pub struct Usage {
pub transfer_source: bool,
pub transfer_dest: bool,
pub sampled: bool,
pub storage: bool,
pub color_attachment: bool,
pub depth_stencil_attachment: bool,
pub transient_attachment: bool,
pub input_attachment: bool,
}
impl Usage {
/// Builds a `Usage` with all values set to true. Can be used for quick prototyping.
#[inline]
pub fn all() -> Usage {
Usage {
transfer_source: true,
transfer_dest: true,
sampled: true,
storage: true,
color_attachment: true,
depth_stencil_attachment: true,
transient_attachment: true,
input_attachment: true,
}
}
#[doc(hidden)]
#[inline]
pub fn to_usage_bits(&self) -> vk::ImageUsageFlagBits {
let mut result = 0;
if self.transfer_source { result |= vk::IMAGE_USAGE_TRANSFER_SRC_BIT; }
if self.transfer_dest { result |= vk::IMAGE_USAGE_TRANSFER_DST_BIT; }
if self.sampled { result |= vk::IMAGE_USAGE_SAMPLED_BIT; }
if self.storage { result |= vk::IMAGE_USAGE_STORAGE_BIT; }
if self.color_attachment { result |= vk::IMAGE_USAGE_COLOR_ATTACHMENT_BIT; }
if self.depth_stencil_attachment { result |= vk::IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT; }
if self.transient_attachment { result |= vk::IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT; }
if self.input_attachment { result |= vk::IMAGE_USAGE_INPUT_ATTACHMENT_BIT; }
result
}
#[inline]
#[doc(hidden)]
pub fn from_bits(val: u32) -> Usage {
Usage {
transfer_source: (val & vk::IMAGE_USAGE_TRANSFER_SRC_BIT) != 0,
transfer_dest: (val & vk::IMAGE_USAGE_TRANSFER_DST_BIT) != 0,
sampled: (val & vk::IMAGE_USAGE_SAMPLED_BIT) != 0,
storage: (val & vk::IMAGE_USAGE_STORAGE_BIT) != 0,
color_attachment: (val & vk::IMAGE_USAGE_COLOR_ATTACHMENT_BIT) != 0,
depth_stencil_attachment: (val & vk::IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT) != 0,
transient_attachment: (val & vk::IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT) != 0,
input_attachment: (val & vk::IMAGE_USAGE_INPUT_ATTACHMENT_BIT) != 0,
}
}
}
/// A view into an image.
///
/// Accessing an image from within a shader can only be done through an `ImageView`. An `ImageView`
/// represents a region of an image. You can also do things like create a 2D view of a 3D
/// image, swizzle the channels, or reinterpret the format of the texture (with some restrictions).
pub struct ImageView<Ty, F, M> where Ty: ImageTypeMarker {
image: Arc<Image<Ty, F, M>>,
view: vk::ImageView,
}
impl<Ty, F, M> ImageView<Ty, F, M> where Ty: ImageTypeMarker {
pub fn new(image: &Arc<Image<Ty, F, M>>) -> Result<Arc<ImageView<Ty, F, M>>, OomError>
where F: FormatMarker
{
let vk = image.device.pointers();
let view = unsafe {
let infos = vk::ImageViewCreateInfo {
sType: vk::STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
image: image.image,
viewType: vk::IMAGE_VIEW_TYPE_2D, // FIXME:
format: F::format() as u32,
components: vk::ComponentMapping { r: 0, g: 0, b: 0, a: 0 }, // FIXME:
subresourceRange: vk::ImageSubresourceRange {
aspectMask: 1, // FIXME:
baseMipLevel: 0, // FIXME:
levelCount: 1, // FIXME:
baseArrayLayer: 0, // FIXME:
layerCount: 1, // FIXME:
},
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateImageView(image.device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(ImageView {
image: image.clone(),
view: view,
}))
}
/// Returns the image that this view was created from.
#[inline]
pub fn image(&self) -> &Arc<Image<Ty, F, M>> {
&self.image
}
}
impl<Ty, F, M> ImageView<Ty, F, M> where Ty: ImageTypeMarker {
// TODO: hack, remove
#[doc(hidden)]
pub fn id(&self) -> u64 { self.view }
}
impl<Ty, F, M> Drop for ImageView<Ty, F, M> where Ty: ImageTypeMarker {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.image.device.pointers();
vk.DestroyImageView(self.image.device.internal_object(), self.view, ptr::null());
}
}
}
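/// Describes how the components of an image are remapped when accessed through a view.
///
/// ```
/// use vulkano::image::{ComponentSwizzle, Swizzle};
///
/// // the default swizzle maps every component to itself:
/// assert_eq!(Swizzle::default().r, ComponentSwizzle::Identity);
/// ```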
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq)]
pub struct Swizzle {
pub r: ComponentSwizzle,
pub g: ComponentSwizzle,
pub b: ComponentSwizzle,
pub a: ComponentSwizzle,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum ComponentSwizzle {
Identity,
Zero,
One,
Red,
Green,
Blue,
Alpha,
}
impl Default for ComponentSwizzle {
#[inline]
fn default() -> ComponentSwizzle {
ComponentSwizzle::Identity
}
}
pub struct Type1d;
unsafe impl TypeMarker for Type1d {
}
unsafe impl ImageViewTypeMarker for Type1d {
}
unsafe impl ImageTypeMarker for Type1d {
type Dimensions = u32;
type NumSamples = ();
#[inline]
fn extent(dims: u32) -> [u32; 3] {
[dims, 1, 1]
}
#[inline]
fn array_layers(_: u32) -> u32 {
1
}
#[inline]
fn num_samples(_: ()) -> u32 {
1
}
#[inline]
fn ty() -> ImageType {
ImageType::Type1d
}
}
pub struct Type1dMultisample;
unsafe impl TypeMarker for Type1dMultisample {
}
unsafe impl ImageViewTypeMarker for Type1dMultisample {
}
unsafe impl ImageTypeMarker for Type1dMultisample {
type Dimensions = u32;
type NumSamples = u32;
#[inline]
fn extent(dims: u32) -> [u32; 3] {
[dims, 1, 1]
}
#[inline]
fn array_layers(_: u32) -> u32 {
1
}
#[inline]
fn num_samples(num: u32) -> u32 {
num
}
#[inline]
fn ty() -> ImageType {
ImageType::Type1d
}
}
unsafe impl MultisampleType for Type1dMultisample {
}
pub struct Type2d;
unsafe impl TypeMarker for Type2d {
}
unsafe impl ImageViewTypeMarker for Type2d {
}
unsafe impl ImageTypeMarker for Type2d {
type Dimensions = [u32; 2];
type NumSamples = ();
#[inline]
fn extent(dims: [u32; 2]) -> [u32; 3] {
[dims[0], dims[1], 1]
}
#[inline]
fn array_layers(_: [u32; 2]) -> u32 {
1
}
#[inline]
fn num_samples(_: ()) -> u32 {
1
}
#[inline]
fn ty() -> ImageType {
ImageType::Type2d
}
}
pub struct Type2dMultisample;
unsafe impl TypeMarker for Type2dMultisample {
}
unsafe impl ImageViewTypeMarker for Type2dMultisample {
}
unsafe impl ImageTypeMarker for Type2dMultisample {
type Dimensions = [u32; 2];
type NumSamples = u32;
#[inline]
fn extent(dims: [u32; 2]) -> [u32; 3] {
[dims[0], dims[1], 1]
}
#[inline]
fn array_layers(_: [u32; 2]) -> u32 {
1
}
#[inline]
fn num_samples(num: u32) -> u32 {
num
}
#[inline]
fn ty() -> ImageType {
ImageType::Type2d
}
}
unsafe impl MultisampleType for Type2dMultisample {
}
pub struct Type3d;
unsafe impl TypeMarker for Type3d {
}
unsafe impl ImageViewTypeMarker for Type3d {
}
unsafe impl ImageTypeMarker for Type3d {
type Dimensions = [u32; 3];
type NumSamples = ();
#[inline]
fn extent(dims: [u32; 3]) -> [u32; 3] {
dims
}
#[inline]
fn array_layers(_: [u32; 3]) -> u32 {
1
}
#[inline]
fn num_samples(_: ()) -> u32 {
1
}
#[inline]
fn ty() -> ImageType {
ImageType::Type3d
}
}
pub struct Type3dMultisample;
unsafe impl TypeMarker for Type3dMultisample {
}
unsafe impl ImageViewTypeMarker for Type3dMultisample {
}
unsafe impl ImageTypeMarker for Type3dMultisample {
type Dimensions = [u32; 3];
type NumSamples = u32;
#[inline]
fn extent(dims: [u32; 3]) -> [u32; 3] {
dims
}
#[inline]
fn array_layers(_: [u32; 3]) -> u32 {
1
}
#[inline]
fn num_samples(num: u32) -> u32 {
num
}
#[inline]
fn ty() -> ImageType {
ImageType::Type3d
}
}
unsafe impl MultisampleType for Type3dMultisample {
}
pub struct TypeCube;
unsafe impl TypeMarker for TypeCube {
}
unsafe impl ImageViewTypeMarker for TypeCube {
}
pub struct TypeCubeMultisample;
unsafe impl TypeMarker for TypeCubeMultisample {
}
unsafe impl ImageViewTypeMarker for TypeCubeMultisample {
}
unsafe impl MultisampleType for TypeCubeMultisample {
}
pub struct Type1dArray;
unsafe impl TypeMarker for Type1dArray {
}
unsafe impl ImageViewTypeMarker for Type1dArray {
}
pub struct Type1dArrayMultisample;
unsafe impl TypeMarker for Type1dArrayMultisample {
}
unsafe impl ImageViewTypeMarker for Type1dArrayMultisample {
}
unsafe impl MultisampleType for Type1dArrayMultisample {
}
pub struct Type2dArray;
unsafe impl TypeMarker for Type2dArray {
}
unsafe impl ImageViewTypeMarker for Type2dArray {
}
unsafe impl ImageTypeMarker for Type2dArray {
type Dimensions = ([u32; 2], u32);
type NumSamples = ();
#[inline]
fn extent(dims: ([u32; 2], u32)) -> [u32; 3] {
[dims.0[0], dims.0[1], 1]
}
#[inline]
fn array_layers(dims: ([u32; 2], u32)) -> u32 {
dims.1
}
#[inline]
fn num_samples(_: ()) -> u32 {
1
}
#[inline]
fn ty() -> ImageType {
ImageType::Type2d
}
}
pub struct Type2dArrayMultisample;
unsafe impl TypeMarker for Type2dArrayMultisample {
}
unsafe impl ImageViewTypeMarker for Type2dArrayMultisample {
}
unsafe impl ImageTypeMarker for Type2dArrayMultisample {
type Dimensions = ([u32; 2], u32);
type NumSamples = u32;
#[inline]
fn extent(dims: ([u32; 2], u32)) -> [u32; 3] {
[dims.0[0], dims.0[1], 1]
}
#[inline]
fn array_layers(dims: ([u32; 2], u32)) -> u32 {
dims.1
}
#[inline]
fn num_samples(num: u32) -> u32 {
num
}
#[inline]
fn ty() -> ImageType {
ImageType::Type2d
}
}
unsafe impl MultisampleType for Type2dArrayMultisample {
}
pub struct TypeCubeArray;
unsafe impl TypeMarker for TypeCubeArray {
}
unsafe impl ImageViewTypeMarker for TypeCubeArray {
}
pub struct TypeCubeArrayMultisample;
unsafe impl TypeMarker for TypeCubeArrayMultisample {
}
unsafe impl ImageViewTypeMarker for TypeCubeArrayMultisample {
}
unsafe impl MultisampleType for TypeCubeArrayMultisample {
}

880
vulkano/src/instance.rs Normal file
View File

@ -0,0 +1,880 @@
//! API entry point.
//!
//! Creating an instance initializes everything and allows you to:
//!
//! - Enumerate physical devices.
//! - Enumerate monitors.
//! - Create surfaces.
//!
//! # Application info
//!
//! When you create an instance, you have the possibility to pass an `ApplicationInfo` struct. This
//! struct contains various information about your application, most notably its name and engine.
//!
//! Passing such a structure allows the driver, for example, to let the user configure the
//! driver's behavior specifically for your application through a control panel.
//!
//! # Enumerating physical devices
//!
//! After you have created an instance, the next step is to enumerate the physical devices that
//! are available on the system with `PhysicalDevice::enumerate()`.
//!
//! When choosing which physical device to use, keep in mind that physical devices may or may not
//! be able to draw to a certain surface (ie. to a window or a monitor). See the `swapchain`
//! module for more info.
//!
//! A physical device can be a video card or an integrated chip, but also multiple video
//! cards working together. Once you have chosen a physical device, you can create a `Device`
//! from it. See the `device` module for more info.
//!
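//! # Example
//!
//! A minimal sketch: create an instance with no layers and list the available devices.
//!
//! ```no_run
//! use vulkano::instance::{Instance, PhysicalDevice};
//!
//! let instance = Instance::new(None, None).expect("failed to create instance");
//! for dev in PhysicalDevice::enumerate(&instance) {
//!     println!("{}: {:?}", dev.name(), dev.ty());
//! }
//! ```
//!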
use std::error;
use std::ffi::CStr;
use std::ffi::CString;
use std::fmt;
use std::mem;
use std::ptr;
use std::sync::Arc;
//use alloc::Alloc;
use check_errors;
use Error;
use OomError;
use VulkanObject;
use VulkanPointers;
use vk;
use VK_ENTRY;
use VK_STATIC;
pub use features::Features;
pub use version::Version;
/// An instance of a Vulkan context. This is the main object that should be created by an
/// application before everything else.
pub struct Instance {
instance: vk::Instance,
//alloc: Option<Box<Alloc + Send + Sync>>,
physical_devices: Vec<PhysicalDeviceInfos>,
vk: vk::InstancePointers,
}
impl Instance {
/// Initializes a new instance of Vulkan.
// TODO: if no allocator is specified by the user, use Rust's allocator instead of leaving
// the choice to Vulkan
pub fn new<'a, L>(app_infos: Option<&ApplicationInfo>, layers: L)
-> Result<Arc<Instance>, InstanceCreationError>
where L: IntoIterator<Item = &'a &'a str>
{
// Building the CStrings from the `str`s within `app_infos`.
// They need to be created ahead of time, since we pass pointers to them.
let app_infos_strings = if let Some(app_infos) = app_infos {
Some((
CString::new(app_infos.application_name).unwrap(),
CString::new(app_infos.engine_name).unwrap()
))
} else {
None
};
// Building the `vk::ApplicationInfo` if required.
let app_infos = if let Some(app_infos) = app_infos {
Some(vk::ApplicationInfo {
sType: vk::STRUCTURE_TYPE_APPLICATION_INFO,
pNext: ptr::null(),
pApplicationName: app_infos_strings.as_ref().unwrap().0.as_ptr(),
applicationVersion: app_infos.application_version,
pEngineName: app_infos_strings.as_ref().unwrap().1.as_ptr(),
engineVersion: app_infos.engine_version,
apiVersion: Version { major: 1, minor: 0, patch: 0 }.into_vulkan_version(), // TODO:
})
} else {
None
};
let layers = layers.into_iter().map(|&layer| {
// FIXME: check whether each layer is supported
CString::new(layer).unwrap()
}).collect::<Vec<_>>();
let layers = layers.iter().map(|layer| {
layer.as_ptr()
}).collect::<Vec<_>>();
let extensions = ["VK_KHR_surface", "VK_KHR_win32_surface"].iter().map(|&ext| {
// FIXME: check whether each extension is supported
CString::new(ext).unwrap()
}).collect::<Vec<_>>();
let extensions = extensions.iter().map(|extension| {
extension.as_ptr()
}).collect::<Vec<_>>();
// Creating the Vulkan instance.
let instance = unsafe {
let mut output = mem::uninitialized();
let infos = vk::InstanceCreateInfo {
sType: vk::STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
pNext: ptr::null(),
flags: 0,
pApplicationInfo: if let Some(app) = app_infos.as_ref() {
app as *const _
} else {
ptr::null()
},
enabledLayerCount: layers.len() as u32,
ppEnabledLayerNames: layers.as_ptr(),
enabledExtensionCount: extensions.len() as u32,
ppEnabledExtensionNames: extensions.as_ptr(),
};
try!(check_errors(VK_ENTRY.CreateInstance(&infos, ptr::null(), &mut output)));
output
};
// Loading the function pointers of the newly-created instance.
let vk = vk::InstancePointers::load(|name| unsafe {
mem::transmute(VK_STATIC.GetInstanceProcAddr(instance, name.as_ptr()))
});
// Enumerating all physical devices.
let physical_devices: Vec<vk::PhysicalDevice> = unsafe {
let mut num = mem::uninitialized();
try!(check_errors(vk.EnumeratePhysicalDevices(instance, &mut num, ptr::null_mut())));
let mut devices = Vec::with_capacity(num as usize);
try!(check_errors(vk.EnumeratePhysicalDevices(instance, &mut num,
devices.as_mut_ptr())));
devices.set_len(num as usize);
devices
};
// Getting the properties of all physical devices.
let physical_devices = {
let mut output = Vec::with_capacity(physical_devices.len());
for device in physical_devices.into_iter() {
let properties: vk::PhysicalDeviceProperties = unsafe {
let mut output = mem::uninitialized();
vk.GetPhysicalDeviceProperties(device, &mut output);
output
};
let queue_families = unsafe {
let mut num = mem::uninitialized();
vk.GetPhysicalDeviceQueueFamilyProperties(device, &mut num, ptr::null_mut());
let mut families = Vec::with_capacity(num as usize);
vk.GetPhysicalDeviceQueueFamilyProperties(device, &mut num,
families.as_mut_ptr());
families.set_len(num as usize);
families
};
let memory: vk::PhysicalDeviceMemoryProperties = unsafe {
let mut output = mem::uninitialized();
vk.GetPhysicalDeviceMemoryProperties(device, &mut output);
output
};
let available_features: vk::PhysicalDeviceFeatures = unsafe {
let mut output = mem::uninitialized();
vk.GetPhysicalDeviceFeatures(device, &mut output);
output
};
output.push(PhysicalDeviceInfos {
device: device,
properties: properties,
memory: memory,
queue_families: queue_families,
available_features: Features::from(available_features),
});
}
output
};
Ok(Arc::new(Instance {
instance: instance,
//alloc: None,
physical_devices: physical_devices,
vk: vk,
}))
}
/*/// Same as `new`, but provides an allocator that will be used by the Vulkan library whenever
/// it needs to allocate memory on the host.
///
/// Note that this allocator can be overridden when you create a `Device`, a `MemoryPool`, etc.
pub fn with_alloc(app_infos: Option<&ApplicationInfo>, alloc: Box<Alloc + Send + Sync>) -> Arc<Instance> {
unimplemented!()
}*/
}
impl fmt::Debug for Instance {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "<Vulkan instance>")
}
}
impl VulkanObject for Instance {
type Object = vk::Instance;
#[inline]
fn internal_object(&self) -> vk::Instance {
self.instance
}
}
impl VulkanPointers for Instance {
type Pointers = vk::InstancePointers;
#[inline]
fn pointers(&self) -> &vk::InstancePointers {
&self.vk
}
}
impl Drop for Instance {
#[inline]
fn drop(&mut self) {
unsafe {
self.vk.DestroyInstance(self.instance, ptr::null());
}
}
}
/// Information that can be given to the Vulkan driver so that it can identify your application.
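///
/// # Example
///
/// A sketch of passing application infos at instance creation:
///
/// ```no_run
/// use vulkano::instance::{ApplicationInfo, Instance};
///
/// let app_infos = ApplicationInfo {
///     application_name: "my game",
///     application_version: 1,
///     engine_name: "my engine",
///     engine_version: 1,
/// };
/// let instance = Instance::new(Some(&app_infos), None).unwrap();
/// ```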
pub struct ApplicationInfo<'a> {
/// Name of the application.
pub application_name: &'a str,
/// An opaque number that contains the version number of the application.
pub application_version: u32,
/// Name of the engine used to power the application.
pub engine_name: &'a str,
/// An opaque number that contains the version number of the engine.
pub engine_version: u32,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[repr(u32)]
pub enum InstanceCreationError {
OutOfHostMemory = vk::ERROR_OUT_OF_HOST_MEMORY,
OutOfDeviceMemory = vk::ERROR_OUT_OF_DEVICE_MEMORY,
InitializationFailed = vk::ERROR_INITIALIZATION_FAILED,
LayerNotPresent = vk::ERROR_LAYER_NOT_PRESENT,
ExtensionNotPresent = vk::ERROR_EXTENSION_NOT_PRESENT,
IncompatibleDriver = vk::ERROR_INCOMPATIBLE_DRIVER,
}
impl error::Error for InstanceCreationError {
#[inline]
fn description(&self) -> &str {
match *self {
InstanceCreationError::OutOfHostMemory => "no memory available on the host",
InstanceCreationError::OutOfDeviceMemory => "no memory available on the graphical device",
InstanceCreationError::InitializationFailed => "initialization failed",
InstanceCreationError::LayerNotPresent => "layer not present",
InstanceCreationError::ExtensionNotPresent => "extension not present",
InstanceCreationError::IncompatibleDriver => "incompatible driver",
}
}
}
impl fmt::Display for InstanceCreationError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}
impl From<Error> for InstanceCreationError {
#[inline]
fn from(err: Error) -> InstanceCreationError {
match err {
Error::OutOfHostMemory => InstanceCreationError::OutOfHostMemory,
Error::OutOfDeviceMemory => InstanceCreationError::OutOfDeviceMemory,
Error::InitializationFailed => InstanceCreationError::InitializationFailed,
Error::LayerNotPresent => InstanceCreationError::LayerNotPresent,
Error::ExtensionNotPresent => InstanceCreationError::ExtensionNotPresent,
Error::IncompatibleDriver => InstanceCreationError::IncompatibleDriver,
_ => panic!("unexpected error: {:?}", err)
}
}
}
/// Queries the list of layers that are available when creating an instance.
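///
/// # Example
///
/// A sketch: enable a validation layer only if it is actually present. The layer name is
/// an assumption for illustration, not something this library provides.
///
/// ```no_run
/// use vulkano::instance::{layers_list, Instance};
///
/// // hypothetical layer name:
/// let wanted = "VK_LAYER_LUNARG_standard_validation";
/// let layers: Vec<&str> = if layers_list().unwrap().iter().any(|l| l.name() == wanted) {
///     vec![wanted]
/// } else {
///     vec![]
/// };
/// let instance = Instance::new(None, &layers).unwrap();
/// ```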
pub fn layers_list() -> Result<Vec<LayerProperties>, OomError> {
unsafe {
let mut num = mem::uninitialized();
try!(check_errors(VK_ENTRY.EnumerateInstanceLayerProperties(&mut num, ptr::null_mut())));
let mut layers: Vec<vk::LayerProperties> = Vec::with_capacity(num as usize);
try!(check_errors(VK_ENTRY.EnumerateInstanceLayerProperties(&mut num, layers.as_mut_ptr())));
layers.set_len(num as usize);
Ok(layers.into_iter().map(|layer| {
LayerProperties { props: layer }
}).collect())
}
}
/// Properties of an available layer.
pub struct LayerProperties {
props: vk::LayerProperties,
}
impl LayerProperties {
/// Returns the name of the layer.
#[inline]
pub fn name(&self) -> &str {
unsafe { CStr::from_ptr(self.props.layerName.as_ptr()).to_str().unwrap() }
}
/// Returns a description of the layer.
#[inline]
pub fn description(&self) -> &str {
unsafe { CStr::from_ptr(self.props.description.as_ptr()).to_str().unwrap() }
}
/// Returns the version of Vulkan supported by this layer.
#[inline]
pub fn vulkan_version(&self) -> Version {
Version::from_vulkan_version(self.props.specVersion)
}
/// Returns an implementation-specific version number for this layer.
#[inline]
pub fn implementation_version(&self) -> u32 {
self.props.implementationVersion
}
}
struct PhysicalDeviceInfos {
device: vk::PhysicalDevice,
properties: vk::PhysicalDeviceProperties,
queue_families: Vec<vk::QueueFamilyProperties>,
memory: vk::PhysicalDeviceMemoryProperties,
available_features: Features,
}
/// Represents one of the available devices on this machine.
#[derive(Debug, Clone)]
pub struct PhysicalDevice {
instance: Arc<Instance>,
device: usize,
}
impl PhysicalDevice {
/// Returns an iterator that enumerates the physical devices available.
#[inline]
pub fn enumerate(instance: &Arc<Instance>) -> PhysicalDevicesIter {
PhysicalDevicesIter {
instance: instance,
current_id: 0,
}
}
/// Returns the instance corresponding to this physical device.
#[inline]
pub fn instance(&self) -> &Arc<Instance> {
&self.instance
}
/// Returns the human-readable name of the device.
#[inline]
pub fn name(&self) -> String { // FIXME: for some reason this panics if you use a `&str`
unsafe {
let val = self.infos().properties.deviceName;
let val = CStr::from_ptr(val.as_ptr());
val.to_str().expect("physical device name contained non-UTF8 characters").to_owned()
}
}
/// Returns the type of the device.
#[inline]
pub fn ty(&self) -> PhysicalDeviceType {
match self.instance.physical_devices[self.device].properties.deviceType {
vk::PHYSICAL_DEVICE_TYPE_OTHER => PhysicalDeviceType::Other,
vk::PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU => PhysicalDeviceType::IntegratedGpu,
vk::PHYSICAL_DEVICE_TYPE_DISCRETE_GPU => PhysicalDeviceType::DiscreteGpu,
vk::PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU => PhysicalDeviceType::VirtualGpu,
vk::PHYSICAL_DEVICE_TYPE_CPU => PhysicalDeviceType::Cpu,
_ => panic!("Unrecognized Vulkan device type")
}
}
/// Returns the version of Vulkan supported by this device.
#[inline]
pub fn api_version(&self) -> Version {
let val = self.infos().properties.apiVersion;
Version::from_vulkan_version(val)
}
/// Returns the Vulkan features that are supported by this physical device.
#[inline]
pub fn supported_features(&self) -> &Features {
&self.infos().available_features
}
/// Builds an iterator that enumerates all the queue families on this physical device.
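///
/// # Example
///
/// A sketch: pick the first family that supports graphics.
///
/// ```no_run
/// # let physical_device: vulkano::instance::PhysicalDevice = unimplemented!();
/// let family = physical_device.queue_families()
///     .find(|q| q.supports_graphics())
///     .expect("couldn't find a graphics queue family");
/// ```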
#[inline]
pub fn queue_families(&self) -> QueueFamiliesIter {
QueueFamiliesIter {
physical_device: self,
current_id: 0,
}
}
/// Returns the queue family with the given index, or `None` if out of range.
#[inline]
pub fn queue_family_by_id(&self, id: u32) -> Option<QueueFamily> {
if (id as usize) < self.infos().queue_families.len() {
Some(QueueFamily {
physical_device: self,
id: id,
})
} else {
None
}
}
/// Builds an iterator that enumerates all the memory types on this physical device.
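///
/// # Example
///
/// A sketch: find a memory type that the CPU can map.
///
/// ```no_run
/// # let physical_device: vulkano::instance::PhysicalDevice = unimplemented!();
/// let mem_ty = physical_device.memory_types()
///     .find(|t| t.is_host_visible())
///     .expect("no host-visible memory type");
/// ```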
#[inline]
pub fn memory_types(&self) -> MemoryTypesIter {
MemoryTypesIter {
physical_device: self,
current_id: 0,
}
}
/// Returns the memory type with the given index, or `None` if out of range.
#[inline]
pub fn memory_type_by_id(&self, id: u32) -> Option<MemoryType> {
if id < self.infos().memory.memoryTypeCount {
Some(MemoryType {
physical_device: self,
id: id,
})
} else {
None
}
}
/// Builds an iterator that enumerates all the memory heaps on this physical device.
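///
/// # Example
///
/// A sketch: add up the size of the device-local heaps.
///
/// ```no_run
/// # let physical_device: vulkano::instance::PhysicalDevice = unimplemented!();
/// let device_local_bytes = physical_device.memory_heaps()
///     .filter(|heap| heap.is_device_local())
///     .fold(0, |total, heap| total + heap.size());
/// ```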
#[inline]
pub fn memory_heaps(&self) -> MemoryHeapsIter {
MemoryHeapsIter {
physical_device: self,
current_id: 0,
}
}
/// Returns the memory heap with the given index, or `None` if out of range.
#[inline]
pub fn memory_heap_by_id(&self, id: u32) -> Option<MemoryHeap> {
if id < self.infos().memory.memoryHeapCount {
Some(MemoryHeap {
physical_device: self,
id: id,
})
} else {
None
}
}
/// Returns an opaque number representing the version of the driver of this device.
#[inline]
pub fn driver_version(&self) -> u32 {
self.infos().properties.driverVersion
}
/// Returns the PCI ID of the device.
#[inline]
pub fn pci_device_id(&self) -> u32 {
self.infos().properties.deviceID
}
/// Returns the PCI ID of the vendor.
#[inline]
pub fn pci_vendor_id(&self) -> u32 {
self.infos().properties.vendorID
}
/// Returns a unique identifier for the device. (Internally, this is the pipeline cache UUID.)
#[inline]
pub fn uuid(&self) -> &[u8; 16] { // must be equal to vk::UUID_SIZE
&self.infos().properties.pipelineCacheUUID
}
/// Internal function to make it easier to get the infos of this device.
#[inline]
fn infos(&self) -> &PhysicalDeviceInfos {
&self.instance.physical_devices[self.device]
}
}
impl VulkanObject for PhysicalDevice {
type Object = vk::PhysicalDevice;
#[inline]
fn internal_object(&self) -> vk::PhysicalDevice {
self.infos().device
}
}
/// Iterator for all the physical devices available on the system.
#[derive(Debug, Clone)]
pub struct PhysicalDevicesIter<'a> {
instance: &'a Arc<Instance>,
current_id: usize,
}
impl<'a> Iterator for PhysicalDevicesIter<'a> {
type Item = PhysicalDevice;
#[inline]
fn next(&mut self) -> Option<PhysicalDevice> {
if self.current_id >= self.instance.physical_devices.len() {
return None;
}
let dev = PhysicalDevice {
instance: self.instance.clone(),
device: self.current_id,
};
self.current_id += 1;
Some(dev)
}
}
/// Type of a physical device.
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
#[repr(u32)]
pub enum PhysicalDeviceType {
/// The device is an integrated GPU.
IntegratedGpu = 1,
/// The device is a discrete GPU.
DiscreteGpu = 2,
/// The device is a virtual GPU.
VirtualGpu = 3,
/// The device is a CPU.
Cpu = 4,
/// The device is something else.
Other = 0,
}
/// Represents a queue family in a physical device.
///
/// A queue family is a group of one or more queues. All queues of one family have the same
/// characteristics.
#[derive(Debug, Copy, Clone)]
pub struct QueueFamily<'a> {
physical_device: &'a PhysicalDevice,
id: u32,
}
impl<'a> QueueFamily<'a> {
/// Returns the physical device associated to this queue family.
#[inline]
pub fn physical_device(&self) -> &'a PhysicalDevice {
self.physical_device
}
/// Returns the identifier of this queue family within the physical device.
#[inline]
pub fn id(&self) -> u32 {
self.id
}
/// Returns the number of queues that belong to this family.
///
/// Guaranteed to be at least 1 (or else that family wouldn't exist).
#[inline]
pub fn queues_count(&self) -> usize {
self.physical_device.infos().queue_families[self.id as usize].queueCount as usize
}
/// Returns true if queues of this family can execute graphics operations.
#[inline]
pub fn supports_graphics(&self) -> bool {
(self.flags() & vk::QUEUE_GRAPHICS_BIT) != 0
}
/// Returns true if queues of this family can execute compute operations.
#[inline]
pub fn supports_compute(&self) -> bool {
(self.flags() & vk::QUEUE_COMPUTE_BIT) != 0
}
/// Returns true if queues of this family can execute transfer operations.
#[inline]
pub fn supports_transfers(&self) -> bool {
(self.flags() & vk::QUEUE_TRANSFER_BIT) != 0
}
/// Returns true if queues of this family can execute sparse resource binding operations.
#[inline]
pub fn supports_sparse_binding(&self) -> bool {
(self.flags() & vk::QUEUE_SPARSE_BINDING_BIT) != 0
}
/// Internal utility function that returns the flags of this queue family.
#[inline]
fn flags(&self) -> u32 {
self.physical_device.infos().queue_families[self.id as usize].queueFlags
}
}
/// Iterator for all the queue families available on a physical device.
#[derive(Debug, Clone)]
pub struct QueueFamiliesIter<'a> {
physical_device: &'a PhysicalDevice,
current_id: u32,
}
impl<'a> Iterator for QueueFamiliesIter<'a> {
type Item = QueueFamily<'a>;
#[inline]
fn next(&mut self) -> Option<QueueFamily<'a>> {
if self.current_id as usize >= self.physical_device.infos().queue_families.len() {
return None;
}
let dev = QueueFamily {
physical_device: self.physical_device,
id: self.current_id,
};
self.current_id += 1;
Some(dev)
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
let len = self.physical_device.infos().queue_families.len();
let remain = len - self.current_id as usize;
(remain, Some(remain))
}
}
impl<'a> ExactSizeIterator for QueueFamiliesIter<'a> {}
/// Represents a memory type in a physical device.
#[derive(Debug, Copy, Clone)]
pub struct MemoryType<'a> {
physical_device: &'a PhysicalDevice,
id: u32,
}
impl<'a> MemoryType<'a> {
/// Returns the physical device associated to this memory type.
#[inline]
pub fn physical_device(&self) -> &'a PhysicalDevice {
self.physical_device
}
/// Returns the identifier of this memory type within the physical device.
#[inline]
pub fn id(&self) -> u32 {
self.id
}
/// Returns the heap that corresponds to this memory type.
#[inline]
pub fn heap(&self) -> MemoryHeap<'a> {
let heap_id = self.physical_device.infos().memory.memoryTypes[self.id as usize].heapIndex;
MemoryHeap { physical_device: self.physical_device, id: heap_id }
}
/// Returns true if the memory type is located on the device, which means that it's the most
/// efficient for GPU accesses.
#[inline]
pub fn is_device_local(&self) -> bool {
(self.flags() & vk::MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0
}
/// Returns true if the memory type can be accessed by the host.
#[inline]
pub fn is_host_visible(&self) -> bool {
(self.flags() & vk::MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0
}
/// Returns true if modifications made by the host or the GPU on this memory type are
/// instantaneously visible to the other party. False means that changes have to be flushed.
///
/// You don't need to worry about this, as this library handles that for you.
#[inline]
pub fn is_host_coherent(&self) -> bool {
(self.flags() & vk::MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0
}
/// Returns true if memory of this memory type is cached by the host. Host accesses to
/// cached memory are faster than to uncached memory. However, you are not guaranteed that it
/// is coherent.
#[inline]
pub fn is_host_cached(&self) -> bool {
(self.flags() & vk::MEMORY_PROPERTY_HOST_CACHED_BIT) != 0
}
/// Returns true if allocations made from this memory type are lazy.
///
/// This means that no actual allocation is performed. Instead, memory is automatically
/// allocated by the Vulkan implementation as needed.
///
/// Memory of this type can only be used on images created with a certain flag. Memory of this
/// type is never host-visible.
#[inline]
pub fn is_lazily_allocated(&self) -> bool {
(self.flags() & vk::MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) != 0
}
/// Internal utility function that returns the flags of this memory type.
#[inline]
fn flags(&self) -> u32 {
self.physical_device.infos().memory.memoryTypes[self.id as usize].propertyFlags
}
}
/// Iterator for all the memory types available on a physical device.
#[derive(Debug, Clone)]
pub struct MemoryTypesIter<'a> {
physical_device: &'a PhysicalDevice,
current_id: u32,
}
impl<'a> Iterator for MemoryTypesIter<'a> {
type Item = MemoryType<'a>;
#[inline]
fn next(&mut self) -> Option<MemoryType<'a>> {
if self.current_id >= self.physical_device.infos().memory.memoryTypeCount {
return None;
}
let dev = MemoryType {
physical_device: self.physical_device,
id: self.current_id,
};
self.current_id += 1;
Some(dev)
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
let len = self.physical_device.infos().memory.memoryTypeCount;
let remain = (len - self.current_id) as usize;
(remain, Some(remain))
}
}
impl<'a> ExactSizeIterator for MemoryTypesIter<'a> {}
/// Represents a memory heap in a physical device.
#[derive(Debug, Copy, Clone)]
pub struct MemoryHeap<'a> {
physical_device: &'a PhysicalDevice,
id: u32,
}
impl<'a> MemoryHeap<'a> {
/// Returns the physical device associated to this memory heap.
#[inline]
pub fn physical_device(&self) -> &'a PhysicalDevice {
self.physical_device
}
/// Returns the identifier of this memory heap within the physical device.
#[inline]
pub fn id(&self) -> u32 {
self.id
}
/// Returns the size in bytes of this heap.
#[inline]
pub fn size(&self) -> usize {
self.physical_device.infos().memory.memoryHeaps[self.id as usize].size as usize
}
/// Returns true if the heap is local to the GPU.
#[inline]
pub fn is_device_local(&self) -> bool {
let flags = self.physical_device.infos().memory.memoryHeaps[self.id as usize].flags;
(flags & vk::MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0
}
}
/// Iterator for all the memory heaps available on a physical device.
#[derive(Debug, Clone)]
pub struct MemoryHeapsIter<'a> {
physical_device: &'a PhysicalDevice,
current_id: u32,
}
impl<'a> Iterator for MemoryHeapsIter<'a> {
type Item = MemoryHeap<'a>;
#[inline]
fn next(&mut self) -> Option<MemoryHeap<'a>> {
if self.current_id >= self.physical_device.infos().memory.memoryHeapCount {
return None;
}
let dev = MemoryHeap {
physical_device: self.physical_device,
id: self.current_id,
};
self.current_id += 1;
Some(dev)
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
let len = self.physical_device.infos().memory.memoryHeapCount;
let remain = (len - self.current_id) as usize;
(remain, Some(remain))
}
}
impl<'a> ExactSizeIterator for MemoryHeapsIter<'a> {}
#[cfg(test)]
mod tests {
use instance;
#[test]
fn create_instance() {
let _ = instance::Instance::new(None, None);
}
#[test]
fn layers_list() {
let _ = instance::layers_list();
}
#[test]
fn queue_family_by_id() {
let instance = match instance::Instance::new(None, None) {
Ok(i) => i, Err(_) => return
};
let phys = match instance::PhysicalDevice::enumerate(&instance).next() {
Some(p) => p,
None => return
};
let queue_family = match phys.queue_families().next() {
Some(q) => q,
None => return
};
let by_id = phys.queue_family_by_id(queue_family.id()).unwrap();
assert_eq!(by_id.id(), queue_family.id());
}
}

217
vulkano/src/lib.rs Normal file
View File

@ -0,0 +1,217 @@
//!
//! # Brief summary of Vulkan
//!
//! - The `Instance` object is the API entry point. It is the first object you must create before
//! starting to use Vulkan.
//!
//! - The `PhysicalDevice` object represents an implementation of Vulkan available on the system
//! (eg. a graphics card, a CPU implementation, multiple graphics cards working together, etc.).
//! Physical devices can be enumerated from an instance with `PhysicalDevice::enumerate()`.
//!
//! - Once you have chosen a physical device to use, you must create a `Device` object from it.
//! The `Device` is another very important object, as it represents an open channel of
//! communication with the physical device.
//!
//! - `Buffer`s and `Image`s can be used to store data on memory accessible from the GPU (or
//! Vulkan implementation). Buffers are usually used to store vertices, lights, etc. or
//! arbitrary data, while images are used to store textures or multi-dimensional data.
//!
//! - In order to show something on the screen, you need a `Swapchain`. A `Swapchain` contains a
//! special `Image` that corresponds to the content of the window or the monitor. When you
//! *present* a swapchain, the content of that special image is shown on the screen.
//!
//! - `ComputePipeline`s and `GraphicsPipeline`s describe the way the GPU must perform a certain
//! operation. `Shader`s are programs that the GPU will execute as part of a pipeline.
//!
//! - `RenderPass`es and `Framebuffer`s describe the attachments that the implementation must
//! draw on. They are only used for graphical operations.
//!
//! - In order to ask the GPU to do something, you must create a `CommandBuffer`. A `CommandBuffer`
//! contains a list of commands that the GPU must perform. This can include copies between
//! buffers, compute operations, or graphics operations. For the work to start, the
//! `CommandBuffer` must then be submitted to a `Queue`, which is obtained when you create
//! the `Device`.
//!
//#![warn(missing_docs)] // TODO: activate
#![allow(dead_code)] // TODO: remove
#![allow(unused_variables)] // TODO: remove
#[macro_use]
extern crate lazy_static;
extern crate shared_library;
mod features;
mod version;
pub mod buffer;
pub mod command_buffer;
pub mod device;
pub mod formats;
pub mod framebuffer;
pub mod image;
pub mod instance;
pub mod memory;
pub mod pipeline;
//pub mod query;
pub mod sampler;
pub mod shader;
pub mod swapchain;
pub mod sync;
use std::error;
use std::fmt;
use std::mem;
use std::path::Path;
mod vk {
#![allow(dead_code)]
#![allow(non_upper_case_globals)]
#![allow(non_snake_case)]
#![allow(non_camel_case_types)]
include!(concat!(env!("OUT_DIR"), "/vk_bindings.rs"));
}
lazy_static! {
static ref VK_LIB: shared_library::dynamic_library::DynamicLibrary = {
#[cfg(windows)] fn get_path() -> &'static Path { Path::new("vulkan-1.dll") }
#[cfg(unix)] fn get_path() -> &'static Path { Path::new("libvulkan.so.1") }
let path = get_path();
shared_library::dynamic_library::DynamicLibrary::open(Some(path)).unwrap()
};
static ref VK_STATIC: vk::Static = {
vk::Static::load(|name| unsafe {
VK_LIB.symbol(name.to_str().unwrap()).unwrap() // TODO: error handling
})
};
static ref VK_ENTRY: vk::EntryPoints = {
vk::EntryPoints::load(|name| unsafe {
mem::transmute(VK_STATIC.GetInstanceProcAddr(0, name.as_ptr()))
})
};
}
/// Gives access to the internals of an object.
trait VulkanObject {
/// The type of the object.
type Object;
/// Returns a reference to the object.
fn internal_object(&self) -> Self::Object;
}
/// Gives access to the Vulkan function pointers stored in this object.
trait VulkanPointers {
/// The struct that provides access to the function pointers.
type Pointers;
/// Returns a reference to the pointers.
fn pointers(&self) -> &Self::Pointers;
}
/// Error type returned by most Vulkan functions.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum OomError {
/// There is no memory available on the host (ie. the CPU, RAM, etc.).
OutOfHostMemory,
/// There is no memory available on the device (ie. video memory).
OutOfDeviceMemory,
}
impl error::Error for OomError {
#[inline]
fn description(&self) -> &str {
match *self {
OomError::OutOfHostMemory => "no memory available on the host",
OomError::OutOfDeviceMemory => "no memory available on the graphical device",
}
}
}
impl fmt::Display for OomError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}
impl From<Error> for OomError {
#[inline]
fn from(err: Error) -> OomError {
match err {
Error::OutOfHostMemory => OomError::OutOfHostMemory,
Error::OutOfDeviceMemory => OomError::OutOfDeviceMemory,
_ => panic!("unexpected error: {:?}", err)
}
}
}
/// All possible success codes returned by any Vulkan function.
#[derive(Debug, Copy, Clone)]
#[repr(u32)]
enum Success {
Success = vk::SUCCESS,
NotReady = vk::NOT_READY,
Timeout = vk::TIMEOUT,
EventSet = vk::EVENT_SET,
EventReset = vk::EVENT_RESET,
Incomplete = vk::INCOMPLETE,
Suboptimal = vk::SUBOPTIMAL_KHR,
}
/// All possible errors returned by any Vulkan function.
///
/// This type is not public. Instead, all public error types should implement `From<Error>` and
/// panic for error codes that aren't supposed to happen.
#[derive(Debug, Copy, Clone)]
#[repr(u32)]
enum Error {
OutOfHostMemory = vk::ERROR_OUT_OF_HOST_MEMORY,
OutOfDeviceMemory = vk::ERROR_OUT_OF_DEVICE_MEMORY,
InitializationFailed = vk::ERROR_INITIALIZATION_FAILED,
DeviceLost = vk::ERROR_DEVICE_LOST,
MemoryMapFailed = vk::ERROR_MEMORY_MAP_FAILED,
LayerNotPresent = vk::ERROR_LAYER_NOT_PRESENT,
ExtensionNotPresent = vk::ERROR_EXTENSION_NOT_PRESENT,
FeatureNotPresent = vk::ERROR_FEATURE_NOT_PRESENT,
IncompatibleDriver = vk::ERROR_INCOMPATIBLE_DRIVER,
TooManyObjects = vk::ERROR_TOO_MANY_OBJECTS,
FormatNotSupported = vk::ERROR_FORMAT_NOT_SUPPORTED,
SurfaceLost = vk::ERROR_SURFACE_LOST_KHR,
NativeWindowInUse = vk::ERROR_NATIVE_WINDOW_IN_USE_KHR,
OutOfDate = vk::ERROR_OUT_OF_DATE_KHR,
IncompatibleDisplay = vk::ERROR_INCOMPATIBLE_DISPLAY_KHR,
ValidationFailed = vk::ERROR_VALIDATION_FAILED_EXT,
}
/// Checks whether the result returned correctly.
fn check_errors(result: vk::Result) -> Result<Success, Error> {
match result {
vk::SUCCESS => Ok(Success::Success),
vk::NOT_READY => Ok(Success::NotReady),
vk::TIMEOUT => Ok(Success::Timeout),
vk::EVENT_SET => Ok(Success::EventSet),
vk::EVENT_RESET => Ok(Success::EventReset),
vk::INCOMPLETE => Ok(Success::Incomplete),
vk::ERROR_OUT_OF_HOST_MEMORY => Err(Error::OutOfHostMemory),
vk::ERROR_OUT_OF_DEVICE_MEMORY => Err(Error::OutOfDeviceMemory),
vk::ERROR_INITIALIZATION_FAILED => Err(Error::InitializationFailed),
vk::ERROR_DEVICE_LOST => Err(Error::DeviceLost),
vk::ERROR_MEMORY_MAP_FAILED => Err(Error::MemoryMapFailed),
vk::ERROR_LAYER_NOT_PRESENT => Err(Error::LayerNotPresent),
vk::ERROR_EXTENSION_NOT_PRESENT => Err(Error::ExtensionNotPresent),
vk::ERROR_FEATURE_NOT_PRESENT => Err(Error::FeatureNotPresent),
vk::ERROR_INCOMPATIBLE_DRIVER => Err(Error::IncompatibleDriver),
vk::ERROR_TOO_MANY_OBJECTS => Err(Error::TooManyObjects),
vk::ERROR_FORMAT_NOT_SUPPORTED => Err(Error::FormatNotSupported),
vk::ERROR_SURFACE_LOST_KHR => Err(Error::SurfaceLost),
vk::ERROR_NATIVE_WINDOW_IN_USE_KHR => Err(Error::NativeWindowInUse),
vk::SUBOPTIMAL_KHR => Ok(Success::Suboptimal),
vk::ERROR_OUT_OF_DATE_KHR => Err(Error::OutOfDate),
vk::ERROR_INCOMPATIBLE_DISPLAY_KHR => Err(Error::IncompatibleDisplay),
vk::ERROR_VALIDATION_FAILED_EXT => Err(Error::ValidationFailed),
_ => unreachable!("Unexpected error code returned by Vulkan")
}
}

156
vulkano/src/memory/device_memory.rs Normal file
View File

@ -0,0 +1,156 @@
use std::mem;
use std::ptr;
use std::os::raw::c_void;
use std::sync::Arc;
use instance::MemoryType;
use device::Device;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// Represents memory that has been allocated.
pub struct DeviceMemory {
device: Arc<Device>,
memory: vk::DeviceMemory,
size: usize,
memory_type_index: u32,
}
impl DeviceMemory {
/// Allocates a chunk of memory from the device.
///
/// # Panic
///
/// - Panics if `memory_type` doesn't belong to the same physical device as `device`.
///
#[inline]
pub fn alloc(device: &Arc<Device>, memory_type: &MemoryType, size: usize)
-> Result<DeviceMemory, OomError>
{
assert_eq!(device.physical_device().internal_object(),
memory_type.physical_device().internal_object());
let vk = device.pointers();
let memory = unsafe {
let infos = vk::MemoryAllocateInfo {
sType: vk::STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
pNext: ptr::null(),
allocationSize: size as u64,
memoryTypeIndex: memory_type.id(),
};
let mut output = mem::uninitialized();
try!(check_errors(vk.AllocateMemory(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(DeviceMemory {
device: device.clone(),
memory: memory,
size: size,
memory_type_index: memory_type.id(),
})
}
/// Allocates a chunk of memory and maps it.
///
/// # Panic
///
/// - Panics if `memory_type` doesn't belong to the same physical device as `device`.
/// - Panics if the memory type is not host-visible.
///
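/// # Example
///
/// A sketch: allocate 1024 bytes from the first host-visible memory type and zero it
/// through the mapping. (Flushing for non-coherent memory is ignored here.)
///
/// ```no_run
/// use vulkano::memory::DeviceMemory;
/// # let device: std::sync::Arc<vulkano::device::Device> = unimplemented!();
/// let mem_ty = device.physical_device().memory_types()
///     .find(|t| t.is_host_visible())
///     .unwrap();
/// let mapped = DeviceMemory::alloc_and_map(&device, &mem_ty, 1024).unwrap();
/// unsafe { std::ptr::write_bytes(mapped.mapping_pointer() as *mut u8, 0, 1024); }
/// ```
///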
pub fn alloc_and_map(device: &Arc<Device>, memory_type: &MemoryType, size: usize)
-> Result<MappedDeviceMemory, OomError>
{
let vk = device.pointers();
assert!(memory_type.is_host_visible());
let mem = try!(DeviceMemory::alloc(device, memory_type, size));
let ptr = unsafe {
let mut output = mem::uninitialized();
try!(check_errors(vk.MapMemory(device.internal_object(), mem.memory, 0,
mem.size as vk::DeviceSize, 0 /* reserved flags */,
&mut output)));
output
};
Ok(MappedDeviceMemory {
memory: mem,
pointer: ptr,
})
}
/// Returns the memory type this chunk was allocated on.
#[inline]
pub fn memory_type(&self) -> MemoryType {
self.device.physical_device().memory_type_by_id(self.memory_type_index).unwrap()
}
/// Returns the size in bytes of that memory chunk.
#[inline]
pub fn size(&self) -> usize {
self.size
}
/// Returns the device associated with this allocation.
#[inline]
pub fn device(&self) -> &Arc<Device> {
&self.device
}
}
impl VulkanObject for DeviceMemory {
type Object = vk::DeviceMemory;
#[inline]
fn internal_object(&self) -> vk::DeviceMemory {
self.memory
}
}
impl Drop for DeviceMemory {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.FreeMemory(self.device.internal_object(), self.memory, ptr::null());
}
}
}
/// Represents memory that has been allocated and mapped in CPU accessible space.
pub struct MappedDeviceMemory {
memory: DeviceMemory,
pointer: *mut c_void,
}
impl MappedDeviceMemory {
/// Returns the underlying `DeviceMemory`.
#[inline]
pub fn memory(&self) -> &DeviceMemory {
&self.memory
}
/// Returns a pointer to the mapping.
///
/// Note that access to this pointer is not safe at all.
pub fn mapping_pointer(&self) -> *mut c_void {
self.pointer
}
}
impl Drop for MappedDeviceMemory {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.memory.device.pointers();
vk.UnmapMemory(self.memory.device.internal_object(), self.memory.memory);
}
}
}

181
vulkano/src/memory/mod.rs Normal file
View File

@ -0,0 +1,181 @@
//! GPU-visible memory allocation and management.
//!
//! When you create a buffer or a texture with Vulkan, you have to bind it to a chunk of allocated
//! memory. To do so, you have to pass a type that implements the `MemorySource` trait.
//!
//! There are several implementations of the trait, ie. several things that you can pass to the
//! constructors of buffers and textures:
//!
//! - `&Arc<Device>`, which will simply allocate a new chunk of memory every time (easy but not
//! very efficient).
//! - `MemorySource`, which is the same as `&Arc<Device>` except that it will use the
//! already-allocated block.
//! - ... needs more ...
//!
//! # Synchronization
//!
//! In Vulkan, it's the job of the programmer to enforce memory safety. In other words, the
//! programmer must take care that two chunks of memory are not read and written simultaneously.
//!
//! In this library, this is enforced by the implementation of `MemorySource` or
//! `MemorySourceChunk`.
//!
//! There are two mechanisms in Vulkan that can provide synchronization: fences and semaphores.
//! Fences provide synchronization between the CPU and the GPU, and semaphores provide
//! synchronization between multiple queues of the GPU. See the `sync` module for more info.
//!
//! # Sparse resources
//!
//! **Not yet implemented**.
//!
//! Instead of creating a buffer or an image with a single chunk of memory, you also have the
//! possibility to create resources with *sparse memory*.
//!
//! For example you can bind the first half of the buffer to a memory chunk, and the second half of
//! the buffer to another memory chunk.
//!
//! There is a hierarchy of three features related to sparse resources:
//!
//! - The `sparseBinding` feature allows you to use sparse resources.
//! - The `sparseResidency` feature is a superset of `sparseBinding` and allows you to leave some
//! parts of the resource unbound before using it.
//! - The `sparseResidencyAliased` feature is a superset of `sparseResidency` and allows you to
//! bind the same memory chunk to multiple different resources at once.
//!
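//! # Example
//!
//! A sketch of driving a `MemorySource` by hand; the constructors of buffers and images
//! normally do this for you:
//!
//! ```no_run
//! use vulkano::memory::{DeviceLocal, MemorySource, MemorySourceChunk};
//! # let device: std::sync::Arc<vulkano::device::Device> = unimplemented!();
//! // 4096 bytes, 256-byte alignment, any memory type accepted (`!0` sets every bit):
//! let chunk = DeviceLocal.allocate(&device, 4096, 256, !0).unwrap();
//! assert!(!chunk.may_alias());
//! ```
//!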
use std::sync::Arc;
use sync::Fence;
use sync::Semaphore;
use device::Device;
use device::Queue;
pub use self::device_memory::DeviceMemory;
pub use self::device_memory::MappedDeviceMemory;
pub use self::single::DeviceLocal;
pub use self::single::DeviceLocalChunk;
pub use self::single::HostVisible;
pub use self::single::HostVisibleChunk;
mod device_memory;
mod single;
/// Trait for memory objects that can be accessed from the CPU.
pub unsafe trait CpuAccessible<'a, T: ?Sized> {
type Read;
/// Gives a read access to the content of the buffer.
///
/// If the buffer is in use by the GPU, blocks until it is available.
// TODO: what happens if timeout is reached? a panic?
fn read(&'a self, timeout_ns: u64) -> Self::Read;
/// Tries to give a read access to the content of the buffer.
///
/// If the buffer is in use by the GPU, returns `None`.
fn try_read(&'a self) -> Option<Self::Read>;
}
/// Trait for memory objects that can be mutably accessed from the CPU.
pub unsafe trait CpuWriteAccessible<'a, T: ?Sized>: CpuAccessible<'a, T> {
type Write;
/// Gives a write access to the content of the buffer.
///
/// If the buffer is in use by the GPU, blocks until it is available.
// TODO: what happens if timeout is reached? a panic?
fn write(&'a self, timeout_ns: u64) -> Self::Write;
/// Tries to give a write access to the content of the buffer.
///
/// If the buffer is in use by the GPU, returns `None`.
fn try_write(&'a self) -> Option<Self::Write>;
}
/// Trait for objects that can be used to fill the memory requirements of a buffer or an image.
pub unsafe trait MemorySource {
/// An object that represents one block of allocation. Returned by `allocate`.
type Chunk: MemorySourceChunk;
/// Returns true if the chunks allocated by this source will use sparse memory.
// TODO: should return the level of the required sparse feature
fn is_sparse(&self) -> bool;
/// Allocates a block of memory to be used.
///
/// `memory_type_bits` is a bitfield which indicates from which memory types the memory can
/// be allocated. For example if bit 2 is set (`memory_type_bits & (1 << 2) != 0`), that
/// means that the memory type whose ID is 2 can be used.
///
/// The implementation is allowed to return a chunk with a larger size or alignment.
// TODO: error type
fn allocate(self, &Arc<Device>, size: usize, alignment: usize, memory_type_bits: u32)
-> Result<Self::Chunk, ()>;
}
/// A chunk of GPU-visible memory.
pub unsafe trait MemorySourceChunk {
/// Returns the properties of this chunk.
fn properties(&self) -> ChunkProperties;
/// Returns true if the `gpu_access` function should be passed a fence.
#[inline]
fn requires_fence(&self) -> bool {
true
}
/// Returns true if the `gpu_access` function should be passed a semaphore.
#[inline]
fn requires_semaphore(&self) -> bool {
true
}
/// Instructs the manager that a part of this chunk of memory is going to be used by the
/// GPU in the near future. The function should block if the memory is currently being
/// accessed by the CPU.
///
/// `write` indicates whether the GPU will write to the memory. If `false`, then it will only
/// be read.
///
/// `offset` and `size` indicate the part of the chunk that is concerned.
///
/// `queue` is the queue where the command buffer that accesses the memory will be submitted.
/// If the `gpu_access` function submits something to that queue, it will thus be submitted
/// beforehand. This behavior can be used for example to submit sparse binding commands.
///
/// `fence` is a fence that will be signaled when this GPU access will stop. It should be
/// waited upon whenever the user wants to read this memory from the CPU. If `requires_fence`
/// returned false, then this value will be `None`.
///
/// `semaphore` is a semaphore that will be signaled when this GPU access will stop. This value
/// is intended to be returned later, in a follow-up call to `gpu_access`. If
/// `requires_semaphore` returned false, then this value will be `None`.
///
/// The manager must track whether this chunk of memory is being accessed by the CPU/GPU and
/// return a semaphore that must be waited upon by the GPU before the access can start. The
/// semaphore being returned is usually one that has been previously passed to this function,
/// but it doesn't need to be the case.
///
/// See the documentation of `GpuAccessResult` to see what the function should return.
fn gpu_access(&self, write: bool, offset: usize, size: usize, queue: &mut Queue,
fence: Option<Arc<Fence>>, semaphore: Option<Arc<Semaphore>>)
-> Option<Arc<Semaphore>>;
/// Returns true if this chunk of memory may be used, now or in the future, by multiple buffers
/// or images (or a combination of both) simultaneously. If you're not sure, it's safer to
/// return true.
///
/// If this value is true, then the Vulkan implementation must be more conservative about
/// reordering subpasses in a renderpass.
fn may_alias(&self) -> bool;
}
pub enum ChunkProperties<'a> {
Regular {
memory: &'a DeviceMemory,
offset: usize,
size: usize,
},
Sparse, // TODO: unimplemented
}

View File

@ -0,0 +1,288 @@
use std::mem;
use std::ptr;
use std::ops::Deref;
use std::ops::DerefMut;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::MutexGuard;
use std::sync::TryLockError;
use memory::ChunkProperties;
use memory::CpuAccessible;
use memory::CpuWriteAccessible;
use memory::MemorySource;
use memory::MemorySourceChunk;
use memory::DeviceMemory;
use memory::MappedDeviceMemory;
use sync::Fence;
use sync::Semaphore;
use device::Device;
use device::Queue;
use VulkanObject;
use VulkanPointers;
use vk;
/// Dummy marker whose strategy is to allocate a new chunk of memory for each allocation.
///
/// The memory will not be accessible from the CPU, since it is not necessarily allocated from
/// host-visible memory.
///
/// This is good for large buffers, but inefficient if you use a lot of small buffers.
///
/// The memory is locked globally. That means that it doesn't matter whether you access the buffer
/// for reading or writing (like a `Mutex`).
#[derive(Debug, Copy, Clone)]
pub struct DeviceLocal;
unsafe impl MemorySource for DeviceLocal {
type Chunk = DeviceLocalChunk;
#[inline]
fn is_sparse(&self) -> bool {
false
}
#[inline]
fn allocate(self, device: &Arc<Device>, size: usize, alignment: usize, memory_type_bits: u32)
-> Result<DeviceLocalChunk, ()>
{
let mem_ty = device.physical_device().memory_types()
.skip(memory_type_bits.trailing_zeros() as usize).next().unwrap();
let mem = try!(DeviceMemory::alloc(device, &mem_ty, size).map_err(|_| ()));
// note: alignment doesn't need to be checked because allocating memory is guaranteed to
// fulfill any alignment requirement
Ok(DeviceLocalChunk {
mem: mem,
semaphore: Mutex::new(None),
})
}
}
/// A chunk allocated from a `DeviceLocal`.
pub struct DeviceLocalChunk {
mem: DeviceMemory,
semaphore: Mutex<Option<Arc<Semaphore>>>,
}
unsafe impl MemorySourceChunk for DeviceLocalChunk {
#[inline]
fn gpu_access(&self, _write: bool, _offset: usize, _size: usize, _: &mut Queue,
_: Option<Arc<Fence>>, mut semaphore: Option<Arc<Semaphore>>)
-> Option<Arc<Semaphore>>
{
assert!(semaphore.is_some());
let mut self_semaphore = self.semaphore.lock().unwrap();
mem::swap(&mut *self_semaphore, &mut semaphore);
semaphore
}
#[inline]
fn requires_fence(&self) -> bool {
false
}
#[inline]
fn properties(&self) -> ChunkProperties {
ChunkProperties::Regular {
memory: &self.mem,
offset: 0,
size: self.mem.size(),
}
}
#[inline]
fn may_alias(&self) -> bool {
false
}
}
/// Dummy marker whose strategy is to allocate a new chunk of memory for each allocation.
///
/// Guaranteed to allocate from a host-visible memory type.
///
/// This is good for large buffers, but inefficient if you use a lot of small buffers.
///
/// The memory is locked globally. That means that it doesn't matter whether you access the buffer
/// for reading or writing (like a `Mutex`).
#[derive(Debug, Copy, Clone)]
pub struct HostVisible;
unsafe impl MemorySource for HostVisible {
type Chunk = HostVisibleChunk;
#[inline]
fn is_sparse(&self) -> bool {
false
}
#[inline]
fn allocate(self, device: &Arc<Device>, size: usize, alignment: usize, memory_type_bits: u32)
-> Result<HostVisibleChunk, ()>
{
let mem_ty = device.physical_device().memory_types()
.filter(|t| (memory_type_bits & (1 << t.id())) != 0)
.filter(|t| t.is_host_visible())
.next().unwrap();
let mem = try!(DeviceMemory::alloc_and_map(device, &mem_ty, size).map_err(|_| ()));
// note: alignment doesn't need to be checked because allocating memory is guaranteed to
// fulfill any alignment requirement
Ok(HostVisibleChunk {
mem: mem,
lock: Mutex::new((None, None)),
})
}
}
/// A chunk allocated from a `HostVisible`.
pub struct HostVisibleChunk {
mem: MappedDeviceMemory,
lock: Mutex<(Option<Arc<Semaphore>>, Option<Arc<Fence>>)>,
}
unsafe impl MemorySourceChunk for HostVisibleChunk {
#[inline]
fn gpu_access(&self, _write: bool, _offset: usize, _size: usize, _: &mut Queue,
fence: Option<Arc<Fence>>, mut semaphore: Option<Arc<Semaphore>>)
-> Option<Arc<Semaphore>>
{
assert!(fence.is_some());
assert!(semaphore.is_some());
let mut self_lock = self.lock.lock().unwrap();
mem::swap(&mut self_lock.0, &mut semaphore);
self_lock.1 = fence;
semaphore
}
#[inline]
fn properties(&self) -> ChunkProperties {
ChunkProperties::Regular {
memory: self.mem.memory(),
offset: 0,
size: self.mem.memory().size(),
}
}
#[inline]
fn may_alias(&self) -> bool {
false
}
}
unsafe impl<'a, T: 'a> CpuAccessible<'a, T> for HostVisibleChunk { // TODO: ?Sized
type Read = GpuAccess<'a, T>;
#[inline]
fn read(&'a self, timeout_ns: u64) -> GpuAccess<'a, T> {
self.write(timeout_ns)
}
#[inline]
fn try_read(&'a self) -> Option<GpuAccess<'a, T>> {
self.try_write()
}
}
unsafe impl<'a, T: 'a> CpuWriteAccessible<'a, T> for HostVisibleChunk { // TODO: ?Sized
type Write = GpuAccess<'a, T>;
#[inline]
fn write(&'a self, timeout_ns: u64) -> GpuAccess<'a, T> {
let pointer = self.mem.mapping_pointer() as *mut T;
let mut lock = self.lock.lock().unwrap();
if let Some(ref fence) = lock.1 {
fence.wait(timeout_ns).unwrap(); // FIXME: error
}
lock.1 = None;
// TODO: invalidate
GpuAccess {
mem: &self.mem,
guard: lock,
pointer: pointer,
}
}
#[inline]
fn try_write(&'a self) -> Option<GpuAccess<'a, T>> {
let pointer = self.mem.mapping_pointer() as *mut T;
let mut lock = match self.lock.try_lock() {
Ok(l) => l,
Err(TryLockError::Poisoned(_)) => panic!(),
Err(TryLockError::WouldBlock) => return None,
};
if let Some(ref fence) = lock.1 {
if fence.ready() != Ok(true) { // TODO: we ignore ready()'s error here?
return None;
}
}
lock.1 = None;
// TODO: invalidate
Some(GpuAccess {
mem: &self.mem,
guard: lock,
pointer: pointer,
})
}
}
/// Object that can be used to read or write the content of a `HostVisibleChunk`.
///
/// Note that this object holds a mutex guard on the chunk. If another thread tries to access
/// this memory's content or tries to submit a GPU command that uses this memory, it will block.
pub struct GpuAccess<'a, T: ?Sized + 'a> {
mem: &'a MappedDeviceMemory,
guard: MutexGuard<'a, (Option<Arc<Semaphore>>, Option<Arc<Fence>>)>,
pointer: *mut T,
}
impl<'a, T: ?Sized + 'a> Deref for GpuAccess<'a, T> {
type Target = T;
#[inline]
fn deref(&self) -> &T {
unsafe { &*self.pointer }
}
}
impl<'a, T: ?Sized + 'a> DerefMut for GpuAccess<'a, T> {
#[inline]
fn deref_mut(&mut self) -> &mut T {
unsafe { &mut *self.pointer }
}
}
impl<'a, T: ?Sized + 'a> Drop for GpuAccess<'a, T> {
#[inline]
fn drop(&mut self) {
// TODO: only flush if necessary
let vk = self.mem.memory().device().pointers();
let range = vk::MappedMemoryRange {
sType: vk::STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
pNext: ptr::null(),
memory: self.mem.memory().internal_object(),
offset: 0,
size: vk::WHOLE_SIZE,
};
// TODO: check result?
unsafe {
vk.FlushMappedMemoryRanges(self.mem.memory().device().internal_object(), 1, &range);
}
}
}
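A hedged usage sketch (not part of this commit): assuming an existing `device: Arc<Device>` and that the chunk actually backs a `[f32; 4]`, CPU-side writes go through the `CpuWriteAccessible` trait above; dropping the guard is what flushes the mapped range. The `0xff` memory type bits stand in for real buffer requirements:

```rust
let chunk = HostVisible.allocate(&device, 16, 16, 0xff).unwrap();
{
    // Blocks until any pending GPU access is over (1 second timeout here).
    let mut guard: GpuAccess<[f32; 4]> = chunk.write(1_000_000_000);
    *guard = [0.0, 1.0, 0.0, 1.0];
}   // `Drop` flushes the mapped memory range at this point
```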

View File

@ -0,0 +1,125 @@
//! Defines how the color output of the fragment shader is written to the attachment.
//!
//! There are three kinds of color attachments for the purpose of blending:
//!
//! - Attachments with a floating-point or fixed-point format.
//! - Attachments with a (non-normalized) integer format.
//! - Attachments with a normalized integer format.
//!
//! For floating-point and fixed-point formats, the blending operation is applied. For integer
//! formats, the logic operation is applied. For normalized integer formats, the logic operation
//! will take precedence if it is activated. Otherwise the blending operation is applied.
//!
use vk;
pub struct Blend {
pub logic_op: Option<LogicOp>,
/// The constant color to use for the `Constant*` blending operation.
///
/// If you pass `None`, then this state will be considered as dynamic and the blend constants
/// will need to be set when you build the command buffer.
pub blend_constants: Option<[f32; 4]>,
}
/*
VkStructureType sType;
const void* pNext;
VkPipelineColorBlendStateCreateFlags flags;
VkBool32 logicOpEnable;
VkLogicOp logicOp;
uint32_t attachmentCount;
const VkPipelineColorBlendAttachmentState* pAttachments;
float blendConstants[4];
} VkPipelineColorBlendStateCreateInfo;
typedef struct {
VkBool32 blendEnable;
VkBlend srcBlendColor;
VkBlend dstBlendColor;
VkBlendOp blendOpColor;
VkBlend srcBlendAlpha;
VkBlend dstBlendAlpha;
VkBlendOp blendOpAlpha;
VkChannelFlags channelWriteMask;
} VkPipelineColorBlendAttachmentState;
*/
/// Which logical operation to apply to the output values.
///
/// The operation is applied individually for each channel (red, green, blue and alpha).
///
/// Only relevant for integer attachments.
///
/// Also note that some implementations don't support logic operations.
#[derive(Debug, Copy, Clone)]
#[repr(u32)]
pub enum LogicOp {
/// Returns `0`.
Clear = vk::LOGIC_OP_CLEAR,
/// Returns `src & dest`.
And = vk::LOGIC_OP_AND,
/// Returns `src & !dest`.
AndReverse = vk::LOGIC_OP_AND_REVERSE,
/// Returns `src`.
Copy = vk::LOGIC_OP_COPY,
/// Returns `!src & dest`.
AndInverted = vk::LOGIC_OP_AND_INVERTED,
/// Returns `dest`.
Noop = vk::LOGIC_OP_NO_OP,
/// Returns `src ^ dest`.
Xor = vk::LOGIC_OP_XOR,
/// Returns `src | dest`.
Or = vk::LOGIC_OP_OR,
/// Returns `!(src | dest)`.
Nor = vk::LOGIC_OP_NOR,
/// Returns `!(src ^ dest)`.
Equivalent = vk::LOGIC_OP_EQUIVALENT,
/// Returns `!dest`.
Invert = vk::LOGIC_OP_INVERT,
/// Returns `src | !dest`.
OrReverse = vk::LOGIC_OP_OR_REVERSE,
/// Returns `!src`.
CopyInverted = vk::LOGIC_OP_COPY_INVERTED,
/// Returns `!src | dest`.
OrInverted = vk::LOGIC_OP_OR_INVERTED,
/// Returns `!(src & dest)`.
Nand = vk::LOGIC_OP_NAND,
/// Returns `!0` (all bits set to 1).
Set = vk::LOGIC_OP_SET,
}
impl Default for LogicOp {
#[inline]
fn default() -> LogicOp {
LogicOp::Noop
}
}
/// A factor modulating a source or destination value in the blending equation.
// TODO: the variants are `VK_BLEND_FACTOR_*` values; consider renaming
#[derive(Debug, Copy, Clone)]
#[repr(u32)]
pub enum BlendOp {
Zero = vk::BLEND_FACTOR_ZERO,
One = vk::BLEND_FACTOR_ONE,
SrcColor = vk::BLEND_FACTOR_SRC_COLOR,
OneMinusSrcColor = vk::BLEND_FACTOR_ONE_MINUS_SRC_COLOR,
DstColor = vk::BLEND_FACTOR_DST_COLOR,
OneMinusDstColor = vk::BLEND_FACTOR_ONE_MINUS_DST_COLOR,
SrcAlpha = vk::BLEND_FACTOR_SRC_ALPHA,
OneMinusSrcAlpha = vk::BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,
DstAlpha = vk::BLEND_FACTOR_DST_ALPHA,
OneMinusDstAlpha = vk::BLEND_FACTOR_ONE_MINUS_DST_ALPHA,
ConstantColor = vk::BLEND_FACTOR_CONSTANT_COLOR,
OneMinusConstantColor = vk::BLEND_FACTOR_ONE_MINUS_CONSTANT_COLOR,
ConstantAlpha = vk::BLEND_FACTOR_CONSTANT_ALPHA,
OneMinusConstantAlpha = vk::BLEND_FACTOR_ONE_MINUS_CONSTANT_ALPHA,
SrcAlphaSaturate = vk::BLEND_FACTOR_SRC_ALPHA_SATURATE,
Src1Color = vk::BLEND_FACTOR_SRC1_COLOR,
OneMinusSrc1Color = vk::BLEND_FACTOR_ONE_MINUS_SRC1_COLOR,
Src1Alpha = vk::BLEND_FACTOR_SRC1_ALPHA,
OneMinusSrc1Alpha = vk::BLEND_FACTOR_ONE_MINUS_SRC1_ALPHA,
}
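A hedged example of filling in this state (field names as defined above; whether the logic op is actually honored still depends on the attachment format and on device support):

```rust
// Sketch: XOR logic op for integer attachments, fixed blend constants
// (baked into the pipeline rather than set on the command buffer).
let blend = Blend {
    logic_op: Some(LogicOp::Xor),
    blend_constants: Some([0.0, 0.0, 0.0, 0.0]),
};
```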

View File

@ -0,0 +1,139 @@
//! Cache the pipeline objects to disk for faster reloads.
//!
//! A pipeline cache is an opaque type that allow you to cache your graphics and compute
//! pipelines on the disk.
//!
//! You can create either an empty cache or a cache from some initial data. Whenever you create a
//! graphics or compute pipeline, you have the possibility to pass a reference to that cache.
//! The Vulkan implementation will then look in the cache for an existing entry, or add one if it
//! doesn't exist.
//!
//! Once that is done, you can extract the data from the cache and store it.
//!
use std::mem;
use std::ptr;
use std::sync::Arc;
use device::Device;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// Opaque cache that contains pipeline objects.
pub struct PipelineCache {
device: Arc<Device>,
cache: vk::PipelineCache,
}
impl PipelineCache {
/// Builds a new pipeline cache.
///
/// You can pass optional data to initialize the cache with. If you don't pass any data, the
/// cache will be empty.
// TODO: is that unsafe? is it safe to pass garbage data?
pub unsafe fn new(device: &Arc<Device>, initial_data: Option<&[u8]>)
-> Result<Arc<PipelineCache>, OomError>
{
let vk = device.pointers();
let cache = unsafe {
let infos = vk::PipelineCacheCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
initialDataSize: initial_data.map(|d| d.len()).unwrap_or(0),
pInitialData: initial_data.map(|d| d.as_ptr() as *const _).unwrap_or(ptr::null()),
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreatePipelineCache(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(PipelineCache {
device: device.clone(),
cache: cache,
}))
}
/// Merges other pipeline caches into this one.
///
/// # Panic
///
/// - Panics if `self` is included in the list of other pipeline caches.
///
pub fn merge<'a, I>(&self, pipelines: I) -> Result<(), OomError>
where I: IntoIterator<Item = &'a &'a Arc<PipelineCache>>
{
unsafe {
let vk = self.device.pointers();
let pipelines = pipelines.into_iter().map(|pipeline| {
assert!(&***pipeline as *const _ != &*self as *const _);
pipeline.cache
}).collect::<Vec<_>>();
try!(check_errors(vk.MergePipelineCaches(self.device.internal_object(), self.cache,
pipelines.len() as u32, pipelines.as_ptr())));
Ok(())
}
}
/// Obtains the data from the cache.
///
/// This data can be stored and then reloaded and passed to `PipelineCache::new`.
pub fn get_data(&self) -> Result<Vec<u8>, OomError> {
unsafe {
let vk = self.device.pointers();
let mut num = 0;
try!(check_errors(vk.GetPipelineCacheData(self.device.internal_object(), self.cache,
&mut num, ptr::null_mut())));
let mut data: Vec<u8> = Vec::with_capacity(num as usize);
try!(check_errors(vk.GetPipelineCacheData(self.device.internal_object(), self.cache,
&mut num, data.as_mut_ptr() as *mut _)));
data.set_len(num as usize);
Ok(data)
}
}
}
impl VulkanObject for PipelineCache {
type Object = vk::PipelineCache;
#[inline]
fn internal_object(&self) -> vk::PipelineCache {
self.cache
}
}
impl Drop for PipelineCache {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroyPipelineCache(self.device.internal_object(), self.cache, ptr::null());
}
}
}
#[cfg(test)]
mod tests {
use instance;
#[test]
//#[should_panic]
fn merge_self() {
let instance = instance::Instance::new(None, None).unwrap();
// let pipeline = PipelineCache::new(&device).unwrap();
// pipeline.merge(&[&pipeline]).unwrap();
}
}
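A hedged round-trip sketch (not part of this commit): assuming an existing `device: Arc<Device>`, the cache data can be persisted and used to prime a cache on a later run. `new` is unsafe because the initial data is handed to the driver unvalidated (see the TODO above); the file name is arbitrary:

```rust
use std::io::Write;

let cache = unsafe { PipelineCache::new(&device, None).unwrap() };
// ... build pipelines against `cache` ...
let data = cache.get_data().unwrap();
std::fs::File::create("pipeline-cache.bin").unwrap().write_all(&data).unwrap();

// A later run can prime a new cache with the stored bytes:
let _primed = unsafe { PipelineCache::new(&device, Some(&data)).unwrap() };
```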

View File

@ -0,0 +1,85 @@
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::sync::Arc;
use shader::EntryPoint;
use device::Device;
use OomError;
use Success;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// Wrapper around a Vulkan compute pipeline.
///
/// The template parameter contains the descriptor set to use with this pipeline.
pub struct ComputePipeline<D, C> {
device: Arc<Device>,
pipeline: vk::Pipeline,
marker: PhantomData<(D, C)>,
}
impl<D, C> ComputePipeline<D, C> {
/// Builds a new compute pipeline object.
///
/// # Panic
///
/// Panics if the pipeline layout and/or shader don't belong to the device.
// TODO: `PipelineLayout`, `ComputeShaderEntryPoint` and `SpecializationConstants` still need
// to be imported; this module is currently disabled in `pipeline/mod.rs`
pub fn new<S, P>(device: &Arc<Device>, pipeline_layout: &Arc<PipelineLayout<D, C>>,
shader: &ComputeShaderEntryPoint<D, S, P>, specialization: &S)
-> Result<ComputePipeline<D, C>, OomError>
where S: SpecializationConstants
{
let vk = device.pointers();
let pipeline = unsafe {
let spec_descriptors = specialization.descriptors();
let specialization_info = vk::SpecializationInfo {
mapEntryCount: spec_descriptors.len() as u32,
pMapEntries: spec_descriptors.as_ptr() as *const _,
dataSize: mem::size_of_val(specialization),
pData: specialization as *const S as *const _,
};
let stage = vk::PipelineShaderStageCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
stage: vk::SHADER_STAGE_COMPUTE_BIT,
module: shader.module().internal_object(), // FIXME: accessor not yet defined on ComputeShaderEntryPoint
pName: shader.name().as_ptr(), // FIXME: accessor not yet defined on ComputeShaderEntryPoint
pSpecializationInfo: if mem::size_of_val(specialization) == 0 {
ptr::null()
} else {
&specialization_info
},
};
let infos = vk::ComputePipelineCreateInfo {
sType: vk::STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO,
pNext: ptr::null(),
flags: 0,
stage: stage,
layout: pipeline_layout.internal_object(),
basePipelineHandle: vk::NULL_HANDLE,
basePipelineIndex: 0,
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateComputePipelines(device.internal_object(), vk::NULL_HANDLE,
1, &infos, ptr::null(), &mut output)));
output
};
Ok(ComputePipeline {
device: device.clone(),
pipeline: pipeline,
marker: PhantomData,
})
}
}
impl<D, C> Drop for ComputePipeline<D, C> {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroyPipeline(self.device.internal_object(), self.pipeline, ptr::null());
}
}
}

View File

@ -0,0 +1,52 @@
use vk;
pub struct DepthStencil {
depth_write: bool,
depth_compare: Compare,
depth_bounds_test: bool,
}
/*
VkBool32 depthTestEnable;
VkBool32 depthWriteEnable;
VkCompareOp depthCompareOp;
VkBool32 depthBoundsTestEnable;
VkBool32 stencilTestEnable;
VkStencilOpState front;
VkStencilOpState back;
float minDepthBounds;
float maxDepthBounds;
typedef struct {
VkStencilOp stencilFailOp;
VkStencilOp stencilPassOp;
VkStencilOp stencilDepthFailOp;
VkCompareOp stencilCompareOp;
uint32_t stencilCompareMask;
uint32_t stencilWriteMask;
uint32_t stencilReference;
} VkStencilOpState;
*/
/// Specifies how two values should be compared to decide whether a test passes or fails.
///
/// Used for both depth testing and stencil testing.
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
#[repr(u32)]
pub enum Compare {
/// The test never passes.
Never = vk::COMPARE_OP_NEVER,
/// The test passes if `value < reference_value`.
Less = vk::COMPARE_OP_LESS,
/// The test passes if `value == reference_value`.
Equal = vk::COMPARE_OP_EQUAL,
/// The test passes if `value <= reference_value`.
LessOrEqual = vk::COMPARE_OP_LESS_OR_EQUAL,
/// The test passes if `value > reference_value`.
Greater = vk::COMPARE_OP_GREATER,
/// The test passes if `value != reference_value`.
NotEqual = vk::COMPARE_OP_NOT_EQUAL,
/// The test passes if `value >= reference_value`.
GreaterOrEqual = vk::COMPARE_OP_GREATER_OR_EQUAL,
/// The test always passes.
Always = vk::COMPARE_OP_ALWAYS,
}

View File

@ -0,0 +1,306 @@
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::sync::Arc;
use device::Device;
use framebuffer::Subpass;
use shader::FragmentShaderEntryPoint;
use shader::VertexShaderEntryPoint;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
use pipeline::blend::Blend;
use pipeline::input_assembly::InputAssembly;
use pipeline::multisample::Multisample;
use pipeline::raster::Rasterization;
use pipeline::vertex::MultiVertex;
use pipeline::vertex::Vertex;
/// Wrapper around a Vulkan graphics pipeline.
///
/// The template parameter contains the descriptor set to use with this pipeline, and the
/// renderpass layout.
pub struct GraphicsPipeline<MultiVertex> {
device: Arc<Device>,
pipeline: vk::Pipeline,
dynamic_line_width: bool,
marker: PhantomData<(MultiVertex,)>
}
impl<MV> GraphicsPipeline<MV>
where MV: MultiVertex
{
/// Builds a new graphics pipeline object.
///
/// # Panic
///
/// - Panics if primitive restart is enabled and the topology doesn't support this feature.
/// - Panics if the `rasterization_samples` parameter of `multisample` is not >= 1.
/// - Panics if the `sample_shading` parameter of `multisample` is not between 0.0 and 1.0.
///
// TODO: check all the device's limits
pub fn new<V, F, R>(device: &Arc<Device>, vertex_shader: &VertexShaderEntryPoint<V>,
input_assembly: &InputAssembly, raster: &Rasterization,
multisample: &Multisample, blend: &Blend,
fragment_shader: &FragmentShaderEntryPoint<F>, render_pass: &Subpass<R>)
-> Result<Arc<GraphicsPipeline<MV>>, OomError>
{
let vk = device.pointers();
let pipeline = unsafe {
let mut dynamic_states: Vec<vk::DynamicState> = Vec::new();
let mut stages = Vec::with_capacity(5);
stages.push(vk::PipelineShaderStageCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
stage: vk::SHADER_STAGE_VERTEX_BIT,
module: vertex_shader.module().internal_object(),
pName: vertex_shader.name().as_ptr(),
pSpecializationInfo: ptr::null(), // TODO:
});
stages.push(vk::PipelineShaderStageCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
stage: vk::SHADER_STAGE_FRAGMENT_BIT,
module: fragment_shader.module().internal_object(),
pName: fragment_shader.name().as_ptr(),
pSpecializationInfo: ptr::null(), // TODO:
});
let binding_descriptions = (0 .. MV::num_buffers()).map(|num| {
let (stride, rate) = MV::buffer_info(num);
vk::VertexInputBindingDescription {
binding: num,
stride: stride,
inputRate: rate as u32,
}
}).collect::<Vec<_>>();
let attribute_descriptions = vertex_shader.attributes().iter().enumerate().map(|(loc, name)| {
let (binding, info) = MV::attrib(name).expect("missing attr"); // TODO: error
vk::VertexInputAttributeDescription {
location: loc as u32,
binding: binding,
format: info.format as u32,
offset: info.offset as u32,
}
}).collect::<Vec<_>>();
let vertex_input_state = vk::PipelineVertexInputStateCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
vertexBindingDescriptionCount: binding_descriptions.len() as u32,
pVertexBindingDescriptions: binding_descriptions.as_ptr(),
vertexAttributeDescriptionCount: attribute_descriptions.len() as u32,
pVertexAttributeDescriptions: attribute_descriptions.as_ptr(),
};
assert!(!input_assembly.primitive_restart_enable ||
input_assembly.topology.supports_primitive_restart());
let input_assembly = vk::PipelineInputAssemblyStateCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
topology: input_assembly.topology as u32,
primitiveRestartEnable: if input_assembly.primitive_restart_enable { vk::TRUE } else { vk::FALSE },
};
let vp = vk::Viewport { x: 0.0, y: 0.0, width: 1244.0, height: 699.0, minDepth: 0.0, maxDepth: 1.0 }; // FIXME: hardcoded dummy viewport
let sc = vk::Rect2D { offset: vk::Offset2D { x: 0, y: 0 }, extent: vk::Extent2D { width: 1244, height: 699 } };
let viewport = vk::PipelineViewportStateCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
viewportCount: 1, // FIXME:
pViewports: &vp, // FIXME:
scissorCount: 1, // FIXME:
pScissors: &sc, // FIXME:
};
if raster.line_width.is_none() {
dynamic_states.push(vk::DYNAMIC_STATE_LINE_WIDTH);
}
let rasterization = vk::PipelineRasterizationStateCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
depthClampEnable: if raster.depth_clamp { vk::TRUE } else { vk::FALSE },
rasterizerDiscardEnable: if raster.rasterizer_discard { vk::TRUE } else { vk::FALSE },
polygonMode: raster.polygon_mode as u32,
cullMode: raster.cull_mode as u32,
frontFace: raster.front_face as u32,
depthBiasEnable: if raster.depthBiasEnable { vk::TRUE } else { vk::FALSE },
depthBiasConstantFactor: raster.depthBiasConstantFactor,
depthBiasClamp: raster.depthBiasClamp,
depthBiasSlopeFactor: raster.depthBiasSlopeFactor,
lineWidth: raster.line_width.unwrap_or(1.0),
};
assert!(multisample.rasterization_samples >= 1);
if let Some(s) = multisample.sample_shading { assert!(s >= 0.0 && s <= 1.0); }
let multisample = vk::PipelineMultisampleStateCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
rasterizationSamples: multisample.rasterization_samples,
sampleShadingEnable: if multisample.sample_shading.is_some() { vk::TRUE } else { vk::FALSE },
minSampleShading: multisample.sample_shading.unwrap_or(1.0),
pSampleMask: multisample.sample_mask.as_ptr(),
alphaToCoverageEnable: if multisample.alpha_to_coverage { vk::TRUE } else { vk::FALSE },
alphaToOneEnable: if multisample.alpha_to_one { vk::TRUE } else { vk::FALSE },
};
let depth_stencil = vk::PipelineDepthStencilStateCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
depthTestEnable: vk::FALSE, // FIXME:
depthWriteEnable: vk::FALSE, // FIXME:
depthCompareOp: 0, // FIXME:
depthBoundsTestEnable: vk::FALSE, // FIXME:
stencilTestEnable: vk::FALSE, // FIXME:
front: vk::StencilOpState {
failOp: 0, // FIXME:
passOp: 0, // FIXME:
depthFailOp: 0, // FIXME:
compareOp: 0, // FIXME:
compareMask: 0, // FIXME:
writeMask: 0, // FIXME:
reference: 0, // FIXME:
},
back: vk::StencilOpState {
failOp: 0, // FIXME:
passOp: 0, // FIXME:
depthFailOp: 0, // FIXME:
compareOp: 0, // FIXME:
compareMask: 0, // FIXME:
writeMask: 0, // FIXME:
reference: 0, // FIXME:
},
minDepthBounds: 0.0, // FIXME:
maxDepthBounds: 1.0, // FIXME:
};
let atch = vk::PipelineColorBlendAttachmentState {
blendEnable: 0,
srcColorBlendFactor: 0,
dstColorBlendFactor: 0,
colorBlendOp: 0,
srcAlphaBlendFactor: 0,
dstAlphaBlendFactor: 0,
alphaBlendOp: 0,
colorWriteMask: 0xf,
};
let blend = vk::PipelineColorBlendStateCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
logicOpEnable: if blend.logic_op.is_some() { vk::TRUE } else { vk::FALSE },
logicOp: blend.logic_op.unwrap_or(Default::default()) as u32,
attachmentCount: 1, // FIXME:
pAttachments: &atch, // FIXME:
blendConstants: blend.blend_constants.unwrap_or([0.0, 0.0, 0.0, 0.0]),
};
let dynamic_states = vk::PipelineDynamicStateCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
dynamicStateCount: dynamic_states.len() as u32,
pDynamicStates: dynamic_states.as_ptr(),
};
// FIXME: hack with leaking pipeline layout
let layout = {
let infos = vk::PipelineLayoutCreateInfo {
sType: vk::STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO,
pNext: ptr::null(),
flags: 0,
setLayoutCount: 0,
pSetLayouts: ptr::null(),
pushConstantRangeCount: 0,
pPushConstantRanges: ptr::null(),
};
let mut out = mem::uninitialized();
try!(check_errors(vk.CreatePipelineLayout(device.internal_object(), &infos,
ptr::null(), &mut out)));
out
};
let infos = vk::GraphicsPipelineCreateInfo {
sType: vk::STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // TODO: some flags are available but none are critical
stageCount: stages.len() as u32,
pStages: stages.as_ptr(),
pVertexInputState: &vertex_input_state,
pInputAssemblyState: &input_assembly,
pTessellationState: ptr::null(), // FIXME:
pViewportState: &viewport,
pRasterizationState: &rasterization,
pMultisampleState: &multisample,
pDepthStencilState: &depth_stencil,
pColorBlendState: &blend,
pDynamicState: &dynamic_states,
layout: layout, // FIXME:
renderPass: render_pass.renderpass().internal_object(),
subpass: render_pass.index(),
basePipelineHandle: 0, // TODO:
basePipelineIndex: 0, // TODO:
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateGraphicsPipelines(device.internal_object(), 0,
1, &infos, ptr::null(), &mut output)));
output
};
Ok(Arc::new(GraphicsPipeline {
device: device.clone(),
pipeline: pipeline,
dynamic_line_width: raster.line_width.is_none(),
marker: PhantomData,
}))
}
}
impl<MultiVertex> GraphicsPipeline<MultiVertex> {
/// Returns true if the line width used by this pipeline is dynamic.
#[inline]
pub fn has_dynamic_line_width(&self) -> bool {
self.dynamic_line_width
}
}
impl<MultiVertex> VulkanObject for GraphicsPipeline<MultiVertex> {
type Object = vk::Pipeline;
#[inline]
fn internal_object(&self) -> vk::Pipeline {
self.pipeline
}
}
impl<MultiVertex> Drop for GraphicsPipeline<MultiVertex> {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroyPipeline(self.device.internal_object(), self.pipeline, ptr::null());
}
}
}
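A hedged sketch of tying the pieces together (not part of this commit): `device`, `vertex_shader`, `fragment_shader` and `subpass` are assumed to come from the shader and framebuffer modules, and `Mv` stands for some type implementing `MultiVertex` (e.g. an `Arc<Buffer<[MyVertex], _>>` from the `vertex` module). All the fixed state uses the defaults shown in the sibling modules:

```rust
let pipeline = GraphicsPipeline::<Mv>::new(
    &device,
    &vertex_shader,
    &InputAssembly {
        topology: PrimitiveTopology::TriangleList,
        primitive_restart_enable: false,
    },
    &Default::default(),            // Rasterization
    &Multisample::disabled(),
    &Blend { logic_op: None, blend_constants: Some([0.0; 4]) },
    &fragment_shader,
    &subpass,
).unwrap();
```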

View File

@ -0,0 +1,79 @@
use vk;
/// How the input assembly stage should behave.
#[derive(Copy, Clone, Debug)]
pub struct InputAssembly {
/// The type of primitives.
///
/// Note that some topologies don't support primitive restart.
pub topology: PrimitiveTopology,
/// If true, then the special index value `0xffff` or `0xffffffff` will tell the GPU that it is
/// the end of the current primitive. A new primitive will restart at the next index.
///
/// Note that some topologies don't support primitive restart.
pub primitive_restart_enable: bool,
}
/// Describes how vertices must be grouped together to form primitives.
///
/// Note that some topologies don't support primitive restart.
#[derive(Copy, Clone, Debug)]
#[repr(u32)]
pub enum PrimitiveTopology {
PointList = vk::PRIMITIVE_TOPOLOGY_POINT_LIST,
LineList = vk::PRIMITIVE_TOPOLOGY_LINE_LIST,
LineStrip = vk::PRIMITIVE_TOPOLOGY_LINE_STRIP,
TriangleList = vk::PRIMITIVE_TOPOLOGY_TRIANGLE_LIST,
TriangleStrip = vk::PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP,
TriangleFan = vk::PRIMITIVE_TOPOLOGY_TRIANGLE_FAN,
LineListWithAdjacency = vk::PRIMITIVE_TOPOLOGY_LINE_LIST_WITH_ADJACENCY,
LineStripWithAdjacency = vk::PRIMITIVE_TOPOLOGY_LINE_STRIP_WITH_ADJACENCY,
TriangleListWithAdjacency = vk::PRIMITIVE_TOPOLOGY_TRIANGLE_LIST_WITH_ADJACENCY,
TriangleStripWithAdjacency = vk::PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP_WITH_ADJACENCY,
PatchList = vk::PRIMITIVE_TOPOLOGY_PATCH_LIST,
}
impl PrimitiveTopology {
/// Returns true if this primitive topology supports primitive restart.
#[inline]
pub fn supports_primitive_restart(&self) -> bool {
match *self {
PrimitiveTopology::LineStrip => true,
PrimitiveTopology::TriangleStrip => true,
PrimitiveTopology::TriangleFan => true,
PrimitiveTopology::LineStripWithAdjacency => true,
PrimitiveTopology::TriangleStripWithAdjacency => true,
_ => false
}
}
}
/// Trait for types that can be used as indices by the GPU.
pub unsafe trait Index {
/// Returns the type of data.
fn ty() -> IndexType;
}
unsafe impl Index for u16 {
#[inline(always)]
fn ty() -> IndexType {
IndexType::U16
}
}
unsafe impl Index for u32 {
#[inline(always)]
fn ty() -> IndexType {
IndexType::U32
}
}
/// An enumeration of all valid index types.
#[derive(Copy, Clone, Debug)]
#[allow(missing_docs)]
#[repr(u32)]
pub enum IndexType {
U16 = vk::INDEX_TYPE_UINT16,
U32 = vk::INDEX_TYPE_UINT32,
}
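A hedged example of this state: triangle strips with primitive restart enabled. Per the docs above, restart is only valid on "strip" and "fan" topologies, which `supports_primitive_restart()` lets you check before building a pipeline:

```rust
let ia = InputAssembly {
    topology: PrimitiveTopology::TriangleStrip,
    primitive_restart_enable: true,
};
assert!(ia.topology.supports_primitive_restart());
```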

View File

@ -0,0 +1,14 @@
pub use self::graphics_pipeline::GraphicsPipeline;
//mod compute_pipeline;
mod graphics_pipeline;
pub mod blend;
pub mod cache;
//pub mod depth_stencil;
pub mod input_assembly;
pub mod multisample;
pub mod raster;
pub mod vertex;
//pub mod viewport;

View File

@ -0,0 +1,50 @@
//! State of multisampling.
//!
//! Multisampling allows you to ask the GPU to run the rasterizer to generate more than one
//! sample per pixel.
//!
//! For example, if `rasterization_samples` is 1 then the fragment shader, depth test and stencil
//! test will be run once for each pixel. However if `rasterization_samples` is `n`, then the
//! GPU will pick `n` different locations within each pixel and assign to each of these locations
//! a different depth value. Depth and stencil test will then be run `n` times.
//!
//! In addition to this, the `sample_shading` parameter is the proportion (between 0.0 and 1.0) of
//! the samples that will be run through the fragment shader. For example if you set this to 1.0,
//! then all the sub-pixel samples will run through the shader and get a different value. If you
//! set this to 0.5, about half of the samples will run through the shader and the other half will
//! get their values from the ones which went through the shader.
//!
//! If `alpha_to_coverage` is true, then the alpha value of the fragment will be used in
//! an implementation-defined way to determine which samples get disabled or not. For example if
//! the alpha value is 0.5, then about half of the samples will be discarded. If you render to a
//! multisample image, this means that the color will end up being mixed with whatever color was
//! underneath, which gives the same effect as alpha blending.
//!
//! If `alpha_to_one` is true, the alpha value of all the samples will be forced to 1.0 (or the
//! maximum possible value) after the effects of `alpha_to_coverage` have been applied.
// TODO: handle some weird behaviors with non-floating-point targets
/// State of the multisampling.
///
/// See the documentation in this module.
pub struct Multisample {
pub rasterization_samples: u32,
pub sample_mask: [u32; 4],
pub sample_shading: Option<f32>,
pub alpha_to_coverage: bool,
pub alpha_to_one: bool,
}
impl Multisample {
#[inline]
pub fn disabled() -> Multisample {
Multisample {
rasterization_samples: 1,
sample_mask: [0xffffffff; 4],
sample_shading: None,
alpha_to_coverage: false,
alpha_to_one: false,
}
}
}
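A hedged example of enabling multisampling with this struct: 4x MSAA with every sample going through the fragment shader (`sample_shading` of 1.0) and no alpha-to-coverage:

```rust
let ms = Multisample {
    rasterization_samples: 4,
    sample_mask: [0xffffffff; 4],
    sample_shading: Some(1.0),
    alpha_to_coverage: false,
    alpha_to_one: false,
};
```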

View File

@ -0,0 +1,109 @@
use vk;
/// The rasterization stage is when collections of triangles are turned into collections of pixels.
#[derive(Clone, Debug)]
pub struct Rasterization {
/// If true, then the depth value of the vertices will be clamped to `[0.0, 1.0]`. If false,
/// fragments whose depth is outside of this range will be discarded.
pub depth_clamp: bool,
/// If true, all the fragments will be discarded. This is usually used when your vertex shader
/// has some side effects and you don't need to run the fragment shader.
pub rasterizer_discard: bool,
pub polygon_mode: PolygonMode,
/// Specifies whether front faces or back faces should be discarded, or none, or both.
pub cull_mode: CullMode,
/// Specifies which triangle orientation corresponds to the front of the triangle.
pub front_face: FrontFace,
/// Width, in pixels, of lines when drawing lines.
///
/// If you pass `None`, then this state will be considered as dynamic and the line width will
/// need to be set when you build the command buffer.
pub line_width: Option<f32>,
// TODO: clean this
pub depthBiasEnable: bool,
pub depthBiasConstantFactor: f32,
pub depthBiasClamp: f32,
pub depthBiasSlopeFactor: f32,
}
impl Default for Rasterization {
#[inline]
fn default() -> Rasterization {
Rasterization {
depth_clamp: false,
rasterizer_discard: false,
polygon_mode: Default::default(),
cull_mode: Default::default(),
front_face: Default::default(),
line_width: Some(1.0),
depthBiasEnable: false,
depthBiasConstantFactor: 0.0,
depthBiasClamp: 0.0,
depthBiasSlopeFactor: 0.0,
}
}
}
/// Specifies the culling mode.
///
/// This setting works together with `front_face`. The `front_face` setting tells the GPU whether
/// clockwise or counter-clockwise correspond to the front and the back of each triangle. Then
/// `cull_mode` lets you specify whether front faces should be discarded, back faces should be
/// discarded, or none, or both.
#[derive(Copy, Clone, Debug)]
#[repr(u32)]
pub enum CullMode {
None = vk::CULL_MODE_NONE,
Front = vk::CULL_MODE_FRONT_BIT,
Back = vk::CULL_MODE_BACK_BIT,
FrontAndBack = vk::CULL_MODE_FRONT_AND_BACK,
}
impl Default for CullMode {
#[inline]
fn default() -> CullMode {
CullMode::None
}
}
/// Specifies which triangle orientation corresponds to the front of the triangle.
#[derive(Copy, Clone, Debug)]
#[repr(u32)]
pub enum FrontFace {
/// Triangles whose vertices are oriented counter-clockwise on the screen will be considered
/// as facing their front. Otherwise they will be considered as facing their back.
CounterClockwise = vk::FRONT_FACE_COUNTER_CLOCKWISE,
/// Triangles whose vertices are oriented clockwise on the screen will be considered
/// as facing their front. Otherwise they will be considered as facing their back.
Clockwise = vk::FRONT_FACE_CLOCKWISE,
}
impl Default for FrontFace {
#[inline]
fn default() -> FrontFace {
FrontFace::CounterClockwise
}
}
#[derive(Copy, Clone, Debug)]
#[repr(u32)]
pub enum PolygonMode {
Fill = vk::POLYGON_MODE_FILL,
Line = vk::POLYGON_MODE_LINE,
Point = vk::POLYGON_MODE_POINT,
}
impl Default for PolygonMode {
#[inline]
fn default() -> PolygonMode {
PolygonMode::Fill
}
}
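A hedged example of customizing this state: back-face culling on an otherwise-default rasterizer, with the line width left dynamic (`None`, per the convention documented above) so it is set on the command buffer instead:

```rust
let raster = Rasterization {
    cull_mode: CullMode::Back,
    line_width: None,
    .. Default::default()
};
```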

View File

@ -0,0 +1,161 @@
use std::mem;
use std::sync::Arc;
use VulkanObject;
use buffer::Buffer;
use formats::Format;
use vk;
#[derive(Copy, Clone, Debug)]
#[repr(u32)]
pub enum VertexInputRate {
Vertex = vk::VERTEX_INPUT_RATE_VERTEX,
Instance = vk::VERTEX_INPUT_RATE_INSTANCE,
}
/// Describes an individual `Vertex`. More precisely, a collection of attributes that can be read
/// from a vertex shader.
pub unsafe trait Vertex {
/// Returns the characteristics of a vertex attribute.
fn attrib(name: &str) -> Option<VertexAttribute>;
}
pub struct VertexAttribute {
pub offset: usize,
pub format: Format,
}
/// Trait for types that contain the layout of a collection of vertex buffers.
pub unsafe trait MultiVertex {
fn attrib(name: &str) -> Option<(u32, VertexAttribute)>;
/// Returns the number of buffers in this collection.
fn num_buffers() -> u32;
fn buffer_info(buffer_id: u32) -> (u32, VertexInputRate);
// TODO: hacky
fn ids(&self) -> Vec<u64>;
}
unsafe impl<T, M> MultiVertex for Arc<Buffer<T, M>> where T: Vertex {
#[inline]
fn attrib(name: &str) -> Option<(u32, VertexAttribute)> {
T::attrib(name).map(|attr| (0, attr))
}
#[inline]
fn num_buffers() -> u32 {
1
}
#[inline]
fn buffer_info(buffer_id: u32) -> (u32, VertexInputRate) {
assert_eq!(buffer_id, 0);
(mem::size_of::<T>() as u32, VertexInputRate::Vertex)
}
fn ids(&self) -> Vec<u64> {
vec![self.internal_object()]
}
}
unsafe impl<T, M> MultiVertex for Arc<Buffer<[T], M>> where T: Vertex {
#[inline]
fn attrib(name: &str) -> Option<(u32, VertexAttribute)> {
T::attrib(name).map(|attr| (0, attr))
}
#[inline]
fn num_buffers() -> u32 {
1
}
#[inline]
fn buffer_info(buffer_id: u32) -> (u32, VertexInputRate) {
assert_eq!(buffer_id, 0);
(mem::size_of::<T>() as u32, VertexInputRate::Vertex)
}
fn ids(&self) -> Vec<u64> {
vec![self.internal_object()]
}
}
macro_rules! impl_mv {
($t1:ident, $t2:ty) => (
unsafe impl<$t1, M> MultiVertex for Arc<Buffer<$t2, M>> where $t1: Vertex {
#[inline]
fn attrib(name: &str) -> Option<(u32, VertexAttribute)> {
$t1::attrib(name).map(|attr| (0, attr))
}
#[inline]
fn num_buffers() -> u32 {
1
}
#[inline]
fn buffer_info(buffer_id: u32) -> (u32, VertexInputRate) {
assert_eq!(buffer_id, 0);
(mem::size_of::<$t1>() as u32, VertexInputRate::Vertex)
}
fn ids(&self) -> Vec<u64> {
vec![self.internal_object()]
}
}
);
}
impl_mv!(T, [T; 1]);
impl_mv!(T, [T; 2]);
impl_mv!(T, [T; 3]);
impl_mv!(T, [T; 4]);
impl_mv!(T, [T; 5]);
impl_mv!(T, [T; 6]);
impl_mv!(T, [T; 7]);
impl_mv!(T, [T; 8]);
impl_mv!(T, [T; 9]);
impl_mv!(T, [T; 10]);
impl_mv!(T, [T; 11]);
impl_mv!(T, [T; 12]);
impl_mv!(T, [T; 13]);
impl_mv!(T, [T; 14]);
impl_mv!(T, [T; 15]);
impl_mv!(T, [T; 16]);
impl_mv!(T, [T; 32]);
impl_mv!(T, [T; 64]);
impl_mv!(T, [T; 128]);
impl_mv!(T, [T; 256]);
impl_mv!(T, [T; 512]);
impl_mv!(T, [T; 1024]);
impl_mv!(T, [T; 2048]);
impl_mv!(T, [T; 4096]);
#[macro_export]
macro_rules! impl_vertex {
($out:ident $(, $member:ident)*) => (
unsafe impl $crate::pipeline::vertex::Vertex for $out {
#[inline(always)]
fn attrib(name: &str) -> Option<$crate::pipeline::vertex::VertexAttribute> {
$(
if name == stringify!($member) {
return Some($crate::pipeline::vertex::VertexAttribute {
offset: unsafe {
let dummy = 0usize as *const $out;
let member = (&(&*dummy).$member) as *const _;
member as usize
},
format: <$crate::formats::R32G32Sfloat as $crate::formats::FormatMarker>::format(), // FIXME:
});
}
)*
None
}
}
)
}
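A hedged usage sketch for the macro above, with a hypothetical `MyVertex` type: the macro implements `Vertex` by computing field offsets. Note the FIXME above: the attribute format is currently hard-coded to `R32G32Sfloat`, so only `[f32; 2]` members really fit at this stage:

```rust
#[derive(Copy, Clone)]
struct MyVertex {
    position: [f32; 2],
}

impl_vertex!(MyVertex, position);
```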

View File

@ -0,0 +1,35 @@
use std::ops::Range;
/*
typedef struct {
VkStructureType sType;
const void* pNext;
VkPipelineViewportStateCreateFlags flags;
uint32_t viewportCount;
const VkViewport* pViewports;
uint32_t scissorCount;
const VkRect2D* pScissors;
} VkPipelineViewportStateCreateInfo;
*/
pub enum ViewportsState {
Fixed {
},
DynamicViewports {
},
DynamicScissors {
},
Dynamic,
}
#[derive(Debug, Copy, Clone)]
pub struct Viewport {
pub origin: [f32; 2],
pub dimensions: [f32; 2],
pub depth_range: Range<f32>,
}
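A hedged example (this module is still commented out in `pipeline/mod.rs`): a viewport covering an 800x600 framebuffer with the usual `[0.0, 1.0]` depth range:

```rust
let vp = Viewport {
    origin: [0.0, 0.0],
    dimensions: [800.0, 600.0],
    depth_range: 0.0 .. 1.0,
};
```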

55
vulkano/src/query.rs Normal file
View File

@ -0,0 +1,55 @@
//! This module provides support for query pools.
//!
//! In Vulkan, queries are not created individually. Instead you manipulate **query pools**, which
//! represent a collection of queries. Whenever you use a query, you have to specify both the query
//! pool and the slot id within that query pool.
use std::mem;
use std::ptr;
use std::sync::Arc;
use device::Device;
use check_errors;
use VulkanObject;
use VulkanPointers;
use vk;
macro_rules! query_pool {
($name:ident, $query_type:expr) => {
pub struct $name {
device: Arc<Device>,
pool: vk::QueryPool,
num_slots: u32,
}
impl $name {
/// Builds a new query pool.
pub fn new(device: Arc<Device>, num_slots: u32) -> Arc<$name> {
let pool = unsafe {
let vk = device.pointers();
let create_infos = vk::QueryPoolCreateInfo {
sType: vk::STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
queryType: $query_type,
queryCount: num_slots,
pipelineStatistics: 0, // TODO:
};
let mut output = mem::uninitialized();
check_errors(vk.CreateQueryPool(device.internal_object(), &create_infos,
ptr::null(), &mut output)).unwrap(); // TODO: don't unwrap
output
};
Arc::new($name {
device: device,
pool: pool,
num_slots: num_slots,
})
}
/// Returns the number of slots of that query pool.
#[inline]
pub fn num_slots(&self) -> u32 {
self.num_slots
}
}
impl Drop for $name {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroyQueryPool(self.device.internal_object(), self.pool, ptr::null());
}
}
}
};
}
query_pool!(OcclusionQueriesPool, vk::QUERY_TYPE_OCCLUSION);
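A hedged usage sketch of the macro-generated pool, assuming an existing `device: Arc<Device>`:

```rust
let pool = OcclusionQueriesPool::new(device.clone(), 32);
assert_eq!(pool.num_slots(), 32);
```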

118
vulkano/src/sampler.rs Normal file
View File

@ -0,0 +1,118 @@
//! How to retrieve data from an image within a shader.
//!
//! This module contains a struct named `Sampler` which describes how to get pixel data from
//! a texture.
//!
use std::mem;
use std::ptr;
use std::sync::Arc;
use device::Device;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// Describes how to retrieve data from an image within a shader.
pub struct Sampler {
device: Arc<Device>,
sampler: vk::Sampler,
}
impl Sampler {
/// Creates a new `Sampler` with the given behavior.
///
/// # Panic
///
/// Panics if `max_anisotropy < 1.0`.
/// Panics if `min_lod > max_lod`.
pub fn new(device: &Arc<Device>, mag_filter: Filter, min_filter: Filter,
mipmap_mode: MipmapMode, address_u: SamplerAddressMode,
address_v: SamplerAddressMode, address_w: SamplerAddressMode, mip_lod_bias: f32,
max_anisotropy: f32, min_lod: f32, max_lod: f32) -> Result<Arc<Sampler>, OomError>
{
assert!(max_anisotropy >= 1.0);
assert!(min_lod <= max_lod);
let vk = device.pointers();
let sampler = unsafe {
let infos = vk::SamplerCreateInfo {
sType: vk::STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
magFilter: mag_filter as u32,
minFilter: min_filter as u32,
mipmapMode: mipmap_mode as u32,
addressModeU: address_u as u32,
addressModeV: address_v as u32,
addressModeW: address_w as u32,
mipLodBias: mip_lod_bias,
anisotropyEnable: if max_anisotropy > 1.0 { vk::TRUE } else { vk::FALSE },
maxAnisotropy: max_anisotropy,
compareEnable: 0, // FIXME:
compareOp: 0, // FIXME:
minLod: min_lod,
maxLod: max_lod,
borderColor: 0, // FIXME:
unnormalizedCoordinates: 0, // FIXME:
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateSampler(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(Sampler {
device: device.clone(),
sampler: sampler,
}))
}
}
impl Drop for Sampler {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroySampler(self.device.internal_object(), self.sampler, ptr::null());
}
}
}
/// Describes how the color of each pixel should be determined.
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
#[repr(u32)]
pub enum Filter {
/// The four pixels whose center surround the requested coordinates are taken, then their
/// values are interpolated.
Linear = vk::FILTER_LINEAR,
/// The pixel whose center is nearest to the requested coordinates is taken from the source
/// and its value is returned as-is.
Nearest = vk::FILTER_NEAREST,
}
/// Describes which mipmap from the source to use.
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
#[repr(u32)]
pub enum MipmapMode {
/// Use the mipmap whose dimensions are the nearest to the dimensions of the destination.
Nearest = vk::SAMPLER_MIPMAP_MODE_NEAREST,
/// Take the two mipmaps whose dimensions are immediately inferior and superior to the
/// dimensions of the destination, calculate the value for both, and interpolate them.
Linear = vk::SAMPLER_MIPMAP_MODE_LINEAR,
}
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
#[repr(u32)]
pub enum SamplerAddressMode {
Repeat = vk::SAMPLER_ADDRESS_MODE_REPEAT,
MirroredRepeat = vk::SAMPLER_ADDRESS_MODE_MIRRORED_REPEAT,
ClampToEdge = vk::SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE,
ClampToBorder = vk::SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER,
MirrorClampToEdge = vk::SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE,
}
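A hedged example of building a sampler with the API above, assuming an existing `device: &Arc<Device>`: a basic trilinear sampler with repeat addressing, no LOD bias and no anisotropy (a `max_anisotropy` of 1.0 leaves it disabled):

```rust
let sampler = Sampler::new(device, Filter::Linear, Filter::Linear,
                           MipmapMode::Linear, SamplerAddressMode::Repeat,
                           SamplerAddressMode::Repeat, SamplerAddressMode::Repeat,
                           0.0 /* mip_lod_bias */, 1.0 /* max_anisotropy */,
                           0.0 /* min_lod */, 1.0 /* max_lod */).unwrap();
```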

191
vulkano/src/shader.rs Normal file
View File

@ -0,0 +1,191 @@
use std::borrow::Cow;
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::sync::Arc;
use std::ffi::CStr;
use device::Device;
use OomError;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// Contains a compiled shader module.
///
/// Note that it is advised to wrap a `ShaderModule` in a struct that is different for
/// each shader.
pub struct ShaderModule {
device: Arc<Device>,
module: vk::ShaderModule,
}
impl ShaderModule {
// TODO: even if the code has been validated at compile-time, we still need to check for
// capabilities at runtime
pub unsafe fn new(device: &Arc<Device>, spirv: &[u8])
-> Result<Arc<ShaderModule>, OomError>
{
let vk = device.pointers();
let module = {
let infos = vk::ShaderModuleCreateInfo {
sType: vk::STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
pNext: ptr::null(),
flags: 0, // reserved
codeSize: spirv.len(),
pCode: spirv.as_ptr() as *const _,
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateShaderModule(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(ShaderModule {
device: device.clone(),
module: module,
}))
}
pub unsafe fn vertex_shader_entry_point<'a, V>(&'a self, name: &'a CStr)
-> VertexShaderEntryPoint<'a, V>
{
VertexShaderEntryPoint {
module: self,
name: name,
marker: PhantomData,
attributes: vec!["position".into()], // FIXME:
}
}
/// Gets access to an entry point contained in this module.
///
/// This is purely a *logical* operation. It returns a struct that *represents* the entry
/// point but doesn't actually do anything.
///
/// # Safety
///
/// - The user must check that the entry point exists in the module, as this is not checked
/// by Vulkan.
/// - Calling this function also determines the template parameters associated to the
/// `EntryPoint` struct. Therefore care must be taken that the values there are correct.
///
pub unsafe fn fragment_shader_entry_point<'a, F>(&'a self, name: &'a CStr)
-> FragmentShaderEntryPoint<'a, F>
{
FragmentShaderEntryPoint {
module: self,
name: name,
marker: PhantomData,
}
}
}
impl VulkanObject for ShaderModule {
type Object = vk::ShaderModule;
#[inline]
fn internal_object(&self) -> vk::ShaderModule {
self.module
}
}
impl Drop for ShaderModule {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroyShaderModule(self.device.internal_object(), self.module, ptr::null());
}
}
}
pub struct VertexShaderEntryPoint<'a, V> {
module: &'a ShaderModule,
name: &'a CStr,
marker: PhantomData<V>,
attributes: Vec<Cow<'static, str>>,
}
impl<'a, V> VertexShaderEntryPoint<'a, V> {
#[inline]
pub fn module(&self) -> &'a ShaderModule {
self.module
}
#[inline]
pub fn name(&self) -> &'a CStr {
self.name
}
// TODO: change API
#[inline]
pub fn attributes(&self) -> &Vec<Cow<'static, str>> {
&self.attributes
}
}
pub struct ComputeShaderEntryPoint<'a, D, S, P> {
module: &'a ShaderModule,
name: &'a CStr,
marker: PhantomData<(D, S, P)>
}
pub struct FragmentShaderEntryPoint<'a, F> {
module: &'a ShaderModule,
name: &'a CStr,
marker: PhantomData<F>
}
impl<'a, F> FragmentShaderEntryPoint<'a, F> {
#[inline]
pub fn module(&self) -> &'a ShaderModule {
self.module
}
#[inline]
pub fn name(&self) -> &'a CStr {
self.name
}
}
/// Trait to describe structs that contain specialization data for shaders.
///
/// It is implemented on `()` for shaders that don't have any specialization constant.
pub unsafe trait SpecializationConstants {
/// Returns descriptors of the struct's layout.
fn descriptors() -> &'static [SpecializationMapEntry];
}
unsafe impl SpecializationConstants for () {
#[inline]
fn descriptors() -> &'static [SpecializationMapEntry] {
&[]
}
}
/// Describes an individual constant to set in the shader. Also corresponds to a field in the struct.
// Has the same memory representation as a `VkSpecializationMapEntry`.
#[repr(C)]
pub struct SpecializationMapEntry {
/// Identifier of the constant in the shader that corresponds to this field.
pub constant_id: u32,
/// Offset within this struct for the data.
pub offset: u32,
/// Size of the data in bytes.
pub size: usize,
}
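A hedged sketch of the kind of impl that `vulkano-shaders` would normally generate, for a hypothetical shader with a single `u32` specialization constant at `constant_id = 0`. The offsets and sizes must match the struct's memory layout, hence `#[repr(C)]`:

```rust
#[repr(C)]
struct FogConstants {
    enable_fog: u32,
}

unsafe impl SpecializationConstants for FogConstants {
    fn descriptors() -> &'static [SpecializationMapEntry] {
        static DESCRIPTORS: [SpecializationMapEntry; 1] = [
            SpecializationMapEntry { constant_id: 0, offset: 0, size: 4 },
        ];
        &DESCRIPTORS
    }
}
```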
/// Trait to describe structs that contain push constants for shaders.
///
/// It is implemented on `()` for shaders that don't have any push constant.
pub unsafe trait PushConstants {
// TODO:
}
unsafe impl PushConstants for () {
}

View File

@ -0,0 +1,280 @@
//! Link between Vulkan and a window and/or the screen.
//!
//! In order to draw on the screen or a window, you have to go through two steps:
//!
//! - Create a `Surface` object that represents the location where the image will show up.
//! - Create a `Swapchain` using that `Surface`.
//!
//! Creating a surface can be done with only an `Instance` object. However creating a swapchain
//! requires a `Device` object.
//!
//! Once you have a swapchain, you can retrieve `Image` objects from it and draw to them. However
//! due to double-buffering or other caching mechanisms, the rendering will not automatically be
//! shown on screen. In order to show the output on screen, you have to *present* the swapchain
//! by using the method with the same name.
//!
//! # Extensions
//!
//! These capabilities depend on some extensions:
//!
//! - `VK_KHR_surface`
//! - `VK_KHR_swapchain`
//! - `VK_KHR_display`
//! - `VK_KHR_display_swapchain`
//! - `VK_KHR_xlib_surface`
//! - `VK_KHR_xcb_surface`
//! - `VK_KHR_wayland_surface`
//! - `VK_KHR_mir_surface`
//! - `VK_KHR_android_surface`
//! - `VK_KHR_win32_surface`
//!
use std::ffi::CStr;
use std::mem;
use std::ptr;
use std::sync::Arc;
use std::vec::IntoIter;
use instance::PhysicalDevice;
use memory::MemorySourceChunk;
use check_errors;
use OomError;
use VulkanObject;
use VulkanPointers;
use vk;
pub use self::surface::Capabilities;
pub use self::surface::Surface;
pub use self::surface::PresentMode;
pub use self::surface::SurfaceTransform;
pub use self::surface::CompositeAlpha;
pub use self::surface::ColorSpace;
pub use self::swapchain::Swapchain;
pub use self::swapchain::AcquireError;
mod surface;
mod swapchain;
// TODO: extract this to a `display` module and solve the visibility problems
/// ?
// TODO: plane capabilities
pub struct DisplayPlane {
device: PhysicalDevice,
index: u32,
properties: vk::DisplayPlanePropertiesKHR,
supported_displays: Vec<vk::DisplayKHR>,
}
impl DisplayPlane {
/// Enumerates all the display planes that are available on a given physical device.
pub fn enumerate(device: &PhysicalDevice) -> Result<IntoIter<DisplayPlane>, OomError> {
let vk = device.instance().pointers();
let num = unsafe {
let mut num: u32 = mem::uninitialized();
try!(check_errors(vk.GetPhysicalDeviceDisplayPlanePropertiesKHR(device.internal_object(),
&mut num, ptr::null_mut())));
num
};
let planes: Vec<vk::DisplayPlanePropertiesKHR> = unsafe {
let mut planes = Vec::with_capacity(num as usize);
let mut num = num;
try!(check_errors(vk.GetPhysicalDeviceDisplayPlanePropertiesKHR(device.internal_object(),
&mut num,
planes.as_mut_ptr())));
planes.set_len(num as usize);
planes
};
Ok(planes.into_iter().enumerate().map(|(index, prop)| {
let num = unsafe {
let mut num: u32 = mem::uninitialized();
check_errors(vk.GetDisplayPlaneSupportedDisplaysKHR(device.internal_object(), index as u32,
&mut num, ptr::null_mut())).unwrap(); // TODO: shouldn't unwrap
num
};
let supported_displays: Vec<vk::DisplayKHR> = unsafe {
let mut displays = Vec::with_capacity(num as usize);
let mut num = num;
check_errors(vk.GetDisplayPlaneSupportedDisplaysKHR(device.internal_object(),
index as u32, &mut num,
displays.as_mut_ptr())).unwrap(); // TODO: shouldn't unwrap
displays.set_len(num as usize);
displays
};
DisplayPlane {
device: device.clone(),
index: index as u32,
properties: prop,
supported_displays: supported_displays,
}
}).collect::<Vec<_>>().into_iter())
}
/// Returns true if this plane supports the given display.
#[inline]
pub fn supports(&self, display: &Display) -> bool {
// making sure that the physical device is the same
if self.device.internal_object() != display.device.internal_object() {
return false;
}
self.supported_displays.iter().any(|&d| d == display.internal_object())
}
}
/// Represents a monitor connected to a physical device.
#[derive(Clone)]
pub struct Display {
device: PhysicalDevice,
properties: Arc<vk::DisplayPropertiesKHR>, // TODO: Arc because struct isn't clone
}
impl Display {
/// Enumerates all the displays that are available on a given physical device.
pub fn enumerate(device: &PhysicalDevice) -> Result<IntoIter<Display>, OomError> {
let vk = device.instance().pointers();
let num = unsafe {
let mut num = mem::uninitialized();
try!(check_errors(vk.GetPhysicalDeviceDisplayPropertiesKHR(device.internal_object(),
&mut num, ptr::null_mut())));
num
};
let displays: Vec<vk::DisplayPropertiesKHR> = unsafe {
let mut displays = Vec::with_capacity(num as usize);
let mut num = num;
try!(check_errors(vk.GetPhysicalDeviceDisplayPropertiesKHR(device.internal_object(),
&mut num,
displays.as_mut_ptr())));
displays.set_len(num as usize);
displays
};
Ok(displays.into_iter().map(|prop| {
Display {
device: device.clone(),
properties: Arc::new(prop),
}
}).collect::<Vec<_>>().into_iter())
}
/// Returns the name of the display.
#[inline]
pub fn name(&self) -> &str {
unsafe {
CStr::from_ptr(self.properties.displayName).to_str()
.expect("non UTF-8 characters in display name")
}
}
/// Returns the physical resolution of the display.
#[inline]
pub fn physical_resolution(&self) -> [u32; 2] {
let ref r = self.properties.physicalResolution;
[r.width, r.height]
}
/// Returns a list of all modes available on this display.
pub fn display_modes(&self) -> Result<IntoIter<DisplayMode>, OomError> {
let vk = self.device.instance().pointers();
let num = unsafe {
let mut num = mem::uninitialized();
try!(check_errors(vk.GetDisplayModePropertiesKHR(self.device.internal_object(),
self.properties.display,
&mut num, ptr::null_mut())));
num
};
let modes: Vec<vk::DisplayModePropertiesKHR> = unsafe {
let mut modes = Vec::with_capacity(num as usize);
let mut num = num;
try!(check_errors(vk.GetDisplayModePropertiesKHR(self.device.internal_object(),
self.properties.display, &mut num,
modes.as_mut_ptr())));
modes.set_len(num as usize);
modes
};
Ok(modes.into_iter().map(|mode| {
DisplayMode {
display: self.clone(),
display_mode: mode.displayMode,
parameters: mode.parameters,
}
}).collect::<Vec<_>>().into_iter())
}
}
impl VulkanObject for Display {
type Object = vk::DisplayKHR;
#[inline]
fn internal_object(&self) -> vk::DisplayKHR {
self.properties.display
}
}
/// Represents a mode on a specific display.
pub struct DisplayMode {
display: Display,
display_mode: vk::DisplayModeKHR,
parameters: vk::DisplayModeParametersKHR,
}
impl DisplayMode {
/*pub fn new(display: &Display) -> Result<Arc<DisplayMode>, OomError> {
let vk = instance.pointers();
let parameters = vk::DisplayModeParametersKHR {
visibleRegion: vk::Extent2D { width: , height: },
refreshRate: ,
};
let display_mode = {
let infos = vk::DisplayModeCreateInfoKHR {
sType: vk::STRUCTURE_TYPE_DISPLAY_MODE_CREATE_INFO_KHR,
pNext: ptr::null(),
flags: 0, // reserved
parameters: parameters,
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateDisplayModeKHR(display.device.internal_object(),
display.display, &infos, ptr::null(),
&mut output)));
output
};
Ok(Arc::new(DisplayMode {
instance: display.device.instance().clone(),
display_mode: display_mode,
parameters: ,
}))
}*/
/// Returns the display corresponding to this mode.
#[inline]
pub fn display(&self) -> &Display {
&self.display
}
/// Returns the dimensions of the region that is visible on the monitor.
#[inline]
pub fn visible_region(&self) -> [u32; 2] {
let ref d = self.parameters.visibleRegion;
[d.width, d.height]
}
/// Returns the refresh rate of this mode.
#[inline]
pub fn refresh_rate(&self) -> u32 {
self.parameters.refreshRate
}
}
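A hedged sketch of enumerating displays and their modes with the types above, assuming an existing `physical: &PhysicalDevice` (the refresh rate units are whatever the Vulkan implementation reports):

```rust
for display in Display::enumerate(physical).unwrap() {
    println!("{}: physical resolution {:?}", display.name(),
             display.physical_resolution());
    for mode in display.display_modes().unwrap() {
        println!("  {:?} @ {}", mode.visible_region(), mode.refresh_rate());
    }
}
```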

View File

@ -0,0 +1,412 @@
use std::mem;
use std::ops::Range;
use std::ptr;
use std::sync::Arc;
use formats::Format;
use formats::FormatMarker;
use image::Usage as ImageUsage;
use instance::Instance;
use instance::PhysicalDevice;
use instance::QueueFamily;
use memory::MemorySourceChunk;
use swapchain::DisplayMode;
use swapchain::DisplayPlane;
use check_errors;
use OomError;
use VulkanObject;
use VulkanPointers;
use vk;
/// Represents a surface on the screen.
///
/// Creating a `Surface` is platform-specific.
pub struct Surface {
instance: Arc<Instance>,
surface: vk::SurfaceKHR,
}
impl Surface {
/// Creates a `Surface` that covers a display mode.
///
/// # Panic
///
/// - Panics if `display_mode` and `plane` don't belong to the same physical device.
/// - Panics if `plane` doesn't support the display of `display_mode`.
///
pub fn from_display_mode(display_mode: &DisplayMode, plane: &DisplayPlane)
-> Result<Arc<Surface>, OomError>
{
assert_eq!(display_mode.display.device.internal_object(), plane.device.internal_object());
assert!(plane.supports(display_mode.display()));
let instance = display_mode.display.device.instance();
let vk = instance.pointers();
let surface = unsafe {
let infos = vk::DisplaySurfaceCreateInfoKHR {
sType: vk::STRUCTURE_TYPE_DISPLAY_SURFACE_CREATE_INFO_KHR,
pNext: ptr::null(),
flags: 0, // reserved
displayMode: display_mode.display_mode,
planeIndex: plane.index,
planeStackIndex: plane.properties.currentStackIndex,
transform: vk::SURFACE_TRANSFORM_IDENTITY_BIT_KHR, // TODO: let user choose
globalAlpha: 0.0, // TODO: let user choose
alphaMode: vk::DISPLAY_PLANE_ALPHA_OPAQUE_BIT_KHR, // TODO: let user choose
imageExtent: vk::Extent2D { // TODO: let user choose
width: display_mode.parameters.visibleRegion.width,
height: display_mode.parameters.visibleRegion.height,
},
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateDisplayPlaneSurfaceKHR(instance.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(Surface {
instance: instance.clone(),
surface: surface,
}))
}
/// Creates a `Surface` from a Win32 window.
///
/// # Safety
///
/// The caller must ensure that the `hinstance` and the `hwnd` are both correct and stay
/// alive for the entire lifetime of the surface.
pub unsafe fn from_hwnd<T, U>(instance: &Arc<Instance>, hinstance: *const T, hwnd: *const U)
-> Result<Arc<Surface>, OomError>
{
let vk = instance.pointers();
let surface = {
let infos = vk::Win32SurfaceCreateInfoKHR {
sType: vk::STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR,
pNext: ptr::null(),
flags: 0, // reserved
hinstance: hinstance as *mut _,
hwnd: hwnd as *mut _,
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateWin32SurfaceKHR(instance.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(Surface {
instance: instance.clone(),
surface: surface,
}))
}
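// Sketch: the raw handles must come from the application's own windowing code;
// `hinstance` and `hwnd` below are placeholders rather than values that this
// library can provide:
//
//     let surface = unsafe {
//         try!(Surface::from_hwnd(&instance, hinstance, hwnd))
//     };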
/// Creates a `Surface` from an Android window.
///
/// # Safety
///
/// The caller must ensure that the `window` is correct and stays alive for the entire
/// lifetime of the surface.
pub unsafe fn from_anativewindow<T>(instance: &Arc<Instance>, window: *const T)
-> Result<Arc<Surface>, OomError>
{
let vk = instance.pointers();
let surface = {
let infos = vk::AndroidSurfaceCreateInfoKHR {
sType: vk::STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR,
pNext: ptr::null(),
flags: 0, // reserved
window: window as *mut _,
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateAndroidSurfaceKHR(instance.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(Surface {
instance: instance.clone(),
surface: surface,
}))
}
/// Returns true if the given queue family can draw on this surface.
pub fn is_supported(&self, queue: &QueueFamily) -> Result<bool, OomError> {
unsafe {
let vk = self.instance.pointers();
let mut output = mem::uninitialized();
try!(check_errors(
vk.GetPhysicalDeviceSurfaceSupportKHR(queue.physical_device().internal_object(),
queue.id(), self.surface, &mut output)
));
Ok(output != 0)
}
}
/// Retrieves the capabilities of a surface when used by a certain device.
///
/// # Panic
///
/// - Panics if the device and the surface don't belong to the same instance.
///
pub fn get_capabilities(&self, device: &PhysicalDevice) -> Result<Capabilities, OomError> { // TODO: wrong error type
unsafe {
assert_eq!(&*self.instance as *const _, &**device.instance() as *const _);
let vk = self.instance.pointers();
let caps = {
let mut out: vk::SurfaceCapabilitiesKHR = mem::uninitialized();
try!(check_errors(
vk.GetPhysicalDeviceSurfaceCapabilitiesKHR(device.internal_object(),
self.surface, &mut out)
));
out
};
let formats = {
let mut num = 0;
try!(check_errors(
vk.GetPhysicalDeviceSurfaceFormatsKHR(device.internal_object(),
self.surface, &mut num,
ptr::null_mut())
));
let mut formats = Vec::with_capacity(num as usize);
try!(check_errors(
vk.GetPhysicalDeviceSurfaceFormatsKHR(device.internal_object(),
self.surface, &mut num,
formats.as_mut_ptr())
));
formats.set_len(num as usize);
formats
};
let modes = {
let mut num = 0;
try!(check_errors(
vk.GetPhysicalDeviceSurfacePresentModesKHR(device.internal_object(),
self.surface, &mut num,
ptr::null_mut())
));
let mut modes = Vec::with_capacity(num as usize);
try!(check_errors(
vk.GetPhysicalDeviceSurfacePresentModesKHR(device.internal_object(),
self.surface, &mut num,
modes.as_mut_ptr())
));
modes.set_len(num as usize);
modes
};
Ok(Capabilities {
image_count: {
// a `maxImageCount` of 0 means that there is no maximum
let max = if caps.maxImageCount == 0 { u32::max_value() }
else { caps.maxImageCount + 1 };
caps.minImageCount .. max
},
current_extent: if caps.currentExtent.width == 0xffffffff &&
caps.currentExtent.height == 0xffffffff
{
None
} else {
Some([caps.currentExtent.width, caps.currentExtent.height])
},
min_image_extent: [caps.minImageExtent.width, caps.minImageExtent.height],
max_image_extent: [caps.maxImageExtent.width, caps.maxImageExtent.height],
max_image_array_layers: caps.maxImageArrayLayers,
supported_transforms: SurfaceTransform::from_bits(caps.supportedTransforms),
current_transform: SurfaceTransform::from_bits(caps.currentTransform).into_iter().next().unwrap(), // `currentTransform` contains exactly one bit
supported_composite_alpha: CompositeAlpha::from_bits(caps.supportedCompositeAlpha),
supported_usage_flags: {
let usage = ImageUsage::from_bits(caps.supportedUsageFlags);
debug_assert!(usage.color_attachment); // specs say that this must be true
usage
},
supported_formats: formats.into_iter().map(|f| {
(Format::from_num(f.format).unwrap(), ColorSpace::from_num(f.colorSpace))
}).collect(),
present_modes: modes.into_iter().map(|mode| PresentMode::from_num(mode)).collect(),
})
}
}
}
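// Sketch: choosing a queue family that can present to a surface. This assumes
// that the physical device exposes an iterator over its queue families, called
// `queue_families()` here (hypothetical name):
//
//     let family = physical.queue_families()
//                          .find(|q| surface.is_supported(q).unwrap_or(false))
//                          .expect("no queue family can present to this surface");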
impl VulkanObject for Surface {
type Object = vk::SurfaceKHR;
#[inline]
fn internal_object(&self) -> vk::SurfaceKHR {
self.surface
}
}
impl Drop for Surface {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.instance.pointers();
vk.DestroySurfaceKHR(self.instance.internal_object(), self.surface, ptr::null());
}
}
}
/// The capabilities of a surface when used by a physical device.
///
/// You have to match these capabilities when you create a swapchain.
#[derive(Clone, Debug)]
pub struct Capabilities {
/// Range of the number of images that can be created. Note that the upper bound of the
/// range is exclusive.
pub image_count: Range<u32>,
/// The current dimensions of the surface. `None` means that the surface's dimensions will
/// depend on the dimensions of the swapchain that you are going to create.
pub current_extent: Option<[u32; 2]>,
pub min_image_extent: [u32; 2],
pub max_image_extent: [u32; 2],
pub max_image_array_layers: u32,
pub supported_transforms: Vec<SurfaceTransform>,
pub current_transform: SurfaceTransform,
pub supported_composite_alpha: Vec<CompositeAlpha>,
pub supported_usage_flags: ImageUsage,
pub supported_formats: Vec<(Format, ColorSpace)>, // FIXME: driver can return FORMAT_UNDEFINED which indicates that it has no preferred format, so that field should be an Option
/// List of present modes that are supported.
pub present_modes: Vec<PresentMode>,
}
/// The way presenting a swapchain is accomplished.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[repr(u32)]
pub enum PresentMode {
/// Immediately shows the image to the user. May result in visible tearing.
Immediate = vk::PRESENT_MODE_IMMEDIATE_KHR,
/// Presenting an image puts it in a waiting slot. At the next vertical blanking period, the
/// waiting image is shown to the user. If a new image is presented while another one is
/// already waiting, the waiting image is replaced.
Mailbox = vk::PRESENT_MODE_MAILBOX_KHR,
/// Presenting an image adds it to a queue of images. At each vertical blanking period, an
/// image is popped from the queue and shown to the user.
///
/// This is the equivalent of OpenGL's `SwapInterval` with a value of 1.
Fifo = vk::PRESENT_MODE_FIFO_KHR,
/// Same as `Fifo`, except that if the queue was empty during the previous vertical blanking
/// period then it is equivalent to `Immediate`.
///
/// This is the equivalent of OpenGL's `SwapInterval` with a value of -1.
Relaxed = vk::PRESENT_MODE_FIFO_RELAXED_KHR,
}
impl PresentMode {
/// Panics if the mode is unrecognized.
#[inline]
fn from_num(num: u32) -> PresentMode {
match num {
vk::PRESENT_MODE_IMMEDIATE_KHR => PresentMode::Immediate,
vk::PRESENT_MODE_MAILBOX_KHR => PresentMode::Mailbox,
vk::PRESENT_MODE_FIFO_KHR => PresentMode::Fifo,
vk::PRESENT_MODE_FIFO_RELAXED_KHR => PresentMode::Relaxed,
m => panic!("unrecognized present mode: {:?}", m)
}
}
}
/// A transformation to apply to the image before showing it on the screen.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[repr(u32)]
pub enum SurfaceTransform {
Identity = vk::SURFACE_TRANSFORM_IDENTITY_BIT_KHR,
Rotate90 = vk::SURFACE_TRANSFORM_ROTATE_90_BIT_KHR,
Rotate180 = vk::SURFACE_TRANSFORM_ROTATE_180_BIT_KHR,
Rotate270 = vk::SURFACE_TRANSFORM_ROTATE_270_BIT_KHR,
HorizontalMirror = vk::SURFACE_TRANSFORM_HORIZONTAL_MIRROR_BIT_KHR,
HorizontalMirrorRotate90 = vk::SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_90_BIT_KHR,
HorizontalMirrorRotate180 = vk::SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_180_BIT_KHR,
HorizontalMirrorRotate270 = vk::SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_270_BIT_KHR,
Inherit = vk::SURFACE_TRANSFORM_INHERIT_BIT_KHR,
}
impl SurfaceTransform {
fn from_bits(val: u32) -> Vec<SurfaceTransform> {
macro_rules! v {
($val:expr, $out:ident, $e:expr, $o:ident) => (
if ($val & $e) != 0 { $out.push(SurfaceTransform::$o); }
);
}
let mut result = Vec::with_capacity(9);
v!(val, result, vk::SURFACE_TRANSFORM_IDENTITY_BIT_KHR, Identity);
v!(val, result, vk::SURFACE_TRANSFORM_ROTATE_90_BIT_KHR, Rotate90);
v!(val, result, vk::SURFACE_TRANSFORM_ROTATE_180_BIT_KHR, Rotate180);
v!(val, result, vk::SURFACE_TRANSFORM_ROTATE_270_BIT_KHR, Rotate270);
v!(val, result, vk::SURFACE_TRANSFORM_HORIZONTAL_MIRROR_BIT_KHR, HorizontalMirror);
v!(val, result, vk::SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_90_BIT_KHR,
HorizontalMirrorRotate90);
v!(val, result, vk::SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_180_BIT_KHR,
HorizontalMirrorRotate180);
v!(val, result, vk::SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_270_BIT_KHR,
HorizontalMirrorRotate270);
v!(val, result, vk::SURFACE_TRANSFORM_INHERIT_BIT_KHR, Inherit);
result
}
}
impl Default for SurfaceTransform {
#[inline]
fn default() -> SurfaceTransform {
SurfaceTransform::Identity
}
}
/// How the alpha values of the pixels of the window are treated.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[repr(u32)]
pub enum CompositeAlpha {
/// The alpha channel of the image is ignored. All the pixels are considered as if they have a
/// value of 1.0.
Opaque = vk::COMPOSITE_ALPHA_OPAQUE_BIT_KHR,
/// The alpha channel of the image is respected. The color channels are expected to have
/// already been multiplied by the alpha value.
PreMultiplied = vk::COMPOSITE_ALPHA_PRE_MULTIPLIED_BIT_KHR,
/// The alpha channel of the image is respected. The color channels will be multiplied by the
/// alpha value by the compositor before being added to what is behind.
PostMultiplied = vk::COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR,
/// Platform-specific behavior.
Inherit = vk::COMPOSITE_ALPHA_INHERIT_BIT_KHR,
}
impl CompositeAlpha {
fn from_bits(val: u32) -> Vec<CompositeAlpha> {
let mut result = Vec::with_capacity(4);
if (val & vk::COMPOSITE_ALPHA_OPAQUE_BIT_KHR) != 0 { result.push(CompositeAlpha::Opaque); }
if (val & vk::COMPOSITE_ALPHA_PRE_MULTIPLIED_BIT_KHR) != 0 { result.push(CompositeAlpha::PreMultiplied); }
if (val & vk::COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR) != 0 { result.push(CompositeAlpha::PostMultiplied); }
if (val & vk::COMPOSITE_ALPHA_INHERIT_BIT_KHR) != 0 { result.push(CompositeAlpha::Inherit); }
result
}
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum ColorSpace {
SrgbNonLinear,
}
impl ColorSpace {
#[inline]
fn from_num(val: u32) -> ColorSpace {
assert_eq!(val, vk::COLORSPACE_SRGB_NONLINEAR_KHR);
ColorSpace::SrgbNonLinear
}
}

260 vulkano/src/swapchain/swapchain.rs Normal file

@ -0,0 +1,260 @@
use std::error;
use std::fmt;
use std::mem;
use std::ptr;
use std::sync::Arc;
use device::Device;
use device::Queue;
use formats::FormatMarker;
use image::Image;
use image::Type2d;
use image::Usage as ImageUsage;
use memory::ChunkProperties;
use memory::MemorySourceChunk;
use swapchain::CompositeAlpha;
use swapchain::PresentMode;
use swapchain::Surface;
use swapchain::SurfaceTransform;
use sync::Fence;
use sync::Semaphore;
use check_errors;
use Error;
use OomError;
use Success;
use VulkanObject;
use VulkanPointers;
use vk;
/// Contains the swapping system and the images that can be shown on a surface.
pub struct Swapchain {
device: Arc<Device>,
surface: Arc<Surface>,
swapchain: vk::SwapchainKHR,
}
impl Swapchain {
/// Builds a new swapchain. Allocates images whose content can be made visible on a surface.
///
/// See also the `Surface::get_capabilities` function which returns the values that are
/// supported by the implementation. All the parameters that you pass to `Swapchain::new`
/// must be supported.
///
/// The `clipped` parameter indicates whether the implementation is allowed to discard
/// rendering operations that affect regions of the surface which aren't visible. This is
/// important to take into account if your fragment shader has side-effects or if you want to
/// read back the content of the image afterwards.
///
/// This function returns the swapchain plus a list of the images that belong to the
/// swapchain. The order in which the images are returned is important for the
/// `acquire_next_image` and `present` functions.
///
/// # Panic
///
/// - Panics if the device and the surface don't belong to the same instance.
/// - Panics if `color_attachment` is false in `usage`.
///
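/// # Example
///
/// A minimal sketch rather than a definitive recipe. It assumes that `device`,
/// `surface` and `physical` have already been created, and that a
/// `B8G8R8A8Srgb` marker type exists in the `formats` module (hypothetical
/// name):
///
/// ```no_run
/// let caps = surface.get_capabilities(&physical).unwrap();
/// let dimensions = caps.current_extent.unwrap_or([1280, 720]);
/// // reusing all the supported usage flags is the lazy way out; a real
/// // application would enable only what it needs
/// let usage = caps.supported_usage_flags;
/// let (swapchain, images) = Swapchain::new(&device, &surface,
///     caps.image_count.start, formats::B8G8R8A8Srgb, dimensions, 1, &usage,
///     SurfaceTransform::Identity, CompositeAlpha::Opaque,
///     caps.present_modes[0], true).unwrap();
/// ```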
pub fn new<F>(device: &Arc<Device>, surface: &Arc<Surface>, num_images: u32, format: F,
dimensions: [u32; 2], layers: u32, usage: &ImageUsage,
transform: SurfaceTransform, alpha: CompositeAlpha, mode: PresentMode,
clipped: bool) -> Result<(Arc<Swapchain>, Vec<Arc<Image<Type2d, F, SwapchainAllocatedChunk>>>), OomError>
where F: FormatMarker
{
// FIXME: check that the parameters are supported
// FIXME: check that the device and the surface belong to the same instance
let vk = device.pointers();
assert!(usage.color_attachment);
let usage = usage.to_usage_bits();
let swapchain = unsafe {
let infos = vk::SwapchainCreateInfoKHR {
sType: vk::STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
pNext: ptr::null(),
flags: 0, // reserved
surface: surface.internal_object(),
minImageCount: num_images,
imageFormat: F::format() as u32,
imageColorSpace: vk::COLORSPACE_SRGB_NONLINEAR_KHR, // only available value
imageExtent: vk::Extent2D { width: dimensions[0], height: dimensions[1] },
imageArrayLayers: layers,
imageUsage: usage,
imageSharingMode: vk::SHARING_MODE_EXCLUSIVE, // FIXME:
queueFamilyIndexCount: 0, // FIXME:
pQueueFamilyIndices: ptr::null(), // FIXME:
preTransform: transform as u32,
compositeAlpha: alpha as u32,
presentMode: mode as u32,
clipped: if clipped { vk::TRUE } else { vk::FALSE },
oldSwapchain: 0, // TODO:
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateSwapchainKHR(device.internal_object(), &infos,
ptr::null(), &mut output)));
output
};
let swapchain = Arc::new(Swapchain {
device: device.clone(),
surface: surface.clone(),
swapchain: swapchain,
});
let images = unsafe {
let mut num = mem::uninitialized();
try!(check_errors(vk.GetSwapchainImagesKHR(device.internal_object(),
swapchain.swapchain, &mut num,
ptr::null_mut())));
let mut images = Vec::with_capacity(num as usize);
try!(check_errors(vk.GetSwapchainImagesKHR(device.internal_object(),
swapchain.swapchain, &mut num,
images.as_mut_ptr())));
images.set_len(num as usize);
images
};
let images = images.into_iter().map(|image| unsafe {
let mem = SwapchainAllocatedChunk { swapchain: swapchain.clone() };
Image::from_raw_unowned(&device, image, mem, usage, dimensions, (), 1)
}).collect::<Vec<_>>();
Ok((swapchain, images))
}
/// Tries to take ownership of an image in order to draw on it.
///
/// The function returns the index of the image in the array of images that was returned
/// when creating the swapchain.
///
/// If you try to draw on an image without acquiring it first, the execution will block. (TODO
/// behavior may change).
pub fn acquire_next_image(&self) -> Result<usize, AcquireError> {
let vk = self.device.pointers();
unsafe {
let mut out = mem::uninitialized();
let r = try!(check_errors(vk.AcquireNextImageKHR(self.device.internal_object(),
self.swapchain, 1000000, 0, 0, // TODO: timeout
&mut out)));
match r {
Success::Success => Ok(out as usize),
Success::Suboptimal => Ok(out as usize), // TODO: give that info to the user
Success::NotReady => Err(AcquireError::Timeout),
Success::Timeout => Err(AcquireError::Timeout),
s => panic!("unexpected success value: {:?}", s)
}
}
}
/// Presents an image on the screen.
///
/// The parameter is the same index as what `acquire_next_image` returned. The image must
/// have been acquired first.
///
/// The actual behavior depends on the present mode that you passed when creating the
/// swapchain.
pub fn present(&self, queue: &mut Queue, index: usize) -> Result<(), OomError> { // FIXME: wrong error
let vk = self.device.pointers();
let index = index as u32;
unsafe {
let mut result = mem::uninitialized();
let infos = vk::PresentInfoKHR {
sType: vk::STRUCTURE_TYPE_PRESENT_INFO_KHR,
pNext: ptr::null(),
waitSemaphoreCount: 0,
pWaitSemaphores: ptr::null(),
swapchainCount: 1,
pSwapchains: &self.swapchain,
pImageIndices: &index,
pResults: &mut result,
};
try!(check_errors(vk.QueuePresentKHR(queue.internal_object(), &infos)));
try!(check_errors(result));
Ok(())
}
}
}
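// Per-frame sketch of the acquire/present protocol described above (error
// handling and command buffer submission elided):
//
//     let index = try!(swapchain.acquire_next_image());
//     /* submit rendering work that targets `images[index]` */
//     try!(swapchain.present(&mut queue, index));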
impl Drop for Swapchain {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroySwapchainKHR(self.device.internal_object(), self.swapchain, ptr::null());
}
}
}
/// Error that can happen when calling `acquire_next_image`.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[repr(u32)]
pub enum AcquireError {
Timeout,
SurfaceLost,
OutOfDate,
}
impl error::Error for AcquireError {
#[inline]
fn description(&self) -> &str {
match *self {
AcquireError::Timeout => "no image is available for acquiring yet",
AcquireError::SurfaceLost => "the surface of this swapchain is no longer valid",
AcquireError::OutOfDate => "the swapchain needs to be recreated",
}
}
}
impl fmt::Display for AcquireError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "{}", error::Error::description(self))
}
}
impl From<Error> for AcquireError {
#[inline]
fn from(err: Error) -> AcquireError {
match err {
Error::SurfaceLost => AcquireError::SurfaceLost,
Error::OutOfDate => AcquireError::OutOfDate,
_ => panic!("unexpected error: {:?}", err)
}
}
}
/// "Dummy" object used for images that indicates that they were allocated as part of a swapchain.
pub struct SwapchainAllocatedChunk {
// the dummy object is also used to keep ownership of the swapchain
swapchain: Arc<Swapchain>,
}
// FIXME: needs correct synchronization as well
unsafe impl MemorySourceChunk for SwapchainAllocatedChunk {
#[inline]
fn properties(&self) -> ChunkProperties {
unreachable!()
}
#[inline]
fn requires_fence(&self) -> bool { false }
#[inline]
fn requires_semaphore(&self) -> bool { false }
#[inline]
fn may_alias(&self) -> bool { false }
#[inline]
fn gpu_access(&self, _: bool, _: usize, _: usize, _: &mut Queue, _: Option<Arc<Fence>>,
_: Option<Arc<Semaphore>>) -> Option<Arc<Semaphore>>
{
None
}
}

273 vulkano/src/sync.rs Normal file

@ -0,0 +1,273 @@
//! Synchronization primitives for Vulkan objects.
//!
//! In Vulkan, you have to manually ensure two things:
//!
//! - That a buffer or an image is not read and written simultaneously (just like memory
//!   accessed by the CPU).
//! - That writes to a buffer or an image are propagated to other queues by inserting memory
//! barriers.
//!
//! But don't worry; this is automatically enforced by this library (as long as you don't use
//! any unsafe function). See the `memory` module for more info.
//!
use std::mem;
use std::ptr;
use std::sync::Arc;
use device::Device;
use OomError;
use Success;
use VulkanObject;
use VulkanPointers;
use check_errors;
use vk;
/// A fence is used to know when a command buffer submission has finished its execution.
///
/// When a command buffer accesses a resource, you have to ensure that the CPU doesn't access
/// the same resource simultaneously (except for concurrent reads). Therefore in order to know
/// when the CPU can access a resource again, a fence has to be used.
pub struct Fence {
device: Arc<Device>,
fence: vk::Fence,
}
impl Fence {
/// Builds a new fence.
#[inline]
pub fn new(device: &Arc<Device>) -> Arc<Fence> {
Fence::new_impl(device, false)
}
/// Builds a new fence already in the "signaled" state.
#[inline]
pub fn signaled(device: &Arc<Device>) -> Arc<Fence> {
Fence::new_impl(device, true)
}
fn new_impl(device: &Arc<Device>, signaled: bool) -> Arc<Fence> {
let vk = device.pointers();
let fence = unsafe {
let infos = vk::FenceCreateInfo {
sType: vk::STRUCTURE_TYPE_FENCE_CREATE_INFO,
pNext: ptr::null(),
flags: if signaled { vk::FENCE_CREATE_SIGNALED_BIT } else { 0 },
};
let mut output = mem::uninitialized();
// TODO: the error should be returned instead of unwrapped
check_errors(vk.CreateFence(device.internal_object(), &infos, ptr::null(),
&mut output)).unwrap();
output
};
Arc::new(Fence {
device: device.clone(),
fence: fence,
})
}
/// Returns true if the fence is signaled.
#[inline]
pub fn ready(&self) -> Result<bool, OomError> {
unsafe {
let vk = self.device.pointers();
let result = try!(check_errors(vk.GetFenceStatus(self.device.internal_object(),
self.fence)));
match result {
Success::Success => Ok(true),
Success::NotReady => Ok(false),
_ => unreachable!()
}
}
}
/// Waits until the fence is signaled, or at least until the number of nanoseconds of the
/// timeout has elapsed.
///
/// Returns `Ok` if the fence is now signaled. Returns `Err` if the timeout was reached instead.
pub fn wait(&self, timeout_ns: u64) -> Result<(), ()> {
unsafe {
let vk = self.device.pointers();
vk.WaitForFences(self.device.internal_object(), 1, &self.fence, vk::TRUE, timeout_ns);
Ok(()) // FIXME:
}
}
/// Resets the fence.
#[inline]
pub fn reset(&self) {
unsafe {
let vk = self.device.pointers();
vk.ResetFences(self.device.internal_object(), 1, &self.fence);
}
}
/// Resets multiple fences at once.
///
/// # Panic
///
/// Panics if not all fences belong to the same device.
pub fn multi_reset<'a, I>(iter: I)
where I: IntoIterator<Item = &'a Fence>
{
let mut device = None;
let fences: Vec<vk::Fence> = iter.into_iter().map(|fence| {
match &mut device {
dev @ &mut None => *dev = Some(fence.device.clone()),
&mut Some(ref dev) if &**dev as *const Device == &*fence.device as *const Device => {},
_ => panic!("Tried to reset multiple fences that didn't belong to the same device"),
};
fence.fence
}).collect();
if let Some(device) = device {
unsafe {
let vk = device.pointers();
vk.ResetFences(device.internal_object(), fences.len() as u32, fences.as_ptr());
}
}
}
}
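// CPU-side lifecycle sketch (the submission that signals the fence goes
// through the command buffer API, which is out of scope here):
//
//     let fence = Fence::new(&device);
//     /* submit work that signals `fence` */
//     fence.wait(1_000_000_000).unwrap();   // block for at most one second
//     fence.reset();                        // the fence can now be reused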
impl Drop for Fence {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroyFence(self.device.internal_object(), self.fence, ptr::null());
}
}
}
/// Used to provide synchronization between command buffers during their execution.
///
/// It is similar to a fence, except that it is purely on the GPU side. The CPU can't query a
/// semaphore's status or wait for it to be signaled.
///
pub struct Semaphore {
device: Arc<Device>,
semaphore: vk::Semaphore,
}
impl Semaphore {
/// Builds a new semaphore.
#[inline]
pub fn new(device: &Arc<Device>) -> Result<Arc<Semaphore>, OomError> {
let vk = device.pointers();
let semaphore = unsafe {
// since the creation is constant, we use a `static` instead of a struct on the stack
static mut INFOS: vk::SemaphoreCreateInfo = vk::SemaphoreCreateInfo {
sType: vk::STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
pNext: 0 as *const _, // ptr::null()
flags: 0, // reserved
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateSemaphore(device.internal_object(), &INFOS,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(Semaphore {
device: device.clone(),
semaphore: semaphore,
}))
}
}
impl Drop for Semaphore {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroySemaphore(self.device.internal_object(), self.semaphore, ptr::null());
}
}
}
/// Used to block the GPU execution until an event on the CPU occurs.
///
/// Note that Vulkan implementations may have limits on how long a command buffer will wait for an
/// event to be signaled, in order to avoid interfering with progress of other clients of the GPU.
/// If the event isn't signaled within these limits, results are undefined and may include
/// device loss.
pub struct Event {
device: Arc<Device>,
event: vk::Event,
}
impl Event {
/// Builds a new event.
#[inline]
pub fn new(device: &Arc<Device>) -> Result<Arc<Event>, OomError> {
let vk = device.pointers();
let event = unsafe {
// since the creation is constant, we use a `static` instead of a struct on the stack
static mut INFOS: vk::EventCreateInfo = vk::EventCreateInfo {
sType: vk::STRUCTURE_TYPE_EVENT_CREATE_INFO,
pNext: 0 as *const _, //ptr::null(),
flags: 0, // reserved
};
let mut output = mem::uninitialized();
try!(check_errors(vk.CreateEvent(device.internal_object(), &INFOS,
ptr::null(), &mut output)));
output
};
Ok(Arc::new(Event {
device: device.clone(),
event: event,
}))
}
/// Returns true if the event is signaled.
#[inline]
pub fn signaled(&self) -> Result<bool, OomError> {
unsafe {
let vk = self.device.pointers();
let result = try!(check_errors(vk.GetEventStatus(self.device.internal_object(),
self.event)));
match result {
Success::EventSet => Ok(true),
Success::EventReset => Ok(false),
_ => unreachable!()
}
}
}
/// Changes the `Event` to the signaled state.
///
/// If a command buffer is waiting on this event, it is then unblocked.
#[inline]
pub fn set(&self) -> Result<(), OomError> {
unsafe {
let vk = self.device.pointers();
try!(check_errors(vk.SetEvent(self.device.internal_object(), self.event)).map(|_| ()));
Ok(())
}
}
/// Changes the `Event` to the unsignaled state.
#[inline]
pub fn reset(&self) -> Result<(), OomError> {
unsafe {
let vk = self.device.pointers();
try!(check_errors(vk.ResetEvent(self.device.internal_object(), self.event)).map(|_| ()));
Ok(())
}
}
}
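// CPU-side sketch: an `Event` toggles between its two states through `set`
// and `reset`, while `signaled` observes the current state:
//
//     let event = try!(Event::new(&device));
//     assert!(!try!(event.signaled()));
//     try!(event.set());
//     assert!(try!(event.signaled()));
//     try!(event.reset());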
impl Drop for Event {
#[inline]
fn drop(&mut self) {
unsafe {
let vk = self.device.pointers();
vk.DestroyEvent(self.device.internal_object(), self.event, ptr::null());
}
}
}

50 vulkano/src/version.rs Normal file

@ -0,0 +1,50 @@
/// Represents an API version of Vulkan.
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub struct Version {
/// Major version number.
pub major: u16,
/// Minor version number.
pub minor: u16,
/// Patch version number.
pub patch: u16,
}
// TODO: implement PartialOrd & Ord
impl Version {
/// Turns a version number given by Vulkan into a `Version` struct.
#[inline]
pub fn from_vulkan_version(value: u32) -> Version {
Version {
major: ((value & 0xffc00000) >> 22) as u16,
minor: ((value & 0x003ff000) >> 12) as u16,
patch: (value & 0x00000fff) as u16,
}
}
/// Turns a `Version` into a version number accepted by Vulkan.
///
/// # Panic
///
/// Panics if the values in the `Version` are out of the acceptable range.
#[inline]
pub fn into_vulkan_version(&self) -> u32 {
assert!(self.major <= 0x3ff);
assert!(self.minor <= 0x3ff);
assert!(self.patch <= 0xfff);
(self.major as u32) << 22 | (self.minor as u32) << 12 | (self.patch as u32)
}
}
#[cfg(test)]
mod tests {
use super::Version;
#[test]
fn test() {
let version = Version { major: 1, minor: 0, patch: 0 };
assert_eq!(version.into_vulkan_version(), 0x400000);
}
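#[test]
fn decode() {
// decoding sketch: the inverse of `into_vulkan_version` for the 1.0.0
// value checked above
let version = Version::from_vulkan_version(0x400000);
assert_eq!(version, Version { major: 1, minor: 0, patch: 0 });
}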
}