Renderpass take resource ownership (#5884)

* share timestamp write struct

* Make name of set_push_constants methods consistently plural

* remove lifetime bounds on resources passed into render pass (see the sketch after this list)

* first render pass resource ownership test

* introduce DynRenderPass & immediately create ArcCommands, taking ownership of resources passed at pass creation

* Use DynRenderPass in deno

* Separate active occlusion & pipeline statistics query

* resolving render/compute commands is now behind the `replay` feature

* add vertex & index buffer to ownership test

* test for pipeline statistics query

* add occlusion query set to pass resource test

* add tests for resource ownership of render pass query timestamps

* RenderPass can now be made 'static just like ComputePass. Add respective test

* Extend encoder_operations_fail_while_pass_alive test to also check encoder locking errors with render passes

* improve changelog entry on lifetime bounds
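
In practice, the new ownership rules allow patterns like the following rough sketch, where resource handles are dropped while the pass is still being recorded (names such as `encoder`, `view`, `pipeline`, and `vertex_buffer` are illustrative and assumed to be set up elsewhere):

```rust
// Sketch only: once set, the pass keeps its resources alive internally.
let mut rpass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
    color_attachments: &[Some(wgpu::RenderPassColorAttachment {
        view: &view,
        resolve_target: None,
        ops: wgpu::Operations::default(),
    })],
    ..Default::default()
});
rpass.set_pipeline(&pipeline);
rpass.set_vertex_buffer(0, vertex_buffer.slice(..));
// Dropping the handles here is fine; the render pass took ownership when they were set.
drop(pipeline);
drop(vertex_buffer);
rpass.draw(0..3, 0..1);
```
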
Andreas Reich 2024-07-01 18:36:24 +02:00 committed by GitHub
parent c9a2d972ad
commit 0a76c0fa84
21 changed files with 2151 additions and 802 deletions


@ -41,24 +41,38 @@ Bottom level categories:
### Major Changes
#### Remove lifetime bounds on `wgpu::ComputePass`
#### Lifetime bounds on `wgpu::RenderPass` & `wgpu::ComputePass`
TODO(wumpf): This is still work in progress. Should write a bit more about it. Also will very likely extend to `wgpu::RenderPass` before release.
`wgpu::RenderPass` & `wgpu::ComputePass` recording methods (e.g. `wgpu::RenderPass::set_pipeline`) no longer impose a lifetime constraint on objects passed to a pass (pipelines, buffers, bind groups, query sets, etc.).
`wgpu::ComputePass` recording methods (e.g. `wgpu::ComputePass:set_render_pipeline`) no longer impose a lifetime constraint passed in resources.
This means the following pattern works now as expected:
```rust
let mut pipelines: Vec<wgpu::ComputePipeline> = ...;
// ...
let mut cpass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
cpass.set_pipeline(&pipelines[123]);
// Mutating the pipeline container requires mutable access to `pipelines` while one of its pipelines is in use.
pipelines.push(/* ... */);
// Continue pass recording.
cpass.set_bind_group(...);
```
Previously, a pipeline (or any other resource) set on a pass had to outlive the pass recording, which often rippled through wider systems:
users had to prove to the borrow checker that the `Vec<wgpu::ComputePipeline>` (or similar container)
was not accessed mutably for the duration of pass recording.
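
For illustration, a minimal sketch of creating a resource on the fly and handing it to a pass; `device`, `encoder`, `compute_pipeline`, a bind group `layout`, and a storage `buffer` are assumed to exist:

```rust
let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
    label: None,
    layout: &layout,
    entries: &[wgpu::BindGroupEntry {
        binding: 0,
        resource: buffer.as_entire_binding(),
    }],
});
let mut cpass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
cpass.set_pipeline(&compute_pipeline);
cpass.set_bind_group(0, &bind_group, &[]);
// The bind group can be dropped right away; the pass keeps it alive.
drop(bind_group);
cpass.dispatch_workgroups(1, 1, 1);
```
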
Furthermore, you can now opt out of `wgpu::ComputePass`'s lifetime dependency on its parent `wgpu::CommandEncoder` using `wgpu::ComputePass::forget_lifetime`:
Furthermore, you can now opt out of `wgpu::RenderPass`/`wgpu::ComputePass`'s lifetime dependency on its parent `wgpu::CommandEncoder` using `wgpu::RenderPass::forget_lifetime`/`wgpu::ComputePass::forget_lifetime`:
```rust
fn independent_cpass<'enc>(encoder: &'enc mut wgpu::CommandEncoder) -> wgpu::ComputePass<'static> {
let cpass: wgpu::ComputePass<'enc> = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
cpass.forget_lifetime()
}
```
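
The render pass counterpart looks analogous. A sketch, assuming `view` is a `wgpu::TextureView` usable as a color attachment (the function name mirrors the compute example above):

```rust
fn independent_rpass<'enc>(
    encoder: &'enc mut wgpu::CommandEncoder,
    view: &wgpu::TextureView,
) -> wgpu::RenderPass<'static> {
    let rpass: wgpu::RenderPass<'enc> = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
        color_attachments: &[Some(wgpu::RenderPassColorAttachment {
            view,
            resolve_target: None,
            ops: wgpu::Operations::default(),
        })],
        ..Default::default()
    });
    rpass.forget_lifetime()
}
```
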
⚠️ As long as a `wgpu::ComputePass` is pending for a given `wgpu::CommandEncoder`, creation of a compute or render pass is an error and invalidates the `wgpu::CommandEncoder`.
This is very useful for library authors, but opens up an easy way for incorrect use, so use with care.
`forget_lifetime` is zero overhead and has no side effects on pass recording.
⚠️ As long as a `wgpu::RenderPass`/`wgpu::ComputePass` is pending for a given `wgpu::CommandEncoder`, creation of a compute or render pass is an error and invalidates the `wgpu::CommandEncoder`.
`forget_lifetime` can be very useful for library authors, but opens up an easy way for incorrect use, so use with care.
This method adds no overhead and has no side effects on pass recording.
By @wumpf in [#5569](https://github.com/gfx-rs/wgpu/pull/5569), [#5575](https://github.com/gfx-rs/wgpu/pull/5575), [#5620](https://github.com/gfx-rs/wgpu/pull/5620), [#5768](https://github.com/gfx-rs/wgpu/pull/5768) (together with @kpreid), [#5671](https://github.com/gfx-rs/wgpu/pull/5671).
By @wumpf in [#5569](https://github.com/gfx-rs/wgpu/pull/5569), [#5575](https://github.com/gfx-rs/wgpu/pull/5575), [#5620](https://github.com/gfx-rs/wgpu/pull/5620), [#5768](https://github.com/gfx-rs/wgpu/pull/5768) (together with @kpreid), [#5671](https://github.com/gfx-rs/wgpu/pull/5671), [#5794](https://github.com/gfx-rs/wgpu/pull/5794), [#5884](https://github.com/gfx-rs/wgpu/pull/5884).
#### Querying shader compilation errors


@ -186,7 +186,7 @@ pub fn op_webgpu_command_encoder_begin_render_pass(
.get::<WebGpuQuerySet>(timestamp_writes.query_set)?;
let query_set = query_set_resource.1;
Some(wgpu_core::command::RenderPassTimestampWrites {
Some(wgpu_core::command::PassTimestampWrites {
query_set,
beginning_of_pass_write_index: timestamp_writes.beginning_of_pass_write_index,
end_of_pass_write_index: timestamp_writes.end_of_pass_write_index,
@ -200,6 +200,8 @@ pub fn op_webgpu_command_encoder_begin_render_pass(
.transpose()?
.map(|query_set| query_set.1);
let instance = state.borrow::<super::Instance>();
let command_encoder = &command_encoder_resource.1;
let descriptor = wgpu_core::command::RenderPassDescriptor {
label: Some(label),
color_attachments: Cow::from(color_attachments),
@ -208,15 +210,14 @@ pub fn op_webgpu_command_encoder_begin_render_pass(
occlusion_query_set: occlusion_query_set_resource,
};
let render_pass = wgpu_core::command::RenderPass::new(command_encoder_resource.1, &descriptor);
let (render_pass, error) = gfx_select!(command_encoder => instance.command_encoder_create_render_pass_dyn(*command_encoder, &descriptor));
let rid = state
.resource_table
.add(super::render_pass::WebGpuRenderPass(RefCell::new(
render_pass,
)));
Ok(WebGpuResult::rid(rid))
Ok(WebGpuResult::rid_err(rid, error))
}
#[derive(Deserialize)]
@ -245,7 +246,7 @@ pub fn op_webgpu_command_encoder_begin_compute_pass(
.get::<WebGpuQuerySet>(timestamp_writes.query_set)?;
let query_set = query_set_resource.1;
Some(wgpu_core::command::ComputePassTimestampWrites {
Some(wgpu_core::command::PassTimestampWrites {
query_set,
beginning_of_pass_write_index: timestamp_writes.beginning_of_pass_write_index,
end_of_pass_write_index: timestamp_writes.end_of_pass_write_index,


@ -9,11 +9,10 @@ use deno_core::ResourceId;
use serde::Deserialize;
use std::borrow::Cow;
use std::cell::RefCell;
use wgpu_core::global::Global;
use super::error::WebGpuResult;
pub(crate) struct WebGpuRenderPass(pub(crate) RefCell<wgpu_core::command::RenderPass>);
pub(crate) struct WebGpuRenderPass(pub(crate) RefCell<Box<dyn wgpu_core::command::DynRenderPass>>);
impl Resource for WebGpuRenderPass {
fn name(&self) -> Cow<str> {
"webGPURenderPass".into()
@ -42,8 +41,8 @@ pub fn op_webgpu_render_pass_set_viewport(
.resource_table
.get::<WebGpuRenderPass>(args.render_pass_rid)?;
state.borrow::<Global>().render_pass_set_viewport(
&mut render_pass_resource.0.borrow_mut(),
render_pass_resource.0.borrow_mut().set_viewport(
state.borrow(),
args.x,
args.y,
args.width,
@ -69,13 +68,10 @@ pub fn op_webgpu_render_pass_set_scissor_rect(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state.borrow::<Global>().render_pass_set_scissor_rect(
&mut render_pass_resource.0.borrow_mut(),
x,
y,
width,
height,
)?;
render_pass_resource
.0
.borrow_mut()
.set_scissor_rect(state.borrow(), x, y, width, height)?;
Ok(WebGpuResult::empty())
}
@ -91,9 +87,10 @@ pub fn op_webgpu_render_pass_set_blend_constant(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state
.borrow::<Global>()
.render_pass_set_blend_constant(&mut render_pass_resource.0.borrow_mut(), &color)?;
render_pass_resource
.0
.borrow_mut()
.set_blend_constant(state.borrow(), color)?;
Ok(WebGpuResult::empty())
}
@ -109,9 +106,10 @@ pub fn op_webgpu_render_pass_set_stencil_reference(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state
.borrow::<Global>()
.render_pass_set_stencil_reference(&mut render_pass_resource.0.borrow_mut(), reference)?;
render_pass_resource
.0
.borrow_mut()
.set_stencil_reference(state.borrow(), reference)?;
Ok(WebGpuResult::empty())
}
@ -127,9 +125,10 @@ pub fn op_webgpu_render_pass_begin_occlusion_query(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state
.borrow::<Global>()
.render_pass_begin_occlusion_query(&mut render_pass_resource.0.borrow_mut(), query_index)?;
render_pass_resource
.0
.borrow_mut()
.begin_occlusion_query(state.borrow(), query_index)?;
Ok(WebGpuResult::empty())
}
@ -144,9 +143,10 @@ pub fn op_webgpu_render_pass_end_occlusion_query(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state
.borrow::<Global>()
.render_pass_end_occlusion_query(&mut render_pass_resource.0.borrow_mut())?;
render_pass_resource
.0
.borrow_mut()
.end_occlusion_query(state.borrow())?;
Ok(WebGpuResult::empty())
}
@ -172,9 +172,10 @@ pub fn op_webgpu_render_pass_execute_bundles(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state
.borrow::<Global>()
.render_pass_execute_bundles(&mut render_pass_resource.0.borrow_mut(), &bundles)?;
render_pass_resource
.0
.borrow_mut()
.execute_bundles(state.borrow(), &bundles)?;
Ok(WebGpuResult::empty())
}
@ -189,12 +190,7 @@ pub fn op_webgpu_render_pass_end(
.resource_table
.take::<WebGpuRenderPass>(render_pass_rid)?;
// TODO: Just like parent_id ComputePass, there's going to be DynComputePass soon which will eliminate the need of doing gfx_select here.
let instance = state.borrow::<Global>();
let parent_id = render_pass_resource.0.borrow().parent_id();
gfx_select!(parent_id => instance.render_pass_end(
&mut render_pass_resource.0.borrow_mut()
))?;
render_pass_resource.0.borrow_mut().end(state.borrow())?;
Ok(WebGpuResult::empty())
}
@ -226,8 +222,8 @@ pub fn op_webgpu_render_pass_set_bind_group(
let dynamic_offsets_data: &[u32] = &dynamic_offsets_data[start..start + len];
state.borrow::<Global>().render_pass_set_bind_group(
&mut render_pass_resource.0.borrow_mut(),
render_pass_resource.0.borrow_mut().set_bind_group(
state.borrow(),
index,
bind_group_resource.1,
dynamic_offsets_data,
@ -247,8 +243,8 @@ pub fn op_webgpu_render_pass_push_debug_group(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state.borrow::<Global>().render_pass_push_debug_group(
&mut render_pass_resource.0.borrow_mut(),
render_pass_resource.0.borrow_mut().push_debug_group(
state.borrow(),
group_label,
0, // wgpu#975
)?;
@ -266,9 +262,10 @@ pub fn op_webgpu_render_pass_pop_debug_group(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state
.borrow::<Global>()
.render_pass_pop_debug_group(&mut render_pass_resource.0.borrow_mut())?;
render_pass_resource
.0
.borrow_mut()
.pop_debug_group(state.borrow())?;
Ok(WebGpuResult::empty())
}
@ -284,8 +281,8 @@ pub fn op_webgpu_render_pass_insert_debug_marker(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state.borrow::<Global>().render_pass_insert_debug_marker(
&mut render_pass_resource.0.borrow_mut(),
render_pass_resource.0.borrow_mut().insert_debug_marker(
state.borrow(),
marker_label,
0, // wgpu#975
)?;
@ -307,10 +304,10 @@ pub fn op_webgpu_render_pass_set_pipeline(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state.borrow::<Global>().render_pass_set_pipeline(
&mut render_pass_resource.0.borrow_mut(),
render_pipeline_resource.1,
)?;
render_pass_resource
.0
.borrow_mut()
.set_pipeline(state.borrow(), render_pipeline_resource.1)?;
Ok(WebGpuResult::empty())
}
@ -341,8 +338,8 @@ pub fn op_webgpu_render_pass_set_index_buffer(
None
};
state.borrow::<Global>().render_pass_set_index_buffer(
&mut render_pass_resource.0.borrow_mut(),
render_pass_resource.0.borrow_mut().set_index_buffer(
state.borrow(),
buffer_resource.1,
index_format,
offset,
@ -378,8 +375,8 @@ pub fn op_webgpu_render_pass_set_vertex_buffer(
None
};
state.borrow::<Global>().render_pass_set_vertex_buffer(
&mut render_pass_resource.0.borrow_mut(),
render_pass_resource.0.borrow_mut().set_vertex_buffer(
state.borrow(),
slot,
buffer_resource.1,
offset,
@ -403,8 +400,8 @@ pub fn op_webgpu_render_pass_draw(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state.borrow::<Global>().render_pass_draw(
&mut render_pass_resource.0.borrow_mut(),
render_pass_resource.0.borrow_mut().draw(
state.borrow(),
vertex_count,
instance_count,
first_vertex,
@ -429,8 +426,8 @@ pub fn op_webgpu_render_pass_draw_indexed(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state.borrow::<Global>().render_pass_draw_indexed(
&mut render_pass_resource.0.borrow_mut(),
render_pass_resource.0.borrow_mut().draw_indexed(
state.borrow(),
index_count,
instance_count,
first_index,
@ -456,8 +453,8 @@ pub fn op_webgpu_render_pass_draw_indirect(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state.borrow::<Global>().render_pass_draw_indirect(
&mut render_pass_resource.0.borrow_mut(),
render_pass_resource.0.borrow_mut().draw_indirect(
state.borrow(),
buffer_resource.1,
indirect_offset,
)?;
@ -480,8 +477,8 @@ pub fn op_webgpu_render_pass_draw_indexed_indirect(
.resource_table
.get::<WebGpuRenderPass>(render_pass_rid)?;
state.borrow::<Global>().render_pass_draw_indexed_indirect(
&mut render_pass_resource.0.borrow_mut(),
render_pass_resource.0.borrow_mut().draw_indexed_indirect(
state.borrow(),
buffer_resource.1,
indirect_offset,
)?;


@ -111,14 +111,14 @@ async fn compute_pass_query_set_ownership_pipeline_statistics(ctx: TestingContex
}
#[gpu_test]
static COMPUTE_PASS_QUERY_TIMESTAMPS: GpuTestConfiguration =
static COMPUTE_PASS_QUERY_SET_OWNERSHIP_TIMESTAMPS: GpuTestConfiguration =
GpuTestConfiguration::new()
.parameters(TestParameters::default().test_features_limits().features(
wgpu::Features::TIMESTAMP_QUERY | wgpu::Features::TIMESTAMP_QUERY_INSIDE_PASSES,
))
.run_async(compute_pass_query_timestamps);
.run_async(compute_pass_query_set_ownership_timestamps);
async fn compute_pass_query_timestamps(ctx: TestingContext) {
async fn compute_pass_query_set_ownership_timestamps(ctx: TestingContext) {
let ResourceSetup {
gpu_buffer,
cpu_buffer,


@ -77,18 +77,16 @@ static DROP_ENCODER_AFTER_ERROR: GpuTestConfiguration = GpuTestConfiguration::ne
drop(encoder);
});
// TODO: This should also apply to render passes once the lifetime bound is lifted.
#[gpu_test]
static ENCODER_OPERATIONS_FAIL_WHILE_COMPUTE_PASS_ALIVE: GpuTestConfiguration =
GpuTestConfiguration::new()
.parameters(TestParameters::default().features(
wgpu::Features::CLEAR_TEXTURE
| wgpu::Features::TIMESTAMP_QUERY
| wgpu::Features::TIMESTAMP_QUERY_INSIDE_ENCODERS,
))
.run_sync(encoder_operations_fail_while_compute_pass_alive);
static ENCODER_OPERATIONS_FAIL_WHILE_PASS_ALIVE: GpuTestConfiguration = GpuTestConfiguration::new()
.parameters(TestParameters::default().features(
wgpu::Features::CLEAR_TEXTURE
| wgpu::Features::TIMESTAMP_QUERY
| wgpu::Features::TIMESTAMP_QUERY_INSIDE_ENCODERS,
))
.run_sync(encoder_operations_fail_while_pass_alive);
fn encoder_operations_fail_while_compute_pass_alive(ctx: TestingContext) {
fn encoder_operations_fail_while_pass_alive(ctx: TestingContext) {
let buffer_source = ctx
.device
.create_buffer_init(&wgpu::util::BufferInitDescriptor {
@ -129,6 +127,23 @@ fn encoder_operations_fail_while_compute_pass_alive(ctx: TestingContext) {
label: None,
});
let target_desc = wgpu::TextureDescriptor {
label: Some("target_tex"),
size: wgpu::Extent3d {
width: 4,
height: 4,
depth_or_array_layers: 1,
},
mip_level_count: 1,
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: wgpu::TextureFormat::Bgra8UnormSrgb,
usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
view_formats: &[wgpu::TextureFormat::Bgra8UnormSrgb],
};
let target_tex = ctx.device.create_texture(&target_desc);
let color_attachment_view = target_tex.create_view(&wgpu::TextureViewDescriptor::default());
#[allow(clippy::type_complexity)]
let recording_ops: Vec<(_, Box<dyn Fn(&mut CommandEncoder)>)> = vec![
(
@ -252,55 +267,81 @@ fn encoder_operations_fail_while_compute_pass_alive(ctx: TestingContext) {
),
];
for (op_name, op) in recording_ops.iter() {
let mut encoder = ctx
.device
.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
let pass = encoder
.begin_compute_pass(&wgpu::ComputePassDescriptor::default())
.forget_lifetime();
ctx.device.push_error_scope(wgpu::ErrorFilter::Validation);
log::info!("Testing operation {} on a locked command encoder", op_name);
fail(
&ctx.device,
|| op(&mut encoder),
Some("Command encoder is locked"),
);
// Drop the pass - this also fails now since the encoder is invalid:
fail(
&ctx.device,
|| drop(pass),
Some("Command encoder is invalid"),
);
// Also, it's not possible to create a new pass on the encoder:
fail(
&ctx.device,
|| encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default()),
Some("Command encoder is invalid"),
);
#[derive(Clone, Copy, Debug)]
enum PassType {
Compute,
Render,
}
// Test encoder finishing separately since it consumes the encoder and doesn't fit above pattern.
{
let mut encoder = ctx
.device
.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
let pass = encoder
.begin_compute_pass(&wgpu::ComputePassDescriptor::default())
.forget_lifetime();
fail(
&ctx.device,
|| encoder.finish(),
Some("Command encoder is locked"),
);
fail(
&ctx.device,
|| drop(pass),
Some("Command encoder is invalid"),
);
let create_pass = |encoder: &mut wgpu::CommandEncoder, pass_type| -> Box<dyn std::any::Any> {
match pass_type {
PassType::Compute => Box::new(
encoder
.begin_compute_pass(&wgpu::ComputePassDescriptor::default())
.forget_lifetime(),
),
PassType::Render => Box::new(
encoder
.begin_render_pass(&wgpu::RenderPassDescriptor {
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &color_attachment_view,
resolve_target: None,
ops: wgpu::Operations::default(),
})],
..Default::default()
})
.forget_lifetime(),
),
}
};
for &pass_type in [PassType::Compute, PassType::Render].iter() {
for (op_name, op) in recording_ops.iter() {
let mut encoder = ctx
.device
.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
let pass = create_pass(&mut encoder, pass_type);
ctx.device.push_error_scope(wgpu::ErrorFilter::Validation);
log::info!("Testing operation {op_name:?} on a locked command encoder while a {pass_type:?} pass is active");
fail(
&ctx.device,
|| op(&mut encoder),
Some("Command encoder is locked"),
);
// Drop the pass - this also fails now since the encoder is invalid:
fail(
&ctx.device,
|| drop(pass),
Some("Command encoder is invalid"),
);
// Also, it's not possible to create a new pass on the encoder:
fail(
&ctx.device,
|| encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default()),
Some("Command encoder is invalid"),
);
}
// Test encoder finishing separately since it consumes the encoder and doesn't fit the above pattern.
{
let mut encoder = ctx
.device
.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
let pass = create_pass(&mut encoder, pass_type);
fail(
&ctx.device,
|| encoder.finish(),
Some("Command encoder is locked"),
);
fail(
&ctx.device,
|| drop(pass),
Some("Command encoder is invalid"),
);
}
}
}


@ -0,0 +1,552 @@
//! Tests that render passes take ownership of the resources associated with them.
//! I.e. once a resource is passed into a render pass, it can be dropped.
//!
//! TODO: Methods that take resources that weren't tested here:
//! * rpass.draw_indexed_indirect(indirect_buffer, indirect_offset)
//! * rpass.execute_bundles(render_bundles)
//! * rpass.multi_draw_indirect(indirect_buffer, indirect_offset, count)
//! * rpass.multi_draw_indexed_indirect(indirect_buffer, indirect_offset, count)
//! * rpass.multi_draw_indirect_count
//! * rpass.multi_draw_indexed_indirect_count
//!
use std::num::NonZeroU64;
use wgpu::util::DeviceExt as _;
use wgpu_test::{gpu_test, valid, GpuTestConfiguration, TestParameters, TestingContext};
// Minimal shader with buffer based side effect - only needed to check whether the render pass has executed at all.
const SHADER_SRC: &str = "
@group(0) @binding(0)
var<storage, read_write> buffer: array<vec4f>;
var<private> positions: array<vec2f, 3> = array<vec2f, 3>(
vec2f(-1.0, -3.0),
vec2f(-1.0, 1.0),
vec2f(3.0, 1.0)
);
@vertex
fn vs_main(@builtin(vertex_index) vertex_index: u32) -> @builtin(position) vec4<f32> {
return vec4f(positions[vertex_index], 0.0, 1.0);
}
@fragment
fn fs_main() -> @location(0) vec4<f32> {
buffer[0] *= 2.0;
return vec4<f32>(1.0, 0.0, 1.0, 1.0);
}";
#[gpu_test]
static RENDER_PASS_RESOURCE_OWNERSHIP: GpuTestConfiguration = GpuTestConfiguration::new()
.parameters(TestParameters::default().test_features_limits())
.run_async(render_pass_resource_ownership);
async fn render_pass_resource_ownership(ctx: TestingContext) {
let ResourceSetup {
gpu_buffer,
cpu_buffer,
buffer_size,
indirect_buffer,
vertex_buffer,
index_buffer,
bind_group,
pipeline,
color_attachment_view,
color_attachment_resolve_view,
depth_stencil_view,
occlusion_query_set,
} = resource_setup(&ctx);
let mut encoder = ctx
.device
.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
{
let mut rpass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("render_pass"),
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &color_attachment_view,
resolve_target: Some(&color_attachment_resolve_view),
ops: wgpu::Operations::default(),
})],
depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
view: &depth_stencil_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
}),
timestamp_writes: None,
occlusion_query_set: Some(&occlusion_query_set),
});
// Drop render pass attachments right away.
drop(color_attachment_view);
drop(color_attachment_resolve_view);
drop(depth_stencil_view);
rpass.set_pipeline(&pipeline);
rpass.set_bind_group(0, &bind_group, &[]);
rpass.set_vertex_buffer(0, vertex_buffer.slice(..));
rpass.set_index_buffer(index_buffer.slice(..), wgpu::IndexFormat::Uint32);
rpass.begin_occlusion_query(0);
rpass.draw_indirect(&indirect_buffer, 0);
rpass.end_occlusion_query();
// Now drop all resources we set. Then do a device poll to make sure the resources are really not dropped too early, no matter what.
drop(pipeline);
drop(bind_group);
drop(indirect_buffer);
drop(vertex_buffer);
drop(index_buffer);
drop(occlusion_query_set);
ctx.async_poll(wgpu::Maintain::wait())
.await
.panic_on_timeout();
}
assert_render_pass_executed_normally(encoder, gpu_buffer, cpu_buffer, buffer_size, ctx).await;
}
#[gpu_test]
static RENDER_PASS_QUERY_SET_OWNERSHIP_PIPELINE_STATISTICS: GpuTestConfiguration =
GpuTestConfiguration::new()
.parameters(
TestParameters::default()
.test_features_limits()
.features(wgpu::Features::PIPELINE_STATISTICS_QUERY),
)
.run_async(render_pass_query_set_ownership_pipeline_statistics);
async fn render_pass_query_set_ownership_pipeline_statistics(ctx: TestingContext) {
let ResourceSetup {
gpu_buffer,
cpu_buffer,
buffer_size,
vertex_buffer,
index_buffer,
bind_group,
pipeline,
color_attachment_view,
depth_stencil_view,
..
} = resource_setup(&ctx);
let query_set = ctx.device.create_query_set(&wgpu::QuerySetDescriptor {
label: Some("query_set"),
ty: wgpu::QueryType::PipelineStatistics(
wgpu::PipelineStatisticsTypes::VERTEX_SHADER_INVOCATIONS,
),
count: 1,
});
let mut encoder = ctx
.device
.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
{
let mut rpass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &color_attachment_view,
resolve_target: None,
ops: wgpu::Operations::default(),
})],
depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
view: &depth_stencil_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
}),
..Default::default()
});
rpass.set_pipeline(&pipeline);
rpass.set_bind_group(0, &bind_group, &[]);
rpass.set_vertex_buffer(0, vertex_buffer.slice(..));
rpass.set_index_buffer(index_buffer.slice(..), wgpu::IndexFormat::Uint32);
rpass.begin_pipeline_statistics_query(&query_set, 0);
rpass.draw(0..3, 0..1);
rpass.end_pipeline_statistics_query();
// Drop the query set. Then do a device poll to make sure it's not dropped too early, no matter what.
drop(query_set);
ctx.async_poll(wgpu::Maintain::wait())
.await
.panic_on_timeout();
}
assert_render_pass_executed_normally(encoder, gpu_buffer, cpu_buffer, buffer_size, ctx).await;
}
#[gpu_test]
static RENDER_PASS_QUERY_SET_OWNERSHIP_TIMESTAMPS: GpuTestConfiguration =
GpuTestConfiguration::new()
.parameters(TestParameters::default().test_features_limits().features(
wgpu::Features::TIMESTAMP_QUERY | wgpu::Features::TIMESTAMP_QUERY_INSIDE_PASSES,
))
.run_async(render_pass_query_set_ownership_timestamps);
async fn render_pass_query_set_ownership_timestamps(ctx: TestingContext) {
let ResourceSetup {
gpu_buffer,
cpu_buffer,
buffer_size,
color_attachment_view,
depth_stencil_view,
pipeline,
bind_group,
vertex_buffer,
index_buffer,
..
} = resource_setup(&ctx);
let query_set_timestamp_writes = ctx.device.create_query_set(&wgpu::QuerySetDescriptor {
label: Some("query_set_timestamp_writes"),
ty: wgpu::QueryType::Timestamp,
count: 2,
});
let query_set_write_timestamp = ctx.device.create_query_set(&wgpu::QuerySetDescriptor {
label: Some("query_set_write_timestamp"),
ty: wgpu::QueryType::Timestamp,
count: 1,
});
let mut encoder = ctx
.device
.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
{
let mut rpass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &color_attachment_view,
resolve_target: None,
ops: wgpu::Operations::default(),
})],
depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
view: &depth_stencil_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
}),
timestamp_writes: Some(wgpu::RenderPassTimestampWrites {
query_set: &query_set_timestamp_writes,
beginning_of_pass_write_index: Some(0),
end_of_pass_write_index: Some(1),
}),
..Default::default()
});
rpass.write_timestamp(&query_set_write_timestamp, 0);
rpass.set_pipeline(&pipeline);
rpass.set_bind_group(0, &bind_group, &[]);
rpass.set_vertex_buffer(0, vertex_buffer.slice(..));
rpass.set_index_buffer(index_buffer.slice(..), wgpu::IndexFormat::Uint32);
rpass.draw(0..3, 0..1);
// Drop the query sets. Then do a device poll to make sure they're not dropped too early, no matter what.
drop(query_set_timestamp_writes);
drop(query_set_write_timestamp);
ctx.async_poll(wgpu::Maintain::wait())
.await
.panic_on_timeout();
}
assert_render_pass_executed_normally(encoder, gpu_buffer, cpu_buffer, buffer_size, ctx).await;
}
#[gpu_test]
static RENDER_PASS_KEEP_ENCODER_ALIVE: GpuTestConfiguration = GpuTestConfiguration::new()
.parameters(TestParameters::default().test_features_limits())
.run_async(render_pass_keep_encoder_alive);
async fn render_pass_keep_encoder_alive(ctx: TestingContext) {
let ResourceSetup {
bind_group,
vertex_buffer,
index_buffer,
pipeline,
color_attachment_view,
depth_stencil_view,
..
} = resource_setup(&ctx);
let mut encoder = ctx
.device
.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
let rpass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
color_attachments: &[Some(wgpu::RenderPassColorAttachment {
view: &color_attachment_view,
resolve_target: None,
ops: wgpu::Operations::default(),
})],
depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
view: &depth_stencil_view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: wgpu::StoreOp::Store,
}),
stencil_ops: None,
}),
..Default::default()
});
// Now drop the encoder - it is kept alive by the render pass.
// To do so, we have to make the render pass forget the lifetime constraint first.
let mut rpass = rpass.forget_lifetime();
drop(encoder);
ctx.async_poll(wgpu::Maintain::wait())
.await
.panic_on_timeout();
// Record a draw command.
rpass.set_pipeline(&pipeline);
rpass.set_bind_group(0, &bind_group, &[]);
rpass.set_vertex_buffer(0, vertex_buffer.slice(..));
rpass.set_index_buffer(index_buffer.slice(..), wgpu::IndexFormat::Uint32);
rpass.draw(0..3, 0..1);
// Dropping the pass will still execute the pass, even though there's no way to submit it.
// Ideally, this would log an error, but the encoder is not dropped until the render pass is dropped,
// making this a valid operation.
// (If instead the encoder was explicitly destroyed or finished, this would be an error.)
valid(&ctx.device, || drop(rpass));
}
async fn assert_render_pass_executed_normally(
mut encoder: wgpu::CommandEncoder,
gpu_buffer: wgpu::Buffer,
cpu_buffer: wgpu::Buffer,
buffer_size: u64,
ctx: TestingContext,
) {
encoder.copy_buffer_to_buffer(&gpu_buffer, 0, &cpu_buffer, 0, buffer_size);
ctx.queue.submit([encoder.finish()]);
cpu_buffer.slice(..).map_async(wgpu::MapMode::Read, |_| ());
ctx.async_poll(wgpu::Maintain::wait())
.await
.panic_on_timeout();
let data = cpu_buffer.slice(..).get_mapped_range();
let floats: &[f32] = bytemuck::cast_slice(&data);
assert!(floats[0] >= 2.0);
assert!(floats[1] >= 4.0);
assert!(floats[2] >= 6.0);
assert!(floats[3] >= 8.0);
}
// Setup ------------------------------------------------------------
struct ResourceSetup {
gpu_buffer: wgpu::Buffer,
cpu_buffer: wgpu::Buffer,
buffer_size: u64,
indirect_buffer: wgpu::Buffer,
vertex_buffer: wgpu::Buffer,
index_buffer: wgpu::Buffer,
bind_group: wgpu::BindGroup,
pipeline: wgpu::RenderPipeline,
color_attachment_view: wgpu::TextureView,
color_attachment_resolve_view: wgpu::TextureView,
depth_stencil_view: wgpu::TextureView,
occlusion_query_set: wgpu::QuerySet,
}
fn resource_setup(ctx: &TestingContext) -> ResourceSetup {
let sm = ctx
.device
.create_shader_module(wgpu::ShaderModuleDescriptor {
label: Some("shader"),
source: wgpu::ShaderSource::Wgsl(SHADER_SRC.into()),
});
let buffer_size = 4 * std::mem::size_of::<f32>() as u64;
let bgl = ctx
.device
.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: Some("bind_group_layout"),
entries: &[wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Storage { read_only: false },
has_dynamic_offset: false,
min_binding_size: NonZeroU64::new(buffer_size),
},
count: None,
}],
});
let gpu_buffer = ctx
.device
.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("gpu_buffer"),
usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_SRC,
contents: bytemuck::bytes_of(&[1.0_f32, 2.0, 3.0, 4.0]),
});
let cpu_buffer = ctx.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("cpu_buffer"),
size: buffer_size,
usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
mapped_at_creation: false,
});
let vertex_count = 3;
let indirect_buffer = ctx
.device
.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("indirect_buffer"),
usage: wgpu::BufferUsages::INDIRECT,
contents: wgpu::util::DrawIndirectArgs {
vertex_count,
instance_count: 1,
first_vertex: 0,
first_instance: 0,
}
.as_bytes(),
});
let vertex_buffer = ctx.device.create_buffer(&wgpu::BufferDescriptor {
label: Some("vertex_buffer"),
usage: wgpu::BufferUsages::VERTEX,
size: std::mem::size_of::<u32>() as u64 * vertex_count as u64,
mapped_at_creation: false,
});
let index_buffer = ctx
.device
.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("index_buffer"),
usage: wgpu::BufferUsages::INDEX,
contents: bytemuck::cast_slice(&[0_u32, 1, 2]),
});
let bind_group = ctx.device.create_bind_group(&wgpu::BindGroupDescriptor {
label: Some("bind_group"),
layout: &bgl,
entries: &[wgpu::BindGroupEntry {
binding: 0,
resource: gpu_buffer.as_entire_binding(),
}],
});
let pipeline_layout = ctx
.device
.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("pipeline_layout"),
bind_group_layouts: &[&bgl],
push_constant_ranges: &[],
});
let target_size = wgpu::Extent3d {
width: 4,
height: 4,
depth_or_array_layers: 1,
};
let target_msaa = 4;
let target_format = wgpu::TextureFormat::Bgra8UnormSrgb;
let target_desc = wgpu::TextureDescriptor {
label: Some("target_tex"),
size: target_size,
mip_level_count: 1,
sample_count: target_msaa,
dimension: wgpu::TextureDimension::D2,
format: target_format,
usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
view_formats: &[target_format],
};
let target_tex = ctx.device.create_texture(&target_desc);
let target_tex_resolve = ctx.device.create_texture(&wgpu::TextureDescriptor {
label: Some("target_resolve"),
sample_count: 1,
..target_desc
});
let color_attachment_view = target_tex.create_view(&wgpu::TextureViewDescriptor::default());
let color_attachment_resolve_view =
target_tex_resolve.create_view(&wgpu::TextureViewDescriptor::default());
let depth_stencil_format = wgpu::TextureFormat::Depth32Float;
let depth_stencil = ctx.device.create_texture(&wgpu::TextureDescriptor {
label: Some("depth_stencil"),
format: depth_stencil_format,
view_formats: &[depth_stencil_format],
..target_desc
});
let depth_stencil_view = depth_stencil.create_view(&wgpu::TextureViewDescriptor::default());
let occlusion_query_set = ctx.device.create_query_set(&wgpu::QuerySetDescriptor {
label: Some("occ_query_set"),
ty: wgpu::QueryType::Occlusion,
count: 1,
});
let pipeline = ctx
.device
.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some("pipeline"),
layout: Some(&pipeline_layout),
vertex: wgpu::VertexState {
module: &sm,
entry_point: "vs_main",
compilation_options: Default::default(),
buffers: &[wgpu::VertexBufferLayout {
array_stride: 4,
step_mode: wgpu::VertexStepMode::Vertex,
attributes: &wgpu::vertex_attr_array![0 => Uint32],
}],
},
fragment: Some(wgpu::FragmentState {
module: &sm,
entry_point: "fs_main",
compilation_options: Default::default(),
targets: &[Some(target_format.into())],
}),
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleStrip,
strip_index_format: Some(wgpu::IndexFormat::Uint32),
..Default::default()
},
depth_stencil: Some(wgpu::DepthStencilState {
format: depth_stencil_format,
depth_write_enabled: true,
depth_compare: wgpu::CompareFunction::LessEqual,
stencil: wgpu::StencilState::default(),
bias: wgpu::DepthBiasState::default(),
}),
multisample: wgpu::MultisampleState {
count: target_msaa,
mask: !0,
alpha_to_coverage_enabled: false,
},
multiview: None,
cache: None,
});
ResourceSetup {
gpu_buffer,
cpu_buffer,
buffer_size,
indirect_buffer,
vertex_buffer,
index_buffer,
bind_group,
pipeline,
color_attachment_view,
color_attachment_resolve_view,
depth_stencil_view,
occlusion_query_set,
}
}


@ -29,6 +29,7 @@ mod poll;
mod push_constants;
mod query_set;
mod queue_transfer;
mod render_pass_ownership;
mod resource_descriptor_accessor;
mod resource_error;
mod scissor_tests;


@ -4,12 +4,12 @@ use crate::{
},
command::{
bind::Binder,
compute_command::{ArcComputeCommand, ComputeCommand},
compute_command::ArcComputeCommand,
end_pipeline_statistics_query,
memory_init::{fixup_discarded_surfaces, SurfacesInDiscardState},
validate_and_begin_pipeline_statistics_query, BasePass, BindGroupStateChange,
CommandBuffer, CommandEncoderError, CommandEncoderStatus, MapPassErr, PassErrorScope,
QueryUseError, StateChange,
validate_and_begin_pipeline_statistics_query, ArcPassTimestampWrites, BasePass,
BindGroupStateChange, CommandBuffer, CommandEncoderError, CommandEncoderStatus, MapPassErr,
PassErrorScope, PassTimestampWrites, QueryUseError, StateChange,
},
device::{Device, DeviceError, MissingDownlevelFlags, MissingFeatures},
error::{ErrorFormatter, PrettyError},
@ -28,10 +28,6 @@ use crate::{
};
use hal::CommandEncoder as _;
#[cfg(feature = "serde")]
use serde::Deserialize;
#[cfg(feature = "serde")]
use serde::Serialize;
use thiserror::Error;
use wgt::{BufferAddress, DynamicOffset};
@ -53,7 +49,7 @@ pub struct ComputePass<A: HalApi> {
/// If it is none, this pass is invalid and any operation on it will return an error.
parent: Option<Arc<CommandBuffer<A>>>,
timestamp_writes: Option<ArcComputePassTimestampWrites<A>>,
timestamp_writes: Option<ArcPassTimestampWrites<A>>,
// Resource binding dedupe state.
current_bind_groups: BindGroupStateChange,
@ -103,39 +99,17 @@ impl<A: HalApi> fmt::Debug for ComputePass<A> {
}
}
/// Describes the writing of timestamp values in a compute pass.
#[derive(Clone, Debug, PartialEq, Eq)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
pub struct ComputePassTimestampWrites {
/// The query set to write the timestamps to.
pub query_set: id::QuerySetId,
/// The index of the query set at which a start timestamp of this pass is written, if any.
pub beginning_of_pass_write_index: Option<u32>,
/// The index of the query set at which an end timestamp of this pass is written, if any.
pub end_of_pass_write_index: Option<u32>,
}
/// Describes the writing of timestamp values in a compute pass with the query set resolved.
struct ArcComputePassTimestampWrites<A: HalApi> {
/// The query set to write the timestamps to.
pub query_set: Arc<resource::QuerySet<A>>,
/// The index of the query set at which a start timestamp of this pass is written, if any.
pub beginning_of_pass_write_index: Option<u32>,
/// The index of the query set at which an end timestamp of this pass is written, if any.
pub end_of_pass_write_index: Option<u32>,
}
#[derive(Clone, Debug, Default)]
pub struct ComputePassDescriptor<'a> {
pub label: Label<'a>,
/// Defines where and when timestamp values will be written for this pass.
pub timestamp_writes: Option<&'a ComputePassTimestampWrites>,
pub timestamp_writes: Option<&'a PassTimestampWrites>,
}
struct ArcComputePassDescriptor<'a, A: HalApi> {
pub label: &'a Label<'a>,
/// Defines where and when timestamp values will be written for this pass.
pub timestamp_writes: Option<ArcComputePassTimestampWrites<A>>,
pub timestamp_writes: Option<ArcPassTimestampWrites<A>>,
}
#[derive(Clone, Debug, Error)]
@ -370,7 +344,9 @@ impl Global {
let Ok(query_set) = hub.query_sets.read().get_owned(tw.query_set) else {
return (
ComputePass::new(None, arc_desc),
Some(CommandEncoderError::InvalidTimestampWritesQuerySetId),
Some(CommandEncoderError::InvalidTimestampWritesQuerySetId(
tw.query_set,
)),
);
};
@ -378,7 +354,7 @@ impl Global {
return (ComputePass::new(None, arc_desc), Some(e.into()));
}
Some(ArcComputePassTimestampWrites {
Some(ArcPassTimestampWrites {
query_set,
beginning_of_pass_write_index: tw.beginning_of_pass_write_index,
end_of_pass_write_index: tw.end_of_pass_write_index,
@ -429,20 +405,22 @@ impl Global {
}
#[doc(hidden)]
#[cfg(feature = "replay")]
pub fn compute_pass_end_with_unresolved_commands<A: HalApi>(
&self,
encoder_id: id::CommandEncoderId,
base: BasePass<ComputeCommand>,
timestamp_writes: Option<&ComputePassTimestampWrites>,
base: BasePass<super::ComputeCommand>,
timestamp_writes: Option<&PassTimestampWrites>,
) -> Result<(), ComputePassError> {
let hub = A::hub(self);
let scope = PassErrorScope::PassEncoder(encoder_id);
let cmd_buf = CommandBuffer::get_encoder(hub, encoder_id).map_pass_err(scope)?;
let commands = ComputeCommand::resolve_compute_command_ids(A::hub(self), &base.commands)?;
let commands =
super::ComputeCommand::resolve_compute_command_ids(A::hub(self), &base.commands)?;
let timestamp_writes = if let Some(tw) = timestamp_writes {
Some(ArcComputePassTimestampWrites {
Some(ArcPassTimestampWrites {
query_set: hub
.query_sets
.read()
@ -473,7 +451,7 @@ impl Global {
&self,
cmd_buf: &CommandBuffer<A>,
base: BasePass<ArcComputeCommand<A>>,
mut timestamp_writes: Option<ArcComputePassTimestampWrites<A>>,
mut timestamp_writes: Option<ArcPassTimestampWrites<A>>,
) -> Result<(), ComputePassError> {
profiling::scope!("CommandEncoder::run_compute_pass");
let pass_scope = PassErrorScope::Pass(Some(cmd_buf.as_info().id()));
@ -494,13 +472,11 @@ impl Global {
string_data: base.string_data.to_vec(),
push_constant_data: base.push_constant_data.to_vec(),
},
timestamp_writes: timestamp_writes
.as_ref()
.map(|tw| ComputePassTimestampWrites {
query_set: tw.query_set.as_info().id(),
beginning_of_pass_write_index: tw.beginning_of_pass_write_index,
end_of_pass_write_index: tw.end_of_pass_write_index,
}),
timestamp_writes: timestamp_writes.as_ref().map(|tw| PassTimestampWrites {
query_set: tw.query_set.as_info().id(),
beginning_of_pass_write_index: tw.beginning_of_pass_write_index,
end_of_pass_write_index: tw.end_of_pass_write_index,
}),
});
}
@ -1104,7 +1080,7 @@ impl Global {
Ok(())
}
pub fn compute_pass_set_push_constant<A: HalApi>(
pub fn compute_pass_set_push_constants<A: HalApi>(
&self,
pass: &mut ComputePass<A>,
offset: u32,


@ -8,8 +8,6 @@ use crate::{
resource::{Buffer, QuerySet},
};
use super::{ComputePassError, ComputePassErrorInner, PassErrorScope};
#[derive(Clone, Copy, Debug)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub enum ComputeCommand {
@ -72,13 +70,13 @@ pub enum ComputeCommand {
impl ComputeCommand {
/// Resolves all ids in a list of commands into the corresponding resource Arc.
//
// TODO: Once resolving is done on-the-fly during recording, this function should be only needed with the replay feature:
// #[cfg(feature = "replay")]
#[cfg(feature = "replay")]
pub fn resolve_compute_command_ids<A: HalApi>(
hub: &crate::hub::Hub<A>,
commands: &[ComputeCommand],
) -> Result<Vec<ArcComputeCommand<A>>, ComputePassError> {
) -> Result<Vec<ArcComputeCommand<A>>, super::ComputePassError> {
use super::{ComputePassError, ComputePassErrorInner, PassErrorScope};
let buffers_guard = hub.buffers.read();
let bind_group_guard = hub.bind_groups.read();
let query_set_guard = hub.query_sets.read();


@ -21,7 +21,7 @@ pub trait DynComputePass: std::fmt::Debug + WasmNotSendSync {
context: &global::Global,
pipeline_id: id::ComputePipelineId,
) -> Result<(), ComputePassError>;
fn set_push_constant(
fn set_push_constants(
&mut self,
context: &global::Global,
offset: u32,
@ -93,13 +93,13 @@ impl<A: HalApi> DynComputePass for ComputePass<A> {
context.compute_pass_set_pipeline(self, pipeline_id)
}
fn set_push_constant(
fn set_push_constants(
&mut self,
context: &global::Global,
offset: u32,
data: &[u8],
) -> Result<(), ComputePassError> {
context.compute_pass_set_push_constant(self, offset, data)
context.compute_pass_set_push_constants(self, offset, data)
}
fn dispatch_workgroups(


@ -0,0 +1,458 @@
use wgt::WasmNotSendSync;
use crate::{global, hal_api::HalApi, id};
use super::{RenderPass, RenderPassError};
/// Trait for type erasing RenderPass.
// TODO(#5124): wgpu-core's RenderPass trait should not be hal type dependent.
// Practically speaking, this allows us to merge gfx_select with type erasure:
// The alternative would be to introduce RenderPassId which then first needs to be looked up and then dispatch via gfx_select.
pub trait DynRenderPass: std::fmt::Debug + WasmNotSendSync {
fn set_bind_group(
&mut self,
context: &global::Global,
index: u32,
bind_group_id: id::BindGroupId,
offsets: &[wgt::DynamicOffset],
) -> Result<(), RenderPassError>;
fn set_index_buffer(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
index_format: wgt::IndexFormat,
offset: wgt::BufferAddress,
size: Option<wgt::BufferSize>,
) -> Result<(), RenderPassError>;
fn set_vertex_buffer(
&mut self,
context: &global::Global,
slot: u32,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
size: Option<wgt::BufferSize>,
) -> Result<(), RenderPassError>;
fn set_pipeline(
&mut self,
context: &global::Global,
pipeline_id: id::RenderPipelineId,
) -> Result<(), RenderPassError>;
fn set_push_constants(
&mut self,
context: &global::Global,
stages: wgt::ShaderStages,
offset: u32,
data: &[u8],
) -> Result<(), RenderPassError>;
fn draw(
&mut self,
context: &global::Global,
vertex_count: u32,
instance_count: u32,
first_vertex: u32,
first_instance: u32,
) -> Result<(), RenderPassError>;
fn draw_indexed(
&mut self,
context: &global::Global,
index_count: u32,
instance_count: u32,
first_index: u32,
base_vertex: i32,
first_instance: u32,
) -> Result<(), RenderPassError>;
fn draw_indirect(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
) -> Result<(), RenderPassError>;
fn draw_indexed_indirect(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
) -> Result<(), RenderPassError>;
fn multi_draw_indirect(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
count: u32,
) -> Result<(), RenderPassError>;
fn multi_draw_indexed_indirect(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
count: u32,
) -> Result<(), RenderPassError>;
fn multi_draw_indirect_count(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
count_buffer_id: id::BufferId,
count_buffer_offset: wgt::BufferAddress,
max_count: u32,
) -> Result<(), RenderPassError>;
fn multi_draw_indexed_indirect_count(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
count_buffer_id: id::BufferId,
count_buffer_offset: wgt::BufferAddress,
max_count: u32,
) -> Result<(), RenderPassError>;
fn set_blend_constant(
&mut self,
context: &global::Global,
color: wgt::Color,
) -> Result<(), RenderPassError>;
fn set_scissor_rect(
&mut self,
context: &global::Global,
x: u32,
y: u32,
width: u32,
height: u32,
) -> Result<(), RenderPassError>;
fn set_viewport(
&mut self,
context: &global::Global,
x: f32,
y: f32,
width: f32,
height: f32,
min_depth: f32,
max_depth: f32,
) -> Result<(), RenderPassError>;
fn set_stencil_reference(
&mut self,
context: &global::Global,
reference: u32,
) -> Result<(), RenderPassError>;
fn push_debug_group(
&mut self,
context: &global::Global,
label: &str,
color: u32,
) -> Result<(), RenderPassError>;
fn pop_debug_group(&mut self, context: &global::Global) -> Result<(), RenderPassError>;
fn insert_debug_marker(
&mut self,
context: &global::Global,
label: &str,
color: u32,
) -> Result<(), RenderPassError>;
fn write_timestamp(
&mut self,
context: &global::Global,
query_set_id: id::QuerySetId,
query_index: u32,
) -> Result<(), RenderPassError>;
fn begin_occlusion_query(
&mut self,
context: &global::Global,
query_index: u32,
) -> Result<(), RenderPassError>;
fn end_occlusion_query(&mut self, context: &global::Global) -> Result<(), RenderPassError>;
fn begin_pipeline_statistics_query(
&mut self,
context: &global::Global,
query_set_id: id::QuerySetId,
query_index: u32,
) -> Result<(), RenderPassError>;
fn end_pipeline_statistics_query(
&mut self,
context: &global::Global,
) -> Result<(), RenderPassError>;
fn execute_bundles(
&mut self,
context: &global::Global,
bundles: &[id::RenderBundleId],
) -> Result<(), RenderPassError>;
fn end(&mut self, context: &global::Global) -> Result<(), RenderPassError>;
fn label(&self) -> Option<&str>;
}
impl<A: HalApi> DynRenderPass for RenderPass<A> {
fn set_index_buffer(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
index_format: wgt::IndexFormat,
offset: wgt::BufferAddress,
size: Option<wgt::BufferSize>,
) -> Result<(), RenderPassError> {
context.render_pass_set_index_buffer(self, buffer_id, index_format, offset, size)
}
fn set_vertex_buffer(
&mut self,
context: &global::Global,
slot: u32,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
size: Option<wgt::BufferSize>,
) -> Result<(), RenderPassError> {
context.render_pass_set_vertex_buffer(self, slot, buffer_id, offset, size)
}
fn set_bind_group(
&mut self,
context: &global::Global,
index: u32,
bind_group_id: id::BindGroupId,
offsets: &[wgt::DynamicOffset],
) -> Result<(), RenderPassError> {
context.render_pass_set_bind_group(self, index, bind_group_id, offsets)
}
fn set_pipeline(
&mut self,
context: &global::Global,
pipeline_id: id::RenderPipelineId,
) -> Result<(), RenderPassError> {
context.render_pass_set_pipeline(self, pipeline_id)
}
fn set_push_constants(
&mut self,
context: &global::Global,
stages: wgt::ShaderStages,
offset: u32,
data: &[u8],
) -> Result<(), RenderPassError> {
context.render_pass_set_push_constants(self, stages, offset, data)
}
fn draw(
&mut self,
context: &global::Global,
vertex_count: u32,
instance_count: u32,
first_vertex: u32,
first_instance: u32,
) -> Result<(), RenderPassError> {
context.render_pass_draw(
self,
vertex_count,
instance_count,
first_vertex,
first_instance,
)
}
fn draw_indexed(
&mut self,
context: &global::Global,
index_count: u32,
instance_count: u32,
first_index: u32,
base_vertex: i32,
first_instance: u32,
) -> Result<(), RenderPassError> {
context.render_pass_draw_indexed(
self,
index_count,
instance_count,
first_index,
base_vertex,
first_instance,
)
}
fn draw_indirect(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
) -> Result<(), RenderPassError> {
context.render_pass_draw_indirect(self, buffer_id, offset)
}
fn draw_indexed_indirect(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
) -> Result<(), RenderPassError> {
context.render_pass_draw_indexed_indirect(self, buffer_id, offset)
}
fn multi_draw_indirect(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
count: u32,
) -> Result<(), RenderPassError> {
context.render_pass_multi_draw_indirect(self, buffer_id, offset, count)
}
fn multi_draw_indexed_indirect(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
count: u32,
) -> Result<(), RenderPassError> {
context.render_pass_multi_draw_indexed_indirect(self, buffer_id, offset, count)
}
fn multi_draw_indirect_count(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
count_buffer_id: id::BufferId,
count_buffer_offset: wgt::BufferAddress,
max_count: u32,
) -> Result<(), RenderPassError> {
context.render_pass_multi_draw_indirect_count(
self,
buffer_id,
offset,
count_buffer_id,
count_buffer_offset,
max_count,
)
}
fn multi_draw_indexed_indirect_count(
&mut self,
context: &global::Global,
buffer_id: id::BufferId,
offset: wgt::BufferAddress,
count_buffer_id: id::BufferId,
count_buffer_offset: wgt::BufferAddress,
max_count: u32,
) -> Result<(), RenderPassError> {
context.render_pass_multi_draw_indexed_indirect_count(
self,
buffer_id,
offset,
count_buffer_id,
count_buffer_offset,
max_count,
)
}
fn set_blend_constant(
&mut self,
context: &global::Global,
color: wgt::Color,
) -> Result<(), RenderPassError> {
context.render_pass_set_blend_constant(self, color)
}
fn set_scissor_rect(
&mut self,
context: &global::Global,
x: u32,
y: u32,
width: u32,
height: u32,
) -> Result<(), RenderPassError> {
context.render_pass_set_scissor_rect(self, x, y, width, height)
}
fn set_viewport(
&mut self,
context: &global::Global,
x: f32,
y: f32,
width: f32,
height: f32,
min_depth: f32,
max_depth: f32,
) -> Result<(), RenderPassError> {
context.render_pass_set_viewport(self, x, y, width, height, min_depth, max_depth)
}
fn set_stencil_reference(
&mut self,
context: &global::Global,
reference: u32,
) -> Result<(), RenderPassError> {
context.render_pass_set_stencil_reference(self, reference)
}
fn push_debug_group(
&mut self,
context: &global::Global,
label: &str,
color: u32,
) -> Result<(), RenderPassError> {
context.render_pass_push_debug_group(self, label, color)
}
fn pop_debug_group(&mut self, context: &global::Global) -> Result<(), RenderPassError> {
context.render_pass_pop_debug_group(self)
}
fn insert_debug_marker(
&mut self,
context: &global::Global,
label: &str,
color: u32,
) -> Result<(), RenderPassError> {
context.render_pass_insert_debug_marker(self, label, color)
}
fn write_timestamp(
&mut self,
context: &global::Global,
query_set_id: id::QuerySetId,
query_index: u32,
) -> Result<(), RenderPassError> {
context.render_pass_write_timestamp(self, query_set_id, query_index)
}
fn begin_occlusion_query(
&mut self,
context: &global::Global,
query_index: u32,
) -> Result<(), RenderPassError> {
context.render_pass_begin_occlusion_query(self, query_index)
}
fn end_occlusion_query(&mut self, context: &global::Global) -> Result<(), RenderPassError> {
context.render_pass_end_occlusion_query(self)
}
fn begin_pipeline_statistics_query(
&mut self,
context: &global::Global,
query_set_id: id::QuerySetId,
query_index: u32,
) -> Result<(), RenderPassError> {
context.render_pass_begin_pipeline_statistics_query(self, query_set_id, query_index)
}
fn end_pipeline_statistics_query(
&mut self,
context: &global::Global,
) -> Result<(), RenderPassError> {
context.render_pass_end_pipeline_statistics_query(self)
}
fn execute_bundles(
&mut self,
context: &global::Global,
bundles: &[id::RenderBundleId],
) -> Result<(), RenderPassError> {
context.render_pass_execute_bundles(self, bundles)
}
fn end(&mut self, context: &global::Global) -> Result<(), RenderPassError> {
context.render_pass_end(self)
}
fn label(&self) -> Option<&str> {
self.label()
}
}


@ -6,10 +6,12 @@ mod compute;
mod compute_command;
mod draw;
mod dyn_compute_pass;
mod dyn_render_pass;
mod memory_init;
mod query;
mod render;
mod render_command;
mod timestamp_writes;
mod transfer;
use std::sync::Arc;
@ -17,11 +19,14 @@ use std::sync::Arc;
pub(crate) use self::clear::clear_texture;
pub use self::{
bundle::*, clear::ClearError, compute::*, compute_command::ComputeCommand, draw::*,
dyn_compute_pass::DynComputePass, query::*, render::*, render_command::RenderCommand,
transfer::*,
dyn_compute_pass::DynComputePass, dyn_render_pass::DynRenderPass, query::*, render::*,
render_command::RenderCommand, transfer::*,
};
pub(crate) use allocator::CommandAllocator;
pub(crate) use timestamp_writes::ArcPassTimestampWrites;
pub use timestamp_writes::PassTimestampWrites;
use self::memory_init::CommandBufferTextureMemoryActions;
use crate::device::{Device, DeviceError};
@ -604,8 +609,28 @@ pub enum CommandEncoderError {
Device(#[from] DeviceError),
#[error("Command encoder is locked by a previously created render/compute pass. Before recording any new commands, the pass must be ended.")]
Locked,
#[error("QuerySet provided for pass timestamp writes is invalid.")]
InvalidTimestampWritesQuerySetId,
#[error("QuerySet {0:?} for pass timestamp writes is invalid.")]
InvalidTimestampWritesQuerySetId(id::QuerySetId),
#[error("Attachment texture view {0:?} is invalid")]
InvalidAttachment(id::TextureViewId),
#[error("Attachment texture view {0:?} for resolve is invalid")]
InvalidResolveTarget(id::TextureViewId),
#[error("Depth stencil attachment view {0:?} is invalid")]
InvalidDepthStencilAttachment(id::TextureViewId),
#[error("Occlusion query set {0:?} is invalid")]
InvalidOcclusionQuerySetId(id::QuerySetId),
}
impl PrettyError for CommandEncoderError {
fn fmt_pretty(&self, fmt: &mut ErrorFormatter) {
fmt.error(self);
if let Self::InvalidAttachment(id) = *self {
fmt.texture_view_label_with_key(&id, "attachment");
} else if let Self::InvalidResolveTarget(id) = *self {
fmt.texture_view_label_with_key(&id, "resolve target");
};
}
}
impl Global {
@ -860,10 +885,7 @@ pub enum PassErrorScope {
#[error("In a bundle parameter")]
Bundle,
#[error("In a pass parameter")]
// TODO: To be removed in favor of `Pass`.
// ComputePass is already operating on command buffer instead,
// same should apply to RenderPass in the future.
PassEncoder(id::CommandEncoderId),
PassEncoder(id::CommandEncoderId), // Needed only for ending pass via tracing.
#[error("In a pass parameter")]
Pass(Option<id::CommandBufferId>),
#[error("In a set_bind_group command")]

File diff suppressed because it is too large.


@ -9,10 +9,7 @@ use wgt::{BufferAddress, BufferSize, Color};
use std::{num::NonZeroU32, sync::Arc};
use super::{
DrawKind, PassErrorScope, Rect, RenderBundle, RenderCommandError, RenderPassError,
RenderPassErrorInner,
};
use super::{Rect, RenderBundle};
#[doc(hidden)]
#[derive(Clone, Copy, Debug)]
@ -128,13 +125,15 @@ pub enum RenderCommand {
impl RenderCommand {
/// Resolves all ids in a list of commands into the corresponding resource Arc.
//
// TODO: Once resolving is done on-the-fly during recording, this function should be only needed with the replay feature:
// #[cfg(feature = "replay")]
#[cfg(feature = "replay")]
pub fn resolve_render_command_ids<A: HalApi>(
hub: &crate::hub::Hub<A>,
commands: &[RenderCommand],
) -> Result<Vec<ArcRenderCommand<A>>, RenderPassError> {
) -> Result<Vec<ArcRenderCommand<A>>, super::RenderPassError> {
use super::{
DrawKind, PassErrorScope, RenderCommandError, RenderPassError, RenderPassErrorInner,
};
let buffers_guard = hub.buffers.read();
let bind_group_guard = hub.bind_groups.read();
let query_set_guard = hub.query_sets.read();


@ -0,0 +1,25 @@
use std::sync::Arc;
use crate::{hal_api::HalApi, id};
/// Describes the writing of timestamp values in a render or compute pass.
#[derive(Clone, Debug, PartialEq, Eq)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub struct PassTimestampWrites {
/// The query set to write the timestamps to.
pub query_set: id::QuerySetId,
/// The index of the query set at which a start timestamp of this pass is written, if any.
pub beginning_of_pass_write_index: Option<u32>,
/// The index of the query set at which an end timestamp of this pass is written, if any.
pub end_of_pass_write_index: Option<u32>,
}
/// Describes the writing of timestamp values in a render or compute pass with the query set resolved.
pub struct ArcPassTimestampWrites<A: HalApi> {
/// The query set to write the timestamps to.
pub query_set: Arc<crate::resource::QuerySet<A>>,
/// The index of the query set at which a start timestamp of this pass is written, if any.
pub beginning_of_pass_write_index: Option<u32>,
/// The index of the query set at which an end timestamp of this pass is written, if any.
pub end_of_pass_write_index: Option<u32>,
}
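
For orientation: the user-facing `wgpu::RenderPassTimestampWrites`/`wgpu::ComputePassTimestampWrites` descriptors are what the wgpu-core backend resolves into this shared struct (see the `PassTimestampWrites` mappings further below). A minimal sketch of the user-facing side; the helper name and query indices are placeholders, and `Features::TIMESTAMP_QUERY` is assumed to be enabled on the device:

```rust
// Sketch only: builds the user-facing descriptor that the wgpu-core backend
// converts into the shared `PassTimestampWrites` above.
fn pass_timestamps(query_set: &wgpu::QuerySet) -> wgpu::RenderPassTimestampWrites<'_> {
    wgpu::RenderPassTimestampWrites {
        query_set,
        beginning_of_pass_write_index: Some(0), // timestamp written when the pass starts
        end_of_pass_write_index: Some(1),       // timestamp written when the pass ends
    }
}
```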

@ -60,15 +60,6 @@ pub(crate) struct AttachmentData<T> {
pub depth_stencil: Option<T>,
}
impl<T: PartialEq> Eq for AttachmentData<T> {}
impl<T> AttachmentData<T> {
pub(crate) fn map<U, F: Fn(&T) -> U>(&self, fun: F) -> AttachmentData<U> {
AttachmentData {
colors: self.colors.iter().map(|c| c.as_ref().map(&fun)).collect(),
resolves: self.resolves.iter().map(&fun).collect(),
depth_stencil: self.depth_stencil.as_ref().map(&fun),
}
}
}
#[derive(Clone, Debug, Hash, PartialEq)]
#[cfg_attr(feature = "serde", derive(serde::Deserialize, serde::Serialize))]

@ -179,13 +179,13 @@ pub enum Command {
InsertDebugMarker(String),
RunComputePass {
base: crate::command::BasePass<crate::command::ComputeCommand>,
timestamp_writes: Option<crate::command::ComputePassTimestampWrites>,
timestamp_writes: Option<crate::command::PassTimestampWrites>,
},
RunRenderPass {
base: crate::command::BasePass<crate::command::RenderCommand>,
target_colors: Vec<Option<crate::command::RenderPassColorAttachment>>,
target_depth_stencil: Option<crate::command::RenderPassDepthStencilAttachment>,
timestamp_writes: Option<crate::command::RenderPassTimestampWrites>,
timestamp_writes: Option<crate::command::PassTimestampWrites>,
occlusion_query_set_id: Option<id::QuerySetId>,
},
}

@ -2570,7 +2570,7 @@ impl crate::context::Context for ContextWebGpu {
&self,
_encoder: &Self::CommandEncoderId,
encoder_data: &Self::CommandEncoderData,
desc: &crate::RenderPassDescriptor<'_, '_>,
desc: &crate::RenderPassDescriptor<'_>,
) -> (Self::RenderPassId, Self::RenderPassData) {
let mapped_color_attachments = desc
.color_attachments

@ -490,7 +490,7 @@ pub struct ComputePass {
#[derive(Debug)]
pub struct RenderPass {
pass: wgc::command::RenderPass,
pass: Box<dyn wgc::command::DynRenderPass>,
error_sink: ErrorSink,
}
@ -1913,7 +1913,7 @@ impl crate::Context for ContextWgpuCore {
let timestamp_writes =
desc.timestamp_writes
.as_ref()
.map(|tw| wgc::command::ComputePassTimestampWrites {
.map(|tw| wgc::command::PassTimestampWrites {
query_set: tw.query_set.id.into(),
beginning_of_pass_write_index: tw.beginning_of_pass_write_index,
end_of_pass_write_index: tw.end_of_pass_write_index,
@ -1947,7 +1947,7 @@ impl crate::Context for ContextWgpuCore {
&self,
encoder: &Self::CommandEncoderId,
encoder_data: &Self::CommandEncoderData,
desc: &crate::RenderPassDescriptor<'_, '_>,
desc: &crate::RenderPassDescriptor<'_>,
) -> (Self::RenderPassId, Self::RenderPassData) {
if desc.color_attachments.len() > wgc::MAX_COLOR_ATTACHMENTS {
self.handle_error_fatal(
@ -1982,27 +1982,34 @@ impl crate::Context for ContextWgpuCore {
let timestamp_writes =
desc.timestamp_writes
.as_ref()
.map(|tw| wgc::command::RenderPassTimestampWrites {
.map(|tw| wgc::command::PassTimestampWrites {
query_set: tw.query_set.id.into(),
beginning_of_pass_write_index: tw.beginning_of_pass_write_index,
end_of_pass_write_index: tw.end_of_pass_write_index,
});
let (pass, err) = gfx_select!(encoder => self.0.command_encoder_create_render_pass_dyn(*encoder, &wgc::command::RenderPassDescriptor {
label: desc.label.map(Borrowed),
timestamp_writes: timestamp_writes.as_ref(),
color_attachments: std::borrow::Cow::Borrowed(&colors),
depth_stencil_attachment: depth_stencil.as_ref(),
occlusion_query_set: desc.occlusion_query_set.map(|query_set| query_set.id.into()),
}));
if let Some(cause) = err {
self.handle_error(
&encoder_data.error_sink,
cause,
LABEL,
desc.label,
"CommandEncoder::begin_compute_pass",
);
}
(
Unused,
RenderPass {
pass: wgc::command::RenderPass::new(
*encoder,
&wgc::command::RenderPassDescriptor {
label: desc.label.map(Borrowed),
color_attachments: Borrowed(&colors),
depth_stencil_attachment: depth_stencil.as_ref(),
timestamp_writes: timestamp_writes.as_ref(),
occlusion_query_set: desc
.occlusion_query_set
.map(|query_set| query_set.id.into()),
},
),
Self::RenderPassData {
pass,
error_sink: encoder_data.error_sink.clone(),
},
)
@ -2438,7 +2445,7 @@ impl crate::Context for ContextWgpuCore {
offset: u32,
data: &[u8],
) {
if let Err(cause) = pass_data.pass.set_push_constant(&self.0, offset, data) {
if let Err(cause) = pass_data.pass.set_push_constants(&self.0, offset, data) {
self.handle_error(
&pass_data.error_sink,
cause,
@ -2810,10 +2817,7 @@ impl crate::Context for ContextWgpuCore {
pipeline: &Self::RenderPipelineId,
_pipeline_data: &Self::RenderPipelineData,
) {
if let Err(cause) = self
.0
.render_pass_set_pipeline(&mut pass_data.pass, *pipeline)
{
if let Err(cause) = pass_data.pass.set_pipeline(&self.0, *pipeline) {
self.handle_error(
&pass_data.error_sink,
cause,
@ -2833,9 +2837,9 @@ impl crate::Context for ContextWgpuCore {
_bind_group_data: &Self::BindGroupData,
offsets: &[wgt::DynamicOffset],
) {
if let Err(cause) =
self.0
.render_pass_set_bind_group(&mut pass_data.pass, index, *bind_group, offsets)
if let Err(cause) = pass_data
.pass
.set_bind_group(&self.0, index, *bind_group, offsets)
{
self.handle_error(
&pass_data.error_sink,
@ -2857,13 +2861,11 @@ impl crate::Context for ContextWgpuCore {
offset: wgt::BufferAddress,
size: Option<wgt::BufferSize>,
) {
if let Err(cause) = self.0.render_pass_set_index_buffer(
&mut pass_data.pass,
*buffer,
index_format,
offset,
size,
) {
if let Err(cause) =
pass_data
.pass
.set_index_buffer(&self.0, *buffer, index_format, offset, size)
{
self.handle_error(
&pass_data.error_sink,
cause,
@ -2884,9 +2886,9 @@ impl crate::Context for ContextWgpuCore {
offset: wgt::BufferAddress,
size: Option<wgt::BufferSize>,
) {
if let Err(cause) =
self.0
.render_pass_set_vertex_buffer(&mut pass_data.pass, slot, *buffer, offset, size)
if let Err(cause) = pass_data
.pass
.set_vertex_buffer(&self.0, slot, *buffer, offset, size)
{
self.handle_error(
&pass_data.error_sink,
@ -2906,9 +2908,9 @@ impl crate::Context for ContextWgpuCore {
offset: u32,
data: &[u8],
) {
if let Err(cause) =
self.0
.render_pass_set_push_constants(&mut pass_data.pass, stages, offset, data)
if let Err(cause) = pass_data
.pass
.set_push_constants(&self.0, stages, offset, data)
{
self.handle_error(
&pass_data.error_sink,
@ -2927,8 +2929,8 @@ impl crate::Context for ContextWgpuCore {
vertices: Range<u32>,
instances: Range<u32>,
) {
if let Err(cause) = self.0.render_pass_draw(
&mut pass_data.pass,
if let Err(cause) = pass_data.pass.draw(
&self.0,
vertices.end - vertices.start,
instances.end - instances.start,
vertices.start,
@ -2952,8 +2954,8 @@ impl crate::Context for ContextWgpuCore {
base_vertex: i32,
instances: Range<u32>,
) {
if let Err(cause) = self.0.render_pass_draw_indexed(
&mut pass_data.pass,
if let Err(cause) = pass_data.pass.draw_indexed(
&self.0,
indices.end - indices.start,
instances.end - instances.start,
indices.start,
@ -2978,9 +2980,9 @@ impl crate::Context for ContextWgpuCore {
_indirect_buffer_data: &Self::BufferData,
indirect_offset: wgt::BufferAddress,
) {
if let Err(cause) =
self.0
.render_pass_draw_indirect(&mut pass_data.pass, *indirect_buffer, indirect_offset)
if let Err(cause) = pass_data
.pass
.draw_indirect(&self.0, *indirect_buffer, indirect_offset)
{
self.handle_error(
&pass_data.error_sink,
@ -3000,11 +3002,11 @@ impl crate::Context for ContextWgpuCore {
_indirect_buffer_data: &Self::BufferData,
indirect_offset: wgt::BufferAddress,
) {
if let Err(cause) = self.0.render_pass_draw_indexed_indirect(
&mut pass_data.pass,
*indirect_buffer,
indirect_offset,
) {
if let Err(cause) =
pass_data
.pass
.draw_indexed_indirect(&self.0, *indirect_buffer, indirect_offset)
{
self.handle_error(
&pass_data.error_sink,
cause,
@ -3024,12 +3026,11 @@ impl crate::Context for ContextWgpuCore {
indirect_offset: wgt::BufferAddress,
count: u32,
) {
if let Err(cause) = self.0.render_pass_multi_draw_indirect(
&mut pass_data.pass,
*indirect_buffer,
indirect_offset,
count,
) {
if let Err(cause) =
pass_data
.pass
.multi_draw_indirect(&self.0, *indirect_buffer, indirect_offset, count)
{
self.handle_error(
&pass_data.error_sink,
cause,
@ -3049,8 +3050,8 @@ impl crate::Context for ContextWgpuCore {
indirect_offset: wgt::BufferAddress,
count: u32,
) {
if let Err(cause) = self.0.render_pass_multi_draw_indexed_indirect(
&mut pass_data.pass,
if let Err(cause) = pass_data.pass.multi_draw_indexed_indirect(
&self.0,
*indirect_buffer,
indirect_offset,
count,
@ -3077,8 +3078,8 @@ impl crate::Context for ContextWgpuCore {
count_buffer_offset: wgt::BufferAddress,
max_count: u32,
) {
if let Err(cause) = self.0.render_pass_multi_draw_indirect_count(
&mut pass_data.pass,
if let Err(cause) = pass_data.pass.multi_draw_indirect_count(
&self.0,
*indirect_buffer,
indirect_offset,
*count_buffer,
@ -3107,8 +3108,8 @@ impl crate::Context for ContextWgpuCore {
count_buffer_offset: wgt::BufferAddress,
max_count: u32,
) {
if let Err(cause) = self.0.render_pass_multi_draw_indexed_indirect_count(
&mut pass_data.pass,
if let Err(cause) = pass_data.pass.multi_draw_indexed_indirect_count(
&self.0,
*indirect_buffer,
indirect_offset,
*count_buffer,
@ -3131,10 +3132,7 @@ impl crate::Context for ContextWgpuCore {
pass_data: &mut Self::RenderPassData,
color: wgt::Color,
) {
if let Err(cause) = self
.0
.render_pass_set_blend_constant(&mut pass_data.pass, &color)
{
if let Err(cause) = pass_data.pass.set_blend_constant(&self.0, color) {
self.handle_error(
&pass_data.error_sink,
cause,
@ -3154,9 +3152,9 @@ impl crate::Context for ContextWgpuCore {
width: u32,
height: u32,
) {
if let Err(cause) =
self.0
.render_pass_set_scissor_rect(&mut pass_data.pass, x, y, width, height)
if let Err(cause) = pass_data
.pass
.set_scissor_rect(&self.0, x, y, width, height)
{
self.handle_error(
&pass_data.error_sink,
@ -3179,15 +3177,10 @@ impl crate::Context for ContextWgpuCore {
min_depth: f32,
max_depth: f32,
) {
if let Err(cause) = self.0.render_pass_set_viewport(
&mut pass_data.pass,
x,
y,
width,
height,
min_depth,
max_depth,
) {
if let Err(cause) = pass_data
.pass
.set_viewport(&self.0, x, y, width, height, min_depth, max_depth)
{
self.handle_error(
&pass_data.error_sink,
cause,
@ -3204,10 +3197,7 @@ impl crate::Context for ContextWgpuCore {
pass_data: &mut Self::RenderPassData,
reference: u32,
) {
if let Err(cause) = self
.0
.render_pass_set_stencil_reference(&mut pass_data.pass, reference)
{
if let Err(cause) = pass_data.pass.set_stencil_reference(&self.0, reference) {
self.handle_error(
&pass_data.error_sink,
cause,
@ -3224,10 +3214,7 @@ impl crate::Context for ContextWgpuCore {
pass_data: &mut Self::RenderPassData,
label: &str,
) {
if let Err(cause) = self
.0
.render_pass_insert_debug_marker(&mut pass_data.pass, label, 0)
{
if let Err(cause) = pass_data.pass.insert_debug_marker(&self.0, label, 0) {
self.handle_error(
&pass_data.error_sink,
cause,
@ -3244,10 +3231,7 @@ impl crate::Context for ContextWgpuCore {
pass_data: &mut Self::RenderPassData,
group_label: &str,
) {
if let Err(cause) = self
.0
.render_pass_push_debug_group(&mut pass_data.pass, group_label, 0)
{
if let Err(cause) = pass_data.pass.push_debug_group(&self.0, group_label, 0) {
self.handle_error(
&pass_data.error_sink,
cause,
@ -3263,7 +3247,7 @@ impl crate::Context for ContextWgpuCore {
_pass: &mut Self::RenderPassId,
pass_data: &mut Self::RenderPassData,
) {
if let Err(cause) = self.0.render_pass_pop_debug_group(&mut pass_data.pass) {
if let Err(cause) = pass_data.pass.pop_debug_group(&self.0) {
self.handle_error(
&pass_data.error_sink,
cause,
@ -3282,9 +3266,9 @@ impl crate::Context for ContextWgpuCore {
_query_set_data: &Self::QuerySetData,
query_index: u32,
) {
if let Err(cause) =
self.0
.render_pass_write_timestamp(&mut pass_data.pass, *query_set, query_index)
if let Err(cause) = pass_data
.pass
.write_timestamp(&self.0, *query_set, query_index)
{
self.handle_error(
&pass_data.error_sink,
@ -3302,10 +3286,7 @@ impl crate::Context for ContextWgpuCore {
pass_data: &mut Self::RenderPassData,
query_index: u32,
) {
if let Err(cause) = self
.0
.render_pass_begin_occlusion_query(&mut pass_data.pass, query_index)
{
if let Err(cause) = pass_data.pass.begin_occlusion_query(&self.0, query_index) {
self.handle_error(
&pass_data.error_sink,
cause,
@ -3321,7 +3302,7 @@ impl crate::Context for ContextWgpuCore {
_pass: &mut Self::RenderPassId,
pass_data: &mut Self::RenderPassData,
) {
if let Err(cause) = self.0.render_pass_end_occlusion_query(&mut pass_data.pass) {
if let Err(cause) = pass_data.pass.end_occlusion_query(&self.0) {
self.handle_error(
&pass_data.error_sink,
cause,
@ -3340,11 +3321,11 @@ impl crate::Context for ContextWgpuCore {
_query_set_data: &Self::QuerySetData,
query_index: u32,
) {
if let Err(cause) = self.0.render_pass_begin_pipeline_statistics_query(
&mut pass_data.pass,
*query_set,
query_index,
) {
if let Err(cause) =
pass_data
.pass
.begin_pipeline_statistics_query(&self.0, *query_set, query_index)
{
self.handle_error(
&pass_data.error_sink,
cause,
@ -3360,10 +3341,7 @@ impl crate::Context for ContextWgpuCore {
_pass: &mut Self::RenderPassId,
pass_data: &mut Self::RenderPassData,
) {
if let Err(cause) = self
.0
.render_pass_end_pipeline_statistics_query(&mut pass_data.pass)
{
if let Err(cause) = pass_data.pass.end_pipeline_statistics_query(&self.0) {
self.handle_error(
&pass_data.error_sink,
cause,
@ -3381,9 +3359,9 @@ impl crate::Context for ContextWgpuCore {
render_bundles: &mut dyn Iterator<Item = (Self::RenderBundleId, &Self::RenderBundleData)>,
) {
let temp_render_bundles = render_bundles.map(|(i, _)| i).collect::<SmallVec<[_; 4]>>();
if let Err(cause) = self
.0
.render_pass_execute_bundles(&mut pass_data.pass, &temp_render_bundles)
if let Err(cause) = pass_data
.pass
.execute_bundles(&self.0, &temp_render_bundles)
{
self.handle_error(
&pass_data.error_sink,
@ -3400,9 +3378,7 @@ impl crate::Context for ContextWgpuCore {
_pass: &mut Self::RenderPassId,
pass_data: &mut Self::RenderPassData,
) {
let encoder = pass_data.pass.parent_id();
if let Err(cause) = wgc::gfx_select!(encoder => self.0.render_pass_end(&mut pass_data.pass))
{
if let Err(cause) = pass_data.pass.end(&self.0) {
self.handle_error(
&pass_data.error_sink,
cause,

@ -470,7 +470,7 @@ pub trait Context: Debug + WasmNotSendSync + Sized {
&self,
encoder: &Self::CommandEncoderId,
encoder_data: &Self::CommandEncoderData,
desc: &RenderPassDescriptor<'_, '_>,
desc: &RenderPassDescriptor<'_>,
) -> (Self::RenderPassId, Self::RenderPassData);
fn command_encoder_finish(
&self,
@ -1477,7 +1477,7 @@ pub(crate) trait DynContext: Debug + WasmNotSendSync {
&self,
encoder: &ObjectId,
encoder_data: &crate::Data,
desc: &RenderPassDescriptor<'_, '_>,
desc: &RenderPassDescriptor<'_>,
) -> (ObjectId, Box<crate::Data>);
fn command_encoder_finish(
&self,
@ -2799,7 +2799,7 @@ where
&self,
encoder: &ObjectId,
encoder_data: &crate::Data,
desc: &RenderPassDescriptor<'_, '_>,
desc: &RenderPassDescriptor<'_>,
) -> (ObjectId, Box<crate::Data>) {
let encoder = <T::CommandEncoderId>::from(*encoder);
let encoder_data = downcast_ref(encoder_data);

@ -1273,10 +1273,20 @@ impl Drop for CommandEncoder {
/// Corresponds to [WebGPU `GPURenderPassEncoder`](
/// https://gpuweb.github.io/gpuweb/#render-pass-encoder).
#[derive(Debug)]
pub struct RenderPass<'a> {
pub struct RenderPass<'encoder> {
/// The inner data of the render pass, separated out so it's easy to replace the lifetime with 'static if desired.
inner: RenderPassInner,
/// This lifetime is used to protect the [`CommandEncoder`] from being used
/// while the pass is alive.
encoder_guard: PhantomData<&'encoder ()>,
}
#[derive(Debug)]
struct RenderPassInner {
id: ObjectId,
data: Box<Data>,
parent: &'a mut CommandEncoder,
context: Arc<C>,
}
/// In-progress recording of a compute pass.
@ -1825,28 +1835,25 @@ static_assertions::assert_impl_all!(BindGroupDescriptor<'_>: Send, Sync);
///
/// For use with [`CommandEncoder::begin_render_pass`].
///
/// Note: separate lifetimes are needed because the texture views (`'tex`)
/// have to live as long as the pass is recorded, while everything else (`'desc`) doesn't.
///
/// Corresponds to [WebGPU `GPURenderPassDescriptor`](
/// https://gpuweb.github.io/gpuweb/#dictdef-gpurenderpassdescriptor).
#[derive(Clone, Debug, Default)]
pub struct RenderPassDescriptor<'tex, 'desc> {
pub struct RenderPassDescriptor<'a> {
/// Debug label of the render pass. This will show up in graphics debuggers for easy identification.
pub label: Label<'desc>,
pub label: Label<'a>,
/// The color attachments of the render pass.
pub color_attachments: &'desc [Option<RenderPassColorAttachment<'tex>>],
pub color_attachments: &'a [Option<RenderPassColorAttachment<'a>>],
/// The depth and stencil attachment of the render pass, if any.
pub depth_stencil_attachment: Option<RenderPassDepthStencilAttachment<'tex>>,
pub depth_stencil_attachment: Option<RenderPassDepthStencilAttachment<'a>>,
/// Defines which timestamp values will be written for this pass, and where to write them to.
///
/// Requires [`Features::TIMESTAMP_QUERY`] to be enabled.
pub timestamp_writes: Option<RenderPassTimestampWrites<'desc>>,
pub timestamp_writes: Option<RenderPassTimestampWrites<'a>>,
/// Defines where the occlusion query results will be stored for this pass.
pub occlusion_query_set: Option<&'tex QuerySet>,
pub occlusion_query_set: Option<&'a QuerySet>,
}
#[cfg(send_sync)]
static_assertions::assert_impl_all!(RenderPassDescriptor<'_, '_>: Send, Sync);
static_assertions::assert_impl_all!(RenderPassDescriptor<'_>: Send, Sync);
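With the two descriptor lifetimes collapsed into one, a call site can borrow its attachments, label, and query set just for the duration of the call. A sketch under that assumption; `begin_clear_pass` is an illustrative helper, not wgpu API:

```rust
// Sketch: all borrows in `RenderPassDescriptor<'_>` share one lifetime, and the
// returned pass no longer borrows the attachments, so a temporary attachment
// array is fine here.
fn begin_clear_pass<'enc>(
    encoder: &'enc mut wgpu::CommandEncoder,
    view: &wgpu::TextureView,
) -> wgpu::RenderPass<'enc> {
    encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
        label: Some("clear pass"),
        color_attachments: &[Some(wgpu::RenderPassColorAttachment {
            view,
            resolve_target: None,
            ops: wgpu::Operations {
                load: wgpu::LoadOp::Clear(wgpu::Color::BLACK),
                store: wgpu::StoreOp::Store,
            },
        })],
        depth_stencil_attachment: None,
        timestamp_writes: None,
        occlusion_query_set: None,
    })
}
```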
/// Describes how the vertex buffer is interpreted.
///
@ -3886,16 +3893,17 @@ impl CommandEncoder {
/// Begins recording of a render pass.
///
/// This function returns a [`RenderPass`] object which records a single render pass.
//
// TODO(https://github.com/gfx-rs/wgpu/issues/1453):
// Just like with compute passes, we should have a way to opt out of the lifetime constraint.
// See https://github.com/gfx-rs/wgpu/pull/5768 for details
// Once this is done, the documentation for `begin_render_pass` and `begin_compute_pass` should
// be nearly identical.
pub fn begin_render_pass<'pass>(
&'pass mut self,
desc: &RenderPassDescriptor<'pass, '_>,
) -> RenderPass<'pass> {
///
/// As long as the returned [`RenderPass`] has not ended,
/// any mutating operation on this command encoder causes an error and invalidates it.
/// Note that the `'encoder` lifetime relationship protects against this,
/// but it is possible to opt out of it by calling [`RenderPass::forget_lifetime`].
/// This can be useful for runtime handling of the encoder->pass
/// dependency e.g. when pass and encoder are stored in the same data structure.
pub fn begin_render_pass<'encoder>(
&'encoder mut self,
desc: &RenderPassDescriptor<'_>,
) -> RenderPass<'encoder> {
let id = self.id.as_ref().unwrap();
let (id, data) = DynContext::command_encoder_begin_render_pass(
&*self.context,
@ -3904,9 +3912,12 @@ impl CommandEncoder {
desc,
);
RenderPass {
id,
data,
parent: self,
inner: RenderPassInner {
id,
data,
context: self.context.clone(),
},
encoder_guard: PhantomData,
}
}
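A hedged sketch of the locking rule described in the doc comment above; `record_two_steps` and its descriptor parameter are illustrative only:

```rust
// Sketch: while `pass` is alive the encoder is locked, so further encoder
// work happens only after the pass has been dropped (which ends it).
fn record_two_steps(encoder: &mut wgpu::CommandEncoder, desc: &wgpu::RenderPassDescriptor<'_>) {
    {
        let _pass = encoder.begin_render_pass(desc);
        // ... record pass commands here ...
    } // `_pass` is dropped: the pass ends and the encoder is usable again
    encoder.insert_debug_marker("after the pass"); // OK: no pass is pending
}
```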
@ -4177,7 +4188,26 @@ impl CommandEncoder {
}
}
impl<'a> RenderPass<'a> {
impl<'encoder> RenderPass<'encoder> {
/// Drops the lifetime relationship to the parent command encoder, so that using
/// the encoder while this pass is still being recorded becomes a run-time error instead.
///
/// Attention: As long as the render pass has not been ended, any mutating operation on the parent
/// command encoder will cause a run-time error and invalidate it!
/// By default, the lifetime constraint prevents this, but it can be useful
/// to handle this at run time, such as when storing the pass and encoder in the same
/// data structure.
///
/// This operation has no effect on pass recording.
/// It's a safe operation, since [`CommandEncoder`] is in a locked state as long as the pass is active
/// regardless of the lifetime constraint or its absence.
pub fn forget_lifetime(self) -> RenderPass<'static> {
RenderPass {
inner: self.inner,
encoder_guard: PhantomData,
}
}
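The doc comment above mentions storing pass and encoder in the same data structure; here is a minimal sketch of that pattern. `OwnedPass` and `begin_owned` are illustrative names, and the caller is assumed to supply a valid descriptor:

```rust
// Illustrative only: after `forget_lifetime` the pass no longer borrows the
// encoder, so both can be moved into one struct and driven at run time.
struct OwnedPass {
    encoder: wgpu::CommandEncoder,
    pass: wgpu::RenderPass<'static>,
}

fn begin_owned(mut encoder: wgpu::CommandEncoder, desc: &wgpu::RenderPassDescriptor<'_>) -> OwnedPass {
    let pass = encoder.begin_render_pass(desc).forget_lifetime();
    OwnedPass { encoder, pass }
}
```

Until the stored pass is dropped (which ends it), the encoder inside `OwnedPass` stays locked, matching the run-time error behavior described above.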
/// Sets the active bind group for a given bind group index. The bind group layout
/// in the active pipeline when any `draw_*()` method is called must match the layout of
/// this bind group.
@ -4190,13 +4220,13 @@ impl<'a> RenderPass<'a> {
pub fn set_bind_group(
&mut self,
index: u32,
bind_group: &'a BindGroup,
bind_group: &BindGroup,
offsets: &[DynamicOffset],
) {
DynContext::render_pass_set_bind_group(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
index,
&bind_group.id,
bind_group.data.as_ref(),
@ -4207,11 +4237,11 @@ impl<'a> RenderPass<'a> {
/// Sets the active render pipeline.
///
/// Subsequent draw calls will exhibit the behavior defined by `pipeline`.
pub fn set_pipeline(&mut self, pipeline: &'a RenderPipeline) {
pub fn set_pipeline(&mut self, pipeline: &RenderPipeline) {
DynContext::render_pass_set_pipeline(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&pipeline.id,
pipeline.data.as_ref(),
)
@ -4224,9 +4254,9 @@ impl<'a> RenderPass<'a> {
/// (all components zero).
pub fn set_blend_constant(&mut self, color: Color) {
DynContext::render_pass_set_blend_constant(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
color,
)
}
@ -4235,11 +4265,11 @@ impl<'a> RenderPass<'a> {
///
/// Subsequent calls to [`draw_indexed`](RenderPass::draw_indexed) on this [`RenderPass`] will
/// use `buffer` as the source index buffer.
pub fn set_index_buffer(&mut self, buffer_slice: BufferSlice<'a>, index_format: IndexFormat) {
pub fn set_index_buffer(&mut self, buffer_slice: BufferSlice<'_>, index_format: IndexFormat) {
DynContext::render_pass_set_index_buffer(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&buffer_slice.buffer.id,
buffer_slice.buffer.data.as_ref(),
index_format,
@ -4258,11 +4288,11 @@ impl<'a> RenderPass<'a> {
///
/// [`draw`]: RenderPass::draw
/// [`draw_indexed`]: RenderPass::draw_indexed
pub fn set_vertex_buffer(&mut self, slot: u32, buffer_slice: BufferSlice<'a>) {
pub fn set_vertex_buffer(&mut self, slot: u32, buffer_slice: BufferSlice<'_>) {
DynContext::render_pass_set_vertex_buffer(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
slot,
&buffer_slice.buffer.id,
buffer_slice.buffer.data.as_ref(),
@ -4282,9 +4312,9 @@ impl<'a> RenderPass<'a> {
/// but it does not affect the coordinate system, only which fragments are discarded.
pub fn set_scissor_rect(&mut self, x: u32, y: u32, width: u32, height: u32) {
DynContext::render_pass_set_scissor_rect(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
x,
y,
width,
@ -4300,9 +4330,9 @@ impl<'a> RenderPass<'a> {
/// targets.
pub fn set_viewport(&mut self, x: f32, y: f32, w: f32, h: f32, min_depth: f32, max_depth: f32) {
DynContext::render_pass_set_viewport(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
x,
y,
w,
@ -4318,9 +4348,9 @@ impl<'a> RenderPass<'a> {
/// If this method has not been called, the stencil reference value defaults to `0`.
pub fn set_stencil_reference(&mut self, reference: u32) {
DynContext::render_pass_set_stencil_reference(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
reference,
);
}
@ -4328,9 +4358,9 @@ impl<'a> RenderPass<'a> {
/// Inserts a debug marker.
pub fn insert_debug_marker(&mut self, label: &str) {
DynContext::render_pass_insert_debug_marker(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
label,
);
}
@ -4338,9 +4368,9 @@ impl<'a> RenderPass<'a> {
/// Start recording commands and group them into a debug marker group.
pub fn push_debug_group(&mut self, label: &str) {
DynContext::render_pass_push_debug_group(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
label,
);
}
@ -4348,9 +4378,9 @@ impl<'a> RenderPass<'a> {
/// Ends the debug marker group opened by `push_debug_group`.
pub fn pop_debug_group(&mut self) {
DynContext::render_pass_pop_debug_group(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
);
}
@ -4377,9 +4407,9 @@ impl<'a> RenderPass<'a> {
/// It is not affected by changes to the state that are performed after it is called.
pub fn draw(&mut self, vertices: Range<u32>, instances: Range<u32>) {
DynContext::render_pass_draw(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
vertices,
instances,
)
@ -4411,9 +4441,9 @@ impl<'a> RenderPass<'a> {
/// It is not affected by changes to the state that are performed after it is called.
pub fn draw_indexed(&mut self, indices: Range<u32>, base_vertex: i32, instances: Range<u32>) {
DynContext::render_pass_draw_indexed(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
indices,
base_vertex,
instances,
@ -4433,11 +4463,11 @@ impl<'a> RenderPass<'a> {
/// any use of `@builtin(vertex_index)` or `@builtin(instance_index)` in the vertex shader will have different values.
///
/// See details on the individual flags for more information.
pub fn draw_indirect(&mut self, indirect_buffer: &'a Buffer, indirect_offset: BufferAddress) {
pub fn draw_indirect(&mut self, indirect_buffer: &Buffer, indirect_offset: BufferAddress) {
DynContext::render_pass_draw_indirect(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&indirect_buffer.id,
indirect_buffer.data.as_ref(),
indirect_offset,
@ -4460,13 +4490,13 @@ impl<'a> RenderPass<'a> {
/// See details on the individual flags for more information.
pub fn draw_indexed_indirect(
&mut self,
indirect_buffer: &'a Buffer,
indirect_buffer: &Buffer,
indirect_offset: BufferAddress,
) {
DynContext::render_pass_draw_indexed_indirect(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&indirect_buffer.id,
indirect_buffer.data.as_ref(),
indirect_offset,
@ -4478,22 +4508,25 @@ impl<'a> RenderPass<'a> {
///
/// Commands in the bundle do not inherit this render pass's current render state, and after the
/// bundle has executed, the state is **cleared** (reset to defaults, not the previous state).
pub fn execute_bundles<I: IntoIterator<Item = &'a RenderBundle>>(&mut self, render_bundles: I) {
pub fn execute_bundles<'a, I: IntoIterator<Item = &'a RenderBundle>>(
&mut self,
render_bundles: I,
) {
let mut render_bundles = render_bundles
.into_iter()
.map(|rb| (&rb.id, rb.data.as_ref()));
DynContext::render_pass_execute_bundles(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&mut render_bundles,
)
}
}
/// [`Features::MULTI_DRAW_INDIRECT`] must be enabled on the device in order to call these functions.
impl<'a> RenderPass<'a> {
impl<'encoder> RenderPass<'encoder> {
/// Dispatches multiple draw calls from the active vertex buffer(s) based on the contents of the `indirect_buffer`.
/// `count` draw calls are issued.
///
@ -4506,14 +4539,14 @@ impl<'a> RenderPass<'a> {
/// It is not affected by changes to the state that are performed after it is called.
pub fn multi_draw_indirect(
&mut self,
indirect_buffer: &'a Buffer,
indirect_buffer: &Buffer,
indirect_offset: BufferAddress,
count: u32,
) {
DynContext::render_pass_multi_draw_indirect(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&indirect_buffer.id,
indirect_buffer.data.as_ref(),
indirect_offset,
@ -4534,14 +4567,14 @@ impl<'a> RenderPass<'a> {
/// It is not affected by changes to the state that are performed after it is called.
pub fn multi_draw_indexed_indirect(
&mut self,
indirect_buffer: &'a Buffer,
indirect_buffer: &Buffer,
indirect_offset: BufferAddress,
count: u32,
) {
DynContext::render_pass_multi_draw_indexed_indirect(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&indirect_buffer.id,
indirect_buffer.data.as_ref(),
indirect_offset,
@ -4551,7 +4584,7 @@ impl<'a> RenderPass<'a> {
}
/// [`Features::MULTI_DRAW_INDIRECT_COUNT`] must be enabled on the device in order to call these functions.
impl<'a> RenderPass<'a> {
impl<'encoder> RenderPass<'encoder> {
/// Dispatches multiple draw calls from the active vertex buffer(s) based on the contents of the `indirect_buffer`.
/// The count buffer is read to determine how many draws to issue.
///
@ -4576,16 +4609,16 @@ impl<'a> RenderPass<'a> {
/// It is not affected by changes to the state that are performed after it is called.
pub fn multi_draw_indirect_count(
&mut self,
indirect_buffer: &'a Buffer,
indirect_buffer: &Buffer,
indirect_offset: BufferAddress,
count_buffer: &'a Buffer,
count_buffer: &Buffer,
count_offset: BufferAddress,
max_count: u32,
) {
DynContext::render_pass_multi_draw_indirect_count(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&indirect_buffer.id,
indirect_buffer.data.as_ref(),
indirect_offset,
@ -4623,16 +4656,16 @@ impl<'a> RenderPass<'a> {
/// It is not affected by changes to the state that are performed after it is called.
pub fn multi_draw_indexed_indirect_count(
&mut self,
indirect_buffer: &'a Buffer,
indirect_buffer: &Buffer,
indirect_offset: BufferAddress,
count_buffer: &'a Buffer,
count_buffer: &Buffer,
count_offset: BufferAddress,
max_count: u32,
) {
DynContext::render_pass_multi_draw_indexed_indirect_count(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&indirect_buffer.id,
indirect_buffer.data.as_ref(),
indirect_offset,
@ -4645,7 +4678,7 @@ impl<'a> RenderPass<'a> {
}
/// [`Features::PUSH_CONSTANTS`] must be enabled on the device in order to call these functions.
impl<'a> RenderPass<'a> {
impl<'encoder> RenderPass<'encoder> {
/// Set push constant data for subsequent draw calls.
///
/// Write the bytes in `data` at offset `offset` within push constant
@ -4688,9 +4721,9 @@ impl<'a> RenderPass<'a> {
/// [`PushConstant`]: https://docs.rs/naga/latest/naga/enum.StorageClass.html#variant.PushConstant
pub fn set_push_constants(&mut self, stages: ShaderStages, offset: u32, data: &[u8]) {
DynContext::render_pass_set_push_constants(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
stages,
offset,
data,
@ -4699,7 +4732,7 @@ impl<'a> RenderPass<'a> {
}
/// [`Features::TIMESTAMP_QUERY_INSIDE_PASSES`] must be enabled on the device in order to call these functions.
impl<'a> RenderPass<'a> {
impl<'encoder> RenderPass<'encoder> {
/// Issue a timestamp command at this point in the queue. The
/// timestamp will be written to the specified query set, at the specified index.
///
@ -4709,9 +4742,9 @@ impl<'a> RenderPass<'a> {
/// for a string of operations to complete.
pub fn write_timestamp(&mut self, query_set: &QuerySet, query_index: u32) {
DynContext::render_pass_write_timestamp(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&query_set.id,
query_set.data.as_ref(),
query_index,
@ -4719,14 +4752,14 @@ impl<'a> RenderPass<'a> {
}
}
impl<'a> RenderPass<'a> {
impl<'encoder> RenderPass<'encoder> {
/// Start an occlusion query on this render pass. It can be ended with
/// `end_occlusion_query`. Occlusion queries may not be nested.
pub fn begin_occlusion_query(&mut self, query_index: u32) {
DynContext::render_pass_begin_occlusion_query(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
query_index,
);
}
@ -4735,22 +4768,22 @@ impl<'a> RenderPass<'a> {
/// `begin_occlusion_query`. Occlusion queries may not be nested.
pub fn end_occlusion_query(&mut self) {
DynContext::render_pass_end_occlusion_query(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
);
}
}
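A hedged usage sketch for the occlusion-query pair documented above; it assumes the pass was begun with `occlusion_query_set: Some(&query_set)` in its descriptor, that index 0 is unused, and that a pipeline and vertex state are already bound:

```rust
// Sketch: occlusion queries bracket draw calls inside a render pass whose
// descriptor provided an `occlusion_query_set`.
fn draw_with_occlusion(pass: &mut wgpu::RenderPass<'_>) {
    pass.begin_occlusion_query(0); // result is written to index 0 of the query set
    pass.draw(0..3, 0..1);
    pass.end_occlusion_query();
}
```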
/// [`Features::PIPELINE_STATISTICS_QUERY`] must be enabled on the device in order to call these functions.
impl<'a> RenderPass<'a> {
impl<'encoder> RenderPass<'encoder> {
/// Start a pipeline statistics query on this render pass. It can be ended with
/// `end_pipeline_statistics_query`. Pipeline statistics queries may not be nested.
pub fn begin_pipeline_statistics_query(&mut self, query_set: &QuerySet, query_index: u32) {
DynContext::render_pass_begin_pipeline_statistics_query(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
&query_set.id,
query_set.data.as_ref(),
query_index,
@ -4761,18 +4794,17 @@ impl<'a> RenderPass<'a> {
/// `begin_pipeline_statistics_query`. Pipeline statistics queries may not be nested.
pub fn end_pipeline_statistics_query(&mut self) {
DynContext::render_pass_end_pipeline_statistics_query(
&*self.parent.context,
&mut self.id,
self.data.as_mut(),
&*self.inner.context,
&mut self.inner.id,
self.inner.data.as_mut(),
);
}
}
impl<'a> Drop for RenderPass<'a> {
impl Drop for RenderPassInner {
fn drop(&mut self) {
if !thread::panicking() {
self.parent
.context
self.context
.render_pass_end(&mut self.id, self.data.as_mut());
}
}