When setting the primary buffer location for direct scanout, subtract
the offset of that output to put the buffer location in output-relative
coordinates.
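A rough sketch of that adjustment (not the exact scene-graph code): scene_output->x/y is the output's position in the layout, while node_x/node_y and buffer_width/buffer_height are placeholders for the buffer node's layout-space position and size.

    /* Convert the buffer's layout-space position into output-relative
     * coordinates by subtracting the output's layout offset. */
    struct wlr_box dst_box = {
        .x = node_x - scene_output->x,
        .y = node_y - scene_output->y,
        .width = buffer_width,
        .height = buffer_height,
    };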
Fixes #3910
Enable scene-tree direct scanout of a single buffer with various options
for scaling and source crop. This is intended to support direct scanout
for fullscreen video with/without scaling, letterboxing/pillarboxing
(e.g. 4:3 content on a 16:9 display), and source crop (e.g. when
1920x1088 planes are used for 1920x1080 video).
This works by explicitly specifying the source crop and destination box
for the primary buffer in the output state. DRM atomic and libliftoff
backends will turn this into a crop and scale of the plane (assuming the
hardware supports that). For the Wayland/X11/DRM-legacy backends I just
reject this so scanout will be disabled.
The previous behaviour is preserved if buffer_src_box and buffer_dst_box
are unset: the buffer is displayed at native size at the top-left of the
output with no crop.
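As an illustrative sketch only (the buffer_src_box/buffer_dst_box field names come from the description above, but their exact types are assumed here; `buffer` is a 1920x1088 plane holding 1920x1080 video, and the output is 3840x2160):

    struct wlr_output_state state;
    wlr_output_state_init(&state);
    wlr_output_state_set_buffer(&state, buffer);

    /* Source crop: only the top 1920x1080 of the 1920x1088 plane. */
    state.buffer_src_box = (struct wlr_fbox){
        .x = 0, .y = 0, .width = 1920, .height = 1080,
    };

    /* Destination box: scale the cropped buffer to fill the output
     * (a smaller box would letterbox/pillarbox instead). */
    state.buffer_dst_box = (struct wlr_box){
        .x = 0, .y = 0, .width = 3840, .height = 2160,
    };

    if (!wlr_output_test_state(output, &state)) {
        /* Backend can't crop/scale the plane (e.g. Wayland/X11/
         * DRM-legacy): fall back to composition. */
    }
    wlr_output_state_finish(&state);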
The change to `struct wlr_output_state` makes this an ABI-breaking
change (but this works transparently for scene-tree compositors like
labwc after a recompile).
Since wlr_damage_ring now only works with buffer-local coordinates, this
creates an impedance mismatch for compositors that want to use this
function. Instead of compositors needing to do the conversion themselves,
change the function to take buffer-local coordinates directly.
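For illustration (the exact entry point isn't named above, so wlr_damage_ring_add() is an assumption here):

    /* The region accumulated by the compositor is already in
     * buffer-local coordinates, so it can be handed over directly; no
     * round-trip through output-local coordinates (e.g. via
     * wlr_region_transform()) is needed anymore. */
    wlr_damage_ring_add(&ring, &buffer_damage);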
This wasn't that great:
1. Now that the damage ring tracks damage across actual wlr_buffer objects,
it can use the buffer size to do any cropping that needs to
happen.
2. The damage ring size really should be the transformed size of the
output. Compositors currently have to call
`wlr_damage_ring_set_bounds()`, and it might not be clear when to
call that function. Instead, compositors can just check against the
actual output bounds they care about when processing the damage
(see the sketch after this list).
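A minimal sketch of that check, assuming the compositor holds its accumulated damage in a pixman_region32_t:

    /* Clip the damage to the transformed output size when consuming
     * it, instead of relying on wlr_damage_ring_set_bounds(). */
    int width, height;
    wlr_output_transformed_resolution(output, &width, &height);
    pixman_region32_intersect_rect(&damage, &damage, 0, 0, width, height);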
Fixes: #3891
If a surface with an existing buffer has a syncobj surface state created
without committing a new buffer with associated timelines, callers will
see the surface as having a syncobj state and may try to use it, but
calling the signal_release_with_buffer helper at this point will assert
because the release timeline is missing.
As this is a valid situation, remove the assert and replace it with an
early return so that callers do not need to explicitly check for the
presence of valid timelines.
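The shape of the change, sketched with illustrative names (the real field layout may differ):

    /* Previously this asserted; a surface can validly have a syncobj
     * state before any buffer with associated timelines has been
     * committed. */
    if (state->release_timeline == NULL) {
        return;
    }
    /* ...signal the release point on the timeline as before... */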
Fixes: https://gitlab.freedesktop.org/wlroots/wlroots/-/issues/3895
A struct wlr_xdg_toplevel_configure is passed in with the whole
state. This makes it a lot clearer that the size and WM state are
always sent to the client.
It was completely wrong: according to the protocol, the effective
geometry is only updated at commit time if the pending state has
new state from xdg_surface.set_window_geometry, or if
xdg_surface.set_window_geometry has never been sent at all.
This commit adds wlr_xdg_surface.geometry which correctly matches the
effective window geometry and removes now-useless
wlr_xdg_surface_get_geometry().
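Compositor-side, the replacement is a direct field read; a minimal sketch:

    /* Before: wlr_xdg_surface_get_geometry(xdg_surface, &geo);
     * After: read the tracked effective window geometry directly. */
    struct wlr_box geo = xdg_surface->geometry;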
It seems that some scene compositors want to avoid wlr_scene_output_commit
and use the lower-level wlr_scene_output_build_state. However, build
state does not return early if a frame is not needed, so compositors have
to implement the check themselves. Let's add a helper function that
compositors can use to implement the check.
Technically pending_commit_damage is a private interface, so this also
lets compositors implement the check without touching private interfaces.
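A sketch of the intended usage in an output frame handler; the helper isn't named above, so wlr_scene_output_needs_frame() is assumed here:

    if (!wlr_scene_output_needs_frame(scene_output)) {
        return; /* nothing to present this frame */
    }

    struct wlr_output_state state;
    wlr_output_state_init(&state);
    if (wlr_scene_output_build_state(scene_output, &state, NULL)) {
        wlr_output_commit_state(scene_output->output, &state);
    }
    wlr_output_state_finish(&state);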