We create a renderer for the sole purpose of blitting buffers from a
primary renderer that we might not be able to scan-out from. If we end
up with the pixman renderer, it either won't work because it cannot
import dmabufs from the primary renderer, or won't have any effect
because the primary renderer already uses dumb buffers.
We test for DMA-BUF capabilities specifically to make it clear what our
interest is, rather than focusing too much on the pixman renderer.
If init_drm_renderer failed, it would destroy the renderer but would not
set it to NULL, leading to use-after-free.
NULL the renderer after destroying it.
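As a rough sketch of the pattern (wlr_renderer_destroy() is the real
wlroots call; the surrounding error path is simplified and the variable
is a placeholder):

    /* Destroy and clear, so later cleanup cannot dereference or
     * double-free a stale pointer. */
    wlr_renderer_destroy(renderer);
    renderer = NULL;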
We store both queued and current buffers to be able to retain both the
framebuffer currently on screen and the one queued to replace it. From a
re-use perspective, we only care about the last committed framebuffer.
The viewport is only stored in order to be re-used together with the
last committed framebuffer, so do away with the queued/current
distinction and store a single viewport updated every time a commit
completes.
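A minimal sketch of the resulting layout (names are illustrative, not
the actual wlroots structs):

    struct plane_state {
        struct wlr_drm_fb *queued_fb;  /* framebuffer queued to replace the current one */
        struct wlr_drm_fb *current_fb; /* framebuffer currently on screen */
        struct wlr_fbox viewport;      /* single viewport, refreshed when a commit completes */
    };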
Instead of trying to restore the drm state when the session is activated
again, just disconnect all outputs when the session is deactivated. The
scan that triggers on session activation will rediscover the connectors.
Accessing the output state viewport requires a buffer, but there might
not be a state with a buffer when preparing the plane properties
atomic commit.
Instead, store the properties at the same time as the fb, and use a
similar mechanism to carry the state around.
If our session is re-activated during scanout, restore_drm_device will
reset planes and then attempt an enabling modeset commit without a
buffer. The new plane transform logic requires a committed buffer to be
present to calculate the boxes if they were not explicitly provided, and
at least amdgpu rejects commits that try to use 0 as a default.
Skip updating plane props instead of segfaulting if no buffer is set.
A better fix would be to not rely on restore_drm_device at all and
instead require compositors to modeset in response to session
activation.
Fixes: https://gitlab.freedesktop.org/wlroots/wlroots/-/issues/3912
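The guard boils down to something like this (a sketch only; the
surrounding function and the primary_fb field name are assumptions
about the code in question):

    /* No committed buffer means no src/dst boxes to derive: skip the
     * plane property update instead of dereferencing a NULL fb. */
    if (state->primary_fb == NULL) {
        return true;
    }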
We were checking whether the damage region was empty before
clipping. However a non-empty damage region can become empty after
clipping. Instead, check whether the clipped region is empty.
Fixes: 4339c37f99 ("backend/drm: clip FB damage")
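A self-contained pixman illustration of why the order matters (plain
pixman, not the wlroots code; damage lying outside the framebuffer only
becomes empty after clipping):

    #include <pixman.h>
    #include <stdio.h>

    int main(void) {
        pixman_region32_t damage, clipped;
        /* Damage entirely to the right of a 1920x1080 framebuffer. */
        pixman_region32_init_rect(&damage, 2000, 0, 100, 100);
        pixman_region32_init_rect(&clipped, 0, 0, 1920, 1080);

        printf("before clipping: %s\n",
            pixman_region32_not_empty(&damage) ? "non-empty" : "empty");

        /* Clip the damage to the framebuffer bounds... */
        pixman_region32_intersect(&clipped, &clipped, &damage);
        /* ...and only then decide whether there is anything to do. */
        printf("after clipping: %s\n",
            pixman_region32_not_empty(&clipped) ? "non-empty" : "empty");

        pixman_region32_fini(&damage);
        pixman_region32_fini(&clipped);
        return 0;
    }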
Enable scene-tree direct scanout of a single buffer with various options
for scaling and source crop. This is intended to support direct scanout
for fullscreen video with/without scaling, letterboxing/pillarboxing
(e.g. 4:3 content on a 16:9 display), and source crop (e.g. when
1920x1088 planes are used for 1920x1080 video).
This works by explicitly specifying the source crop and destination box
for the primary buffer in the output state. DRM atomic and libliftoff
backends will turn this into a crop and scale of the plane (assuming the
hardware supports that). For the Wayland/X11/DRM-legacy backends I just
reject this so scanout will be disabled.
The previous behaviour is preserved if buffer_src_box and buffer_dst_box
are unset: the buffer is displayed at native size at the top-left of the
output with no crop.
The change to `struct wlr_output_state` makes this an ABI-breaking
change (but this works transparently for scene-tree compositors like
labwc after a recompile).
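A hedged sketch of how a compositor might drive this, assuming
buffer_src_box is a struct wlr_fbox and buffer_dst_box a struct wlr_box
set directly on the output state (the buffer and output variables are
placeholders):

    /* Crop a 1920x1088 video plane to its visible 1920x1080 region and
     * present it 1:1 at the top-left of the output. */
    struct wlr_output_state state;
    wlr_output_state_init(&state);
    wlr_output_state_set_buffer(&state, buffer);
    state.buffer_src_box = (struct wlr_fbox){
        .x = 0, .y = 0, .width = 1920, .height = 1080,
    };
    state.buffer_dst_box = (struct wlr_box){
        .x = 0, .y = 0, .width = 1920, .height = 1080,
    };
    if (!wlr_output_test_state(output, &state)) {
        /* e.g. Wayland/X11/legacy DRM backends reject the crop/scale,
         * so fall back to composition instead of direct scanout */
    }
    wlr_output_state_finish(&state);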
This piece of code checks for multi-GPU renderer support, so it
needs to run after the renderer is initialized.
Fixes: 514c4b4cce ("backend: add timeline feature flag")
Closes: https://github.com/swaywm/sway/issues/8382
The output feature flag has a flaw: it's not possible to check
whether the backend supports timelines during compositor
initialization when we need to figure out whether we want to enable
the linux-drm-syncobj-v1 protocol.
Introduce a backend-wide feature flag to indicate support for
timelines to address this defect.
Closes: https://gitlab.freedesktop.org/wlroots/wlroots/-/issues/3904
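A hedged sketch of the intended use during compositor initialization
(the features.timeline field name is an assumption based on the
description above):

    /* Only advertise explicit-sync support when the backend as a whole
     * can handle timelines. */
    if (backend->features.timeline) {
        /* create the linux-drm-syncobj-v1 global here */
    }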
After a connector scan, new connectors might have appeared and old ones
gone away. At this point, old CRTC allocations are already gone, while
new allocations are not yet needed. Skip the call.
The page_flip can be destroyed, but it is unconditionally accessed later
on when setting present_flags. Fix this by simply setting the
present_flags before the page_flip gets destroyed.
../backend/drm/drm.c:415:49: error: ‘calloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Werror=calloc-transposed-args]
415 | layer->candidate_planes = calloc(sizeof(bool), drm->num_planes);
| ^~~~
../backend/drm/drm.c:1435:24: error: incompatible types when returning type ‘_Bool’ but ‘struct wlr_drm_connector *’ was expected
1435 | return false;
| ^~~~~
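The fixes implied by the diagnostics look like this (isolated lines,
not the actual diff):

    /* calloc(nmemb, size): element count first, element size second. */
    layer->candidate_planes = calloc(drm->num_planes, sizeof(bool));

    /* The function returns a struct wlr_drm_connector *, so fail with NULL. */
    return NULL;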
This will let compositors know if changing adaptive_sync state has any
chance of working. When false, the current state is the only supported
state, even if adaptive_sync is currently enabled, as is the case for
the Wayland and X11 backends.
When true, changing the state might succeed, but no guarantee is made. It
just indicates that the backend does not already know it to be
impossible.
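A hedged usage sketch, assuming the capability is exposed as a boolean
such as output->adaptive_sync_supported
(wlr_output_state_set_adaptive_sync_enabled() is the existing state
helper):

    /* Only ask for a change when the backend has not already ruled it
     * out as impossible. */
    if (output->adaptive_sync_supported) {
        wlr_output_state_set_adaptive_sync_enabled(&state, true);
    }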
Our multi-gpu path currently needs to blit a buffer in order to have a
primaryfb to add to the commit. This is expensive, and we skip it
entirely during test commits. This in turn also means that we skip test
commits entirely for such outputs, outside our own basic tests.
Backend-wide commits missed this check, and tried to perform test
commits for multi-gpu outputs despite no primaryfb having been attached,
making them always fail. Add the same exception as we have in the
per-connector commit-test.
When a compositor submits a wlr_output_state with
WLR_OUTPUT_STATE_ADAPTIVE_SYNC_ENABLED set and
adaptive_sync_enabled = false on an output which doesn't support
adaptive sync, we'd fail the commit. Fix this.
This bug was previously hidden because wlr_output_commit() drops
no-op changes from wlr_output_state.committed.