drmModeAddFB2 doesn't support explicit modifiers. Only accept INVALID,
which indicates an implicit modifier, and LINEAR, which may indicate
that GBM_BO_USE_LINEAR has been used.
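For illustration, such a check could look like the following (a minimal
sketch with a hypothetical helper name, not the actual wlroots code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <drm_fourcc.h>

    // Hypothetical helper: only allow modifiers that drmModeAddFB2 can
    // represent. INVALID means "implicit modifier"; LINEAR may have been
    // set because the buffer was allocated with GBM_BO_USE_LINEAR.
    static bool modifier_ok_for_addfb2(uint64_t modifier) {
        return modifier == DRM_FORMAT_MOD_INVALID ||
            modifier == DRM_FORMAT_MOD_LINEAR;
    }
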
DMA-BUF flags are never used in practice, which makes all of our flag
handling effectively dead code. Also, APIs such as KMS don't
provide a good way to deal with the flags. Let's just fail the
DMA-BUF import when clients provide flags.
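A sketch of the resulting check, assuming a wlr_dmabuf_attributes-like
struct with a flags field (helper name and error handling are
illustrative only):

    #include <stdbool.h>
    #include <stdint.h>
    #include <wlr/util/log.h>

    // Hypothetical check: reject DMA-BUF imports that carry flags, since
    // nothing downstream (e.g. KMS) can honor them anyway.
    static bool check_dmabuf_flags(uint32_t flags) {
        if (flags != 0) {
            wlr_log(WLR_ERROR, "DMA-BUF flags are not supported");
            return false;
        }
        return true;
    }
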
The BO handle table exists to avoid double-closing BO handles,
which aren't reference-counted by the kernel. But if we can
guarantee that there is only ever a single ref for each BO handle,
then we don't need the BO handle table anymore.
This is possible if we create the handle right before the ADDFB2
IOCTL, and close the handle right after. The handles are very
short-lived and we don't need to track their lifetime.
Because of multi-planar FBs, we need to be a bit careful: some
FB planes might share the same handle. But with a small check, it's
easy to avoid double-closing the same handle (which wouldn't be a
big deal anyway).
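A sketch of this flow for a multi-planar DMA-BUF might look like this
(hypothetical helper with simplified error handling; it assumes at most
4 planes, and drmCloseBufferHandle needs a recent libdrm, otherwise the
DRM_IOCTL_GEM_CLOSE ioctl does the same):

    #include <stdbool.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static bool addfb2_from_dmabuf(int drm_fd, uint32_t width, uint32_t height,
            uint32_t format, int n_planes, const int *dmabuf_fds,
            const uint32_t strides[4], const uint32_t offsets[4],
            uint32_t *fb_id_out) {
        uint32_t handles[4] = {0};
        bool ok = false;

        // Create the BO handles right before the ADDFB2 IOCTL...
        for (int i = 0; i < n_planes; i++) {
            if (drmPrimeFDToHandle(drm_fd, dmabuf_fds[i], &handles[i]) != 0) {
                goto out;
            }
        }

        ok = drmModeAddFB2(drm_fd, width, height, format,
            handles, strides, offsets, fb_id_out, 0) == 0;

    out:
        // ...and close them right after, taking care not to close a handle
        // twice when several planes share the same BO.
        for (int i = 0; i < n_planes; i++) {
            if (handles[i] == 0) {
                continue;
            }
            bool seen = false;
            for (int j = 0; j < i; j++) {
                if (handles[j] == handles[i]) {
                    seen = true;
                    break;
                }
            }
            if (!seen) {
                drmCloseBufferHandle(drm_fd, handles[i]);
            }
        }
        return ok;
    }
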
There's one gotcha though: drmModeSetCursor2 takes a BO handle as
input. Saving the handles until drmModeSetCursor2 time would require
us to track BO handle lifetimes, so we wouldn't be able to get rid
of the BO handle table. As a workaround, use drmModeGetFB to turn the
FB ID back to a BO handle, call drmModeSetCursor2 and then immediately
close the BO handle. The overhead should be minimal since these IOCTLs
are pretty cheap.
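A sketch of that workaround, with a hypothetical helper name and
simplified error handling:

    #include <stdbool.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static bool set_cursor_from_fb(int drm_fd, uint32_t crtc_id, uint32_t fb_id,
            uint32_t width, uint32_t height, int32_t hot_x, int32_t hot_y) {
        // Turn the FB ID back into a BO handle...
        drmModeFB *fb = drmModeGetFB(drm_fd, fb_id);
        if (fb == NULL) {
            return false;
        }
        // ...use it for the cursor, then immediately drop the handle again.
        int ret = drmModeSetCursor2(drm_fd, crtc_id, fb->handle,
            width, height, hot_x, hot_y);
        drmCloseBufferHandle(drm_fd, fb->handle);
        drmModeFreeFB(fb);
        return ret == 0;
    }
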
Closes: https://github.com/swaywm/wlroots/issues/3164
Using GBM to import DRM dumb buffers tends not to work well. By
using GBM we're calling driver-specific functions in Mesa.
These functions check whether Mesa can work with the buffer.
Sometimes Mesa's requirements differ from those of DRM dumb buffers
and the GBM import will fail (e.g. on amdgpu).
Instead, drop GBM and use drmPrimeFDToHandle directly. But there's
a twist: BO handles are not ref'counted by the kernel and need to
be ref'counted in user-space [1]. libdrm usually performs this
bookkeeping and is used under-the-hood by Mesa.
We can't re-use libdrm for this task without using driver-specific
APIs. So let's just re-implement the ref'counting logic in wlroots.
The wlroots implementation is inspired by amdgpu's in libdrm [2].
Closes: https://github.com/swaywm/wlroots/issues/2916
[1]: https://gitlab.freedesktop.org/mesa/drm/-/merge_requests/110
[2]: 1a4c0ec9ae/amdgpu/handle_table.c
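For illustration, a minimal user-space ref'counting table in that
spirit could look like this (a rough sketch with hypothetical names,
not wlroots' or libdrm's actual code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    // Sparse array mapping a GEM handle to its reference count.
    struct handle_table {
        uint32_t *refs; // refs[handle] == number of users of that handle
        size_t len;
    };

    static bool handle_table_ref(struct handle_table *table, uint32_t handle) {
        if (handle >= table->len) {
            size_t new_len = handle + 1;
            uint32_t *refs = realloc(table->refs, new_len * sizeof(uint32_t));
            if (refs == NULL) {
                return false;
            }
            memset(&refs[table->len], 0,
                (new_len - table->len) * sizeof(uint32_t));
            table->refs = refs;
            table->len = new_len;
        }
        table->refs[handle]++;
        return true;
    }

    // Returns true when the last reference was dropped, i.e. when the
    // caller should actually close the handle (e.g. via DRM_IOCTL_GEM_CLOSE).
    static bool handle_table_unref(struct handle_table *table, uint32_t handle) {
        return --table->refs[handle] == 0;
    }
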
Unless we're dealing with a multi-GPU setup and the backend being
initialized is secondary, we need neither a renderer nor an allocator.
Stop initializing them.
drm_surface_make_current and drm_surface_unset_current set implicit
state and are an unnecessary mid-layer. Prefer using
wlr_renderer_begin_with_buffer directly, which automatically unsets the
back buffer in wlr_renderer_end.
I'd like to get rid of drm_surface_make_current once we stop using
it for the primary swapchain.
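For example, rendering to a buffer then boils down to something like
this (sketch only; buffer setup and drawing are elided):

    #include <stdbool.h>
    #include <wlr/render/wlr_renderer.h>

    // Bind the buffer and make any needed context current, draw, then end
    // rendering, which also unsets the back buffer again.
    static bool render_to_buffer(struct wlr_renderer *renderer,
            struct wlr_buffer *buffer) {
        if (!wlr_renderer_begin_with_buffer(renderer, buffer)) {
            return false;
        }
        // ... draw here ...
        wlr_renderer_end(renderer);
        return true;
    }
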
Instead of passing a wlr_texture to the backend, directly pass a
wlr_buffer. Use get_cursor_size and get_cursor_formats to create
a wlr_buffer that can be used as a cursor.
We don't want to pass a wlr_texture because we want to remove as
many rendering bits from the backend as possible.
On multi-GPU setups, there is a primary DRM backend and secondary
DRM backends. wlr_backend_get_drm_fd will always return the parent
DRM FD even on secondary backends, so that users always use the
primary device for rendering.
However, for our internal rendering we want to use the secondary
device. Use allocator_autocreate_with_drm_fd to make sure the
allocator will create buffers on the secondary device.
We do something similar with renderer_autocreate_with_drm_fd to ensure
our internal rendering happens on the secondary device.
Fixes: cc1b66364c ("backend: use wlr_allocator_autocreate")
This function is only required because the DRM backend still needs
to perform multi-GPU magic under-the-hood. Remove the wlr_ prefix
to make it clear it's not a candidate for being made public.
Downgrade errors to DEBUG level, because drm_fb_create is used in
test_buffer, so errors aren't always fatal. Add ERROR logs at call
sites where a failure is fatal, to make it clear something wrong
happened.
Compute only the transform matrix in the output. The projection matrix
will be calculated inside the gles2 renderer when we start rendering.
The goal is to make things easier for the pixman renderer.
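Roughly, the renderer can derive the projection itself from the current
buffer size and transform when rendering begins, e.g. (an illustrative
sketch with a hypothetical wrapper, not the actual gles2 code):

    #include <wlr/types/wlr_matrix.h>

    // Sketch: build the projection at render time from the buffer size
    // and output transform, instead of storing it in wlr_output.
    static void compute_projection(float projection[static 9],
            int width, int height, enum wl_output_transform transform) {
        wlr_matrix_projection(projection, width, height, transform);
    }
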
The new wlr_renderer_autocreate API is great for compositors, however
it causes some issues with DRM multi-GPU support.
A DRM child backend wants the compositor to use the parent GPU, so it
exposes the parent's DRM FD in get_drm_fd. However, in order to be able
to perform multi-GPU buffer copies, the child DRM backend still needs to
create a local renderer.
Use the new private wlr_renderer_autocreate_with_drm_fd function to
avoid creating a renderer for the parent GPU.
Fixes: e128e6c08d ("render: drop egl parameters from wlr_renderer_autocreate")
Instead of requiring callers to manually make the EGL context current
before binding a buffer and to unset it after unbinding the buffer, do
it inside wlr_renderer_bind_buffer.
This hides renderer-specific implementation details inside the
wlr_renderer interface. Non-GLES2 renderers may not use EGL.
This removes all EGL dependencies from the backends.
References: https://github.com/swaywm/wlroots/issues/2618
References: https://github.com/swaywm/wlroots/pull/2615#issuecomment-756687006
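On the caller side this reduces to something like the following (a
sketch; the wlr_renderer_bind_buffer signature is assumed from this
era's API, and rendering is elided):

    #include <stdbool.h>
    #include <wlr/render/wlr_renderer.h>

    // No EGL calls are needed around bind/unbind anymore: binding a
    // buffer makes the context current, unbinding (NULL) unsets it.
    static bool render_once(struct wlr_renderer *renderer,
            struct wlr_buffer *buffer) {
        if (!wlr_renderer_bind_buffer(renderer, buffer)) {
            return false;
        }
        // ... render ...
        wlr_renderer_bind_buffer(renderer, NULL);
        return true;
    }
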
Stop keeping track of buffers on the parent GPU when multi-GPU is used.
This removes support for export_dmabuf on secondary GPUs, but renderer
v6 will bring this back by managing the swapchains in wlr_output instead
of the backends.
This callback allowed compositors to customize the EGL config used by
the renderer. However with renderer v6 EGL configs aren't used anymore.
Instead, buffers are allocated via GBM and GL FBOs are rendered to. So
customizing the EGL config is a no-op.