[00/13] drm: Fix reservation locking for pin/unpin and console

Message ID 20240227113853.8464-1-tzimmermann@suse.de (mailing list archive)

Message

Thomas Zimmermann Feb. 27, 2024, 10:14 a.m. UTC
Dma-buf locking semantics require the caller of pin and unpin to hold
the buffer's reservation lock. Fix DRM to adhere to the specs. This
enables fixing the locking in DRM's console emulation. Similar changes
for vmap and mmap have been posted at [1][2].

Most DRM drivers and memory managers acquire the buffer object's
reservation lock within their GEM pin and unpin callbacks. This
violates dma-buf locking semantics. We get away with it because PRIME
does not provide pin/unpin, but attach/detach, for which the locking
semantics are correct.
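
For illustration (a hedged sketch, not code from any particular driver;
the foo_* names are hypothetical), the pattern being fixed looks roughly
like this:

  /* GEM pin callback that takes the reservation lock itself. This is
   * what conflicts with the dma-buf rule that the *caller* must
   * already hold the lock. */
  static int foo_gem_pin(struct drm_gem_object *obj)
  {
          struct foo_bo *bo = to_foo_bo(obj);  /* hypothetical driver types */
          int ret;

          dma_resv_lock(obj->resv, NULL);      /* lock taken inside the callback */
          ret = foo_bo_pin(bo);
          dma_resv_unlock(obj->resv);

          return ret;
  }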

Patches 1 to 8 rework DRM GEM code in various implementations to
acquire the reservation lock when entering the pin and unpin callbacks.
This prepares them for the next patch. Drivers that are not affected
by these patches either don't acquire the reservation lock (amdgpu)
or don't need preparation (loongson).

Patch 9 moves reservation locking from the GEM pin/unpin callbacks
into drm_gem_pin() and drm_gem_unpin(). As PRIME uses these functions
internally, it still gets the reservation lock.
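
As a rough sketch of the resulting pattern (not the literal patch),
drm_gem_pin() then wraps the driver's ->pin callback in the lock, so the
callback always runs with the reservation held:

  int drm_gem_pin(struct drm_gem_object *obj)
  {
          int ret;

          dma_resv_lock(obj->resv, NULL);
          ret = obj->funcs->pin ? obj->funcs->pin(obj) : 0;
          dma_resv_unlock(obj->resv);

          return ret;
  }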

With the updated GEM callbacks, the rest of the patchset fixes the
fbdev emulation's buffer locking. Fbdev emulation needs to keep its
GEM buffer object in place while updating its content. This required
an implicit pinning, and apparently amdgpu didn't do this at all.

Patch 10 introduces drm_client_buffer_vmap_local() and _vunmap_local().
The former maps a GEM buffer into the kernel's address space with
regular vmap operations, but keeps holding the reservation lock. The
_vunmap_local() helper undoes the vmap and releases the lock. The
updated GEM callbacks make this possible. Between the two calls, the
fbdev emulation can update the buffer content without having the buffer
moved or evicted. Update fbdev-generic to use the vmap_local helpers,
which fixes amdgpu. The idea of adding a "local vmap" has previously
been attempted at [3] in a different form.
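
A minimal usage sketch for a damage-handling path (the helper names come
from the patch title; the buffer pointer and the copy parameters are
assumed to come from the surrounding fbdev code):

  struct iosys_map map;
  int ret;

  ret = drm_client_buffer_vmap_local(buffer, &map);
  if (ret)
          return ret;

  /* The reservation lock is held here, so the buffer object cannot be
   * moved or evicted while the content is updated. */
  iosys_map_memcpy_to(&map, dst_off, src, len);

  drm_client_buffer_vunmap_local(buffer);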

Patch 11 adds implicit pinning to the DRM client's regular vmap
helper so that long-term vmap'ed buffers won't be evicted. This only
affects fbdev-dma, but since GEM DMA helpers don't require pinning,
there are no practical changes.
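
The long-term path then pins before mapping, roughly like this (a sketch;
the exact helpers used by the client code may differ):

  ret = drm_gem_pin(gem);                 /* keep the BO resident long-term */
  if (ret)
          return ret;

  ret = drm_gem_vmap_unlocked(gem, &map);
  if (ret) {
          drm_gem_unpin(gem);
          return ret;
  }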

Patches 12 and 13 remove implicit pinning from the vmap and vunmap
operations in gem-vram and qxl. These pin operations are not supposed
to be part of vmap code, but were required to keep the buffers in place
for fbdev emulation. With the conversion of fbdev-generic to
vmap_local helpers, that code can finally be removed.

Tested with amdgpu, nouveau, radeon, simpledrm and vc4.

[1] https://patchwork.freedesktop.org/series/106371/
[2] https://patchwork.freedesktop.org/series/116001/
[3] https://patchwork.freedesktop.org/series/84732/

Thomas Zimmermann (13):
  drm/gem-shmem: Acquire reservation lock in GEM pin/unpin callbacks
  drm/gem-vram: Acquire reservation lock in GEM pin/unpin callbacks
  drm/msm: Provide msm_gem_get_pages_locked()
  drm/msm: Acquire reservation lock in GEM pin/unpin callback
  drm/nouveau: Provide nouveau_bo_{pin,unpin}_locked()
  drm/nouveau: Acquire reservation lock in GEM pin/unpin callbacks
  drm/qxl: Provide qxl_bo_{pin,unpin}_locked()
  drm/qxl: Acquire reservation lock in GEM pin/unpin callbacks
  drm/gem: Acquire reservation lock in drm_gem_{pin/unpin}()
  drm/fbdev-generic: Fix locking with drm_client_buffer_vmap_local()
  drm/client: Pin vmap'ed GEM buffers
  drm/gem-vram: Do not pin buffer objects for vmap
  drm/qxl: Do not pin buffer objects for vmap

 drivers/gpu/drm/drm_client.c            |  92 ++++++++++++++++++---
 drivers/gpu/drm/drm_fbdev_generic.c     |   4 +-
 drivers/gpu/drm/drm_gem.c               |  34 +++++++-
 drivers/gpu/drm/drm_gem_shmem_helper.c  |   6 +-
 drivers/gpu/drm/drm_gem_vram_helper.c   | 101 ++++++++++--------------
 drivers/gpu/drm/drm_internal.h          |   2 +
 drivers/gpu/drm/loongson/lsdc_gem.c     |  13 +--
 drivers/gpu/drm/msm/msm_gem.c           |  20 ++---
 drivers/gpu/drm/msm/msm_gem.h           |   4 +-
 drivers/gpu/drm/msm/msm_gem_prime.c     |  20 +++--
 drivers/gpu/drm/nouveau/nouveau_bo.c    |  43 +++++++---
 drivers/gpu/drm/nouveau/nouveau_bo.h    |   2 +
 drivers/gpu/drm/nouveau/nouveau_prime.c |   8 +-
 drivers/gpu/drm/qxl/qxl_object.c        |  26 +++---
 drivers/gpu/drm/qxl/qxl_object.h        |   2 +
 drivers/gpu/drm/qxl/qxl_prime.c         |   4 +-
 drivers/gpu/drm/radeon/radeon_prime.c   |  11 ---
 drivers/gpu/drm/vmwgfx/vmwgfx_gem.c     |  25 ++----
 include/drm/drm_client.h                |  10 +++
 include/drm/drm_gem.h                   |   3 +
 include/drm/drm_gem_shmem_helper.h      |   7 +-
 21 files changed, 265 insertions(+), 172 deletions(-)


base-commit: 7291e2e67dff0ff573900266382c9c9248a7dea5
prerequisite-patch-id: bdfa0e6341b30cc9d7647172760b3473007c1216
prerequisite-patch-id: bc27ac702099f481890ae2c7c4a9c531f4a62d64
prerequisite-patch-id: f5d4bf16dc45334254527c2e31ee21ba4582761c
prerequisite-patch-id: 734c87e610747779aa41be12eb9e4c984bdfa743
prerequisite-patch-id: 0aa359f6144c4015c140c8a6750be19099c676fb
prerequisite-patch-id: c67e5d886a47b7d0266d81100837557fda34cb24
prerequisite-patch-id: cbc453ee02fae02af22fbfdce56ab732c7a88c36

Comments

Christian König Feb. 27, 2024, 2:03 p.m. UTC | #1
Nice, looks totally valid to me.

Feel free to add to patches #2, #9, #10, #11 and #12:
Reviewed-by: Christian König <christian.koenig@amd.com>

And Acked-by: Christian König <christian.koenig@amd.com> to the rest.

Regards,
Christian.

Am 27.02.24 um 11:14 schrieb Thomas Zimmermann:
> [...]
Thomas Zimmermann Feb. 27, 2024, 3:42 p.m. UTC | #2
Hi

Am 27.02.24 um 15:03 schrieb Christian König:
> Nice, looks totally valid to me.
>
> Feel free to add to patches #2, #9, #10, #11 and #12:
> Reviewed-by: Christian König <christian.koenig@amd.com>
>
> And Acked-by: Christian König <christian.koenig@amd.com> to the rest.

Oh, wow. That was quick! Thanks a lot.

Best regards
Thomas

> [...]
Dmitry Osipenko Feb. 27, 2024, 6:14 p.m. UTC | #3
Hello,

Thank you for the patches!

On 2/27/24 13:14, Thomas Zimmermann wrote:
> [...]
> 
> Patches 12 and 13 remove implicit pinning from the vmap and vunmap
> operations in gem-vram and qxl. These pin operations are not supposed
> to be part of vmap code, but were required to keep the buffers in place
> for fbdev emulation. With the conversion of fbdev-generic to
> vmap_local helpers, that code can finally be removed.

Isn't it a common behaviour for all DRM drivers to implicitly pin BO
while it's vmapped? I was sure it should be common /o\

Why would you want to kmap BO that isn't pinned?

Shouldn't TTM's vmap() be changed to do the pinning?

I missed that TTM doesn't pin BO on vmap() and now surprised to see it.
It should be a rather serious problem requiring backporting of the
fixes, but I don't see the fixes tags on the patches (?)
Christian König Feb. 27, 2024, 6:33 p.m. UTC | #4
Am 27.02.24 um 19:14 schrieb Dmitry Osipenko:
> Hello,
>
> Thank you for the patches!
>
> On 2/27/24 13:14, Thomas Zimmermann wrote:
>> [...]
> Isn't it a common behaviour for all DRM drivers to implicitly pin BO
> while it's vmapped? I was sure it should be common /o\

No, at least amdgpu and radeon don't pin kmapped BOs, and I don't think
nouveau does either.

> Why would you want to kmap BO that isn't pinned?

The usual use case is to call the ttm kmap function when you need CPU 
access.

When the buffer hasn't moved, we can use the cached CPU mapping; if the
buffer has moved since the last time, or this is the first time it is
called, we set up a new mapping.
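
Roughly, that unpinned CPU-access pattern looks like this (a sketch using
the ttm_bo_vmap() helpers as one example; amdgpu and friends use their own
cached kmap wrappers instead):

  dma_resv_lock(bo->base.resv, NULL);

  ret = ttm_bo_vmap(bo, &map);
  if (!ret) {
          /* CPU access; the BO cannot be moved while the lock is held */
          ttm_bo_vunmap(bo, &map);
  }

  dma_resv_unlock(bo->base.resv);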

> Shouldn't TTM's vmap() be changed to do the pinning?

Absolutely not, no. That would break tons of use cases.

Regards,
Christian.

>
> I missed that TTM doesn't pin BO on vmap() and now surprised to see it.
> It should be a rather serious problem requiring backporting of the
> fixes, but I don't see the fixes tags on the patches (?)
>
Zack Rusin Feb. 28, 2024, 3:54 a.m. UTC | #5
On Tue, Feb 27, 2024 at 6:38 AM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> [...]

That's a really nice cleanup! I already gave a r-b for 9/13. For the rest:
Acked-by: Zack Rusin <zack.rusin@broadcom.com>

z
Thomas Zimmermann Feb. 28, 2024, 8:19 a.m. UTC | #6
Hi

Am 27.02.24 um 19:14 schrieb Dmitry Osipenko:
> Hello,
>
> Thank you for the patches!
>
> On 2/27/24 13:14, Thomas Zimmermann wrote:
>> [...]
> Isn't it a common behaviour for all DRM drivers to implicitly pin BO
> while it's vmapped? I was sure it should be common /o\

That's what I originally thought as well, but the intention is for pin 
and vmap to be distinct operations. So far each driver has been 
different, as you probably know best from your vmap refactoring. :)

>
> Why would you want to kmap BO that isn't pinned?

Pinning places the buffer object for the GPU. As a side effect, the 
buffer is then kept in place, which enables vmap. So pinning only makes 
sense for buffer objects that never move (shmem, dma). That's what patch 
11 is for.

>
> Shouldn't TTM's vmap() be changed to do the pinning?

I don't think so. One problem is that pinning needs a memory area (vram, 
GTT, system ram, etc) specified, which vmap simply doesn't know about. 
That has been a problem for fbdev emulation at some point. Our fbdev 
code tried to pin as part of vmap, but chose the wrong area and suddenly 
the GPU could not see the buffer object any longer.  So the next best 
thing for vmap was to pin the buffer object where ever it is currently 
located. That is what gem-vram and qxl did so far. And of course, the 
fbdev code needs to unpin and vunmap the buffer object quickly, so that 
it can be relocated if the GPU needs it.  Hence, the vmap_local 
interface removes such short-term pinning in favor of holding the 
reservation lock.
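
The gem-vram helpers illustrate the point: pinning takes a placement flag
that vmap has no way to choose (a sketch; flags from the gem-vram API):

  /* pin for scanout: the caller knows the BO must live in video memory */
  ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);

  /* pin "wherever the buffer currently is", as the old vmap code did */
  ret = drm_gem_vram_pin(gbo, 0);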

>
> I missed that TTM doesn't pin BO on vmap() and now surprised to see it.
> It should be a rather serious problem requiring backporting of the
> fixes, but I don't see the fixes tags on the patches (?)

No chance TBH. The old code has worked for years and backporting all 
this would require your vmap patches at a minimum.

Except maybe for amdgpu. It uses fbdev-generic, which requires pinning, 
but amdgpu doesn't pin. That looks fishy, but I'm not aware of any bug 
reports either. I guess a quick workaround could fix older amdgpu if 
necessary.

Best regards
Thomas

>
Dmitry Osipenko March 1, 2024, 4:44 p.m. UTC | #7
On 2/28/24 11:19, Thomas Zimmermann wrote:
> [...]

Thanks! I'll make another pass on the patches on Monday
Dmitry Osipenko March 5, 2024, 9:58 p.m. UTC | #8
On 2/27/24 13:14, Thomas Zimmermann wrote:
> [...]

The patches look good. I gave them fbtest on virtio-gpu, no problems
spotted.

Reviewed-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> # virtio-gpu
Thomas Zimmermann March 6, 2024, 2:44 p.m. UTC | #9
Hi

Am 05.03.24 um 22:58 schrieb Dmitry Osipenko:
> On 2/27/24 13:14, Thomas Zimmermann wrote:
>> [...]
> The patches look good. I gave them fbtest on virtio-gpu, no problems
> spotted.
>
> Reviewed-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> # virtio-gpu

Great, thanks a lot. If no other reviews come in, I'll land the patchset
within the next few days.

Best regards
Thomas

>