
[1/5] drm/panfrost: Stop using drm_gem_shmem_put_pages()

Message ID 20230626120247.1337962-2-boris.brezillon@collabora.com (mailing list archive)
State New, archived
Series drm/shmem-helper: Follow-up on 'Switch to reservation lock'

Commit Message

Boris Brezillon June 26, 2023, 12:02 p.m. UTC
We want to get rid of this helper function, so let's use
drm_gem_shmem_unpin() and move this call out of the
dma_resv-locked section.

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Emil Velikov <emil.l.velikov@gmail.com>
Cc: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Steven Price <steven.price@arm.com>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)
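
The ordering in the error path is the important part here: since the
switch to the reservation lock, drm_gem_shmem_unpin() takes the resv
lock itself, whereas drm_gem_shmem_put_pages() expects the caller to
already hold it, which is why the call has to move below
dma_resv_unlock(). Roughly (a simplified sketch, not the actual helper
code):

#include <linux/dma-resv.h>
#include <drm/drm_gem_shmem_helper.h>

/* Simplified sketch of the helper's locking, for context only. */
void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
{
        dma_resv_lock(shmem->base.resv, NULL);
        drm_gem_shmem_put_pages(shmem); /* expects the resv lock to be held */
        dma_resv_unlock(shmem->base.resv);
}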

Comments

Dmitry Osipenko June 26, 2023, 1:20 p.m. UTC | #1
On 6/26/23 15:02, Boris Brezillon wrote:
> -err_pages:
> -	drm_gem_shmem_put_pages(&bo->base);
>  err_unlock:
>  	dma_resv_unlock(obj->resv);
> +
> +	if (ret && pinned)
> +		drm_gem_shmem_unpin(&bo->base);

drm_gem_shmem_unpin() was supposed to be used only in conjunction with
drm_gem_shmem_pin(). I have a pending patch that enables the pin/unpin
refcounting needed by the drm-shmem shrinker; it will prohibit calling
unpin without a previous pin.

I'm wondering whether it would be okay to simply remove
drm_gem_shmem_put_pages() from the Panfrost code, letting the pages stay
allocated in the error case. They will be freed once the BO is destroyed.
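
For illustration, refcounted pin/unpin along those lines could look
roughly like the sketch below. This is not the pending patch: the
pin_count field does not exist in drm_gem_shmem_object today, and the
helper-internal get_pages()/put_pages() calls just stand in for whatever
the real patch ends up doing.

#include <linux/dma-resv.h>
#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_print.h>

static int shmem_pin_locked(struct drm_gem_shmem_object *shmem)
{
        int ret = 0;

        dma_resv_assert_held(shmem->base.resv);

        /* The first pin populates the pages, later pins only take a ref. */
        if (shmem->pin_count++ == 0) {
                ret = drm_gem_shmem_get_pages(shmem);
                if (ret)
                        shmem->pin_count--;
        }

        return ret;
}

static void shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
{
        dma_resv_assert_held(shmem->base.resv);

        /* An unpin without a matching pin would underflow the refcount. */
        if (drm_WARN_ON(shmem->base.dev, !shmem->pin_count))
                return;

        if (--shmem->pin_count == 0)
                drm_gem_shmem_put_pages(shmem);
}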
Boris Brezillon June 26, 2023, 1:38 p.m. UTC | #2
On Mon, 26 Jun 2023 16:20:53 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:

> On 6/26/23 15:02, Boris Brezillon wrote:
> > -err_pages:
> > -	drm_gem_shmem_put_pages(&bo->base);
> >  err_unlock:
> >  	dma_resv_unlock(obj->resv);
> > +
> > +	if (ret && pinned)
> > +		drm_gem_shmem_unpin(&bo->base);  
> 
> drm_gem_shmem_unpin() was supposed to be used only in conjunction with
> drm_gem_shmem_pin(). I have a pending patch that enables the pin/unpin
> refcounting needed by the drm-shmem shrinker; it will prohibit calling
> unpin without a previous pin.

That driver is a bit special in that, in the growable BO case
(AKA pin-on-demand), it replaces the drm_gem_shmem_pin()
implementation with a custom one (the logic in
panfrost_mmu_map_fault_addr()), but still relies on the
default implementation to release things. We increment
pages_use_count manually to make sure the drm_gem_shmem_unpin()
call stays balanced.
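
To make that concrete, the growable-BO setup in the fault handler boils
down to something like this (a simplified sketch; the wrapper function
name is invented, and locking and error paths are trimmed):

#include <linux/mm.h>
#include "panfrost_gem.h"

/* Sketch of the page setup done in panfrost_mmu_map_fault_addr(). */
static int panfrost_growable_bo_prepare_pages(struct panfrost_gem_object *bo)
{
        size_t npages = bo->base.base.size >> PAGE_SHIFT;

        if (bo->base.pages)
                return 0;       /* already set up by a previous fault */

        bo->base.pages = kvmalloc_array(npages, sizeof(struct page *),
                                        GFP_KERNEL | __GFP_ZERO);
        if (!bo->base.pages)
                return -ENOMEM;

        /*
         * Take the reference drm_gem_shmem_pin() would normally take, so
         * that the eventual drm_gem_shmem_unpin() (or put_pages()) call
         * stays balanced.
         */
        bo->base.pages_use_count = 1;

        return 0;
}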

> 
> I'm wondering whether it would be okay to simply remove
> drm_gem_shmem_put_pages() from the Panfrost code, letting the pages stay
> allocated in the error case. They will be freed once the BO is destroyed.
> 

I'm pretty sure the implementation will then complain about unbalanced
pin/unpin (or get_pages/put_pages) if we do that. I guess one option
would be to completely bypass drm_gem_shmem_[un]pin() for growable BOs
and manage the pages separately at the panfrost_gem_object level, but
the original idea was probably to re-use some of the fields/logic we
have in drm_gem_shmem_object and make partial pinning as close as
possible to regular pinning. Another option would be to teach the shmem
helpers about partial pinning, but I'm not sure we want to expose such a
feature.
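
For the sake of argument, "teaching shmem about partial pinning" would
mean exposing something along these lines (purely hypothetical, names
invented for illustration; nothing like this exists in the helpers):

int drm_gem_shmem_pin_range(struct drm_gem_shmem_object *shmem,
                            pgoff_t first_page, unsigned int npages);
void drm_gem_shmem_unpin_range(struct drm_gem_shmem_object *shmem,
                               pgoff_t first_page, unsigned int npages);

/* The fault handler could then pin just the faulting 2MiB chunk: */
ret = drm_gem_shmem_pin_range(&bo->base, page_offset, NUM_FAULT_PAGES);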
Boris Brezillon June 26, 2023, 3:43 p.m. UTC | #3
On Mon, 26 Jun 2023 16:20:53 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:

> On 6/26/23 15:02, Boris Brezillon wrote:
> > -err_pages:
> > -	drm_gem_shmem_put_pages(&bo->base);
> >  err_unlock:
> >  	dma_resv_unlock(obj->resv);
> > +
> > +	if (ret && pinned)
> > +		drm_gem_shmem_unpin(&bo->base);  
> 
> drm_gem_shmem_unpin() was supposed to be used only in conjunction with
> drm_gem_shmem_pin(). I have a pending patch that enables the pin/unpin
> refcounting needed by the drm-shmem shrinker; it will prohibit calling
> unpin without a previous pin.
>
> I'm wondering whether it would be okay to simply remove
> drm_gem_shmem_put_pages() from the Panfrost code, letting the pages stay
> allocated in the error case. They will be freed once the BO is destroyed.
> 

Okay, so after looking at your shmem-shrinker series, I confirm we need
to take a pin ref here (hard-pin), otherwise the buffer might be
evicted before the GPU is done, especially after you drop gpu_usecount
and use only pin_count to check whether a GEM object can be evicted or
not.
Dmitry Osipenko June 26, 2023, 4:06 p.m. UTC | #4
On 6/26/23 18:43, Boris Brezillon wrote:
> On Mon, 26 Jun 2023 16:20:53 +0300
> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> 
>> On 6/26/23 15:02, Boris Brezillon wrote:
>>> -err_pages:
>>> -	drm_gem_shmem_put_pages(&bo->base);
>>>  err_unlock:
>>>  	dma_resv_unlock(obj->resv);
>>> +
>>> +	if (ret && pinned)
>>> +		drm_gem_shmem_unpin(&bo->base);  
>>
>> drm_gem_shmem_unpin() was supposed to be used only in conjunction with
>> drm_gem_shmem_pin(). I have a pending patch that enables the pin/unpin
>> refcounting needed by the drm-shmem shrinker; it will prohibit calling
>> unpin without a previous pin.
>>
>> I'm wondering whether it would be okay to simply remove
>> drm_gem_shmem_put_pages() from the Panfrost code, letting the pages stay
>> allocated in the error case. They will be freed once the BO is destroyed.
>>
> 
> Okay, so after looking at your shmem-shrinker series, I confirm we need
> to take a pin ref here (hard-pin), otherwise the buffer might be
> evicted before the GPU is done, especially after you drop gpu_usecount
> and use only pin_count to check whether a GEM object can be evicted or
> not.

See drm_gem_evict() [1]; it checks whether the GEM object is busy,
preventing BO eviction while it is in use by the GPU. Note that in the
case of Panfrost, the shrinker isn't enabled for growable BOs.

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/drivers/gpu/drm/drm_gem.c?h=next-20230626#n1495
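
For reference, the check in question boils down to something like this
(paraphrased from drm_gem_evict() in the linux-next tree linked above,
so details may differ slightly):

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

int drm_gem_evict(struct drm_gem_object *obj)
{
        dma_resv_assert_held(obj->resv);

        /* Refuse to evict while anything (e.g. the GPU) still uses the BO. */
        if (!dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ))
                return -EBUSY;

        if (obj->funcs->evict)
                return obj->funcs->evict(obj);

        return 0;
}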
Boris Brezillon June 26, 2023, 4:21 p.m. UTC | #5
On Mon, 26 Jun 2023 19:06:55 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:

> On 6/26/23 18:43, Boris Brezillon wrote:
> > On Mon, 26 Jun 2023 16:20:53 +0300
> > Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> >   
> >> On 6/26/23 15:02, Boris Brezillon wrote:  
> >>> -err_pages:
> >>> -	drm_gem_shmem_put_pages(&bo->base);
> >>>  err_unlock:
> >>>  	dma_resv_unlock(obj->resv);
> >>> +
> >>> +	if (ret && pinned)
> >>> +		drm_gem_shmem_unpin(&bo->base);    
> >>
> >> drm_gem_shmem_unpin() was supposed to be used only in conjunction with
> >> drm_gem_shmem_pin(). I have a pending patch that enables the pin/unpin
> >> refcounting needed by the drm-shmem shrinker; it will prohibit calling
> >> unpin without a previous pin.
> >>
> >> I'm wondering whether it would be okay to simply remove
> >> drm_gem_shmem_put_pages() from the Panfrost code, letting the pages stay
> >> allocated in the error case. They will be freed once the BO is destroyed.
> >>  
> > 
> > Okay, so after looking at your shmem-shrinker series, I confirm we need
> > to take a pin ref here (hard-pin), otherwise the buffer might be
> > evicted before the GPU is done, especially after you drop gpu_usecount
> > and use only pin_count to check whether a GEM object can be evicted or
> > not.  
> 
> See drm_gem_evict() [1]; it checks whether the GEM object is busy,
> preventing BO eviction while it is in use by the GPU. Note that in the
> case of Panfrost, the shrinker isn't enabled for growable BOs.

Okay, we should be good then, sorry for the confusion.

Patch

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index c0123d09f699..0b12f03ef0be 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -447,6 +447,7 @@  static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	pgoff_t page_offset;
 	struct sg_table *sgt;
 	struct page **pages;
+	bool pinned = false;
 
 	bomapping = addr_to_mapping(pfdev, as, addr);
 	if (!bomapping)
@@ -488,12 +489,14 @@  static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		}
 		bo->base.pages = pages;
 		bo->base.pages_use_count = 1;
+		pinned = true;
 	} else {
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
 			/* Pages are already mapped, bail out. */
 			goto out;
 		}
+		pinned = true;
 	}
 
 	mapping = bo->base.base.filp->f_mapping;
@@ -504,7 +507,7 @@  static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		if (IS_ERR(pages[i])) {
 			ret = PTR_ERR(pages[i]);
 			pages[i] = NULL;
-			goto err_pages;
+			goto err_unlock;
 		}
 	}
 
@@ -512,7 +515,7 @@  static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
 					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
 	if (ret)
-		goto err_pages;
+		goto err_unlock;
 
 	ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
 	if (ret)
@@ -534,10 +537,12 @@  static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 
 err_map:
 	sg_free_table(sgt);
-err_pages:
-	drm_gem_shmem_put_pages(&bo->base);
 err_unlock:
 	dma_resv_unlock(obj->resv);
+
+	if (ret && pinned)
+		drm_gem_shmem_unpin(&bo->base);
+
 err_bo:
 	panfrost_gem_mapping_put(bomapping);
 	return ret;