From patchwork Fri Apr 4 09:26:27 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Boris Brezillon X-Patchwork-Id: 14038243 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A96C9C3601E for ; Fri, 4 Apr 2025 09:26:49 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 4E75E10EBA4; Fri, 4 Apr 2025 09:26:42 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=collabora.com header.i=@collabora.com header.b="e4ELU7Ih"; dkim-atps=neutral Received: from bali.collaboradmins.com (bali.collaboradmins.com [148.251.105.195]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0D0DE10EB9C; Fri, 4 Apr 2025 09:26:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1743758796; bh=xliOrKm5kMx4CT9Ta7jUUPECfyVE3BnGikvenamuELg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=e4ELU7IhvekWFYwXBi0n5vHf8P1cjl+Eo3jYH5OpRfqsFgZTw+/3JBVl9LKE+NjmA 7g9Vcj5o1sKEqw7UIlvF5IlgxEpJjkfzSEGBriqB8MNUmMi8v+V3NlypdZOrIaZvGs ZanhA7GeEV6pZtCK/jGLxfQMXTpXHau8BJtHJF7HoRoLLepyQPKvN2iGPMDEy0O6IX oH47FMarCnCPJcHUXwauYj1JFKAWh/adrsHeumya4/uupUQIwI2pfC11ZJwapRl1V9 ji+s2oFRzzCP0JQf0yIi+r8Fqz+PZ4tY7hpCqgQi3mj+YAOMPufg+4BIRX53m/kg7d f+cXqgGklvkUg== Received: from localhost.localdomain (unknown [IPv6:2a01:e0a:2c:6930:5cf4:84a1:2763:fe0d]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: bbrezillon) by bali.collaboradmins.com (Postfix) with ESMTPSA id 3A6CA17E0865; Fri, 4 Apr 2025 11:26:36 +0200 (CEST) From: Boris Brezillon To: Boris Brezillon , Steven Price , Liviu Dudau , =?utf-8?q?Adri=C3=A1n_Larumbe?= , lima@lists.freedesktop.org, Qiang Yu Cc: David Airlie , Simona Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , dri-devel@lists.freedesktop.org, Dmitry Osipenko , kernel@collabora.com Subject: [PATCH v3 1/8] drm/gem: Add helpers to request a range of pages on a GEM Date: Fri, 4 Apr 2025 11:26:27 +0200 Message-ID: <20250404092634.2968115-2-boris.brezillon@collabora.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250404092634.2968115-1-boris.brezillon@collabora.com> References: <20250404092634.2968115-1-boris.brezillon@collabora.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" From: Adrián Larumbe This new API provides a way to partially populate/unpopulate a GEM object, and also lets the caller specify the GFP flags to use for the allocation. This will help drivers that need to support sparse/alloc-on-demand GEM objects. 
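As an illustration, here is a minimal driver-side sketch of how these helpers could be used; the my_drv_* functions and the driver-owned xarray are hypothetical, only drm_gem_get_page_range()/drm_gem_put_page_range() come from this patch, and the GFP_KERNEL flags assume the caller is not on a dma-fence signalling path (a fault handler would pass non-blocking flags instead):

#include <drm/drm_gem.h>
#include <linux/pagemap.h>
#include <linux/sizes.h>
#include <linux/xarray.h>

/* Hypothetical driver code: @pages is a driver-owned xarray tracking the
 * populated pages of @obj, initialized with xa_init() at object creation.
 */
static int my_drv_populate_chunk(struct drm_gem_object *obj,
                                 struct xarray *pages, pgoff_t first_page)
{
        /* Populate 2MiB worth of pages starting at @first_page. */
        return drm_gem_get_page_range(obj, pages, first_page,
                                      SZ_2M >> PAGE_SHIFT,
                                      mapping_gfp_mask(obj->filp->f_mapping),
                                      GFP_KERNEL);
}

static void my_drv_release_chunk(struct drm_gem_object *obj,
                                 struct xarray *pages, pgoff_t first_page)
{
        /* Return the same range. The pages only become reclaimable once
         * every populated range has been returned.
         */
        drm_gem_put_page_range(obj, pages, first_page, SZ_2M >> PAGE_SHIFT,
                               true, true);
}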
Signed-off-by: Adrián Larumbe --- drivers/gpu/drm/drm_gem.c | 134 ++++++++++++++++++++++++++++++++++++++ include/drm/drm_gem.h | 14 ++++ 2 files changed, 148 insertions(+) diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 1e659d2660f7..769eaf9943d7 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -679,6 +679,140 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages, } EXPORT_SYMBOL(drm_gem_put_pages); +/** + * drm_gem_put_page_range - helper to return a range of pages backing a GEM + * @obj: Object this request applies to. + * @pa: Page array to unpopulate. + * @start: The first page to unpopulate. + * @npages: The number of pages to unpopulate. + * @dirty: Flag all returned pages dirty if true. + * @accessed: Flag all returned pages accessed if true. + * + * This is used to flag pages as unused. The pages themselves will stay + * unreclaimable until all pages are gone, because we can't partially + * flag a mapping unevictable. + * + * @npages is clamped to the object size, so start=0, npages=UINT_MAX + * effectively return all pages. + */ +void drm_gem_put_page_range(struct drm_gem_object *obj, struct xarray *pa, + pgoff_t start, unsigned int npages, + bool dirty, bool accessed) +{ + struct folio_batch fbatch; + unsigned long idx; + unsigned long end = start + npages - 1; + struct page *page; + + xa_for_each_range(pa, idx, page, start, end) + xa_clear_mark(pa, idx, DRM_GEM_PAGE_USED); + + /* If the mapping is still used, we bail out. */ + if (xa_marked(pa, DRM_GEM_PAGE_USED)) + return; + + mapping_clear_unevictable(file_inode(obj->filp)->i_mapping); + + folio_batch_init(&fbatch); + + xa_for_each(pa, idx, page) { + struct folio *folio = page_folio(page); + unsigned long folio_pg_idx = folio_page_idx(folio, page); + + xa_erase(pa, idx); + + if (dirty) + folio_mark_dirty(folio); + + if (accessed) + folio_mark_accessed(folio); + + /* Undo the reference we took when populating the table */ + if (!folio_batch_add(&fbatch, folio)) + drm_gem_check_release_batch(&fbatch); + + idx += folio_nr_pages(folio) - folio_pg_idx - 1; + } + + if (folio_batch_count(&fbatch)) + drm_gem_check_release_batch(&fbatch); +} +EXPORT_SYMBOL(drm_gem_put_page_range); + +/** + * drm_gem_get_page_range - helper to populate GEM a range of pages + * @obj: Object this request applies to. + * @pa: Page array to populate. + * @start: The first page to populate. + * @npages: The number of pages to populate. + * @page_gfp: GFP flags to use for page allocations. + * @other_gfp: GFP flags to use for other allocations, like extending the xarray. + * + * Partially or fully populate a page xarray backing a GEM object. @npages will + * be clamped to the object size, so passing start=0, npages=UINT_MAX fully + * populates the GEM object. + * + * There's no optimization to avoid repopulating already populated ranges, but + * this case is not rejected either. As soon as one page is populated, the entire + * mapping is flagged unevictable, meaning pages returned with + * drm_gem_put_page_range() won't be reclaimable until all pages have been + * returned. + * + * If something fails in the middle, pages that were acquired stay there. The + * caller should call drm_gem_put_page_range() explicitly to undo what was + * partially done. + * + * Return: 0 on success, a negative error code otherwise. 
+ */ +int drm_gem_get_page_range(struct drm_gem_object *obj, struct xarray *pa, + pgoff_t start, unsigned int npages, gfp_t page_gfp, + gfp_t other_gfp) +{ + struct address_space *mapping; + struct page *page; + unsigned long i; + int ret = 0; + + if (WARN_ON(!obj->filp)) + return -EINVAL; + + if (start + npages < start) + return -EINVAL; + + if (start + npages > obj->size >> PAGE_SHIFT) + return -EINVAL; + + if (npages == 0) + return 0; + + /* This is the shared memory object that backs the GEM resource */ + mapping = obj->filp->f_mapping; + + /* We already BUG_ON() for non-page-aligned sizes in + * drm_gem_object_init(), so we should never hit this unless + * driver author is doing something really wrong: + */ + WARN_ON((obj->size & (PAGE_SIZE - 1)) != 0); + + mapping_set_unevictable(mapping); + + for (i = 0; i < npages; i++) { + page = shmem_read_mapping_page_gfp(mapping, start + i, page_gfp); + if (IS_ERR(page)) + return PTR_ERR(page); + + /* Add the page into the xarray */ + ret = xa_err(xa_store(pa, start + i, page, other_gfp)); + if (ret) + return ret; + + xa_set_mark(pa, start + i, DRM_GEM_PAGE_USED); + } + + return 0; +} +EXPORT_SYMBOL(drm_gem_get_page_range); + static int objects_lookup(struct drm_file *filp, u32 *handle, int count, struct drm_gem_object **objs) { diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 9b71f7a9f3f8..9980c04355b6 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -39,11 +39,13 @@ #include #include #include +#include #include struct iosys_map; struct drm_gem_object; +struct xarray; /** * enum drm_gem_object_status - bitmask of object state for fdinfo reporting @@ -537,6 +539,18 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj); void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages, bool dirty, bool accessed); +/* drm_gem_{get,put}_page_range() use XA_MARK_1 to track which pages are + * currently used. Make sure you don't mess up with this mark. 
+ */ +#define DRM_GEM_PAGE_USED XA_MARK_1 + +int drm_gem_get_page_range(struct drm_gem_object *obj, struct xarray *pa, + pgoff_t start, unsigned int npages, + gfp_t page_gfp, gfp_t other_gfp); +void drm_gem_put_page_range(struct drm_gem_object *obj, struct xarray *pa, + pgoff_t start, unsigned int npages, + bool dirty, bool accessed); + void drm_gem_lock(struct drm_gem_object *obj); void drm_gem_unlock(struct drm_gem_object *obj); From patchwork Fri Apr 4 09:26:28 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boris Brezillon X-Patchwork-Id: 14038241 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0C44DC369A1 for ; Fri, 4 Apr 2025 09:26:47 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B2F4810EBA2; Fri, 4 Apr 2025 09:26:41 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=collabora.com header.i=@collabora.com header.b="PSrjU1Ap"; dkim-atps=neutral Received: from bali.collaboradmins.com (bali.collaboradmins.com [148.251.105.195]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0817810EB9B; Fri, 4 Apr 2025 09:26:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1743758797; bh=HkdaPvg07AvJ7+zigc1LJGMKs69amgEdZfPzMvxkjFg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PSrjU1ApDppcNJk40hABaEfhuFGq3MAMTrUvsrN/890tn1+li1dp7bVtoe2hGHzfp bqHAVPV3QhDI3xCsEGE2M1NmBhz+vxycP29cRUSnxPyRPOmmew5LAsaEvMS40lgdwr j0ChdWo9XSic+eoIEygpps6ooC4q9QMQgAr58BlUk8XZh18oTOlCvV6abgwoarxxGK sfE3NtaEy5kV+n+LeBUPz7MoQSLWSVhEfrL8kgBpE/aGwZHmoxT968HjviTpfZ2i5i r4An3FEMCDA2yvtyKKMn8ahjiwf4V8UZS4Kj6WNV2qEXMgQ7drlhcg7WqHJMb7bSYJ hdbUz5lP7gFRQ== Received: from localhost.localdomain (unknown [IPv6:2a01:e0a:2c:6930:5cf4:84a1:2763:fe0d]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: bbrezillon) by bali.collaboradmins.com (Postfix) with ESMTPSA id E1C6017E0B0B; Fri, 4 Apr 2025 11:26:36 +0200 (CEST) From: Boris Brezillon To: Boris Brezillon , Steven Price , Liviu Dudau , =?utf-8?q?Adri=C3=A1n_Larumbe?= , lima@lists.freedesktop.org, Qiang Yu Cc: David Airlie , Simona Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , dri-devel@lists.freedesktop.org, Dmitry Osipenko , kernel@collabora.com Subject: [PATCH v3 2/8] drm/gem-shmem: Support sparse backing Date: Fri, 4 Apr 2025 11:26:28 +0200 Message-ID: <20250404092634.2968115-3-boris.brezillon@collabora.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250404092634.2968115-1-boris.brezillon@collabora.com> References: <20250404092634.2968115-1-boris.brezillon@collabora.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" We have several drivers open-coding an alloc-on-fault behavior, each in a slightly different way. 
This is an attempt at generalizing the implementation and allowing for real non-blocking allocations in the fault handler, so we can finally stop violating one of the dma-fence signalling rules: nothing in the fence signalling path should block on memory allocation. Signed-off-by: Boris Brezillon --- drivers/gpu/drm/drm_gem_shmem_helper.c | 404 ++++++++++++++++++++++++- include/drm/drm_gem_shmem_helper.h | 285 ++++++++++++++++- 2 files changed, 680 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 2d924d547a51..13ab497bd9e0 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -176,6 +176,16 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) if (shmem->pages) drm_gem_shmem_put_pages_locked(shmem); + /* Auto-cleanup of sparse resources if it's not been done before. + * We shouldn't rely on that, but the implicit ref taken by + * the sgt also pulls resources from the sparse arrays when + * sparse GEM is used as a regular GEM. Hopefully this all goes + * away once we've patched drivers to explicitly request/release + * pages instead of relying on the implicit ref taken by the sgt. + */ + if (shmem->sparse) + drm_gem_shmem_sparse_finish(shmem); + drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count)); drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count)); @@ -191,17 +201,51 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct page **pages; + int ret; dma_resv_assert_held(shmem->base.resv); if (refcount_inc_not_zero(&shmem->pages_use_count)) return 0; - pages = drm_gem_get_pages(obj); - if (IS_ERR(pages)) { - drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n", - PTR_ERR(pages)); - return PTR_ERR(pages); + if (shmem->sparse) { + /* Request pages for the entire object. */ + pages = kvmalloc_array(obj->size >> PAGE_SHIFT, + sizeof(*pages), GFP_KERNEL); + if (!pages) { + ret = -ENOMEM; + } else { + drm_gem_shmem_sparse_get_locked(shmem); + ret = drm_gem_get_page_range(obj, &shmem->sparse->pages, 0, UINT_MAX, + mapping_gfp_mask(obj->filp->f_mapping), + GFP_KERNEL); + if (!ret) { + unsigned int npages = obj->size >> PAGE_SHIFT; + unsigned int copied; + + copied = xa_extract(&shmem->sparse->pages, + (void **)pages, 0, UINT_MAX, + npages, XA_PRESENT); + if (copied != npages) + ret = -EINVAL; + } + + if (ret) { + drm_gem_shmem_sparse_put_locked(shmem); + kvfree(pages); + } + } + } else { + pages = drm_gem_get_pages(obj); + if (IS_ERR(pages)) + ret = PTR_ERR(pages); + else + ret = 0; + } + + if (ret) { + drm_dbg_kms(obj->dev, "Failed to get pages (%d)\n", ret); + return ret; } /* @@ -233,7 +277,17 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) dma_resv_assert_held(shmem->base.resv); - if (refcount_dec_and_test(&shmem->pages_use_count)) { + if (!refcount_dec_and_test(&shmem->pages_use_count)) + return; + + if (shmem->sparse) { + /* drm_gem_shmem_sparse_finish() will return pages to WB mode. + * all we have to do here is free the array we allocated in + * drm_gem_shmem_get_pages_locked(). 
+ */ + drm_gem_shmem_sparse_put_locked(shmem); + kvfree(shmem->pages); + } else { #ifdef CONFIG_X86 if (shmem->map_wc) set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); @@ -242,8 +296,9 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) drm_gem_put_pages(obj, shmem->pages, shmem->pages_mark_dirty_on_put, shmem->pages_mark_accessed_on_put); - shmem->pages = NULL; } + + shmem->pages = NULL; } EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); @@ -258,9 +313,14 @@ int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) if (refcount_inc_not_zero(&shmem->pages_pin_count)) return 0; + if (shmem->sparse) + drm_gem_shmem_sparse_pin_locked(shmem); + ret = drm_gem_shmem_get_pages_locked(shmem); if (!ret) refcount_set(&shmem->pages_pin_count, 1); + else if (shmem->sparse) + drm_gem_shmem_sparse_unpin_locked(shmem); return ret; } @@ -270,8 +330,12 @@ void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem) { dma_resv_assert_held(shmem->base.resv); - if (refcount_dec_and_test(&shmem->pages_pin_count)) + if (refcount_dec_and_test(&shmem->pages_pin_count)) { + if (shmem->sparse) + drm_gem_shmem_sparse_unpin_locked(shmem); + drm_gem_shmem_put_pages_locked(shmem); + } } EXPORT_SYMBOL(drm_gem_shmem_unpin_locked); @@ -327,6 +391,61 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin); +/** + * drm_gem_shmem_sparse_vmap_range - helper to vmap() a range of pages in a sparse object + * + * Returns a kernel mapping of a portion of a sparse GEM object. In case of + * success the returned value must be unmapped with vunmap(). + * + * This mapping is not shared with the any mapping returned by drm_gem_vmap() and + * must be explicitly unmappped. + * + * Return: a valid pointer in case of success, and ERR_PTR() otherwise. + */ +void *drm_gem_shmem_sparse_vmap_range(struct drm_gem_shmem_object *shmem, + pgoff_t first_page, unsigned int npages) +{ + pgprot_t prot = PAGE_KERNEL; + unsigned int copied; + struct page **pages; + void *vaddr; + + if (!shmem->sparse) + return ERR_PTR(-EINVAL); + + if (!npages) + return NULL; + + if (shmem->map_wc) + prot = pgprot_writecombine(prot); + + pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL); + if (!pages) + return ERR_PTR(-ENOMEM); + + copied = xa_extract(&shmem->sparse->pages, (void **)pages, + first_page, first_page + npages - 1, + npages, XA_PRESENT); + if (copied != npages) { + vaddr = ERR_PTR(-EINVAL); + goto out_free_pages; + } + + vaddr = vmap(pages, npages, VM_MAP, prot); + if (!vaddr) { + vaddr = ERR_PTR(-ENOMEM); + goto out_free_pages; + } + +out_free_pages: + /* Once the thing is mapped, we can get rid of the pages array, + * since pages are retained in the xarray anyway. + */ + kvfree(pages); + return vaddr; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_sparse_vmap_range); + /* * drm_gem_shmem_vmap_locked - Create a virtual mapping for a shmem GEM object * @shmem: shmem GEM object @@ -673,6 +792,275 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, } EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info); +struct drm_gem_shmem_sparse_sgt { + struct sg_table sgt; + u32 gem_pgoffset; + u32 npages; +}; + +/** + * drm_gem_shmem_sparse_get_sgt - helper to get an SGT for a given GEM offset + * @shmem: object to populate. + * @gem_pgoffset: page to get an sgt for. + * @sgt_pgoffset: returns the page offset to start at in the returned sgt. + * @sgt_remaining_pages: returns the number of pages remaining in the sgt after *sgt_pgoffset. 
+ * + * Used to retrieve an sgt allocated by drm_gem_shmem_sparse_populate_range() so the + * driver can map a section of a sparse GEM. + * + * Return: a valid sg_table pointer in case of success, and ERR_PTR() otherwise. + */ +struct sg_table *drm_gem_shmem_sparse_get_sgt(struct drm_gem_shmem_object *shmem, + pgoff_t gem_pgoffset, + pgoff_t *sgt_pgoffset, + unsigned int *sgt_remaining_pages) +{ + struct drm_gem_shmem_sparse_sgt *sgt_entry; + unsigned long sgt_idx; + u32 granularity_shift; + + if (!shmem->sparse) + return ERR_PTR(-EINVAL); + + if (gem_pgoffset & (shmem->sparse->granularity - 1)) + return ERR_PTR(-EINVAL); + + granularity_shift = ilog2(shmem->sparse->granularity); + sgt_idx = gem_pgoffset >> granularity_shift; + sgt_entry = xa_load(&shmem->sparse->sgts, sgt_idx); + if (xa_err(sgt_entry)) + return ERR_PTR(xa_err(sgt_entry)); + else if (!sgt_entry) + return ERR_PTR(-ENOENT); + + *sgt_pgoffset = gem_pgoffset - sgt_entry->gem_pgoffset; + *sgt_remaining_pages = sgt_entry->npages - *sgt_pgoffset; + return &sgt_entry->sgt; +} +EXPORT_SYMBOL(drm_gem_shmem_sparse_get_sgt); + +static int drm_gem_shmem_sparse_add_sgt(struct drm_gem_shmem_object *shmem, + struct page **pages, unsigned int npages, + pgoff_t gem_pgoffset, gfp_t gfp) +{ + u32 granularity_shift = ilog2(shmem->sparse->granularity); + unsigned long first_sgt = gem_pgoffset >> granularity_shift; + unsigned long last_sgt = (gem_pgoffset + npages - 1) >> granularity_shift; + struct drm_gem_shmem_sparse_sgt *sgt_entry; + size_t max_segment = 0; + unsigned int copied; + int ret; + + copied = xa_extract(&shmem->sparse->pages, (void **)pages, + gem_pgoffset, gem_pgoffset + npages - 1, + npages, XA_PRESENT); + + if (copied != npages) + return -EINVAL; + +#ifdef CONFIG_X86 + if (shmem->map_wc) + set_pages_array_wc(pages, npages); +#endif + + sgt_entry = kzalloc(sizeof(*sgt_entry), gfp); + if (!sgt_entry) + return -ENOMEM; + + sgt_entry->npages = npages; + sgt_entry->gem_pgoffset = gem_pgoffset; + + if (shmem->base.dev->dev) + max_segment = dma_max_mapping_size(shmem->base.dev->dev); + + if (!max_segment) + max_segment = UINT_MAX; + + ret = sg_alloc_table_from_pages_segment(&sgt_entry->sgt, pages, npages, 0, + npages << PAGE_SHIFT, max_segment, + gfp); + if (ret) + goto err_free_sgt; + + ret = dma_map_sgtable(shmem->base.dev->dev, &sgt_entry->sgt, DMA_BIDIRECTIONAL, 0); + if (ret) + goto err_free_sgt; + + ret = xa_err(xa_store_range(&shmem->sparse->sgts, first_sgt, last_sgt, sgt_entry, gfp)); + if (ret) + goto err_unmap_sgt; + + return 0; + +err_unmap_sgt: + dma_unmap_sgtable(shmem->base.dev->dev, &sgt_entry->sgt, DMA_BIDIRECTIONAL, 0); + +err_free_sgt: + sg_free_table(&sgt_entry->sgt); + kfree(sgt_entry); + return ret; +} + +/** + * drm_gem_shmem_sparse_populate_range - populate GEM object range + * @shmem: object to populate. + * @offset: first page to populate. + * @npages: Number of pages to populate. + * @page_gfp: GFP flags to use for page allocation. + * @other_gfp: GFP flags to use for other allocations. + * + * This function takes care of both the page allocation, and the sg_table + * chunks preparation. + * + * Return: 0 on success, a negative error code otherwise. 
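+ *
+ * A hypothetical usage sketch (driver fault path; non-blocking GFP flags so
+ * the dma-fence signalling path never waits on reclaim, and fault_pgoffset
+ * is a made-up, granularity-aligned page offset):
+ *
+ *   page_gfp = mapping_gfp_constraint(shmem->base.filp->f_mapping,
+ *                                     ~__GFP_RECLAIM) |
+ *              __GFP_NORETRY | __GFP_NOWARN;
+ *   err = drm_gem_shmem_sparse_populate_range(shmem, fault_pgoffset,
+ *                                             shmem->sparse->granularity,
+ *                                             page_gfp,
+ *                                             __GFP_NORETRY | __GFP_NOWARN);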
+ */ +int drm_gem_shmem_sparse_populate_range(struct drm_gem_shmem_object *shmem, + pgoff_t offset, unsigned int npages, + gfp_t page_gfp, gfp_t other_gfp) +{ + unsigned long first_sgt, last_sgt; + u32 granularity_shift; + struct page **pages; + int ret; + + if (!shmem->sparse || !is_power_of_2(shmem->sparse->granularity)) + return -EINVAL; + + /* The range must be aligned on the granularity. */ + if ((offset | npages) & (shmem->sparse->granularity - 1)) + return -EINVAL; + + /* Bail out early if there's nothing to populate. */ + if (!npages) + return 0; + + ret = drm_gem_get_page_range(&shmem->base, &shmem->sparse->pages, + offset, npages, page_gfp, other_gfp); + if (ret) + return ret; + + pages = kmalloc_array(npages, sizeof(*pages), other_gfp); + if (!pages) + return -ENOMEM; + + granularity_shift = ilog2(shmem->sparse->granularity); + first_sgt = offset >> granularity_shift; + last_sgt = (offset + npages - 1) >> granularity_shift; + + for (unsigned long sgt_idx = first_sgt; sgt_idx <= last_sgt; ) { + struct drm_gem_shmem_sparse_sgt *sgt_entry = NULL; + unsigned long next_sgt_idx = sgt_idx; + + sgt_entry = xa_load(&shmem->sparse->sgts, sgt_idx); + if (sgt_entry) { + /* Skip already populated sections. */ + sgt_idx += sgt_entry->npages >> granularity_shift; + continue; + } + + if (!xa_find_after(&shmem->sparse->sgts, &next_sgt_idx, last_sgt, XA_PRESENT)) + next_sgt_idx = last_sgt + 1; + + ret = drm_gem_shmem_sparse_add_sgt(shmem, pages, + (next_sgt_idx - sgt_idx) << granularity_shift, + sgt_idx << granularity_shift, + other_gfp); + if (ret) + break; + + sgt_idx = next_sgt_idx; + } + + kfree(pages); + return ret; +} +EXPORT_SYMBOL(drm_gem_shmem_sparse_populate_range); + +/** + * drm_gem_shmem_sparse_init - Initialize the sparse backing + * + * Must be called just after drm_gem_shmem_create[with_mnt]() when sparse + * allocation is wanted. + */ +int drm_gem_shmem_sparse_init(struct drm_gem_shmem_object *shmem, + struct drm_gem_shmem_sparse_backing *sparse, + unsigned int granularity) +{ + if (!is_power_of_2(granularity)) + return -EINVAL; + + sparse->granularity = granularity; + xa_init_flags(&sparse->pages, 0); + xa_init_flags(&sparse->sgts, 0); + shmem->sparse = sparse; + return 0; +} +EXPORT_SYMBOL(drm_gem_shmem_sparse_init); + +#ifdef CONFIG_X86 +static int drm_gem_shmem_set_pages_wb(struct drm_gem_shmem_object *shmem, + struct page **pages, pgoff_t offset, + unsigned int count, void *data) +{ + set_pages_array_wb(pages, count); + return 0; +} +#endif + +/** + * drm_gem_shmem_sparse_finish - Release sparse backing resources + * + * Must be called just before drm_gem_shmem_free(). + */ +void +drm_gem_shmem_sparse_finish(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_shmem_sparse_sgt *sgt_entry; + unsigned long sgt_idx; + + if (!shmem->sparse) + return; + + /* drm_gem_shmem_object::pages_use_count should be zero and + * drm_gem_shmem_object::pages NULL when this function is called, + * otherwise the pages array would contain pages that might be + * reclaimed after that point. 
+ */ + drm_WARN_ON(shmem->base.dev, + refcount_read(&shmem->pages_use_count)); + + xa_for_each(&shmem->sparse->sgts, sgt_idx, sgt_entry) { + xa_erase(&shmem->sparse->sgts, sgt_idx); + dma_unmap_sgtable(shmem->base.dev->dev, &sgt_entry->sgt, DMA_BIDIRECTIONAL, 0); + sg_free_table(&sgt_entry->sgt); + kfree(sgt_entry); + } + xa_destroy(&shmem->sparse->sgts); + +#ifdef CONFIG_X86 + if (shmem->map_wc) { + unsigned int npages = shmem->base.size >> PAGE_SHIFT; + struct page *pages[64]; + pgoff_t pos = 0; + int ret; + + ret = drm_gem_shmem_sparse_iterate_pages_in_batch(shmem, &pos, npages, + drm_gem_shmem_set_pages_wb, + pages, ARRAY_SIZE(pages), + NULL); + drm_WARN_ON(shmem->base.dev, ret); + } +#endif + + drm_gem_put_page_range(&shmem->base, &shmem->sparse->pages, 0, + UINT_MAX, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); + xa_destroy(&shmem->sparse->pages); + shmem->sparse = NULL; +} +EXPORT_SYMBOL(drm_gem_shmem_sparse_finish); + /** * drm_gem_shmem_get_sg_table - Provide a scatter/gather table of pinned * pages for a shmem GEM object diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index b4f993da3cae..d8d6456d2171 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -6,6 +6,7 @@ #include #include #include +#include #include #include @@ -17,6 +18,56 @@ struct drm_mode_create_dumb; struct drm_printer; struct sg_table; +/** + * struct drm_gem_shmem_sparse_backing - Structure used to manage sparse backing + * + * Locking is deferred to the user, which is fundamental if we want to support + * allocation-on-fault, where blocking on allocation or taking the GEM resv lock + * is not allowed. + */ +struct drm_gem_shmem_sparse_backing { + /** + * @pages: Page table + */ + struct xarray pages; + + /** + * @sgts: Array of sgt tables. + */ + struct xarray sgts; + + /** + * @granularity: Granularity of the page population in number of pages. + * + * Must be a power-of-two. + */ + unsigned int granularity; + + /** + * @counters_lock: Lock used to protect {use,pin}_count when they + * go out/return to zero. + */ + spinlock_t counters_lock; + + /** + * @use_count: Use count on the page/sgt tables. + * + * Count the number of users of any portion of the GEM object. + * Pages can be reclaimed if/once the GEM is idle (no active fences in the GEM resv), + * but should otherwise be considered used. + */ + refcount_t use_count; + + /** + * @pin_count: Pin count on the page/sgt tables. + * + * Count the number of users of any portion of the GEM object requiring memory + * to be resident. The pages are considered unreclaimable until this counter + * drops to zero. + */ + refcount_t pin_count; +}; + /** * struct drm_gem_shmem_object - GEM object backed by shmem */ @@ -84,6 +135,13 @@ struct drm_gem_shmem_object { */ refcount_t vmap_use_count; + /** + * @sparse: object used to manage sparse backing. + * + * NULL if sparse backing is disabled. + */ + struct drm_gem_shmem_sparse_backing *sparse; + /** * @pages_mark_dirty_on_put: * @@ -113,9 +171,181 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de struct vfsmount *gemfs); void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem); +/** + * drm_gem_shmem_sparse_get_locked - Take a use ref on the sparse shmem object + * @shmem: object to take a ref on. + * + * Like drm_gem_shmem_sparse_get(), but with the resv lock held by the caller. 
+ */ +static inline void drm_gem_shmem_sparse_get_locked(struct drm_gem_shmem_object *shmem) +{ + dma_resv_assert_held(shmem->base.resv); + + if (!shmem->sparse) + return; + + if (!refcount_inc_not_zero(&shmem->sparse->use_count)) + refcount_set(&shmem->sparse->use_count, 1); +} + +/** + * drm_gem_shmem_sparse_get - Take a use ref on the sparse shmem object + * @shmem: object to take a ref on. + * + * Note that this doesn't populate the pages array, it just flags the + * array as being used. The sparse array can be populated with + * drm_gem_shmem_sparse_populate_range() after drm_gem_shmem_sparse_get() + * or drm_gem_shmem_sparse_pin() is called. + */ +static inline void drm_gem_shmem_sparse_get(struct drm_gem_shmem_object *shmem) +{ + if (!shmem->sparse) + return; + + if (refcount_inc_not_zero(&shmem->sparse->use_count)) + return; + + dma_resv_lock(shmem->base.resv, NULL); + drm_gem_shmem_sparse_get_locked(shmem); + dma_resv_unlock(shmem->base.resv); +} + +/** + * drm_gem_shmem_sparse_put_locked - Return a use ref on the sparse shmem object + * @shmem: object to take a ref on. + * + * Like drm_gem_shmem_sparse_put(), but with the resv lock held by the caller. + */ +static inline void drm_gem_shmem_sparse_put_locked(struct drm_gem_shmem_object *shmem) +{ + dma_resv_assert_held(shmem->base.resv); + + if (!shmem->sparse) + return; + + if (!refcount_dec_not_one(&shmem->sparse->use_count)) + refcount_set(&shmem->sparse->use_count, 0); +} + +/** + * drm_gem_shmem_sparse_put - Return a use ref on the sparse shmem object + * @shmem: object to return a ref on. + * + * Note that this doesn't release the pages in the page array, it just flags + * the array as being unused. The sparse array will be shrunk when/if the + * object is purged. + */ +static inline void drm_gem_shmem_sparse_put(struct drm_gem_shmem_object *shmem) +{ + if (!shmem->sparse) + return; + + if (refcount_dec_not_one(&shmem->sparse->use_count)) + return; + + dma_resv_lock(shmem->base.resv, NULL); + drm_gem_shmem_sparse_put_locked(shmem); + dma_resv_unlock(shmem->base.resv); +} + +/** + * drm_gem_shmem_sparse_pin_locked - Take a pin ref on the sparse shmem object + * @shmem: object to take a ref on. + * + * Like drm_gem_shmem_sparse_pin() but with the resv lock held by the called. + */ +static inline void drm_gem_shmem_sparse_pin_locked(struct drm_gem_shmem_object *shmem) +{ + dma_resv_assert_held(shmem->base.resv); + + if (!shmem->sparse) + return; + + if (!refcount_inc_not_zero(&shmem->sparse->pin_count)) { + drm_gem_shmem_sparse_get_locked(shmem); + refcount_set(&shmem->sparse->pin_count, 1); + } +} + +/** + * drm_gem_shmem_sparse_pin - Take a pin ref on the sparse shmem object + * @shmem: object to take a ref on. + * + * This also takes a use ref along the way. Like with + * drm_gem_shmem_sparse_get(), this function doesn't populate the sparse + * arrays, it just flags the existing resources and all future resources + * populated with drm_gem_shmem_sparse_populate_range() as pinned. + */ +static inline void drm_gem_shmem_sparse_pin(struct drm_gem_shmem_object *shmem) +{ + if (!shmem->sparse) + return; + + if (refcount_inc_not_zero(&shmem->sparse->pin_count)) + return; + + dma_resv_lock(shmem->base.resv, NULL); + drm_gem_shmem_sparse_pin_locked(shmem); + dma_resv_unlock(shmem->base.resv); +} + +/** + * drm_gem_shmem_sparse_unpin_locked - Return a pin ref on the sparse shmem object + * @shmem: object to take a ref on. + * + * Like drm_gem_shmem_sparse_unpin() but with the resv lock held by the called. 
+ */ +static inline void drm_gem_shmem_sparse_unpin_locked(struct drm_gem_shmem_object *shmem) +{ + dma_resv_assert_held(shmem->base.resv); + + if (!shmem->sparse) + return; + + if (!refcount_dec_not_one(&shmem->sparse->pin_count)) { + refcount_set(&shmem->sparse->pin_count, 0); + drm_gem_shmem_sparse_put_locked(shmem); + } +} + +/** + * drm_gem_shmem_sparse_unpin - Return a pin ref on the sparse shmem object + * @shmem: object to take a ref on. + * + * This also returns a use ref along the way. Like with + * drm_gem_shmem_sparse_put(), this function doesn't release the resources, + * this will be done at purge/reclaim time. + */ +static inline void drm_gem_shmem_sparse_unpin(struct drm_gem_shmem_object *shmem) +{ + if (!shmem->sparse) + return; + + if (refcount_dec_not_one(&shmem->sparse->pin_count)) + return; + + dma_resv_lock(shmem->base.resv, NULL); + drm_gem_shmem_sparse_unpin_locked(shmem); + dma_resv_unlock(shmem->base.resv); +} + +struct sg_table *drm_gem_shmem_sparse_get_sgt(struct drm_gem_shmem_object *shmem, + pgoff_t gem_pgoffset, + pgoff_t *sgt_pgoffset, + unsigned int *sgt_remaining_pages); +int drm_gem_shmem_sparse_populate_range(struct drm_gem_shmem_object *shmem, + pgoff_t offset, unsigned int npages, + gfp_t page_gfp, gfp_t other_gfp); +int drm_gem_shmem_sparse_init(struct drm_gem_shmem_object *shmem, + struct drm_gem_shmem_sparse_backing *sparse, + unsigned int granularity); +void drm_gem_shmem_sparse_finish(struct drm_gem_shmem_object *shmem); + void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem); int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem); +void *drm_gem_shmem_sparse_vmap_range(struct drm_gem_shmem_object *shmem, + pgoff_t first_page, unsigned int npages); int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct iosys_map *map); void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, @@ -129,7 +359,7 @@ int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv); static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) { - return (shmem->madv > 0) && + return (shmem->madv > 0) && !shmem->sparse && !refcount_read(&shmem->pages_pin_count) && shmem->sgt && !shmem->base.dma_buf && !drm_gem_is_imported(&shmem->base); } @@ -142,6 +372,59 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem) void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, struct drm_printer *p, unsigned int indent); +/** + * drm_gem_shmem_iterate_pages_in_batch - helper to iterate over GEM pages in batch + * @shmem: GEM object to iterate pages on. + * @pos: Position to start the iteration from. Updated to point to the final position, + * so one can know where things failed when en error is returned. + * @npages: Number of pages to iterate. + * @cb: Function called for each batch of pages. + * @tmp_pages: Array temporary array if page pointers to copy the xarray portion into. + * @tmp_page_count: Size of the @tmp_pages array. + * @data: Extra data passed to the callback. + * + * Some helper functions require a plain array of pages, which means we occasionally + * have to turn our xarray into an array. The buffer object can cover a significant + * amount of pages, and, in some occasions, we have the ability to iteratively pass + * pages to the helper function, meaning we don't have to copy the entire array in + * one go and can instead process things in batches. 
This function automates this + * batching. + */ +static inline int +drm_gem_shmem_sparse_iterate_pages_in_batch(struct drm_gem_shmem_object *shmem, + pgoff_t *pos, unsigned int npages, + int (*cb)(struct drm_gem_shmem_object *shmem, + struct page **pages, pgoff_t offset, + unsigned int count, void *data), + struct page **tmp_pages, + unsigned int tmp_page_count, + void *data) +{ + if (!shmem->sparse) + return -EINVAL; + + for (unsigned int i = 0; i < npages; i += tmp_page_count) { + unsigned int batch_size = MIN(tmp_page_count, npages - i); + unsigned int copied; + int ret; + + /* We expect all pages in the iterated range to be populated. */ + copied = xa_extract(&shmem->sparse->pages, (void **)tmp_pages, + *pos, *pos + batch_size - 1, + batch_size, XA_PRESENT); + if (copied != batch_size) + return -EINVAL; + + ret = cb(shmem, tmp_pages, *pos, batch_size, data); + if (ret) + return ret; + + *pos += batch_size; + } + + return 0; +} + extern const struct vm_operations_struct drm_gem_shmem_vm_ops; /* From patchwork Fri Apr 4 09:26:29 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boris Brezillon X-Patchwork-Id: 14038239 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5FA75C3601E for ; Fri, 4 Apr 2025 09:26:41 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id ACFAB10EB9A; Fri, 4 Apr 2025 09:26:40 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=collabora.com header.i=@collabora.com header.b="ah9t7SAT"; dkim-atps=neutral Received: from bali.collaboradmins.com (bali.collaboradmins.com [148.251.105.195]) by gabe.freedesktop.org (Postfix) with ESMTPS id 03C9910EB9A; Fri, 4 Apr 2025 09:26:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1743758798; bh=uhqIj7F/iaGotn7HPjJhMgANzaswGoO7rAy/WQfgNaA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ah9t7SATpG8xhP2VEl4qUt9ZEsWT25gw923fDsKnHrmptMXc9RUlURCdt9Qz9k2f3 D6bQpyHb8/j8e9OZesVFOaW6nW/OthPBEIKLe0TbFMHWcWwwEq2Zk//BGdUtrXoN6V p8NOWE/ZSkew7BDNVUoid5lXGSKLW900q/r5LFjj2tD+5E5yKKYP4qsCfCTbDgHLYm kG6FRpm0lnLspShFb1gcfUaqhohz5OX/OneonzfhtSB/5OM/NEKiEtZ5Ig1ozWJMzs Z8+7C+Q9ZPvpoe5YaBlq7r7N3CFMcdxjN6a9y2unP+uF5LtbPNHrGnED0HynUDxcxR MkQPj937EnNnw== Received: from localhost.localdomain (unknown [IPv6:2a01:e0a:2c:6930:5cf4:84a1:2763:fe0d]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: bbrezillon) by bali.collaboradmins.com (Postfix) with ESMTPSA id 94D5317E1017; Fri, 4 Apr 2025 11:26:37 +0200 (CEST) From: Boris Brezillon To: Boris Brezillon , Steven Price , Liviu Dudau , =?utf-8?q?Adri=C3=A1n_Larumbe?= , lima@lists.freedesktop.org, Qiang Yu Cc: David Airlie , Simona Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , dri-devel@lists.freedesktop.org, Dmitry Osipenko , kernel@collabora.com Subject: [PATCH v3 3/8] drm/panfrost: Switch to sparse gem shmem to implement our alloc-on-fault Date: Fri, 4 Apr 2025 11:26:29 +0200 Message-ID: 
<20250404092634.2968115-4-boris.brezillon@collabora.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250404092634.2968115-1-boris.brezillon@collabora.com> References: <20250404092634.2968115-1-boris.brezillon@collabora.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Use the new gem_shmem helpers providing sparse GEM backing so we can simplify the code, and finally have a non-blocking allocation scheme in the fault handler path. Signed-off-by: Boris Brezillon --- drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +- drivers/gpu/drm/panfrost/panfrost_gem.c | 37 ++++++---- drivers/gpu/drm/panfrost/panfrost_gem.h | 8 +- drivers/gpu/drm/panfrost/panfrost_mmu.c | 98 +++++++------------------ drivers/gpu/drm/panfrost/panfrost_mmu.h | 2 + 5 files changed, 56 insertions(+), 91 deletions(-) diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index b87f83e94eda..93831d18da90 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -389,7 +389,7 @@ static int panfrost_ioctl_mmap_bo(struct drm_device *dev, void *data, } /* Don't allow mmapping of heap objects as pages are not pinned. */ - if (to_panfrost_bo(gem_obj)->is_heap) { + if (panfrost_gem_is_heap(to_panfrost_bo(gem_obj))) { ret = -EINVAL; goto out; } diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c index 8e0ff3efede7..08fbe47ac146 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -35,18 +35,9 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj) */ WARN_ON_ONCE(!list_empty(&bo->mappings.list)); - if (bo->sgts) { - int i; - int n_sgt = bo->base.base.size / SZ_2M; - - for (i = 0; i < n_sgt; i++) { - if (bo->sgts[i].sgl) { - dma_unmap_sgtable(pfdev->dev, &bo->sgts[i], - DMA_BIDIRECTIONAL, 0); - sg_free_table(&bo->sgts[i]); - } - } - kvfree(bo->sgts); + if (panfrost_gem_is_heap(bo)) { + drm_gem_shmem_sparse_unpin(&bo->base); + drm_gem_shmem_sparse_finish(&bo->base); } drm_gem_shmem_free(&bo->base); @@ -149,7 +140,7 @@ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv) if (ret) goto err; - if (!bo->is_heap) { + if (!panfrost_gem_is_heap(bo)) { ret = panfrost_mmu_map(mapping); if (ret) goto err; @@ -189,7 +180,7 @@ static int panfrost_gem_pin(struct drm_gem_object *obj) { struct panfrost_gem_object *bo = to_panfrost_bo(obj); - if (bo->is_heap) + if (panfrost_gem_is_heap(bo)) return -EINVAL; return drm_gem_shmem_pin_locked(&bo->base); @@ -213,7 +204,7 @@ static size_t panfrost_gem_rss(struct drm_gem_object *obj) { struct panfrost_gem_object *bo = to_panfrost_bo(obj); - if (bo->is_heap) { + if (panfrost_gem_is_heap(bo)) { return bo->heap_rss_size; } else if (bo->base.pages) { WARN_ON(bo->heap_rss_size); @@ -280,7 +271,21 @@ panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags) bo = to_panfrost_bo(&shmem->base); bo->noexec = !!(flags & PANFROST_BO_NOEXEC); - bo->is_heap = !!(flags & PANFROST_BO_HEAP); + + if (flags & PANFROST_BO_HEAP) { + int ret; + + ret = drm_gem_shmem_sparse_init(shmem, &bo->sparse, NUM_FAULT_PAGES); + if (ret) { + drm_gem_shmem_free(shmem); + return ERR_PTR(ret); + } + + /* Flag all pages of the sparse GEM as pinned as soon + * as they are 
populated, so they can't be reclaimed. + */ + drm_gem_shmem_sparse_pin(shmem); + } return bo; } diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h index 7516b7ecf7fe..566532ed4790 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.h +++ b/drivers/gpu/drm/panfrost/panfrost_gem.h @@ -11,7 +11,7 @@ struct panfrost_mmu; struct panfrost_gem_object { struct drm_gem_shmem_object base; - struct sg_table *sgts; + struct drm_gem_shmem_sparse_backing sparse; /* * Use a list for now. If searching a mapping ever becomes the @@ -42,9 +42,13 @@ struct panfrost_gem_object { size_t heap_rss_size; bool noexec :1; - bool is_heap :1; }; +static inline bool panfrost_gem_is_heap(struct panfrost_gem_object *bo) +{ + return bo->base.sparse != NULL; +} + struct panfrost_gem_mapping { struct list_head node; struct kref refcount; diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index f6b91c052cfb..a95eb1882a30 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -469,9 +469,9 @@ void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping) size_t unmapped_page, pgcount; size_t pgsize = get_pgsize(iova, len - unmapped_len, &pgcount); - if (bo->is_heap) + if (panfrost_gem_is_heap(bo)) pgcount = 1; - if (!bo->is_heap || ops->iova_to_phys(ops, iova)) { + if (!panfrost_gem_is_heap(bo) || ops->iova_to_phys(ops, iova)) { unmapped_page = ops->unmap_pages(ops, iova, pgsize, pgcount, NULL); WARN_ON(unmapped_page != pgsize * pgcount); } @@ -539,26 +539,24 @@ addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr) return mapping; } -#define NUM_FAULT_PAGES (SZ_2M / PAGE_SIZE) - static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, u64 addr) { - int ret, i; + int ret; struct panfrost_gem_mapping *bomapping; struct panfrost_gem_object *bo; struct address_space *mapping; - struct drm_gem_object *obj; - pgoff_t page_offset; + pgoff_t page_offset, sgt_pgoffset; + unsigned int sgt_remaining_pages; struct sg_table *sgt; - struct page **pages; + gfp_t page_gfp; bomapping = addr_to_mapping(pfdev, as, addr); if (!bomapping) return -ENOENT; bo = bomapping->obj; - if (!bo->is_heap) { + if (!panfrost_gem_is_heap(bo)) { dev_WARN(pfdev->dev, "matching BO is not heap type (GPU VA = %llx)", bomapping->mmnode.start << PAGE_SHIFT); ret = -EINVAL; @@ -570,66 +568,30 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, addr &= ~((u64)SZ_2M - 1); page_offset = addr >> PAGE_SHIFT; page_offset -= bomapping->mmnode.start; - - obj = &bo->base.base; - - dma_resv_lock(obj->resv, NULL); - - if (!bo->base.pages) { - bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M, - sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO); - if (!bo->sgts) { - ret = -ENOMEM; - goto err_unlock; - } - - pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT, - sizeof(struct page *), GFP_KERNEL | __GFP_ZERO); - if (!pages) { - kvfree(bo->sgts); - bo->sgts = NULL; - ret = -ENOMEM; - goto err_unlock; - } - bo->base.pages = pages; - refcount_set(&bo->base.pages_use_count, 1); - } else { - pages = bo->base.pages; - if (pages[page_offset]) { - /* Pages are already mapped, bail out. */ - goto out; - } - } - mapping = bo->base.base.filp->f_mapping; - mapping_set_unevictable(mapping); - for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) { - /* Can happen if the last fault only partially filled this - * section of the pages array before failing. 
In that case - * we skip already filled pages. - */ - if (pages[i]) - continue; + page_gfp = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM) | + __GFP_NORETRY | __GFP_NOWARN; - pages[i] = shmem_read_mapping_page(mapping, i); - if (IS_ERR(pages[i])) { - ret = PTR_ERR(pages[i]); - pages[i] = NULL; - goto err_unlock; - } + /* We want non-blocking allocations, if we're OOM, we just fail the job + * to unblock things. + */ + ret = drm_gem_shmem_sparse_populate_range(&bo->base, page_offset, + NUM_FAULT_PAGES, page_gfp, + __GFP_NORETRY | __GFP_NOWARN); + if (ret) + goto err_bo; + + sgt = drm_gem_shmem_sparse_get_sgt(&bo->base, page_offset, + &sgt_pgoffset, &sgt_remaining_pages); + if (IS_ERR(sgt)) { + ret = PTR_ERR(sgt); + goto err_bo; + } else if (sgt_pgoffset != 0 || sgt_remaining_pages != NUM_FAULT_PAGES) { + ret = -EINVAL; + goto err_bo; } - sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)]; - ret = sg_alloc_table_from_pages(sgt, pages + page_offset, - NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL); - if (ret) - goto err_unlock; - - ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0); - if (ret) - goto err_map; - mmu_map_sg(pfdev, bomapping->mmu, addr, IOMMU_WRITE | IOMMU_READ | IOMMU_CACHE | IOMMU_NOEXEC, sgt); @@ -637,18 +599,10 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, bo->heap_rss_size += SZ_2M; dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr); - -out: - dma_resv_unlock(obj->resv); - panfrost_gem_mapping_put(bomapping); return 0; -err_map: - sg_free_table(sgt); -err_unlock: - dma_resv_unlock(obj->resv); err_bo: panfrost_gem_mapping_put(bomapping); return ret; diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.h b/drivers/gpu/drm/panfrost/panfrost_mmu.h index 022a9a74a114..a84ed4209f8d 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.h +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.h @@ -8,6 +8,8 @@ struct panfrost_gem_mapping; struct panfrost_file_priv; struct panfrost_mmu; +#define NUM_FAULT_PAGES (SZ_2M / PAGE_SIZE) + int panfrost_mmu_map(struct panfrost_gem_mapping *mapping); void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping); From patchwork Fri Apr 4 09:26:30 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boris Brezillon X-Patchwork-Id: 14038242 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 63D5BC36010 for ; Fri, 4 Apr 2025 09:26:48 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2D33310EB9F; Fri, 4 Apr 2025 09:26:42 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=collabora.com header.i=@collabora.com header.b="TmBmcqJR"; dkim-atps=neutral Received: from bali.collaboradmins.com (bali.collaboradmins.com [148.251.105.195]) by gabe.freedesktop.org (Postfix) with ESMTPS id 32DE010EB9D; Fri, 4 Apr 2025 09:26:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1743758798; bh=BU3CTLQxUyFiI6yaHwAZ1algtYs9Av0mreKFx47QP14=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TmBmcqJRPDC5RSjTt5n7guP3HYAm1AT6Z1oxsf5atZsT0p0bZw3jeNnx9UshWQcA3 
From: Boris Brezillon To: Boris Brezillon , Steven Price , Liviu Dudau , =?utf-8?q?Adri=C3=A1n_Larumbe?= , lima@lists.freedesktop.org, Qiang Yu Cc: David Airlie , Simona Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , dri-devel@lists.freedesktop.org, Dmitry Osipenko , kernel@collabora.com Subject: [PATCH v3 4/8] drm/panthor: Add support for alloc-on-fault buffers Date: Fri, 4 Apr 2025 11:26:30 +0200 Message-ID: <20250404092634.2968115-5-boris.brezillon@collabora.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250404092634.2968115-1-boris.brezillon@collabora.com> References: <20250404092634.2968115-1-boris.brezillon@collabora.com> MIME-Version: 1.0

This lets the UMD flag buffers as alloc-on-fault (AKA lazy allocation, AKA alloc-on-demand). The ultimate goal is to use this infrastructure for heap objects, but this commit only deals with the GEM/VM bits.
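For illustration, here is a hypothetical userspace sketch of how a UMD could request such a buffer. Only the fields visible in this patch (size, flags, handle and alloc_on_faut_granularity, spelled as in the patch) plus the pre-existing exclusive_vm_id are assumed, and error handling is trimmed:

#include <string.h>
#include <sys/ioctl.h>
#include "panthor_drm.h" /* uAPI header; include path depends on the build */

/* Create a 64 MiB alloc-on-fault BO backed in 2 MiB granules. Physical
 * pages are only allocated when the GPU faults on the corresponding VA.
 */
static int create_alloc_on_fault_bo(int fd, __u32 exclusive_vm_id,
                                    __u32 *handle)
{
        struct drm_panthor_bo_create req;

        memset(&req, 0, sizeof(req));
        req.size = 64 * 1024 * 1024;
        req.flags = DRM_PANTHOR_BO_ALLOC_ON_FAULT;
        req.exclusive_vm_id = exclusive_vm_id;
        /* Must be a power of two and at least PAGE_SIZE. */
        req.alloc_on_faut_granularity = 2 * 1024 * 1024;

        if (ioctl(fd, DRM_IOCTL_PANTHOR_BO_CREATE, &req))
                return -1;

        *handle = req.handle;
        return 0;
}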
Signed-off-by: Boris Brezillon --- drivers/gpu/drm/panthor/panthor_drv.c | 20 +- drivers/gpu/drm/panthor/panthor_gem.c | 11 +- drivers/gpu/drm/panthor/panthor_gem.h | 8 +- drivers/gpu/drm/panthor/panthor_mmu.c | 456 +++++++++++++++++++------- include/uapi/drm/panthor_drm.h | 19 +- 5 files changed, 390 insertions(+), 124 deletions(-) diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c index 06fe46e32073..9d3cbae5c1d6 100644 --- a/drivers/gpu/drm/panthor/panthor_drv.c +++ b/drivers/gpu/drm/panthor/panthor_drv.c @@ -899,7 +899,8 @@ static int panthor_ioctl_vm_destroy(struct drm_device *ddev, void *data, return panthor_vm_pool_destroy_vm(pfile->vms, args->id); } -#define PANTHOR_BO_FLAGS DRM_PANTHOR_BO_NO_MMAP +#define PANTHOR_BO_FLAGS (DRM_PANTHOR_BO_NO_MMAP | \ + DRM_PANTHOR_BO_ALLOC_ON_FAULT) static int panthor_ioctl_bo_create(struct drm_device *ddev, void *data, struct drm_file *file) @@ -912,8 +913,18 @@ static int panthor_ioctl_bo_create(struct drm_device *ddev, void *data, if (!drm_dev_enter(ddev, &cookie)) return -ENODEV; - if (!args->size || args->pad || - (args->flags & ~PANTHOR_BO_FLAGS)) { + if (!args->size || (args->flags & ~PANTHOR_BO_FLAGS)) { + ret = -EINVAL; + goto out_dev_exit; + } + + if (args->flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT) { + if (args->alloc_on_faut_granularity < PAGE_SIZE || + !is_power_of_2(args->alloc_on_faut_granularity)) { + ret = -EINVAL; + goto out_dev_exit; + } + } else if (args->alloc_on_faut_granularity) { ret = -EINVAL; goto out_dev_exit; } @@ -927,7 +938,8 @@ static int panthor_ioctl_bo_create(struct drm_device *ddev, void *data, } ret = panthor_gem_create_with_handle(file, ddev, vm, &args->size, - args->flags, &args->handle); + args->flags, args->alloc_on_faut_granularity, + &args->handle); panthor_vm_put(vm); diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c index 8244a4e6c2a2..52b8d5468d53 100644 --- a/drivers/gpu/drm/panthor/panthor_gem.c +++ b/drivers/gpu/drm/panthor/panthor_gem.c @@ -18,6 +18,9 @@ static void panthor_gem_free_object(struct drm_gem_object *obj) struct panthor_gem_object *bo = to_panthor_bo(obj); struct drm_gem_object *vm_root_gem = bo->exclusive_vm_root_gem; + if (bo->base.sparse) + drm_gem_shmem_sparse_finish(&bo->base); + drm_gem_free_mmap_offset(&bo->base.base); mutex_destroy(&bo->gpuva_list_lock); drm_gem_shmem_free(&bo->base); @@ -215,7 +218,9 @@ int panthor_gem_create_with_handle(struct drm_file *file, struct drm_device *ddev, struct panthor_vm *exclusive_vm, - u64 *size, u32 flags, u32 *handle) + u64 *size, u32 flags, + u32 alloc_on_fault_granularity, + u32 *handle) { int ret; struct drm_gem_shmem_object *shmem; @@ -228,6 +233,10 @@ panthor_gem_create_with_handle(struct drm_file *file, bo = to_panthor_bo(&shmem->base); bo->flags = flags; + if (flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT) + drm_gem_shmem_sparse_init(&bo->base, &bo->sparse, + alloc_on_fault_granularity >> PAGE_SHIFT); + if (exclusive_vm) { bo->exclusive_vm_root_gem = panthor_vm_root_gem(exclusive_vm); drm_gem_object_get(bo->exclusive_vm_root_gem); diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h index 1a363bb814f4..53a85a463c1e 100644 --- a/drivers/gpu/drm/panthor/panthor_gem.h +++ b/drivers/gpu/drm/panthor/panthor_gem.h @@ -20,6 +20,9 @@ struct panthor_gem_object { /** @base: Inherit from drm_gem_shmem_object. */ struct drm_gem_shmem_object base; + /** @sparse: Used when alloc-on-fault is requested. 
*/ + struct drm_gem_shmem_sparse_backing sparse; + /** * @exclusive_vm_root_gem: Root GEM of the exclusive VM this GEM object * is attached to. @@ -89,7 +92,10 @@ int panthor_gem_create_with_handle(struct drm_file *file, struct drm_device *ddev, struct panthor_vm *exclusive_vm, - u64 *size, u32 flags, uint32_t *handle); + u64 *size, u32 flags, + u32 alloc_on_fault_granularity, + u32 *handle); + static inline u64 panthor_kernel_bo_gpuva(struct panthor_kernel_bo *bo) diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c index dc173c6edde0..e05aaac10481 100644 --- a/drivers/gpu/drm/panthor/panthor_mmu.c +++ b/drivers/gpu/drm/panthor/panthor_mmu.c @@ -380,13 +380,20 @@ struct panthor_vm { */ bool unusable; - /** - * @unhandled_fault: Unhandled fault happened. - * - * This should be reported to the scheduler, and the queue/group be - * flagged as faulty as a result. - */ - bool unhandled_fault; + /** @fault: Fields related to VM faults. */ + struct { + /** @work: Work used to defer VM fault processing. */ + struct work_struct work; + + /** + * @status: Value of the FAULTSTATUS register at the time the fault was + * reported. + */ + u32 status; + + /** @addr: Value of the FAULTADDRESS register at the time the fault was reported. */ + u64 addr; + } fault; }; /** @@ -456,11 +463,22 @@ static void *alloc_pt(void *cookie, size_t size, gfp_t gfp) if (drm_WARN_ON(&vm->ptdev->base, size != SZ_4K)) return NULL; + if (!vm->op_ctx) { + /* No op_ctx means alloc-on-fault, in that case we just allocate with + * non-blocking gfp flags. + */ + page = kmem_cache_zalloc(pt_cache, __GFP_NORETRY | __GFP_NOWARN); + if (!page) + return NULL; + + kmemleak_ignore(page); + return page; + } + /* We must have some op_ctx attached to the VM and it must have at least one * free page. */ - if (drm_WARN_ON(&vm->ptdev->base, !vm->op_ctx) || - drm_WARN_ON(&vm->ptdev->base, + if (drm_WARN_ON(&vm->ptdev->base, vm->op_ctx->rsvd_page_tables.ptr >= vm->op_ctx->rsvd_page_tables.count)) return NULL; @@ -666,7 +684,7 @@ static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as) */ bool panthor_vm_has_unhandled_faults(struct panthor_vm *vm) { - return vm->unhandled_fault; + return vm->fault.status && !work_busy(&vm->fault.work); } /** @@ -773,7 +791,8 @@ int panthor_vm_active(struct panthor_vm *vm) transcfg |= AS_TRANSCFG_PTW_SH_OS; /* If the VM is re-activated, we clear the fault. */ - vm->unhandled_fault = false; + vm->fault.status = 0; + vm->fault.addr = 0; /* Unhandled pagefault on this AS, clear the fault and re-enable interrupts * before enabling the AS. 
@@ -907,18 +926,32 @@ int panthor_vm_flush_all(struct panthor_vm *vm) return panthor_vm_flush_range(vm, vm->base.mm_start, vm->base.mm_range); } -static int panthor_vm_unmap_pages(struct panthor_vm *vm, u64 iova, u64 size) +static int panthor_vm_unmap_pages(struct panthor_vm *vm, u64 iova, u64 size, + u32 sparse_granularity) { struct panthor_device *ptdev = vm->ptdev; struct io_pgtable_ops *ops = vm->pgtbl_ops; u64 offset = 0; drm_dbg(&ptdev->base, "unmap: as=%d, iova=%llx, len=%llx", vm->as.id, iova, size); + sparse_granularity <<= PAGE_SHIFT; while (offset < size) { size_t unmapped_sz = 0, pgcount; size_t pgsize = get_pgsize(iova + offset, size - offset, &pgcount); + if (sparse_granularity && !ops->iova_to_phys(ops, iova + offset)) { + offset += sparse_granularity; + continue; + } else if (sparse_granularity) { + u32 chunk_size = min_t(u32, size - offset, sparse_granularity); + + pgsize = get_pgsize(iova + offset, chunk_size, &pgcount); + pgcount = 1; + } else { + pgsize = get_pgsize(iova + offset, size - offset, &pgcount); + } + unmapped_sz = ops->unmap_pages(ops, iova + offset, pgsize, pgcount, NULL); if (drm_WARN_ON(&ptdev->base, unmapped_sz != pgsize * pgcount)) { @@ -985,7 +1018,7 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot, */ drm_WARN_ON(&ptdev->base, panthor_vm_unmap_pages(vm, start_iova, - iova - start_iova)); + iova - start_iova, 0)); return ret; } } @@ -1104,8 +1137,12 @@ static void panthor_vm_bo_put(struct drm_gpuvm_bo *vm_bo) /* If the vm_bo object was destroyed, release the pin reference that * was hold by this object. */ - if (unpin && !bo->base.base.import_attach) - drm_gem_shmem_unpin(&bo->base); + if (unpin) { + if (bo->flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT) + drm_gem_shmem_sparse_unpin(&bo->base); + else if (!bo->base.base.import_attach) + drm_gem_shmem_unpin(&bo->base); + } drm_gpuvm_put(vm); drm_gem_object_put(&bo->base.base); @@ -1205,7 +1242,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx, u32 flags) { struct drm_gpuvm_bo *preallocated_vm_bo; - struct sg_table *sgt = NULL; u64 pt_count; int ret; @@ -1235,29 +1271,41 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx, if (ret) goto err_cleanup; - if (!bo->base.base.import_attach) { - /* Pre-reserve the BO pages, so the map operation doesn't have to - * allocate. + if (bo->flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT) { + /* For alloc-on-faut objects, we just flag the sparse + * resources as pinned, but the actual allocation is + * deferred (done at fault time). */ - ret = drm_gem_shmem_pin(&bo->base); - if (ret) + drm_gem_shmem_sparse_pin(&bo->base); + } else { + struct sg_table *sgt = NULL; + + if (!bo->base.base.import_attach) { + /* Pre-reserve the BO pages, so the map operation + * doesn't have to allocate. 
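+ * Alloc-on-fault BOs take the drm_gem_shmem_sparse_pin() path above
+ * instead and get their backing pages allocated lazily at fault time.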
+ */ + ret = drm_gem_shmem_pin(&bo->base); + if (ret) + goto err_cleanup; + } + + sgt = drm_gem_shmem_get_pages_sgt(&bo->base); + if (IS_ERR(sgt)) { + if (!bo->base.base.import_attach) + drm_gem_shmem_unpin(&bo->base); + + ret = PTR_ERR(sgt); goto err_cleanup; + } + + op_ctx->map.sgt = sgt; } - sgt = drm_gem_shmem_get_pages_sgt(&bo->base); - if (IS_ERR(sgt)) { - if (!bo->base.base.import_attach) - drm_gem_shmem_unpin(&bo->base); - - ret = PTR_ERR(sgt); - goto err_cleanup; - } - - op_ctx->map.sgt = sgt; - preallocated_vm_bo = drm_gpuvm_bo_create(&vm->base, &bo->base.base); if (!preallocated_vm_bo) { - if (!bo->base.base.import_attach) + if (bo->flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT) + drm_gem_shmem_sparse_unpin(&bo->base); + else if (!bo->base.base.import_attach) drm_gem_shmem_unpin(&bo->base); ret = -ENOMEM; @@ -1282,42 +1330,47 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx, * If our pre-allocated vm_bo is picked, it now retains the pin ref, * which will be released in panthor_vm_bo_put(). */ - if (preallocated_vm_bo != op_ctx->map.vm_bo && - !bo->base.base.import_attach) - drm_gem_shmem_unpin(&bo->base); + if (preallocated_vm_bo != op_ctx->map.vm_bo) { + if (bo->flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT) + drm_gem_shmem_sparse_unpin(&bo->base); + else if (!bo->base.base.import_attach) + drm_gem_shmem_unpin(&bo->base); + } op_ctx->map.bo_offset = offset; - /* L1, L2 and L3 page tables. - * We could optimize L3 allocation by iterating over the sgt and merging - * 2M contiguous blocks, but it's simpler to over-provision and return - * the pages if they're not used. - */ - pt_count = ((ALIGN(va + size, 1ull << 39) - ALIGN_DOWN(va, 1ull << 39)) >> 39) + - ((ALIGN(va + size, 1ull << 30) - ALIGN_DOWN(va, 1ull << 30)) >> 30) + - ((ALIGN(va + size, 1ull << 21) - ALIGN_DOWN(va, 1ull << 21)) >> 21); + if (!(bo->flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT)) { + /* L1, L2 and L3 page tables. + * We could optimize L3 allocation by iterating over the sgt and merging + * 2M contiguous blocks, but it's simpler to over-provision and return + * the pages if they're not used. + */ + pt_count = ((ALIGN(va + size, 1ull << 39) - ALIGN_DOWN(va, 1ull << 39)) >> 39) + + ((ALIGN(va + size, 1ull << 30) - ALIGN_DOWN(va, 1ull << 30)) >> 30) + + ((ALIGN(va + size, 1ull << 21) - ALIGN_DOWN(va, 1ull << 21)) >> 21); - op_ctx->rsvd_page_tables.pages = kcalloc(pt_count, - sizeof(*op_ctx->rsvd_page_tables.pages), - GFP_KERNEL); - if (!op_ctx->rsvd_page_tables.pages) { - ret = -ENOMEM; - goto err_cleanup; + op_ctx->rsvd_page_tables.pages = kcalloc(pt_count, + sizeof(*op_ctx->rsvd_page_tables.pages), + GFP_KERNEL); + if (!op_ctx->rsvd_page_tables.pages) { + ret = -ENOMEM; + goto err_cleanup; + } + + ret = kmem_cache_alloc_bulk(pt_cache, GFP_KERNEL, pt_count, + op_ctx->rsvd_page_tables.pages); + op_ctx->rsvd_page_tables.count = ret; + if (ret != pt_count) { + ret = -ENOMEM; + goto err_cleanup; + } + + /* Insert BO into the extobj list last, when we know nothing can fail. */ + dma_resv_lock(panthor_vm_resv(vm), NULL); + drm_gpuvm_bo_extobj_add(op_ctx->map.vm_bo); + dma_resv_unlock(panthor_vm_resv(vm)); } - ret = kmem_cache_alloc_bulk(pt_cache, GFP_KERNEL, pt_count, - op_ctx->rsvd_page_tables.pages); - op_ctx->rsvd_page_tables.count = ret; - if (ret != pt_count) { - ret = -ENOMEM; - goto err_cleanup; - } - - /* Insert BO into the extobj list last, when we know nothing can fail. 
*/ - dma_resv_lock(panthor_vm_resv(vm), NULL); - drm_gpuvm_bo_extobj_add(op_ctx->map.vm_bo); - dma_resv_unlock(panthor_vm_resv(vm)); - return 0; err_cleanup: @@ -1665,35 +1718,94 @@ static const char *access_type_name(struct panthor_device *ptdev, } } -static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) +static int panthor_vm_map_on_demand_locked(struct panthor_vm *vm, + struct panthor_vma *vma, u64 offset, + u64 size, gfp_t page_gfp, + gfp_t other_gfp) { - bool has_unhandled_faults = false; + struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj); + u64 bo_offset = vma->base.gem.offset + offset; + u64 iova = vma->base.va.addr + offset; + u32 granularity = bo->sparse.granularity << PAGE_SHIFT; + pgoff_t first_page = bo_offset >> PAGE_SHIFT; + pgoff_t last_page = (bo_offset + size) >> PAGE_SHIFT; + int prot = flags_to_prot(vma->flags); + int ret; - status = panthor_mmu_fault_mask(ptdev, status); - while (status) { - u32 as = ffs(status | (status >> 16)) - 1; - u32 mask = panthor_mmu_as_fault_mask(ptdev, as); - u32 new_int_mask; - u64 addr; - u32 fault_status; - u32 exception_type; - u32 access_type; - u32 source_id; + lockdep_assert_held(&vm->op_lock); - fault_status = gpu_read(ptdev, AS_FAULTSTATUS(as)); - addr = gpu_read(ptdev, AS_FAULTADDRESS_LO(as)); - addr |= (u64)gpu_read(ptdev, AS_FAULTADDRESS_HI(as)) << 32; + if ((size | bo_offset | iova) & (granularity - 1)) + return -EINVAL; + if (offset + size < offset || offset + size > vma->base.va.range) + return -EINVAL; + + ret = drm_gem_shmem_sparse_populate_range(&bo->base, + bo_offset >> PAGE_SHIFT, + size >> PAGE_SHIFT, page_gfp, + other_gfp); + if (ret) + return ret; + + for (pgoff_t p = first_page; p < last_page; ) { + unsigned int sgt_remaining_pages; + pgoff_t sgt_pgoffset; + struct sg_table *sgt; + + sgt = drm_gem_shmem_sparse_get_sgt(&bo->base, p, + &sgt_pgoffset, &sgt_remaining_pages); + if (IS_ERR(sgt)) + return PTR_ERR(sgt); + + ret = panthor_vm_map_pages(vm, iova, prot, sgt, + (u64)sgt_pgoffset << PAGE_SHIFT, + (u64)sgt_remaining_pages << PAGE_SHIFT); + if (ret) + return ret; + + p += sgt_remaining_pages; + iova += (u64)sgt_remaining_pages << PAGE_SHIFT; + } + + return 0; +} + +static void panthor_vm_handle_fault_locked(struct panthor_vm *vm) +{ + struct panthor_device *ptdev = vm->ptdev; + struct panthor_gem_object *bo = NULL; + struct address_space *mapping; + gfp_t page_gfp, other_gfp; + struct drm_gpuva *gpuva; + struct panthor_vma *vma; + u64 iova; + int ret; + + gpuva = drm_gpuva_find_first(&vm->base, vm->fault.addr, 1); + vma = gpuva ? 
container_of(gpuva, struct panthor_vma, base) : NULL; + if (vma && vma->base.gem.obj) + bo = to_panthor_bo(vma->base.gem.obj); + + if (!bo || !(bo->flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT)) { + ret = -EFAULT; + goto out; + } + + iova = vm->fault.addr & ~((u64)bo->sparse.granularity - 1); + mapping = bo->base.base.filp->f_mapping; + page_gfp = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM) | + __GFP_NORETRY | __GFP_NOWARN; + other_gfp = __GFP_NORETRY | __GFP_NOWARN; + ret = panthor_vm_map_on_demand_locked(vm, vma, iova - vma->base.va.addr, + bo->sparse.granularity, page_gfp, + other_gfp); + +out: + if (ret) { /* decode the fault status */ - exception_type = fault_status & 0xFF; - access_type = (fault_status >> 8) & 0x3; - source_id = (fault_status >> 16); - - mutex_lock(&ptdev->mmu->as.slots_lock); - - ptdev->mmu->as.faulty_mask |= mask; - new_int_mask = - panthor_mmu_fault_mask(ptdev, ~ptdev->mmu->as.faulty_mask); + u32 exception_type = vm->fault.status & 0xFF; + u32 access_type = (vm->fault.status >> 8) & 0x3; + u32 source_id = vm->fault.status >> 16; /* terminal fault, print info about the fault */ drm_err(&ptdev->base, @@ -1703,38 +1815,99 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) "exception type 0x%X: %s\n" "access type 0x%X: %s\n" "source id 0x%X\n", - as, addr, - fault_status, - (fault_status & (1 << 10) ? "DECODER FAULT" : "SLAVE FAULT"), + vm->as.id, vm->fault.addr, + vm->fault.status, + (vm->fault.status & (1 << 10) ? "DECODER FAULT" : "SLAVE FAULT"), exception_type, panthor_exception_name(ptdev, exception_type), - access_type, access_type_name(ptdev, fault_status), + access_type, access_type_name(ptdev, vm->fault.status), source_id); - /* We don't handle VM faults at the moment, so let's just clear the - * interrupt and let the writer/reader crash. - * Note that COMPLETED irqs are never cleared, but this is fine - * because they are always masked. - */ - gpu_write(ptdev, MMU_INT_CLEAR, mask); + panthor_sched_report_mmu_fault(ptdev); + } - /* Ignore MMU interrupts on this AS until it's been - * re-enabled. - */ - ptdev->mmu->irq.mask = new_int_mask; + mutex_lock(&ptdev->mmu->as.slots_lock); + if (vm->as.id >= 0) { + if (ret) { + /* If we failed to handle the fault, disable + * the AS to kill jobs. + */ + panthor_mmu_as_disable(ptdev, vm->as.id); + } else { + u32 as_fault_mask = panthor_mmu_as_fault_mask(ptdev, vm->as.id); - if (ptdev->mmu->as.slots[as].vm) - ptdev->mmu->as.slots[as].vm->unhandled_fault = true; + /* If we handled the fault, clear the interrupt and re-enable it. */ + vm->fault.status = 0; + vm->fault.addr = 0; - /* Disable the MMU to kill jobs on this AS. 
*/ - panthor_mmu_as_disable(ptdev, as); + gpu_write(ptdev, MMU_INT_CLEAR, as_fault_mask); + ptdev->mmu->as.faulty_mask &= ~as_fault_mask; + ptdev->mmu->irq.mask = + panthor_mmu_fault_mask(ptdev, ~ptdev->mmu->as.faulty_mask); + gpu_write(ptdev, MMU_INT_MASK, ptdev->mmu->irq.mask); + } + } + mutex_unlock(&ptdev->mmu->as.slots_lock); +} + +static void panthor_vm_fault_work(struct work_struct *work) +{ + struct panthor_vm *vm = container_of(work, struct panthor_vm, fault.work); + + mutex_lock(&vm->op_lock); + panthor_vm_handle_fault_locked(vm); + mutex_unlock(&vm->op_lock); + panthor_vm_put(vm); +} + +static void panthor_mmu_handle_fault_locked(struct panthor_device *ptdev, u32 as) +{ + u32 status = panthor_mmu_fault_mask(ptdev, gpu_read(ptdev, MMU_INT_RAWSTAT)); + u32 mask = panthor_mmu_as_fault_mask(ptdev, as); + u32 fault_status, new_int_mask; + struct panthor_vm *vm; + u64 fault_addr; + + /* Slot got recycled while we were trying to acquire the lock. + * We'll try to handle the MMU fault next time the VM is bound. + */ + if (!status) + return; + + fault_status = gpu_read(ptdev, AS_FAULTSTATUS(as)); + fault_addr = gpu_read(ptdev, AS_FAULTADDRESS_LO(as)); + fault_addr |= (u64)gpu_read(ptdev, AS_FAULTADDRESS_HI(as)) << 32; + + ptdev->mmu->as.faulty_mask |= mask; + + new_int_mask = + panthor_mmu_fault_mask(ptdev, ~ptdev->mmu->as.faulty_mask); + ptdev->mmu->irq.mask = new_int_mask; + vm = ptdev->mmu->as.slots[as].vm; + + if (vm) { + vm->fault.status = fault_status; + vm->fault.addr = fault_addr; + if (queue_work(ptdev->mmu->vm.wq, &vm->fault.work)) + panthor_vm_get(vm); + } +} + +static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) +{ + /* Note that COMPLETED irqs are never cleared, but this is fine because + * they are always masked. + */ + status = panthor_mmu_fault_mask(ptdev, status); + while (status) { + u32 as = ffs(status | (status >> 16)) - 1; + u32 mask = panthor_mmu_as_fault_mask(ptdev, as); + + mutex_lock(&ptdev->mmu->as.slots_lock); + panthor_mmu_handle_fault_locked(ptdev, as); mutex_unlock(&ptdev->mmu->as.slots_lock); status &= ~mask; - has_unhandled_faults = true; } - - if (has_unhandled_faults) - panthor_sched_report_mmu_fault(ptdev); } PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler); @@ -2066,6 +2239,7 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv) struct panthor_vm *vm = priv; struct panthor_vm_op_ctx *op_ctx = vm->op_ctx; struct panthor_vma *vma = panthor_vm_op_ctx_get_vma(op_ctx); + struct panthor_gem_object *bo = to_panthor_bo(op->map.gem.obj); int ret; if (!vma) @@ -2073,11 +2247,14 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv) panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS); - ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags), - op_ctx->map.sgt, op->map.gem.offset, - op->map.va.range); - if (ret) - return ret; + /* Don't map alloc-on-fault objects. This will happen at fault time. */ + if (!(bo->flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT)) { + ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags), + op_ctx->map.sgt, op->map.gem.offset, + op->map.va.range); + if (ret) + return ret; + } /* Ref owned by the mapping now, clear the obj field so we don't release the * pinning/obj ref behind GPUVA's back. 
@@ -2096,10 +2273,43 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op, struct panthor_vm_op_ctx *op_ctx = vm->op_ctx; struct panthor_vma *prev_vma = NULL, *next_vma = NULL; u64 unmap_start, unmap_range; + u32 sparse_granularity = 0; int ret; + if (op->remap.unmap->va->gem.obj) { + struct panthor_gem_object *unmapped_bo = to_panthor_bo(op->remap.unmap->va->gem.obj); + + sparse_granularity = unmapped_bo->sparse.granularity; + } + + if (op->remap.prev && op->remap.prev->gem.obj) { + struct panthor_gem_object *prev_bo = to_panthor_bo(op->remap.prev->gem.obj); + + if (!sparse_granularity) + sparse_granularity = prev_bo->sparse.granularity; + else + sparse_granularity = MIN(prev_bo->sparse.granularity, sparse_granularity); + } + + if (op->remap.next && op->remap.next->gem.obj) { + struct panthor_gem_object *next_bo = to_panthor_bo(op->remap.next->gem.obj); + + if (!sparse_granularity) + sparse_granularity = next_bo->sparse.granularity; + else + sparse_granularity = MIN(next_bo->sparse.granularity, sparse_granularity); + } + drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range); - ret = panthor_vm_unmap_pages(vm, unmap_start, unmap_range); + + if (sparse_granularity) { + sparse_granularity = 1 << (ffs(sparse_granularity | + (unmap_start >> PAGE_SHIFT) | + (unmap_range >> PAGE_SHIFT)) - + 1); + } + + ret = panthor_vm_unmap_pages(vm, unmap_start, unmap_range, sparse_granularity); if (ret) return ret; @@ -2141,10 +2351,23 @@ static int panthor_gpuva_sm_step_unmap(struct drm_gpuva_op *op, { struct panthor_vma *unmap_vma = container_of(op->unmap.va, struct panthor_vma, base); struct panthor_vm *vm = priv; + u32 sparse_granularity = 0; int ret; + if (op->unmap.va->gem.obj) { + struct panthor_gem_object *unmapped_bo = to_panthor_bo(op->unmap.va->gem.obj); + + sparse_granularity = unmapped_bo->sparse.granularity; + } + + if (sparse_granularity) { + sparse_granularity = 1 << (ffs(sparse_granularity | + (unmap_vma->base.va.addr >> PAGE_SHIFT) | + (unmap_vma->base.va.range >> PAGE_SHIFT)) - 1); + } + ret = panthor_vm_unmap_pages(vm, unmap_vma->base.va.addr, - unmap_vma->base.va.range); + unmap_vma->base.va.range, sparse_granularity); if (drm_WARN_ON(&vm->ptdev->base, ret)) return ret; @@ -2338,6 +2561,7 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu, goto err_free_vm; } + INIT_WORK(&vm->fault.work, panthor_vm_fault_work); mutex_init(&vm->heaps.lock); vm->for_mcu = for_mcu; vm->ptdev = ptdev; diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h index 97e2c4510e69..8071f1c438e2 100644 --- a/include/uapi/drm/panthor_drm.h +++ b/include/uapi/drm/panthor_drm.h @@ -615,6 +615,16 @@ struct drm_panthor_vm_get_state { enum drm_panthor_bo_flags { /** @DRM_PANTHOR_BO_NO_MMAP: The buffer object will never be CPU-mapped in userspace. */ DRM_PANTHOR_BO_NO_MMAP = (1 << 0), + + /** + * @DRM_PANTHOR_BO_ALLOC_ON_FAULT: The buffer's backing memory is allocated on demand. + * + * When alloc-on-fault is used, the user should expect job failures, because the + * allocation happens in a path where waiting is not allowed; it can therefore fail, + * and there is nothing the kernel can do to mitigate that. The group will + * be unusable after such a failure. + */ + DRM_PANTHOR_BO_ALLOC_ON_FAULT = (1 << 1), }; /** @@ -649,8 +659,13 @@ struct drm_panthor_bo_create { */ __u32 handle; - /** @pad: MBZ. */ - __u32 pad; + /** + * @alloc_on_fault_granularity: Granularity of the alloc-on-fault behavior.
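+ * Expressed in bytes. On a GPU fault, the kernel populates the faulting
+ * region one granule at a time, using non-blocking allocations.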
+ * + * Must be zero when DRM_PANTHOR_BO_ALLOC_ON_FAULT is not set. + * Must be a power-of-two, at least a page size, and less than or equal to @size. + */ + __u32 alloc_on_fault_granularity; }; /** From patchwork Fri Apr 4 09:26:31 2025 X-Patchwork-Submitter: Boris Brezillon X-Patchwork-Id: 14038244 From: Boris Brezillon Subject: [PATCH v3 5/8] drm/panthor: Allow kernel BOs to pass DRM_PANTHOR_BO_ALLOC_ON_FAULT Date: Fri, 4 Apr 2025 11:26:31 +0200 Message-ID: <20250404092634.2968115-6-boris.brezillon@collabora.com> In-Reply-To: <20250404092634.2968115-1-boris.brezillon@collabora.com> References: <20250404092634.2968115-1-boris.brezillon@collabora.com> This will be used by the heap logic to allow for real non-blocking allocations when growing the heap.
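For kernel BOs, the new argument is plumbed through panthor_kernel_bo_create(), as the diff below shows. The following is only a minimal sketch of a caller: example_make_growable_bo(), max_size and granule_size are made-up names, and it assumes the granularity argument is expressed in pages, matching how the heap code later in this series passes chunk_size >> PAGE_SHIFT.

static struct panthor_kernel_bo *
example_make_growable_bo(struct panthor_device *ptdev, struct panthor_vm *vm,
			 size_t max_size, size_t granule_size)
{
	/* granule_size must be a power-of-two multiple of PAGE_SIZE. The VA
	 * range is reserved at bind time, but PTEs and backing pages only
	 * materialize when the GPU faults on a granule (or when the range
	 * is pre-faulted by the driver).
	 */
	return panthor_kernel_bo_create(ptdev, vm, max_size,
					DRM_PANTHOR_BO_NO_MMAP |
					DRM_PANTHOR_BO_ALLOC_ON_FAULT,
					granule_size >> PAGE_SHIFT,
					DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC,
					PANTHOR_VM_KERNEL_AUTO_VA);
}

This is essentially the pattern the tiler heap adopts in patch 7/8, with granule_size equal to the heap chunk size.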
Signed-off-by: Boris Brezillon --- drivers/gpu/drm/panthor/panthor_fw.c | 6 +++--- drivers/gpu/drm/panthor/panthor_gem.c | 7 ++++++- drivers/gpu/drm/panthor/panthor_gem.h | 4 ++-- drivers/gpu/drm/panthor/panthor_sched.c | 6 +++--- 4 files changed, 14 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c index 446bb377b953..cb6b72a513b1 100644 --- a/drivers/gpu/drm/panthor/panthor_fw.c +++ b/drivers/gpu/drm/panthor/panthor_fw.c @@ -446,7 +446,7 @@ panthor_fw_alloc_queue_iface_mem(struct panthor_device *ptdev, int ret; mem = panthor_kernel_bo_create(ptdev, ptdev->fw->vm, SZ_8K, - DRM_PANTHOR_BO_NO_MMAP, + DRM_PANTHOR_BO_NO_MMAP, 0, DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED, PANTHOR_VM_KERNEL_AUTO_VA); @@ -479,7 +479,7 @@ struct panthor_kernel_bo * panthor_fw_alloc_suspend_buf_mem(struct panthor_device *ptdev, size_t size) { return panthor_kernel_bo_create(ptdev, panthor_fw_vm(ptdev), size, - DRM_PANTHOR_BO_NO_MMAP, + DRM_PANTHOR_BO_NO_MMAP, 0, DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC, PANTHOR_VM_KERNEL_AUTO_VA); } @@ -600,7 +600,7 @@ static int panthor_fw_load_section_entry(struct panthor_device *ptdev, section->mem = panthor_kernel_bo_create(ptdev, panthor_fw_vm(ptdev), section_size, - DRM_PANTHOR_BO_NO_MMAP, + DRM_PANTHOR_BO_NO_MMAP, 0, vm_map_flags, va); if (IS_ERR(section->mem)) return PTR_ERR(section->mem); diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c index 52b8d5468d53..809d3ca48ba1 100644 --- a/drivers/gpu/drm/panthor/panthor_gem.c +++ b/drivers/gpu/drm/panthor/panthor_gem.c @@ -75,7 +75,8 @@ void panthor_kernel_bo_destroy(struct panthor_kernel_bo *bo) */ struct panthor_kernel_bo * panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm, - size_t size, u32 bo_flags, u32 vm_map_flags, + size_t size, u32 bo_flags, + u32 alloc_on_fault_granularity, u32 vm_map_flags, u64 gpu_va) { struct drm_gem_shmem_object *obj; @@ -100,6 +101,10 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm, kbo->obj = &obj->base; bo->flags = bo_flags; + if (bo_flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT) + drm_gem_shmem_sparse_init(&bo->base, &bo->sparse, + alloc_on_fault_granularity); + /* The system and GPU MMU page size might differ, which becomes a * problem for FW sections that need to be mapped at explicit address * since our PAGE_SIZE alignment might cover a VA range that's diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h index 53a85a463c1e..8ae0b19b4d90 100644 --- a/drivers/gpu/drm/panthor/panthor_gem.h +++ b/drivers/gpu/drm/panthor/panthor_gem.h @@ -139,8 +139,8 @@ panthor_kernel_bo_vunmap(struct panthor_kernel_bo *bo) struct panthor_kernel_bo * panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm, - size_t size, u32 bo_flags, u32 vm_map_flags, - u64 gpu_va); + size_t size, u32 bo_flags, u32 alloc_on_fault_granularity, + u32 vm_map_flags, u64 gpu_va); void panthor_kernel_bo_destroy(struct panthor_kernel_bo *bo); diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c index 446ec780eb4a..fe86886442bf 100644 --- a/drivers/gpu/drm/panthor/panthor_sched.c +++ b/drivers/gpu/drm/panthor/panthor_sched.c @@ -3329,7 +3329,7 @@ group_create_queue(struct panthor_group *group, queue->ringbuf = panthor_kernel_bo_create(group->ptdev, group->vm, args->ringbuf_size, - DRM_PANTHOR_BO_NO_MMAP, + DRM_PANTHOR_BO_NO_MMAP, 0, 
DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED, PANTHOR_VM_KERNEL_AUTO_VA); @@ -3359,7 +3359,7 @@ group_create_queue(struct panthor_group *group, panthor_kernel_bo_create(group->ptdev, group->vm, queue->profiling.slot_count * sizeof(struct panthor_job_profiling_data), - DRM_PANTHOR_BO_NO_MMAP, + DRM_PANTHOR_BO_NO_MMAP, 0, DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED, PANTHOR_VM_KERNEL_AUTO_VA); @@ -3490,7 +3490,7 @@ int panthor_group_create(struct panthor_file *pfile, group->syncobjs = panthor_kernel_bo_create(ptdev, group->vm, group_args->queues.count * sizeof(struct panthor_syncobj_64b), - DRM_PANTHOR_BO_NO_MMAP, + DRM_PANTHOR_BO_NO_MMAP, 0, DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED, PANTHOR_VM_KERNEL_AUTO_VA); From patchwork Fri Apr 4 09:26:32 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boris Brezillon X-Patchwork-Id: 14038247 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C7C21C3601E for ; Fri, 4 Apr 2025 09:26:53 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 0D08510EBB0; Fri, 4 Apr 2025 09:26:48 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=collabora.com header.i=@collabora.com header.b="UUNEp4U/"; dkim-atps=neutral Received: from bali.collaboradmins.com (bali.collaboradmins.com [148.251.105.195]) by gabe.freedesktop.org (Postfix) with ESMTPS id A190110EB9F; Fri, 4 Apr 2025 09:26:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1743758800; bh=tG8lfFAviS5kJzCGazqrwyKEN7RGYsYNHdE7EpTka+g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UUNEp4U/HD+DUxcPYuDKHx84Jw2ivn5vQhHJbdRnMeWFWEuIWA7EyYrSU+9v00EKy gMOPeFCwEGhNRMPlYB0HHKfuUHS6lkKJ946fmd6pgo3BzRnJaH/Sc4/kK5ymEmUvvt 4zV+TpbEsqflcp+FskaAcUl9cur+kHq7r54fkr19dOxuEuKstRiY/BLxITbb5TSgoR pFGl2J3xGqb5HYZB3Yaqm0PLbOaAM2T5MwXHB2GN8xTqxyJztZ9fb6HAB43MDHS+Fv do1HylXddeYfsvlLanqMreJuAyofHIGRy7cBiUBNgwrXWYi3L1Y72B3Ez/eipXaqaV X1r+XQ3mI6m3w== Received: from localhost.localdomain (unknown [IPv6:2a01:e0a:2c:6930:5cf4:84a1:2763:fe0d]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: bbrezillon) by bali.collaboradmins.com (Postfix) with ESMTPSA id 9A68417E1060; Fri, 4 Apr 2025 11:26:39 +0200 (CEST) From: Boris Brezillon To: Boris Brezillon , Steven Price , Liviu Dudau , =?utf-8?q?Adri=C3=A1n_Larumbe?= , lima@lists.freedesktop.org, Qiang Yu Cc: David Airlie , Simona Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , dri-devel@lists.freedesktop.org, Dmitry Osipenko , kernel@collabora.com Subject: [PATCH v3 6/8] drm/panthor: Add a panthor_vm_pre_fault_range() helper Date: Fri, 4 Apr 2025 11:26:32 +0200 Message-ID: <20250404092634.2968115-7-boris.brezillon@collabora.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250404092634.2968115-1-boris.brezillon@collabora.com> References: <20250404092634.2968115-1-boris.brezillon@collabora.com> MIME-Version: 1.0 
X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" This allows one to pre-allocate resources on a sparse BO to avoid faulting when the GPU accesses the memory region. Will be used by the heap logic to pre-populate a heap object with a predefined number of chunks. Signed-off-by: Boris Brezillon --- drivers/gpu/drm/panthor/panthor_mmu.c | 25 +++++++++++++++++++++++++ drivers/gpu/drm/panthor/panthor_mmu.h | 2 ++ 2 files changed, 27 insertions(+) diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c index e05aaac10481..aea9b5f2ce64 100644 --- a/drivers/gpu/drm/panthor/panthor_mmu.c +++ b/drivers/gpu/drm/panthor/panthor_mmu.c @@ -1770,6 +1770,31 @@ static int panthor_vm_map_on_demand_locked(struct panthor_vm *vm, return 0; } +int panthor_vm_pre_fault_range(struct panthor_vm *vm, u64 iova, u64 size, + gfp_t page_gfp, gfp_t other_gfp) +{ + struct panthor_gem_object *bo = NULL; + struct drm_gpuva *gpuva; + struct panthor_vma *vma; + int ret; + + mutex_lock(&vm->op_lock); + gpuva = drm_gpuva_find_first(&vm->base, iova, 1); + vma = gpuva ? container_of(gpuva, struct panthor_vma, base) : NULL; + if (vma && vma->base.gem.obj) + bo = to_panthor_bo(vma->base.gem.obj); + + if (bo && (bo->flags & DRM_PANTHOR_BO_ALLOC_ON_FAULT)) { + ret = panthor_vm_map_on_demand_locked(vm, vma, iova - vma->base.va.addr, + size, page_gfp, other_gfp); + } else { + ret = -EFAULT; + } + mutex_unlock(&vm->op_lock); + + return ret; +} + static void panthor_vm_handle_fault_locked(struct panthor_vm *vm) { struct panthor_device *ptdev = vm->ptdev; diff --git a/drivers/gpu/drm/panthor/panthor_mmu.h b/drivers/gpu/drm/panthor/panthor_mmu.h index fc274637114e..d57c86d293bd 100644 --- a/drivers/gpu/drm/panthor/panthor_mmu.h +++ b/drivers/gpu/drm/panthor/panthor_mmu.h @@ -28,6 +28,8 @@ int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo int panthor_vm_unmap_range(struct panthor_vm *vm, u64 va, u64 size); struct panthor_gem_object * panthor_vm_get_bo_for_va(struct panthor_vm *vm, u64 va, u64 *bo_offset); +int panthor_vm_pre_fault_range(struct panthor_vm *vm, u64 iova, u64 size, + gfp_t page_gfp, gfp_t other_gfp); int panthor_vm_active(struct panthor_vm *vm); void panthor_vm_idle(struct panthor_vm *vm); From patchwork Fri Apr 4 09:26:33 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boris Brezillon X-Patchwork-Id: 14038246 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CFC46C369A4 for ; Fri, 4 Apr 2025 09:26:52 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C67F610EBC2; Fri, 4 Apr 2025 09:26:46 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=collabora.com header.i=@collabora.com header.b="nkQin5b2"; dkim-atps=neutral Received: from bali.collaboradmins.com (bali.collaboradmins.com [148.251.105.195]) by gabe.freedesktop.org (Postfix) with ESMTPS id 
2C39C10EB9D; Fri, 4 Apr 2025 09:26:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1743758800; bh=+BQ4HaAKmbN6C1yD9xgCQgW2C8+9uZXZpSkLqiRnWlc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nkQin5b2g1SB15vRv7k5VpKq029b41GJdQ/uIA3zeqV4zRPp4QCDMBO7XSpz7BpUg a1yGgIwmq8GaWKTMWMuaGyi/xCPlWyER2bWwv8SHNQusdT7GsOY2FqNY40fsItCGCH UzyZCVEkQqwD0u4EuDn3ubCWk5Csv/rsKzkPaojydxB0AxAUd/oV0DQbMLn34XGHfs fupCjnze6jSuSFtHgCWC8+oR6Cf22hRo7Ssxq2J83zgkgd2ZxoUxxTjuy8mBqsT+eH 61gu/o6Zr/kr5pQxUEJh9HCJ8KrrzMcHLHBZ6Q6rdYckLxnpEwhQnIz9j5iWcNCsfB 0Gf6KjDAMvfHA== Received: from localhost.localdomain (unknown [IPv6:2a01:e0a:2c:6930:5cf4:84a1:2763:fe0d]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: bbrezillon) by bali.collaboradmins.com (Postfix) with ESMTPSA id 4948617E1062; Fri, 4 Apr 2025 11:26:40 +0200 (CEST) From: Boris Brezillon To: Boris Brezillon , Steven Price , Liviu Dudau , =?utf-8?q?Adri=C3=A1n_Larumbe?= , lima@lists.freedesktop.org, Qiang Yu Cc: David Airlie , Simona Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , dri-devel@lists.freedesktop.org, Dmitry Osipenko , kernel@collabora.com Subject: [PATCH v3 7/8] drm/panthor: Make heap chunk allocation non-blocking Date: Fri, 4 Apr 2025 11:26:33 +0200 Message-ID: <20250404092634.2968115-8-boris.brezillon@collabora.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250404092634.2968115-1-boris.brezillon@collabora.com> References: <20250404092634.2968115-1-boris.brezillon@collabora.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Make heap chunk allocation non-blocking when we are in the growing path. This way, we can fail the job and signal its fence instead of blocking on memory reclaim, which will become problematic once we throw a memory shrinker into the mix. Signed-off-by: Boris Brezillon --- drivers/gpu/drm/panthor/panthor_heap.c | 222 ++++++++++++------------- 1 file changed, 110 insertions(+), 112 deletions(-) diff --git a/drivers/gpu/drm/panthor/panthor_heap.c b/drivers/gpu/drm/panthor/panthor_heap.c index 3bdf61c14264..2017a1950f63 100644 --- a/drivers/gpu/drm/panthor/panthor_heap.c +++ b/drivers/gpu/drm/panthor/panthor_heap.c @@ -3,6 +3,7 @@ #include #include +#include #include @@ -35,25 +36,14 @@ struct panthor_heap_chunk_header { u32 unknown[14]; }; -/** - * struct panthor_heap_chunk - Structure used to keep track of allocated heap chunks. - */ -struct panthor_heap_chunk { - /** @node: Used to insert the heap chunk in panthor_heap::chunks. */ - struct list_head node; - - /** @bo: Buffer object backing the heap chunk. */ - struct panthor_kernel_bo *bo; -}; - /** * struct panthor_heap - Structure used to manage tiler heap contexts. */ struct panthor_heap { - /** @chunks: List containing all heap chunks allocated so far. */ - struct list_head chunks; + /** @bo: Buffer object backing a heap. */ + struct panthor_kernel_bo *bo; - /** @lock: Lock protecting insertion in the chunks list. */ + /** @lock: Lock protecting chunks addition. */ struct mutex lock; /** @chunk_size: Size of each chunk. 
*/ @@ -70,6 +60,9 @@ struct panthor_heap { /** @chunk_count: Number of heap chunks currently allocated. */ u32 chunk_count; + + /** @free_list: List of free chunks. */ + u64 free_list; }; #define MAX_HEAPS_PER_POOL 128 @@ -121,100 +114,120 @@ static void *panthor_get_heap_ctx(struct panthor_heap_pool *pool, int id) panthor_get_heap_ctx_offset(pool, id); } -static void panthor_free_heap_chunk(struct panthor_heap_pool *pool, - struct panthor_heap *heap, - struct panthor_heap_chunk *chunk) -{ - mutex_lock(&heap->lock); - list_del(&chunk->node); - heap->chunk_count--; - mutex_unlock(&heap->lock); - - atomic_sub(heap->chunk_size, &pool->size); - - panthor_kernel_bo_destroy(chunk->bo); - kfree(chunk); -} - static int panthor_alloc_heap_chunk(struct panthor_heap_pool *pool, struct panthor_heap *heap, - bool initial_chunk) + bool initial_chunk, + u64 *chunk_gpu_va) { - struct panthor_heap_chunk *chunk; struct panthor_heap_chunk_header *hdr; + unsigned int npages = heap->chunk_size >> PAGE_SHIFT; + pgoff_t pgoffs = heap->chunk_count * npages; + struct panthor_kernel_bo *kbo = heap->bo; + struct panthor_gem_object *bo = to_panthor_bo(kbo->obj); + struct address_space *mapping = bo->base.base.filp->f_mapping; + pgprot_t prot = PAGE_KERNEL; + gfp_t page_gfp, other_gfp; + bool from_free_list = false; + struct page *page; int ret; - chunk = kmalloc(sizeof(*chunk), GFP_KERNEL); - if (!chunk) - return -ENOMEM; - - chunk->bo = panthor_kernel_bo_create(pool->ptdev, pool->vm, heap->chunk_size, - DRM_PANTHOR_BO_NO_MMAP, - DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC, - PANTHOR_VM_KERNEL_AUTO_VA); - if (IS_ERR(chunk->bo)) { - ret = PTR_ERR(chunk->bo); - goto err_free_chunk; + if (initial_chunk) { + page_gfp = mapping_gfp_mask(mapping); + other_gfp = GFP_KERNEL; + } else { + page_gfp = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM) | + __GFP_NORETRY | __GFP_NOWARN; + other_gfp = __GFP_NORETRY | __GFP_NOWARN; } - ret = panthor_kernel_bo_vmap(chunk->bo); - if (ret) - goto err_destroy_bo; + if (!heap->free_list) { + u64 new_chunk_va = panthor_kernel_bo_gpuva(kbo) + + ((u64)heap->chunk_size * heap->chunk_count); + + ret = panthor_vm_pre_fault_range(pool->vm, new_chunk_va, + heap->chunk_size, + page_gfp, other_gfp); + if (ret) + return ret; + + page = xa_load(&bo->sparse.pages, pgoffs); + if (!page) + return -ENOMEM; + + *chunk_gpu_va = new_chunk_va; + } else { + u64 offset = heap->free_list - panthor_kernel_bo_gpuva(kbo); + + page = xa_load(&bo->sparse.pages, offset >> PAGE_SHIFT); + *chunk_gpu_va = heap->free_list; + from_free_list = true; + } + + if (bo->base.map_wc) + prot = pgprot_writecombine(prot); + + hdr = vmap(&page, 1, VM_MAP, prot); + if (!hdr) + return -ENOMEM; + + if (from_free_list) + heap->free_list = hdr->next & GENMASK_ULL(63, 12); - hdr = chunk->bo->kmap; memset(hdr, 0, sizeof(*hdr)); - if (initial_chunk && !list_empty(&heap->chunks)) { - struct panthor_heap_chunk *prev_chunk; - u64 prev_gpuva; + if (initial_chunk && heap->chunk_count) { + u64 prev_gpuva = panthor_kernel_bo_gpuva(kbo) + + ((u64)heap->chunk_size * (heap->chunk_count - 1)); - prev_chunk = list_first_entry(&heap->chunks, - struct panthor_heap_chunk, - node); - - prev_gpuva = panthor_kernel_bo_gpuva(prev_chunk->bo); hdr->next = (prev_gpuva & GENMASK_ULL(63, 12)) | (heap->chunk_size >> 12); } - panthor_kernel_bo_vunmap(chunk->bo); + vunmap(hdr); - mutex_lock(&heap->lock); - list_add(&chunk->node, &heap->chunks); - heap->chunk_count++; - mutex_unlock(&heap->lock); + if (!from_free_list) + heap->chunk_count++; atomic_add(heap->chunk_size, 
&pool->size); return 0; - -err_destroy_bo: - panthor_kernel_bo_destroy(chunk->bo); - -err_free_chunk: - kfree(chunk); - - return ret; } -static void panthor_free_heap_chunks(struct panthor_heap_pool *pool, - struct panthor_heap *heap) +static void panthor_free_heap_chunk(struct panthor_heap *heap, + u64 chunk_gpu_va) { - struct panthor_heap_chunk *chunk, *tmp; + struct panthor_kernel_bo *kbo = heap->bo; + struct panthor_gem_object *bo = to_panthor_bo(kbo->obj); + u64 offset = chunk_gpu_va - panthor_kernel_bo_gpuva(kbo); + pgoff_t pgoffs = offset >> PAGE_SHIFT; + struct panthor_heap_chunk_header *hdr; + pgprot_t prot = bo->base.map_wc ? pgprot_writecombine(PAGE_KERNEL) : + PAGE_KERNEL; + struct page *page; - list_for_each_entry_safe(chunk, tmp, &heap->chunks, node) - panthor_free_heap_chunk(pool, heap, chunk); + page = xa_load(&bo->sparse.pages, pgoffs); + if (!page) + return; + + hdr = vmap(&page, 1, VM_MAP, prot); + if (!hdr) + return; + + hdr->next = heap->free_list; + heap->free_list = chunk_gpu_va; + vunmap(hdr); } static int panthor_alloc_heap_chunks(struct panthor_heap_pool *pool, struct panthor_heap *heap, - u32 chunk_count) + u32 chunk_count, + u64 *first_chunk_gpu_va) { int ret; u32 i; for (i = 0; i < chunk_count; i++) { - ret = panthor_alloc_heap_chunk(pool, heap, true); + ret = panthor_alloc_heap_chunk(pool, heap, true, first_chunk_gpu_va); if (ret) return ret; } @@ -231,7 +244,7 @@ panthor_heap_destroy_locked(struct panthor_heap_pool *pool, u32 handle) if (!heap) return -EINVAL; - panthor_free_heap_chunks(pool, heap); + panthor_kernel_bo_destroy(heap->bo); mutex_destroy(&heap->lock); kfree(heap); return 0; @@ -278,7 +291,7 @@ int panthor_heap_create(struct panthor_heap_pool *pool, u64 *first_chunk_gpu_va) { struct panthor_heap *heap; - struct panthor_heap_chunk *first_chunk; + struct panthor_kernel_bo *bo; struct panthor_vm *vm; int ret = 0; u32 id; @@ -308,20 +321,27 @@ int panthor_heap_create(struct panthor_heap_pool *pool, } mutex_init(&heap->lock); - INIT_LIST_HEAD(&heap->chunks); heap->chunk_size = chunk_size; heap->max_chunks = max_chunks; heap->target_in_flight = target_in_flight; - ret = panthor_alloc_heap_chunks(pool, heap, initial_chunk_count); + bo = panthor_kernel_bo_create(pool->ptdev, pool->vm, max_chunks * chunk_size, + DRM_PANTHOR_BO_NO_MMAP | DRM_PANTHOR_BO_ALLOC_ON_FAULT, + chunk_size >> PAGE_SHIFT, + DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC, + PANTHOR_VM_KERNEL_AUTO_VA); + if (IS_ERR(bo)) { + ret = PTR_ERR(bo); + goto err_free_heap; + } + + heap->bo = bo; + + ret = panthor_alloc_heap_chunks(pool, heap, initial_chunk_count, + first_chunk_gpu_va); if (ret) goto err_free_heap; - first_chunk = list_first_entry(&heap->chunks, - struct panthor_heap_chunk, - node); - *first_chunk_gpu_va = panthor_kernel_bo_gpuva(first_chunk->bo); - down_write(&pool->lock); /* The pool has been destroyed, we can't create a new heap. 
*/ if (!pool->vm) { @@ -346,7 +366,7 @@ int panthor_heap_create(struct panthor_heap_pool *pool, return id; err_free_heap: - panthor_free_heap_chunks(pool, heap); + panthor_kernel_bo_destroy(heap->bo); mutex_destroy(&heap->lock); kfree(heap); @@ -371,7 +391,6 @@ int panthor_heap_return_chunk(struct panthor_heap_pool *pool, { u64 offset = heap_gpu_va - panthor_kernel_bo_gpuva(pool->gpu_contexts); u32 heap_id = (u32)offset / panthor_heap_ctx_stride(pool->ptdev); - struct panthor_heap_chunk *chunk, *tmp, *removed = NULL; struct panthor_heap *heap; int ret; @@ -385,28 +404,10 @@ int panthor_heap_return_chunk(struct panthor_heap_pool *pool, goto out_unlock; } - chunk_gpu_va &= GENMASK_ULL(63, 12); - mutex_lock(&heap->lock); - list_for_each_entry_safe(chunk, tmp, &heap->chunks, node) { - if (panthor_kernel_bo_gpuva(chunk->bo) == chunk_gpu_va) { - removed = chunk; - list_del(&chunk->node); - heap->chunk_count--; - atomic_sub(heap->chunk_size, &pool->size); - break; - } - } + panthor_free_heap_chunk(heap, chunk_gpu_va & GENMASK_ULL(63, 12)); mutex_unlock(&heap->lock); - if (removed) { - panthor_kernel_bo_destroy(chunk->bo); - kfree(chunk); - ret = 0; - } else { - ret = -EINVAL; - } - out_unlock: up_read(&pool->lock); return ret; @@ -435,7 +436,6 @@ int panthor_heap_grow(struct panthor_heap_pool *pool, { u64 offset = heap_gpu_va - panthor_kernel_bo_gpuva(pool->gpu_contexts); u32 heap_id = (u32)offset / panthor_heap_ctx_stride(pool->ptdev); - struct panthor_heap_chunk *chunk; struct panthor_heap *heap; int ret; @@ -471,15 +471,13 @@ int panthor_heap_grow(struct panthor_heap_pool *pool, * further jobs in this queue fail immediately instead of having to * wait for the job timeout. */ - ret = panthor_alloc_heap_chunk(pool, heap, false); + mutex_lock(&heap->lock); + ret = panthor_alloc_heap_chunk(pool, heap, false, new_chunk_gpu_va); + mutex_unlock(&heap->lock); if (ret) goto out_unlock; - chunk = list_first_entry(&heap->chunks, - struct panthor_heap_chunk, - node); - *new_chunk_gpu_va = (panthor_kernel_bo_gpuva(chunk->bo) & GENMASK_ULL(63, 12)) | - (heap->chunk_size >> 12); + *new_chunk_gpu_va |= (heap->chunk_size >> 12); ret = 0; out_unlock: @@ -553,7 +551,7 @@ panthor_heap_pool_create(struct panthor_device *ptdev, struct panthor_vm *vm) kref_init(&pool->refcount); pool->gpu_contexts = panthor_kernel_bo_create(ptdev, vm, bosize, - DRM_PANTHOR_BO_NO_MMAP, + DRM_PANTHOR_BO_NO_MMAP, 0, DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC, PANTHOR_VM_KERNEL_AUTO_VA); if (IS_ERR(pool->gpu_contexts)) { From patchwork Fri Apr 4 09:26:34 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boris Brezillon X-Patchwork-Id: 14038245 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D8CBDC36010 for ; Fri, 4 Apr 2025 09:26:51 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B13D310EBB8; Fri, 4 Apr 2025 09:26:46 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=collabora.com header.i=@collabora.com header.b="iKaHxkHU"; dkim-atps=neutral Received: from bali.collaboradmins.com (bali.collaboradmins.com [148.251.105.195]) by gabe.freedesktop.org (Postfix) 
with ESMTPS id CE4A610EB9D; Fri, 4 Apr 2025 09:26:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1743758801; bh=dRE5vKS4gGWWcaFLc9LqaNs60IxI7jaIn250QK+cMBo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iKaHxkHUgP9cfIIBb3K7NUwrdKxVyi+O3jSdI/Bz96AyTYWo9eYfitTTjdXJj/AAv opf7zs+t0ebsbEe2GPlXviS7XMLFwxoySGe3+OIiKd7lI7JEvLNCZbLlNsj3G7UKYW Tcct32ME2j3kzgdioVfXBEV4vD2VYQj1FUkzIZcmpiU8eoA6xJj8KXMoQGd7VWHVUI v8zh2PZb5mqpEGd3BcLFjnEp0ixOvEu28uEt8C9TopzHSP+kgh7nl9I+qSCb1/P+N+ oayZGX27lZJkWKIPyJh+yEwIjvAhj1W+miiJZYjyxpZHMvGFn5H6vpkbSw2UY1dInK zdRbeLGI6SE6g== Received: from localhost.localdomain (unknown [IPv6:2a01:e0a:2c:6930:5cf4:84a1:2763:fe0d]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: bbrezillon) by bali.collaboradmins.com (Postfix) with ESMTPSA id EEDE817E07FD; Fri, 4 Apr 2025 11:26:40 +0200 (CEST) From: Boris Brezillon To: Boris Brezillon , Steven Price , Liviu Dudau , =?utf-8?q?Adri=C3=A1n_Larumbe?= , lima@lists.freedesktop.org, Qiang Yu Cc: David Airlie , Simona Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , dri-devel@lists.freedesktop.org, Dmitry Osipenko , kernel@collabora.com Subject: [PATCH v3 8/8] drm/lima: Use drm_gem_shmem_sparse_backing for heap buffers Date: Fri, 4 Apr 2025 11:26:34 +0200 Message-ID: <20250404092634.2968115-9-boris.brezillon@collabora.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250404092634.2968115-1-boris.brezillon@collabora.com> References: <20250404092634.2968115-1-boris.brezillon@collabora.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Now that with have generic support for sparse shmem objects, use it to simplify the code. This has only been compile-tested, and we might want to consider using NOWAIT gfp flags allocations happening in the fault handler path, but I don't know the driver well enough to take that decision. Signed-off-by: Boris Brezillon --- drivers/gpu/drm/lima/lima_gem.c | 89 ++++++++++----------------------- drivers/gpu/drm/lima/lima_gem.h | 1 + drivers/gpu/drm/lima/lima_vm.c | 48 +++++++++++++++--- 3 files changed, 67 insertions(+), 71 deletions(-) diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 5deec673c11e..f9435d412cdc 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -20,89 +20,35 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm) { - struct page **pages; struct address_space *mapping = bo->base.base.filp->f_mapping; - struct device *dev = bo->base.base.dev->dev; size_t old_size = bo->heap_size; size_t new_size = bo->heap_size ? 
bo->heap_size * 2 : (lima_heap_init_nr_pages << PAGE_SHIFT); - struct sg_table sgt; - int i, ret; + int ret; if (bo->heap_size >= bo->base.base.size) return -ENOSPC; new_size = min(new_size, bo->base.base.size); - dma_resv_lock(bo->base.base.resv, NULL); - - if (bo->base.pages) { - pages = bo->base.pages; - } else { - pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT, - sizeof(*pages), GFP_KERNEL | __GFP_ZERO); - if (!pages) { - dma_resv_unlock(bo->base.base.resv); - return -ENOMEM; - } - - bo->base.pages = pages; - refcount_set(&bo->base.pages_use_count, 1); - - mapping_set_unevictable(mapping); - } - - for (i = old_size >> PAGE_SHIFT; i < new_size >> PAGE_SHIFT; i++) { - struct page *page = shmem_read_mapping_page(mapping, i); - - if (IS_ERR(page)) { - dma_resv_unlock(bo->base.base.resv); - return PTR_ERR(page); - } - pages[i] = page; - } - - dma_resv_unlock(bo->base.base.resv); - - ret = sg_alloc_table_from_pages(&sgt, pages, i, 0, - new_size, GFP_KERNEL); + /* FIXME: Should we do a non-blocking allocation if we're called + * from the fault handler (vm != NULL)? + */ + ret = drm_gem_shmem_sparse_populate_range(&bo->base, old_size >> PAGE_SHIFT, + (new_size - old_size) >> PAGE_SHIFT, + mapping_gfp_mask(mapping), + GFP_KERNEL); if (ret) return ret; - if (bo->base.sgt) { - dma_unmap_sgtable(dev, bo->base.sgt, DMA_BIDIRECTIONAL, 0); - sg_free_table(bo->base.sgt); - } else { - bo->base.sgt = kmalloc(sizeof(*bo->base.sgt), GFP_KERNEL); - if (!bo->base.sgt) { - ret = -ENOMEM; - goto err_out0; - } - } - - ret = dma_map_sgtable(dev, &sgt, DMA_BIDIRECTIONAL, 0); - if (ret) - goto err_out1; - - *bo->base.sgt = sgt; - if (vm) { ret = lima_vm_map_bo(vm, bo, old_size >> PAGE_SHIFT); if (ret) - goto err_out2; + return ret; } bo->heap_size = new_size; return 0; - -err_out2: - dma_unmap_sgtable(dev, &sgt, DMA_BIDIRECTIONAL, 0); -err_out1: - kfree(bo->base.sgt); - bo->base.sgt = NULL; -err_out0: - sg_free_table(&sgt); - return ret; } int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file, @@ -128,7 +74,19 @@ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file, mapping_set_gfp_mask(obj->filp->f_mapping, mask); if (is_heap) { + /* Granularity is the closest power-of-two less than + * lima_heap_init_nr_pages. + */ + u32 granularity = lima_heap_init_nr_pages ? 
+ 1 << (fls(lima_heap_init_nr_pages) - 1) : 8; + bo = to_lima_bo(obj); + err = drm_gem_shmem_sparse_init(shmem, &bo->sparse, granularity); + if (err) + goto out; + + drm_gem_shmem_sparse_pin(shmem); + err = lima_heap_alloc(bo, NULL); if (err) goto out; @@ -157,6 +115,11 @@ static void lima_gem_free_object(struct drm_gem_object *obj) if (!list_empty(&bo->va)) dev_err(obj->dev->dev, "lima gem free bo still has va\n"); + if (bo->base.sparse) { + drm_gem_shmem_sparse_unpin(&bo->base); + drm_gem_shmem_sparse_finish(&bo->base); + } + drm_gem_shmem_free(&bo->base); } diff --git a/drivers/gpu/drm/lima/lima_gem.h b/drivers/gpu/drm/lima/lima_gem.h index ccea06142f4b..9326b408306a 100644 --- a/drivers/gpu/drm/lima/lima_gem.h +++ b/drivers/gpu/drm/lima/lima_gem.h @@ -11,6 +11,7 @@ struct lima_vm; struct lima_bo { struct drm_gem_shmem_object base; + struct drm_gem_shmem_sparse_backing sparse; struct mutex lock; struct list_head va; diff --git a/drivers/gpu/drm/lima/lima_vm.c b/drivers/gpu/drm/lima/lima_vm.c index 2b2739adc7f5..6e73b6a4881a 100644 --- a/drivers/gpu/drm/lima/lima_vm.c +++ b/drivers/gpu/drm/lima/lima_vm.c @@ -280,10 +280,32 @@ void lima_vm_print(struct lima_vm *vm) } } +static int lima_vm_map_sgt(struct lima_vm *vm, struct sg_table *sgt, + u32 base, int pageoff) +{ + struct sg_dma_page_iter sg_iter; + int err, offset = 0; + + for_each_sgtable_dma_page(sgt, &sg_iter, pageoff) { + err = lima_vm_map_page(vm, sg_page_iter_dma_address(&sg_iter), + base + offset); + if (err) + goto err_unmap; + + offset += PAGE_SIZE; + } + + return 0; + +err_unmap: + if (offset) + lima_vm_unmap_range(vm, base, base + offset - 1); + return err; +} + int lima_vm_map_bo(struct lima_vm *vm, struct lima_bo *bo, int pageoff) { struct lima_bo_va *bo_va; - struct sg_dma_page_iter sg_iter; int offset = 0, err; u32 base; @@ -296,15 +318,24 @@ int lima_vm_map_bo(struct lima_vm *vm, struct lima_bo *bo, int pageoff) } mutex_lock(&vm->lock); - base = bo_va->node.start + (pageoff << PAGE_SHIFT); - for_each_sgtable_dma_page(bo->base.sgt, &sg_iter, pageoff) { - err = lima_vm_map_page(vm, sg_page_iter_dma_address(&sg_iter), - base + offset); - if (err) - goto err_out1; - offset += PAGE_SIZE; + if (bo->base.sparse) { + unsigned int sgt_remaining_pages; + pgoff_t sgt_pgoffset; + struct sg_table *sgt; + + sgt = drm_gem_shmem_sparse_get_sgt(&bo->base, pageoff, + &sgt_pgoffset, + &sgt_remaining_pages); + if (IS_ERR(sgt)) { + err = PTR_ERR(sgt); + goto err_out1; + } + + err = lima_vm_map_sgt(vm, sgt, base, sgt_pgoffset); + } else { + err = lima_vm_map_sgt(vm, bo->base.sgt, base, pageoff); } mutex_unlock(&vm->lock); @@ -315,6 +346,7 @@ int lima_vm_map_bo(struct lima_vm *vm, struct lima_bo *bo, int pageoff) err_out1: if (offset) lima_vm_unmap_range(vm, base, base + offset - 1); + mutex_unlock(&vm->lock); err_out0: mutex_unlock(&bo->lock);
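Regarding the FIXME in lima_heap_alloc() above: one possible answer is to mirror panthor's policy and pick the gfp flags based on whether the grow request comes from the GPU fault handler (vm != NULL). The helper below is only a sketch to illustrate that option; lima_heap_pick_gfp() is a made-up name and is not part of the patch.

/* Needs <linux/pagemap.h> for mapping_gfp_constraint()/mapping_gfp_mask(). */
static void lima_heap_pick_gfp(struct lima_bo *bo, struct lima_vm *vm,
			       gfp_t *page_gfp, gfp_t *other_gfp)
{
	struct address_space *mapping = bo->base.base.filp->f_mapping;

	if (vm) {
		/* Growing from the GPU fault handler: never enter reclaim,
		 * so the grow fails fast instead of blocking.
		 */
		*page_gfp = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM) |
			    __GFP_NORETRY | __GFP_NOWARN;
		*other_gfp = __GFP_NORETRY | __GFP_NOWARN;
	} else {
		/* Initial allocation at BO creation time: blocking is fine. */
		*page_gfp = mapping_gfp_mask(mapping);
		*other_gfp = GFP_KERNEL;
	}
}

The two masks would then replace the mapping_gfp_mask(mapping)/GFP_KERNEL pair currently passed to drm_gem_shmem_sparse_populate_range(); whether lima can tolerate a failed grow the way panthor's group-kill path does is the open question the commit message already raises.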