From patchwork Mon Aug 22 22:01:27 2022
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 12951419
From: Robin Murphy
To: robh@kernel.org, tomeu.vizoso@collabora.com
Cc: linux-arm-kernel@lists.infradead.org, alyssa.rosenzweig@collabora.com,
    dri-devel@lists.freedesktop.org, steven.price@arm.com
Subject: [PATCH] drm/panfrost: Update io-pgtable API
Date: Mon, 22 Aug 2022 23:01:27 +0100

Convert to io-pgtable's bulk {map,unmap}_pages() APIs, to help the old
single-page interfaces eventually go away. Unmapping heap BOs still wants
to be done a page at a time, but everything else can get the full benefit
of the more efficient interface.
Signed-off-by: Robin Murphy
Reviewed-by: Steven Price
Tested-by: Dmitry Osipenko
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 40 +++++++++++++++----------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index b285a8001b1d..e246d914e7f6 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -248,11 +248,15 @@ void panfrost_mmu_reset(struct panfrost_device *pfdev)
 	mmu_write(pfdev, MMU_INT_MASK, ~0);
 }
 
-static size_t get_pgsize(u64 addr, size_t size)
+static size_t get_pgsize(u64 addr, size_t size, size_t *count)
 {
-	if (addr & (SZ_2M - 1) || size < SZ_2M)
-		return SZ_4K;
+	size_t blk_offset = -addr % SZ_2M;
 
+	if (blk_offset || size < SZ_2M) {
+		*count = min_not_zero(blk_offset, size) / SZ_4K;
+		return SZ_4K;
+	}
+	*count = size / SZ_2M;
 	return SZ_2M;
 }
 
@@ -287,12 +291,16 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
 		dev_dbg(pfdev->dev, "map: as=%d, iova=%llx, paddr=%lx, len=%zx",
 			mmu->as, iova, paddr, len);
 
 		while (len) {
-			size_t pgsize = get_pgsize(iova | paddr, len);
+			size_t pgcount, mapped = 0;
+			size_t pgsize = get_pgsize(iova | paddr, len, &pgcount);
 
-			ops->map(ops, iova, paddr, pgsize, prot, GFP_KERNEL);
-			iova += pgsize;
-			paddr += pgsize;
-			len -= pgsize;
+			ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot,
+				       GFP_KERNEL, &mapped);
+			/* Don't get stuck if things have gone wrong */
+			mapped = max(mapped, pgsize);
+			iova += mapped;
+			paddr += mapped;
+			len -= mapped;
 		}
 	}
 
@@ -344,15 +352,17 @@ void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping)
 		mapping->mmu->as, iova, len);
 
 	while (unmapped_len < len) {
-		size_t unmapped_page;
-		size_t pgsize = get_pgsize(iova, len - unmapped_len);
+		size_t unmapped_page, pgcount;
+		size_t pgsize = get_pgsize(iova, len - unmapped_len, &pgcount);
 
-		if (ops->iova_to_phys(ops, iova)) {
-			unmapped_page = ops->unmap(ops, iova, pgsize, NULL);
-			WARN_ON(unmapped_page != pgsize);
+		if (bo->is_heap)
+			pgcount = 1;
+		if (!bo->is_heap || ops->iova_to_phys(ops, iova)) {
+			unmapped_page = ops->unmap_pages(ops, iova, pgsize, pgcount, NULL);
+			WARN_ON(unmapped_page != pgsize * pgcount);
 		}
-		iova += pgsize;
-		unmapped_len += pgsize;
+		iova += pgsize * pgcount;
+		unmapped_len += pgsize * pgcount;
 	}
 
 	panfrost_mmu_flush_range(pfdev, mapping->mmu,
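
For anyone unfamiliar with the bulk interface, the io_pgtable_ops callbacks
used above look roughly like this. This is only a sketch based on
include/linux/io-pgtable.h around this kernel version, with unrelated members
omitted; the header in your tree is the authoritative definition:

	struct io_pgtable_ops {
		/* Map @pgcount pages of size @pgsize; bytes mapped are reported via @mapped */
		int (*map_pages)(struct io_pgtable_ops *ops, unsigned long iova,
				 phys_addr_t paddr, size_t pgsize, size_t pgcount,
				 int prot, gfp_t gfp, size_t *mapped);
		/* Unmap up to @pgcount pages of size @pgsize; returns bytes unmapped */
		size_t (*unmap_pages)(struct io_pgtable_ops *ops, unsigned long iova,
				      size_t pgsize, size_t pgcount,
				      struct iommu_iotlb_gather *gather);
		phys_addr_t (*iova_to_phys)(struct io_pgtable_ops *ops,
					    unsigned long iova);
		/* legacy single-page map()/unmap() and other members omitted */
	};

This is why get_pgsize() grows a *count output: each call can now hand the
page-table code a whole run of same-sized pages instead of one page at a
time. As an illustration, mapping 4MiB that starts 8KiB into a 2MiB block
takes three map_pages() calls (510 4KiB pages up to the 2MiB boundary, one
2MiB block, then the remaining two 4KiB pages) where the old loop issued 513
map() calls.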