From patchwork Mon Oct 14 15:16:16 2019
From: Steven Price <steven.price@arm.com>
To: Daniel Vetter, David Airlie, Rob Herring, Tomeu Vizoso
Cc: linux-kernel@vger.kernel.org, Alyssa Rosenzweig,
    dri-devel@lists.freedesktop.org, Steven Price
Subject: [PATCH] drm/panfrost: DMA map all pages shared with the GPU
Date: Mon, 14 Oct 2019 16:16:16 +0100
Message-Id: <20191014151616.14099-1-steven.price@arm.com>

Pages shared with the GPU are (often) not cache coherent with the CPU, so
cache maintenance is required to flush the CPU's caches. This was already
done when mapping pages on fault, but wasn't previously done when mapping
a freshly allocated page.

Fix this by moving the call to dma_map_sg() into mmu_map_sg(), ensuring
that it is always called when pages are mapped onto the GPU. Since
mmu_map_sg() can now fail, the code must also handle an error return.

Not performing this cache maintenance can cause errors in the GPU output
(the CPU's caches are flushed later and overwrite the GPU's output). In
theory it also allows the GPU (and by extension user space) to observe
the memory contents prior to sanitization.
Fixes: f3ba91228e8e ("drm/panfrost: Add initial panfrost driver")
Signed-off-by: Steven Price <steven.price@arm.com>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index bdd990568476..0495e48c238d 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -248,6 +248,9 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
 	struct io_pgtable_ops *ops = mmu->pgtbl_ops;
 	u64 start_iova = iova;
 
+	if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
+		return -EINVAL;
+
 	for_each_sg(sgt->sgl, sgl, sgt->nents, count) {
 		unsigned long paddr = sg_dma_address(sgl);
 		size_t len = sg_dma_len(sgl);
@@ -275,6 +278,7 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
 	struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
 	struct sg_table *sgt;
 	int prot = IOMMU_READ | IOMMU_WRITE;
+	int ret;
 
 	if (WARN_ON(bo->is_mapped))
 		return 0;
@@ -286,10 +290,12 @@ int panfrost_mmu_map(struct panfrost_gem_object *bo)
 	if (WARN_ON(IS_ERR(sgt)))
 		return PTR_ERR(sgt);
 
-	mmu_map_sg(pfdev, bo->mmu, bo->node.start << PAGE_SHIFT, prot, sgt);
-	bo->is_mapped = true;
+	ret = mmu_map_sg(pfdev, bo->mmu, bo->node.start << PAGE_SHIFT,
+			 prot, sgt);
+	if (ret == 0)
+		bo->is_mapped = true;
 
-	return 0;
+	return ret;
 }
 
 void panfrost_mmu_unmap(struct panfrost_gem_object *bo)
@@ -503,12 +509,10 @@ int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, u64 addr)
 	if (ret)
 		goto err_pages;
 
-	if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL)) {
-		ret = -EINVAL;
+	ret = mmu_map_sg(pfdev, bo->mmu, addr,
+			 IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
+	if (ret)
 		goto err_map;
-	}
-
-	mmu_map_sg(pfdev, bo->mmu, addr, IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
 
 	bo->is_mapped = true;
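
For background, here is a minimal sketch of the streaming-DMA ownership
pattern that this patch centralizes in mmu_map_sg(). On non-coherent
systems, dma_map_sg() performs the CPU cache maintenance needed before
the device touches the pages, and dma_unmap_sg() hands ownership back to
the CPU. The device_map_pages()/device_unmap_pages() helpers below are
hypothetical names used for illustration only; they are not panfrost
functions:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical helper: hand a scatterlist of pages over to a device. */
static int device_map_pages(struct device *dev, struct sg_table *sgt)
{
	/*
	 * dma_map_sg() cleans/flushes the CPU caches covering these pages
	 * as required and returns the number of DMA segments mapped, or 0
	 * on failure -- the same check this patch adds to mmu_map_sg().
	 */
	if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
		return -EINVAL;

	/*
	 * The device's page tables would be programmed here from the
	 * sg_dma_address()/sg_dma_len() of each mapped segment.
	 */
	return 0;
}

/* Hypothetical helper: take the pages back from the device. */
static void device_unmap_pages(struct device *dev, struct sg_table *sgt)
{
	/*
	 * dma_unmap_sg() returns ownership to the CPU, invalidating stale
	 * CPU cache lines so later CPU reads see the device's writes.
	 */
	dma_unmap_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
}

Centralizing the dma_map_sg() call in the one function that writes the
GPU's page tables means every mapping path gets this cache maintenance,
rather than relying on each caller to remember it.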