From patchwork Fri Jan 18 18:37:45 2019
From: Liam Mark
X-Patchwork-Id: 10772461
To: labbott@redhat.com, sumit.semwal@linaro.org
Cc: devel@driverdev.osuosl.org, tkjos@android.com, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, afd@ti.com, linaro-mm-sig@lists.linaro.org,
 arve@android.com, Liam Mark, joel@joelfernandes.org, maco@android.com,
 christian@brauner.io
Subject: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory
Date: Fri, 18 Jan 2019 10:37:45 -0800
Message-Id: <1547836667-13695-3-git-send-email-lmark@codeaurora.org>
In-Reply-To: <1547836667-13695-1-git-send-email-lmark@codeaurora.org>
References: <1547836667-13695-1-git-send-email-lmark@codeaurora.org>

The ION begin_cpu_access and end_cpu_access functions use the
dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
maintenance.

Currently it is possible to apply cache maintenance, via the
begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
dma mapped.

The dma sync sg APIs should not be called on sg lists which have not been
dma mapped, as this can result in cache maintenance being applied to the
wrong address. If an sg list has not been dma mapped then its dma_address
field has not been populated, and some dma ops, such as swiotlb_dma_ops,
use the dma_address field to calculate the address at which to apply
cache maintenance.

Also, CMOs should not be applied to a buffer which is not dma mapped, as
the memory should already be coherent for access from the CPU. Any CMOs
required for device access are taken care of in the
dma_buf_map_attachment and dma_buf_unmap_attachment calls. So it only
makes sense for begin_cpu_access and end_cpu_access to apply CMOs if the
buffer is dma mapped.

Fix the ION begin_cpu_access and end_cpu_access functions to only apply
cache maintenance to buffers which are dma mapped.

Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
Signed-off-by: Liam Mark
Reviewed-by: Brian Starkey
Reviewed-by: Andrew F. Davis
Tested-by: Ørjan Eide
---
 drivers/staging/android/ion/ion.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 6f5afab7c1a1..1fe633a7fdba 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
 	struct device *dev;
 	struct sg_table *table;
 	struct list_head list;
+	bool dma_mapped;
 };
 
 static int ion_dma_buf_attach(struct dma_buf *dmabuf,
@@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
 
 	a->table = table;
 	a->dev = attachment->dev;
+	a->dma_mapped = false;
 	INIT_LIST_HEAD(&a->list);
 
 	attachment->priv = a;
@@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 {
 	struct ion_dma_buf_attachment *a = attachment->priv;
 	struct sg_table *table;
+	struct ion_buffer *buffer = attachment->dmabuf->priv;
 
 	table = a->table;
 
+	mutex_lock(&buffer->lock);
 	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
-			direction))
+			direction)) {
+		mutex_unlock(&buffer->lock);
 		return ERR_PTR(-ENOMEM);
+	}
 
+	a->dma_mapped = true;
+	mutex_unlock(&buffer->lock);
 	return table;
 }
@@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 			      struct sg_table *table,
 			      enum dma_data_direction direction)
 {
+	struct ion_dma_buf_attachment *a = attachment->priv;
+	struct ion_buffer *buffer = attachment->dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
 	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+	a->dma_mapped = false;
+	mutex_unlock(&buffer->lock);
 }
 
 static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
@@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
-				    direction);
+		if (a->dma_mapped)
+			dma_sync_sg_for_cpu(a->dev, a->table->sgl,
+					    a->table->nents, direction);
 	}
 
 unlock:
@@ -369,8 +384,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
-				       direction);
+		if (a->dma_mapped)
+			dma_sync_sg_for_device(a->dev, a->table->sgl,
+					       a->table->nents, direction);
 	}
 	mutex_unlock(&buffer->lock);
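
For readers less familiar with the dma-buf flow, here is a rough, hypothetical
importer sketch (not part of this patch) showing the two situations the new
dma_mapped flag separates: a CPU-access window taken before any device mapping
exists, and one taken while the attachment is dma mapped. The function name
example_import, the device pointer, and the chosen DMA directions are
assumptions made purely for illustration.

#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static int example_import(struct device *dev, struct dma_buf *dmabuf)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	int ret;

	/* Attaching adds this device to the ION buffer's attachment list. */
	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach))
		return PTR_ERR(attach);

	/*
	 * CPU access before dma_buf_map_attachment(): the attachment's sg
	 * list has no valid dma_address yet, so with this patch ION skips
	 * dma_sync_sg_for_cpu()/dma_sync_sg_for_device() for this attachment
	 * (a->dma_mapped is false). Previously the sync could have been
	 * applied to an unpopulated dma_address.
	 */
	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret)
		goto detach;
	/* ... CPU reads/writes via kmap/vmap/mmap ... */
	dma_buf_end_cpu_access(dmabuf, DMA_TO_DEVICE);

	/*
	 * Device mapping: ion_map_dma_buf() calls dma_map_sg(), which also
	 * handles any cache maintenance the device needs, and sets
	 * a->dma_mapped = true under buffer->lock.
	 */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		ret = PTR_ERR(sgt);
		goto detach;
	}

	/*
	 * CPU access while mapped: ION now applies dma_sync_sg_for_cpu()
	 * and dma_sync_sg_for_device() to this attachment, because
	 * a->dma_mapped is true.
	 */
	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (!ret)
		dma_buf_end_cpu_access(dmabuf, DMA_TO_DEVICE);

	/* Unmapping clears a->dma_mapped again. */
	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
detach:
	dma_buf_detach(dmabuf, attach);
	return ret;
}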