From patchwork Mon Dec 24 08:19:45 2018
X-Patchwork-Submitter: "Xiaqing (A)"
X-Patchwork-Id: 10742183
From: Qing Xia
Subject: [PATCH] staging: android: ion: move map_kernel to ion_dma_buf_kmap
Date: Mon, 24 Dec 2018 16:19:45 +0800
Message-ID: <1545639585-24785-1-git-send-email-saberlily.xia@hisilicon.com>
Cc: devel@driverdev.osuosl.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, kongfei@hisilicon.com,
    linaro-mm-sig@lists.linaro.org, daniel.liuyi@hisilicon.com,
    yudongbin@hisilicon.com

According to Google's user guide, when userspace needs to clean an ION
buffer's cache it should call ioctl(fd, DMA_BUF_IOCTL_SYNC, sync). We found
that ion_dma_buf_begin_cpu_access()/ion_dma_buf_end_cpu_access() also perform
the ION buffer's map_kernel, which is not necessary for that path. Worse, if
userspace only needs to clean the cache and therefore reaches
ion_dma_buf_end_cpu_access() through the dma-buf ioctl, the ION buffer's
kmap_cnt drops to the wrong value -1, and drivers can no longer obtain the
correct kernel vaddr via dma_buf_kmap().
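For context, a minimal userspace sketch of the cache-maintenance flow the
changelog refers to (illustrative only, not part of this patch; the fd is
assumed to be a dma-buf fd exported for an ION buffer):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Bracket CPU access to a dma-buf with DMA_BUF_IOCTL_SYNC so the kernel
 * can do the required cache maintenance on either side.
 */
static int cpu_access(int dmabuf_fd)
{
	struct dma_buf_sync sync = { 0 };
	int ret;

	/* begin CPU access: prepare caches for CPU reads/writes */
	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW;
	ret = ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
	if (ret)
		return ret;

	/* ... CPU touches the buffer through an mmap() of dmabuf_fd ... */

	/* end CPU access: clean the CPU caches before device access */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
	return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}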
Signed-off-by: Qing Xia
---
 drivers/staging/android/ion/ion.c | 46 ++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 25 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 9907332..f7e2812 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -303,45 +303,47 @@ static void ion_dma_buf_release(struct dma_buf *dmabuf)
 static void *ion_dma_buf_kmap(struct dma_buf *dmabuf, unsigned long offset)
 {
 	struct ion_buffer *buffer = dmabuf->priv;
+	void *vaddr;
 
-	return buffer->vaddr + offset * PAGE_SIZE;
+	if (buffer->heap->ops->map_kernel) {
+		mutex_lock(&buffer->lock);
+		vaddr = ion_buffer_kmap_get(buffer);
+		mutex_unlock(&buffer->lock);
+		if (IS_ERR(vaddr))
+			return vaddr;
+
+		return vaddr + offset * PAGE_SIZE;
+	}
+
+	return NULL;
 }
 
 static void ion_dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long offset,
 			       void *ptr)
 {
+	struct ion_buffer *buffer = dmabuf->priv;
+
+	if (buffer->heap->ops->map_kernel) {
+		mutex_lock(&buffer->lock);
+		ion_buffer_kmap_put(buffer);
+		mutex_unlock(&buffer->lock);
+	}
 }
 
 static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 					enum dma_data_direction direction)
 {
 	struct ion_buffer *buffer = dmabuf->priv;
-	void *vaddr;
 	struct ion_dma_buf_attachment *a;
-	int ret = 0;
-
-	/*
-	 * TODO: Move this elsewhere because we don't always need a vaddr
-	 */
-	if (buffer->heap->ops->map_kernel) {
-		mutex_lock(&buffer->lock);
-		vaddr = ion_buffer_kmap_get(buffer);
-		if (IS_ERR(vaddr)) {
-			ret = PTR_ERR(vaddr);
-			goto unlock;
-		}
-		mutex_unlock(&buffer->lock);
-	}
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
 		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
 				    direction);
 	}
-
-unlock:
 	mutex_unlock(&buffer->lock);
-	return ret;
+
+	return 0;
 }
 
 static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
@@ -350,12 +352,6 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 	struct ion_buffer *buffer = dmabuf->priv;
 	struct ion_dma_buf_attachment *a;
 
-	if (buffer->heap->ops->map_kernel) {
-		mutex_lock(&buffer->lock);
-		ion_buffer_kmap_put(buffer);
-		mutex_unlock(&buffer->lock);
-	}
-
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
 		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
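
For reviewers, a hedged sketch of the in-kernel consumer path the changelog
mentions (the consumer function and its error handling are illustrative
assumptions, not code from this series; dma_buf_kmap()/dma_buf_kunmap() are
the dma-buf kernel API of this kernel generation). With this patch the kernel
mapping reference is taken and dropped inside the kmap/kunmap pair rather
than in begin/end_cpu_access:

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>

/* Hypothetical in-kernel consumer of an ION dma-buf: gets a kernel vaddr
 * for the first page, uses it, and releases the mapping again.
 */
static int consumer_touch_first_page(struct dma_buf *dmabuf)
{
	void *vaddr;
	int ret;

	ret = dma_buf_begin_cpu_access(dmabuf, DMA_BIDIRECTIONAL);
	if (ret)
		return ret;

	/* With this patch, the refcounted kernel mapping is created here
	 * (via ion_buffer_kmap_get) instead of in begin_cpu_access.
	 */
	vaddr = dma_buf_kmap(dmabuf, 0);
	if (IS_ERR_OR_NULL(vaddr)) {
		ret = vaddr ? PTR_ERR(vaddr) : -ENOMEM;
		goto out;
	}

	/* ... use vaddr ... */

	dma_buf_kunmap(dmabuf, 0, vaddr);
out:
	dma_buf_end_cpu_access(dmabuf, DMA_BIDIRECTIONAL);
	return ret;
}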