From patchwork Thu Dec 15 00:07:43 2016
From: Laura Abbott <labbott@redhat.com>
To: Sumit Semwal, Riley Andrews, arve@android.com
Cc: devel@driverdev.osuosl.org, romlem@google.com, Greg Kroah-Hartman,
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, Bryan Huntsman, Laura Abbott,
    pratikp@codeaurora.org, Brian Starkey,
    linux-arm-kernel@lists.infradead.org, linux-media@vger.kernel.org
Subject: [RFC PATCH 4/4] staging: android: ion: Call dma_map_sg for syncing and mapping
Date: Wed, 14 Dec 2016 16:07:43 -0800
Message-Id: <1481760463-3515-5-git-send-email-labbott@redhat.com>
In-Reply-To: <1481760463-3515-1-git-send-email-labbott@redhat.com>
References: <1481760463-3515-1-git-send-email-labbott@redhat.com>
Technically, calling dma_buf_map_attachment should return a buffer that is
properly dma_mapped. Add calls to dma_map_sg to begin_cpu_access to ensure
this happens. As a side effect, this lets Ion buffers take advantage of the
dma_buf sync ioctls.

Not-signed-off-by: Laura Abbott
---
 drivers/staging/android/ion/ion.c | 61 ++++++++++++++++++++++++++++++---------
 1 file changed, 47 insertions(+), 14 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 86dba07..5177d79 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -776,6 +776,11 @@ static struct sg_table *dup_sg_table(struct sg_table *table)
 	return new_table;
 }
+static void free_duped_table(struct sg_table *table)
+{
+	sg_free_table(table);
+	kfree(table);
+}
 
 static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 					enum dma_data_direction direction)
 {
@@ -784,15 +789,29 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 	struct ion_buffer *buffer = dmabuf->priv;
 	struct sg_table *table;
+	int ret;
 
-	return dup_sg_table(buffer->sg_table);
+	/*
+	 * TODO: Need to sync wrt CPU or device completely owning?
+	 */
+	table = dup_sg_table(buffer->sg_table);
+
+	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
+			direction)) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	return table;
+
+err:
+	free_duped_table(table);
+	return ERR_PTR(ret);
 }
 
 static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 			      struct sg_table *table,
 			      enum dma_data_direction direction)
 {
-	sg_free_table(table);
-	kfree(table);
+	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+	free_duped_table(table);
 }
 
 struct ion_vma_list {
@@ -889,16 +908,24 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 	struct ion_buffer *buffer = dmabuf->priv;
 	void *vaddr;
 
-	if (!buffer->heap->ops->map_kernel) {
-		pr_err("%s: map kernel is not implemented by this heap.\n",
-		       __func__);
-		return -ENODEV;
+	/*
+	 * TODO: Move this elsewhere because we don't always need a vaddr
+	 */
+	if (buffer->heap->ops->map_kernel) {
+		mutex_lock(&buffer->lock);
+		vaddr = ion_buffer_kmap_get(buffer);
+		mutex_unlock(&buffer->lock);
 	}
 
-	mutex_lock(&buffer->lock);
-	vaddr = ion_buffer_kmap_get(buffer);
-	mutex_unlock(&buffer->lock);
-	return PTR_ERR_OR_ZERO(vaddr);
+	/*
+	 * Close enough right now? Flag to skip sync?
+	 */
+	if (!dma_map_sg(buffer->dev->dev.this_device, buffer->sg_table->sgl,
+			buffer->sg_table->nents,
+			DMA_BIDIRECTIONAL))
+		return -ENOMEM;
+
+	return 0;
 }
 
 static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
@@ -906,9 +933,15 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 {
 	struct ion_buffer *buffer = dmabuf->priv;
 
-	mutex_lock(&buffer->lock);
-	ion_buffer_kmap_put(buffer);
-	mutex_unlock(&buffer->lock);
+	if (buffer->heap->ops->map_kernel) {
+		mutex_lock(&buffer->lock);
+		ion_buffer_kmap_put(buffer);
+		mutex_unlock(&buffer->lock);
+	}
+
+	dma_unmap_sg(buffer->dev->dev.this_device, buffer->sg_table->sgl,
+		     buffer->sg_table->nents,
+		     DMA_BIDIRECTIONAL);
 
 	return 0;
 }
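For context on the side effect mentioned in the commit message: once
begin_cpu_access/end_cpu_access do real cache maintenance, a userspace
importer can bracket CPU access to the buffer with the standard dma-buf sync
ioctls. The sketch below is not part of the patch; it assumes a dma-buf fd
already exported by Ion (the hypothetical `dmabuf_fd` parameter) and a kernel
exposing the DMA_BUF_IOCTL_SYNC uapi from <linux/dma-buf.h>:

```c
/* Hypothetical userspace sketch: write a pattern into a dma-buf with
 * proper begin/end CPU-access bracketing via DMA_BUF_IOCTL_SYNC. */
#include <linux/dma-buf.h>	/* struct dma_buf_sync, DMA_BUF_IOCTL_SYNC */
#include <stddef.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

static int cpu_write_pattern(int dmabuf_fd, size_t len)
{
	struct dma_buf_sync sync = { 0 };
	void *map;
	int ret = -1;

	map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
		   dmabuf_fd, 0);
	if (map == MAP_FAILED)
		return -1;

	/* Claim the buffer for CPU writes; the exporter may need to
	 * invalidate CPU caches here. */
	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync))
		goto out;

	memset(map, 0xa5, len);

	/* CPU access done; the exporter may clean caches so a device
	 * observes the written data. */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync))
		goto out;

	ret = 0;
out:
	munmap(map, len);
	return ret;
}
```

Without this patch, Ion's begin_cpu_access/end_cpu_access only kmap the
buffer, so the START/END ioctls above would be effectively no-ops for cache
coherency; with it, they map to the dma_map_sg/dma_unmap_sg calls added in
the last two hunks.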