From patchwork Tue Jul 15 02:10:06 2014
X-Patchwork-Submitter: Alexandre Courbot
X-Patchwork-Id: 4550391
From: Alexandre Courbot
To: David Airlie, Ben Skeggs, David Herrmann
Subject: [PATCH RESEND] drm/ttm: expose CPU address of DMA-allocated pages
Date: Tue, 15 Jul 2014 11:10:06 +0900
Message-ID: <1405390206-5081-1-git-send-email-acourbot@nvidia.com>
Cc: linux-arm-kernel@lists.infradead.org, gnurou@gmail.com,
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org

Pages allocated using the DMA API have a coherent memory mapping. Make this
mapping visible to drivers so they can decide to use it instead of creating
their own redundant one.

This is not a mere optimization: for instance, on ARM it is illegal to have
several memory mappings to the same memory with different protection. The
mapping provided by dma_alloc_coherent() and exposed by this patch is
guaranteed to be safe, but subsequent mappings performed by drivers are not.
Thus drivers using the DMA page allocator should use this mapping instead of
creating their own.

Signed-off-by: Alexandre Courbot
---
This patch was previously part of a series but I figured it would make more
sense to send it separately.
It is to be used by Nouveau (and hopefully other drivers) on ARM.

 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 2 ++
 drivers/gpu/drm/ttm/ttm_tt.c             | 6 +++++-
 include/drm/ttm/ttm_bo_driver.h          | 2 ++
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index fb8259f69839..0301fac5badd 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -847,6 +847,7 @@ static int ttm_dma_pool_get_pages(struct dma_pool *pool,
 	if (count) {
 		d_page = list_first_entry(&pool->free_list, struct dma_page, page_list);
 		ttm->pages[index] = d_page->p;
+		ttm_dma->cpu_address[index] = d_page->vaddr;
 		ttm_dma->dma_address[index] = d_page->dma;
 		list_move_tail(&d_page->page_list, &ttm_dma->pages_list);
 		r = 0;
@@ -978,6 +979,7 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	INIT_LIST_HEAD(&ttm_dma->pages_list);
 	for (i = 0; i < ttm->num_pages; i++) {
 		ttm->pages[i] = NULL;
+		ttm_dma->cpu_address[i] = 0;
 		ttm_dma->dma_address[i] = 0;
 	}
 
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 75f319090043..341594ede596 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -58,6 +58,8 @@ static void ttm_dma_tt_alloc_page_directory(struct ttm_dma_tt *ttm)
 	ttm->ttm.pages = drm_calloc_large(ttm->ttm.num_pages, sizeof(void*));
 	ttm->dma_address = drm_calloc_large(ttm->ttm.num_pages,
 					    sizeof(*ttm->dma_address));
+	ttm->cpu_address = drm_calloc_large(ttm->ttm.num_pages,
+					    sizeof(*ttm->cpu_address));
 }
 
 #ifdef CONFIG_X86
@@ -228,7 +230,7 @@ int ttm_dma_tt_init(struct ttm_dma_tt *ttm_dma, struct ttm_bo_device *bdev,
 
 	INIT_LIST_HEAD(&ttm_dma->pages_list);
 	ttm_dma_tt_alloc_page_directory(ttm_dma);
-	if (!ttm->pages || !ttm_dma->dma_address) {
+	if (!ttm->pages || !ttm_dma->dma_address || !ttm_dma->cpu_address) {
 		ttm_tt_destroy(ttm);
 		pr_err("Failed allocating page table\n");
 		return -ENOMEM;
@@ -243,6 +245,8 @@ void ttm_dma_tt_fini(struct ttm_dma_tt *ttm_dma)
 
 	drm_free_large(ttm->pages);
 	ttm->pages = NULL;
+	drm_free_large(ttm_dma->cpu_address);
+	ttm_dma->cpu_address = NULL;
 	drm_free_large(ttm_dma->dma_address);
 	ttm_dma->dma_address = NULL;
 }
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 202f0a7171e8..1d9f0f1ff52d 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -133,6 +133,7 @@ struct ttm_tt {
  * struct ttm_dma_tt
  *
  * @ttm: Base ttm_tt struct.
+ * @cpu_address: The CPU address of the pages
  * @dma_address: The DMA (bus) addresses of the pages
  * @pages_list: used by some page allocation backend
 *
@@ -142,6 +143,7 @@ struct ttm_tt {
  */
 struct ttm_dma_tt {
 	struct ttm_tt ttm;
+	void **cpu_address;
 	dma_addr_t *dma_address;
 	struct list_head pages_list;
 };
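
For illustration only (this is not part of the patch): a driver whose
ttm_dma_tt was populated through the DMA page allocator could read the
coherent kernel mapping back from cpu_address instead of building a second
mapping with vmap(). The helper below is a hypothetical sketch; only struct
ttm_dma_tt and its cpu_address field come from this patch, the function name
and bounds check are assumptions.

static void *example_dma_page_vaddr(struct ttm_dma_tt *ttm_dma,
				    unsigned long index)
{
	/* Hypothetical helper, not part of this patch. */
	if (index >= ttm_dma->ttm.num_pages)
		return NULL;

	/* Coherent CPU address recorded by the DMA page allocator. */
	return ttm_dma->cpu_address[index];
}

On ARM, reusing this address avoids creating a second mapping with different
attributes over memory that dma_alloc_coherent() already mapped, which is the
situation the commit message describes as illegal.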