From patchwork Tue Sep 8 16:47:52 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11763681
From: Christoph Hellwig
To: Tony Luck, Fenghua Yu, Thomas Bogendoerfer,
    iommu@lists.linux-foundation.org
Cc: Tomasz Figa, Joerg Roedel, Robin Murphy,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org
Subject: [PATCH 06/12] dma-direct: remove dma_direct_{alloc,free}_pages
Date: Tue, 8 Sep 2020 18:47:52 +0200
Message-Id: <20200908164758.3177341-7-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200908164758.3177341-1-hch@lst.de>
References: <20200908164758.3177341-1-hch@lst.de>

Just merge these helpers into the main dma_direct_{alloc,free} routines,
as the additional checks are always false for the two callers.
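[Editor's sketch, not part of the commit: the check being folded in is the
arch-specific uncached fallback. A condensed view of the resulting
dma_direct_alloc() entry point, simplified from the hunks below rather than
quoted verbatim from kernel/dma/direct.c:]

	void *dma_direct_alloc(struct device *dev, size_t size,
			dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
	{
		/*
		 * Without dma_set_uncached() or remapping support, an
		 * uncached allocation must come from the arch allocator.
		 * The former dma_direct_alloc_pages() callers never hit
		 * this branch, which is why the merge is safe.
		 */
		if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
		    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
		    dma_alloc_need_uncached(dev, attrs))
			return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);

		/* ... former dma_direct_alloc_pages() body follows ... */
	}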
Signed-off-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---
 arch/x86/kernel/amd_gart_64.c |  6 +++---
 include/linux/dma-direct.h    |  4 ----
 kernel/dma/direct.c           | 39 ++++++++++++++---------------------
 kernel/dma/pool.c             |  2 +-
 4 files changed, 19 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index bccc5357bffd6c..153374b996a279 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -467,7 +467,7 @@ gart_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_addr,
 {
 	void *vaddr;
 
-	vaddr = dma_direct_alloc_pages(dev, size, dma_addr, flag, attrs);
+	vaddr = dma_direct_alloc(dev, size, dma_addr, flag, attrs);
 	if (!vaddr ||
 	    !force_iommu || dev->coherent_dma_mask <= DMA_BIT_MASK(24))
 		return vaddr;
@@ -479,7 +479,7 @@ gart_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_addr,
 		goto out_free;
 	return vaddr;
 out_free:
-	dma_direct_free_pages(dev, size, vaddr, *dma_addr, attrs);
+	dma_direct_free(dev, size, vaddr, *dma_addr, attrs);
 	return NULL;
 }
 
@@ -489,7 +489,7 @@ gart_free_coherent(struct device *dev, size_t size, void *vaddr,
 		   dma_addr_t dma_addr, unsigned long attrs)
 {
 	gart_unmap_page(dev, dma_addr, size, DMA_BIDIRECTIONAL, 0);
-	dma_direct_free_pages(dev, size, vaddr, dma_addr, attrs);
+	dma_direct_free(dev, size, vaddr, dma_addr, attrs);
 }
 
 static int no_agp;
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 95e3e28bd93f47..20eceb2e4f91f8 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -77,10 +77,6 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);
 void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
 		dma_addr_t dma_addr, unsigned long attrs);
-void *dma_direct_alloc_pages(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);
-void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
-		dma_addr_t dma_addr, unsigned long attrs);
 int dma_direct_get_sgtable(struct device *dev, struct sg_table *sgt,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 949c1cbf08b2d5..1d564bea58571b 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -151,13 +151,18 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 	return page;
 }
 
-void *dma_direct_alloc_pages(struct device *dev, size_t size,
+void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
 	struct page *page;
 	void *ret;
 	int err;
 
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	    dma_alloc_need_uncached(dev, attrs))
+		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
+
 	size = PAGE_ALIGN(size);
 
 	if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
@@ -256,11 +261,18 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	return NULL;
 }
 
-void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
-		dma_addr_t dma_addr, unsigned long attrs)
+void dma_direct_free(struct device *dev, size_t size,
+		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
 	unsigned int page_order = get_order(size);
 
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	    dma_alloc_need_uncached(dev, attrs)) {
+		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
+		return;
+	}
+
 	/* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */
 	if (dma_should_free_from_pool(dev, attrs) &&
 	    dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size)))
@@ -284,27 +296,6 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
 	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
-void *dma_direct_alloc(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
-{
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    dma_alloc_need_uncached(dev, attrs))
-		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
-	return dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs);
-}
-
-void dma_direct_free(struct device *dev, size_t size,
-		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
-{
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    dma_alloc_need_uncached(dev, attrs))
-		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
-	else
-		dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs);
-}
-
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
 	defined(CONFIG_SWIOTLB)
 void dma_direct_sync_sg_for_device(struct device *dev,
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 1281c0f0442bc5..fe11643ff9cc7b 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -115,7 +115,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 #endif
 	/*
 	 * Memory in the atomic DMA pools must be unencrypted, the pools do not
-	 * shrink so no re-encryption occurs in dma_direct_free_pages().
+	 * shrink so no re-encryption occurs in dma_direct_free().
 	 */
 	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
 				   1 << order);
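
[Editor's note, not part of this diff: the GART driver reaches these helpers
through its dma_map_ops callbacks. A trimmed sketch of that wiring, with the
field subset assumed for illustration rather than quoted from amd_gart_64.c:]

	static const struct dma_map_ops gart_dma_ops = {
		.alloc	= gart_alloc_coherent,	/* now backed by dma_direct_alloc() */
		.free	= gart_free_coherent,	/* now backed by dma_direct_free() */
		/* .map_page, .unmap_page, .map_sg, ... elided */
	};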