From patchwork Fri Dec 29 08:18:21 2017
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10136763
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Subject: [PATCH 17/67] microblaze: rename dma_direct to dma_nommu
Date: Fri, 29 Dec 2017 09:18:21 +0100
Message-Id: <20171229081911.2802-18-hch@lst.de>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20171229081911.2802-1-hch@lst.de>
References: <20171229081911.2802-1-hch@lst.de>
Cc: linux-mips@linux-mips.org, linux-ia64@vger.kernel.org,
    linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, Guan Xuetao,
    linux-arch@vger.kernel.org, linux-s390@vger.kernel.org,
    linux-c6x-dev@linux-c6x.org, linux-hexagon@vger.kernel.org,
    x86@kernel.org, linux-snps-arc@lists.infradead.org,
    adi-buildroot-devel@lists.sourceforge.net,
    linux-m68k@lists.linux-m68k.org, patches@groups.riscv.org,
    linux-metag@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    Michal Simek, linux-parisc@vger.kernel.org, linux-cris-kernel@axis.com,
    linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org

This frees the dma_direct_* namespace for a generic implementation.

Signed-off-by: Christoph Hellwig
---
 arch/microblaze/include/asm/dma-mapping.h |  4 +--
 arch/microblaze/kernel/dma.c              | 50 +++++++++++++++----------------
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/microblaze/include/asm/dma-mapping.h b/arch/microblaze/include/asm/dma-mapping.h
index 6b9ea39405b8..add50c1373bf 100644
--- a/arch/microblaze/include/asm/dma-mapping.h
+++ b/arch/microblaze/include/asm/dma-mapping.h
@@ -18,11 +18,11 @@
 /*
  * Available generic sets of operations
  */
-extern const struct dma_map_ops dma_direct_ops;
+extern const struct dma_map_ops dma_nommu_ops;
 
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-	return &dma_direct_ops;
+	return &dma_nommu_ops;
 }
 
 #endif /* _ASM_MICROBLAZE_DMA_MAPPING_H */
diff --git a/arch/microblaze/kernel/dma.c b/arch/microblaze/kernel/dma.c
index 2a9a0ec14c46..364b0ac41452 100644
--- a/arch/microblaze/kernel/dma.c
+++ b/arch/microblaze/kernel/dma.c
@@ -17,7 +17,7 @@
 #define NOT_COHERENT_CACHE
 
-static void *dma_direct_alloc_coherent(struct device *dev, size_t size,
+static void *dma_nommu_alloc_coherent(struct device *dev, size_t size,
 				       dma_addr_t *dma_handle, gfp_t flag,
 				       unsigned long attrs)
 {
@@ -42,7 +42,7 @@ static void *dma_direct_alloc_coherent(struct device *dev, size_t size,
 #endif
 }
 
-static void dma_direct_free_coherent(struct device *dev, size_t size,
+static void dma_nommu_free_coherent(struct device *dev, size_t size,
 				     void *vaddr, dma_addr_t dma_handle,
 				     unsigned long attrs)
 {
@@ -69,7 +69,7 @@ static inline void __dma_sync(unsigned long paddr,
 	}
 }
 
-static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,
+static int dma_nommu_map_sg(struct device *dev, struct scatterlist *sgl,
 			     int nents, enum dma_data_direction direction,
 			     unsigned long attrs)
 {
@@ -89,12 +89,12 @@ static int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl,
 	return nents;
 }
 
-static int dma_direct_dma_supported(struct device *dev, u64 mask)
+static int dma_nommu_dma_supported(struct device *dev, u64 mask)
 {
 	return 1;
 }
 
-static inline dma_addr_t dma_direct_map_page(struct device *dev,
+static inline dma_addr_t dma_nommu_map_page(struct device *dev,
 					     struct page *page,
 					     unsigned long offset,
 					     size_t size,
@@ -106,7 +106,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	return page_to_phys(page) + offset;
 }
 
-static inline void dma_direct_unmap_page(struct device *dev,
+static inline void dma_nommu_unmap_page(struct device *dev,
 					 dma_addr_t dma_address,
 					 size_t size,
 					 enum dma_data_direction direction,
@@ -122,7 +122,7 @@ static inline void dma_direct_unmap_page(struct device *dev,
 }
 
 static inline void
-dma_direct_sync_single_for_cpu(struct device *dev,
+dma_nommu_sync_single_for_cpu(struct device *dev,
 			       dma_addr_t dma_handle, size_t size,
 			       enum dma_data_direction direction)
 {
@@ -136,7 +136,7 @@ dma_direct_sync_single_for_cpu(struct device *dev,
 }
 
 static inline void
-dma_direct_sync_single_for_device(struct device *dev,
+dma_nommu_sync_single_for_device(struct device *dev,
 				  dma_addr_t dma_handle, size_t size,
 				  enum dma_data_direction direction)
 {
@@ -150,7 +150,7 @@ dma_direct_sync_single_for_device(struct device *dev,
 }
 
 static inline void
-dma_direct_sync_sg_for_cpu(struct device *dev,
+dma_nommu_sync_sg_for_cpu(struct device *dev,
 			   struct scatterlist *sgl, int nents,
 			   enum dma_data_direction direction)
 {
@@ -164,7 +164,7 @@ dma_direct_sync_sg_for_cpu(struct device *dev,
 }
 
 static inline void
-dma_direct_sync_sg_for_device(struct device *dev,
+dma_nommu_sync_sg_for_device(struct device *dev,
 			      struct scatterlist *sgl, int nents,
 			      enum dma_data_direction direction)
 {
@@ -178,7 +178,7 @@ dma_direct_sync_sg_for_device(struct device *dev,
 }
 
 static
-int dma_direct_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
+int dma_nommu_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
 		void *cpu_addr, dma_addr_t handle, size_t size,
 		unsigned long attrs)
 {
@@ -204,21 +204,21 @@ int dma_direct_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
 #endif
 }
 
-const struct dma_map_ops dma_direct_ops = {
-	.alloc		= dma_direct_alloc_coherent,
-	.free		= dma_direct_free_coherent,
-	.mmap		= dma_direct_mmap_coherent,
-	.map_sg		= dma_direct_map_sg,
-	.dma_supported	= dma_direct_dma_supported,
-	.map_page	= dma_direct_map_page,
-	.unmap_page	= dma_direct_unmap_page,
-	.sync_single_for_cpu	= dma_direct_sync_single_for_cpu,
-	.sync_single_for_device	= dma_direct_sync_single_for_device,
-	.sync_sg_for_cpu	= dma_direct_sync_sg_for_cpu,
-	.sync_sg_for_device	= dma_direct_sync_sg_for_device,
-	.is_phys	= true,
+const struct dma_map_ops dma_nommu_ops = {
+	.alloc		= dma_nommu_alloc_coherent,
+	.free		= dma_nommu_free_coherent,
+	.mmap		= dma_nommu_mmap_coherent,
+	.map_sg		= dma_nommu_map_sg,
+	.dma_supported	= dma_nommu_dma_supported,
+	.map_page	= dma_nommu_map_page,
+	.unmap_page	= dma_nommu_unmap_page,
+	.sync_single_for_cpu	= dma_nommu_sync_single_for_cpu,
+	.sync_single_for_device	= dma_nommu_sync_single_for_device,
+	.sync_sg_for_cpu	= dma_nommu_sync_sg_for_cpu,
+	.sync_sg_for_device	= dma_nommu_sync_sg_for_device,
+	.is_phys	= true,
 };
-EXPORT_SYMBOL(dma_direct_ops);
+EXPORT_SYMBOL(dma_nommu_ops);
 
 /* Number of entries preallocated for DMA-API debugging */
 #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
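
For context, a minimal, hypothetical driver-side sketch (not part of this
patch): generic DMA API calls are dispatched through get_arch_dma_ops(),
which on microblaze now returns &dma_nommu_ops instead of &dma_direct_ops,
so callers of dma_map_single()/dma_unmap_single() are unaffected by the
rename. The function and buffer names below are illustrative only.

/*
 * Hypothetical illustration, not part of this patch: the calls below
 * reach dma_nommu_map_page()/dma_nommu_unmap_page() via the ops table
 * returned by get_arch_dma_ops().
 */
#include <linux/dma-mapping.h>
#include <linux/device.h>

static int example_dma_to_device(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	/* map the buffer for device access (dma_nommu_map_page underneath) */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... program the device with "handle" ... */

	/* tear the mapping down again (dma_nommu_unmap_page underneath) */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
	return 0;
}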