From patchwork Mon Nov  9 18:17:19 2015
X-Patchwork-Submitter: Christoph Hellwig <hch@lst.de>
X-Patchwork-Id: 7586081
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton, Haavard Skinnemoen, Hans-Christian Egtvedt,
 Steven Miao, Ley Foon Tan, David Howells, Koichi Yasutake,
 Chris Metcalf, linux-snps-arc@lists.infradead.org,
 linux-c6x-dev@linux-c6x.org, linux-cris-kernel@axis.com,
 linux-parisc@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
 linux-metag@vger.kernel.org, sparclinux@vger.kernel.org,
 iommu@lists.linux-foundation.org
Subject: [PATCH 14/16] tile: uninline dma_set_mask
Date: Mon,  9 Nov 2015 19:17:19 +0100
Message-Id: <1447093041-21832-15-git-send-email-hch@lst.de>
In-Reply-To: <1447093041-21832-1-git-send-email-hch@lst.de>
References: <1447093041-21832-1-git-send-email-hch@lst.de>
X-Mailing-List: linux-parisc@vger.kernel.org

We'll soon merge <asm-generic/dma-mapping-common.h> into
<linux/dma-mapping.h>, and the reference to dma_capable in the tile
dma_set_mask would create a circular dependency.  Fix this by moving
the implementation out of line.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/tile/include/asm/dma-mapping.h | 29 +----------------------------
 arch/tile/kernel/pci-dma.c          | 29 +++++++++++++++++++++++++++++
 2 files changed, 30 insertions(+), 28 deletions(-)

diff --git a/arch/tile/include/asm/dma-mapping.h b/arch/tile/include/asm/dma-mapping.h
index 96ac6cc..c342736 100644
--- a/arch/tile/include/asm/dma-mapping.h
+++ b/arch/tile/include/asm/dma-mapping.h
@@ -76,34 +76,7 @@ static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size
 
 #include <asm-generic/dma-mapping-common.h>
 
-static inline int
-dma_set_mask(struct device *dev, u64 mask)
-{
-	struct dma_map_ops *dma_ops = get_dma_ops(dev);
-
-	/*
-	 * For PCI devices with 64-bit DMA addressing capability, promote
-	 * the dma_ops to hybrid, with the consistent memory DMA space limited
-	 * to 32-bit. For 32-bit capable devices, limit the streaming DMA
-	 * address range to max_direct_dma_addr.
-	 */
-	if (dma_ops == gx_pci_dma_map_ops ||
-	    dma_ops == gx_hybrid_pci_dma_map_ops ||
-	    dma_ops == gx_legacy_pci_dma_map_ops) {
-		if (mask == DMA_BIT_MASK(64) &&
-		    dma_ops == gx_legacy_pci_dma_map_ops)
-			set_dma_ops(dev, gx_hybrid_pci_dma_map_ops);
-		else if (mask > dev->archdata.max_direct_dma_addr)
-			mask = dev->archdata.max_direct_dma_addr;
-	}
-
-	if (!dev->dma_mask || !dma_supported(dev, mask))
-		return -EIO;
-
-	*dev->dma_mask = mask;
-
-	return 0;
-}
+int dma_set_mask(struct device *dev, u64 mask);
 
 /*
  * dma_alloc_noncoherent() is #defined to return coherent memory,
diff --git a/arch/tile/kernel/pci-dma.c b/arch/tile/kernel/pci-dma.c
index 09b5870..b6bc054 100644
--- a/arch/tile/kernel/pci-dma.c
+++ b/arch/tile/kernel/pci-dma.c
@@ -583,6 +583,35 @@ struct dma_map_ops *gx_hybrid_pci_dma_map_ops;
 EXPORT_SYMBOL(gx_legacy_pci_dma_map_ops);
 EXPORT_SYMBOL(gx_hybrid_pci_dma_map_ops);
 
+int dma_set_mask(struct device *dev, u64 mask)
+{
+	struct dma_map_ops *dma_ops = get_dma_ops(dev);
+
+	/*
+	 * For PCI devices with 64-bit DMA addressing capability, promote
+	 * the dma_ops to hybrid, with the consistent memory DMA space limited
+	 * to 32-bit. For 32-bit capable devices, limit the streaming DMA
+	 * address range to max_direct_dma_addr.
+	 */
+	if (dma_ops == gx_pci_dma_map_ops ||
+	    dma_ops == gx_hybrid_pci_dma_map_ops ||
+	    dma_ops == gx_legacy_pci_dma_map_ops) {
+		if (mask == DMA_BIT_MASK(64) &&
+		    dma_ops == gx_legacy_pci_dma_map_ops)
+			set_dma_ops(dev, gx_hybrid_pci_dma_map_ops);
+		else if (mask > dev->archdata.max_direct_dma_addr)
+			mask = dev->archdata.max_direct_dma_addr;
+	}
+
+	if (!dev->dma_mask || !dma_supported(dev, mask))
+		return -EIO;
+
+	*dev->dma_mask = mask;
+
+	return 0;
+}
+EXPORT_SYMBOL(dma_set_mask);
+
 #ifdef CONFIG_ARCH_HAS_DMA_SET_COHERENT_MASK
 int dma_set_coherent_mask(struct device *dev, u64 mask)
 {
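
Not part of the patch, but for context: uninlining does not change the
calling convention, so existing consumers keep negotiating their DMA mask
exactly as before.  A minimal sketch of such a caller, assuming a
hypothetical example_probe() in some PCI driver (the function and its
naming are illustrative only, not from this series):

	#include <linux/dma-mapping.h>
	#include <linux/pci.h>

	/* Hypothetical caller -- illustrative only, not from this series. */
	static int example_probe(struct pci_dev *pdev,
				 const struct pci_device_id *id)
	{
		/*
		 * Try 64-bit DMA first; on tile this now reaches the
		 * out-of-line dma_set_mask(), which may promote a legacy
		 * device to the hybrid dma_map_ops.  Fall back to a
		 * 32-bit mask if the wider mask is not supported.
		 */
		if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
		    dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)))
			return -EIO;

		return 0;
	}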