From patchwork Thu Mar 21 22:52:34 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10864489
From: Christoph Hellwig
To: Russell King, x86@kernel.org, Sudip Mukherjee, Bartlomiej Zolnierkiewicz
Subject: [PATCH 6/7] dma-mapping: remove leftover NULL device support
Date: Thu, 21 Mar 2019 15:52:34 -0700
Message-Id: <20190321225235.30648-7-hch@lst.de>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190321225235.30648-1-hch@lst.de>
References: <20190321225235.30648-1-hch@lst.de>
Cc: linux-fbdev@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org

Most dma_map_ops implementations already had some issues with a NULL
device, or simply crashed if one was fed to them.
Now that we have cleaned up all the obvious offenders, we can stop
pretending that we support this mode.

Signed-off-by: Christoph Hellwig
---
 Documentation/DMA-API-HOWTO.txt | 13 ++++++-------
 include/linux/dma-mapping.h     |  6 +++---
 kernel/dma/direct.c             |  2 +-
 3 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/Documentation/DMA-API-HOWTO.txt b/Documentation/DMA-API-HOWTO.txt
index 1a721d0f35c8..0e98cf7466dd 100644
--- a/Documentation/DMA-API-HOWTO.txt
+++ b/Documentation/DMA-API-HOWTO.txt
@@ -365,13 +365,12 @@ __get_free_pages() (but takes size instead of a page order). If your
 driver needs regions sized smaller than a page, you may prefer using
 the dma_pool interface, described below.
 
-The consistent DMA mapping interfaces, for non-NULL dev, will by
-default return a DMA address which is 32-bit addressable. Even if the
-device indicates (via DMA mask) that it may address the upper 32-bits,
-consistent allocation will only return > 32-bit addresses for DMA if
-the consistent DMA mask has been explicitly changed via
-dma_set_coherent_mask(). This is true of the dma_pool interface as
-well.
+The consistent DMA mapping interfaces will by default return a DMA address
+which is 32-bit addressable. Even if the device indicates (via the DMA mask)
+that it may address the upper 32-bits, consistent allocation will only
+return > 32-bit addresses for DMA if the consistent DMA mask has been
+explicitly changed via dma_set_coherent_mask(). This is true of the
+dma_pool interface as well.
 
 dma_alloc_coherent() returns two values: the virtual address which you
 can use to access it from the CPU and dma_handle which you pass to the
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 75e60be91e5f..6309a721394b 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -267,9 +267,9 @@ size_t dma_direct_max_mapping_size(struct device *dev);
 
 static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 {
-	if (dev && dev->dma_ops)
+	if (dev->dma_ops)
 		return dev->dma_ops;
-	return get_arch_dma_ops(dev ? dev->bus : NULL);
+	return get_arch_dma_ops(dev->bus);
 }
 
 static inline void set_dma_ops(struct device *dev,
@@ -650,7 +650,7 @@ static inline void dma_free_coherent(struct device *dev, size_t size,
 
 static inline u64 dma_get_mask(struct device *dev)
 {
-	if (dev && dev->dma_mask && *dev->dma_mask)
+	if (dev->dma_mask && *dev->dma_mask)
 		return *dev->dma_mask;
 	return DMA_BIT_MASK(32);
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index fcdb23e8d2fc..2c2772e9702a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -311,7 +311,7 @@ static inline bool dma_direct_possible(struct device *dev, dma_addr_t dma_addr,
 		size_t size)
 {
 	return swiotlb_force != SWIOTLB_FORCE &&
-		(!dev || dma_capable(dev, dma_addr, size));
+		dma_capable(dev, dma_addr, size);
 }
 
 dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
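
For illustration only (not part of the patch): a minimal sketch of the calling
convention the updated HOWTO text describes, assuming a hypothetical PCI driver
probe routine. The point is that the DMA API is always handed a real, non-NULL
struct device, and that the coherent mask must be raised explicitly with
dma_set_coherent_mask() before dma_alloc_coherent() may return addresses above
32 bits. The function name and buffer size below are invented for the example.

#include <linux/dma-mapping.h>
#include <linux/pci.h>
#include <linux/sizes.h>

/*
 * Hypothetical probe fragment: "pdev" is the real device handed to the
 * driver by the PCI core; a NULL device is no longer accepted by the
 * DMA API after this series.
 */
static int example_probe(struct pci_dev *pdev)
{
	struct device *dev = &pdev->dev;
	dma_addr_t dma_handle;
	void *cpu_addr;

	/*
	 * Consistent allocations are 32-bit addressable by default; ask for
	 * 64-bit coherent addresses explicitly and fall back to 32-bit if
	 * the platform cannot provide them.
	 */
	if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64)) &&
	    dma_set_coherent_mask(dev, DMA_BIT_MASK(32)))
		return -EIO;

	/* dma_handle honours whichever coherent mask was accepted above. */
	cpu_addr = dma_alloc_coherent(dev, SZ_4K, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* ... program dma_handle into the hardware and do the I/O ... */

	dma_free_coherent(dev, SZ_4K, cpu_addr, dma_handle);
	return 0;
}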