From patchwork Thu May 23 07:00:19 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10956989
From: Christoph Hellwig <hch@lst.de>
To: Robin Murphy
Cc: Tom Murphy, Catalin Marinas, Will Deacon, linux-kernel@vger.kernel.org,
	iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 14/23] iommu/dma: Merge the CMA and alloc_pages allocation paths
Date: Thu, 23 May 2019 09:00:19 +0200
Message-Id: <20190523070028.7435-15-hch@lst.de>
In-Reply-To: <20190523070028.7435-1-hch@lst.de>
References: <20190523070028.7435-1-hch@lst.de>

Instead of having separate code paths for the non-blocking alloc_pages
and CMA allocations, merge them into one. There is a slight behavior
change here in that we now try the page allocator if CMA fails.
This matches what dma-direct and other iommu drivers do and will be
needed to use the dma-iommu code on architectures without DMA remapping
later on.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/iommu/dma-iommu.c | 32 ++++++++++++--------------------
 1 file changed, 12 insertions(+), 20 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 3629bc2f59ee..6b8cedae7cff 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -974,7 +974,7 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 	bool coherent = dev_is_dma_coherent(dev);
 	int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
 	size_t iosize = size;
-	struct page *page;
+	struct page *page = NULL;
 	void *addr;
 
 	size = PAGE_ALIGN(size);
@@ -984,35 +984,26 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 	    !(attrs & DMA_ATTR_FORCE_CONTIGUOUS))
 		return iommu_dma_alloc_remap(dev, iosize, handle, gfp, attrs);
 
-	if (!gfpflags_allow_blocking(gfp)) {
-		/*
-		 * In atomic context we can't remap anything, so we'll only
-		 * get the virtually contiguous buffer we need by way of a
-		 * physically contiguous allocation.
-		 */
-		if (coherent) {
-			page = alloc_pages(gfp, get_order(size));
-			addr = page ? page_address(page) : NULL;
-		} else {
-			addr = dma_alloc_from_pool(size, &page, gfp);
-		}
+	if (!gfpflags_allow_blocking(gfp) && !coherent) {
+		addr = dma_alloc_from_pool(size, &page, gfp);
 		if (!addr)
 			return NULL;
 
 		*handle = __iommu_dma_map(dev, page_to_phys(page), iosize,
 					  ioprot);
 		if (*handle == DMA_MAPPING_ERROR) {
-			if (coherent)
-				__free_pages(page, get_order(size));
-			else
-				dma_free_from_pool(addr, size);
+			dma_free_from_pool(addr, size);
 			return NULL;
 		}
 		return addr;
 	}
 
-	page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
-					 get_order(size), gfp & __GFP_NOWARN);
+	if (gfpflags_allow_blocking(gfp))
+		page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
+						 get_order(size),
+						 gfp & __GFP_NOWARN);
+	if (!page)
+		page = alloc_pages(gfp, get_order(size));
 	if (!page)
 		return NULL;
 
@@ -1038,7 +1029,8 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 out_unmap:
 	__iommu_dma_unmap(dev, *handle, iosize);
 out_free_pages:
-	dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
+	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
+		__free_pages(page, get_order(size));
 	return NULL;
 }
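
For readers who want the shape of the merged path at a glance: the
blocking case now tries the contiguous (CMA) region first and falls
back to the generic page allocator, and the free side uses the return
value of dma_release_from_contiguous() to tell which allocator owned
the buffer. Below is a minimal userspace sketch of that pattern with
hypothetical helper names: region_alloc()/region_release() stand in for
dma_alloc_from_contiguous()/dma_release_from_contiguous(), and
malloc()/free() for alloc_pages()/__free_pages(). It illustrates the
fallback structure only, not the kernel API.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define REGION_SIZE (1 << 20)	/* stand-in for the CMA area */

static char region[REGION_SIZE];
static size_t region_used;

/* Preferred allocator: a trivial bump allocator that may fail. */
static void *region_alloc(size_t size)
{
	void *p;

	if (region_used + size > REGION_SIZE)
		return NULL;	/* region exhausted: caller falls back */
	p = region + region_used;
	region_used += size;
	return p;
}

/*
 * Like dma_release_from_contiguous(): returns true only if the pointer
 * belonged to the region. (A bump allocator never reclaims, so the
 * "release" here is purely an ownership test.)
 */
static bool region_release(void *p)
{
	uintptr_t a = (uintptr_t)p;

	return a >= (uintptr_t)region && a < (uintptr_t)(region + REGION_SIZE);
}

static void *buf_alloc(size_t size, bool blocking)
{
	void *p = NULL;

	/* Only try the region when blocking is allowed, as in the patch. */
	if (blocking)
		p = region_alloc(size);
	if (!p)
		p = malloc(size);	/* generic fallback, like alloc_pages() */
	return p;
}

static void buf_free(void *p)
{
	/* Mirrors: if (!dma_release_from_contiguous(...)) __free_pages(...); */
	if (!region_release(p))
		free(p);
}

int main(void)
{
	void *a = buf_alloc(4096, true);	/* blocking: lands in the region */
	void *b = buf_alloc(4096, false);	/* non-blocking: malloc() path */

	buf_free(a);	/* region-owned: must not reach free() */
	buf_free(b);	/* heap-owned: released via free() */
	return 0;
}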