From patchwork Mon Mar 28 06:32:11 2016
X-Patchwork-Submitter: Yong Wu (吴勇)
X-Patchwork-Id: 8678371
From: Yong Wu
To: Joerg Roedel, Catalin Marinas, Will Deacon
Subject: [PATCH v2 1/2] dma/iommu: Add pgsize_bitmap confirmation in __iommu_dma_alloc_pages
Date: Mon, 28 Mar 2016 14:32:11 +0800
Message-ID: <1459146732-15620-1-git-send-email-yong.wu@mediatek.com>
Cc: srv_heupstream@mediatek.com, Arnd Bergmann, Douglas Anderson,
 linux-kernel@vger.kernel.org, Tomasz Figa, iommu@lists.linux-foundation.org,
 Daniel Kurtz, Yong Wu, Matthias Brugger, linux-mediatek@lists.infradead.org,
 Marek Szyprowski, Robin Murphy, linux-arm-kernel@lists.infradead.org,
 Lucas Stach

Currently __iommu_dma_alloc_pages assumes that every IOMMU supports a
granule of PAGE_SIZE: as a last resort it calls alloc_page to allocate a
single page. Fortunately the minimum page size of all current IOMMUs is
SZ_4K, so this works well today. But if an IOMMU's minimum granule were
larger than PAGE_SIZE, the allocation would have to be aborted, since the
IOMMU cannot map discontiguous memory within a single granule.
For example, if the pgsize_bitmap of the IOMMU contains only SZ_16K while
PAGE_SIZE is SZ_4K, then we have to prepare at least SZ_16K of contiguous
memory for each granule of the IOMMU mapping.

Signed-off-by: Yong Wu
---
v2:
 - Rebase on v4.6-rc1.
 - Add a new patch ([1/2] add pgsize_bitmap) here.

 drivers/iommu/dma-iommu.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 72d6182..75ce71e 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -190,11 +190,13 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
 	kvfree(pages);
 }
 
-static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
+static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp,
+					     unsigned long pgsize_bitmap)
 {
 	struct page **pages;
 	unsigned int i = 0, array_size = count * sizeof(*pages);
-	unsigned int order = MAX_ORDER;
+	int min_order = get_order(1 << __ffs(pgsize_bitmap));
+	int order = MAX_ORDER;
 
 	if (array_size <= PAGE_SIZE)
 		pages = kzalloc(array_size, GFP_KERNEL);
@@ -213,13 +215,16 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
 		/*
 		 * Higher-order allocations are a convenience rather
 		 * than a necessity, hence using __GFP_NORETRY until
-		 * falling back to single-page allocations.
+		 * falling back to min size allocations.
 		 */
-		for (order = min_t(unsigned int, order, __fls(count));
-		     order > 0; order--) {
-			page = alloc_pages(gfp | __GFP_NORETRY, order);
+		for (order = min_t(int, order, __fls(count));
+		     order >= min_order; order--) {
+			page = alloc_pages((order == min_order) ? gfp :
+					   gfp | __GFP_NORETRY, order);
 			if (!page)
 				continue;
+			if (!order)
+				break;
 			if (PageCompound(page)) {
 				if (!split_huge_page(page))
 					break;
@@ -229,8 +234,6 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
 				break;
 			}
 		}
-		if (!page)
-			page = alloc_page(gfp);
 		if (!page) {
 			__iommu_dma_free_pages(pages, i);
 			return NULL;
@@ -292,7 +295,8 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size,
 
 	*handle = DMA_ERROR_CODE;
 
-	pages = __iommu_dma_alloc_pages(count, gfp);
+	pages = __iommu_dma_alloc_pages(count, gfp,
+					domain->ops->pgsize_bitmap);
 	if (!pages)
 		return NULL;
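
To illustrate the min_order arithmetic above, here is a minimal standalone
sketch; it is not part of the patch and not kernel code. The my_ffs() and
my_get_order() helpers are userspace stand-ins for the kernel's __ffs() and
get_order(), and the SZ_16K-only pgsize_bitmap is an assumed example matching
the scenario in the commit message.

/*
 * Standalone userspace sketch (assumption: my_ffs() and my_get_order()
 * stand in for the kernel's __ffs() and get_order()) showing how the
 * minimum allocation order is derived from an IOMMU's pgsize_bitmap.
 */
#include <stdio.h>

#define PAGE_SHIFT	12		/* assuming SZ_4K CPU pages */
#define SZ_16K		0x4000

/* index of the lowest set bit, like the kernel's __ffs() */
static unsigned long my_ffs(unsigned long x)
{
	return __builtin_ctzl(x);
}

/* smallest order such that (1 << order) pages cover size, like get_order() */
static int my_get_order(unsigned long size)
{
	int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	/* assumed example: an IOMMU whose only supported granule is 16K */
	unsigned long pgsize_bitmap = SZ_16K;
	unsigned long granule = 1UL << my_ffs(pgsize_bitmap);
	int min_order = my_get_order(granule);

	/* prints "granule=16384 min_order=2" with 4K pages */
	printf("granule=%lu min_order=%d\n", granule, min_order);
	return 0;
}

With a SZ_16K-only bitmap and SZ_4K pages this yields min_order = 2, which is
why the loop in the patch stops at min_order instead of falling back to
alloc_page: anything smaller than one granule could not be mapped by that
IOMMU.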