From patchwork Mon Dec 8 08:39:27 2014
X-Patchwork-Submitter: Alexandre Courbot
X-Patchwork-Id: 5454921
From: Alexandre Courbot
To: Russell King, Marek Szyprowski
Cc: gnurou@gmail.com, Arnd Bergmann, Konrad Rzeszutek Wilk,
	linux-kernel@vger.kernel.org, Alexandre Courbot, Thierry Reding,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH] ARM: DMA: Fix kzalloc flags in __iommu_alloc_buffer()
Date: Mon, 8 Dec 2014 17:39:27 +0900
Message-ID: <1418027967-12923-1-git-send-email-acourbot@nvidia.com>
X-Mailer: git-send-email 2.1.3

There doesn't seem to be any valid reason to allocate the pages array
with the same flags as the buffer itself. Doing so can eventually cause
the following safeguard in mm/slab.c to be hit:

	BUG_ON(flags & GFP_SLAB_BUG_MASK);

This happens when buffers are allocated with __GFP_DMA32 or
__GFP_HIGHMEM. Fix this by allocating the pages array with GFP_KERNEL
to follow what is done elsewhere in this file.

Using GFP_KERNEL in __iommu_alloc_buffer() is safe because atomic
allocations are handled by __iommu_alloc_atomic().

Signed-off-by: Alexandre Courbot
Cc: Russell King
Cc: Marek Szyprowski
Cc: Arnd Bergmann
Cc: Thierry Reding
Cc: Konrad Rzeszutek Wilk
Acked-by: Marek Szyprowski
---
 arch/arm/mm/dma-mapping.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index e8907117861e..bc495354c802 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1106,7 +1106,7 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 	int i = 0;
 
 	if (array_size <= PAGE_SIZE)
-		pages = kzalloc(array_size, gfp);
+		pages = kzalloc(array_size, GFP_KERNEL);
 	else
 		pages = vzalloc(array_size);
 	if (!pages)
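
[Editor's note: for readers less familiar with the slab safeguard the
commit message quotes, the snippet below sketches the split the patch
relies on. It is a simplified illustration, not the actual
arch/arm/mm/dma-mapping.c code: the helper name alloc_buffer_sketch and
its trimmed-down error handling are invented for this note. The point is
that zone modifiers such as __GFP_HIGHMEM or __GFP_DMA32 are legitimate
for the buffer pages themselves (the page allocator copes with them),
but must not reach kzalloc() for the bookkeeping array, which is plain
lowmem metadata.]

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static struct page **alloc_buffer_sketch(size_t size, gfp_t gfp)
{
	int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	size_t array_size = count * sizeof(struct page *);
	struct page **pages;
	int i;

	/* Bookkeeping array: ordinary kernel metadata, always GFP_KERNEL. */
	if (array_size <= PAGE_SIZE)
		pages = kzalloc(array_size, GFP_KERNEL);
	else
		pages = vzalloc(array_size);
	if (!pages)
		return NULL;

	/*
	 * Buffer pages: the caller's gfp (possibly carrying __GFP_HIGHMEM
	 * or __GFP_DMA32) goes to the page allocator, which handles those
	 * zone modifiers fine -- unlike the slab allocator, which would
	 * hit BUG_ON(flags & GFP_SLAB_BUG_MASK) in mm/slab.c.
	 */
	for (i = 0; i < count; i++) {
		pages[i] = alloc_page(gfp);
		if (!pages[i])
			goto err;
	}

	return pages;

err:
	while (i--)
		__free_page(pages[i]);
	if (array_size <= PAGE_SIZE)
		kfree(pages);
	else
		vfree(pages);
	return NULL;
}

[The design point is simply that the metadata array never needs to live
in highmem or a DMA zone, so GFP_KERNEL is always appropriate for it,
as the commit message notes is already done elsewhere in this file.]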