From patchwork Tue Jun 24 19:33:30 2014
X-Patchwork-Submitter: Michał Nazarewicz
X-Patchwork-Id: 4412741
From: Michal Nazarewicz
To: Andrew Morton
Subject: [PATCHv3 RESEND] mm: page_alloc: fix CMA area initialisation when pageblock > MAX_ORDER
Date: Tue, 24 Jun 2014 21:33:30 +0200
Message-Id: <1403638410-30190-1-git-send-email-mina86@mina86.com>
Cc: Catalin Marinas, Mark Salter, linux-kernel@vger.kernel.org,
	stable@vger.kernel.org, linux-mm@kvack.org, Mel Gorman,
	David Rientjes, Michal Nazarewicz,
	linux-arm-kernel@lists.infradead.org, Marek Szyprowski

With a kernel configured with ARM64_64K_PAGES && !TRANSPARENT_HUGEPAGE,
the following is triggered at early boot:

  SMP: Total of 8 processors activated.
  devtmpfs: initialized
  Unable to handle kernel NULL pointer dereference at virtual address 00000008
  pgd = fffffe0000050000
  [00000008] *pgd=00000043fba00003, *pmd=00000043fba00003, *pte=00e0000078010407
  Internal error: Oops: 96000006 [#1] SMP
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.15.0-rc864k+ #44
  task: fffffe03bc040000 ti: fffffe03bc080000 task.ti: fffffe03bc080000
  PC is at __list_add+0x10/0xd4
  LR is at free_one_page+0x270/0x638
  ...
  Call trace:
    [] __list_add+0x10/0xd4
    [] free_one_page+0x26c/0x638
    [] __free_pages_ok.part.52+0x84/0xbc
    [] __free_pages+0x74/0xbc
    [] init_cma_reserved_pageblock+0xe8/0x104
    [] cma_init_reserved_areas+0x190/0x1e4
    [] do_one_initcall+0xc4/0x154
    [] kernel_init_freeable+0x204/0x2a8
    [] kernel_init+0xc/0xd4

This happens because init_cma_reserved_pageblock() calls
__free_one_page() with pageblock_order as the page order even though it
is bigger than MAX_ORDER.  This in turn causes accesses past
zone->free_list[].

Fix the problem by changing init_cma_reserved_pageblock() such that it
splits the pageblock into individual order MAX_ORDER - 1 chunks if the
pageblock is bigger than a MAX_ORDER - 1 page.

In cases where !CONFIG_HUGETLB_PAGE_SIZE_VARIABLE, which is all
architectures except for ia64, powerpc and tile at the moment, the
"pageblock_order > MAX_ORDER" condition will be optimised out since
both sides of the operator are constants.  In cases where pageblock
size is variable, the performance degradation should not be significant
anyway since init_cma_reserved_pageblock() is called only at boot time
at most MAX_CMA_AREAS times, which by default is eight.

Cc: <stable@vger.kernel.org> # v3.5+
Signed-off-by: Michal Nazarewicz
Reported-and-tested-by: Mark Salter
Tested-by: Christopher Covington
---
 mm/page_alloc.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

Resent with a somewhat wider distribution, including Andrew, Mel and
linux-mm, as well as stable.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ee92384..fef9614 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -816,9 +816,21 @@ void __init init_cma_reserved_pageblock(struct page *page)
 		set_page_count(p, 0);
 	} while (++p, --i);
 
-	set_page_refcounted(page);
 	set_pageblock_migratetype(page, MIGRATE_CMA);
-	__free_pages(page, pageblock_order);
+
+	if (pageblock_order >= MAX_ORDER) {
+		i = pageblock_nr_pages;
+		p = page;
+		do {
+			set_page_refcounted(p);
+			__free_pages(p, MAX_ORDER - 1);
+			p += MAX_ORDER_NR_PAGES;
+		} while (i -= MAX_ORDER_NR_PAGES);
+	} else {
+		set_page_refcounted(page);
+		__free_pages(page, pageblock_order);
+	}
+
 	adjust_managed_page_count(page, pageblock_nr_pages);
 }
 #endif
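
For readers without a tree at hand, here is a minimal userspace sketch
of the failure mode and of the splitting approach used above.  It is an
illustrative analogue, not kernel code: MAX_ORDER matches the kernel
default, PAGEBLOCK_ORDER is any assumed value >= MAX_ORDER as in the
64K-page configuration, and the struct and function names are made up
for the example.

#include <stdio.h>

#define MAX_ORDER	11	/* kernel default */
#define PAGEBLOCK_ORDER	13	/* assumed, > MAX_ORDER as in the report */

struct zone_sketch {
	int free_list[MAX_ORDER];	/* one bucket per order 0..MAX_ORDER-1 */
};

static void free_one_page_sketch(struct zone_sketch *z, int order)
{
	if (order >= MAX_ORDER) {
		/*
		 * The bug: the kernel's free path has no such check, so
		 * an order-13 free indexes free_list[] past its end.
		 */
		printf("out of bounds: free_list[%d] in a %d-entry array\n",
		       order, MAX_ORDER);
		return;
	}
	z->free_list[order]++;
}

int main(void)
{
	struct zone_sketch zone = { { 0 } };
	int i;

	/* Old behaviour: free the whole pageblock in one call. */
	free_one_page_sketch(&zone, PAGEBLOCK_ORDER);

	/* Fixed behaviour: split into order MAX_ORDER - 1 chunks. */
	for (i = 0; i < 1 << (PAGEBLOCK_ORDER - (MAX_ORDER - 1)); i++)
		free_one_page_sketch(&zone, MAX_ORDER - 1);

	printf("order-%d chunks freed: %d\n",
	       MAX_ORDER - 1, zone.free_list[MAX_ORDER - 1]);
	return 0;
}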
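
MAX_ORDER - 1 is the largest order the buddy allocator's free lists can
hold, which is why the patch frees in steps of that size.  Because
pageblock_order and MAX_ORDER - 1 are both power-of-two orders and the
pageblock is the larger of the two whenever the new branch is taken,
pageblock_nr_pages is an exact multiple of MAX_ORDER_NR_PAGES, so the
do/while loop in the patch terminates at exactly zero.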