From patchwork Sat Mar 14 00:08:35 2015
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 6010341
From: Laura Abbott
To: Vlastimil Babka, Srinivas Kandagatla, linux-arm-kernel@lists.infradead.org,
	Russell King - ARM Linux, ssantosh@kernel.org, Andrew Morton
Cc: Kevin Hilman, Laura Abbott, Arnd Bergmann, Stephen Boyd,
	linux-mm@kvack.org, Mel Gorman, Kumar Gala
Subject: [PATCHv3] mm: Don't offset memmap for flatmem
Date: Fri, 13 Mar 2015 17:08:35 -0700
Message-Id: <1426291715-16242-1-git-send-email-lauraa@codeaurora.org>
X-Mailer: git-send-email 1.8.2.1

Srinivas Kandagatla reported bad page messages when trying to remove the
bottom 2MB on an ARM based IFC6410 board:

BUG: Bad page state in process swapper  pfn:fffa8
page:ef7fb500 count:0 mapcount:0 mapping:  (null) index:0x0
flags: 0x96640253(locked|error|dirty|active|arch_1|reclaim|mlocked)
page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
bad because of flags:
flags: 0x200041(locked|active|mlocked)
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 3.19.0-rc3-00007-g412f9ba-dirty #816
Hardware name: Qualcomm (Flattened Device Tree)
[] (unwind_backtrace) from [] (show_stack+0x20/0x24)
[] (show_stack) from [] (dump_stack+0x80/0x9c)
[] (dump_stack) from [] (bad_page+0xc8/0x128)
[] (bad_page) from [] (free_pages_prepare+0x168/0x1e0)
[] (free_pages_prepare) from [] (free_hot_cold_page+0x3c/0x174)
[] (free_hot_cold_page) from [] (__free_pages+0x54/0x58)
[] (__free_pages) from [] (free_highmem_page+0x38/0x88)
[] (free_highmem_page) from [] (mem_init+0x240/0x430)
[] (mem_init) from [] (start_kernel+0x1e4/0x3c8)
[] (start_kernel) from [<80208074>] (0x80208074)
Disabling lock debugging due to kernel taint

Removing the lower 2MB meant the start of the lowmem zone was no longer
page block aligned. The IFC6410 uses CONFIG_FLATMEM, where
alloc_node_mem_map allocates memory for the mem_map. alloc_node_mem_map
offsets the map for unaligned nodes on the assumption that the pfn/page
translation functions will account for the offset. The CONFIG_FLATMEM
translation functions do not apply any offset, however, resulting in the
memmap array being overrun. Just use the allocated memmap without any
offset when running with CONFIG_FLATMEM to avoid the overrun.

Signed-off-by: Laura Abbott
Reported-by: Srinivas Kandagatla
Acked-by: Vlastimil Babka
Tested-by: Srinivas Kandagatla
Tested-by: Bjorn Andersson
---
The thread got too deep, so I split this out into a new thread. See
http://marc.info/?l=linux-mm&m=142188852025672&w=2 for the previous
thread's discussion; Vlastimil's last comment is at
http://marc.info/?l=linux-mm&m=142505070430844&w=2

For reference, a short sketch of the FLATMEM pfn/page translation macros
the commit message refers to is appended after the diff.
---
 mm/page_alloc.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a47f0b2..a308ec7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4945,6 +4945,8 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 
 static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat)
 {
+	unsigned long __maybe_unused offset = 0;
+
 	/* Skip empty nodes */
 	if (!pgdat->node_spanned_pages)
 		return;
@@ -4961,6 +4963,7 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat)
 		 * for the buddy allocator to function correctly.
 		 */
 		start = pgdat->node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
+		offset = pgdat->node_start_pfn - start;
 		end = pgdat_end_pfn(pgdat);
 		end = ALIGN(end, MAX_ORDER_NR_PAGES);
 		size = (end - start) * sizeof(struct page);
@@ -4968,7 +4971,7 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat)
 		if (!map)
 			map = memblock_virt_alloc_node_nopanic(size,
 							       pgdat->node_id);
-		pgdat->node_mem_map = map + (pgdat->node_start_pfn - start);
+		pgdat->node_mem_map = map + offset;
 	}
 #ifndef CONFIG_NEED_MULTIPLE_NODES
 	/*
@@ -4976,10 +4979,12 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat)
 	 */
 	if (pgdat == NODE_DATA(0)) {
 		mem_map = NODE_DATA(0)->node_mem_map;
-#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
-		if (page_to_pfn(mem_map) != pgdat->node_start_pfn)
-			mem_map -= (pgdat->node_start_pfn - ARCH_PFN_OFFSET);
-#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
+#if defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) || defined(CONFIG_FLATMEM)
+		if (page_to_pfn(mem_map) != pgdat->node_start_pfn) {
+			mem_map -= offset;
+			VM_BUG_ON(page_to_pfn(mem_map) != pgdat->node_start_pfn);
+		}
+#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP || CONFIG_FLATMEM */
 	}
 #endif
 #endif /* CONFIG_FLAT_NODE_MEM_MAP */
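
For reference (not part of the submitted patch): below is a minimal sketch
of what the CONFIG_FLATMEM pfn/page translation boils down to, roughly as
in the generic include/asm-generic/memory_model.h definitions; the exact
form may vary by architecture and kernel version. The only point is that
FLATMEM treats mem_map as a flat array anchored at ARCH_PFN_OFFSET and
applies no per-node offset of its own.

/*
 * Illustrative sketch of the FLATMEM converters: a plain array index
 * relative to ARCH_PFN_OFFSET.  Nothing here accounts for the
 * (node_start_pfn - start) offset that alloc_node_mem_map() folds into
 * node_mem_map, which is why an offset mem_map can be indexed past the
 * end of the allocation.
 */
#define __pfn_to_page(pfn)	(mem_map + ((pfn) - ARCH_PFN_OFFSET))
#define __page_to_pfn(page)	((unsigned long)((page) - mem_map) + \
				 ARCH_PFN_OFFSET)

With the change above, a FLATMEM kernel ends up with the global mem_map
pointing at the start of the allocated map rather than offset into it,
matching the commit message's point that FLATMEM should just use the
allocated memmap without any offset.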