From patchwork Fri May 18 04:01:04 2018
X-Patchwork-Submitter: Joonsoo Kim <iamjoonsoo.kim@lge.com>
X-Patchwork-Id: 10408165
Date: Fri, 18 May 2018 13:01:04 +0900
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Laura Abbott
Cc: Michal Hocko, Ville Syrjälä, "Aneesh Kumar K. V", Tony Lindgren,
 Vlastimil Babka, Johannes Weiner, Laura Abbott, Marek Szyprowski,
 Mel Gorman, Michal Nazarewicz, Minchan Kim, Rik van Riel,
 Russell King, Will Deacon, Andrew Morton, Linus Torvalds,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] Revert "mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE"
Message-ID: <20180518040104.GA17433@js1304-desktop>
References: <20180517125959.8095-1-ville.syrjala@linux.intel.com>
 <20180517132109.GU12670@dhcp22.suse.cz>
 <20180517133629.GH23723@intel.com>
 <20180517135832.GI23723@intel.com>
 <20180517164947.GV12670@dhcp22.suse.cz>
 <20180517170816.GW12670@dhcp22.suse.cz>

On Thu, May 17, 2018 at 10:53:32AM -0700, Laura Abbott wrote:
> On 05/17/2018 10:08 AM, Michal Hocko wrote:
> >On Thu 17-05-18 18:49:47, Michal Hocko wrote:
> >>On Thu 17-05-18 16:58:32, Ville Syrjälä wrote:
> >>>On Thu, May 17, 2018 at 04:36:29PM +0300, Ville Syrjälä wrote:
> >>>>On Thu, May 17, 2018 at 03:21:09PM +0200, Michal Hocko wrote:
> >>>>>On Thu 17-05-18 15:59:59, Ville Syrjala wrote:
> >>>>>>From: Ville Syrjälä
> >>>>>>
> >>>>>>This reverts commit bad8c6c0b1144694ecb0bc5629ede9b8b578b86e.
> >>>>>>
> >>>>>>Make x86 with HIGHMEM=y and CMA=y boot again.
> >>>>>
> >>>>>Is there any bug report with some more details? It is much more
> >>>>>preferable to fix the issue rather than to revert the whole thing
> >>>>>right away.
> >>>>
> >>>>The machine I have in front of me right now didn't give me anything.
> >>>>Black screen, and netconsole was silent. No serial port on this
> >>>>machine unfortunately.
> >>>
> >>>Booted on another machine with serial:
> >>
> >>Could you provide your .config please?
> >>
> >>[...]
> >>>[    0.000000] cma: Reserved 4 MiB at 0x0000000037000000
> >>[...]
> >>>[    0.000000] BUG: Bad page state in process swapper  pfn:377fe
> >>>[    0.000000] page:f53effc0 count:0 mapcount:-127 mapping:00000000 index:0x0
> >>
> >>OK, so this looks to be the source of the problem. -128 would be a
> >>buddy page but I do not see anything that would set the counter to -127,
> >>and the real map count updates shouldn't really happen that early.
> >>
> >>Maybe CONFIG_DEBUG_VM and CONFIG_DEBUG_HIGHMEM will tell us more.
> >
> >Looking closer, I _think_ that the bug is in set_highmem_pages_init->is_highmem
> >and zone_movable_is_highmem might force CMA pages in the zone movable to
> >be initialized as highmem. And that sounds suspicious to me. Joonsoo?
> >
> 
> For a point of reference, arm with this configuration doesn't hit this bug
> because highmem pages are freed via the memblock interface only, instead
> of iterating through each zone. It looks like the x86 highmem code
> assumes only a single highmem zone and/or that it's disjoint?

Good point! The cause of the crash is that the span of ZONE_MOVABLE is
extended to the whole node span for future CMA initialization, so normal
memory is wrongly freed here.

Here goes the fix. Ville, could you test the patch below? I reproduced
the issue on my side, and this patch fixed it.

Thanks.

------------>8-------------
From 569899a4dbd28cebb8d350d3d1ebb590d88b2629 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Date: Fri, 18 May 2018 10:52:05 +0900
Subject: [PATCH] x86/32/highmem: check that the zone matches when freeing
 highmem pages on init

If CONFIG_CMA is enabled, the span of ZONE_MOVABLE is extended in order
to manage the CMA memory later. In this case, the span of ZONE_MOVABLE
can overlap another zone's memory, and we need to avoid freeing this
overlapped memory here, since it belongs to the other zone. Therefore,
this patch adds a check that the page is actually in the requested zone.
Skipped pages will be freed later, when the memory of the matching zone
is freed.

Reported-by: Ville Syrjälä
Signed-off-by: Joonsoo Kim
Reviewed-by: Laura Abbott
---
 arch/x86/include/asm/highmem.h |  4 ++--
 arch/x86/mm/highmem_32.c       |  5 ++++-
 arch/x86/mm/init_32.c          | 25 +++++++++++++++++++++----
 3 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index a805993..e383f57 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -72,8 +72,8 @@ void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot);
 
 #define flush_cache_kmaps()	do { } while (0)
 
-extern void add_highpages_with_active_regions(int nid, unsigned long start_pfn,
-		unsigned long end_pfn);
+extern void add_highpages_with_active_regions(int nid, struct zone *zone,
+		unsigned long start_pfn, unsigned long end_pfn);
 
 #endif /* __KERNEL__ */
 
diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index 6d18b70..bf9f5b8 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -120,6 +120,9 @@ void __init set_highmem_pages_init(void)
 		if (!is_highmem(zone))
 			continue;
 
+		if (!populated_zone(zone))
+			continue;
+
 		zone_start_pfn = zone->zone_start_pfn;
 		zone_end_pfn = zone_start_pfn + zone->spanned_pages;
 
@@ -127,7 +130,7 @@ void __init set_highmem_pages_init(void)
 		printk(KERN_INFO "Initializing %s for node %d (%08lx:%08lx)\n",
 			zone->name, nid, zone_start_pfn, zone_end_pfn);
 
-		add_highpages_with_active_regions(nid, zone_start_pfn,
+		add_highpages_with_active_regions(nid, zone, zone_start_pfn,
 				 zone_end_pfn);
 	}
 }
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 8008db2..f646072 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -431,7 +431,7 @@ static void __init permanent_kmaps_init(pgd_t *pgd_base)
 	pkmap_page_table = pte;
 }
 
-void __init add_highpages_with_active_regions(int nid,
+void __init add_highpages_with_active_regions(int nid, struct zone *zone,
 			 unsigned long start_pfn, unsigned long end_pfn)
 {
 	phys_addr_t start, end;
@@ -442,9 +442,26 @@ void __init add_highpages_with_active_regions(int nid,
 						start_pfn, end_pfn);
 		unsigned long e_pfn = clamp_t(unsigned long, PFN_DOWN(end),
 					      start_pfn, end_pfn);
-		for ( ; pfn < e_pfn; pfn++)
-			if (pfn_valid(pfn))
-				free_highmem_page(pfn_to_page(pfn));
+		for ( ; pfn < e_pfn; pfn++) {
+			struct page *page;
+
+			if (!pfn_valid(pfn))
+				continue;
+
+			page = pfn_to_page(pfn);
+
+			/*
+			 * If CONFIG_CMA is enabled, it extends the span of
+			 * ZONE_MOVABLE in order to manage the CMA memory
+			 * in the future. In this case, the span of
+			 * ZONE_MOVABLE can overlap another zone's memory.
+			 * We need to avoid freeing this memory here.
+			 */
+			if (IS_ENABLED(CONFIG_CMA) && page_zone(page) != zone)
+				continue;
+
+			free_highmem_page(pfn_to_page(pfn));
+		}
 	}
 }
 #else