From patchwork Wed Mar 14 15:54:16 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10282591
In-Reply-To: <20180314145450.GI23100@dhcp22.suse.cz>
References: <20180314134431.13241-1-ard.biesheuvel@linaro.org> <20180314141323.GD23100@dhcp22.suse.cz> <20180314145450.GI23100@dhcp22.suse.cz>
From: Ard Biesheuvel
Date: Wed, 14 Mar 2018 15:54:16 +0000
Subject: Re: [PATCH] Revert "mm/page_alloc: fix memmap_init_zone pageblock alignment"
To: Michal Hocko
Cc: Mark Rutland, Paul Burton, Marc Zyngier, Catalin Marinas, Will Deacon, Linux Kernel Mailing List, Pavel Tatashin, Linus Torvalds, Vlastimil Babka, Andrew Morton, Mel Gorman, Daniel Vacek, linux-arm-kernel

On 14 March 2018 at 14:54, Michal Hocko wrote:
> On Wed 14-03-18 14:35:12, Ard Biesheuvel wrote:
>> On 14 March 2018 at 14:13, Michal Hocko wrote:
>> > Does http://lkml.kernel.org/r/20180313224240.25295-1-neelx@redhat.com
>> > fix your issue? From the debugging info you provided it should because
>> > the patch prevents jumping backwards.
>> >
>>
>> The patch does fix the boot hang.
>>
>> But I am concerned that we are papering over a fundamental flaw in
>> memblock_next_valid_pfn().
>
> It seems that memblock_next_valid_pfn is doing the right thing here. It
> is the alignment which moves the pfn back AFAICS. I am not really
> impressed about the original patch either, to be completely honest.
> It just looks awfully tricky.
> I still didn't manage to wrap my head
> around the original issue though so I do not have much better ideas to
> be honest.

So first of all, memblock_next_valid_pfn() never refers to its max_pfn
argument, which is odd but easily fixed.

Then, the whole idea of subtracting one so that the pfn++ will produce
the expected value is rather hacky.

But the real problem is that rounding down pfn for the next iteration
is dodgy, because early_pfn_valid() isn't guaranteed to return true
for the rounded down value. I know it is probably fine in reality, but
dodgy as hell. The same applies to the call to early_pfn_in_nid() btw.

So how about something like this (apologies on Gmail's behalf for the
whitespace damage, I can resend it as a proper patch)?

This ensures that we enter the remainder of the loop with a properly
aligned pfn, rather than tweaking the value of pfn so it assumes the
expected value after 'pfn++'.

---------8<-----------
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3d974cb2a1a1..b89ca999ee3b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5352,28 +5352,29 @@
 		 * function. They do not exist on hotplugged memory.
 		 */
 		if (context != MEMMAP_EARLY)
 			goto not_early;
 
-		if (!early_pfn_valid(pfn)) {
+		if (!early_pfn_valid(pfn) || !early_pfn_in_nid(pfn, nid)) {
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 			/*
 			 * Skip to the pfn preceding the next valid one (or
 			 * end_pfn), such that we hit a valid pfn (or end_pfn)
 			 * on our next iteration of the loop. Note that it needs
 			 * to be pageblock aligned even when the region itself
 			 * is not. move_freepages_block() can shift ahead of
 			 * the valid region but still depends on correct page
 			 * metadata.
 			 */
-			pfn = (memblock_next_valid_pfn(pfn, end_pfn) &
-			       ~(pageblock_nr_pages-1)) - 1;
-#endif
+			pfn = memblock_next_valid_pfn(pfn, end_pfn);
+			if (pfn >= end_pfn)
+				break;
+			pfn &= ~(pageblock_nr_pages - 1);
+#else
 			continue;
+#endif
 		}
-		if (!early_pfn_in_nid(pfn, nid))
-			continue;
 		if (!update_defer_init(pgdat, pfn, end_pfn, &nr_initialised))
 			break;
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 		/*
---------8<-----------