From patchwork Fri May 4 08:53:11 2018
X-Patchwork-Submitter: Jonathan Cameron
X-Patchwork-Id: 10380235
From: Jonathan Cameron
To: linux-mm
CC: Pavel Tatashin, Andrew Morton, Jonathan Cameron
Subject: [PATCH] mm/memory_hotplug: Fix leftover use of struct page during hotplug
Date: Fri, 4 May 2018 09:53:11 +0100
Message-ID: <20180504085311.1240-1-Jonathan.Cameron@huawei.com>

The case of a new NUMA node was missed when the reading of node info
from struct page during hotplug was removed. On this path we reach
register_mem_sect_under_node() (which takes a flag saying the call is
from hotplug, so the node id should not be read back from struct page)
via link_mem_sections(), which unfortunately takes no such flag. Fix
this by passing check_nid through link_mem_sections() as well, and
disabling the check on the new-node path.

Note the bug only 'sometimes' manifests, depending on what happens to
be in the struct page structures: there are lots of them, and the node
id only needs to match for one of them.
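To make the fix concrete, here is a minimal sketch (not code from this
patch, simplified from the shape of register_mem_sect_under_node() in
drivers/base/node.c) of how the check_nid flag is meant to gate the
struct page lookup. The helpers section_start_pfn(), section_end_pfn()
and do_link() are hypothetical placeholders for illustration only;
get_nid_for_pfn() is the real helper in drivers/base/node.c:

/*
 * Sketch only: shows why check_nid must be false on the hotplug path.
 * section_start_pfn(), section_end_pfn() and do_link() are placeholders,
 * not real kernel symbols.
 */
static int sketch_register_mem_sect_under_node(struct memory_block *mem_blk,
					       int nid, bool check_nid)
{
	unsigned long pfn;

	for (pfn = section_start_pfn(mem_blk);
	     pfn <= section_end_pfn(mem_blk); pfn++) {
		if (check_nid) {
			/*
			 * Boot path: struct page is initialised, so verify
			 * that this pfn really belongs to nid before
			 * creating the sysfs link.
			 */
			int page_nid = get_nid_for_pfn(pfn);

			if (page_nid < 0 || page_nid != nid)
				continue;
		}
		/*
		 * Hotplug path (check_nid == false): struct page may still
		 * hold stale data, so trust the caller's nid rather than
		 * reading it back from struct page.
		 */
		if (do_link(mem_blk, nid, pfn))
			return -ENOMEM;
	}
	return 0;
}

With this patch, link_mem_sections() simply forwards the flag: true from
register_one_node() (node already known at boot) and false from the
new-node path in add_memory_resource().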
Fixes: fc44f7f9231a ("mm/memory_hotplug: don't read nid from struct page during hotplug")
Signed-off-by: Jonathan Cameron
Reviewed-by: Pavel Tatashin
Acked-by: Michal Hocko
---
 drivers/base/node.c  | 5 +++--
 include/linux/node.h | 8 +++++---
 mm/memory_hotplug.c  | 2 +-
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 7a3a580821e0..a5e821d09656 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -490,7 +490,8 @@ int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
 	return 0;
 }
 
-int link_mem_sections(int nid, unsigned long start_pfn, unsigned long nr_pages)
+int link_mem_sections(int nid, unsigned long start_pfn, unsigned long nr_pages,
+		      bool check_nid)
 {
 	unsigned long end_pfn = start_pfn + nr_pages;
 	unsigned long pfn;
@@ -514,7 +515,7 @@ int link_mem_sections(int nid, unsigned long start_pfn, unsigned long nr_pages)
 
 		mem_blk = find_memory_block_hinted(mem_sect, mem_blk);
 
-		ret = register_mem_sect_under_node(mem_blk, nid, true);
+		ret = register_mem_sect_under_node(mem_blk, nid, check_nid);
 		if (!err)
 			err = ret;
 
diff --git a/include/linux/node.h b/include/linux/node.h
index 41f171861dcc..6d336e38d155 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -32,9 +32,11 @@ extern struct node *node_devices[];
 typedef void (*node_registration_func_t)(struct node *);
 
 #if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
-extern int link_mem_sections(int nid, unsigned long start_pfn, unsigned long nr_pages);
+extern int link_mem_sections(int nid, unsigned long start_pfn,
+			     unsigned long nr_pages, bool check_nid);
 #else
-static inline int link_mem_sections(int nid, unsigned long start_pfn, unsigned long nr_pages)
+static inline int link_mem_sections(int nid, unsigned long start_pfn,
+				    unsigned long nr_pages, bool check_nid)
 {
 	return 0;
 }
@@ -57,7 +59,7 @@ static inline int register_one_node(int nid)
 		if (error)
 			return error;
 		/* link memory sections under this node */
-		error = link_mem_sections(nid, pgdat->node_start_pfn, pgdat->node_spanned_pages);
+		error = link_mem_sections(nid, pgdat->node_start_pfn, pgdat->node_spanned_pages, true);
 	}
 
 	return error;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f74826cdceea..25982467800b 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1158,7 +1158,7 @@ int __ref add_memory_resource(int nid, struct resource *res, bool online)
 	 * nodes have to go through register_node.
 	 * TODO clean up this mess.
 	 */
-	ret = link_mem_sections(nid, start_pfn, nr_pages);
+	ret = link_mem_sections(nid, start_pfn, nr_pages, false);
 register_fail:
 	/*
 	 * If sysfs file of new node can't create, cpu on the node