From patchwork Sat Nov 2 12:02:21 2019
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 11224081
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand <david@redhat.com>,
 Tang Chen <tangchen@cn.fujitsu.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 "Rafael J. Wysocki" <rafael@kernel.org>,
 Andrew Morton <akpm@linux-foundation.org>,
 Keith Busch <keith.busch@intel.com>,
 Jiri Olsa <jolsa@kernel.org>,
 "Peter Zijlstra (Intel)" <peterz@infradead.org>,
 Jani Nikula <jani.nikula@intel.com>,
 Nayna Jain <nayna@linux.ibm.com>,
 Michal Hocko <mhocko@suse.com>,
 Oscar Salvador <osalvador@suse.de>,
 Stephen Rothwell <sfr@canb.auug.org.au>,
 Dan Williams <dan.j.williams@intel.com>,
 Pavel Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v3] mm/memory_hotplug: Fix try_offline_node()
Date: Sat, 2 Nov 2019 13:02:21 +0100
Message-Id: <20191102120221.7553-1-david@redhat.com>

try_offline_node() is pretty much broken right now:
- The node span is updated when onlining memory, not when adding it. We
  ignore memory that was never onlined. Bad.
- We touch possible garbage memmaps. The pfn_to_nid(pfn) can easily
  trigger a kernel panic. Bad for memory that is offline but also bad for
  subsection hotadd with ZONE_DEVICE, whereby the memmap of the first PFN
  of a section might contain garbage.
- Sections belonging to mixed nodes are not properly considered.

As memory blocks might belong to multiple nodes, we would have to walk
all pageblocks (or at least subsections) within present sections. However,
we don't have a way to identify whether a memmap that is not online was
initialized (relevant for ZONE_DEVICE). This makes things more
complicated.

Luckily, we can piggyback on the node span and the nid stored in memory
blocks. Currently, the node span is grown when calling
move_pfn_range_to_zone() - e.g., when onlining memory, and shrunk when
removing memory, before calling try_offline_node(). Sysfs links are
created via link_mem_sections(), e.g., during boot or when adding memory.

If the node still spans memory or if any memory block belongs to the nid,
we don't set the node offline. As memory blocks that span multiple nodes
cannot get offlined, the nid stored in memory blocks is reliable enough
(for such online memory blocks, the node still spans the memory).

Introduce for_each_memory_block() to efficiently walk all memory blocks.

Note: We will soon stop shrinking the ZONE_DEVICE zone and the node span
when removing ZONE_DEVICE memory to fix similar issues (access of garbage
memmaps) - until we have a reliable way to identify whether these memmaps
were properly initialized. This implies that, later, once a node had
ZONE_DEVICE memory, we won't be able to set the node offline - which
should be acceptable.

Since commit f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded
memory to zones until online"), memory that is added is not associated
with a zone/node (the memmap is not initialized). The introducing commit
60a5a19e7419 ("memory-hotplug: remove sysfs file of node") already missed
that we could have multiple nodes for a section and that the zone/node
span is updated when onlining pages, not when adding them.
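For illustration only (not part of this patch): a caller that wants to
check whether any present memory block still belongs to a given node
could use the new walker roughly as sketched below. The helper names
nid_matches_cb and node_has_memory_block are made up for this example;
only for_each_memory_block(), struct memory_block and mem->nid are from
the patch itself:

	#include <linux/memory.h>

	/* Illustrative sketch only. Abort the walk with -EEXIST as soon
	 * as a memory block belongs to the given node. */
	static int nid_matches_cb(struct memory_block *mem, void *arg)
	{
		int nid = *(int *)arg;

		return mem->nid == nid ? -EEXIST : 0;
	}

	static bool node_has_memory_block(int nid)
	{
		/*
		 * for_each_memory_block() propagates the first non-zero
		 * callback return value and aborts the walk.
		 */
		return for_each_memory_block(&nid, nid_matches_cb) == -EEXIST;
	}

This mirrors how try_offline_node() uses the walker in the diff below.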
I tested this by hotplugging two DIMMs to a memory-less and cpu-less NUMA
node. The node is properly onlined when adding the DIMMs. When removing
the DIMMs, the node is properly offlined.

Fixes: 60a5a19e7419 ("memory-hotplug: remove sysfs file of node")
Fixes: f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded memory to zones until online") # visible after d0dc12e86b319
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Nayna Jain <nayna@linux.ibm.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---

v2 -> v3:
- Introduce and use for_each_memory_block(), which will run significantly
  faster than walk_memory_blocks()

v1 -> v2:
- Drop sysfs handling, simplify, and add a comment
- Make sure to include the last section fully

We stop shrinking the ZONE_DEVICE zone after the following patch:
  [PATCH v6 04/10] mm/memory_hotplug: Don't access uninitialized memmaps in shrink_zone_span()
This implies the note above: ZONE_DEVICE memory on a node will block that
node from getting offlined until we have sorted out how to properly
shrink the ZONE_DEVICE zone.

This patch is especially important for:
  [PATCH v6 05/10] mm/memory_hotplug: Shrink zones when offlining memory
as the BUG fixed by this patch now becomes easier to observe when memory
is offlined (in contrast to memory that has never been onlined before).

As both patches are stable fixes and have been in next/master for a long
time, we should probably pull this patch in front of both and also
backport it, at least to:
Cc: stable@vger.kernel.org # v4.13+
I have not checked yet if there are real blockers to do that. I guess not.
---
 drivers/base/memory.c  | 36 +++++++++++++++++++++++++++++++++++
 include/linux/memory.h |  1 +
 mm/memory_hotplug.c    | 43 ++++++++++++++++++++++++++----------------
 3 files changed, 64 insertions(+), 16 deletions(-)

diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index a757d9ed88a7..d65ecdeb83e8 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -867,3 +867,39 @@ int walk_memory_blocks(unsigned long start, unsigned long size,
 	}
 	return ret;
 }
+
+struct for_each_memory_block_cb_data {
+	walk_memory_blocks_func_t func;
+	void *arg;
+};
+
+static int for_each_memory_block_cb(struct device *dev, void *data)
+{
+	struct memory_block *mem = to_memory_block(dev);
+	struct for_each_memory_block_cb_data *cb_data = data;
+
+	return cb_data->func(mem, cb_data->arg);
+}
+
+/**
+ * for_each_memory_block - walk through all present memory blocks
+ *
+ * @arg: argument passed to func
+ * @func: callback for each memory block walked
+ *
+ * This function walks through all present memory blocks, calling func on
+ * each memory block.
+ *
+ * In case func() returns an error, walking is aborted and the error is
+ * returned.
+ */
+int for_each_memory_block(void *arg, walk_memory_blocks_func_t func)
+{
+	struct for_each_memory_block_cb_data cb_data = {
+		.func = func,
+		.arg = arg,
+	};
+
+	return bus_for_each_dev(&memory_subsys, NULL, &cb_data,
+				for_each_memory_block_cb);
+}
diff --git a/include/linux/memory.h b/include/linux/memory.h
index 0ebb105eb261..4c75dae8dd29 100644
--- a/include/linux/memory.h
+++ b/include/linux/memory.h
@@ -119,6 +119,7 @@ extern struct memory_block *find_memory_block(struct mem_section *);
 typedef int (*walk_memory_blocks_func_t)(struct memory_block *, void *);
 extern int walk_memory_blocks(unsigned long start, unsigned long size,
 			      void *arg, walk_memory_blocks_func_t func);
+extern int for_each_memory_block(void *arg, walk_memory_blocks_func_t func);
 #define CONFIG_MEM_BLOCK_SIZE	(PAGES_PER_SECTION<<PAGE_SHIFT)
 #endif
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1634,6 +1634,18 @@ static int check_cpu_on_node(pg_data_t *pgdat)
 	return 0;
 }
 
+static int check_no_memblock_for_node_cb(struct memory_block *mem, void *arg)
+{
+	int nid = *(int *)arg;
+
+	/*
+	 * If a memory block belongs to multiple nodes, the stored nid is not
+	 * reliable. However, such blocks are always online (e.g., cannot get
+	 * offlined) and, therefore, are still spanned by the node.
+	 */
+	return mem->nid == nid ? -EEXIST : 0;
+}
+
 /**
  * try_offline_node
  * @nid: the node ID
@@ -1646,25 +1658,24 @@ static int check_cpu_on_node(pg_data_t *pgdat)
 void try_offline_node(int nid)
 {
 	pg_data_t *pgdat = NODE_DATA(nid);
-	unsigned long start_pfn = pgdat->node_start_pfn;
-	unsigned long end_pfn = start_pfn + pgdat->node_spanned_pages;
-	unsigned long pfn;
-
-	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		unsigned long section_nr = pfn_to_section_nr(pfn);
-
-		if (!present_section_nr(section_nr))
-			continue;
+	int rc;
 
-		if (pfn_to_nid(pfn) != nid)
-			continue;
+	/*
+	 * If the node still spans pages (especially ZONE_DEVICE), don't
+	 * offline it. A node spans memory after move_pfn_range_to_zone(),
+	 * e.g., after the memory block was onlined.
+	 */
+	if (pgdat->node_spanned_pages)
+		return;
 
-		/*
-		 * some memory sections of this node are not removed, and we
-		 * can't offline node now.
-		 */
+	/*
+	 * Especially offline memory blocks might not be spanned by the
+	 * node. They will get spanned by the node once they get onlined.
+	 * However, they link to the node in sysfs and can get onlined later.
+	 */
+	rc = for_each_memory_block(&nid, check_no_memblock_for_node_cb);
+	if (rc)
 		return;
-	}
 
 	if (check_cpu_on_node(pgdat))
 		return;
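For readers skimming the hunks above, a condensed, illustrative view of
the resulting try_offline_node() control flow; this is a sketch assuming
the hunks apply as shown, not a drop-in replacement, and the trailing
offlining steps are elided here as in the diff:

	void try_offline_node(int nid)
	{
		pg_data_t *pgdat = NODE_DATA(nid);

		/* Node still spans (onlined or ZONE_DEVICE) memory? */
		if (pgdat->node_spanned_pages)
			return;

		/* Any (possibly offline) memory block still linked to this nid? */
		if (for_each_memory_block(&nid, check_no_memblock_for_node_cb))
			return;

		/* CPUs still associated with this node? */
		if (check_cpu_on_node(pgdat))
			return;

		/* ... only then is the node actually set offline ... */
	}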