From patchwork Mon Jun 11 12:32:10 2018
X-Patchwork-Submitter: Xie XiuQi
X-Patchwork-Id: 10457769
Subject: Re: [PATCH 1/2] arm64: avoid alloc memory on offline node
To: Michal Hocko
References: <1527768879-88161-1-git-send-email-xiexiuqi@huawei.com>
 <1527768879-88161-2-git-send-email-xiexiuqi@huawei.com>
 <20180606154516.GL6631@arm.com>
 <20180607105514.GA13139@dhcp22.suse.cz>
 <5ed798a0-6c9c-086e-e5e8-906f593ca33e@huawei.com>
 <20180607122152.GP32433@dhcp22.suse.cz>
 <20180611085237.GI13364@dhcp22.suse.cz>
From: Xie XiuQi
Message-ID: <16c4db2f-bc70-d0f2-fb38-341d9117ff66@huawei.com>
Date: Mon, 11 Jun 2018 20:32:10 +0800
In-Reply-To: <20180611085237.GI13364@dhcp22.suse.cz>
X-BeenThere: linux-arm-kernel@lists.infradead.org
Cc: Hanjun Guo, tnowicki@caviumnetworks.com, linux-pci@vger.kernel.org, Catalin
Marinas, "Rafael J. Wysocki", Will Deacon, Linux Kernel Mailing List, Jarkko Sakkinen, linux-mm@kvack.org, wanghuiqiang@huawei.com, Greg Kroah-Hartman, Bjorn Helgaas, Andrew Morton, zhongjiang, linux-arm
Sender: "linux-arm-kernel"

Hi Michal,

On 2018/6/11 16:52, Michal Hocko wrote:
> On Mon 11-06-18 11:23:18, Xie XiuQi wrote:
>> Hi Michal,
>>
>> On 2018/6/7 20:21, Michal Hocko wrote:
>>> On Thu 07-06-18 19:55:53, Hanjun Guo wrote:
>>>> On 2018/6/7 18:55, Michal Hocko wrote:
>>> [...]
>>>>> I am not sure I have the full context, but pci_acpi_scan_root calls
>>>>> kzalloc_node(sizeof(*info), GFP_KERNEL, node)
>>>>> and that should fall back to whatever node is online. An offline node
>>>>> shouldn't keep any pages behind. So there must be something else going
>>>>> on here and the patch is not the right way to handle it. What does
>>>>> faddr2line __alloc_pages_nodemask+0xf0 tell on this kernel?
>>>>
>>>> The whole context is:
>>>>
>>>> The system is booted with a NUMA node that has no memory attached to
>>>> it (a memory-less NUMA node), and with NR_CPUS less than the number of
>>>> CPUs presented in the MADT, so the CPUs on this memory-less node are
>>>> not brought up and the node is never onlined (although the SRAT
>>>> presents it);
>>>>
>>>> Devices attached to this NUMA node, such as the PCI host bridge, still
>>>> return the valid NUMA node via _PXM, but that node is not online,
>>>> which leads to this issue.
>>>
>>> But we should have other NUMA nodes on the zonelists, so the allocator
>>> should fall back to another node. If the zonelist is not initialized
>>> properly, though, then this can indeed show up as a problem. Knowing
>>> which exact place has blown up would help get a better picture...
>>
>> I specified a non-existent node when allocating memory with
>> kzalloc_node, and got the following error message.
>>
>> And I found that there is just a VM_WARN, which does not prevent the
>> memory allocation from continuing.
>>
>> This nid is then used to access NODE_DATA(nid), so if nid is invalid,
>> it causes an oops there.
>>
>> /*
>>  * Allocate pages, preferring the node given as nid. The node must be valid and
>>  * online. For more general interface, see alloc_pages_node().
>>  */
>> static inline struct page *
>> __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
>> {
>> 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
>> 	VM_WARN_ON(!node_online(nid));
>>
>> 	return __alloc_pages(gfp_mask, order, nid);
>> }
>>
>> (I wrote a kernel module that allocates memory on a non-existent node
>> using kzalloc_node().)
>
> OK, so this is artificially broken code, right? You shouldn't get a
> non-existent node via the standard APIs AFAICS. The original report was
> about an existing node which is offline AFAIU. That would be a different
> case. If I am missing something and there are legitimate users that try
> to allocate from non-existent nodes, then we should handle that in
> node_zonelist.

I think Hanjun's comments may help to understand this question:

- A NUMA node is built if the CPUs and/or memory on that node are valid;

- But if we boot the system with a memory-less node and with
  CONFIG_NR_CPUS less than the number of CPUs in the SRAT (for example,
  64 CPUs total across 4 NUMA nodes, 16 CPUs on each node, booted with
  CONFIG_NR_CPUS=48), then no NUMA node is built for node 3. With devices
  on that node, allocating memory panics, because NUMA node 3 is not a
  valid node.

I triggered this BUG on an arm64 platform, and I found that a similar bug
had already been fixed on x86, so I sent a similar patch for this one.
Or could we consider fixing it in the mm subsystem?
From b755de8dfdfef97effaa91379ffafcb81f4d62a1 Mon Sep 17 00:00:00 2001
From: Yinghai Lu
Date: Wed, 20 Feb 2008 12:41:52 -0800
Subject: [PATCH] x86: make dev_to_node return online node

a numa system (with multi HT chains) may return node without ram. Aka it
is not online. Try to get an online node, otherwise return -1.

Signed-off-by: Yinghai Lu
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
---
 arch/x86/pci/acpi.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c
index d95de2f..ea8685f 100644
--- a/arch/x86/pci/acpi.c
+++ b/arch/x86/pci/acpi.c
@@ -172,6 +172,9 @@ struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_device *device, int do
 		set_mp_bus_to_node(busnum, node);
 	else
 		node = get_mp_bus_to_node(busnum);
+
+	if (node != -1 && !node_online(node))
+		node = -1;
 #endif

	/* Allocate per-root-bus (not per bus) arch-specific data.