From patchwork Wed May 23 08:43:42 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 10420677
Date: Wed, 23 May 2018 10:43:42 +0200
From: Michal Hocko
To: Oscar Salvador
Cc: linux-mm@kvack.org, vbabka@suse.cz, pasha.tatashin@oracle.com, dan.j.williams@intel.com
Subject: Re: [RFC] trace when adding memory to an offline node
Message-ID: <20180523084342.GK20441@dhcp22.suse.cz>
References: <20180523080108.GA30350@techadventures.net> <20180523083756.GJ20441@dhcp22.suse.cz>
In-Reply-To: <20180523083756.GJ20441@dhcp22.suse.cz>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.9.5 (2018-04-13)
On Wed 23-05-18 10:37:56, Michal Hocko wrote:
> On Wed 23-05-18 10:01:08, Oscar Salvador wrote:
> > Hi guys,
> >
> > while testing memhotplug, I spotted the following trace:
> >
> > =====
> > linux kernel: WARNING: CPU: 0 PID: 64 at ./include/linux/gfp.h:467 vmemmap_alloc_block+0x4e/0xc9
>
> This warning is too loud and not really helpful. We are doing
>
> 	gfp_t gfp_mask = GFP_KERNEL|__GFP_RETRY_MAYFAIL|__GFP_NOWARN;
> 	page = alloc_pages_node(node, gfp_mask, order);
>
> so we do not really insist on the allocation succeeding on the
> requested node (it is more a hint about which node is the best one, and
> we can fall back to any other node). Moreover, we explicitly do not
> care about allocation warnings because of __GFP_NOWARN. So maybe we
> want to soften the warning like this?

The patch with the full changelog:

From 13a168ec3b84561abc201bd116ad53af343928c0 Mon Sep 17 00:00:00 2001
From: Michal Hocko
Date: Wed, 23 May 2018 10:38:06 +0200
Subject: [PATCH] mm: do not warn on offline nodes unless the specific node is
 explicitly requested

Oscar has noticed that we splat

linux kernel: WARNING: CPU: 0 PID: 64 at ./include/linux/gfp.h:467 vmemmap_alloc_block+0x4e/0xc9
[...]
linux kernel: CPU: 0 PID: 64 Comm: kworker/u4:1 Tainted: G W E 4.17.0-rc5-next-20180517-1-default+ #66
linux kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.0.0-prebuilt.qemu-project.org 04/01/2014
linux kernel: Workqueue: kacpi_hotplug acpi_hotplug_work_fn
linux kernel: RIP: 0010:vmemmap_alloc_block+0x4e/0xc9
linux kernel: Code: fb ff 8d 69 01 75 07 65 8b 1d 9d cb 93 7e 81 fb ff 03 00 00 76 02 0f 0b 48 63 c3 48 0f a3 05 c8 b1 b4 00 0f 92 c0 84 c0 75 02 <0f> 0b 31 c9 89 da 89 ee bf c0 06 40 01 e8 0f d1 ad ff 48 85 c0 74
linux kernel: RSP: 0018:ffffc90000d03bf0 EFLAGS: 00010246
linux kernel: RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000008
linux kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00000000000001ff
linux kernel: RBP: 0000000000000009 R08: 0000000000000001 R09: ffffc90000d03ae8
linux kernel: R10: 0000000000000001 R11: 0000000000000000 R12: ffffea0006000000
linux kernel: R13: ffffea0005e00000 R14: ffffea0006000000 R15: 0000000000000001
linux kernel: FS: 0000000000000000(0000) GS:ffff88013fc00000(0000) knlGS:0000000000000000
linux kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
linux kernel: CR2: 00007fa92a698018 CR3: 00000001184ce000 CR4: 00000000000006f0
linux kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
linux kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
linux kernel: Call Trace:
linux kernel:  vmemmap_populate+0xf2/0x2ae
linux kernel:  sparse_mem_map_populate+0x28/0x35
linux kernel:  sparse_add_one_section+0x4c/0x187
linux kernel:  __add_pages+0xe7/0x1a0
linux kernel:  add_pages+0x16/0x70
linux kernel:  add_memory_resource+0xa3/0x1d0
linux kernel:  add_memory+0xe4/0x110
linux kernel:  acpi_memory_device_add+0x134/0x2e0
linux kernel:  acpi_bus_attach+0xd9/0x190
linux kernel:  acpi_bus_scan+0x37/0x70
linux kernel:  acpi_device_hotplug+0x389/0x4e0
linux kernel:  acpi_hotplug_work_fn+0x1a/0x30
linux kernel:  process_one_work+0x146/0x340
linux kernel:  worker_thread+0x47/0x3e0
linux kernel:  kthread+0xf5/0x130
linux kernel:  ? max_active_store+0x60/0x60
linux kernel:  ? kthread_bind+0x10/0x10
linux kernel:  ret_from_fork+0x35/0x40
linux kernel: ---[ end trace 2e2241f4e2f2f018 ]---
Tested-by: Oscar Salvador
====

when adding memory to a node that is currently offline. The VM_WARN_ON
is just too loud without a good reason. In this particular case we are
doing

	alloc_pages_node(node, GFP_KERNEL|__GFP_RETRY_MAYFAIL|__GFP_NOWARN, order)

so we do not insist on allocating from the given node (it is more of a
hint), so we can fall back to any other populated node, and moreover we
explicitly ask not to warn about the allocation failure. Soften the
warning so that it fires only when somebody asks for the given node
explicitly via __GFP_THISNODE.

Signed-off-by: Michal Hocko
---
 include/linux/gfp.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 036846fc00a6..7f860ea29ec6 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -464,7 +464,7 @@ static inline struct page *
 __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 {
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
-	VM_WARN_ON(!node_online(nid));
+	VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
 
 	return __alloc_pages(gfp_mask, order, nid);
 }