From patchwork Wed Aug 29 19:24:51 2018
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 10580823
Date: Wed, 29 Aug 2018 21:24:51 +0200
From: Michal Hocko <mhocko@suse.com>
To: Zi Yan
Cc: Andrea Arcangeli, Andrew Morton, linux-mm@kvack.org,
 Alex Williamson, David Rientjes, Vlastimil Babka,
 Stefan Priebe - Profihost AG
Subject: [PATCH] mm, thp: relax __GFP_THISNODE for MADV_HUGEPAGE mappings
Message-ID: <20180829192451.GG10223@dhcp22.suse.cz>
References: <20180823105253.GB29735@dhcp22.suse.cz>
 <20180828075321.GD10223@dhcp22.suse.cz>
 <20180828081837.GG10223@dhcp22.suse.cz>
 <20180829142816.GX10223@dhcp22.suse.cz>
 <20180829143545.GY10223@dhcp22.suse.cz>
 <82CA00EB-BF8E-4137-953B-8BC4B74B99AF@cs.rutgers.edu>
 <20180829154744.GC10223@dhcp22.suse.cz>
 <39BE14E6-D0FB-428A-B062-8B5AEDC06E61@cs.rutgers.edu>
 <20180829162528.GD10223@dhcp22.suse.cz>
In-Reply-To: <20180829162528.GD10223@dhcp22.suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 29-08-18 18:25:28, Michal Hocko wrote:
> On Wed 29-08-18 12:06:48, Zi Yan wrote:
> > The warning goes away with this change. I am OK with this patch
> > (plus the original one you sent out, which could be merged with
> > this one).
> 
> I will respin the patch, update the changelog and repost. Tomorrow I
> hope.

Here is the v2

From 4dc2f772756e6f91b9e64d1a3e2df4dca3475f5b Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Tue, 28 Aug 2018 09:59:19 +0200
Subject: [PATCH] mm, thp: relax __GFP_THISNODE for MADV_HUGEPAGE mappings

Andrea has noticed [1] that a THP allocation might be really disruptive
when it is allocated on a NUMA system with the local node full or hard
to reclaim.

Stefan has posted an allocation stall report on a 4.12 based SLES kernel
which suggests the same issue:

[245513.362669] kvm: page allocation stalls for 194572ms, order:9, mode:0x4740ca(__GFP_HIGHMEM|__GFP_IO|__GFP_FS|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL|__GFP_THISNODE|__GFP_MOVABLE|__GFP_DIRECT_RECLAIM), nodemask=(null)
[245513.363983] kvm cpuset=/ mems_allowed=0-1
[245513.364604] CPU: 10 PID: 84752 Comm: kvm Tainted: G        W 4.12.0+98-ph 0000001 SLE15 (unreleased)
[245513.365258] Hardware name: Supermicro SYS-1029P-WTRT/X11DDW-NT, BIOS 2.0 12/05/2017
[245513.365905] Call Trace:
[245513.366535]  dump_stack+0x5c/0x84
[245513.367148]  warn_alloc+0xe0/0x180
[245513.367769]  __alloc_pages_slowpath+0x820/0xc90
[245513.368406]  ? __slab_free+0xa9/0x2f0
[245513.369048]  ? __slab_free+0xa9/0x2f0
[245513.369671]  __alloc_pages_nodemask+0x1cc/0x210
[245513.370300]  alloc_pages_vma+0x1e5/0x280
[245513.370921]  do_huge_pmd_wp_page+0x83f/0xf00
[245513.371554]  ? set_huge_zero_page.isra.52.part.53+0x9b/0xb0
[245513.372184]  ? do_huge_pmd_anonymous_page+0x631/0x6d0
[245513.372812]  __handle_mm_fault+0x93d/0x1060
[245513.373439]  handle_mm_fault+0xc6/0x1b0
[245513.374042]  __do_page_fault+0x230/0x430
[245513.374679]  ? get_vtime_delta+0x13/0xb0
[245513.375411]  do_page_fault+0x2a/0x70
[245513.376145]  ? page_fault+0x65/0x80
[245513.376882]  page_fault+0x7b/0x80
[...]
[245513.382056] Mem-Info:
[245513.382634] active_anon:126315487 inactive_anon:1612476 isolated_anon:5
                 active_file:60183 inactive_file:245285 isolated_file:0
                 unevictable:15657 dirty:286 writeback:1 unstable:0
                 slab_reclaimable:75543 slab_unreclaimable:2509111
                 mapped:81814 shmem:31764 pagetables:370616 bounce:0
                 free:32294031 free_pcp:6233 free_cma:0
[245513.386615] Node 0 active_anon:254680388kB inactive_anon:1112760kB active_file:240648kB inactive_file:981168kB unevictable:13368kB isolated(anon):0kB isolated(file):0kB mapped:280240kB dirty:1144kB writeback:0kB shmem:95832kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 81225728kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[245513.388650] Node 1 active_anon:250583072kB inactive_anon:5337144kB active_file:84kB inactive_file:0kB unevictable:49260kB isolated(anon):20kB isolated(file):0kB mapped:47016kB dirty:0kB writeback:4kB shmem:31224kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 31897600kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no

The defrag mode is "madvise" and from the above report it is clear that
the THP has been allocated for a MADV_HUGEPAGE vma.

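For illustration, a minimal userspace sketch of the allocation pattern
that hits this path (hypothetical, not Stefan's actual kvm workload;
the mapping size is an assumption standing in for "larger than what the
local node can back"):

#include <string.h>
#include <sys/mman.h>

int main(void)
{
	/* assumed to exceed the free memory of the local NUMA node */
	size_t len = 8UL << 30;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	/* mark the vma VM_HUGEPAGE, as with defrag="madvise" above */
	if (madvise(p, len, MADV_HUGEPAGE))
		return 1;
	/* each 2M chunk touched may fault in a THP via direct reclaim */
	memset(p, 1, len);
	return 0;
}
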
Andrea has identified that the main source of the problem is the
__GFP_THISNODE usage:

: The problem is that direct compaction combined with the NUMA
: __GFP_THISNODE logic in mempolicy.c is telling reclaim to swap very
: hard the local node, instead of failing the allocation if there's no
: THP available in the local node.
:
: Such logic was ok until __GFP_THISNODE was added to the THP allocation
: path even with MPOL_DEFAULT.
:
: The idea behind the __GFP_THISNODE addition is that it is better to
: provide local memory in PAGE_SIZE units than to use remote NUMA THP
: backed memory. That largely depends on the remote latency though, on
: threadrippers for example the overhead is relatively low in my
: experience.
:
: The combination of __GFP_THISNODE and __GFP_DIRECT_RECLAIM results in
: extremely slow qemu startup with vfio, if the VM is larger than the
: size of one host NUMA node. This is because it will try very hard to
: unsuccessfully swapout get_user_pages pinned pages as a result of the
: __GFP_THISNODE being set, instead of falling back to PAGE_SIZE
: allocations and instead of trying to allocate THP on other nodes (it
: would be even worse without vfio type1 GUP pins of course, except it'd
: be swapping heavily instead).

Fix this by removing the __GFP_THISNODE handling from alloc_pages_vma,
where it doesn't belong, and moving it to alloc_hugepage_direct_gfpmask,
where we juggle gfp flags for the different allocation modes. The
rationale is that __GFP_THISNODE is helpful in relaxed defrag modes
because falling back to a different node might be more harmful than the
benefit of a large page. If the user really requires THP (e.g. by
MADV_HUGEPAGE), then the THP has a higher priority than the local NUMA
placement.

Be careful when the vma has an explicit numa binding though, because
__GFP_THISNODE does not play well with it. We want to follow the
explicit numa policy rather than enforce a node which happens to be
local to the cpu we are running on.

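For the explicit binding case, an illustrative sketch (not part of the
patch; assumes a machine with a populated node 1 and uses the mbind
interface from <numaif.h>, linked with -lnuma):

#include <numaif.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 30;
	unsigned long nodemask = 1UL << 1;	/* bind to node 1 only */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	if (mbind(p, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0))
		return 1;
	if (madvise(p, len, MADV_HUGEPAGE))
		return 1;
	/*
	 * THPs for this vma should honor the MPOL_BIND nodemask; with
	 * this patch alloc_hugepage_direct_gfpmask leaves __GFP_THISNODE
	 * out for such a vma instead of forcing the cpu-local node.
	 */
	memset(p, 1, len);
	return 0;
}
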
[1] http://lkml.kernel.org/r/20180820032204.9591-1-aarcange@redhat.com

Fixes: 5265047ac301 ("mm, thp: really limit transparent hugepage allocation to local node")
Reported-by: Stefan Priebe
Debugged-by: Andrea Arcangeli
Tested-by: Stefan Priebe
Signed-off-by: Michal Hocko
---
 include/linux/mempolicy.h |  2 ++
 mm/huge_memory.c          | 25 +++++++++++++++++--------
 mm/mempolicy.c            | 28 +---------------------------
 3 files changed, 20 insertions(+), 35 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 5228c62af416..bac395f1d00a 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -139,6 +139,8 @@ struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
 struct mempolicy *get_task_policy(struct task_struct *p);
 struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
 		unsigned long addr);
+struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+		unsigned long addr);
 bool vma_policy_mof(struct vm_area_struct *vma);
 
 extern void numa_default_policy(void);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c3bc7e9c9a2a..94472bf9a31b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -629,21 +629,30 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
  *	    available
  * never: never stall for any thp allocation
  */
-static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
+static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma, unsigned long addr)
 {
 	const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
+	gfp_t this_node = 0;
+	struct mempolicy *pol;
+
+#ifdef CONFIG_NUMA
+	/* __GFP_THISNODE makes sense only if there is no explicit binding */
+	pol = get_vma_policy(vma, addr);
+	if (pol->mode != MPOL_BIND)
+		this_node = __GFP_THISNODE;
+#endif
 
 	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags))
-		return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
+		return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY | this_node);
 	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags))
-		return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
+		return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM | this_node;
 	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags))
 		return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
-							     __GFP_KSWAPD_RECLAIM);
+							     __GFP_KSWAPD_RECLAIM | this_node);
 	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags))
 		return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
-							     0);
-	return GFP_TRANSHUGE_LIGHT;
+							     this_node);
+	return GFP_TRANSHUGE_LIGHT | this_node;
 }
 
 /* Caller must hold page table lock. */
@@ -715,7 +724,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 			pte_free(vma->vm_mm, pgtable);
 		return ret;
 	}
-	gfp = alloc_hugepage_direct_gfpmask(vma);
+	gfp = alloc_hugepage_direct_gfpmask(vma, haddr);
 	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
 	if (unlikely(!page)) {
 		count_vm_event(THP_FAULT_FALLBACK);
@@ -1290,7 +1299,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 alloc:
 	if (transparent_hugepage_enabled(vma) &&
 	    !transparent_hugepage_debug_cow()) {
-		huge_gfp = alloc_hugepage_direct_gfpmask(vma);
+		huge_gfp = alloc_hugepage_direct_gfpmask(vma, haddr);
 		new_page = alloc_hugepage_vma(huge_gfp, vma, haddr, HPAGE_PMD_ORDER);
 	} else
 		new_page = NULL;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index da858f794eb6..75bbfc3d6233 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1648,7 +1648,7 @@ struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
  * freeing by another task. It is the caller's responsibility to free the
  * extra reference for shared policies.
  */
-static struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
 						unsigned long addr)
 {
 	struct mempolicy *pol = __get_vma_policy(vma, addr);
@@ -2026,32 +2026,6 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		goto out;
 	}
 
-	if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
-		int hpage_node = node;
-
-		/*
-		 * For hugepage allocation and non-interleave policy which
-		 * allows the current node (or other explicitly preferred
-		 * node) we only try to allocate from the current/preferred
-		 * node and don't fall back to other nodes, as the cost of
-		 * remote accesses would likely offset THP benefits.
-		 *
-		 * If the policy is interleave, or does not allow the current
-		 * node in its nodemask, we allocate the standard way.
-		 */
-		if (pol->mode == MPOL_PREFERRED &&
-						!(pol->flags & MPOL_F_LOCAL))
-			hpage_node = pol->v.preferred_node;
-
-		nmask = policy_nodemask(gfp, pol);
-		if (!nmask || node_isset(hpage_node, *nmask)) {
-			mpol_cond_put(pol);
-			page = __alloc_pages_node(hpage_node,
-						gfp | __GFP_THISNODE, order);
-			goto out;
-		}
-	}
-
 	nmask = policy_nodemask(gfp, pol);
 	preferred_nid = policy_node(gfp, pol, node);
 	page = __alloc_pages_nodemask(gfp, order, preferred_nid, nmask);