From patchwork Wed Mar 17 03:40:05 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12144669
From: Feng Tang <feng.tang@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
 Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
 Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang
Subject: [PATCH v4 08/13] mm/mempolicy: Create a page allocator for policy
Date: Wed, 17 Mar 2021 11:40:05 +0800
Message-Id: <1615952410-36895-9-git-send-email-feng.tang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1615952410-36895-1-git-send-email-feng.tang@intel.com>
References: <1615952410-36895-1-git-send-email-feng.tang@intel.com>
From: Ben Widawsky <ben.widawsky@intel.com>

Add a helper function which takes care of handling multiple preferred
nodes. It will be called by future patches that need to handle this,
specifically VMA-based page allocation and task-based page allocation.
Huge pages don't quite fit the same pattern because they use different
underlying page allocation functions.

This consumes the previous interleave-policy-specific allocation
function to make a one-stop shop for policy-based allocation. With
this, MPOL_PREFERRED_MANY's semantics are closer to MPOL_PREFERRED:
it first tries the preferred node/nodes, and falls back to all other
nodes when the first try fails. Thanks to Michal Hocko for suggestions
on this.

For now, only the interleave policy uses the new helper, so there
should be no functional change yet. However, if bisection points to
issues in the next few commits, this patch is the likely culprit.

Similar functionality is offered via policy_node() and
policy_nodemask(). By themselves, however, neither can achieve this
style of fallback across sets of nodes.

[ Feng: for the first try, add the __GFP_NOWARN flag and skip direct
  reclaim to speed up allocation in some cases ]

Link: https://lore.kernel.org/r/20200630212517.308045-9-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 52 insertions(+), 13 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d945f29..d21105b 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2187,22 +2187,60 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 	return ret;
 }
 
-/* Allocate a page in interleaved policy.
-   Own path because it needs to do special accounting. */
-static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
-					unsigned nid)
+/* Handle page allocation for all but interleaved policies */
+static struct page *alloc_pages_policy(struct mempolicy *pol, gfp_t gfp,
+				       unsigned int order, int preferred_nid)
 {
 	struct page *page;
+	gfp_t gfp_mask = gfp;
 
-	page = __alloc_pages(gfp, order, nid);
-	/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
-	if (!static_branch_likely(&vm_numa_stat_key))
+	if (pol->mode == MPOL_INTERLEAVE) {
+		page = __alloc_pages(gfp, order, preferred_nid);
+		/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
+		if (!static_branch_likely(&vm_numa_stat_key))
+			return page;
+		if (page && page_to_nid(page) == preferred_nid) {
+			preempt_disable();
+			__inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
+			preempt_enable();
+		}
 		return page;
-	if (page && page_to_nid(page) == nid) {
-		preempt_disable();
-		__inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
-		preempt_enable();
 	}
+
+	VM_BUG_ON(preferred_nid != NUMA_NO_NODE);
+
+	preferred_nid = numa_node_id();
+
+	/*
+	 * There is a two pass approach implemented here for
+	 * MPOL_PREFERRED_MANY. In the first pass we try the preferred nodes
+	 * but allow the allocation to fail. The below table explains how
+	 * this is achieved.
+	 *
+	 * | Policy                        | preferred nid | nodemask   |
+	 * |-------------------------------|---------------|------------|
+	 * | MPOL_DEFAULT                  | local         | NULL       |
+	 * | MPOL_PREFERRED                | best          | NULL       |
+	 * | MPOL_INTERLEAVE               | ERR           | ERR        |
+	 * | MPOL_BIND                     | local         | pol->nodes |
+	 * | MPOL_PREFERRED_MANY           | best          | pol->nodes |
+	 * | MPOL_PREFERRED_MANY (round 2) | local         | NULL       |
+	 * +-------------------------------+---------------+------------+
+	 */
+	if (pol->mode == MPOL_PREFERRED_MANY) {
+		gfp_mask |= __GFP_NOWARN;
+
+		/* Skip direct reclaim, as there will be a second try */
+		gfp_mask &= ~__GFP_DIRECT_RECLAIM;
+	}
+
+	page = __alloc_pages_nodemask(gfp_mask, order,
+				      policy_node(gfp, pol, preferred_nid),
+				      policy_nodemask(gfp, pol));
+
+	if (unlikely(!page && pol->mode == MPOL_PREFERRED_MANY))
+		page = __alloc_pages_nodemask(gfp, order, preferred_nid, NULL);
+
 	return page;
 }
 
@@ -2244,8 +2282,8 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		unsigned nid;
 
 		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
+		page = alloc_pages_policy(pol, gfp, order, nid);
 		mpol_cond_put(pol);
-		page = alloc_page_interleave(gfp, order, nid);
 		goto out;
 	}
 
@@ -2329,7 +2367,8 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	 * nor system default_policy
 	 */
 	if (pol->mode == MPOL_INTERLEAVE)
-		page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
+		page = alloc_pages_policy(pol, gfp, order,
+					  interleave_nodes(pol));
 	else
 		page = __alloc_pages_nodemask(gfp, order,
 				policy_node(gfp, pol, numa_node_id()),
 				policy_nodemask(gfp, pol));
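
For reference, the first-try/fallback control flow that alloc_pages_policy()
implements for MPOL_PREFERRED_MANY can be shown in isolation. Below is a
minimal standalone C sketch: the GFP flag values and the mock allocator are
illustrative stand-ins, not the kernel's definitions, and only the two-pass
flow mirrors the patch.

#include <stdio.h>
#include <stdbool.h>

/* Mock flag values for illustration only; not the kernel's definitions. */
#define __GFP_NOWARN         (1u << 0)  /* suppress allocation-failure warnings */
#define __GFP_DIRECT_RECLAIM (1u << 1)  /* allow blocking on memory reclaim */

/* Mock allocator: pretend the preferred nodes are out of memory. */
static bool mock_alloc(unsigned int gfp, bool preferred_only)
{
	(void)gfp;              /* a real allocator would honor these flags */
	return !preferred_only; /* succeed only on the unrestricted pass */
}

static bool alloc_two_pass(unsigned int gfp)
{
	unsigned int gfp_first = gfp;

	/* Pass 1: preferred nodes only; fail fast and quietly. */
	gfp_first |= __GFP_NOWARN;
	gfp_first &= ~__GFP_DIRECT_RECLAIM;
	if (mock_alloc(gfp_first, true))
		return true;

	/* Pass 2: original flags, all nodes allowed. */
	return mock_alloc(gfp, false);
}

int main(void)
{
	printf("allocation %s\n",
	       alloc_two_pass(__GFP_DIRECT_RECLAIM) ? "succeeded" : "failed");
	return 0;
}

Clearing __GFP_DIRECT_RECLAIM on the first pass keeps a failure there cheap:
rather than stalling in reclaim on the preferred nodes, the allocator falls
through quickly to the second, unrestricted pass ("Skip direct reclaim, as
there will be a second try" in the patch).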