From patchwork Wed Mar 3 10:20:45 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12113225
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
    Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
    Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang
Subject: [PATCH v3 01/14] mm/mempolicy: Add comment for missing LOCAL
Date: Wed, 3 Mar 2021 18:20:45 +0800
Message-Id: <1614766858-90344-2-git-send-email-feng.tang@intel.com>
In-Reply-To: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
References: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

MPOL_LOCAL is a bit weird because it is simply a different name for an
existing behavior (preferred policy with no node mask). It has been this
way since it was added here:

commit 479e2802d09f ("mm: mempolicy: Make MPOL_LOCAL a real policy")

In fact, it is so similar to MPOL_PREFERRED that when the policy is
created in mpol_new(), the mode is set as PREFERRED, and an internal
state representing LOCAL doesn't exist.

To prevent future explorers from scratching their heads as to why
MPOL_LOCAL isn't defined in the mpol_ops table, add a small comment
explaining the situation.

v2: Change comment to refer to mpol_new (Michal)

Link: https://lore.kernel.org/r/20200630212517.308045-2-ben.widawsky@intel.com
#Acked-by: Michal Hocko
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
Acked-by: Michal Hocko
---
 mm/mempolicy.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2c3a865..5730fc1 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -427,6 +427,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
         .create = mpol_new_bind,
         .rebind = mpol_rebind_nodemask,
     },
+    /* [MPOL_LOCAL] - see mpol_new() */
 };

 static int migrate_page_add(struct page *page, struct list_head *pagelist,
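For readers poking at this from userspace, the equivalence described in
the commit message can be observed directly. A minimal sketch, assuming
a kernel with MPOL_LOCAL (v3.8+) and the set_mempolicy(2) wrapper from
libnuma's <numaif.h> (link with -lnuma); the equivalence claim comes
from the commit message above, not from this snippet:

/* local_vs_preferred.c - illustrative sketch, error handling trimmed */
#include <numaif.h>
#include <stdio.h>

int main(void)
{
        /* Both calls request "allocate near the CPU I am running on". */
        if (set_mempolicy(MPOL_LOCAL, NULL, 0))     /* no nodemask at all */
                perror("set_mempolicy(MPOL_LOCAL)");
        if (set_mempolicy(MPOL_PREFERRED, NULL, 0)) /* empty mask -> local */
                perror("set_mempolicy(MPOL_PREFERRED)");
        /* mpol_new() turns both into the same internal PREFERRED state. */
        return 0;
}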
From patchwork Wed Mar 3 10:20:46 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12113227
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
    Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
    Ben Widawsky, Andi Kleen, Dan Williams, Dave Hansen, Feng Tang
Subject: [PATCH v3 02/14] mm/mempolicy: convert single preferred_node to full nodemask
Date: Wed, 3 Mar 2021 18:20:46 +0800
Message-Id: <1614766858-90344-3-git-send-email-feng.tang@intel.com>
In-Reply-To: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
References: <1614766858-90344-1-git-send-email-feng.tang@intel.com>

From: Dave Hansen

The NUMA APIs currently allow passing in a "preferred node" as a single
bit set in a nodemask. If more than one bit is set, bits after the
first are ignored. Internally, this is implemented as a single integer:
mempolicy->preferred_node.

This single node is generally OK for location-based NUMA where memory
being allocated will eventually be operated on by a single CPU.
However, in systems with multiple memory types, folks want to target a
*type* of memory instead of a location. For instance, someone might
want some high-bandwidth memory but does not care about the CPU next to
which it is allocated. Or, they want a cheap, high-capacity allocation
and want to target all NUMA nodes which have persistent memory in
volatile mode.

In both of these cases, the application wants to target a *set* of
nodes, but does not want strict MPOL_BIND behavior, as that could lead
to OOM-killer invocations or SIGSEGV. To get that behavior, an
MPOL_PREFERRED-style mode is desirable, but one that honors multiple
nodes set in the nodemask.

The first step in that direction is to be able to internally store
multiple preferred nodes, which is what this patch implements. It
should not introduce any functional changes; it just switches the
internal representation of mempolicy->preferred_node from an integer to
a nodemask called 'mempolicy->preferred_nodes'.

This is not a pie-in-the-sky dream for an API. It was a response to a
specific ask of more than one group at Intel. Specifically:
1. There are existing libraries that target memory types such as
   https://github.com/memkind/memkind. These are known to suffer from
   SIGSEGVs when memory is low on targeted memory "kinds" that span
   more than one node. The MCDRAM on a Xeon Phi in "Cluster on Die"
   mode is an example of this.

2. Volatile-use persistent memory users want to have a memory policy
   which is targeted at either "cheap and slow" (PMEM) or "expensive
   and fast" (DRAM). However, they do not want to experience allocation
   failures when the targeted type is unavailable.

3. Allocate-then-run. Generally, we let the process scheduler decide on
   which physical CPU to run a task. That location provides a default
   allocation policy, and memory availability is not generally
   considered when placing tasks. For situations where memory is
   valuable and constrained, some users want to allocate memory first,
   *then* allocate close compute resources to the allocation. This is
   the reverse of the normal (CPU) model. Accelerators such as GPUs
   that operate on core-mm-managed memory are interested in this model.

v2:
* Fix spelling errors in commit message. (Ben)
* clang-format. (Ben)
* Integrated bit from another patch. (Ben)
* Update the docs to reflect the internal data structure change (Ben)
* Don't advertise MPOL_PREFERRED_MANY in UAPI until we can handle it
  (Ben)
* Added more to the commit message (Dave)

Link: https://lore.kernel.org/r/20200630212517.308045-3-ben.widawsky@intel.com
Co-developed-by: Ben Widawsky
Signed-off-by: Dave Hansen
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
 .../admin-guide/mm/numa_memory_policy.rst |  6 ++--
 include/linux/mempolicy.h                 |  4 +--
 mm/mempolicy.c                            | 40 ++++++++++++----------
 3 files changed, 27 insertions(+), 23 deletions(-)
"Local" allocation policy can be viewed as a Preferred policy that starts at the node containing the cpu where the allocation diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index 5f1c74d..23ee105 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -47,8 +47,8 @@ struct mempolicy { unsigned short mode; /* See MPOL_* above */ unsigned short flags; /* See set_mempolicy() MPOL_F_* above */ union { - short preferred_node; /* preferred */ - nodemask_t nodes; /* interleave/bind */ + nodemask_t preferred_nodes; /* preferred */ + nodemask_t nodes; /* interleave/bind */ /* undefined for default */ } v; union { diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 5730fc1..8f4a32a 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -205,7 +205,7 @@ static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes) else if (nodes_empty(*nodes)) return -EINVAL; /* no allowed nodes */ else - pol->v.preferred_node = first_node(*nodes); + pol->v.preferred_nodes = nodemask_of_node(first_node(*nodes)); return 0; } @@ -345,22 +345,26 @@ static void mpol_rebind_preferred(struct mempolicy *pol, const nodemask_t *nodes) { nodemask_t tmp; + nodemask_t preferred_node; + + /* MPOL_PREFERRED uses only the first node in the mask */ + preferred_node = nodemask_of_node(first_node(*nodes)); if (pol->flags & MPOL_F_STATIC_NODES) { int node = first_node(pol->w.user_nodemask); if (node_isset(node, *nodes)) { - pol->v.preferred_node = node; + pol->v.preferred_nodes = nodemask_of_node(node); pol->flags &= ~MPOL_F_LOCAL; } else pol->flags |= MPOL_F_LOCAL; } else if (pol->flags & MPOL_F_RELATIVE_NODES) { mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes); - pol->v.preferred_node = first_node(tmp); + pol->v.preferred_nodes = tmp; } else if (!(pol->flags & MPOL_F_LOCAL)) { - pol->v.preferred_node = node_remap(pol->v.preferred_node, - pol->w.cpuset_mems_allowed, - *nodes); + nodes_remap(tmp, pol->v.preferred_nodes, + pol->w.cpuset_mems_allowed, preferred_node); + pol->v.preferred_nodes = tmp; pol->w.cpuset_mems_allowed = *nodes; } } @@ -912,7 +916,7 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes) break; case MPOL_PREFERRED: if (!(p->flags & MPOL_F_LOCAL)) - node_set(p->v.preferred_node, *nodes); + *nodes = p->v.preferred_nodes; /* else return empty node mask for local allocation */ break; default: @@ -1881,9 +1885,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy) /* Return the node id preferred by the given mempolicy, or the given id */ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd) { - if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL)) - nd = policy->v.preferred_node; - else { + if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL)) { + nd = first_node(policy->v.preferred_nodes); + } else { /* * __GFP_THISNODE shouldn't even be used with the bind policy * because we might easily break the expectation to stay on the @@ -1928,7 +1932,7 @@ unsigned int mempolicy_slab_node(void) /* * handled MPOL_F_LOCAL above */ - return policy->v.preferred_node; + return first_node(policy->v.preferred_nodes); case MPOL_INTERLEAVE: return interleave_nodes(policy); @@ -2062,7 +2066,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask) if (mempolicy->flags & MPOL_F_LOCAL) nid = numa_node_id(); else - nid = mempolicy->v.preferred_node; + nid = first_node(mempolicy->v.preferred_nodes); init_nodemask_of_node(mask, nid); break; @@ -2200,7 +2204,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct 
@@ -2200,7 +2204,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
          * node in its nodemask, we allocate the standard way.
          */
         if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
-            hpage_node = pol->v.preferred_node;
+            hpage_node = first_node(pol->v.preferred_nodes);

         nmask = policy_nodemask(gfp, pol);
         if (!nmask || node_isset(hpage_node, *nmask)) {
@@ -2339,7 +2343,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
         /* a's ->flags is the same as b's */
         if (a->flags & MPOL_F_LOCAL)
             return true;
-        return a->v.preferred_node == b->v.preferred_node;
+        return nodes_equal(a->v.preferred_nodes, b->v.preferred_nodes);
     default:
         BUG();
         return false;
@@ -2483,7 +2487,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
         if (pol->flags & MPOL_F_LOCAL)
             polnid = numa_node_id();
         else
-            polnid = pol->v.preferred_node;
+            polnid = first_node(pol->v.preferred_nodes);
         break;

     case MPOL_BIND:
@@ -2800,7 +2804,7 @@ void __init numa_policy_init(void)
             .refcnt = ATOMIC_INIT(1),
             .mode = MPOL_PREFERRED,
             .flags = MPOL_F_MOF | MPOL_F_MORON,
-            .v = { .preferred_node = nid, },
+            .v = { .preferred_nodes = nodemask_of_node(nid), },
         };
     }

@@ -2966,7 +2970,7 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
     if (mode != MPOL_PREFERRED)
         new->v.nodes = nodes;
     else if (nodelist)
-        new->v.preferred_node = first_node(nodes);
+        new->v.preferred_nodes = nodemask_of_node(first_node(nodes));
     else
         new->flags |= MPOL_F_LOCAL;

@@ -3019,7 +3023,7 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
         if (flags & MPOL_F_LOCAL)
             mode = MPOL_LOCAL;
         else
-            node_set(pol->v.preferred_node, nodes);
+            nodes_or(nodes, nodes, pol->v.preferred_nodes);
         break;
     case MPOL_BIND:
     case MPOL_INTERLEAVE:
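Since this patch is intended to be a functional no-op, one way to see
that MPOL_PREFERRED still honors only the first node is from userspace.
A minimal sketch, assuming nodes 0 and 1 exist and libnuma's <numaif.h>
wrappers (link with -lnuma); the expected result reflects the commit
message's "bits after the first are ignored" rule:

/* preferred_one_node.c - illustrative sketch only */
#include <numaif.h>
#include <stdio.h>

int main(void)
{
        unsigned long ask = (1UL << 0) | (1UL << 1); /* request nodes 0 and 1 */
        unsigned long got = 0;
        int mode = -1;

        if (set_mempolicy(MPOL_PREFERRED, &ask, sizeof(ask) * 8))
                perror("set_mempolicy");
        if (get_mempolicy(&mode, &got, sizeof(got) * 8, NULL, 0))
                perror("get_mempolicy");
        /* Expect mode == MPOL_PREFERRED and got == 0x1: node 1 was dropped. */
        printf("mode=%d nodemask=%#lx\n", mode, got);
        return 0;
}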
From patchwork Wed Mar 3 10:20:47 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12113229
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
    Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
    Ben Widawsky, Andi Kleen, Dan Williams, Dave Hansen, Feng Tang
Subject: [PATCH v3 03/14] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes
Date: Wed, 3 Mar 2021 18:20:47 +0800
Message-Id: <1614766858-90344-4-git-send-email-feng.tang@intel.com>
In-Reply-To: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
References: <1614766858-90344-1-git-send-email-feng.tang@intel.com>

From: Dave Hansen

MPOL_PREFERRED honors only a single node set in the nodemask. Add the
bare define for a new mode that will allow more than one. This patch
does all the plumbing without actually adding the new policy type.

v2:
* Plumb most of MPOL_PREFERRED_MANY without exposing it in the UAPI (Ben)
* Fixes for checkpatch (Ben)

Link: https://lore.kernel.org/r/20200630212517.308045-4-ben.widawsky@intel.com
Co-developed-by: Ben Widawsky
Signed-off-by: Ben Widawsky
Signed-off-by: Dave Hansen
Signed-off-by: Feng Tang
---
 mm/mempolicy.c | 46 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 40 insertions(+), 6 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 8f4a32a..79258b2 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -31,6 +31,9 @@
  *                but useful to set in a VMA when you have a non default
  *                process policy.
  *
+ * preferred many Try a set of nodes first before normal fallback. This is
+ *                similar to preferred without the special case.
+ *
  * default        Allocate on the local node first, or when on a VMA
  *                use the process policy. This is what Linux always did
  *                in a NUMA aware kernel and still does by, ahem, default.
@@ -105,6 +108,8 @@

 #include "internal.h"

+#define MPOL_PREFERRED_MANY MPOL_MAX
+
 /* Internal flags */
 #define MPOL_MF_DISCONTIG_OK (MPOL_MF_INTERNAL << 0)   /* Skip checks for continuous vmas */
 #define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1)         /* Invert check for nodemask */
@@ -175,7 +180,7 @@ struct mempolicy *get_task_policy(struct task_struct *p)
 static const struct mempolicy_operations {
     int (*create)(struct mempolicy *pol, const nodemask_t *nodes);
     void (*rebind)(struct mempolicy *pol, const nodemask_t *nodes);
-} mpol_ops[MPOL_MAX];
+} mpol_ops[MPOL_MAX + 1];

 static inline int mpol_store_user_nodemask(const struct mempolicy *pol)
 {
@@ -415,7 +420,7 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
     mmap_write_unlock(mm);
 }

-static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
+static const struct mempolicy_operations mpol_ops[MPOL_MAX + 1] = {
     [MPOL_DEFAULT] = {
         .rebind = mpol_rebind_default,
     },
@@ -432,6 +437,10 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
         .rebind = mpol_rebind_nodemask,
     },
     /* [MPOL_LOCAL] - see mpol_new() */
+    [MPOL_PREFERRED_MANY] = {
+        .create = NULL,
+        .rebind = NULL,
+    },
 };

 static int migrate_page_add(struct page *page, struct list_head *pagelist,
@@ -914,6 +923,9 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
     case MPOL_INTERLEAVE:
         *nodes = p->v.nodes;
         break;
+    case MPOL_PREFERRED_MANY:
+        *nodes = p->v.preferred_nodes;
+        break;
     case MPOL_PREFERRED:
         if (!(p->flags & MPOL_F_LOCAL))
             *nodes = p->v.preferred_nodes;
@@ -1885,7 +1897,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 /* Return the node id preferred by the given mempolicy, or the given id */
 static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
 {
-    if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL)) {
+    if ((policy->mode == MPOL_PREFERRED ||
+         policy->mode == MPOL_PREFERRED_MANY) &&
+        !(policy->flags & MPOL_F_LOCAL)) {
         nd = first_node(policy->v.preferred_nodes);
     } else {
         /*
@@ -1928,6 +1942,7 @@ unsigned int mempolicy_slab_node(void)
         return node;

     switch (policy->mode) {
+    case MPOL_PREFERRED_MANY:
     case MPOL_PREFERRED:
         /*
          * handled MPOL_F_LOCAL above
@@ -2062,6 +2077,9 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
     task_lock(current);
     mempolicy = current->mempolicy;
     switch (mempolicy->mode) {
+    case MPOL_PREFERRED_MANY:
+        *mask = mempolicy->v.preferred_nodes;
+        break;
     case MPOL_PREFERRED:
         if (mempolicy->flags & MPOL_F_LOCAL)
             nid = numa_node_id();
@@ -2116,6 +2134,9 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
      * nodes in mask.
      */
         break;
+    case MPOL_PREFERRED_MANY:
+        ret = nodes_intersects(mempolicy->v.preferred_nodes, *mask);
+        break;
     case MPOL_BIND:
     case MPOL_INTERLEAVE:
         ret = nodes_intersects(mempolicy->v.nodes, *mask);
@@ -2200,10 +2221,13 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
          * node and don't fall back to other nodes, as the cost of
          * remote accesses would likely offset THP benefits.
          *
-         * If the policy is interleave, or does not allow the current
-         * node in its nodemask, we allocate the standard way.
+         * If the policy is interleave or multiple preferred nodes, or
+         * does not allow the current node in its nodemask, we allocate
+         * the standard way.
          */
-        if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
+        if ((pol->mode == MPOL_PREFERRED ||
+             pol->mode == MPOL_PREFERRED_MANY) &&
+            !(pol->flags & MPOL_F_LOCAL))
             hpage_node = first_node(pol->v.preferred_nodes);

         nmask = policy_nodemask(gfp, pol);
         if (!nmask || node_isset(hpage_node, *nmask)) {
@@ -2339,6 +2363,9 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
     case MPOL_BIND:
     case MPOL_INTERLEAVE:
         return !!nodes_equal(a->v.nodes, b->v.nodes);
+    case MPOL_PREFERRED_MANY:
+        return !!nodes_equal(a->v.preferred_nodes,
+                     b->v.preferred_nodes);
     case MPOL_PREFERRED:
         /* a's ->flags is the same as b's */
         if (a->flags & MPOL_F_LOCAL)
@@ -2507,6 +2534,8 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
         polnid = zone_to_nid(z->zone);
         break;

+    /* case MPOL_PREFERRED_MANY: */
+
     default:
         BUG();
     }
@@ -2858,6 +2887,7 @@ static const char * const policy_modes[] =
     [MPOL_BIND] = "bind",
     [MPOL_INTERLEAVE] = "interleave",
     [MPOL_LOCAL] = "local",
+    [MPOL_PREFERRED_MANY] = "prefer (many)",
 };

@@ -2937,6 +2967,7 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
         if (!nodelist)
             err = 0;
         goto out;
+    case MPOL_PREFERRED_MANY:
     case MPOL_BIND:
         /*
          * Insist on a nodelist
@@ -3019,6 +3050,9 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
     switch (mode) {
     case MPOL_DEFAULT:
         break;
+    case MPOL_PREFERRED_MANY:
+        WARN_ON(flags & MPOL_F_LOCAL);
+        fallthrough;
     case MPOL_PREFERRED:
         if (flags & MPOL_F_LOCAL)
             mode = MPOL_LOCAL;
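Although MPOL_PREFERRED_MANY is deliberately kept out of the UAPI at
this point (it is defined as MPOL_MAX inside mempolicy.c), the end goal
is roughly the usage below. This is a hypothetical sketch: the mode
name and value are placeholders until the mode is actually exported,
and a current kernel would reject the call with EINVAL:

/* preferred_many_sketch.c - hypothetical, does not work on current kernels */
#include <numaif.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5   /* placeholder value; not yet in the UAPI */
#endif

int prefer_pmem_nodes(void)
{
        /* Prefer (say) PMEM nodes 2 and 3, but fall back rather than OOM. */
        unsigned long mask = (1UL << 2) | (1UL << 3);

        return set_mempolicy(MPOL_PREFERRED_MANY, &mask, sizeof(mask) * 8);
}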
From patchwork Wed Mar 3 10:20:48 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12113231
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
    Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
    Ben Widawsky, Andi Kleen, Dan Williams, Dave Hansen, Feng Tang
Subject: [PATCH v3 04/14] mm/mempolicy: allow preferred code to take a nodemask
Date: Wed, 3 Mar 2021 18:20:48 +0800
Message-Id: <1614766858-90344-5-git-send-email-feng.tang@intel.com>
In-Reply-To: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
References: <1614766858-90344-1-git-send-email-feng.tang@intel.com>

From: Dave Hansen

Create a helper function (mpol_new_preferred_many()) which is usable
both by the old, single-node MPOL_PREFERRED and the new
MPOL_PREFERRED_MANY. Enforce the old single-node MPOL_PREFERRED
behavior in the "new" version of mpol_new_preferred(), which calls
mpol_new_preferred_many().
v3:
* Fix a stack overflow caused by an empty nodemask (Feng)

Link: https://lore.kernel.org/r/20200630212517.308045-5-ben.widawsky@intel.com
Signed-off-by: Dave Hansen
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
 mm/mempolicy.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 79258b2..19ec954 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -203,17 +203,34 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes)
     return 0;
 }

-static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
+static int mpol_new_preferred_many(struct mempolicy *pol,
+                   const nodemask_t *nodes)
 {
     if (!nodes)
         pol->flags |= MPOL_F_LOCAL;    /* local allocation */
     else if (nodes_empty(*nodes))
         return -EINVAL;            /*  no allowed nodes */
     else
-        pol->v.preferred_nodes = nodemask_of_node(first_node(*nodes));
+        pol->v.preferred_nodes = *nodes;
     return 0;
 }

+static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
+{
+    if (nodes) {
+        /* MPOL_PREFERRED can only take a single node: */
+        nodemask_t tmp;
+
+        if (nodes_empty(*nodes))
+            return -EINVAL;
+
+        tmp = nodemask_of_node(first_node(*nodes));
+        return mpol_new_preferred_many(pol, &tmp);
+    }
+
+    return mpol_new_preferred_many(pol, NULL);
+}
+
 static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes)
 {
     if (nodes_empty(*nodes))
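The v3 fix matters because first_node() on an empty mask is not
meaningful. A standalone model of the wrapper logic above (toy types,
not kernel code) showing why the empty check must run before
first_node():

#include <assert.h>
#include <errno.h>
#include <stddef.h>

typedef unsigned long nodemask_t;                /* toy stand-in */
#define nodes_empty(m)      ((m) == 0)
#define first_node(m)       ((int)__builtin_ctzl(m)) /* undefined for 0! */
#define nodemask_of_node(n) (1UL << (n))

static int new_preferred_many(nodemask_t *out, const nodemask_t *nodes)
{
        if (!nodes)
                return 0;               /* local allocation */
        if (nodes_empty(*nodes))
                return -EINVAL;         /* no allowed nodes */
        *out = *nodes;
        return 0;
}

static int new_preferred(nodemask_t *out, const nodemask_t *nodes)
{
        nodemask_t tmp;

        if (!nodes)
                return new_preferred_many(out, NULL);
        if (nodes_empty(*nodes))
                return -EINVAL;         /* checked before first_node() */
        tmp = nodemask_of_node(first_node(*nodes));
        return new_preferred_many(out, &tmp);
}

int main(void)
{
        nodemask_t out = 0, both = 0x3, none = 0;

        assert(new_preferred(&out, &both) == 0 && out == 0x1);
        assert(new_preferred(&out, &none) == -EINVAL);
        return 0;
}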
From patchwork Wed Mar 3 10:20:49 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12113233
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
    Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
    Ben Widawsky, Andi Kleen, Dan Williams, Dave Hansen, Feng Tang
Subject: [PATCH v3 05/14] mm/mempolicy: refactor rebind code for PREFERRED_MANY
Date: Wed, 3 Mar 2021 18:20:49 +0800
Message-Id: <1614766858-90344-6-git-send-email-feng.tang@intel.com>
In-Reply-To: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
References: <1614766858-90344-1-git-send-email-feng.tang@intel.com>

From: Dave Hansen

Again, this extracts the "only one node must be set" behavior of
MPOL_PREFERRED. It retains virtually all of the existing code so it can
be used by MPOL_PREFERRED_MANY as well.
v2:
* Fixed typos in commit message. (Ben)
* Merged bits from other patches. (Ben)
* Annotate mpol_rebind_preferred_many as unused. (Ben)

Link: https://lore.kernel.org/r/20200630212517.308045-6-ben.widawsky@intel.com
Signed-off-by: Dave Hansen
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
 mm/mempolicy.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 19ec954..0103c20 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -363,14 +363,11 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
     pol->v.nodes = tmp;
 }

-static void mpol_rebind_preferred(struct mempolicy *pol,
-                        const nodemask_t *nodes)
+static void mpol_rebind_preferred_common(struct mempolicy *pol,
+                     const nodemask_t *preferred_nodes,
+                     const nodemask_t *nodes)
 {
     nodemask_t tmp;
-    nodemask_t preferred_node;
-
-    /* MPOL_PREFERRED uses only the first node in the mask */
-    preferred_node = nodemask_of_node(first_node(*nodes));

     if (pol->flags & MPOL_F_STATIC_NODES) {
         int node = first_node(pol->w.user_nodemask);
@@ -385,12 +382,30 @@ static void mpol_rebind_preferred(struct mempolicy *pol,
         pol->v.preferred_nodes = tmp;
     } else if (!(pol->flags & MPOL_F_LOCAL)) {
         nodes_remap(tmp, pol->v.preferred_nodes,
-                pol->w.cpuset_mems_allowed, preferred_node);
+                pol->w.cpuset_mems_allowed, *preferred_nodes);
         pol->v.preferred_nodes = tmp;
         pol->w.cpuset_mems_allowed = *nodes;
     }
 }

+/* MPOL_PREFERRED_MANY allows multiple nodes to be set in 'nodes' */
+static void __maybe_unused mpol_rebind_preferred_many(struct mempolicy *pol,
+                              const nodemask_t *nodes)
+{
+    mpol_rebind_preferred_common(pol, nodes, nodes);
+}
+
+static void mpol_rebind_preferred(struct mempolicy *pol,
+                  const nodemask_t *nodes)
+{
+    nodemask_t preferred_node;
+
+    /* MPOL_PREFERRED uses only the first node in 'nodes' */
+    preferred_node = nodemask_of_node(first_node(*nodes));
+
+    mpol_rebind_preferred_common(pol, &preferred_node, nodes);
+}
+
 /*
  * mpol_rebind_policy - Migrate a policy to a different set of nodes
  *
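The shape of this refactor (one shared helper plus thin,
policy-specific wrappers) can be seen in miniature below. This is an
illustrative model with toy types, not the kernel code; the kernel
version additionally handles the STATIC/RELATIVE flag cases shown in
the diff:

typedef unsigned long nodemask_t;       /* toy stand-in */

static void rebind_common(nodemask_t *pol, const nodemask_t *preferred,
                          const nodemask_t *allowed)
{
        /* the shared remap logic lives here, exactly once */
        *pol = *preferred & *allowed;
}

/* MPOL_PREFERRED: reduce to the first (lowest) node before delegating */
static void rebind_preferred(nodemask_t *pol, const nodemask_t *allowed)
{
        nodemask_t one = *allowed & -*allowed;  /* lowest set bit */

        rebind_common(pol, &one, allowed);
}

/* MPOL_PREFERRED_MANY: delegate with the whole mask */
static void rebind_preferred_many(nodemask_t *pol, const nodemask_t *allowed)
{
        rebind_common(pol, allowed, allowed);
}

int main(void)
{
        nodemask_t pol = 0, allowed = 0x6;      /* nodes 1 and 2 */

        rebind_preferred(&pol, &allowed);       /* pol == 0x2: node 1 only */
        rebind_preferred_many(&pol, &allowed);  /* pol == 0x6: both nodes */
        return 0;
}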
From patchwork Wed Mar 3 10:20:50 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12113235
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
    Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
    Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang
Subject: [PATCH v3 06/14] mm/mempolicy: kill v.preferred_nodes
Date: Wed, 3 Mar 2021 18:20:50 +0800
Message-Id: <1614766858-90344-7-git-send-email-feng.tang@intel.com>
In-Reply-To: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
References: <1614766858-90344-1-git-send-email-feng.tang@intel.com>

From: Ben Widawsky

Now that preferred_nodes is just a mask, and the policies are mutually
exclusive, there is no reason to keep a separate preferred_nodes mask.

This patch is optional. It definitely helps clean up code in future
patches, but there is no functional difference to leaving it with the
previous name. I do believe it helps demonstrate the exclusivity of the
fields.
Link: https://lore.kernel.org/r/20200630212517.308045-7-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
 include/linux/mempolicy.h |   6 +--
 mm/mempolicy.c            | 112 ++++++++++++++++++++++------------------------
 2 files changed, 55 insertions(+), 63 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 23ee105..ec811c3 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -46,11 +46,7 @@ struct mempolicy {
     atomic_t refcnt;
     unsigned short mode;    /* See MPOL_* above */
     unsigned short flags;   /* See set_mempolicy() MPOL_F_* above */
-    union {
-        nodemask_t       preferred_nodes; /* preferred */
-        nodemask_t       nodes;           /* interleave/bind */
-        /* undefined for default */
-    } v;
+    nodemask_t nodes;       /* interleave/bind/many */
     union {
         nodemask_t cpuset_mems_allowed; /* relative to these nodes */
         nodemask_t user_nodemask;       /* nodemask passed by user */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0103c20..fe1d83c 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -199,7 +199,7 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes)
 {
     if (nodes_empty(*nodes))
         return -EINVAL;
-    pol->v.nodes = *nodes;
+    pol->nodes = *nodes;
     return 0;
 }

@@ -211,7 +211,7 @@ static int mpol_new_preferred_many(struct mempolicy *pol,
     else if (nodes_empty(*nodes))
         return -EINVAL;            /*  no allowed nodes */
     else
-        pol->v.preferred_nodes = *nodes;
+        pol->nodes = *nodes;
     return 0;
 }

@@ -235,7 +235,7 @@ static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes)
 {
     if (nodes_empty(*nodes))
         return -EINVAL;
-    pol->v.nodes = *nodes;
+    pol->nodes = *nodes;
     return 0;
 }

@@ -352,15 +352,15 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
     else if (pol->flags & MPOL_F_RELATIVE_NODES)
         mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
     else {
-        nodes_remap(tmp, pol->v.nodes,pol->w.cpuset_mems_allowed,
-                                *nodes);
+        nodes_remap(tmp, pol->nodes, pol->w.cpuset_mems_allowed,
+                *nodes);
         pol->w.cpuset_mems_allowed = *nodes;
     }

     if (nodes_empty(tmp))
         tmp = *nodes;

-    pol->v.nodes = tmp;
+    pol->nodes = tmp;
 }

@@ -373,17 +373,17 @@ static void mpol_rebind_preferred_common(struct mempolicy *pol,
         int node = first_node(pol->w.user_nodemask);

         if (node_isset(node, *nodes)) {
-            pol->v.preferred_nodes = nodemask_of_node(node);
+            pol->nodes = nodemask_of_node(node);
             pol->flags &= ~MPOL_F_LOCAL;
         } else
             pol->flags |= MPOL_F_LOCAL;
     } else if (pol->flags & MPOL_F_RELATIVE_NODES) {
         mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
-        pol->v.preferred_nodes = tmp;
+        pol->nodes = tmp;
     } else if (!(pol->flags & MPOL_F_LOCAL)) {
-        nodes_remap(tmp, pol->v.preferred_nodes,
-                pol->w.cpuset_mems_allowed, *preferred_nodes);
-        pol->v.preferred_nodes = tmp;
+        nodes_remap(tmp, pol->nodes, pol->w.cpuset_mems_allowed,
+                *preferred_nodes);
+        pol->nodes = tmp;
         pol->w.cpuset_mems_allowed = *nodes;
     }
 }
@@ -953,14 +953,14 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
     switch (p->mode) {
     case MPOL_BIND:
     case MPOL_INTERLEAVE:
-        *nodes = p->v.nodes;
+        *nodes = p->nodes;
         break;
     case MPOL_PREFERRED_MANY:
-        *nodes = p->v.preferred_nodes;
+        *nodes = p->nodes;
         break;
     case MPOL_PREFERRED:
         if (!(p->flags & MPOL_F_LOCAL))
-            *nodes = p->v.preferred_nodes;
+            *nodes = p->nodes;
         /* else return empty node mask for local allocation */
         break;
     default:
@@ -1046,7 +1046,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
             *policy = err;
         } else
            if (pol == current->mempolicy &&
                pol->mode == MPOL_INTERLEAVE) {
-            *policy = next_node_in(current->il_prev, pol->v.nodes);
+            *policy = next_node_in(current->il_prev, pol->nodes);
         } else {
             err = -EINVAL;
             goto out;
@@ -1898,14 +1898,14 @@ static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
     BUG_ON(dynamic_policy_zone == ZONE_MOVABLE);

     /*
-     * if policy->v.nodes has movable memory only,
+     * if policy->nodes has movable memory only,
      * we apply policy when gfp_zone(gfp) = ZONE_MOVABLE only.
      *
-     * policy->v.nodes is intersect with node_states[N_MEMORY].
+     * policy->nodes is intersect with node_states[N_MEMORY].
      * so if the following test faile, it implies
-     * policy->v.nodes has movable memory only.
+     * policy->nodes has movable memory only.
      */
-    if (!nodes_intersects(policy->v.nodes, node_states[N_HIGH_MEMORY]))
+    if (!nodes_intersects(policy->nodes, node_states[N_HIGH_MEMORY]))
         dynamic_policy_zone = ZONE_MOVABLE;

     return zone >= dynamic_policy_zone;
@@ -1919,9 +1919,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 {
     /* Lower zones don't get a nodemask applied for MPOL_BIND */
     if (unlikely(policy->mode == MPOL_BIND) &&
-            apply_policy_zone(policy, gfp_zone(gfp)) &&
-            cpuset_nodemask_valid_mems_allowed(&policy->v.nodes))
-        return &policy->v.nodes;
+        apply_policy_zone(policy, gfp_zone(gfp)) &&
+        cpuset_nodemask_valid_mems_allowed(&policy->nodes))
+        return &policy->nodes;

     return NULL;
 }
@@ -1932,7 +1932,7 @@ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
     if ((policy->mode == MPOL_PREFERRED ||
          policy->mode == MPOL_PREFERRED_MANY) &&
         !(policy->flags & MPOL_F_LOCAL)) {
-        nd = first_node(policy->v.preferred_nodes);
+        nd = first_node(policy->nodes);
     } else {
         /*
          * __GFP_THISNODE shouldn't even be used with the bind policy
@@ -1951,7 +1951,7 @@ static unsigned interleave_nodes(struct mempolicy *policy)
     unsigned next;
     struct task_struct *me = current;

-    next = next_node_in(me->il_prev, policy->v.nodes);
+    next = next_node_in(me->il_prev, policy->nodes);
     if (next < MAX_NUMNODES)
         me->il_prev = next;
     return next;
@@ -1979,7 +1979,7 @@ unsigned int mempolicy_slab_node(void)
         /*
          * handled MPOL_F_LOCAL above
          */
-        return first_node(policy->v.preferred_nodes);
+        return first_node(policy->nodes);

     case MPOL_INTERLEAVE:
         return interleave_nodes(policy);
@@ -1995,7 +1995,7 @@ unsigned int mempolicy_slab_node(void)
         enum zone_type highest_zoneidx = gfp_zone(GFP_KERNEL);
         zonelist = &NODE_DATA(node)->node_zonelists[ZONELIST_FALLBACK];
         z = first_zones_zonelist(zonelist, highest_zoneidx,
-                            &policy->v.nodes);
+                     &policy->nodes);
         return z->zone ? zone_to_nid(z->zone) : node;
     }

@@ -2006,12 +2006,12 @@ unsigned int mempolicy_slab_node(void)

 /*
  * Do static interleaving for a VMA with known offset @n.  Returns the n'th
- * node in pol->v.nodes (starting from n=0), wrapping around if n exceeds the
+ * node in pol->nodes (starting from n=0), wrapping around if n exceeds the
  * number of present nodes.
  */
 static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
 {
-    unsigned nnodes = nodes_weight(pol->v.nodes);
+    unsigned nnodes = nodes_weight(pol->nodes);
     unsigned target;
     int i;
     int nid;
@@ -2019,9 +2019,9 @@ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
     if (!nnodes)
         return numa_node_id();
     target = (unsigned int)n % nnodes;
-    nid = first_node(pol->v.nodes);
+    nid = first_node(pol->nodes);
     for (i = 0; i < target; i++)
-        nid = next_node(nid, pol->v.nodes);
+        nid = next_node(nid, pol->nodes);
     return nid;
 }

@@ -2077,7 +2077,7 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
     } else {
         nid = policy_node(gfp_flags, *mpol, numa_node_id());
         if ((*mpol)->mode == MPOL_BIND)
-            *nodemask = &(*mpol)->v.nodes;
+            *nodemask = &(*mpol)->nodes;
     }
     return nid;
 }
@@ -2110,19 +2110,19 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
     mempolicy = current->mempolicy;
     switch (mempolicy->mode) {
     case MPOL_PREFERRED_MANY:
-        *mask = mempolicy->v.preferred_nodes;
+        *mask = mempolicy->nodes;
         break;
     case MPOL_PREFERRED:
         if (mempolicy->flags & MPOL_F_LOCAL)
             nid = numa_node_id();
         else
-            nid = first_node(mempolicy->v.preferred_nodes);
+            nid = first_node(mempolicy->nodes);
         init_nodemask_of_node(mask, nid);
         break;

     case MPOL_BIND:
     case MPOL_INTERLEAVE:
-        *mask = mempolicy->v.nodes;
+        *mask = mempolicy->nodes;
         break;

     default:
@@ -2167,11 +2167,11 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
      */
         break;
     case MPOL_PREFERRED_MANY:
-        ret = nodes_intersects(mempolicy->v.preferred_nodes, *mask);
+        ret = nodes_intersects(mempolicy->nodes, *mask);
         break;
     case MPOL_BIND:
     case MPOL_INTERLEAVE:
-        ret = nodes_intersects(mempolicy->v.nodes, *mask);
+        ret = nodes_intersects(mempolicy->nodes, *mask);
         break;
     default:
         BUG();
@@ -2260,7 +2260,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
         if ((pol->mode == MPOL_PREFERRED ||
              pol->mode == MPOL_PREFERRED_MANY) &&
             !(pol->flags & MPOL_F_LOCAL))
-            hpage_node = first_node(pol->v.preferred_nodes);
+            hpage_node = first_node(pol->nodes);

         nmask = policy_nodemask(gfp, pol);
         if (!nmask || node_isset(hpage_node, *nmask)) {
@@ -2394,15 +2394,14 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
     switch (a->mode) {
     case MPOL_BIND:
     case MPOL_INTERLEAVE:
-        return !!nodes_equal(a->v.nodes, b->v.nodes);
+        return !!nodes_equal(a->nodes, b->nodes);
     case MPOL_PREFERRED_MANY:
-        return !!nodes_equal(a->v.preferred_nodes,
-                     b->v.preferred_nodes);
+        return !!nodes_equal(a->nodes, b->nodes);
     case MPOL_PREFERRED:
         /* a's ->flags is the same as b's */
         if (a->flags & MPOL_F_LOCAL)
             return true;
-        return nodes_equal(a->v.preferred_nodes, b->v.preferred_nodes);
+        return nodes_equal(a->nodes, b->nodes);
     default:
         BUG();
         return false;
@@ -2546,7 +2545,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
     if (pol->flags & MPOL_F_LOCAL)
         polnid = numa_node_id();
     else
-        polnid = first_node(pol->v.preferred_nodes);
+        polnid = first_node(pol->nodes);
     break;

     case MPOL_BIND:
@@ -2557,12 +2556,11 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
      * else select nearest allowed node, if any.
      * If no allowed nodes, use current [!misplaced].
      */
-        if (node_isset(curnid, pol->v.nodes))
+        if (node_isset(curnid, pol->nodes))
             goto out;
-        z = first_zones_zonelist(
-                node_zonelist(numa_node_id(), GFP_HIGHUSER),
-                gfp_zone(GFP_HIGHUSER),
-                &pol->v.nodes);
+        z = first_zones_zonelist(node_zonelist(numa_node_id(),
+                               GFP_HIGHUSER),
+                     gfp_zone(GFP_HIGHUSER), &pol->nodes);
         polnid = zone_to_nid(z->zone);
         break;
@@ -2763,11 +2761,9 @@ int mpol_set_shared_policy(struct shared_policy *info,
     struct sp_node *new = NULL;
     unsigned long sz = vma_pages(vma);

-    pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n",
-         vma->vm_pgoff,
-         sz, npol ? npol->mode : -1,
-         npol ? npol->flags : -1,
-         npol ? nodes_addr(npol->v.nodes)[0] : NUMA_NO_NODE);
+    pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n", vma->vm_pgoff, sz,
+         npol ? npol->mode : -1, npol ? npol->flags : -1,
+         npol ? nodes_addr(npol->nodes)[0] : NUMA_NO_NODE);

     if (npol) {
         new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol);
@@ -2861,11 +2857,11 @@ void __init numa_policy_init(void)
                      0, SLAB_PANIC, NULL);

     for_each_node(nid) {
-        preferred_node_policy[nid] = (struct mempolicy) {
+        preferred_node_policy[nid] = (struct mempolicy){
             .refcnt = ATOMIC_INIT(1),
             .mode = MPOL_PREFERRED,
             .flags = MPOL_F_MOF | MPOL_F_MORON,
-            .v = { .preferred_nodes = nodemask_of_node(nid), },
+            .nodes = nodemask_of_node(nid),
         };
     }

@@ -3031,9 +3027,9 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
      * for /proc/mounts, /proc/pid/mounts and /proc/pid/mountinfo.
      */
     if (mode != MPOL_PREFERRED)
-        new->v.nodes = nodes;
+        new->nodes = nodes;
     else if (nodelist)
-        new->v.preferred_nodes = nodemask_of_node(first_node(nodes));
+        new->nodes = nodemask_of_node(first_node(nodes));
     else
         new->flags |= MPOL_F_LOCAL;

@@ -3089,11 +3085,11 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
     if (flags & MPOL_F_LOCAL)
         mode = MPOL_LOCAL;
     else
-        nodes_or(nodes, nodes, pol->v.preferred_nodes);
+        nodes_or(nodes, nodes, pol->nodes);
     break;
     case MPOL_BIND:
     case MPOL_INTERLEAVE:
-        nodes = pol->v.nodes;
+        nodes = pol->nodes;
     break;
     default:
         WARN_ON_ONCE(1);
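One way to convince yourself the union removal is safe: both members
had the same type, so the union contributed no extra storage or layout.
A toy model (fixed-size mask, not the kernel's real nodemask_t) that a
C11 compiler can check:

#include <assert.h>

typedef struct { unsigned long bits[2]; } nodemask_t; /* toy stand-in */

union old_v {                  /* the former struct mempolicy member 'v' */
        nodemask_t preferred_nodes;
        nodemask_t nodes;
};

static_assert(sizeof(union old_v) == sizeof(nodemask_t),
              "a union of identical types is layout-identical to one member");

int main(void) { return 0; }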
From patchwork Wed Mar 3 10:20:51 2021
From: Feng Tang
Subject: [PATCH v3 07/14] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND
Date: Wed, 3 Mar 2021 18:20:51 +0800
Message-Id: <1614766858-90344-8-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

Begin the real plumbing for handling this new policy. Now that the internal representation for preferred nodes and bound nodes is the same, and we can envision how multiple preferred nodes will behave, there are obvious places where we can simply reuse the bind behavior. In v1 of this series, the moral equivalent was "mm: Finish handling MPOL_PREFERRED_MANY". Like that patch, this one implements the easiest spots for the new policy; unlike it, this one simply reuses BIND.
Link: https://lore.kernel.org/r/20200630212517.308045-8-ben.widawsky@intel.com Signed-off-by: Ben Widawsky Signed-off-by: Feng Tang --- mm/mempolicy.c | 22 +++++++--------------- 1 file changed, 7 insertions(+), 15 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index fe1d83c..80cb554 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -953,8 +953,6 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes) switch (p->mode) { case MPOL_BIND: case MPOL_INTERLEAVE: - *nodes = p->nodes; - break; case MPOL_PREFERRED_MANY: *nodes = p->nodes; break; @@ -1918,7 +1916,8 @@ static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone) nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy) { /* Lower zones don't get a nodemask applied for MPOL_BIND */ - if (unlikely(policy->mode == MPOL_BIND) && + if (unlikely(policy->mode == MPOL_BIND || + policy->mode == MPOL_PREFERRED_MANY) && apply_policy_zone(policy, gfp_zone(gfp)) && cpuset_nodemask_valid_mems_allowed(&policy->nodes)) return &policy->nodes; @@ -1974,7 +1973,6 @@ unsigned int mempolicy_slab_node(void) return node; switch (policy->mode) { - case MPOL_PREFERRED_MANY: case MPOL_PREFERRED: /* * handled MPOL_F_LOCAL above @@ -1984,6 +1982,7 @@ unsigned int mempolicy_slab_node(void) case MPOL_INTERLEAVE: return interleave_nodes(policy); + case MPOL_PREFERRED_MANY: case MPOL_BIND: { struct zoneref *z; @@ -2109,9 +2108,6 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask) task_lock(current); mempolicy = current->mempolicy; switch (mempolicy->mode) { - case MPOL_PREFERRED_MANY: - *mask = mempolicy->nodes; - break; case MPOL_PREFERRED: if (mempolicy->flags & MPOL_F_LOCAL) nid = numa_node_id(); @@ -2122,6 +2118,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask) case MPOL_BIND: case MPOL_INTERLEAVE: + case MPOL_PREFERRED_MANY: *mask = mempolicy->nodes; break; @@ -2165,12 +2162,11 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk, * Thus, it's possible for tsk to have allocated memory from * nodes in mask. 
*/ - break; - case MPOL_PREFERRED_MANY: ret = nodes_intersects(mempolicy->nodes, *mask); break; case MPOL_BIND: case MPOL_INTERLEAVE: + case MPOL_PREFERRED_MANY: ret = nodes_intersects(mempolicy->nodes, *mask); break; default: @@ -2394,7 +2390,6 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b) switch (a->mode) { case MPOL_BIND: case MPOL_INTERLEAVE: - return !!nodes_equal(a->nodes, b->nodes); case MPOL_PREFERRED_MANY: return !!nodes_equal(a->nodes, b->nodes); case MPOL_PREFERRED: @@ -2548,6 +2543,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long polnid = first_node(pol->nodes); break; + case MPOL_PREFERRED_MANY: case MPOL_BIND: /* @@ -2564,8 +2560,6 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long polnid = zone_to_nid(z->zone); break; - /* case MPOL_PREFERRED_MANY: */ - default: BUG(); } @@ -3078,15 +3072,13 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol) switch (mode) { case MPOL_DEFAULT: break; - case MPOL_PREFERRED_MANY: - WARN_ON(flags & MPOL_F_LOCAL); - fallthrough; case MPOL_PREFERRED: if (flags & MPOL_F_LOCAL) mode = MPOL_LOCAL; else nodes_or(nodes, nodes, pol->nodes); break; + case MPOL_PREFERRED_MANY: case MPOL_BIND: case MPOL_INTERLEAVE: nodes = pol->nodes;
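As an aside, the "reuse BIND" idea can be made concrete with a small toy model of the shape policy_nodemask() takes after this patch. The types and names below are invented for illustration, and the kernel's zone and cpuset checks are omitted.

#include <stddef.h>
#include <stdio.h>

enum toy_mode { TOY_DEFAULT, TOY_PREFERRED, TOY_BIND, TOY_INTERLEAVE,
		TOY_LOCAL, TOY_PREFERRED_MANY };

struct toy_policy {
	enum toy_mode mode;
	unsigned long nodes;	/* stand-in for nodemask_t */
};

/*
 * Shape of policy_nodemask() after this patch: BIND and PREFERRED_MANY
 * both hand the allocator an explicit nodemask, while the other modes
 * return NULL and steer the allocation through the preferred nid alone.
 */
static const unsigned long *toy_policy_nodemask(const struct toy_policy *pol)
{
	switch (pol->mode) {
	case TOY_BIND:
	case TOY_PREFERRED_MANY:	/* new: shares the BIND plumbing */
		return &pol->nodes;
	default:
		return NULL;
	}
}

int main(void)
{
	struct toy_policy many = { TOY_PREFERRED_MANY, 0x6 };
	struct toy_policy pref = { TOY_PREFERRED, 0x2 };

	printf("PREFERRED_MANY gets a mask: %s\n",
	       toy_policy_nodemask(&many) ? "yes" : "no");
	printf("PREFERRED gets a mask:      %s\n",
	       toy_policy_nodemask(&pref) ? "yes" : "no");
	return 0;
}

Sharing the BIND cases means MPOL_PREFERRED_MANY inherits the existing nodemask plumbing rather than growing its own.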
From patchwork Wed Mar 3 10:20:52 2021
From: Feng Tang
Subject: [PATCH v3 08/14] mm/mempolicy: Create a page allocator for policy
Date: Wed, 3 Mar 2021 18:20:52 +0800
Message-Id: <1614766858-90344-9-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

Add a helper function which takes care of handling multiple preferred nodes. It will be called by future patches that need to handle this, specifically VMA-based page allocation and task-based page allocation. Huge pages don't quite fit the same pattern because they use different underlying page allocation functions. This consumes the previous interleave-specific allocation function to make a one-stop shop for policy-based allocation. For now, only the interleave policy uses it, so there should be no functional change yet; however, if bisection points to issues in the next few commits, this patch is the likely culprit. Similar functionality is offered via policy_node() and policy_nodemask(); by themselves, however, neither can achieve this fallback style over sets of nodes.

v3: add __GFP_NOWARN for first try (Feng)

Link: https://lore.kernel.org/r/20200630212517.308045-9-ben.widawsky@intel.com Signed-off-by: Ben Widawsky Signed-off-by: Feng Tang --- mm/mempolicy.c | 61 +++++++++++++++++++++++++++++++++++++++++++++------------- 1 file changed, 48 insertions(+), 13 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 80cb554..a737e02 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2177,22 +2177,56 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk, return ret; } -/* Allocate a page in interleaved policy. - Own path because it needs to do special accounting.
*/ -static struct page *alloc_page_interleave(gfp_t gfp, unsigned order, - unsigned nid) +/* Handle page allocation for all but interleaved policies */ +static struct page *alloc_pages_policy(struct mempolicy *pol, gfp_t gfp, + unsigned int order, int preferred_nid) { struct page *page; + gfp_t gfp_mask = gfp; - page = __alloc_pages(gfp, order, nid); - /* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */ - if (!static_branch_likely(&vm_numa_stat_key)) + if (pol->mode == MPOL_INTERLEAVE) { + page = __alloc_pages(gfp, order, preferred_nid); + /* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */ + if (!static_branch_likely(&vm_numa_stat_key)) + return page; + if (page && page_to_nid(page) == preferred_nid) { + preempt_disable(); + __inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT); + preempt_enable(); + } return page; - if (page && page_to_nid(page) == nid) { - preempt_disable(); - __inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT); - preempt_enable(); } + + VM_BUG_ON(preferred_nid != NUMA_NO_NODE); + + preferred_nid = numa_node_id(); + + /* + * There is a two pass approach implemented here for + * MPOL_PREFERRED_MANY. In the first pass we pretend the preferred nodes + * are bound, but allow the allocation to fail. The below table explains + * how this is achieved. + * + * | Policy | preferred nid | nodemask | + * |-------------------------------|---------------|------------| + * | MPOL_DEFAULT | local | NULL | + * | MPOL_PREFERRED | best | NULL | + * | MPOL_INTERLEAVE | ERR | ERR | + * | MPOL_BIND | local | pol->nodes | + * | MPOL_PREFERRED_MANY | best | pol->nodes | + * | MPOL_PREFERRED_MANY (round 2) | local | NULL | + * +-------------------------------+---------------+------------+ + */ + if (pol->mode == MPOL_PREFERRED_MANY) + gfp_mask |= __GFP_RETRY_MAYFAIL | __GFP_NOWARN; + + page = __alloc_pages_nodemask(gfp_mask, order, + policy_node(gfp, pol, preferred_nid), + policy_nodemask(gfp, pol)); + + if (unlikely(!page && pol->mode == MPOL_PREFERRED_MANY)) + page = __alloc_pages_nodemask(gfp, order, preferred_nid, NULL); + return page; } @@ -2234,8 +2268,8 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, unsigned nid; nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order); + page = alloc_pages_policy(pol, gfp, order, nid); mpol_cond_put(pol); - page = alloc_page_interleave(gfp, order, nid); goto out; } @@ -2319,7 +2353,8 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order) * nor system default_policy */ if (pol->mode == MPOL_INTERLEAVE) - page = alloc_page_interleave(gfp, order, interleave_nodes(pol)); + page = alloc_pages_policy(pol, gfp, order, + interleave_nodes(pol)); else page = __alloc_pages_nodemask(gfp, order, policy_node(gfp, pol, numa_node_id()), policy_nodemask(gfp, pol));
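The two-pass table above can be restated as a runnable toy. try_alloc() stands in for __alloc_pages_nodemask() and the notion of a node being full is faked, so treat this purely as a sketch of the control flow.

#include <stdio.h>
#include <stdlib.h>

#define SOFT_FAIL 0x1	/* stands in for __GFP_RETRY_MAYFAIL | __GFP_NOWARN */

/* Toy allocator: pretend only node 3 has free memory. */
static void *try_alloc(unsigned int gfp, const unsigned long *mask)
{
	const int free_node = 3;

	(void)gfp;	/* a real allocator would warn/retry based on this */
	if (mask && !(*mask & (1UL << free_node)))
		return NULL;	/* the preferred nodes are exhausted */
	return malloc(4096);
}

/*
 * Mirror of the two passes alloc_pages_policy() makes for
 * MPOL_PREFERRED_MANY: first pretend the preferred nodes are bound but
 * let the attempt fail softly, then retry with no nodemask at all.
 */
static void *alloc_preferred_many(const unsigned long *preferred)
{
	void *page = try_alloc(SOFT_FAIL, preferred);

	if (!page)
		page = try_alloc(0, NULL);	/* round 2: local nid, NULL mask */
	return page;
}

int main(void)
{
	unsigned long preferred = 0x3;	/* prefer nodes 0-1, both "full" */
	void *page = alloc_preferred_many(&preferred);

	printf("second pass %s\n", page ? "succeeded" : "failed");
	free(page);
	return 0;
}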
From patchwork Wed Mar 3 10:20:53 2021
From: Feng Tang
Subject: [PATCH v3 09/14] mm/mempolicy: Thread allocation for many preferred
Date: Wed, 3 Mar 2021 18:20:53 +0800
Message-Id: <1614766858-90344-10-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

In order to support MPOL_PREFERRED_MANY as the mode used by set_mempolicy(2), alloc_pages_current() needs to support it. This patch does that by using the new helper function to allocate properly based on policy.
All the actual machinery to make this work was part of ("mm/mempolicy: Create a page allocator for policy") Link: https://lore.kernel.org/r/20200630212517.308045-10-ben.widawsky@intel.com Signed-off-by: Ben Widawsky Signed-off-by: Feng Tang --- mm/mempolicy.c | 11 +++-------- 1 file changed, 3 insertions(+), 8 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index a737e02..ceee90e 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2343,7 +2343,7 @@ EXPORT_SYMBOL(alloc_pages_vma); struct page *alloc_pages_current(gfp_t gfp, unsigned order) { struct mempolicy *pol = &default_policy; - struct page *page; + int nid = NUMA_NO_NODE; if (!in_interrupt() && !(gfp & __GFP_THISNODE)) pol = get_task_policy(current); @@ -2353,14 +2353,9 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order) * nor system default_policy */ if (pol->mode == MPOL_INTERLEAVE) - page = alloc_pages_policy(pol, gfp, order, - interleave_nodes(pol)); - else - page = __alloc_pages_nodemask(gfp, order, - policy_node(gfp, pol, numa_node_id()), - policy_nodemask(gfp, pol)); + nid = interleave_nodes(pol); - return page; + return alloc_pages_policy(pol, gfp, order, nid); } EXPORT_SYMBOL(alloc_pages_current);
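Once the mode is exposed by the UAPI patch later in this series, a task-wide user might look like the hypothetical sketch below. The MPOL_PREFERRED_MANY value is assumed from the enum added in patch 12, and the raw syscall is used because contemporary libnuma does not know the mode.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value from the patched uapi enum */
#endif

int main(void)
{
	unsigned long mask = (1UL << 0) | (1UL << 1);	/* prefer nodes 0-1 */
	char *buf;

	if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY, &mask,
		    8 * sizeof(mask)) < 0)
		perror("set_mempolicy");

	/* Task-wide policy set: faults now go through alloc_pages_current(). */
	buf = malloc(1 << 20);
	if (buf)
		memset(buf, 0, 1 << 20);
	free(buf);
	return 0;
}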
From patchwork Wed Mar 3 10:20:54 2021
From: Feng Tang
Subject: [PATCH v3 10/14] mm/mempolicy: VMA allocation for many preferred
Date: Wed, 3 Mar 2021 18:20:54 +0800
Message-Id: <1614766858-90344-11-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

This patch implements MPOL_PREFERRED_MANY for alloc_pages_vma(). Like alloc_pages_current(), alloc_pages_vma() needs to support policy-based decisions if they've been configured via mbind(2). The temporary "hack" of treating MPOL_PREFERRED and MPOL_PREFERRED_MANY the same can now be removed with this, too.

All the actual machinery to make this work was part of ("mm/mempolicy: Create a page allocator for policy")

Link: https://lore.kernel.org/r/20200630212517.308045-11-ben.widawsky@intel.com Signed-off-by: Ben Widawsky Signed-off-by: Feng Tang --- mm/mempolicy.c | 29 +++++++++++++++++++++-------- 1 file changed, 21 insertions(+), 8 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index ceee90e..0cb92ab 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2259,8 +2259,6 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, { struct mempolicy *pol; struct page *page; - int preferred_nid; - nodemask_t *nmask; pol = get_vma_policy(vma, addr); @@ -2274,6 +2272,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, } if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) { + nodemask_t *nmask; int hpage_node = node; /* @@ -2287,10 +2286,26 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, * does not allow the current node in its nodemask, we allocate * the standard way. */ - if ((pol->mode == MPOL_PREFERRED || - pol->mode == MPOL_PREFERRED_MANY) && - !(pol->flags & MPOL_F_LOCAL)) + if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL)) { hpage_node = first_node(pol->nodes); + } else if (pol->mode == MPOL_PREFERRED_MANY) { + struct zoneref *z; + + /* + * In this policy, with direct reclaim, the normal + * policy based allocation will do the right thing - try + * twice using the preferred nodes first, and all nodes + * second.
+ */ + if (gfp & __GFP_DIRECT_RECLAIM) { + page = alloc_pages_policy(pol, gfp, order, NUMA_NO_NODE); + goto out; + } + + z = first_zones_zonelist(node_zonelist(numa_node_id(), GFP_HIGHUSER), + gfp_zone(GFP_HIGHUSER), &pol->nodes); + hpage_node = zone_to_nid(z->zone); + } nmask = policy_nodemask(gfp, pol); if (!nmask || node_isset(hpage_node, *nmask)) { @@ -2316,9 +2331,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, } } - nmask = policy_nodemask(gfp, pol); - preferred_nid = policy_node(gfp, pol, node); - page = __alloc_pages_nodemask(gfp, order, preferred_nid, nmask); + page = alloc_pages_policy(pol, gfp, order, NUMA_NO_NODE); mpol_cond_put(pol); out: return page;
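The per-VMA counterpart via mbind(2), under the same assumption about the mode's numeric value; link with -lnuma for the mbind() wrapper.

#define _GNU_SOURCE
#include <stdio.h>
#include <numaif.h>	/* mbind(), from libnuma's headers */
#include <sys/mman.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value from the patched uapi enum */
#endif

int main(void)
{
	size_t len = 4UL << 20;
	unsigned long mask = (1UL << 0) | (1UL << 2);	/* prefer nodes 0 and 2 */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	if (mbind(p, len, MPOL_PREFERRED_MANY, &mask, 8 * sizeof(mask), 0))
		perror("mbind");
	p[0] = 1;	/* first fault allocates via alloc_pages_vma() */
	munmap(p, len);
	return 0;
}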
From patchwork Wed Mar 3 10:20:55 2021
From: Feng Tang
Subject: [PATCH v3 11/14] mm/mempolicy: huge-page allocation for many preferred
Date: Wed, 3 Mar 2021 18:20:55 +0800
Message-Id: <1614766858-90344-12-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

Implement the missing huge page allocation functionality while obeying the preferred node semantics. This uses a fallback mechanism to try multiple preferred nodes first, and then all other nodes. It cannot use the helper function introduced earlier because huge page allocation already has its own helpers, and consolidating them would have taken more lines of code and effort. The one wrinkle is that MPOL_PREFERRED_MANY can't be referenced by name yet because it is part of the UAPI we haven't exposed. Instead of making that define global, it is simply switched over in the UAPI patch.

v3: add __GFP_NOWARN for first try of prefer_many allocation (Feng)

Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com Signed-off-by: Ben Widawsky Signed-off-by: Feng Tang --- mm/hugetlb.c | 22 +++++++++++++++++++--- mm/mempolicy.c | 3 ++- 2 files changed, 21 insertions(+), 4 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 4bdb58a..c7c9ef3 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1110,7 +1110,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h, unsigned long address, int avoid_reserve, long chg) { - struct page *page; + struct page *page = NULL; struct mempolicy *mpol; gfp_t gfp_mask; nodemask_t *nodemask; @@ -1131,7 +1131,15 @@ static struct page *dequeue_huge_page_vma(struct hstate *h, gfp_mask = htlb_alloc_mask(h); nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask); - page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask); + if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */ + page = dequeue_huge_page_nodemask(h, + gfp_mask | __GFP_RETRY_MAYFAIL | __GFP_NOWARN, + nid, nodemask); + if (!page) + page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL); + } else { + page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask); + } if (page && !avoid_reserve && vma_has_reserves(vma, chg)) { SetPagePrivate(page); h->resv_huge_pages--; @@ -1935,7 +1943,15 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h, nodemask_t *nodemask; nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask); - page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask); + if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */ + page = alloc_surplus_huge_page(h, + gfp_mask | __GFP_RETRY_MAYFAIL | __GFP_NOWARN, + nid, nodemask); + if (!page) + page = alloc_surplus_huge_page(h, gfp_mask, nid, NULL); + } else { + page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask); + }
mpol_cond_put(mpol); return page; diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 0cb92ab..f9b2167 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2075,7 +2075,8 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags, huge_page_shift(hstate_vma(vma))); } else { nid = policy_node(gfp_flags, *mpol, numa_node_id()); - if ((*mpol)->mode == MPOL_BIND) + if ((*mpol)->mode == MPOL_BIND || + (*mpol)->mode == MPOL_PREFERRED_MANY) *nodemask = &(*mpol)->nodes; } return nid;
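The hugetlb fallback above follows the same soften-then-unmask pattern as alloc_pages_policy(). A toy restatement, with toy_dequeue() standing in for dequeue_huge_page_nodemask() and all names invented for illustration:

#include <stdio.h>
#include <stdlib.h>

#define SOFTEN 0x1	/* __GFP_RETRY_MAYFAIL | __GFP_NOWARN stand-in */

/* Toy hugepage pool: only node 2 has a free huge page left. */
static void *toy_dequeue(unsigned int gfp, int nid, const unsigned long *mask)
{
	(void)gfp; (void)nid;
	if (mask && !(*mask & (1UL << 2)))
		return NULL;
	return malloc(2UL << 20);	/* "the huge page" */
}

/*
 * Shape of the fallback added to dequeue_huge_page_vma(): for
 * MPOL_PREFERRED_MANY, first try the preferred mask with softened gfp
 * flags, then retry with a NULL mask before giving up.
 */
static void *dequeue_preferred_many(unsigned int gfp, int nid,
				    const unsigned long *mask)
{
	void *page = toy_dequeue(gfp | SOFTEN, nid, mask);

	if (!page)
		page = toy_dequeue(gfp, nid, NULL);
	return page;
}

int main(void)
{
	unsigned long preferred = 0x3;	/* nodes 0-1: no huge pages there */
	void *page = dequeue_preferred_many(0, 0, &preferred);

	printf("huge page %sobtained\n", page ? "" : "not ");
	free(page);
	return 0;
}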
From patchwork Wed Mar 3 10:20:56 2021
From: Feng Tang
Subject: [PATCH v3 12/14] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY
Date: Wed, 3 Mar 2021 18:20:56 +0800
Message-Id: <1614766858-90344-13-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

Adds a new mode to the existing mempolicy modes, MPOL_PREFERRED_MANY. MPOL_PREFERRED_MANY will be adequately documented in the internal admin-guide with this patch. Eventually, the man pages for mbind(2), get_mempolicy(2), set_mempolicy(2) and numactl(8) will also have text about this mode; those shall contain the canonical reference. NUMA systems continue to become more prevalent. New technologies like PMEM make finer-grained control over memory access patterns increasingly desirable. MPOL_PREFERRED_MANY allows userspace to specify a set of nodes that will be tried first when performing allocations. If those allocations fail, all remaining nodes will be tried. It's a straightforward API which solves many of the presumptive needs of system administrators wanting to optimize workloads on such machines. The mode will work either per VMA, or per thread. Generally speaking, this is similar to the way MPOL_BIND works, except the user will only get a SIGSEGV if all nodes in the system are unable to satisfy the allocation request.

v3: fix a typo of checking policy (Feng)

Link: https://lore.kernel.org/r/20200630212517.308045-13-ben.widawsky@intel.com Signed-off-by: Ben Widawsky Signed-off-by: Feng Tang --- Documentation/admin-guide/mm/numa_memory_policy.rst | 16 ++++++++++++---- include/uapi/linux/mempolicy.h | 6 +++--- mm/hugetlb.c | 4 ++-- mm/mempolicy.c | 14 ++++++-------- 4 files changed, 23 insertions(+), 17 deletions(-) diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst index 1ad020c..b69963a 100644 --- a/Documentation/admin-guide/mm/numa_memory_policy.rst +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst @@ -245,6 +245,14 @@ MPOL_INTERLEAVED address range or file. During system boot up, the temporary interleaved system default policy works in this mode. +MPOL_PREFERRED_MANY + This mode specifies that the allocation should be attempted from the + nodemask specified in the policy. If that allocation fails, the kernel + will search other nodes, in order of increasing distance from the first + set bit in the nodemask based on information provided by the platform + firmware. It is similar to MPOL_PREFERRED with the main exception that + it is an error to have an empty nodemask. + NUMA memory policy supports the following optional mode flags: MPOL_F_STATIC_NODES @@ -253,10 +261,10 @@ MPOL_F_STATIC_NODES nodes changes after the memory policy has been defined.
Without this flag, any time a mempolicy is rebound because of a - change in the set of allowed nodes, the node (Preferred) or - nodemask (Bind, Interleave) is remapped to the new set of - allowed nodes. This may result in nodes being used that were - previously undesired. + change in the set of allowed nodes, the preferred nodemask (Preferred + Many), preferred node (Preferred) or nodemask (Bind, Interleave) is + remapped to the new set of allowed nodes. This may result in nodes + being used that were previously undesired. With this flag, if the user-specified nodes overlap with the nodes allowed by the task's cpuset, then the memory policy is diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h index 3354774..ad3eee6 100644 --- a/include/uapi/linux/mempolicy.h +++ b/include/uapi/linux/mempolicy.h @@ -16,13 +16,13 @@ */ /* Policies */ -enum { - MPOL_DEFAULT, +enum { MPOL_DEFAULT, MPOL_PREFERRED, MPOL_BIND, MPOL_INTERLEAVE, MPOL_LOCAL, - MPOL_MAX, /* always last member of enum */ + MPOL_PREFERRED_MANY, + MPOL_MAX, /* always last member of enum */ }; /* Flags for set_mempolicy */ diff --git a/mm/hugetlb.c b/mm/hugetlb.c index c7c9ef3..60a0d57 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1131,7 +1131,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h, gfp_mask = htlb_alloc_mask(h); nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask); - if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */ + if (mpol->mode == MPOL_PREFERRED_MANY) { page = dequeue_huge_page_nodemask(h, gfp_mask | __GFP_RETRY_MAYFAIL | __GFP_NOWARN, nid, nodemask); @@ -1943,7 +1943,7 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h, nodemask_t *nodemask; nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask); - if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */ + if (mpol->mode == MPOL_PREFERRED_MANY) { page = alloc_surplus_huge_page(h, gfp_mask | __GFP_RETRY_MAYFAIL | __GFP_NOWARN, nid, nodemask); diff --git a/mm/mempolicy.c b/mm/mempolicy.c index f9b2167..1438d58 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -108,8 +108,6 @@ #include "internal.h" -#define MPOL_PREFERRED_MANY MPOL_MAX - /* Internal flags */ #define MPOL_MF_DISCONTIG_OK (MPOL_MF_INTERNAL << 0) /* Skip checks for continuous vmas */ #define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */ @@ -180,7 +178,7 @@ struct mempolicy *get_task_policy(struct task_struct *p) static const struct mempolicy_operations { int (*create)(struct mempolicy *pol, const nodemask_t *nodes); void (*rebind)(struct mempolicy *pol, const nodemask_t *nodes); -} mpol_ops[MPOL_MAX + 1]; +} mpol_ops[MPOL_MAX]; static inline int mpol_store_user_nodemask(const struct mempolicy *pol) { @@ -389,8 +387,8 @@ static void mpol_rebind_preferred_common(struct mempolicy *pol, } /* MPOL_PREFERRED_MANY allows multiple nodes to be set in 'nodes' */ -static void __maybe_unused mpol_rebind_preferred_many(struct mempolicy *pol, - const nodemask_t *nodes) +static void mpol_rebind_preferred_many(struct mempolicy *pol, + const nodemask_t *nodes) { mpol_rebind_preferred_common(pol, nodes, nodes); } @@ -452,7 +450,7 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new) mmap_write_unlock(mm); } -static const struct mempolicy_operations mpol_ops[MPOL_MAX + 1] = { +static const struct mempolicy_operations mpol_ops[MPOL_MAX] = { [MPOL_DEFAULT] = { .rebind = mpol_rebind_default, }, @@ -470,8 +468,8 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX + 1] = { }, /* 
[MPOL_LOCAL] - see mpol_new() */ [MPOL_PREFERRED_MANY] = { - .create = NULL, - .rebind = NULL, + .create = mpol_new_preferred_many, + .rebind = mpol_rebind_preferred_many, }, };
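With the mode advertised, a set_mempolicy(2)/get_mempolicy(2) round trip becomes possible. A hypothetical check, again assuming the mode value from the patched header and at most 64 possible nodes:

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value from the patched uapi enum */
#endif

int main(void)
{
	unsigned long set_mask = (1UL << 0) | (1UL << 1);
	unsigned long got_mask = 0;
	int mode = -1;

	if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY, &set_mask,
		    8 * sizeof(set_mask)))
		perror("set_mempolicy");
	/* assumes nr_node_ids <= 64; pass a bigger mask on larger systems */
	if (syscall(SYS_get_mempolicy, &mode, &got_mask,
		    8 * sizeof(got_mask), NULL, 0))
		perror("get_mempolicy");
	printf("mode=%d mask=0x%lx\n", mode, got_mask);	/* expect 5, 0x3 */
	return 0;
}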
From patchwork Wed Mar 3 10:20:57 2021
From: Feng Tang
Subject: [PATCH v3 13/14] mm/mempolicy: unify mpol_new_preferred() and mpol_new_preferred_many()
Date: Wed, 3 Mar 2021 18:20:57 +0800
Message-Id: <1614766858-90344-14-git-send-email-feng.tang@intel.com>

To reduce some code duplication.

Signed-off-by: Feng Tang --- mm/mempolicy.c | 25 +++++++------------------ 1 file changed, 7 insertions(+), 18 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 1438d58..d66c1c0 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -201,32 +201,21 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes) return 0; } -static int mpol_new_preferred_many(struct mempolicy *pol, +/* cover both MPOL_PREFERRED and MPOL_PREFERRED_MANY */ +static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes) { if (!nodes) pol->flags |= MPOL_F_LOCAL; /* local allocation */ else if (nodes_empty(*nodes)) return -EINVAL; /* no allowed nodes */ - else - pol->nodes = *nodes; - return 0; -} - -static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes) -{ - if (nodes) { + else { /* MPOL_PREFERRED can only take a single node: */ - nodemask_t tmp; + nodemask_t tmp = nodemask_of_node(first_node(*nodes)); - if (nodes_empty(*nodes)) - return -EINVAL; - - tmp = nodemask_of_node(first_node(*nodes)); - return mpol_new_preferred_many(pol, &tmp); + pol->nodes = (pol->mode == MPOL_PREFERRED) ?
tmp : *nodes; } - - return mpol_new_preferred_many(pol, NULL); + return 0; } static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes) @@ -468,7 +457,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = { }, /* [MPOL_LOCAL] - see mpol_new() */ [MPOL_PREFERRED_MANY] = { - .create = mpol_new_preferred_many, + .create = mpol_new_preferred, .rebind = mpol_rebind_preferred_many, }, };
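The merged helper's behavior can be restated as a self-contained toy: a NULL mask means local allocation, an empty mask is an error, MPOL_PREFERRED keeps only the first node, and MPOL_PREFERRED_MANY keeps the whole mask. All names and types below are illustrative only.

#include <stdio.h>

enum toy_mode { TOY_PREFERRED, TOY_PREFERRED_MANY };

/* Mirrors the shape of the merged mpol_new_preferred(). */
static int toy_new_preferred(enum toy_mode mode, const unsigned long *nodes,
			     unsigned long *out, int *local)
{
	*local = 0;
	if (!nodes) {
		*local = 1;			/* MPOL_F_LOCAL */
		return 0;
	}
	if (!*nodes)
		return -1;			/* -EINVAL: no allowed nodes */
	if (mode == TOY_PREFERRED)
		*out = *nodes & -*nodes;	/* nodemask_of_node(first_node()) */
	else
		*out = *nodes;			/* PREFERRED_MANY keeps them all */
	return 0;
}

int main(void)
{
	unsigned long in = 0xc;	/* nodes 2 and 3 */
	unsigned long out = 0;
	int local;

	toy_new_preferred(TOY_PREFERRED, &in, &out, &local);
	printf("PREFERRED      -> 0x%lx\n", out);	/* 0x4: node 2 only */
	toy_new_preferred(TOY_PREFERRED_MANY, &in, &out, &local);
	printf("PREFERRED_MANY -> 0x%lx\n", out);	/* 0xc: both nodes */
	return 0;
}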
From patchwork Wed Mar 3 10:20:58 2021
From: Feng Tang
Subject: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
Date: Wed, 3 Mar 2021 18:20:58 +0800
Message-Id: <1614766858-90344-15-git-send-email-feng.tang@intel.com>

During broader testing we noticed allocation slowness in one test case that mallocs memory with a size slightly bigger than the free memory of the targeted nodes, but much less than the total free memory of the system. The reason is that the code enters the slowpath of __alloc_pages_nodemask(), which takes quite some time. Since alloc_pages_policy() will give the allocation a second try with a NULL nodemask, there is no need to enter the slowpath on the first try. Add a new gfp bit to skip the slowpath, so that use cases like this can leverage it. With the bit set, the malloc in such cases is much accelerated, as it never enters the slowpath.

Adding a new gfp_mask bit is generally disliked. An alternative idea is to add a second nodemask to struct alloc_context, so it carries both a 'preferred-nmask' and a 'fallback-nmask' to be tried in turn if non-NULL; with that, __alloc_pages_nodemask() need only be called once.

Signed-off-by: Feng Tang --- include/linux/gfp.h | 9 +++++++-- mm/mempolicy.c | 2 +- mm/page_alloc.c | 2 +- 3 files changed, 9 insertions(+), 4 deletions(-) diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 6e479e9..81bacbe 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -39,8 +39,9 @@ struct vm_area_struct; #define ___GFP_HARDWALL 0x100000u #define ___GFP_THISNODE 0x200000u #define ___GFP_ACCOUNT 0x400000u +#define ___GFP_NO_SLOWPATH 0x800000u #ifdef CONFIG_LOCKDEP -#define ___GFP_NOLOCKDEP 0x800000u +#define ___GFP_NOLOCKDEP 0x1000000u #else #define ___GFP_NOLOCKDEP 0 #endif @@ -220,11 +221,15 @@ struct vm_area_struct; #define __GFP_COMP ((__force gfp_t)___GFP_COMP) #define __GFP_ZERO ((__force gfp_t)___GFP_ZERO) +/* Do not go into the slowpath */ +#define __GFP_NO_SLOWPATH ((__force gfp_t)___GFP_NO_SLOWPATH) + /* Disable lockdep for GFP context tracking */ #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP) + /* Room for N __GFP_FOO bits */ -#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP)) +#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP)) #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) /** diff --git a/mm/mempolicy.c b/mm/mempolicy.c index d66c1c0..e84b56d 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2206,7 +2206,7 @@ static struct page *alloc_pages_policy(struct mempolicy *pol, gfp_t gfp, * +-------------------------------+---------------+------------+ */ if (pol->mode == MPOL_PREFERRED_MANY) - gfp_mask |= __GFP_RETRY_MAYFAIL | __GFP_NOWARN; + gfp_mask |= __GFP_RETRY_MAYFAIL | __GFP_NOWARN | __GFP_NO_SLOWPATH; page = __alloc_pages_nodemask(gfp_mask, order, policy_node(gfp, pol, preferred_nid), diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 519a60d..969e3a1 100644 --- a/mm/page_alloc.c +++ 
b/mm/page_alloc.c @@ -4993,7 +4993,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid, /* First allocation attempt */ page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac); - if (likely(page)) + if (likely(page) || (gfp_mask & __GFP_NO_SLOWPATH)) goto out; /*