From patchwork Fri Oct 30 19:02:27 2020
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 11870663
From: Ben Widawsky
To: linux-mm, Andrew Morton
Cc: Ben Widawsky,
    Dave Hansen, Michal Hocko, Michal Hocko, linux-kernel@vger.kernel.org
Subject: [PATCH 01/12] mm/mempolicy: Add comment for missing LOCAL
Date: Fri, 30 Oct 2020 12:02:27 -0700
Message-Id: <20201030190238.306764-2-ben.widawsky@intel.com>
In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com>
References: <20201030190238.306764-1-ben.widawsky@intel.com>

MPOL_LOCAL is a bit weird because it is simply a different name for an existing behavior (preferred policy with no node mask). It has been this way since it was added here:

commit 479e2802d09f ("mm: mempolicy: Make MPOL_LOCAL a real policy")

It is, in fact, so similar to MPOL_PREFERRED that when the policy is created in mpol_new(), the mode is set to PREFERRED, and an internal state representing LOCAL doesn't exist. To prevent future explorers from scratching their heads as to why MPOL_LOCAL isn't defined in the mpol_ops table, add a small comment explaining the situation.

v2: Change comment to refer to mpol_new (Michal)

Link: https://lore.kernel.org/r/20200630212517.308045-2-ben.widawsky@intel.com
Acked-by: Michal Hocko
Signed-off-by: Ben Widawsky
---
mm/mempolicy.c | 1 + 1 file changed, 1 insertion(+) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 3fde772ef5ef..e24f0133ff1f 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -427,6 +427,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = { .create = mpol_new_bind, .rebind = mpol_rebind_nodemask, }, + /* [MPOL_LOCAL] - see mpol_new() */ }; static int migrate_page_add(struct page *page, struct list_head *pagelist,

From patchwork Fri Oct 30 19:02:28 2020
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 11870665
From: Ben Widawsky
To: linux-mm, Jonathan Corbet, Andrew Morton, Nathan Chancellor, Nick Desaulniers
Cc: Dave Hansen, Dave Hansen, Michal Hocko, Ben Widawsky, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, clang-built-linux@googlegroups.com
Subject: [PATCH 02/12] mm/mempolicy: convert single preferred_node to full nodemask
Date: Fri, 30 Oct 2020 12:02:28 -0700
Message-Id: <20201030190238.306764-3-ben.widawsky@intel.com>
In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com>
References: <20201030190238.306764-1-ben.widawsky@intel.com>

From: Dave Hansen

The NUMA APIs currently allow passing in a "preferred node" as a single bit set in a nodemask. If more than one bit is set, bits after the first are ignored. Internally, this is implemented as a single integer: mempolicy->preferred_node.

This single node is generally OK for location-based NUMA, where the memory being allocated will eventually be operated on by a single CPU. However, in systems with multiple memory types, folks want to target a *type* of memory instead of a location. For instance, someone might want some high-bandwidth memory but not care about the CPU next to which it is allocated. Or, they might want a cheap, high-capacity allocation and want to target all NUMA nodes which have persistent memory in volatile mode. In both of these cases, the application wants to target a *set* of nodes, but does not want strict MPOL_BIND behavior, as that could lead to the OOM killer or a SIGSEGV.

To get that behavior, an MPOL_PREFERRED mode is desirable, but one that honors multiple nodes being set in the nodemask.
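As a concrete aside (not part of the patch), here is a minimal userspace sketch of the limitation described above, using set_mempolicy() from libnuma's <numaif.h>. The node numbers and the 1 MiB allocation are arbitrary examples; with the kernel as it stands before this series, only the first node of the mask is honored for MPOL_PREFERRED, while MPOL_BIND would honor both nodes but fail hard once they are full. Compile with -lnuma on a machine that actually has nodes 1 and 2.

/* Minimal sketch, assuming NUMA nodes 1 and 2 exist and libnuma is installed. */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	unsigned long mask = (1UL << 1) | (1UL << 2);	/* nodes 1 and 2 */

	/* Install a preferred policy for all future allocations of this task. */
	if (set_mempolicy(MPOL_PREFERRED, &mask, 8 * sizeof(mask)) != 0) {
		perror("set_mempolicy");
		return EXIT_FAILURE;
	}

	/*
	 * Today this buffer gravitates to node 1 only; the point of the
	 * series is to let a "preferred" policy spread over the whole mask
	 * without the hard failure modes of MPOL_BIND.
	 */
	void *buf = malloc(1 << 20);
	printf("allocated %p under the preferred policy\n", buf);
	free(buf);
	return EXIT_SUCCESS;
}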
The first step in that direction is to be able to internally store multiple preferred nodes, which is implemented in this patch. This should not introduce any functional changes; it just switches the internal representation of mempolicy->preferred_node from an integer to a nodemask called 'mempolicy->preferred_nodes'.

This is not a pie-in-the-sky dream for an API. It was a response to a specific ask from more than one group at Intel. Specifically:

1. There are existing libraries that target memory types, such as https://github.com/memkind/memkind. These are known to suffer from SIGSEGVs when memory is low on targeted memory "kinds" that span more than one node. The MCDRAM on a Xeon Phi in "Cluster on Die" mode is an example of this.

2. Volatile-use persistent memory users want to have a memory policy which is targeted at either "cheap and slow" (PMEM) or "expensive and fast" (DRAM). However, they do not want to experience allocation failures when the targeted type is unavailable.

3. Allocate-then-run. Generally, we let the process scheduler decide on which physical CPU to run a task. That location provides a default allocation policy, and memory availability is not generally considered when placing tasks. For situations where memory is valuable and constrained, some users want to allocate memory first, *then* allocate close compute resources to the allocation. This is the reverse of the normal (CPU) model. Accelerators such as GPUs that operate on core-mm-managed memory are interested in this model.

v2:
Fix spelling errors in commit message. (Ben)
clang-format. (Ben)
Integrated bit from another patch. (Ben)
Update the docs to reflect the internal data structure change (Ben)
Don't advertise MPOL_PREFERRED_MANY in UAPI until we can handle it (Ben)
Added more to the commit message (Dave)

Link: https://lore.kernel.org/r/20200630212517.308045-3-ben.widawsky@intel.com
Co-developed-by: Ben Widawsky
Signed-off-by: Dave Hansen
Signed-off-by: Ben Widawsky
---
.../admin-guide/mm/numa_memory_policy.rst | 6 +-- include/linux/mempolicy.h | 4 +- mm/mempolicy.c | 40 ++++++++++--------- 3 files changed, 27 insertions(+), 23 deletions(-) diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst index 067a90a1499c..1ad020c459b8 100644 --- a/Documentation/admin-guide/mm/numa_memory_policy.rst +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst @@ -205,9 +205,9 @@ MPOL_PREFERRED of increasing distance from the preferred node based on information provided by the platform firmware. - Internally, the Preferred policy uses a single node--the - preferred_node member of struct mempolicy. When the internal - mode flag MPOL_F_LOCAL is set, the preferred_node is ignored + Internally, the Preferred policy uses a nodemask--the + preferred_nodes member of struct mempolicy. When the internal + mode flag MPOL_F_LOCAL is set, the preferred_nodes are ignored and the policy is interpreted as local allocation.
"Local" allocation policy can be viewed as a Preferred policy that starts at the node containing the cpu where the allocation diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index 5f1c74df264d..23ee10556b82 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -47,8 +47,8 @@ struct mempolicy { unsigned short mode; /* See MPOL_* above */ unsigned short flags; /* See set_mempolicy() MPOL_F_* above */ union { - short preferred_node; /* preferred */ - nodemask_t nodes; /* interleave/bind */ + nodemask_t preferred_nodes; /* preferred */ + nodemask_t nodes; /* interleave/bind */ /* undefined for default */ } v; union { diff --git a/mm/mempolicy.c b/mm/mempolicy.c index e24f0133ff1f..ba3bc4f28d27 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -205,7 +205,7 @@ static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes) else if (nodes_empty(*nodes)) return -EINVAL; /* no allowed nodes */ else - pol->v.preferred_node = first_node(*nodes); + pol->v.preferred_nodes = nodemask_of_node(first_node(*nodes)); return 0; } @@ -345,22 +345,26 @@ static void mpol_rebind_preferred(struct mempolicy *pol, const nodemask_t *nodes) { nodemask_t tmp; + nodemask_t preferred_node; + + /* MPOL_PREFERRED uses only the first node in the mask */ + preferred_node = nodemask_of_node(first_node(*nodes)); if (pol->flags & MPOL_F_STATIC_NODES) { int node = first_node(pol->w.user_nodemask); if (node_isset(node, *nodes)) { - pol->v.preferred_node = node; + pol->v.preferred_nodes = nodemask_of_node(node); pol->flags &= ~MPOL_F_LOCAL; } else pol->flags |= MPOL_F_LOCAL; } else if (pol->flags & MPOL_F_RELATIVE_NODES) { mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes); - pol->v.preferred_node = first_node(tmp); + pol->v.preferred_nodes = tmp; } else if (!(pol->flags & MPOL_F_LOCAL)) { - pol->v.preferred_node = node_remap(pol->v.preferred_node, - pol->w.cpuset_mems_allowed, - *nodes); + nodes_remap(tmp, pol->v.preferred_nodes, + pol->w.cpuset_mems_allowed, preferred_node); + pol->v.preferred_nodes = tmp; pol->w.cpuset_mems_allowed = *nodes; } } @@ -912,7 +916,7 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes) break; case MPOL_PREFERRED: if (!(p->flags & MPOL_F_LOCAL)) - node_set(p->v.preferred_node, *nodes); + *nodes = p->v.preferred_nodes; /* else return empty node mask for local allocation */ break; default: @@ -1885,9 +1889,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy) /* Return the node id preferred by the given mempolicy, or the given id */ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd) { - if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL)) - nd = policy->v.preferred_node; - else { + if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL)) { + nd = first_node(policy->v.preferred_nodes); + } else { /* * __GFP_THISNODE shouldn't even be used with the bind policy * because we might easily break the expectation to stay on the @@ -1932,7 +1936,7 @@ unsigned int mempolicy_slab_node(void) /* * handled MPOL_F_LOCAL above */ - return policy->v.preferred_node; + return first_node(policy->v.preferred_nodes); case MPOL_INTERLEAVE: return interleave_nodes(policy); @@ -2066,7 +2070,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask) if (mempolicy->flags & MPOL_F_LOCAL) nid = numa_node_id(); else - nid = mempolicy->v.preferred_node; + nid = first_node(mempolicy->v.preferred_nodes); init_nodemask_of_node(mask, nid); break; @@ -2204,7 +2208,7 @@ alloc_pages_vma(gfp_t 
gfp, int order, struct vm_area_struct *vma, * node in its nodemask, we allocate the standard way. */ if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL)) - hpage_node = pol->v.preferred_node; + hpage_node = first_node(pol->v.preferred_nodes); nmask = policy_nodemask(gfp, pol); if (!nmask || node_isset(hpage_node, *nmask)) { @@ -2343,7 +2347,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b) /* a's ->flags is the same as b's */ if (a->flags & MPOL_F_LOCAL) return true; - return a->v.preferred_node == b->v.preferred_node; + return nodes_equal(a->v.preferred_nodes, b->v.preferred_nodes); default: BUG(); return false; @@ -2487,7 +2491,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long if (pol->flags & MPOL_F_LOCAL) polnid = numa_node_id(); else - polnid = pol->v.preferred_node; + polnid = first_node(pol->v.preferred_nodes); break; case MPOL_BIND: @@ -2804,7 +2808,7 @@ void __init numa_policy_init(void) .refcnt = ATOMIC_INIT(1), .mode = MPOL_PREFERRED, .flags = MPOL_F_MOF | MPOL_F_MORON, - .v = { .preferred_node = nid, }, + .v = { .preferred_nodes = nodemask_of_node(nid), }, }; } @@ -2970,7 +2974,7 @@ int mpol_parse_str(char *str, struct mempolicy **mpol) if (mode != MPOL_PREFERRED) new->v.nodes = nodes; else if (nodelist) - new->v.preferred_node = first_node(nodes); + new->v.preferred_nodes = nodemask_of_node(first_node(nodes)); else new->flags |= MPOL_F_LOCAL; @@ -3023,7 +3027,7 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol) if (flags & MPOL_F_LOCAL) mode = MPOL_LOCAL; else - node_set(pol->v.preferred_node, nodes); + nodes_or(nodes, nodes, pol->v.preferred_nodes); break; case MPOL_BIND: case MPOL_INTERLEAVE:

From patchwork Fri Oct 30 19:02:29 2020
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 11870687
From: Ben Widawsky
To: linux-mm, Andrew Morton
Cc: Dave Hansen, Dave Hansen, Michal Hocko, Ben Widawsky, linux-kernel@vger.kernel.org
Subject: [PATCH 03/12] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes
Date: Fri, 30 Oct 2020 12:02:29 -0700
Message-Id: <20201030190238.306764-4-ben.widawsky@intel.com>
In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com>
References: <20201030190238.306764-1-ben.widawsky@intel.com>

From: Dave Hansen

MPOL_PREFERRED honors only a single node set in the nodemask. Add the bare define for a new mode which will allow more than one. The patch does all the plumbing without actually adding the new policy type.

v2:
Plumb most MPOL_PREFERRED_MANY without exposing UAPI (Ben)
Fixes for checkpatch (Ben)

Link: https://lore.kernel.org/r/20200630212517.308045-4-ben.widawsky@intel.com
Co-developed-by: Ben Widawsky
Signed-off-by: Ben Widawsky
Signed-off-by: Dave Hansen
---
mm/mempolicy.c | 46 ++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 40 insertions(+), 6 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index ba3bc4f28d27..21a6f80f91a9 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -31,6 +31,9 @@ * but useful to set in a VMA when you have a non default * process policy. * + * preferred many Try a set of nodes first before normal fallback. This is + * similar to preferred without the special case. + * * default Allocate on the local node first, or when on a VMA * use the process policy. This is what Linux always did * in a NUMA aware kernel and still does by, ahem, default.
@@ -105,6 +108,8 @@ #include "internal.h" +#define MPOL_PREFERRED_MANY MPOL_MAX + /* Internal flags */ #define MPOL_MF_DISCONTIG_OK (MPOL_MF_INTERNAL << 0) /* Skip checks for continuous vmas */ #define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */ @@ -175,7 +180,7 @@ struct mempolicy *get_task_policy(struct task_struct *p) static const struct mempolicy_operations { int (*create)(struct mempolicy *pol, const nodemask_t *nodes); void (*rebind)(struct mempolicy *pol, const nodemask_t *nodes); -} mpol_ops[MPOL_MAX]; +} mpol_ops[MPOL_MAX + 1]; static inline int mpol_store_user_nodemask(const struct mempolicy *pol) { @@ -415,7 +420,7 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new) mmap_write_unlock(mm); } -static const struct mempolicy_operations mpol_ops[MPOL_MAX] = { +static const struct mempolicy_operations mpol_ops[MPOL_MAX + 1] = { [MPOL_DEFAULT] = { .rebind = mpol_rebind_default, }, @@ -432,6 +437,10 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = { .rebind = mpol_rebind_nodemask, }, /* [MPOL_LOCAL] - see mpol_new() */ + [MPOL_PREFERRED_MANY] = { + .create = NULL, + .rebind = NULL, + }, }; static int migrate_page_add(struct page *page, struct list_head *pagelist, @@ -914,6 +923,9 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes) case MPOL_INTERLEAVE: *nodes = p->v.nodes; break; + case MPOL_PREFERRED_MANY: + *nodes = p->v.preferred_nodes; + break; case MPOL_PREFERRED: if (!(p->flags & MPOL_F_LOCAL)) *nodes = p->v.preferred_nodes; @@ -1889,7 +1901,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy) /* Return the node id preferred by the given mempolicy, or the given id */ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd) { - if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL)) { + if ((policy->mode == MPOL_PREFERRED || + policy->mode == MPOL_PREFERRED_MANY) && + !(policy->flags & MPOL_F_LOCAL)) { nd = first_node(policy->v.preferred_nodes); } else { /* @@ -1932,6 +1946,7 @@ unsigned int mempolicy_slab_node(void) return node; switch (policy->mode) { + case MPOL_PREFERRED_MANY: case MPOL_PREFERRED: /* * handled MPOL_F_LOCAL above @@ -2066,6 +2081,9 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask) task_lock(current); mempolicy = current->mempolicy; switch (mempolicy->mode) { + case MPOL_PREFERRED_MANY: + *mask = mempolicy->v.preferred_nodes; + break; case MPOL_PREFERRED: if (mempolicy->flags & MPOL_F_LOCAL) nid = numa_node_id(); @@ -2120,6 +2138,9 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk, * nodes in mask. */ break; + case MPOL_PREFERRED_MANY: + ret = nodes_intersects(mempolicy->v.preferred_nodes, *mask); + break; case MPOL_BIND: case MPOL_INTERLEAVE: ret = nodes_intersects(mempolicy->v.nodes, *mask); @@ -2204,10 +2225,13 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, * node and don't fall back to other nodes, as the cost of * remote accesses would likely offset THP benefits. * - * If the policy is interleave, or does not allow the current - * node in its nodemask, we allocate the standard way. + * If the policy is interleave or multiple preferred nodes, or + * does not allow the current node in its nodemask, we allocate + * the standard way. 
*/ - if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL)) + if ((pol->mode == MPOL_PREFERRED || + pol->mode == MPOL_PREFERRED_MANY) && + !(pol->flags & MPOL_F_LOCAL)) hpage_node = first_node(pol->v.preferred_nodes); nmask = policy_nodemask(gfp, pol); @@ -2343,6 +2367,9 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b) case MPOL_BIND: case MPOL_INTERLEAVE: return !!nodes_equal(a->v.nodes, b->v.nodes); + case MPOL_PREFERRED_MANY: + return !!nodes_equal(a->v.preferred_nodes, + b->v.preferred_nodes); case MPOL_PREFERRED: /* a's ->flags is the same as b's */ if (a->flags & MPOL_F_LOCAL) return true; @@ -2511,6 +2538,8 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long polnid = zone_to_nid(z->zone); break; + /* case MPOL_PREFERRED_MANY: */ + default: BUG(); } @@ -2862,6 +2891,7 @@ static const char * const policy_modes[] = [MPOL_BIND] = "bind", [MPOL_INTERLEAVE] = "interleave", [MPOL_LOCAL] = "local", + [MPOL_PREFERRED_MANY] = "prefer (many)", }; @@ -2941,6 +2971,7 @@ int mpol_parse_str(char *str, struct mempolicy **mpol) if (!nodelist) err = 0; goto out; + case MPOL_PREFERRED_MANY: case MPOL_BIND: /* * Insist on a nodelist @@ -3023,6 +3054,9 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol) switch (mode) { case MPOL_DEFAULT: break; + case MPOL_PREFERRED_MANY: + WARN_ON(flags & MPOL_F_LOCAL); + fallthrough; case MPOL_PREFERRED: if (flags & MPOL_F_LOCAL) mode = MPOL_LOCAL;

From patchwork Fri Oct 30 19:02:30 2020
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 11870667
From: Ben Widawsky
To: linux-mm, Andrew Morton
Cc: Dave Hansen, Dave Hansen, Michal Hocko, Ben Widawsky, linux-kernel@vger.kernel.org
Subject: [PATCH 04/12] mm/mempolicy: allow preferred code to take a nodemask
Date: Fri, 30 Oct 2020 12:02:30 -0700
Message-Id: <20201030190238.306764-5-ben.widawsky@intel.com>
In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com>
References: <20201030190238.306764-1-ben.widawsky@intel.com>

From: Dave Hansen

Create a helper function (mpol_new_preferred_many()) which is usable both by the old, single-node MPOL_PREFERRED and the new MPOL_PREFERRED_MANY. Enforce the old single-node MPOL_PREFERRED behavior in the "new" version of mpol_new_preferred() which calls mpol_new_preferred_many().
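To make the delegation concrete before the diff below, here is a small userspace toy model (illustrative only, not the kernel code: the function names are invented and a plain unsigned long stands in for nodemask_t). The single-node path reduces its input to the lowest set bit and then reuses the multi-node helper, exactly the shape of the refactor described above.

#include <stdio.h>

static unsigned long policy_nodes;	/* stand-in for the policy's node mask */

static int set_preferred_many(const unsigned long *nodes)
{
	if (!nodes)
		return 0;		/* no mask: treat as local allocation */
	if (*nodes == 0)
		return -1;		/* no allowed nodes */
	policy_nodes = *nodes;
	return 0;
}

static int set_preferred(const unsigned long *nodes)
{
	if (nodes) {
		/* single-node flavor: keep only the lowest set bit */
		unsigned long first = *nodes & -*nodes;

		return set_preferred_many(&first);
	}
	return set_preferred_many(NULL);
}

int main(void)
{
	unsigned long mask = 0xC;	/* "nodes" 2 and 3 */

	set_preferred(&mask);
	printf("single-node path stored 0x%lx\n", policy_nodes);	/* 0x4 */
	set_preferred_many(&mask);
	printf("multi-node path stored 0x%lx\n", policy_nodes);	/* 0xc */
	return 0;
}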
Link: https://lore.kernel.org/r/20200630212517.308045-5-ben.widawsky@intel.com
Signed-off-by: Dave Hansen
Signed-off-by: Ben Widawsky
Reported-by: kernel test robot
---
mm/mempolicy.c | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 21a6f80f91a9..b1b43e511d6f 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -203,17 +203,30 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes) return 0; } -static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes) +static int mpol_new_preferred_many(struct mempolicy *pol, + const nodemask_t *nodes) { if (!nodes) pol->flags |= MPOL_F_LOCAL; /* local allocation */ else if (nodes_empty(*nodes)) return -EINVAL; /* no allowed nodes */ else - pol->v.preferred_nodes = nodemask_of_node(first_node(*nodes)); + pol->v.preferred_nodes = *nodes; return 0; } +static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes) +{ + if (nodes) { + /* MPOL_PREFERRED can only take a single node: */ + nodemask_t tmp = nodemask_of_node(first_node(*nodes)); + + return mpol_new_preferred_many(pol, &tmp); + } + + return mpol_new_preferred_many(pol, NULL); +} + static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes) { if (nodes_empty(*nodes))

From patchwork Fri Oct 30 19:02:31 2020
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 11870671
From: Ben Widawsky
To: linux-mm, Andrew Morton
Cc: Dave Hansen, Dave Hansen, Michal Hocko, Ben Widawsky, linux-kernel@vger.kernel.org
Subject: [PATCH 05/12] mm/mempolicy: refactor rebind code for PREFERRED_MANY
Date: Fri, 30 Oct 2020 12:02:31 -0700
Message-Id: <20201030190238.306764-6-ben.widawsky@intel.com>
In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com>
References: <20201030190238.306764-1-ben.widawsky@intel.com>

From: Dave Hansen

Again, this extracts the "only one node must be set" behavior of MPOL_PREFERRED. It retains virtually all of the existing code so it can be used by MPOL_PREFERRED_MANY as well.

v2:
Fixed typos in commit message. (Ben)
Merged bits from other patches. (Ben)
annotate mpol_rebind_preferred_many as unused (Ben)

Link: https://lore.kernel.org/r/20200630212517.308045-6-ben.widawsky@intel.com
Signed-off-by: Dave Hansen
Signed-off-by: Ben Widawsky
---
mm/mempolicy.c | 29 ++++++++++++++++++++++------- 1 file changed, 22 insertions(+), 7 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index b1b43e511d6f..1b88c133f5c5 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -359,14 +359,11 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes) pol->v.nodes = tmp; } -static void mpol_rebind_preferred(struct mempolicy *pol, - const nodemask_t *nodes) +static void mpol_rebind_preferred_common(struct mempolicy *pol, + const nodemask_t *preferred_nodes, + const nodemask_t *nodes) { nodemask_t tmp; - nodemask_t preferred_node; - - /* MPOL_PREFERRED uses only the first node in the mask */ - preferred_node = nodemask_of_node(first_node(*nodes)); if (pol->flags & MPOL_F_STATIC_NODES) { int node = first_node(pol->w.user_nodemask); @@ -381,12 +378,30 @@ static void mpol_rebind_preferred(struct mempolicy *pol, pol->v.preferred_nodes = tmp; } else if (!(pol->flags & MPOL_F_LOCAL)) { nodes_remap(tmp, pol->v.preferred_nodes, - pol->w.cpuset_mems_allowed, preferred_node); + pol->w.cpuset_mems_allowed, *preferred_nodes); pol->v.preferred_nodes = tmp; pol->w.cpuset_mems_allowed = *nodes; } } +/* MPOL_PREFERRED_MANY allows multiple nodes to be set in 'nodes' */ +static void __maybe_unused mpol_rebind_preferred_many(struct mempolicy *pol, + const nodemask_t *nodes) +{ + mpol_rebind_preferred_common(pol, nodes, nodes); +} + +static void mpol_rebind_preferred(struct mempolicy *pol, + const nodemask_t *nodes) +{ + nodemask_t preferred_node; + + /* MPOL_PREFERRED uses only the first node in 'nodes' */ + preferred_node = nodemask_of_node(first_node(*nodes)); + + mpol_rebind_preferred_common(pol, &preferred_node, nodes); +} + /* * mpol_rebind_policy - Migrate a policy to a different set of nodes *

From patchwork Fri Oct 30 19:02:32 2020
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 11870673
From: Ben Widawsky
To: linux-mm, Andrew Morton
Cc: Ben Widawsky, Dave Hansen, Michal Hocko, linux-kernel@vger.kernel.org
Subject: [PATCH 06/12] mm/mempolicy: kill v.preferred_nodes
Date: Fri, 30 Oct 2020 12:02:32 -0700
Message-Id: <20201030190238.306764-7-ben.widawsky@intel.com>
In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com>
References: <20201030190238.306764-1-ben.widawsky@intel.com>

Now that preferred_nodes is just a mask, and policies are mutually exclusive, there is no reason to have a separate mask.

This patch is optional. It definitely helps clean up code in future patches, but there is no functional difference to leaving it with the previous name. I do believe it helps demonstrate the exclusivity of the fields.
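For orientation before the diff below, this is a compressed sketch of the data-structure change (illustrative only, not the kernel header: unrelated members are dropped and a plain unsigned long stands in for nodemask_t). Because a policy is only ever one of preferred/bind/interleave at a time, the union never held two live masks, so collapsing it to a single field loses no information.

#include <stdio.h>

/* before: one union member per policy family */
struct mempolicy_before {
	unsigned short mode;			/* MPOL_* */
	union {
		unsigned long preferred_nodes;	/* preferred / preferred-many */
		unsigned long nodes;		/* interleave / bind */
	} v;
};

/* after: modes are mutually exclusive, so a single mask suffices */
struct mempolicy_after {
	unsigned short mode;			/* MPOL_* */
	unsigned long nodes;			/* interleave / bind / preferred-many */
};

int main(void)
{
	printf("before: %zu bytes, after: %zu bytes\n",
	       sizeof(struct mempolicy_before), sizeof(struct mempolicy_after));
	return 0;
}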
Link: https://lore.kernel.org/r/20200630212517.308045-7-ben.widawsky@intel.com Signed-off-by: Ben Widawsky --- include/linux/mempolicy.h | 6 +- mm/mempolicy.c | 112 ++++++++++++++++++-------------------- 2 files changed, 55 insertions(+), 63 deletions(-) diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index 23ee10556b82..ec811c35513e 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -46,11 +46,7 @@ struct mempolicy { atomic_t refcnt; unsigned short mode; /* See MPOL_* above */ unsigned short flags; /* See set_mempolicy() MPOL_F_* above */ - union { - nodemask_t preferred_nodes; /* preferred */ - nodemask_t nodes; /* interleave/bind */ - /* undefined for default */ - } v; + nodemask_t nodes; /* interleave/bind/many */ union { nodemask_t cpuset_mems_allowed; /* relative to these nodes */ nodemask_t user_nodemask; /* nodemask passed by user */ diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 1b88c133f5c5..f15dae340333 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -199,7 +199,7 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes) { if (nodes_empty(*nodes)) return -EINVAL; - pol->v.nodes = *nodes; + pol->nodes = *nodes; return 0; } @@ -211,7 +211,7 @@ static int mpol_new_preferred_many(struct mempolicy *pol, else if (nodes_empty(*nodes)) return -EINVAL; /* no allowed nodes */ else - pol->v.preferred_nodes = *nodes; + pol->nodes = *nodes; return 0; } @@ -231,7 +231,7 @@ static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes) { if (nodes_empty(*nodes)) return -EINVAL; - pol->v.nodes = *nodes; + pol->nodes = *nodes; return 0; } @@ -348,15 +348,15 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes) else if (pol->flags & MPOL_F_RELATIVE_NODES) mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes); else { - nodes_remap(tmp, pol->v.nodes,pol->w.cpuset_mems_allowed, - *nodes); + nodes_remap(tmp, pol->nodes, pol->w.cpuset_mems_allowed, + *nodes); pol->w.cpuset_mems_allowed = *nodes; } if (nodes_empty(tmp)) tmp = *nodes; - pol->v.nodes = tmp; + pol->nodes = tmp; } static void mpol_rebind_preferred_common(struct mempolicy *pol, @@ -369,17 +369,17 @@ static void mpol_rebind_preferred_common(struct mempolicy *pol, int node = first_node(pol->w.user_nodemask); if (node_isset(node, *nodes)) { - pol->v.preferred_nodes = nodemask_of_node(node); + pol->nodes = nodemask_of_node(node); pol->flags &= ~MPOL_F_LOCAL; } else pol->flags |= MPOL_F_LOCAL; } else if (pol->flags & MPOL_F_RELATIVE_NODES) { mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes); - pol->v.preferred_nodes = tmp; + pol->nodes = tmp; } else if (!(pol->flags & MPOL_F_LOCAL)) { - nodes_remap(tmp, pol->v.preferred_nodes, - pol->w.cpuset_mems_allowed, *preferred_nodes); - pol->v.preferred_nodes = tmp; + nodes_remap(tmp, pol->nodes, pol->w.cpuset_mems_allowed, + *preferred_nodes); + pol->nodes = tmp; pol->w.cpuset_mems_allowed = *nodes; } } @@ -949,14 +949,14 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes) switch (p->mode) { case MPOL_BIND: case MPOL_INTERLEAVE: - *nodes = p->v.nodes; + *nodes = p->nodes; break; case MPOL_PREFERRED_MANY: - *nodes = p->v.preferred_nodes; + *nodes = p->nodes; break; case MPOL_PREFERRED: if (!(p->flags & MPOL_F_LOCAL)) - *nodes = p->v.preferred_nodes; + *nodes = p->nodes; /* else return empty node mask for local allocation */ break; default: @@ -1042,7 +1042,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask, *policy = err; } else if (pol == 
current->mempolicy && pol->mode == MPOL_INTERLEAVE) { - *policy = next_node_in(current->il_prev, pol->v.nodes); + *policy = next_node_in(current->il_prev, pol->nodes); } else { err = -EINVAL; goto out; @@ -1898,14 +1898,14 @@ static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone) BUG_ON(dynamic_policy_zone == ZONE_MOVABLE); /* - * if policy->v.nodes has movable memory only, + * if policy->nodes has movable memory only, * we apply policy when gfp_zone(gfp) = ZONE_MOVABLE only. * - * policy->v.nodes is intersect with node_states[N_MEMORY]. + * policy->nodes is intersect with node_states[N_MEMORY]. * so if the following test faile, it implies - * policy->v.nodes has movable memory only. + * policy->nodes has movable memory only. */ - if (!nodes_intersects(policy->v.nodes, node_states[N_HIGH_MEMORY])) + if (!nodes_intersects(policy->nodes, node_states[N_HIGH_MEMORY])) dynamic_policy_zone = ZONE_MOVABLE; return zone >= dynamic_policy_zone; @@ -1919,9 +1919,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy) { /* Lower zones don't get a nodemask applied for MPOL_BIND */ if (unlikely(policy->mode == MPOL_BIND) && - apply_policy_zone(policy, gfp_zone(gfp)) && - cpuset_nodemask_valid_mems_allowed(&policy->v.nodes)) - return &policy->v.nodes; + apply_policy_zone(policy, gfp_zone(gfp)) && + cpuset_nodemask_valid_mems_allowed(&policy->nodes)) + return &policy->nodes; return NULL; } @@ -1932,7 +1932,7 @@ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd) if ((policy->mode == MPOL_PREFERRED || policy->mode == MPOL_PREFERRED_MANY) && !(policy->flags & MPOL_F_LOCAL)) { - nd = first_node(policy->v.preferred_nodes); + nd = first_node(policy->nodes); } else { /* * __GFP_THISNODE shouldn't even be used with the bind policy @@ -1951,7 +1951,7 @@ static unsigned interleave_nodes(struct mempolicy *policy) unsigned next; struct task_struct *me = current; - next = next_node_in(me->il_prev, policy->v.nodes); + next = next_node_in(me->il_prev, policy->nodes); if (next < MAX_NUMNODES) me->il_prev = next; return next; @@ -1979,7 +1979,7 @@ unsigned int mempolicy_slab_node(void) /* * handled MPOL_F_LOCAL above */ - return first_node(policy->v.preferred_nodes); + return first_node(policy->nodes); case MPOL_INTERLEAVE: return interleave_nodes(policy); @@ -1995,7 +1995,7 @@ unsigned int mempolicy_slab_node(void) enum zone_type highest_zoneidx = gfp_zone(GFP_KERNEL); zonelist = &NODE_DATA(node)->node_zonelists[ZONELIST_FALLBACK]; z = first_zones_zonelist(zonelist, highest_zoneidx, - &policy->v.nodes); + &policy->nodes); return z->zone ? zone_to_nid(z->zone) : node; } @@ -2006,12 +2006,12 @@ unsigned int mempolicy_slab_node(void) /* * Do static interleaving for a VMA with known offset @n. Returns the n'th - * node in pol->v.nodes (starting from n=0), wrapping around if n exceeds the + * node in pol->nodes (starting from n=0), wrapping around if n exceeds the * number of present nodes. 
*/ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n) { - unsigned nnodes = nodes_weight(pol->v.nodes); + unsigned nnodes = nodes_weight(pol->nodes); unsigned target; int i; int nid; @@ -2019,9 +2019,9 @@ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n) if (!nnodes) return numa_node_id(); target = (unsigned int)n % nnodes; - nid = first_node(pol->v.nodes); + nid = first_node(pol->nodes); for (i = 0; i < target; i++) - nid = next_node(nid, pol->v.nodes); + nid = next_node(nid, pol->nodes); return nid; } @@ -2077,7 +2077,7 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags, } else { nid = policy_node(gfp_flags, *mpol, numa_node_id()); if ((*mpol)->mode == MPOL_BIND) - *nodemask = &(*mpol)->v.nodes; + *nodemask = &(*mpol)->nodes; } return nid; } @@ -2110,19 +2110,19 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask) mempolicy = current->mempolicy; switch (mempolicy->mode) { case MPOL_PREFERRED_MANY: - *mask = mempolicy->v.preferred_nodes; + *mask = mempolicy->nodes; break; case MPOL_PREFERRED: if (mempolicy->flags & MPOL_F_LOCAL) nid = numa_node_id(); else - nid = first_node(mempolicy->v.preferred_nodes); + nid = first_node(mempolicy->nodes); init_nodemask_of_node(mask, nid); break; case MPOL_BIND: case MPOL_INTERLEAVE: - *mask = mempolicy->v.nodes; + *mask = mempolicy->nodes; break; default: @@ -2167,11 +2167,11 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk, */ break; case MPOL_PREFERRED_MANY: - ret = nodes_intersects(mempolicy->v.preferred_nodes, *mask); + ret = nodes_intersects(mempolicy->nodes, *mask); break; case MPOL_BIND: case MPOL_INTERLEAVE: - ret = nodes_intersects(mempolicy->v.nodes, *mask); + ret = nodes_intersects(mempolicy->nodes, *mask); break; default: BUG(); @@ -2260,7 +2260,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, if ((pol->mode == MPOL_PREFERRED || pol->mode == MPOL_PREFERRED_MANY) && !(pol->flags & MPOL_F_LOCAL)) - hpage_node = first_node(pol->v.preferred_nodes); + hpage_node = first_node(pol->nodes); nmask = policy_nodemask(gfp, pol); if (!nmask || node_isset(hpage_node, *nmask)) { @@ -2394,15 +2394,14 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b) switch (a->mode) { case MPOL_BIND: case MPOL_INTERLEAVE: - return !!nodes_equal(a->v.nodes, b->v.nodes); + return !!nodes_equal(a->nodes, b->nodes); case MPOL_PREFERRED_MANY: - return !!nodes_equal(a->v.preferred_nodes, - b->v.preferred_nodes); + return !!nodes_equal(a->nodes, b->nodes); case MPOL_PREFERRED: /* a's ->flags is the same as b's */ if (a->flags & MPOL_F_LOCAL) return true; - return nodes_equal(a->v.preferred_nodes, b->v.preferred_nodes); + return nodes_equal(a->nodes, b->nodes); default: BUG(); return false; @@ -2546,7 +2545,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long if (pol->flags & MPOL_F_LOCAL) polnid = numa_node_id(); else - polnid = first_node(pol->v.preferred_nodes); + polnid = first_node(pol->nodes); break; case MPOL_BIND: @@ -2557,12 +2556,11 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long * else select nearest allowed node, if any. * If no allowed nodes, use current [!misplaced]. 
*/ - if (node_isset(curnid, pol->v.nodes)) + if (node_isset(curnid, pol->nodes)) goto out; - z = first_zones_zonelist( - node_zonelist(numa_node_id(), GFP_HIGHUSER), - gfp_zone(GFP_HIGHUSER), - &pol->v.nodes); + z = first_zones_zonelist(node_zonelist(numa_node_id(), + GFP_HIGHUSER), + gfp_zone(GFP_HIGHUSER), &pol->nodes); polnid = zone_to_nid(z->zone); break; @@ -2763,11 +2761,9 @@ int mpol_set_shared_policy(struct shared_policy *info, struct sp_node *new = NULL; unsigned long sz = vma_pages(vma); - pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n", - vma->vm_pgoff, - sz, npol ? npol->mode : -1, - npol ? npol->flags : -1, - npol ? nodes_addr(npol->v.nodes)[0] : NUMA_NO_NODE); + pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n", vma->vm_pgoff, sz, + npol ? npol->mode : -1, npol ? npol->flags : -1, + npol ? nodes_addr(npol->nodes)[0] : NUMA_NO_NODE); if (npol) { new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol); @@ -2861,11 +2857,11 @@ void __init numa_policy_init(void) 0, SLAB_PANIC, NULL); for_each_node(nid) { - preferred_node_policy[nid] = (struct mempolicy) { + preferred_node_policy[nid] = (struct mempolicy){ .refcnt = ATOMIC_INIT(1), .mode = MPOL_PREFERRED, .flags = MPOL_F_MOF | MPOL_F_MORON, - .v = { .preferred_nodes = nodemask_of_node(nid), }, + .nodes = nodemask_of_node(nid), }; } @@ -3031,9 +3027,9 @@ int mpol_parse_str(char *str, struct mempolicy **mpol) * for /proc/mounts, /proc/pid/mounts and /proc/pid/mountinfo. */ if (mode != MPOL_PREFERRED) - new->v.nodes = nodes; + new->nodes = nodes; else if (nodelist) - new->v.preferred_nodes = nodemask_of_node(first_node(nodes)); + new->nodes = nodemask_of_node(first_node(nodes)); else new->flags |= MPOL_F_LOCAL; @@ -3089,11 +3085,11 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol) if (flags & MPOL_F_LOCAL) mode = MPOL_LOCAL; else - nodes_or(nodes, nodes, pol->v.preferred_nodes); + nodes_or(nodes, nodes, pol->nodes); break; case MPOL_BIND: case MPOL_INTERLEAVE: - nodes = pol->v.nodes; + nodes = pol->nodes; break; default: WARN_ON_ONCE(1);

From patchwork Fri Oct 30 19:02:33 2020
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 11870669
15:02:50 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id BCB62181AEF1D for ; Fri, 30 Oct 2020 19:02:49 +0000 (UTC) X-FDA: 77429513658.29.bike89_1f0cef727298 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin29.hostedemail.com (Postfix) with ESMTP id 913711808658D for ; Fri, 30 Oct 2020 19:02:49 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,ben.widawsky@intel.com,,RULES_HIT:30054:30064,0,RBL:134.134.136.20:@intel.com:.lbl8.mailshell.net-62.50.0.100 64.95.201.95;04yfrfbgxfsd1k9p589gr3zafua38yp4tkdmnncsgxpdd9at4ouwdutmjq11onf.wr7sw7d1ojdo1i6tezwb3w3gjbp3c7xtbzbfaog8yfddfoggkqidb5gd6ae5pxy.q-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:70,LUA_SUMMARY:none X-HE-Tag: bike89_1f0cef727298 X-Filterd-Recvd-Size: 5898 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by imf15.hostedemail.com (Postfix) with ESMTP for ; Fri, 30 Oct 2020 19:02:48 +0000 (UTC) IronPort-SDR: mCzr3nG2tbeBjxXDMeOd+GJFUE9/ID6/yhfKx3R3L5wc190uaaN86MX9gvHhLwf2Gw8lWotz8Y ieZNqYQyMbDg== X-IronPort-AV: E=McAfee;i="6000,8403,9790"; a="155629112" X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="155629112" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:47 -0700 IronPort-SDR: 24CKGV2ZTWdkoHTRAXXx5t8VkNBk6e7V/ORJK960s9NkdXGjaSh5lsQSjeXxcpceLeui7AbRs8 K1wjTTiXBtLA== X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="537167692" Received: from kingelix-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.139.120]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:47 -0700 From: Ben Widawsky To: linux-mm , Andrew Morton Cc: Ben Widawsky , Dave Hansen , Michal Hocko , linux-kernel@vger.kernel.org Subject: [PATCH 07/12] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND Date: Fri, 30 Oct 2020 12:02:33 -0700 Message-Id: <20201030190238.306764-8-ben.widawsky@intel.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com> References: <20201030190238.306764-1-ben.widawsky@intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Begin the real plumbing for handling this new policy. Now that the internal representation for preferred nodes and bound nodes is the same, and we can envision what multiple preferred nodes will behave like, there are obvious places where we can simply reuse the bind behavior. In v1 of this series, the moral equivalent was: "mm: Finish handling MPOL_PREFERRED_MANY". Like that, this attempts to implement the easiest spots for the new policy. Unlike that, this just reuses BIND. 
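To make the reuse concrete: every helper touched here only needs to know which nodes the policy names, and since the earlier union removal both MPOL_BIND and MPOL_PREFERRED_MANY keep that set in pol->nodes. A minimal, illustrative sketch of the resulting pattern (modelled on get_policy_nodemask() in the hunks below, trimmed and not a verbatim copy):

	switch (pol->mode) {
	case MPOL_BIND:
	case MPOL_INTERLEAVE:
	case MPOL_PREFERRED_MANY:
		/* all three carry their node set in the same field now */
		*nodes = pol->nodes;
		break;
	case MPOL_PREFERRED:
		if (!(pol->flags & MPOL_F_LOCAL))
			node_set(first_node(pol->nodes), *nodes);
		break;
	default:
		BUG();
	}
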
Link: https://lore.kernel.org/r/20200630212517.308045-8-ben.widawsky@intel.com Signed-off-by: Ben Widawsky --- mm/mempolicy.c | 22 +++++++--------------- 1 file changed, 7 insertions(+), 15 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index f15dae340333..a991dabb636d 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -949,8 +949,6 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes) switch (p->mode) { case MPOL_BIND: case MPOL_INTERLEAVE: - *nodes = p->nodes; - break; case MPOL_PREFERRED_MANY: *nodes = p->nodes; break; @@ -1918,7 +1916,8 @@ static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone) nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy) { /* Lower zones don't get a nodemask applied for MPOL_BIND */ - if (unlikely(policy->mode == MPOL_BIND) && + if (unlikely(policy->mode == MPOL_BIND || + policy->mode == MPOL_PREFERRED_MANY) && apply_policy_zone(policy, gfp_zone(gfp)) && cpuset_nodemask_valid_mems_allowed(&policy->nodes)) return &policy->nodes; @@ -1974,7 +1973,6 @@ unsigned int mempolicy_slab_node(void) return node; switch (policy->mode) { - case MPOL_PREFERRED_MANY: case MPOL_PREFERRED: /* * handled MPOL_F_LOCAL above @@ -1984,6 +1982,7 @@ unsigned int mempolicy_slab_node(void) case MPOL_INTERLEAVE: return interleave_nodes(policy); + case MPOL_PREFERRED_MANY: case MPOL_BIND: { struct zoneref *z; @@ -2109,9 +2108,6 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask) task_lock(current); mempolicy = current->mempolicy; switch (mempolicy->mode) { - case MPOL_PREFERRED_MANY: - *mask = mempolicy->nodes; - break; case MPOL_PREFERRED: if (mempolicy->flags & MPOL_F_LOCAL) nid = numa_node_id(); @@ -2122,6 +2118,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask) case MPOL_BIND: case MPOL_INTERLEAVE: + case MPOL_PREFERRED_MANY: *mask = mempolicy->nodes; break; @@ -2165,12 +2162,11 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk, * Thus, it's possible for tsk to have allocated memory from * nodes in mask. 
*/ - break; - case MPOL_PREFERRED_MANY: ret = nodes_intersects(mempolicy->nodes, *mask); break; case MPOL_BIND: case MPOL_INTERLEAVE: + case MPOL_PREFERRED_MANY: ret = nodes_intersects(mempolicy->nodes, *mask); break; default: @@ -2394,7 +2390,6 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b) switch (a->mode) { case MPOL_BIND: case MPOL_INTERLEAVE: - return !!nodes_equal(a->nodes, b->nodes); case MPOL_PREFERRED_MANY: return !!nodes_equal(a->nodes, b->nodes); case MPOL_PREFERRED: @@ -2548,6 +2543,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long polnid = first_node(pol->nodes); break; + case MPOL_PREFERRED_MANY: case MPOL_BIND: /* @@ -2564,8 +2560,6 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long polnid = zone_to_nid(z->zone); break; - /* case MPOL_PREFERRED_MANY: */ - default: BUG(); } @@ -3078,15 +3072,13 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol) switch (mode) { case MPOL_DEFAULT: break; - case MPOL_PREFERRED_MANY: - WARN_ON(flags & MPOL_F_LOCAL); - fallthrough; case MPOL_PREFERRED: if (flags & MPOL_F_LOCAL) mode = MPOL_LOCAL; else nodes_or(nodes, nodes, pol->nodes); break; + case MPOL_PREFERRED_MANY: case MPOL_BIND: case MPOL_INTERLEAVE: nodes = pol->nodes; From patchwork Fri Oct 30 19:02:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 11870677 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F3237697 for ; Fri, 30 Oct 2020 19:03:03 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B25F920797 for ; Fri, 30 Oct 2020 19:03:03 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B25F920797 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 57B846B0073; Fri, 30 Oct 2020 15:02:52 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 5304F6B0075; Fri, 30 Oct 2020 15:02:52 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3806B6B0078; Fri, 30 Oct 2020 15:02:52 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0194.hostedemail.com [216.40.44.194]) by kanga.kvack.org (Postfix) with ESMTP id 078D86B0075 for ; Fri, 30 Oct 2020 15:02:51 -0400 (EDT) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id A4317180AD801 for ; Fri, 30 Oct 2020 19:02:51 +0000 (UTC) X-FDA: 77429513742.09.sense44_5114fd527298 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin09.hostedemail.com (Postfix) with ESMTP id 83211180AD817 for ; Fri, 30 Oct 2020 19:02:51 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,ben.widawsky@intel.com,,RULES_HIT:30036:30054:30064:30070,0,RBL:134.134.136.20:@intel.com:.lbl8.mailshell.net-64.95.201.95 
62.50.0.100;04ygayeaamkk5z4beanp79uwwcbs6op3osm3paudnmy9uqw7d5qffak31zg4n5a.gjy91sa34e89osifium7t9ssu1jgddoxrw4ryup3qmj9jtn58zgnthwqidrzuyz.n-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:69,LUA_SUMMARY:none X-HE-Tag: sense44_5114fd527298 X-Filterd-Recvd-Size: 6299 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by imf36.hostedemail.com (Postfix) with ESMTP for ; Fri, 30 Oct 2020 19:02:50 +0000 (UTC) IronPort-SDR: 2CYY/bQ3fkfRqva53pn+PoX7LrMj6rQUj50b5ZM66/qrdpXXI2HNK37eWKAi1BBAkKDL3qb8PK liG/h5Lbmleg== X-IronPort-AV: E=McAfee;i="6000,8403,9790"; a="155629118" X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="155629118" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:50 -0700 IronPort-SDR: drrmRt5ZUAZUFt3ysi/VrFEIa4/st4BPncpacxtvvg25cNaYN+zQiHP3lua/xZFCicFdafdqJ3 x7YCtVvEwcxw== X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="537167697" Received: from kingelix-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.139.120]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:47 -0700 From: Ben Widawsky To: linux-mm , Andrew Morton Cc: Ben Widawsky , Dave Hansen , Michal Hocko , linux-kernel@vger.kernel.org Subject: [PATCH 08/12] mm/mempolicy: Create a page allocator for policy Date: Fri, 30 Oct 2020 12:02:34 -0700 Message-Id: <20201030190238.306764-9-ben.widawsky@intel.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com> References: <20201030190238.306764-1-ben.widawsky@intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add a helper function which takes care of handling multiple preferred nodes. It will be called by future patches that need to handle this, specifically VMA based page allocation, and task based page allocation. Huge pages don't quite fit the same pattern because they use different underlying page allocation functions. This consumes the previous interleave policy specific allocation function to make a one stop shop for policy based allocation. For now, only interleaved policy will be used so there should be no functional change yet. However, if bisection points to issues in the next few commits, it was likely the fault of this patch. Similar functionality is offered via policy_node() and policy_nodemask(). By themselves however, neither can achieve this fallback style of sets of nodes. Link: https://lore.kernel.org/r/20200630212517.308045-9-ben.widawsky@intel.com Signed-off-by: Ben Widawsky --- mm/mempolicy.c | 61 +++++++++++++++++++++++++++++++++++++++----------- 1 file changed, 48 insertions(+), 13 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index a991dabb636d..1fd0da0f9631 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2177,22 +2177,56 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk, return ret; } -/* Allocate a page in interleaved policy. - Own path because it needs to do special accounting. 
*/ -static struct page *alloc_page_interleave(gfp_t gfp, unsigned order, - unsigned nid) +/* Handle page allocation for all but interleaved policies */ +static struct page *alloc_pages_policy(struct mempolicy *pol, gfp_t gfp, + unsigned int order, int preferred_nid) { struct page *page; + gfp_t gfp_mask = gfp; - page = __alloc_pages(gfp, order, nid); - /* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */ - if (!static_branch_likely(&vm_numa_stat_key)) + if (pol->mode == MPOL_INTERLEAVE) { + page = __alloc_pages(gfp, order, preferred_nid); + /* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */ + if (!static_branch_likely(&vm_numa_stat_key)) + return page; + if (page && page_to_nid(page) == preferred_nid) { + preempt_disable(); + __inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT); + preempt_enable(); + } return page; - if (page && page_to_nid(page) == nid) { - preempt_disable(); - __inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT); - preempt_enable(); } + + VM_BUG_ON(preferred_nid != NUMA_NO_NODE); + + preferred_nid = numa_node_id(); + + /* + * There is a two pass approach implemented here for + * MPOL_PREFERRED_MANY. In the first pass we pretend the preferred nodes + * are bound, but allow the allocation to fail. The below table explains + * how this is achieved. + * + * | Policy | preferred nid | nodemask | + * |-------------------------------|---------------|------------| + * | MPOL_DEFAULT | local | NULL | + * | MPOL_PREFERRED | best | NULL | + * | MPOL_INTERLEAVE | ERR | ERR | + * | MPOL_BIND | local | pol->nodes | + * | MPOL_PREFERRED_MANY | best | pol->nodes | + * | MPOL_PREFERRED_MANY (round 2) | local | NULL | + * +-------------------------------+---------------+------------+ + */ + if (pol->mode == MPOL_PREFERRED_MANY) + gfp_mask |= __GFP_RETRY_MAYFAIL; + + page = __alloc_pages_nodemask(gfp_mask, order, + policy_node(gfp, pol, preferred_nid), + policy_nodemask(gfp, pol)); + + if (unlikely(!page && pol->mode == MPOL_PREFERRED_MANY)) + page = __alloc_pages_nodemask(gfp, order, preferred_nid, NULL); + return page; } @@ -2234,8 +2268,8 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, unsigned nid; nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order); + page = alloc_pages_policy(pol, gfp, order, nid); mpol_cond_put(pol); - page = alloc_page_interleave(gfp, order, nid); goto out; } @@ -2319,7 +2353,8 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order) * nor system default_policy */ if (pol->mode == MPOL_INTERLEAVE) - page = alloc_page_interleave(gfp, order, interleave_nodes(pol)); + page = alloc_pages_policy(pol, gfp, order, + interleave_nodes(pol)); else page = __alloc_pages_nodemask(gfp, order, policy_node(gfp, pol, numa_node_id()), From patchwork Fri Oct 30 19:02:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 11870679 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0EBC86A2 for ; Fri, 30 Oct 2020 19:03:06 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C0AF42223F for ; Fri, 30 Oct 2020 19:03:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C0AF42223F Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=pass 
smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 618296B0075; Fri, 30 Oct 2020 15:02:53 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 5C8426B0078; Fri, 30 Oct 2020 15:02:53 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3F4BD6B007B; Fri, 30 Oct 2020 15:02:53 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0180.hostedemail.com [216.40.44.180]) by kanga.kvack.org (Postfix) with ESMTP id 066876B0075 for ; Fri, 30 Oct 2020 15:02:52 -0400 (EDT) Received: from smtpin27.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id A231A8249980 for ; Fri, 30 Oct 2020 19:02:52 +0000 (UTC) X-FDA: 77429513784.27.value20_570a24427298 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin27.hostedemail.com (Postfix) with ESMTP id 6DD0B3D670 for ; Fri, 30 Oct 2020 19:02:52 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,ben.widawsky@intel.com,,RULES_HIT:30054:30064,0,RBL:134.134.136.20:@intel.com:.lbl8.mailshell.net-64.95.201.95 62.50.0.100;04y8d1u88tan7xkj849uc53us55daypn1utczzu9iwu5uxzfzxd96ttn1cmt581.3gkgpjzx3nik7xko9a4hramo9wcokz54fde8say47xtfrgs965bnzrgxbttu85r.1-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:73,LUA_SUMMARY:none X-HE-Tag: value20_570a24427298 X-Filterd-Recvd-Size: 3244 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by imf36.hostedemail.com (Postfix) with ESMTP for ; Fri, 30 Oct 2020 19:02:51 +0000 (UTC) IronPort-SDR: hTLbIkpCM0yLCM876KdI5bn3ofqBXYkZr/TH9riJF4GyWKcZm8QkIDqV9dEH1slF4h21NlErOn ZAyKG/YP2yyw== X-IronPort-AV: E=McAfee;i="6000,8403,9790"; a="155629121" X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="155629121" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:50 -0700 IronPort-SDR: Jqs0dvB0pEpCMFGr6rZXyjqHhX8kSo1drRxzU5wtdujTa7eT6z/1D0FlJM8qXF8/uRjyVfTIQG sM9Vq415DLSQ== X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="537167702" Received: from kingelix-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.139.120]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:50 -0700 From: Ben Widawsky To: linux-mm , Andrew Morton Cc: Ben Widawsky , Dave Hansen , Michal Hocko , linux-kernel@vger.kernel.org Subject: [PATCH 09/12] mm/mempolicy: Thread allocation for many preferred Date: Fri, 30 Oct 2020 12:02:35 -0700 Message-Id: <20201030190238.306764-10-ben.widawsky@intel.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com> References: <20201030190238.306764-1-ben.widawsky@intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In order to support MPOL_PREFERRED_MANY as the mode used by set_mempolicy(2), alloc_pages_current() needs to support it. This patch does that by using the new helper function to allocate properly based on policy. 
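Concretely, once the new mode is exported by the final patch in this series, a task-wide preferred-many policy is installed with set_mempolicy(2) and then served by alloc_pages_current(). A hypothetical userspace sketch (the MPOL_PREFERRED_MANY value is taken from the UAPI patch later in the series and will return -EINVAL until that lands; <numaif.h> and the wrapper come from libnuma, not from these patches):

	#include <numaif.h>		/* set_mempolicy() wrapper from libnuma */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#ifndef MPOL_PREFERRED_MANY
	#define MPOL_PREFERRED_MANY 5	/* value per the UAPI patch; assumption until it lands */
	#endif

	int main(void)
	{
		/* Prefer nodes 0 and 2; the kernel falls back to other nodes on failure. */
		unsigned long nodemask = (1UL << 0) | (1UL << 2);

		if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask,
				  sizeof(nodemask) * 8 + 1)) {
			perror("set_mempolicy");
			return EXIT_FAILURE;
		}

		/* New anonymous memory for this task is now allocated under the policy. */
		char *buf = malloc(64 << 20);
		if (!buf)
			return EXIT_FAILURE;
		memset(buf, 0, 64 << 20);	/* fault the pages in under the policy */

		return EXIT_SUCCESS;
	}
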
All the actual machinery to make this work was part of ("mm/mempolicy: Create a page allocator for policy") Link: https://lore.kernel.org/r/20200630212517.308045-10-ben.widawsky@intel.com Signed-off-by: Ben Widawsky --- mm/mempolicy.c | 11 +++-------- 1 file changed, 3 insertions(+), 8 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 1fd0da0f9631..2d19235413db 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2343,7 +2343,7 @@ EXPORT_SYMBOL(alloc_pages_vma); struct page *alloc_pages_current(gfp_t gfp, unsigned order) { struct mempolicy *pol = &default_policy; - struct page *page; + int nid = NUMA_NO_NODE; if (!in_interrupt() && !(gfp & __GFP_THISNODE)) pol = get_task_policy(current); @@ -2353,14 +2353,9 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order) * nor system default_policy */ if (pol->mode == MPOL_INTERLEAVE) - page = alloc_pages_policy(pol, gfp, order, - interleave_nodes(pol)); - else - page = __alloc_pages_nodemask(gfp, order, - policy_node(gfp, pol, numa_node_id()), - policy_nodemask(gfp, pol)); + nid = interleave_nodes(pol); - return page; + return alloc_pages_policy(pol, gfp, order, nid); } EXPORT_SYMBOL(alloc_pages_current); From patchwork Fri Oct 30 19:02:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 11870685 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 10AF56A2 for ; Fri, 30 Oct 2020 19:03:12 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C75202222F for ; Fri, 30 Oct 2020 19:03:11 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C75202222F Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E2FAE6B0080; Fri, 30 Oct 2020 15:02:56 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id DB5A46B0081; Fri, 30 Oct 2020 15:02:56 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CA8F56B0082; Fri, 30 Oct 2020 15:02:56 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0120.hostedemail.com [216.40.44.120]) by kanga.kvack.org (Postfix) with ESMTP id 871F46B0080 for ; Fri, 30 Oct 2020 15:02:56 -0400 (EDT) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 224D51F0A for ; Fri, 30 Oct 2020 19:02:56 +0000 (UTC) X-FDA: 77429513952.20.knot13_4c0cc2f27298 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin20.hostedemail.com (Postfix) with ESMTP id C2E32180C0F75 for ; Fri, 30 Oct 2020 19:02:52 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,ben.widawsky@intel.com,,RULES_HIT:30054:30064,0,RBL:134.134.136.20:@intel.com:.lbl8.mailshell.net-62.50.0.100 64.95.201.95;04yr9sj5gw6cqukrucisqzfj8f15focw9p8tk46teaf1tf54yyoa79bjhh7ru7k.wuhj69ho1n8qmhr8iz9b8rmbodf9kaek89yeyp9dk7opax5tqoeywqs4pmj5ye1.6-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not 
bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:70,LUA_SUMMARY:none X-HE-Tag: knot13_4c0cc2f27298 X-Filterd-Recvd-Size: 4453 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by imf25.hostedemail.com (Postfix) with ESMTP for ; Fri, 30 Oct 2020 19:02:51 +0000 (UTC) IronPort-SDR: cNn1nfmo6o0iKLByrTNoSxEOtxMKNZ7xt+G4vB1aEOmgE9fAgf3WHMnIPiWcRMsIbd4RXYhL4a Gpb8Hk5JfysA== X-IronPort-AV: E=McAfee;i="6000,8403,9790"; a="155629124" X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="155629124" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:51 -0700 IronPort-SDR: SuIoIL/dDSU08OxuWu8YahYZfn8VQvQTKVMPi4c+6T8GsDkvv3KVoI87AZbhAFxrAVg2m9nEpy ZdMoZYP/dKfA== X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="537167706" Received: from kingelix-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.139.120]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:50 -0700 From: Ben Widawsky To: linux-mm , Andrew Morton Cc: Ben Widawsky , Dave Hansen , Michal Hocko , linux-kernel@vger.kernel.org Subject: [PATCH 10/12] mm/mempolicy: VMA allocation for many preferred Date: Fri, 30 Oct 2020 12:02:36 -0700 Message-Id: <20201030190238.306764-11-ben.widawsky@intel.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com> References: <20201030190238.306764-1-ben.widawsky@intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This patch implements MPOL_PREFERRED_MANY for alloc_pages_vma(). Like alloc_pages_current(), alloc_pages_vma() needs to support policy based decisions if they've been configured via mbind(2). The temporary "hack" of treating MPOL_PREFERRED and MPOL_PREFERRED_MANY can now be removed with this, too. All the actual machinery to make this work was part of ("mm/mempolicy: Create a page allocator for policy") Link: https://lore.kernel.org/r/20200630212517.308045-11-ben.widawsky@intel.com Signed-off-by: Ben Widawsky --- mm/mempolicy.c | 29 +++++++++++++++++++++-------- 1 file changed, 21 insertions(+), 8 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 2d19235413db..343340c87f03 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2259,8 +2259,6 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, { struct mempolicy *pol; struct page *page; - int preferred_nid; - nodemask_t *nmask; pol = get_vma_policy(vma, addr); @@ -2274,6 +2272,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, } if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) { + nodemask_t *nmask; int hpage_node = node; /* @@ -2287,10 +2286,26 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, * does not allow the current node in its nodemask, we allocate * the standard way. 
*/ - if ((pol->mode == MPOL_PREFERRED || - pol->mode == MPOL_PREFERRED_MANY) && - !(pol->flags & MPOL_F_LOCAL)) + if (pol->mode == MPOL_PREFERRED || !(pol->flags & MPOL_F_LOCAL)) { hpage_node = first_node(pol->nodes); + } else if (pol->mode == MPOL_PREFERRED_MANY) { + struct zoneref *z; + + /* + * In this policy, with direct reclaim, the normal + * policy based allocation will do the right thing - try + * twice using the preferred nodes first, and all nodes + * second. + */ + if (gfp & __GFP_DIRECT_RECLAIM) { + page = alloc_pages_policy(pol, gfp, order, NUMA_NO_NODE); + goto out; + } + + z = first_zones_zonelist(node_zonelist(numa_node_id(), GFP_HIGHUSER), + gfp_zone(GFP_HIGHUSER), &pol->nodes); + hpage_node = zone_to_nid(z->zone); + } nmask = policy_nodemask(gfp, pol); if (!nmask || node_isset(hpage_node, *nmask)) { @@ -2316,9 +2331,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma, } } - nmask = policy_nodemask(gfp, pol); - preferred_nid = policy_node(gfp, pol, node); - page = __alloc_pages_nodemask(gfp, order, preferred_nid, nmask); + page = alloc_pages_policy(pol, gfp, order, NUMA_NO_NODE); mpol_cond_put(pol); out: return page; From patchwork Fri Oct 30 19:02:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 11870681 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0B7346A2 for ; Fri, 30 Oct 2020 19:03:08 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B84CA208B6 for ; Fri, 30 Oct 2020 19:03:07 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B84CA208B6 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7A4926B007B; Fri, 30 Oct 2020 15:02:54 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 63C9A6B007D; Fri, 30 Oct 2020 15:02:54 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4B60D6B007E; Fri, 30 Oct 2020 15:02:54 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0049.hostedemail.com [216.40.44.49]) by kanga.kvack.org (Postfix) with ESMTP id 140AE6B007B for ; Fri, 30 Oct 2020 15:02:54 -0400 (EDT) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 7718C362B for ; Fri, 30 Oct 2020 19:02:53 +0000 (UTC) X-FDA: 77429513826.22.beam48_5b0ca3527298 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP id 4C35518038E60 for ; Fri, 30 Oct 2020 19:02:53 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,ben.widawsky@intel.com,,RULES_HIT:30003:30054:30064,0,RBL:134.134.136.20:@intel.com:.lbl8.mailshell.net-62.50.0.100 64.95.201.95;04yfjjw8iuh8wkdciu1b5sqducub7ypegqdechmx93566fsreihufc5eb9cqm8u.rged1o1kkekp4rdf58tqbjf9gdooibxqqq6hsnsfh77ebsco3s914u31twwjsi4.y-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not 
bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:68,LUA_SUMMARY:none X-HE-Tag: beam48_5b0ca3527298 X-Filterd-Recvd-Size: 4832 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by imf36.hostedemail.com (Postfix) with ESMTP for ; Fri, 30 Oct 2020 19:02:52 +0000 (UTC) IronPort-SDR: Ht37eyqtim3meETS+kxr1f4zAfpwOqOlzvMo/kZFvskUKv5dffOY4AdtKG0d3iMhqDql3YwzgZ 26+kaJXw4tWA== X-IronPort-AV: E=McAfee;i="6000,8403,9790"; a="155629126" X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="155629126" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:52 -0700 IronPort-SDR: BaEq9jfMC8z/cKfYoy+F1sulYcsqWnz9zF3IlRo+BIpHkuzlqTj/BH3jtO3eGBtkr0xDiResWB Eb95nxA4EEPg== X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="537167713" Received: from kingelix-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.139.120]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:51 -0700 From: Ben Widawsky To: linux-mm , Mike Kravetz , Andrew Morton Cc: Ben Widawsky , Dave Hansen , Michal Hocko , linux-kernel@vger.kernel.org Subject: [PATCH 11/12] mm/mempolicy: huge-page allocation for many preferred Date: Fri, 30 Oct 2020 12:02:37 -0700 Message-Id: <20201030190238.306764-12-ben.widawsky@intel.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com> References: <20201030190238.306764-1-ben.widawsky@intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Implement the missing huge page allocation functionality while obeying the preferred node semantics. This uses a fallback mechanism to try multiple preferred nodes first, and then all other nodes. It cannot use the helper function that was introduced because huge page allocation already has its own helpers and it was more LOC, and effort to try to consolidate that. The weirdness is MPOL_PREFERRED_MANY can't be called yet because it is part of the UAPI we haven't yet exposed. Instead of make that define global, it's simply changed with the UAPI patch. 
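The fallback itself boils down to two attempts: first restricted to the preferred nodes with __GFP_RETRY_MAYFAIL so the attempt may fail quietly, then unrestricted. A simplified sketch of the pattern applied to the hugetlb helpers below (names as in mm/hugetlb.c, surrounding error handling trimmed):

	/* First pass: only the preferred nodes, and let the attempt fail. */
	page = dequeue_huge_page_nodemask(h, gfp_mask | __GFP_RETRY_MAYFAIL,
					  nid, nodemask);
	/* Second pass: fall back to any allowed node. */
	if (!page)
		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL);
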
Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com Signed-off-by: Ben Widawsky --- mm/hugetlb.c | 20 +++++++++++++++++--- mm/mempolicy.c | 3 ++- 2 files changed, 19 insertions(+), 4 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index fe76f8fd5a73..d9acc25ed3b5 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1094,7 +1094,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h, unsigned long address, int avoid_reserve, long chg) { - struct page *page; + struct page *page = NULL; struct mempolicy *mpol; gfp_t gfp_mask; nodemask_t *nodemask; @@ -1115,7 +1115,14 @@ static struct page *dequeue_huge_page_vma(struct hstate *h, gfp_mask = htlb_alloc_mask(h); nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask); - page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask); + if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */ + page = dequeue_huge_page_nodemask(h, gfp_mask | __GFP_RETRY_MAYFAIL, + nid, nodemask); + if (!page) + page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL); + } else { + page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask); + } if (page && !avoid_reserve && vma_has_reserves(vma, chg)) { SetPagePrivate(page); h->resv_huge_pages--; @@ -1977,7 +1984,14 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h, nodemask_t *nodemask; nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask); - page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask); + if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */ + page = alloc_surplus_huge_page(h, gfp_mask | __GFP_RETRY_MAYFAIL, + nid, nodemask); + if (!page) + alloc_surplus_huge_page(h, gfp_mask, nid, NULL); + } else { + page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask); + } mpol_cond_put(mpol); return page; diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 343340c87f03..aab9ef698aa8 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2075,7 +2075,8 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags, huge_page_shift(hstate_vma(vma))); } else { nid = policy_node(gfp_flags, *mpol, numa_node_id()); - if ((*mpol)->mode == MPOL_BIND) + if ((*mpol)->mode == MPOL_BIND || + (*mpol)->mode == MPOL_PREFERRED_MANY) *nodemask = &(*mpol)->nodes; } return nid; From patchwork Fri Oct 30 19:02:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 11870683 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D8319697 for ; Fri, 30 Oct 2020 19:03:09 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 90D6D208B6 for ; Fri, 30 Oct 2020 19:03:09 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 90D6D208B6 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=intel.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 15DCC6B007D; Fri, 30 Oct 2020 15:02:55 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 0E8046B007E; Fri, 30 Oct 2020 15:02:54 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D3F2C6B0080; Fri, 30 Oct 2020 15:02:54 -0400 (EDT) 
X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0154.hostedemail.com [216.40.44.154]) by kanga.kvack.org (Postfix) with ESMTP id A35416B007E for ; Fri, 30 Oct 2020 15:02:54 -0400 (EDT) Received: from smtpin12.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 4B4C4180AD801 for ; Fri, 30 Oct 2020 19:02:54 +0000 (UTC) X-FDA: 77429513868.12.mist40_5f06a8f27298 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin12.hostedemail.com (Postfix) with ESMTP id 1DA061805629F for ; Fri, 30 Oct 2020 19:02:54 +0000 (UTC) X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,ben.widawsky@intel.com,,RULES_HIT:30003:30051:30054:30064:30070:30075:30090,0,RBL:134.134.136.20:@intel.com:.lbl8.mailshell.net-62.50.0.100 64.95.201.95;04y8dyn8gmkd8xwhp5bf5mwjmtd43yc9y5bnehknq55wpjktejptibw8x5wmb3e.u6x14w6k9b6k111zp7x6wf5e5omta7xx6zy48354j6487swybu6s8sg8gowxubw.s-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:1:0,LFtime:70,LUA_SUMMARY:none X-HE-Tag: mist40_5f06a8f27298 X-Filterd-Recvd-Size: 8506 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by imf36.hostedemail.com (Postfix) with ESMTP for ; Fri, 30 Oct 2020 19:02:53 +0000 (UTC) IronPort-SDR: V3TW5MudgJhvrkuX86ZYU3tN9QU8SIDZs2Au3uKwLgG7s24VRATl88/UW2lC3LFruNwS5barEI Ga8ED/T1T7+g== X-IronPort-AV: E=McAfee;i="6000,8403,9790"; a="155629130" X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="155629130" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:52 -0700 IronPort-SDR: 2EzgMc4crjn5qzbC34oumoE1oQS+ksxq4lt03rxhuDIFPECdjcfNlG0oNoJeG3E+pkqyjJvia8 FgbRmZ3NhjfA== X-IronPort-AV: E=Sophos;i="5.77,434,1596524400"; d="scan'208";a="537167717" Received: from kingelix-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.139.120]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 30 Oct 2020 12:02:52 -0700 From: Ben Widawsky To: linux-mm , Jonathan Corbet , Mike Kravetz , Andrew Morton Cc: Ben Widawsky , Dave Hansen , Michal Hocko , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 12/12] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY Date: Fri, 30 Oct 2020 12:02:38 -0700 Message-Id: <20201030190238.306764-13-ben.widawsky@intel.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com> References: <20201030190238.306764-1-ben.widawsky@intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Adds a new mode to the existing mempolicy modes, MPOL_PREFERRED_MANY. MPOL_PREFERRED_MANY will be adequately documented in the internal admin-guide with this patch. Eventually, the man pages for mbind(2), get_mempolicy(2), set_mempolicy(2) and numactl(8) will also have text about this mode. Those shall contain the canonical reference. NUMA systems continue to become more prevalent. New technologies like PMEM make finer grain control over memory access patterns increasingly desirable. MPOL_PREFERRED_MANY allows userspace to specify a set of nodes that will be tried first when performing allocations. 
If those allocations fail, all remaining nodes will be tried. It's a straight forward API which solves many of the presumptive needs of system administrators wanting to optimize workloads on such machines. The mode will work either per VMA, or per thread. Generally speaking, this is similar to the way MPOL_BIND works, except the user will only get a SIGSEGV if all nodes in the system are unable to satisfy the allocation request. Link: https://lore.kernel.org/r/20200630212517.308045-13-ben.widawsky@intel.com Signed-off-by: Ben Widawsky --- .../admin-guide/mm/numa_memory_policy.rst | 16 ++++++++++++---- include/uapi/linux/mempolicy.h | 6 +++--- mm/hugetlb.c | 4 ++-- mm/mempolicy.c | 14 ++++++-------- 4 files changed, 23 insertions(+), 17 deletions(-) diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst index 1ad020c459b8..b69963a37fc8 100644 --- a/Documentation/admin-guide/mm/numa_memory_policy.rst +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst @@ -245,6 +245,14 @@ MPOL_INTERLEAVED address range or file. During system boot up, the temporary interleaved system default policy works in this mode. +MPOL_PREFERRED_MANY + This mode specifies that the allocation should be attempted from the + nodemask specified in the policy. If that allocation fails, the kernel + will search other nodes, in order of increasing distance from the first + set bit in the nodemask based on information provided by the platform + firmware. It is similar to MPOL_PREFERRED with the main exception that + is is an error to have an empty nodemask. + NUMA memory policy supports the following optional mode flags: MPOL_F_STATIC_NODES @@ -253,10 +261,10 @@ MPOL_F_STATIC_NODES nodes changes after the memory policy has been defined. Without this flag, any time a mempolicy is rebound because of a - change in the set of allowed nodes, the node (Preferred) or - nodemask (Bind, Interleave) is remapped to the new set of - allowed nodes. This may result in nodes being used that were - previously undesired. + change in the set of allowed nodes, the preferred nodemask (Preferred + Many), preferred node (Preferred) or nodemask (Bind, Interleave) is + remapped to the new set of allowed nodes. This may result in nodes + being used that were previously undesired. 
With this flag, if the user-specified nodes overlap with the nodes allowed by the task's cpuset, then the memory policy is diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h index 3354774af61e..ad3eee651d4e 100644 --- a/include/uapi/linux/mempolicy.h +++ b/include/uapi/linux/mempolicy.h @@ -16,13 +16,13 @@ */ /* Policies */ -enum { - MPOL_DEFAULT, +enum { MPOL_DEFAULT, MPOL_PREFERRED, MPOL_BIND, MPOL_INTERLEAVE, MPOL_LOCAL, - MPOL_MAX, /* always last member of enum */ + MPOL_PREFERRED_MANY, + MPOL_MAX, /* always last member of enum */ }; /* Flags for set_mempolicy */ diff --git a/mm/hugetlb.c b/mm/hugetlb.c index d9acc25ed3b5..9539d0429706 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1115,7 +1115,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h, gfp_mask = htlb_alloc_mask(h); nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask); - if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */ + if (mpol->mode == MPOL_PREFERRED_MANY) { page = dequeue_huge_page_nodemask(h, gfp_mask | __GFP_RETRY_MAYFAIL, nid, nodemask); if (!page) @@ -1984,7 +1984,7 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h, nodemask_t *nodemask; nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask); - if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */ + if (mpol->mode != MPOL_PREFERRED_MANY) { page = alloc_surplus_huge_page(h, gfp_mask | __GFP_RETRY_MAYFAIL, nid, nodemask); if (!page) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index aab9ef698aa8..038c0432ec32 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -108,8 +108,6 @@ #include "internal.h" -#define MPOL_PREFERRED_MANY MPOL_MAX - /* Internal flags */ #define MPOL_MF_DISCONTIG_OK (MPOL_MF_INTERNAL << 0) /* Skip checks for continuous vmas */ #define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */ @@ -180,7 +178,7 @@ struct mempolicy *get_task_policy(struct task_struct *p) static const struct mempolicy_operations { int (*create)(struct mempolicy *pol, const nodemask_t *nodes); void (*rebind)(struct mempolicy *pol, const nodemask_t *nodes); -} mpol_ops[MPOL_MAX + 1]; +} mpol_ops[MPOL_MAX]; static inline int mpol_store_user_nodemask(const struct mempolicy *pol) { @@ -385,8 +383,8 @@ static void mpol_rebind_preferred_common(struct mempolicy *pol, } /* MPOL_PREFERRED_MANY allows multiple nodes to be set in 'nodes' */ -static void __maybe_unused mpol_rebind_preferred_many(struct mempolicy *pol, - const nodemask_t *nodes) +static void mpol_rebind_preferred_many(struct mempolicy *pol, + const nodemask_t *nodes) { mpol_rebind_preferred_common(pol, nodes, nodes); } @@ -448,7 +446,7 @@ void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new) mmap_write_unlock(mm); } -static const struct mempolicy_operations mpol_ops[MPOL_MAX + 1] = { +static const struct mempolicy_operations mpol_ops[MPOL_MAX] = { [MPOL_DEFAULT] = { .rebind = mpol_rebind_default, }, @@ -466,8 +464,8 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX + 1] = { }, /* [MPOL_LOCAL] - see mpol_new() */ [MPOL_PREFERRED_MANY] = { - .create = NULL, - .rebind = NULL, + .create = mpol_new_preferred_many, + .rebind = mpol_rebind_preferred_many, }, };
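
With the mode exported, the per-VMA flavor mentioned above is reachable through mbind(2) as well. A hypothetical usage sketch (again relying on libnuma's <numaif.h> wrapper; the MPOL_PREFERRED_MANY value comes from the enum added in this patch):

	#include <numaif.h>		/* mbind() wrapper from libnuma */
	#include <sys/mman.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#ifndef MPOL_PREFERRED_MANY
	#define MPOL_PREFERRED_MANY 5	/* value per the enum added by this patch */
	#endif

	int main(void)
	{
		size_t len = 64UL << 20;
		unsigned long nodemask = (1UL << 0) | (1UL << 1);	/* prefer nodes 0 and 1 */

		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return EXIT_FAILURE;
		}

		/* Attach a preferred-many policy to this VMA; faults try nodes 0 and 1 first. */
		if (mbind(buf, len, MPOL_PREFERRED_MANY, &nodemask,
			  sizeof(nodemask) * 8 + 1, 0)) {
			perror("mbind");
			return EXIT_FAILURE;
		}

		memset(buf, 0, len);	/* fault pages in under the new policy */
		return EXIT_SUCCESS;
	}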