From patchwork Mon Dec 18 19:46:21 2023
X-Patchwork-Submitter: Gregory Price
X-Patchwork-Id: 13497511
From: Gregory Price
To: linux-mm@kvack.org
Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org,
    akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de,
    luto@kernel.org, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org,
    tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com,
    corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com,
    honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org,
    jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com,
    emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com
Subject: [PATCH v4 01/11] mm/mempolicy: implement the sysfs-based weighted_interleave interface
Date: Mon, 18 Dec 2023 14:46:21 -0500
Message-Id: <20231218194631.21667-2-gregory.price@memverge.com>
In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com>
References: <20231218194631.21667-1-gregory.price@memverge.com>

From: Rakie Kim

This patch provides a way to set interleave weight information under
sysfs at /sys/kernel/mm/mempolicy/weighted_interleave/nodeN

The sysfs structure is designed as follows.

  $ tree /sys/kernel/mm/mempolicy/
  /sys/kernel/mm/mempolicy/ [1]
  └── weighted_interleave [2]
      ├── node0 [3]
      └── node1

Each file above can be explained as follows.

[1] mm/mempolicy: configuration interface for mempolicy subsystem
[2] weighted_interleave/: config interface for weighted interleave policy
[3] weighted_interleave/nodeN: weight for nodeN

If sysfs is disabled in the config, the global interleave weights
will default to "1" for all nodes.
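As a minimal usage sketch (not part of this patch), a node's weight can be
set by writing to its file; the node number and weight below are arbitrary
examples, and the shell equivalent is simply
`echo 2 > /sys/kernel/mm/mempolicy/weighted_interleave/node1`.

/*
 * Illustrative only: set node1's global interleave weight to 2 through
 * the new sysfs file.  Valid weights per the ABI document are 1..255.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/kernel/mm/mempolicy/weighted_interleave/node1";
	const char *weight = "2";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, weight, strlen(weight)) < 0)
		perror("write");
	close(fd);
	return 0;
}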
Signed-off-by: Rakie Kim Signed-off-by: Honggyu Kim Co-developed-by: Gregory Price Signed-off-by: Gregory Price Co-developed-by: Hyeongtak Ji Signed-off-by: Hyeongtak Ji --- .../ABI/testing/sysfs-kernel-mm-mempolicy | 4 + ...fs-kernel-mm-mempolicy-weighted-interleave | 22 +++ mm/mempolicy.c | 156 ++++++++++++++++++ 3 files changed, 182 insertions(+) create mode 100644 Documentation/ABI/testing/sysfs-kernel-mm-mempolicy create mode 100644 Documentation/ABI/testing/sysfs-kernel-mm-mempolicy-weighted-interleave diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-mempolicy b/Documentation/ABI/testing/sysfs-kernel-mm-mempolicy new file mode 100644 index 000000000000..2dcf24f4384a --- /dev/null +++ b/Documentation/ABI/testing/sysfs-kernel-mm-mempolicy @@ -0,0 +1,4 @@ +What: /sys/kernel/mm/mempolicy/ +Date: December 2023 +Contact: Linux memory management mailing list +Description: Interface for Mempolicy diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-mempolicy-weighted-interleave b/Documentation/ABI/testing/sysfs-kernel-mm-mempolicy-weighted-interleave new file mode 100644 index 000000000000..aa27fdf08c19 --- /dev/null +++ b/Documentation/ABI/testing/sysfs-kernel-mm-mempolicy-weighted-interleave @@ -0,0 +1,22 @@ +What: /sys/kernel/mm/mempolicy/weighted_interleave/ +Date: December 2023 +Contact: Linux memory management mailing list +Description: Configuration Interface for the Weighted Interleave policy + +What: /sys/kernel/mm/mempolicy/weighted_interleave/nodeN +Date: December 2023 +Contact: Linux memory management mailing list +Description: Weight configuration interface for nodeN + + The interleave weight for a memory node (N). These weights are + utilized by processes which have set their mempolicy to + MPOL_WEIGHTED_INTERLEAVE and have opted into global weights by + omitting a task-local weight array. + + These weights only affect new allocations, and changes at runtime + will not cause migrations on already allocated pages. + + Writing an empty string resets the weight value to 1. 
+ + Minimum weight: 1 + Maximum weight: 255 diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 10a590ee1c89..0e77633b07a5 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -131,6 +131,8 @@ static struct mempolicy default_policy = { static struct mempolicy preferred_node_policy[MAX_NUMNODES]; +static char iw_table[MAX_NUMNODES]; + /** * numa_nearest_node - Find nearest node by state * @node: Node id to start the search @@ -3067,3 +3069,157 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol) p += scnprintf(p, buffer + maxlen - p, ":%*pbl", nodemask_pr_args(&nodes)); } + +#ifdef CONFIG_SYSFS +struct iw_node_attr { + struct kobj_attribute kobj_attr; + int nid; +}; + +static ssize_t node_show(struct kobject *kobj, struct kobj_attribute *attr, + char *buf) +{ + struct iw_node_attr *node_attr; + + node_attr = container_of(attr, struct iw_node_attr, kobj_attr); + return sysfs_emit(buf, "%d\n", iw_table[node_attr->nid]); +} + +static ssize_t node_store(struct kobject *kobj, struct kobj_attribute *attr, + const char *buf, size_t count) +{ + struct iw_node_attr *node_attr; + unsigned char weight = 0; + + node_attr = container_of(attr, struct iw_node_attr, kobj_attr); + /* If no input, set default weight to 1 */ + if (count == 0 || sysfs_streq(buf, "")) + weight = 1; + else if (kstrtou8(buf, 0, &weight) || !weight) + return -EINVAL; + + iw_table[node_attr->nid] = weight; + return count; +} + +static struct iw_node_attr *node_attrs[MAX_NUMNODES]; + +static void sysfs_wi_node_release(struct iw_node_attr *node_attr, + struct kobject *parent) +{ + if (!node_attr) + return; + sysfs_remove_file(parent, &node_attr->kobj_attr.attr); + kfree(node_attr->kobj_attr.attr.name); + kfree(node_attr); +} + +static void sysfs_mempolicy_release(struct kobject *mempolicy_kobj) +{ + int i; + + for (i = 0; i < MAX_NUMNODES; i++) + sysfs_wi_node_release(node_attrs[i], mempolicy_kobj); + kobject_put(mempolicy_kobj); +} + +static const struct kobj_type mempolicy_ktype = { + .sysfs_ops = &kobj_sysfs_ops, + .release = sysfs_mempolicy_release, +}; + +static int add_weight_node(int nid, struct kobject *wi_kobj) +{ + struct iw_node_attr *node_attr; + char *name; + + node_attr = kzalloc(sizeof(*node_attr), GFP_KERNEL); + if (!node_attr) + return -ENOMEM; + + name = kasprintf(GFP_KERNEL, "node%d", nid); + if (!name) { + kfree(node_attr); + return -ENOMEM; + } + + sysfs_attr_init(&node_attr->kobj_attr.attr); + node_attr->kobj_attr.attr.name = name; + node_attr->kobj_attr.attr.mode = 0644; + node_attr->kobj_attr.show = node_show; + node_attr->kobj_attr.store = node_store; + node_attr->nid = nid; + + if (sysfs_create_file(wi_kobj, &node_attr->kobj_attr.attr)) { + kfree(node_attr->kobj_attr.attr.name); + kfree(node_attr); + pr_err("failed to add attribute to weighted_interleave\n"); + return -ENOMEM; + } + + node_attrs[nid] = node_attr; + return 0; +} + +static int add_weighted_interleave_group(struct kobject *root_kobj) +{ + struct kobject *wi_kobj; + int nid, err; + + wi_kobj = kzalloc(sizeof(struct kobject), GFP_KERNEL); + if (!wi_kobj) + return -ENOMEM; + + err = kobject_init_and_add(wi_kobj, &mempolicy_ktype, root_kobj, + "weighted_interleave"); + if (err) { + kfree(wi_kobj); + return err; + } + + memset(node_attrs, 0, sizeof(node_attrs)); + for_each_node_state(nid, N_POSSIBLE) { + err = add_weight_node(nid, wi_kobj); + if (err) { + pr_err("failed to add sysfs [node%d]\n", nid); + break; + } + } + if (err) + kobject_put(wi_kobj); + return 0; +} + +static int __init mempolicy_sysfs_init(void) +{ + int err; + 
struct kobject *root_kobj; + + memset(&iw_table, 1, sizeof(iw_table)); + + root_kobj = kobject_create_and_add("mempolicy", mm_kobj); + if (!root_kobj) { + pr_err("failed to add mempolicy kobject to the system\n"); + return -ENOMEM; + } + + err = add_weighted_interleave_group(root_kobj); + + if (err) + kobject_put(root_kobj); + return err; + +} +#else +static int __init mempolicy_sysfs_init(void) +{ + /* + * if sysfs is not enabled MPOL_WEIGHTED_INTERLEAVE defaults to + * MPOL_INTERLEAVE behavior, but is still defined separately to + * allow task-local weighted interleave to operate as intended. + */ + memset(&iw_table, 1, sizeof(iw_table)); + return 0; +} +#endif /* CONFIG_SYSFS */ +late_initcall(mempolicy_sysfs_init); From patchwork Mon Dec 18 19:46:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Price X-Patchwork-Id: 13497512 Received: from mail-pl1-f194.google.com (mail-pl1-f194.google.com [209.85.214.194]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4C6D77348B; Mon, 18 Dec 2023 19:46:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Ij3xcmd5" Received: by mail-pl1-f194.google.com with SMTP id d9443c01a7336-1d3aa0321b5so16026175ad.2; Mon, 18 Dec 2023 11:46:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1702928808; x=1703533608; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=MdZLBs+oiNG0zm3w3BPfGZMs3PYCh1bPB4eoTV769Q8=; b=Ij3xcmd54b1wFr2ye2H9VV3udtc8vqFU4lbVTCknTxJ5IzdISZSV+lvTfliRBnplXq AKTbgQ+sbr2+Tyvl5QP2peaz5+G5ACu02lD+kY2YiIw1wVGNsipPMgYlaesc3qTRrsbR 7JOF8uAajLVVFhipk3NleK4bACNld3eDc6dVz+a9Ch10fS70py4mwqEDU3SHGxxE9Emc mlU1n1cnbKwyZ5+V7+/l4+gNVT571UP3P1aKm/vHA26ClEBDE38ihx/7WLYB7Dta0P5b u75RJscZBXDETj+uOlYqpKLZkSejyb0kSmwt5NqIEEYxEKj2/Ir1hWSCKRmnb5TqvIDs o/Rg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702928808; x=1703533608; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=MdZLBs+oiNG0zm3w3BPfGZMs3PYCh1bPB4eoTV769Q8=; b=DSVlKpibgTzOZ2jVWRG7vAV1KrERVHjz50U3ylA87eHX/tewHjsswGBJ1s/fJcmKzf 0FPnGt/rEevREWHGRbcEWsxfkntSjtv33WSosufo4q5bB2PiruwxSrdZ2ciPRTyfPygT 4xcroT52eha3/45jRZOJ+jlIx+KSVjUCjHv6gcAMoxBy72MwrufYnGBdtPOLi5+iDUaT xFlRoQuA9XjLfWw3Pc45rzu1hsMhF0Yi8Dnxh0y6LcXJUHzYRLdty+N4/fcgMLY6BPSw y9cQMWJPugnUfteoxHyYxnk3i61zV2HO8ETM0/dTGAEfh34li1fCFW3+D1OGEBJjCNG3 Yywg== X-Gm-Message-State: AOJu0Yw/LTEfITb95q1x9ECOyZrO7o1PRrL2/S4vPyM5Rhq/YAk6UIhu B5v78OpESReXq0kugI7THw== X-Google-Smtp-Source: AGHT+IF7v+oOVMnZxJ8IgXybCSv1BekTITUegMFjixUi3Vek5p1bKLkiE4G3FvaJHstKoK466e9cqQ== X-Received: by 2002:a17:903:2312:b0:1cf:d620:c641 with SMTP id d18-20020a170903231200b001cfd620c641mr20710754plh.22.1702928808506; Mon, 18 Dec 2023 11:46:48 -0800 (PST) Received: from fedora.mshome.net (pool-173-79-56-208.washdc.fios.verizon.net. 
From: Gregory Price
To: linux-mm@kvack.org
Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org,
    akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de,
    luto@kernel.org, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org,
    tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com,
    corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com,
    honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org,
    jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com,
    emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com,
    Srinivasulu Thanneeru
Subject: [PATCH v4 02/11] mm/mempolicy: introduce MPOL_WEIGHTED_INTERLEAVE for weighted interleaving
Date: Mon, 18 Dec 2023 14:46:22 -0500
Message-Id: <20231218194631.21667-3-gregory.price@memverge.com>
In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com>
References: <20231218194631.21667-1-gregory.price@memverge.com>

When a system has multiple NUMA nodes and becomes bandwidth hungry, the
current MPOL_INTERLEAVE can be a wise option. However, if those NUMA
nodes consist of different types of memory, such as local DRAM and CXL
memory together, the current round-robin based interleave policy does
not maximize overall bandwidth because the nodes have different
bandwidth characteristics. Interleaving is more efficient when the
allocation policy follows each NUMA node's bandwidth weight rather than
using a 1:1 round-robin allocation.

This patch introduces a new memory policy, MPOL_WEIGHTED_INTERLEAVE,
which enables weighted interleaving between NUMA nodes. Weighted
interleave allows for a proportional distribution of memory across
multiple NUMA nodes, preferably apportioned to match the bandwidth
capacity of each node from the perspective of the accessing node. For
example, if a system has 1 CPU node (0) and 2 memory nodes (0,1) with a
relative bandwidth of (100GB/s, 50GB/s) respectively, the appropriate
weight distribution is (2:1).

Weights will be acquired from the global weight array exposed by the
sysfs extension: /sys/kernel/mm/mempolicy/weighted_interleave/

The policy then allocates pages according to the set weights. For
example, if the weights are (2,1), then 2 pages will be allocated on
node0 for every 1 page allocated on node1.

The new flag MPOL_WEIGHTED_INTERLEAVE can be used in set_mempolicy(2)
and mbind(2).

There are 3 integration points:

weighted_interleave_nodes:
    Counts the number of allocations as they occur, and applies the
    weight for the current node. When the weight reaches 0, switch to
    the next node. Applied by `mempolicy_slab_node()` and
    `policy_nodemask()`.

weighted_interleave_nid:
    Gets the total weight of the nodemask as well as each individual
    node weight, then calculates the node based on the given index.
    Applied by `policy_nodemask()` and `mpol_misplaced()`.

bulk_array_weighted_interleave:
    Gets the total weight of the nodemask as well as each individual
    node weight, then calculates the number of "interleave rounds" as
    well as any delta ("partial round"). Calculates the number of
    pages for each node and allocates them.

    If a node was scheduled for interleave via interleave_nodes, the
    current weight (pol->cur_weight) will be allocated first, before
    the remaining bulk calculation is done. This simplifies the
    calculation at the cost of an additional allocation call.

One piece of complexity is the interaction with a recent refactor that
split the logic which acquires the "ilx" (interleave index) of an
allocation from the actual application of the interleave. The
interleave index is calculated by `get_vma_policy()`, while the actual
node selection is applied later by the relevant weighted_interleave
function.

If CONFIG_SYSFS is disabled, the weight table will be initialized to
set all nodes to weight 1, but the weighting code is still called.
This is so that task-local weights (a future patch) can still be
engaged cleanly without ifdef spaghetti.

Suggested-by: Hasan Al Maruf
Signed-off-by: Gregory Price
Co-developed-by: Rakie Kim
Signed-off-by: Rakie Kim
Co-developed-by: Honggyu Kim
Signed-off-by: Honggyu Kim
Co-developed-by: Hyeongtak Ji
Signed-off-by: Hyeongtak Ji
Co-developed-by: Srinivasulu Thanneeru
Signed-off-by: Srinivasulu Thanneeru
Co-developed-by: Ravi Jonnalagadda
Signed-off-by: Ravi Jonnalagadda
---
 .../admin-guide/mm/numa_memory_policy.rst |  11 +
 include/linux/mempolicy.h                 |   5 +
 include/uapi/linux/mempolicy.h            |   1 +
 mm/mempolicy.c                            | 197 +++++++++++++++++-
 4 files changed, 211 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
index eca38fa81e0f..d2c8e712785b 100644
--- a/Documentation/admin-guide/mm/numa_memory_policy.rst
+++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
@@ -250,6 +250,17 @@ MPOL_PREFERRED_MANY
 	can fall back to all existing numa nodes. This is effectively
 	MPOL_PREFERRED allowed for a mask rather than a single node.
 
+MPOL_WEIGHTED_INTERLEAVE
+	This mode operates the same as MPOL_INTERLEAVE, except that
+	interleaving behavior is executed based on weights set in
+	/sys/kernel/mm/mempolicy/weighted_interleave/
+
+	Weighted interleave allocates pages on nodes according to
+	their weight. For example, if nodes [0,1] are weighted [5,2]
+	respectively, 5 pages will be allocated on node0 for every
+	2 pages allocated on node1. This can better distribute data
+	according to bandwidth on heterogeneous memory systems.
+ NUMA memory policy supports the following optional mode flags: MPOL_F_STATIC_NODES diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index 931b118336f4..ba09167e80f7 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -54,6 +54,11 @@ struct mempolicy { nodemask_t cpuset_mems_allowed; /* relative to these nodes */ nodemask_t user_nodemask; /* nodemask passed by user */ } w; + + /* Weighted interleave settings */ + struct { + unsigned char cur_weight; + } wil; }; /* diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h index a8963f7ef4c2..1f9bb10d1a47 100644 --- a/include/uapi/linux/mempolicy.h +++ b/include/uapi/linux/mempolicy.h @@ -23,6 +23,7 @@ enum { MPOL_INTERLEAVE, MPOL_LOCAL, MPOL_PREFERRED_MANY, + MPOL_WEIGHTED_INTERLEAVE, MPOL_MAX, /* always last member of enum */ }; diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 0e77633b07a5..0a180c670f0c 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -305,6 +305,7 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags, policy->mode = mode; policy->flags = flags; policy->home_node = NUMA_NO_NODE; + policy->wil.cur_weight = 0; return policy; } @@ -417,6 +418,10 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = { .create = mpol_new_nodemask, .rebind = mpol_rebind_preferred, }, + [MPOL_WEIGHTED_INTERLEAVE] = { + .create = mpol_new_nodemask, + .rebind = mpol_rebind_nodemask, + }, }; static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist, @@ -838,7 +843,8 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags, old = current->mempolicy; current->mempolicy = new; - if (new && new->mode == MPOL_INTERLEAVE) + if (new && (new->mode == MPOL_INTERLEAVE || + new->mode == MPOL_WEIGHTED_INTERLEAVE)) current->il_prev = MAX_NUMNODES-1; task_unlock(current); mpol_put(old); @@ -864,6 +870,7 @@ static void get_policy_nodemask(struct mempolicy *pol, nodemask_t *nodes) case MPOL_INTERLEAVE: case MPOL_PREFERRED: case MPOL_PREFERRED_MANY: + case MPOL_WEIGHTED_INTERLEAVE: *nodes = pol->nodes; break; case MPOL_LOCAL: @@ -948,6 +955,13 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask, } else if (pol == current->mempolicy && pol->mode == MPOL_INTERLEAVE) { *policy = next_node_in(current->il_prev, pol->nodes); + } else if (pol == current->mempolicy && + (pol->mode == MPOL_WEIGHTED_INTERLEAVE)) { + if (pol->wil.cur_weight) + *policy = current->il_prev; + else + *policy = next_node_in(current->il_prev, + pol->nodes); } else { err = -EINVAL; goto out; @@ -1777,7 +1791,8 @@ struct mempolicy *get_vma_policy(struct vm_area_struct *vma, pol = __get_vma_policy(vma, addr, ilx); if (!pol) pol = get_task_policy(current); - if (pol->mode == MPOL_INTERLEAVE) { + if (pol->mode == MPOL_INTERLEAVE || + pol->mode == MPOL_WEIGHTED_INTERLEAVE) { *ilx += vma->vm_pgoff >> order; *ilx += (addr - vma->vm_start) >> (PAGE_SHIFT + order); } @@ -1827,6 +1842,24 @@ bool apply_policy_zone(struct mempolicy *policy, enum zone_type zone) return zone >= dynamic_policy_zone; } +static unsigned int weighted_interleave_nodes(struct mempolicy *policy) +{ + unsigned int next; + struct task_struct *me = current; + + next = next_node_in(me->il_prev, policy->nodes); + if (next == MAX_NUMNODES) + return next; + + if (!policy->wil.cur_weight) + policy->wil.cur_weight = iw_table[next]; + + policy->wil.cur_weight--; + if (!policy->wil.cur_weight) + me->il_prev = next; + return next; +} + /* Do dynamic interleaving for a process */ static unsigned int 
interleave_nodes(struct mempolicy *policy) { @@ -1861,6 +1894,9 @@ unsigned int mempolicy_slab_node(void) case MPOL_INTERLEAVE: return interleave_nodes(policy); + case MPOL_WEIGHTED_INTERLEAVE: + return weighted_interleave_nodes(policy); + case MPOL_BIND: case MPOL_PREFERRED_MANY: { @@ -1885,6 +1921,41 @@ unsigned int mempolicy_slab_node(void) } } +static unsigned int weighted_interleave_nid(struct mempolicy *pol, pgoff_t ilx) +{ + nodemask_t nodemask = pol->nodes; + unsigned int target, weight_total = 0; + int nid; + unsigned char weights[MAX_NUMNODES]; + unsigned char weight; + + barrier(); + + /* first ensure we have a valid nodemask */ + nid = first_node(nodemask); + if (nid == MAX_NUMNODES) + return nid; + + /* Then collect weights on stack and calculate totals */ + for_each_node_mask(nid, nodemask) { + weight = iw_table[nid]; + weight_total += weight; + weights[nid] = weight; + } + + /* Finally, calculate the node offset based on totals */ + target = (unsigned int)ilx % weight_total; + nid = first_node(nodemask); + while (target) { + weight = weights[nid]; + if (target < weight) + break; + target -= weight; + nid = next_node_in(nid, nodemask); + } + return nid; +} + /* * Do static interleaving for interleave index @ilx. Returns the ilx'th * node in pol->nodes (starting from ilx=0), wrapping around if ilx @@ -1953,6 +2024,11 @@ static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol, *nid = (ilx == NO_INTERLEAVE_INDEX) ? interleave_nodes(pol) : interleave_nid(pol, ilx); break; + case MPOL_WEIGHTED_INTERLEAVE: + *nid = (ilx == NO_INTERLEAVE_INDEX) ? + weighted_interleave_nodes(pol) : + weighted_interleave_nid(pol, ilx); + break; } return nodemask; @@ -2014,6 +2090,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask) case MPOL_PREFERRED_MANY: case MPOL_BIND: case MPOL_INTERLEAVE: + case MPOL_WEIGHTED_INTERLEAVE: *mask = mempolicy->nodes; break; @@ -2113,7 +2190,8 @@ struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order, * If the policy is interleave or does not allow the current * node in its nodemask, we allocate the standard way. 
*/ - if (pol->mode != MPOL_INTERLEAVE && + if ((pol->mode != MPOL_INTERLEAVE && + pol->mode != MPOL_WEIGHTED_INTERLEAVE) && (!nodemask || node_isset(nid, *nodemask))) { /* * First, try to allocate THP only on local node, but @@ -2249,6 +2327,106 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp, return total_allocated; } +static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp, + struct mempolicy *pol, unsigned long nr_pages, + struct page **page_array) +{ + struct task_struct *me = current; + unsigned long total_allocated = 0; + unsigned long nr_allocated; + unsigned long rounds; + unsigned long node_pages, delta; + unsigned char weight; + unsigned char weights[MAX_NUMNODES]; + unsigned int weight_total = 0; + unsigned long rem_pages = nr_pages; + nodemask_t nodes = pol->nodes; + int nnodes, node, prev_node; + int i; + + /* Stabilize the nodemask on the stack */ + barrier(); + + nnodes = nodes_weight(nodes); + + /* Collect weights and save them on stack so they don't change */ + for_each_node_mask(node, nodes) { + weight = iw_table[node]; + weight_total += weight; + weights[node] = weight; + } + + /* Continue allocating from most recent node and adjust the nr_pages */ + if (pol->wil.cur_weight) { + node = next_node_in(me->il_prev, nodes); + node_pages = pol->wil.cur_weight; + if (node_pages > rem_pages) + node_pages = rem_pages; + nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages, + NULL, page_array); + page_array += nr_allocated; + total_allocated += nr_allocated; + /* if that's all the pages, no need to interleave */ + if (rem_pages <= pol->wil.cur_weight) { + pol->wil.cur_weight -= rem_pages; + return total_allocated; + } + /* Otherwise we adjust nr_pages down, and continue from there */ + rem_pages -= pol->wil.cur_weight; + pol->wil.cur_weight = 0; + prev_node = node; + } + + /* Now we can continue allocating as if from 0 instead of an offset */ + rounds = rem_pages / weight_total; + delta = rem_pages % weight_total; + for (i = 0; i < nnodes; i++) { + node = next_node_in(prev_node, nodes); + weight = weights[node]; + node_pages = weight * rounds; + if (delta) { + if (delta > weight) { + node_pages += weight; + delta -= weight; + } else { + node_pages += delta; + delta = 0; + } + } + /* We may not make it all the way around */ + if (!node_pages) + break; + /* If an over-allocation would occur, floor it */ + if (node_pages + total_allocated > nr_pages) { + node_pages = nr_pages - total_allocated; + delta = 0; + } + nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages, + NULL, page_array); + page_array += nr_allocated; + total_allocated += nr_allocated; + prev_node = node; + } + + /* + * Finally, we need to update me->il_prev and pol->wil.cur_weight + * if there were overflow pages, but not equivalent to the node + * weight, set the cur_weight to node_weight - delta and the + * me->il_prev to the previous node. 
Otherwise if it was perfect + * we can simply set il_prev to node and cur_weight to 0 + */ + if (node_pages) { + me->il_prev = prev_node; + node_pages %= weight; + pol->wil.cur_weight = weight - node_pages; + } else { + me->il_prev = node; + pol->wil.cur_weight = 0; + } + + return total_allocated; +} + static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid, struct mempolicy *pol, unsigned long nr_pages, struct page **page_array) @@ -2289,6 +2467,11 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp, return alloc_pages_bulk_array_interleave(gfp, pol, nr_pages, page_array); + if (pol->mode == MPOL_WEIGHTED_INTERLEAVE) + return alloc_pages_bulk_array_weighted_interleave(gfp, pol, + nr_pages, + page_array); + if (pol->mode == MPOL_PREFERRED_MANY) return alloc_pages_bulk_array_preferred_many(gfp, numa_node_id(), pol, nr_pages, page_array); @@ -2364,6 +2547,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b) case MPOL_INTERLEAVE: case MPOL_PREFERRED: case MPOL_PREFERRED_MANY: + case MPOL_WEIGHTED_INTERLEAVE: return !!nodes_equal(a->nodes, b->nodes); case MPOL_LOCAL: return true; @@ -2500,6 +2684,10 @@ int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma, polnid = interleave_nid(pol, ilx); break; + case MPOL_WEIGHTED_INTERLEAVE: + polnid = weighted_interleave_nid(pol, ilx); + break; + case MPOL_PREFERRED: if (node_isset(curnid, pol->nodes)) goto out; @@ -2874,6 +3062,7 @@ static const char * const policy_modes[] = [MPOL_PREFERRED] = "prefer", [MPOL_BIND] = "bind", [MPOL_INTERLEAVE] = "interleave", + [MPOL_WEIGHTED_INTERLEAVE] = "weighted interleave", [MPOL_LOCAL] = "local", [MPOL_PREFERRED_MANY] = "prefer (many)", }; @@ -2933,6 +3122,7 @@ int mpol_parse_str(char *str, struct mempolicy **mpol) } break; case MPOL_INTERLEAVE: + case MPOL_WEIGHTED_INTERLEAVE: /* * Default to online nodes with memory if no nodelist */ @@ -3043,6 +3233,7 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol) case MPOL_PREFERRED_MANY: case MPOL_BIND: case MPOL_INTERLEAVE: + case MPOL_WEIGHTED_INTERLEAVE: nodes = pol->nodes; break; default: From patchwork Mon Dec 18 19:46:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Price X-Patchwork-Id: 13497513 Received: from mail-pl1-f195.google.com (mail-pl1-f195.google.com [209.85.214.195]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E7B6174097; Mon, 18 Dec 2023 19:46:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Sgi3YmB2" Received: by mail-pl1-f195.google.com with SMTP id d9443c01a7336-1d337dc9697so29726495ad.3; Mon, 18 Dec 2023 11:46:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1702928813; x=1703533613; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=F7pmxfeZqglD0vnlUxD8yjybFz/b8pMpATnV83Z/Zpk=; b=Sgi3YmB2444jz6igsZh+B+nmDuNskONPZ1QrqAdHKQXq9z/4cS4hmBcpVtEutV1ZJl GFIc9YnbZzgiVcolrvJiGBcBEyvW0Da/aqGJZEZVSsNXN0lWgJu2t2JPubknKescAiUW 
From: Gregory Price
To: linux-mm@kvack.org
Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org,
    akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de,
    luto@kernel.org, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org,
    tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com,
    corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com,
    honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org,
    jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com,
    emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com
Subject: [PATCH v4 03/11] mm/mempolicy: refactor sanitize_mpol_flags for reuse
Date: Mon, 18 Dec 2023 14:46:23 -0500
Message-Id: <20231218194631.21667-4-gregory.price@memverge.com>
In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com>
References: <20231218194631.21667-1-gregory.price@memverge.com>

Split sanitize_mpol_flags into sanitize and validate steps. The
sanitize step is used by set_mempolicy to split the combined (int mode)
argument into mode and mode_flags, after which it validates them. The
validate step checks flags that have already been split out, and will
be reused by new syscalls that accept mode and mode_flags separately.

Signed-off-by: Gregory Price
---
 mm/mempolicy.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0a180c670f0c..59ac0da24f56 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1463,24 +1463,39 @@ static int copy_nodes_to_user(unsigned long __user *mask, unsigned long maxnode,
 	return copy_to_user(mask, nodes_addr(*nodes), copy) ?
-EFAULT : 0; } -/* Basic parameter sanity check used by both mbind() and set_mempolicy() */ -static inline int sanitize_mpol_flags(int *mode, unsigned short *flags) +/* + * Basic parameter sanity check used by mbind/set_mempolicy + * May modify flags to include internal flags (e.g. MPOL_F_MOF/F_MORON) + */ +static inline int validate_mpol_flags(unsigned short mode, unsigned short *flags) { - *flags = *mode & MPOL_MODE_FLAGS; - *mode &= ~MPOL_MODE_FLAGS; - - if ((unsigned int)(*mode) >= MPOL_MAX) + if ((unsigned int)(mode) >= MPOL_MAX) return -EINVAL; if ((*flags & MPOL_F_STATIC_NODES) && (*flags & MPOL_F_RELATIVE_NODES)) return -EINVAL; if (*flags & MPOL_F_NUMA_BALANCING) { - if (*mode != MPOL_BIND) + if (mode != MPOL_BIND) return -EINVAL; *flags |= (MPOL_F_MOF | MPOL_F_MORON); } return 0; } +/* + * Used by mbind/set_memplicy to split and validate mode/flags + * set_mempolicy combines (mode | flags), split them out into separate + * fields and return just the mode in mode_arg and flags in flags. + */ +static inline int sanitize_mpol_flags(int *mode_arg, unsigned short *flags) +{ + unsigned short mode = (*mode_arg & ~MPOL_MODE_FLAGS); + + *flags = *mode_arg & MPOL_MODE_FLAGS; + *mode_arg = mode; + + return validate_mpol_flags(mode, flags); +} + static long kernel_mbind(unsigned long start, unsigned long len, unsigned long mode, const unsigned long __user *nmask, unsigned long maxnode, unsigned int flags) From patchwork Mon Dec 18 19:46:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Price X-Patchwork-Id: 13497514 Received: from mail-pl1-f194.google.com (mail-pl1-f194.google.com [209.85.214.194]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0835674E03; Mon, 18 Dec 2023 19:46:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="XcXwJD+U" Received: by mail-pl1-f194.google.com with SMTP id d9443c01a7336-1d075392ff6so25377655ad.1; Mon, 18 Dec 2023 11:46:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1702928818; x=1703533618; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=YBA3ePY5uekMa/KMqKhaSSKP12ZWhQtrggaPxDH2dag=; b=XcXwJD+UBgaomo83KVIkSMdLcWD+ViC9rfA9QWF1Lw5TKJDgPAPFhnJxHI3XQiK3vz bEP8G94cEFufcTuvsg2AqFAH5sbLLvTcDclho2Gm00U34njFhn3vLJ4DMJVbpVgPHD5k SNf7QH/HldIdYAk2wyQYvzAoocQGPe/lxGswYrD+JTnRXluzb531sTRC3Dxax7DANluv dbg7vFamCIL3WboEY4aGgGovvqjmbOnYj2zYK6RrMKlZfuOqmyPCWIU/kbF/JmzNaLcW Uc5g48XQCnzHWRnuQK9+xkbO/ZuAEwGoqhLEKqw4O/tD6MqOFNPGNVYeKbJ/NhIrQaSN L4RQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702928818; x=1703533618; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=YBA3ePY5uekMa/KMqKhaSSKP12ZWhQtrggaPxDH2dag=; b=BOlIUGeisQpUk2RF7kJhTAyuu7GiERWPOBBEm6+c63vvhpxMAtKE3A8rVKbFK6kWc+ Zx0edD56htLIHitiizCh/6j3lXEEdPW6c5tslk2M5XA1Rd/XbDFM5bTLuLFs/w1dlR7n 
From: Gregory Price
To: linux-mm@kvack.org
Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org,
    akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de,
    luto@kernel.org, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org,
    tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com,
    corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com,
    honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org,
    jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com,
    emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com
Subject: [PATCH v4 04/11] mm/mempolicy: create struct mempolicy_args for creating new mempolicies
Date: Mon, 18 Dec 2023 14:46:24 -0500
Message-Id: <20231218194631.21667-5-gregory.price@memverge.com>
In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com>
References: <20231218194631.21667-1-gregory.price@memverge.com>

This patch adds a new kernel structure `struct mempolicy_args`,
intended to be used for an extensible get/set_mempolicy interface.
It implements the fields required to support the existing syscall
interfaces, but does not expose any user-facing arg structure.

mpol_new is refactored to take the argument structure so that future
mempolicy extensions can all be managed in the mempolicy constructor.

The set_mempolicy and mbind syscalls are refactored to utilize the
new argument structure, as are all the callers of mpol_new() and
do_set_mempolicy.
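As a rough sketch of the calling pattern this enables (a kernel-internal
fragment, abridged and only illustrative; it mirrors what
kernel_set_mempolicy does in the diff below, with the nodemask contents
chosen arbitrarily):

	/* Sketch: zero the struct so future fields default safely */
	struct mempolicy_args margs;
	struct mempolicy *new;
	nodemask_t nodes = NODE_MASK_NONE;

	node_set(0, nodes);
	node_set(1, nodes);

	memset(&margs, 0, sizeof(margs));
	margs.mode = MPOL_INTERLEAVE;		/* policy mode */
	margs.mode_flags = 0;			/* e.g. MPOL_F_STATIC_NODES */
	margs.policy_nodes = &nodes;

	new = mpol_new(&margs);	/* was: mpol_new(mode, mode_flags, &nodes) */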
Signed-off-by: Gregory Price --- include/linux/mempolicy.h | 12 +++++++ mm/mempolicy.c | 69 +++++++++++++++++++++++++++++---------- 2 files changed, 63 insertions(+), 18 deletions(-) diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index ba09167e80f7..aeac19dfc2b6 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -61,6 +61,18 @@ struct mempolicy { } wil; }; +/* + * Describes settings of a mempolicy during set/get syscalls and + * kernel internal calls to do_set_mempolicy() + */ +struct mempolicy_args { + unsigned short mode; /* policy mode */ + unsigned short mode_flags; /* policy mode flags */ + int home_node; /* mbind: use MPOL_MF_HOME_NODE */ + nodemask_t *policy_nodes; /* get/set/mbind */ + int policy_node; /* get: policy node information */ +}; + /* * Support for managing mempolicy data objects (clone, copy, destroy) * The default fast path of a NULL MPOL_DEFAULT policy is always inlined. diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 59ac0da24f56..42037b7ff6d6 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -265,10 +265,12 @@ static int mpol_set_nodemask(struct mempolicy *pol, * This function just creates a new policy, does some check and simple * initialization. You must invoke mpol_set_nodemask() to set nodes. */ -static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags, - nodemask_t *nodes) +static struct mempolicy *mpol_new(struct mempolicy_args *args) { struct mempolicy *policy; + unsigned short mode = args->mode; + unsigned short flags = args->mode_flags; + nodemask_t *nodes = args->policy_nodes; if (mode == MPOL_DEFAULT) { if (nodes && !nodes_empty(*nodes)) @@ -817,8 +819,7 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma, } /* Set the process memory policy */ -static long do_set_mempolicy(unsigned short mode, unsigned short flags, - nodemask_t *nodes) +static long do_set_mempolicy(struct mempolicy_args *args) { struct mempolicy *new, *old; NODEMASK_SCRATCH(scratch); @@ -827,14 +828,14 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags, if (!scratch) return -ENOMEM; - new = mpol_new(mode, flags, nodes); + new = mpol_new(args); if (IS_ERR(new)) { ret = PTR_ERR(new); goto out; } task_lock(current); - ret = mpol_set_nodemask(new, nodes, scratch); + ret = mpol_set_nodemask(new, args->policy_nodes, scratch); if (ret) { task_unlock(current); mpol_put(new); @@ -1232,8 +1233,7 @@ static struct folio *alloc_migration_target_by_mpol(struct folio *src, #endif static long do_mbind(unsigned long start, unsigned long len, - unsigned short mode, unsigned short mode_flags, - nodemask_t *nmask, unsigned long flags) + struct mempolicy_args *margs, unsigned long flags) { struct mm_struct *mm = current->mm; struct vm_area_struct *vma, *prev; @@ -1253,7 +1253,7 @@ static long do_mbind(unsigned long start, unsigned long len, if (start & ~PAGE_MASK) return -EINVAL; - if (mode == MPOL_DEFAULT) + if (margs->mode == MPOL_DEFAULT) flags &= ~MPOL_MF_STRICT; len = PAGE_ALIGN(len); @@ -1264,7 +1264,7 @@ static long do_mbind(unsigned long start, unsigned long len, if (end == start) return 0; - new = mpol_new(mode, mode_flags, nmask); + new = mpol_new(margs); if (IS_ERR(new)) return PTR_ERR(new); @@ -1281,7 +1281,8 @@ static long do_mbind(unsigned long start, unsigned long len, NODEMASK_SCRATCH(scratch); if (scratch) { mmap_write_lock(mm); - err = mpol_set_nodemask(new, nmask, scratch); + err = mpol_set_nodemask(new, margs->policy_nodes, + scratch); if (err) mmap_write_unlock(mm); } else 
@@ -1295,7 +1296,7 @@ static long do_mbind(unsigned long start, unsigned long len, * Lock the VMAs before scanning for pages to migrate, * to ensure we don't miss a concurrently inserted page. */ - nr_failed = queue_pages_range(mm, start, end, nmask, + nr_failed = queue_pages_range(mm, start, end, margs->policy_nodes, flags | MPOL_MF_INVERT | MPOL_MF_WRLOCK, &pagelist); if (nr_failed < 0) { @@ -1500,6 +1501,7 @@ static long kernel_mbind(unsigned long start, unsigned long len, unsigned long mode, const unsigned long __user *nmask, unsigned long maxnode, unsigned int flags) { + struct mempolicy_args margs; unsigned short mode_flags; nodemask_t nodes; int lmode = mode; @@ -1514,7 +1516,12 @@ static long kernel_mbind(unsigned long start, unsigned long len, if (err) return err; - return do_mbind(start, len, lmode, mode_flags, &nodes, flags); + memset(&margs, 0, sizeof(margs)); + margs.mode = lmode; + margs.mode_flags = mode_flags; + margs.policy_nodes = &nodes; + + return do_mbind(start, len, &margs, flags); } SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, len, @@ -1595,6 +1602,7 @@ SYSCALL_DEFINE6(mbind, unsigned long, start, unsigned long, len, static long kernel_set_mempolicy(int mode, const unsigned long __user *nmask, unsigned long maxnode) { + struct mempolicy_args args; unsigned short mode_flags; nodemask_t nodes; int lmode = mode; @@ -1608,7 +1616,12 @@ static long kernel_set_mempolicy(int mode, const unsigned long __user *nmask, if (err) return err; - return do_set_mempolicy(lmode, mode_flags, &nodes); + memset(&args, 0, sizeof(args)); + args.mode = lmode; + args.mode_flags = mode_flags; + args.policy_nodes = &nodes; + + return do_set_mempolicy(&args); } SYSCALL_DEFINE3(set_mempolicy, int, mode, const unsigned long __user *, nmask, @@ -2890,6 +2903,7 @@ static int shared_policy_replace(struct shared_policy *sp, pgoff_t start, void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol) { int ret; + struct mempolicy_args margs; sp->root = RB_ROOT; /* empty tree == default mempolicy */ rwlock_init(&sp->lock); @@ -2902,8 +2916,12 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol) if (!scratch) goto put_mpol; + memset(&margs, 0, sizeof(margs)); + margs.mode = mpol->mode; + margs.mode_flags = mpol->flags; + margs.policy_nodes = &mpol->w.user_nodemask; /* contextualize the tmpfs mount point mempolicy to this file */ - npol = mpol_new(mpol->mode, mpol->flags, &mpol->w.user_nodemask); + npol = mpol_new(&margs); if (IS_ERR(npol)) goto free_scratch; /* no valid nodemask intersection */ @@ -3011,6 +3029,7 @@ static inline void __init check_numabalancing_enable(void) void __init numa_policy_init(void) { + struct mempolicy_args args; nodemask_t interleave_nodes; unsigned long largest = 0; int nid, prefer = 0; @@ -3056,7 +3075,11 @@ void __init numa_policy_init(void) if (unlikely(nodes_empty(interleave_nodes))) node_set(prefer, interleave_nodes); - if (do_set_mempolicy(MPOL_INTERLEAVE, 0, &interleave_nodes)) + memset(&args, 0, sizeof(args)); + args.mode = MPOL_INTERLEAVE; + args.policy_nodes = &interleave_nodes; + + if (do_set_mempolicy(&args)) pr_err("%s: interleaving failed\n", __func__); check_numabalancing_enable(); @@ -3065,7 +3088,12 @@ void __init numa_policy_init(void) /* Reset policy of current process to default */ void numa_default_policy(void) { - do_set_mempolicy(MPOL_DEFAULT, 0, NULL); + struct mempolicy_args args; + + memset(&args, 0, sizeof(args)); + args.mode = MPOL_DEFAULT; + + do_set_mempolicy(&args); 
} /* @@ -3095,6 +3123,7 @@ static const char * const policy_modes[] = */ int mpol_parse_str(char *str, struct mempolicy **mpol) { + struct mempolicy_args margs; struct mempolicy *new = NULL; unsigned short mode_flags; nodemask_t nodes; @@ -3181,7 +3210,11 @@ int mpol_parse_str(char *str, struct mempolicy **mpol) goto out; } - new = mpol_new(mode, mode_flags, &nodes); + memset(&margs, 0, sizeof(margs)); + margs.mode = mode; + margs.mode_flags = mode_flags; + margs.policy_nodes = &nodes; + new = mpol_new(&margs); if (IS_ERR(new)) goto out; From patchwork Mon Dec 18 19:46:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Price X-Patchwork-Id: 13497515 Received: from mail-pl1-f193.google.com (mail-pl1-f193.google.com [209.85.214.193]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9476E74E2F; Mon, 18 Dec 2023 19:47:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="cpttFZQD" Received: by mail-pl1-f193.google.com with SMTP id d9443c01a7336-1d3b4b803f4so7589415ad.1; Mon, 18 Dec 2023 11:47:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1702928823; x=1703533623; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=LHTqs7dpBmEn4KdO/6oNdwNsZG5gzWbikrQ+JB0FZfM=; b=cpttFZQDEdMkB73ib0tsvz43Tgzqjz75F7wOu6cgrV+vMbl02jZ/dD/hJEHZ1QIg8v BEWERgDwcmJfqMRDgAkcW0w0KWrvW0xYdWTUooBRXhQt9Wq585+dZBZlElMkBwFAfKV2 6/ryNGPDyP5EMVx7nNgdGjFQCRWMWstFAoviE7FFs96gJ+6H2Dht4jNwPt6+1BJ+X5wu dqpF+JRDYjC/FRcd2qGJYd5/UCnDlng0R9UjxvH69AXfnJsKyRRaGo2eLi2AFdA+O316 BiPPJvfJidMrhY4umo6wEQ3qeDURmN9v6f741iYOYsgGqaQsBdTRQGgz53gIYQLU6+dZ 6iWw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702928823; x=1703533623; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=LHTqs7dpBmEn4KdO/6oNdwNsZG5gzWbikrQ+JB0FZfM=; b=Tu/sY0yOPIgKoR5VyVnDirsgLsX2HpJgnKLQeoPebCSdKSVXKS2tbfdislrpWnw/T5 Hf//rz/niuUjPEiYcW0jsYkcTIdyRXojDf0Kwphu4DF1O1fuMHEd/O34sq8989LzTaJn nflahOjChH7pi/+CaACkw02evO3YHbzqi/gT7riduPntuYsNGUxQZg9pGo+rcbBKILDB xYnEdZtLOaL4122g2O6Ono+FPXe7U5ZNSfu26iGpY1yNgRqUhKKz6ZkrD4zlrJy9eCVZ VIoIad+l7y2Fl/TIKRJi3kxuaKJnHcLZYeR6YGs/qhSRWDKs8T48L24Upg0n3D2OhI9e jXEg== X-Gm-Message-State: AOJu0YwyXxkB3Sl2O+j4yyarZ1dQOWVyQjGr+Pjl+DFaHMHg4EEdeI7t 8DbzG/TUubVmeZjYw3wMDw== X-Google-Smtp-Source: AGHT+IEuiCIzHZoSaKljz8Xu8gZjlhUFEwbhr2C8Xz88VdeRlmyWhChyIM2RKyajYncVtVwvuI1x7g== X-Received: by 2002:a17:902:db0d:b0:1d3:d9c2:224a with SMTP id m13-20020a170902db0d00b001d3d9c2224amr276918plx.47.1702928822837; Mon, 18 Dec 2023 11:47:02 -0800 (PST) Received: from fedora.mshome.net (pool-173-79-56-208.washdc.fios.verizon.net. 
From: Gregory Price
To: linux-mm@kvack.org
Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org,
    akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de,
    luto@kernel.org, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org,
    tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com,
    corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com,
    honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org,
    jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com,
    emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com
Subject: [PATCH v4 05/11] mm/mempolicy: refactor kernel_get_mempolicy for code re-use
Date: Mon, 18 Dec 2023 14:46:25 -0500
Message-Id: <20231218194631.21667-6-gregory.price@memverge.com>
In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com>
References: <20231218194631.21667-1-gregory.price@memverge.com>

Pull operation flag checking from inside do_get_mempolicy out to
kernel_get_mempolicy. This allows us to flatten the internal code and
break it into separate functions, so that future syscalls
(get_mempolicy2, process_get_mempolicy) can re-use the code even after
additional extensions are made.

The primary change is that the flags argument is treated as the
multiplexer that it actually is. For get_mempolicy, the flags select
between 3 different primary operations:

	if (flags & MPOL_F_MEMS_ALLOWED)
		return task->mems_allowed
	else if (flags & MPOL_F_ADDR)
		return vma mempolicy information
	else
		return task mempolicy information

Plus the behavior modifying flag:

	if (flags & MPOL_F_NODE)
		change the return value of (int __user *policy)
		based on whether MPOL_F_ADDR was set

The original behavior of get_mempolicy is retained, but we utilize the
new mempolicy_args structure to pass the operations down the stack.
This will allow us to extend the internal functions without affecting
the legacy behavior of get_mempolicy.
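A minimal userspace sketch of the three query modes described above
(purely illustrative, not part of the patch; it assumes a system with at
most 64 nodes, uses the libnuma <numaif.h> declarations, links with
-lnuma, and omits error checking):

#include <numaif.h>
#include <stdio.h>

int main(void)
{
	int mode = 0;
	unsigned long nodemask = 0;
	unsigned long maxnode = 8 * sizeof(nodemask);

	/* Default: the calling thread's policy mode and nodemask */
	get_mempolicy(&mode, &nodemask, maxnode, NULL, 0);

	/* MPOL_F_MEMS_ALLOWED: the nodes the thread may allocate from */
	get_mempolicy(NULL, &nodemask, maxnode, NULL, MPOL_F_MEMS_ALLOWED);

	/* MPOL_F_ADDR | MPOL_F_NODE: the node backing a specific address */
	get_mempolicy(&mode, NULL, 0, &mode, MPOL_F_ADDR | MPOL_F_NODE);
	printf("address is backed by node %d\n", mode);

	return 0;
}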
Signed-off-by: Gregory Price --- mm/mempolicy.c | 245 +++++++++++++++++++++++++++++++------------------ 1 file changed, 155 insertions(+), 90 deletions(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 42037b7ff6d6..4426365a353d 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -895,106 +895,111 @@ static int lookup_node(struct mm_struct *mm, unsigned long addr) return ret; } -/* Retrieve NUMA policy */ -static long do_get_mempolicy(int *policy, nodemask_t *nmask, - unsigned long addr, unsigned long flags) +/* Retrieve the mems_allowed for current task */ +static inline long do_get_mems_allowed(nodemask_t *nmask) { - int err; - struct mm_struct *mm = current->mm; - struct vm_area_struct *vma = NULL; - struct mempolicy *pol = current->mempolicy, *pol_refcount = NULL; + task_lock(current); + *nmask = cpuset_current_mems_allowed; + task_unlock(current); + return 0; +} - if (flags & - ~(unsigned long)(MPOL_F_NODE|MPOL_F_ADDR|MPOL_F_MEMS_ALLOWED)) - return -EINVAL; +/* If the policy has additional node information to retrieve, return it */ +static long do_get_policy_node(struct mempolicy *pol) +{ + /* + * For MPOL_INTERLEAVE, the extended node information is the next + * node that will be selected for interleave. For weighted interleave + * we return the next node based on the current weight. + */ + if (pol == current->mempolicy && pol->mode == MPOL_INTERLEAVE) + return next_node_in(current->il_prev, pol->nodes); - if (flags & MPOL_F_MEMS_ALLOWED) { - if (flags & (MPOL_F_NODE|MPOL_F_ADDR)) - return -EINVAL; - *policy = 0; /* just so it's initialized */ + if (pol == current->mempolicy && + pol->mode == MPOL_WEIGHTED_INTERLEAVE) { + if (pol->wil.cur_weight) + return current->il_prev; + else + return next_node_in(current->il_prev, pol->nodes); + } + return -EINVAL; +} + +/* Handle user_nodemask condition when fetching nodemask for userspace */ +static void do_get_mempolicy_nodemask(struct mempolicy *pol, nodemask_t *nmask) +{ + if (mpol_store_user_nodemask(pol)) { + *nmask = pol->w.user_nodemask; + } else { task_lock(current); - *nmask = cpuset_current_mems_allowed; + get_policy_nodemask(pol, nmask); task_unlock(current); - return 0; } +} - if (flags & MPOL_F_ADDR) { - pgoff_t ilx; /* ignored here */ - /* - * Do NOT fall back to task policy if the - * vma/shared policy at addr is NULL. We - * want to return MPOL_DEFAULT in this case. - */ - mmap_read_lock(mm); - vma = vma_lookup(mm, addr); - if (!vma) { - mmap_read_unlock(mm); - return -EFAULT; - } - pol = __get_vma_policy(vma, addr, &ilx); - } else if (addr) - return -EINVAL; +/* Retrieve NUMA policy for a VMA assocated with a given address */ +static long do_get_vma_mempolicy(unsigned long addr, int *addr_node, + struct mempolicy_args *args) +{ + pgoff_t ilx; + struct mm_struct *mm = current->mm; + struct vm_area_struct *vma = NULL; + struct mempolicy *pol = NULL; + mmap_read_lock(mm); + vma = vma_lookup(mm, addr); + if (!vma) { + mmap_read_unlock(mm); + return -EFAULT; + } + pol = __get_vma_policy(vma, addr, &ilx); if (!pol) - pol = &default_policy; /* indicates default behavior */ + pol = &default_policy; + else + mpol_get(pol); + mmap_read_unlock(mm); - if (flags & MPOL_F_NODE) { - if (flags & MPOL_F_ADDR) { - /* - * Take a refcount on the mpol, because we are about to - * drop the mmap_lock, after which only "pol" remains - * valid, "vma" is stale. 
- */ - pol_refcount = pol; - vma = NULL; - mpol_get(pol); - mmap_read_unlock(mm); - err = lookup_node(mm, addr); - if (err < 0) - goto out; - *policy = err; - } else if (pol == current->mempolicy && - pol->mode == MPOL_INTERLEAVE) { - *policy = next_node_in(current->il_prev, pol->nodes); - } else if (pol == current->mempolicy && - (pol->mode == MPOL_WEIGHTED_INTERLEAVE)) { - if (pol->wil.cur_weight) - *policy = current->il_prev; - else - *policy = next_node_in(current->il_prev, - pol->nodes); - } else { - err = -EINVAL; - goto out; - } - } else { - *policy = pol == &default_policy ? MPOL_DEFAULT : - pol->mode; - /* - * Internal mempolicy flags must be masked off before exposing - * the policy to userspace. - */ - *policy |= (pol->flags & MPOL_MODE_FLAGS); - } + /* Fetch the node for the given address */ + if (addr_node) + *addr_node = lookup_node(mm, addr); - err = 0; - if (nmask) { - if (mpol_store_user_nodemask(pol)) { - *nmask = pol->w.user_nodemask; - } else { - task_lock(current); - get_policy_nodemask(pol, nmask); - task_unlock(current); - } + args->mode = pol == &default_policy ? MPOL_DEFAULT : pol->mode; + args->mode_flags = (pol->flags & MPOL_MODE_FLAGS); + args->home_node = pol->home_node; + + /* If this policy has extra node info, fetch that */ + args->policy_node = do_get_policy_node(pol); + + if (args->policy_nodes) + do_get_mempolicy_nodemask(pol, args->policy_nodes); + + if (pol != &default_policy) { + mpol_put(pol); + mpol_cond_put(pol); } - out: - mpol_cond_put(pol); - if (vma) - mmap_read_unlock(mm); - if (pol_refcount) - mpol_put(pol_refcount); - return err; + return 0; +} + +/* Retrieve NUMA policy for the current task */ +static long do_get_task_mempolicy(struct mempolicy_args *args) +{ + struct mempolicy *pol = current->mempolicy; + + if (!pol) + pol = &default_policy; /* indicates default behavior */ + + args->mode = pol == &default_policy ? 
MPOL_DEFAULT : pol->mode; + /* Internal flags must be masked off before exposing to userspace */ + args->mode_flags = (pol->flags & MPOL_MODE_FLAGS); + args->home_node = NUMA_NO_NODE; + + args->policy_node = do_get_policy_node(pol); + + if (args->policy_nodes) + do_get_mempolicy_nodemask(pol, args->policy_nodes); + + return 0; } #ifdef CONFIG_MIGRATION @@ -1731,16 +1736,76 @@ static int kernel_get_mempolicy(int __user *policy, unsigned long addr, unsigned long flags) { + struct mempolicy_args args; int err; - int pval; + int address_node = NUMA_NO_NODE; + int pval = 0; nodemask_t nodes; if (nmask != NULL && maxnode < nr_node_ids) return -EINVAL; - addr = untagged_addr(addr); + if (flags & + ~(unsigned long)(MPOL_F_NODE|MPOL_F_ADDR|MPOL_F_MEMS_ALLOWED)) + return -EINVAL; - err = do_get_mempolicy(&pval, &nodes, addr, flags); + /* Ensure any data that may be copied to userland is initialized */ + memset(&args, 0, sizeof(args)); + args.policy_nodes = &nodes; + + /* + * set_mempolicy was originally multiplexed based on 3 flags: + * MPOL_F_MEMS_ALLOWED: fetch task->mems_allowed + * MPOL_F_ADDR : operate on vma->mempolicy + * MPOL_F_NODE : change return value of *policy + * + * Split this behavior out here, rather than internal functions, + * so that the internal functions can be re-used by future + * get_mempolicy2 interfaces and the arg structure made extensible + */ + if (flags & MPOL_F_MEMS_ALLOWED) { + if (flags & (MPOL_F_NODE|MPOL_F_ADDR)) + return -EINVAL; + pval = 0; /* just so it's initialized */ + err = do_get_mems_allowed(&nodes); + } else if (flags & MPOL_F_ADDR) { + /* If F_ADDR, we operation on a vma policy (or default) */ + err = do_get_vma_mempolicy(untagged_addr(addr), + &address_node, &args); + if (err) + return err; + /* if (F_ADDR | F_NODE), *pval is the address' node */ + if (flags & MPOL_F_NODE) { + /* if we failed to fetch, that's likely an EFAULT */ + if (address_node < 0) + return address_node; + pval = address_node; + } else + pval = args.mode | args.mode_flags; + } else { + /* if not F_ADDR and addr != null, EINVAL */ + if (addr) + return -EINVAL; + + err = do_get_task_mempolicy(&args); + if (err) + return err; + /* + * if F_NODE was set and mode was MPOL_INTERLEAVE + * *pval is equal to next interleave node. + * + * if args.policy_node < 0, this means the mode did + * not have a policy. 
This presently emulates the + * original behavior of (F_NODE) & (!MPOL_INTERLEAVE) + * producing -EINVAL + */ + if (flags & MPOL_F_NODE) { + if (args.policy_node < 0) + return args.policy_node; + pval = args.policy_node; + } else + pval = args.mode | args.mode_flags; + } if (err) return err; From patchwork Mon Dec 18 19:46:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Price X-Patchwork-Id: 13497516 Received: from mail-pl1-f194.google.com (mail-pl1-f194.google.com [209.85.214.194]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AC4217608E; Mon, 18 Dec 2023 19:47:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="BUGZwS5B" Received: by mail-pl1-f194.google.com with SMTP id d9443c01a7336-1d39e2f1089so18979955ad.1; Mon, 18 Dec 2023 11:47:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1702928828; x=1703533628; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=1HdbfjrUNXuDoTm+qDnoLliIoy9y0PH4Q8y6qoN4vRo=; b=BUGZwS5BuLtnGKLJ20jD9Se++Wq2gx3jGUipTPW6GC62ZBdy3UdkpLXCSV+lXT/aho y0Dt/j/FjDf++kz3FiPFFcCLRpxBA7bm6uW1hzLXP84tcnCOL9b1q+A+PxlsxixTXGy3 2nYueyJAvS8JL0TnbFwE+CAxAYlxju7JATnXnRsPFC4LiT83EzQZKfsnX9kc3xaOjE/B ce+nUZ8QmZIen3qh0ZWWVnRa1yiCkJCtqibF/nPuTYTYCUn43d0uxui/7x38XFr/LLzJ NzPFD3OVQQ+Lqtj1Ff8ldtojQrq3Ilbdt8rMSy0iTKQ/ZvhnUvaJuhU+IYNb4riS2+VQ //+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702928828; x=1703533628; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=1HdbfjrUNXuDoTm+qDnoLliIoy9y0PH4Q8y6qoN4vRo=; b=oXqwr8jelZJZ0C4HqMT7SAHztX1qeeL/h9QP4Kz/hji5GMxQeiJl5Xw8Hze32gjn2E 9/E6MjwWI8oVHOclD0QwBmRCtx3q0O7moUf568Rjqu/YogFFBGxi+5I31x+njsngHs3M yl7D88p0j5EOtg/8HM0cjnwYfAdXPxqiD4UNyq/3YnKoSQsg2cNsE194n7PMwqj/nZl3 yA2Mt3QhVKA52NAQ5QLmgef61uTStAlz5s6Lb52eW7AzRZ8Dd+kbaJejfilWTOjcgh7z 4/MKTibIY7HlmCxdKOZLYBhMpKLLE76cATmk8DUASm4fK93ZmFO/MGfJlRvAIlg5tgpr nxbw== X-Gm-Message-State: AOJu0YwKJj9MeO1TG0+0KDqH6blRkiB3eBF/Bs67HQD7iZiMxwsGmUJ8 IdPH71sTSIjoU1uAwr4wPQ== X-Google-Smtp-Source: AGHT+IFFELTMN7anDySBQ6xg3aJOuW+1fBiU4AjT1h7wEnplFVcEhIUtJlqRaBhHprORVyr+ExKmAA== X-Received: by 2002:a17:902:6844:b0:1cf:7683:93e with SMTP id f4-20020a170902684400b001cf7683093emr16850857pln.24.1702928827983; Mon, 18 Dec 2023 11:47:07 -0800 (PST) Received: from fedora.mshome.net (pool-173-79-56-208.washdc.fios.verizon.net. 
[173.79.56.208]) by smtp.gmail.com with ESMTPSA id 11-20020a170902c20b00b001ce664c05b0sm19456335pll.33.2023.12.18.11.47.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 18 Dec 2023 11:47:07 -0800 (PST) From: Gregory Price X-Google-Original-From: Gregory Price To: linux-mm@kvack.org Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de, luto@kernel.org, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org, tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com, corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com, honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org, jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com, emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com Subject: [PATCH v4 06/11] mm/mempolicy: allow home_node to be set by mpol_new Date: Mon, 18 Dec 2023 14:46:26 -0500 Message-Id: <20231218194631.21667-7-gregory.price@memverge.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com> References: <20231218194631.21667-1-gregory.price@memverge.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 This patch adds the plumbing into mpol_new() to allow the argument structure's home_node field to be set during mempolicy creation. The syscall sys_set_mempolicy_home_node was added to allow a home node to be registered for a vma. For set_mempolicy2 and mbind2 syscalls, it would be useful to add this as an extension to allow the user to submit a fully formed mempolicy configuration in a single call, rather than require multiple calls to configure a mempolicy. This will become particularly useful if/when pidfd interfaces to change process mempolicies from outside the task appear, as each call to change the mempolicy does an atomic swap of that policy in the task, rather than mutate the policy. 
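For context, a minimal sketch (not part of this patch) of the multi-call sequence referred to above: mbind() installs the VMA policy, then a second call registers the home node. It assumes libnuma's <numaif.h> for mbind() and kernel headers that provide __NR_set_mempolicy_home_node (there is no glibc wrapper); error handling is omitted:

    #define _GNU_SOURCE
    #include <numaif.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_set_mempolicy_home_node
    #define __NR_set_mempolicy_home_node 450        /* x86-64 */
    #endif

    int main(void)
    {
            size_t len = 2UL << 20;
            void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            unsigned long nodes = (1UL << 0) | (1UL << 1);  /* nodes 0-1 */

            /* call 1: install the VMA policy */
            mbind(addr, len, MPOL_BIND, &nodes, 8 * sizeof(nodes), 0);

            /* call 2: attach the home node after the fact */
            syscall(__NR_set_mempolicy_home_node, (unsigned long)addr,
                    len, 0UL /* home node */, 0UL);
            return 0;
    }

An extended interface that accepts home_node at mempolicy creation time collapses this into a single call.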
Signed-off-by: Gregory Price --- mm/mempolicy.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 4426365a353d..fe340480e296 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -306,7 +306,7 @@ static struct mempolicy *mpol_new(struct mempolicy_args *args) atomic_set(&policy->refcnt, 1); policy->mode = mode; policy->flags = flags; - policy->home_node = NUMA_NO_NODE; + policy->home_node = args->home_node; policy->wil.cur_weight = 0; return policy; @@ -1625,6 +1625,7 @@ static long kernel_set_mempolicy(int mode, const unsigned long __user *nmask, args.mode = lmode; args.mode_flags = mode_flags; args.policy_nodes = &nodes; + args.home_node = NUMA_NO_NODE; return do_set_mempolicy(&args); } @@ -2985,6 +2986,8 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol) margs.mode = mpol->mode; margs.mode_flags = mpol->flags; margs.policy_nodes = &mpol->w.user_nodemask; + margs.home_node = NUMA_NO_NODE; + /* contextualize the tmpfs mount point mempolicy to this file */ npol = mpol_new(&margs); if (IS_ERR(npol)) @@ -3143,6 +3146,7 @@ void __init numa_policy_init(void) memset(&args, 0, sizeof(args)); args.mode = MPOL_INTERLEAVE; args.policy_nodes = &interleave_nodes; + args.home_node = NUMA_NO_NODE; if (do_set_mempolicy(&args)) pr_err("%s: interleaving failed\n", __func__); @@ -3157,6 +3161,7 @@ void numa_default_policy(void) memset(&args, 0, sizeof(args)); args.mode = MPOL_DEFAULT; + args.home_node = NUMA_NO_NODE; do_set_mempolicy(&args); } @@ -3279,6 +3284,8 @@ int mpol_parse_str(char *str, struct mempolicy **mpol) margs.mode = mode; margs.mode_flags = mode_flags; margs.policy_nodes = &nodes; + margs.home_node = NUMA_NO_NODE; + new = mpol_new(&margs); if (IS_ERR(new)) goto out; From patchwork Mon Dec 18 19:46:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Price X-Patchwork-Id: 13497517 Received: from mail-pl1-f194.google.com (mail-pl1-f194.google.com [209.85.214.194]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 45A3C73462; Mon, 18 Dec 2023 19:47:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Hnn43iUC" Received: by mail-pl1-f194.google.com with SMTP id d9443c01a7336-1d2f1cecf89so16785235ad.1; Mon, 18 Dec 2023 11:47:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1702928832; x=1703533632; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=iHAGnEescBI+H6+nuN2AFaKvkx/2sXtk6C+MWO0ta0k=; b=Hnn43iUCKwrasE8qM/G94Hx8ENvjXq34zmbVPAtdZ/L1l5JEyHknrZnW1Jp2zvV2RN MoOKYuz024qFcypRHtz5yWs5LMf17v1nLSs6yavjXhKirIORBrNWmeB8oeiZV4qBJJ6l w2xIbzowFQ7FcPGeh1cuFnP61T+0V/ODasIO1qILO0jss/Y7NSeX/2w3GbYncc8Ax9Ht IL7gYiLK41ZJwDlK+FXLzKAQd6DW6Mw74HkhTnwUHIW8HxwI2D8YVdAcD3wxkQ7ZNSmY 65lyGvuBdatOXQnT2FhBBqvVMiMfx9V7ht68QYlaum/PdYjKbxQfqiXIwIX3czoFprzL qNeA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702928832; x=1703533632; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=iHAGnEescBI+H6+nuN2AFaKvkx/2sXtk6C+MWO0ta0k=; b=AoApucTZb0cHQuPWTY6I5qp4S9nitvs6BEiIG9vRdCqQQ2DCcDrRI+Hv874GYdtQym vCmlc4TBPGa04IeisKTBzjcpdWl1lqRdsdnTwHStJEtbCrpKQ63uG+Aab4mF3oAY2hS+ ldvn4MT/vZY/AQQo9cquNBrgbCLs+Omqn2kMXvLczJZRTQHAxLqD47bD82J+daWV1SYf vaGQ2ov5ejxRbPkDeTMwuScYyHg9wK77jUdpv3ed7DoP8T2hHnKYYPkXFNtubaRDx2ur 0gZarkklGpnOVsLNZBSkJGfMomzNoyQ4lxWLiUseTx354rb4DxSNdmHxi3y24tqkz8RQ Hw/w== X-Gm-Message-State: AOJu0YxMEl+D930W1gl+Go3L+8qX6zgtZJuv8jxun//riCNkWjwyeyUA tZktszcZBgoVqzWDJcqStPxBUFJLSNNa1NU= X-Google-Smtp-Source: AGHT+IH68lABBd7MdPlJz2f1cHXOHBSvwu1StwzNvbHAn7HqW2kB3VG59ByjU9BE7WtdrkFq/zNRBg== X-Received: by 2002:a17:903:2442:b0:1d3:b684:fc0c with SMTP id l2-20020a170903244200b001d3b684fc0cmr1144510pls.104.1702928832513; Mon, 18 Dec 2023 11:47:12 -0800 (PST) Received: from fedora.mshome.net (pool-173-79-56-208.washdc.fios.verizon.net. [173.79.56.208]) by smtp.gmail.com with ESMTPSA id 11-20020a170902c20b00b001ce664c05b0sm19456335pll.33.2023.12.18.11.47.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 18 Dec 2023 11:47:12 -0800 (PST) From: Gregory Price X-Google-Original-From: Gregory Price To: linux-mm@kvack.org Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de, luto@kernel.org, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org, tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com, corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com, honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org, jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com, emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com, Frank van der Linden Subject: [PATCH v4 07/11] mm/mempolicy: add userland mempolicy arg structure Date: Mon, 18 Dec 2023 14:46:27 -0500 Message-Id: <20231218194631.21667-8-gregory.price@memverge.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com> References: <20231218194631.21667-1-gregory.price@memverge.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 This patch adds the new user-api argument structure intended for set_mempolicy2 and mbind2. struct mpol_args { __u16 mode; __u16 mode_flags; __s32 home_node; /* mbind2: policy home node */ __aligned_u64 *pol_nodes; __u64 pol_maxnodes; __s32 policy_node; /* get_mempolicy: policy node info */ }; This structure is intended to be extensible as new mempolicy extensions are added. For example, set_mempolicy_home_node was added to allow vma mempolicies to have a preferred/home node assigned. This structure allows the addition of that setting at the time the mempolicy is set, rather than requiring additional calls to modify the policy. Full breakdown of arguments as of this patch: mode: Mempolicy mode (MPOL_DEFAULT, MPOL_INTERLEAVE) mode_flags: Flags previously or'd into mode in set_mempolicy (e.g.: MPOL_F_STATIC_NODES, MPOL_F_RELATIVE_NODES) home_node: for mbind2. Allows the setting of a policy's home with the use of MPOL_MF_HOME_NODE pol_nodes: Policy nodemask pol_maxnodes: Max number of nodes in the policy nodemask policy_node: for get_mempolicy2. 
Returns extended information about a policy that was previously reported by passing MPOL_F_NODE to get_mempolicy. Instead of overriding the mode value, simply add a field. Suggested-by: Frank van der Linden Suggested-by: Vinicius Tavares Petrucci Suggested-by: Hasan Al Maruf Signed-off-by: Gregory Price Co-developed-by: Vinicius Tavares Petrucci Signed-off-by: Vinicius Tavares Petrucci --- .../admin-guide/mm/numa_memory_policy.rst | 18 ++++++++++++++++++ include/linux/syscalls.h | 1 + include/uapi/linux/mempolicy.h | 10 ++++++++++ 3 files changed, 29 insertions(+) diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst index d2c8e712785b..d5fcebdd7996 100644 --- a/Documentation/admin-guide/mm/numa_memory_policy.rst +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst @@ -482,6 +482,24 @@ closest to which page allocation will come from. Specifying the home node overri the default allocation policy to allocate memory close to the local node for an executing CPU. +Extended Mempolicy Arguments:: + + struct mpol_args { + __u16 mode; + __u16 mode_flags; + __s32 home_node; /* mbind2: policy home node */ + __aligned_u64 pol_nodes; /* nodemask pointer */ + __u64 pol_maxnodes; + __s32 policy_node; /* get_mempolicy2: policy node information */ + }; + +The extended mempolicy argument structure is defined to allow the mempolicy +interfaces future extensibility without the need for additional system calls. + +The core arguments (mode, mode_flags, pol_nodes, and pol_maxnodes) apply to +all interfaces relative to their non-extended counterparts. Each additional +field may only apply to specific extended interfaces. See the respective +extended interface man page for more details. Memory Policy Command Line Interface ==================================== diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index fd9d12de7e92..a52395ca3f00 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -74,6 +74,7 @@ struct landlock_ruleset_attr; enum landlock_rule_type; struct cachestat_range; struct cachestat; +struct mpol_args; #include #include diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h index 1f9bb10d1a47..c06f2afa7fe3 100644 --- a/include/uapi/linux/mempolicy.h +++ b/include/uapi/linux/mempolicy.h @@ -27,6 +27,16 @@ enum { MPOL_MAX, /* always last member of enum */ }; +struct mpol_args { + /* Basic mempolicy settings */ + __u16 mode; + __u16 mode_flags; + __s32 home_node; /* mbind2: policy home node */ + __aligned_u64 pol_nodes; + __u64 pol_maxnodes; + __s32 policy_node; /* get_mempolicy: policy node info */ +}; + /* Flags for set_mempolicy */ #define MPOL_F_STATIC_NODES (1 << 15) #define MPOL_F_RELATIVE_NODES (1 << 14) From patchwork Mon Dec 18 19:46:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Price X-Patchwork-Id: 13497518 Received: from mail-pl1-f193.google.com (mail-pl1-f193.google.com [209.85.214.193]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ED74C74E2F; Mon, 18 Dec 2023 19:47:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com 
header.i=@gmail.com header.b="ZBqJBwlo" Received: by mail-pl1-f193.google.com with SMTP id d9443c01a7336-1d3b8184a84so7014725ad.1; Mon, 18 Dec 2023 11:47:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1702928837; x=1703533637; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+2NTo+TTHRW539ROhG+rzpjd/BBoQi5OsXHmBxylw6M=; b=ZBqJBwloWeDQ4tEQO8xFxqL477nzYNnkdZfZCy72FYgU42I6GeaYOagV5OlDHJTs+h 3DVOIJipZnD6yFiQOWxwWJXy7syIhQ4XkcCLzmyMfc/dE2FBkCHPT2BX87TnAzUm5IXv P6SUe9AHeZixmWQwFoLd3NB76eQcoPw+jSTWRwXpMhOK3s85MQ45PoMNgY5QN8N5Kkn6 Eb1uNqFmUd4TWKkPNCZ78i8uNGuvJNRgyXV3g9gWYsmbnagMoQZVFZQWgz2G/DsFSKpf wMLROfP9nVjGKX/D6Sh0NnkCa30p1aWkbelz6eON5t+UMH926pGT+9zEL6Wn/3yYeyDj /TAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702928837; x=1703533637; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+2NTo+TTHRW539ROhG+rzpjd/BBoQi5OsXHmBxylw6M=; b=CCfwXwh6q+E59cIX1l+MYZ/of7pgH9WTXmLHW2HGOpc4NRnO4KPE43ouEuvvuWw2sI vxrx4eaxjKxIBVk75Yl5Jd+qmbZbhY9X90v4uGvMm4KpmfBZb8/W5jDCQRvmpdgUzwpJ NyhHVux0hkJMf1OTcZaohdTwW6mupUoUgxN0qbYl48oLPpa02rgcMIS1Go6pXlUeUf3O dxbj5oUpOaM2VLDSTHzOczVJMA0O2tW4VJ4xlpT6BD8DAQ8vDJnU4txWkKi3E2XBHQH9 EYfM/bPex2EIEXweQd+DTixmtXujh06w7JSok6xmdnx6fjJ+Sf2WkE4mOvq/FMStrGwP aShQ== X-Gm-Message-State: AOJu0YxSb3WmDViWao8wjTcAnQwORbUtYT8cwmf3g7c2Cy1VuKl0ie4f a0LmjWONHPQqitQiyS9TDA== X-Google-Smtp-Source: AGHT+IHOPQvcBLvwPFtGieO7cutT2SKZBtDiIpQDy1Kx5NYyQlu8RDWN2k1f2TfUdxS1oLrhlAKFJw== X-Received: by 2002:a17:902:f690:b0:1d3:b65a:40bb with SMTP id l16-20020a170902f69000b001d3b65a40bbmr1670789plg.12.1702928837178; Mon, 18 Dec 2023 11:47:17 -0800 (PST) Received: from fedora.mshome.net (pool-173-79-56-208.washdc.fios.verizon.net. [173.79.56.208]) by smtp.gmail.com with ESMTPSA id 11-20020a170902c20b00b001ce664c05b0sm19456335pll.33.2023.12.18.11.47.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 18 Dec 2023 11:47:16 -0800 (PST) From: Gregory Price X-Google-Original-From: Gregory Price To: linux-mm@kvack.org Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de, luto@kernel.org, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org, tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com, corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com, honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org, jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com, emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com, Michal Hocko Subject: [PATCH v4 08/11] mm/mempolicy: add set_mempolicy2 syscall Date: Mon, 18 Dec 2023 14:46:28 -0500 Message-Id: <20231218194631.21667-9-gregory.price@memverge.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com> References: <20231218194631.21667-1-gregory.price@memverge.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 set_mempolicy2 is an extensible set_mempolicy interface which allows a user to set the per-task memory policy. 
Defined as: set_mempolicy2(struct mpol_args *args, size_t size, unsigned long flags); relevant mpol_args fields include the following: mode: The MPOL_* policy (DEFAULT, INTERLEAVE, etc.) mode_flags: The MPOL_F_* flags that were previously passed in or'd into the mode. This was split to hopefully allow future extensions additional mode/flag space. pol_nodes: the nodemask to apply for the memory policy pol_maxnodes: The max number of nodes described by pol_nodes The usize arg is intended for the user to pass in sizeof(mpol_args) to allow forward/backward compatibility whenever possible. The flags argument is intended to future proof the syscall against future extensions which may require interpreting the arguments in the structure differently. Semantics of `set_mempolicy` are otherwise the same as `set_mempolicy` as of this patch. Suggested-by: Michal Hocko Signed-off-by: Gregory Price --- .../admin-guide/mm/numa_memory_policy.rst | 10 ++++++ arch/alpha/kernel/syscalls/syscall.tbl | 1 + arch/arm/tools/syscall.tbl | 1 + arch/arm64/include/asm/unistd.h | 2 +- arch/arm64/include/asm/unistd32.h | 2 ++ arch/m68k/kernel/syscalls/syscall.tbl | 1 + arch/microblaze/kernel/syscalls/syscall.tbl | 1 + arch/mips/kernel/syscalls/syscall_n32.tbl | 1 + arch/mips/kernel/syscalls/syscall_o32.tbl | 1 + arch/parisc/kernel/syscalls/syscall.tbl | 1 + arch/powerpc/kernel/syscalls/syscall.tbl | 1 + arch/s390/kernel/syscalls/syscall.tbl | 1 + arch/sh/kernel/syscalls/syscall.tbl | 1 + arch/sparc/kernel/syscalls/syscall.tbl | 1 + arch/x86/entry/syscalls/syscall_32.tbl | 1 + arch/x86/entry/syscalls/syscall_64.tbl | 1 + arch/xtensa/kernel/syscalls/syscall.tbl | 1 + include/linux/syscalls.h | 2 ++ include/uapi/asm-generic/unistd.h | 4 ++- kernel/sys_ni.c | 1 + mm/mempolicy.c | 36 +++++++++++++++++++ .../arch/mips/entry/syscalls/syscall_n64.tbl | 1 + .../arch/powerpc/entry/syscalls/syscall.tbl | 1 + .../perf/arch/s390/entry/syscalls/syscall.tbl | 1 + .../arch/x86/entry/syscalls/syscall_64.tbl | 1 + 25 files changed, 73 insertions(+), 2 deletions(-) diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst index d5fcebdd7996..e57d400d0281 100644 --- a/Documentation/admin-guide/mm/numa_memory_policy.rst +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst @@ -432,6 +432,8 @@ Set [Task] Memory Policy:: long set_mempolicy(int mode, const unsigned long *nmask, unsigned long maxnode); + long set_mempolicy2(struct mpol_args args, size_t size, + unsigned long flags); Set's the calling task's "task/process memory policy" to mode specified by the 'mode' argument and the set of nodes defined by @@ -440,6 +442,12 @@ specified by the 'mode' argument and the set of nodes defined by 'mode' argument with the flag (for example: MPOL_INTERLEAVE | MPOL_F_STATIC_NODES). +set_mempolicy2() is an extended version of set_mempolicy() capable +of setting a mempolicy which requires more information than can be +passed via get_mempolicy(). For example, weighted interleave with +task-local weights requires a weight array to be passed via the +'mpol_args->il_weights' argument in the 'struct mpol_args' arg. + See the set_mempolicy(2) man page for more details @@ -496,6 +504,8 @@ Extended Mempolicy Arguments:: The extended mempolicy argument structure is defined to allow the mempolicy interfaces future extensibility without the need for additional system calls. +Extended interfaces (set_mempolicy2) use this argument structure. 
+ The core arguments (mode, mode_flags, pol_nodes, and pol_maxnodes) apply to all interfaces relative to their non-extended counterparts. Each additional field may only apply to specific extended interfaces. See the respective diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl index 18c842ca6c32..0dc288a1118a 100644 --- a/arch/alpha/kernel/syscalls/syscall.tbl +++ b/arch/alpha/kernel/syscalls/syscall.tbl @@ -496,3 +496,4 @@ 564 common futex_wake sys_futex_wake 565 common futex_wait sys_futex_wait 566 common futex_requeue sys_futex_requeue +567 common set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl index 584f9528c996..50172ec0e1f5 100644 --- a/arch/arm/tools/syscall.tbl +++ b/arch/arm/tools/syscall.tbl @@ -470,3 +470,4 @@ 454 common futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h index 531effca5f1f..298313d2e0af 100644 --- a/arch/arm64/include/asm/unistd.h +++ b/arch/arm64/include/asm/unistd.h @@ -39,7 +39,7 @@ #define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE + 5) #define __ARM_NR_COMPAT_END (__ARM_NR_COMPAT_BASE + 0x800) -#define __NR_compat_syscalls 457 +#define __NR_compat_syscalls 458 #endif #define __ARCH_WANT_SYS_CLONE diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h index 9f7c1bf99526..cee8d669c342 100644 --- a/arch/arm64/include/asm/unistd32.h +++ b/arch/arm64/include/asm/unistd32.h @@ -919,6 +919,8 @@ __SYSCALL(__NR_futex_wake, sys_futex_wake) __SYSCALL(__NR_futex_wait, sys_futex_wait) #define __NR_futex_requeue 456 __SYSCALL(__NR_futex_requeue, sys_futex_requeue) +#define __NR_set_mempolicy2 457 +__SYSCALL(__NR_set_mempolicy2, sys_set_mempolicy2) /* * Please add new compat syscalls above this comment and update diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl index 7a4b780e82cb..839d90c535f2 100644 --- a/arch/m68k/kernel/syscalls/syscall.tbl +++ b/arch/m68k/kernel/syscalls/syscall.tbl @@ -456,3 +456,4 @@ 454 common futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl index 5b6a0b02b7de..567c8b883735 100644 --- a/arch/microblaze/kernel/syscalls/syscall.tbl +++ b/arch/microblaze/kernel/syscalls/syscall.tbl @@ -462,3 +462,4 @@ 454 common futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl index a842b41c8e06..cc0640e16f2f 100644 --- a/arch/mips/kernel/syscalls/syscall_n32.tbl +++ b/arch/mips/kernel/syscalls/syscall_n32.tbl @@ -395,3 +395,4 @@ 454 n32 futex_wake sys_futex_wake 455 n32 futex_wait sys_futex_wait 456 n32 futex_requeue sys_futex_requeue +457 n32 set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl index 525cc54bc63b..f7262fde98d9 100644 --- a/arch/mips/kernel/syscalls/syscall_o32.tbl +++ b/arch/mips/kernel/syscalls/syscall_o32.tbl @@ -444,3 +444,4 @@ 454 o32 futex_wake sys_futex_wake 455 o32 futex_wait sys_futex_wait 456 o32 futex_requeue 
sys_futex_requeue +457 o32 set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl index a47798fed54e..e10f0e8bd064 100644 --- a/arch/parisc/kernel/syscalls/syscall.tbl +++ b/arch/parisc/kernel/syscalls/syscall.tbl @@ -455,3 +455,4 @@ 454 common futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl index 7fab411378f2..4f03f5f42b78 100644 --- a/arch/powerpc/kernel/syscalls/syscall.tbl +++ b/arch/powerpc/kernel/syscalls/syscall.tbl @@ -543,3 +543,4 @@ 454 common futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl index 86fec9b080f6..f98dadc2e9df 100644 --- a/arch/s390/kernel/syscalls/syscall.tbl +++ b/arch/s390/kernel/syscalls/syscall.tbl @@ -459,3 +459,4 @@ 454 common futex_wake sys_futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl index 363fae0fe9bf..f47ba9f2d05d 100644 --- a/arch/sh/kernel/syscalls/syscall.tbl +++ b/arch/sh/kernel/syscalls/syscall.tbl @@ -459,3 +459,4 @@ 454 common futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl index 7bcaa3d5ea44..53fb16616728 100644 --- a/arch/sparc/kernel/syscalls/syscall.tbl +++ b/arch/sparc/kernel/syscalls/syscall.tbl @@ -502,3 +502,4 @@ 454 common futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl index c8fac5205803..4b4dc41b24ee 100644 --- a/arch/x86/entry/syscalls/syscall_32.tbl +++ b/arch/x86/entry/syscalls/syscall_32.tbl @@ -461,3 +461,4 @@ 454 i386 futex_wake sys_futex_wake 455 i386 futex_wait sys_futex_wait 456 i386 futex_requeue sys_futex_requeue +457 i386 set_mempolicy2 sys_set_mempolicy2 diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl index 8cb8bf68721c..1bc2190bec27 100644 --- a/arch/x86/entry/syscalls/syscall_64.tbl +++ b/arch/x86/entry/syscalls/syscall_64.tbl @@ -378,6 +378,7 @@ 454 common futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 # # Due to a historical design error, certain syscalls are numbered differently diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl index 06eefa9c1458..e26dc89399eb 100644 --- a/arch/xtensa/kernel/syscalls/syscall.tbl +++ b/arch/xtensa/kernel/syscalls/syscall.tbl @@ -427,3 +427,4 @@ 454 common futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index a52395ca3f00..451f0089601f 100644 
--- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -823,6 +823,8 @@ asmlinkage long sys_get_mempolicy(int __user *policy, unsigned long addr, unsigned long flags); asmlinkage long sys_set_mempolicy(int mode, const unsigned long __user *nmask, unsigned long maxnode); +asmlinkage long sys_set_mempolicy2(struct mpol_args __user *args, size_t size, + unsigned long flags); asmlinkage long sys_migrate_pages(pid_t pid, unsigned long maxnode, const unsigned long __user *from, const unsigned long __user *to); diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h index 756b013fb832..55486aba099f 100644 --- a/include/uapi/asm-generic/unistd.h +++ b/include/uapi/asm-generic/unistd.h @@ -828,9 +828,11 @@ __SYSCALL(__NR_futex_wake, sys_futex_wake) __SYSCALL(__NR_futex_wait, sys_futex_wait) #define __NR_futex_requeue 456 __SYSCALL(__NR_futex_requeue, sys_futex_requeue) +#define __NR_set_mempolicy2 457 +__SYSCALL(__NR_set_mempolicy2, sys_set_mempolicy2) #undef __NR_syscalls -#define __NR_syscalls 457 +#define __NR_syscalls 458 /* * 32 bit systems traditionally used different diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c index e1a6e3c675c0..7d6eb0eec056 100644 --- a/kernel/sys_ni.c +++ b/kernel/sys_ni.c @@ -189,6 +189,7 @@ COND_SYSCALL(remap_file_pages); COND_SYSCALL(mbind); COND_SYSCALL(get_mempolicy); COND_SYSCALL(set_mempolicy); +COND_SYSCALL(set_mempolicy2); COND_SYSCALL(migrate_pages); COND_SYSCALL(move_pages); COND_SYSCALL(set_mempolicy_home_node); diff --git a/mm/mempolicy.c b/mm/mempolicy.c index fe340480e296..eb296ed507e6 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -1636,6 +1636,42 @@ SYSCALL_DEFINE3(set_mempolicy, int, mode, const unsigned long __user *, nmask, return kernel_set_mempolicy(mode, nmask, maxnode); } +SYSCALL_DEFINE3(set_mempolicy2, struct mpol_args __user *, uargs, size_t, usize, + unsigned long, flags) +{ + struct mpol_args kargs; + struct mempolicy_args margs; + int err; + nodemask_t policy_nodemask; + unsigned long __user *nodes_ptr; + + if (flags) + return -EINVAL; + + err = copy_struct_from_user(&kargs, sizeof(kargs), uargs, usize); + if (err) + return err; + + err = validate_mpol_flags(kargs.mode, &kargs.mode_flags); + if (err) + return err; + + memset(&margs, 0, sizeof(margs)); + margs.mode = kargs.mode; + margs.mode_flags = kargs.mode_flags; + if (kargs.pol_nodes) { + nodes_ptr = u64_to_user_ptr(kargs.pol_nodes); + err = get_nodes(&policy_nodemask, nodes_ptr, + kargs.pol_maxnodes); + if (err) + return err; + margs.policy_nodes = &policy_nodemask; + } else + margs.policy_nodes = NULL; + + return do_set_mempolicy(&margs); +} + static int kernel_migrate_pages(pid_t pid, unsigned long maxnode, const unsigned long __user *old_nodes, const unsigned long __user *new_nodes) diff --git a/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl b/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl index 116ff501bf92..bb1351df51d9 100644 --- a/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl +++ b/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl @@ -371,3 +371,4 @@ 454 n64 futex_wake sys_futex_wake 455 n64 futex_wait sys_futex_wait 456 n64 futex_requeue sys_futex_requeue +457 n64 set_mempolicy2 sys_set_mempolicy2 diff --git a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl index 7fab411378f2..4f03f5f42b78 100644 --- a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl +++ b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl @@ -543,3 +543,4 @@ 454 common futex_wake 
sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 diff --git a/tools/perf/arch/s390/entry/syscalls/syscall.tbl b/tools/perf/arch/s390/entry/syscalls/syscall.tbl index 86fec9b080f6..f98dadc2e9df 100644 --- a/tools/perf/arch/s390/entry/syscalls/syscall.tbl +++ b/tools/perf/arch/s390/entry/syscalls/syscall.tbl @@ -459,3 +459,4 @@ 454 common futex_wake sys_futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 sys_set_mempolicy2 diff --git a/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl b/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl index 8cb8bf68721c..21f2579679d4 100644 --- a/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl +++ b/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl @@ -378,6 +378,7 @@ 454 common futex_wake sys_futex_wake 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue +457 common set_mempolicy2 sys_set_mempolicy2 # # Due to a historical design error, certain syscalls are numbered differently From patchwork Mon Dec 18 19:46:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Price X-Patchwork-Id: 13497519 Received: from mail-pl1-f196.google.com (mail-pl1-f196.google.com [209.85.214.196]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9CA9F768ED; Mon, 18 Dec 2023 19:47:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="iQHDaJM8" Received: by mail-pl1-f196.google.com with SMTP id d9443c01a7336-1d3b5f9860bso7036415ad.3; Mon, 18 Dec 2023 11:47:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1702928842; x=1703533642; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=RvG+cZpIx58A97wp4Wv9KDO67cHjFWHy2MSXjMjbLxM=; b=iQHDaJM8b4iZFPlUr6vWjfen1MDDc5CfEUuYnuP1F2oHhnhQ2AfvqbooMWM3qV8a9t zCtkPhBPlFnmL/KgYe2xKzMe+iccy4V34Dm2nkLywE6oFgk8IhfCFZrWSWOTmMzaCjbB PauTdo2JUn6mIQBgKvq+6XO/WIFYy0ejATnUms8/gkVzNw7SMTaMPtWAlDsDzQPkoknr wAIdpw3DdCen1iJ+tUxc67dIk8ZiVlWOzPk13DaE2jwhPJ8oBH7/I/W9lvW/qjRlc+Q8 P9warm1Yo4EsQUelheKR60nG+z1oz5UR8xhzAVkwQWAbbrG5dIcpLODCR5RpxCqcK5OV EjhA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702928842; x=1703533642; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=RvG+cZpIx58A97wp4Wv9KDO67cHjFWHy2MSXjMjbLxM=; b=RgaoXF8XeG0hbjZyAbfRVSeAuAPKApasRkKHDWMdNspaUM0BGN/Dv861ZMxVpWxzjA Lqwpqq05f886wXqIX+lvcnZJSfZoMpMeWOx/ekiDa/am9ROVrVUEmi26im8q+lwAfRJd QqaEKmHgkUEw33BQAHYNswBpx9L5wfUs6XdeS7BnSN9paU//ysrt1Cds7hdgDWLmCGq/ PW8lma47aamDpb4fTi5kpFL1UlvysBwCzBzSHnb2XHnKUFbJgAY/VKtf2bQMLEY7IRXn IEHaHie45PFw6wQMKBE0RBzyigp25CRhSg/k7mJU9RvWudDQVxm+0qQsFI9tz4KuqaZo T2Dw== X-Gm-Message-State: AOJu0Yw1RMBuDHnDrn5IpaQlw24FB4VUPltqLvx+itLMWG1rnsNMJVAt 
6719j/nR8g+CT22LAryqoQ== X-Google-Smtp-Source: AGHT+IGBpwYLBgiYpwD3EufmLndGtkuui1m28L6ip1sQilNxlYgn8SBAySbYnvgRe6Mvp21xWBqpYw== X-Received: by 2002:a17:902:e851:b0:1d3:c8ff:4f6e with SMTP id t17-20020a170902e85100b001d3c8ff4f6emr1169531plg.103.1702928841805; Mon, 18 Dec 2023 11:47:21 -0800 (PST) Received: from fedora.mshome.net (pool-173-79-56-208.washdc.fios.verizon.net. [173.79.56.208]) by smtp.gmail.com with ESMTPSA id 11-20020a170902c20b00b001ce664c05b0sm19456335pll.33.2023.12.18.11.47.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 18 Dec 2023 11:47:21 -0800 (PST) From: Gregory Price X-Google-Original-From: Gregory Price To: linux-mm@kvack.org Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de, luto@kernel.org, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org, tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com, corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com, honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org, jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com, emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com, Michal Hocko Subject: [PATCH v4 09/11] mm/mempolicy: add get_mempolicy2 syscall Date: Mon, 18 Dec 2023 14:46:29 -0500 Message-Id: <20231218194631.21667-10-gregory.price@memverge.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com> References: <20231218194631.21667-1-gregory.price@memverge.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 get_mempolicy2 is an extensible get_mempolicy interface which allows a user to retrieve the memory policy for a task or address. Defined as: get_mempolicy2(struct mpol_args *args, size_t size, unsigned long addr, unsigned long flags) Top level input values: mpol_args: The field which collects information about the mempolicy returned to userspace. addr: if MPOL_F_ADDR is passed in `flags`, this address will be used to return the mempolicy details of the vma the address belongs to flags: if MPOL_F_ADDR, return mempolicy info vma containing addr else, returns task mempolicy information Input values include the following fields of mpol_args: pol_nodes: if set, the nodemask of the policy returned here pol_maxnodes: if pol_nodes is set, must describe max number of nodes to be copied to pol_nodes Output values include the following fields of mpol_args: mode: mempolicy mode mode_flags: mempolicy mode flags home_node: policy home node will be returned here, or -1 if not. pol_nodes: if set, the nodemask for the mempolicy policy_node: if the policy has extended node information, it will be placed here. For example MPOL_INTERLEAVE will return the next node which will be used for allocation MPOL_F_NODE has been dropped from get_mempolicy2 (EINVAL). 
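A hedged userspace sketch (not part of this patch) of querying the calling task's policy through the new interface. It assumes the updated <linux/mempolicy.h> uapi header from this series for struct mpol_args, and uses the x86-64 syscall number (458) assigned in this patch via raw syscall(2), since no glibc wrapper exists; error handling is minimal:

    #define _GNU_SOURCE
    #include <linux/mempolicy.h>    /* struct mpol_args, from this series */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_get_mempolicy2
    #define __NR_get_mempolicy2 458         /* x86-64, from this patch */
    #endif

    int main(void)
    {
            unsigned long nodes[16] = { 0 };
            struct mpol_args args;

            memset(&args, 0, sizeof(args));
            args.pol_nodes = (uint64_t)(uintptr_t)nodes;
            args.pol_maxnodes = 8 * sizeof(nodes);

            /* flags == 0: task policy; pass MPOL_F_ADDR to query a VMA */
            if (syscall(__NR_get_mempolicy2, &args, sizeof(args), 0UL, 0UL))
                    perror("get_mempolicy2");
            else
                    printf("mode=%u flags=%u home_node=%d policy_node=%d\n",
                           args.mode, args.mode_flags, args.home_node,
                           args.policy_node);
            return 0;
    }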
Suggested-by: Michal Hocko Signed-off-by: Gregory Price --- .../admin-guide/mm/numa_memory_policy.rst | 10 ++++- arch/alpha/kernel/syscalls/syscall.tbl | 1 + arch/arm/tools/syscall.tbl | 1 + arch/arm64/include/asm/unistd.h | 2 +- arch/arm64/include/asm/unistd32.h | 2 + arch/m68k/kernel/syscalls/syscall.tbl | 1 + arch/microblaze/kernel/syscalls/syscall.tbl | 1 + arch/mips/kernel/syscalls/syscall_n32.tbl | 1 + arch/mips/kernel/syscalls/syscall_o32.tbl | 1 + arch/parisc/kernel/syscalls/syscall.tbl | 1 + arch/powerpc/kernel/syscalls/syscall.tbl | 1 + arch/s390/kernel/syscalls/syscall.tbl | 1 + arch/sh/kernel/syscalls/syscall.tbl | 1 + arch/sparc/kernel/syscalls/syscall.tbl | 1 + arch/x86/entry/syscalls/syscall_32.tbl | 1 + arch/x86/entry/syscalls/syscall_64.tbl | 1 + arch/xtensa/kernel/syscalls/syscall.tbl | 1 + include/linux/syscalls.h | 2 + include/uapi/asm-generic/unistd.h | 4 +- kernel/sys_ni.c | 1 + mm/mempolicy.c | 43 +++++++++++++++++++ .../arch/mips/entry/syscalls/syscall_n64.tbl | 1 + .../arch/powerpc/entry/syscalls/syscall.tbl | 1 + .../perf/arch/s390/entry/syscalls/syscall.tbl | 1 + .../arch/x86/entry/syscalls/syscall_64.tbl | 1 + 25 files changed, 79 insertions(+), 3 deletions(-) diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst index e57d400d0281..8c1fcdb30602 100644 --- a/Documentation/admin-guide/mm/numa_memory_policy.rst +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst @@ -456,11 +456,19 @@ Get [Task] Memory Policy or Related Information:: long get_mempolicy(int *mode, const unsigned long *nmask, unsigned long maxnode, void *addr, int flags); + long get_mempolicy2(struct mpol_args args, size_t size, + unsigned long addr, unsigned long flags); Queries the "task/process memory policy" of the calling task, or the policy or location of a specified virtual address, depending on the 'flags' argument. +get_mempolicy2() is an extended version of get_mempolicy() capable of +acquiring extended information about a mempolicy, including those +that can only be set via set_mempolicy2() or mbind2(). + +MPOL_F_NODE functionality has been removed from get_mempolicy2(). + See the get_mempolicy(2) man page for more details @@ -504,7 +512,7 @@ Extended Mempolicy Arguments:: The extended mempolicy argument structure is defined to allow the mempolicy interfaces future extensibility without the need for additional system calls. -Extended interfaces (set_mempolicy2) use this argument structure. +Extended interfaces (set_mempolicy2 and get_mempolicy2) use this structure. The core arguments (mode, mode_flags, pol_nodes, and pol_maxnodes) apply to all interfaces relative to their non-extended counterparts. 
Each additional diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl index 0dc288a1118a..0301a8b0a262 100644 --- a/arch/alpha/kernel/syscalls/syscall.tbl +++ b/arch/alpha/kernel/syscalls/syscall.tbl @@ -497,3 +497,4 @@ 565 common futex_wait sys_futex_wait 566 common futex_requeue sys_futex_requeue 567 common set_mempolicy2 sys_set_mempolicy2 +568 common get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl index 50172ec0e1f5..771a33446e8e 100644 --- a/arch/arm/tools/syscall.tbl +++ b/arch/arm/tools/syscall.tbl @@ -471,3 +471,4 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h index 298313d2e0af..b63f870debaf 100644 --- a/arch/arm64/include/asm/unistd.h +++ b/arch/arm64/include/asm/unistd.h @@ -39,7 +39,7 @@ #define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE + 5) #define __ARM_NR_COMPAT_END (__ARM_NR_COMPAT_BASE + 0x800) -#define __NR_compat_syscalls 458 +#define __NR_compat_syscalls 459 #endif #define __ARCH_WANT_SYS_CLONE diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h index cee8d669c342..f8d01007aee0 100644 --- a/arch/arm64/include/asm/unistd32.h +++ b/arch/arm64/include/asm/unistd32.h @@ -921,6 +921,8 @@ __SYSCALL(__NR_futex_wait, sys_futex_wait) __SYSCALL(__NR_futex_requeue, sys_futex_requeue) #define __NR_set_mempolicy2 457 __SYSCALL(__NR_set_mempolicy2, sys_set_mempolicy2) +#define __NR_get_mempolicy2 458 +__SYSCALL(__NR_get_mempolicy2, sys_get_mempolicy2) /* * Please add new compat syscalls above this comment and update diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl index 839d90c535f2..048a409e684c 100644 --- a/arch/m68k/kernel/syscalls/syscall.tbl +++ b/arch/m68k/kernel/syscalls/syscall.tbl @@ -457,3 +457,4 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl index 567c8b883735..327b01bd6793 100644 --- a/arch/microblaze/kernel/syscalls/syscall.tbl +++ b/arch/microblaze/kernel/syscalls/syscall.tbl @@ -463,3 +463,4 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl index cc0640e16f2f..921d58e1da23 100644 --- a/arch/mips/kernel/syscalls/syscall_n32.tbl +++ b/arch/mips/kernel/syscalls/syscall_n32.tbl @@ -396,3 +396,4 @@ 455 n32 futex_wait sys_futex_wait 456 n32 futex_requeue sys_futex_requeue 457 n32 set_mempolicy2 sys_set_mempolicy2 +458 n32 get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl index f7262fde98d9..9271c83c9993 100644 --- a/arch/mips/kernel/syscalls/syscall_o32.tbl +++ b/arch/mips/kernel/syscalls/syscall_o32.tbl @@ -445,3 +445,4 @@ 455 o32 futex_wait sys_futex_wait 456 o32 futex_requeue sys_futex_requeue 457 o32 set_mempolicy2 sys_set_mempolicy2 +458 o32 get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl index 
e10f0e8bd064..0654f3f89fc7 100644 --- a/arch/parisc/kernel/syscalls/syscall.tbl +++ b/arch/parisc/kernel/syscalls/syscall.tbl @@ -456,3 +456,4 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl index 4f03f5f42b78..ac11d2064e7a 100644 --- a/arch/powerpc/kernel/syscalls/syscall.tbl +++ b/arch/powerpc/kernel/syscalls/syscall.tbl @@ -544,3 +544,4 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl index f98dadc2e9df..1cdcafe1ccca 100644 --- a/arch/s390/kernel/syscalls/syscall.tbl +++ b/arch/s390/kernel/syscalls/syscall.tbl @@ -460,3 +460,4 @@ 455 common futex_wait sys_futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl index f47ba9f2d05d..f71742024c29 100644 --- a/arch/sh/kernel/syscalls/syscall.tbl +++ b/arch/sh/kernel/syscalls/syscall.tbl @@ -460,3 +460,4 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl index 53fb16616728..2fbf5dbe0620 100644 --- a/arch/sparc/kernel/syscalls/syscall.tbl +++ b/arch/sparc/kernel/syscalls/syscall.tbl @@ -503,3 +503,4 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl index 4b4dc41b24ee..0af813b9a118 100644 --- a/arch/x86/entry/syscalls/syscall_32.tbl +++ b/arch/x86/entry/syscalls/syscall_32.tbl @@ -462,3 +462,4 @@ 455 i386 futex_wait sys_futex_wait 456 i386 futex_requeue sys_futex_requeue 457 i386 set_mempolicy2 sys_set_mempolicy2 +458 i386 get_mempolicy2 sys_get_mempolicy2 diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl index 1bc2190bec27..0b777876fc15 100644 --- a/arch/x86/entry/syscalls/syscall_64.tbl +++ b/arch/x86/entry/syscalls/syscall_64.tbl @@ -379,6 +379,7 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 # # Due to a historical design error, certain syscalls are numbered differently diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl index e26dc89399eb..4536c9a4227d 100644 --- a/arch/xtensa/kernel/syscalls/syscall.tbl +++ b/arch/xtensa/kernel/syscalls/syscall.tbl @@ -428,3 +428,4 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index 451f0089601f..f696855cbe8c 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -821,6 +821,8 @@ asmlinkage 
long sys_get_mempolicy(int __user *policy, unsigned long __user *nmask, unsigned long maxnode, unsigned long addr, unsigned long flags); +asmlinkage long sys_get_mempolicy2(struct mpol_args __user *args, size_t size, + unsigned long addr, unsigned long flags); asmlinkage long sys_set_mempolicy(int mode, const unsigned long __user *nmask, unsigned long maxnode); asmlinkage long sys_set_mempolicy2(struct mpol_args __user *args, size_t size, diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h index 55486aba099f..719accc731db 100644 --- a/include/uapi/asm-generic/unistd.h +++ b/include/uapi/asm-generic/unistd.h @@ -830,9 +830,11 @@ __SYSCALL(__NR_futex_wait, sys_futex_wait) __SYSCALL(__NR_futex_requeue, sys_futex_requeue) #define __NR_set_mempolicy2 457 __SYSCALL(__NR_set_mempolicy2, sys_set_mempolicy2) +#define __NR_get_mempolicy2 458 +__SYSCALL(__NR_get_mempolicy2, sys_get_mempolicy2) #undef __NR_syscalls -#define __NR_syscalls 458 +#define __NR_syscalls 459 /* * 32 bit systems traditionally used different diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c index 7d6eb0eec056..e4883eaa4e61 100644 --- a/kernel/sys_ni.c +++ b/kernel/sys_ni.c @@ -188,6 +188,7 @@ COND_SYSCALL(process_mrelease); COND_SYSCALL(remap_file_pages); COND_SYSCALL(mbind); COND_SYSCALL(get_mempolicy); +COND_SYSCALL(get_mempolicy2); COND_SYSCALL(set_mempolicy); COND_SYSCALL(set_mempolicy2); COND_SYSCALL(migrate_pages); diff --git a/mm/mempolicy.c b/mm/mempolicy.c index eb296ed507e6..ebb08261d7cb 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -1863,6 +1863,49 @@ SYSCALL_DEFINE5(get_mempolicy, int __user *, policy, return kernel_get_mempolicy(policy, nmask, maxnode, addr, flags); } +SYSCALL_DEFINE4(get_mempolicy2, struct mpol_args __user *, uargs, size_t, usize, + unsigned long, addr, unsigned long, flags) +{ + struct mpol_args kargs; + struct mempolicy_args margs; + int err; + nodemask_t policy_nodemask; + unsigned long __user *nodes_ptr; + + if (flags & ~(MPOL_F_ADDR)) + return -EINVAL; + + /* initialize any memory liable to be copied to userland */ + memset(&margs, 0, sizeof(margs)); + + err = copy_struct_from_user(&kargs, sizeof(kargs), uargs, usize); + if (err) + return -EINVAL; + + margs.policy_nodes = kargs.pol_nodes ? &policy_nodemask : NULL; + if (flags & MPOL_F_ADDR) + err = do_get_vma_mempolicy(untagged_addr(addr), NULL, &margs); + else + err = do_get_task_mempolicy(&margs); + + if (err) + return err; + + kargs.mode = margs.mode; + kargs.mode_flags = margs.mode_flags; + kargs.policy_node = margs.policy_node; + kargs.home_node = margs.home_node; + if (kargs.pol_nodes) { + nodes_ptr = u64_to_user_ptr(kargs.pol_nodes); + err = copy_nodes_to_user(nodes_ptr, kargs.pol_maxnodes, + margs.policy_nodes); + if (err) + return err; + } + + return copy_to_user(uargs, &kargs, usize) ? 
-EFAULT : 0; +} + bool vma_migratable(struct vm_area_struct *vma) { if (vma->vm_flags & (VM_IO | VM_PFNMAP)) diff --git a/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl b/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl index bb1351df51d9..c34c6877379e 100644 --- a/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl +++ b/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl @@ -372,3 +372,4 @@ 455 n64 futex_wait sys_futex_wait 456 n64 futex_requeue sys_futex_requeue 457 n64 set_mempolicy2 sys_set_mempolicy2 +458 n64 get_mempolicy2 sys_get_mempolicy2 diff --git a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl index 4f03f5f42b78..ac11d2064e7a 100644 --- a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl +++ b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl @@ -544,3 +544,4 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 diff --git a/tools/perf/arch/s390/entry/syscalls/syscall.tbl b/tools/perf/arch/s390/entry/syscalls/syscall.tbl index f98dadc2e9df..1cdcafe1ccca 100644 --- a/tools/perf/arch/s390/entry/syscalls/syscall.tbl +++ b/tools/perf/arch/s390/entry/syscalls/syscall.tbl @@ -460,3 +460,4 @@ 455 common futex_wait sys_futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 sys_get_mempolicy2 diff --git a/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl b/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl index 21f2579679d4..edf338f32645 100644 --- a/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl +++ b/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl @@ -379,6 +379,7 @@ 455 common futex_wait sys_futex_wait 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 +458 common get_mempolicy2 sys_get_mempolicy2 # # Due to a historical design error, certain syscalls are numbered differently From patchwork Mon Dec 18 19:46:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Price X-Patchwork-Id: 13497520 Received: from mail-pl1-f194.google.com (mail-pl1-f194.google.com [209.85.214.194]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9FF4779953; Mon, 18 Dec 2023 19:47:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="WEW5Tk4/" Received: by mail-pl1-f194.google.com with SMTP id d9443c01a7336-1d3ac87553bso7530865ad.3; Mon, 18 Dec 2023 11:47:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1702928847; x=1703533647; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=CT+UJzpSfF/DLW7d290OitDA+EDNziSk6P3gUcA0R6E=; b=WEW5Tk4/KOaQCX5tbJ5VNj/dvOVNEQCfF6oZHQwf0awkThNj9D/sQixvLjGYBM2nfz lpz1zB1uhraUfUCT4Q6NeJlW6f7tNMkY7Rsbsvq4Mz2A6iQ/kal/Sl/LR/fOfVRwMJcj cknWPxoqeC5dhfRtf3BCVZWPE1sS9QFnX2IcAZP6nOKs4wu62p9p4tfo/gASpl9LhY1J 
CSq4G8UILGB9peXiG1rSZk0gDn2+Ff1BEQZvf1uLVm595Ndt4GahjDIsue073j8TsjSg GxrOxHShmexrVVGmJawPGv1idZKMxQyHLmUHzJOZXKskTHH+7VY80QhytbbDrjaMTdBj 4Vcw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702928847; x=1703533647; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=CT+UJzpSfF/DLW7d290OitDA+EDNziSk6P3gUcA0R6E=; b=fddmuDFTpsunq9dlXtsm/LYpv8Me78/Q8DoE8FoVZ0AUVMftso8FWahZFvcr4EGy8c CoLqSQcvoScqa6eONDwebEx9byanSjDD0ay8uMsIE/WDzpEky+5VH9/+M04OAKj8oUSC gYJKi3E86AMZTxpqJYFZhULGFDePwWdRWReT0QdpiXpghAbIHCSzp3sjORS22qNRiomv hJAADiIyvEGObHCGd1I4KfYtfBMbqEjNNssRLrKEZamHFj44zNoAkyMFu71x8pYg/ZfL SSEN/mwUsWxiOiiPjhf8zP6sRAFNVyxYJOoaMwrFpZ6RBANHNKQMMIR+VUHjxBlu94rX OwQw== X-Gm-Message-State: AOJu0YzRMWQH5RkQO5ayb/2NjKNCZJmhfhUiAAJF9NDY1Pw7Qg4/BizD l/DnOsk7Qx6PknHpibMHcKoOzDM/ZsUVnOI= X-Google-Smtp-Source: AGHT+IG1jFK6szw44cHtH0JsuDhilm+mnnnWqPpT7uYcL3kOLLEF0pyjX+7virr8U1nflf+Kb5x+uA== X-Received: by 2002:a17:903:1107:b0:1d3:1be6:78dc with SMTP id n7-20020a170903110700b001d31be678dcmr7934245plh.26.1702928846801; Mon, 18 Dec 2023 11:47:26 -0800 (PST) Received: from fedora.mshome.net (pool-173-79-56-208.washdc.fios.verizon.net. [173.79.56.208]) by smtp.gmail.com with ESMTPSA id 11-20020a170902c20b00b001ce664c05b0sm19456335pll.33.2023.12.18.11.47.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 18 Dec 2023 11:47:26 -0800 (PST) From: Gregory Price X-Google-Original-From: Gregory Price To: linux-mm@kvack.org Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de, luto@kernel.org, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org, tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com, corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com, honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org, jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com, emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com, Michal Hocko , Frank van der Linden Subject: [PATCH v4 10/11] mm/mempolicy: add the mbind2 syscall Date: Mon, 18 Dec 2023 14:46:30 -0500 Message-Id: <20231218194631.21667-11-gregory.price@memverge.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com> References: <20231218194631.21667-1-gregory.price@memverge.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 mbind2 is an extensible mbind interface which allows a user to set the mempolicy for one or more address ranges. Defined as: mbind2(unsigned long addr, unsigned long len, struct mpol_args *args, size_t size, unsigned long flags) addr: address of the memory range to operate on len: length of the memory range flags: MPOL_MF_HOME_NODE + original mbind() flags Input values include the following fields of mpol_args: mode: The MPOL_* policy (DEFAULT, INTERLEAVE, etc.) mode_flags: The MPOL_F_* flags that were previously passed in or'd into the mode. This was split to hopefully allow future extensions additional mode/flag space. pol_nodes: the nodemask to apply for the memory policy pol_maxnodes: The max number of nodes described by pol_nodes home_node: if MPOL_MF_HOME_NODE, set home node of policy to this otherwise it is ignored. 
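For illustration only (this sketch is not part of the patch), a userspace caller might exercise the new call roughly as below. The syscall number (459 on x86_64), the MPOL_MF_HOME_NODE value, and the struct layout mirror what this series proposes and are assumptions until the uapi headers ship; MPOL_BIND comes from the existing uapi, and the example assumes node 1 exists and is online.

    /* hedged sketch: bind a mapping to nodes 0-1 with a home node of 1 */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_mbind2
    #define __NR_mbind2 459             /* x86_64 number proposed in this series */
    #endif
    #define MPOL_BIND 2                 /* existing uapi mode */
    #define MPOL_MF_HOME_NODE (1 << 4)  /* new flag proposed in this series */

    struct mpol_args {                  /* mirrors the uapi layout at this point in the series */
        uint16_t mode;
        uint16_t mode_flags;
        int32_t  home_node;             /* consumed only with MPOL_MF_HOME_NODE */
        uint64_t pol_nodes;             /* user pointer to the nodemask */
        uint64_t pol_maxnodes;
        int32_t  policy_node;           /* get_mempolicy2 output; unused here */
    };

    int main(void)
    {
        size_t len = 2UL << 20;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        unsigned long nodemask = 0x3;   /* nodes 0 and 1 */
        struct mpol_args args;

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        memset(&args, 0, sizeof(args));
        args.mode = MPOL_BIND;
        args.pol_nodes = (uint64_t)(uintptr_t)&nodemask;
        args.pol_maxnodes = sizeof(nodemask) * 8;
        args.home_node = 1;             /* assumes node 1 is online */

        if (syscall(__NR_mbind2, (unsigned long)p, len, &args, sizeof(args),
                    MPOL_MF_HOME_NODE))
            perror("mbind2");
        return 0;
    }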
The semantics are otherwise the same as mbind(), except that the home_node can be set. Suggested-by: Michal Hocko Suggested-by: Frank van der Linden Suggested-by: Vinicius Tavares Petrucci Suggested-by: Rakie Kim Suggested-by: Hyeongtak Ji Suggested-by: Honggyu Kim Signed-off-by: Gregory Price Co-developed-by: Vinicius Tavares Petrucci --- .../admin-guide/mm/numa_memory_policy.rst | 12 +++++- arch/alpha/kernel/syscalls/syscall.tbl | 1 + arch/arm/tools/syscall.tbl | 1 + arch/arm64/include/asm/unistd.h | 2 +- arch/arm64/include/asm/unistd32.h | 2 + arch/m68k/kernel/syscalls/syscall.tbl | 1 + arch/microblaze/kernel/syscalls/syscall.tbl | 1 + arch/mips/kernel/syscalls/syscall_n32.tbl | 1 + arch/mips/kernel/syscalls/syscall_o32.tbl | 1 + arch/parisc/kernel/syscalls/syscall.tbl | 1 + arch/powerpc/kernel/syscalls/syscall.tbl | 1 + arch/s390/kernel/syscalls/syscall.tbl | 1 + arch/sh/kernel/syscalls/syscall.tbl | 1 + arch/sparc/kernel/syscalls/syscall.tbl | 1 + arch/x86/entry/syscalls/syscall_32.tbl | 1 + arch/x86/entry/syscalls/syscall_64.tbl | 1 + arch/xtensa/kernel/syscalls/syscall.tbl | 1 + include/linux/syscalls.h | 3 ++ include/uapi/asm-generic/unistd.h | 4 +- include/uapi/linux/mempolicy.h | 5 ++- kernel/sys_ni.c | 1 + mm/mempolicy.c | 43 +++++++++++++++++++ .../arch/mips/entry/syscalls/syscall_n64.tbl | 1 + .../arch/powerpc/entry/syscalls/syscall.tbl | 1 + .../perf/arch/s390/entry/syscalls/syscall.tbl | 1 + .../arch/x86/entry/syscalls/syscall_64.tbl | 1 + 26 files changed, 85 insertions(+), 5 deletions(-) diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst index 8c1fcdb30602..99e1f732cade 100644 --- a/Documentation/admin-guide/mm/numa_memory_policy.rst +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst @@ -477,12 +477,18 @@ Install VMA/Shared Policy for a Range of Task's Address Space:: long mbind(void *start, unsigned long len, int mode, const unsigned long *nmask, unsigned long maxnode, unsigned flags); + long mbind2(void *start, unsigned long len, struct mpol_args *args, + size_t size, unsigned long flags); mbind() installs the policy specified by (mode, nmask, maxnodes) as a VMA policy for the range of the calling task's address space specified by the 'start' and 'len' arguments. Additional actions may be requested via the 'flags' argument. +mbind2() is an extended version of mbind() capable of setting extended +mempolicy features. For example, one can set the home node for the memory +policy without an additional call to set_mempolicy_home_node(). + See the mbind(2) man page for more details. Set home node for a Range of Task's Address Spacec:: @@ -498,6 +504,9 @@ closest to which page allocation will come from. Specifying the home node overri the default allocation policy to allocate memory close to the local node for an executing CPU. +mbind2() also provides a way for the home node to be set at the time the +mempolicy is set. See the mbind(2) man page for more details. + Extended Mempolicy Arguments:: struct mpol_args { @@ -512,7 +521,8 @@ Extended Mempolicy Arguments:: The extended mempolicy argument structure is defined to allow the mempolicy interfaces future extensibility without the need for additional system calls. -Extended interfaces (set_mempolicy2 and get_mempolicy2) use this structure. +Extended interfaces (set_mempolicy2, get_mempolicy2, and mbind2) use this +argument structure.
The core arguments (mode, mode_flags, pol_nodes, and pol_maxnodes) apply to all interfaces relative to their non-extended counterparts. Each additional diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl index 0301a8b0a262..e8239293c35a 100644 --- a/arch/alpha/kernel/syscalls/syscall.tbl +++ b/arch/alpha/kernel/syscalls/syscall.tbl @@ -498,3 +498,4 @@ 566 common futex_requeue sys_futex_requeue 567 common set_mempolicy2 sys_set_mempolicy2 568 common get_mempolicy2 sys_get_mempolicy2 +569 common mbind2 sys_mbind2 diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl index 771a33446e8e..a3f39750257a 100644 --- a/arch/arm/tools/syscall.tbl +++ b/arch/arm/tools/syscall.tbl @@ -472,3 +472,4 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h index b63f870debaf..abe10a833fcd 100644 --- a/arch/arm64/include/asm/unistd.h +++ b/arch/arm64/include/asm/unistd.h @@ -39,7 +39,7 @@ #define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE + 5) #define __ARM_NR_COMPAT_END (__ARM_NR_COMPAT_BASE + 0x800) -#define __NR_compat_syscalls 459 +#define __NR_compat_syscalls 460 #endif #define __ARCH_WANT_SYS_CLONE diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h index f8d01007aee0..89aaae33b81f 100644 --- a/arch/arm64/include/asm/unistd32.h +++ b/arch/arm64/include/asm/unistd32.h @@ -923,6 +923,8 @@ __SYSCALL(__NR_futex_requeue, sys_futex_requeue) __SYSCALL(__NR_set_mempolicy2, sys_set_mempolicy2) #define __NR_get_mempolicy2 458 __SYSCALL(__NR_get_mempolicy2, sys_get_mempolicy2) +#define __NR_mbind2 459 +__SYSCALL(__NR_mbind2, sys_mbind2) /* * Please add new compat syscalls above this comment and update diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl index 048a409e684c..9a12dface18e 100644 --- a/arch/m68k/kernel/syscalls/syscall.tbl +++ b/arch/m68k/kernel/syscalls/syscall.tbl @@ -458,3 +458,4 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl index 327b01bd6793..6cb740123137 100644 --- a/arch/microblaze/kernel/syscalls/syscall.tbl +++ b/arch/microblaze/kernel/syscalls/syscall.tbl @@ -464,3 +464,4 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl index 921d58e1da23..52cf720f8ae2 100644 --- a/arch/mips/kernel/syscalls/syscall_n32.tbl +++ b/arch/mips/kernel/syscalls/syscall_n32.tbl @@ -397,3 +397,4 @@ 456 n32 futex_requeue sys_futex_requeue 457 n32 set_mempolicy2 sys_set_mempolicy2 458 n32 get_mempolicy2 sys_get_mempolicy2 +459 n32 mbind2 sys_mbind2 diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl index 9271c83c9993..fd37c5301a48 100644 --- a/arch/mips/kernel/syscalls/syscall_o32.tbl +++ b/arch/mips/kernel/syscalls/syscall_o32.tbl @@ -446,3 +446,4 @@ 456 o32 futex_requeue sys_futex_requeue 457 o32 set_mempolicy2 sys_set_mempolicy2 458 o32 get_mempolicy2 sys_get_mempolicy2 +459 o32 mbind2 sys_mbind2 diff --git
a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl index 0654f3f89fc7..fcd67bc405b1 100644 --- a/arch/parisc/kernel/syscalls/syscall.tbl +++ b/arch/parisc/kernel/syscalls/syscall.tbl @@ -457,3 +457,4 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl index ac11d2064e7a..89715417014c 100644 --- a/arch/powerpc/kernel/syscalls/syscall.tbl +++ b/arch/powerpc/kernel/syscalls/syscall.tbl @@ -545,3 +545,4 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl index 1cdcafe1ccca..c8304e0d0aa7 100644 --- a/arch/s390/kernel/syscalls/syscall.tbl +++ b/arch/s390/kernel/syscalls/syscall.tbl @@ -461,3 +461,4 @@ 456 common futex_requeue sys_futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 sys_mbind2 diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl index f71742024c29..e5c51b6c367f 100644 --- a/arch/sh/kernel/syscalls/syscall.tbl +++ b/arch/sh/kernel/syscalls/syscall.tbl @@ -461,3 +461,4 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl index 2fbf5dbe0620..74527f585500 100644 --- a/arch/sparc/kernel/syscalls/syscall.tbl +++ b/arch/sparc/kernel/syscalls/syscall.tbl @@ -504,3 +504,4 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl index 0af813b9a118..be2e2aa17dd8 100644 --- a/arch/x86/entry/syscalls/syscall_32.tbl +++ b/arch/x86/entry/syscalls/syscall_32.tbl @@ -463,3 +463,4 @@ 456 i386 futex_requeue sys_futex_requeue 457 i386 set_mempolicy2 sys_set_mempolicy2 458 i386 get_mempolicy2 sys_get_mempolicy2 +459 i386 mbind2 sys_mbind2 diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl index 0b777876fc15..6e2347eb8773 100644 --- a/arch/x86/entry/syscalls/syscall_64.tbl +++ b/arch/x86/entry/syscalls/syscall_64.tbl @@ -380,6 +380,7 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 # # Due to a historical design error, certain syscalls are numbered differently diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl index 4536c9a4227d..f00a21317dc0 100644 --- a/arch/xtensa/kernel/syscalls/syscall.tbl +++ b/arch/xtensa/kernel/syscalls/syscall.tbl @@ -429,3 +429,4 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index f696855cbe8c..b42622ea9ed9 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -817,6 
+817,9 @@ asmlinkage long sys_mbind(unsigned long start, unsigned long len, const unsigned long __user *nmask, unsigned long maxnode, unsigned flags); +asmlinkage long sys_mbind2(unsigned long start, unsigned long len, + const struct mpol_args __user *uargs, size_t usize, + unsigned long flags); asmlinkage long sys_get_mempolicy(int __user *policy, unsigned long __user *nmask, unsigned long maxnode, diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h index 719accc731db..cd31599bb9cc 100644 --- a/include/uapi/asm-generic/unistd.h +++ b/include/uapi/asm-generic/unistd.h @@ -832,9 +832,11 @@ __SYSCALL(__NR_futex_requeue, sys_futex_requeue) __SYSCALL(__NR_set_mempolicy2, sys_set_mempolicy2) #define __NR_get_mempolicy2 458 __SYSCALL(__NR_get_mempolicy2, sys_get_mempolicy2) +#define __NR_mbind2 459 +__SYSCALL(__NR_mbind2, sys_mbind2) #undef __NR_syscalls -#define __NR_syscalls 459 +#define __NR_syscalls 460 /* * 32 bit systems traditionally used different diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h index c06f2afa7fe3..ec1402dae35b 100644 --- a/include/uapi/linux/mempolicy.h +++ b/include/uapi/linux/mempolicy.h @@ -54,13 +54,14 @@ struct mpol_args { #define MPOL_F_ADDR (1<<1) /* look up vma using address */ #define MPOL_F_MEMS_ALLOWED (1<<2) /* return allowed memories */ -/* Flags for mbind */ +/* Flags for mbind/mbind2 */ #define MPOL_MF_STRICT (1<<0) /* Verify existing pages in the mapping */ #define MPOL_MF_MOVE (1<<1) /* Move pages owned by this process to conform to policy */ #define MPOL_MF_MOVE_ALL (1<<2) /* Move every page to conform to policy */ #define MPOL_MF_LAZY (1<<3) /* UNSUPPORTED FLAG: Lazy migrate on fault */ -#define MPOL_MF_INTERNAL (1<<4) /* Internal flags start here */ +#define MPOL_MF_HOME_NODE (1<<4) /* mbind2: set home node */ +#define MPOL_MF_INTERNAL (1<<5) /* Internal flags start here */ #define MPOL_MF_VALID (MPOL_MF_STRICT | \ MPOL_MF_MOVE | \ diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c index e4883eaa4e61..5239c2e94e37 100644 --- a/kernel/sys_ni.c +++ b/kernel/sys_ni.c @@ -187,6 +187,7 @@ COND_SYSCALL(process_madvise); COND_SYSCALL(process_mrelease); COND_SYSCALL(remap_file_pages); COND_SYSCALL(mbind); +COND_SYSCALL(mbind2); COND_SYSCALL(get_mempolicy); COND_SYSCALL(get_mempolicy2); COND_SYSCALL(set_mempolicy); diff --git a/mm/mempolicy.c b/mm/mempolicy.c index ebb08261d7cb..0882fa4aa516 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -1603,6 +1603,49 @@ SYSCALL_DEFINE6(mbind, unsigned long, start, unsigned long, len, return kernel_mbind(start, len, mode, nmask, maxnode, flags); } +SYSCALL_DEFINE5(mbind2, unsigned long, start, unsigned long, len, + const struct mpol_args __user *, uargs, size_t, usize, + unsigned long, flags) +{ + struct mpol_args kargs; + struct mempolicy_args margs; + nodemask_t policy_nodes; + unsigned long __user *nodes_ptr; + int err; + + if (!start || !len) + return -EINVAL; + + err = copy_struct_from_user(&kargs, sizeof(kargs), uargs, usize); + if (err) + return -EINVAL; + + err = validate_mpol_flags(kargs.mode, &kargs.mode_flags); + if (err) + return err; + + margs.mode = kargs.mode; + margs.mode_flags = kargs.mode_flags; + + /* if home node given, validate it is online */ + if (flags & MPOL_MF_HOME_NODE) { + if ((kargs.home_node >= MAX_NUMNODES) || + !node_online(kargs.home_node)) + return -EINVAL; + margs.home_node = kargs.home_node; + } else + margs.home_node = NUMA_NO_NODE; + flags &= ~MPOL_MF_HOME_NODE; + + nodes_ptr = u64_to_user_ptr(kargs.pol_nodes); + err = 
get_nodes(&policy_nodes, nodes_ptr, kargs.pol_maxnodes); + if (err) + return err; + margs.policy_nodes = &policy_nodes; + + return do_mbind(untagged_addr(start), len, &margs, flags); +} + /* Set the process memory policy */ static long kernel_set_mempolicy(int mode, const unsigned long __user *nmask, unsigned long maxnode) diff --git a/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl b/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl index c34c6877379e..4fd9f742d903 100644 --- a/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl +++ b/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl @@ -373,3 +373,4 @@ 456 n64 futex_requeue sys_futex_requeue 457 n64 set_mempolicy2 sys_set_mempolicy2 458 n64 get_mempolicy2 sys_get_mempolicy2 +459 n64 mbind2 sys_mbind2 diff --git a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl index ac11d2064e7a..89715417014c 100644 --- a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl +++ b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl @@ -545,3 +545,4 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 diff --git a/tools/perf/arch/s390/entry/syscalls/syscall.tbl b/tools/perf/arch/s390/entry/syscalls/syscall.tbl index 1cdcafe1ccca..c8304e0d0aa7 100644 --- a/tools/perf/arch/s390/entry/syscalls/syscall.tbl +++ b/tools/perf/arch/s390/entry/syscalls/syscall.tbl @@ -461,3 +461,4 @@ 456 common futex_requeue sys_futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 sys_mbind2 diff --git a/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl b/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl index edf338f32645..3fc74241da5d 100644 --- a/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl +++ b/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl @@ -380,6 +380,7 @@ 456 common futex_requeue sys_futex_requeue 457 common set_mempolicy2 sys_set_mempolicy2 458 common get_mempolicy2 sys_get_mempolicy2 +459 common mbind2 sys_mbind2 # # Due to a historical design error, certain syscalls are numbered differently From patchwork Mon Dec 18 19:46:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gregory Price X-Patchwork-Id: 13497521 Received: from mail-pl1-f194.google.com (mail-pl1-f194.google.com [209.85.214.194]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5A59074E03; Mon, 18 Dec 2023 19:47:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="irSvJ53Q" Received: by mail-pl1-f194.google.com with SMTP id d9443c01a7336-1d3c1a0d91eso6906195ad.2; Mon, 18 Dec 2023 11:47:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1702928851; x=1703533651; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=RPYVPtPrCjtajzWL3D6J7JUQKi8NsK9T9SxsECL7kug=; 
b=irSvJ53QH/OmQQcFzftI+qUc00pQ0bZuwOaQqurRdQ13Ud9fZEb9D29/EE1rPve6w7 zvhH/TU0+5KGGY9oNkUKpbwvk1diS1sR9DBpF8iJnAH7VeDmn099dcAbp5zK/9DF4hsq U7+OH7ofFW7PbwInO6fyN2eZ04VguBIy8gxXlqJU637rj7Qf09j7jbEE5Cygb34ZeezI rvPP4Bh8gh9RQf6P7ghY3ycQYXwSt6aCaCnnYHPM+FLfaZwL2ebF8m0xojv+vQO9EU5h 7W9vOl1uBV114KuKtBiX8PxJ4DUd6VZ7gyzN2MnfTQ8MC6fs1IjzPadq5+qlkXrsHYHr MtEQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702928851; x=1703533651; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=RPYVPtPrCjtajzWL3D6J7JUQKi8NsK9T9SxsECL7kug=; b=V7NlU579CXJf9BD7Nv2vatcsGa1qsKkMizQCmKr2lubtHale60p8SHPi0Mx9bw6CRM V77hwYmhPUO0z0jwxTHzZhnHxHc1SrzfLZo3vpbgugwzLMZaRouI87taWbW9cs0abLGA YyRCx4wf3OJ/1lI/YQ5rPQ84Kv2YjK/lV70cfdrj0mmJGqbFuq4JC/SEMDPZHszbKnGh Ypb6fouuv83W+Iej9bHpdfJ/A4Lwk+M/aIbDo2MPP8+5HZQPar27yYZ6crUbOFeY+7XZ pzNkIIj11gVGFBpHZN25NeyujhNMTHeTUgioTV8ietrSkP1SMS+M9kwB3g2T3lT2fGqQ CexA== X-Gm-Message-State: AOJu0Ywun2JrCKvgdMlBMP5IvU916Vexho9X/tGknKN55KjkdJImiTtK tr76VJ6r4jN7KA6R7xjm5Q== X-Google-Smtp-Source: AGHT+IEYH+QYxyGax5kcvdnm0v2Dy7uD9SuyZHEyktgFzbXzT8sjYlVEP8BQzD3rDmOYnURg4JGMgw== X-Received: by 2002:a17:902:c946:b0:1d3:535e:c58 with SMTP id i6-20020a170902c94600b001d3535e0c58mr6063753pla.105.1702928851453; Mon, 18 Dec 2023 11:47:31 -0800 (PST) Received: from fedora.mshome.net (pool-173-79-56-208.washdc.fios.verizon.net. [173.79.56.208]) by smtp.gmail.com with ESMTPSA id 11-20020a170902c20b00b001ce664c05b0sm19456335pll.33.2023.12.18.11.47.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 18 Dec 2023 11:47:31 -0800 (PST) From: Gregory Price X-Google-Original-From: Gregory Price To: linux-mm@kvack.org Cc: linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, akpm@linux-foundation.org, arnd@arndb.de, tglx@linutronix.de, luto@kernel.org, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, mhocko@kernel.org, tj@kernel.org, ying.huang@intel.com, gregory.price@memverge.com, corbet@lwn.net, rakie.kim@sk.com, hyeongtak.ji@sk.com, honggyu.kim@sk.com, vtavarespetr@micron.com, peterz@infradead.org, jgroves@micron.com, ravis.opensrc@micron.com, sthanneeru@micron.com, emirakhur@micron.com, Hasan.Maruf@amd.com, seungjun.ha@samsung.com Subject: [PATCH v4 11/11] mm/mempolicy: extend set_mempolicy2 and mbind2 to support weighted interleave Date: Mon, 18 Dec 2023 14:46:31 -0500 Message-Id: <20231218194631.21667-12-gregory.price@memverge.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231218194631.21667-1-gregory.price@memverge.com> References: <20231218194631.21667-1-gregory.price@memverge.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Extend set_mempolicy2 and mbind2 to support weighted interleave, and demonstrate the extensibility of the mpol_args structure. 
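As a hedged sketch of what that extensibility looks like from userspace (not part of this patch; the syscall number and the MPOL_WEIGHTED_INTERLEAVE value are the ones proposed by this series, and the struct mirror is an assumption until the uapi header is available), a task could install per-node weights like so:

    /* hedged sketch: weighted interleave across nodes 0 and 1 with weights 5:2 */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_set_mempolicy2
    #define __NR_set_mempolicy2 457        /* x86_64 number proposed in this series */
    #endif
    #define MPOL_WEIGHTED_INTERLEAVE 6     /* assumed value of the mode added earlier in this series */

    struct mpol_args {                     /* mirrors the extended uapi layout */
        uint16_t mode;
        uint16_t mode_flags;
        int32_t  home_node;
        uint64_t pol_nodes;                /* user pointer to the nodemask */
        uint64_t il_weights;               /* user pointer to per-node weights, or 0 for global */
        uint64_t pol_maxnodes;
        int32_t  policy_node;
    };

    int main(void)
    {
        unsigned long nodemask = 0x3;      /* interleave across nodes 0 and 1 */
        unsigned char weights[64] = { 0 }; /* indexed by node id, pol_maxnodes entries */
        struct mpol_args args;

        weights[0] = 5;                    /* 5 pages on node 0 ... */
        weights[1] = 2;                    /* ... for every 2 pages on node 1 */

        memset(&args, 0, sizeof(args));
        args.mode = MPOL_WEIGHTED_INTERLEAVE;
        args.pol_nodes = (uint64_t)(uintptr_t)&nodemask;
        args.il_weights = (uint64_t)(uintptr_t)weights;
        args.pol_maxnodes = sizeof(nodemask) * 8;

        if (syscall(__NR_set_mempolicy2, &args, sizeof(args), 0))
            perror("set_mempolicy2");
        return 0;
    }

Leaving il_weights at 0 would instead fall back to the global sysfs weights, as described below.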
To support weighted interleave we add interleave weight fields to the following structures: Kernel Internal: (include/linux/mempolicy.h) struct mempolicy { /* task-local weights to apply to weighted interleave */ unsigned char weights[MAX_NUMNODES]; } struct mempolicy_args { /* Optional: interleave weights for MPOL_WEIGHTED_INTERLEAVE */ unsigned char *il_weights; /* of size MAX_NUMNODES */ } UAPI: (/include/uapi/linux/mempolicy.h) struct mpol_args { /* Optional: interleave weights for MPOL_WEIGHTED_INTERLEAVE */ unsigned char *il_weights; /* of size pol_max_nodes */ } The task-local weights are a single, one-dimensional array of weights that apply to all possible nodes on the system. If a node is set in the mempolicy nodemask, the weight in `il_weights` must be >= 1, otherwise set_mempolicy2() will return -EINVAL. If a node is not set in the mempolicy nodemask, the weight will default to `1` in the task policy. The default value of `1` is required to handle the situation where a task migrates to a set of nodes for which weights were not set (up to and including the local numa node). For example, a migrated task whose nodemask changes entirely will have all of its weights defaulted back to `1`; if the nodemask changes to include a mix of nodes that were not previously accounted for, the weighted interleave may be suboptimal. If migrations are expected, a task should prefer not to use task-local interleave weights, and instead utilize the global settings for natural re-weighting on migration. To support global vs local weighting, we add the kernel-internal flag: MPOL_F_GWEIGHT (1 << 5) /* Utilize global weights */ This flag is set when il_weights is omitted by set_mempolicy2(), or when MPOL_WEIGHTED_INTERLEAVE is set by set_mempolicy(). This internal mode_flag dictates whether global weights or task-local weights are utilized by the various weighted interleave functions: * weighted_interleave_nodes * weighted_interleave_nid * alloc_pages_bulk_array_weighted_interleave if (pol->flags & MPOL_F_GWEIGHT) pol_weights = iw_table; else pol_weights = pol->wil.weights; To simplify creation and duplication of mempolicies, the weights are added as a structure directly within mempolicy.
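To make the weight countdown concrete, here is a rough userspace model of the behaviour described above (not kernel code; it only mimics the arithmetic for a two-node mask with weights 5 and 2):

    /* model only: reproduces the 5:2 page placement pattern for nodes {0,1} */
    #include <stdio.h>

    int main(void)
    {
        unsigned char weights[2] = { 5, 2 };  /* task-local (or global) weights for nodes 0 and 1 */
        int node = 1;                         /* mimics il_prev before the first allocation */
        int cur_weight = 0;                   /* mimics pol->wil.cur_weight */

        for (int page = 0; page < 14; page++) {
            if (!cur_weight) {
                node = (node + 1) % 2;                           /* next_node_in() over the mask */
                cur_weight = weights[node] ? weights[node] : 1;  /* unset weights default to 1 */
            }
            printf("page %2d -> node %d\n", page, node);
            cur_weight--;
        }
        return 0;
    }

In the kernel, the countdown state (wil.cur_weight) sits next to the weights themselves, which are embedded as a fixed-size array directly within struct mempolicy rather than allocated separately.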
This allows the existing logic in __mpol_dup to copy the weights without additional allocations: if (old == current->mempolicy) { task_lock(current); *new = *old; task_unlock(current); } else *new = *old Suggested-by: Rakie Kim Suggested-by: Hyeongtak Ji Suggested-by: Honggyu Kim Suggested-by: Vinicius Tavares Petrucci Signed-off-by: Gregory Price Co-developed-by: Rakie Kim Signed-off-by: Rakie Kim Co-developed-by: Hyeongtak Ji Signed-off-by: Hyeongtak Ji Co-developed-by: Honggyu Kim Signed-off-by: Honggyu Kim Co-developed-by: Vinicius Tavares Petrucci Signed-off-by: Vinicius Tavares Petrucci --- .../admin-guide/mm/numa_memory_policy.rst | 10 ++ include/linux/mempolicy.h | 2 + include/uapi/linux/mempolicy.h | 2 + mm/mempolicy.c | 129 +++++++++++++++++- 4 files changed, 139 insertions(+), 4 deletions(-) diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst index 99e1f732cade..0e91efe9e769 100644 --- a/Documentation/admin-guide/mm/numa_memory_policy.rst +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst @@ -254,6 +254,8 @@ MPOL_WEIGHTED_INTERLEAVE This mode operates the same as MPOL_INTERLEAVE, except that interleaving behavior is executed based on weights set in /sys/kernel/mm/mempolicy/weighted_interleave/ + when configured to utilize global weights, or based on task-local + weights configured with set_mempolicy2(2) or mbind2(2). Weighted interleave allocations pages on nodes according to their weight. For example if nodes [0,1] are weighted [5,2] @@ -261,6 +263,13 @@ MPOL_WEIGHTED_INTERLEAVE 2 pages allocated on node1. This can better distribute data according to bandwidth on heterogeneous memory systems. + When utilizing task-local weights, weights are not rebalanced + in the event of a task migration. If a weight has not been + explicitly set for a node set in the new nodemask, the + value of that weight defaults to "1". For this reason, if + migrations are expected or possible, users should consider + utilizing global interleave weights. 
+ NUMA memory policy supports the following optional mode flags: MPOL_F_STATIC_NODES @@ -514,6 +523,7 @@ Extended Mempolicy Arguments:: __u16 mode_flags; __s32 home_node; /* mbind2: policy home node */ __aligned_u64 pol_nodes; /* nodemask pointer */ + __aligned_u64 il_weights; /* u8 buf of size pol_maxnodes */ __u64 pol_maxnodes; __s32 policy_node; /* get_mempolicy2: policy node information */ }; diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h index aeac19dfc2b6..387c5c418a66 100644 --- a/include/linux/mempolicy.h +++ b/include/linux/mempolicy.h @@ -58,6 +58,7 @@ struct mempolicy { /* Weighted interleave settings */ struct { unsigned char cur_weight; + unsigned char weights[MAX_NUMNODES]; } wil; }; @@ -70,6 +71,7 @@ struct mempolicy_args { unsigned short mode_flags; /* policy mode flags */ int home_node; /* mbind: use MPOL_MF_HOME_NODE */ nodemask_t *policy_nodes; /* get/set/mbind */ + unsigned char *il_weights; /* for mode MPOL_WEIGHTED_INTERLEAVE */ int policy_node; /* get: policy node information */ }; diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h index ec1402dae35b..16fedf966166 100644 --- a/include/uapi/linux/mempolicy.h +++ b/include/uapi/linux/mempolicy.h @@ -33,6 +33,7 @@ struct mpol_args { __u16 mode_flags; __s32 home_node; /* mbind2: policy home node */ __aligned_u64 pol_nodes; + __aligned_u64 il_weights; /* size: pol_maxnodes * sizeof(char) */ __u64 pol_maxnodes; __s32 policy_node; /* get_mempolicy: policy node info */ }; @@ -75,6 +76,7 @@ struct mpol_args { #define MPOL_F_SHARED (1 << 0) /* identify shared policies */ #define MPOL_F_MOF (1 << 3) /* this policy wants migrate on fault */ #define MPOL_F_MORON (1 << 4) /* Migrate On protnone Reference On Node */ +#define MPOL_F_GWEIGHT (1 << 5) /* Utilize global weights */ /* * These bit locations are exposed in the vm.zone_reclaim_mode sysctl diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 0882fa4aa516..1d73ad29e36c 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -271,6 +271,7 @@ static struct mempolicy *mpol_new(struct mempolicy_args *args) unsigned short mode = args->mode; unsigned short flags = args->mode_flags; nodemask_t *nodes = args->policy_nodes; + int node; if (mode == MPOL_DEFAULT) { if (nodes && !nodes_empty(*nodes)) @@ -297,6 +298,19 @@ static struct mempolicy *mpol_new(struct mempolicy_args *args) (flags & MPOL_F_STATIC_NODES) || (flags & MPOL_F_RELATIVE_NODES)) return ERR_PTR(-EINVAL); + } else if (mode == MPOL_WEIGHTED_INTERLEAVE) { + /* weighted interleave requires a nodemask and weights > 0 */ + if (nodes_empty(*nodes)) + return ERR_PTR(-EINVAL); + if (args->il_weights) { + node = first_node(*nodes); + while (node != MAX_NUMNODES) { + if (!args->il_weights[node]) + return ERR_PTR(-EINVAL); + node = next_node(node, *nodes); + } + } else if (!(args->mode_flags & MPOL_F_GWEIGHT)) + return ERR_PTR(-EINVAL); } else if (nodes_empty(*nodes)) return ERR_PTR(-EINVAL); @@ -309,6 +323,17 @@ static struct mempolicy *mpol_new(struct mempolicy_args *args) policy->home_node = args->home_node; policy->wil.cur_weight = 0; + if (policy->mode == MPOL_WEIGHTED_INTERLEAVE && args->il_weights) { + policy->wil.cur_weight = 0; + /* Minimum weight value is always 1 */ + memset(policy->wil.weights, 1, MAX_NUMNODES); + node = first_node(*nodes); + while (node != MAX_NUMNODES) { + policy->wil.weights[node] = args->il_weights[node]; + node = next_node(node, *nodes); + } + } + return policy; } @@ -937,6 +962,17 @@ static void do_get_mempolicy_nodemask(struct mempolicy *pol, nodemask_t 
*nmask) } } +static void do_get_mempolicy_il_weights(struct mempolicy *pol, + unsigned char weights[MAX_NUMNODES]) +{ + if (pol->mode != MPOL_WEIGHTED_INTERLEAVE) + memset(weights, 0, MAX_NUMNODES); + else if (pol->flags & MPOL_F_GWEIGHT) + memcpy(weights, iw_table, MAX_NUMNODES); + else + memcpy(weights, pol->wil.weights, MAX_NUMNODES); +} + /* Retrieve NUMA policy for a VMA assocated with a given address */ static long do_get_vma_mempolicy(unsigned long addr, int *addr_node, struct mempolicy_args *args) @@ -973,6 +1009,9 @@ static long do_get_vma_mempolicy(unsigned long addr, int *addr_node, if (args->policy_nodes) do_get_mempolicy_nodemask(pol, args->policy_nodes); + if (args->il_weights) + do_get_mempolicy_il_weights(pol, args->il_weights); + if (pol != &default_policy) { mpol_put(pol); mpol_cond_put(pol); @@ -999,6 +1038,9 @@ static long do_get_task_mempolicy(struct mempolicy_args *args) if (args->policy_nodes) do_get_mempolicy_nodemask(pol, args->policy_nodes); + if (args->il_weights) + do_get_mempolicy_il_weights(pol, args->il_weights); + return 0; } @@ -1521,6 +1563,9 @@ static long kernel_mbind(unsigned long start, unsigned long len, if (err) return err; + if (mode & MPOL_WEIGHTED_INTERLEAVE) + mode_flags |= MPOL_F_GWEIGHT; + memset(&margs, 0, sizeof(margs)); margs.mode = lmode; margs.mode_flags = mode_flags; @@ -1611,6 +1656,8 @@ SYSCALL_DEFINE5(mbind2, unsigned long, start, unsigned long, len, struct mempolicy_args margs; nodemask_t policy_nodes; unsigned long __user *nodes_ptr; + unsigned char weights[MAX_NUMNODES]; + unsigned char __user *weights_ptr; int err; if (!start || !len) @@ -1643,6 +1690,23 @@ SYSCALL_DEFINE5(mbind2, unsigned long, start, unsigned long, len, return err; margs.policy_nodes = &policy_nodes; + if (kargs.mode == MPOL_WEIGHTED_INTERLEAVE) { + weights_ptr = u64_to_user_ptr(kargs.il_weights); + if (weights_ptr) { + err = copy_struct_from_user(weights, + sizeof(weights), + weights_ptr, + kargs.pol_maxnodes); + if (err) + return err; + margs.il_weights = weights; + } else { + margs.il_weights = NULL; + margs.mode_flags |= MPOL_F_GWEIGHT; + } + } else + margs.il_weights = NULL; + return do_mbind(untagged_addr(start), len, &margs, flags); } @@ -1664,6 +1728,9 @@ static long kernel_set_mempolicy(int mode, const unsigned long __user *nmask, if (err) return err; + if (mode & MPOL_WEIGHTED_INTERLEAVE) + mode_flags |= MPOL_F_GWEIGHT; + memset(&args, 0, sizeof(args)); args.mode = lmode; args.mode_flags = mode_flags; @@ -1687,6 +1754,8 @@ SYSCALL_DEFINE3(set_mempolicy2, struct mpol_args __user *, uargs, size_t, usize, int err; nodemask_t policy_nodemask; unsigned long __user *nodes_ptr; + unsigned char weights[MAX_NUMNODES]; + unsigned char __user *weights_ptr; if (flags) return -EINVAL; @@ -1712,6 +1781,20 @@ SYSCALL_DEFINE3(set_mempolicy2, struct mpol_args __user *, uargs, size_t, usize, } else margs.policy_nodes = NULL; + if (kargs.mode == MPOL_WEIGHTED_INTERLEAVE && kargs.il_weights) { + weights_ptr = u64_to_user_ptr(kargs.il_weights); + err = copy_struct_from_user(weights, + sizeof(weights), + weights_ptr, + kargs.pol_maxnodes); + if (err) + return err; + margs.il_weights = weights; + } else { + margs.il_weights = NULL; + margs.mode_flags |= MPOL_F_GWEIGHT; + } + return do_set_mempolicy(&margs); } @@ -1914,17 +1997,25 @@ SYSCALL_DEFINE4(get_mempolicy2, struct mpol_args __user *, uargs, size_t, usize, int err; nodemask_t policy_nodemask; unsigned long __user *nodes_ptr; + unsigned char __user *weights_ptr; + unsigned char weights[MAX_NUMNODES]; if (flags & 
~(MPOL_F_ADDR)) return -EINVAL; /* initialize any memory liable to be copied to userland */ memset(&margs, 0, sizeof(margs)); + memset(weights, 0, sizeof(weights)); err = copy_struct_from_user(&kargs, sizeof(kargs), uargs, usize); if (err) return -EINVAL; + if (kargs.il_weights) + margs.il_weights = weights; + else + margs.il_weights = NULL; + margs.policy_nodes = kargs.pol_nodes ? &policy_nodemask : NULL; if (flags & MPOL_F_ADDR) err = do_get_vma_mempolicy(untagged_addr(addr), NULL, &margs); @@ -1946,6 +2037,13 @@ SYSCALL_DEFINE4(get_mempolicy2, struct mpol_args __user *, uargs, size_t, usize, return err; } + if (kargs.mode == MPOL_WEIGHTED_INTERLEAVE && kargs.il_weights) { + weights_ptr = u64_to_user_ptr(kargs.il_weights); + err = copy_to_user(weights_ptr, weights, kargs.pol_maxnodes); + if (err) + return err; + } + return copy_to_user(uargs, &kargs, usize) ? -EFAULT : 0; } @@ -2062,13 +2160,18 @@ static unsigned int weighted_interleave_nodes(struct mempolicy *policy) { unsigned int next; struct task_struct *me = current; + unsigned char next_weight; next = next_node_in(me->il_prev, policy->nodes); if (next == MAX_NUMNODES) return next; - if (!policy->wil.cur_weight) - policy->wil.cur_weight = iw_table[next]; + if (!policy->wil.cur_weight) { + next_weight = (policy->flags & MPOL_F_GWEIGHT) ? + iw_table[next] : + policy->wil.weights[next]; + policy->wil.cur_weight = next_weight ? next_weight : 1; + } policy->wil.cur_weight--; if (!policy->wil.cur_weight) @@ -2142,6 +2245,7 @@ static unsigned int weighted_interleave_nid(struct mempolicy *pol, pgoff_t ilx) nodemask_t nodemask = pol->nodes; unsigned int target, weight_total = 0; int nid; + unsigned char *pol_weights; unsigned char weights[MAX_NUMNODES]; unsigned char weight; @@ -2153,8 +2257,13 @@ static unsigned int weighted_interleave_nid(struct mempolicy *pol, pgoff_t ilx) return nid; /* Then collect weights on stack and calculate totals */ + if (pol->flags & MPOL_F_GWEIGHT) + pol_weights = iw_table; + else + pol_weights = pol->wil.weights; + for_each_node_mask(nid, nodemask) { - weight = iw_table[nid]; + weight = pol_weights[nid]; weight_total += weight; weights[nid] = weight; } @@ -2552,6 +2661,7 @@ static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp, unsigned long nr_allocated; unsigned long rounds; unsigned long node_pages, delta; + unsigned char *pol_weights; unsigned char weight; unsigned char weights[MAX_NUMNODES]; unsigned int weight_total = 0; @@ -2565,9 +2675,14 @@ static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp, nnodes = nodes_weight(nodes); + if (pol->flags & MPOL_F_GWEIGHT) + pol_weights = iw_table; + else + pol_weights = pol->wil.weights; + /* Collect weights and save them on stack so they don't change */ for_each_node_mask(node, nodes) { - weight = iw_table[node]; + weight = pol_weights[node]; weight_total += weight; weights[node] = weight; } @@ -3092,6 +3207,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol) { int ret; struct mempolicy_args margs; + unsigned char weights[MAX_NUMNODES]; sp->root = RB_ROOT; /* empty tree == default mempolicy */ rwlock_init(&sp->lock); @@ -3109,6 +3225,11 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol) margs.mode_flags = mpol->flags; margs.policy_nodes = &mpol->w.user_nodemask; margs.home_node = NUMA_NO_NODE; + if (margs.mode == MPOL_WEIGHTED_INTERLEAVE && + !(margs.mode_flags & MPOL_F_GWEIGHT)) { + memcpy(weights, mpol->wil.weights, sizeof(weights)); + margs.il_weights = 
weights; + } /* contextualize the tmpfs mount point mempolicy to this file */ npol = mpol_new(&margs);