From patchwork Fri Oct 30 19:02:26 2020
X-Patchwork-Submitter: Ben Widawsky <ben.widawsky@intel.com>
X-Patchwork-Id: 11870675
From: Ben Widawsky <ben.widawsky@intel.com>
To: linux-mm <linux-mm@kvack.org>
Cc: Ben Widawsky, Andrew Morton, Dave Hansen, Michal Hocko
Subject: [PATCH v2 RESEND 00/12] Introduce multi-preference mempolicy
Date: Fri, 30 Oct 2020 12:02:26 -0700
Message-Id: <20201030190238.306764-1-ben.widawsky@intel.com>

Significant changes since last send: none. Just a rebase and conflict
resolution, and using get_maintainer.pl for --to and --cc this time.

Significant changes since v1:
* Dropped the patch replacing numa_node_id() in some places (mhocko)
* Dropped all the page allocation patches in favor of a new mechanism
  for using fallbacks (mhocko)
* Dropped the special snowflake preferred node algorithm (bwidawsk)
* If the preferred node fails, ALL nodes are rechecked instead of just
  the non-preferred nodes

In v1, Andi Kleen brought up reusing MPOL_PREFERRED as the mode for the
API. There wasn't consensus around this, so I've left the existing API
as it was. I'm open to more feedback here, but my slight preference is
a new API, as it ensures that people using it are entirely aware of
what they're doing and not accidentally misusing the old interface (in
a similar way to how MPOL_LOCAL was introduced).

In v1, Michal also brought up renaming this to MPOL_PREFERRED_MASK. I'm
equally fine with that change, but I haven't heard emphatic support one
way or the other, so I've left that too.

v2 summary:
1:     Random fix I found along the way
2-5:   Represent node preference as a mask internally
6-7:   Treat many preferred like bind
8-11:  Handle page allocation for the new policy
12:    Enable the uapi

This patch series introduces the concept of the MPOL_PREFERRED_MANY
mempolicy. This mempolicy mode can be used with either the
set_mempolicy(2) or mbind(2) interfaces. Like the MPOL_PREFERRED
interface, it allows an application to set a preference for nodes
which will fulfil memory allocation requests. Unlike the
MPOL_PREFERRED mode, it takes a set of nodes. Like the MPOL_BIND
interface, it works over a set of nodes. Unlike MPOL_BIND, it will not
cause a SIGSEGV or invoke the OOM killer if those preferred nodes are
not available.

Along with these patches are patches for libnuma, numactl, numademo,
and memhog. They still need some polish, but can be found here:
https://gitlab.com/bwidawsk/numactl/-/tree/prefer-many
They allow new usage such as: `numactl -P 0,3,4`

The goal of the new mode is to enable some use cases that arise with
tiered memory models, which I've lovingly named:

1a. The Hare - The interconnect is fast enough to meet bandwidth and
    latency requirements, allowing preference to be given to all nodes
    with "fast" memory.
1b. The Indiscriminate Hare - An application knows it wants fast
    memory (or perhaps slow memory), but doesn't care which node it
    runs on. The application can prefer a set of nodes and then xpu
    bind to the local node (cpu, accelerator, etc). This reverses how
    nodes are chosen today, where the kernel attempts to use memory
    local to the CPU whenever possible; instead, it will attempt to
    use the accelerator local to the memory.
2.  The Tortoise - The administrator (or the application itself) knows
    it only needs slow memory, and so can prefer that.

Much of this is almost achievable with the bind interface, but the
bind interface suffers from an inability to fall back to another set
of nodes if binding fails for all nodes in the nodemask.

Like MPOL_BIND, a nodemask is given.
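Since the mode is wired up to mbind(2) as well as set_mempolicy(2),
the same preference can be applied to a single mapping. A minimal
sketch (illustrative only: the mapping length and node numbers are
made up, and error handling is elided):

> #include <numaif.h>
> #include <sys/mman.h>
>
> /* Prefer nodes 0 and 1 for this mapping on an 8 node system;
>  * the remaining nodes stay available as fallback. */
> size_t len = 2 * 1024 * 1024;
> void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
>                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> unsigned long nodes = 0x3;
> mbind(buf, len, MPOL_PREFERRED_MANY, &nodes, 8, 0);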
Since a plain nodemask is given, the preference inherently carries no
ordering. For example:

> /* Set first two nodes as preferred in an 8 node system. */
> const unsigned long nodes = 0x3;
> set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8);

> /* Mimic interleave policy, but have fallback. */
> const unsigned long nodes = 0xaa;
> set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8);

Some internal discussion took place around the interface. There are
two alternatives which we have discussed, plus one I stuck in:

1. An ordered list of nodes. Currently it's believed that the added
   complexity is not needed for the expected use cases.
2. A flag for bind to allow falling back to other nodes. This confuses
   the notion of binding and is less flexible than the current
   solution.
3. Create flags or new modes that help with some ordering. This offers
   both a friendlier API as well as a solution for more customized
   usage. It's unknown whether it's worth the complexity to support
   this. Here is sample code for how this might work:

> // Prefer specific nodes for something wacky
> set_mempolicy(MPOL_PREFERRED_MANY, 0x17c, 1024);
>
> // Default
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_SOCKET, NULL, 0);
> // which is the same as
> set_mempolicy(MPOL_DEFAULT, NULL, 0);
>
> // The Hare
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE, NULL, 0);
>
> // The Tortoise
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE_REV, NULL, 0);
>
> // Prefer the fast memory of the first two sockets
> set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE, -1, 2);

Ben Widawsky (8):
  mm/mempolicy: Add comment for missing LOCAL
  mm/mempolicy: kill v.preferred_nodes
  mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND
  mm/mempolicy: Create a page allocator for policy
  mm/mempolicy: Thread allocation for many preferred
  mm/mempolicy: VMA allocation for many preferred
  mm/mempolicy: huge-page allocation for many preferred
  mm/mempolicy: Advertise new MPOL_PREFERRED_MANY

Dave Hansen (4):
  mm/mempolicy: convert single preferred_node to full nodemask
  mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes
  mm/mempolicy: allow preferred code to take a nodemask
  mm/mempolicy: refactor rebind code for PREFERRED_MANY

 .../admin-guide/mm/numa_memory_policy.rst |  22 +-
 include/linux/mempolicy.h                 |   6 +-
 include/uapi/linux/mempolicy.h            |   6 +-
 mm/hugetlb.c                              |  20 +-
 mm/mempolicy.c                            | 271 ++++++++++++------
 5 files changed, 220 insertions(+), 105 deletions(-)
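For anyone wanting to poke at the series, a quick round trip through
the new mode can confirm it took effect. A sketch only (node numbers
are illustrative; it assumes the series and matching uapi headers are
in place):

> #include <numaif.h>
> #include <stdio.h>
>
> unsigned long nodes = 0x3; /* prefer nodes 0 and 1 */
> if (set_mempolicy(MPOL_PREFERRED_MANY, &nodes, sizeof(nodes) * 8))
> 	perror("set_mempolicy");
>
> /* Read the policy back for the calling thread. */
> int mode;
> unsigned long mask = 0;
> if (get_mempolicy(&mode, &mask, sizeof(mask) * 8, NULL, 0) == 0)
> 	printf("mode=%d mask=0x%lx\n", mode, mask);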