From patchwork Fri Oct 30 19:02:36 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ben Widawsky <ben.widawsky@intel.com>
X-Patchwork-Id: 11870685
From: Ben Widawsky <ben.widawsky@intel.com>
To: linux-mm <linux-mm@kvack.org>, Andrew Morton
Cc: Ben Widawsky <ben.widawsky@intel.com>, Dave Hansen, Michal Hocko,
    linux-kernel@vger.kernel.org
Subject: [PATCH 10/12] mm/mempolicy: VMA allocation for many preferred
Date: Fri, 30 Oct 2020 12:02:36 -0700
Message-Id: <20201030190238.306764-11-ben.widawsky@intel.com>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201030190238.306764-1-ben.widawsky@intel.com>
References: <20201030190238.306764-1-ben.widawsky@intel.com>

This patch implements MPOL_PREFERRED_MANY for alloc_pages_vma(). Like
alloc_pages_current(), alloc_pages_vma() needs to support policy-based
decisions if they have been configured via mbind(2). With this in place,
the temporary "hack" of treating MPOL_PREFERRED and MPOL_PREFERRED_MANY
as the same policy can now be removed as well. All of the actual
machinery to make this work was added earlier in the series, in
("mm/mempolicy: Create a page allocator for policy").

Link: https://lore.kernel.org/r/20200630212517.308045-11-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
---
 mm/mempolicy.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2d19235413db..343340c87f03 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2259,8 +2259,6 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 {
 	struct mempolicy *pol;
 	struct page *page;
-	int preferred_nid;
-	nodemask_t *nmask;
 
 	pol = get_vma_policy(vma, addr);
 
@@ -2274,6 +2272,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 	}
 
 	if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
+		nodemask_t *nmask;
 		int hpage_node = node;
 
 		/*
@@ -2287,10 +2286,26 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		 * does not allow the current node in its nodemask, we allocate
 		 * the standard way.
 		 */
-		if ((pol->mode == MPOL_PREFERRED ||
-		     pol->mode == MPOL_PREFERRED_MANY) &&
-		    !(pol->flags & MPOL_F_LOCAL))
+		if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL)) {
 			hpage_node = first_node(pol->nodes);
+		} else if (pol->mode == MPOL_PREFERRED_MANY) {
+			struct zoneref *z;
+
+			/*
+			 * In this policy, with direct reclaim, the normal
+			 * policy based allocation will do the right thing - try
+			 * twice using the preferred nodes first, and all nodes
+			 * second.
+			 */
+			if (gfp & __GFP_DIRECT_RECLAIM) {
+				page = alloc_pages_policy(pol, gfp, order, NUMA_NO_NODE);
+				goto out;
+			}
+
+			z = first_zones_zonelist(node_zonelist(numa_node_id(), GFP_HIGHUSER),
+						 gfp_zone(GFP_HIGHUSER), &pol->nodes);
+			hpage_node = zone_to_nid(z->zone);
+		}
 
 		nmask = policy_nodemask(gfp, pol);
 		if (!nmask || node_isset(hpage_node, *nmask)) {
@@ -2316,9 +2331,7 @@
 		}
 	}
 
-	nmask = policy_nodemask(gfp, pol);
-	preferred_nid = policy_node(gfp, pol, node);
-	page = __alloc_pages_nodemask(gfp, order, preferred_nid, nmask);
+	page = alloc_pages_policy(pol, gfp, order, NUMA_NO_NODE);
 	mpol_cond_put(pol);
 out:
 	return page;
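
[Editor's note: for context, below is a minimal userspace sketch of how a
mapping would be given this policy so that the alloc_pages_vma() path above
is exercised. It is illustrative only and not part of the patch. mmap(2) and
mbind(2) (via libnuma's numaif.h) are standard; MPOL_PREFERRED_MANY is
introduced by this series, so its value must come from the series' updated
uapi headers - the fallback value and the node numbers below are assumptions.]

/* Sketch: request allocation from a set of preferred nodes via mbind(2). */
#include <numaif.h>		/* mbind(), MPOL_*; link with -lnuma */
#include <sys/mman.h>
#include <stdio.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value; use this series' uapi headers */
#endif

int main(void)
{
	size_t len = 2UL << 20;		/* one 2 MiB (THP-sized) region */
	/* Prefer nodes 0 and 1 (example node numbers). */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);

	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Ask for pages from the preferred set; fallback stays allowed. */
	if (mbind(buf, len, MPOL_PREFERRED_MANY, &nodemask,
		  sizeof(nodemask) * 8, 0)) {
		perror("mbind");
		return 1;
	}

	/* First touch faults the pages in through alloc_pages_vma(). */
	((volatile char *)buf)[0] = 1;
	return 0;
}

With such a mapping in place, faults on buf reach alloc_pages_vma(); after
this patch an MPOL_PREFERRED_MANY policy is routed through
alloc_pages_policy(), which tries the preferred nodes first and falls back
to all nodes on the second pass.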