From patchwork Wed Mar 17 03:40:06 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12144671
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
 Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
 Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang
Subject: [PATCH v4 09/13] mm/mempolicy: Thread allocation for many preferred
Date: Wed, 17 Mar 2021 11:40:06 +0800
Message-Id: <1615952410-36895-10-git-send-email-feng.tang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1615952410-36895-1-git-send-email-feng.tang@intel.com>
References: <1615952410-36895-1-git-send-email-feng.tang@intel.com>
From: Ben Widawsky

In order to support MPOL_PREFERRED_MANY as the mode used by
set_mempolicy(2), alloc_pages_current() needs to support it. This patch
does that by using the new helper function to allocate properly based on
policy. All of the actual machinery to make this work was added in the
earlier patch ("mm/mempolicy: Create a page allocator for policy").

Link: https://lore.kernel.org/r/20200630212517.308045-10-ben.widawsky@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
 mm/mempolicy.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d21105b..a92efe7 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2357,7 +2357,7 @@ EXPORT_SYMBOL(alloc_pages_vma);
 struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 {
 	struct mempolicy *pol = &default_policy;
-	struct page *page;
+	int nid = NUMA_NO_NODE;
 
 	if (!in_interrupt() && !(gfp & __GFP_THISNODE))
 		pol = get_task_policy(current);
@@ -2367,14 +2367,9 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	 * No reference counting needed for current->mempolicy
 	 * nor system default_policy
 	 */
 	if (pol->mode == MPOL_INTERLEAVE)
-		page = alloc_pages_policy(pol, gfp, order,
-					  interleave_nodes(pol));
-	else
-		page = __alloc_pages_nodemask(gfp, order,
-				policy_node(gfp, pol, numa_node_id()),
-				policy_nodemask(gfp, pol));
+		nid = interleave_nodes(pol);
 
-	return page;
+	return alloc_pages_policy(pol, gfp, order, nid);
 }
 EXPORT_SYMBOL(alloc_pages_current);
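
For illustration, a minimal userspace sketch of the path this change
serves, assuming a kernel built with this series: once a task sets
MPOL_PREFERRED_MANY via set_mempolicy(2), subsequent task-wide page
allocations reach alloc_pages_current() and are routed through
alloc_pages_policy(). The raw syscall is used because libc headers
predate this mode; the numeric value assumed for MPOL_PREFERRED_MANY
(5, as in later mainline uapi headers) must match the kernel under test.

/*
 * Illustrative only; not part of the patch.  Assumes a kernel with
 * MPOL_PREFERRED_MANY support; the value 5 is an assumption taken
 * from later mainline uapi headers.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY	5	/* assumed; verify against your uapi headers */
#endif

int main(void)
{
	/* Prefer nodes 0 and 1 for all future allocations by this task. */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);

	/* maxnode is one more than the number of nodemask bits to read. */
	if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY, &nodemask,
		    8 * sizeof(nodemask) + 1) != 0) {
		perror("set_mempolicy(MPOL_PREFERRED_MANY)");
		return 1;
	}

	/*
	 * Anonymous memory faulted in from here on is allocated via
	 * alloc_pages_current(), i.e. through alloc_pages_policy()
	 * with the task policy after this patch.
	 */
	char *buf = malloc(1 << 20);
	if (buf) {
		memset(buf, 0, 1 << 20);
		free(buf);
	}
	return 0;
}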