From patchwork Wed Mar 3 10:20:48 2021
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12113231
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Michal Hocko, Andrea Arcangeli, David Rientjes, Mel Gorman,
    Mike Kravetz, Randy Dunlap, Vlastimil Babka, Dave Hansen,
    Ben Widawsky, Andi Kleen, Dan Williams, Feng Tang
Subject: [PATCH v3 04/14] mm/mempolicy: allow preferred code to take a nodemask
Date: Wed, 3 Mar 2021 18:20:48 +0800
Message-Id: <1614766858-90344-5-git-send-email-feng.tang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1614766858-90344-1-git-send-email-feng.tang@intel.com>
References: <1614766858-90344-1-git-send-email-feng.tang@intel.com>

From: Dave Hansen

Create a helper function
(mpol_new_preferred_many()) which is usable both by the old,
single-node MPOL_PREFERRED and the new MPOL_PREFERRED_MANY.

Enforce the old single-node MPOL_PREFERRED behavior in the "new"
version of mpol_new_preferred() which calls mpol_new_preferred_many().

v3:
  * fix a stack overflow caused by empty nodemask (Feng)

Link: https://lore.kernel.org/r/20200630212517.308045-5-ben.widawsky@intel.com
Signed-off-by: Dave Hansen
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
 mm/mempolicy.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 79258b2..19ec954 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -203,17 +203,34 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes)
 	return 0;
 }
 
-static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
+static int mpol_new_preferred_many(struct mempolicy *pol,
+				   const nodemask_t *nodes)
 {
 	if (!nodes)
 		pol->flags |= MPOL_F_LOCAL;	/* local allocation */
 	else if (nodes_empty(*nodes))
 		return -EINVAL;			/* no allowed nodes */
 	else
-		pol->v.preferred_nodes = nodemask_of_node(first_node(*nodes));
+		pol->v.preferred_nodes = *nodes;
 	return 0;
 }
 
+static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
+{
+	if (nodes) {
+		/* MPOL_PREFERRED can only take a single node: */
+		nodemask_t tmp;
+
+		if (nodes_empty(*nodes))
+			return -EINVAL;
+
+		tmp = nodemask_of_node(first_node(*nodes));
+		return mpol_new_preferred_many(pol, &tmp);
+	}
+
+	return mpol_new_preferred_many(pol, NULL);
+}
+
 static int mpol_new_bind(struct mempolicy *pol, const nodemask_t *nodes)
 {
 	if (nodes_empty(*nodes))