From patchwork Thu May 20 08:30:01 2021
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
    Michal Hocko
Cc: Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
    Randy Dunlap, Vlastimil Babka, Dave Hansen, Ben Widawsky, Andi Kleen,
    Dan Williams, ying.huang@intel.com, Feng Tang
Subject: [RFC Patch v2 1/4] mm/mempolicy: skip nodemask intersect check for 'interleave' when oom
Date: Thu, 20 May 2021 16:30:01 +0800
Message-Id: <1621499404-67756-2-git-send-email-feng.tang@intel.com>
In-Reply-To: <1621499404-67756-1-git-send-email-feng.tang@intel.com>
References: <1621499404-67756-1-git-send-email-feng.tang@intel.com>

mempolicy_nodemask_intersects() is used in the OOM path to check whether
a task may have memory allocated on some memory nodes. An MPOL_INTERLEAVE
policy does carry a nodemask, but that mask is not a hard requirement,
only a hint for choosing which node to allocate from, so the task may
well have memory allocated on nodes outside it. Skip the intersection
check for 'interleave'.
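For example, from userspace (a minimal sketch, assuming the set_mempolicy()
and get_mempolicy() wrappers from libnuma's <numaif.h>, link with -lnuma;
the node numbers are illustrative):

    #include <numaif.h>   /* set_mempolicy(), get_mempolicy(), MPOL_* */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
            unsigned long mask = 1UL << 0;  /* hint: interleave over node 0 */
            size_t len = 1 << 20;
            char *buf;
            int node = -1;

            if (set_mempolicy(MPOL_INTERLEAVE, &mask, 8 * sizeof(mask))) {
                    perror("set_mempolicy");
                    return 1;
            }

            buf = malloc(len);
            memset(buf, 0, len);            /* fault the pages in */

            /* Ask where the first page actually landed. */
            if (get_mempolicy(&node, NULL, 0, buf, MPOL_F_NODE | MPOL_F_ADDR)) {
                    perror("get_mempolicy");
                    return 1;
            }
            printf("first page is on node %d\n", node);
            return 0;
    }

Under memory pressure the reported node may fall outside the interleave
mask, which is why the OOM intersection check is not meaningful for this
policy.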
Suggested-by: Michal Hocko
Signed-off-by: Feng Tang
---
 mm/mempolicy.c | 24 ++++--------------------
 1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d79fa29..1964cca 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2098,7 +2098,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
  *
  * If tsk's mempolicy is "default" [NULL], return 'true' to indicate default
  * policy.  Otherwise, check for intersection between mask and the policy
- * nodemask for 'bind' or 'interleave' policy.  For 'preferred' or 'local'
+ * nodemask for 'bind' policy.  For 'interleave', 'preferred' or 'local'
  * policy, always return true since it may allocate elsewhere on fallback.
  *
  * Takes task_lock(tsk) to prevent freeing of its mempolicy.
@@ -2111,29 +2111,13 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 	if (!mask)
 		return ret;
+
 	task_lock(tsk);
 	mempolicy = tsk->mempolicy;
-	if (!mempolicy)
-		goto out;
-
-	switch (mempolicy->mode) {
-	case MPOL_PREFERRED:
-		/*
-		 * MPOL_PREFERRED and MPOL_F_LOCAL are only preferred nodes to
-		 * allocate from, they may fallback to other nodes when oom.
-		 * Thus, it's possible for tsk to have allocated memory from
-		 * nodes in mask.
-		 */
-		break;
-	case MPOL_BIND:
-	case MPOL_INTERLEAVE:
+	if (mempolicy && mempolicy->mode == MPOL_BIND)
 		ret = nodes_intersects(mempolicy->v.nodes, *mask);
-		break;
-	default:
-		BUG();
-	}
-out:
 	task_unlock(tsk);
+
 	return ret;
 }

From patchwork Thu May 20 08:30:02 2021
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
    Michal Hocko
Cc: Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
    Randy Dunlap, Vlastimil Babka, Dave Hansen, Ben Widawsky, Andi Kleen,
    Dan Williams, ying.huang@intel.com, Feng Tang
Subject: [RFC Patch v2 2/4] mm/mempolicy: unify the preprocessing for mbind and set_mempolicy
Date: Thu, 20 May 2021 16:30:02 +0800
Message-Id: <1621499404-67756-3-git-send-email-feng.tang@intel.com>
In-Reply-To: <1621499404-67756-1-git-send-email-feng.tang@intel.com>
References: <1621499404-67756-1-git-send-email-feng.tang@intel.com>

Currently kernel_mbind() and kernel_set_mempolicy() perform almost the
same parameter sanity checks and preprocessing. Add a macro to unify
that code, which reduces the redundancy and makes it easier to change
the preprocessing in the future.
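One design note: this has to be a macro rather than a helper function,
because it both declares locals (nodes, err, mode_flags) in the caller's
scope and returns from the caller on a validation failure. A minimal
userspace stand-in for the pattern (simplified, with made-up MODE_*
constants; not the kernel code):

    #include <stdio.h>

    #define MODE_FLAGS 0xc000              /* fake flag bits */
    #define MODE_MAX   5                   /* fake mode limit */

    /* Declares mode_flags in the caller and returns from the caller. */
    #define PRE_PROCESS()                          \
            unsigned short mode_flags;             \
            mode_flags = mode & MODE_FLAGS;        \
            mode &= ~MODE_FLAGS;                   \
            if (mode >= MODE_MAX)                  \
                    return -1;

    static int set_policy(int mode)
    {
            PRE_PROCESS();
            printf("mode=%d flags=%#x\n", mode, mode_flags);
            return 0;
    }

    int main(void)
    {
            set_policy(1);                         /* passes the checks */
            return set_policy(99) == -1 ? 0 : 1;   /* rejected by the macro */
    }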
Signed-off-by: Feng Tang
---
 mm/mempolicy.c | 46 +++++++++++++++++++---------------------------
 1 file changed, 19 insertions(+), 27 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1964cca..0f5bf60 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1460,25 +1460,29 @@ static int copy_nodes_to_user(unsigned long __user *mask, unsigned long maxnode,
 	return copy_to_user(mask, nodes_addr(*nodes), copy) ? -EFAULT : 0;
 }
 
+#define MPOL_PRE_PROCESS()					\
+								\
+	nodemask_t nodes;					\
+	int err;						\
+	unsigned short mode_flags;				\
+	mode_flags = mode & MPOL_MODE_FLAGS;			\
+	mode &= ~MPOL_MODE_FLAGS;				\
+	if (mode >= MPOL_MAX)					\
+		return -EINVAL;					\
+	if ((mode_flags & MPOL_F_STATIC_NODES) &&		\
+	    (mode_flags & MPOL_F_RELATIVE_NODES))		\
+		return -EINVAL;					\
+	err = get_nodes(&nodes, nmask, maxnode);		\
+	if (err)						\
+		return err;
+
 static long kernel_mbind(unsigned long start, unsigned long len,
 			 unsigned long mode, const unsigned long __user *nmask,
 			 unsigned long maxnode, unsigned int flags)
 {
-	nodemask_t nodes;
-	int err;
-	unsigned short mode_flags;
+	MPOL_PRE_PROCESS();
 
 	start = untagged_addr(start);
-	mode_flags = mode & MPOL_MODE_FLAGS;
-	mode &= ~MPOL_MODE_FLAGS;
-	if (mode >= MPOL_MAX)
-		return -EINVAL;
-	if ((mode_flags & MPOL_F_STATIC_NODES) &&
-	    (mode_flags & MPOL_F_RELATIVE_NODES))
-		return -EINVAL;
-	err = get_nodes(&nodes, nmask, maxnode);
-	if (err)
-		return err;
 	return do_mbind(start, len, mode, mode_flags, &nodes, flags);
 }
 
@@ -1493,20 +1497,8 @@ SYSCALL_DEFINE6(mbind, unsigned long, start, unsigned long, len,
 static long kernel_set_mempolicy(int mode, const unsigned long __user *nmask,
 				 unsigned long maxnode)
 {
-	int err;
-	nodemask_t nodes;
-	unsigned short flags;
-
-	flags = mode & MPOL_MODE_FLAGS;
-	mode &= ~MPOL_MODE_FLAGS;
-	if ((unsigned int)mode >= MPOL_MAX)
-		return -EINVAL;
-	if ((flags & MPOL_F_STATIC_NODES) && (flags & MPOL_F_RELATIVE_NODES))
-		return -EINVAL;
-	err = get_nodes(&nodes, nmask, maxnode);
-	if (err)
-		return err;
-	return do_set_mempolicy(mode, flags, &nodes);
+	MPOL_PRE_PROCESS();
+	return do_set_mempolicy(mode, mode_flags, &nodes);
 }
 
 SYSCALL_DEFINE3(set_mempolicy, int, mode, const unsigned long __user *, nmask,

From patchwork Thu May 20 08:30:03 2021
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
    Michal Hocko
Cc: Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
    Randy Dunlap, Vlastimil Babka, Dave Hansen, Ben Widawsky, Andi Kleen,
    Dan Williams, ying.huang@intel.com, Feng Tang
Subject: [RFC Patch v2 3/4] mm/mempolicy: don't handle MPOL_LOCAL like a fake MPOL_PREFERRED policy
Date: Thu, 20 May 2021 16:30:03 +0800
Message-Id: <1621499404-67756-4-git-send-email-feng.tang@intel.com>
In-Reply-To: <1621499404-67756-1-git-send-email-feng.tang@intel.com>
References: <1621499404-67756-1-git-send-email-feng.tang@intel.com>

MPOL_LOCAL has been defined as a real policy, but it is still handled
like a fake MPOL_PREFERRED policy with the internal MPOL_F_LOCAL flag
bit set, so many places have to distinguish the real 'prefer' policy
from 'local', which is quite confusing.

In the current code, MPOL_LOCAL is used in four cases:

* the user specifies the 'local' policy
* the user specifies the 'prefer' policy with an empty nodemask
* the system 'default' policy is used
* a 'prefer' policy with a valid 'preferred' node and the
  MPOL_F_STATIC_NODES flag set is rebound to a nodemask which does not
  contain the 'preferred' node; the MPOL_F_LOCAL bit is then added and
  the policy behaves as 'local'. If it is later rebound again to a
  nodemask containing that node, the policy is restored to 'prefer'.

For the first three cases, make 'local' a real policy instead of a fake
'prefer' one, which reduces confusion when reading the code. The next,
optional patch then kills the 'MPOL_F_LOCAL' bit. The first two cases
are shown from userspace in the sketch below.
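A minimal sketch of those two user-visible requests (assuming a libnuma
new enough to define MPOL_LOCAL in <numaif.h>; after this patch both
calls end up as the same internal MPOL_LOCAL policy):

    #include <numaif.h>
    #include <stdio.h>

    int main(void)
    {
            /* Case 1: an explicit 'local' policy. */
            if (set_mempolicy(MPOL_LOCAL, NULL, 0))
                    perror("set_mempolicy(MPOL_LOCAL)");

            /* Case 2: 'prefer' with an empty nodemask also means local. */
            if (set_mempolicy(MPOL_PREFERRED, NULL, 0))
                    perror("set_mempolicy(MPOL_PREFERRED)");

            puts("local allocation policy installed");
            return 0;
    }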
Signed-off-by: Feng Tang
---
 mm/mempolicy.c | 60 ++++++++++++++++++++++++++++++++--------------------------
 1 file changed, 33 insertions(+), 27 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0f5bf60..833ed2d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -121,8 +121,7 @@ enum zone_type policy_zone = 0;
  */
 static struct mempolicy default_policy = {
 	.refcnt = ATOMIC_INIT(1), /* never free it */
-	.mode = MPOL_PREFERRED,
-	.flags = MPOL_F_LOCAL,
+	.mode = MPOL_LOCAL,
 };
 
 static struct mempolicy preferred_node_policy[MAX_NUMNODES];
@@ -200,12 +199,9 @@ static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes)
 
 static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes)
 {
-	if (!nodes)
-		pol->flags |= MPOL_F_LOCAL;	/* local allocation */
-	else if (nodes_empty(*nodes))
-		return -EINVAL;			/* no allowed nodes */
-	else
-		pol->v.preferred_node = first_node(*nodes);
+	if (nodes_empty(*nodes))
+		return -EINVAL;
+	pol->v.preferred_node = first_node(*nodes);
 	return 0;
 }
 
@@ -239,25 +235,19 @@ static int mpol_set_nodemask(struct mempolicy *pol,
 		  cpuset_current_mems_allowed, node_states[N_MEMORY]);
 
 	VM_BUG_ON(!nodes);
-	if (pol->mode == MPOL_PREFERRED && nodes_empty(*nodes))
-		nodes = NULL;	/* explicit local allocation */
-	else {
-		if (pol->flags & MPOL_F_RELATIVE_NODES)
-			mpol_relative_nodemask(&nsc->mask2, nodes, &nsc->mask1);
-		else
-			nodes_and(nsc->mask2, *nodes, nsc->mask1);
 
-		if (mpol_store_user_nodemask(pol))
-			pol->w.user_nodemask = *nodes;
-		else
-			pol->w.cpuset_mems_allowed =
-				cpuset_current_mems_allowed;
-	}
+	if (pol->flags & MPOL_F_RELATIVE_NODES)
+		mpol_relative_nodemask(&nsc->mask2, nodes, &nsc->mask1);
+	else
+		nodes_and(nsc->mask2, *nodes, nsc->mask1);
 
-	if (nodes)
-		ret = mpol_ops[pol->mode].create(pol, &nsc->mask2);
+	if (mpol_store_user_nodemask(pol))
+		pol->w.user_nodemask = *nodes;
 	else
-		ret = mpol_ops[pol->mode].create(pol, NULL);
+		pol->w.cpuset_mems_allowed =
+			cpuset_current_mems_allowed;
+
+	ret = mpol_ops[pol->mode].create(pol, &nsc->mask2);
+
 	return ret;
 }
 
@@ -290,13 +280,14 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
 			if (((flags & MPOL_F_STATIC_NODES) ||
 			     (flags & MPOL_F_RELATIVE_NODES)))
 				return ERR_PTR(-EINVAL);
+
+			mode = MPOL_LOCAL;
 		}
 	} else if (mode == MPOL_LOCAL) {
 		if (!nodes_empty(*nodes) ||
 		    (flags & MPOL_F_STATIC_NODES) ||
 		    (flags & MPOL_F_RELATIVE_NODES))
 			return ERR_PTR(-EINVAL);
-		mode = MPOL_PREFERRED;
 	} else if (nodes_empty(*nodes))
 		return ERR_PTR(-EINVAL);
 	policy = kmem_cache_alloc(policy_cache, GFP_KERNEL);
@@ -427,6 +418,9 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 		.create = mpol_new_bind,
 		.rebind = mpol_rebind_nodemask,
 	},
+	[MPOL_LOCAL] = {
+		.rebind = mpol_rebind_default,
+	},
 };
 
 static int migrate_page_add(struct page *page, struct list_head *pagelist,
@@ -1952,6 +1946,8 @@ unsigned int mempolicy_slab_node(void)
 						&policy->v.nodes);
 		return z->zone ? zone_to_nid(z->zone) : node;
 	}
+	case MPOL_LOCAL:
+		return node;
 
 	default:
 		BUG();
@@ -2076,6 +2072,11 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 		*mask = mempolicy->v.nodes;
 		break;
 
+	case MPOL_LOCAL:
+		nid = numa_node_id();
+		init_nodemask_of_node(mask, nid);
+		break;
+
 	default:
 		BUG();
 	}
@@ -2320,6 +2321,8 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
 		if (a->flags & MPOL_F_LOCAL)
 			return true;
 		return a->v.preferred_node == b->v.preferred_node;
+	case MPOL_LOCAL:
+		return true;
 	default:
 		BUG();
 		return false;
@@ -2463,6 +2466,10 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		polnid = pol->v.preferred_node;
 		break;
 
+	case MPOL_LOCAL:
+		polnid = numa_node_id();
+		break;
+
 	case MPOL_BIND:
 		/* Optimize placement among multiple nodes via NUMA balancing */
 		if (pol->flags & MPOL_F_MORON) {
@@ -2907,7 +2914,6 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
 		 */
 		if (nodelist)
 			goto out;
-		mode = MPOL_PREFERRED;
 		break;
 	case MPOL_DEFAULT:
 		/*
@@ -2951,7 +2957,7 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
 	else if (nodelist)
 		new->v.preferred_node = first_node(nodes);
 	else
-		new->flags |= MPOL_F_LOCAL;
+		new->mode = MPOL_LOCAL;
 
 	/*
 	 * Save nodes for contextualization: this will be used to "clone"

From patchwork Thu May 20 08:30:04 2021
From: Feng Tang
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
    Michal Hocko
Cc: Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
    Randy Dunlap, Vlastimil Babka, Dave Hansen, Ben Widawsky, Andi Kleen,
    Dan Williams, ying.huang@intel.com, Feng Tang
Subject: [RFC Patch v2 4/4] mm/mempolicy: kill MPOL_F_LOCAL bit
Date: Thu, 20 May 2021 16:30:04 +0800
Message-Id: <1621499404-67756-5-git-send-email-feng.tang@intel.com>
In-Reply-To: <1621499404-67756-1-git-send-email-feng.tang@intel.com>
References: <1621499404-67756-1-git-send-email-feng.tang@intel.com>

Now the only remaining case of a real 'local' policy faked by a 'prefer'
policy plus the MPOL_F_LOCAL bit is this: a valid 'prefer' policy with a
valid 'preferred' node is rebound to a nodemask which does not contain
that node, after which it handles allocation as the 'local' policy.

Add a new 'MPOL_F_LOCAL_TEMP' bit for this one case and kill the
MPOL_F_LOCAL bit, which simplifies the code considerably. The sketch
below shows how the scenario can be set up from userspace.
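A minimal setup sketch (assuming <numaif.h> from a libnuma that defines
MPOL_F_STATIC_NODES, a machine with nodes 0 and 1, and a cpuset the task
can be rebound through; the cpuset writes are shown as comments because
they happen outside the program):

    #include <numaif.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            unsigned long mask = 1UL << 1;  /* statically prefer node 1 */

            if (set_mempolicy(MPOL_PREFERRED | MPOL_F_STATIC_NODES,
                              &mask, 8 * sizeof(mask))) {
                    perror("set_mempolicy");
                    return 1;
            }
            /*
             * If this task's cpuset is now rebound to mems without node 1
             * (e.g. echo 0 > cpuset.mems), the policy temporarily becomes
             * MPOL_LOCAL with MPOL_F_LOCAL_TEMP set; writing 0-1 back
             * restores it to 'prefer' on node 1.
             */
            pause();        /* keep the task alive to exercise the rebind */
            return 0;
    }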
Reviewed-by: Andi Kleen
Signed-off-by: Feng Tang
---
 include/uapi/linux/mempolicy.h |  1 +
 mm/mempolicy.c                 | 77 +++++++++++++++++++++++-------------------
 2 files changed, 43 insertions(+), 35 deletions(-)

diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 4832fd0..942844a 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -63,6 +63,7 @@ enum {
 #define MPOL_F_LOCAL	(1 << 1) /* preferred local allocation */
 #define MPOL_F_MOF	(1 << 3) /* this policy wants migrate on fault */
 #define MPOL_F_MORON	(1 << 4) /* Migrate On protnone Reference On Node */
+#define MPOL_F_LOCAL_TEMP	(1 << 5) /* a policy temporarily changed from 'prefer' to 'local' */
 
 /*
  * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 833ed2d..53a480f 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -332,6 +332,22 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
 	pol->v.nodes = tmp;
 }
 
+static void mpol_rebind_local(struct mempolicy *pol,
+				const nodemask_t *nodes)
+{
+	if (unlikely(pol->flags & MPOL_F_STATIC_NODES)) {
+		int node = first_node(pol->w.user_nodemask);
+
+		BUG_ON(!(pol->flags & MPOL_F_LOCAL_TEMP));
+
+		if (node_isset(node, *nodes)) {
+			pol->v.preferred_node = node;
+			pol->mode = MPOL_PREFERRED;
+			pol->flags &= ~MPOL_F_LOCAL_TEMP;
+		}
+	}
+}
+
 static void mpol_rebind_preferred(struct mempolicy *pol,
 						const nodemask_t *nodes)
 {
@@ -342,13 +358,19 @@ static void mpol_rebind_preferred(struct mempolicy *pol,
 
 		if (node_isset(node, *nodes)) {
 			pol->v.preferred_node = node;
-			pol->flags &= ~MPOL_F_LOCAL;
-		} else
-			pol->flags |= MPOL_F_LOCAL;
+		} else {
+			/*
+			 * If there is no valid node, change the mode to
+			 * MPOL_LOCAL, which will be restored back when the
+			 * next rebind() sees a valid node.
+			 */
+			pol->mode = MPOL_LOCAL;
+			pol->flags |= MPOL_F_LOCAL_TEMP;
+		}
 	} else if (pol->flags & MPOL_F_RELATIVE_NODES) {
 		mpol_relative_nodemask(&tmp, &pol->w.user_nodemask, nodes);
 		pol->v.preferred_node = first_node(tmp);
-	} else if (!(pol->flags & MPOL_F_LOCAL)) {
+	} else {
 		pol->v.preferred_node = node_remap(pol->v.preferred_node,
 						   pol->w.cpuset_mems_allowed,
 						   *nodes);
@@ -367,7 +389,7 @@ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
 {
 	if (!pol)
 		return;
-	if (!mpol_store_user_nodemask(pol) && !(pol->flags & MPOL_F_LOCAL) &&
+	if (!mpol_store_user_nodemask(pol) &&
 	    nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
 		return;
 
@@ -419,7 +441,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 		.rebind = mpol_rebind_nodemask,
 	},
 	[MPOL_LOCAL] = {
-		.rebind = mpol_rebind_default,
+		.rebind = mpol_rebind_local,
 	},
 };
 
@@ -913,10 +935,12 @@ static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
 	case MPOL_INTERLEAVE:
 		*nodes = p->v.nodes;
 		break;
+	case MPOL_LOCAL:
+		/* return empty node mask for local allocation */
+		break;
+
 	case MPOL_PREFERRED:
-		if (!(p->flags & MPOL_F_LOCAL))
-			node_set(p->v.preferred_node, *nodes);
-		/* else return empty node mask for local allocation */
+		node_set(p->v.preferred_node, *nodes);
 		break;
 	default:
 		BUG();
@@ -1880,9 +1904,9 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 /* Return the node id preferred by the given mempolicy, or the given id */
 static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
 {
-	if (policy->mode == MPOL_PREFERRED && !(policy->flags & MPOL_F_LOCAL))
+	if (policy->mode == MPOL_PREFERRED) {
 		nd = policy->v.preferred_node;
-	else {
+	} else {
 		/*
 		 * __GFP_THISNODE shouldn't even be used with the bind policy
 		 * because we might easily break the expectation to stay on the
@@ -1919,14 +1943,11 @@ unsigned int mempolicy_slab_node(void)
 		return node;
 
 	policy = current->mempolicy;
-	if (!policy || policy->flags & MPOL_F_LOCAL)
+	if (!policy)
 		return node;
 
 	switch (policy->mode) {
 	case MPOL_PREFERRED:
-		/*
-		 * handled MPOL_F_LOCAL above
-		 */
 		return policy->v.preferred_node;
 
 	case MPOL_INTERLEAVE:
@@ -2060,16 +2081,13 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 	mempolicy = current->mempolicy;
 	switch (mempolicy->mode) {
 	case MPOL_PREFERRED:
-		if (mempolicy->flags & MPOL_F_LOCAL)
-			nid = numa_node_id();
-		else
-			nid = mempolicy->v.preferred_node;
+		nid = mempolicy->v.preferred_node;
 		init_nodemask_of_node(mask, nid);
 		break;
 
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE:
-		*mask = mempolicy->v.nodes;
+		*mask = mempolicy->v.nodes;
 		break;
 
 	case MPOL_LOCAL:
@@ -2181,7 +2199,7 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		 * If the policy is interleave, or does not allow the current
 		 * node in its nodemask, we allocate the standard way.
 		 */
-		if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
+		if (pol->mode == MPOL_PREFERRED)
 			hpage_node = pol->v.preferred_node;
 
 		nmask = policy_nodemask(gfp, pol);
@@ -2317,9 +2335,6 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
 	case MPOL_INTERLEAVE:
 		return !!nodes_equal(a->v.nodes, b->v.nodes);
 	case MPOL_PREFERRED:
-		/* a's ->flags is the same as b's */
-		if (a->flags & MPOL_F_LOCAL)
-			return true;
 		return a->v.preferred_node == b->v.preferred_node;
 	case MPOL_LOCAL:
 		return true;
@@ -2460,10 +2475,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		break;
 
 	case MPOL_PREFERRED:
-		if (pol->flags & MPOL_F_LOCAL)
-			polnid = numa_node_id();
-		else
-			polnid = pol->v.preferred_node;
+		polnid = pol->v.preferred_node;
 		break;
 
 	case MPOL_LOCAL:
@@ -2834,9 +2846,6 @@ void numa_default_policy(void)
  * Parse and format mempolicy from/to strings
  */
 
-/*
- * "local" is implemented internally by MPOL_PREFERRED with MPOL_F_LOCAL flag.
- */
 static const char * const policy_modes[] =
 {
 	[MPOL_DEFAULT] = "default",
@@ -3003,12 +3012,10 @@ void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol)
 
 	switch (mode) {
 	case MPOL_DEFAULT:
+	case MPOL_LOCAL:
 		break;
 	case MPOL_PREFERRED:
-		if (flags & MPOL_F_LOCAL)
-			mode = MPOL_LOCAL;
-		else
-			node_set(pol->v.preferred_node, nodes);
+		node_set(pol->v.preferred_node, nodes);
 		break;
 	case MPOL_BIND:
 	case MPOL_INTERLEAVE: