From patchwork Tue Sep 10 23:43:35 2024
X-Patchwork-Submitter: Ackerley Tng
X-Patchwork-Id: 13799477
Date: Tue, 10 Sep 2024 23:43:35 +0000
X-Mailer: git-send-email 2.46.0.598.g6f2099f65c-goog
Message-ID: <9831cfcc77e325e48ec3674c3a518bda76e78df5.1726009989.git.ackerleytng@google.com>
Subject: [RFC PATCH 04/39] mm: mempolicy: Refactor out policy_node_nodemask()
From: Ackerley Tng
To: tabba@google.com, quic_eberman@quicinc.com, roypat@amazon.co.uk,
    jgg@nvidia.com, peterx@redhat.com, david@redhat.com, rientjes@google.com,
    fvdl@google.com, jthoughton@google.com, seanjc@google.com,
    pbonzini@redhat.com, zhiquan1.li@intel.com, fan.du@intel.com,
    jun.miao@intel.com, isaku.yamahata@intel.com, muchun.song@linux.dev,
    mike.kravetz@oracle.com
Cc: erdemaktas@google.com, vannapurve@google.com, ackerleytng@google.com,
    qperret@google.com, jhubbard@nvidia.com, willy@infradead.org,
    shuah@kernel.org, brauner@kernel.org, bfoster@redhat.com,
    kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org,
    richard.weiyang@gmail.com, anup@brainfault.org, haibo1.xu@intel.com,
    ajones@ventanamicro.com, vkuznets@redhat.com,
    maciej.wieczor-retman@intel.com, pgonda@google.com,
    oliver.upton@linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-fsdevel@kvack.org
This was refactored out of huge_node(). huge_node()'s interpretation of
vma for order assumes the hugetlb-specific storage of the hstate
information in the inode. policy_node_nodemask() does not assume that,
and can be used more generically.

This refactoring also enforces that nid defaults to the current node
id, which was not previously enforced.

alloc_pages_mpol_noprof() is the last remaining direct user of
policy_nodemask(). All its callers begin with nid being the current
node id as well. More refactoring is required to simplify that.
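To illustrate the change in calling convention (a minimal sketch, not
compilable on its own; the variable names mirror the callers converted
in the diff below):

	/* Before: each caller seeded nid with the current node itself. */
	int nid = numa_node_id();
	nodemask_t *nodemask = policy_nodemask(gfp, pol, ilx, &nid);

	/* After: the helper supplies the default nid and returns it. */
	nodemask_t *nodemask;
	int nid = policy_node_nodemask(pol, gfp, ilx, &nodemask);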
Signed-off-by: Ackerley Tng
Reviewed-by: Gregory Price
---
 include/linux/mempolicy.h |  2 ++
 mm/mempolicy.c            | 36 ++++++++++++++++++++++++++----------
 2 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 1add16f21612..a49631e47421 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -138,6 +138,8 @@ extern void numa_policy_init(void);
 extern void mpol_rebind_task(struct task_struct *tsk,
 				const nodemask_t *new);
 extern void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new);
+extern int policy_node_nodemask(struct mempolicy *mpol, gfp_t gfp_flags,
+				pgoff_t ilx, nodemask_t **nodemask);
 extern int huge_node(struct vm_area_struct *vma,
 				unsigned long addr, gfp_t gfp_flags,
 				struct mempolicy **mpol, nodemask_t **nodemask);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b858e22b259d..f3e572e17775 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1212,7 +1212,6 @@ static struct folio *alloc_migration_target_by_mpol(struct folio *src,
 	struct mempolicy *pol = mmpol->pol;
 	pgoff_t ilx = mmpol->ilx;
 	unsigned int order;
-	int nid = numa_node_id();
 	gfp_t gfp;
 
 	order = folio_order(src);
@@ -1221,10 +1220,11 @@ static struct folio *alloc_migration_target_by_mpol(struct folio *src,
 	if (folio_test_hugetlb(src)) {
 		nodemask_t *nodemask;
 		struct hstate *h;
+		int nid;
 
 		h = folio_hstate(src);
 		gfp = htlb_alloc_mask(h);
-		nodemask = policy_nodemask(gfp, pol, ilx, &nid);
+		nid = policy_node_nodemask(pol, gfp, ilx, &nodemask);
 		return alloc_hugetlb_folio_nodemask(h, nid, nodemask, gfp,
 				htlb_allow_alloc_fallback(MR_MEMPOLICY_MBIND));
 	}
@@ -1234,7 +1234,7 @@ static struct folio *alloc_migration_target_by_mpol(struct folio *src,
 	else
 		gfp = GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL | __GFP_COMP;
 
-	return folio_alloc_mpol(gfp, order, pol, ilx, nid);
+	return folio_alloc_mpol(gfp, order, pol, ilx, numa_node_id());
 }
 #else
 
@@ -2084,6 +2084,27 @@ static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol,
 	return nodemask;
 }
 
+/**
+ * policy_node_nodemask(@mpol, @gfp_flags, @ilx, @nodemask)
+ * @mpol: the memory policy to interpret. Reference must be taken.
+ * @gfp_flags: for this request
+ * @ilx: interleave index, for use only when MPOL_INTERLEAVE or
+ *       MPOL_WEIGHTED_INTERLEAVE
+ * @nodemask: (output) pointer to nodemask pointer for 'bind' and 'prefer-many'
+ *            policy
+ *
+ * Returns a nid suitable for a page allocation and a pointer. If the effective
+ * policy is 'bind' or 'prefer-many', returns a pointer to the mempolicy's
+ * @nodemask for filtering the zonelist.
+ */
+int policy_node_nodemask(struct mempolicy *mpol, gfp_t gfp_flags,
+			 pgoff_t ilx, nodemask_t **nodemask)
+{
+	int nid = numa_node_id();
+	*nodemask = policy_nodemask(gfp_flags, mpol, ilx, &nid);
+	return nid;
+}
+
 #ifdef CONFIG_HUGETLBFS
 /*
  * huge_node(@vma, @addr, @gfp_flags, @mpol)
@@ -2102,12 +2123,8 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
 		struct mempolicy **mpol, nodemask_t **nodemask)
 {
 	pgoff_t ilx;
-	int nid;
-
-	nid = numa_node_id();
 	*mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
-	*nodemask = policy_nodemask(gfp_flags, *mpol, ilx, &nid);
-	return nid;
+	return policy_node_nodemask(*mpol, gfp_flags, ilx, nodemask);
 }
 
 /*
@@ -2549,8 +2566,7 @@ unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
 		return alloc_pages_bulk_array_preferred_many(gfp,
 				numa_node_id(), pol, nr_pages, page_array);
 
-	nid = numa_node_id();
-	nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid);
+	nid = policy_node_nodemask(pol, gfp, NO_INTERLEAVE_INDEX, &nodemask);
 	return alloc_pages_bulk_noprof(gfp, nid, nodemask,
 				       nr_pages, NULL, page_array);
 }
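As a rough usage sketch (not part of this patch), a caller outside the
hugetlb path that already holds a mempolicy reference could combine the
returned nid and nodemask with the page allocator directly. The function
below is hypothetical and only illustrates the intended calling
convention:

	/*
	 * Hypothetical example (not in this patch): allocate a page
	 * according to @pol without relying on hugetlb-specific state.
	 */
	static struct page *example_alloc_policy_page(struct mempolicy *pol,
						      gfp_t gfp, pgoff_t ilx,
						      unsigned int order)
	{
		nodemask_t *nodemask;
		int nid;

		/*
		 * nid defaults to the current node; nodemask is non-NULL
		 * only for the 'bind' and 'prefer-many' policies.
		 */
		nid = policy_node_nodemask(pol, gfp, ilx, &nodemask);

		return __alloc_pages(gfp, order, nid, nodemask);
	}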