From patchwork Fri Nov 8 16:20:31 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13868435
Date: Fri, 8 Nov 2024 16:20:31 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
References: <20241108162040.159038-1-tabba@google.com>
Message-ID: <20241108162040.159038-2-tabba@google.com>
Subject: [RFC PATCH v1 01/10] mm/hugetlb: rename isolate_hugetlb() to
 folio_isolate_hugetlb()
From: Fuad Tabba
To: linux-mm@kvack.org
Cc: kvm@vger.kernel.org, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, david@redhat.com, rppt@kernel.org,
 jglisse@redhat.com, akpm@linux-foundation.org, muchun.song@linux.dev,
 simona@ffwll.ch, airlied@gmail.com, pbonzini@redhat.com, seanjc@google.com,
 willy@infradead.org, jgg@nvidia.com, jhubbard@nvidia.com,
 ackerleytng@google.com, vannapurve@google.com, mail@maciej.szmigiero.name,
 kirill.shutemov@linux.intel.com, quic_eberman@quicinc.com, maz@kernel.org,
 will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
 tabba@google.com
From: David Hildenbrand

Let's make the function name match "folio_isolate_lru()", and add some
kernel doc.

Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 include/linux/hugetlb.h |  4 ++--
 mm/gup.c                |  2 +-
 mm/hugetlb.c            | 23 ++++++++++++++++++++---
 mm/mempolicy.c          |  2 +-
 mm/migrate.c            |  6 +++---
 5 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ae4fe8615bb6..b0cf8dbfeb6a 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -153,7 +153,7 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 			vm_flags_t vm_flags);
 long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
 			long freed);
-bool isolate_hugetlb(struct folio *folio, struct list_head *list);
+bool folio_isolate_hugetlb(struct folio *folio, struct list_head *list);
 int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison);
 int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
 			bool *migratable_cleared);
@@ -414,7 +414,7 @@ static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
 	return NULL;
 }
 
-static inline bool isolate_hugetlb(struct folio *folio, struct list_head *list)
+static inline bool folio_isolate_hugetlb(struct folio *folio, struct list_head *list)
 {
 	return false;
 }
diff --git a/mm/gup.c b/mm/gup.c
index 28ae330ec4dd..40bbcffca865 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2301,7 +2301,7 @@ static unsigned long collect_longterm_unpinnable_folios(
 			continue;
 
 		if (folio_test_hugetlb(folio)) {
-			isolate_hugetlb(folio, movable_folio_list);
+			folio_isolate_hugetlb(folio, movable_folio_list);
 			continue;
 		}
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cec4b121193f..e17bb2847572 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2868,7 +2868,7 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 		 * Fail with -EBUSY if not possible.
 		 */
 		spin_unlock_irq(&hugetlb_lock);
-		isolated = isolate_hugetlb(old_folio, list);
+		isolated = folio_isolate_hugetlb(old_folio, list);
 		ret = isolated ? 0 : -EBUSY;
 		spin_lock_irq(&hugetlb_lock);
 		goto free_new;
@@ -2953,7 +2953,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	if (hstate_is_gigantic(h))
 		return -ENOMEM;
 
-	if (folio_ref_count(folio) && isolate_hugetlb(folio, list))
+	if (folio_ref_count(folio) && folio_isolate_hugetlb(folio, list))
 		ret = 0;
 	else if (!folio_ref_count(folio))
 		ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);
@@ -7396,7 +7396,24 @@ __weak unsigned long hugetlb_mask_last_page(struct hstate *h)
 
 #endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
 
-bool isolate_hugetlb(struct folio *folio, struct list_head *list)
+/**
+ * folio_isolate_hugetlb: try to isolate an allocated hugetlb folio
+ * @folio: the folio to isolate
+ * @list: the list to add the folio to on success
+ *
+ * Isolate an allocated (refcount > 0) hugetlb folio, marking it as
+ * isolated/non-migratable, and moving it from the active list to the
+ * given list.
+ *
+ * Isolation will fail if @folio is not an allocated hugetlb folio, or if
+ * it is already isolated/non-migratable.
+ *
+ * On success, an additional folio reference is taken that must be dropped
+ * using folio_putback_active_hugetlb() to undo the isolation.
+ *
+ * Return: True if isolation worked, otherwise False.
+ */
+bool folio_isolate_hugetlb(struct folio *folio, struct list_head *list)
 {
 	bool ret = true;
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index bb37cd1a51d8..41bdff67757c 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -647,7 +647,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 	 */
 	if ((flags & MPOL_MF_MOVE_ALL) ||
 	    (!folio_likely_mapped_shared(folio) && !hugetlb_pmd_shared(pte)))
-		if (!isolate_hugetlb(folio, qp->pagelist))
+		if (!folio_isolate_hugetlb(folio, qp->pagelist))
 			qp->nr_failed++;
 unlock:
 	spin_unlock(ptl);
diff --git a/mm/migrate.c b/mm/migrate.c
index dfb5eba3c522..55585b5f57ec 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -136,7 +136,7 @@ static void putback_movable_folio(struct folio *folio)
  *
  * This function shall be used whenever the isolated pageset has been
  * built from lru, balloon, hugetlbfs page. See isolate_migratepages_range()
- * and isolate_hugetlb().
+ * and folio_isolate_hugetlb().
  */
 void putback_movable_pages(struct list_head *l)
 {
@@ -177,7 +177,7 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
 	bool isolated, lru;
 
 	if (folio_test_hugetlb(folio))
-		return isolate_hugetlb(folio, list);
+		return folio_isolate_hugetlb(folio, list);
 
 	lru = !__folio_test_movable(folio);
 	if (lru)
@@ -2208,7 +2208,7 @@ static int __add_folio_for_migration(struct folio *folio, int node,
 		return -EACCES;
 
 	if (folio_test_hugetlb(folio)) {
-		if (isolate_hugetlb(folio, pagelist))
+		if (folio_isolate_hugetlb(folio, pagelist))
 			return 1;
 	} else if (folio_isolate_lru(folio)) {
 		list_add_tail(&folio->lru, pagelist);

From patchwork Fri Nov 8 16:20:32 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13868436
Date: Fri, 8 Nov 2024 16:20:32 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
References: <20241108162040.159038-1-tabba@google.com>
Message-ID: <20241108162040.159038-3-tabba@google.com>
Subject: [RFC PATCH v1 02/10] mm/migrate: don't call
 folio_putback_active_hugetlb() on dst hugetlb folio
From: Fuad Tabba
To: linux-mm@kvack.org
Cc: kvm@vger.kernel.org, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, david@redhat.com, rppt@kernel.org,
 jglisse@redhat.com, akpm@linux-foundation.org, muchun.song@linux.dev,
 simona@ffwll.ch, airlied@gmail.com, pbonzini@redhat.com, seanjc@google.com,
 willy@infradead.org, jgg@nvidia.com, jhubbard@nvidia.com,
 ackerleytng@google.com, vannapurve@google.com, mail@maciej.szmigiero.name,
 kirill.shutemov@linux.intel.com, quic_eberman@quicinc.com, maz@kernel.org,
 will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
 tabba@google.com

From: David Hildenbrand

We replaced a simple put_page() by a putback_active_hugepage() call in
commit 3aaa76e125c1 ("mm: migrate: hugetlb: putback destination hugepage
to active list"), to set the
"active" flag on the dst hugetlb folio.

Nowadays, we decoupled the "active" list from the flag, by calling the
flag "migratable".

Calling "putback" on something that wasn't allocated is weird and not
future proof, especially if we might reach that path when migration failed
and we just want to free the freshly allocated hugetlb folio.

Let's simply set the "migratable" flag in move_hugetlb_state(), where we
know that allocation succeeded, and use simple folio_put() to return our
reference.

Do we need the hugetlb_lock for setting that flag? Staring at other users
of folio_set_hugetlb_migratable(), it does not look like it. After all,
the dst folio should already be on the active list, and we are not
modifying that list.

Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 mm/hugetlb.c | 5 +++++
 mm/migrate.c | 8 ++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e17bb2847572..da3fe1840ab8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7508,6 +7508,11 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int reason)
 		}
 		spin_unlock_irq(&hugetlb_lock);
 	}
+	/*
+	 * Our old folio is isolated and has "migratable" cleared until it
+	 * is putback. As migration succeeded, set the new folio "migratable".
+	 */
+	folio_set_hugetlb_migratable(new_folio);
 }
 
 static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
diff --git a/mm/migrate.c b/mm/migrate.c
index 55585b5f57ec..b129dc41c140 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1547,14 +1547,14 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 		list_move_tail(&src->lru, ret);
 
 	/*
-	 * If migration was not successful and there's a freeing callback, use
-	 * it. Otherwise, put_page() will drop the reference grabbed during
-	 * isolation.
+	 * If migration was not successful and there's a freeing callback,
+	 * return the folio to that special allocator. Otherwise, simply drop
+	 * our additional reference.
	 */
 	if (put_new_folio)
 		put_new_folio(dst, private);
 	else
-		folio_putback_active_hugetlb(dst);
+		folio_put(dst);
 
 	return rc;
 }

From patchwork Fri Nov 8 16:20:33 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13868437
Date: Fri, 8 Nov 2024 16:20:33 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
References: <20241108162040.159038-1-tabba@google.com>
Message-ID: <20241108162040.159038-4-tabba@google.com>
Subject: [RFC PATCH v1 03/10] mm/hugetlb: rename "folio_putback_active_hugetlb()"
 to "folio_putback_hugetlb()"
From: Fuad Tabba
To: linux-mm@kvack.org
Cc: kvm@vger.kernel.org, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, david@redhat.com, rppt@kernel.org,
 jglisse@redhat.com, akpm@linux-foundation.org, muchun.song@linux.dev,
 simona@ffwll.ch, airlied@gmail.com, pbonzini@redhat.com, seanjc@google.com,
 willy@infradead.org, jgg@nvidia.com, jhubbard@nvidia.com,
 ackerleytng@google.com, vannapurve@google.com, mail@maciej.szmigiero.name,
 kirill.shutemov@linux.intel.com, quic_eberman@quicinc.com, maz@kernel.org,
 will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
 tabba@google.com

From: David Hildenbrand

Now that folio_putback_active_hugetlb() is only called on folios that were
previously isolated through folio_isolate_hugetlb(), let's rename it to
match folio_putback_lru().

Add some kernel doc to clarify how this function is supposed to be used.
Signed-off-by: David Hildenbrand Signed-off-by: Fuad Tabba --- include/linux/hugetlb.h | 4 ++-- mm/hugetlb.c | 15 +++++++++++++-- mm/migrate.c | 6 +++--- 3 files changed, 18 insertions(+), 7 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index b0cf8dbfeb6a..e846d7dac77c 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -157,7 +157,7 @@ bool folio_isolate_hugetlb(struct folio *folio, struct list_head *list); int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison); int get_huge_page_for_hwpoison(unsigned long pfn, int flags, bool *migratable_cleared); -void folio_putback_active_hugetlb(struct folio *folio); +void folio_putback_hugetlb(struct folio *folio); void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int reason); void hugetlb_fix_reserve_counts(struct inode *inode); extern struct mutex *hugetlb_fault_mutex_table; @@ -430,7 +430,7 @@ static inline int get_huge_page_for_hwpoison(unsigned long pfn, int flags, return 0; } -static inline void folio_putback_active_hugetlb(struct folio *folio) +static inline void folio_putback_hugetlb(struct folio *folio) { } diff --git a/mm/hugetlb.c b/mm/hugetlb.c index da3fe1840ab8..d58bd815fdf2 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -7409,7 +7409,7 @@ __weak unsigned long hugetlb_mask_last_page(struct hstate *h) * it is already isolated/non-migratable. * * On success, an additional folio reference is taken that must be dropped - * using folio_putback_active_hugetlb() to undo the isolation. + * using folio_putback_hugetlb() to undo the isolation. * * Return: True if isolation worked, otherwise False. 
 */
@@ -7461,7 +7461,18 @@ int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
 	return ret;
 }
 
-void folio_putback_active_hugetlb(struct folio *folio)
+/**
+ * folio_putback_hugetlb: unisolate a hugetlb folio
+ * @folio: the isolated hugetlb folio
+ *
+ * Putback/un-isolate the hugetlb folio that was previously isolated using
+ * folio_isolate_hugetlb(): marking it non-isolated/migratable and putting it
+ * back onto the active list.
+ *
+ * Will drop the additional folio reference obtained through
+ * folio_isolate_hugetlb().
+ */
+void folio_putback_hugetlb(struct folio *folio)
 {
 	spin_lock_irq(&hugetlb_lock);
 	folio_set_hugetlb_migratable(folio);
diff --git a/mm/migrate.c b/mm/migrate.c
index b129dc41c140..89292d131148 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -145,7 +145,7 @@ void putback_movable_pages(struct list_head *l)
 	list_for_each_entry_safe(folio, folio2, l, lru) {
 		if (unlikely(folio_test_hugetlb(folio))) {
-			folio_putback_active_hugetlb(folio);
+			folio_putback_hugetlb(folio);
 			continue;
 		}
 		list_del(&folio->lru);
@@ -1459,7 +1459,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 	if (folio_ref_count(src) == 1) {
 		/* page was freed from under us. So we are done.
		 */
-		folio_putback_active_hugetlb(src);
+		folio_putback_hugetlb(src);
 		return MIGRATEPAGE_SUCCESS;
 	}
@@ -1542,7 +1542,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 	folio_unlock(src);
 out:
 	if (rc == MIGRATEPAGE_SUCCESS)
-		folio_putback_active_hugetlb(src);
+		folio_putback_hugetlb(src);
 	else if (rc != -EAGAIN)
 		list_move_tail(&src->lru, ret);

From patchwork Fri Nov 8 16:20:34 2024
Date: Fri, 8 Nov 2024 16:20:34 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
Message-ID: <20241108162040.159038-5-tabba@google.com>
Subject: [RFC PATCH v1 04/10] mm/hugetlb-cgroup: convert hugetlb_cgroup_css_offline() to work on folios
From:
Fuad Tabba
To: linux-mm@kvack.org
Cc: kvm@vger.kernel.org, nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org, david@redhat.com, rppt@kernel.org, jglisse@redhat.com, akpm@linux-foundation.org, muchun.song@linux.dev, simona@ffwll.ch, airlied@gmail.com, pbonzini@redhat.com, seanjc@google.com, willy@infradead.org, jgg@nvidia.com, jhubbard@nvidia.com, ackerleytng@google.com, vannapurve@google.com, mail@maciej.szmigiero.name, kirill.shutemov@linux.intel.com, quic_eberman@quicinc.com, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, tabba@google.com
From: David Hildenbrand

Let's convert hugetlb_cgroup_css_offline() and hugetlb_cgroup_move_parent()
to work on folios. hugepage_activelist contains folios, not pages.

While at it, rename page_hcg simply to hcg, removing most of the "page"
terminology.

Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 mm/hugetlb_cgroup.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index d8d0e665caed..1bdeaf25f640 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -195,24 +195,23 @@ static void hugetlb_cgroup_css_free(struct cgroup_subsys_state *css)
  * cannot fail.
  */
 static void hugetlb_cgroup_move_parent(int idx, struct hugetlb_cgroup *h_cg,
-				       struct page *page)
+				       struct folio *folio)
 {
 	unsigned int nr_pages;
 	struct page_counter *counter;
-	struct hugetlb_cgroup *page_hcg;
+	struct hugetlb_cgroup *hcg;
 	struct hugetlb_cgroup *parent = parent_hugetlb_cgroup(h_cg);
-	struct folio *folio = page_folio(page);
 
-	page_hcg = hugetlb_cgroup_from_folio(folio);
+	hcg = hugetlb_cgroup_from_folio(folio);
 	/*
 	 * We can have pages in active list without any cgroup
 	 * ie, hugepage with less than 3 pages. We can safely
 	 * ignore those pages.
 	 */
-	if (!page_hcg || page_hcg != h_cg)
+	if (!hcg || hcg != h_cg)
 		goto out;
 
-	nr_pages = compound_nr(page);
+	nr_pages = folio_nr_pages(folio);
 	if (!parent) {
 		parent = root_h_cgroup;
 		/* root has no limit */
@@ -235,13 +234,13 @@ static void hugetlb_cgroup_css_offline(struct cgroup_subsys_state *css)
 {
 	struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(css);
 	struct hstate *h;
-	struct page *page;
+	struct folio *folio;
 
 	do {
 		for_each_hstate(h) {
 			spin_lock_irq(&hugetlb_lock);
-			list_for_each_entry(page, &h->hugepage_activelist, lru)
-				hugetlb_cgroup_move_parent(hstate_index(h), h_cg, page);
+			list_for_each_entry(folio, &h->hugepage_activelist, lru)
+				hugetlb_cgroup_move_parent(hstate_index(h), h_cg, folio);
 			spin_unlock_irq(&hugetlb_lock);
 		}

From patchwork Fri Nov 8 16:20:35 2024
Date: Fri, 8 Nov 2024 16:20:35 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
Message-ID: <20241108162040.159038-6-tabba@google.com>
Subject: [RFC PATCH v1 05/10] mm/hugetlb: use folio->lru in demote_free_hugetlb_folios()
From: Fuad Tabba
To: linux-mm@kvack.org
From: David Hildenbrand

Let's avoid messing with pages.
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 mm/hugetlb.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d58bd815fdf2..a64852280213 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3806,13 +3806,15 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst,
 
 		for (i = 0; i < pages_per_huge_page(src); i += pages_per_huge_page(dst)) {
 			struct page *page = folio_page(folio, i);
+			struct folio *new_folio;
 
 			page->mapping = NULL;
 			clear_compound_head(page);
 			prep_compound_page(page, dst->order);
+			new_folio = page_folio(page);
 
-			init_new_hugetlb_folio(dst, page_folio(page));
-			list_add(&page->lru, &dst_list);
+			init_new_hugetlb_folio(dst, new_folio);
+			list_add(&new_folio->lru, &dst_list);
 		}
 	}

From patchwork Fri Nov 8 16:20:36 2024
Date: Fri,
8 Nov 2024 16:20:36 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
Message-ID: <20241108162040.159038-7-tabba@google.com>
Subject: [RFC PATCH v1 06/10] mm/hugetlb: use separate folio->_hugetlb_list for hugetlb-internals
From: Fuad Tabba
To: linux-mm@kvack.org
From: David Hildenbrand

Let's use a separate list head in the folio, as long as hugetlb folios are
not isolated.
This way, we can reuse folio->lru for a different purpose (e.g., owner_ops)
as long as the folios are not isolated.

Consequently, folio->lru will only be used while there is an additional
folio reference that cannot be dropped until putback/un-isolation.

Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 include/linux/mm_types.h | 18 +++++++++
 mm/hugetlb.c             | 81 +++++++++++++++++++++-------------------
 mm/hugetlb_cgroup.c      |  4 +-
 mm/hugetlb_vmemmap.c     |  8 ++--
 4 files changed, 66 insertions(+), 45 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 80fef38d9d64..365c73be0bb4 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -310,6 +310,7 @@ typedef struct {
  * @_hugetlb_cgroup: Do not use directly, use accessor in hugetlb_cgroup.h.
  * @_hugetlb_cgroup_rsvd: Do not use directly, use accessor in hugetlb_cgroup.h.
  * @_hugetlb_hwpoison: Do not use directly, call raw_hwp_list_head().
+ * @_hugetlb_list: To be used in hugetlb core code only.
  * @_deferred_list: Folios to be split under memory pressure.
  * @_unused_slab_obj_exts: Placeholder to match obj_exts in struct slab.
  *
@@ -397,6 +398,17 @@ struct folio {
 		};
 		struct page __page_2;
 	};
+	union {
+		struct {
+			unsigned long _flags_3;
+			unsigned long _head_3;
+			/* public: */
+			struct list_head _hugetlb_list;
+			/* private: the union with struct page is transitional */
+		};
+		struct page __page_3;
+	};
 };
 
 #define FOLIO_MATCH(pg, fl)						\
@@ -433,6 +445,12 @@ FOLIO_MATCH(compound_head, _head_2);
 FOLIO_MATCH(flags, _flags_2a);
 FOLIO_MATCH(compound_head, _head_2a);
 #undef FOLIO_MATCH
+#define FOLIO_MATCH(pg, fl)						\
+	static_assert(offsetof(struct folio, fl) ==			\
+		      offsetof(struct page, pg) + 3 * sizeof(struct page))
+FOLIO_MATCH(flags, _flags_3);
+FOLIO_MATCH(compound_head, _head_3);
+#undef FOLIO_MATCH
 
 /**
  * struct ptdesc - Memory descriptor for page tables.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a64852280213..2308e94d8615 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1316,7 +1316,7 @@ static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio)
 	lockdep_assert_held(&hugetlb_lock);
 	VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
 
-	list_move(&folio->lru, &h->hugepage_freelists[nid]);
+	list_move(&folio->_hugetlb_list, &h->hugepage_freelists[nid]);
 	h->free_huge_pages++;
 	h->free_huge_pages_node[nid]++;
 	folio_set_hugetlb_freed(folio);
@@ -1329,14 +1329,14 @@ static struct folio *dequeue_hugetlb_folio_node_exact(struct hstate *h,
 	bool pin = !!(current->flags & PF_MEMALLOC_PIN);
 
 	lockdep_assert_held(&hugetlb_lock);
-	list_for_each_entry(folio, &h->hugepage_freelists[nid], lru) {
+	list_for_each_entry(folio, &h->hugepage_freelists[nid], _hugetlb_list) {
 		if (pin && !folio_is_longterm_pinnable(folio))
 			continue;
 
 		if (folio_test_hwpoison(folio))
 			continue;
 
-		list_move(&folio->lru, &h->hugepage_activelist);
+		list_move(&folio->_hugetlb_list, &h->hugepage_activelist);
 		folio_ref_unfreeze(folio, 1);
 		folio_clear_hugetlb_freed(folio);
 		h->free_huge_pages--;
@@ -1599,7 +1599,7 @@ static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
 
-	list_del(&folio->lru);
+	list_del(&folio->_hugetlb_list);
 
 	if (folio_test_hugetlb_freed(folio)) {
 		folio_clear_hugetlb_freed(folio);
@@ -1616,8 +1617,9 @@ static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 	 * pages. Otherwise, someone (memory error handling) may try to write
 	 * to tail struct pages.
*/ - if (!folio_test_hugetlb_vmemmap_optimized(folio)) + if (!folio_test_hugetlb_vmemmap_optimized(folio)) { __folio_clear_hugetlb(folio); + } h->nr_huge_pages--; h->nr_huge_pages_node[nid]--; @@ -1632,7 +1633,7 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio, lockdep_assert_held(&hugetlb_lock); - INIT_LIST_HEAD(&folio->lru); + INIT_LIST_HEAD(&folio->_hugetlb_list); h->nr_huge_pages++; h->nr_huge_pages_node[nid]++; @@ -1640,8 +1641,8 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio, h->surplus_huge_pages++; h->surplus_huge_pages_node[nid]++; } - __folio_set_hugetlb(folio); + folio_change_private(folio, NULL); /* * We have to set hugetlb_vmemmap_optimized again as above @@ -1789,8 +1790,8 @@ static void bulk_vmemmap_restore_error(struct hstate *h, * hugetlb pages with vmemmap we will free up memory so that we * can allocate vmemmap for more hugetlb pages. */ - list_for_each_entry_safe(folio, t_folio, non_hvo_folios, lru) { - list_del(&folio->lru); + list_for_each_entry_safe(folio, t_folio, non_hvo_folios, _hugetlb_list) { + list_del(&folio->_hugetlb_list); spin_lock_irq(&hugetlb_lock); __folio_clear_hugetlb(folio); spin_unlock_irq(&hugetlb_lock); @@ -1808,14 +1809,14 @@ static void bulk_vmemmap_restore_error(struct hstate *h, * If are able to restore vmemmap and free one hugetlb page, we * quit processing the list to retry the bulk operation. 
*/ - list_for_each_entry_safe(folio, t_folio, folio_list, lru) + list_for_each_entry_safe(folio, t_folio, folio_list, _hugetlb_list) if (hugetlb_vmemmap_restore_folio(h, folio)) { - list_del(&folio->lru); + list_del(&folio->_hugetlb_list); spin_lock_irq(&hugetlb_lock); add_hugetlb_folio(h, folio, true); spin_unlock_irq(&hugetlb_lock); } else { - list_del(&folio->lru); + list_del(&folio->_hugetlb_list); spin_lock_irq(&hugetlb_lock); __folio_clear_hugetlb(folio); spin_unlock_irq(&hugetlb_lock); @@ -1856,12 +1857,12 @@ static void update_and_free_pages_bulk(struct hstate *h, VM_WARN_ON(ret < 0); if (!list_empty(&non_hvo_folios) && ret) { spin_lock_irq(&hugetlb_lock); - list_for_each_entry(folio, &non_hvo_folios, lru) + list_for_each_entry(folio, &non_hvo_folios, _hugetlb_list) __folio_clear_hugetlb(folio); spin_unlock_irq(&hugetlb_lock); } - list_for_each_entry_safe(folio, t_folio, &non_hvo_folios, lru) { + list_for_each_entry_safe(folio, t_folio, &non_hvo_folios, _hugetlb_list) { update_and_free_hugetlb_folio(h, folio, false); cond_resched(); } @@ -1959,7 +1960,7 @@ static void __prep_account_new_huge_page(struct hstate *h, int nid) static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio) { __folio_set_hugetlb(folio); - INIT_LIST_HEAD(&folio->lru); + INIT_LIST_HEAD(&folio->_hugetlb_list); hugetlb_set_folio_subpool(folio, NULL); set_hugetlb_cgroup(folio, NULL); set_hugetlb_cgroup_rsvd(folio, NULL); @@ -2112,7 +2113,7 @@ static void prep_and_add_allocated_folios(struct hstate *h, /* Add all new pool pages to free lists in one lock cycle */ spin_lock_irqsave(&hugetlb_lock, flags); - list_for_each_entry_safe(folio, tmp_f, folio_list, lru) { + list_for_each_entry_safe(folio, tmp_f, folio_list, _hugetlb_list) { __prep_account_new_huge_page(h, folio_nid(folio)); enqueue_hugetlb_folio(h, folio); } @@ -2165,7 +2166,7 @@ static struct folio *remove_pool_hugetlb_folio(struct hstate *h, if ((!acct_surplus || h->surplus_huge_pages_node[node]) && 
!list_empty(&h->hugepage_freelists[node])) { folio = list_entry(h->hugepage_freelists[node].next, - struct folio, lru); + struct folio, _hugetlb_list); remove_hugetlb_folio(h, folio, acct_surplus); break; } @@ -2491,7 +2492,7 @@ static int gather_surplus_pages(struct hstate *h, long delta) alloc_ok = false; break; } - list_add(&folio->lru, &surplus_list); + list_add(&folio->_hugetlb_list, &surplus_list); cond_resched(); } allocated += i; @@ -2526,7 +2527,7 @@ static int gather_surplus_pages(struct hstate *h, long delta) ret = 0; /* Free the needed pages to the hugetlb pool */ - list_for_each_entry_safe(folio, tmp, &surplus_list, lru) { + list_for_each_entry_safe(folio, tmp, &surplus_list, _hugetlb_list) { if ((--needed) < 0) break; /* Add the page to the hugetlb allocator */ @@ -2539,7 +2540,7 @@ static int gather_surplus_pages(struct hstate *h, long delta) * Free unnecessary surplus pages to the buddy allocator. * Pages have no ref count, call free_huge_folio directly. */ - list_for_each_entry_safe(folio, tmp, &surplus_list, lru) + list_for_each_entry_safe(folio, tmp, &surplus_list, _hugetlb_list) free_huge_folio(folio); spin_lock_irq(&hugetlb_lock); @@ -2588,7 +2589,7 @@ static void return_unused_surplus_pages(struct hstate *h, if (!folio) goto out; - list_add(&folio->lru, &page_list); + list_add(&folio->_hugetlb_list, &page_list); } out: @@ -3051,7 +3052,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma, folio_set_hugetlb_restore_reserve(folio); h->resv_huge_pages--; } - list_add(&folio->lru, &h->hugepage_activelist); + list_add(&folio->_hugetlb_list, &h->hugepage_activelist); folio_ref_unfreeze(folio, 1); /* Fall through */ } @@ -3211,7 +3212,7 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h, /* Send list for bulk vmemmap optimization processing */ hugetlb_vmemmap_optimize_folios(h, folio_list); - list_for_each_entry_safe(folio, tmp_f, folio_list, lru) { + list_for_each_entry_safe(folio, tmp_f, folio_list, _hugetlb_list) { if 
(!folio_test_hugetlb_vmemmap_optimized(folio)) { /* * If HVO fails, initialize all tail struct pages @@ -3260,7 +3261,7 @@ static void __init gather_bootmem_prealloc_node(unsigned long nid) hugetlb_folio_init_vmemmap(folio, h, HUGETLB_VMEMMAP_RESERVE_PAGES); init_new_hugetlb_folio(h, folio); - list_add(&folio->lru, &folio_list); + list_add(&folio->_hugetlb_list, &folio_list); /* * We need to restore the 'stolen' pages to totalram_pages @@ -3317,7 +3318,7 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid) &node_states[N_MEMORY], NULL); if (!folio) break; - list_add(&folio->lru, &folio_list); + list_add(&folio->_hugetlb_list, &folio_list); } cond_resched(); } @@ -3379,7 +3380,7 @@ static void __init hugetlb_pages_alloc_boot_node(unsigned long start, unsigned l if (!folio) break; - list_move(&folio->lru, &folio_list); + list_move(&folio->_hugetlb_list, &folio_list); cond_resched(); } @@ -3544,13 +3545,13 @@ static void try_to_free_low(struct hstate *h, unsigned long count, for_each_node_mask(i, *nodes_allowed) { struct folio *folio, *next; struct list_head *freel = &h->hugepage_freelists[i]; - list_for_each_entry_safe(folio, next, freel, lru) { + list_for_each_entry_safe(folio, next, freel, _hugetlb_list) { if (count >= h->nr_huge_pages) goto out; if (folio_test_highmem(folio)) continue; remove_hugetlb_folio(h, folio, false); - list_add(&folio->lru, &page_list); + list_add(&folio->_hugetlb_list, &page_list); } } @@ -3703,7 +3704,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid, goto out; } - list_add(&folio->lru, &page_list); + list_add(&folio->_hugetlb_list, &page_list); allocated++; /* Bail for signals. 
Probably ctrl-c from user */ @@ -3750,7 +3751,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid, if (!folio) break; - list_add(&folio->lru, &page_list); + list_add(&folio->_hugetlb_list, &page_list); } /* free the pages after dropping lock */ spin_unlock_irq(&hugetlb_lock); @@ -3793,13 +3794,13 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst, */ mutex_lock(&dst->resize_lock); - list_for_each_entry_safe(folio, next, src_list, lru) { + list_for_each_entry_safe(folio, next, src_list, _hugetlb_list) { int i; if (folio_test_hugetlb_vmemmap_optimized(folio)) continue; - list_del(&folio->lru); + list_del(&folio->_hugetlb_list); split_page_owner(&folio->page, huge_page_order(src), huge_page_order(dst)); pgalloc_tag_split(folio, huge_page_order(src), huge_page_order(dst)); @@ -3814,7 +3815,7 @@ static long demote_free_hugetlb_folios(struct hstate *src, struct hstate *dst, new_folio = page_folio(page); init_new_hugetlb_folio(dst, new_folio); - list_add(&new_folio->lru, &dst_list); + list_add(&new_folio->_hugetlb_list, &dst_list); } } @@ -3847,12 +3848,12 @@ static long demote_pool_huge_page(struct hstate *src, nodemask_t *nodes_allowed, LIST_HEAD(list); struct folio *folio, *next; - list_for_each_entry_safe(folio, next, &src->hugepage_freelists[node], lru) { + list_for_each_entry_safe(folio, next, &src->hugepage_freelists[node], _hugetlb_list) { if (folio_test_hwpoison(folio)) continue; remove_hugetlb_folio(src, folio, false); - list_add(&folio->lru, &list); + list_add(&folio->_hugetlb_list, &list); if (++nr_demoted == nr_to_demote) break; @@ -3864,8 +3865,8 @@ static long demote_pool_huge_page(struct hstate *src, nodemask_t *nodes_allowed, spin_lock_irq(&hugetlb_lock); - list_for_each_entry_safe(folio, next, &list, lru) { - list_del(&folio->lru); + list_for_each_entry_safe(folio, next, &list, _hugetlb_list) { + list_del(&folio->_hugetlb_list); add_hugetlb_folio(src, folio, false); nr_demoted--; @@ -7427,7 
+7428,8 @@ bool folio_isolate_hugetlb(struct folio *folio, struct list_head *list) goto unlock; } folio_clear_hugetlb_migratable(folio); - list_move_tail(&folio->lru, list); + list_del_init(&folio->_hugetlb_list); + list_add_tail(&folio->lru, list); unlock: spin_unlock_irq(&hugetlb_lock); return ret; @@ -7478,7 +7480,8 @@ void folio_putback_hugetlb(struct folio *folio) { spin_lock_irq(&hugetlb_lock); folio_set_hugetlb_migratable(folio); - list_move_tail(&folio->lru, &(folio_hstate(folio))->hugepage_activelist); + list_del_init(&folio->lru); + list_add_tail(&folio->_hugetlb_list, &(folio_hstate(folio))->hugepage_activelist); spin_unlock_irq(&hugetlb_lock); folio_put(folio); } diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c index 1bdeaf25f640..ee720eeaf6b1 100644 --- a/mm/hugetlb_cgroup.c +++ b/mm/hugetlb_cgroup.c @@ -239,7 +239,7 @@ static void hugetlb_cgroup_css_offline(struct cgroup_subsys_state *css) do { for_each_hstate(h) { spin_lock_irq(&hugetlb_lock); - list_for_each_entry(folio, &h->hugepage_activelist, lru) + list_for_each_entry(folio, &h->hugepage_activelist, _hugetlb_list) hugetlb_cgroup_move_parent(hstate_index(h), h_cg, folio); spin_unlock_irq(&hugetlb_lock); @@ -933,7 +933,7 @@ void hugetlb_cgroup_migrate(struct folio *old_folio, struct folio *new_folio) /* move the h_cg details to new cgroup */ set_hugetlb_cgroup(new_folio, h_cg); set_hugetlb_cgroup_rsvd(new_folio, h_cg_rsvd); - list_move(&new_folio->lru, &h->hugepage_activelist); + list_move(&new_folio->_hugetlb_list, &h->hugepage_activelist); spin_unlock_irq(&hugetlb_lock); return; } diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 57b7f591eee8..b2cb8d328aac 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -519,7 +519,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h, long ret = 0; unsigned long flags = VMEMMAP_REMAP_NO_TLB_FLUSH | VMEMMAP_SYNCHRONIZE_RCU; - list_for_each_entry_safe(folio, t_folio, folio_list, lru) { + list_for_each_entry_safe(folio, 
t_folio, folio_list, _hugetlb_list) { if (folio_test_hugetlb_vmemmap_optimized(folio)) { ret = __hugetlb_vmemmap_restore_folio(h, folio, flags); /* only need to synchronize_rcu() once for each batch */ @@ -531,7 +531,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h, } /* Add non-optimized folios to output list */ - list_move(&folio->lru, non_hvo_folios); + list_move(&folio->_hugetlb_list, non_hvo_folios); } if (restored) @@ -651,7 +651,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l LIST_HEAD(vmemmap_pages); unsigned long flags = VMEMMAP_REMAP_NO_TLB_FLUSH | VMEMMAP_SYNCHRONIZE_RCU; - list_for_each_entry(folio, folio_list, lru) { + list_for_each_entry(folio, folio_list, _hugetlb_list) { int ret = hugetlb_vmemmap_split_folio(h, folio); /* @@ -666,7 +666,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l flush_tlb_all(); - list_for_each_entry(folio, folio_list, lru) { + list_for_each_entry(folio, folio_list, _hugetlb_list) { int ret; ret = __hugetlb_vmemmap_optimize_folio(h, folio, &vmemmap_pages, flags);
From patchwork Fri Nov 8 16:20:37 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13868441
Date: Fri, 8 Nov 2024 16:20:37 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
References: <20241108162040.159038-1-tabba@google.com>
Message-ID: <20241108162040.159038-8-tabba@google.com>
Subject: [RFC PATCH v1 07/10] mm: Introduce struct folio_owner_ops
From: Fuad Tabba
To: linux-mm@kvack.org
Cc: kvm@vger.kernel.org, nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org, david@redhat.com, rppt@kernel.org, jglisse@redhat.com, akpm@linux-foundation.org, muchun.song@linux.dev, simona@ffwll.ch, airlied@gmail.com, pbonzini@redhat.com, seanjc@google.com, willy@infradead.org, jgg@nvidia.com, jhubbard@nvidia.com, ackerleytng@google.com, vannapurve@google.com, mail@maciej.szmigiero.name, kirill.shutemov@linux.intel.com, quic_eberman@quicinc.com, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, tabba@google.com
Introduce struct folio_owner_ops, a method table that contains callbacks to owners of folios that need special handling for certain operations. 
For now, it only contains a callback for folio free(), which is called immediately after the folio refcount drops to 0. Add a pointer to this struct overlaid on struct page compound_head, pgmap, and struct page/folio lru. The users of this struct either will not use lru (e.g., zone device), or would be able to easily isolate when lru is being used (e.g., hugetlb) and handle it accordingly. While folios are isolated, they cannot get freed and the owner_ops are unstable. This is sufficient for the current use case of returning these folios to a custom allocator. To identify that a folio has owner_ops, we set bit 1 of the field, in a similar way to how bit 0 of compound_head is used to identify compound pages. Signed-off-by: Fuad Tabba --- include/linux/mm_types.h | 64 +++++++++++++++++++++++++++++++++++++--- mm/swap.c | 19 ++++++++++++ 2 files changed, 79 insertions(+), 4 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 365c73be0bb4..6e06286f44f1 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -41,10 +41,12 @@ struct mem_cgroup; * * If you allocate the page using alloc_pages(), you can use some of the * space in struct page for your own purposes. The five words in the main - * union are available, except for bit 0 of the first word which must be - * kept clear. Many users use this word to store a pointer to an object - * which is guaranteed to be aligned. If you use the same storage as - * page->mapping, you must restore it to NULL before freeing the page. + * union are available, except for bit 0 (used for compound_head pages) + * and bit 1 (used for owner_ops) of the first word, which must be kept + * clear and used with care. Many users use this word to store a pointer + * to an object which is guaranteed to be aligned. If you use the same + * storage as page->mapping, you must restore it to NULL before freeing + * the page. * * The mapcount field must not be used for own purposes. 
* @@ -283,10 +285,16 @@ typedef struct { unsigned long val; } swp_entry_t; +struct folio_owner_ops; + /** * struct folio - Represents a contiguous set of bytes. * @flags: Identical to the page flags. * @lru: Least Recently Used list; tracks how recently this folio was used. + * @owner_ops: Pointer to callback operations of the folio owner. Valid if bit 1 + * is set. + * NOTE: Cannot be used with lru, since it is overlaid with it. To use lru, + * owner_ops must be cleared first, and restored once done with lru. * @mlock_count: Number of times this folio has been pinned by mlock(). * @mapping: The file this page belongs to, or refers to the anon_vma for * anonymous memory. @@ -330,6 +338,7 @@ struct folio { unsigned long flags; union { struct list_head lru; + const struct folio_owner_ops *owner_ops; /* Bit 1 is set */ /* private: avoid cluttering the output */ struct { void *__filler; @@ -417,6 +426,7 @@ FOLIO_MATCH(flags, flags); FOLIO_MATCH(lru, lru); FOLIO_MATCH(mapping, mapping); FOLIO_MATCH(compound_head, lru); +FOLIO_MATCH(compound_head, owner_ops); FOLIO_MATCH(index, index); FOLIO_MATCH(private, private); FOLIO_MATCH(_mapcount, _mapcount); @@ -452,6 +462,13 @@ FOLIO_MATCH(flags, _flags_3); FOLIO_MATCH(compound_head, _head_3); #undef FOLIO_MATCH +struct folio_owner_ops { + /* + * Called once the folio refcount reaches 0. + */ + void (*free)(struct folio *folio); +}; + /** * struct ptdesc - Memory descriptor for page tables. * @__page_flags: Same as page flags. Powerpc only. @@ -560,6 +577,45 @@ static inline void *folio_get_private(struct folio *folio) return folio->private; } +/* + * Use bit 1, since bit 0 is used to indicate a compound page in compound_head, + * which owner_ops is overlaid with. + */ +#define FOLIO_OWNER_OPS_BIT 1UL +#define FOLIO_OWNER_OPS (1UL << FOLIO_OWNER_OPS_BIT) + +/* + * Set the folio owner_ops as well as bit 1 of the pointer to indicate that the + * folio has owner_ops. 
+ */ +static inline void folio_set_owner_ops(struct folio *folio, const struct folio_owner_ops *owner_ops) +{ + owner_ops = (const struct folio_owner_ops *)((unsigned long)owner_ops | FOLIO_OWNER_OPS); + folio->owner_ops = owner_ops; +} + +/* + * Clear the folio owner_ops including bit 1 of the pointer. + */ +static inline void folio_clear_owner_ops(struct folio *folio) +{ + folio->owner_ops = NULL; +} + +/* + * Return the folio's owner_ops if it has them, otherwise, return NULL. + */ +static inline const struct folio_owner_ops *folio_get_owner_ops(struct folio *folio) +{ + const struct folio_owner_ops *owner_ops = folio->owner_ops; + + if (!((unsigned long)owner_ops & FOLIO_OWNER_OPS)) + return NULL; + + owner_ops = (const struct folio_owner_ops *)((unsigned long)owner_ops & ~FOLIO_OWNER_OPS); + return owner_ops; +} + struct page_frag_cache { void * va; #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) diff --git a/mm/swap.c b/mm/swap.c index 638a3f001676..767ff6d8f47b 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -110,6 +110,13 @@ static void page_cache_release(struct folio *folio) void __folio_put(struct folio *folio) { + const struct folio_owner_ops *owner_ops = folio_get_owner_ops(folio); + + if (unlikely(owner_ops)) { + owner_ops->free(folio); + return; + } + if (unlikely(folio_is_zone_device(folio))) { free_zone_device_folio(folio); return; @@ -929,10 +936,22 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs) for (i = 0, j = 0; i < folios->nr; i++) { struct folio *folio = folios->folios[i]; unsigned int nr_refs = refs ? 
refs[i] : 1; + const struct folio_owner_ops *owner_ops; if (is_huge_zero_folio(folio)) continue; + owner_ops = folio_get_owner_ops(folio); + if (unlikely(owner_ops)) { + if (lruvec) { + unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec = NULL; + } + if (folio_ref_sub_and_test(folio, nr_refs)) + owner_ops->free(folio); + continue; + } + if (folio_is_zone_device(folio)) { if (lruvec) { unlock_page_lruvec_irqrestore(lruvec, flags);
From patchwork Fri Nov 8 16:20:38 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13868442
Date: Fri, 8 Nov 2024 16:20:38 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
References: <20241108162040.159038-1-tabba@google.com>
Message-ID: <20241108162040.159038-9-tabba@google.com>
Subject: [RFC PATCH v1 08/10] mm: Use getters and setters to access page pgmap
From: Fuad Tabba
To: linux-mm@kvack.org
Cc: kvm@vger.kernel.org, nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org, david@redhat.com, rppt@kernel.org, jglisse@redhat.com, akpm@linux-foundation.org, muchun.song@linux.dev, simona@ffwll.ch, airlied@gmail.com, pbonzini@redhat.com, seanjc@google.com, willy@infradead.org, jgg@nvidia.com, jhubbard@nvidia.com, ackerleytng@google.com, vannapurve@google.com, mail@maciej.szmigiero.name, kirill.shutemov@linux.intel.com, quic_eberman@quicinc.com, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, tabba@google.com
The pointer to pgmap in struct page is overlaid with folio owner_ops. To indicate that a page/folio has owner ops, bit 1 is set. Therefore, before we can start using owner_ops, we need to ensure that all accesses to page pgmap sanitize the pointer value. This patch introduces the accessors, which will be modified in the following patch to sanitize the pointer values.

No functional change intended.
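The refactoring pattern in this patch can be sketched outside the kernel: funnel every read and write of a raw field through one getter/setter pair so that a later change of representation only touches the accessors, never the callers. This is a minimal user-space sketch; `struct page`, `struct dev_pagemap`, and `demo()` here are simplified stand-ins, not the kernel definitions.

```c
/* Stand-ins for the kernel structures (simplified models). */
struct dev_pagemap { int type; };
struct page { struct dev_pagemap *pgmap; };

/*
 * Getter/setter: every access funnels through one place, so a later
 * patch can add pointer sanitization without touching any caller.
 */
static inline struct dev_pagemap *page_get_pgmap(const struct page *page)
{
	return page->pgmap;
}

static inline void page_set_pgmap(struct page *page, struct dev_pagemap *pgmap)
{
	page->pgmap = pgmap;
}

/* Exercise the accessors; returns 0 on success. */
int demo(void)
{
	struct dev_pagemap pm = { .type = 1 };
	struct page pg;

	page_set_pgmap(&pg, &pm);
	if (page_get_pgmap(&pg) != &pm)
		return 1;
	return 0;
}
```

With plain accessors like these in place, the next patch in the series only has to change the two accessor bodies to add tagging and masking.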
Signed-off-by: Fuad Tabba
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c |  4 +++-
 drivers/pci/p2pdma.c                   |  8 +++++---
 include/linux/memremap.h               |  6 +++---
 include/linux/mm_types.h               | 13 +++++++++++++
 lib/test_hmm.c                         |  2 +-
 mm/hmm.c                               |  2 +-
 mm/memory.c                            |  2 +-
 mm/memremap.c                          | 19 +++++++++++--------
 mm/migrate_device.c                    |  4 ++--
 mm/mm_init.c                           |  2 +-
 10 files changed, 41 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 1a072568cef6..d7d9d9476bb0 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -88,7 +88,9 @@ struct nouveau_dmem {

 static struct nouveau_dmem_chunk *nouveau_page_to_chunk(struct page *page)
 {
-	return container_of(page->pgmap, struct nouveau_dmem_chunk, pagemap);
+	struct dev_pagemap *pgmap = page_get_pgmap(page);
+
+	return container_of(pgmap, struct nouveau_dmem_chunk, pagemap);
 }

 static struct nouveau_drm *page_to_drm(struct page *page)
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 4f47a13cb500..19519bb4ba56 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -193,7 +193,7 @@ static const struct attribute_group p2pmem_group = {

 static void p2pdma_page_free(struct page *page)
 {
-	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page->pgmap);
+	struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page_get_pgmap(page));
 	/* safe to dereference while a reference is held to the percpu ref */
 	struct pci_p2pdma *p2pdma =
 		rcu_dereference_protected(pgmap->provider->p2pdma, 1);
@@ -1016,8 +1016,10 @@ enum pci_p2pdma_map_type
 pci_p2pdma_map_segment(struct pci_p2pdma_map_state *state, struct device *dev,
 		       struct scatterlist *sg)
 {
-	if (state->pgmap != sg_page(sg)->pgmap) {
-		state->pgmap = sg_page(sg)->pgmap;
+	struct dev_pagemap *pgmap = page_get_pgmap(sg_page(sg));
+
+	if (state->pgmap != pgmap) {
+		state->pgmap = pgmap;
 		state->map = pci_p2pdma_map_type(state->pgmap, dev);
 		state->bus_off = to_p2p_pgmap(state->pgmap)->bus_offset;
 	}
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 3f7143ade32c..060e27b6aee0 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -161,7 +161,7 @@ static inline bool is_device_private_page(const struct page *page)
 {
 	return IS_ENABLED(CONFIG_DEVICE_PRIVATE) &&
 		is_zone_device_page(page) &&
-		page->pgmap->type == MEMORY_DEVICE_PRIVATE;
+		page_get_pgmap(page)->type == MEMORY_DEVICE_PRIVATE;
 }

 static inline bool folio_is_device_private(const struct folio *folio)
@@ -173,13 +173,13 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 {
 	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
 		is_zone_device_page(page) &&
-		page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
+		page_get_pgmap(page)->type == MEMORY_DEVICE_PCI_P2PDMA;
 }

 static inline bool is_device_coherent_page(const struct page *page)
 {
 	return is_zone_device_page(page) &&
-		page->pgmap->type == MEMORY_DEVICE_COHERENT;
+		page_get_pgmap(page)->type == MEMORY_DEVICE_COHERENT;
 }

 static inline bool folio_is_device_coherent(const struct folio *folio)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e06286f44f1..27075ea24e67 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -616,6 +616,19 @@ static inline const struct folio_owner_ops *folio_get_owner_ops(struct folio *fo
 	return owner_ops;
 }

+/*
+ * Get the page dev_pagemap pgmap pointer.
+ */
+#define page_get_pgmap(page) ((page)->pgmap)
+
+/*
+ * Set the page dev_pagemap pgmap pointer.
+ */
+static inline void page_set_pgmap(struct page *page, struct dev_pagemap *pgmap)
+{
+	page->pgmap = pgmap;
+}
+
 struct page_frag_cache {
 	void * va;
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 056f2e411d7b..d3e3843f57dd 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -195,7 +195,7 @@ static int dmirror_fops_release(struct inode *inode, struct file *filp)

 static struct dmirror_chunk *dmirror_page_to_chunk(struct page *page)
 {
-	return container_of(page->pgmap, struct dmirror_chunk, pagemap);
+	return container_of(page_get_pgmap(page), struct dmirror_chunk, pagemap);
 }

 static struct dmirror_device *dmirror_page_to_device(struct page *page)
diff --git a/mm/hmm.c b/mm/hmm.c
index 7e0229ae4a5a..b5f5ac218fda 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -248,7 +248,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		 * just report the PFN.
 		 */
 		if (is_device_private_entry(entry) &&
-		    pfn_swap_entry_to_page(entry)->pgmap->owner ==
+		    page_get_pgmap(pfn_swap_entry_to_page(entry))->owner ==
 		    range->dev_private_owner) {
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
diff --git a/mm/memory.c b/mm/memory.c
index 80850cad0e6f..5853fa5767c7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4276,7 +4276,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			 */
 			get_page(vmf->page);
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
-			ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
+			ret = page_get_pgmap(vmf->page)->ops->migrate_to_ram(vmf);
 			put_page(vmf->page);
 		} else if (is_hwpoison_entry(entry)) {
 			ret = VM_FAULT_HWPOISON;
diff --git a/mm/memremap.c b/mm/memremap.c
index 40d4547ce514..931bc85da1df 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -458,8 +458,9 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);

 void free_zone_device_folio(struct folio *folio)
 {
-	if (WARN_ON_ONCE(!folio->page.pgmap->ops ||
-			 !folio->page.pgmap->ops->page_free))
+	struct dev_pagemap *pgmap = page_get_pgmap(&folio->page);
+
+	if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
 		return;

 	mem_cgroup_uncharge(folio);
@@ -486,17 +487,17 @@ void free_zone_device_folio(struct folio *folio)
 	 * to clear folio->mapping.
 	 */
 	folio->mapping = NULL;
-	folio->page.pgmap->ops->page_free(folio_page(folio, 0));
+	pgmap->ops->page_free(folio_page(folio, 0));

-	if (folio->page.pgmap->type != MEMORY_DEVICE_PRIVATE &&
-	    folio->page.pgmap->type != MEMORY_DEVICE_COHERENT)
+	if (pgmap->type != MEMORY_DEVICE_PRIVATE &&
+	    pgmap->type != MEMORY_DEVICE_COHERENT)
 		/*
 		 * Reset the refcount to 1 to prepare for handing out the page
 		 * again.
 		 */
 		folio_set_count(folio, 1);
 	else
-		put_dev_pagemap(folio->page.pgmap);
+		put_dev_pagemap(pgmap);
 }

 void zone_device_page_init(struct page *page)
@@ -505,7 +506,7 @@ void zone_device_page_init(struct page *page)
 	 * Drivers shouldn't be allocating pages after calling
 	 * memunmap_pages().
 	 */
-	WARN_ON_ONCE(!percpu_ref_tryget_live(&page->pgmap->ref));
+	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_get_pgmap(page)->ref));
 	set_page_count(page, 1);
 	lock_page(page);
 }
@@ -514,7 +515,9 @@ EXPORT_SYMBOL_GPL(zone_device_page_init);
 #ifdef CONFIG_FS_DAX
 bool __put_devmap_managed_folio_refs(struct folio *folio, int refs)
 {
-	if (folio->page.pgmap->type != MEMORY_DEVICE_FS_DAX)
+	struct dev_pagemap *pgmap = page_get_pgmap(&folio->page);
+
+	if (pgmap->type != MEMORY_DEVICE_FS_DAX)
 		return false;

 	/*
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 9cf26592ac93..368def358d02 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -135,7 +135,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			page = pfn_swap_entry_to_page(entry);
 			if (!(migrate->flags &
 			      MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
-			    page->pgmap->owner != migrate->pgmap_owner)
+			    page_get_pgmap(page)->owner != migrate->pgmap_owner)
 				goto next;

 			mpfn = migrate_pfn(page_to_pfn(page)) |
@@ -156,7 +156,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 				goto next;
 			else if (page && is_device_coherent_page(page) &&
 			    (!(migrate->flags &
 			       MIGRATE_VMA_SELECT_DEVICE_COHERENT) ||
-			     page->pgmap->owner != migrate->pgmap_owner))
+			     page_get_pgmap(page)->owner != migrate->pgmap_owner))
 				goto next;
 			mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
 			mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 1c205b0a86ed..279cdaebfd2b 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -995,7 +995,7 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 	 * and zone_device_data. It is a bug if a ZONE_DEVICE page is
 	 * ever freed or placed on a driver-private list.
 	 */
-	page->pgmap = pgmap;
+	page_set_pgmap(page, pgmap);
 	page->zone_device_data = NULL;

 	/*

From patchwork Fri Nov 8 16:20:39 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13868443
Date: Fri, 8 Nov 2024 16:20:39 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
References:
 <20241108162040.159038-1-tabba@google.com>
Message-ID: <20241108162040.159038-10-tabba@google.com>
Subject: [RFC PATCH v1 09/10] mm: Use owner_ops on folio_put for zone device pages
From: Fuad Tabba
To: linux-mm@kvack.org
Cc: kvm@vger.kernel.org, nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org, david@redhat.com, rppt@kernel.org, jglisse@redhat.com, akpm@linux-foundation.org, muchun.song@linux.dev, simona@ffwll.ch, airlied@gmail.com, pbonzini@redhat.com, seanjc@google.com, willy@infradead.org, jgg@nvidia.com, jhubbard@nvidia.com, ackerleytng@google.com, vannapurve@google.com, mail@maciej.szmigiero.name, kirill.shutemov@linux.intel.com, quic_eberman@quicinc.com, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, tabba@google.com
Now that we have the folio_owner_ops callback, use it for zone device pages instead of using a dedicated callback.

Note that the pointer to struct dev_pagemap (pgmap) in struct page is overlaid with the struct folio owner_ops. Therefore, make struct dev_pagemap contain an instance of struct folio_owner_ops as its first member, so that a pgmap pointer can be handled the same way as an owner_ops pointer.

Also note that, although struct dev_pagemap_ops has a page_free() function, it has neither the same intent nor the same behavior as the folio_owner_ops free() callback: page_free() is an optional callback that informs drivers using ZONE_DEVICE that a page is being freed.
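The two tricks this patch relies on can be demonstrated in isolation: a first-member overlay guarded by a static assertion, and a low pointer bit used as a tag that the getter strips and the setter sets. This is a minimal user-space sketch under simplifying assumptions; the struct definitions and `demo()` are hypothetical models, not the kernel's, and `FOLIO_OWNER_OPS` here is just the value 2 (bit 1), mirroring the patch.

```c
#include <stddef.h>
#include <stdint.h>

/* Bit 1 of the overlaid pointer marks "has owner ops" (value 2). */
#define FOLIO_OWNER_OPS 2UL

struct folio;
struct folio_owner_ops { void (*free)(struct folio *folio); };

/*
 * folio_ops must be the first member so that a tagged pointer to the
 * dev_pagemap is simultaneously a tagged pointer to its owner_ops.
 */
struct dev_pagemap {
	struct folio_owner_ops folio_ops;
	int type;
};
_Static_assert(offsetof(struct dev_pagemap, folio_ops) == 0,
	       "folio_ops must be the first member");

struct page { struct dev_pagemap *pgmap; };

/* Getter strips the tag bit; setter sets it. */
static inline struct dev_pagemap *page_get_pgmap(const struct page *page)
{
	return (struct dev_pagemap *)((uintptr_t)page->pgmap & ~FOLIO_OWNER_OPS);
}

static inline void page_set_pgmap(struct page *page, struct dev_pagemap *pgmap)
{
	page->pgmap = (struct dev_pagemap *)((uintptr_t)pgmap | FOLIO_OWNER_OPS);
}

/* Exercise tagging and sanitizing; returns 0 on success. */
int demo(void)
{
	/* _Alignas(8) keeps bit 1 of the address clear, as for real pgmaps. */
	static _Alignas(8) struct dev_pagemap pm = { .type = 5 };
	struct page pg;

	page_set_pgmap(&pg, &pm);
	if (((uintptr_t)pg.pgmap & FOLIO_OWNER_OPS) == 0)
		return 1;	/* tag bit must be set in the raw field */
	if (page_get_pgmap(&pg) != &pm)
		return 2;	/* getter must return the sanitized pointer */
	return 0;
}
```

The scheme only works because dev_pagemap instances are aligned well enough that bit 1 of their address is always zero, which is why the callers must never read the raw field directly once tagging is in effect.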
Signed-off-by: Fuad Tabba
---
 include/linux/memremap.h |  8 +++++++
 include/linux/mm_types.h | 16 ++++++++++++--
 mm/internal.h            |  1 -
 mm/memremap.c            | 44 --------------------------------
 mm/mm_init.c             | 46 ++++++++++++++++++++++++++++++++++++++++
 mm/swap.c                | 18 ++--------------
 6 files changed, 70 insertions(+), 63 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 060e27b6aee0..5b68bbc588a3 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -106,6 +106,7 @@ struct dev_pagemap_ops {

 /**
  * struct dev_pagemap - metadata for ZONE_DEVICE mappings
+ * @folio_ops: method table for folio operations.
  * @altmap: pre-allocated/reserved memory for vmemmap allocations
  * @ref: reference count that pins the devm_memremap_pages() mapping
  * @done: completion for @ref
@@ -125,6 +126,7 @@ struct dev_pagemap_ops {
  * @ranges: array of ranges to be mapped when nr_range > 1
  */
 struct dev_pagemap {
+	struct folio_owner_ops folio_ops;
 	struct vmem_altmap altmap;
 	struct percpu_ref ref;
 	struct completion done;
@@ -140,6 +142,12 @@ struct dev_pagemap {
 	};
 };

+/*
+ * The folio_owner_ops structure needs to be first since pgmap in struct page is
+ * overlaid with owner_ops in struct folio.
+ */
+static_assert(offsetof(struct dev_pagemap, folio_ops) == 0);
+
 static inline bool pgmap_has_memory_failure(struct dev_pagemap *pgmap)
 {
 	return pgmap->ops && pgmap->ops->memory_failure;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 27075ea24e67..a72fda20d5e9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -427,6 +427,7 @@ FOLIO_MATCH(lru, lru);
 FOLIO_MATCH(mapping, mapping);
 FOLIO_MATCH(compound_head, lru);
 FOLIO_MATCH(compound_head, owner_ops);
+FOLIO_MATCH(pgmap, owner_ops);
 FOLIO_MATCH(index, index);
 FOLIO_MATCH(private, private);
 FOLIO_MATCH(_mapcount, _mapcount);
@@ -618,15 +619,26 @@ static inline const struct folio_owner_ops *folio_get_owner_ops(struct folio *fo

 /*
  * Get the page dev_pagemap pgmap pointer.
+ *
+ * The page pgmap is overlaid with the folio owner_ops, where bit 1 is used to
+ * indicate that the page/folio has owner ops. The dev_pagemap contains
+ * owner_ops and is handled the same way. The getter returns a sanitized
+ * pointer.
  */
-#define page_get_pgmap(page) ((page)->pgmap)
+#define page_get_pgmap(page) \
+	((struct dev_pagemap *)((unsigned long)(page)->pgmap & ~FOLIO_OWNER_OPS))

 /*
  * Set the page dev_pagemap pgmap pointer.
+ *
+ * The page pgmap is overlaid with the folio owner_ops, where bit 1 is used to
+ * indicate that the page/folio has owner ops. The dev_pagemap contains
+ * owner_ops and is handled the same way. The setter sets bit 1 to indicate
+ * that the page has owner_ops.
  */
 static inline void page_set_pgmap(struct page *page, struct dev_pagemap *pgmap)
 {
-	page->pgmap = pgmap;
+	page->pgmap = (struct dev_pagemap *)((unsigned long)pgmap | FOLIO_OWNER_OPS);
 }

 struct page_frag_cache {
diff --git a/mm/internal.h b/mm/internal.h
index 5a7302baeed7..a041247bed10 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1262,7 +1262,6 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 		       unsigned long addr, int *flags, bool writable,
 		       int *last_cpupid);

-void free_zone_device_folio(struct folio *folio);
 int migrate_device_coherent_folio(struct folio *folio);

 struct vm_struct *__get_vm_area_node(unsigned long size,
diff --git a/mm/memremap.c b/mm/memremap.c
index 931bc85da1df..9fd5f57219eb 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -456,50 +456,6 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 }
 EXPORT_SYMBOL_GPL(get_dev_pagemap);

-void free_zone_device_folio(struct folio *folio)
-{
-	struct dev_pagemap *pgmap = page_get_pgmap(&folio->page);
-
-	if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
-		return;
-
-	mem_cgroup_uncharge(folio);
-
-	/*
-	 * Note: we don't expect anonymous compound pages yet. Once supported
-	 * and we could PTE-map them similar to THP, we'd have to clear
-	 * PG_anon_exclusive on all tail pages.
-	 */
-	if (folio_test_anon(folio)) {
-		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
-		__ClearPageAnonExclusive(folio_page(folio, 0));
-	}
-
-	/*
-	 * When a device managed page is freed, the folio->mapping field
-	 * may still contain a (stale) mapping value. For example, the
-	 * lower bits of folio->mapping may still identify the folio as an
-	 * anonymous folio. Ultimately, this entire field is just stale
-	 * and wrong, and it will cause errors if not cleared.
-	 *
-	 * For other types of ZONE_DEVICE pages, migration is either
-	 * handled differently or not done at all, so there is no need
-	 * to clear folio->mapping.
-	 */
-	folio->mapping = NULL;
-	pgmap->ops->page_free(folio_page(folio, 0));
-
-	if (pgmap->type != MEMORY_DEVICE_PRIVATE &&
-	    pgmap->type != MEMORY_DEVICE_COHERENT)
-		/*
-		 * Reset the refcount to 1 to prepare for handing out the page
-		 * again.
-		 */
-		folio_set_count(folio, 1);
-	else
-		put_dev_pagemap(pgmap);
-}
-
 void zone_device_page_init(struct page *page)
 {
 	/*
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 279cdaebfd2b..47c1f8fd4914 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -974,6 +974,51 @@ static void __init memmap_init(void)
 }

 #ifdef CONFIG_ZONE_DEVICE
+
+static void free_zone_device_folio(struct folio *folio)
+{
+	struct dev_pagemap *pgmap = page_get_pgmap(&folio->page);
+
+	if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
+		return;
+
+	mem_cgroup_uncharge(folio);
+
+	/*
+	 * Note: we don't expect anonymous compound pages yet. Once supported
+	 * and we could PTE-map them similar to THP, we'd have to clear
+	 * PG_anon_exclusive on all tail pages.
+	 */
+	if (folio_test_anon(folio)) {
+		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
+		__ClearPageAnonExclusive(folio_page(folio, 0));
+	}
+
+	/*
+	 * When a device managed page is freed, the folio->mapping field
+	 * may still contain a (stale) mapping value. For example, the
+	 * lower bits of folio->mapping may still identify the folio as an
+	 * anonymous folio. Ultimately, this entire field is just stale
+	 * and wrong, and it will cause errors if not cleared.
+	 *
+	 * For other types of ZONE_DEVICE pages, migration is either
+	 * handled differently or not done at all, so there is no need
+	 * to clear folio->mapping.
+	 */
+	folio->mapping = NULL;
+	pgmap->ops->page_free(folio_page(folio, 0));
+
+	if (pgmap->type != MEMORY_DEVICE_PRIVATE &&
+	    pgmap->type != MEMORY_DEVICE_COHERENT)
+		/*
+		 * Reset the refcount to 1 to prepare for handing out the page
+		 * again.
+		 */
+		folio_set_count(folio, 1);
+	else
+		put_dev_pagemap(pgmap);
+}
+
 static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 					  unsigned long zone_idx, int nid,
 					  struct dev_pagemap *pgmap)
@@ -995,6 +1040,7 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 	 * and zone_device_data. It is a bug if a ZONE_DEVICE page is
 	 * ever freed or placed on a driver-private list.
 	 */
+	pgmap->folio_ops.free = free_zone_device_folio;
 	page_set_pgmap(page, pgmap);
 	page->zone_device_data = NULL;

diff --git a/mm/swap.c b/mm/swap.c
index 767ff6d8f47b..d2578465e270 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -117,11 +117,6 @@ void __folio_put(struct folio *folio)
 		return;
 	}

-	if (unlikely(folio_is_zone_device(folio))) {
-		free_zone_device_folio(folio);
-		return;
-	}
-
 	if (folio_test_hugetlb(folio)) {
 		free_huge_folio(folio);
 		return;
@@ -947,20 +942,11 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 				unlock_page_lruvec_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
-			if (folio_ref_sub_and_test(folio, nr_refs))
-				owner_ops->free(folio);
-			continue;
-		}
-
-		if (folio_is_zone_device(folio)) {
-			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-				lruvec = NULL;
-			}
+			/* fenced by folio_is_zone_device() */
 			if (put_devmap_managed_folio_refs(folio, nr_refs))
 				continue;
 			if (folio_ref_sub_and_test(folio, nr_refs))
-				free_zone_device_folio(folio);
+				owner_ops->free(folio);
 			continue;
 		}

From patchwork Fri Nov 8 16:20:40 2024
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 13868444
Date: Fri, 8 Nov 2024 16:20:40 +0000
In-Reply-To: <20241108162040.159038-1-tabba@google.com>
References: <20241108162040.159038-1-tabba@google.com>
Message-ID: <20241108162040.159038-11-tabba@google.com>
Subject: [RFC PATCH v1 10/10] mm: hugetlb: Use owner_ops on folio_put for hugetlb
From: Fuad Tabba
To: linux-mm@kvack.org
Cc: kvm@vger.kernel.org, nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org, david@redhat.com, rppt@kernel.org, jglisse@redhat.com, akpm@linux-foundation.org, muchun.song@linux.dev, simona@ffwll.ch, airlied@gmail.com, pbonzini@redhat.com, seanjc@google.com, willy@infradead.org, jgg@nvidia.com, jhubbard@nvidia.com, ackerleytng@google.com, vannapurve@google.com, mail@maciej.szmigiero.name, kirill.shutemov@linux.intel.com, quic_eberman@quicinc.com, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, tabba@google.com
Now that we have the folio_owner_ops callback, use it for hugetlb pages
instead of using a dedicated callback.

Since owner_ops is overlaid with lru, we need to unset owner_ops to
allow the use of lru when the folio is isolated.
At that point we know that the reference count is elevated, will not
reach 0, and thus not trigger a callback. Therefore, it is safe to do
so provided we restore it before we put the folio back.

Signed-off-by: Fuad Tabba
---
 include/linux/hugetlb.h |  2 --
 mm/hugetlb.c            | 57 +++++++++++++++++++++++++++++++++--------
 mm/swap.c               | 14 ----------
 3 files changed, 47 insertions(+), 26 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index e846d7dac77c..500848862702 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -20,8 +20,6 @@ struct user_struct;
 struct mmu_gather;
 struct node;
 
-void free_huge_folio(struct folio *folio);
-
 #ifdef CONFIG_HUGETLB_PAGE
 
 #include
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2308e94d8615..4e1c87e37968 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -89,6 +89,33 @@ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
 static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
 static struct resv_map *vma_resv_map(struct vm_area_struct *vma);
+static void free_huge_folio(struct folio *folio);
+
+static const struct folio_owner_ops hugetlb_owner_ops = {
+	.free = free_huge_folio,
+};
+
+/*
+ * Mark this folio as a hugetlb-owned folio.
+ *
+ * Set the folio hugetlb flag and owner operations.
+ */
+static void folio_set_hugetlb_owner(struct folio *folio)
+{
+	__folio_set_hugetlb(folio);
+	folio_set_owner_ops(folio, &hugetlb_owner_ops);
+}
+
+/*
+ * Unmark this folio from being a hugetlb-owned folio.
+ *
+ * Clear the folio hugetlb flag and owner operations.
+ */
+static void folio_clear_hugetlb_owner(struct folio *folio)
+{
+	folio_clear_owner_ops(folio);
+	__folio_clear_hugetlb(folio);
+}
 
 static void hugetlb_free_folio(struct folio *folio)
 {
@@ -1617,7 +1644,7 @@ static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 	 * to tail struct pages.
 	 */
 	if (!folio_test_hugetlb_vmemmap_optimized(folio)) {
-		__folio_clear_hugetlb(folio);
+		folio_clear_hugetlb_owner(folio);
 	}
 
 	h->nr_huge_pages--;
@@ -1641,7 +1668,7 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
 		h->surplus_huge_pages++;
 		h->surplus_huge_pages_node[nid]++;
 	}
-	__folio_set_hugetlb(folio);
+	folio_set_hugetlb_owner(folio);
 	folio_change_private(folio, NULL);
 
 	/*
@@ -1692,7 +1719,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
 	 */
 	if (folio_test_hugetlb(folio)) {
 		spin_lock_irq(&hugetlb_lock);
-		__folio_clear_hugetlb(folio);
+		folio_clear_hugetlb_owner(folio);
 		spin_unlock_irq(&hugetlb_lock);
 	}
 
@@ -1793,7 +1820,7 @@ static void bulk_vmemmap_restore_error(struct hstate *h,
 		list_for_each_entry_safe(folio, t_folio, non_hvo_folios, _hugetlb_list) {
 			list_del(&folio->_hugetlb_list);
 			spin_lock_irq(&hugetlb_lock);
-			__folio_clear_hugetlb(folio);
+			folio_clear_hugetlb_owner(folio);
 			spin_unlock_irq(&hugetlb_lock);
 			update_and_free_hugetlb_folio(h, folio, false);
 			cond_resched();
@@ -1818,7 +1845,7 @@ static void bulk_vmemmap_restore_error(struct hstate *h,
 		} else {
 			list_del(&folio->_hugetlb_list);
 			spin_lock_irq(&hugetlb_lock);
-			__folio_clear_hugetlb(folio);
+			folio_clear_hugetlb_owner(folio);
 			spin_unlock_irq(&hugetlb_lock);
 			update_and_free_hugetlb_folio(h, folio, false);
 			cond_resched();
@@ -1851,14 +1878,14 @@ static void update_and_free_pages_bulk(struct hstate *h,
 	 * should only be pages on the non_hvo_folios list.
 	 * Do note that the non_hvo_folios list could be empty.
 	 * Without HVO enabled, ret will be 0 and there is no need to call
-	 * __folio_clear_hugetlb as this was done previously.
+	 * folio_clear_hugetlb_owner as this was done previously.
 	 */
 	VM_WARN_ON(!list_empty(folio_list));
 	VM_WARN_ON(ret < 0);
 	if (!list_empty(&non_hvo_folios) && ret) {
 		spin_lock_irq(&hugetlb_lock);
 		list_for_each_entry(folio, &non_hvo_folios, _hugetlb_list)
-			__folio_clear_hugetlb(folio);
+			folio_clear_hugetlb_owner(folio);
 		spin_unlock_irq(&hugetlb_lock);
 	}
 
@@ -1879,7 +1906,7 @@ struct hstate *size_to_hstate(unsigned long size)
 	return NULL;
 }
 
-void free_huge_folio(struct folio *folio)
+static void free_huge_folio(struct folio *folio)
 {
 	/*
 	 * Can't pass hstate in here because it is called from the
@@ -1959,7 +1986,7 @@ static void __prep_account_new_huge_page(struct hstate *h, int nid)
 
 static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
-	__folio_set_hugetlb(folio);
+	folio_set_hugetlb_owner(folio);
 	INIT_LIST_HEAD(&folio->_hugetlb_list);
 	hugetlb_set_folio_subpool(folio, NULL);
 	set_hugetlb_cgroup(folio, NULL);
@@ -7428,6 +7455,14 @@ bool folio_isolate_hugetlb(struct folio *folio, struct list_head *list)
 		goto unlock;
 	}
 	folio_clear_hugetlb_migratable(folio);
+	/*
+	 * Clear folio->owner_ops; now we can use folio->lru.
+	 * Note that the folio cannot get freed because we are holding a
+	 * reference. The reference will be put in folio_putback_hugetlb(),
+	 * after restoring folio->owner_ops.
+	 */
+	folio_clear_owner_ops(folio);
+	INIT_LIST_HEAD(&folio->lru);
 	list_del_init(&folio->_hugetlb_list);
 	list_add_tail(&folio->lru, list);
 unlock:
@@ -7480,7 +7515,9 @@ void folio_putback_hugetlb(struct folio *folio)
 {
 	spin_lock_irq(&hugetlb_lock);
 	folio_set_hugetlb_migratable(folio);
-	list_del_init(&folio->lru);
+	list_del(&folio->lru);
+	/* Restore folio->owner_ops since we can no longer use folio->lru. */
+	folio_set_owner_ops(folio, &hugetlb_owner_ops);
 	list_add_tail(&folio->_hugetlb_list,
 		      &(folio_hstate(folio))->hugepage_activelist);
 	spin_unlock_irq(&hugetlb_lock);
 	folio_put(folio);
diff --git a/mm/swap.c b/mm/swap.c
index d2578465e270..9798ca47f26a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -117,11 +117,6 @@ void __folio_put(struct folio *folio)
 		return;
 	}
 
-	if (folio_test_hugetlb(folio)) {
-		free_huge_folio(folio);
-		return;
-	}
-
 	page_cache_release(folio);
 	folio_unqueue_deferred_split(folio);
 	mem_cgroup_uncharge(folio);
@@ -953,15 +948,6 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 		if (!folio_ref_sub_and_test(folio, nr_refs))
 			continue;
 
-		/* hugetlb has its own memcg */
-		if (folio_test_hugetlb(folio)) {
-			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-				lruvec = NULL;
-			}
-			free_huge_folio(folio);
-			continue;
-		}
 		folio_unqueue_deferred_split(folio);
 		__page_cache_release(folio, &lruvec, &flags);