From patchwork Tue Aug 6 02:21:11 2024
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13754340
Date: Mon, 5 Aug 2024 20:21:11 -0600
In-Reply-To: <20240806022114.3320543-1-yuzhao@google.com>
References: <20240806022114.3320543-1-yuzhao@google.com>
X-Mailer: git-send-email 2.46.0.rc2.264.g509ed76dc8-goog
Message-ID: <20240806022114.3320543-2-yuzhao@google.com>
Subject: [RFC PATCH 1/4] mm: HVO: introduce helper function to update and
 flush pgtable
From: Yu Zhao
To: Catalin Marinas, Will Deacon
Cc: Andrew Morton, David Rientjes, Douglas Anderson, Frank van der Linden,
 Mark Rutland, Muchun Song, Nanyong Sun, Yang Shi,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Muchun Song, Yu Zhao
From: Nanyong Sun

Add pmd/pte update and TLB flush helper functions for updating page
tables. This refactoring lets each architecture hook in its own special
logic, in preparation for arm64, which needs to follow the
break-before-make sequence when updating live kernel page tables.

Signed-off-by: Nanyong Sun
Reviewed-by: Muchun Song
Signed-off-by: Yu Zhao
---
 mm/hugetlb_vmemmap.c | 55 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 43 insertions(+), 12 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 829112b0a914..2dd92e58f304 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -46,6 +46,37 @@ struct vmemmap_remap_walk {
 	unsigned long		flags;
 };
 
+#ifndef vmemmap_update_pmd
+static inline void vmemmap_update_pmd(unsigned long addr,
+				      pmd_t *pmdp, pte_t *ptep)
+{
+	pmd_populate_kernel(&init_mm, pmdp, ptep);
+}
+#endif
+
+#ifndef vmemmap_update_pte
+static inline void vmemmap_update_pte(unsigned long addr,
+				      pte_t *ptep, pte_t pte)
+{
+	set_pte_at(&init_mm, addr, ptep, pte);
+}
+#endif
+
+#ifndef vmemmap_flush_tlb_all
+static inline void vmemmap_flush_tlb_all(void)
+{
+	flush_tlb_all();
+}
+#endif
+
+#ifndef vmemmap_flush_tlb_range
+static inline void vmemmap_flush_tlb_range(unsigned long start,
+					   unsigned long end)
+{
+	flush_tlb_kernel_range(start, end);
+}
+#endif
+
 static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
 			     struct vmemmap_remap_walk *walk)
 {
@@ -81,9 +112,9 @@ static int vmemmap_split_pmd(pmd_t *pmd, struct page *head, unsigned long start,
 
 		/* Make pte visible before pmd. See comment in pmd_install(). */
 		smp_wmb();
-		pmd_populate_kernel(&init_mm, pmd, pgtable);
+		vmemmap_update_pmd(start, pmd, pgtable);
 		if (!(walk->flags & VMEMMAP_SPLIT_NO_TLB_FLUSH))
-			flush_tlb_kernel_range(start, start + PMD_SIZE);
+			vmemmap_flush_tlb_range(start, start + PMD_SIZE);
 	} else {
 		pte_free_kernel(&init_mm, pgtable);
 	}
@@ -171,7 +202,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
 		return ret;
 
 	if (walk->remap_pte && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
-		flush_tlb_kernel_range(start, end);
+		vmemmap_flush_tlb_range(start, end);
 
 	return 0;
 }
@@ -220,15 +251,15 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
 
 		/*
 		 * Makes sure that preceding stores to the page contents from
-		 * vmemmap_remap_free() become visible before the set_pte_at()
-		 * write.
+		 * vmemmap_remap_free() become visible before the
+		 * vmemmap_update_pte() write.
 		 */
 		smp_wmb();
 	}
 
 	entry = mk_pte(walk->reuse_page, pgprot);
 	list_add(&page->lru, walk->vmemmap_pages);
-	set_pte_at(&init_mm, addr, pte, entry);
+	vmemmap_update_pte(addr, pte, entry);
 }
 
 /*
@@ -267,10 +298,10 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
 
 	/*
 	 * Makes sure that preceding stores to the page contents become visible
-	 * before the set_pte_at() write.
+	 * before the vmemmap_update_pte() write.
 	 */
 	smp_wmb();
-	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+	vmemmap_update_pte(addr, pte, mk_pte(page, pgprot));
 }
 
 /**
@@ -536,7 +567,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 	}
 
 	if (restored)
-		flush_tlb_all();
+		vmemmap_flush_tlb_all();
 	if (!ret)
 		ret = restored;
 	return ret;
@@ -664,7 +695,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
 			break;
 	}
 
-	flush_tlb_all();
+	vmemmap_flush_tlb_all();
 
 	/* avoid writes from page_ref_add_unless() while folding vmemmap */
 	synchronize_rcu();
@@ -684,7 +715,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
 	 * allowing more vmemmap remaps to occur.
 	 */
 	if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
-		flush_tlb_all();
+		vmemmap_flush_tlb_all();
 		free_vmemmap_page_list(&vmemmap_pages);
 		INIT_LIST_HEAD(&vmemmap_pages);
 		__hugetlb_vmemmap_optimize_folio(h, folio, &vmemmap_pages,
@@ -692,7 +723,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
 		}
 	}
 
-	flush_tlb_all();
+	vmemmap_flush_tlb_all();
 	free_vmemmap_page_list(&vmemmap_pages);
 }
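
The four helpers introduced above are guarded by #ifndef so that an
architecture can take one over by defining a macro of the same name
before mm/hugetlb_vmemmap.c is compiled; the generic fallback then
drops out. As a rough sketch of the intent only (this is hypothetical
and not the arm64 implementation from the later patches in this
series; the flush granularity and the handling of concurrent accessors
are deliberately simplified), a break-before-make override of
vmemmap_update_pte() could look like:

	/*
	 * Hypothetical per-arch override, for illustration only.
	 * Defining the macro hides the generic #ifndef fallback in
	 * mm/hugetlb_vmemmap.c.
	 *
	 * Break-before-make: invalidate the old entry and flush its
	 * TLB entry before installing the new one, so no CPU can
	 * observe two different valid translations for the same
	 * vmemmap address.
	 */
	#define vmemmap_update_pte vmemmap_update_pte
	static inline void vmemmap_update_pte(unsigned long addr,
					      pte_t *ptep, pte_t pte)
	{
		/* Break: clear the old entry. */
		pte_clear(&init_mm, addr, ptep);
		/* Flush the stale translation for this address. */
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
		/* Make: install the new entry. */
		set_pte_at(&init_mm, addr, ptep, pte);
	}

In the window between the break and the make, any concurrent access to
the affected vmemmap page would fault; handling that window safely is
the part the arm64 patches in this series have to solve.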