From patchwork Thu Oct 17 09:47:25 2024
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13839733
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, mgorman@suse.de,
    muchun.song@linux.dev, vbabka@kernel.org, akpm@linux-foundation.org,
    zokeefe@google.com, rientjes@google.com, jannh@google.com, peterx@redhat.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org, Qi Zheng
Subject: [PATCH v1 6/7] x86: mm: free page table pages by RCU instead of semi RCU
Date: Thu, 17 Oct 2024 17:47:25 +0800
Now, if CONFIG_MMU_GATHER_RCU_TABLE_FREE is selected, the page table
pages will be freed by semi RCU, that is:

 - batch table freeing: asynchronous free by RCU
 - single table freeing: IPI + synchronous free

In this way, the page table can be traversed locklessly by disabling
IRQs in paths such as fast GUP. But this is not enough to free the
empty PTE page table pages in paths other than the munmap and exit_mmap
paths, because the IPI cannot be synchronized with rcu_read_lock() in
pte_offset_map{_lock}().
In preparation for supporting reclamation of empty PTE page table
pages, let a single table also be freed by RCU like batch table
freeing. Then we can also use pte_offset_map() etc. to prevent the PTE
page from being freed.

Like pte_free_defer(), we can also safely use ptdesc->pt_rcu_head to
free the page table pages:

 - The pt_rcu_head is unioned with pt_list and pmd_huge_pte.

 - For pt_list, it is used to manage the PGD page in x86. Fortunately
   tlb_remove_table() will not be used to free PGD pages, so it is safe
   to use pt_rcu_head.

 - For pmd_huge_pte, we will do zap_deposited_table() before freeing
   the PMD page, so it is also safe.

Signed-off-by: Qi Zheng
---
 arch/x86/include/asm/tlb.h | 19 +++++++++++++++++++
 arch/x86/kernel/paravirt.c |  7 +++++++
 arch/x86/mm/pgtable.c      | 10 +++++++++-
 mm/mmu_gather.c            |  9 ++++++++-
 4 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 580636cdc257b..e223b53a8b190 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -34,4 +34,23 @@ static inline void __tlb_remove_table(void *table)
 	free_page_and_swap_cache(table);
 }
 
+#ifdef CONFIG_PT_RECLAIM
+static inline void __tlb_remove_table_one_rcu(struct rcu_head *head)
+{
+	struct page *page;
+
+	page = container_of(head, struct page, rcu_head);
+	free_page_and_swap_cache(page);
+}
+
+static inline void __tlb_remove_table_one(void *table)
+{
+	struct page *page;
+
+	page = table;
+	call_rcu(&page->rcu_head, __tlb_remove_table_one_rcu);
+}
+#define __tlb_remove_table_one __tlb_remove_table_one
+#endif /* CONFIG_PT_RECLAIM */
+
 #endif /* _ASM_X86_TLB_H */
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index fec3815335558..89688921ea62e 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -59,10 +59,17 @@ void __init native_pv_lock_init(void)
 		static_branch_enable(&virt_spin_lock_key);
 }
 
+#ifndef CONFIG_PT_RECLAIM
 static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
 {
 	tlb_remove_page(tlb, table);
 }
+#else
+static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
+{
+	tlb_remove_table(tlb, table);
+}
+#endif
 
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 5745a354a241c..69a357b15974a 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -19,12 +19,20 @@ EXPORT_SYMBOL(physical_mask);
 #endif
 
 #ifndef CONFIG_PARAVIRT
+#ifndef CONFIG_PT_RECLAIM
 static inline
 void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
 {
 	tlb_remove_page(tlb, table);
 }
-#endif
+#else
+static inline
+void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
+{
+	tlb_remove_table(tlb, table);
+}
+#endif /* !CONFIG_PT_RECLAIM */
+#endif /* !CONFIG_PARAVIRT */
 
 gfp_t __userpte_alloc_gfp = GFP_PGTABLE_USER | PGTABLE_HIGHMEM;
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 99b3e9408aa0f..d948479ca09e6 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -311,10 +311,17 @@ static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 	}
 }
 
+#ifndef __tlb_remove_table_one
+static inline void __tlb_remove_table_one(void *table)
+{
+	__tlb_remove_table(table);
+}
+#endif
+
 static void tlb_remove_table_one(void *table)
 {
 	tlb_remove_table_sync_one();
-	__tlb_remove_table(table);
+	__tlb_remove_table_one(table);
 }
 
 static void tlb_table_flush(struct mmu_gather *tlb)