From patchwork Sun Jul 18 04:30:33 2021
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12384151
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de, hannes@cmpxchg.org,
 mhocko@kernel.org, vdavydov.dev@gmail.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, songmuchun@bytedance.com,
 Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH 7/7] mm: use mmu_gather to free PTE page table
Date: Sun, 18 Jul 2021 12:30:33 +0800
Message-Id: <20210718043034.76431-8-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20210718043034.76431-1-zhengqi.arch@bytedance.com>
References: <20210718043034.76431-1-zhengqi.arch@bytedance.com>
MIME-Version: 1.0

In unmap_region() and other unmap paths, we can reuse @tlb to free the
PTE page tables, which reduces the number of TLB flushes.
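For illustration, the caller-visible change is roughly the following (a
sketch mirroring the zap_pmd_range() hunk in mm/memory.c below, not
extra code in this patch):

	/* before: each zero-refcount PTE page triggers its own TLB flush */
	pte_put(tlb->mm, pmd, addr);

	/* after: the PTE page is queued on @tlb and flushed/freed in one batch */
	pte_put_tlb(tlb, tlb->mm, pmd, addr);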
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 arch/x86/Kconfig        |  2 +-
 include/linux/pte_ref.h | 32 +++++++++++++++++++++++++++-----
 mm/madvise.c            |  4 ++--
 mm/memory.c             |  4 ++--
 mm/mmu_gather.c         | 40 +++++++++++++++++-----------------------
 mm/pte_ref.c            | 12 +++++++++---
 6 files changed, 58 insertions(+), 36 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 761ad0830dd6..1f873c2d5f51 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -233,7 +233,7 @@ config X86
 	select HAVE_PCI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT
+	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT || FREE_USER_PTE
 	select HAVE_POSIX_CPU_TIMERS_TASK_WORK
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if X86_64 && (UNWINDER_FRAME_POINTER || UNWINDER_ORC) && STACK_VALIDATION
diff --git a/include/linux/pte_ref.h b/include/linux/pte_ref.h
index f4d20faab684..bf6cd09b67d7 100644
--- a/include/linux/pte_ref.h
+++ b/include/linux/pte_ref.h
@@ -18,7 +18,8 @@ int __pte_alloc_try_get(struct mm_struct *mm, pmd_t *pmd);
 int __pte_alloc_get(struct mm_struct *mm, pmd_t *pmd);
 
 #ifdef CONFIG_FREE_USER_PTE
-void free_pte_table(struct mm_struct *mm, pmd_t *pmdp, unsigned long addr);
+void free_pte_table(struct mmu_gather *tlb, struct mm_struct *mm, pmd_t *pmdp,
+		    unsigned long addr);
 
 static inline void pte_ref_init(pgtable_t pte, pmd_t *pmd, int count)
 {
@@ -62,7 +63,6 @@ static inline bool pte_get_unless_zero(pmd_t *pmdp)
 {
 	pgtable_t pte = pmd_pgtable(*pmdp);
 
-	VM_BUG_ON(!PageTable(pte));
 	return atomic_inc_not_zero(&pte->pte_refcount);
 }
 
@@ -97,8 +97,8 @@ static inline bool pte_try_get(pmd_t *pmdp)
  * i_mmap_lock or when parallel threads are excluded by other means
  * which can make @pmdp entry stable.
  */
-static inline void pte_put_many(struct mm_struct *mm, pmd_t *pmdp,
-				unsigned long addr, unsigned int nr)
+static inline void pte_put_many_tlb(struct mmu_gather *tlb, struct mm_struct *mm,
+				    pmd_t *pmdp, unsigned long addr, unsigned int nr)
 {
 	pgtable_t pte = pmd_pgtable(*pmdp);
 
@@ -106,7 +106,19 @@ static inline void pte_put_many(struct mm_struct *mm, pmd_t *pmdp,
 	VM_BUG_ON(pmd_devmap_trans_unstable(pmdp));
 	VM_BUG_ON(pte->pmd != pmdp);
 	if (atomic_sub_and_test(nr, &pte->pte_refcount))
-		free_pte_table(mm, pmdp, addr & PMD_MASK);
+		free_pte_table(tlb, mm, pmdp, addr & PMD_MASK);
+}
+
+static inline void pte_put_tlb(struct mmu_gather *tlb, struct mm_struct *mm,
+			       pmd_t *pmdp, unsigned long addr)
+{
+	pte_put_many_tlb(tlb, mm, pmdp, addr, 1);
+}
+
+static inline void pte_put_many(struct mm_struct *mm, pmd_t *pmdp,
+				unsigned long addr, unsigned int nr)
+{
+	pte_put_many_tlb(NULL, mm, pmdp, addr, nr);
 }
 
 static inline void pte_put(struct mm_struct *mm, pmd_t *pmdp, unsigned long addr)
@@ -180,6 +192,16 @@ static inline bool pte_try_get(pmd_t *pmdp)
 	return true;
 }
 
+static inline void pte_put_many_tlb(struct mmu_gather *tlb, struct mm_struct *mm,
+				    pmd_t *pmdp, unsigned long addr, unsigned int nr)
+{
+}
+
+static inline void pte_put_tlb(struct mmu_gather *tlb, struct mm_struct *mm,
+			       pmd_t *pmdp, unsigned long addr)
+{
+}
+
 static inline void pte_put_many(struct mm_struct *mm, pmd_t *pmdp,
 				unsigned long addr, unsigned int value)
 {
diff --git a/mm/madvise.c b/mm/madvise.c
index 0e849bbf268b..117bc5aad0f8 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -477,7 +477,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_pte, ptl);
-	pte_put(vma->vm_mm, pmd, start);
+	pte_put_tlb(tlb, vma->vm_mm, pmd, start);
 	if (pageout)
 		reclaim_pages(&page_list);
 	cond_resched();
@@ -709,7 +709,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_pte, ptl);
 	if (nr_put)
-		pte_put_many(mm, pmd, start, nr_put);
+		pte_put_many_tlb(tlb, mm, pmd, start, nr_put);
 	cond_resched();
 next:
 	return 0;
diff --git a/mm/memory.c b/mm/memory.c
index c8ee0074c730..4f49e8135000 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1438,7 +1438,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	}
 
 	if (nr_put)
-		pte_put_many(mm, pmd, start, nr_put);
+		pte_put_many_tlb(tlb, mm, pmd, start, nr_put);
 
 	return addr;
 }
@@ -1485,7 +1485,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 			goto next;
 		next = zap_pte_range(tlb, vma, pmd, addr, next, details);
-		pte_put(tlb->mm, pmd, addr);
+		pte_put_tlb(tlb, tlb->mm, pmd, addr);
 next:
 		cond_resched();
 	} while (pmd++, addr = next, addr != end);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 1b9837419bf9..1bd9fa889421 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -134,42 +134,42 @@ static void __tlb_remove_table_free(struct mmu_table_batch *batch)
  *
  */
 
-static void tlb_remove_table_smp_sync(void *arg)
+static void tlb_remove_table_rcu(struct rcu_head *head)
 {
-	/* Simply deliver the interrupt */
+	__tlb_remove_table_free(container_of(head, struct mmu_table_batch, rcu));
 }
 
-static void tlb_remove_table_sync_one(void)
+static void tlb_remove_table_free(struct mmu_table_batch *batch)
 {
-	/*
-	 * This isn't an RCU grace period and hence the page-tables cannot be
-	 * assumed to be actually RCU-freed.
-	 *
-	 * It is however sufficient for software page-table walkers that rely on
-	 * IRQ disabling.
-	 */
-	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
+	call_rcu(&batch->rcu, tlb_remove_table_rcu);
 }
 
-static void tlb_remove_table_rcu(struct rcu_head *head)
+static void tlb_remove_table_one_rcu(struct rcu_head *head)
 {
-	__tlb_remove_table_free(container_of(head, struct mmu_table_batch, rcu));
+	struct page *page = container_of(head, struct page, rcu_head);
+
+	__tlb_remove_table(page);
 }
 
-static void tlb_remove_table_free(struct mmu_table_batch *batch)
+static void tlb_remove_table_one(void *table)
 {
-	call_rcu(&batch->rcu, tlb_remove_table_rcu);
+	pgtable_t page = (pgtable_t)table;
+
+	call_rcu(&page->rcu_head, tlb_remove_table_one_rcu);
 }
 
 #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
-static void tlb_remove_table_sync_one(void) { }
-
 static void tlb_remove_table_free(struct mmu_table_batch *batch)
 {
 	__tlb_remove_table_free(batch);
 }
 
+static void tlb_remove_table_one(void *table)
+{
+	__tlb_remove_table(table);
+}
+
 #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
 /*
@@ -187,12 +187,6 @@ static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 	}
 }
 
-static void tlb_remove_table_one(void *table)
-{
-	tlb_remove_table_sync_one();
-	__tlb_remove_table(table);
-}
-
 static void tlb_table_flush(struct mmu_gather *tlb)
 {
 	struct mmu_table_batch **batch = &tlb->batch;
diff --git a/mm/pte_ref.c b/mm/pte_ref.c
index 7fd3d687a9cd..92fb73c35d81 100644
--- a/mm/pte_ref.c
+++ b/mm/pte_ref.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_DEBUG_VM
 static void pte_free_debug(pmd_t pmd)
@@ -34,7 +35,8 @@ static void pte_free_rcu(struct rcu_head *rcu)
 	__free_page(page);
 }
 
-void free_pte_table(struct mm_struct *mm, pmd_t *pmdp, unsigned long addr)
+void free_pte_table(struct mmu_gather *tlb, struct mm_struct *mm,
+		    pmd_t *pmdp, unsigned long addr)
 {
 	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
 	spinlock_t *ptl;
@@ -45,9 +47,13 @@ void free_pte_table(struct mm_struct *mm, pmd_t *pmdp, unsigned long addr)
 	spin_unlock(ptl);
 
 	pte_free_debug(pmd);
-	flush_tlb_range(&vma, addr, addr + PMD_SIZE);
+	if (!tlb) {
+		flush_tlb_range(&vma, addr, addr + PMD_SIZE);
+		call_rcu(&pmd_pgtable(pmd)->rcu_head, pte_free_rcu);
+	} else {
+		pte_free_tlb(tlb, pmd_pgtable(pmd), addr);
+	}
 	mm_dec_nr_ptes(mm);
-	call_rcu(&pmd_pgtable(pmd)->rcu_head, pte_free_rcu);
 }
 
 static inline void __pte_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
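
For context, the batched path relies on the usual mmu_gather sequence
around a zap; roughly (a sketch of the existing pattern in callers such
as unmap_region(), with start/end/floor/ceiling as placeholder bounds,
not code added by this patch):

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm);
	unmap_vmas(&tlb, vma, start, end);        /* pte_put_tlb() queues PTE pages on &tlb */
	free_pgtables(&tlb, vma, floor, ceiling);
	tlb_finish_mmu(&tlb);                     /* one batched flush before the pages are freed */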