From patchwork Wed Apr 13 10:07:14 2022
From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-mm@kvack.org
Subject: [PATCH] tlb/hugetlb: Add framework to handle PGDIR_SIZE HugeTLB pages
Date: Wed, 13 Apr 2022 15:37:14 +0530
Message-Id: <20220413100714.509888-1-anshuman.khandual@arm.com>

Change tlb_remove_huge_tlb_entry() to accommodate larger PGDIR_SIZE
HugeTLB pages by adding a new helper, tlb_flush_pgd_range(). While here,
also update struct mmu_gather as required, i.e. add a new member,
cleared_pgds.
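To make the new dispatch order concrete, below is a minimal stand-alone
sketch (not part of the patch) of the size-to-level selection that the
updated tlb_remove_huge_tlb_entry() performs. The shift values are
illustrative placeholders rather than any particular architecture's
layout, and flush_level() is a hypothetical stand-in for the
tlb_flush_*_range() helpers:

/*
 * Stand-alone model of the size-to-level dispatch in
 * tlb_remove_huge_tlb_entry(). Shift values are illustrative
 * placeholders; flush_level() is a hypothetical stand-in for
 * the tlb_flush_*_range() helpers.
 */
#include <stdio.h>

#define PMD_SHIFT	21
#define PUD_SHIFT	30
#define P4D_SHIFT	39
#define PGDIR_SHIFT	48

#define PMD_SIZE	(1ULL << PMD_SHIFT)
#define PUD_SIZE	(1ULL << PUD_SHIFT)
#define P4D_SIZE	(1ULL << P4D_SHIFT)
#define PGDIR_SIZE	(1ULL << PGDIR_SHIFT)

/* Largest level is checked first, mirroring the macro. */
static const char *flush_level(unsigned long long sz)
{
	if (sz >= PGDIR_SIZE)
		return "pgd";	/* new case added by this patch */
	else if (sz >= P4D_SIZE)
		return "p4d";
	else if (sz >= PUD_SIZE)
		return "pud";
	return "pmd";
}

int main(void)
{
	unsigned long long sizes[] = { PMD_SIZE, PUD_SIZE, P4D_SIZE, PGDIR_SIZE };
	unsigned int i;

	for (i = 0; i < 4; i++)
		printf("huge page size %#llx -> %s level flush\n",
		       sizes[i], flush_level(sizes[i]));
	return 0;
}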
Cc: Andrew Morton
Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Nick Piggin
Cc: Peter Zijlstra
Cc: Arnd Bergmann
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
This applies on v5.18-rc2. Some earlier context can be found here:

https://lore.kernel.org/all/20220406112124.GD2731@worktop.programming.kicks-ass.net/

 include/asm-generic/tlb.h | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index eee6f7763a39..6eaf0080ef2d 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -282,6 +282,7 @@ struct mmu_gather {
 	unsigned int		cleared_pmds : 1;
 	unsigned int		cleared_puds : 1;
 	unsigned int		cleared_p4ds : 1;
+	unsigned int		cleared_pgds : 1;
 
 	/*
 	 * tracks VM_EXEC | VM_HUGETLB in tlb_start_vma
@@ -325,6 +326,7 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 	tlb->cleared_pmds = 0;
 	tlb->cleared_puds = 0;
 	tlb->cleared_p4ds = 0;
+	tlb->cleared_pgds = 0;
 	/*
 	 * Do not reset mmu_gather::vma_* fields here, we do not
 	 * call into tlb_start_vma() again to set them if there is an
@@ -420,7 +422,7 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 	 * these bits.
 	 */
 	if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
-	      tlb->cleared_puds || tlb->cleared_p4ds))
+	      tlb->cleared_puds || tlb->cleared_p4ds || tlb->cleared_pgds))
 		return;
 
 	tlb_flush(tlb);
@@ -472,6 +474,8 @@ static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
 		return PUD_SHIFT;
 	if (tlb->cleared_p4ds)
 		return P4D_SHIFT;
+	if (tlb->cleared_pgds)
+		return PGDIR_SHIFT;
 
 	return PAGE_SHIFT;
 }
@@ -545,6 +549,14 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
 	tlb->cleared_p4ds = 1;
 }
 
+static inline void tlb_flush_pgd_range(struct mmu_gather *tlb,
+				       unsigned long address, unsigned long size)
+{
+	__tlb_adjust_range(tlb, address, size);
+	tlb->cleared_pgds = 1;
+}
+
+
 #ifndef __tlb_remove_tlb_entry
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 #endif
@@ -565,7 +577,9 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
 #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
 	do {							\
 		unsigned long _sz = huge_page_size(h);		\
-		if (_sz >= P4D_SIZE)				\
+		if (_sz >= PGDIR_SIZE)				\
+			tlb_flush_pgd_range(tlb, address, _sz);	\
+		else if (_sz >= P4D_SIZE)			\
 			tlb_flush_p4d_range(tlb, address, _sz);	\
 		else if (_sz >= PUD_SIZE)			\
 			tlb_flush_pud_range(tlb, address, _sz);	\
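As a usage illustration (again, not part of the patch itself), the
stand-alone sketch below models how the mmu_gather cleared_* bits,
including the new cleared_pgds, map back to a flush granularity the way
tlb_get_unmap_shift() does. The struct mmu_gather_model type and the
shift constants are simplified stand-ins assumed for this example:

/*
 * Stand-alone model of tlb_get_unmap_shift() with the new
 * cleared_pgds bit; the struct and shift values are simplified
 * placeholders for illustration only.
 */
#include <stdio.h>

struct mmu_gather_model {
	unsigned int cleared_ptes : 1;
	unsigned int cleared_pmds : 1;
	unsigned int cleared_puds : 1;
	unsigned int cleared_p4ds : 1;
	unsigned int cleared_pgds : 1;	/* new in this patch */
};

enum { PAGE_SHIFT = 12, PMD_SHIFT = 21, PUD_SHIFT = 30,
       P4D_SHIFT = 39, PGDIR_SHIFT = 48 };

/* Smallest cleared level wins, exactly as in tlb_get_unmap_shift(). */
static unsigned int unmap_shift(const struct mmu_gather_model *tlb)
{
	if (tlb->cleared_ptes)
		return PAGE_SHIFT;
	if (tlb->cleared_pmds)
		return PMD_SHIFT;
	if (tlb->cleared_puds)
		return PUD_SHIFT;
	if (tlb->cleared_p4ds)
		return P4D_SHIFT;
	if (tlb->cleared_pgds)
		return PGDIR_SHIFT;
	return PAGE_SHIFT;
}

int main(void)
{
	struct mmu_gather_model tlb = { .cleared_pgds = 1 };

	printf("unmap shift: %u\n", unmap_shift(&tlb));	/* prints 48 */
	return 0;
}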