From patchwork Mon Dec 18 10:50:48 2023
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier,
    Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
    Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
    Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Anshuman Khandual,
    Matthew Wilcox, Yu Zhao, Mark Rutland, David Hildenbrand,
    Kefeng Wang, John Hubbard, Zi Yan, Barry Song
    <21cnbao@gmail.com>, Alistair Popple, Yang Shi
Cc: Ryan Roberts <ryan.roberts@arm.com>, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 04/16] arm64/mm: set_pte(): New layer to manage contig bit
Date: Mon, 18 Dec 2023 10:50:48 +0000
Message-Id: <20231218105100.172635-5-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231218105100.172635-1-ryan.roberts@arm.com>
References: <20231218105100.172635-1-ryan.roberts@arm.com>

Create a new layer for the in-table PTE manipulation APIs. For now, the
existing API is prefixed with a double underscore to become the
arch-private API, and the public API is just a simple wrapper that calls
the private API.

The public API implementation will subsequently be used to transparently
manipulate the contiguous bit where appropriate. But since there are
already some contig-aware users (e.g. hugetlb, kernel mapper), we must
first ensure those users use the private API directly, so that the future
contig-bit manipulations in the public API do not interfere with those
existing uses.
Tested-by: John Hubbard
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 12 ++++++++----
 arch/arm64/kernel/efi.c          |  2 +-
 arch/arm64/mm/fixmap.c           |  2 +-
 arch/arm64/mm/kasan_init.c       |  4 ++--
 arch/arm64/mm/mmu.c              |  2 +-
 arch/arm64/mm/pageattr.c         |  2 +-
 arch/arm64/mm/trans_pgd.c        |  4 ++--
 7 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b19a8aee684c..650d4f4bb6dc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -93,7 +93,8 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 	__pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
 #define pte_none(pte)		(!pte_val(pte))
-#define pte_clear(mm,addr,ptep)	set_pte(ptep, __pte(0))
+#define pte_clear(mm, addr, ptep) \
+				__set_pte(ptep, __pte(0))
 #define pte_page(pte)		(pfn_to_page(pte_pfn(pte)))
 
 /*
@@ -261,7 +262,7 @@ static inline pte_t pte_mkdevmap(pte_t pte)
 	return set_pte_bit(pte, __pgprot(PTE_DEVMAP | PTE_SPECIAL));
 }
 
-static inline void set_pte(pte_t *ptep, pte_t pte)
+static inline void __set_pte(pte_t *ptep, pte_t pte)
 {
 	WRITE_ONCE(*ptep, pte);
 
@@ -350,7 +351,7 @@ static inline void set_ptes(struct mm_struct *mm,
 
 	for (;;) {
 		__check_safe_pte_update(mm, ptep, pte);
-		set_pte(ptep, pte);
+		__set_pte(ptep, pte);
 		if (--nr == 0)
 			break;
 		ptep++;
@@ -534,7 +535,7 @@ static inline void __set_pte_at(struct mm_struct *mm,
 {
 	__sync_cache_and_tags(pte, nr);
 	__check_safe_pte_update(mm, ptep, pte);
-	set_pte(ptep, pte);
+	__set_pte(ptep, pte);
 }
 
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
@@ -1118,6 +1119,9 @@ extern pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
 extern void ptep_modify_prot_commit(struct vm_area_struct *vma,
 				    unsigned long addr, pte_t *ptep,
 				    pte_t old_pte, pte_t new_pte);
+
+#define set_pte				__set_pte
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 0228001347be..44288a12fc6c 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -111,7 +111,7 @@ static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data)
 		pte = set_pte_bit(pte, __pgprot(PTE_PXN));
 	else if (system_supports_bti_kernel() && spd->has_bti)
 		pte = set_pte_bit(pte, __pgprot(PTE_GP));
-	set_pte(ptep, pte);
+	__set_pte(ptep, pte);
 	return 0;
 }
 
diff --git a/arch/arm64/mm/fixmap.c b/arch/arm64/mm/fixmap.c
index c0a3301203bd..51cd4501816d 100644
--- a/arch/arm64/mm/fixmap.c
+++ b/arch/arm64/mm/fixmap.c
@@ -121,7 +121,7 @@ void __set_fixmap(enum fixed_addresses idx,
 	ptep = fixmap_pte(addr);
 
 	if (pgprot_val(flags)) {
-		set_pte(ptep, pfn_pte(phys >> PAGE_SHIFT, flags));
+		__set_pte(ptep, pfn_pte(phys >> PAGE_SHIFT, flags));
 	} else {
 		pte_clear(&init_mm, addr, ptep);
 		flush_tlb_kernel_range(addr, addr+PAGE_SIZE);
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 555285ebd5af..5eade712e9e5 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -112,7 +112,7 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
 		if (!early)
 			memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE);
 		next = addr + PAGE_SIZE;
-		set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
+		__set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
 	} while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep)));
 }
 
@@ -266,7 +266,7 @@ static void __init kasan_init_shadow(void)
 	 * so we should make sure that it maps the zero page read-only.
 	 */
 	for (i = 0; i < PTRS_PER_PTE; i++)
-		set_pte(&kasan_early_shadow_pte[i],
+		__set_pte(&kasan_early_shadow_pte[i],
 			pfn_pte(sym_to_pfn(kasan_early_shadow_page),
 				PAGE_KERNEL_RO));
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 15f6347d23b6..e884279b268e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -178,7 +178,7 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
 	do {
 		pte_t old_pte = READ_ONCE(*ptep);
 
-		set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
+		__set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
 
 		/*
 		 * After the PTE entry has been populated once, we
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 924843f1f661..a7996d8edf0a 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -41,7 +41,7 @@ static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
 	pte = clear_pte_bit(pte, cdata->clear_mask);
 	pte = set_pte_bit(pte, cdata->set_mask);
 
-	set_pte(ptep, pte);
+	__set_pte(ptep, pte);
 	return 0;
 }
 
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 7b14df3c6477..230b607cf881 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -41,7 +41,7 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 		 * read only (code, rodata). Clear the RDONLY bit from
 		 * the temporary mappings we use during restore.
 		 */
-		set_pte(dst_ptep, pte_mkwrite_novma(pte));
+		__set_pte(dst_ptep, pte_mkwrite_novma(pte));
 	} else if ((debug_pagealloc_enabled() ||
 		   is_kfence_address((void *)addr)) && !pte_none(pte)) {
 		/*
@@ -55,7 +55,7 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 		 */
 		BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-		set_pte(dst_ptep, pte_mkpresent(pte_mkwrite_novma(pte)));
+		__set_pte(dst_ptep, pte_mkpresent(pte_mkwrite_novma(pte)));
 	}
 }
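
For illustration only (not part of this patch): the layering described in the
commit message can be modelled with a few lines of standalone C. The toy_pte_t
type and the contpte_fixup() hook below are placeholder names invented for this
sketch, not the arm64 implementation. The point is just that generic callers go
through the public wrapper, which is free to grow contig-bit handling later,
while contig-aware callers keep using the double-underscore primitive directly.

/*
 * Standalone model of the set_pte()/__set_pte() split (illustration only,
 * not kernel code).
 */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t val; } toy_pte_t;	/* stand-in for pte_t */

/* Arch-private primitive: just write the entry, nothing else. */
static void __set_pte(toy_pte_t *ptep, toy_pte_t pte)
{
	ptep->val = pte.val;
}

/* Hypothetical hook where contiguous-bit management could live later. */
static toy_pte_t contpte_fixup(toy_pte_t pte)
{
	return pte;	/* no-op in this model */
}

/* Public wrapper: today a trivial alias, later potentially contpte-aware. */
static void set_pte(toy_pte_t *ptep, toy_pte_t pte)
{
	__set_pte(ptep, contpte_fixup(pte));
}

int main(void)
{
	toy_pte_t table[2] = { { 0 }, { 0 } };

	set_pte(&table[0], (toy_pte_t){ 0x1 });		/* generic caller */
	__set_pte(&table[1], (toy_pte_t){ 0x2 });	/* contig-aware caller, e.g. hugetlb */

	printf("%llx %llx\n",
	       (unsigned long long)table[0].val,
	       (unsigned long long)table[1].val);
	return 0;
}

In the patch itself the public name is currently just "#define set_pte
__set_pte"; the wrapper function above is used only so the model has somewhere
visible to hang the hypothetical contpte_fixup() step.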