From patchwork Sat Apr 10 09:56:47 2021
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 12195547
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
	Logan Gunthorpe, Mark Brown
Subject: [RFC 1/8] arm64/mm: split out __create_pgd_mapping() routines
Date: Sat, 10 Apr 2021 17:56:47 +0800
Message-Id: <20210410095654.24102-2-kernelfans@gmail.com>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210410095654.24102-1-kernelfans@gmail.com>
References: <20210410095654.24102-1-kernelfans@gmail.com>

Split out the routines used by __create_pgd_mapping(), so that they can
be used to generate two sets of page-table operations: one for
CONFIG_PGTABLE_LEVELS, and one for CONFIG_PGTABLE_LEVELS + 1.

The set generated with 'CONFIG_PGTABLE_LEVELS + 1' can later be used for
the idmap if VA_BITS is too small to cover system RAM, which may be
located sufficiently high in the physical address space. The idmap can
then be created by calling __create_pgd_mapping() directly.

Signed-off-by: Pingfan Liu
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Kristina Martsenko
Cc: James Morse
Cc: Steven Price
Cc: Jonathan Cameron
Cc: Pavel Tatashin
Cc: Anshuman Khandual
Cc: Atish Patra
Cc: Mike Rapoport
Cc: Logan Gunthorpe
Cc: Mark Brown
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/Kconfig          |   4 +
 arch/arm64/mm/Makefile      |   2 +
 arch/arm64/mm/idmap_mmu.c   |  45 ++++++
 arch/arm64/mm/mmu.c         | 263 +----------------------------------
 arch/arm64/mm/mmu_include.c | 262 +++++++++++++++++++++++++++++++++++
 5 files changed, 315 insertions(+), 261 deletions(-)
 create mode 100644 arch/arm64/mm/idmap_mmu.c
 create mode 100644 arch/arm64/mm/mmu_include.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e4e1b6550115..989fc501a1b4 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -327,6 +327,10 @@ config PGTABLE_LEVELS
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
 
+config IDMAP_PGTABLE_EXPAND
+	def_bool y
+	depends on (ARM64_4K_PAGES && ARM64_VA_BITS_39) || (ARM64_64K_PAGES && ARM64_VA_BITS_42)
+
 config ARCH_SUPPORTS_UPROBES
 	def_bool y
 
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index f188c9092696..f9283cb9a201 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -3,6 +3,8 @@ obj-y				:= dma-mapping.o extable.o fault.o init.o \
 				   cache.o copypage.o flush.o \
 				   ioremap.o mmap.o pgd.o mmu.o \
 				   context.o proc.o pageattr.o
+
+obj-$(CONFIG_IDMAP_PGTABLE_EXPAND)	+= idmap_mmu.o
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 obj-$(CONFIG_PTDUMP_CORE)	+= ptdump.o
 obj-$(CONFIG_PTDUMP_DEBUGFS)	+= ptdump_debugfs.o
diff --git a/arch/arm64/mm/idmap_mmu.c b/arch/arm64/mm/idmap_mmu.c
new file mode 100644
index 000000000000..7e9a4f4017d3
--- /dev/null
+++ b/arch/arm64/mm/idmap_mmu.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#if CONFIG_IDMAP_PGTABLE_EXPAND
+
+#if CONFIG_PGTABLE_LEVELS == 2
+#define EXTEND_LEVEL 3
+#elif CONFIG_PGTABLE_LEVELS == 3
+#define EXTEND_LEVEL 4
+#endif
+
+#undef CONFIG_PGTABLE_LEVELS
+#define CONFIG_PGTABLE_LEVELS EXTEND_LEVEL
+
+
+#include "./mmu_include.c"
+
+void __create_pgd_mapping_extend(pgd_t *pgdir, unsigned int entries_cnt, phys_addr_t phys,
+				 unsigned long virt, phys_addr_t size,
+				 pgprot_t prot,
+				 phys_addr_t (*pgtable_alloc)(int),
+				 int flags)
+{
+	__create_pgd_mapping(pgdir, entries_cnt, phys, virt, size, prot, pgtable_alloc, flags);
+}
+#endif
+
+
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5d9550fdb9cf..56e4f25e8d6d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -37,9 +37,6 @@
 #include
 #include
 
-#define NO_BLOCK_MAPPINGS	BIT(0)
-#define NO_CONT_MAPPINGS	BIT(1)
-
 u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 
@@ -116,264 +113,6 @@ static phys_addr_t __init early_pgtable_alloc(int shift)
 	return phys;
 }
 
-static bool pgattr_change_is_safe(u64 old, u64 new)
-{
-	/*
-	 * The following mapping attributes may be updated in live
-	 * kernel mappings without the need for break-before-make.
-	 */
-	pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG;
-
-	/* creating or taking down mappings is always safe */
-	if (old == 0 || new == 0)
-		return true;
-
-	/* live contiguous mappings may not be manipulated at all */
-	if ((old | new) & PTE_CONT)
-		return false;
-
-	/* Transitioning from Non-Global to Global is unsafe */
-	if (old & ~new & PTE_NG)
-		return false;
-
-	/*
-	 * Changing the memory type between Normal and Normal-Tagged is safe
-	 * since Tagged is considered a permission attribute from the
-	 * mismatched attribute aliases perspective.
-	 */
-	if (((old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
-	     (old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)) &&
-	    ((new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
-	     (new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)))
-		mask |= PTE_ATTRINDX_MASK;
-
-	return ((old ^ new) & ~mask) == 0;
-}
-
-static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
-		     phys_addr_t phys, pgprot_t prot)
-{
-	pte_t *ptep;
-
-	ptep = pte_set_fixmap_offset(pmdp, addr);
-	do {
-		pte_t old_pte = READ_ONCE(*ptep);
-
-		set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
-
-		/*
-		 * After the PTE entry has been populated once, we
-		 * only allow updates to the permission attributes.
-		 */
-		BUG_ON(!pgattr_change_is_safe(pte_val(old_pte),
-					      READ_ONCE(pte_val(*ptep))));
-
-		phys += PAGE_SIZE;
-	} while (ptep++, addr += PAGE_SIZE, addr != end);
-
-	pte_clear_fixmap();
-}
-
-static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
-				unsigned long end, phys_addr_t phys,
-				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(int),
-				int flags)
-{
-	unsigned long next;
-	pmd_t pmd = READ_ONCE(*pmdp);
-
-	BUG_ON(pmd_sect(pmd));
-	if (pmd_none(pmd)) {
-		phys_addr_t pte_phys;
-		BUG_ON(!pgtable_alloc);
-		pte_phys = pgtable_alloc(PAGE_SHIFT);
-		__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
-		pmd = READ_ONCE(*pmdp);
-	}
-	BUG_ON(pmd_bad(pmd));
-
-	do {
-		pgprot_t __prot = prot;
-
-		next = pte_cont_addr_end(addr, end);
-
-		/* use a contiguous mapping if the range is suitably aligned */
-		if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
-		    (flags & NO_CONT_MAPPINGS) == 0)
-			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
-
-		init_pte(pmdp, addr, next, phys, __prot);
-
-		phys += next - addr;
-	} while (addr = next, addr != end);
-}
-
-static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
-		     phys_addr_t phys, pgprot_t prot,
-		     phys_addr_t (*pgtable_alloc)(int), int flags)
-{
-	unsigned long next;
-	pmd_t *pmdp;
-
-	pmdp = pmd_set_fixmap_offset(pudp, addr);
-	do {
-		pmd_t old_pmd = READ_ONCE(*pmdp);
-
-		next = pmd_addr_end(addr, end);
-
-		/* try section mapping first */
-		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
-			pmd_set_huge(pmdp, phys, prot);
-
-			/*
-			 * After the PMD entry has been populated once, we
-			 * only allow updates to the permission attributes.
-			 */
-			BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
-						      READ_ONCE(pmd_val(*pmdp))));
-		} else {
-			alloc_init_cont_pte(pmdp, addr, next, phys, prot,
-					    pgtable_alloc, flags);
-
-			BUG_ON(pmd_val(old_pmd) != 0 &&
-			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
-		}
-		phys += next - addr;
-	} while (pmdp++, addr = next, addr != end);
-
-	pmd_clear_fixmap();
-}
-
-static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
-				unsigned long end, phys_addr_t phys,
-				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(int), int flags)
-{
-	unsigned long next;
-	pud_t pud = READ_ONCE(*pudp);
-
-	/*
-	 * Check for initial section mappings in the pgd/pud.
-	 */
-	BUG_ON(pud_sect(pud));
-	if (pud_none(pud)) {
-		phys_addr_t pmd_phys;
-		BUG_ON(!pgtable_alloc);
-		pmd_phys = pgtable_alloc(PMD_SHIFT);
-		__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
-		pud = READ_ONCE(*pudp);
-	}
-	BUG_ON(pud_bad(pud));
-
-	do {
-		pgprot_t __prot = prot;
-
-		next = pmd_cont_addr_end(addr, end);
-
-		/* use a contiguous mapping if the range is suitably aligned */
-		if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
-		    (flags & NO_CONT_MAPPINGS) == 0)
-			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
-
-		init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
-
-		phys += next - addr;
-	} while (addr = next, addr != end);
-}
-
-static inline bool use_1G_block(unsigned long addr, unsigned long next,
-				unsigned long phys)
-{
-	if (PAGE_SHIFT != 12)
-		return false;
-
-	if (((addr | next | phys) & ~PUD_MASK) != 0)
-		return false;
-
-	return true;
-}
-
-static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
-			   phys_addr_t phys, pgprot_t prot,
-			   phys_addr_t (*pgtable_alloc)(int),
-			   int flags)
-{
-	unsigned long next;
-	pud_t *pudp;
-	p4d_t *p4dp = p4d_offset(pgdp, addr);
-	p4d_t p4d = READ_ONCE(*p4dp);
-
-	if (p4d_none(p4d)) {
-		phys_addr_t pud_phys;
-		BUG_ON(!pgtable_alloc);
-		pud_phys = pgtable_alloc(PUD_SHIFT);
-		__p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
-		p4d = READ_ONCE(*p4dp);
-	}
-	BUG_ON(p4d_bad(p4d));
-
-	pudp = pud_set_fixmap_offset(p4dp, addr);
-	do {
-		pud_t old_pud = READ_ONCE(*pudp);
-
-		next = pud_addr_end(addr, end);
-
-		/*
-		 * For 4K granule only, attempt to put down a 1GB block
-		 */
-		if (use_1G_block(addr, next, phys) &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
-			pud_set_huge(pudp, phys, prot);
-
-			/*
-			 * After the PUD entry has been populated once, we
-			 * only allow updates to the permission attributes.
-			 */
-			BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
-						      READ_ONCE(pud_val(*pudp))));
-		} else {
-			alloc_init_cont_pmd(pudp, addr, next, phys, prot,
-					    pgtable_alloc, flags);
-
-			BUG_ON(pud_val(old_pud) != 0 &&
-			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
-		}
-		phys += next - addr;
-	} while (pudp++, addr = next, addr != end);
-
-	pud_clear_fixmap();
-}
-
-static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
-				 unsigned long virt, phys_addr_t size,
-				 pgprot_t prot,
-				 phys_addr_t (*pgtable_alloc)(int),
-				 int flags)
-{
-	unsigned long addr, end, next;
-	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
-
-	/*
-	 * If the virtual and physical address don't have the same offset
-	 * within a page, we cannot map the region as the caller expects.
-	 */
-	if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
-		return;
-
-	phys &= PAGE_MASK;
-	addr = virt & PAGE_MASK;
-	end = PAGE_ALIGN(virt + size);
-
-	do {
-		next = pgd_addr_end(addr, end);
-		alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
-			       flags);
-		phys += next - addr;
-	} while (pgdp++, addr = next, addr != end);
-}
-
 static phys_addr_t __pgd_pgtable_alloc(int shift)
 {
 	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
@@ -404,6 +143,8 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 	return pa;
 }
 
+#include "./mmu_include.c"
+
 /*
  * This function can only be used to modify existing table entries,
  * without allocating new levels of table. Note that this permits the
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
new file mode 100644
index 000000000000..e9ebdffe860b
--- /dev/null
+++ b/arch/arm64/mm/mmu_include.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#define NO_BLOCK_MAPPINGS	BIT(0)
+#define NO_CONT_MAPPINGS	BIT(1)
+
+static bool pgattr_change_is_safe(u64 old, u64 new)
+{
+	/*
+	 * The following mapping attributes may be updated in live
+	 * kernel mappings without the need for break-before-make.
+	 */
+	pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG;
+
+	/* creating or taking down mappings is always safe */
+	if (old == 0 || new == 0)
+		return true;
+
+	/* live contiguous mappings may not be manipulated at all */
+	if ((old | new) & PTE_CONT)
+		return false;
+
+	/* Transitioning from Non-Global to Global is unsafe */
+	if (old & ~new & PTE_NG)
+		return false;
+
+	/*
+	 * Changing the memory type between Normal and Normal-Tagged is safe
+	 * since Tagged is considered a permission attribute from the
+	 * mismatched attribute aliases perspective.
+	 */
+	if (((old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
+	     (old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)) &&
+	    ((new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
+	     (new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)))
+		mask |= PTE_ATTRINDX_MASK;
+
+	return ((old ^ new) & ~mask) == 0;
+}
+
+static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
+		     phys_addr_t phys, pgprot_t prot)
+{
+	pte_t *ptep;
+
+	ptep = pte_set_fixmap_offset(pmdp, addr);
+	do {
+		pte_t old_pte = READ_ONCE(*ptep);
+
+		set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
+
+		/*
+		 * After the PTE entry has been populated once, we
+		 * only allow updates to the permission attributes.
+		 */
+		BUG_ON(!pgattr_change_is_safe(pte_val(old_pte),
+					      READ_ONCE(pte_val(*ptep))));
+
+		phys += PAGE_SIZE;
+	} while (ptep++, addr += PAGE_SIZE, addr != end);
+
+	pte_clear_fixmap();
+}
+
+static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
+				unsigned long end, phys_addr_t phys,
+				pgprot_t prot,
+				phys_addr_t (*pgtable_alloc)(int),
+				int flags)
+{
+	unsigned long next;
+	pmd_t pmd = READ_ONCE(*pmdp);
+
+	BUG_ON(pmd_sect(pmd));
+	if (pmd_none(pmd)) {
+		phys_addr_t pte_phys;
+		BUG_ON(!pgtable_alloc);
+		pte_phys = pgtable_alloc(PAGE_SHIFT);
+		__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
+		pmd = READ_ONCE(*pmdp);
+	}
+	BUG_ON(pmd_bad(pmd));
+
+	do {
+		pgprot_t __prot = prot;
+
+		next = pte_cont_addr_end(addr, end);
+
+		/* use a contiguous mapping if the range is suitably aligned */
+		if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
+		    (flags & NO_CONT_MAPPINGS) == 0)
+			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+		init_pte(pmdp, addr, next, phys, __prot);
+
+		phys += next - addr;
+	} while (addr = next, addr != end);
+}
+
+static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
+		     phys_addr_t phys, pgprot_t prot,
+		     phys_addr_t (*pgtable_alloc)(int), int flags)
+{
+	unsigned long next;
+	pmd_t *pmdp;
+
+	pmdp = pmd_set_fixmap_offset(pudp, addr);
+	do {
+		pmd_t old_pmd = READ_ONCE(*pmdp);
+
+		next = pmd_addr_end(addr, end);
+
+		/* try section mapping first */
+		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
+		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+			pmd_set_huge(pmdp, phys, prot);
+
+			/*
+			 * After the PMD entry has been populated once, we
+			 * only allow updates to the permission attributes.
+			 */
+			BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
+						      READ_ONCE(pmd_val(*pmdp))));
+		} else {
+			alloc_init_cont_pte(pmdp, addr, next, phys, prot,
+					    pgtable_alloc, flags);
+
+			BUG_ON(pmd_val(old_pmd) != 0 &&
+			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
+		}
+		phys += next - addr;
+	} while (pmdp++, addr = next, addr != end);
+
+	pmd_clear_fixmap();
+}
+
+static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
+				unsigned long end, phys_addr_t phys,
+				pgprot_t prot,
+				phys_addr_t (*pgtable_alloc)(int), int flags)
+{
+	unsigned long next;
+	pud_t pud = READ_ONCE(*pudp);
+
+	/*
+	 * Check for initial section mappings in the pgd/pud.
+	 */
+	BUG_ON(pud_sect(pud));
+	if (pud_none(pud)) {
+		phys_addr_t pmd_phys;
+		BUG_ON(!pgtable_alloc);
+		pmd_phys = pgtable_alloc(PMD_SHIFT);
+		__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
+		pud = READ_ONCE(*pudp);
+	}
+	BUG_ON(pud_bad(pud));
+
+	do {
+		pgprot_t __prot = prot;
+
+		next = pmd_cont_addr_end(addr, end);
+
+		/* use a contiguous mapping if the range is suitably aligned */
+		if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
+		    (flags & NO_CONT_MAPPINGS) == 0)
+			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+		init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
+
+		phys += next - addr;
+	} while (addr = next, addr != end);
+}
+
+static inline bool use_1G_block(unsigned long addr, unsigned long next,
+				unsigned long phys)
+{
+	if (PAGE_SHIFT != 12)
+		return false;
+
+	if (((addr | next | phys) & ~PUD_MASK) != 0)
+		return false;
+
+	return true;
+}
+
+static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+			   phys_addr_t phys, pgprot_t prot,
+			   phys_addr_t (*pgtable_alloc)(int),
+			   int flags)
+{
+	unsigned long next;
+	pud_t *pudp;
+	p4d_t *p4dp = p4d_offset(pgdp, addr);
+	p4d_t p4d = READ_ONCE(*p4dp);
+
+	if (p4d_none(p4d)) {
+		phys_addr_t pud_phys;
+		BUG_ON(!pgtable_alloc);
+		pud_phys = pgtable_alloc(PUD_SHIFT);
+		__p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
+		p4d = READ_ONCE(*p4dp);
+	}
+	BUG_ON(p4d_bad(p4d));
+
+	pudp = pud_set_fixmap_offset(p4dp, addr);
+	do {
+		pud_t old_pud = READ_ONCE(*pudp);
+
+		next = pud_addr_end(addr, end);
+
+		/*
+		 * For 4K granule only, attempt to put down a 1GB block
+		 */
+		if (use_1G_block(addr, next, phys) &&
+		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+			pud_set_huge(pudp, phys, prot);
+
+			/*
+			 * After the PUD entry has been populated once, we
+			 * only allow updates to the permission attributes.
+			 */
+			BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
+						      READ_ONCE(pud_val(*pudp))));
+		} else {
+			alloc_init_cont_pmd(pudp, addr, next, phys, prot,
+					    pgtable_alloc, flags);
+
+			BUG_ON(pud_val(old_pud) != 0 &&
+			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
+		}
+		phys += next - addr;
+	} while (pudp++, addr = next, addr != end);
+
+	pud_clear_fixmap();
+}
+
+static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+				 unsigned long virt, phys_addr_t size,
+				 pgprot_t prot,
+				 phys_addr_t (*pgtable_alloc)(int),
+				 int flags)
+{
+	unsigned long addr, end, next;
+	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
+
+	/*
+	 * If the virtual and physical address don't have the same offset
+	 * within a page, we cannot map the region as the caller expects.
+	 */
+	if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
+		return;
+
+	phys &= PAGE_MASK;
+	addr = virt & PAGE_MASK;
+	end = PAGE_ALIGN(virt + size);
+
+	do {
+		next = pgd_addr_end(addr, end);
+		alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
+			       flags);
+		phys += next - addr;
+	} while (pgdp++, addr = next, addr != end);
+}