From patchwork Mon Aug 26 20:43:52 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13778457
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gavin Shan, Catalin Marinas, x86@kernel.org, Ingo Molnar, Andrew Morton,
 Paolo Bonzini, Dave Hansen, Thomas Gleixner, Alistair Popple,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 Sean Christopherson, peterx@redhat.com, Oscar Salvador, Jason Gunthorpe,
 Borislav Petkov, Zi Yan, Axel Rasmussen, David Hildenbrand, Yan Zhao,
 Will Deacon, Kefeng Wang, Alex Williamson
Subject: [PATCH v2 18/19] mm/arm64: Support large pfn mappings
Date: Mon, 26 Aug 2024 16:43:52 -0400
Message-ID: <20240826204353.2228736-19-peterx@redhat.com>
In-Reply-To: <20240826204353.2228736-1-peterx@redhat.com>
References: <20240826204353.2228736-1-peterx@redhat.com>

Support huge pfnmaps by using bit 56
(PTE_SPECIAL) for "special" on pmds/puds.  Provide the pmd/pud helpers to
set/get the special bit.

There's one more thing missing for arm64, which is pxx_pgprot() for
pmd/pud.  Add them too; each is mostly the same as the pte version,
dropping the pfn field.  These helpers are essential for the new
follow_pfnmap*() API to report valid pgprot_t results.

Note that arm64 doesn't support huge PUDs yet, but it's still
straightforward to provide the pud helpers that we need altogether.  Only
the PMD helpers will bring an immediate benefit until arm64 supports huge
PUDs in general (e.g. in THPs).

Cc: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Peter Xu
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/pgtable.h | 29 +++++++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6494848019a0..6607ed8fdbb4 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -99,6 +99,7 @@ config ARM64
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_SUPPORTS_PER_VMA_LOCK
+	select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE
 	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
 	select ARCH_WANT_DEFAULT_BPF_JIT
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b78cc4a6758b..2faecc033a19 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -578,6 +578,14 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 	return pte_pmd(set_pte_bit(pmd_pte(pmd), __pgprot(PTE_DEVMAP)));
 }
 
+#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
+#define pmd_special(pmd)	(!!(pmd_val(pmd) & PTE_SPECIAL))
+static inline pmd_t pmd_mkspecial(pmd_t pmd)
+{
+	return set_pmd_bit(pmd, __pgprot(PTE_SPECIAL));
+}
+#endif
+
 #define __pmd_to_phys(pmd)	__pte_to_phys(pmd_pte(pmd))
 #define __phys_to_pmd_val(phys)	__phys_to_pte_val(phys)
 #define pmd_pfn(pmd)		((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT)
@@ -595,6 +603,27 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
 #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
+#ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP
+#define pud_special(pud)	pte_special(pud_pte(pud))
+#define pud_mkspecial(pud)	pte_pud(pte_mkspecial(pud_pte(pud)))
+#endif
+
+#define pmd_pgprot pmd_pgprot
+static inline pgprot_t pmd_pgprot(pmd_t pmd)
+{
+	unsigned long pfn = pmd_pfn(pmd);
+
+	return __pgprot(pmd_val(pfn_pmd(pfn, __pgprot(0))) ^ pmd_val(pmd));
+}
+
+#define pud_pgprot pud_pgprot
+static inline pgprot_t pud_pgprot(pud_t pud)
+{
+	unsigned long pfn = pud_pfn(pud);
+
+	return __pgprot(pud_val(pfn_pud(pfn, __pgprot(0))) ^ pud_val(pud));
+}
+
 static inline void __set_pte_at(struct mm_struct *mm,
 				unsigned long __always_unused addr,
 				pte_t *ptep, pte_t pte, unsigned int nr)