From patchwork Mon Aug 26 20:43:51 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13778456
From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gavin Shan, Catalin Marinas, x86@kernel.org, Ingo Molnar, Andrew Morton, Paolo Bonzini, Dave Hansen, Thomas Gleixner, Alistair Popple, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Sean Christopherson, peterx@redhat.com, Oscar Salvador, Jason Gunthorpe, Borislav Petkov, Zi Yan, Axel Rasmussen, David Hildenbrand, Yan Zhao, Will Deacon, Kefeng Wang, Alex Williamson
Subject: [PATCH v2 17/19] mm/x86: Support large pfn mappings
Date: Mon, 26 Aug 2024 16:43:51 -0400
Message-ID: <20240826204353.2228736-18-peterx@redhat.com>
In-Reply-To: <20240826204353.2228736-1-peterx@redhat.com>
References: <20240826204353.2228736-1-peterx@redhat.com>

Helpers to install and detect special pmd/pud
entries.  In short, bit 9 on x86 is not used for pmd/pud, so we can
directly define the special bit at those levels the same way as at the
pte level.  One note is that bit 9 is also used by _PAGE_BIT_CPA_TEST,
but that is only exercised by the CPA debug test and shouldn't conflict
in this case.

Another note is that pxx_set|clear_flags() for pmd/pud need to be moved
up in the file so that they can be referenced by the new special bit
helpers.  There is no change to the code that was moved.

Cc: x86@kernel.org
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Signed-off-by: Peter Xu
---
 arch/x86/Kconfig               |  1 +
 arch/x86/include/asm/pgtable.h | 80 ++++++++++++++++++++++------------
 2 files changed, 53 insertions(+), 28 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b74b9ee484da..d4dbe9717e96 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -28,6 +28,7 @@ config X86_64
 	select ARCH_HAS_GIGANTIC_PAGE
 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
 	select ARCH_SUPPORTS_PER_VMA_LOCK
+	select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE
 	select HAVE_ARCH_SOFT_DIRTY
 	select MODULES_USE_ELF_RELA
 	select NEED_DMA_MAP_STATE
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 8d12bfad6a1d..4c2d080d26b4 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -120,6 +120,34 @@ extern pmdval_t early_pmd_flags;
 #define arch_end_context_switch(prev)	do {} while(0)
 #endif	/* CONFIG_PARAVIRT_XXL */
 
+static inline pmd_t pmd_set_flags(pmd_t pmd, pmdval_t set)
+{
+	pmdval_t v = native_pmd_val(pmd);
+
+	return native_make_pmd(v | set);
+}
+
+static inline pmd_t pmd_clear_flags(pmd_t pmd, pmdval_t clear)
+{
+	pmdval_t v = native_pmd_val(pmd);
+
+	return native_make_pmd(v & ~clear);
+}
+
+static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
+{
+	pudval_t v = native_pud_val(pud);
+
+	return native_make_pud(v | set);
+}
+
+static inline pud_t pud_clear_flags(pud_t pud, pudval_t clear)
+{
+	pudval_t v = native_pud_val(pud);
+
+	return native_make_pud(v & ~clear);
+}
+
 /*
  * The following only work if pte_present() is true.
  * Undefined behaviour if not..
@@ -317,6 +345,30 @@ static inline int pud_devmap(pud_t pud)
 }
 #endif
 
+#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
+static inline bool pmd_special(pmd_t pmd)
+{
+	return pmd_flags(pmd) & _PAGE_SPECIAL;
+}
+
+static inline pmd_t pmd_mkspecial(pmd_t pmd)
+{
+	return pmd_set_flags(pmd, _PAGE_SPECIAL);
+}
+#endif	/* CONFIG_ARCH_SUPPORTS_PMD_PFNMAP */
+
+#ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP
+static inline bool pud_special(pud_t pud)
+{
+	return pud_flags(pud) & _PAGE_SPECIAL;
+}
+
+static inline pud_t pud_mkspecial(pud_t pud)
+{
+	return pud_set_flags(pud, _PAGE_SPECIAL);
+}
+#endif	/* CONFIG_ARCH_SUPPORTS_PUD_PFNMAP */
+
 static inline int pgd_devmap(pgd_t pgd)
 {
 	return 0;
@@ -487,20 +539,6 @@ static inline pte_t pte_mkdevmap(pte_t pte)
 	return pte_set_flags(pte, _PAGE_SPECIAL|_PAGE_DEVMAP);
 }
 
-static inline pmd_t pmd_set_flags(pmd_t pmd, pmdval_t set)
-{
-	pmdval_t v = native_pmd_val(pmd);
-
-	return native_make_pmd(v | set);
-}
-
-static inline pmd_t pmd_clear_flags(pmd_t pmd, pmdval_t clear)
-{
-	pmdval_t v = native_pmd_val(pmd);
-
-	return native_make_pmd(v & ~clear);
-}
-
 /* See comments above mksaveddirty_shift() */
 static inline pmd_t pmd_mksaveddirty(pmd_t pmd)
 {
@@ -595,20 +633,6 @@ static inline pmd_t pmd_mkwrite_novma(pmd_t pmd)
 pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 #define pmd_mkwrite pmd_mkwrite
 
-static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
-{
-	pudval_t v = native_pud_val(pud);
-
-	return native_make_pud(v | set);
-}
-
-static inline pud_t pud_clear_flags(pud_t pud, pudval_t clear)
-{
-	pudval_t v = native_pud_val(pud);
-
-	return native_make_pud(v & ~clear);
-}
-
 /* See comments above mksaveddirty_shift() */
 static inline pud_t pud_mksaveddirty(pud_t pud)
 {