From patchwork Sat Oct 9 00:36:59 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 12546739
X-Patchwork-Delegate: bhelgaas@google.com
From: Kuppuswamy Sathyanarayanan
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
    Andy Lutomirski, Bjorn Helgaas, Richard Henderson,
    Thomas Bogendoerfer, James E J Bottomley, Helge Deller,
    "David S. Miller", Arnd Bergmann, Jonathan Corbet,
    "Michael S. Tsirkin", Paolo Bonzini, David Hildenbrand,
    Andrea Arcangeli, Josh Poimboeuf
Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
    Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
    x86@kernel.org, linux-kernel@vger.kernel.org,
    linux-pci@vger.kernel.org, linux-alpha@vger.kernel.org,
    linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
    sparclinux@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [PATCH v5 04/16] x86/tdx: Make pages shared in ioremap()
Date: Fri, 8 Oct 2021 17:36:59 -0700
Message-Id: <20211009003711.1390019-5-sathyanarayanan.kuppuswamy@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211009003711.1390019-1-sathyanarayanan.kuppuswamy@linux.intel.com>
References: <20211009003711.1390019-1-sathyanarayanan.kuppuswamy@linux.intel.com>

From: "Kirill A. Shutemov"

All ioremap()ed pages that are not backed by normal memory (NONE or
RESERVED) have to be mapped as shared. Reuse the infrastructure from
the AMD SEV code.

Note that the DMA code doesn't use ioremap() to convert memory to
shared, because DMA buffers are backed by normal memory; the DMA code
makes its buffers shared with set_memory_decrypted().

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Andi Kleen
Reviewed-by: Tony Luck
Signed-off-by: Kuppuswamy Sathyanarayanan
---
Changes since v4:
 * Renamed "protected_guest" to "cc_guest".
 * Replaced use of prot_guest_has() with cc_guest_has().
 * Modified the patch to adapt to the latest cc_guest_has() changes.
Changes since v3:
 * Rebased on top of Tom Lendacky's protected guest changes
   (https://lore.kernel.org/patchwork/cover/1468760/)

Changes since v1:
 * Fixed format issues in the commit log.

 arch/x86/include/asm/pgtable.h |  4 ++++
 arch/x86/mm/ioremap.c          |  8 ++++++--
 include/linux/cc_platform.h    | 13 +++++++++++++
 3 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 448cd01eb3ec..ecefccbdf2e3 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -21,6 +21,10 @@
 #define pgprot_encrypted(prot)	__pgprot(__sme_set(pgprot_val(prot)))
 #define pgprot_decrypted(prot)	__pgprot(__sme_clr(pgprot_val(prot)))
 
+/* Make the page accessible by the VMM for confidential guests */
+#define pgprot_cc_guest(prot) __pgprot(pgprot_val(prot) |	\
+				       tdx_shared_mask())
+
 #ifndef __ASSEMBLY__
 #include
 #include
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 026031b3b782..83daa3f8f39c 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -26,6 +27,7 @@
 #include
 #include
 #include
+#include
 
 #include "physaddr.h"
@@ -87,8 +89,8 @@ static unsigned int __ioremap_check_ram(struct resource *res)
 }
 
 /*
- * In a SEV guest, NONE and RESERVED should not be mapped encrypted because
- * there the whole memory is already encrypted.
+ * In a SEV or TDX guest, NONE and RESERVED should not be mapped encrypted (or
+ * private in the TDX case) because there the whole memory is already encrypted.
  */
 static unsigned int __ioremap_check_encrypted(struct resource *res)
 {
@@ -246,6 +248,8 @@ __ioremap_caller(resource_size_t phys_addr, unsigned long size,
 	prot = PAGE_KERNEL_IO;
 	if ((io_desc.flags & IORES_MAP_ENCRYPTED) || encrypted)
 		prot = pgprot_encrypted(prot);
+	else if (cc_platform_has(CC_ATTR_GUEST_SHARED_MAPPING_INIT))
+		prot = pgprot_cc_guest(prot);
 
 	switch (pcm) {
 	case _PAGE_CACHE_MODE_UC:
diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
index 7728574d7783..edb1d7a2f6af 100644
--- a/include/linux/cc_platform.h
+++ b/include/linux/cc_platform.h
@@ -81,6 +81,19 @@ enum cc_attr {
	 * Examples include TDX Guest.
	 */
	CC_ATTR_GUEST_UNROLL_STRING_IO,
+
+	/**
+	 * @CC_ATTR_GUEST_SHARED_MAPPING_INIT: IO Remapped memory is marked
+	 *				       as shared.
+	 *
+	 * The platform/OS is running as a guest/virtual machine and
+	 * initializes all IO remapped memory as shared.
+	 *
+	 * Examples include TDX Guest (SEV marks all pages as shared by default
+	 * so this feature cannot be enabled for it).
+	 */
+	CC_ATTR_GUEST_SHARED_MAPPING_INIT,
+
 };
 
 #ifdef CONFIG_ARCH_HAS_CC_PLATFORM
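
For reference, this patch only consumes the new attribute: a confidential-guest
platform still has to report CC_ATTR_GUEST_SHARED_MAPPING_INIT through
cc_platform_has() for the new branch in __ioremap_caller() to be taken. Below
is a minimal sketch of how the TDX side of that dispatch could look; the
function name intel_cc_platform_has() and the attribute set it returns are
assumptions made for illustration, not part of this patch.

#include <linux/cc_platform.h>	/* enum cc_attr */

/*
 * Hypothetical sketch, not part of this patch: one way a TDX guest could
 * report the new attribute.  Function name and attribute set are
 * illustrative assumptions only.
 */
static bool intel_cc_platform_has(enum cc_attr attr)
{
	switch (attr) {
	case CC_ATTR_GUEST_UNROLL_STRING_IO:
	case CC_ATTR_GUEST_SHARED_MAPPING_INIT:
		return true;	/* attributes a TDX guest would report */
	default:
		return false;
	}
}

With such a hook in place, an ioremap() of MMIO in a TDX guest takes the
pgprot_cc_guest() branch and gets tdx_shared_mask() set in its page
protection, while DMA buffers stay in normal memory and continue to be
converted with set_memory_decrypted().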