From patchwork Mon May 1 08:57:11 2023
From: Tianyu Lan
To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com
Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org
Subject: [RFC PATCH V5 01/15] x86/hyperv: Add sev-snp enlightened guest static key
Date: Mon, 1 May 2023 04:57:11 -0400
Message-Id: <20230501085726.544209-2-ltykernel@gmail.com>
In-Reply-To: <20230501085726.544209-1-ltykernel@gmail.com>
References: <20230501085726.544209-1-ltykernel@gmail.com>

From: Tianyu Lan

Introduce the static key isolation_type_en_snp, used to check whether the guest is an enlightened SEV-SNP guest.

Signed-off-by: Tianyu Lan
---
Change since RFC-v3:
       * Remove some Hyper-V specific config setting
---
 arch/x86/hyperv/ivm.c           | 11 +++++++++++
 arch/x86/include/asm/mshyperv.h |  3 +++
 arch/x86/kernel/cpu/mshyperv.c  |  9 +++++++--
 drivers/hv/hv_common.c          |  6 ++++++
 4 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index 127d5b7b63de..368b2731950e 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -409,3 +409,14 @@ bool hv_isolation_type_snp(void)
 {
 	return static_branch_unlikely(&isolation_type_snp);
 }
+
+DEFINE_STATIC_KEY_FALSE(isolation_type_en_snp);
+/*
+ * hv_isolation_type_en_snp - Check whether the system runs in an AMD
+ * SEV-SNP based enlightened isolation VM.
+ */
+bool hv_isolation_type_en_snp(void)
+{
+	return static_branch_unlikely(&isolation_type_en_snp);
+}
+
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index b445e252aa83..97d117ec95c4 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -26,6 +26,7 @@
 union hv_ghcb;
 
 DECLARE_STATIC_KEY_FALSE(isolation_type_snp);
+DECLARE_STATIC_KEY_FALSE(isolation_type_en_snp);
 
 typedef int (*hyperv_fill_flush_list_func)(
 		struct hv_guest_mapping_flush_list *flush,
@@ -45,6 +46,8 @@ extern void *hv_hypercall_pg;
 
 extern u64 hv_current_partition_id;
 
+extern bool hv_isolation_type_en_snp(void);
+
 extern union hv_ghcb * __percpu *hv_ghcb_pg;
 
 int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages);
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index c7969e806c64..63a2bfbfe701 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -402,8 +402,12 @@ static void __init ms_hyperv_init_platform(void)
 		pr_info("Hyper-V: Isolation Config: Group A 0x%x, Group B 0x%x\n",
 			ms_hyperv.isolation_config_a, ms_hyperv.isolation_config_b);
 
-		if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP)
+
+		if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) {
+			static_branch_enable(&isolation_type_en_snp);
+		} else if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP) {
 			static_branch_enable(&isolation_type_snp);
+		}
 	}
 
 	if (hv_max_functions_eax >= HYPERV_CPUID_NESTED_FEATURES) {
@@ -473,7 +477,8 @@ static void __init ms_hyperv_init_platform(void)
 
 #if IS_ENABLED(CONFIG_HYPERV)
 	if ((hv_get_isolation_type() == HV_ISOLATION_TYPE_VBS) ||
-	    (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP))
+	    (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP
+	     && !cc_platform_has(CC_ATTR_GUEST_SEV_SNP)))
 		hv_vtom_init();
 	/*
 	 * Setup the hook to get control post apic initialization.
diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
index 64f9ceca887b..179bc5f5bf52 100644
--- a/drivers/hv/hv_common.c
+++ b/drivers/hv/hv_common.c
@@ -502,6 +502,12 @@ bool __weak hv_isolation_type_snp(void)
 }
 EXPORT_SYMBOL_GPL(hv_isolation_type_snp);
 
+bool __weak hv_isolation_type_en_snp(void)
+{
+	return false;
+}
+EXPORT_SYMBOL_GPL(hv_isolation_type_en_snp);
+
 void __weak hv_setup_vmbus_handler(void (*handler)(void))
 {
 }
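For readers following the series, a minimal sketch (not part of the patch) of how code can branch on the two isolation static keys once they are set up; the function and message strings below are illustrative only:

/* Illustrative only: report which Hyper-V isolation mode the guest runs in. */
#include <linux/printk.h>
#include <asm/mshyperv.h>

static void example_report_isolation_mode(void)
{
	if (hv_isolation_type_en_snp())
		/* Fully enlightened SEV-SNP guest: the kernel handles SNP itself. */
		pr_info("Hyper-V: enlightened SEV-SNP guest\n");
	else if (hv_isolation_type_snp())
		/* SEV-SNP guest with a paravisor (vTOM path, see hv_vtom_init()). */
		pr_info("Hyper-V: SEV-SNP guest with paravisor\n");
	else
		pr_info("Hyper-V: no SNP isolation\n");
}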
From patchwork Mon May 1 08:57:12 2023
From: Tianyu Lan
Subject: [RFC PATCH V5 02/15] x86/hyperv: Decrypt hv vp assist page in sev-snp enlightened guest
Date: Mon, 1 May 2023 04:57:12 -0400
Message-Id: <20230501085726.544209-3-ltykernel@gmail.com>

From: Tianyu Lan

The hv vp assist page is shared between the SEV-SNP guest and Hyper-V. Decrypt the page when using it.

Signed-off-by: Tianyu Lan
---
 arch/x86/hyperv/hv_init.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index a5f9474f08e1..9f3e2d71d015 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -113,6 +114,11 @@ static int hv_cpu_init(unsigned int cpu)
 	}
 
 	if (!WARN_ON(!(*hvp))) {
+		if (hv_isolation_type_en_snp()) {
+			WARN_ON_ONCE(set_memory_decrypted((unsigned long)(*hvp), 1));
+			memset(*hvp, 0, PAGE_SIZE);
+		}
+
 		msr.enable = 1;
 		wrmsrl(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
 	}
From patchwork Mon May 1 08:57:13 2023
From: Tianyu Lan
Subject: [RFC PATCH V5 03/15] x86/hyperv: Set Virtual Trust Level in VMBus init message
Date: Mon, 1 May 2023 04:57:13 -0400
Message-Id: <20230501085726.544209-4-ltykernel@gmail.com>

From: Tianyu Lan

An SEV-SNP guest runs at a VTL (Virtual Trust Level), which the guest gets from Hyper-V via the HVCALL_GET_VP_REGISTERS hypercall. Set the target VTL in the VMBus init message.

Signed-off-by: Tianyu Lan
---
Change since RFC v4:
       * Use struct_size to calculate array size.
       * Fix some coding style

Change since RFC v3:
       * Use the standard helper functions to check hypercall result
       * Fix coding style

Change since RFC v2:
       * Rename get_current_vtl() to get_vtl()
       * Fix some coding style issues
---
 arch/x86/hyperv/hv_init.c          | 36 ++++++++++++++++++++++++++++++
 arch/x86/include/asm/hyperv-tlfs.h |  7 ++++++
 drivers/hv/connection.c            |  1 +
 include/asm-generic/mshyperv.h     |  1 +
 include/linux/hyperv.h             |  4 ++--
 5 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index 9f3e2d71d015..331b855314b7 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -384,6 +384,40 @@ static void __init hv_get_partition_id(void)
 	local_irq_restore(flags);
 }
 
+static u8 __init get_vtl(void)
+{
+	u64 control = HV_HYPERCALL_REP_COMP_1 | HVCALL_GET_VP_REGISTERS;
+	struct hv_get_vp_registers_input *input;
+	struct hv_get_vp_registers_output *output;
+	u64 vtl = 0;
+	u64 ret;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	output = (struct hv_get_vp_registers_output *)input;
+	if (!input) {
+		local_irq_restore(flags);
+		goto done;
+	}
+
+	memset(input, 0, struct_size(input, element, 1));
+	input->header.partitionid = HV_PARTITION_ID_SELF;
+	input->header.vpindex = HV_VP_INDEX_SELF;
+	input->header.inputvtl = 0;
+	input->element[0].name0 = HV_X64_REGISTER_VSM_VP_STATUS;
+
+	ret = hv_do_hypercall(control, input, output);
+	if (hv_result_success(ret))
+		vtl = output->as64.low & HV_X64_VTL_MASK;
+	else
+		pr_err("Hyper-V: failed to get VTL! %lld", ret);
+	local_irq_restore(flags);
+
+done:
+	return vtl;
+}
+
 /*
  * This function is to be invoked early in the boot sequence after the
  * hypervisor has been detected.
@@ -512,6 +546,8 @@ void __init hyperv_init(void)
 	/* Query the VMs extended capability once, so that it can be cached. */
 	hv_query_ext_cap(0);
 
+	/* Find the VTL */
+	ms_hyperv.vtl = get_vtl();
 	return;
 
 clean_guest_os_id:
diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index cea95dcd27c2..4bf0b315b0ce 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -301,6 +301,13 @@ enum hv_isolation_type {
 #define HV_X64_MSR_TIME_REF_COUNT	HV_REGISTER_TIME_REF_COUNT
 #define HV_X64_MSR_REFERENCE_TSC	HV_REGISTER_REFERENCE_TSC
 
+/*
+ * This register is only accessible via the HVCALL_GET_VP_REGISTERS hvcall;
+ * there is no associated MSR address.
+ */
+#define HV_X64_REGISTER_VSM_VP_STATUS	0x000D0003
+#define HV_X64_VTL_MASK			GENMASK(3, 0)
+
 /* Hyper-V memory host visibility */
 enum hv_mem_host_visibility {
 	VMBUS_PAGE_NOT_VISIBLE		= 0,
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index 5978e9dbc286..02b54f85dc60 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -98,6 +98,7 @@ int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, u32 version)
 	 */
 	if (version >= VERSION_WIN10_V5) {
 		msg->msg_sint = VMBUS_MESSAGE_SINT;
+		msg->msg_vtl = ms_hyperv.vtl;
 		vmbus_connection.msg_conn_id = VMBUS_MESSAGE_CONNECTION_ID_4;
 	} else {
 		msg->interrupt_page = virt_to_phys(vmbus_connection.int_page);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 402a8c1c202d..3052130ba4ef 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -48,6 +48,7 @@ struct ms_hyperv_info {
 		};
 	};
 	u64 shared_gpa_boundary;
+	u8 vtl;
 };
 extern struct ms_hyperv_info ms_hyperv;
 extern bool hv_nested;
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index bfbc37ce223b..1f2bfec4abde 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -665,8 +665,8 @@ struct vmbus_channel_initiate_contact {
 			u64 interrupt_page;
 			struct {
 				u8	msg_sint;
-				u8	padding1[3];
-				u32	padding2;
+				u8	msg_vtl;
+				u8	reserved[6];
 			};
 		};
 	u64 monitor_page1;
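As a small worked example of the masking done in get_vtl() above (not part of the patch): the active VTL sits in the low four bits of the HV_X64_REGISTER_VSM_VP_STATUS value, so a returned register value of, say, 0x12 would yield VTL 2. The helper name below is hypothetical:

#include <linux/bits.h>
#include <linux/types.h>

#define HV_X64_VTL_MASK		GENMASK(3, 0)	/* 0xf, as defined in the patch */

/* Illustrative helper: extract the active VTL from the VSM VP status value. */
static inline u8 example_vtl_from_vsm_status(u64 vsm_vp_status)
{
	return vsm_vp_status & HV_X64_VTL_MASK;	/* 0x12 & 0xf == 2 */
}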
From patchwork Mon May 1 08:57:14 2023
From: Tianyu Lan
Subject: [RFC PATCH V5 04/15] x86/hyperv: Use vmmcall to implement Hyper-V hypercall in sev-snp enlightened guest
Date: Mon, 1 May 2023 04:57:14 -0400
Message-Id: <20230501085726.544209-5-ltykernel@gmail.com>

From: Tianyu Lan

In an SEV-SNP enlightened guest, a Hyper-V hypercall needs to use the VMMCALL instruction to trigger a VMEXIT and notify the hypervisor to handle the hypercall request.
Signed-off-by: Tianyu Lan
---
Change since RFC V2:
       * Fix indentation style
---
 arch/x86/include/asm/mshyperv.h | 44 ++++++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 97d117ec95c4..939373791249 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -61,16 +61,25 @@ static inline u64 hv_do_hypercall(u64 control, void *input, void *output)
 	u64 hv_status;
 
 #ifdef CONFIG_X86_64
-	if (!hv_hypercall_pg)
-		return U64_MAX;
+	if (hv_isolation_type_en_snp()) {
+		__asm__ __volatile__("mov %4, %%r8\n"
+				     "vmmcall"
+				     : "=a" (hv_status), ASM_CALL_CONSTRAINT,
+				       "+c" (control), "+d" (input_address)
+				     : "r" (output_address)
+				     : "cc", "memory", "r8", "r9", "r10", "r11");
+	} else {
+		if (!hv_hypercall_pg)
+			return U64_MAX;
 
-	__asm__ __volatile__("mov %4, %%r8\n"
-			     CALL_NOSPEC
-			     : "=a" (hv_status), ASM_CALL_CONSTRAINT,
-			       "+c" (control), "+d" (input_address)
-			     : "r" (output_address),
-			       THUNK_TARGET(hv_hypercall_pg)
-			     : "cc", "memory", "r8", "r9", "r10", "r11");
+		__asm__ __volatile__("mov %4, %%r8\n"
+				     CALL_NOSPEC
+				     : "=a" (hv_status), ASM_CALL_CONSTRAINT,
+				       "+c" (control), "+d" (input_address)
+				     : "r" (output_address),
+				       THUNK_TARGET(hv_hypercall_pg)
+				     : "cc", "memory", "r8", "r9", "r10", "r11");
+	}
 #else
 	u32 input_address_hi = upper_32_bits(input_address);
 	u32 input_address_lo = lower_32_bits(input_address);
@@ -104,7 +113,13 @@ static inline u64 _hv_do_fast_hypercall8(u64 control, u64 input1)
 	u64 hv_status;
 
 #ifdef CONFIG_X86_64
-	{
+	if (hv_isolation_type_en_snp()) {
+		__asm__ __volatile__(
+				"vmmcall"
+				: "=a" (hv_status), ASM_CALL_CONSTRAINT,
+				  "+c" (control), "+d" (input1)
+				:: "cc", "r8", "r9", "r10", "r11");
+	} else {
 		__asm__ __volatile__(CALL_NOSPEC
 				     : "=a" (hv_status), ASM_CALL_CONSTRAINT,
 				       "+c" (control), "+d" (input1)
@@ -149,7 +164,14 @@ static inline u64 _hv_do_fast_hypercall16(u64 control, u64 input1, u64 input2)
 	u64 hv_status;
 
 #ifdef CONFIG_X86_64
-	{
+	if (hv_isolation_type_en_snp()) {
+		__asm__ __volatile__("mov %4, %%r8\n"
+				     "vmmcall"
+				     : "=a" (hv_status), ASM_CALL_CONSTRAINT,
+				       "+c" (control), "+d" (input1)
+				     : "r" (input2)
+				     : "cc", "r8", "r9", "r10", "r11");
+	} else {
 		__asm__ __volatile__("mov %4, %%r8\n"
 				     CALL_NOSPEC
 				     : "=a" (hv_status), ASM_CALL_CONSTRAINT,
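A minimal usage sketch (not from the patch): callers of hv_do_hypercall() and the fast-hypercall helpers are unchanged, because the choice between VMMCALL and the hypercall page is made inside the helpers. The function and error handling below are illustrative:

#include <linux/errno.h>
#include <asm/mshyperv.h>

/* Illustrative caller: issue a hypercall and check the result code. */
static int example_issue_hypercall(void *input, void *output)
{
	u64 status = hv_do_hypercall(HVCALL_GET_VP_REGISTERS, input, output);

	/* On an enlightened SEV-SNP guest this ran as a direct VMMCALL. */
	return hv_result_success(status) ? 0 : -EIO;
}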
From patchwork Mon May 1 08:57:15 2023
From: Tianyu Lan
Subject: [RFC PATCH V5 05/15] clocksource/drivers/hyper-v: decrypt hyperv tsc page in sev-snp enlightened guest
Date: Mon, 1 May 2023 04:57:15 -0400
Message-Id: <20230501085726.544209-6-ltykernel@gmail.com>

From: Tianyu Lan

The Hyper-V TSC page is shared with the hypervisor, so it should be decrypted in an SEV-SNP enlightened guest when it is used.
Signed-off-by: Tianyu Lan
---
Change since RFC V2:
       * Change the Subject line prefix
---
 drivers/clocksource/hyperv_timer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
index bcd9042a0c9f..66e29a19770b 100644
--- a/drivers/clocksource/hyperv_timer.c
+++ b/drivers/clocksource/hyperv_timer.c
@@ -376,7 +376,7 @@ EXPORT_SYMBOL_GPL(hv_stimer_global_cleanup);
 static union {
 	struct ms_hyperv_tsc_page page;
 	u8 reserved[PAGE_SIZE];
-} tsc_pg __aligned(PAGE_SIZE);
+} tsc_pg __bss_decrypted __aligned(PAGE_SIZE);
 
 static struct ms_hyperv_tsc_page *tsc_page = &tsc_pg.page;
 static unsigned long tsc_pfn;
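For context (not part of the patch): __bss_decrypted places a variable in the .bss..decrypted section, which the x86 early boot code maps as shared with the host, so the TSC page needs no runtime set_memory_decrypted() call. A hypothetical declaration of the same kind:

#include <linux/types.h>
#include <linux/mem_encrypt.h>
#include <asm/page.h>

/* Illustrative only: a page-aligned buffer mapped shared at boot time. */
static u8 example_shared_buffer[PAGE_SIZE] __bss_decrypted __aligned(PAGE_SIZE);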
From patchwork Mon May 1 08:57:16 2023
From: Tianyu Lan
Subject: [RFC PATCH V5 06/15] hv: vmbus: decrypt VMBus pages for sev-snp enlightened guest
Date: Mon, 1 May 2023 04:57:16 -0400
Message-Id: <20230501085726.544209-7-ltykernel@gmail.com>

From: Tianyu Lan

The VMBus post-message, SynIC event and SynIC message pages are shared with the hypervisor, so decrypt these pages in the SEV-SNP guest.

Signed-off-by: Tianyu Lan
---
Change since RFC V4:
       * Fix encrypt and free page order.

Change since RFC V3:
       * Set the pages back to encrypted in hv_synic_free()

Change since RFC V2:
       * Fix error in the error code path and encrypt pages correctly when decryption failure happens.
---
 drivers/hv/hv.c | 41 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 38 insertions(+), 3 deletions(-)

diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
index de6708dbe0df..e0943db51acd 100644
--- a/drivers/hv/hv.c
+++ b/drivers/hv/hv.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include "hyperv_vmbus.h"
 
 /* The one and only */
@@ -78,7 +79,7 @@ int hv_post_message(union hv_connection_id connection_id,
 
 int hv_synic_alloc(void)
 {
-	int cpu;
+	int cpu, ret;
 	struct hv_per_cpu_context *hv_cpu;
 
 	/*
@@ -123,9 +124,33 @@ int hv_synic_alloc(void)
 			goto err;
 		}
 	}
+
+	if (hv_isolation_type_en_snp()) {
+		ret = set_memory_decrypted((unsigned long)
+			hv_cpu->synic_message_page, 1);
+		if (ret)
+			goto err;
+
+		ret = set_memory_decrypted((unsigned long)
+			hv_cpu->synic_event_page, 1);
+		if (ret)
+			goto err_decrypt_event_page;
+
+		memset(hv_cpu->synic_message_page, 0, PAGE_SIZE);
+		memset(hv_cpu->synic_event_page, 0, PAGE_SIZE);
+	}
 	}
 
 	return 0;
+
+err_decrypt_msg_page:
+	set_memory_encrypted((unsigned long)
+		hv_cpu->synic_event_page, 1);
+
+err_decrypt_event_page:
+	set_memory_encrypted((unsigned long)
+		hv_cpu->synic_message_page, 1);
+
 err:
 	/*
 	 * Any memory allocations that succeeded will be freed when
@@ -143,8 +168,18 @@ void hv_synic_free(void)
 		struct hv_per_cpu_context *hv_cpu =
 			per_cpu_ptr(hv_context.cpu_context, cpu);
 
-		free_page((unsigned long)hv_cpu->synic_event_page);
-		free_page((unsigned long)hv_cpu->synic_message_page);
+		if (hv_isolation_type_en_snp()) {
+			if (!set_memory_encrypted((unsigned long)
+			    hv_cpu->synic_message_page, 1))
+				free_page((unsigned long)hv_cpu->synic_message_page);
+
+			if (!set_memory_encrypted((unsigned long)
+			    hv_cpu->synic_event_page, 1))
+				free_page((unsigned long)hv_cpu->synic_event_page);
+		} else {
+			free_page((unsigned long)hv_cpu->synic_event_page);
+			free_page((unsigned long)hv_cpu->synic_message_page);
+		}
 	}
 
 	kfree(hv_context.hv_numa_map);
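A self-contained sketch (not the driver's code; the function names are made up) of the allocate/decrypt and encrypt/free ordering this patch follows for pages shared with the hypervisor:

#include <linux/gfp.h>
#include <linux/string.h>
#include <linux/set_memory.h>

/* Illustrative only: allocate a page and make it shared (decrypted). */
static void *example_alloc_shared_page(void)
{
	void *page = (void *)get_zeroed_page(GFP_KERNEL);

	if (!page)
		return NULL;
	if (set_memory_decrypted((unsigned long)page, 1))
		return NULL;	/* encryption state uncertain: leak, don't reuse */
	memset(page, 0, PAGE_SIZE);	/* contents are now visible to the host */
	return page;
}

/* Illustrative only: re-encrypt before freeing; never free a still-shared page. */
static void example_free_shared_page(void *page)
{
	if (page && !set_memory_encrypted((unsigned long)page, 1))
		free_page((unsigned long)page);
}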
From patchwork Mon May 1 08:57:17 2023
From: Tianyu Lan
Subject: [RFC PATCH V5 07/15] drivers: hv: Decrypt percpu hvcall input arg page in sev-snp enlightened guest
Date: Mon, 1 May 2023 04:57:17 -0400
Message-Id: <20230501085726.544209-8-ltykernel@gmail.com>

From: Tianyu Lan

The hypervisor needs to access the hypercall input arg page, so the guest should decrypt the page.

Signed-off-by: Tianyu Lan
---
Change since RFC V4:
       * Use pgcount to free input arg page

Change since RFC V3:
       * Use pgcount to decrypt memory.

Change since RFC V2:
       * Set inputarg to be zero after kfree()
       * Do not free mem when it fails to encrypt mem in hv_common_cpu_die().
---
 drivers/hv/hv_common.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
index 179bc5f5bf52..15d3054f3440 100644
--- a/drivers/hv/hv_common.c
+++ b/drivers/hv/hv_common.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -359,6 +360,7 @@ int hv_common_cpu_init(unsigned int cpu)
 	u64 msr_vp_index;
 	gfp_t flags;
 	int pgcount = hv_root_partition ? 2 : 1;
+	int ret;
 
 	/* hv_cpu_init() can be called with IRQs disabled from hv_resume() */
 	flags = irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL;
@@ -368,6 +370,17 @@ int hv_common_cpu_init(unsigned int cpu)
 	if (!(*inputarg))
 		return -ENOMEM;
 
+	if (hv_isolation_type_en_snp()) {
+		ret = set_memory_decrypted((unsigned long)*inputarg, pgcount);
+		if (ret) {
+			kfree(*inputarg);
+			*inputarg = NULL;
+			return ret;
+		}
+
+		memset(*inputarg, 0x00, pgcount * PAGE_SIZE);
+	}
+
 	if (hv_root_partition) {
 		outputarg = (void **)this_cpu_ptr(hyperv_pcpu_output_arg);
 		*outputarg = (char *)(*inputarg) + HV_HYP_PAGE_SIZE;
@@ -387,6 +400,7 @@ int hv_common_cpu_die(unsigned int cpu)
 {
 	unsigned long flags;
 	void **inputarg, **outputarg;
+	int pgcount = hv_root_partition ? 2 : 1;
 	void *mem;
 
 	local_irq_save(flags);
@@ -402,7 +416,12 @@ int hv_common_cpu_die(unsigned int cpu)
 
 	local_irq_restore(flags);
 
-	kfree(mem);
+	if (hv_isolation_type_en_snp()) {
+		if (!set_memory_encrypted((unsigned long)mem, pgcount))
+			kfree(mem);
+	} else {
+		kfree(mem);
+	}
 
 	return 0;
 }
From patchwork Mon May 1 08:57:18 2023
From: Tianyu Lan
Subject: [RFC PATCH V5 08/15] x86/hyperv: Initialize cpu and memory for sev-snp enlightened guest
Date: Mon, 1 May 2023 04:57:18 -0400
Message-Id: <20230501085726.544209-9-ltykernel@gmail.com>

From: Tianyu Lan

Read the processor and memory info from the specific addresses that are populated by Hyper-V. Initialize the SMP CPU related ops, pvalidate system memory and add it to the e820 table.
Signed-off-by: Tianyu Lan
---
Change since RFC v4:
       * Add mem info addr to get mem layout info
---
 arch/x86/hyperv/ivm.c           | 86 +++++++++++++++++++++++++++++++++
 arch/x86/include/asm/mshyperv.h | 17 +++++++
 arch/x86/kernel/cpu/mshyperv.c  |  3 ++
 3 files changed, 106 insertions(+)

diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index 368b2731950e..522eab55c0dd 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -17,6 +17,11 @@
 #include
 #include
 #include
+#include
+#include
+#include
+#include
+#include
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
@@ -57,6 +62,8 @@ union hv_ghcb {
 
 static u16 hv_ghcb_version __ro_after_init;
 
+static u32 processor_count;
+
 u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size)
 {
 	union hv_ghcb *hv_ghcb;
@@ -356,6 +363,85 @@ static bool hv_is_private_mmio(u64 addr)
 	return false;
 }
 
+static __init void hv_snp_get_smp_config(unsigned int early)
+{
+	if (!early)
+		return;
+
+	/*
+	 * There is no firmware or ACPI MADT table support in
+	 * the Hyper-V SEV-SNP enlightened guest. Set the SMP
+	 * related config variables during the early stage.
+	 */
+	while (num_processors < processor_count) {
+		early_per_cpu(x86_cpu_to_apicid, num_processors) = num_processors;
+		early_per_cpu(x86_bios_cpu_apicid, num_processors) = num_processors;
+		physid_set(num_processors, phys_cpu_present_map);
+		set_cpu_possible(num_processors, true);
+		set_cpu_present(num_processors, true);
+		num_processors++;
+	}
+}
+
+__init void hv_sev_init_mem_and_cpu(void)
+{
+	struct memory_map_entry *entry;
+	struct e820_entry *e820_entry;
+	u64 e820_end;
+	u64 ram_end;
+	u64 page;
+
+	/*
+	 * The Hyper-V enlightened SNP guest boots the kernel
+	 * directly without a bootloader, so ROMs, BIOS regions
+	 * and reserved resources are not available. Set these
+	 * callbacks to NULL.
+	 */
+	x86_platform.legacy.rtc = 0;
+	x86_platform.legacy.reserve_bios_regions = 0;
+	x86_platform.set_wallclock = set_rtc_noop;
+	x86_platform.get_wallclock = get_rtc_noop;
+	x86_init.resources.probe_roms = x86_init_noop;
+	x86_init.resources.reserve_resources = x86_init_noop;
+	x86_init.mpparse.find_smp_config = x86_init_noop;
+	x86_init.mpparse.get_smp_config = hv_snp_get_smp_config;
+
+	/*
+	 * The Hyper-V SEV-SNP enlightened guest doesn't support ioapic
+	 * and legacy APIC page read/write. Switch to the hv apic here.
+	 */
+	disable_ioapic_support();
+
+	/* Get processor and mem info. */
+	processor_count = *(u32 *)__va(EN_SEV_SNP_PROCESSOR_INFO_ADDR);
+	entry = (struct memory_map_entry *)__va(EN_SEV_SNP_MEM_INFO_ADDR);
+
+	/*
+	 * There is no bootloader/EFI firmware in the SEV-SNP guest.
+	 * The E820 table in memory just describes memory for the kernel,
+	 * ACPI table, cmdline, boot params and ramdisk. The dynamic
+	 * data (e.g. vCPU number and the rest of the memory layout) needs
+	 * to be read from EN_SEV_SNP_PROCESSOR_INFO_ADDR.
+	 */
+	for (; entry->numpages != 0; entry++) {
+		e820_entry = &e820_table->entries[
+				e820_table->nr_entries - 1];
+		e820_end = e820_entry->addr + e820_entry->size;
+		ram_end = (entry->starting_gpn +
+			   entry->numpages) * PAGE_SIZE;
+
+		if (e820_end < entry->starting_gpn * PAGE_SIZE)
+			e820_end = entry->starting_gpn * PAGE_SIZE;
+
+		if (e820_end < ram_end) {
+			pr_info("Hyper-V: add e820 entry [mem %#018Lx-%#018Lx]\n", e820_end, ram_end - 1);
+			e820__range_add(e820_end, ram_end - e820_end,
+					E820_TYPE_RAM);
+			for (page = e820_end; page < ram_end; page += PAGE_SIZE)
+				pvalidate((unsigned long)__va(page), RMP_PG_SIZE_4K, true);
+		}
+	}
+}
+
 void __init hv_vtom_init(void)
 {
 	/*
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 939373791249..84e024ffacd5 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -50,6 +50,21 @@ extern bool hv_isolation_type_en_snp(void);
 
 extern union hv_ghcb * __percpu *hv_ghcb_pg;
 
+/*
+ * Hyper-V puts the processor and memory layout info
+ * at these addresses in the SEV-SNP enlightened guest.
+ */
+#define EN_SEV_SNP_PROCESSOR_INFO_ADDR	0x802000
+#define EN_SEV_SNP_MEM_INFO_ADDR	0x802018
+
+struct memory_map_entry {
+	u64 starting_gpn;
+	u64 numpages;
+	u16 type;
+	u16 flags;
+	u32 reserved;
+};
+
 int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages);
 int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id);
 int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags);
@@ -255,12 +270,14 @@ void hv_ghcb_msr_read(u64 msr, u64 *value);
 bool hv_ghcb_negotiate_protocol(void);
 void hv_ghcb_terminate(unsigned int set, unsigned int reason);
 void hv_vtom_init(void);
+void hv_sev_init_mem_and_cpu(void);
 #else
 static inline void hv_ghcb_msr_write(u64 msr, u64 value) {}
 static inline void hv_ghcb_msr_read(u64 msr, u64 *value) {}
 static inline bool hv_ghcb_negotiate_protocol(void) { return false; }
 static inline void hv_ghcb_terminate(unsigned int set, unsigned int reason) {}
 static inline void hv_vtom_init(void) {}
+static inline void hv_sev_init_mem_and_cpu(void) {}
 #endif
 
 extern bool hv_isolation_type_snp(void);
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index 63a2bfbfe701..dea9b881180b 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -529,6 +529,9 @@ static void __init ms_hyperv_init_platform(void)
 	if (!(ms_hyperv.features & HV_ACCESS_TSC_INVARIANT))
 		mark_tsc_unstable("running on Hyper-V");
 
+	if (hv_isolation_type_en_snp())
+		hv_sev_init_mem_and_cpu();
+
 	hardlockup_detector_disable();
 }
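A small sketch (not from the patch) of how the Hyper-V-provided table at EN_SEV_SNP_MEM_INFO_ADDR is consumed: it is a flat array of struct memory_map_entry terminated by an entry whose numpages is zero. The dump function below is hypothetical:

#include <linux/init.h>
#include <linux/printk.h>
#include <asm/mshyperv.h>
#include <asm/page.h>

/* Illustrative only: walk the memory map that Hyper-V leaves in low memory. */
static void __init example_dump_hv_mem_map(void)
{
	struct memory_map_entry *entry = __va(EN_SEV_SNP_MEM_INFO_ADDR);

	for (; entry->numpages != 0; entry++)
		pr_info("mem entry: start gpn %#llx, %llu pages, type %u\n",
			entry->starting_gpn, entry->numpages, entry->type);
}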
(Postfix) with ESMTPS id A5F8F10FC; Mon, 1 May 2023 01:57:43 -0700 (PDT) Received: by mail-pl1-x634.google.com with SMTP id d9443c01a7336-1aaf70676b6so5085595ad.3; Mon, 01 May 2023 01:57:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682931463; x=1685523463; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=2wBt0G1QFo4NtGX3g2+/rfz7AKOqUQ37AswMPeAUefc=; b=K0GsFvLZi+NTMp5b2aDHPdL/dQ7jbtBXeFGlQ85s/7irbFyvYr8TOXCb2A9x+vCxHD yQVdwFXZ6mvzbEj31j6GgXewEiscn78Wo+3oWx8lVMhH2xa9MS2guu+hQK6MryaObOek UPGum2hj5/L/vu5zAptkkH8UKdjO0Fzceb5e/DDWy6wjpmtj9LiUI0tCqf6/ofq0VbFb vZjhoPRgxk3RW/qYaWsI9+6XE2JNDRUzSDT5vlE5hJS7kBmF5OhCxtC+1bDbjcWRlBPm Rxuu6JGNJz1am4lXoEi52wp9MctJXd3cfaBKBYJyyxoDUBGKT+50ijdbOUd0ohFSKUej WhSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682931463; x=1685523463; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=2wBt0G1QFo4NtGX3g2+/rfz7AKOqUQ37AswMPeAUefc=; b=MNVDu8lIXS7RvZC+Qu3qXwRfz6XsgCzxJmpUZ9cXwutqUWauvAj7UIqHLMt9mV9NTB 05hRH8ltl9v2mPyaczv4neWdfQkg1/0pcQGiZqLt3CLs7H9Uaz2GV6hYQSPg/TxBhqvx dNZrR9dAKlnQmhWyuS9YVMCMG8a6bfOtsukRMQ6xSpAc9jglDl8XwPm5b7bIF3iMQpRj BQncJLKCuDkDLCppH99xeexfzABj24dJFj28gsgGfRqXNs+dHDjXdResMLGsuuLfKz6v bvi7NdWrd1uuS9/48CgryNJfiiP/lDEMjy8XyrW8pSaTKuD8aIZa9SeRY/xqPqpfGRNS 8TCQ== X-Gm-Message-State: AC+VfDxJJeHE+oXR/LnHfKYrbF76VLzIlz4PG1hk08oHWEXBUuqEqo3p YBAswwLZ2+tKF94dYeox0mw= X-Google-Smtp-Source: ACHHUZ7U2KxnkJCHcrB6MnhTK/YZGVwFveC6H20olw60JgnLKpIJ80RElHJHjfKq+tG1KJ+DIOidRw== X-Received: by 2002:a17:903:11c4:b0:1a6:7ed1:7ce4 with SMTP id q4-20020a17090311c400b001a67ed17ce4mr15054784plh.44.1682931462740; Mon, 01 May 2023 01:57:42 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:b:e11b:15ea:ad44:bde7]) by smtp.gmail.com with ESMTPSA id t13-20020a1709028c8d00b001a4fe00a8d4sm17407070plo.90.2023.05.01.01.57.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 May 2023 01:57:42 -0700 (PDT) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V5 09/15] x86/hyperv: Add smp support for sev-snp guest Date: Mon, 1 May 2023 04:57:19 -0400 Message-Id: <20230501085726.544209-10-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230501085726.544209-1-ltykernel@gmail.com> References: <20230501085726.544209-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan The wakeup_secondary_cpu callback was 
populated with wakeup_cpu_via_vmgexit(), which doesn't work for Hyper-V: Hyper-V requires a Hyper-V specific hypercall to start APs. So override it with a Hyper-V specific hook that sets up the AP's sev_es_save_area data structure and starts the AP via that hypercall. Signed-off-by: Tianyu Lan --- Change since RFC v3: * Replace struct sev_es_save_area with struct vmcb_save_area * Move code from mshyperv.c to ivm.c Change since RFC v2: * Add helper function to initialize segment * Fix some coding style --- arch/x86/hyperv/ivm.c | 89 +++++++++++++++++++++++++++++++ arch/x86/include/asm/mshyperv.h | 18 +++++++ arch/x86/include/asm/sev.h | 13 +++++ arch/x86/include/asm/svm.h | 15 +++++- arch/x86/kernel/cpu/mshyperv.c | 13 ++++- arch/x86/kernel/sev.c | 4 +- include/asm-generic/hyperv-tlfs.h | 19 +++++++ 7 files changed, 166 insertions(+), 5 deletions(-) diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c index 522eab55c0dd..0ef46f1874e6 100644 --- a/arch/x86/hyperv/ivm.c +++ b/arch/x86/hyperv/ivm.c @@ -22,11 +22,15 @@ #include #include #include +#include #ifdef CONFIG_AMD_MEM_ENCRYPT #define GHCB_USAGE_HYPERV_CALL 1 +static u8 ap_start_input_arg[PAGE_SIZE] __bss_decrypted __aligned(PAGE_SIZE); +static u8 ap_start_stack[PAGE_SIZE] __aligned(PAGE_SIZE); + union hv_ghcb { struct ghcb ghcb; struct { @@ -442,6 +446,91 @@ __init void hv_sev_init_mem_and_cpu(void) } } +#define hv_populate_vmcb_seg(seg, gdtr_base) \ +do { \ + if (seg.selector) { \ + seg.base = 0; \ + seg.limit = HV_AP_SEGMENT_LIMIT; \ + seg.attrib = *(u16 *)(gdtr_base + seg.selector + 5); \ + seg.attrib = (seg.attrib & 0xFF) | ((seg.attrib >> 4) & 0xF00); \ + } \ +} while (0) \ + +int hv_snp_boot_ap(int cpu, unsigned long start_ip) +{ + struct sev_es_save_area *vmsa = (struct sev_es_save_area *) + __get_free_page(GFP_KERNEL | __GFP_ZERO); + struct desc_ptr gdtr; + u64 ret, retry = 5; + struct hv_start_virtual_processor_input *start_vp_input; + union sev_rmp_adjust rmp_adjust; + unsigned long flags; + + native_store_gdt(&gdtr); + + vmsa->gdtr.base = gdtr.address; + vmsa->gdtr.limit = gdtr.size; + + asm volatile("movl %%es, %%eax;" : "=a" (vmsa->es.selector)); + hv_populate_vmcb_seg(vmsa->es, vmsa->gdtr.base); + + asm volatile("movl %%cs, %%eax;" : "=a" (vmsa->cs.selector)); + hv_populate_vmcb_seg(vmsa->cs, vmsa->gdtr.base); + + asm volatile("movl %%ss, %%eax;" : "=a" (vmsa->ss.selector)); + hv_populate_vmcb_seg(vmsa->ss, vmsa->gdtr.base); + + asm volatile("movl %%ds, %%eax;" : "=a" (vmsa->ds.selector)); + hv_populate_vmcb_seg(vmsa->ds, vmsa->gdtr.base); + + vmsa->efer = native_read_msr(MSR_EFER); + + asm volatile("movq %%cr4, %%rax;" : "=a" (vmsa->cr4)); + asm volatile("movq %%cr3, %%rax;" : "=a" (vmsa->cr3)); + asm volatile("movq %%cr0, %%rax;" : "=a" (vmsa->cr0)); + + vmsa->xcr0 = 1; + vmsa->g_pat = HV_AP_INIT_GPAT_DEFAULT; + vmsa->rip = (u64)secondary_startup_64_no_verify; + vmsa->rsp = (u64)&ap_start_stack[PAGE_SIZE]; + + vmsa->sev_features.snp = 1; + vmsa->sev_features.restrict_injection = 1; + + rmp_adjust.as_uint64 = 0; + rmp_adjust.target_vmpl = 1; + rmp_adjust.vmsa = 1; + ret = rmpadjust((unsigned long)vmsa, RMP_PG_SIZE_4K, + rmp_adjust.as_uint64); + if (ret != 0) { + pr_err("RMPADJUST(%llx) failed: %llx\n", (u64)vmsa, ret); + return ret; + } + + local_irq_save(flags); + start_vp_input = + (struct hv_start_virtual_processor_input *)ap_start_input_arg; + memset(start_vp_input, 0, sizeof(*start_vp_input)); + start_vp_input->partitionid = -1; + start_vp_input->vpindex = cpu; + start_vp_input->targetvtl = ms_hyperv.vtl; + *(u64 *)&start_vp_input->context[0] =
__pa(vmsa) | 1; + + do { + ret = hv_do_hypercall(HVCALL_START_VP, + start_vp_input, NULL); + } while (hv_result(ret) == HV_STATUS_TIME_OUT && retry--); + + if (!hv_result_success(ret)) { + pr_err("HvCallStartVirtualProcessor failed: %llx\n", ret); + goto done; + } + +done: + local_irq_restore(flags); + return ret; +} + void __init hv_vtom_init(void) { /* diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 84e024ffacd5..5ade250ec771 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -65,6 +65,20 @@ struct memory_map_entry { u32 reserved; }; +/* + * DEFAULT INIT GPAT and SEGMENT LIMIT value in struct VMSA + * to start AP in enlightened SEV guest. + */ +#define HV_AP_INIT_GPAT_DEFAULT 0x0007040600070406ULL +#define HV_AP_SEGMENT_LIMIT 0xffffffff + +/* + * DEFAULT INIT GPAT and SEGMENT LIMIT value in struct VMSA + * to start AP in enlightened SEV guest. + */ +#define HV_AP_INIT_GPAT_DEFAULT 0x0007040600070406ULL +#define HV_AP_SEGMENT_LIMIT 0xffffffff + int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); @@ -263,6 +277,8 @@ struct irq_domain *hv_create_pci_msi_domain(void); int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector, struct hv_interrupt_entry *entry); int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry); +int hv_set_mem_host_visibility(unsigned long addr, int numpages, bool visible); +int hv_snp_boot_ap(int cpu, unsigned long start_ip); #ifdef CONFIG_AMD_MEM_ENCRYPT void hv_ghcb_msr_write(u64 msr, u64 value); @@ -271,6 +287,7 @@ bool hv_ghcb_negotiate_protocol(void); void hv_ghcb_terminate(unsigned int set, unsigned int reason); void hv_vtom_init(void); void hv_sev_init_mem_and_cpu(void); +int hv_snp_boot_ap(int cpu, unsigned long start_ip); #else static inline void hv_ghcb_msr_write(u64 msr, u64 value) {} static inline void hv_ghcb_msr_read(u64 msr, u64 *value) {} @@ -278,6 +295,7 @@ static inline bool hv_ghcb_negotiate_protocol(void) { return false; } static inline void hv_ghcb_terminate(unsigned int set, unsigned int reason) {} static inline void hv_vtom_init(void) {} static inline void hv_sev_init_mem_and_cpu(void) {} +static int hv_snp_boot_ap(int cpu, unsigned long start_ip) {} #endif extern bool hv_isolation_type_snp(void); diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h index 13dc2a9d23c1..0d57cb1c1bb4 100644 --- a/arch/x86/include/asm/sev.h +++ b/arch/x86/include/asm/sev.h @@ -88,6 +88,19 @@ extern bool handle_vc_boot_ghcb(struct pt_regs *regs); #define RMPADJUST_VMSA_PAGE_BIT BIT(16) +union sev_rmp_adjust { + u64 as_uint64; + struct { + unsigned long target_vmpl : 8; + unsigned long enable_read : 1; + unsigned long enable_write : 1; + unsigned long enable_user_execute : 1; + unsigned long enable_kernel_execute : 1; + unsigned long reserved1 : 4; + unsigned long vmsa : 1; + }; +}; + /* SNP Guest message request */ struct snp_req_data { unsigned long req_gpa; diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h index 770dcf75eaa9..cd6bf989d918 100644 --- a/arch/x86/include/asm/svm.h +++ b/arch/x86/include/asm/svm.h @@ -425,7 +425,20 @@ struct sev_es_save_area { u64 guest_exit_info_2; u64 guest_exit_int_info; u64 guest_nrip; - u64 sev_features; + union { + struct { + u64 snp : 1; + u64 vtom : 1; + u64 reflectvc : 1; + u64 restrict_injection : 1; + u64 
alternate_injection : 1; + u64 full_debug : 1; + u64 reserved1 : 1; + u64 snpbtb_isolation : 1; + u64 resrved2 : 56; + }; + u64 val; + } sev_features; u64 vintr_ctrl; u64 guest_exit_code; u64 virtual_tom; diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index dea9b881180b..0c5f9f7bd7ba 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -295,6 +295,16 @@ static void __init hv_smp_prepare_cpus(unsigned int max_cpus) native_smp_prepare_cpus(max_cpus); + /* + * Override wakeup_secondary_cpu_64 callback for SEV-SNP + * enlightened guest. + */ + if (hv_isolation_type_en_snp()) + apic->wakeup_secondary_cpu_64 = hv_snp_boot_ap; + + if (!hv_root_partition) + return; + #ifdef CONFIG_X86_64 for_each_present_cpu(i) { if (i == 0) @@ -502,8 +512,7 @@ static void __init ms_hyperv_init_platform(void) # ifdef CONFIG_SMP smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu; - if (hv_root_partition) - smp_ops.smp_prepare_cpus = hv_smp_prepare_cpus; + smp_ops.smp_prepare_cpus = hv_smp_prepare_cpus; # endif /* diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index b031244d6d2d..20f3fd8ade2f 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -1082,7 +1082,7 @@ static int wakeup_cpu_via_vmgexit(int apic_id, unsigned long start_ip) * SEV_FEATURES (matches the SEV STATUS MSR right shifted 2 bits) */ vmsa->vmpl = 0; - vmsa->sev_features = sev_status >> 2; + vmsa->sev_features.val = sev_status >> 2; /* Switch the page over to a VMSA page now that it is initialized */ ret = snp_set_vmsa(vmsa, true); @@ -1099,7 +1099,7 @@ static int wakeup_cpu_via_vmgexit(int apic_id, unsigned long start_ip) ghcb = __sev_get_ghcb(&state); vc_ghcb_invalidate(ghcb); - ghcb_set_rax(ghcb, vmsa->sev_features); + ghcb_set_rax(ghcb, vmsa->sev_features.val); ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_CREATION); ghcb_set_sw_exit_info_1(ghcb, ((u64)apic_id << 32) | SVM_VMGEXIT_AP_CREATE); ghcb_set_sw_exit_info_2(ghcb, __pa(vmsa)); diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h index f4e4cc4f965f..959b075591b2 100644 --- a/include/asm-generic/hyperv-tlfs.h +++ b/include/asm-generic/hyperv-tlfs.h @@ -149,6 +149,7 @@ union hv_reference_tsc_msr { #define HVCALL_ENABLE_VP_VTL 0x000f #define HVCALL_NOTIFY_LONG_SPIN_WAIT 0x0008 #define HVCALL_SEND_IPI 0x000b +#define HVCALL_ENABLE_VP_VTL 0x000f #define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX 0x0013 #define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX 0x0014 #define HVCALL_SEND_IPI_EX 0x0015 @@ -168,6 +169,7 @@ union hv_reference_tsc_msr { #define HVCALL_RETARGET_INTERRUPT 0x007e #define HVCALL_START_VP 0x0099 #define HVCALL_GET_VP_ID_FROM_APIC_ID 0x009a +#define HVCALL_START_VIRTUAL_PROCESSOR 0x0099 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0 #define HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY 0x00db @@ -223,6 +225,7 @@ enum HV_GENERIC_SET_FORMAT { #define HV_STATUS_INVALID_PORT_ID 17 #define HV_STATUS_INVALID_CONNECTION_ID 18 #define HV_STATUS_INSUFFICIENT_BUFFERS 19 +#define HV_STATUS_TIME_OUT 120 #define HV_STATUS_VTL_ALREADY_ENABLED 134 /* @@ -783,6 +786,22 @@ struct hv_input_unmap_device_interrupt { struct hv_interrupt_entry interrupt_entry; } __packed; +struct hv_enable_vp_vtl_input { + u64 partitionid; + u32 vpindex; + u8 targetvtl; + u8 padding[3]; + u8 context[0xe0]; +} __packed; + +struct hv_start_virtual_processor_input { + u64 partitionid; + u32 vpindex; + u8 targetvtl; + u8 padding[3]; + u8 context[0xe0]; 
+} __packed; + #define HV_SOURCE_SHADOW_NONE 0x0 #define HV_SOURCE_SHADOW_BRIDGE_BUS_RANGE 0x1 From patchwork Mon May 1 08:57:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13227418 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CB73C77B73 for ; Mon, 1 May 2023 08:59:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232548AbjEAI7Q (ORCPT ); Mon, 1 May 2023 04:59:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232499AbjEAI6c (ORCPT ); Mon, 1 May 2023 04:58:32 -0400 Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com [IPv6:2607:f8b0:4864:20::630]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9F979199C; Mon, 1 May 2023 01:57:45 -0700 (PDT) Received: by mail-pl1-x630.google.com with SMTP id d9443c01a7336-1aae5c2423dso14849775ad.3; Mon, 01 May 2023 01:57:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682931464; x=1685523464; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=cmY2ej1hy4vAs/f4OlrbKuvNcHo8SUtv7o8IcsrQg3U=; b=FkdkRUE+5Whsuw9r5s2/iWsKbU/238MR7HewzSR5rax2e+7VF1hr07JmScnAgDk7q3 gAhLUKyzqNEdJdYeZE0fkNqi9yaMLkzh2HXJw5gu/7bcHq1G7yRtdb25eKpzZBRCGXEr Ry0POxQUAC94s/D4zrTPqm0IBW8XHtVkuFAxDnSEGRZ7r6cvn/JwhelO2wV8BR7n+kAN Xc+fE37JIJDG0Zwh1Jx1hthKpihB6QbBAp2WpouNoABBPjsjS8hR3a3yBfZglZEocvCr 8yTkcHReqMNJPO0UJHHUmhW3ZE+XvW2GjR7NQvFIO3W2XMam462PRMmG0h7MwYe7RIyL p+QA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682931464; x=1685523464; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=cmY2ej1hy4vAs/f4OlrbKuvNcHo8SUtv7o8IcsrQg3U=; b=XRAk8+jJHyzPa6YtOCu+hVYDfTJvEFOZ8R/QlWJSf4cMuN191BvrEFLj6zVLTQb/na Qdy7AM15uxjgx1Dfj9VrdJwCybo52y71JNuM6wzURX7By2PKg0l9t+7NDNI6bhJW/puH CXTuREZ6bI2An+vUwNmsyoecRKiqsVaGUGegBy8AXPatSHg0uRIyX6GHcerpNzYQ/tIo oDw74KwKXIWUlqnHxCbAxOapVV/K2M0cGu6DwHhxpUHf7gOlxrU4vqrrchNEG2mO3xqW DhIcTPcp1CjJiS2rB4O6O1riNj/WKtnUUrVWNwdJJOzftxY8tVfhlvk+5MRtMfkMSnv9 X0vw== X-Gm-Message-State: AC+VfDwkWhZwlXT8iclTkru+/PFJe09xec5cA0Y8oOqQPi1wWxbXbwPA tJDkplA8Ym49fiM3ouH/4L4= X-Google-Smtp-Source: ACHHUZ4ZH8PKEVHrfl+mAnYFuvD/XiRq5X7wX2j3kcUWhPydZMXAHdnw2f3GWKNJVeq0TJG4FCa0cg== X-Received: by 2002:a17:902:ce05:b0:1a9:2ae4:6c1e with SMTP id k5-20020a170902ce0500b001a92ae46c1emr13396216plg.4.1682931464296; Mon, 01 May 2023 01:57:44 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:b:e11b:15ea:ad44:bde7]) by smtp.gmail.com with ESMTPSA id t13-20020a1709028c8d00b001a4fe00a8d4sm17407070plo.90.2023.05.01.01.57.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 May 2023 01:57:43 -0700 (PDT) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, 
ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V5 10/15] x86/hyperv: Add hyperv-specific handling for VMMCALL under SEV-ES Date: Mon, 1 May 2023 04:57:20 -0400 Message-Id: <20230501085726.544209-11-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230501085726.544209-1-ltykernel@gmail.com> References: <20230501085726.544209-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Add Hyperv-specific handling for faults caused by VMMCALL instructions. Signed-off-by: Tianyu Lan --- arch/x86/kernel/cpu/mshyperv.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index 0c5f9f7bd7ba..3469b369e627 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -32,6 +32,7 @@ #include #include #include +#include /* Is Linux running as the root partition? */ bool hv_root_partition; @@ -577,6 +578,20 @@ static bool __init ms_hyperv_msi_ext_dest_id(void) return eax & HYPERV_VS_PROPERTIES_EAX_EXTENDED_IOAPIC_RTE; } +static void hv_sev_es_hcall_prepare(struct ghcb *ghcb, struct pt_regs *regs) +{ + /* RAX and CPL are already in the GHCB */ + ghcb_set_rcx(ghcb, regs->cx); + ghcb_set_rdx(ghcb, regs->dx); + ghcb_set_r8(ghcb, regs->r8); +} + +static bool hv_sev_es_hcall_finish(struct ghcb *ghcb, struct pt_regs *regs) +{ + /* No checking of the return state needed */ + return true; +} + const __initconst struct hypervisor_x86 x86_hyper_ms_hyperv = { .name = "Microsoft Hyper-V", .detect = ms_hyperv_platform, @@ -584,4 +599,6 @@ const __initconst struct hypervisor_x86 x86_hyper_ms_hyperv = { .init.x2apic_available = ms_hyperv_x2apic_available, .init.msi_ext_dest_id = ms_hyperv_msi_ext_dest_id, .init.init_platform = ms_hyperv_init_platform, + .runtime.sev_es_hcall_prepare = hv_sev_es_hcall_prepare, + .runtime.sev_es_hcall_finish = hv_sev_es_hcall_finish, }; From patchwork Mon May 1 08:57:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13227417 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F1784C7EE21 for ; Mon, 1 May 2023 08:58:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232526AbjEAI65 (ORCPT ); Mon, 1 May 2023 04:58:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54372 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232462AbjEAI63 (ORCPT ); Mon, 1 May 2023 04:58:29 -0400 Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com [IPv6:2607:f8b0:4864:20::633]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BA9E41BCF; Mon, 1 May 2023 01:57:46 -0700 (PDT) Received: by 
mail-pl1-x633.google.com with SMTP id d9443c01a7336-1a9253d4551so16749395ad.0; Mon, 01 May 2023 01:57:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682931465; x=1685523465; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=7D3Lq52uaMEV7sN/HRjappRmwKRtSxBzVpnKNON3q8Q=; b=YXO7mPMhW06czn57J6V8VbU7ARi++uzNkomrqOJH7HUjTB4gNhD0oObqdZB1GAHUnx da1HYJttw3EpCPY/HH4FaR9oJ40hEUnN4kWqYM+EATh4fDRu3T/nVrq5i394QCG2GEUV KJRbZLxzw0wh6Z9uhC4/fc7CiOYR40OuePBxtnHun1x/eqdJOGsRqIuj8UlfGCsogC1+ a0EIJL/B+fzOAC2R0RpZFWWRz3xdWiTV2iJ9ci9cO14Q4A3Wly8qhqurFJgfCvHfdGs5 8X5sEhH/BBnnCCx6JOZVz9qz2EuR/i8NSycKMN7JGinG7wj5nazNWoywUGS949cv53kM NWSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682931465; x=1685523465; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=7D3Lq52uaMEV7sN/HRjappRmwKRtSxBzVpnKNON3q8Q=; b=A5Y7xWDGywa6fd98h/7IA4l18CnIn9NGlqhCcmKGWpWY7ioCcNbOKPSaD+kbCNiwsY 9pOnnGgeM3vP5GEWHZV9qrHiBPTWV5OI2wPua/o8mNeGumCIBAyOFKZvjhfTyMD5URhE kcCQ/koTwor7r4qEN5dSEiBvaIfyMyka7tUB9UKf1nODYIA2CMxKJg1rCjdg4MOfPt30 l7DPCZcn/iCbqqvy74xWysn8zLD6eTVfhLpeJD5LiyvACiqPN2mlsDC0OlWsXYea6+hv MvsB8YDo64o0fc/u1fXGvZ3O8fLbJFybDViIDKhL/EmqizOGYR2yi1ufij7UJ5hp82mG Tx3Q== X-Gm-Message-State: AC+VfDx2mrpVbvanU7PomM9HPPBIdALqgD+RV+Xrk0DFtWlO+guHld9v t8rOvzz7wqvIarvCPrynKkA= X-Google-Smtp-Source: ACHHUZ43A/6QADb0mkbQSsdVvNzcfEAHLzAlBCSIhCKYF91VQiE4RCjmKrqVpTmiNmj5bQ9QxCfODQ== X-Received: by 2002:a17:903:1250:b0:1a9:b62f:933b with SMTP id u16-20020a170903125000b001a9b62f933bmr13437271plh.53.1682931465622; Mon, 01 May 2023 01:57:45 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:b:e11b:15ea:ad44:bde7]) by smtp.gmail.com with ESMTPSA id t13-20020a1709028c8d00b001a4fe00a8d4sm17407070plo.90.2023.05.01.01.57.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 May 2023 01:57:45 -0700 (PDT) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V5 11/15] x86/sev: Add a #HV exception handler Date: Mon, 1 May 2023 04:57:21 -0400 Message-Id: <20230501085726.544209-12-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230501085726.544209-1-ltykernel@gmail.com> References: <20230501085726.544209-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Add a #HV exception handler that uses IST stack. 
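A minimal sketch of the intended dispatch flow, assuming only the entry points introduced below (illustrative only, not part of the diff; instrumentation and error handling omitted, the real entry stub lives in entry_64.S):

/*
 * #HV can be injected by the hypervisor while the current stack is not
 * trustworthy (e.g. in the SYSCALL gap), so the asm stub runs on its own
 * IST stack and moves off it before calling the C handler.
 */
static void hv_injection_dispatch_sketch(struct pt_regs *regs, bool from_user)
{
	if (from_user) {
		/* User mode entry: handled like a normal exception, SIGBUS if unhandled. */
		user_exc_hv_injection(regs);
		return;
	}

	/*
	 * Kernel mode entry: leave the IST_INDEX_HV stack (falling back to the
	 * HV2 stack when the interrupted stack cannot be trusted) so that a
	 * nested #HV cannot clobber this frame, then run the kernel handler,
	 * which terminates the guest if the event cannot be handled.
	 */
	regs = hv_switch_off_ist(regs);
	kernel_exc_hv_injection(regs);
}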
Signed-off-by: Tianyu Lan --- Change since RFC V2: * Remove unnecessary line in the change log. --- arch/x86/entry/entry_64.S | 22 +++++++---- arch/x86/include/asm/cpu_entry_area.h | 6 +++ arch/x86/include/asm/idtentry.h | 40 +++++++++++++++++++- arch/x86/include/asm/page_64_types.h | 1 + arch/x86/include/asm/trapnr.h | 1 + arch/x86/include/asm/traps.h | 1 + arch/x86/kernel/cpu/common.c | 1 + arch/x86/kernel/dumpstack_64.c | 9 ++++- arch/x86/kernel/idt.c | 1 + arch/x86/kernel/sev.c | 53 +++++++++++++++++++++++++++ arch/x86/kernel/traps.c | 40 ++++++++++++++++++++ arch/x86/mm/cpu_entry_area.c | 2 + 12 files changed, 165 insertions(+), 12 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index eccc3431e515..653b1f10699b 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -496,7 +496,7 @@ SYM_CODE_END(\asmsym) #ifdef CONFIG_AMD_MEM_ENCRYPT /** - * idtentry_vc - Macro to generate entry stub for #VC + * idtentry_sev - Macro to generate entry stub for #VC * @vector: Vector number * @asmsym: ASM symbol for the entry point * @cfunc: C function to be called @@ -515,14 +515,18 @@ SYM_CODE_END(\asmsym) * * The macro is only used for one vector, but it is planned to be extended in * the future for the #HV exception. - */ -.macro idtentry_vc vector asmsym cfunc +*/ +.macro idtentry_sev vector asmsym cfunc has_error_code:req SYM_CODE_START(\asmsym) UNWIND_HINT_IRET_REGS ENDBR ASM_CLAC cld + .if \vector == X86_TRAP_HV + pushq $-1 /* ORIG_RAX: no syscall */ + .endif + /* * If the entry is from userspace, switch stacks and treat it as * a normal entry. @@ -545,7 +549,12 @@ SYM_CODE_START(\asmsym) * stack. */ movq %rsp, %rdi /* pt_regs pointer */ - call vc_switch_off_ist + .if \vector == X86_TRAP_VC + call vc_switch_off_ist + .else + call hv_switch_off_ist + .endif + movq %rax, %rsp /* Switch to new stack */ ENCODE_FRAME_POINTER @@ -568,10 +577,7 @@ SYM_CODE_START(\asmsym) /* Switch to the regular task stack */ .Lfrom_usermode_switch_stack_\@: - idtentry_body user_\cfunc, has_error_code=1 - -_ASM_NOKPROBE(\asmsym) -SYM_CODE_END(\asmsym) + idtentry_body user_\cfunc, \has_error_code .endm #endif diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h index 462fc34f1317..2186ed601b4a 100644 --- a/arch/x86/include/asm/cpu_entry_area.h +++ b/arch/x86/include/asm/cpu_entry_area.h @@ -30,6 +30,10 @@ char VC_stack[optional_stack_size]; \ char VC2_stack_guard[guardsize]; \ char VC2_stack[optional_stack_size]; \ + char HV_stack_guard[guardsize]; \ + char HV_stack[optional_stack_size]; \ + char HV2_stack_guard[guardsize]; \ + char HV2_stack[optional_stack_size]; \ char IST_top_guard[guardsize]; \ /* The exception stacks' physical storage. 
No guard pages required */ @@ -52,6 +56,8 @@ enum exception_stack_ordering { ESTACK_MCE, ESTACK_VC, ESTACK_VC2, + ESTACK_HV, + ESTACK_HV2, N_EXCEPTION_STACKS }; diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h index b241af4ce9b4..b0f3501b2767 100644 --- a/arch/x86/include/asm/idtentry.h +++ b/arch/x86/include/asm/idtentry.h @@ -317,6 +317,19 @@ static __always_inline void __##func(struct pt_regs *regs) __visible noinstr void kernel_##func(struct pt_regs *regs, unsigned long error_code); \ __visible noinstr void user_##func(struct pt_regs *regs, unsigned long error_code) + +/** + * DECLARE_IDTENTRY_HV - Declare functions for the HV entry point + * @vector: Vector number (ignored for C) + * @func: Function name of the entry point + * + * Maps to DECLARE_IDTENTRY_RAW, but declares also the user C handler. + */ +#define DECLARE_IDTENTRY_HV(vector, func) \ + DECLARE_IDTENTRY_RAW_ERRORCODE(vector, func); \ + __visible noinstr void kernel_##func(struct pt_regs *regs); \ + __visible noinstr void user_##func(struct pt_regs *regs) + /** * DEFINE_IDTENTRY_IST - Emit code for IST entry points * @func: Function name of the entry point @@ -376,6 +389,26 @@ static __always_inline void __##func(struct pt_regs *regs) #define DEFINE_IDTENTRY_VC_USER(func) \ DEFINE_IDTENTRY_RAW_ERRORCODE(user_##func) +/** + * DEFINE_IDTENTRY_HV_KERNEL - Emit code for HV injection handler + * when raised from kernel mode + * @func: Function name of the entry point + * + * Maps to DEFINE_IDTENTRY_RAW + */ +#define DEFINE_IDTENTRY_HV_KERNEL(func) \ + DEFINE_IDTENTRY_RAW(kernel_##func) + +/** + * DEFINE_IDTENTRY_HV_USER - Emit code for HV injection handler + * when raised from user mode + * @func: Function name of the entry point + * + * Maps to DEFINE_IDTENTRY_RAW + */ +#define DEFINE_IDTENTRY_HV_USER(func) \ + DEFINE_IDTENTRY_RAW(user_##func) + #else /* CONFIG_X86_64 */ /** @@ -463,8 +496,10 @@ __visible noinstr void func(struct pt_regs *regs, \ DECLARE_IDTENTRY(vector, func) # define DECLARE_IDTENTRY_VC(vector, func) \ - idtentry_vc vector asm_##func func + idtentry_sev vector asm_##func func has_error_code=1 +# define DECLARE_IDTENTRY_HV(vector, func) \ + idtentry_sev vector asm_##func func has_error_code=0 #else # define DECLARE_IDTENTRY_MCE(vector, func) \ DECLARE_IDTENTRY(vector, func) @@ -618,9 +653,10 @@ DECLARE_IDTENTRY_RAW_ERRORCODE(X86_TRAP_DF, xenpv_exc_double_fault); DECLARE_IDTENTRY_ERRORCODE(X86_TRAP_CP, exc_control_protection); #endif -/* #VC */ +/* #VC & #HV */ #ifdef CONFIG_AMD_MEM_ENCRYPT DECLARE_IDTENTRY_VC(X86_TRAP_VC, exc_vmm_communication); +DECLARE_IDTENTRY_HV(X86_TRAP_HV, exc_hv_injection); #endif #ifdef CONFIG_XEN_PV diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h index e9e2c3ba5923..0bd7dab676c5 100644 --- a/arch/x86/include/asm/page_64_types.h +++ b/arch/x86/include/asm/page_64_types.h @@ -29,6 +29,7 @@ #define IST_INDEX_DB 2 #define IST_INDEX_MCE 3 #define IST_INDEX_VC 4 +#define IST_INDEX_HV 5 /* * Set __PAGE_OFFSET to the most negative possible address + diff --git a/arch/x86/include/asm/trapnr.h b/arch/x86/include/asm/trapnr.h index f5d2325aa0b7..c6583631cecb 100644 --- a/arch/x86/include/asm/trapnr.h +++ b/arch/x86/include/asm/trapnr.h @@ -26,6 +26,7 @@ #define X86_TRAP_XF 19 /* SIMD Floating-Point Exception */ #define X86_TRAP_VE 20 /* Virtualization Exception */ #define X86_TRAP_CP 21 /* Control Protection Exception */ +#define X86_TRAP_HV 28 /* HV injected exception in SNP restricted mode */ #define X86_TRAP_VC 29 /* VMM 
Communication Exception */ #define X86_TRAP_IRET 32 /* IRET Exception */ diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h index 47ecfff2c83d..6795d3e517d6 100644 --- a/arch/x86/include/asm/traps.h +++ b/arch/x86/include/asm/traps.h @@ -16,6 +16,7 @@ asmlinkage __visible notrace struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs); void __init trap_init(void); asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *eregs); +asmlinkage __visible noinstr struct pt_regs *hv_switch_off_ist(struct pt_regs *eregs); #endif extern bool ibt_selftest(void); diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 8cd4126d8253..5bc44bcf6e48 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -2172,6 +2172,7 @@ static inline void tss_setup_ist(struct tss_struct *tss) tss->x86_tss.ist[IST_INDEX_MCE] = __this_cpu_ist_top_va(MCE); /* Only mapped when SEV-ES is active */ tss->x86_tss.ist[IST_INDEX_VC] = __this_cpu_ist_top_va(VC); + tss->x86_tss.ist[IST_INDEX_HV] = __this_cpu_ist_top_va(HV); } #else /* CONFIG_X86_64 */ diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c index f05339fee778..6d8f8864810c 100644 --- a/arch/x86/kernel/dumpstack_64.c +++ b/arch/x86/kernel/dumpstack_64.c @@ -26,11 +26,14 @@ static const char * const exception_stack_names[] = { [ ESTACK_MCE ] = "#MC", [ ESTACK_VC ] = "#VC", [ ESTACK_VC2 ] = "#VC2", + [ ESTACK_HV ] = "#HV", + [ ESTACK_HV2 ] = "#HV2", + }; const char *stack_type_name(enum stack_type type) { - BUILD_BUG_ON(N_EXCEPTION_STACKS != 6); + BUILD_BUG_ON(N_EXCEPTION_STACKS != 8); if (type == STACK_TYPE_TASK) return "TASK"; @@ -89,6 +92,8 @@ struct estack_pages estack_pages[CEA_ESTACK_PAGES] ____cacheline_aligned = { EPAGERANGE(MCE), EPAGERANGE(VC), EPAGERANGE(VC2), + EPAGERANGE(HV), + EPAGERANGE(HV2), }; static __always_inline bool in_exception_stack(unsigned long *stack, struct stack_info *info) @@ -98,7 +103,7 @@ static __always_inline bool in_exception_stack(unsigned long *stack, struct stac struct pt_regs *regs; unsigned int k; - BUILD_BUG_ON(N_EXCEPTION_STACKS != 6); + BUILD_BUG_ON(N_EXCEPTION_STACKS != 8); begin = (unsigned long)__this_cpu_read(cea_exception_stacks); /* diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c index a58c6bc1cd68..48c0a7e1dbcb 100644 --- a/arch/x86/kernel/idt.c +++ b/arch/x86/kernel/idt.c @@ -113,6 +113,7 @@ static const __initconst struct idt_data def_idts[] = { #ifdef CONFIG_AMD_MEM_ENCRYPT ISTG(X86_TRAP_VC, asm_exc_vmm_communication, IST_INDEX_VC), + ISTG(X86_TRAP_HV, asm_exc_hv_injection, IST_INDEX_HV), #endif SYSG(X86_TRAP_OF, asm_exc_overflow), diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index 20f3fd8ade2f..7b06d7c0914f 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -2006,6 +2006,59 @@ DEFINE_IDTENTRY_VC_USER(exc_vmm_communication) irqentry_exit_to_user_mode(regs); } +static bool hv_raw_handle_exception(struct pt_regs *regs) +{ + return false; +} + +static __always_inline bool on_hv_fallback_stack(struct pt_regs *regs) +{ + unsigned long sp = (unsigned long)regs; + + return (sp >= __this_cpu_ist_bottom_va(HV2) && sp < __this_cpu_ist_top_va(HV2)); +} + +DEFINE_IDTENTRY_HV_USER(exc_hv_injection) +{ + irqentry_enter_from_user_mode(regs); + instrumentation_begin(); + + if (!hv_raw_handle_exception(regs)) { + /* + * Do not kill the machine if user-space triggered the + * exception. Send SIGBUS instead and let user-space deal + * with it. 
+ */ + force_sig_fault(SIGBUS, BUS_OBJERR, (void __user *)0); + } + + instrumentation_end(); + irqentry_exit_to_user_mode(regs); +} + +DEFINE_IDTENTRY_HV_KERNEL(exc_hv_injection) +{ + irqentry_state_t irq_state; + + irq_state = irqentry_enter(regs); + instrumentation_begin(); + + if (!hv_raw_handle_exception(regs)) { + pr_emerg("PANIC: Unhandled #HV exception in kernel space\n"); + + /* Show some debug info */ + show_regs(regs); + + /* Ask hypervisor to sev_es_terminate */ + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SEV_ES_GEN_REQ); + + panic("Returned from Terminate-Request to Hypervisor\n"); + } + + instrumentation_end(); + irqentry_exit(regs, irq_state); +} + bool __init handle_vc_boot_ghcb(struct pt_regs *regs) { unsigned long exit_code = regs->orig_ax; diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index d317dc3d06a3..d29debec8134 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -905,6 +905,46 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r return regs_ret; } + +asmlinkage __visible noinstr struct pt_regs *hv_switch_off_ist(struct pt_regs *regs) +{ + unsigned long sp, *stack; + struct stack_info info; + struct pt_regs *regs_ret; + + /* + * In the SYSCALL entry path the RSP value comes from user-space - don't + * trust it and switch to the current kernel stack + */ + if (ip_within_syscall_gap(regs)) { + sp = this_cpu_read(pcpu_hot.top_of_stack); + goto sync; + } + + /* + * From here on the RSP value is trusted. Now check whether entry + * happened from a safe stack. Not safe are the entry or unknown stacks, + * use the fall-back stack instead in this case. + */ + sp = regs->sp; + stack = (unsigned long *)sp; + + if (!get_stack_info_noinstr(stack, current, &info) || info.type == STACK_TYPE_ENTRY || + info.type > STACK_TYPE_EXCEPTION_LAST) + sp = __this_cpu_ist_top_va(HV2); +sync: + /* + * Found a safe stack - switch to it as if the entry didn't happen via + * IST stack. The code below only copies pt_regs, the real switch happens + * in assembly code. 
+ */ + sp = ALIGN_DOWN(sp, 8) - sizeof(*regs_ret); + + regs_ret = (struct pt_regs *)sp; + *regs_ret = *regs; + + return regs_ret; +} #endif asmlinkage __visible noinstr struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs) diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c index e91500a80963..97554fa0ff30 100644 --- a/arch/x86/mm/cpu_entry_area.c +++ b/arch/x86/mm/cpu_entry_area.c @@ -160,6 +160,8 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu) if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) { cea_map_stack(VC); cea_map_stack(VC2); + cea_map_stack(HV); + cea_map_stack(HV2); } } } From patchwork Mon May 1 08:57:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13227419 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4BEADC7EE26 for ; Mon, 1 May 2023 08:59:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232347AbjEAI70 (ORCPT ); Mon, 1 May 2023 04:59:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54288 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232512AbjEAI6e (ORCPT ); Mon, 1 May 2023 04:58:34 -0400 Received: from mail-pf1-x434.google.com (mail-pf1-x434.google.com [IPv6:2607:f8b0:4864:20::434]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 518771BF2; Mon, 1 May 2023 01:57:48 -0700 (PDT) Received: by mail-pf1-x434.google.com with SMTP id d2e1a72fcca58-64115e652eeso25489956b3a.0; Mon, 01 May 2023 01:57:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682931467; x=1685523467; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=dKHSp+qB5n2IMDCOI0freKMV1FWlMhZLNei9eLWDXIY=; b=RPUtxnqwKOqhdKOMYjaJwr7KrrCzqPH4C19gxi/1Vrapte1tatU2mG1zjhlzWI2rWr e9E0aKKTCmyFr9EZhuiQ/pzle/s9SeI6ye0LC0czbwRkxiomXr3xaV0XP4tT/lph4PHI pLhEzuvKd9zK6/B3rGjOSP0E1Kg9fEtPcuzy7hqLgxewEfnGVbtr6hWXhXeoH8BwSI28 R3HnyQ/bCFkiFQ1124pcBpA3rmk3Mvmp4fsqNjBxnTKkSzEHdMJdbp8e6cuNkzVHL263 CKV4SGmEdi/W6fNcl2MFK52xvc410QSH7dxz9OoyAVqj7ctvPatkeWfmdEgBar4I45bt +N6A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682931467; x=1685523467; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=dKHSp+qB5n2IMDCOI0freKMV1FWlMhZLNei9eLWDXIY=; b=LXtbcodl/CUHuq48Zf+9gVsSRAR1+7l/yilBysyBZml4MasVsu+219O1+Gh948rFaW p5BYB0CB50li/8r4wwpFMFCf7DN2tp64V0JKioJzQmUuGp4tuKRG2QabNY27pUdUCj9r 2LXc4uvJfDA0q0j8828zUc9ffVarx0p9Gz/DB9uuGrwpddk7Sr8WGjD6IY+talTUsKe+ d1Yfb632ltrfWHFCoWlqSHovjqNMU3KkKYhvs7jRzguFgaZHYJF1eIhBxaKL4/15HQfs qSERsCveCJ3oseRimxyIefvxvrte7bRLYOVt+1Xf0VgJ9kHZh7tob2UbzsGp9ce7dGge 9ndg== X-Gm-Message-State: AC+VfDxd3Tf8Uk0s4Zfj9GdYWQXb2j+juCygWnYF2njh+nAqu/sJ2elJ pOvhsnTKvCveIk9+LZgmKV8= X-Google-Smtp-Source: ACHHUZ78Uss3sSCiMytfrzlQgjq0ll+TtRKA/S+jPmjFzZZxnumpdFOVJclVjYbA5VDIi94zFOKsGg== X-Received: by 2002:a17:903:22c7:b0:1a9:98ae:5975 with SMTP id y7-20020a17090322c700b001a998ae5975mr21653299plg.30.1682931466962; Mon, 01 May 2023 01:57:46 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com 
([2001:4898:80e8:b:e11b:15ea:ad44:bde7]) by smtp.gmail.com with ESMTPSA id t13-20020a1709028c8d00b001a4fe00a8d4sm17407070plo.90.2023.05.01.01.57.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 May 2023 01:57:46 -0700 (PDT) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V5 12/15] x86/sev: Add Check of #HV event in path Date: Mon, 1 May 2023 04:57:22 -0400 Message-Id: <20230501085726.544209-13-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230501085726.544209-1-ltykernel@gmail.com> References: <20230501085726.544209-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Add check_hv_pending() and check_hv_pending_after_irq() to check queued #HV event when irq is disabled. Signed-off-by: Tianyu Lan --- arch/x86/entry/entry_64.S | 18 ++++++++++++++++ arch/x86/include/asm/irqflags.h | 14 +++++++++++- arch/x86/kernel/sev.c | 38 +++++++++++++++++++++++++++++++++ 3 files changed, 69 insertions(+), 1 deletion(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 653b1f10699b..147b850babf6 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -1019,6 +1019,15 @@ SYM_CODE_END(paranoid_entry) * R15 - old SPEC_CTRL */ SYM_CODE_START_LOCAL(paranoid_exit) +#ifdef CONFIG_AMD_MEM_ENCRYPT + /* + * If a #HV was delivered during execution and interrupts were + * disabled, then check if it can be handled before the iret + * (which may re-enable interrupts). + */ + mov %rsp, %rdi + call check_hv_pending +#endif UNWIND_HINT_REGS /* @@ -1143,6 +1152,15 @@ SYM_CODE_START(error_entry) SYM_CODE_END(error_entry) SYM_CODE_START_LOCAL(error_return) +#ifdef CONFIG_AMD_MEM_ENCRYPT + /* + * If a #HV was delivered during execution and interrupts were + * disabled, then check if it can be handled before the iret + * (which may re-enable interrupts). 
+ */ + mov %rsp, %rdi + call check_hv_pending +#endif UNWIND_HINT_REGS DEBUG_ENTRY_ASSERT_IRQS_OFF testb $3, CS(%rsp) diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h index 8c5ae649d2df..d09ec6d76591 100644 --- a/arch/x86/include/asm/irqflags.h +++ b/arch/x86/include/asm/irqflags.h @@ -11,6 +11,10 @@ /* * Interrupt control: */ +#ifdef CONFIG_AMD_MEM_ENCRYPT +void check_hv_pending(struct pt_regs *regs); +void check_hv_pending_irq_enable(void); +#endif /* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */ extern inline unsigned long native_save_fl(void); @@ -40,12 +44,20 @@ static __always_inline void native_irq_disable(void) static __always_inline void native_irq_enable(void) { asm volatile("sti": : :"memory"); +#ifdef CONFIG_AMD_MEM_ENCRYPT + check_hv_pending_irq_enable(); +#endif } static __always_inline void native_safe_halt(void) { mds_idle_clear_cpu_buffers(); - asm volatile("sti; hlt": : :"memory"); + asm volatile("sti": : :"memory"); + +#ifdef CONFIG_AMD_MEM_ENCRYPT + check_hv_pending_irq_enable(); +#endif + asm volatile("hlt": : :"memory"); } static __always_inline void native_halt(void) diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index 7b06d7c0914f..e2bb19605a46 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -181,6 +181,44 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs) this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], new_ist); } +static void do_exc_hv(struct pt_regs *regs) +{ + /* Handle #HV exception. */ +} + +void check_hv_pending(struct pt_regs *regs) +{ + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) + return; + + if ((regs->flags & X86_EFLAGS_IF) == 0) + return; + + do_exc_hv(regs); +} + +void check_hv_pending_irq_enable(void) +{ + struct pt_regs regs; + + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) + return; + + memset(®s, 0, sizeof(struct pt_regs)); + asm volatile("movl %%cs, %%eax;" : "=a" (regs.cs)); + asm volatile("movl %%ss, %%eax;" : "=a" (regs.ss)); + regs.orig_ax = 0xffffffff; + regs.flags = native_save_fl(); + + /* + * Disable irq when handle pending #HV events after + * re-enabling irq. 
+ */ + asm volatile("cli" : : : "memory"); + do_exc_hv(®s); + asm volatile("sti" : : : "memory"); +} + void noinstr __sev_es_ist_exit(void) { unsigned long ist; From patchwork Mon May 1 08:57:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13227420 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8D8D6C7EE2A for ; Mon, 1 May 2023 08:59:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232561AbjEAI7Z (ORCPT ); Mon, 1 May 2023 04:59:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54350 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232509AbjEAI6e (ORCPT ); Mon, 1 May 2023 04:58:34 -0400 Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com [IPv6:2607:f8b0:4864:20::630]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0D19A1FCE; Mon, 1 May 2023 01:57:49 -0700 (PDT) Received: by mail-pl1-x630.google.com with SMTP id d9443c01a7336-1a5197f00e9so16648115ad.1; Mon, 01 May 2023 01:57:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682931468; x=1685523468; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=r3dy1haFlAAqCQoP2UnHM2pAiK+IIuuoaeXjq8DdAN0=; b=b/qDELH+rcsTGk5IyUNX8MkZhXOf1VVQJQAu18g1uxXx4ACah9SAxszG1kJWctbUPY nFarPc+Gek2Qmfe2neat1qxMoOl9rENn78YdEYi47FXyiFaiG3OMmfGfZdxKoXVo+4jB n+wCVrXRSt18HIgPRaEsRHPM05M5EgjaR/RGDxNTdGZV1ovbJ2PnNYUa6KnHSEBo4sof iZL0sl3nVsiGiQBtviRvmdqLk6aVsEl9+frHHbccFdDI6uvxw/qeyB+cqBSdZ6OX4RQk P5W5GMZgbIJksFvwySIln607kz3MI7okfZZs9ctSA9Qui06mbEi5t+Ba2Uam2wd8bedc t4FQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682931468; x=1685523468; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=r3dy1haFlAAqCQoP2UnHM2pAiK+IIuuoaeXjq8DdAN0=; b=WqlZoog25Ntwr80gx0RtfzZdQdHAG8QxbSZ0NJbGdZ19jeCc9fHspMdIHCGmwrYe6M rJ4IKUVR2msMPB8Nky6ArBHjP33oZtzdOgDZbd3f6ka+qnhIppP08WYxq9klqUvK82cI hp5mwrWaf87LIKPeLgKJTsS3J/4o+Gm9kNk1sg1w/hJGtG8rkwDXkIRDQEInyLswc192 NzcHJNZpRUC5+FTDnRXUB61O0NsgxTl82L9p39Si9oTa/SfyCsaFYUOYwXVKrlaH/3/k WNVBOlhGh3pcH0rIP1kwfgH1ZLKCt7Bb42r8UW14Q8Kdjfw3Z3YU9HnKb55Xp4RLuBsq CaEA== X-Gm-Message-State: AC+VfDwvs9dQOnFkTxqhelxDim6O7JyJpQf2phzKtOnMxYWfcPJgqQBt K0YWMvEZPb7bRydtKOWDq9o= X-Google-Smtp-Source: ACHHUZ76U2py0sT/wZhILDzyUv74ENxYeIUk2MMYX9A/+rQSlF3abQYpHX57JhIcF6QJ+DJSZN3MpQ== X-Received: by 2002:a17:903:1c3:b0:1a9:80a0:47dc with SMTP id e3-20020a17090301c300b001a980a047dcmr13511581plh.3.1682931468361; Mon, 01 May 2023 01:57:48 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:b:e11b:15ea:ad44:bde7]) by smtp.gmail.com with ESMTPSA id t13-20020a1709028c8d00b001a4fe00a8d4sm17407070plo.90.2023.05.01.01.57.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 May 2023 01:57:47 -0700 (PDT) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, 
kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: pangupta@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V5 13/15] x86/sev: Add AMD sev-snp enlightened guest support on hyperv Date: Mon, 1 May 2023 04:57:23 -0400 Message-Id: <20230501085726.544209-14-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230501085726.544209-1-ltykernel@gmail.com> References: <20230501085726.544209-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Enable #HV exception to handle interrupt requests from hypervisor. Co-developed-by: Lendacky Thomas Co-developed-by: Kalra Ashish Signed-off-by: Tianyu Lan --- Change since RFC V3: * Check NMI event when irq is disabled. * Remove redundant variable --- arch/x86/include/asm/mem_encrypt.h | 2 + arch/x86/include/uapi/asm/svm.h | 4 + arch/x86/kernel/sev.c | 314 ++++++++++++++++++++++++----- arch/x86/kernel/traps.c | 2 + 4 files changed, 266 insertions(+), 56 deletions(-) diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index b7126701574c..9299caeca69f 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -50,6 +50,7 @@ void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, void __init mem_encrypt_free_decrypted_mem(void); void __init sev_es_init_vc_handling(void); +void __init sev_snp_init_hv_handling(void); #define __bss_decrypted __section(".bss..decrypted") @@ -73,6 +74,7 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { } static inline void __init sme_enable(struct boot_params *bp) { } static inline void sev_es_init_vc_handling(void) { } +static inline void sev_snp_init_hv_handling(void) { } static inline int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; } diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h index 80e1df482337..828d624a38cf 100644 --- a/arch/x86/include/uapi/asm/svm.h +++ b/arch/x86/include/uapi/asm/svm.h @@ -115,6 +115,10 @@ #define SVM_VMGEXIT_AP_CREATE_ON_INIT 0 #define SVM_VMGEXIT_AP_CREATE 1 #define SVM_VMGEXIT_AP_DESTROY 2 +#define SVM_VMGEXIT_HV_DOORBELL_PAGE 0x80000014 +#define SVM_VMGEXIT_GET_PREFERRED_HV_DOORBELL_PAGE 0 +#define SVM_VMGEXIT_SET_HV_DOORBELL_PAGE 1 +#define SVM_VMGEXIT_QUERY_HV_DOORBELL_PAGE 2 #define SVM_VMGEXIT_HV_FEATURES 0x8000fffd #define SVM_VMGEXIT_TERM_REQUEST 0x8000fffe #define SVM_VMGEXIT_TERM_REASON(reason_set, reason_code) \ diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index e2bb19605a46..b6dafeb0edbe 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -124,6 +124,152 @@ struct sev_config { static struct sev_config sev_cfg __read_mostly; +static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state); +static noinstr void __sev_put_ghcb(struct ghcb_state *state); +static int vmgexit_hv_doorbell_page(struct ghcb *ghcb, u64 op, u64 pa); +static void 
sev_snp_setup_hv_doorbell_page(struct ghcb *ghcb); + +union hv_pending_events { + u16 events; + struct { + u8 vector; + u8 nmi : 1; + u8 mc : 1; + u8 reserved1 : 5; + u8 no_further_signal : 1; + }; +}; + +struct sev_hv_doorbell_page { + union hv_pending_events pending_events; + u8 no_eoi_required; + u8 reserved2[61]; + u8 padding[4032]; +}; + +struct sev_snp_runtime_data { + struct sev_hv_doorbell_page hv_doorbell_page; +}; + +static DEFINE_PER_CPU(struct sev_snp_runtime_data*, snp_runtime_data); + +static inline u64 sev_es_rd_ghcb_msr(void) +{ + return __rdmsr(MSR_AMD64_SEV_ES_GHCB); +} + +static __always_inline void sev_es_wr_ghcb_msr(u64 val) +{ + u32 low, high; + + low = (u32)(val); + high = (u32)(val >> 32); + + native_wrmsr(MSR_AMD64_SEV_ES_GHCB, low, high); +} + +struct sev_hv_doorbell_page *sev_snp_current_doorbell_page(void) +{ + return &this_cpu_read(snp_runtime_data)->hv_doorbell_page; +} + +static u8 sev_hv_pending(void) +{ + return sev_snp_current_doorbell_page()->pending_events.events; +} + +#define sev_hv_pending_nmi \ + sev_snp_current_doorbell_page()->pending_events.nmi + +static void hv_doorbell_apic_eoi_write(u32 reg, u32 val) +{ + if (xchg(&sev_snp_current_doorbell_page()->no_eoi_required, 0) & 0x1) + return; + + BUG_ON(reg != APIC_EOI); + apic->write(reg, val); +} + +static void do_exc_hv(struct pt_regs *regs) +{ + union hv_pending_events pending_events; + + while (sev_hv_pending()) { + pending_events.events = xchg( + &sev_snp_current_doorbell_page()->pending_events.events, + 0); + + if (pending_events.nmi) + exc_nmi(regs); + +#ifdef CONFIG_X86_MCE + if (pending_events.mc) + exc_machine_check(regs); +#endif + + if (!pending_events.vector) + return; + + if (pending_events.vector < FIRST_EXTERNAL_VECTOR) { + /* Exception vectors */ + WARN(1, "exception shouldn't happen\n"); + } else if (pending_events.vector == FIRST_EXTERNAL_VECTOR) { + sysvec_irq_move_cleanup(regs); + } else if (pending_events.vector == IA32_SYSCALL_VECTOR) { + WARN(1, "syscall shouldn't happen\n"); + } else if (pending_events.vector >= FIRST_SYSTEM_VECTOR) { + switch (pending_events.vector) { +#if IS_ENABLED(CONFIG_HYPERV) + case HYPERV_STIMER0_VECTOR: + sysvec_hyperv_stimer0(regs); + break; + case HYPERVISOR_CALLBACK_VECTOR: + sysvec_hyperv_callback(regs); + break; +#endif +#ifdef CONFIG_SMP + case RESCHEDULE_VECTOR: + sysvec_reschedule_ipi(regs); + break; + case IRQ_MOVE_CLEANUP_VECTOR: + sysvec_irq_move_cleanup(regs); + break; + case REBOOT_VECTOR: + sysvec_reboot(regs); + break; + case CALL_FUNCTION_SINGLE_VECTOR: + sysvec_call_function_single(regs); + break; + case CALL_FUNCTION_VECTOR: + sysvec_call_function(regs); + break; +#endif +#ifdef CONFIG_X86_LOCAL_APIC + case ERROR_APIC_VECTOR: + sysvec_error_interrupt(regs); + break; + case SPURIOUS_APIC_VECTOR: + sysvec_spurious_apic_interrupt(regs); + break; + case LOCAL_TIMER_VECTOR: + sysvec_apic_timer_interrupt(regs); + break; + case X86_PLATFORM_IPI_VECTOR: + sysvec_x86_platform_ipi(regs); + break; +#endif + case 0x0: + break; + default: + panic("Unexpected vector %d\n", pending_events.vector); + unreachable(); + } + } else { + common_interrupt(regs, pending_events.vector); + } + } +} + static __always_inline bool on_vc_stack(struct pt_regs *regs) { unsigned long sp = regs->sp; @@ -181,18 +327,19 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs) this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], new_ist); } -static void do_exc_hv(struct pt_regs *regs) -{ - /* Handle #HV exception.
*/ -} - void check_hv_pending(struct pt_regs *regs) { if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) return; - if ((regs->flags & X86_EFLAGS_IF) == 0) + /* Handle NMI when irq is disabled. */ + if ((regs->flags & X86_EFLAGS_IF) == 0) { + if (sev_hv_pending_nmi) { + exc_nmi(regs); + sev_hv_pending_nmi = 0; + } return; + } do_exc_hv(regs); } @@ -233,68 +380,35 @@ void noinstr __sev_es_ist_exit(void) this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], *(unsigned long *)ist); } -/* - * Nothing shall interrupt this code path while holding the per-CPU - * GHCB. The backup GHCB is only for NMIs interrupting this path. - * - * Callers must disable local interrupts around it. - */ -static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state) +static bool sev_restricted_injection_enabled(void) +{ + return sev_status & MSR_AMD64_SNP_RESTRICTED_INJ; +} + +void __init sev_snp_init_hv_handling(void) { struct sev_es_runtime_data *data; + struct ghcb_state state; struct ghcb *ghcb; + unsigned long flags; WARN_ON(!irqs_disabled()); + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP) || !sev_restricted_injection_enabled()) + return; data = this_cpu_read(runtime_data); - ghcb = &data->ghcb_page; - - if (unlikely(data->ghcb_active)) { - /* GHCB is already in use - save its contents */ - - if (unlikely(data->backup_ghcb_active)) { - /* - * Backup-GHCB is also already in use. There is no way - * to continue here so just kill the machine. To make - * panic() work, mark GHCBs inactive so that messages - * can be printed out. - */ - data->ghcb_active = false; - data->backup_ghcb_active = false; - - instrumentation_begin(); - panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use"); - instrumentation_end(); - } - - /* Mark backup_ghcb active before writing to it */ - data->backup_ghcb_active = true; - state->ghcb = &data->backup_ghcb; + local_irq_save(flags); - /* Backup GHCB content */ - *state->ghcb = *ghcb; - } else { - state->ghcb = NULL; - data->ghcb_active = true; - } + ghcb = __sev_get_ghcb(&state); - return ghcb; -} + sev_snp_setup_hv_doorbell_page(ghcb); -static inline u64 sev_es_rd_ghcb_msr(void) -{ - return __rdmsr(MSR_AMD64_SEV_ES_GHCB); -} - -static __always_inline void sev_es_wr_ghcb_msr(u64 val) -{ - u32 low, high; + __sev_put_ghcb(&state); - low = (u32)(val); - high = (u32)(val >> 32); + apic_set_eoi_write(hv_doorbell_apic_eoi_write); - native_wrmsr(MSR_AMD64_SEV_ES_GHCB, low, high); + local_irq_restore(flags); } static int vc_fetch_insn_kernel(struct es_em_ctxt *ctxt, @@ -555,6 +669,69 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt /* Include code shared with pre-decompression boot stage */ #include "sev-shared.c" +/* + * Nothing shall interrupt this code path while holding the per-CPU + * GHCB. The backup GHCB is only for NMIs interrupting this path. + * + * Callers must disable local interrupts around it. + */ +static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state) +{ + struct sev_es_runtime_data *data; + struct ghcb *ghcb; + + WARN_ON(!irqs_disabled()); + + data = this_cpu_read(runtime_data); + ghcb = &data->ghcb_page; + + if (unlikely(data->ghcb_active)) { + /* GHCB is already in use - save its contents */ + + if (unlikely(data->backup_ghcb_active)) { + /* + * Backup-GHCB is also already in use. There is no way + * to continue here so just kill the machine. To make + * panic() work, mark GHCBs inactive so that messages + * can be printed out. 
+ */ + data->ghcb_active = false; + data->backup_ghcb_active = false; + + instrumentation_begin(); + panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use"); + instrumentation_end(); + } + + /* Mark backup_ghcb active before writing to it */ + data->backup_ghcb_active = true; + + state->ghcb = &data->backup_ghcb; + + /* Backup GHCB content */ + *state->ghcb = *ghcb; + } else { + state->ghcb = NULL; + data->ghcb_active = true; + } + + return ghcb; +} + +static void sev_snp_setup_hv_doorbell_page(struct ghcb *ghcb) +{ + u64 pa; + enum es_result ret; + + pa = __pa(sev_snp_current_doorbell_page()); + vc_ghcb_invalidate(ghcb); + ret = vmgexit_hv_doorbell_page(ghcb, + SVM_VMGEXIT_SET_HV_DOORBELL_PAGE, + pa); + if (ret != ES_OK) + panic("SEV-SNP: failed to set up #HV doorbell page"); +} + static noinstr void __sev_put_ghcb(struct ghcb_state *state) { struct sev_es_runtime_data *data; @@ -1283,6 +1460,7 @@ static void snp_register_per_cpu_ghcb(void) ghcb = &data->ghcb_page; snp_register_ghcb_early(__pa(ghcb)); + sev_snp_setup_hv_doorbell_page(ghcb); } void setup_ghcb(void) @@ -1322,6 +1500,11 @@ void setup_ghcb(void) snp_register_ghcb_early(__pa(&boot_ghcb_page)); } +int vmgexit_hv_doorbell_page(struct ghcb *ghcb, u64 op, u64 pa) +{ + return sev_es_ghcb_hv_call(ghcb, NULL, SVM_VMGEXIT_HV_DOORBELL_PAGE, op, pa); +} + #ifdef CONFIG_HOTPLUG_CPU static void sev_es_ap_hlt_loop(void) { @@ -1395,6 +1578,7 @@ static void __init alloc_runtime_data(int cpu) static void __init init_ghcb(int cpu) { struct sev_es_runtime_data *data; + struct sev_snp_runtime_data *snp_data; int err; data = per_cpu(runtime_data, cpu); @@ -1406,6 +1590,19 @@ static void __init init_ghcb(int cpu) memset(&data->ghcb_page, 0, sizeof(data->ghcb_page)); + snp_data = memblock_alloc(sizeof(*snp_data), PAGE_SIZE); + if (!snp_data) + panic("Can't allocate SEV-SNP runtime data"); + + err = early_set_memory_decrypted((unsigned long)&snp_data->hv_doorbell_page, + sizeof(snp_data->hv_doorbell_page)); + if (err) + panic("Can't map #HV doorbell pages unencrypted"); + + memset(&snp_data->hv_doorbell_page, 0, sizeof(snp_data->hv_doorbell_page)); + + per_cpu(snp_runtime_data, cpu) = snp_data; + data->ghcb_active = false; data->backup_ghcb_active = false; } @@ -2046,7 +2243,12 @@ DEFINE_IDTENTRY_VC_USER(exc_vmm_communication) static bool hv_raw_handle_exception(struct pt_regs *regs) { - return false; + /* Clear the no_further_signal bit */ + sev_snp_current_doorbell_page()->pending_events.events &= 0x7fff; + + check_hv_pending(regs); + + return true; } static __always_inline bool on_hv_fallback_stack(struct pt_regs *regs) diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index d29debec8134..1aa6cab2394b 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -1503,5 +1503,7 @@ void __init trap_init(void) cpu_init_exception_handling(); /* Setup traps as cpu_init() might #GP */ idt_setup_traps(); + sev_snp_init_hv_handling(); + cpu_init(); } From patchwork Mon May 1 08:57:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13227421 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 14513C7EE22 for ; Mon, 1 May 2023 08:59:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232579AbjEAI72 
From: Tianyu Lan 
Subject: [RFC PATCH V5 14/15] x86/sev: optimize system vector
processing invoked from #HV exception Date: Mon, 1 May 2023 04:57:24 -0400 Message-Id: <20230501085726.544209-15-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230501085726.544209-1-ltykernel@gmail.com> References: <20230501085726.544209-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ashish Kalra Construct system vector table and dispatch system vector exceptions through sysvec_table from #HV exception handler instead of explicitly calling each system vector. The system vector table is created dynamically and is placed in a new named ELF section. Signed-off-by: Ashish Kalra --- arch/x86/entry/entry_64.S | 6 +++ arch/x86/kernel/sev.c | 70 +++++++++++++---------------------- arch/x86/kernel/vmlinux.lds.S | 7 ++++ 3 files changed, 38 insertions(+), 45 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 147b850babf6..f86b319d0a9e 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -419,6 +419,12 @@ SYM_CODE_START(\asmsym) _ASM_NOKPROBE(\asmsym) SYM_CODE_END(\asmsym) + .if \vector >= FIRST_SYSTEM_VECTOR && \vector < NR_VECTORS + .section .system_vectors, "aw" + .byte \vector + .quad \cfunc + .previous + .endif .endm /* diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index b6dafeb0edbe..b6becf158598 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -153,6 +153,16 @@ struct sev_snp_runtime_data { static DEFINE_PER_CPU(struct sev_snp_runtime_data*, snp_runtime_data); +static void (*sysvec_table[NR_VECTORS - FIRST_SYSTEM_VECTOR]) + (struct pt_regs *regs) __ro_after_init; + +struct sysvec_entry { + unsigned char vector; + void (*sysvec_func)(struct pt_regs *regs); +} __packed; + +extern struct sysvec_entry __system_vectors[], __system_vectors_end[]; + static inline u64 sev_es_rd_ghcb_msr(void) { return __rdmsr(MSR_AMD64_SEV_ES_GHCB); @@ -218,51 +228,11 @@ static void do_exc_hv(struct pt_regs *regs) } else if (pending_events.vector == IA32_SYSCALL_VECTOR) { WARN(1, "syscall shouldn't happen\n"); } else if (pending_events.vector >= FIRST_SYSTEM_VECTOR) { - switch (pending_events.vector) { -#if IS_ENABLED(CONFIG_HYPERV) - case HYPERV_STIMER0_VECTOR: - sysvec_hyperv_stimer0(regs); - break; - case HYPERVISOR_CALLBACK_VECTOR: - sysvec_hyperv_callback(regs); - break; -#endif -#ifdef CONFIG_SMP - case RESCHEDULE_VECTOR: - sysvec_reschedule_ipi(regs); - break; - case IRQ_MOVE_CLEANUP_VECTOR: - sysvec_irq_move_cleanup(regs); - break; - case REBOOT_VECTOR: - sysvec_reboot(regs); - break; - case CALL_FUNCTION_SINGLE_VECTOR: - sysvec_call_function_single(regs); - break; - case CALL_FUNCTION_VECTOR: - sysvec_call_function(regs); - break; -#endif -#ifdef CONFIG_X86_LOCAL_APIC - case ERROR_APIC_VECTOR: - sysvec_error_interrupt(regs); - break; - case SPURIOUS_APIC_VECTOR: - sysvec_spurious_apic_interrupt(regs); - break; - case LOCAL_TIMER_VECTOR: - sysvec_apic_timer_interrupt(regs); - break; - case X86_PLATFORM_IPI_VECTOR: - sysvec_x86_platform_ipi(regs); - break; -#endif - case 0x0: - break; - default: - panic("Unexpected vector %d\n", vector); - unreachable(); + if (!(sysvec_table[pending_events.vector - FIRST_SYSTEM_VECTOR])) { + WARN(1, "system vector entry 0x%x is NULL\n", + pending_events.vector); + } else { + (*sysvec_table[pending_events.vector - FIRST_SYSTEM_VECTOR])(regs); } } else { common_interrupt(regs, pending_events.vector); @@ -385,6 +355,14 @@ static bool sev_restricted_injection_enabled(void) return sev_status & 
MSR_AMD64_SNP_RESTRICTED_INJ; } +static void __init construct_sysvec_table(void) +{ + struct sysvec_entry *p; + + for (p = __system_vectors; p < __system_vectors_end; p++) + sysvec_table[p->vector - FIRST_SYSTEM_VECTOR] = p->sysvec_func; +} + void __init sev_snp_init_hv_handling(void) { struct sev_es_runtime_data *data; @@ -409,6 +387,8 @@ void __init sev_snp_init_hv_handling(void) apic_set_eoi_write(hv_doorbell_apic_eoi_write); local_irq_restore(flags); + + construct_sysvec_table(); } static int vc_fetch_insn_kernel(struct es_em_ctxt *ctxt, diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 25f155205770..c37165d8e877 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -338,6 +338,13 @@ SECTIONS *(.altinstr_replacement) } + . = ALIGN(8); + .system_vectors : AT(ADDR(.system_vectors) - LOAD_OFFSET) { + __system_vectors = .; + *(.system_vectors) + __system_vectors_end = .; + } + . = ALIGN(8); .apicdrivers : AT(ADDR(.apicdrivers) - LOAD_OFFSET) { __apicdrivers = .; From patchwork Mon May 1 08:57:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13227422 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4401AC77B61 for ; Mon, 1 May 2023 08:59:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232476AbjEAI7w (ORCPT ); Mon, 1 May 2023 04:59:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54408 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232070AbjEAI7L (ORCPT ); Mon, 1 May 2023 04:59:11 -0400 Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com [IPv6:2607:f8b0:4864:20::634]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E585410D8; Mon, 1 May 2023 01:57:55 -0700 (PDT) Received: by mail-pl1-x634.google.com with SMTP id d9443c01a7336-1aaebed5bd6so8315655ad.1; Mon, 01 May 2023 01:57:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1682931471; x=1685523471; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=85FbKOIGeJKFvrZkC+4B4Gx5bHvgxBGU0hkt5+QsJqk=; b=CYmVFe1ZSITuLQYsYizoY0KSUhYRoc3Md2T3+Zn3YfBMhN96TjlaesKr0AHYkxYL3v xf6KqM2vo4FHnlA9N+jobMd2j0rw9GmXaOyD0ksuq3xqSRXWNriRp1dywc02INgHtJOa 7ZQiLnu7ypHI1/DM3yog6Eq+UrjNGY43okh2hO6Y+VBY2FRliX/lYTQpHj9uYd/JjXh0 oTwSNOq7l/uXXvImVuBOg82rf4w0t0Wof18L+NUlLzWO5uEtCCvQPS7xzdWD9jG7qNqH nXXnq0yTNRHWetCQV7B+OgXJqrIfjSTNe2n4NCfq29mDprN2kOaC6zvS2JvyHgi1LXLT Xs9A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682931471; x=1685523471; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=85FbKOIGeJKFvrZkC+4B4Gx5bHvgxBGU0hkt5+QsJqk=; b=EHi6CD/KDOYL8kMfViFr+rsD1X/LaCYXjAeq6z/n3O5NrAKXvVLgxVRnwvdLnZfW78 h9LZsRYvH6QJwv4lncgjjTVmI/Tk3EJkMbK0v7dDz51kP1Rw0R2OSc6SenCF9KtBOeeK pT0tbAlu0mOUb9rc/jH925fW13B+x3Yj2vSXcLFHKCz+voT6Dmg6ul+72I4ZKghs4z8E 8vxL8zn8lUlPXqivMP4gmSbI7SGRSMhr8xyhPTCeOO47dC3Wgm+wWey5VQ6t/hBoP32a ZlH31tk01ofH5rXkxsvn5LPjvfCG0NskuAggnbs/XBeM1DT7f4fr1K2umixQU7oGaHb6 aAAw== X-Gm-Message-State: 
From: Tianyu Lan 
Subject: [RFC PATCH V5 15/15] x86/sev: Fix interrupt exit code paths from #HV exception Date: Mon, 1 May 2023 04:57:25 -0400 Message-Id: <20230501085726.544209-16-ltykernel@gmail.com> In-Reply-To: <20230501085726.544209-1-ltykernel@gmail.com> References: <20230501085726.544209-1-ltykernel@gmail.com> From: Ashish Kalra 

Add a check to the interrupt exit paths that return to user mode: if the CPU is currently executing the #HV handler, do not take the irqentry_exit_to_user_mode() path, because that path can preempt the #HV handler and reschedule it on another CPU. A #HV handler rescheduled on another CPU handles interrupts on a different CPU than the one they were injected on, causing invalid EOIs, missed or lost guest interrupts and the resulting hangs, and per-CPU IRQs being handled on an unintended CPU.

Signed-off-by: Ashish Kalra 
---
Change since RFC v3:
       * Add a check of hv_handling_events in do_exc_hv() to avoid nested entry.
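Condensed to its core, the change described above adds an exit helper along the following lines. This is only a preview of the diff below, which also reworks the DEFINE_IDTENTRY_IRQ/SYSVEC/SYSVEC_SIMPLE macros to end in this helper instead of irqentry_exit(); snp_runtime_data and the hv_handling_events flag are the per-CPU state introduced by this series and this patch respectively.

noinstr void irqentry_exit_hv_cond(struct pt_regs *regs, irqentry_state_t state)
{
	/*
	 * Returning to user mode while this CPU is still inside the #HV
	 * handler could let the scheduler migrate the handler to another
	 * CPU, breaking EOI targeting; in that case skip the user-mode
	 * exit work and return directly.
	 */
	if (user_mode(regs) &&
	    this_cpu_read(snp_runtime_data)->hv_handling_events)
		return;

	/* Otherwise follow the normal interrupt return/exit path. */
	irqentry_exit(regs, state);
}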
--- arch/x86/include/asm/idtentry.h | 66 +++++++++++++++++++++++++++++++++ arch/x86/kernel/sev.c | 37 +++++++++++++++++- 2 files changed, 102 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h index b0f3501b2767..415b7e14c227 100644 --- a/arch/x86/include/asm/idtentry.h +++ b/arch/x86/include/asm/idtentry.h @@ -13,6 +13,10 @@ #include +#ifdef CONFIG_AMD_MEM_ENCRYPT +noinstr void irqentry_exit_hv_cond(struct pt_regs *regs, irqentry_state_t state); +#endif + /** * DECLARE_IDTENTRY - Declare functions for simple IDT entry points * No error code pushed by hardware @@ -176,6 +180,7 @@ __visible noinstr void func(struct pt_regs *regs, unsigned long error_code) #define DECLARE_IDTENTRY_IRQ(vector, func) \ DECLARE_IDTENTRY_ERRORCODE(vector, func) +#ifndef CONFIG_AMD_MEM_ENCRYPT /** * DEFINE_IDTENTRY_IRQ - Emit code for device interrupt IDT entry points * @func: Function name of the entry point @@ -205,6 +210,26 @@ __visible noinstr void func(struct pt_regs *regs, \ } \ \ static noinline void __##func(struct pt_regs *regs, u32 vector) +#else + +#define DEFINE_IDTENTRY_IRQ(func) \ +static void __##func(struct pt_regs *regs, u32 vector); \ + \ +__visible noinstr void func(struct pt_regs *regs, \ + unsigned long error_code) \ +{ \ + irqentry_state_t state = irqentry_enter(regs); \ + u32 vector = (u32)(u8)error_code; \ + \ + instrumentation_begin(); \ + kvm_set_cpu_l1tf_flush_l1d(); \ + run_irq_on_irqstack_cond(__##func, regs, vector); \ + instrumentation_end(); \ + irqentry_exit_hv_cond(regs, state); \ +} \ + \ +static noinline void __##func(struct pt_regs *regs, u32 vector) +#endif /** * DECLARE_IDTENTRY_SYSVEC - Declare functions for system vector entry points @@ -221,6 +246,7 @@ static noinline void __##func(struct pt_regs *regs, u32 vector) #define DECLARE_IDTENTRY_SYSVEC(vector, func) \ DECLARE_IDTENTRY(vector, func) +#ifndef CONFIG_AMD_MEM_ENCRYPT /** * DEFINE_IDTENTRY_SYSVEC - Emit code for system vector IDT entry points * @func: Function name of the entry point @@ -245,6 +271,26 @@ __visible noinstr void func(struct pt_regs *regs) \ } \ \ static noinline void __##func(struct pt_regs *regs) +#else + +#define DEFINE_IDTENTRY_SYSVEC(func) \ +static void __##func(struct pt_regs *regs); \ + \ +__visible noinstr void func(struct pt_regs *regs) \ +{ \ + irqentry_state_t state = irqentry_enter(regs); \ + \ + instrumentation_begin(); \ + kvm_set_cpu_l1tf_flush_l1d(); \ + run_sysvec_on_irqstack_cond(__##func, regs); \ + instrumentation_end(); \ + irqentry_exit_hv_cond(regs, state); \ +} \ + \ +static noinline void __##func(struct pt_regs *regs) +#endif + +#ifndef CONFIG_AMD_MEM_ENCRYPT /** * DEFINE_IDTENTRY_SYSVEC_SIMPLE - Emit code for simple system vector IDT @@ -274,6 +320,26 @@ __visible noinstr void func(struct pt_regs *regs) \ } \ \ static __always_inline void __##func(struct pt_regs *regs) +#else + +#define DEFINE_IDTENTRY_SYSVEC_SIMPLE(func) \ +static __always_inline void __##func(struct pt_regs *regs); \ + \ +__visible noinstr void func(struct pt_regs *regs) \ +{ \ + irqentry_state_t state = irqentry_enter(regs); \ + \ + instrumentation_begin(); \ + __irq_enter_raw(); \ + kvm_set_cpu_l1tf_flush_l1d(); \ + __##func(regs); \ + __irq_exit_raw(); \ + instrumentation_end(); \ + irqentry_exit_hv_cond(regs, state); \ +} \ + \ +static __always_inline void __##func(struct pt_regs *regs) +#endif /** * DECLARE_IDTENTRY_XENCB - Declare functions for XEN HV callback entry point diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index 
b6becf158598..69b55075ddfe 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -149,6 +149,10 @@ struct sev_hv_doorbell_page { struct sev_snp_runtime_data { struct sev_hv_doorbell_page hv_doorbell_page; + /* + * Indication that we are currently handling #HV events. + */ + bool hv_handling_events; }; static DEFINE_PER_CPU(struct sev_snp_runtime_data*, snp_runtime_data); @@ -204,6 +208,12 @@ static void do_exc_hv(struct pt_regs *regs) { union hv_pending_events pending_events; + /* Avoid nested entry. */ + if (this_cpu_read(snp_runtime_data)->hv_handling_events) + return; + + this_cpu_read(snp_runtime_data)->hv_handling_events = true; + while (sev_hv_pending()) { pending_events.events = xchg( &sev_snp_current_doorbell_page()->pending_events.events, @@ -218,7 +228,7 @@ static void do_exc_hv(struct pt_regs *regs) #endif if (!pending_events.vector) - return; + goto out; if (pending_events.vector < FIRST_EXTERNAL_VECTOR) { /* Exception vectors */ @@ -238,6 +248,9 @@ static void do_exc_hv(struct pt_regs *regs) common_interrupt(regs, pending_events.vector); } } + +out: + this_cpu_read(snp_runtime_data)->hv_handling_events = false; } static __always_inline bool on_vc_stack(struct pt_regs *regs) @@ -2542,3 +2555,25 @@ static int __init snp_init_platform_device(void) return 0; } device_initcall(snp_init_platform_device); + +noinstr void irqentry_exit_hv_cond(struct pt_regs *regs, irqentry_state_t state) +{ + /* + * Check whether this returns to user mode, if so and if + * we are currently executing the #HV handler then we don't + * want to follow the irqentry_exit_to_user_mode path as + * that can potentially cause the #HV handler to be + * preempted and rescheduled on another CPU. Rescheduled #HV + * handler on another cpu will cause interrupts to be handled + * on a different cpu than the injected one, causing + * invalid EOIs and missed/lost guest interrupts and + * corresponding hangs and/or per-cpu IRQs handled on + * non-intended cpu. + */ + if (user_mode(regs) && + this_cpu_read(snp_runtime_data)->hv_handling_events) + return; + + /* follow normal interrupt return/exit path */ + irqentry_exit(regs, state); +}
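The named-ELF-section mechanism that patch 14/15 uses for sysvec_table (entries emitted into .system_vectors by the entry_64.S macro and walked by construct_sysvec_table()) can be illustrated outside the kernel with an ordinary C program. The sketch below is an analogue only, not kernel code: the "demo_vectors" section name, the demo handlers and the DEMO_VECTOR macro are made up for illustration, and the __start_/__stop_ boundary symbols are the ones GNU ld synthesizes automatically for section names that are valid C identifiers.

#include <stdio.h>

/*
 * User-space analogue of the .system_vectors table: each entry is
 * emitted into a named ELF section and the linker provides the table
 * boundaries. The layout is packed, as in the kernel's struct
 * sysvec_entry, so consecutive entries are contiguous.
 */
struct demo_entry {
	unsigned char vector;
	void (*func)(void);
} __attribute__((packed));

static void timer_handler(void)   { puts("timer tick"); }
static void resched_handler(void) { puts("reschedule"); }

/* Emit one table entry into the "demo_vectors" section. */
#define DEMO_VECTOR(vec, fn)						\
	static const struct demo_entry					\
	__attribute__((used, section("demo_vectors")))			\
	demo_entry_##fn = { .vector = (vec), .func = (fn) }

DEMO_VECTOR(0xec, timer_handler);
DEMO_VECTOR(0xfd, resched_handler);

/* GNU ld generates these for sections named like C identifiers. */
extern const struct demo_entry __start_demo_vectors[];
extern const struct demo_entry __stop_demo_vectors[];

int main(void)
{
	const struct demo_entry *p;

	/* Walk the linker-assembled table, analogous to construct_sysvec_table(). */
	for (p = __start_demo_vectors; p < __stop_demo_vectors; p++) {
		printf("vector 0x%02x: ", p->vector);
		p->func();
	}
	return 0;
}

Built with plain gcc, no linker script is needed, because GNU ld provides the boundary symbols; the kernel variant instead declares the section explicitly in vmlinux.lds.S, as the patch above shows.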