From patchwork Sun Jan 22 02:45:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111388 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D81D3C54EED for ; Sun, 22 Jan 2023 02:46:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229710AbjAVCqS (ORCPT ); Sat, 21 Jan 2023 21:46:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55264 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229636AbjAVCqP (ORCPT ); Sat, 21 Jan 2023 21:46:15 -0500 Received: from mail-pf1-x433.google.com (mail-pf1-x433.google.com [IPv6:2607:f8b0:4864:20::433]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BA0C01DBB6; Sat, 21 Jan 2023 18:46:13 -0800 (PST) Received: by mail-pf1-x433.google.com with SMTP id 207so6565101pfv.5; Sat, 21 Jan 2023 18:46:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=qnMOwh1aaG72cY9v6pTmJyXRUzMRdcZVVSSO8Ajg0yo=; b=FQ3sW+3iS7GYYQEkCf5a/r3NDEp3gPy7T8saFKhEb2tXnD+h994YomEHYGYXawDzrL nXJ9hicOApE0AJJppKLg0GuyC1ymhvtoQSPKQQyX82hpYYMqNGfq7mE4WEu69ShLJw2F 0hK8mhDtw10qMaHU+KYvdhUr2kcA4MSvZPll03ZpptoNGLbGy8lSZh0HwTSvPqJzj4dF +BgpWYihPncFDGhUeobzc3NQrBUp+q5beBHuRicbQlKXlk12qd2/WluLo8CNKNrZphH2 ZUXl19NMMc2hCnHTNN1lB2IHTg8usgpeka0yt788gpRgd9m+5Fec4RBXkr9kfFmu6kz1 2omQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=qnMOwh1aaG72cY9v6pTmJyXRUzMRdcZVVSSO8Ajg0yo=; b=zc2UlSCeYgklmeXkxJ7XhafP24PpvzwssxrRzawfFP1V4pIwDYOzTPR1bU95I9n7cm CggB3uE25MPdofBar64bitxKKf7MUnom/pfEDmj1Qw3wBLt/M/B5szLamgZvZxNC5Leu g9guoWmGTPXViAE37PcMVWUixPFwOZSXPGr+FcIFkFbNldp2FFcaGCpcALOA6yvI7QYt Nz3YM3W3jH8BBVk8wmiSn6lL2pnk4b1qWksDPil1x7SyLIVTj77urmtyqEHiCcVJJvyV EGuCxRLvtAjOGa/kd3HVk37TIeq8VUf7x6ykkfKGitU4DehFql2XK3Wj3mJ1tc6rxCEY FUdQ== X-Gm-Message-State: AFqh2kohZ93jhGI3Y+AtG1Ga3FCncZyYXqzcAtv7V41hFvSwj3QrWZAz d0I7V6VCNg3Ext5PQkBLCo0= X-Google-Smtp-Source: AMrXdXt5B+SQx0uDJ6a6sa7NLc7FmMWqUKla0LOC4gIATGo3yNnCZNwXzVdegW5IM9lXwjvzBBlFoA== X-Received: by 2002:a62:384e:0:b0:58b:c66e:1ca8 with SMTP id f75-20020a62384e000000b0058bc66e1ca8mr20006146pfa.11.1674355573052; Sat, 21 Jan 2023 18:46:13 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:12 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, 
daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 01/16] x86/hyperv: Add sev-snp enlightened guest specific config Date: Sat, 21 Jan 2023 21:45:51 -0500 Message-Id: <20230122024607.788454-2-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Introduce static key isolation_type_en_snp for enlightened guest check and add some specific options in ms_hyperv_init_ platform(). Signed-off-by: Tianyu Lan --- arch/x86/hyperv/ivm.c | 10 ++++++++++ arch/x86/include/asm/mshyperv.h | 3 +++ arch/x86/kernel/cpu/mshyperv.c | 16 +++++++++++++++- drivers/hv/hv_common.c | 6 ++++++ 4 files changed, 34 insertions(+), 1 deletion(-) diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c index abca9431d068..8c5dd8e4eb1e 100644 --- a/arch/x86/hyperv/ivm.c +++ b/arch/x86/hyperv/ivm.c @@ -386,6 +386,16 @@ bool hv_is_isolation_supported(void) } DEFINE_STATIC_KEY_FALSE(isolation_type_snp); +DEFINE_STATIC_KEY_FALSE(isolation_type_en_snp); + +/* + * hv_isolation_type_en_snp - Check system runs in the AMD SEV-SNP based + * isolation enlightened VM. + */ +bool hv_isolation_type_en_snp(void) +{ + return static_branch_unlikely(&isolation_type_en_snp); +} /* * hv_isolation_type_snp - Check system runs in the AMD SEV-SNP based diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 010768d40155..285df71150e4 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -14,6 +14,7 @@ union hv_ghcb; DECLARE_STATIC_KEY_FALSE(isolation_type_snp); +DECLARE_STATIC_KEY_FALSE(isolation_type_en_snp); typedef int (*hyperv_fill_flush_list_func)( struct hv_guest_mapping_flush_list *flush, @@ -28,6 +29,8 @@ extern void *hv_hypercall_pg; extern u64 hv_current_partition_id; +extern bool hv_isolation_type_en_snp(void); + extern union hv_ghcb * __percpu *hv_ghcb_pg; int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index 8f83ceec45dc..ace5901ba0fc 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -273,6 +273,18 @@ static void __init ms_hyperv_init_platform(void) hv_max_functions_eax = cpuid_eax(HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS); + /* + * Add custom configuration for SEV-SNP Enlightened guest + */ + if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) { + ms_hyperv.features |= HV_ACCESS_FREQUENCY_MSRS; + ms_hyperv.misc_features |= HV_FEATURE_FREQUENCY_MSRS_AVAILABLE; + ms_hyperv.misc_features &= ~HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE; + ms_hyperv.hints |= HV_DEPRECATING_AEOI_RECOMMENDED; + ms_hyperv.hints |= HV_X64_APIC_ACCESS_RECOMMENDED; + ms_hyperv.hints |= HV_X64_CLUSTER_IPI_RECOMMENDED; + } + pr_info("Hyper-V: privilege flags low 0x%x, high 0x%x, hints 0x%x, misc 0x%x\n", ms_hyperv.features, ms_hyperv.priv_high, ms_hyperv.hints, ms_hyperv.misc_features); @@ -331,7 +343,9 @@ static void __init ms_hyperv_init_platform(void) pr_info("Hyper-V: Isolation Config: Group A 0x%x, 
Group B 0x%x\n", ms_hyperv.isolation_config_a, ms_hyperv.isolation_config_b); - if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP) + if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) + static_branch_enable(&isolation_type_en_snp); + else if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP) static_branch_enable(&isolation_type_snp); } diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index 566735f35c28..f788c64de0bd 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -268,6 +268,12 @@ bool __weak hv_isolation_type_snp(void) } EXPORT_SYMBOL_GPL(hv_isolation_type_snp); +bool __weak hv_isolation_type_en_snp(void) +{ + return false; +} +EXPORT_SYMBOL_GPL(hv_isolation_type_en_snp); + void __weak hv_setup_vmbus_handler(void (*handler)(void)) { } From patchwork Sun Jan 22 02:45:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111389 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7E58C27C76 for ; Sun, 22 Jan 2023 02:46:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229752AbjAVCqU (ORCPT ); Sat, 21 Jan 2023 21:46:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55274 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229673AbjAVCqP (ORCPT ); Sat, 21 Jan 2023 21:46:15 -0500 Received: from mail-pf1-x42e.google.com (mail-pf1-x42e.google.com [IPv6:2607:f8b0:4864:20::42e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EA1591A4BA; Sat, 21 Jan 2023 18:46:14 -0800 (PST) Received: by mail-pf1-x42e.google.com with SMTP id c85so6559344pfc.8; Sat, 21 Jan 2023 18:46:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=DHCUEHp3mAbHZmlWLmQQYc0YJzSL3HZXdvwddGnkOCc=; b=ohzh5KtUBaY5tBTL+iy3zzV3Jw0Gc067IZMqxSpBcDwUUoamvPjeqJX7+DhPgS9gsr PxZqXNuOygoKSMFAJ7tXrYgBE/r8EUKAipdf7+sF8OmnfH5vKzuu2daJxduQKjjos3My a+fKiwyATUWJ/L5sd1SwuD5Jk6T3TYIM3juQXqEC043X9K/A7q0iahQOuFqbKYeA3mZH y8sgr7OK70Bdf7y8U17G8H5+ATQeegdhvJLjNpuKJz7y/Xlh79uiEjcZbVmeWtnFkJlG kFXWoFH76dpch3aSKiUvrE8bPgE18YgLRKdsq+YQo4Boda33rWYi59ou4gvp0MZOYABR +r5A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=DHCUEHp3mAbHZmlWLmQQYc0YJzSL3HZXdvwddGnkOCc=; b=CtjLkkuTFifA9KCBvYK0lI0yolzPssxiQLP2S5DQZBVYLyKAuOmc6OGzlD/Sg8box/ EsuPNbrEP6ruGnZ0RuI/mUwHd/Z2IFPo2HHOr34ec3KUNhWiCvwPgUg5PHgStWF9tqHK 20Pb5eLVGf54CD4bbmPxflWBpALMFEwWd+XA0nC/TJuCK3DgKLfGUDNKGOIe44iW3JYL YZCioNVg/fvtA/WRRViPkJPspp47swqGMyPBCOAioYCMleXrAOmA87wZCPvtXu64gp/O +EE1+JRT7HiJ0wT395Gdm8IymsR5xrnTHswILAZrVjwlyZyx80hRyxHSc9TMKMshJ/A0 OCHw== X-Gm-Message-State: AFqh2kolqCiKyj5xEX1bnMjBcZeSVis4UnutsIILw/dzzSSpyrwMaIRe RnjC/KlEsZHy3UC8gVvnB8M= X-Google-Smtp-Source: AMrXdXto4cGvkFjx1YrN8a/m/MPWHwwzaLBtA1y93TPlRa3ZQX+QImrNF8XYtHA+bXpHs8tzvME18g== X-Received: by 2002:a62:7b11:0:b0:586:e399:9cd4 with SMTP id w17-20020a627b11000000b00586e3999cd4mr40488569pfc.25.1674355574415; Sat, 21 Jan 2023 18:46:14 -0800 
(PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:14 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 02/16] x86/hyperv: Decrypt hv vp assist page in sev-snp enlightened guest Date: Sat, 21 Jan 2023 21:45:52 -0500 Message-Id: <20230122024607.788454-3-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan hv vp assist page is shared between sev snp guest and hyperv. Decrypt the page when use it. Signed-off-by: Tianyu Lan --- arch/x86/hyperv/hv_init.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c index a5f9474f08e1..24154c1ee12b 100644 --- a/arch/x86/hyperv/hv_init.c +++ b/arch/x86/hyperv/hv_init.c @@ -29,6 +29,7 @@ #include #include #include +#include int hyperv_init_cpuhp; u64 hv_current_partition_id = ~0ull; @@ -113,6 +114,11 @@ static int hv_cpu_init(unsigned int cpu) } if (!WARN_ON(!(*hvp))) { + if (hv_isolation_type_en_snp()) { + WARN_ON_ONCE(set_memory_decrypted((unsigned long)(*hvp), 1)); + memset(*hvp, 0, PAGE_SIZE); + } + msr.enable = 1; wrmsrl(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64); } From patchwork Sun Jan 22 02:45:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111390 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EADF6C54EB4 for ; Sun, 22 Jan 2023 02:46:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229788AbjAVCqW (ORCPT ); Sat, 21 Jan 2023 21:46:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55286 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229699AbjAVCqR (ORCPT ); Sat, 21 Jan 2023 21:46:17 -0500 Received: from mail-pg1-x536.google.com (mail-pg1-x536.google.com [IPv6:2607:f8b0:4864:20::536]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 55CC31A4BA; Sat, 21 Jan 2023 18:46:16 -0800 (PST) Received: by mail-pg1-x536.google.com with SMTP id f3so6762383pgc.2; Sat, 21 Jan 2023 18:46:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=4HMWUk1NYLJENsgefdfkWCLD2sRZMB9RYEswQeGCc7s=; b=h3JJuuB6rADx1MHxkLIycta4gXabf+UX6sWe2K8A1dlwPy+cF7v5WR4NnGdk7i/Ttr UIZcrUjH/tM+sn17iO95Enbrv2VNEMrPzsnD+xo3T27jccby4VRbflqQ2pM3of7Ex/lr +5PV8xsRIKaS2lV+UxXg7ZUcItqwbo1Q7sbl0i49WtOYE3I+6Z7TZapYD22eAubRaLfC 0hcqE4JjgkOnYXVYuNN5529Z5gpuk/FcIfc5bp08uTracsh+egQqLC/Q81yZYa0ZCLR+ O137kXex2d5/PIwq1mF+pFDV7Hz2m173sH0uhoEMp6390e/aY2YN+6TCAGxjqfWzfedQ I33A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=4HMWUk1NYLJENsgefdfkWCLD2sRZMB9RYEswQeGCc7s=; b=cgYio7AlXz2ubI24YBY+3fVNy2WN5hoxzlER9tHHYHqooJS+af8vFyRBJ5EWQxXI3q 4g839NII8b8u7ePud8WJG+WxRhN2c/bL4bkMCyMjBQJ03A1d5WkZFatExq9cGoq2ioF0 ES7L2tOrPp5tYVHfxsEy011je3R5Vi0SMlCfgkvCf9O/k5JtqIIR2jd9p2fYGhx4c7qf menS/5qyVx6TmtDLiQzneYzY1cPuDziQHo6IcaTUZYjR88EeyglGfANSZLw6hpHORKX3 jxNF72IxFSH5X0dctQf0IdzG5qq4lLTjQ3XTvJwTDIX5XrjICKTxY8W+5qmNLwfG3kxY hFhA== X-Gm-Message-State: AFqh2krviQ2VGRNP/qSL8TnjAsGclsXe7q3pYpCyfjWM1/9GFsU6Xd64 351PJshuTydtm+S9YgN78ic= X-Google-Smtp-Source: AMrXdXvEPu6iaeq12zTvr+TtjIn6yQRqAGvEfTzkVLLlba9Vf7le7b/JTJ1igbLBddp6yEV2lxFAGA== X-Received: by 2002:a05:6a00:2ba:b0:576:7fb9:85cc with SMTP id q26-20020a056a0002ba00b005767fb985ccmr18626990pfs.14.1674355575792; Sat, 21 Jan 2023 18:46:15 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:15 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 03/16] x86/hyperv: Set Virtual Trust Level in vmbus init message Date: Sat, 21 Jan 2023 21:45:53 -0500 Message-Id: <20230122024607.788454-4-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan sev-snp guest provides vtl(Virtual Trust Level) and get it from hyperv hvcall via HVCALL_GET_VP_REGISTERS. Set target vtl in the vmbus init message. 
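As a reading aid, a condensed sketch of the flow the hunks below add (names follow this patch; the per-CPU input page handling, IRQ save/restore and error reporting are trimmed, so treat it as an illustration rather than the patch itself):

static u8 get_vtl_sketch(void)
{
	u64 control = HV_HYPERCALL_REP_COMP_1 | HVCALL_GET_VP_REGISTERS;
	struct hv_get_vp_registers_input *input;
	struct hv_get_vp_registers_output *output;

	/* The per-CPU hypercall input page doubles as the output buffer. */
	input = *(struct hv_get_vp_registers_input **)this_cpu_ptr(hyperv_pcpu_input_arg);
	output = (struct hv_get_vp_registers_output *)input;

	memset(input, 0, sizeof(*input) + sizeof(input->element[0]));
	input->header.partitionid = HV_PARTITION_ID_SELF;
	input->header.vpindex = HV_VP_INDEX_SELF;
	input->element[0].name0 = HV_X64_REGISTER_VSM_VP_STATUS;

	if (hv_do_hypercall(control, input, output))
		return 0;	/* treat a failed hypercall as VTL0 */

	/* The VTL is in the low four bits of the VSM VP status register. */
	return output->as64.low & HV_X64_VTL_MASK;
}

The value cached in ms_hyperv.vtl is then copied into msg->msg_vtl when vmbus_negotiate_version() builds the initiate-contact message, as the connection.c hunk below shows.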
Signed-off-by: Tianyu Lan --- Change since RFC v2: * Rename get_current_vtl() to get_vtl() * Fix some coding style issues --- arch/x86/hyperv/hv_init.c | 37 ++++++++++++++++++++++++++++++ arch/x86/include/asm/hyperv-tlfs.h | 4 ++++ drivers/hv/connection.c | 1 + include/asm-generic/mshyperv.h | 2 ++ include/linux/hyperv.h | 4 ++-- 5 files changed, 46 insertions(+), 2 deletions(-) diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c index 24154c1ee12b..9e9757049915 100644 --- a/arch/x86/hyperv/hv_init.c +++ b/arch/x86/hyperv/hv_init.c @@ -384,6 +384,40 @@ static void __init hv_get_partition_id(void) local_irq_restore(flags); } +static u8 __init get_vtl(void) +{ + u64 control = HV_HYPERCALL_REP_COMP_1 | HVCALL_GET_VP_REGISTERS; + struct hv_get_vp_registers_input *input = NULL; + struct hv_get_vp_registers_output *output = NULL; + u64 vtl = 0; + int ret; + unsigned long flags; + + local_irq_save(flags); + input = *(struct hv_get_vp_registers_input **)this_cpu_ptr(hyperv_pcpu_input_arg); + output = (struct hv_get_vp_registers_output *)input; + if (!input || !output) { + local_irq_restore(flags); + goto done; + } + + memset(input, 0, sizeof(*input) + sizeof(input->element[0])); + input->header.partitionid = HV_PARTITION_ID_SELF; + input->header.vpindex = HV_VP_INDEX_SELF; + input->header.inputvtl = 0; + input->element[0].name0 = HV_X64_REGISTER_VSM_VP_STATUS; + + ret = hv_do_hypercall(control, input, output); + if (ret == 0) + vtl = output->as64.low & HV_X64_VTL_MASK; + else + pr_err("Hyper-V: failed to get VTL!"); + local_irq_restore(flags); + +done: + return vtl; +} + /* * This function is to be invoked early in the boot sequence after the * hypervisor has been detected. @@ -512,6 +546,9 @@ void __init hyperv_init(void) /* Query the VMs extended capability once, so that it can be cached. */ hv_query_ext_cap(0); + /* Find the VTL */ + ms_hyperv.vtl = get_vtl(); + return; clean_guest_os_id: diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h index db2202d985bd..6dcbb21aac2b 100644 --- a/arch/x86/include/asm/hyperv-tlfs.h +++ b/arch/x86/include/asm/hyperv-tlfs.h @@ -36,6 +36,10 @@ #define HYPERV_CPUID_MIN 0x40000005 #define HYPERV_CPUID_MAX 0x4000ffff +/* Support for HVCALL_GET_VP_REGISTERS hvcall */ +#define HV_X64_REGISTER_VSM_VP_STATUS 0x000D0003 +#define HV_X64_VTL_MASK GENMASK(3, 0) + /* * Group D Features. The bit assignments are custom to each architecture. * On x86/x64 these are HYPERV_CPUID_FEATURES.EDX bits. 
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c index f670cfd2e056..e4c39f4016ad 100644 --- a/drivers/hv/connection.c +++ b/drivers/hv/connection.c @@ -98,6 +98,7 @@ int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, u32 version) */ if (version >= VERSION_WIN10_V5) { msg->msg_sint = VMBUS_MESSAGE_SINT; + msg->msg_vtl = ms_hyperv.vtl; vmbus_connection.msg_conn_id = VMBUS_MESSAGE_CONNECTION_ID_4; } else { msg->interrupt_page = virt_to_phys(vmbus_connection.int_page); diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index f2c0856f1797..44e56777fea7 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -48,6 +48,7 @@ struct ms_hyperv_info { }; }; u64 shared_gpa_boundary; + u8 vtl; }; extern struct ms_hyperv_info ms_hyperv; @@ -57,6 +58,7 @@ extern void * __percpu *hyperv_pcpu_output_arg; extern u64 hv_do_hypercall(u64 control, void *inputaddr, void *outputaddr); extern u64 hv_do_fast_hypercall8(u16 control, u64 input8); extern bool hv_isolation_type_snp(void); +extern bool hv_isolation_type_en_snp(void); /* Helper functions that provide a consistent pattern for checking Hyper-V hypercall status. */ static inline int hv_result(u64 status) diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h index 85f7c5a63aa6..65121b21b0af 100644 --- a/include/linux/hyperv.h +++ b/include/linux/hyperv.h @@ -665,8 +665,8 @@ struct vmbus_channel_initiate_contact { u64 interrupt_page; struct { u8 msg_sint; - u8 padding1[3]; - u32 padding2; + u8 msg_vtl; + u8 reserved[6]; }; }; u64 monitor_page1; From patchwork Sun Jan 22 02:45:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111391 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C0126C61DA3 for ; Sun, 22 Jan 2023 02:46:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229763AbjAVCqY (ORCPT ); Sat, 21 Jan 2023 21:46:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55304 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229709AbjAVCqS (ORCPT ); Sat, 21 Jan 2023 21:46:18 -0500 Received: from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com [IPv6:2607:f8b0:4864:20::102c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0E89723315; Sat, 21 Jan 2023 18:46:18 -0800 (PST) Received: by mail-pj1-x102c.google.com with SMTP id z9-20020a17090a468900b00226b6e7aeeaso8351984pjf.1; Sat, 21 Jan 2023 18:46:18 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=alXsTvbdOwxDxeseaVKQDjisWg1IXQavEOrNy6v4vmU=; b=k4ENH/efCz3Ne0p2VirtRFonHgq2K0MSctPocvVnzX/NSSxvbf9jfoQjxfhHMaRRID cVmgD22bsvXDfNBqa8/mIlDfNJ3Nb69a55NMDcPxdWHvmM8dvzn0maxwaZzw2o0wUhRQ r0nakgzHDwa1xq+frtwz8eONcZpUhlCiX9adoGUNMQtb1iCLtEg6nzBGNoj9DRvWiKFh kp3yOF1zF+Nly9xjzAqJr9ikYIwAwMyjv4OFV3e+epiZ42O8094KihuzAvRhzmwuUfBT COcOmfuAe/MpOeykSWEEHH9EojcOQsOrz4A5iSoC1SzIuQRD98Al017agUohYguoQ1S5 WhmQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=alXsTvbdOwxDxeseaVKQDjisWg1IXQavEOrNy6v4vmU=; b=X4/AvSqdh9m4d8m489/4Uw72qXE0kKjJc8rzWLwpjbKLLvjU21oR2ZcvoMUL9aD7gB dITx0ILGmy7DT0duILGJAu8e8JXXx9ap85moJMfV8BV+6/xXtx6ApiTRrIquAcnGElSh YK67RJIpLMiGtODvldsQIj6r+2wqA8P+pegLdoQLIilPGj0jPkptrVd0zrcNggOp1uvP F714qc2YvyjI51Mx00+CHuatNJ1qUcKL8z7JvFb6xGuJk4e/xYCaAkmEc3PpD4ibt5uq WAI/xfaILw+B5EMMl+0EFGPe8NNt55v4cY2sJqKSnElhd2/bvl/Mygm7nRPAUt6sIpuJ 7EVQ== X-Gm-Message-State: AFqh2kpf580Ks2rrMuFyd2+/a8Sn7HA3aBoj+gSJFcmEbS+NQXxKSo3V BeCfLuoYJkQ2tuvQTxSiSds= X-Google-Smtp-Source: AMrXdXtIY2JWRyKTWL+sIuCzE0RKCewMFwaS++WO6DJXwgoz+cm+oPMwpeWaj2JWUbuhJ0eSOciuCQ== X-Received: by 2002:a05:6a21:3a96:b0:ad:e5e8:cfe8 with SMTP id zv22-20020a056a213a9600b000ade5e8cfe8mr23617337pzb.48.1674355577433; Sat, 21 Jan 2023 18:46:17 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:17 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 04/16] x86/hyperv: Use vmmcall to implement Hyper-V hypercall in sev-snp enlightened guest Date: Sat, 21 Jan 2023 21:45:54 -0500 Message-Id: <20230122024607.788454-5-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan In sev-snp enlightened guest, Hyper-V hypercall needs to use vmmcall to trigger vmexit and notify hypervisor to handle hypercall request. 
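A minimal sketch of the calling convention the SEV-SNP case switches to (register assignments mirror the existing hv_do_hypercall(): RCX = control, RDX = input GPA, R8 = output GPA, status returned in RAX); the hunks below keep the hypercall-page path for all other guests:

static inline u64 snp_hypercall_sketch(u64 control, u64 input_gpa, u64 output_gpa)
{
	u64 hv_status;

	/* VMMCALL exits straight to the hypervisor; no hypercall page is involved. */
	__asm__ __volatile__("mov %4, %%r8\n"
			     "vmmcall"
			     : "=a" (hv_status), ASM_CALL_CONSTRAINT,
			       "+c" (control), "+d" (input_gpa)
			     : "r" (output_gpa)
			     : "cc", "memory", "r8", "r9", "r10", "r11");

	return hv_status;
}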
Signed-off-by: Tianyu Lan --- Change since RFC V2: * Fix indentation style --- arch/x86/include/asm/mshyperv.h | 44 ++++++++++++++++++++++++--------- 1 file changed, 33 insertions(+), 11 deletions(-) diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 285df71150e4..1a4af0a4f29a 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -44,16 +44,25 @@ static inline u64 hv_do_hypercall(u64 control, void *input, void *output) u64 hv_status; #ifdef CONFIG_X86_64 - if (!hv_hypercall_pg) - return U64_MAX; + if (hv_isolation_type_en_snp()) { + __asm__ __volatile__("mov %4, %%r8\n" + "vmmcall" + : "=a" (hv_status), ASM_CALL_CONSTRAINT, + "+c" (control), "+d" (input_address) + : "r" (output_address) + : "cc", "memory", "r8", "r9", "r10", "r11"); + } else { + if (!hv_hypercall_pg) + return U64_MAX; - __asm__ __volatile__("mov %4, %%r8\n" - CALL_NOSPEC - : "=a" (hv_status), ASM_CALL_CONSTRAINT, - "+c" (control), "+d" (input_address) - : "r" (output_address), - THUNK_TARGET(hv_hypercall_pg) - : "cc", "memory", "r8", "r9", "r10", "r11"); + __asm__ __volatile__("mov %4, %%r8\n" + CALL_NOSPEC + : "=a" (hv_status), ASM_CALL_CONSTRAINT, + "+c" (control), "+d" (input_address) + : "r" (output_address), + THUNK_TARGET(hv_hypercall_pg) + : "cc", "memory", "r8", "r9", "r10", "r11"); + } #else u32 input_address_hi = upper_32_bits(input_address); u32 input_address_lo = lower_32_bits(input_address); @@ -81,7 +90,13 @@ static inline u64 hv_do_fast_hypercall8(u16 code, u64 input1) u64 hv_status, control = (u64)code | HV_HYPERCALL_FAST_BIT; #ifdef CONFIG_X86_64 - { + if (hv_isolation_type_en_snp()) { + __asm__ __volatile__( + "vmmcall" + : "=a" (hv_status), ASM_CALL_CONSTRAINT, + "+c" (control), "+d" (input1) + :: "cc", "r8", "r9", "r10", "r11"); + } else { __asm__ __volatile__(CALL_NOSPEC : "=a" (hv_status), ASM_CALL_CONSTRAINT, "+c" (control), "+d" (input1) @@ -112,7 +127,14 @@ static inline u64 hv_do_fast_hypercall16(u16 code, u64 input1, u64 input2) u64 hv_status, control = (u64)code | HV_HYPERCALL_FAST_BIT; #ifdef CONFIG_X86_64 - { + if (hv_isolation_type_en_snp()) { + __asm__ __volatile__("mov %4, %%r8\n" + "vmmcall" + : "=a" (hv_status), ASM_CALL_CONSTRAINT, + "+c" (control), "+d" (input1) + : "r" (input2) + : "cc", "r8", "r9", "r10", "r11"); + } else { __asm__ __volatile__("mov %4, %%r8\n" CALL_NOSPEC : "=a" (hv_status), ASM_CALL_CONSTRAINT, From patchwork Sun Jan 22 02:45:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111392 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 14297C27C76 for ; Sun, 22 Jan 2023 02:46:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229837AbjAVCq0 (ORCPT ); Sat, 21 Jan 2023 21:46:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55324 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229744AbjAVCqU (ORCPT ); Sat, 21 Jan 2023 21:46:20 -0500 Received: from mail-pg1-x52c.google.com (mail-pg1-x52c.google.com [IPv6:2607:f8b0:4864:20::52c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 512482364D; Sat, 21 Jan 2023 18:46:19 -0800 (PST) Received: by mail-pg1-x52c.google.com with SMTP id e10so6746609pgc.9; Sat, 21 Jan 2023 18:46:19 -0800 
(PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=d3G8RfDk2f7ftR+C1v/a0axqypneBk1zP9Qtg1LVHyA=; b=c/L7UsrH7nJF/Ia15j9vfArLwatEAS0cENKSOaCkL63FyPz2KLBgsgNBrJuF3kVv5e lX19cjuEssChOh2HtV1TWinHCu29U/m2mijoi/VnOf8xM0ekpN5rWjNCn5XssgcVGpC+ 4HkV0zdtKoCv1/bH2N8KTCCHxSKWRgFqY4kg0hZalR4AZRCfaVZQMV29nl77ltEEB3lM 81QhuHoWStkiljwhbUzgL2GQlMEO5DsAcce+B+BhUauBQ+BlgnMw9mGOXcjD8xDsF60R NI+HR41cpVEdZmuyHrvCBuu4VSx9HP0oBzRMH/ovwnuYwS7TF0AwJSDrZYqx7IA1U2sV lybQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=d3G8RfDk2f7ftR+C1v/a0axqypneBk1zP9Qtg1LVHyA=; b=EOAbiGfZVS/daQTqBkv7/3sVsUUAorXnu4x3GY3GlJIWwwE4T4mZX3msMwmLlnZJKb IxbyLdMjRV1La1/IOkiq5vRdsyp6e6D6f7u7usyqBKV9FqHZZMTZiqEB7yo4mYXrqgg4 uZcoOht0lAV1HfcLmN+s+x4+e/Zk23I0c32IrjtI9viWPolJMGIr2cmWoufHwIVeniXi 6djNBeJfZT8LcZRxidRvayDzjf9UHsFsh0LOO96Y/ay8RGD4cJ7pk4BFE+c+xlWQNtJB KjSSWtViOfq/Neu+2iDEH5V5ojYN+DicrwDeshMqF8JdEc9MYCM3dKqp/XCnGtSLh/mq MgFg== X-Gm-Message-State: AFqh2komJDNMe5w0jwcKtwiaG+JLof8mDbFMkFQTCiG57G1Dop4V7psJ Z2own4OqZBJG+olb0WiqV7Q= X-Google-Smtp-Source: AMrXdXu574YjcUE6GVcktAFiPejKqe/c1ubdJquLqtdv6Em10rYn9/lUR8RXw/zYyUH3sKzENwk92Q== X-Received: by 2002:a62:1488:0:b0:586:b33c:be2 with SMTP id 130-20020a621488000000b00586b33c0be2mr22080547pfu.26.1674355578733; Sat, 21 Jan 2023 18:46:18 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:18 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 05/16] clocksource/drivers/hyper-v: decrypt hyperv tsc page in sev-snp enlightened guest Date: Sat, 21 Jan 2023 21:45:55 -0500 Message-Id: <20230122024607.788454-6-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Hyper-V tsc page is shared with hypervisor and it should be decrypted in sev-snp enlightened guest when it's used. 
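For context: __bss_decrypted (the x86 attribute from <asm/mem_encrypt.h>) places a static object in the .bss..decrypted section, which early boot code maps with the encryption bit clear, so no runtime set_memory_decrypted() call is needed. A minimal sketch of its use, mirroring the one-line hunk below:

/* Statically allocated page that the hypervisor must be able to read and write. */
static union {
	struct ms_hyperv_tsc_page page;
	u8 reserved[PAGE_SIZE];
} tsc_pg_sketch __bss_decrypted __aligned(PAGE_SIZE);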
Signed-off-by: Tianyu Lan --- Change since RFC V2: * Change the Subject line prefix --- drivers/clocksource/hyperv_timer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c index c0cef92b12b8..44da16ca203c 100644 --- a/drivers/clocksource/hyperv_timer.c +++ b/drivers/clocksource/hyperv_timer.c @@ -365,7 +365,7 @@ EXPORT_SYMBOL_GPL(hv_stimer_global_cleanup); static union { struct ms_hyperv_tsc_page page; u8 reserved[PAGE_SIZE]; -} tsc_pg __aligned(PAGE_SIZE); +} tsc_pg __bss_decrypted __aligned(PAGE_SIZE); static struct ms_hyperv_tsc_page *tsc_page = &tsc_pg.page; static unsigned long tsc_pfn; From patchwork Sun Jan 22 02:45:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111393 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2440C27C76 for ; Sun, 22 Jan 2023 02:46:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229794AbjAVCq2 (ORCPT ); Sat, 21 Jan 2023 21:46:28 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229776AbjAVCqV (ORCPT ); Sat, 21 Jan 2023 21:46:21 -0500 Received: from mail-pf1-x432.google.com (mail-pf1-x432.google.com [IPv6:2607:f8b0:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B7DC8241C5; Sat, 21 Jan 2023 18:46:20 -0800 (PST) Received: by mail-pf1-x432.google.com with SMTP id 20so6544918pfu.13; Sat, 21 Jan 2023 18:46:20 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ypt8rOlpcZ57Jgj8tjQGhQ8hwNv0NioxEBqSRz1HFqM=; b=D+cfF0+q0ri2SJ63SThdYX9OYn9naocQhZotGE3aFblCXcxPKG6rdt+MWm4RfQtDTN BnjVFkRrD8OOHbYI5SQgtEYgqCVSfoyfZuI12ofbmulZ753D6dx05uvRJc02KMICe6yB 7tjsD1HkoRDwvg2JKlt8FZz7MMpySG2KmtYz9pZO70OlOq6Vbhr+Sjw8lxPq2e6ygIFO x/1BYjtx2ASEEu1341hfQTG1EOkNOThpI+RXCgdovQhTSF1MyAwgzgNYeI9nysaDsySE PYYJc0mvdF4PWYGIrw5DQyr1ryOkQWn38IhSiC0STq/v7QfapptwrGsZXo2QWwV/8PJG /Nnw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ypt8rOlpcZ57Jgj8tjQGhQ8hwNv0NioxEBqSRz1HFqM=; b=bnUomJ/1u477QjxdeXd9lnmW35ggyDN2YktA9qVtS/nKBm4sR0P5jt6qorBNbzybtT b8sejJzS7KNb+TUXoAEwYxFFWc1fADTZA+Awbrb27nWm7kWcY2G3aZAf57iGIBjV/uVq M6WudQHmoqZYMejjQgkSjcNEpnCdua6GSwa40aGVLD5MgiQ47okjVAMbxsnYx/qsckim pv1Tb80BFyNi8zwp9+/+qXmftvjHIE4NOF0WrgUlX4kSOsj1lXetXFaZJRKPYhnLyU5J cdwBF0jPBphibXJ0KN3GapxE3m8F5lcM1cPFwc9x638TIT+o86TYNmcaIhhuAvnJA7// cC2w== X-Gm-Message-State: AFqh2kqfLG/TagM07Z3KT4vW7/sDymETEpV6jqZVWFb3gk4o27Tv9Kan 7yARnFf71o5AyO3zaw6cwOg= X-Google-Smtp-Source: AMrXdXuZ/N8ZLDYZyqEoDk0CIevFERWDutOFClkUDxLpyb18/F0PowncaQPo+LPD1tiyrNCTEDgt4g== X-Received: by 2002:a05:6a00:4519:b0:58d:f047:53b7 with SMTP id cw25-20020a056a00451900b0058df04753b7mr14257689pfb.3.1674355580161; Sat, 21 Jan 2023 18:46:20 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com 
([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:19 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 06/16] x86/hyperv: decrypt vmbus pages for sev-snp enlightened guest Date: Sat, 21 Jan 2023 21:45:56 -0500 Message-Id: <20230122024607.788454-7-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Vmbus post msg, synic event and message pages are shared with hypervisor and so decrypt these pages in the sev-snp guest. Signed-off-by: Tianyu Lan --- Change since RFC V2: * Fix error in the error code path and encrypt pages correctly when decryption failure happens. 
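The change note above refers to the unwind ordering in hv_synic_alloc(). As an illustration (not the patch itself), the decrypt-with-rollback idiom used below looks like this, so that a page is never freed while still mapped shared with the hypervisor:

static int share_pages_sketch(void *msg_page, void *event_page)
{
	int ret;

	ret = set_memory_decrypted((unsigned long)msg_page, 1);
	if (ret)
		return ret;

	ret = set_memory_decrypted((unsigned long)event_page, 1);
	if (ret)
		goto re_encrypt_msg;

	/* Only scrub the pages once both are successfully shared. */
	memset(msg_page, 0, PAGE_SIZE);
	memset(event_page, 0, PAGE_SIZE);
	return 0;

re_encrypt_msg:
	/* Roll back pages already decrypted before reporting the error. */
	set_memory_encrypted((unsigned long)msg_page, 1);
	return ret;
}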
--- drivers/hv/hv.c | 33 ++++++++++++++++++++++++++++++++- 1 file changed, 32 insertions(+), 1 deletion(-) diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c index 410e6c4e80d3..52edc54c8172 100644 --- a/drivers/hv/hv.c +++ b/drivers/hv/hv.c @@ -20,6 +20,7 @@ #include #include #include +#include #include "hyperv_vmbus.h" /* The one and only */ @@ -117,7 +118,7 @@ int hv_post_message(union hv_connection_id connection_id, int hv_synic_alloc(void) { - int cpu; + int cpu, ret; struct hv_per_cpu_context *hv_cpu; /* @@ -168,9 +169,39 @@ int hv_synic_alloc(void) pr_err("Unable to allocate post msg page\n"); goto err; } + + if (hv_isolation_type_en_snp()) { + ret = set_memory_decrypted((unsigned long) + hv_cpu->synic_message_page, 1); + if (ret) + goto err; + + ret = set_memory_decrypted((unsigned long) + hv_cpu->synic_event_page, 1); + if (ret) + goto err_decrypt_event_page; + + ret = set_memory_decrypted((unsigned long) + hv_cpu->post_msg_page, 1); + if (ret) + goto err_decrypt_msg_page; + + memset(hv_cpu->synic_message_page, 0, PAGE_SIZE); + memset(hv_cpu->synic_event_page, 0, PAGE_SIZE); + memset(hv_cpu->post_msg_page, 0, PAGE_SIZE); + } } return 0; + +err_decrypt_msg_page: + set_memory_encrypted((unsigned long) + hv_cpu->synic_event_page, 1); + +err_decrypt_event_page: + set_memory_encrypted((unsigned long) + hv_cpu->synic_message_page, 1); + err: /* * Any memory allocations that succeeded will be freed when From patchwork Sun Jan 22 02:45:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111394 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B60B3C27C76 for ; Sun, 22 Jan 2023 02:46:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229852AbjAVCqg (ORCPT ); Sat, 21 Jan 2023 21:46:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55702 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229864AbjAVCq0 (ORCPT ); Sat, 21 Jan 2023 21:46:26 -0500 Received: from mail-pf1-x42c.google.com (mail-pf1-x42c.google.com [IPv6:2607:f8b0:4864:20::42c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 71FF8241F4; Sat, 21 Jan 2023 18:46:22 -0800 (PST) Received: by mail-pf1-x42c.google.com with SMTP id i65so6604920pfc.0; Sat, 21 Jan 2023 18:46:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=MSuNKkMXWHSa3J60m4rY4F79SY/10La9kbhfpfz9TzE=; b=VkVetbyCcGyNR+peKjvZmq1rh+GYQboKSEMpmfmYtRDt/XA1pMxwyIH1YOWmzj56ew DxryIxyvCP56FAfTMIkq47XAFpHM6ZkikVrfys80CqKN2kK5c6kFDpmEc5Z3ojxLe+Gx vbqH68TcpXINebpFsBdBNH4GVh/CFA798XBLWBmYsS/NAxT00p/TJIEXtJM5PKX6MoZX hd3eeoPaN1FxN76h3I/QsAfLcMPxOr6L7N5bqoOpd60MvIjWvwmpexN3qFCeXdej+A1H Sa8H2CQZt2cgQf2eFW5AkEGck0xbvc3rBMKKmqRb4mmY2NO0tJ0YZXqZlwacOJsQ2b9K RLMg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=MSuNKkMXWHSa3J60m4rY4F79SY/10La9kbhfpfz9TzE=; b=PegM51jYgPsvcmcByz58py8QyRD4vTae3b9BXGy5u27oyF8MUt2jUXISaIDCken1J+ 
nxdt2Va3gQ23tHY17xjImzq3FW4v7Uc06C6Y+o6SCnM75ZEAFGk3GsDP0DHsro23Qf1b 8P//sHRKHrdW4bpUl9PjYbx4zqLA/uMG2cIh8YgPi3I9NmZPMgyZOPYnmX8ZWUtbuWgG smQvsGq50RJdGeSU55sQLutLufg3B8NGkeCpWNe4XP0MtxQZYggb5VOG1GdgE8Vf+N1B hQwhvDHVFcfYJSi69OKy4HnEkTK7OwaZJBNUle38I1np1w70x/YZOMCq4hjqSvQUKTdF e03w== X-Gm-Message-State: AFqh2kpgpljVrP/Tyxd86h88yw7k/1Lvu8srrInQh9LVmuqPE9G1ScXM vOxO2svo3vJpy6aGq+m4LDc= X-Google-Smtp-Source: AMrXdXvE9Bcml9Mo/WbAKLd9q68UcwTXJOz+R5uCgaC7ad7djllW2ZKCNYXMprZSEPVIjI1oAT1q0w== X-Received: by 2002:a62:79d2:0:b0:582:b089:d9be with SMTP id u201-20020a6279d2000000b00582b089d9bemr21737321pfc.13.1674355581499; Sat, 21 Jan 2023 18:46:21 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:21 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 07/16] drivers: hv: Decrypt percpu hvcall input arg page in sev-snp enlightened guest Date: Sat, 21 Jan 2023 21:45:57 -0500 Message-Id: <20230122024607.788454-8-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Hypervisor needs to access iput arg page and guest should decrypt the page. Signed-off-by: Tianyu Lan --- Change since RFC V2: * Set inputarg to be zero after kfree() * Not free mem when fail to encrypt mem in the hv_common_cpu_die(). --- drivers/hv/hv_common.c | 20 +++++++++++++++++++- 1 file changed, 19 insertions(+), 1 deletion(-) diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index f788c64de0bd..205b6380d794 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include @@ -125,6 +126,7 @@ int hv_common_cpu_init(unsigned int cpu) u64 msr_vp_index; gfp_t flags; int pgcount = hv_root_partition ? 2 : 1; + int ret; /* hv_cpu_init() can be called with IRQs disabled from hv_resume() */ flags = irqs_disabled() ? 
GFP_ATOMIC : GFP_KERNEL; @@ -134,6 +136,17 @@ int hv_common_cpu_init(unsigned int cpu) if (!(*inputarg)) return -ENOMEM; + if (hv_isolation_type_en_snp()) { + ret = set_memory_decrypted((unsigned long)*inputarg, pgcount); + if (ret) { + kfree(*inputarg); + *inputarg = NULL; + return ret; + } + + memset(*inputarg, 0x00, PAGE_SIZE); + } + if (hv_root_partition) { outputarg = (void **)this_cpu_ptr(hyperv_pcpu_output_arg); *outputarg = (char *)(*inputarg) + HV_HYP_PAGE_SIZE; @@ -168,7 +181,12 @@ int hv_common_cpu_die(unsigned int cpu) local_irq_restore(flags); - kfree(mem); + if (hv_isolation_type_en_snp()) { + if (!set_memory_encrypted((unsigned long)mem, 1)) + kfree(mem); + } else { + kfree(mem); + } return 0; } From patchwork Sun Jan 22 02:45:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111395 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A1E4C61D97 for ; Sun, 22 Jan 2023 02:46:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229934AbjAVCqj (ORCPT ); Sat, 21 Jan 2023 21:46:39 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56240 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229850AbjAVCqe (ORCPT ); Sat, 21 Jan 2023 21:46:34 -0500 Received: from mail-pg1-x535.google.com (mail-pg1-x535.google.com [IPv6:2607:f8b0:4864:20::535]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7EBE2244A9; Sat, 21 Jan 2023 18:46:23 -0800 (PST) Received: by mail-pg1-x535.google.com with SMTP id g68so6744374pgc.11; Sat, 21 Jan 2023 18:46:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=rnYtQQ1Y/HFKXDggPejFZRMsOajwCKlrShZxOBZUhc0=; b=ezxHRcyG1q3A3Vkb0DVGE2OVKiUCYY2LT6X7ODIti2UAcIkELTKv/p89t8n2Ey5afB o/g5npJBVqmKcrXeTOsg5fCM/MUKnmoTrYMfmH0M1W+enV9nRiE8NgShSPJfcULJWUws jGtZdrqVlbVfGu9RfaC3KhofEZQX/Gl1BdcEic95k1iCnNzK48favo5b5Zu1Y/lNOdiJ a2se4JewEAYWTMNggyOgVj29O4KBZYeOk1yOPbeQFNzqySaaNvZe10AOeZlpzNr61clJ 50ql37m4X9ZRvlCalFT3akvGAC1cKvTMS+2jqsGJ94HjGuKSYxzfJwOg1C+cEhIt3/uO rOqQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=rnYtQQ1Y/HFKXDggPejFZRMsOajwCKlrShZxOBZUhc0=; b=HoL6YdyQCskwnvXJBDyf5vXvKAbRGaaaC/1q7xLmFiaqfyVwbl1yCZWKev1OxTTK/X nudZXHfu/NUhOUTfJqUD58LbG2bH7Hky3yvikEwa0uVkE6e/fEnHBnq2VOc/GOzkVprR pctc8h73eh8WElIMhxtgR2A2PfRhBEfTXtGHAjS39rbUFDk76ItnumfLWxL0vP6AN5kQ 2yosDhhMtNG+YI7qlIG2zwXWqWHYNsq2gjnrApJ0ZJEwCFY+Tor/IhRyMaHLZDhoutKN ytnr8/iUtrmP060nGca2ioZrtaiCZF6EQblWs0chmdETd2Uqz1JmJoSxtAlny1jK9Vzd 8y0Q== X-Gm-Message-State: AFqh2kqn0IS2FwdRQwg+rZtLZW1qXqn0yJMXXyiKKBHUtGp87oNFZCq+ idyRAIUB1ndFrFoMgS8c1Yo= X-Google-Smtp-Source: AMrXdXucsaaYtNVA/g7jMccZBqdEqIy30skzYtj5NDJQwyiIMH+H6yV4LgagcWsf8el85V8qlL0rBw== X-Received: by 2002:a05:6a00:278d:b0:56b:f51d:820a with SMTP id bd13-20020a056a00278d00b0056bf51d820amr20193929pfb.7.1674355582863; Sat, 21 Jan 2023 18:46:22 -0800 (PST) Received: from 
ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:22 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 08/16] x86/hyperv: Initialize cpu and memory for sev-snp enlightened guest Date: Sat, 21 Jan 2023 21:45:58 -0500 Message-Id: <20230122024607.788454-9-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Read processor amd memory info from specific address which are populated by Hyper-V. Initialize smp cpu related ops, pvalidate system memory and add it into e820 table. Signed-off-by: Tianyu Lan --- arch/x86/kernel/cpu/mshyperv.c | 85 ++++++++++++++++++++++++++++++++++ 1 file changed, 85 insertions(+) diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index ace5901ba0fc..b1871a7bb4c9 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -32,6 +32,12 @@ #include #include #include +#include +#include +#include +#include +#include +#include /* Is Linux running as the root partition? 
*/ bool hv_root_partition; @@ -251,6 +257,30 @@ static void __init hv_smp_prepare_cpus(unsigned int max_cpus) } #endif +static u32 processor_count; + +static __init void hv_snp_get_smp_config(unsigned int early) +{ + if (!early) { + while (num_processors < processor_count) { + early_per_cpu(x86_cpu_to_apicid, num_processors) = num_processors; + early_per_cpu(x86_bios_cpu_apicid, num_processors) = num_processors; + physid_set(num_processors, phys_cpu_present_map); + set_cpu_possible(num_processors, true); + set_cpu_present(num_processors, true); + num_processors++; + } + } +} + +struct memory_map_entry { + u64 starting_gpn; + u64 numpages; + u16 type; + u16 flags; + u32 reserved; +}; + static void __init ms_hyperv_init_platform(void) { int hv_max_functions_eax; @@ -258,6 +288,11 @@ static void __init ms_hyperv_init_platform(void) int hv_host_info_ebx; int hv_host_info_ecx; int hv_host_info_edx; + struct memory_map_entry *entry; + struct e820_entry *e820_entry; + u64 e820_end; + u64 ram_end; + u64 page; #ifdef CONFIG_PARAVIRT pv_info.name = "Hyper-V"; @@ -466,6 +501,56 @@ static void __init ms_hyperv_init_platform(void) if (!(ms_hyperv.features & HV_ACCESS_TSC_INVARIANT)) mark_tsc_unstable("running on Hyper-V"); + if (isolation_type_en_snp()) { + /* + * Hyper-V enlightened snp guest boots kernel + * directly without bootloader and so roms, + * bios regions and reserve resources are not + * available. Set these callback to NULL. + */ + x86_platform.legacy.reserve_bios_regions = x86_init_noop; + x86_init.resources.probe_roms = x86_init_noop; + x86_init.resources.reserve_resources = x86_init_noop; + x86_init.mpparse.find_smp_config = x86_init_noop; + x86_init.mpparse.get_smp_config = hv_snp_get_smp_config; + + /* + * Hyper-V SEV-SNP enlightened guest doesn't support ioapic + * and legacy APIC page read/write. Switch to hv apic here. + */ + disable_ioapic_support(); + + /* Read processor number and memory layout. */ + processor_count = *(u32 *)__va(EN_SEV_SNP_PROCESSOR_INFO_ADDR); + entry = (struct memory_map_entry *)(__va(EN_SEV_SNP_PROCESSOR_INFO_ADDR) + + sizeof(struct memory_map_entry)); + + /* + * E820 table in the memory just describes memory for + * kernel, ACPI table, cmdline, boot params and ramdisk. + * Hyper-V popoulates the rest memory layout in the EN_SEV_ + * SNP_PROCESSOR_INFO_ADDR. 
+ */ + for (; entry->numpages != 0; entry++) { + e820_entry = &e820_table->entries[ + e820_table->nr_entries - 1]; + e820_end = e820_entry->addr + e820_entry->size; + ram_end = (entry->starting_gpn + + entry->numpages) * PAGE_SIZE; + + if (e820_end < entry->starting_gpn * PAGE_SIZE) + e820_end = entry->starting_gpn * PAGE_SIZE; + + if (e820_end < ram_end) { + pr_info("Hyper-V: add e820 entry [mem %#018Lx-%#018Lx]\n", e820_end, ram_end - 1); + e820__range_add(e820_end, ram_end - e820_end, + E820_TYPE_RAM); + for (page = e820_end; page < ram_end; page += PAGE_SIZE) + pvalidate((unsigned long)__va(page), RMP_PG_SIZE_4K, true); + } + } + } + hardlockup_detector_disable(); } From patchwork Sun Jan 22 02:45:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111396 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6B468C54EB4 for ; Sun, 22 Jan 2023 02:46:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229776AbjAVCqm (ORCPT ); Sat, 21 Jan 2023 21:46:42 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56174 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229943AbjAVCqf (ORCPT ); Sat, 21 Jan 2023 21:46:35 -0500 Received: from mail-pf1-x432.google.com (mail-pf1-x432.google.com [IPv6:2607:f8b0:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8D6EF241E7; Sat, 21 Jan 2023 18:46:24 -0800 (PST) Received: by mail-pf1-x432.google.com with SMTP id 20so6544963pfu.13; Sat, 21 Jan 2023 18:46:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=sbUgAHJXFKj0/h7fquHwD/wdAiG1+wRW3iFZ6bjAfVo=; b=Q5VAPOY3SQnuTZrIyw1xSxs2JgzIw3shIjHoYRpMCnUfvjjTwvP/BKGuunjS6NY5jZ Ci625w3rBRnZBK3U1sdaW/YREjkKN3Ttk71wsnDi4yol/yJxjH2Dl07DqKhv6Vr5xVYr 6C+t/c+J7llg7IJiHrgcvCPcLqS1kkwC67Vrlg5GtCz8lX0c66rf059xdbhUC++FaW8M Zx6ngt4276VOoSJKNApMQG5XS4d1K6Upay7sQjORao/2JjvzfCewswdUGXYY0B9pUvVE rveMugJSP3awwnmZs9qvqIL6ZqB9zsDrea5uxgZ62Yh3Y+jR8ZugwILWTnhZY4rnFc8/ mHrQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=sbUgAHJXFKj0/h7fquHwD/wdAiG1+wRW3iFZ6bjAfVo=; b=O9TD7ucEVddgJ10UR6hdMXmN/8shior1Wwfr7Tv1coNwPwrz2F5OFbICcBIUFc2ink q4WBm7M/a9PNCK6Jx11+xVhmhU4ut7cQhLf5lXUbwOXTa+99/3PLMR6aYVbkF3OYuH3B Xecsyfvi5B9m2ssjSi+mPaRkdARJDWoiNPWY/5J+gY4uVLeSIrZkeuYG1qZP+qlyhrpP g4gCPM06mMVJcxLWDjEMGAlqnzZesoEvRg+JWJNEMBKhOICO/e0sG4JXy1Y9spDIuGyf qrWury3tkGYnDiglHi6CFidXoqCVqOOdZMsUvJc6hETEUuFYzPQiZopyEwfjzWolKPP4 DrBw== X-Gm-Message-State: AFqh2kq8XCSLfKUk51+LjKXoKZWYoB1Knnj4pJbAK3sk0eVgZBWZA/zY YvZUNBTBZHO8Es8IscnUiX4= X-Google-Smtp-Source: AMrXdXvUbjpvISFSYNBx+covuQ35XEsn7ZCKNFYhnCygXEhRGI2/2LiBOSghuNPgGh0dH0T+H9NE/A== X-Received: by 2002:a62:150f:0:b0:58d:b5bd:e4fe with SMTP id 15-20020a62150f000000b0058db5bde4femr22013000pfv.13.1674355584229; Sat, 21 Jan 2023 18:46:24 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com 
([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:23 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 09/16] x86/hyperv: SEV-SNP enlightened guest don't support legacy rtc Date: Sat, 21 Jan 2023 21:45:59 -0500 Message-Id: <20230122024607.788454-10-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan SEV-SNP enlightened guest doesn't support legacy rtc. Set legacy.rtc, x86_platform.set_wallclock and get_wallclock to 0 or noop(). Make get/set_rtc_noop() to be public and reuse them in the ms_hyperv_init_platform(). Signed-off-by: Tianyu Lan --- arch/x86/include/asm/mshyperv.h | 7 ++++++- arch/x86/include/asm/x86_init.h | 2 ++ arch/x86/kernel/cpu/mshyperv.c | 3 +++ arch/x86/kernel/x86_init.c | 4 ++-- 4 files changed, 13 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 1a4af0a4f29a..7266d71d30d6 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -33,6 +33,12 @@ extern bool hv_isolation_type_en_snp(void); extern union hv_ghcb * __percpu *hv_ghcb_pg; +/* + * Hyper-V puts processor and memory layout info + * to this address in SEV-SNP enlightened guest. 
+ */ +#define EN_SEV_SNP_PROCESSOR_INFO_ADDR 0x802000 + int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); @@ -267,7 +273,6 @@ static inline void hv_set_register(unsigned int reg, u64 value) { } static inline u64 hv_get_register(unsigned int reg) { return 0; } #endif /* CONFIG_HYPERV */ - #include #endif diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h index c1c8c581759d..d8fb3a1639e9 100644 --- a/arch/x86/include/asm/x86_init.h +++ b/arch/x86/include/asm/x86_init.h @@ -326,5 +326,7 @@ extern void x86_init_uint_noop(unsigned int unused); extern bool bool_x86_init_noop(void); extern void x86_op_int_noop(int cpu); extern bool x86_pnpbios_disabled(void); +extern int set_rtc_noop(const struct timespec64 *now); +extern void get_rtc_noop(struct timespec64 *now); #endif diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index b1871a7bb4c9..197c8f2ec4eb 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -508,6 +508,9 @@ static void __init ms_hyperv_init_platform(void) * bios regions and reserve resources are not * available. Set these callback to NULL. */ + x86_platform.legacy.rtc = 0; + x86_platform.set_wallclock = set_rtc_noop; + x86_platform.get_wallclock = get_rtc_noop; x86_platform.legacy.reserve_bios_regions = x86_init_noop; x86_init.resources.probe_roms = x86_init_noop; x86_init.resources.reserve_resources = x86_init_noop; diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c index ef80d361b463..d93aeffec19b 100644 --- a/arch/x86/kernel/x86_init.c +++ b/arch/x86/kernel/x86_init.c @@ -33,8 +33,8 @@ static int __init iommu_init_noop(void) { return 0; } static void iommu_shutdown_noop(void) { } bool __init bool_x86_init_noop(void) { return false; } void x86_op_int_noop(int cpu) { } -static __init int set_rtc_noop(const struct timespec64 *now) { return -EINVAL; } -static __init void get_rtc_noop(struct timespec64 *now) { } +int set_rtc_noop(const struct timespec64 *now) { return -EINVAL; } +void get_rtc_noop(struct timespec64 *now) { } static __initconst const struct of_device_id of_cmos_match[] = { { .compatible = "motorola,mc146818" }, From patchwork Sun Jan 22 02:46:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111397 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18EC8C54EAA for ; Sun, 22 Jan 2023 02:47:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230073AbjAVCrK (ORCPT ); Sat, 21 Jan 2023 21:47:10 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56240 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229981AbjAVCqi (ORCPT ); Sat, 21 Jan 2023 21:46:38 -0500 Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com [IPv6:2607:f8b0:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7BE9A244B2; Sat, 21 Jan 2023 18:46:26 -0800 (PST) Received: by mail-pf1-x436.google.com with SMTP id g205so6572333pfb.6; Sat, 21 Jan 2023 18:46:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=jKsYBhUiGWrJMP3oFh0QsZHqdKYC98gA0Kk1SEN2638=; b=JnyU3r5P59FIt0CzVytNJkRp0yv4FvnJi+puDlVTnHlbTW30gMAqMTGfyk5dh1/FDL HXC796YPVPqCuL9Fxr2/Aa0WC9N2c+Ob5Hzm9net0UO8zx+nP3ERYRkqHUsraCzaD7nT nqMbrkX+tqCm4ULlvjkl29za3bTS1MycHhhBRgnZq+HvcklygSZ1wCxSIOI5ZnW2bWxa wDEGtx3LjNIznCCVLGmov0EK/1VN47CrYqJCZx79A43w67k5wwAm2GE8sI25BMAlK2iu 95eS/DrrxBFJA9iwmHXck0J+kFxZ021jXHW1tnTFJ5kMZPXHyyWpiaP5v3MjUJv7AYFT LZ/A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=jKsYBhUiGWrJMP3oFh0QsZHqdKYC98gA0Kk1SEN2638=; b=wIwU0uWmMpNXxR1IUDXY++t6tpLZcqALiAFbJ5u7hJowqqm2g5O6KoLKG1fI1eqsQq mp1bD9aeGrBmaibwYeUN0r0quWZ/2zY/B3N2Z7P9U7jrNG1oN3GPZwtE4PDdRxcXbIE9 JZ+kUSYwVCxWf/+7aYHHwbhCu05iQCsaoq4QC/ILiMCxukiY13W7npla92TOBXhYnTnJ 5EB7cIcsFzxlmjfmaG4u+m6c1oRgFmQcLhbShNPjZQQfnA9VXQtS4Di19dXnEE3cWgr+ WcbJxOj3SbWRaEtwmjvc3EYexvZVKPYicPT7LAVBF6zpehpS3UEtGjceEsx7S+r1dAY7 93zA== X-Gm-Message-State: AFqh2kqGxWr+/LZ2WREoI9Vm/KmZG+jOZHVltpKJhileNA2VCTUaUjbj VimEyQfCHgekJ82kDS5eBr8= X-Google-Smtp-Source: AMrXdXvLmUN6lR8nFN7+4RUYdAxz/QQvP+Aa6TuqEw12tMBhTByU1S1Dr4pnYS1XLpoXoav9wkKJOw== X-Received: by 2002:a62:e80e:0:b0:576:1c37:5720 with SMTP id c14-20020a62e80e000000b005761c375720mr22999361pfi.4.1674355585672; Sat, 21 Jan 2023 18:46:25 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:25 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 10/16] x86/hyperv: Add smp support for sev-snp guest Date: Sat, 21 Jan 2023 21:46:00 -0500 Message-Id: <20230122024607.788454-11-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan The wakeup_secondary_cpu callback was populated with wakeup_ cpu_via_vmgexit() which doesn't work for Hyper-V. Override it with Hyper-V specific hook which uses HVCALL_START_VIRTUAL_ PROCESSOR hvcall to start AP with vmsa data structure. 
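For reference, a minimal user-space sketch of the hypercall input this hook builds (illustrative only, not the kernel code): it mirrors the hv_start_virtual_processor_input layout added to hyperv-tlfs.h later in this patch and shows how the AP's VMSA GPA is packed into the first context quadword with the low bit set, as hv_snp_boot_ap() does in the diff below. The VP index and GPA here are made-up example values.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Mirrors struct hv_start_virtual_processor_input from this series. */
struct start_vp_input {
	uint64_t partition_id;
	uint32_t vp_index;
	uint8_t  target_vtl;
	uint8_t  padding[3];
	uint8_t  context[0xe0];
} __attribute__((packed));

int main(void)
{
	struct start_vp_input in;
	uint64_t vmsa_gpa = 0x12345000;	/* example page-aligned GPA */
	uint64_t ctx0 = vmsa_gpa | 1;	/* low bit set, as in hv_snp_boot_ap() */

	memset(&in, 0, sizeof(in));
	in.partition_id = (uint64_t)-1;	/* self partition, as in the patch */
	in.vp_index = 1;		/* the AP being started */
	in.target_vtl = 0;		/* the patch uses ms_hyperv.vtl here */

	memcpy(in.context, &ctx0, sizeof(ctx0));

	printf("input is %zu bytes, context[0..7] = %#llx\n",
	       sizeof(in), (unsigned long long)ctx0);
	return 0;
}

The real path additionally marks the VMSA page with RMPADJUST (target VMPL 1, vmsa bit set) before issuing HVCALL_START_VIRTUAL_PROCESSOR, and retries the hypercall on HV_STATUS_TIME_OUT, as shown in the diff that follows.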
Signed-off-by: Tianyu Lan --- Change since RFC v2: * Add helper function to initialize segment * Fix some coding style --- arch/x86/include/asm/mshyperv.h | 2 + arch/x86/include/asm/sev.h | 13 ++++ arch/x86/include/asm/svm.h | 47 +++++++++++++ arch/x86/kernel/cpu/mshyperv.c | 112 ++++++++++++++++++++++++++++-- include/asm-generic/hyperv-tlfs.h | 19 +++++ 5 files changed, 189 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 7266d71d30d6..c69051eec0e1 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -203,6 +203,8 @@ struct irq_domain *hv_create_pci_msi_domain(void); int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector, struct hv_interrupt_entry *entry); int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry); +int hv_set_mem_host_visibility(unsigned long addr, int numpages, bool visible); +int hv_snp_boot_ap(int cpu, unsigned long start_ip); #ifdef CONFIG_AMD_MEM_ENCRYPT void hv_ghcb_msr_write(u64 msr, u64 value); diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h index ebc271bb6d8e..e34aaf730220 100644 --- a/arch/x86/include/asm/sev.h +++ b/arch/x86/include/asm/sev.h @@ -86,6 +86,19 @@ extern bool handle_vc_boot_ghcb(struct pt_regs *regs); #define RMPADJUST_VMSA_PAGE_BIT BIT(16) +union sev_rmp_adjust { + u64 as_uint64; + struct { + unsigned long target_vmpl : 8; + unsigned long enable_read : 1; + unsigned long enable_write : 1; + unsigned long enable_user_execute : 1; + unsigned long enable_kernel_execute : 1; + unsigned long reserved1 : 4; + unsigned long vmsa : 1; + }; +}; + /* SNP Guest message request */ struct snp_req_data { unsigned long req_gpa; diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h index cb1ee53ad3b1..f8b321a11ee4 100644 --- a/arch/x86/include/asm/svm.h +++ b/arch/x86/include/asm/svm.h @@ -336,6 +336,53 @@ struct vmcb_save_area { u64 last_excp_to; u8 reserved_0x298[72]; u32 spec_ctrl; /* Guest version of SPEC_CTRL at 0x2E0 */ + u8 reserved_7b[4]; + u32 pkru; + u8 reserved_7a[20]; + u64 reserved_8; /* rax already available at 0x01f8 */ + u64 rcx; + u64 rdx; + u64 rbx; + u64 reserved_9; /* rsp already available at 0x01d8 */ + u64 rbp; + u64 rsi; + u64 rdi; + u64 r8; + u64 r9; + u64 r10; + u64 r11; + u64 r12; + u64 r13; + u64 r14; + u64 r15; + u8 reserved_10[16]; + u64 sw_exit_code; + u64 sw_exit_info_1; + u64 sw_exit_info_2; + u64 sw_scratch; + union { + u64 sev_features; + struct { + u64 sev_feature_snp : 1; + u64 sev_feature_vtom : 1; + u64 sev_feature_reflectvc : 1; + u64 sev_feature_restrict_injection : 1; + u64 sev_feature_alternate_injection : 1; + u64 sev_feature_full_debug : 1; + u64 sev_feature_reserved1 : 1; + u64 sev_feature_snpbtb_isolation : 1; + u64 sev_feature_resrved2 : 56; + }; + }; + u64 vintr_ctrl; + u64 guest_error_code; + u64 virtual_tom; + u64 tlb_id; + u64 pcpu_id; + u64 event_inject; + u64 xcr0; + u8 valid_bitmap[16]; + u64 x87_state_gpa; } __packed; /* Save area definition for SEV-ES and SEV-SNP guests */ diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index 197c8f2ec4eb..9d547751a1a7 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -39,6 +39,13 @@ #include #include +/* + * DEFAULT INIT GPAT and SEGMENT LIMIT value in struct VMSA + * to start AP in enlightened SEV guest. 
+ */ +#define HV_AP_INIT_GPAT_DEFAULT 0x0007040600070406ULL +#define HV_AP_SEGMENT_LIMIT 0xffffffff + /* Is Linux running as the root partition? */ bool hv_root_partition; struct ms_hyperv_info ms_hyperv; @@ -230,6 +237,94 @@ static void __init hv_smp_prepare_boot_cpu(void) #endif } +static u8 ap_start_input_arg[PAGE_SIZE] __bss_decrypted __aligned(PAGE_SIZE); +static u8 ap_start_stack[PAGE_SIZE] __aligned(PAGE_SIZE); + +#define hv_populate_vmcb_seg(seg, gdtr_base) \ +do { \ + if (seg.selector) { \ + seg.base = 0; \ + seg.limit = HV_AP_SEGMENT_LIMIT; \ + seg.attrib = *(u16 *)(gdtr_base + seg.selector + 5); \ + seg.attrib = (seg.attrib & 0xFF) | ((seg.attrib >> 4) & 0xF00); \ + } \ +} while (0) \ + +int hv_snp_boot_ap(int cpu, unsigned long start_ip) +{ + struct vmcb_save_area *vmsa = (struct vmcb_save_area *) + __get_free_page(GFP_KERNEL | __GFP_ZERO); + struct desc_ptr gdtr; + u64 ret, retry = 5; + struct hv_start_virtual_processor_input *start_vp_input; + union sev_rmp_adjust rmp_adjust; + unsigned long flags; + + native_store_gdt(&gdtr); + + vmsa->gdtr.base = gdtr.address; + vmsa->gdtr.limit = gdtr.size; + + asm volatile("movl %%es, %%eax;" : "=a" (vmsa->es.selector)); + hv_populate_vmcb_seg(vmsa->es, vmsa->gdtr.base); + + asm volatile("movl %%cs, %%eax;" : "=a" (vmsa->cs.selector)); + hv_populate_vmcb_seg(vmsa->cs, vmsa->gdtr.base); + + asm volatile("movl %%ss, %%eax;" : "=a" (vmsa->ss.selector)); + hv_populate_vmcb_seg(vmsa->ss, vmsa->gdtr.base); + + asm volatile("movl %%ds, %%eax;" : "=a" (vmsa->ds.selector)); + hv_populate_vmcb_seg(vmsa->ds, vmsa->gdtr.base); + + vmsa->efer = native_read_msr(MSR_EFER); + + asm volatile("movq %%cr4, %%rax;" : "=a" (vmsa->cr4)); + asm volatile("movq %%cr3, %%rax;" : "=a" (vmsa->cr3)); + asm volatile("movq %%cr0, %%rax;" : "=a" (vmsa->cr0)); + + vmsa->xcr0 = 1; + vmsa->g_pat = HV_AP_INIT_GPAT_DEFAULT; + vmsa->rip = (u64)secondary_startup_64_no_verify; + vmsa->rsp = (u64)&ap_start_stack[PAGE_SIZE]; + + vmsa->sev_feature_snp = 1; + vmsa->sev_feature_restrict_injection = 1; + + rmp_adjust.as_uint64 = 0; + rmp_adjust.target_vmpl = 1; + rmp_adjust.vmsa = 1; + ret = rmpadjust((unsigned long)vmsa, RMP_PG_SIZE_4K, + rmp_adjust.as_uint64); + if (ret != 0) { + pr_err("RMPADJUST(%llx) failed: %llx\n", (u64)vmsa, ret); + return ret; + } + + local_irq_save(flags); + start_vp_input = + (struct hv_start_virtual_processor_input *)ap_start_input_arg; + memset(start_vp_input, 0, sizeof(*start_vp_input)); + start_vp_input->partitionid = -1; + start_vp_input->vpindex = cpu; + start_vp_input->targetvtl = ms_hyperv.vtl; + *(u64 *)&start_vp_input->context[0] = __pa(vmsa) | 1; + + do { + ret = hv_do_hypercall(HVCALL_START_VIRTUAL_PROCESSOR, + start_vp_input, NULL); + } while (hv_result(ret) == HV_STATUS_TIME_OUT && retry--); + + if (!hv_result_success(ret)) { + pr_err("HvCallStartVirtualProcessor failed: %llx\n", ret); + goto done; + } + +done: + local_irq_restore(flags); + return ret; +} + static void __init hv_smp_prepare_cpus(unsigned int max_cpus) { #ifdef CONFIG_X86_64 @@ -239,6 +334,16 @@ static void __init hv_smp_prepare_cpus(unsigned int max_cpus) native_smp_prepare_cpus(max_cpus); + /* + * Override wakeup_secondary_cpu callback for SEV-SNP + * enlightened guest. 
+ */ + if (hv_isolation_type_en_snp()) + apic->wakeup_secondary_cpu = hv_snp_boot_ap; + + if (!hv_root_partition) + return; + #ifdef CONFIG_X86_64 for_each_present_cpu(i) { if (i == 0) @@ -475,8 +580,7 @@ static void __init ms_hyperv_init_platform(void) # ifdef CONFIG_SMP smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu; - if (hv_root_partition) - smp_ops.smp_prepare_cpus = hv_smp_prepare_cpus; + smp_ops.smp_prepare_cpus = hv_smp_prepare_cpus; # endif /* @@ -501,7 +605,7 @@ static void __init ms_hyperv_init_platform(void) if (!(ms_hyperv.features & HV_ACCESS_TSC_INVARIANT)) mark_tsc_unstable("running on Hyper-V"); - if (isolation_type_en_snp()) { + if (hv_isolation_type_en_snp()) { /* * Hyper-V enlightened snp guest boots kernel * directly without bootloader and so roms, @@ -511,7 +615,7 @@ static void __init ms_hyperv_init_platform(void) x86_platform.legacy.rtc = 0; x86_platform.set_wallclock = set_rtc_noop; x86_platform.get_wallclock = get_rtc_noop; - x86_platform.legacy.reserve_bios_regions = x86_init_noop; + x86_platform.legacy.reserve_bios_regions = 0; x86_init.resources.probe_roms = x86_init_noop; x86_init.resources.reserve_resources = x86_init_noop; x86_init.mpparse.find_smp_config = x86_init_noop; diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h index c1cc3ec36ad5..3d7c67be9f56 100644 --- a/include/asm-generic/hyperv-tlfs.h +++ b/include/asm-generic/hyperv-tlfs.h @@ -148,6 +148,7 @@ union hv_reference_tsc_msr { #define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST 0x0003 #define HVCALL_NOTIFY_LONG_SPIN_WAIT 0x0008 #define HVCALL_SEND_IPI 0x000b +#define HVCALL_ENABLE_VP_VTL 0x000f #define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX 0x0013 #define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX 0x0014 #define HVCALL_SEND_IPI_EX 0x0015 @@ -165,6 +166,7 @@ union hv_reference_tsc_msr { #define HVCALL_MAP_DEVICE_INTERRUPT 0x007c #define HVCALL_UNMAP_DEVICE_INTERRUPT 0x007d #define HVCALL_RETARGET_INTERRUPT 0x007e +#define HVCALL_START_VIRTUAL_PROCESSOR 0x0099 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0 #define HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY 0x00db @@ -219,6 +221,7 @@ enum HV_GENERIC_SET_FORMAT { #define HV_STATUS_INVALID_PORT_ID 17 #define HV_STATUS_INVALID_CONNECTION_ID 18 #define HV_STATUS_INSUFFICIENT_BUFFERS 19 +#define HV_STATUS_TIME_OUT 0x78 /* * The Hyper-V TimeRefCount register and the TSC @@ -778,6 +781,22 @@ struct hv_input_unmap_device_interrupt { struct hv_interrupt_entry interrupt_entry; } __packed; +struct hv_enable_vp_vtl_input { + u64 partitionid; + u32 vpindex; + u8 targetvtl; + u8 padding[3]; + u8 context[0xe0]; +} __packed; + +struct hv_start_virtual_processor_input { + u64 partitionid; + u32 vpindex; + u8 targetvtl; + u8 padding[3]; + u8 context[0xe0]; +} __packed; + #define HV_SOURCE_SHADOW_NONE 0x0 #define HV_SOURCE_SHADOW_BRIDGE_BUS_RANGE 0x1 From patchwork Sun Jan 22 02:46:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111398 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 361E3C25B50 for ; Sun, 22 Jan 2023 02:47:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230113AbjAVCrR (ORCPT ); Sat, 21 Jan 2023 21:47:17 -0500 Received: from 
lindbergh.monkeyblade.net ([23.128.96.19]:56200 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229938AbjAVCqn (ORCPT ); Sat, 21 Jan 2023 21:46:43 -0500 Received: from mail-pf1-x435.google.com (mail-pf1-x435.google.com [IPv6:2607:f8b0:4864:20::435]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E414226865; Sat, 21 Jan 2023 18:46:27 -0800 (PST) Received: by mail-pf1-x435.google.com with SMTP id s3so6543468pfd.12; Sat, 21 Jan 2023 18:46:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=2L02B2A7h+GqVFqZYCULI5VizKi71tjYmMAIa783a5M=; b=GzAqv8fsChpPgHdpFS4rwUbNgCm/BnJAkCSh2+9Po0eYk5or0GObAloMD6l1wKgEZQ YFYeZMVxaavRp5loi8tZESQKY5KO99y79Yw0xeJnyZYJ78dhcp6gOO4yIrGqZnl6gHIw ZILDnGLmjCr2KyChGNbqos8Vxz/8qNaHOsQwiNSjgVL54l5dMht6UyQNu5IV4oA+QBt4 vbAJt1pcmGY2s5PtsNgNeZ4IeP3DcdSK3MIAcIN7hCeNkuWdq3/bgPKa8rqDPQMNotu2 wlXCgQ0AMI6cFBeTu2/Z1jbiVcAa8VPF/4HW8eLuLPDjIa7Y7PJZyjoqO2obSOBqpjp5 VPWQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=2L02B2A7h+GqVFqZYCULI5VizKi71tjYmMAIa783a5M=; b=bSdsiFJABDgrPFvt608yV76tB8Q08g8bClmxakR0BkR06ub8hvVQmhRJehcd/BSLuA yHa9FEEQ+nEJAW0W/Cwaoxfdt5Qw7VKtNbzwwkqQWcpzr8jKtOJu95Lr1/tayHk6DLeM Ny11WFksQeK/X14JQWd5jIfuhWDMD1GVlz9Z0wMrzBDcaknblzh7RMQ6jKkxZA3Ls3qj ufP5rhuborvDj2vtD55pbI5WeT1YjqoP3ItkotxeZFDFTeXXwi9DYQ127bsrxO4m1adh HdI5gxecHOdl9OalDu4dg3gknDqZmeCF7dLS2oS/azvdNDv/6oBOZj0P5Kj56nfbChGM Krzw== X-Gm-Message-State: AFqh2kr9rSlkJo7gPhGKLRhlyaFbr0K/5y+N7bdWN3GvSNl7qgfH5TSZ sWV6SU9Boe9daKPxF8YqS2c= X-Google-Smtp-Source: AMrXdXvPzm5jCpAu44quF4Axn/YToNU1y2zNLchkx1CozU9o+0R0H+kf4PfBM04pdt4cgbCmGjuRqg== X-Received: by 2002:a05:6a00:190e:b0:58b:c1a1:4006 with SMTP id y14-20020a056a00190e00b0058bc1a14006mr25144615pfi.18.1674355586974; Sat, 21 Jan 2023 18:46:26 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:26 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 11/16] x86/hyperv: Add hyperv-specific hadling for VMMCALL under SEV-ES Date: Sat, 21 Jan 2023 21:46:01 -0500 Message-Id: <20230122024607.788454-12-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 
In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Add Hyperv-specific handling for faults caused by VMMCALL instructions. Signed-off-by: Tianyu Lan --- arch/x86/kernel/cpu/mshyperv.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index 9d547751a1a7..9f37b51ba94b 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -694,6 +694,20 @@ static bool __init ms_hyperv_msi_ext_dest_id(void) return eax & HYPERV_VS_PROPERTIES_EAX_EXTENDED_IOAPIC_RTE; } +static void hv_sev_es_hcall_prepare(struct ghcb *ghcb, struct pt_regs *regs) +{ + /* RAX and CPL are already in the GHCB */ + ghcb_set_rcx(ghcb, regs->cx); + ghcb_set_rdx(ghcb, regs->dx); + ghcb_set_r8(ghcb, regs->r8); +} + +static bool hv_sev_es_hcall_finish(struct ghcb *ghcb, struct pt_regs *regs) +{ + /* No checking of the return state needed */ + return true; +} + const __initconst struct hypervisor_x86 x86_hyper_ms_hyperv = { .name = "Microsoft Hyper-V", .detect = ms_hyperv_platform, @@ -701,4 +715,6 @@ const __initconst struct hypervisor_x86 x86_hyper_ms_hyperv = { .init.x2apic_available = ms_hyperv_x2apic_available, .init.msi_ext_dest_id = ms_hyperv_msi_ext_dest_id, .init.init_platform = ms_hyperv_init_platform, + .runtime.sev_es_hcall_prepare = hv_sev_es_hcall_prepare, + .runtime.sev_es_hcall_finish = hv_sev_es_hcall_finish, }; From patchwork Sun Jan 22 02:46:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111400 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 481F1C54EED for ; Sun, 22 Jan 2023 02:47:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230136AbjAVCr2 (ORCPT ); Sat, 21 Jan 2023 21:47:28 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56240 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229983AbjAVCrJ (ORCPT ); Sat, 21 Jan 2023 21:47:09 -0500 Received: from mail-pf1-x430.google.com (mail-pf1-x430.google.com [IPv6:2607:f8b0:4864:20::430]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3F0AC265A4; Sat, 21 Jan 2023 18:46:29 -0800 (PST) Received: by mail-pf1-x430.google.com with SMTP id c26so6557202pfp.10; Sat, 21 Jan 2023 18:46:29 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=z3ti+m9y3xKcUya3hebjrnKHJVDqXdy06y2y11vwt2U=; b=k2AJDj7MrAOIrs6QPnfk4qhD13a4qsjiolrh/f6H+v0KgwNfd8pD3o52fiDiT4jHz7 3zHBGLWZ2o5r5aCWDs3fqRhhRiFJ6Fb1+xeVvJf1WKmMah40JN7fuJImePCBwNnKbvhl k3OtHrHdOCnmGsv+m5haU7fdjHALXsjcSsxzRPqZbVduZ13q6fSBzpvN4xSmBmvRvlgV B6cTNoZMAeu7Wyx/w9BjStLq4oAWl4aUu9okx+MkCRevw94nqLiK7xUMkKToTe+zNIqK xT9NBU2fwrwMAo1sqOvdz0IlnAbWLhLbmuk7bAUFSyX33AOhQP56F6DAH8h5w4dLvtZF XRaw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=z3ti+m9y3xKcUya3hebjrnKHJVDqXdy06y2y11vwt2U=; b=KsWpgIrwGQCkIr20Cw/GF9CgPKvWDtnNVw8iACSxwvZw38ivaBoPxHc6vE8//8ww2k hGrxlQCWh7gxplX5MU9DV3BpicMX9Bzw8IA4TPpquQ1VL+zJNu6aBUyPG8/Z94OKzySy 8QwpQXZTZ8lmLcC/b6CagfDuTUTnLino7//IHs76BA4LJ06pl1X4peKsoHNJYdP3b6nB njh6gSNIN9zY8Hwh9rbH4wyzxWPAzyj7e/OOMP0s3KDH0ZhaZMYIk1Vo8muxFrwtz+sP oMbXJtLHiLmjUBg1Z5YaRNJLk4m5FOK2vvO2chnXyqO+x99ELNQnQdBU0TWuYBnAYjRt BD9g== X-Gm-Message-State: AFqh2koC2gcjyBoEMJ15x8VnEl+3H7cENAwBPrGajVb0kQSSCi9nOJRZ oQ0Q2jm8SqVP8nFmu9VvxmE= X-Google-Smtp-Source: AMrXdXsVF74pO+xQ64VpTUQzJWz1qVR+dEyb3v1EZMNe0+yxI/X+oEOdQDYAuefkU1ic6+GNQ7OmSg== X-Received: by 2002:a05:6a00:1c95:b0:581:6f06:659 with SMTP id y21-20020a056a001c9500b005816f060659mr20630502pfw.6.1674355588292; Sat, 21 Jan 2023 18:46:28 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:27 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 12/16] x86/sev: Add a #HV exception handler Date: Sat, 21 Jan 2023 21:46:02 -0500 Message-Id: <20230122024607.788454-13-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Add a #HV exception handler that uses IST stack. Signed-off-by: Tianyu Lan --- Change since RFC V2: * Remove unnecessary line in the change log. 
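As background for the stack handling in the diff below: like the existing #VC path, the #HV entry relocates the saved register frame off the IST stack onto either the interrupted kernel stack or a dedicated fallback stack (HV2), so nested #HV events can reuse the IST slot. A rough user-space sketch of that copy-and-realign step (illustrative only; the kernel does this in hv_switch_off_ist() on struct pt_regs, and the actual %rsp switch happens in the assembly stub):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the register frame the entry code saves (pt_regs). */
struct regs_frame {
	uint64_t gprs[15];
	uint64_t orig_rax, rip, cs, rflags, rsp, ss;
};

/*
 * Carve out space for a copy of the frame just below 'sp' (8-byte
 * aligned) and copy the original frame there, returning the new
 * frame address to switch to.
 */
static struct regs_frame *relocate_frame(uintptr_t sp, const struct regs_frame *src)
{
	sp = (sp & ~(uintptr_t)7) - sizeof(struct regs_frame);
	struct regs_frame *dst = (struct regs_frame *)sp;

	memcpy(dst, src, sizeof(*dst));
	return dst;
}

int main(void)
{
	static uint8_t stack[4096];	/* stand-in for the target stack */
	struct regs_frame src = { .rip = 0xffffffff81000000ULL };
	struct regs_frame *moved;

	moved = relocate_frame((uintptr_t)&stack[sizeof(stack)], &src);
	printf("frame relocated to %p, rip = %#llx\n",
	       (void *)moved, (unsigned long long)moved->rip);
	return 0;
}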
--- arch/x86/entry/entry_64.S | 58 +++++++++++++++++++++++++++ arch/x86/include/asm/cpu_entry_area.h | 6 +++ arch/x86/include/asm/idtentry.h | 39 +++++++++++++++++- arch/x86/include/asm/page_64_types.h | 1 + arch/x86/include/asm/trapnr.h | 1 + arch/x86/include/asm/traps.h | 1 + arch/x86/kernel/cpu/common.c | 1 + arch/x86/kernel/dumpstack_64.c | 9 ++++- arch/x86/kernel/idt.c | 1 + arch/x86/kernel/sev.c | 53 ++++++++++++++++++++++++ arch/x86/kernel/traps.c | 40 ++++++++++++++++++ arch/x86/mm/cpu_entry_area.c | 2 + 12 files changed, 209 insertions(+), 3 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 15739a2c0983..6baec7653f19 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -563,6 +563,64 @@ SYM_CODE_START(\asmsym) .Lfrom_usermode_switch_stack_\@: idtentry_body user_\cfunc, has_error_code=1 +_ASM_NOKPROBE(\asmsym) +SYM_CODE_END(\asmsym) +.endm +/* + * idtentry_hv - Macro to generate entry stub for #HV + * @vector: Vector number + * @asmsym: ASM symbol for the entry point + * @cfunc: C function to be called + * + * The macro emits code to set up the kernel context for #HV. The #HV handler + * runs on an IST stack and needs to be able to support nested #HV exceptions. + * + * To make this work the #HV entry code tries its best to pretend it doesn't use + * an IST stack by switching to the task stack if coming from user-space (which + * includes early SYSCALL entry path) or back to the stack in the IRET frame if + * entered from kernel-mode. + * + * If entered from kernel-mode the return stack is validated first, and if it is + * not safe to use (e.g. because it points to the entry stack) the #HV handler + * will switch to a fall-back stack (HV2) and call a special handler function. + * + * The macro is only used for one vector, but it is planned to be extended in + * the future for the #HV exception. + */ +.macro idtentry_hv vector asmsym cfunc +SYM_CODE_START(\asmsym) + UNWIND_HINT_IRET_REGS + ASM_CLAC + pushq $-1 /* ORIG_RAX: no syscall to restart */ + + testb $3, CS-ORIG_RAX(%rsp) + jnz .Lfrom_usermode_switch_stack_\@ + + call paranoid_entry + + UNWIND_HINT_REGS + + /* + * Switch off the IST stack to make it free for nested exceptions. + */ + movq %rsp, %rdi /* pt_regs pointer */ + call hv_switch_off_ist + movq %rax, %rsp /* Switch to new stack */ + + UNWIND_HINT_REGS + + /* Update pt_regs */ + movq ORIG_RAX(%rsp), %rsi /* get error code into 2nd argument*/ + movq $-1, ORIG_RAX(%rsp) /* no syscall to restart */ + + movq %rsp, %rdi /* pt_regs pointer */ + call kernel_\cfunc + + jmp paranoid_exit + +.Lfrom_usermode_switch_stack_\@: + idtentry_body user_\cfunc, has_error_code=1 + _ASM_NOKPROBE(\asmsym) SYM_CODE_END(\asmsym) .endm diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h index 462fc34f1317..2186ed601b4a 100644 --- a/arch/x86/include/asm/cpu_entry_area.h +++ b/arch/x86/include/asm/cpu_entry_area.h @@ -30,6 +30,10 @@ char VC_stack[optional_stack_size]; \ char VC2_stack_guard[guardsize]; \ char VC2_stack[optional_stack_size]; \ + char HV_stack_guard[guardsize]; \ + char HV_stack[optional_stack_size]; \ + char HV2_stack_guard[guardsize]; \ + char HV2_stack[optional_stack_size]; \ char IST_top_guard[guardsize]; \ /* The exception stacks' physical storage. 
No guard pages required */ @@ -52,6 +56,8 @@ enum exception_stack_ordering { ESTACK_MCE, ESTACK_VC, ESTACK_VC2, + ESTACK_HV, + ESTACK_HV2, N_EXCEPTION_STACKS }; diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h index 72184b0b2219..652fea10d377 100644 --- a/arch/x86/include/asm/idtentry.h +++ b/arch/x86/include/asm/idtentry.h @@ -317,6 +317,19 @@ static __always_inline void __##func(struct pt_regs *regs) __visible noinstr void kernel_##func(struct pt_regs *regs, unsigned long error_code); \ __visible noinstr void user_##func(struct pt_regs *regs, unsigned long error_code) + +/** + * DECLARE_IDTENTRY_HV - Declare functions for the HV entry point + * @vector: Vector number (ignored for C) + * @func: Function name of the entry point + * + * Maps to DECLARE_IDTENTRY_RAW, but declares also the user C handler. + */ +#define DECLARE_IDTENTRY_HV(vector, func) \ + DECLARE_IDTENTRY_RAW_ERRORCODE(vector, func); \ + __visible noinstr void kernel_##func(struct pt_regs *regs); \ + __visible noinstr void user_##func(struct pt_regs *regs) + /** * DEFINE_IDTENTRY_IST - Emit code for IST entry points * @func: Function name of the entry point @@ -376,6 +389,26 @@ static __always_inline void __##func(struct pt_regs *regs) #define DEFINE_IDTENTRY_VC_USER(func) \ DEFINE_IDTENTRY_RAW_ERRORCODE(user_##func) +/** + * DEFINE_IDTENTRY_HV_KERNEL - Emit code for HV injection handler + * when raised from kernel mode + * @func: Function name of the entry point + * + * Maps to DEFINE_IDTENTRY_RAW + */ +#define DEFINE_IDTENTRY_HV_KERNEL(func) \ + DEFINE_IDTENTRY_RAW(kernel_##func) + +/** + * DEFINE_IDTENTRY_HV_USER - Emit code for HV injection handler + * when raised from user mode + * @func: Function name of the entry point + * + * Maps to DEFINE_IDTENTRY_RAW + */ +#define DEFINE_IDTENTRY_HV_USER(func) \ + DEFINE_IDTENTRY_RAW(user_##func) + #else /* CONFIG_X86_64 */ /** @@ -465,6 +498,9 @@ __visible noinstr void func(struct pt_regs *regs, \ # define DECLARE_IDTENTRY_VC(vector, func) \ idtentry_vc vector asm_##func func +# define DECLARE_IDTENTRY_HV(vector, func) \ + idtentry_hv vector asm_##func func + #else # define DECLARE_IDTENTRY_MCE(vector, func) \ DECLARE_IDTENTRY(vector, func) @@ -622,9 +658,10 @@ DECLARE_IDTENTRY_RAW_ERRORCODE(X86_TRAP_DF, xenpv_exc_double_fault); DECLARE_IDTENTRY_ERRORCODE(X86_TRAP_CP, exc_control_protection); #endif -/* #VC */ +/* #VC & #HV */ #ifdef CONFIG_AMD_MEM_ENCRYPT DECLARE_IDTENTRY_VC(X86_TRAP_VC, exc_vmm_communication); +DECLARE_IDTENTRY_HV(X86_TRAP_HV, exc_hv_injection); #endif #ifdef CONFIG_XEN_PV diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h index e9e2c3ba5923..0bd7dab676c5 100644 --- a/arch/x86/include/asm/page_64_types.h +++ b/arch/x86/include/asm/page_64_types.h @@ -29,6 +29,7 @@ #define IST_INDEX_DB 2 #define IST_INDEX_MCE 3 #define IST_INDEX_VC 4 +#define IST_INDEX_HV 5 /* * Set __PAGE_OFFSET to the most negative possible address + diff --git a/arch/x86/include/asm/trapnr.h b/arch/x86/include/asm/trapnr.h index f5d2325aa0b7..c6583631cecb 100644 --- a/arch/x86/include/asm/trapnr.h +++ b/arch/x86/include/asm/trapnr.h @@ -26,6 +26,7 @@ #define X86_TRAP_XF 19 /* SIMD Floating-Point Exception */ #define X86_TRAP_VE 20 /* Virtualization Exception */ #define X86_TRAP_CP 21 /* Control Protection Exception */ +#define X86_TRAP_HV 28 /* HV injected exception in SNP restricted mode */ #define X86_TRAP_VC 29 /* VMM Communication Exception */ #define X86_TRAP_IRET 32 /* IRET Exception */ diff --git 
a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h index 47ecfff2c83d..6795d3e517d6 100644 --- a/arch/x86/include/asm/traps.h +++ b/arch/x86/include/asm/traps.h @@ -16,6 +16,7 @@ asmlinkage __visible notrace struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs); void __init trap_init(void); asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *eregs); +asmlinkage __visible noinstr struct pt_regs *hv_switch_off_ist(struct pt_regs *eregs); #endif extern bool ibt_selftest(void); diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 9cfca3d7d0e2..e48a489777ec 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -2162,6 +2162,7 @@ static inline void tss_setup_ist(struct tss_struct *tss) tss->x86_tss.ist[IST_INDEX_MCE] = __this_cpu_ist_top_va(MCE); /* Only mapped when SEV-ES is active */ tss->x86_tss.ist[IST_INDEX_VC] = __this_cpu_ist_top_va(VC); + tss->x86_tss.ist[IST_INDEX_HV] = __this_cpu_ist_top_va(HV); } #else /* CONFIG_X86_64 */ diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c index f05339fee778..6d8f8864810c 100644 --- a/arch/x86/kernel/dumpstack_64.c +++ b/arch/x86/kernel/dumpstack_64.c @@ -26,11 +26,14 @@ static const char * const exception_stack_names[] = { [ ESTACK_MCE ] = "#MC", [ ESTACK_VC ] = "#VC", [ ESTACK_VC2 ] = "#VC2", + [ ESTACK_HV ] = "#HV", + [ ESTACK_HV2 ] = "#HV2", + }; const char *stack_type_name(enum stack_type type) { - BUILD_BUG_ON(N_EXCEPTION_STACKS != 6); + BUILD_BUG_ON(N_EXCEPTION_STACKS != 8); if (type == STACK_TYPE_TASK) return "TASK"; @@ -89,6 +92,8 @@ struct estack_pages estack_pages[CEA_ESTACK_PAGES] ____cacheline_aligned = { EPAGERANGE(MCE), EPAGERANGE(VC), EPAGERANGE(VC2), + EPAGERANGE(HV), + EPAGERANGE(HV2), }; static __always_inline bool in_exception_stack(unsigned long *stack, struct stack_info *info) @@ -98,7 +103,7 @@ static __always_inline bool in_exception_stack(unsigned long *stack, struct stac struct pt_regs *regs; unsigned int k; - BUILD_BUG_ON(N_EXCEPTION_STACKS != 6); + BUILD_BUG_ON(N_EXCEPTION_STACKS != 8); begin = (unsigned long)__this_cpu_read(cea_exception_stacks); /* diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c index a58c6bc1cd68..48c0a7e1dbcb 100644 --- a/arch/x86/kernel/idt.c +++ b/arch/x86/kernel/idt.c @@ -113,6 +113,7 @@ static const __initconst struct idt_data def_idts[] = { #ifdef CONFIG_AMD_MEM_ENCRYPT ISTG(X86_TRAP_VC, asm_exc_vmm_communication, IST_INDEX_VC), + ISTG(X86_TRAP_HV, asm_exc_hv_injection, IST_INDEX_HV), #endif SYSG(X86_TRAP_OF, asm_exc_overflow), diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index 679026a640ef..a8862a2eff67 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -2004,6 +2004,59 @@ DEFINE_IDTENTRY_VC_USER(exc_vmm_communication) irqentry_exit_to_user_mode(regs); } +static bool hv_raw_handle_exception(struct pt_regs *regs) +{ + return false; +} + +static __always_inline bool on_hv_fallback_stack(struct pt_regs *regs) +{ + unsigned long sp = (unsigned long)regs; + + return (sp >= __this_cpu_ist_bottom_va(HV2) && sp < __this_cpu_ist_top_va(HV2)); +} + +DEFINE_IDTENTRY_HV_USER(exc_hv_injection) +{ + irqentry_enter_from_user_mode(regs); + instrumentation_begin(); + + if (!hv_raw_handle_exception(regs)) { + /* + * Do not kill the machine if user-space triggered the + * exception. Send SIGBUS instead and let user-space deal + * with it. 
+ */ + force_sig_fault(SIGBUS, BUS_OBJERR, (void __user *)0); + } + + instrumentation_end(); + irqentry_exit_to_user_mode(regs); +} + +DEFINE_IDTENTRY_HV_KERNEL(exc_hv_injection) +{ + irqentry_state_t irq_state; + + irq_state = irqentry_nmi_enter(regs); + instrumentation_begin(); + + if (!hv_raw_handle_exception(regs)) { + pr_emerg("PANIC: Unhandled #HV exception in kernel space\n"); + + /* Show some debug info */ + show_regs(regs); + + /* Ask hypervisor to sev_es_terminate */ + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SEV_ES_GEN_REQ); + + panic("Returned from Terminate-Request to Hypervisor\n"); + } + + instrumentation_end(); + irqentry_nmi_exit(regs, irq_state); +} + bool __init handle_vc_boot_ghcb(struct pt_regs *regs) { unsigned long exit_code = regs->orig_ax; diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index d317dc3d06a3..d29debec8134 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -905,6 +905,46 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r return regs_ret; } + +asmlinkage __visible noinstr struct pt_regs *hv_switch_off_ist(struct pt_regs *regs) +{ + unsigned long sp, *stack; + struct stack_info info; + struct pt_regs *regs_ret; + + /* + * In the SYSCALL entry path the RSP value comes from user-space - don't + * trust it and switch to the current kernel stack + */ + if (ip_within_syscall_gap(regs)) { + sp = this_cpu_read(pcpu_hot.top_of_stack); + goto sync; + } + + /* + * From here on the RSP value is trusted. Now check whether entry + * happened from a safe stack. Not safe are the entry or unknown stacks, + * use the fall-back stack instead in this case. + */ + sp = regs->sp; + stack = (unsigned long *)sp; + + if (!get_stack_info_noinstr(stack, current, &info) || info.type == STACK_TYPE_ENTRY || + info.type > STACK_TYPE_EXCEPTION_LAST) + sp = __this_cpu_ist_top_va(HV2); +sync: + /* + * Found a safe stack - switch to it as if the entry didn't happen via + * IST stack. The code below only copies pt_regs, the real switch happens + * in assembly code. 
+ */ + sp = ALIGN_DOWN(sp, 8) - sizeof(*regs_ret); + + regs_ret = (struct pt_regs *)sp; + *regs_ret = *regs; + + return regs_ret; +} #endif asmlinkage __visible noinstr struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs) diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c index 7316a8224259..3ec844cef652 100644 --- a/arch/x86/mm/cpu_entry_area.c +++ b/arch/x86/mm/cpu_entry_area.c @@ -153,6 +153,8 @@ static void __init percpu_setup_exception_stacks(unsigned int cpu) if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) { cea_map_stack(VC); cea_map_stack(VC2); + cea_map_stack(HV); + cea_map_stack(HV2); } } } From patchwork Sun Jan 22 02:46:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111399 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9EFDC25B50 for ; Sun, 22 Jan 2023 02:47:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230130AbjAVCr1 (ORCPT ); Sat, 21 Jan 2023 21:47:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57392 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230040AbjAVCrJ (ORCPT ); Sat, 21 Jan 2023 21:47:09 -0500 Received: from mail-pg1-x535.google.com (mail-pg1-x535.google.com [IPv6:2607:f8b0:4864:20::535]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 872302448F; Sat, 21 Jan 2023 18:46:30 -0800 (PST) Received: by mail-pg1-x535.google.com with SMTP id g68so6744438pgc.11; Sat, 21 Jan 2023 18:46:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=KLExipRMWfkYVFwFaKS7EfBjRw3kU6eyOtpQO4do6zs=; b=iZgUb5gxqddyVAL8PlrW9j86YWsLUoSzCYIsEOdBUV8FNwLHho4x+8GLjBGpp7Y3Io gXc8i/FXlz+NXrB/iNyudERtbb8Fbbv6k6pFIssUFbhmyNirqZrtSm7GqmGgQQqlcLLE cAM3njrwCkmre97UTS220uX4f16Av0jFoXRXeqSsS/ECFpzU9vdhrgXQ/t3Cnd5rPYre DV2PQChItvFyzYvjD0xnvEErsT08/vMc0+8qAN3Tdo+fRE/aYQ2CAPioFqwrQHsGjOnh TBlJ3tfNbqLe6LPCbe/7ZWu5WDOGFQLHRM311AUAUhoPY+1PMprugND60ElXbe+l4Oi3 RH5A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KLExipRMWfkYVFwFaKS7EfBjRw3kU6eyOtpQO4do6zs=; b=fAcDeV013+8CXCYVGQwohUPKGfVlmKKHOheQrsoRx8gGS80/8IE0cUjVg7YiJFkXJm Pw2BbDqHpCCNYkI7c4Y80MLnfgckHM649gyTREDUffLIz+6jcqbnitJnQn4rwwdga5t0 Aow0Gv1p5+UKjMgSSrNvFFdeBf9frM7dhFwXAohir6Kt9UYx5boBLEHeBX4rVG0q1MT0 CfUDifwlfbE8UDmNrIi8Y1J4zwqteUHxo8FlgUd6u+3YM/I2ray/6SffaGQ0RsMenGwa 8CgY+G5P/uxu5iP9ZdFEE9/AZ/onSV5DQqvXD1YSlnLuNjCWYCZFPep/PTiC3p01kALk cnwg== X-Gm-Message-State: AFqh2kpNMVGxgJcQK6CcGA9XlpGOHbEJrARxpHSSsEbY3zhyqWsDGJSj 5P/Kt0ds7UVD0DVxchj6vLU= X-Google-Smtp-Source: AMrXdXvhgf2d6Qr7Pg5vloeC44DEXttW6lAKPC/URQb2MP1owft/zq78EPqmeHOWdfGVjkrY6PhdnQ== X-Received: by 2002:a05:6a00:44c5:b0:580:8c2c:d0ad with SMTP id cv5-20020a056a0044c500b005808c2cd0admr19676912pfb.13.1674355589646; Sat, 21 Jan 2023 18:46:29 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id 
b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:29 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 13/16] x86/sev: Add Check of #HV event in path Date: Sat, 21 Jan 2023 21:46:03 -0500 Message-Id: <20230122024607.788454-14-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Add check_hv_pending() and check_hv_pending_after_irq() to check queued #HV event when irq is disabled. Signed-off-by: Tianyu Lan --- arch/x86/entry/entry_64.S | 18 +++++++++++++++ arch/x86/include/asm/irqflags.h | 10 +++++++++ arch/x86/kernel/sev.c | 39 +++++++++++++++++++++++++++++++++ 3 files changed, 67 insertions(+) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 6baec7653f19..aec8dc4443d1 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -1064,6 +1064,15 @@ SYM_CODE_END(paranoid_entry) * R15 - old SPEC_CTRL */ SYM_CODE_START_LOCAL(paranoid_exit) +#ifdef CONFIG_AMD_MEM_ENCRYPT + /* + * If a #HV was delivered during execution and interrupts were + * disabled, then check if it can be handled before the iret + * (which may re-enable interrupts). + */ + mov %rsp, %rdi + call check_hv_pending +#endif UNWIND_HINT_REGS /* @@ -1188,6 +1197,15 @@ SYM_CODE_START(error_entry) SYM_CODE_END(error_entry) SYM_CODE_START_LOCAL(error_return) +#ifdef CONFIG_AMD_MEM_ENCRYPT + /* + * If a #HV was delivered during execution and interrupts were + * disabled, then check if it can be handled before the iret + * (which may re-enable interrupts). 
+ */ + mov %rsp, %rdi + call check_hv_pending +#endif UNWIND_HINT_REGS DEBUG_ENTRY_ASSERT_IRQS_OFF testb $3, CS(%rsp) diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h index 7793e52d6237..fe46e59168dd 100644 --- a/arch/x86/include/asm/irqflags.h +++ b/arch/x86/include/asm/irqflags.h @@ -14,6 +14,10 @@ /* * Interrupt control: */ +#ifdef CONFIG_AMD_MEM_ENCRYPT +void check_hv_pending(struct pt_regs *regs); +void check_hv_pending_irq_enable(void); +#endif /* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */ extern inline unsigned long native_save_fl(void); @@ -43,12 +47,18 @@ static __always_inline void native_irq_disable(void) static __always_inline void native_irq_enable(void) { asm volatile("sti": : :"memory"); +#ifdef CONFIG_AMD_MEM_ENCRYPT + check_hv_pending_irq_enable(); +#endif } static inline __cpuidle void native_safe_halt(void) { mds_idle_clear_cpu_buffers(); asm volatile("sti; hlt": : :"memory"); +#ifdef CONFIG_AMD_MEM_ENCRYPT + check_hv_pending_irq_enable(); +#endif } static inline __cpuidle void native_halt(void) diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index a8862a2eff67..fe5e5e41433d 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -179,6 +179,45 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs) this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], new_ist); } +static void do_exc_hv(struct pt_regs *regs) +{ + /* Handle #HV exception. */ +} + +void check_hv_pending(struct pt_regs *regs) +{ + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) + return; + + if ((regs->flags & X86_EFLAGS_IF) == 0) + return; + + do_exc_hv(regs); +} + +void check_hv_pending_irq_enable(void) +{ + unsigned long flags; + struct pt_regs regs; + + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) + return; + + memset(®s, 0, sizeof(struct pt_regs)); + asm volatile("movl %%cs, %%eax;" : "=a" (regs.cs)); + asm volatile("movl %%ss, %%eax;" : "=a" (regs.ss)); + regs.orig_ax = 0xffffffff; + regs.flags = native_save_fl(); + + /* + * Disable irq when handle pending #HV events after + * re-enabling irq. 
+ */ + asm volatile("cli" : : : "memory"); + do_exc_hv(®s); + asm volatile("sti" : : : "memory"); +} + void noinstr __sev_es_ist_exit(void) { unsigned long ist; From patchwork Sun Jan 22 02:46:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111401 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05629C54EB4 for ; Sun, 22 Jan 2023 02:47:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230010AbjAVCrp (ORCPT ); Sat, 21 Jan 2023 21:47:45 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56602 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230093AbjAVCrR (ORCPT ); Sat, 21 Jan 2023 21:47:17 -0500 Received: from mail-pf1-x431.google.com (mail-pf1-x431.google.com [IPv6:2607:f8b0:4864:20::431]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A00E42749B; Sat, 21 Jan 2023 18:46:36 -0800 (PST) Received: by mail-pf1-x431.google.com with SMTP id c26so6557242pfp.10; Sat, 21 Jan 2023 18:46:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=wODZ6fOzHez3jHxCs8MJHpbqBvrocCyrdLunVVUGFi8=; b=DDHiqioEMRd5nxl35OooG8feH4vQ5xT6qiE++UTqOVQnvKtHouuZyf1o8/NNyvhRiU 2vub87khRfyYMFsNoHnWLHZV2LuzfukssKnOExRUnC6j0AuvseCdlpvjD2E2lvFASxFl WkMW6ctC0bth7J5ZICaQf6mlgt9fk2l1XyQ9apbCKp+jkiwq0+PoCiXOE+FFGkl0SB6R ABZ9vNkVfrl9kWXbPz0Box6mx8R3DwZIhJTnNN4P0oh9v/ujNbVNoEcm8yhqb6jU4Gv2 H61iQO8d0lH/E32DWuA6PlYZWYjohEHcxrlxEU5TB/ue928BU/48EruyKJk/xB40PU18 SKVg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=wODZ6fOzHez3jHxCs8MJHpbqBvrocCyrdLunVVUGFi8=; b=t1CLzzE/W6tEw4hP6GbhdfZh7VBda/sDYijw/6K4L0FcruLPVcAH3K+nBdwsZO26hU xb9Vy+wYqYi2G9eBsclpQRu5IBVaEZiTQ/eeqMIdAWwA8bz1QU56V06Gjx5WpmEW52NR Sts1nKR75u8kbpzrLCtEGoDOYomVJrnqhs7FiLGdrvKi9YupcouWEw8Pv3YjGTwgkeAs 3YaO2csB9BRcjFAULt3MOdX9hX2/IGiY+28eG8o6Xgt85FfK8HrfM+9Mn6yEizEvp3Eu zo/Aa6+cMylN8U0WXosYEP2sVSgTLrj3sPYaArcrkBJXL+ZcwsfF32Frm6QU9+r5jdG8 +T3Q== X-Gm-Message-State: AFqh2kp5caPZ6Hvk5E0MUNeRC5Ayhl7Sa+JLW9BN6wXx7OIhfZgsGNMX lIr6I0it98tCKlvLCK+o1LE= X-Google-Smtp-Source: AMrXdXuhFBuxX2dsiIUyhaydLmKqTtx4FxXA0dRxvx34XUXiI6xBvdoM9FiAhKDM2mhFPL3a1VR48g== X-Received: by 2002:a05:6a00:4088:b0:58d:9ad6:6ae7 with SMTP id bw8-20020a056a00408800b0058d9ad66ae7mr22763945pfb.19.1674355591027; Sat, 21 Jan 2023 18:46:31 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:30 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, 
ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 14/16] x86/sev: Initialize #HV doorbell and handle interrupt requests Date: Sat, 21 Jan 2023 21:46:04 -0500 Message-Id: <20230122024607.788454-15-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Tianyu Lan Enable #HV exception to handle interrupt requests from hypervisor. Co-developed-by: Lendacky Thomas Co-developed-by: Kalra Ashish Signed-off-by: Tianyu Lan --- arch/x86/include/asm/mem_encrypt.h | 2 + arch/x86/include/asm/msr-index.h | 6 + arch/x86/include/asm/svm.h | 12 +- arch/x86/include/uapi/asm/svm.h | 4 + arch/x86/kernel/sev.c | 307 +++++++++++++++++++++++------ arch/x86/kernel/traps.c | 2 + 6 files changed, 272 insertions(+), 61 deletions(-) diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 72ca90552b6a..7264ca5f5b2d 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -50,6 +50,7 @@ void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, void __init mem_encrypt_free_decrypted_mem(void); void __init sev_es_init_vc_handling(void); +void __init sev_snp_init_hv_handling(void); #define __bss_decrypted __section(".bss..decrypted") @@ -72,6 +73,7 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { } static inline void __init sme_enable(struct boot_params *bp) { } static inline void sev_es_init_vc_handling(void) { } +static inline void sev_snp_init_hv_handling(void) { } static inline int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; } diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 6a6e70e792a4..70af0ce5f2c4 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -562,11 +562,17 @@ #define MSR_AMD64_SEV_ENABLED_BIT 0 #define MSR_AMD64_SEV_ES_ENABLED_BIT 1 #define MSR_AMD64_SEV_SNP_ENABLED_BIT 2 +#define MSR_AMD64_SEV_REFLECTVC_ENABLED_BIT 4 +#define MSR_AMD64_SEV_RESTRICTED_INJECTION_ENABLED_BIT 5 +#define MSR_AMD64_SEV_ALTERNATE_INJECTION_ENABLED_BIT 6 #define MSR_AMD64_SEV_ENABLED BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT) #define MSR_AMD64_SEV_ES_ENABLED BIT_ULL(MSR_AMD64_SEV_ES_ENABLED_BIT) #define MSR_AMD64_SEV_SNP_ENABLED BIT_ULL(MSR_AMD64_SEV_SNP_ENABLED_BIT) #define MSR_AMD64_SNP_VTOM_ENABLED BIT_ULL(3) +#define MSR_AMD64_SEV_REFLECTVC_ENABLED BIT_ULL(MSR_AMD64_SEV_REFLECTVC_ENABLED_BIT) +#define MSR_AMD64_SEV_RESTRICTED_INJECTION_ENABLED BIT_ULL(MSR_AMD64_SEV_RESTRICTED_INJECTION_ENABLED_BIT) +#define MSR_AMD64_SEV_ALTERNATE_INJECTION_ENABLED BIT_ULL(MSR_AMD64_SEV_ALTERNATE_INJECTION_ENABLED_BIT) #define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f /* AMD Collaborative Processor Performance Control MSRs */ diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h index 
f8b321a11ee4..911c991fec78 100644 --- a/arch/x86/include/asm/svm.h +++ b/arch/x86/include/asm/svm.h @@ -568,12 +568,12 @@ static inline void __unused_size_checks(void) /* Check offsets of reserved fields */ - BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0xa0); - BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0xcc); - BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0xd8); - BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0x180); - BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0x248); - BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0x298); +// BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0xa0); +// BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0xcc); +// BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0xd8); +// BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0x180); +// BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0x248); +// BUILD_BUG_RESERVED_OFFSET(vmcb_save_area, 0x298); BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0xc8); BUILD_BUG_RESERVED_OFFSET(sev_es_save_area, 0xcc); diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h index f69c168391aa..85d6882262e7 100644 --- a/arch/x86/include/uapi/asm/svm.h +++ b/arch/x86/include/uapi/asm/svm.h @@ -115,6 +115,10 @@ #define SVM_VMGEXIT_AP_CREATE_ON_INIT 0 #define SVM_VMGEXIT_AP_CREATE 1 #define SVM_VMGEXIT_AP_DESTROY 2 +#define SVM_VMGEXIT_HV_DOORBELL_PAGE 0x80000014 +#define SVM_VMGEXIT_GET_PREFERRED_HV_DOORBELL_PAGE 0 +#define SVM_VMGEXIT_SET_HV_DOORBELL_PAGE 1 +#define SVM_VMGEXIT_QUERY_HV_DOORBELL_PAGE 2 #define SVM_VMGEXIT_HV_FEATURES 0x8000fffd #define SVM_VMGEXIT_UNSUPPORTED_EVENT 0x8000ffff diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index fe5e5e41433d..03d99fad9e76 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -122,6 +122,150 @@ struct sev_config { static struct sev_config sev_cfg __read_mostly; +static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state); +static noinstr void __sev_put_ghcb(struct ghcb_state *state); +static int vmgexit_hv_doorbell_page(struct ghcb *ghcb, u64 op, u64 pa); +static void sev_snp_setup_hv_doorbell_page(struct ghcb *ghcb); + +union hv_pending_events { + u16 events; + struct { + u8 vector; + u8 nmi : 1; + u8 mc : 1; + u8 reserved1 : 5; + u8 no_further_signal : 1; + }; +}; + +struct sev_hv_doorbell_page { + union hv_pending_events pending_events; + u8 no_eoi_required; + u8 reserved2[61]; + u8 padding[4032]; +}; + +struct sev_snp_runtime_data { + struct sev_hv_doorbell_page hv_doorbell_page; +}; + +static DEFINE_PER_CPU(struct sev_snp_runtime_data*, snp_runtime_data); + +static inline u64 sev_es_rd_ghcb_msr(void) +{ + return __rdmsr(MSR_AMD64_SEV_ES_GHCB); +} + +static __always_inline void sev_es_wr_ghcb_msr(u64 val) +{ + u32 low, high; + + low = (u32)(val); + high = (u32)(val >> 32); + + native_wrmsr(MSR_AMD64_SEV_ES_GHCB, low, high); +} + +struct sev_hv_doorbell_page *sev_snp_current_doorbell_page(void) +{ + return &this_cpu_read(snp_runtime_data)->hv_doorbell_page; +} + +static u8 sev_hv_pending(void) +{ + return sev_snp_current_doorbell_page()->pending_events.events; +} + +static void hv_doorbell_apic_eoi_write(u32 reg, u32 val) +{ + if (xchg(&sev_snp_current_doorbell_page()->no_eoi_required, 0) & 0x1) + return; + + BUG_ON(reg != APIC_EOI); + apic->write(reg, val); +} + +static void do_exc_hv(struct pt_regs *regs) +{ + union hv_pending_events pending_events; + u8 vector; + + while (sev_hv_pending()) { + pending_events.events = xchg( + &sev_snp_current_doorbell_page()->pending_events.events, + 0); + + if (pending_events.nmi) + exc_nmi(regs); + +#ifdef CONFIG_X86_MCE + if 
(pending_events.mc) + exc_machine_check(regs); +#endif + + if (!pending_events.vector) + return; + + if (pending_events.vector < FIRST_EXTERNAL_VECTOR) { + /* Exception vectors */ + WARN(1, "exception shouldn't happen\n"); + } else if (pending_events.vector == FIRST_EXTERNAL_VECTOR) { + sysvec_irq_move_cleanup(regs); + } else if (pending_events.vector == IA32_SYSCALL_VECTOR) { + WARN(1, "syscall shouldn't happen\n"); + } else if (pending_events.vector >= FIRST_SYSTEM_VECTOR) { + switch (pending_events.vector) { +#if IS_ENABLED(CONFIG_HYPERV) + case HYPERV_STIMER0_VECTOR: + sysvec_hyperv_stimer0(regs); + break; + case HYPERVISOR_CALLBACK_VECTOR: + sysvec_hyperv_callback(regs); + break; +#endif +#ifdef CONFIG_SMP + case RESCHEDULE_VECTOR: + sysvec_reschedule_ipi(regs); + break; + case IRQ_MOVE_CLEANUP_VECTOR: + sysvec_irq_move_cleanup(regs); + break; + case REBOOT_VECTOR: + sysvec_reboot(regs); + break; + case CALL_FUNCTION_SINGLE_VECTOR: + sysvec_call_function_single(regs); + break; + case CALL_FUNCTION_VECTOR: + sysvec_call_function(regs); + break; +#endif +#ifdef CONFIG_X86_LOCAL_APIC + case ERROR_APIC_VECTOR: + sysvec_error_interrupt(regs); + break; + case SPURIOUS_APIC_VECTOR: + sysvec_spurious_apic_interrupt(regs); + break; + case LOCAL_TIMER_VECTOR: + sysvec_apic_timer_interrupt(regs); + break; + case X86_PLATFORM_IPI_VECTOR: + sysvec_x86_platform_ipi(regs); + break; +#endif + case 0x0: + break; + default: + panic("Unexpected vector %d\n", vector); + unreachable(); + } + } else { + common_interrupt(regs, pending_events.vector); + } + } +} + static __always_inline bool on_vc_stack(struct pt_regs *regs) { unsigned long sp = regs->sp; @@ -179,11 +323,6 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs) this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], new_ist); } -static void do_exc_hv(struct pt_regs *regs) -{ - /* Handle #HV exception. */ -} - void check_hv_pending(struct pt_regs *regs) { if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) @@ -232,68 +371,38 @@ void noinstr __sev_es_ist_exit(void) this_cpu_write(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC], *(unsigned long *)ist); } -/* - * Nothing shall interrupt this code path while holding the per-CPU - * GHCB. The backup GHCB is only for NMIs interrupting this path. - * - * Callers must disable local interrupts around it. - */ -static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state) +static bool sev_restricted_injection_enabled(void) +{ + return sev_status & MSR_AMD64_SEV_RESTRICTED_INJECTION_ENABLED; +} + +void __init sev_snp_init_hv_handling(void) { + struct sev_snp_runtime_data *snp_data; struct sev_es_runtime_data *data; + struct ghcb_state state; struct ghcb *ghcb; + unsigned long flags; + int cpu; + int err; WARN_ON(!irqs_disabled()); + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP) || !sev_restricted_injection_enabled()) + return; data = this_cpu_read(runtime_data); - ghcb = &data->ghcb_page; - if (unlikely(data->ghcb_active)) { - /* GHCB is already in use - save its contents */ - - if (unlikely(data->backup_ghcb_active)) { - /* - * Backup-GHCB is also already in use. There is no way - * to continue here so just kill the machine. To make - * panic() work, mark GHCBs inactive so that messages - * can be printed out. - */ - data->ghcb_active = false; - data->backup_ghcb_active = false; - - instrumentation_begin(); - panic("Unable to handle #VC exception! 
GHCB and Backup GHCB are already in use"); - instrumentation_end(); - } - - /* Mark backup_ghcb active before writing to it */ - data->backup_ghcb_active = true; - - state->ghcb = &data->backup_ghcb; + local_irq_save(flags); - /* Backup GHCB content */ - *state->ghcb = *ghcb; - } else { - state->ghcb = NULL; - data->ghcb_active = true; - } + ghcb = __sev_get_ghcb(&state); - return ghcb; -} + sev_snp_setup_hv_doorbell_page(ghcb); -static inline u64 sev_es_rd_ghcb_msr(void) -{ - return __rdmsr(MSR_AMD64_SEV_ES_GHCB); -} - -static __always_inline void sev_es_wr_ghcb_msr(u64 val) -{ - u32 low, high; + __sev_put_ghcb(&state); - low = (u32)(val); - high = (u32)(val >> 32); + apic_set_eoi_write(hv_doorbell_apic_eoi_write); - native_wrmsr(MSR_AMD64_SEV_ES_GHCB, low, high); + local_irq_restore(flags); } static int vc_fetch_insn_kernel(struct es_em_ctxt *ctxt, @@ -554,6 +663,69 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt /* Include code shared with pre-decompression boot stage */ #include "sev-shared.c" +/* + * Nothing shall interrupt this code path while holding the per-CPU + * GHCB. The backup GHCB is only for NMIs interrupting this path. + * + * Callers must disable local interrupts around it. + */ +static noinstr struct ghcb *__sev_get_ghcb(struct ghcb_state *state) +{ + struct sev_es_runtime_data *data; + struct ghcb *ghcb; + + WARN_ON(!irqs_disabled()); + + data = this_cpu_read(runtime_data); + ghcb = &data->ghcb_page; + + if (unlikely(data->ghcb_active)) { + /* GHCB is already in use - save its contents */ + + if (unlikely(data->backup_ghcb_active)) { + /* + * Backup-GHCB is also already in use. There is no way + * to continue here so just kill the machine. To make + * panic() work, mark GHCBs inactive so that messages + * can be printed out. + */ + data->ghcb_active = false; + data->backup_ghcb_active = false; + + instrumentation_begin(); + panic("Unable to handle #VC exception! 
GHCB and Backup GHCB are already in use"); + instrumentation_end(); + } + + /* Mark backup_ghcb active before writing to it */ + data->backup_ghcb_active = true; + + state->ghcb = &data->backup_ghcb; + + /* Backup GHCB content */ + *state->ghcb = *ghcb; + } else { + state->ghcb = NULL; + data->ghcb_active = true; + } + + return ghcb; +} + +static void sev_snp_setup_hv_doorbell_page(struct ghcb *ghcb) +{ + u64 pa; + enum es_result ret; + + pa = __pa(sev_snp_current_doorbell_page()); + vc_ghcb_invalidate(ghcb); + ret = vmgexit_hv_doorbell_page(ghcb, + SVM_VMGEXIT_SET_HV_DOORBELL_PAGE, + pa); + if (ret != ES_OK) + panic("SEV-SNP: failed to set up #HV doorbell page"); +} + static noinstr void __sev_put_ghcb(struct ghcb_state *state) { struct sev_es_runtime_data *data; @@ -1282,6 +1454,7 @@ static void snp_register_per_cpu_ghcb(void) ghcb = &data->ghcb_page; snp_register_ghcb_early(__pa(ghcb)); + sev_snp_setup_hv_doorbell_page(ghcb); } void setup_ghcb(void) @@ -1321,6 +1494,11 @@ void setup_ghcb(void) snp_register_ghcb_early(__pa(&boot_ghcb_page)); } +int vmgexit_hv_doorbell_page(struct ghcb *ghcb, u64 op, u64 pa) +{ + return sev_es_ghcb_hv_call(ghcb, NULL, SVM_VMGEXIT_HV_DOORBELL_PAGE, op, pa); +} + #ifdef CONFIG_HOTPLUG_CPU static void sev_es_ap_hlt_loop(void) { @@ -1394,6 +1572,7 @@ static void __init alloc_runtime_data(int cpu) static void __init init_ghcb(int cpu) { struct sev_es_runtime_data *data; + struct sev_snp_runtime_data *snp_data; int err; data = per_cpu(runtime_data, cpu); @@ -1405,6 +1584,19 @@ static void __init init_ghcb(int cpu) memset(&data->ghcb_page, 0, sizeof(data->ghcb_page)); + snp_data = memblock_alloc(sizeof(*snp_data), PAGE_SIZE); + if (!snp_data) + panic("Can't allocate SEV-SNP runtime data"); + + err = early_set_memory_decrypted((unsigned long)&snp_data->hv_doorbell_page, + sizeof(snp_data->hv_doorbell_page)); + if (err) + panic("Can't map #HV doorbell pages unencrypted"); + + memset(&snp_data->hv_doorbell_page, 0, sizeof(snp_data->hv_doorbell_page)); + + per_cpu(snp_runtime_data, cpu) = snp_data; + data->ghcb_active = false; data->backup_ghcb_active = false; } @@ -2045,7 +2237,12 @@ DEFINE_IDTENTRY_VC_USER(exc_vmm_communication) static bool hv_raw_handle_exception(struct pt_regs *regs) { - return false; + /* Clear the no_further_signal bit */ + sev_snp_current_doorbell_page()->pending_events.events &= 0x7fff; + + check_hv_pending(regs); + + return true; } static __always_inline bool on_hv_fallback_stack(struct pt_regs *regs) diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index d29debec8134..1aa6cab2394b 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -1503,5 +1503,7 @@ void __init trap_init(void) cpu_init_exception_handling(); /* Setup traps as cpu_init() might #GP */ idt_setup_traps(); + sev_snp_init_hv_handling(); + cpu_init(); } From patchwork Sun Jan 22 02:46:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111402 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4131BC54EB4 for ; Sun, 22 Jan 2023 02:47:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230167AbjAVCr6 (ORCPT ); Sat, 21 Jan 2023 21:47:58 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56242 "EHLO lindbergh.monkeyblade.net" 
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229920AbjAVCrW (ORCPT ); Sat, 21 Jan 2023 21:47:22 -0500 Received: from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com [IPv6:2607:f8b0:4864:20::102c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8543A24481; Sat, 21 Jan 2023 18:46:42 -0800 (PST) Received: by mail-pj1-x102c.google.com with SMTP id h5-20020a17090a9c0500b0022bb85eb35dso3944605pjp.3; Sat, 21 Jan 2023 18:46:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=HfzaNmMXTvYjZaqBx5BRGh5/YKdqJuGZMdeNixaODzI=; b=CftyTjTfQaZlq6X+W/d9ddvb3E7GIsvuW56grnwYN/kjL2hPuns+Lv2zmf17h+Bz80 gbHbRV1L8cabmqUYsjDWnLabuuY3Fs2fdwyNow7dZX5lYjA7hJmqoT1VL4JTMZwgf/iP xRTEsQOgqV0xsq+8+EjD22xPS0quK+jPa6L/Sy1KHJjQ+yHsp92f12rKF4U9gBbT2b9r gxvd53MiamSbL1spRy4c3k01MsHuL/cZQKZ8cO5DUN7B2AZQQFJ3RUYYXXw9gx4VS7mW BZA9YUfo9CWDXAMW8z4+5ZBTnla0ngLZLEzfeZ02prI0eganSll9A3Z4+ob8VlwpmvTp d6hg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=HfzaNmMXTvYjZaqBx5BRGh5/YKdqJuGZMdeNixaODzI=; b=sysE2oWkeupsYkoczfiePeBYTgchXDICspFd8cEqnveBs5iYg43/k7L+G1RHymQu21 DvpzhDpi8b6n1rdcwMJUomqssvDGvOYAtZJPeAais7jQdCfOkpv7bRcWAQxeHNw33+f+ +ONHlsfTIl7JA5sfsORG4/GlzijmUpYAtWctk1pnDlMghJCckNj0Z1PXmVU+KUx3KDVa yvy3JTwfFaMG9NFuRifvMDWUmVS78K/0sLk1up23ua/FMgQ2dHsqu/KyRcXIWuRxVo5f vs5vaO6BijxxAL9Dg4NbgIQA+0YTnVBdV//DvHxQ6BbYL04syxPAOZhN0Pc+VDHwDSNw fzhQ== X-Gm-Message-State: AFqh2koMMVk6CeTfb4Zk4uZ1K3lrjodqjlVeF1/Dipuz3GnTnIdQhJbN HUOeBKR40ZldSWT4SODa6sQ= X-Google-Smtp-Source: AMrXdXueHgf+64MbPOvUl3iPQC56zhsQqJoJWPq4Yf/nM88fQTLxW966nL9hfBqm61ZqY3d5YnD66Q== X-Received: by 2002:a05:6a21:1646:b0:ac:29b4:11bc with SMTP id no6-20020a056a21164600b000ac29b411bcmr19655447pzb.21.1674355592364; Sat, 21 Jan 2023 18:46:32 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:31 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 15/16] x86/sev: optimize system vector processing invoked from #HV exception Date: Sat, 21 Jan 2023 21:46:05 -0500 Message-Id: <20230122024607.788454-16-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: 
<20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ashish Kalra Construct system vector table and dispatch system vector exceptions through sysvec_table from #HV exception handler instead of explicitly calling each system vector. The system vector table is created dynamically and is placed in a new named ELF section. Signed-off-by: Ashish Kalra --- arch/x86/entry/entry_64.S | 6 +++ arch/x86/kernel/sev.c | 70 +++++++++++++---------------------- arch/x86/kernel/vmlinux.lds.S | 7 ++++ 3 files changed, 38 insertions(+), 45 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index aec8dc4443d1..03af871f08e9 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -412,6 +412,12 @@ SYM_CODE_START(\asmsym) _ASM_NOKPROBE(\asmsym) SYM_CODE_END(\asmsym) + .if \vector >= FIRST_SYSTEM_VECTOR && \vector < NR_VECTORS + .section .system_vectors, "aw" + .byte \vector + .quad \cfunc + .previous + .endif .endm /* diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index 03d99fad9e76..b1a98c2a52f8 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -151,6 +151,16 @@ struct sev_snp_runtime_data { static DEFINE_PER_CPU(struct sev_snp_runtime_data*, snp_runtime_data); +static void (*sysvec_table[NR_VECTORS - FIRST_SYSTEM_VECTOR]) + (struct pt_regs *regs) __ro_after_init; + +struct sysvec_entry { + unsigned char vector; + void (*sysvec_func)(struct pt_regs *regs); +} __packed; + +extern struct sysvec_entry __system_vectors[], __system_vectors_end[]; + static inline u64 sev_es_rd_ghcb_msr(void) { return __rdmsr(MSR_AMD64_SEV_ES_GHCB); @@ -214,51 +224,11 @@ static void do_exc_hv(struct pt_regs *regs) } else if (pending_events.vector == IA32_SYSCALL_VECTOR) { WARN(1, "syscall shouldn't happen\n"); } else if (pending_events.vector >= FIRST_SYSTEM_VECTOR) { - switch (pending_events.vector) { -#if IS_ENABLED(CONFIG_HYPERV) - case HYPERV_STIMER0_VECTOR: - sysvec_hyperv_stimer0(regs); - break; - case HYPERVISOR_CALLBACK_VECTOR: - sysvec_hyperv_callback(regs); - break; -#endif -#ifdef CONFIG_SMP - case RESCHEDULE_VECTOR: - sysvec_reschedule_ipi(regs); - break; - case IRQ_MOVE_CLEANUP_VECTOR: - sysvec_irq_move_cleanup(regs); - break; - case REBOOT_VECTOR: - sysvec_reboot(regs); - break; - case CALL_FUNCTION_SINGLE_VECTOR: - sysvec_call_function_single(regs); - break; - case CALL_FUNCTION_VECTOR: - sysvec_call_function(regs); - break; -#endif -#ifdef CONFIG_X86_LOCAL_APIC - case ERROR_APIC_VECTOR: - sysvec_error_interrupt(regs); - break; - case SPURIOUS_APIC_VECTOR: - sysvec_spurious_apic_interrupt(regs); - break; - case LOCAL_TIMER_VECTOR: - sysvec_apic_timer_interrupt(regs); - break; - case X86_PLATFORM_IPI_VECTOR: - sysvec_x86_platform_ipi(regs); - break; -#endif - case 0x0: - break; - default: - panic("Unexpected vector %d\n", vector); - unreachable(); + if (!(sysvec_table[pending_events.vector - FIRST_SYSTEM_VECTOR])) { + WARN(1, "system vector entry 0x%x is NULL\n", + pending_events.vector); + } else { + (*sysvec_table[pending_events.vector - FIRST_SYSTEM_VECTOR])(regs); } } else { common_interrupt(regs, pending_events.vector); @@ -376,6 +346,14 @@ static bool sev_restricted_injection_enabled(void) return sev_status & MSR_AMD64_SEV_RESTRICTED_INJECTION_ENABLED; } +static void __init construct_sysvec_table(void) +{ + struct sysvec_entry *p; + + for (p = __system_vectors; p < __system_vectors_end; p++) + 
sysvec_table[p->vector - FIRST_SYSTEM_VECTOR] = p->sysvec_func; +} + void __init sev_snp_init_hv_handling(void) { struct sev_snp_runtime_data *snp_data; @@ -403,6 +381,8 @@ void __init sev_snp_init_hv_handling(void) apic_set_eoi_write(hv_doorbell_apic_eoi_write); local_irq_restore(flags); + + construct_sysvec_table(); } static int vc_fetch_insn_kernel(struct es_em_ctxt *ctxt, diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 2e0ee14229bf..aeadb4754b00 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -339,6 +339,13 @@ SECTIONS *(.altinstr_replacement) } + . = ALIGN(8); + .system_vectors : AT(ADDR(.system_vectors) - LOAD_OFFSET) { + __system_vectors = .; + *(.system_vectors) + __system_vectors_end = .; + } + . = ALIGN(8); .apicdrivers : AT(ADDR(.apicdrivers) - LOAD_OFFSET) { __apicdrivers = .; From patchwork Sun Jan 22 02:46:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 13111403 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 372A1C27C76 for ; Sun, 22 Jan 2023 02:48:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229843AbjAVCr7 (ORCPT ); Sat, 21 Jan 2023 21:47:59 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57304 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229938AbjAVCrX (ORCPT ); Sat, 21 Jan 2023 21:47:23 -0500 Received: from mail-pg1-x52c.google.com (mail-pg1-x52c.google.com [IPv6:2607:f8b0:4864:20::52c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 44A2E2686F; Sat, 21 Jan 2023 18:46:43 -0800 (PST) Received: by mail-pg1-x52c.google.com with SMTP id 7so6765601pga.1; Sat, 21 Jan 2023 18:46:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=07U05LP61qajW1mCV1qXcQXICcjAVDSkBvwRtl3T56g=; b=V7DXN5ymQIHHKZ2ZJ3l3N5dh4HYfRbSVgt8QS1agUUpwF+FxfquraJIp1mYpBZedVM DxiDY042U8tFeg1swMsbpfjJemHKS8wI+ECwSBfse9saZVKNmNQottgOdv5PZpGEJjmS Kf4A/LS2CorUcT4ENXjr5IXFfEWTDcJiSvYgFSocPYKic2P7Im8QXDH2TEDVWIzwojxk lanwAYS4Jca+KAnDO81zz4me1GNVZ0MmB7z7GGn+u0YouPCv/+VDt3IgtSEoyfSiv4ES oI8McYlmu+7Q5rFMN17hYIwv8lz4WDyFG4z7MqA/3IeVarch+imIUIiwcr/QwdZFJqv6 dd+Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=07U05LP61qajW1mCV1qXcQXICcjAVDSkBvwRtl3T56g=; b=FQiikwTR4A7DZPqWfL0Jitsrx7PQXSFWORtA+vm1f5zpwlSCiFG/GhELNOOvLS5mRv Q9RM/yleXhv6PfucJWhL7OrYrfRlzyXGjeHqH1wMjl43CeweKyjVoRXNGTlNyaxs80on Vz+1Imu5n/vH2Kr7SNypVmbYszaYwNa+eZTjZCjzU9wa2Syw3xjaMH6CbG97MTBtjBvm ijK8xPswpGaMBfqc4jUQc/mhBunzYlQ1J47gbopZFgx0Y8bouFzYHN+zOLlPtNU7k+Tn FFYQulwdF3I45r2npUWOsYMNbnhxYbCulMU1TOth0T+imlI5itbI1oRmmiErr1K43IGp jndg== X-Gm-Message-State: AFqh2kqZ5Xiz5tNO91lhSUfKNHmLz70gSvRkqYnDVdqBFutLIc1oElsx 33pJ64s9xIrAohBw+PSL4Pg= X-Google-Smtp-Source: AMrXdXvc2yzHSWu2VtDmL+U/RJIWgvymkJP05/rKvZqbzcDQIhnWOItfc+0kLq7FxnPXJ2esXHHLHw== X-Received: by 2002:a05:6a00:b53:b0:574:a541:574a with SMTP id 
p19-20020a056a000b5300b00574a541574amr28185403pfo.0.1674355593767; Sat, 21 Jan 2023 18:46:33 -0800 (PST) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:36:d4:8b9d:a9e7:109b]) by smtp.gmail.com with ESMTPSA id b75-20020a621b4e000000b0058ba53aaa75sm18523094pfb.99.2023.01.21.18.46.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Jan 2023 18:46:33 -0800 (PST) From: Tianyu Lan To: luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, jgross@suse.com, tiala@microsoft.com, kirill@shutemov.name, jiangshan.ljs@antgroup.com, peterz@infradead.org, ashish.kalra@amd.com, srutherford@google.com, akpm@linux-foundation.org, anshuman.khandual@arm.com, pawan.kumar.gupta@linux.intel.com, adrian.hunter@intel.com, daniel.sneddon@linux.intel.com, alexander.shishkin@linux.intel.com, sandipan.das@amd.com, ray.huang@amd.com, brijesh.singh@amd.com, michael.roth@amd.com, thomas.lendacky@amd.com, venu.busireddy@oracle.com, sterritt@google.com, tony.luck@intel.com, samitolvanen@google.com, fenghua.yu@intel.com Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-arch@vger.kernel.org Subject: [RFC PATCH V3 16/16] x86/sev: Fix interrupt exit code paths from #HV exception Date: Sat, 21 Jan 2023 21:46:06 -0500 Message-Id: <20230122024607.788454-17-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230122024607.788454-1-ltykernel@gmail.com> References: <20230122024607.788454-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Ashish Kalra Add checks to the interrupt exit code paths for returns to user mode: if the CPU is currently executing the #HV handler, do not follow the irqentry_exit_to_user_mode path, since that path can cause the #HV handler to be preempted and rescheduled on another CPU. A #HV handler rescheduled on another CPU would handle interrupts on a different CPU than the one they were injected on, causing invalid EOIs, missed or lost guest interrupts and the resulting hangs, and per-CPU IRQs handled on an unintended CPU. 
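In code terms, the interrupt exit paths gain roughly the following check (a condensed restatement of the irqentry_exit_hv_cond() hunk added below; no new names are introduced here):

	if (user_mode(regs) &&
	    this_cpu_read(snp_runtime_data)->hv_handling_events)
		return;				/* stay in the #HV context on this CPU */

	irqentry_exit(regs, state);		/* otherwise, normal interrupt exit path */
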
Signed-off-by: Ashish Kalra --- arch/x86/include/asm/idtentry.h | 66 +++++++++++++++++++++++++++++++++ arch/x86/kernel/sev.c | 30 +++++++++++++++ 2 files changed, 96 insertions(+) diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h index 652fea10d377..45b47132be7c 100644 --- a/arch/x86/include/asm/idtentry.h +++ b/arch/x86/include/asm/idtentry.h @@ -13,6 +13,10 @@ #include +#ifdef CONFIG_AMD_MEM_ENCRYPT +noinstr void irqentry_exit_hv_cond(struct pt_regs *regs, irqentry_state_t state); +#endif + /** * DECLARE_IDTENTRY - Declare functions for simple IDT entry points * No error code pushed by hardware @@ -176,6 +180,7 @@ __visible noinstr void func(struct pt_regs *regs, unsigned long error_code) #define DECLARE_IDTENTRY_IRQ(vector, func) \ DECLARE_IDTENTRY_ERRORCODE(vector, func) +#ifndef CONFIG_AMD_MEM_ENCRYPT /** * DEFINE_IDTENTRY_IRQ - Emit code for device interrupt IDT entry points * @func: Function name of the entry point @@ -205,6 +210,26 @@ __visible noinstr void func(struct pt_regs *regs, \ } \ \ static noinline void __##func(struct pt_regs *regs, u32 vector) +#else + +#define DEFINE_IDTENTRY_IRQ(func) \ +static void __##func(struct pt_regs *regs, u32 vector); \ + \ +__visible noinstr void func(struct pt_regs *regs, \ + unsigned long error_code) \ +{ \ + irqentry_state_t state = irqentry_enter(regs); \ + u32 vector = (u32)(u8)error_code; \ + \ + instrumentation_begin(); \ + kvm_set_cpu_l1tf_flush_l1d(); \ + run_irq_on_irqstack_cond(__##func, regs, vector); \ + instrumentation_end(); \ + irqentry_exit_hv_cond(regs, state); \ +} \ + \ +static noinline void __##func(struct pt_regs *regs, u32 vector) +#endif /** * DECLARE_IDTENTRY_SYSVEC - Declare functions for system vector entry points @@ -221,6 +246,7 @@ static noinline void __##func(struct pt_regs *regs, u32 vector) #define DECLARE_IDTENTRY_SYSVEC(vector, func) \ DECLARE_IDTENTRY(vector, func) +#ifndef CONFIG_AMD_MEM_ENCRYPT /** * DEFINE_IDTENTRY_SYSVEC - Emit code for system vector IDT entry points * @func: Function name of the entry point @@ -245,6 +271,26 @@ __visible noinstr void func(struct pt_regs *regs) \ } \ \ static noinline void __##func(struct pt_regs *regs) +#else + +#define DEFINE_IDTENTRY_SYSVEC(func) \ +static void __##func(struct pt_regs *regs); \ + \ +__visible noinstr void func(struct pt_regs *regs) \ +{ \ + irqentry_state_t state = irqentry_enter(regs); \ + \ + instrumentation_begin(); \ + kvm_set_cpu_l1tf_flush_l1d(); \ + run_sysvec_on_irqstack_cond(__##func, regs); \ + instrumentation_end(); \ + irqentry_exit_hv_cond(regs, state); \ +} \ + \ +static noinline void __##func(struct pt_regs *regs) +#endif + +#ifndef CONFIG_AMD_MEM_ENCRYPT /** * DEFINE_IDTENTRY_SYSVEC_SIMPLE - Emit code for simple system vector IDT @@ -274,6 +320,26 @@ __visible noinstr void func(struct pt_regs *regs) \ } \ \ static __always_inline void __##func(struct pt_regs *regs) +#else + +#define DEFINE_IDTENTRY_SYSVEC_SIMPLE(func) \ +static __always_inline void __##func(struct pt_regs *regs); \ + \ +__visible noinstr void func(struct pt_regs *regs) \ +{ \ + irqentry_state_t state = irqentry_enter(regs); \ + \ + instrumentation_begin(); \ + __irq_enter_raw(); \ + kvm_set_cpu_l1tf_flush_l1d(); \ + __##func(regs); \ + __irq_exit_raw(); \ + instrumentation_end(); \ + irqentry_exit_hv_cond(regs, state); \ +} \ + \ +static __always_inline void __##func(struct pt_regs *regs) +#endif /** * DECLARE_IDTENTRY_XENCB - Declare functions for XEN HV callback entry point diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index 
b1a98c2a52f8..23f15e95838b 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -147,6 +147,10 @@ struct sev_hv_doorbell_page { struct sev_snp_runtime_data { struct sev_hv_doorbell_page hv_doorbell_page; + /* + * Indication that we are currently handling #HV events. + */ + bool hv_handling_events; }; static DEFINE_PER_CPU(struct sev_snp_runtime_data*, snp_runtime_data); @@ -200,6 +204,8 @@ static void do_exc_hv(struct pt_regs *regs) union hv_pending_events pending_events; u8 vector; + this_cpu_read(snp_runtime_data)->hv_handling_events = true; + while (sev_hv_pending()) { pending_events.events = xchg( &sev_snp_current_doorbell_page()->pending_events.events, @@ -234,6 +240,8 @@ static void do_exc_hv(struct pt_regs *regs) common_interrupt(regs, pending_events.vector); } } + + this_cpu_read(snp_runtime_data)->hv_handling_events = false; } static __always_inline bool on_vc_stack(struct pt_regs *regs) @@ -2529,3 +2537,25 @@ static int __init snp_init_platform_device(void) return 0; } device_initcall(snp_init_platform_device); + +noinstr void irqentry_exit_hv_cond(struct pt_regs *regs, irqentry_state_t state) +{ + /* + * If this exception or interrupt returns to user mode while this + * CPU is still executing the #HV handler, do not follow the + * irqentry_exit_to_user_mode path: that path can cause the #HV + * handler to be preempted and rescheduled on another CPU. A #HV + * handler rescheduled on another CPU would handle interrupts on a + * different CPU than the one they were injected on, causing + * invalid EOIs, missed or lost guest interrupts and the resulting + * hangs, and per-CPU IRQs handled on an unintended CPU. + */ + if (user_mode(regs) && + this_cpu_read(snp_runtime_data)->hv_handling_events) + return; + + /* Otherwise follow the normal interrupt return/exit path. */ + irqentry_exit(regs, state); +}
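
For reference, the run-time flow added by patches 14-16 of this series can be condensed as the sketch below. It reuses only names introduced in the patches above (the per-CPU doorbell page, sysvec_table, hv_handling_events); the wrapper name hv_event_flow_sketch() is invented purely for illustration, and the sketch simplifies the real handlers rather than replacing them.

/*
 * Condensed sketch of the #HV event flow:
 *  - the hypervisor posts pending events into the per-CPU doorbell page,
 *  - do_exc_hv() drains them with xchg(),
 *  - system vectors are dispatched through the table built from the
 *    .system_vectors ELF section,
 *  - hv_handling_events keeps the handler from being migrated on exit
 *    (see irqentry_exit_hv_cond()).
 */
static void hv_event_flow_sketch(struct pt_regs *regs)
{
	struct sev_hv_doorbell_page *db = sev_snp_current_doorbell_page();
	union hv_pending_events ev;

	this_cpu_read(snp_runtime_data)->hv_handling_events = true;

	while (db->pending_events.events) {
		/* Atomically claim everything the hypervisor has posted so far. */
		ev.events = xchg(&db->pending_events.events, 0);

		if (ev.nmi)
			exc_nmi(regs);
		if (ev.mc)
			exc_machine_check(regs);	/* only with CONFIG_X86_MCE */
		if (!ev.vector)
			continue;

		if (ev.vector >= FIRST_SYSTEM_VECTOR) {
			/* Dispatch via the table filled by construct_sysvec_table(). */
			if (sysvec_table[ev.vector - FIRST_SYSTEM_VECTOR])
				(*sysvec_table[ev.vector - FIRST_SYSTEM_VECTOR])(regs);
		} else if (ev.vector >= FIRST_EXTERNAL_VECTOR) {
			common_interrupt(regs, ev.vector);
		}
	}

	this_cpu_read(snp_runtime_data)->hv_handling_events = false;
}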