From patchwork Tue Aug 27 21:59:25 2024
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 13780195
From: Tom Lendacky
CC: Paolo Bonzini, Sean Christopherson, Borislav Petkov, Dave Hansen,
    Ingo Molnar, Thomas Gleixner, Michael Roth, Ashish Kalra,
    Joerg Roedel, Roy Hopkins
Subject: [RFC PATCH 1/7] KVM: SVM: Implement GET_AP_APIC_IDS NAE event
Date: Tue, 27 Aug 2024 16:59:25 -0500
X-Mailer: git-send-email 2.43.2
X-Mailing-List: kvm@vger.kernel.org
Implement the GET_APIC_IDS NAE event to gather and return the list of
APIC IDs for all vCPUs in the guest.

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/sev-common.h |  1 +
 arch/x86/include/uapi/asm/svm.h   |  1 +
 arch/x86/kvm/svm/sev.c            | 84 ++++++++++++++++++++++++++++++-
 3 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 98726c2b04f8..d63c861ef91f 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -136,6 +136,7 @@ enum psc_op {
 
 #define GHCB_HV_FT_SNP			BIT_ULL(0)
 #define GHCB_HV_FT_SNP_AP_CREATION	BIT_ULL(1)
+#define GHCB_HV_FT_APIC_ID_LIST		BIT_ULL(4)
 #define GHCB_HV_FT_SNP_MULTI_VMPL	BIT_ULL(5)
 
 /*
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index 1814b413fd57..f8fa3c4c0322 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -115,6 +115,7 @@
 #define SVM_VMGEXIT_AP_CREATE_ON_INIT	0
 #define SVM_VMGEXIT_AP_CREATE		1
 #define SVM_VMGEXIT_AP_DESTROY		2
+#define SVM_VMGEXIT_GET_APIC_IDS	0x80000017
 #define SVM_VMGEXIT_SNP_RUN_VMPL	0x80000018
 #define SVM_VMGEXIT_HV_FEATURES		0x8000fffd
 #define SVM_VMGEXIT_TERM_REQUEST	0x8000fffe
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 532df12b43c5..199bdc7c7db1 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -39,7 +39,9 @@
 #define GHCB_VERSION_DEFAULT	2ULL
 #define GHCB_VERSION_MIN	1ULL
 
-#define GHCB_HV_FT_SUPPORTED	(GHCB_HV_FT_SNP | GHCB_HV_FT_SNP_AP_CREATION)
+#define GHCB_HV_FT_SUPPORTED	(GHCB_HV_FT_SNP |		\
+				 GHCB_HV_FT_SNP_AP_CREATION |	\
+				 GHCB_HV_FT_APIC_ID_LIST)
 
 /* enable/disable SEV support */
 static bool sev_enabled = true;
@@ -3390,6 +3392,10 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
 		if (!kvm_ghcb_rax_is_valid(svm))
 			goto vmgexit_err;
 		break;
+	case SVM_VMGEXIT_GET_APIC_IDS:
+		if (!kvm_ghcb_rax_is_valid(svm))
+			goto vmgexit_err;
+		break;
 	case SVM_VMGEXIT_NMI_COMPLETE:
 	case SVM_VMGEXIT_AP_HLT_LOOP:
 	case SVM_VMGEXIT_AP_JUMP_TABLE:
@@ -4124,6 +4130,77 @@ static int snp_handle_ext_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t r
 	return 1; /* resume guest */
 }
 
+struct sev_apic_id_desc {
+	u32 num_entries;
+	u32 apic_ids[];
+};
+
+static void sev_get_apic_ids(struct vcpu_svm *svm)
+{
+	struct ghcb *ghcb = svm->sev_es.ghcb;
+	struct kvm_vcpu *vcpu = &svm->vcpu, *loop_vcpu;
+	struct kvm *kvm = vcpu->kvm;
+	unsigned int id_desc_size;
+	struct sev_apic_id_desc *desc;
+	kvm_pfn_t pfn;
+	gpa_t gpa;
+	u64 pages;
+	unsigned long i;
+	int n;
+
+	pages = vcpu->arch.regs[VCPU_REGS_RAX];
+
+	/* Each APIC ID is 32-bits in size, so make sure there is room */
+	n = atomic_read(&kvm->online_vcpus);
+	/*TODO: is this possible? */
+	if (n < 0)
+		return;
+
+	id_desc_size = sizeof(*desc);
+	id_desc_size += n * sizeof(desc->apic_ids[0]);
+	if (id_desc_size > (pages * PAGE_SIZE)) {
+		vcpu->arch.regs[VCPU_REGS_RAX] = PFN_UP(id_desc_size);
+		return;
+	}
+
+	gpa = svm->vmcb->control.exit_info_1;
+
+	ghcb_set_sw_exit_info_1(ghcb, 2);
+	ghcb_set_sw_exit_info_2(ghcb, 5);
+
+	if (!page_address_valid(vcpu, gpa))
+		return;
+
+	pfn = gfn_to_pfn(kvm, gpa_to_gfn(gpa));
+	if (is_error_noslot_pfn(pfn))
+		return;
+
+	if (!pages)
+		return;
+
+	/* Allocate a buffer to hold the APIC IDs */
+	desc = kvzalloc(id_desc_size, GFP_KERNEL_ACCOUNT);
+	if (!desc)
+		return;
+
+	desc->num_entries = n;
+	kvm_for_each_vcpu(i, loop_vcpu, kvm) {
+		/*TODO: is this possible? */
+		if (i > n)
+			break;
+
+		desc->apic_ids[i] = loop_vcpu->vcpu_id;
+	}
+
+	if (!kvm_write_guest(kvm, gpa, desc, id_desc_size)) {
+		/* IDs were successfully written */
+		ghcb_set_sw_exit_info_1(ghcb, 0);
+		ghcb_set_sw_exit_info_2(ghcb, 0);
+	}
+
+	kvfree(desc);
+}
+
 static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
@@ -4404,6 +4481,11 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
 	case SVM_VMGEXIT_EXT_GUEST_REQUEST:
 		ret = snp_handle_ext_guest_req(svm, control->exit_info_1, control->exit_info_2);
 		break;
+	case SVM_VMGEXIT_GET_APIC_IDS:
+		sev_get_apic_ids(svm);
+
+		ret = 1;
+		break;
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		vcpu_unimpl(vcpu,
 			    "vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
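The buffer-sizing rule in sev_get_apic_ids() above (a 32-bit entry count followed by one 32-bit APIC ID per vCPU, with the required page count reported back in RAX via PFN_UP() when the guest's buffer is too small) can be illustrated with a minimal user-space sketch. The PAGE_SIZE value and the helper name apic_id_desc_pages() are assumptions for illustration, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096u	/* assumed x86 base page size */

/* Mirrors struct sev_apic_id_desc from the patch: a 32-bit entry count
 * followed by a flexible array of 32-bit APIC IDs. */
struct sev_apic_id_desc {
	uint32_t num_entries;
	uint32_t apic_ids[];
};

/* Hypothetical helper: the page count the hypervisor would report in
 * RAX when the supplied buffer is too small, i.e. PFN_UP(id_desc_size)
 * in sev_get_apic_ids(). */
static uint64_t apic_id_desc_pages(unsigned int num_vcpus)
{
	size_t id_desc_size = sizeof(struct sev_apic_id_desc) +
			      num_vcpus * sizeof(uint32_t);

	/* Round the byte count up to whole pages (PFN_UP) */
	return (id_desc_size + PAGE_SIZE - 1) / PAGE_SIZE;
}
```

With 4-byte entries, up to 1023 vCPU APIC IDs fit in a single 4 KiB page alongside the count field, so a one-page buffer suffices for most guests.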
From patchwork Tue Aug 27 21:59:26 2024
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 13780197
From: Tom Lendacky
CC: Paolo Bonzini, Sean Christopherson, Borislav Petkov, Dave Hansen,
    Ingo Molnar, Thomas Gleixner, Michael Roth, Ashish Kalra,
    Joerg Roedel, Roy Hopkins
Subject: [RFC PATCH 2/7] KVM: SEV: Allow for VMPL level specification in AP create
Date: Tue, 27 Aug 2024 16:59:26 -0500
X-Mailer: git-send-email 2.43.2
X-Mailing-List: kvm@vger.kernel.org

Update AP creation to support ADD/DESTROY of VMSAs at levels other than
VMPL0 in order to run under an SVSM at VMPL1 or lower. To maintain
backwards compatibility, the VMPL is specified in bits 16 to 19 of the
AP Creation request in SW_EXITINFO1 of the GHCB.

In order to track the VMSAs at different levels, create arrays for the
VMSAs, GHCBs, registered GHCBs and others. When switching VMPL levels,
these entries will be used to set the VMSA and GHCB physical addresses
in the VMCB for the VMPL level.
In order to ensure that the proper responses are returned in the proper
GHCB, the GHCB must be unmapped at the current level and saved for
restoration later when switching back to that VMPL level.

Additional checks are applied to prevent a non-VMPL0 vCPU from being
able to perform an AP creation request at VMPL0. Additionally, a vCPU
cannot replace its own VMSA.

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/svm.h      |   9 ++
 arch/x86/include/uapi/asm/svm.h |   2 +
 arch/x86/kvm/svm/sev.c          | 146 +++++++++++++++++++++++---------
 arch/x86/kvm/svm/svm.c          |   6 +-
 arch/x86/kvm/svm/svm.h          |  45 ++++++++--
 arch/x86/kvm/x86.c              |   9 ++
 6 files changed, 169 insertions(+), 48 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index f0dea3750ca9..26339d94c00f 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -294,6 +294,15 @@ static_assert((X2AVIC_MAX_PHYSICAL_ID & AVIC_PHYSICAL_MAX_INDEX_MASK) == X2AVIC_
 	(SVM_SEV_FEAT_RESTRICTED_INJECTION |	\
 	 SVM_SEV_FEAT_ALTERNATE_INJECTION)
 
+enum {
+	SVM_SEV_VMPL0 = 0,
+	SVM_SEV_VMPL1,
+	SVM_SEV_VMPL2,
+	SVM_SEV_VMPL3,
+
+	SVM_SEV_VMPL_MAX
+};
+
 struct vmcb_seg {
 	u16 selector;
 	u16 attrib;
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index f8fa3c4c0322..4a963dd12bb4 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -115,6 +115,8 @@
 #define SVM_VMGEXIT_AP_CREATE_ON_INIT	0
 #define SVM_VMGEXIT_AP_CREATE		1
 #define SVM_VMGEXIT_AP_DESTROY		2
+#define SVM_VMGEXIT_AP_VMPL_MASK	GENMASK(19, 16)
+#define SVM_VMGEXIT_AP_VMPL_SHIFT	16
 #define SVM_VMGEXIT_GET_APIC_IDS	0x80000017
 #define SVM_VMGEXIT_SNP_RUN_VMPL	0x80000018
 #define SVM_VMGEXIT_HV_FEATURES		0x8000fffd
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 199bdc7c7db1..c22b6f51ec81 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -807,7 +807,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 {
 	struct kvm_vcpu *vcpu = &svm->vcpu;
 	struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
-	struct sev_es_save_area *save = svm->sev_es.vmsa;
+	struct sev_es_save_area *save = vmpl_vmsa(svm, SVM_SEV_VMPL0);
 	struct xregs_state *xsave;
 	const u8 *s;
 	u8 *d;
@@ -920,11 +920,11 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
 	 * the VMSA memory content (i.e it will write the same memory region
 	 * with the guest's key), so invalidate it first.
 	 */
-	clflush_cache_range(svm->sev_es.vmsa, PAGE_SIZE);
+	clflush_cache_range(vmpl_vmsa(svm, SVM_SEV_VMPL0), PAGE_SIZE);
 
 	vmsa.reserved = 0;
 	vmsa.handle = to_kvm_sev_info(kvm)->handle;
-	vmsa.address = __sme_pa(svm->sev_es.vmsa);
+	vmsa.address = __sme_pa(vmpl_vmsa(svm, SVM_SEV_VMPL0));
 	vmsa.len = PAGE_SIZE;
 	ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
 	if (ret)
@@ -2452,7 +2452,7 @@ static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		struct vcpu_svm *svm = to_svm(vcpu);
-		u64 pfn = __pa(svm->sev_es.vmsa) >> PAGE_SHIFT;
+		u64 pfn = __pa(vmpl_vmsa(svm, SVM_SEV_VMPL0)) >> PAGE_SHIFT;
 
 		ret = sev_es_sync_vmsa(svm);
 		if (ret)
@@ -2464,7 +2464,7 @@ static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
 			return ret;
 
 		/* Issue the SNP command to encrypt the VMSA */
-		data.address = __sme_pa(svm->sev_es.vmsa);
+		data.address = __sme_pa(vmpl_vmsa(svm, SVM_SEV_VMPL0));
 		ret = __sev_issue_cmd(argp->sev_fd, SEV_CMD_SNP_LAUNCH_UPDATE,
 				      &data, &argp->error);
 		if (ret) {
@@ -3178,16 +3178,16 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
 	 * releasing it back to the system.
 	 */
 	if (sev_snp_guest(vcpu->kvm)) {
-		u64 pfn = __pa(svm->sev_es.vmsa) >> PAGE_SHIFT;
+		u64 pfn = __pa(vmpl_vmsa(svm, SVM_SEV_VMPL0)) >> PAGE_SHIFT;
 
 		if (kvm_rmp_make_shared(vcpu->kvm, pfn, PG_LEVEL_4K))
 			goto skip_vmsa_free;
 	}
 
 	if (vcpu->arch.guest_state_protected)
-		sev_flush_encrypted_page(vcpu, svm->sev_es.vmsa);
+		sev_flush_encrypted_page(vcpu, vmpl_vmsa(svm, SVM_SEV_VMPL0));
 
-	__free_page(virt_to_page(svm->sev_es.vmsa));
+	__free_page(virt_to_page(vmpl_vmsa(svm, SVM_SEV_VMPL0)));
 
 skip_vmsa_free:
 	if (svm->sev_es.ghcb_sa_free)
@@ -3385,13 +3385,19 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
 		if (!kvm_ghcb_sw_scratch_is_valid(svm))
 			goto vmgexit_err;
 		break;
-	case SVM_VMGEXIT_AP_CREATION:
+	case SVM_VMGEXIT_AP_CREATION: {
+		unsigned int request;
+
 		if (!sev_snp_guest(vcpu->kvm))
 			goto vmgexit_err;
-		if (lower_32_bits(control->exit_info_1) != SVM_VMGEXIT_AP_DESTROY)
+
+		request = lower_32_bits(control->exit_info_1);
+		request &= ~SVM_VMGEXIT_AP_VMPL_MASK;
+		if (request != SVM_VMGEXIT_AP_DESTROY)
 			if (!kvm_ghcb_rax_is_valid(svm))
 				goto vmgexit_err;
 		break;
+	}
 	case SVM_VMGEXIT_GET_APIC_IDS:
 		if (!kvm_ghcb_rax_is_valid(svm))
 			goto vmgexit_err;
@@ -3850,9 +3856,10 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
 
 	/* Clear use of the VMSA */
 	svm->vmcb->control.vmsa_pa = INVALID_PAGE;
+	tgt_vmpl_vmsa_hpa(svm) = INVALID_PAGE;
 
-	if (VALID_PAGE(svm->sev_es.snp_vmsa_gpa)) {
-		gfn_t gfn = gpa_to_gfn(svm->sev_es.snp_vmsa_gpa);
+	if (VALID_PAGE(tgt_vmpl_vmsa_gpa(svm))) {
+		gfn_t gfn = gpa_to_gfn(tgt_vmpl_vmsa_gpa(svm));
 		struct kvm_memory_slot *slot;
 		kvm_pfn_t pfn;
 
@@ -3870,32 +3877,54 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
 		/*
 		 * From this point forward, the VMSA will always be a
 		 * guest-mapped page rather than the initial one allocated
-		 * by KVM in svm->sev_es.vmsa. In theory, svm->sev_es.vmsa
-		 * could be free'd and cleaned up here, but that involves
-		 * cleanups like wbinvd_on_all_cpus() which would ideally
-		 * be handled during teardown rather than guest boot.
-		 * Deferring that also allows the existing logic for SEV-ES
-		 * VMSAs to be re-used with minimal SNP-specific changes.
+		 * by KVM in svm->sev_es.vmsa_info[vmpl].vmsa. In theory,
+		 * svm->sev_es.vmsa_info[vmpl].vmsa could be free'd and cleaned
+		 * up here, but that involves cleanups like wbinvd_on_all_cpus()
+		 * which would ideally be handled during teardown rather than
+		 * guest boot. Deferring that also allows the existing logic for
+		 * SEV-ES VMSAs to be re-used with minimal SNP-specific changes.
 		 */
-		svm->sev_es.snp_has_guest_vmsa = true;
+		tgt_vmpl_has_guest_vmsa(svm) = true;
 
 		/* Use the new VMSA */
 		svm->vmcb->control.vmsa_pa = pfn_to_hpa(pfn);
+		tgt_vmpl_vmsa_hpa(svm) = pfn_to_hpa(pfn);
+
+		/*
+		 * Since the vCPU may not have gone through the LAUNCH_UPDATE_VMSA path,
+		 * be sure to mark the guest state as protected and enable LBR virtualization.
+		 */
+		vcpu->arch.guest_state_protected = true;
+		svm_enable_lbrv(vcpu);
 
 		/* Mark the vCPU as runnable */
 		vcpu->arch.pv.pv_unhalted = false;
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
 
-		svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
+		tgt_vmpl_vmsa_gpa(svm) = INVALID_PAGE;
 
 		/*
 		 * gmem pages aren't currently migratable, but if this ever
 		 * changes then care should be taken to ensure
-		 * svm->sev_es.vmsa is pinned through some other means.
+		 * svm->sev_es.vmsa_info[vmpl].vmsa is pinned through some other
+		 * means.
 		 */
 		kvm_release_pfn_clean(pfn);
 	}
 
+	if (cur_vmpl(svm) != tgt_vmpl(svm)) {
+		/* Unmap the current GHCB */
+		sev_es_unmap_ghcb(svm);
+
+		/* Save the GHCB GPA of the current VMPL */
+		svm->sev_es.ghcb_gpa[cur_vmpl(svm)] = svm->vmcb->control.ghcb_gpa;
+
+		/* Set the GHCB_GPA for the target VMPL and make it the current VMPL */
+		svm->vmcb->control.ghcb_gpa = svm->sev_es.ghcb_gpa[tgt_vmpl(svm)];
+
+		cur_vmpl(svm) = tgt_vmpl(svm);
+	}
+
 	/*
 	 * When replacing the VMSA during SEV-SNP AP creation,
 	 * mark the VMCB dirty so that full state is always reloaded.
@@ -3918,10 +3947,10 @@ void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu)
 
 	mutex_lock(&svm->sev_es.snp_vmsa_mutex);
 
-	if (!svm->sev_es.snp_ap_waiting_for_reset)
+	if (!tgt_vmpl_ap_waiting_for_reset(svm))
 		goto unlock;
 
-	svm->sev_es.snp_ap_waiting_for_reset = false;
+	tgt_vmpl_ap_waiting_for_reset(svm) = false;
 
 	ret = __sev_snp_update_protected_guest_state(vcpu);
 	if (ret)
@@ -3939,12 +3968,24 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
 	struct vcpu_svm *target_svm;
 	unsigned int request;
 	unsigned int apic_id;
+	unsigned int vmpl;
 	bool kick;
 	int ret;
 
 	request = lower_32_bits(svm->vmcb->control.exit_info_1);
 	apic_id = upper_32_bits(svm->vmcb->control.exit_info_1);
 
+	vmpl = (request & SVM_VMGEXIT_AP_VMPL_MASK) >> SVM_VMGEXIT_AP_VMPL_SHIFT;
+	request &= ~SVM_VMGEXIT_AP_VMPL_MASK;
+
+	/* Validate the requested VMPL level */
+	if (vmpl >= SVM_SEV_VMPL_MAX) {
+		vcpu_unimpl(vcpu, "vmgexit: invalid VMPL level [%u] from guest\n",
+			    vmpl);
+		return -EINVAL;
+	}
+	vmpl = array_index_nospec(vmpl, SVM_SEV_VMPL_MAX);
+
 	/* Validate the APIC ID */
 	target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, apic_id);
 	if (!target_vcpu) {
@@ -3966,13 +4007,22 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
 
 	mutex_lock(&target_svm->sev_es.snp_vmsa_mutex);
 
-	target_svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
-	target_svm->sev_es.snp_ap_waiting_for_reset = true;
+	vmpl_vmsa_gpa(target_svm, vmpl) = INVALID_PAGE;
+	vmpl_ap_waiting_for_reset(target_svm, vmpl) = true;
 
-	/* Interrupt injection mode shouldn't change for AP creation */
+	/* VMPL0 can only be replaced by another vCPU running VMPL0 */
+	if (vmpl == SVM_SEV_VMPL0 &&
+	    (vcpu == target_vcpu ||
+	     vmpl_vmsa_hpa(svm, SVM_SEV_VMPL0) != svm->vmcb->control.vmsa_pa)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	/* Perform common AP creation validation */
 	if (request < SVM_VMGEXIT_AP_DESTROY) {
 		u64 sev_features;
 
+		/* Interrupt injection mode shouldn't change for AP creation */
 		sev_features = vcpu->arch.regs[VCPU_REGS_RAX];
 		sev_features ^= sev->vmsa_features;
 
@@ -3982,13 +4032,8 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
 			ret = -EINVAL;
 			goto out;
 		}
-	}
 
-	switch (request) {
-	case SVM_VMGEXIT_AP_CREATE_ON_INIT:
-		kick = false;
-		fallthrough;
-	case SVM_VMGEXIT_AP_CREATE:
+		/* Validate the input VMSA page */
 		if (!page_address_valid(vcpu, svm->vmcb->control.exit_info_2)) {
 			vcpu_unimpl(vcpu, "vmgexit: invalid AP VMSA address [%#llx] from guest\n",
 				    svm->vmcb->control.exit_info_2);
@@ -4010,8 +4055,17 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
 			ret = -EINVAL;
 			goto out;
 		}
+	}
 
-		target_svm->sev_es.snp_vmsa_gpa = svm->vmcb->control.exit_info_2;
+	switch (request) {
+	case SVM_VMGEXIT_AP_CREATE_ON_INIT:
+		/* Delay switching to the new VMSA */
+		kick = false;
+		fallthrough;
+	case SVM_VMGEXIT_AP_CREATE:
+		/* Switch to new VMSA on the next VMRUN */
+		target_svm->sev_es.snp_target_vmpl = vmpl;
+		vmpl_vmsa_gpa(target_svm, vmpl) = svm->vmcb->control.exit_info_2 & PAGE_MASK;
 		break;
 	case SVM_VMGEXIT_AP_DESTROY:
 		break;
@@ -4298,7 +4352,7 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
 		gfn = get_ghcb_msr_bits(svm, GHCB_MSR_GPA_VALUE_MASK,
 					GHCB_MSR_GPA_VALUE_POS);
 
-		svm->sev_es.ghcb_registered_gpa = gfn_to_gpa(gfn);
+		svm->sev_es.ghcb_registered_gpa[cur_vmpl(svm)] = gfn_to_gpa(gfn);
 
 		set_ghcb_msr_bits(svm, gfn, GHCB_MSR_GPA_VALUE_MASK,
 				  GHCB_MSR_GPA_VALUE_POS);
@@ -4579,8 +4633,8 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
 	 * the VMSA will be NULL if this vCPU is the destination for intrahost
 	 * migration, and will be copied later.
 	 */
-	if (svm->sev_es.vmsa && !svm->sev_es.snp_has_guest_vmsa)
-		svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa);
+	if (cur_vmpl_vmsa(svm) && !cur_vmpl_has_guest_vmsa(svm))
+		svm->vmcb->control.vmsa_pa = __pa(cur_vmpl_vmsa(svm));
 
 	/* Can't intercept CR register access, HV can't modify CR registers */
 	svm_clr_intercept(svm, INTERCEPT_CR0_READ);
@@ -4643,16 +4697,30 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm)
 {
 	struct kvm_vcpu *vcpu = &svm->vcpu;
 	struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
+	unsigned int i;
+	u64 sev_info;
 
 	/*
 	 * Set the GHCB MSR value as per the GHCB specification when emulating
 	 * vCPU RESET for an SEV-ES guest.
 	 */
-	set_ghcb_msr(svm, GHCB_MSR_SEV_INFO((__u64)sev->ghcb_version,
-					    GHCB_VERSION_MIN,
-					    sev_enc_bit));
+	sev_info = GHCB_MSR_SEV_INFO((__u64)sev->ghcb_version, GHCB_VERSION_MIN,
+				     sev_enc_bit);
+	set_ghcb_msr(svm, sev_info);
+	svm->sev_es.ghcb_gpa[SVM_SEV_VMPL0] = sev_info;
 
 	mutex_init(&svm->sev_es.snp_vmsa_mutex);
+
+	/*
+	 * When not running under SNP, the "current VMPL" tracking for a guest
+	 * is always 0 and the base tracking of GPAs and SPAs will be as before
+	 * multiple VMPL support. However, under SNP, multiple VMPL levels can
+	 * be run, so initialize these values appropriately.
+	 */
+	for (i = 1; i < SVM_SEV_VMPL_MAX; i++) {
+		svm->sev_es.vmsa_info[i].hpa = INVALID_PAGE;
+		svm->sev_es.ghcb_gpa[i] = sev_info;
+	}
 }
 
 void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d6f252555ab3..ca4bc53fb14a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1463,8 +1463,10 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
 	svm->vmcb01.pa = __sme_set(page_to_pfn(vmcb01_page) << PAGE_SHIFT);
 	svm_switch_vmcb(svm, &svm->vmcb01);
 
-	if (vmsa_page)
-		svm->sev_es.vmsa = page_address(vmsa_page);
+	if (vmsa_page) {
+		vmpl_vmsa(svm, SVM_SEV_VMPL0) = page_address(vmsa_page);
+		vmpl_vmsa_hpa(svm, SVM_SEV_VMPL0) = __pa(page_address(vmsa_page));
+	}
 
 	svm->guest_state_loaded = false;
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 76107c7d0595..45a37d16b6f7 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -198,9 +198,39 @@ struct svm_nested_state {
 	bool force_msr_bitmap_recalc;
 };
 
-struct vcpu_sev_es_state {
-	/* SEV-ES support */
+#define vmpl_vmsa(s, v)			((s)->sev_es.vmsa_info[(v)].vmsa)
+#define vmpl_vmsa_gpa(s, v)		((s)->sev_es.vmsa_info[(v)].gpa)
+#define vmpl_vmsa_hpa(s, v)		((s)->sev_es.vmsa_info[(v)].hpa)
+#define vmpl_ap_waiting_for_reset(s, v)	((s)->sev_es.vmsa_info[(v)].ap_waiting_for_reset)
+#define vmpl_has_guest_vmsa(s, v)	((s)->sev_es.vmsa_info[(v)].has_guest_vmsa)
+
+#define cur_vmpl(s)			((s)->sev_es.snp_current_vmpl)
+#define cur_vmpl_vmsa(s)		vmpl_vmsa((s), cur_vmpl(s))
+#define cur_vmpl_vmsa_gpa(s)		vmpl_vmsa_gpa((s), cur_vmpl(s))
+#define cur_vmpl_vmsa_hpa(s)		vmpl_vmsa_hpa((s), cur_vmpl(s))
+#define cur_vmpl_ap_waiting_for_reset(s)	vmpl_ap_waiting_for_reset((s), cur_vmpl(s))
+#define cur_vmpl_has_guest_vmsa(s)	vmpl_has_guest_vmsa((s), cur_vmpl(s))
+
+#define tgt_vmpl(s)			((s)->sev_es.snp_target_vmpl)
+#define tgt_vmpl_vmsa(s)		vmpl_vmsa((s), tgt_vmpl(s))
+#define tgt_vmpl_vmsa_gpa(s)
vmpl_vmsa_gpa((s), tgt_vmpl(s)) +#define tgt_vmpl_vmsa_hpa(s) vmpl_vmsa_hpa((s), tgt_vmpl(s)) +#define tgt_vmpl_ap_waiting_for_reset(s) vmpl_ap_waiting_for_reset((s), tgt_vmpl(s)) +#define tgt_vmpl_has_guest_vmsa(s) vmpl_has_guest_vmsa((s), tgt_vmpl(s)) + +struct sev_vmsa_info { + /* SEV-ES and SEV-SNP */ struct sev_es_save_area *vmsa; + + /* SEV-SNP for multi VMPL support */ + gpa_t gpa; + hpa_t hpa; + bool ap_waiting_for_reset; + bool has_guest_vmsa; +}; + +struct vcpu_sev_es_state { + /* SEV-ES/SEV-SNP support */ struct ghcb *ghcb; u8 valid_bitmap[16]; struct kvm_host_map ghcb_map; @@ -219,12 +249,13 @@ struct vcpu_sev_es_state { u16 psc_inflight; bool psc_2m; - u64 ghcb_registered_gpa; + gpa_t ghcb_gpa[SVM_SEV_VMPL_MAX]; + u64 ghcb_registered_gpa[SVM_SEV_VMPL_MAX]; + struct sev_vmsa_info vmsa_info[SVM_SEV_VMPL_MAX]; struct mutex snp_vmsa_mutex; /* Used to handle concurrent updates of VMSA. */ - gpa_t snp_vmsa_gpa; - bool snp_ap_waiting_for_reset; - bool snp_has_guest_vmsa; + unsigned int snp_current_vmpl; + unsigned int snp_target_vmpl; }; struct vcpu_svm { @@ -380,7 +411,7 @@ static __always_inline bool sev_snp_guest(struct kvm *kvm) static inline bool ghcb_gpa_is_registered(struct vcpu_svm *svm, u64 val) { - return svm->sev_es.ghcb_registered_gpa == val; + return svm->sev_es.ghcb_registered_gpa[cur_vmpl(svm)] == val; } static inline void vmcb_mark_all_dirty(struct vmcb *vmcb) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index ef3d3511e4af..3efc3a89499c 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -11469,6 +11469,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) kvm_vcpu_block(vcpu); kvm_vcpu_srcu_read_lock(vcpu); + /* + * It is possible that the vCPU has never run before. If the + * request is to update the protected guest state (AP Create), + * then ensure that the vCPU can now run. 
+ */ + if (kvm_test_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, vcpu) && + vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED) + vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE; + if (kvm_apic_accept_events(vcpu) < 0) { r = 0; goto out;
From patchwork Tue Aug 27 21:59:27 2024 X-Patchwork-Submitter: Tom Lendacky X-Patchwork-Id: 13780196
From: Tom Lendacky To: , , , CC: Paolo Bonzini , Sean Christopherson , Borislav Petkov , Dave Hansen , Ingo Molnar , "Thomas Gleixner" , Michael Roth , "Ashish Kalra" , Joerg Roedel , Roy Hopkins Subject: [RFC PATCH 3/7] KVM: SVM: Invoke a specified VMPL level VMSA for the vCPU Date: Tue, 27 Aug 2024 16:59:27 -0500 Message-ID: <840c1337a42525b755661cc6de83c4b4e0c2d152.1724795971.git.thomas.lendacky@amd.com> X-Mailing-List: kvm@vger.kernel.org MIME-Version: 1.0
Implement the SNP Run VMPL NAE event and MSR protocol to allow a guest to request that a different VMPL-level VMSA be run for the vCPU. This allows the guest to "call" an SVSM to process an SVSM request.
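Not part of the patch, but as a rough freestanding sketch of the MSR-protocol encoding implied by the GHCB_MSR_VMPL_* definitions added below: the request places the target VMPL in GHCBData[39:32] over info code 0x016, and the response carries a 32-bit error code in bits 63:32 over info code 0x017. The GHCB_MSR_INFO_MASK value (12-bit GHCBInfo field) is an assumption taken from the GHCB MSR protocol layout, not from this hunk.

```c
#include <stdint.h>

/* Constants mirroring the GHCB_MSR_VMPL_* additions in sev-common.h */
#define GHCB_MSR_INFO_MASK       0xfffULL  /* assumed: GHCBInfo lives in bits 11:0 */
#define GHCB_MSR_VMPL_REQ        0x016ULL
#define GHCB_MSR_VMPL_RESP       0x017ULL
#define GHCB_MSR_VMPL_LEVEL_POS  32        /* request: VMPL in GHCBData[39:32] */
#define GHCB_MSR_VMPL_ERROR_POS  32        /* response: error code in bits 63:32 */

/* Build the Run-VMPL request value a guest would write to the GHCB MSR. */
uint64_t ghcb_msr_vmpl_req(uint8_t vmpl)
{
	return ((uint64_t)vmpl << GHCB_MSR_VMPL_LEVEL_POS) | GHCB_MSR_VMPL_REQ;
}

/*
 * Decode the hypervisor's response: returns -1 if the MSR does not hold a
 * Run-VMPL response at all, otherwise the 32-bit error code (0 on success).
 */
int64_t ghcb_msr_vmpl_resp_err(uint64_t msr)
{
	if ((msr & GHCB_MSR_INFO_MASK) != GHCB_MSR_VMPL_RESP)
		return -1;
	return (int64_t)(msr >> GHCB_MSR_VMPL_ERROR_POS);
}
```

In the patch itself, the hypervisor side additionally bounds-checks the requested level against SVM_SEV_VMPL_MAX (via array_index_nospec) before switching VMSAs.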
Signed-off-by: Tom Lendacky --- arch/x86/include/asm/sev-common.h | 6 ++ arch/x86/kvm/svm/sev.c | 126 +++++++++++++++++++++++++++++- arch/x86/kvm/svm/svm.c | 13 +++ arch/x86/kvm/svm/svm.h | 18 ++++- 4 files changed, 158 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h index d63c861ef91f..6f7134aada83 100644 --- a/arch/x86/include/asm/sev-common.h +++ b/arch/x86/include/asm/sev-common.h @@ -114,6 +114,8 @@ enum psc_op { /* GHCB Run at VMPL Request/Response */ #define GHCB_MSR_VMPL_REQ 0x016 +#define GHCB_MSR_VMPL_LEVEL_POS 32 +#define GHCB_MSR_VMPL_LEVEL_MASK GENMASK_ULL(7, 0) #define GHCB_MSR_VMPL_REQ_LEVEL(v) \ /* GHCBData[39:32] */ \ (((u64)(v) & GENMASK_ULL(7, 0) << 32) | \ @@ -121,6 +123,10 @@ enum psc_op { GHCB_MSR_VMPL_REQ) #define GHCB_MSR_VMPL_RESP 0x017 +#define GHCB_MSR_VMPL_ERROR_POS 32 +#define GHCB_MSR_VMPL_ERROR_MASK GENMASK_ULL(31, 0) +#define GHCB_MSR_VMPL_RSVD_POS 12 +#define GHCB_MSR_VMPL_RSVD_MASK GENMASK_ULL(19, 0) #define GHCB_MSR_VMPL_RESP_VAL(v) \ /* GHCBData[63:32] */ \ (((u64)(v) & GENMASK_ULL(63, 32)) >> 32) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index c22b6f51ec81..e0f5122061e6 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -3421,6 +3421,10 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm) control->exit_info_1 == control->exit_info_2) goto vmgexit_err; break; + case SVM_VMGEXIT_SNP_RUN_VMPL: + if (!sev_snp_guest(vcpu->kvm)) + goto vmgexit_err; + break; default: reason = GHCB_ERR_INVALID_EVENT; goto vmgexit_err; @@ -3935,21 +3939,25 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu) } /* - * Invoked as part of svm_vcpu_reset() processing of an init event. + * Invoked as part of svm_vcpu_reset() processing of an init event + * or as part of switching to a new VMPL. 
*/ -void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu) +bool sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu) { struct vcpu_svm *svm = to_svm(vcpu); + bool init = false; int ret; if (!sev_snp_guest(vcpu->kvm)) - return; + return false; mutex_lock(&svm->sev_es.snp_vmsa_mutex); if (!tgt_vmpl_ap_waiting_for_reset(svm)) goto unlock; + init = true; + tgt_vmpl_ap_waiting_for_reset(svm) = false; ret = __sev_snp_update_protected_guest_state(vcpu); @@ -3958,6 +3966,8 @@ void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu) unlock: mutex_unlock(&svm->sev_es.snp_vmsa_mutex); + + return init; } static int sev_snp_ap_creation(struct vcpu_svm *svm) @@ -4255,6 +4265,92 @@ static void sev_get_apic_ids(struct vcpu_svm *svm) kvfree(desc); } +static int __sev_run_vmpl_vmsa(struct vcpu_svm *svm, unsigned int new_vmpl) +{ + struct kvm_vcpu *vcpu = &svm->vcpu; + struct vmpl_switch_sa *old_vmpl_sa; + struct vmpl_switch_sa *new_vmpl_sa; + unsigned int old_vmpl; + + if (new_vmpl >= SVM_SEV_VMPL_MAX) + return -EINVAL; + new_vmpl = array_index_nospec(new_vmpl, SVM_SEV_VMPL_MAX); + + old_vmpl = svm->sev_es.snp_current_vmpl; + svm->sev_es.snp_target_vmpl = new_vmpl; + + if (svm->sev_es.snp_target_vmpl == svm->sev_es.snp_current_vmpl || + sev_snp_init_protected_guest_state(vcpu)) + return 0; + + /* If the VMSA is not valid, return an error */ + if (!VALID_PAGE(vmpl_vmsa_hpa(svm, new_vmpl))) + return -EINVAL; + + /* Unmap the current GHCB */ + sev_es_unmap_ghcb(svm); + + /* Save some current VMCB values */ + svm->sev_es.ghcb_gpa[old_vmpl] = svm->vmcb->control.ghcb_gpa; + + old_vmpl_sa = &svm->sev_es.vssa[old_vmpl]; + old_vmpl_sa->int_state = svm->vmcb->control.int_state; + old_vmpl_sa->exit_int_info = svm->vmcb->control.exit_int_info; + old_vmpl_sa->exit_int_info_err = svm->vmcb->control.exit_int_info_err; + old_vmpl_sa->cr0 = vcpu->arch.cr0; + old_vmpl_sa->cr2 = vcpu->arch.cr2; + old_vmpl_sa->cr4 = vcpu->arch.cr4; + old_vmpl_sa->cr8 = vcpu->arch.cr8; + 
old_vmpl_sa->efer = vcpu->arch.efer; + + /* Restore some previous VMCB values */ + svm->vmcb->control.vmsa_pa = vmpl_vmsa_hpa(svm, new_vmpl); + svm->vmcb->control.ghcb_gpa = svm->sev_es.ghcb_gpa[new_vmpl]; + + new_vmpl_sa = &svm->sev_es.vssa[new_vmpl]; + svm->vmcb->control.int_state = new_vmpl_sa->int_state; + svm->vmcb->control.exit_int_info = new_vmpl_sa->exit_int_info; + svm->vmcb->control.exit_int_info_err = new_vmpl_sa->exit_int_info_err; + vcpu->arch.cr0 = new_vmpl_sa->cr0; + vcpu->arch.cr2 = new_vmpl_sa->cr2; + vcpu->arch.cr4 = new_vmpl_sa->cr4; + vcpu->arch.cr8 = new_vmpl_sa->cr8; + vcpu->arch.efer = new_vmpl_sa->efer; + + svm->sev_es.snp_current_vmpl = new_vmpl; + + vmcb_mark_all_dirty(svm->vmcb); + + return 0; +} + +static void sev_run_vmpl_vmsa(struct vcpu_svm *svm) +{ + struct ghcb *ghcb = svm->sev_es.ghcb; + struct kvm_vcpu *vcpu = &svm->vcpu; + unsigned int vmpl; + int ret; + + /* TODO: Does this need to be synced for original VMPL ... */ + ghcb_set_sw_exit_info_1(ghcb, 0); + ghcb_set_sw_exit_info_2(ghcb, 0); + + if (!sev_snp_guest(vcpu->kvm)) + goto err; + + vmpl = lower_32_bits(svm->vmcb->control.exit_info_1); + + ret = __sev_run_vmpl_vmsa(svm, vmpl); + if (ret) + goto err; + + return; + +err: + ghcb_set_sw_exit_info_1(ghcb, 2); + ghcb_set_sw_exit_info_2(ghcb, 0); +} + static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm) { struct vmcb_control_area *control = &svm->vmcb->control; @@ -4366,6 +4462,25 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm) ret = snp_begin_psc_msr(svm, control->ghcb_gpa); break; + case GHCB_MSR_VMPL_REQ: { + unsigned int vmpl; + + vmpl = get_ghcb_msr_bits(svm, GHCB_MSR_VMPL_LEVEL_MASK, GHCB_MSR_VMPL_LEVEL_POS); + + /* + * Set as successful in advance, since this value will be saved + * as part of the VMPL switch and then restored if switching + * back to the calling VMPL level. 
+ */ + set_ghcb_msr_bits(svm, 0, GHCB_MSR_VMPL_ERROR_MASK, GHCB_MSR_VMPL_ERROR_POS); + set_ghcb_msr_bits(svm, 0, GHCB_MSR_VMPL_RSVD_MASK, GHCB_MSR_VMPL_RSVD_POS); + set_ghcb_msr_bits(svm, GHCB_MSR_VMPL_RESP, GHCB_MSR_INFO_MASK, GHCB_MSR_INFO_POS); + + if (__sev_run_vmpl_vmsa(svm, vmpl)) + set_ghcb_msr_bits(svm, 1, GHCB_MSR_VMPL_ERROR_MASK, GHCB_MSR_VMPL_ERROR_POS); + + break; + } case GHCB_MSR_TERM_REQ: { u64 reason_set, reason_code; @@ -4538,6 +4653,11 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu) case SVM_VMGEXIT_GET_APIC_IDS: sev_get_apic_ids(svm); + ret = 1; + break; + case SVM_VMGEXIT_SNP_RUN_VMPL: + sev_run_vmpl_vmsa(svm); + ret = 1; break; case SVM_VMGEXIT_UNSUPPORTED_EVENT: diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index ca4bc53fb14a..586c26627bb1 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -4253,6 +4253,19 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, } vcpu->arch.regs_dirty = 0; + if (sev_snp_is_rinj_active(vcpu)) { + /* + * When SEV-SNP is running with restricted injection, the V_IRQ + * bit may be cleared on exit because virtual interrupt support + * is ignored. To support multiple VMPLs, some of which may not + * be running with restricted injection, ensure to reset the + * V_IRQ bit if a virtual interrupt is meant to be active (the + * virtual interrupt priority mask is non-zero). 
+ */ + if (svm->vmcb->control.int_ctl & V_INTR_PRIO_MASK) + svm->vmcb->control.int_ctl |= V_IRQ_MASK; + } + if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI)) kvm_before_interrupt(vcpu, KVM_HANDLING_NMI); diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 45a37d16b6f7..d1ef349556f7 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -198,6 +198,18 @@ struct svm_nested_state { bool force_msr_bitmap_recalc; }; +struct vmpl_switch_sa { + u32 int_state; + u32 exit_int_info; + u32 exit_int_info_err; + + unsigned long cr0; + unsigned long cr2; + unsigned long cr4; + unsigned long cr8; + u64 efer; +}; + #define vmpl_vmsa(s, v) ((s)->sev_es.vmsa_info[(v)].vmsa) #define vmpl_vmsa_gpa(s, v) ((s)->sev_es.vmsa_info[(v)].gpa) #define vmpl_vmsa_hpa(s, v) ((s)->sev_es.vmsa_info[(v)].hpa) @@ -256,6 +268,8 @@ struct vcpu_sev_es_state { struct mutex snp_vmsa_mutex; /* Used to handle concurrent updates of VMSA. */ unsigned int snp_current_vmpl; unsigned int snp_target_vmpl; + + struct vmpl_switch_sa vssa[SVM_SEV_VMPL_MAX]; }; struct vcpu_svm { @@ -776,7 +790,7 @@ int sev_cpu_init(struct svm_cpu_data *sd); int sev_dev_get_attr(u32 group, u64 attr, u64 *val); extern unsigned int max_sev_asid; void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code); -void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu); +bool sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu); int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order); void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end); int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn); @@ -800,7 +814,7 @@ static inline int sev_cpu_init(struct svm_cpu_data *sd) { return 0; } static inline int sev_dev_get_attr(u32 group, u64 attr, u64 *val) { return -ENXIO; } #define max_sev_asid 0 static inline void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code) {} -static inline void sev_snp_init_protected_guest_state(struct 
kvm_vcpu *vcpu) {} +static inline bool sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu) { return false; } static inline int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order) { return 0;
From patchwork Tue Aug 27 21:59:28 2024 X-Patchwork-Submitter: Tom Lendacky X-Patchwork-Id: 13780198
From: Tom Lendacky To: , , , CC: Paolo Bonzini , Sean Christopherson , Borislav Petkov , Dave Hansen , Ingo Molnar , "Thomas Gleixner" , Michael Roth , "Ashish Kalra" , Joerg Roedel , Roy Hopkins , Carlos Bilbao Subject: [RFC PATCH 4/7] KVM: SVM: Maintain per-VMPL SEV features in kvm_sev_info Date: Tue, 27 Aug 2024 16:59:28 -0500 Message-ID: <95d863d50c0984058b37681271a2034e65edcb89.1724795971.git.thomas.lendacky@amd.com> X-Mailing-List: kvm@vger.kernel.org MIME-Version: 1.0
From: Carlos Bilbao

Make struct kvm_sev_info maintain separate SEV features per VMPL, allowing distinct SEV features depending on the VM's privilege level.
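The per-VMPL bookkeeping this patch adds to sev_snp_ap_creation() can be sketched in isolation: the first AP created at a VMPL records that VMPL's features (with SNPActive forced on), and later APs at the same VMPL must not change the interrupt-injection mode. This is an illustrative freestanding model, not the kernel code; the feature bit positions are assumptions for the sketch (the real definitions live in asm/svm.h).

```c
#include <stdint.h>

#define SVM_SEV_VMPL_MAX 4
/* Assumed bit positions for illustration only */
#define SVM_SEV_FEAT_SNP_ACTIVE      (1ULL << 0)
#define SVM_SEV_FEAT_INT_INJ_MODES   ((1ULL << 3) | (1ULL << 4))

struct sev_features {
	uint64_t vmsa_features[SVM_SEV_VMPL_MAX]; /* one word per VMPL */
};

/*
 * Mimics the AP-creation check: lazily record the features of the first
 * vCPU seen at a VMPL, then reject any later AP whose injection-mode bits
 * differ from the recorded ones. Returns 0 on success, -1 on mismatch.
 */
int check_ap_features(struct sev_features *sev, unsigned int vmpl,
		      uint64_t ap_features)
{
	if (vmpl >= SVM_SEV_VMPL_MAX)
		return -1;

	if (!sev->vmsa_features[vmpl])
		sev->vmsa_features[vmpl] = ap_features | SVM_SEV_FEAT_SNP_ACTIVE;

	if ((ap_features ^ sev->vmsa_features[vmpl]) & SVM_SEV_FEAT_INT_INJ_MODES)
		return -1;
	return 0;
}
```

The XOR-then-mask form is the same consistency test the existing code used against the single vmsa_features word; the patch simply indexes it by VMPL.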
Signed-off-by: Carlos Bilbao
Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c | 22 +++++++++++++++-------
 arch/x86/kvm/svm/svm.h |  4 ++--
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index e0f5122061e6..c6c9306c86ef 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -144,7 +144,7 @@ static bool sev_vcpu_has_debug_swap(struct vcpu_svm *svm)
 	struct kvm_vcpu *vcpu = &svm->vcpu;
 	struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
 
-	return sev->vmsa_features & SVM_SEV_FEAT_DEBUG_SWAP;
+	return sev->vmsa_features[cur_vmpl(svm)] & SVM_SEV_FEAT_DEBUG_SWAP;
 }
 
 /* Must be called with the sev_bitmap_lock held */
@@ -428,7 +428,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 
 	sev->active = true;
 	sev->es_active = es_active;
-	sev->vmsa_features = data->vmsa_features;
+	sev->vmsa_features[SVM_SEV_VMPL0] = data->vmsa_features;
 	sev->ghcb_version = data->ghcb_version;
 
 	/*
@@ -440,7 +440,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 		sev->ghcb_version = GHCB_VERSION_DEFAULT;
 
 	if (vm_type == KVM_X86_SNP_VM)
-		sev->vmsa_features |= SVM_SEV_FEAT_SNP_ACTIVE;
+		sev->vmsa_features[SVM_SEV_VMPL0] |= SVM_SEV_FEAT_SNP_ACTIVE;
 
 	ret = sev_asid_new(sev);
 	if (ret)
@@ -468,7 +468,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	sev_asid_free(sev);
 	sev->asid = 0;
 e_no_asid:
-	sev->vmsa_features = 0;
+	sev->vmsa_features[SVM_SEV_VMPL0] = 0;
 	sev->es_active = false;
 	sev->active = false;
 	return ret;
@@ -852,7 +852,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 	save->xss = svm->vcpu.arch.ia32_xss;
 	save->dr6 = svm->vcpu.arch.dr6;
 
-	save->sev_features = sev->vmsa_features;
+	save->sev_features = sev->vmsa_features[SVM_SEV_VMPL0];
 
 	/*
 	 * Skip FPU and AVX setup with KVM_SEV_ES_INIT to avoid
@@ -1985,7 +1985,7 @@ static void sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
 	dst->pages_locked = src->pages_locked;
 	dst->enc_context_owner = src->enc_context_owner;
 	dst->es_active = src->es_active;
-	dst->vmsa_features = src->vmsa_features;
+	memcpy(dst->vmsa_features, src->vmsa_features, sizeof(dst->vmsa_features));
 
 	src->asid = 0;
 	src->active = false;
@@ -4034,8 +4034,16 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
 
 	/* Interrupt injection mode shouldn't change for AP creation */
 	sev_features = vcpu->arch.regs[VCPU_REGS_RAX];
-	sev_features ^= sev->vmsa_features;
 
+	/*
+	 * The SNPActive feature must at least be set. If the SEV
+	 * features of this AP are zero, this is the first vCPU created at
+	 * this VMPL.
+	 */
+	if (!sev->vmsa_features[vmpl])
+		sev->vmsa_features[vmpl] = sev_features | SVM_SEV_FEAT_SNP_ACTIVE;
+
+	sev_features ^= sev->vmsa_features[vmpl];
 	if (sev_features & SVM_SEV_FEAT_INT_INJ_MODES) {
 		vcpu_unimpl(vcpu, "vmgexit: invalid AP injection mode [%#lx] from guest\n",
 			    vcpu->arch.regs[VCPU_REGS_RAX]);

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index d1ef349556f7..55f1f6ffb871 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -87,7 +87,7 @@ struct kvm_sev_info {
 	unsigned long pages_locked; /* Number of pages locked */
 	struct list_head regions_list;  /* List of registered regions */
 	u64 ap_jump_table;	/* SEV-ES AP Jump Table address */
-	u64 vmsa_features;
+	u64 vmsa_features[SVM_SEV_VMPL_MAX];
 	u16 ghcb_version;	/* Highest guest GHCB protocol version allowed */
 	struct kvm *enc_context_owner; /* Owner of copied encryption context */
 	struct list_head mirror_vms; /* List of VMs mirroring */
@@ -416,7 +416,7 @@ static __always_inline bool sev_snp_guest(struct kvm *kvm)
 #ifdef CONFIG_KVM_AMD_SEV
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
 
-	return (sev->vmsa_features & SVM_SEV_FEAT_SNP_ACTIVE) &&
+	return (sev->vmsa_features[SVM_SEV_VMPL0] & SVM_SEV_FEAT_SNP_ACTIVE) &&
 	       !WARN_ON_ONCE(!sev_es_guest(kvm));
 #else
 	return false;

From patchwork Tue Aug 27 21:59:29 2024
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 13780199
From: Tom Lendacky
CC: Paolo Bonzini, Sean Christopherson, Borislav Petkov, Dave Hansen, Ingo Molnar, Thomas Gleixner, Michael Roth, Ashish Kalra, Joerg Roedel, Roy Hopkins
Subject: [RFC PATCH 5/7] KVM: SVM: Prevent injection when restricted injection is active
Date: Tue, 27 Aug 2024 16:59:29 -0500
Message-ID: <2e8bce9bf1b1f0a83e1afb78a61165f536c70cb4.1724795971.git.thomas.lendacky@amd.com>
Prevent injection of exceptions/interrupts when restricted injection is active. This is not full support for restricted injection, but the SVSM is not expecting any injections at all.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c | 30 ++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/svm.c |  6 ++++++
 arch/x86/kvm/svm/svm.h |  3 +++
 3 files changed, 39 insertions(+)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c6c9306c86ef..4324a72d35ea 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -5227,3 +5227,33 @@ int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
 
 	return level;
 }
+
+bool sev_snp_is_rinj_active(struct kvm_vcpu *vcpu)
+{
+	struct kvm_sev_info *sev;
+	int vmpl;
+
+	if (!sev_snp_guest(vcpu->kvm))
+		return false;
+
+	sev = &to_kvm_svm(vcpu->kvm)->sev_info;
+	vmpl = to_svm(vcpu)->sev_es.snp_current_vmpl;
+
+	return sev->vmsa_features[vmpl] & SVM_SEV_FEAT_RESTRICTED_INJECTION;
+}
+
+bool sev_snp_nmi_blocked(struct kvm_vcpu *vcpu)
+{
+	WARN_ON_ONCE(!sev_snp_is_rinj_active(vcpu));
+
+	/* NMIs are blocked when restricted injection is active */
+	return true;
+}
+
+bool sev_snp_interrupt_blocked(struct kvm_vcpu *vcpu)
+{
+	WARN_ON_ONCE(!sev_snp_is_rinj_active(vcpu));
+
+	/* Interrupts are blocked when restricted injection is active */
+	return true;
+}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 586c26627bb1..632c74cb41f4 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3780,6 +3780,9 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu)
 	if (!gif_set(svm))
 		return true;
 
+	if (sev_snp_is_rinj_active(vcpu))
+		return sev_snp_nmi_blocked(vcpu);
+
 	if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
 		return false;
 
@@ -3812,6 +3815,9 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
 	if (!gif_set(svm))
 		return true;
 
+	if (sev_snp_is_rinj_active(vcpu))
+		return sev_snp_interrupt_blocked(vcpu);
+
 	if (is_guest_mode(vcpu)) {
 		/* As long as interrupts are being delivered... */
 		if ((svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK)

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 55f1f6ffb871..029eb54a8472 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -761,6 +761,9 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm);
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
 void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa);
 void sev_es_unmap_ghcb(struct vcpu_svm *svm);
+bool sev_snp_is_rinj_active(struct kvm_vcpu *vcpu);
+bool sev_snp_nmi_blocked(struct kvm_vcpu *vcpu);
+bool sev_snp_interrupt_blocked(struct kvm_vcpu *vcpu);
 
 #ifdef CONFIG_KVM_AMD_SEV
 int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp);

From patchwork Tue Aug 27 21:59:30 2024
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 13780200
From: Tom Lendacky
CC: Paolo Bonzini, Sean Christopherson, Borislav Petkov, Dave Hansen, Ingo Molnar, Thomas Gleixner, Michael Roth, Ashish Kalra, Joerg Roedel, Roy Hopkins
Subject: [RFC PATCH 6/7] KVM: SVM: Support launching an SVSM with Restricted Injection set
Date: Tue, 27 Aug 2024 16:59:30 -0500
Message-ID: <25460ae2dcf050bd26ac58b71b727bda3913529a.1724795971.git.thomas.lendacky@amd.com>
Allow Restricted Injection to be set in SEV_FEATURES. When set, attempts to inject any interrupts other than #HV will make VMRUN fail. This is done to further reduce the security exposure within the SVSM.

Signed-off-by: Tom Lendacky
---
 arch/x86/kvm/svm/sev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 4324a72d35ea..3aa9489786ee 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -3078,6 +3078,7 @@ void __init sev_hardware_setup(void)
 		sev_es_debug_swap_enabled = false;
 
 	sev_supported_vmsa_features = 0;
+	sev_supported_vmsa_features |= SVM_SEV_FEAT_RESTRICTED_INJECTION;
 	if (sev_es_debug_swap_enabled)
 		sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP;
 }

From patchwork Tue Aug 27 21:59:31 2024
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 13780201
From: Tom Lendacky
CC: Paolo Bonzini, Sean Christopherson, Borislav Petkov, Dave Hansen, Ingo Molnar, Thomas Gleixner, Michael Roth, Ashish Kalra, Joerg Roedel, Roy Hopkins
Subject: [RFC PATCH 7/7] KVM: SVM: Support initialization of an SVSM
Date: Tue, 27 Aug 2024 16:59:31 -0500
Message-ID: <8d5d8aae56f3623ee3fe247aafa764b2c0b181c9.1724795971.git.thomas.lendacky@amd.com>
Allow for setting VMPL permissions as part of the launch sequence and,
using an SNP init flag, limit measuring of the guest vCPUs to just the
BSP. Indicate full multi-VMPL support to the guest through the GHCB
feature bitmap.

Signed-off-by: Tom Lendacky
---
 arch/x86/include/uapi/asm/kvm.h |  10 +++
 arch/x86/kvm/svm/sev.c          | 123 ++++++++++++++++++++++++--------
 arch/x86/kvm/svm/svm.h          |   1 +
 include/uapi/linux/kvm.h        |   3 +
 4 files changed, 107 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index bf57a824f722..c60557bb4253 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -465,6 +465,7 @@ struct kvm_sync_regs {
 /* vendor-specific groups and attributes for system fd */
 #define KVM_X86_GRP_SEV			1
 #  define KVM_X86_SEV_VMSA_FEATURES	0
+#  define KVM_X86_SEV_SNP_INIT_FLAGS	1
 
 struct kvm_vmx_nested_state_data {
 	__u8 vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE];
@@ -703,6 +704,8 @@ enum sev_cmd_id {
 	KVM_SEV_SNP_LAUNCH_UPDATE,
 	KVM_SEV_SNP_LAUNCH_FINISH,
 
+	KVM_SEV_SNP_LAUNCH_UPDATE_VMPLS,
+
 	KVM_SEV_NR_MAX,
 };
 
@@ -856,6 +859,13 @@ struct kvm_sev_snp_launch_update {
 	__u64 pad2[4];
 };
 
+struct kvm_sev_snp_launch_update_vmpls {
+	struct kvm_sev_snp_launch_update lu;
+	__u8 vmpl3_perms;
+	__u8 vmpl2_perms;
+	__u8 vmpl1_perms;
+};
+
 #define KVM_SEV_SNP_ID_BLOCK_SIZE	96
 #define KVM_SEV_SNP_ID_AUTH_SIZE	4096
 #define KVM_SEV_SNP_FINISH_DATA_SIZE	32
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 3aa9489786ee..25d5fe0dab5a 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -41,7 +41,10 @@
 
 #define GHCB_HV_FT_SUPPORTED	(GHCB_HV_FT_SNP |		\
 				 GHCB_HV_FT_SNP_AP_CREATION |	\
-				 GHCB_HV_FT_APIC_ID_LIST)
+				 GHCB_HV_FT_APIC_ID_LIST |	\
+				 GHCB_HV_FT_SNP_MULTI_VMPL)
+
+#define SNP_SUPPORTED_INIT_FLAGS	KVM_SEV_SNP_SVSM
 
 /* enable/disable SEV support */
 static bool sev_enabled = true;
@@ -329,6 +332,12 @@ static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
 	sev_decommission(handle);
 }
 
+static bool verify_init_flags(struct kvm_sev_init *data, unsigned long vm_type)
+{
+	return (vm_type != KVM_X86_SNP_VM) ? !data->flags
+					   : !(data->flags & ~SNP_SUPPORTED_INIT_FLAGS);
+}
+
 /*
  * This sets up bounce buffers/firmware pages to handle SNP Guest Request
  * messages (e.g. attestation requests). See "SNP Guest Request" in the GHCB
@@ -414,7 +423,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	if (kvm->created_vcpus)
 		return -EINVAL;
 
-	if (data->flags)
+	if (!verify_init_flags(data, vm_type))
 		return -EINVAL;
 
 	if (data->vmsa_features & ~valid_vmsa_features)
@@ -430,6 +439,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	sev->es_active = es_active;
 	sev->vmsa_features[SVM_SEV_VMPL0] = data->vmsa_features;
 	sev->ghcb_version = data->ghcb_version;
+	sev->snp_init_flags = data->flags;
 
 	/*
 	 * Currently KVM supports the full range of mandatory features defined
@@ -468,6 +478,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	sev_asid_free(sev);
 	sev->asid = 0;
 e_no_asid:
+	sev->snp_init_flags = 0;
 	sev->vmsa_features[SVM_SEV_VMPL0] = 0;
 	sev->es_active = false;
 	sev->active = false;
@@ -2152,7 +2163,9 @@ int sev_dev_get_attr(u32 group, u64 attr, u64 *val)
 	case KVM_X86_SEV_VMSA_FEATURES:
 		*val = sev_supported_vmsa_features;
 		return 0;
-
+	case KVM_X86_SEV_SNP_INIT_FLAGS:
+		*val = SNP_SUPPORTED_INIT_FLAGS;
+		return 0;
 	default:
 		return -ENXIO;
 	}
@@ -2260,6 +2273,9 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
 
 struct sev_gmem_populate_args {
 	__u8 type;
+	__u8 vmpl1_perms;
+	__u8 vmpl2_perms;
+	__u8 vmpl3_perms;
 	int sev_fd;
 	int fw_error;
 };
@@ -2309,6 +2325,9 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn_start, kvm_pfn_t pf
 		fw_args.address = __sme_set(pfn_to_hpa(pfn + i));
 		fw_args.page_size = PG_LEVEL_TO_RMP(PG_LEVEL_4K);
 		fw_args.page_type = sev_populate_args->type;
+		fw_args.vmpl1_perms = sev_populate_args->vmpl1_perms;
+		fw_args.vmpl2_perms = sev_populate_args->vmpl2_perms;
+		fw_args.vmpl3_perms = sev_populate_args->vmpl3_perms;
 		ret = __sev_issue_cmd(sev_populate_args->sev_fd,
 				      SEV_CMD_SNP_LAUNCH_UPDATE,
 				      &fw_args, &sev_populate_args->fw_error);
@@ -2355,34 +2374,27 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn_start, kvm_pfn_t pf
 	return ret;
 }
 
-static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
+static int __snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp,
+			       struct kvm_sev_snp_launch_update_vmpls *params)
 {
-	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
 	struct sev_gmem_populate_args sev_populate_args = {0};
-	struct kvm_sev_snp_launch_update params;
 	struct kvm_memory_slot *memslot;
 	long npages, count;
 	void __user *src;
 	int ret = 0;
 
-	if (!sev_snp_guest(kvm) || !sev->snp_context)
-		return -EINVAL;
-
-	if (copy_from_user(&params, u64_to_user_ptr(argp->data), sizeof(params)))
-		return -EFAULT;
-
 	pr_debug("%s: GFN start 0x%llx length 0x%llx type %d flags %d\n", __func__,
-		 params.gfn_start, params.len, params.type, params.flags);
+		 params->lu.gfn_start, params->lu.len, params->lu.type, params->lu.flags);
 
-	if (!PAGE_ALIGNED(params.len) || params.flags ||
-	    (params.type != KVM_SEV_SNP_PAGE_TYPE_NORMAL &&
-	     params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO &&
-	     params.type != KVM_SEV_SNP_PAGE_TYPE_UNMEASURED &&
-	     params.type != KVM_SEV_SNP_PAGE_TYPE_SECRETS &&
-	     params.type != KVM_SEV_SNP_PAGE_TYPE_CPUID))
+	if (!PAGE_ALIGNED(params->lu.len) || params->lu.flags ||
+	    (params->lu.type != KVM_SEV_SNP_PAGE_TYPE_NORMAL &&
+	     params->lu.type != KVM_SEV_SNP_PAGE_TYPE_ZERO &&
+	     params->lu.type != KVM_SEV_SNP_PAGE_TYPE_UNMEASURED &&
+	     params->lu.type != KVM_SEV_SNP_PAGE_TYPE_SECRETS &&
+	     params->lu.type != KVM_SEV_SNP_PAGE_TYPE_CPUID))
 		return -EINVAL;
 
-	npages = params.len / PAGE_SIZE;
+	npages = params->lu.len / PAGE_SIZE;
 
 	/*
 	 * For each GFN that's being prepared as part of the initial guest
@@ -2405,17 +2417,20 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	 */
 	mutex_lock(&kvm->slots_lock);
 
-	memslot = gfn_to_memslot(kvm, params.gfn_start);
+	memslot = gfn_to_memslot(kvm, params->lu.gfn_start);
 	if (!kvm_slot_can_be_private(memslot)) {
 		ret = -EINVAL;
 		goto out;
 	}
 
 	sev_populate_args.sev_fd = argp->sev_fd;
-	sev_populate_args.type = params.type;
-	src = params.type == KVM_SEV_SNP_PAGE_TYPE_ZERO ? NULL : u64_to_user_ptr(params.uaddr);
+	sev_populate_args.type = params->lu.type;
+	sev_populate_args.vmpl1_perms = params->vmpl1_perms;
+	sev_populate_args.vmpl2_perms = params->vmpl2_perms;
+	sev_populate_args.vmpl3_perms = params->vmpl3_perms;
+	src = params->lu.type == KVM_SEV_SNP_PAGE_TYPE_ZERO ? NULL : u64_to_user_ptr(params->lu.uaddr);
 
-	count = kvm_gmem_populate(kvm, params.gfn_start, src, npages,
+	count = kvm_gmem_populate(kvm, params->lu.gfn_start, src, npages,
 				  sev_gmem_post_populate, &sev_populate_args);
 	if (count < 0) {
 		argp->error = sev_populate_args.fw_error;
@@ -2423,13 +2438,16 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
 			 __func__, count, argp->error);
 		ret = -EIO;
 	} else {
-		params.gfn_start += count;
-		params.len -= count * PAGE_SIZE;
-		if (params.type != KVM_SEV_SNP_PAGE_TYPE_ZERO)
-			params.uaddr += count * PAGE_SIZE;
+		params->lu.gfn_start += count;
+		params->lu.len -= count * PAGE_SIZE;
+		if (params->lu.type != KVM_SEV_SNP_PAGE_TYPE_ZERO)
+			params->lu.uaddr += count * PAGE_SIZE;
 
 		ret = 0;
-		if (copy_to_user(u64_to_user_ptr(argp->data), &params, sizeof(params)))
+
+		/* Only copy the original LAUNCH_UPDATE area back */
+		if (copy_to_user(u64_to_user_ptr(argp->data), params,
+				 sizeof(struct kvm_sev_snp_launch_update)))
 			ret = -EFAULT;
 	}
 
@@ -2439,6 +2457,40 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+static int snp_launch_update_vmpls(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct kvm_sev_snp_launch_update_vmpls params;
+
+	if (!sev_snp_guest(kvm) || !sev->snp_context)
+		return -EINVAL;
+
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data, sizeof(params)))
+		return -EFAULT;
+
+	return __snp_launch_update(kvm, argp, &params);
+}
+
+static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct kvm_sev_snp_launch_update_vmpls params;
+
+	if (!sev_snp_guest(kvm) || !sev->snp_context)
+		return -EINVAL;
+
+	/* Copy only the kvm_sev_snp_launch_update portion */
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data,
+			   sizeof(struct kvm_sev_snp_launch_update)))
+		return -EFAULT;
+
+	params.vmpl1_perms = 0;
+	params.vmpl2_perms = 0;
+	params.vmpl3_perms = 0;
+
+	return __snp_launch_update(kvm, argp, &params);
+}
+
 static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
 {
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
@@ -2454,6 +2506,10 @@ static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
 		struct vcpu_svm *svm = to_svm(vcpu);
 		u64 pfn = __pa(vmpl_vmsa(svm, SVM_SEV_VMPL0)) >> PAGE_SHIFT;
 
+		/* If SVSM support is requested, only measure the boot vCPU */
+		if ((sev->snp_init_flags & KVM_SEV_SNP_SVSM) && vcpu->vcpu_id != 0)
+			continue;
+
 		ret = sev_es_sync_vmsa(svm);
 		if (ret)
 			return ret;
@@ -2482,6 +2538,10 @@ static int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
 		 * MSR_IA32_DEBUGCTLMSR when guest_state_protected is not set.
 		 */
 		svm_enable_lbrv(vcpu);
+
+		/* If SVSM support is requested, no more vCPUs are measured. */
+		if (sev->snp_init_flags & KVM_SEV_SNP_SVSM)
+			break;
 	}
 
 	return 0;
@@ -2507,7 +2567,7 @@ static int snp_launch_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	if (params.flags)
 		return -EINVAL;
 
-	/* Measure all vCPUs using LAUNCH_UPDATE before finalizing the launch flow. */
+	/* Measure vCPUs using LAUNCH_UPDATE before we finalize the launch flow. */
 	ret = snp_launch_update_vmsa(kvm, argp);
 	if (ret)
 		return ret;
@@ -2665,6 +2725,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 	case KVM_SEV_SNP_LAUNCH_UPDATE:
 		r = snp_launch_update(kvm, &sev_cmd);
 		break;
+	case KVM_SEV_SNP_LAUNCH_UPDATE_VMPLS:
+		r = snp_launch_update_vmpls(kvm, &sev_cmd);
+		break;
 	case KVM_SEV_SNP_LAUNCH_FINISH:
 		r = snp_launch_finish(kvm, &sev_cmd);
 		break;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 029eb54a8472..97a1b1b4cb5f 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -98,6 +98,7 @@ struct kvm_sev_info {
 	void *guest_req_buf;    /* Bounce buffer for SNP Guest Request input */
 	void *guest_resp_buf;   /* Bounce buffer for SNP Guest Request output */
 	struct mutex guest_req_mutex; /* Must acquire before using bounce buffers */
+	unsigned int snp_init_flags;
 };
 
 struct kvm_svm {
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 637efc055145..49833912432a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1399,6 +1399,9 @@ struct kvm_enc_region {
 #define KVM_GET_SREGS2	_IOR(KVMIO, 0xcc, struct kvm_sregs2)
 #define KVM_SET_SREGS2	_IOW(KVMIO, 0xcd, struct kvm_sregs2)
 
+/* Enable SVSM support */
+#define KVM_SEV_SNP_SVSM	(1 << 0)
+
 #define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE	(1 << 0)
 #define KVM_DIRTY_LOG_INITIALLY_SET		(1 << 1)