From patchwork Tue Feb 2 19:01:24 2021
From: Michael Roth
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Andy Lutomirski, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Tom Lendacky
Subject: [PATCH v4 1/3] KVM: SVM: use vmsave/vmload for saving/restoring additional host state
Date: Tue, 2 Feb 2021 13:01:24 -0600
Message-Id: <20210202190126.2185715-2-michael.roth@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210202190126.2185715-1-michael.roth@amd.com>
References: <20210202190126.2185715-1-michael.roth@amd.com>

Using a guest workload which simply issues 'hlt' in a tight loop to
generate VMEXITs, it was observed (on a recent EPYC processor) that a
significant amount of the VMEXIT overhead measured on the host was the
result of MSR reads/writes in svm_vcpu_load/svm_vcpu_put according to
perf:

  67.49%--kvm_arch_vcpu_ioctl_run
          |
          |--23.13%--vcpu_put
          |          kvm_arch_vcpu_put
          |          |
          |          |--21.31%--native_write_msr
          |          |
          |           --1.27%--svm_set_cr4
          |
          |--16.11%--vcpu_load
          |          |
          |           --15.58%--kvm_arch_vcpu_load
          |                     |
          |                     |--13.97%--svm_set_cr4
          |                     |          |
          |                     |          |--12.64%--native_read_msr

Most of these MSRs relate to 'syscall'/'sysenter' and segment bases, and
can be saved/restored using 'vmsave'/'vmload' instructions rather than
explicit MSR reads/writes. In doing so there is a significant reduction
in the svm_vcpu_load/svm_vcpu_put overhead measured for the above
workload:

  50.92%--kvm_arch_vcpu_ioctl_run
          |
          |--19.28%--disable_nmi_singlestep
          |
          |--13.68%--vcpu_load
          |          kvm_arch_vcpu_load
          |          |
          |          |--9.19%--svm_set_cr4
          |          |          |
          |          |           --6.44%--native_read_msr
          |          |
          |           --3.55%--native_write_msr
          |
          |--6.05%--kvm_inject_nmi
          |
          |--2.80%--kvm_sev_es_mmio_read
          |
          |--2.19%--vcpu_put
          |          |
          |           --1.25%--kvm_arch_vcpu_put
          |                     native_write_msr

Quantifying this further, if we look at the raw cycle counts for a
normal iteration of the above workload (according to 'rdtscp'),
kvm_arch_vcpu_ioctl_run() takes ~4600 cycles from start to finish with
the current behavior. Using 'vmsave'/'vmload', this is reduced to
~2800 cycles, a savings of 39%.

While this approach doesn't seem to manifest in any noticeable
improvement for more realistic workloads like UnixBench, netperf, and
kernel builds, likely because their exit paths generally involve IO
with comparatively high latencies, it does significantly reduce the
overall overhead of KVM_RUN, which may still be noticeable in certain
situations. It also simplifies some aspects of the code.

With this change, explicit save/restore is no longer needed for the
following host MSRs, since they are documented[1] as being part of the
VMCB State Save Area:

  MSR_STAR, MSR_LSTAR, MSR_CSTAR, MSR_SYSCALL_MASK,
  MSR_KERNEL_GS_BASE, MSR_IA32_SYSENTER_CS,
  MSR_IA32_SYSENTER_ESP, MSR_IA32_SYSENTER_EIP,
  MSR_FS_BASE, MSR_GS_BASE

and only the following MSR needs individual handling in
svm_vcpu_put/svm_vcpu_load:

  MSR_TSC_AUX

We could drop the host_save_user_msrs array/loop and instead handle
MSR read/write of MSR_TSC_AUX directly, but we leave that for now as
a potential follow-up.

Since 'vmsave'/'vmload' also handles the LDTR and FS/GS segment
registers (and associated hidden state)[2], some of the code previously
used to handle this is no longer needed, so we drop it as well.

The first public release of the SVM spec[3] also documents the same
handling for the host state in question, so we make these changes
unconditionally.

Also worth noting is that we 'vmsave' to the same page that is
subsequently used by 'vmrun' to record some additional host state. This
is okay, since, in accordance with the spec[2], the additional state
written to the page by 'vmrun' does not overwrite any fields written by
'vmsave'. This has also been confirmed through testing (for the above
CPU, at least).

[1] AMD64 Architecture Programmer's Manual, Rev 3.33, Volume 2, Appendix B, Table B-2
[2] AMD64 Architecture Programmer's Manual, Rev 3.31, Volume 3, Chapter 4, VMSAVE/VMLOAD
[3] Secure Virtual Machine Architecture Reference Manual, Rev 3.01

Suggested-by: Tom Lendacky
Signed-off-by: Michael Roth
---
 arch/x86/kvm/svm/svm.c     | 31 +++++--------------------------
 arch/x86/kvm/svm/svm.h     | 17 -----------------
 arch/x86/kvm/svm/svm_ops.h |  5 +++++
 3 files changed, 10 insertions(+), 43 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 687876211ebe..bdc1921094dc 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1422,16 +1422,11 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	if (sev_es_guest(svm->vcpu.kvm)) {
 		sev_es_vcpu_load(svm, cpu);
 	} else {
-#ifdef CONFIG_X86_64
-		rdmsrl(MSR_GS_BASE, to_svm(vcpu)->host.gs_base);
-#endif
-		savesegment(fs, svm->host.fs);
-		savesegment(gs, svm->host.gs);
-		svm->host.ldt = kvm_read_ldt();
-
 		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
 			rdmsrl(host_save_user_msrs[i].index,
 			       svm->host_user_msrs[i]);
+
+		vmsave(__sme_page_pa(sd->save_area));
 	}
 
 	if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) {
@@ -1463,17 +1458,6 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 	if (sev_es_guest(svm->vcpu.kvm)) {
 		sev_es_vcpu_put(svm);
 	} else {
-		kvm_load_ldt(svm->host.ldt);
-#ifdef CONFIG_X86_64
-		loadsegment(fs, svm->host.fs);
-		wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gsbase);
-		load_gs_index(svm->host.gs);
-#else
-#ifdef CONFIG_X86_32_LAZY_GS
-		loadsegment(gs, svm->host.gs);
-#endif
-#endif
-
 		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
 			wrmsrl(host_save_user_msrs[i].index,
 			       svm->host_user_msrs[i]);
@@ -3780,16 +3764,11 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	if (sev_es_guest(svm->vcpu.kvm)) {
 		__svm_sev_es_vcpu_run(svm->vmcb_pa);
 	} else {
+		struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
+
 		__svm_vcpu_run(svm->vmcb_pa, (unsigned long *)&svm->vcpu.arch.regs);
 
-#ifdef CONFIG_X86_64
-		native_wrmsrl(MSR_GS_BASE, svm->host.gs_base);
-#else
-		loadsegment(fs, svm->host.fs);
-#ifndef CONFIG_X86_32_LAZY_GS
-		loadsegment(gs, svm->host.gs);
-#endif
-#endif
+		vmload(__sme_page_pa(sd->save_area));
 	}
 
 	/*

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0fe874ae5498..525f1bf57917 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -27,17 +27,6 @@ static const struct svm_host_save_msrs {
 	u32 index;		/* Index of the MSR */
 	bool sev_es_restored;	/* True if MSR is restored on SEV-ES VMEXIT */
 } host_save_user_msrs[] = {
-#ifdef CONFIG_X86_64
-	{ .index = MSR_STAR,			.sev_es_restored = true },
-	{ .index = MSR_LSTAR,			.sev_es_restored = true },
-	{ .index = MSR_CSTAR,			.sev_es_restored = true },
-	{ .index = MSR_SYSCALL_MASK,		.sev_es_restored = true },
-	{ .index = MSR_KERNEL_GS_BASE,		.sev_es_restored = true },
-	{ .index = MSR_FS_BASE,			.sev_es_restored = true },
-#endif
-	{ .index = MSR_IA32_SYSENTER_CS,	.sev_es_restored = true },
-	{ .index = MSR_IA32_SYSENTER_ESP,	.sev_es_restored = true },
-	{ .index = MSR_IA32_SYSENTER_EIP,	.sev_es_restored = true },
 	{ .index = MSR_TSC_AUX,			.sev_es_restored = false },
 };
 #define NR_HOST_SAVE_USER_MSRS ARRAY_SIZE(host_save_user_msrs)
@@ -130,12 +119,6 @@ struct vcpu_svm {
 	u64 next_rip;
 
 	u64 host_user_msrs[NR_HOST_SAVE_USER_MSRS];
-	struct {
-		u16 fs;
-		u16 gs;
-		u16 ldt;
-		u64 gs_base;
-	} host;
 
 	u64 spec_ctrl;
 	/*

diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
index 0c8377aee52c..c2a05f56c8e4 100644
--- a/arch/x86/kvm/svm/svm_ops.h
+++ b/arch/x86/kvm/svm/svm_ops.h
@@ -56,4 +56,9 @@ static inline void vmsave(hpa_t pa)
 	svm_asm1(vmsave, "a" (pa), "memory");
 }
 
+static inline void vmload(hpa_t pa)
+{
+	svm_asm1(vmload, "a" (pa), "memory");
+}
+
 #endif /* __KVM_X86_SVM_OPS_H */
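
As background for the vmload() helper added to svm_ops.h above: it mirrors
the existing vmsave(), executing the corresponding SVM instruction with the
save area's physical address in RAX. Below is a minimal sketch, assuming
plain inline assembly and illustrative function names, of what such wrappers
reduce to; the kernel's svm_asm1() macro additionally wires up
exception-table handling for faulting instructions, which is omitted here.

  /*
   * Illustrative sketch only -- not the kernel implementation. The real
   * vmsave()/vmload() helpers go through the svm_asm1() macro.
   */
  static inline void vmsave_sketch(unsigned long save_area_pa)
  {
          /* VMSAVE stores FS/GS/TR/LDTR (including hidden state) and the
           * syscall/sysenter MSRs into the save area addressed by RAX. */
          asm volatile("vmsave %%rax" : : "a" (save_area_pa) : "memory");
  }

  static inline void vmload_sketch(unsigned long save_area_pa)
  {
          /* VMLOAD reloads that same state from the save area. */
          asm volatile("vmload %%rax" : : "a" (save_area_pa) : "memory");
  }
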

From patchwork Tue Feb 2 19:01:25 2021
From: Michael Roth
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Andy Lutomirski, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 2/3] KVM: SVM: remove unneeded fields from host_save_user_msrs
Date: Tue, 2 Feb 2021 13:01:25 -0600
Message-Id: <20210202190126.2185715-3-michael.roth@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210202190126.2185715-1-michael.roth@amd.com>
References: <20210202190126.2185715-1-michael.roth@amd.com>

Now that the set of host user MSRs that need to be individually
saved/restored is the same with and without SEV-ES, we can drop the
.sev_es_restored flag and just iterate through the list unconditionally
for both cases. A subsequent patch can then move these loops to a
common path.

Signed-off-by: Michael Roth
---
 arch/x86/kvm/svm/sev.c | 16 ++++------------
 arch/x86/kvm/svm/svm.c |  6 ++----
 arch/x86/kvm/svm/svm.h |  7 ++-----
 3 files changed, 8 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index a3e2b29f484d..87167ef8ca23 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2083,12 +2083,8 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
 	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
 	 * restored.
 	 */
-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++) {
-		if (host_save_user_msrs[i].sev_es_restored)
-			continue;
-
-		rdmsrl(host_save_user_msrs[i].index, svm->host_user_msrs[i]);
-	}
+	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
+		rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 
 	/* XCR0 is restored on VMEXIT, save the current host value */
 	hostsa = (struct vmcb_save_area *)(page_address(sd->save_area) + 0x400);
@@ -2109,12 +2105,8 @@ void sev_es_vcpu_put(struct vcpu_svm *svm)
 	 * Certain MSRs are restored on VMEXIT and were saved with vmsave in
 	 * sev_es_vcpu_load() above. Only restore ones that weren't.
 	 */
-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++) {
-		if (host_save_user_msrs[i].sev_es_restored)
-			continue;
-
-		wrmsrl(host_save_user_msrs[i].index, svm->host_user_msrs[i]);
-	}
+	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
+		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 }
 
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index bdc1921094dc..ae897aaa4471 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1423,8 +1423,7 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		sev_es_vcpu_load(svm, cpu);
 	} else {
 		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-			rdmsrl(host_save_user_msrs[i].index,
-			       svm->host_user_msrs[i]);
+			rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 
 		vmsave(__sme_page_pa(sd->save_area));
 	}
@@ -1459,8 +1458,7 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 		sev_es_vcpu_put(svm);
 	} else {
 		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-			wrmsrl(host_save_user_msrs[i].index,
-			       svm->host_user_msrs[i]);
+			wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 	}
 }
 

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 525f1bf57917..66d83dfefe18 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -23,11 +23,8 @@
 
 #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)
 
-static const struct svm_host_save_msrs {
-	u32 index;		/* Index of the MSR */
-	bool sev_es_restored;	/* True if MSR is restored on SEV-ES VMEXIT */
-} host_save_user_msrs[] = {
-	{ .index = MSR_TSC_AUX, .sev_es_restored = false },
+static const u32 host_save_user_msrs[] = {
+	MSR_TSC_AUX,
 };
 #define NR_HOST_SAVE_USER_MSRS ARRAY_SIZE(host_save_user_msrs)
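
To make the effect of this simplification concrete, here is a hedged,
self-contained sketch (hypothetical names; the rdmsr/wrmsr helpers are
stubs rather than the kernel macros) of the flag-free save/restore loops
that both the SEV-ES and non-SEV-ES paths can now share:

  /*
   * Illustrative sketch of the pattern patch 2 arrives at; not kernel code.
   */
  #include <stdint.h>

  #define MSR_TSC_AUX 0xc0000103          /* architectural MSR index */

  /* After the change, the list is just MSR indices -- no per-entry flags. */
  static const uint32_t host_save_user_msrs[] = { MSR_TSC_AUX };
  #define NR_HOST_SAVE_USER_MSRS \
          (sizeof(host_save_user_msrs) / sizeof(host_save_user_msrs[0]))

  static uint64_t host_user_msrs[NR_HOST_SAVE_USER_MSRS];

  /* Stand-ins for MSR accessors. */
  static uint64_t rdmsr_stub(uint32_t msr) { (void)msr; return 0; }
  static void wrmsr_stub(uint32_t msr, uint64_t val) { (void)msr; (void)val; }

  /* Both the load and put paths reduce to symmetric, unconditional loops. */
  static void save_host_user_msrs(void)
  {
          unsigned int i;

          for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
                  host_user_msrs[i] = rdmsr_stub(host_save_user_msrs[i]);
  }

  static void restore_host_user_msrs(void)
  {
          unsigned int i;

          for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
                  wrmsr_stub(host_save_user_msrs[i], host_user_msrs[i]);
  }
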

From patchwork Tue Feb 2 19:01:26 2021
From: Michael Roth
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Andy Lutomirski, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
    linux-kernel@vger.kernel.org
Subject: [PATCH v4 3/3] KVM: SVM: use .prepare_guest_switch() to handle CPU register save/setup
Date: Tue, 2 Feb 2021 13:01:26 -0600
Message-Id: <20210202190126.2185715-4-michael.roth@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210202190126.2185715-1-michael.roth@amd.com>
References: <20210202190126.2185715-1-michael.roth@amd.com>

Currently we save host state like user-visible host MSRs, and do some
initial guest register setup for MSR_TSC_AUX and MSR_AMD64_TSC_RATIO,
in svm_vcpu_load(). Defer this until just before we enter the guest by
moving the handling to kvm_x86_ops.prepare_guest_switch(), similarly to
how it is done for the VMX implementation.

Additionally, since the handling of saving/restoring host user MSRs is
the same with and without SEV-ES enabled, move that handling to common
code.

Suggested-by: Sean Christopherson
Signed-off-by: Michael Roth
---
 arch/x86/kvm/svm/sev.c | 22 +-----------
 arch/x86/kvm/svm/svm.c | 76 +++++++++++++++++++++++++++++-------------
 arch/x86/kvm/svm/svm.h |  5 +--
 3 files changed, 56 insertions(+), 47 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 87167ef8ca23..874ea309279f 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2066,11 +2066,10 @@ void sev_es_create_vcpu(struct vcpu_svm *svm)
 			    sev_enc_bit));
 }
 
-void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
+void sev_es_prepare_guest_switch(struct vcpu_svm *svm, unsigned int cpu)
 {
 	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
 	struct vmcb_save_area *hostsa;
-	unsigned int i;
 
 	/*
 	 * As an SEV-ES guest, hardware will restore the host state on VMEXIT,
@@ -2079,13 +2078,6 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
 	 */
 	vmsave(__sme_page_pa(sd->save_area));
 
-	/*
-	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
-	 * restored.
-	 */
-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-		rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
-
 	/* XCR0 is restored on VMEXIT, save the current host value */
 	hostsa = (struct vmcb_save_area *)(page_address(sd->save_area) + 0x400);
 	hostsa->xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
@@ -2097,18 +2089,6 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
 	hostsa->xss = host_xss;
 }
 
-void sev_es_vcpu_put(struct vcpu_svm *svm)
-{
-	unsigned int i;
-
-	/*
-	 * Certain MSRs are restored on VMEXIT and were saved with vmsave in
-	 * sev_es_vcpu_load() above. Only restore ones that weren't.
-	 */
-	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
-}
-
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ae897aaa4471..0059f1d14b82 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1361,6 +1361,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 		svm->vmsa = page_address(vmsa_page);
 
 	svm->asid_generation = 0;
+	svm->guest_state_loaded = false;
 	init_vmcb(svm);
 
 	svm_init_osvw(vcpu);
@@ -1408,23 +1409,30 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 }
 
-static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
-	int i;
+	struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu);
+	unsigned int i;
 
-	if (unlikely(cpu != vcpu->cpu)) {
-		svm->asid_generation = 0;
-		vmcb_mark_all_dirty(svm->vmcb);
-	}
+	if (svm->guest_state_loaded)
+		return;
+
+	/*
+	 * Certain MSRs are restored on VMEXIT (sev-es), or vmload of host save
+	 * area (non-sev-es). Save ones that aren't so we can restore them
+	 * individually later.
+	 */
+	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
+		rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 
+	/*
+	 * Save additional host state that will be restored on VMEXIT (sev-es)
+	 * or subsequent vmload of host save area.
+	 */
 	if (sev_es_guest(svm->vcpu.kvm)) {
-		sev_es_vcpu_load(svm, cpu);
+		sev_es_prepare_guest_switch(svm, vcpu->cpu);
 	} else {
-		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-			rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
-
 		vmsave(__sme_page_pa(sd->save_area));
 	}
 
@@ -1435,10 +1443,42 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 			wrmsrl(MSR_AMD64_TSC_RATIO, tsc_ratio);
 		}
 	}
+
 	/* This assumes that the kernel never uses MSR_TSC_AUX */
 	if (static_cpu_has(X86_FEATURE_RDTSCP))
 		wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
 
+	svm->guest_state_loaded = true;
+}
+
+static void svm_prepare_host_switch(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	unsigned int i;
+
+	if (!svm->guest_state_loaded)
+		return;
+
+	/*
+	 * Certain MSRs are restored on VMEXIT (sev-es), or vmload of host save
+	 * area (non-sev-es). Restore the ones that weren't.
+	 */
+	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
+		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
+
+	svm->guest_state_loaded = false;
+}
+
+static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
+
+	if (unlikely(cpu != vcpu->cpu)) {
+		svm->asid_generation = 0;
+		vmcb_mark_all_dirty(svm->vmcb);
+	}
+
 	if (sd->current_vmcb != svm->vmcb) {
 		sd->current_vmcb = svm->vmcb;
 		indirect_branch_prediction_barrier();
@@ -1448,18 +1488,10 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	struct vcpu_svm *svm = to_svm(vcpu);
-	int i;
-
 	avic_vcpu_put(vcpu);
+	svm_prepare_host_switch(vcpu);
 
 	++vcpu->stat.host_state_reload;
-	if (sev_es_guest(svm->vcpu.kvm)) {
-		sev_es_vcpu_put(svm);
-	} else {
-		for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-			wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
-	}
 }
 
 static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu)
@@ -3614,10 +3646,6 @@ static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva)
 	invlpga(gva, svm->vmcb->control.asid);
 }
 
-static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
-{
-}
-
 static inline void sync_cr8_to_lapic(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 66d83dfefe18..cfc495c71fc1 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -172,6 +172,8 @@ struct vcpu_svm {
 	u64 ghcb_sa_len;
 	bool ghcb_sa_sync;
 	bool ghcb_sa_free;
+
+	bool guest_state_loaded;
 };
 
 struct svm_cpu_data {
@@ -570,9 +572,8 @@ int sev_handle_vmgexit(struct vcpu_svm *svm);
 int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
 void sev_es_init_vmcb(struct vcpu_svm *svm);
 void sev_es_create_vcpu(struct vcpu_svm *svm);
-void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu);
-void sev_es_vcpu_put(struct vcpu_svm *svm);
 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
+void sev_es_prepare_guest_switch(struct vcpu_svm *svm, unsigned int cpu);
 
 /* vmenter.S */
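
As a closing illustration of the flow this last patch establishes, the
following is a hedged, self-contained model (hypothetical names, kernel
details stubbed out): host state is saved lazily in the
prepare-guest-switch path, guarded by a guest_state_loaded flag, and
restored in the put path only when something was actually saved.

  /*
   * Hedged model of the deferral pattern; not the kernel implementation.
   */
  #include <stdbool.h>

  struct vcpu_model {
          bool guest_state_loaded;
  };

  static void save_host_state(struct vcpu_model *v)    { (void)v; /* vmsave + MSR reads */ }
  static void restore_host_state(struct vcpu_model *v) { (void)v; /* MSR writes */ }

  /* Runs just before entering the guest (.prepare_guest_switch()). */
  static void prepare_guest_switch(struct vcpu_model *v)
  {
          if (v->guest_state_loaded)
                  return;                 /* already saved since the last put */

          save_host_state(v);
          v->guest_state_loaded = true;
  }

  /* Runs from the vcpu_put path (return to userspace, preemption, ...). */
  static void prepare_host_switch(struct vcpu_model *v)
  {
          if (!v->guest_state_loaded)
                  return;                 /* nothing saved, nothing to restore */

          restore_host_state(v);
          v->guest_state_loaded = false;
  }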