From patchwork Tue Sep 14 15:48:12 2021
X-Patchwork-Id: 12494009
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 01/14] KVM: x86: nSVM: restore int_vector in svm_clear_vintr
Date: Tue, 14 Sep 2021 18:48:12 +0300
Message-Id: <20210914154825.104886-2-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>

In svm_clear_vintr we try to restore the virtual interrupt injection
that might be pending, but we fail to restore the interrupt vector.
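For context, a condensed sketch of the state that has to be put back when
L0 tears down its VINTR-based interrupt window while L2 runs (field names
as in the kernel's struct vmcb_control_area; this mirrors the two
statements in the diff below, error handling omitted):

	/* Re-install L1's pending virtual interrupt: the injection bits... */
	svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl &
				      V_IRQ_INJECTION_BITS_MASK;

	/* ...and, previously forgotten, the vector to inject: */
	svm->vmcb->control.int_vector = svm->nested.ctl.int_vector;

Without the second statement, the injection is re-armed with whatever
vector L0 had used for its interrupt window, not the one L1 asked for.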
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 05e8d4d27969..b2e710a3fff6 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1566,6 +1566,8 @@ static void svm_clear_vintr(struct vcpu_svm *svm)
 		svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl &
 			V_IRQ_INJECTION_BITS_MASK;
+
+		svm->vmcb->control.int_vector = svm->nested.ctl.int_vector;
 	}
 
 	vmcb_mark_dirty(svm->vmcb, VMCB_INTR);

From patchwork Tue Sep 14 15:48:13 2021
X-Patchwork-Id: 12494011
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 02/14] KVM: x86: selftests: test simultaneous uses of V_IRQ from L1 and L0
Date: Tue, 14 Sep 2021 18:48:13 +0300
Message-Id: <20210914154825.104886-3-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>
Test the following scenario:

* L1 disables virtual interrupt masking and the INTR intercept.

* L1 sets up a virtual interrupt to be injected into L2, and enters L2
  with interrupts disabled, so that the virtual interrupt stays pending.

* An external interrupt then arrives in L1. Since L1 doesn't intercept
  it, it should be delivered to L2 once L2 enables interrupts. To make
  that happen, L0 (ab)uses V_IRQ to set up an interrupt window, and
  returns to L2.

* L2 enables interrupts. This should trigger the interrupt window,
  injection of the external interrupt, and delivery of the virtual
  interrupt, which can now be done.

* Test that L2 indeed receives both interrupts.

This is the test that demonstrates the issue that was fixed in the
previous patch.
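In VMCB terms, the interesting part of the L1 setup is the following
excerpt (condensed from the test added below; the constants are the
selftests' SVM/APIC definitions):

	/* No virtual interrupt masking: L2 sees L1's physical interrupts. */
	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;

	/* L1 intercepts neither real nor virtual interrupts. */
	vmcb->control.intercept &= ~((1ULL << INTERCEPT_INTR) | (1ULL << INTERCEPT_VINTR));

	/* Leave a virtual interrupt pending in V_IRQ for L2... */
	vmcb->control.int_ctl |= V_IRQ_MASK | (0x1 << V_INTR_PRIO_SHIFT);
	vmcb->control.int_vector = VINTR_IRQ_NUMBER;

With this setup, both the L1-injected virtual interrupt and the external
interrupt raised via the LAPIC must land in L2 after its 'sti'.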
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/svm_int_ctl_test.c   | 128 ++++++++++++++++++
 3 files changed, 130 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 36896d251977..eb98958b15e4 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -23,6 +23,7 @@
 /x86_64/smm_test
 /x86_64/state_test
 /x86_64/svm_vmcall_test
+/x86_64/svm_int_ctl_test
 /x86_64/sync_regs_test
 /x86_64/tsc_msrs_test
 /x86_64/userspace_msr_exit_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c103873531e0..3b8b143daecc 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -56,6 +56,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/smm_test
 TEST_GEN_PROGS_x86_64 += x86_64/state_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_preemption_timer_test
 TEST_GEN_PROGS_x86_64 += x86_64/svm_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/svm_int_ctl_test
 TEST_GEN_PROGS_x86_64 += x86_64/sync_regs_test
 TEST_GEN_PROGS_x86_64 += x86_64/userspace_msr_exit_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_apic_access_test
diff --git a/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c b/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c
new file mode 100644
index 000000000000..df04f56ce859
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c
@@ -0,0 +1,128 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * svm_int_ctl_test
+ *
+ * Copyright (C) 2021, Red Hat, Inc.
+ *
+ * Nested SVM testing: test simultaneous use of V_IRQ from L1 and L0.
+ */
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "apic.h"
+
+#define VCPU_ID 0
+
+static struct kvm_vm *vm;
+
+bool vintr_irq_called;
+bool intr_irq_called;
+
+#define VINTR_IRQ_NUMBER 0x20
+#define INTR_IRQ_NUMBER 0x30
+
+static void vintr_irq_handler(struct ex_regs *regs)
+{
+	vintr_irq_called = true;
+}
+
+static void intr_irq_handler(struct ex_regs *regs)
+{
+	x2apic_write_reg(APIC_EOI, 0x00);
+	intr_irq_called = true;
+}
+
+static void l2_guest_code(struct svm_test_data *svm)
+{
+	/* This code raises interrupt INTR_IRQ_NUMBER in the L1's LAPIC,
+	 * and since L1 didn't enable virtual interrupt masking,
+	 * L2 should receive it and not L1.
+	 *
+	 * L2 also has virtual interrupt 'VINTR_IRQ_NUMBER' pending in V_IRQ
+	 * so it should also receive it after the following 'sti'.
+	 */
+	x2apic_write_reg(APIC_ICR,
+		APIC_DEST_SELF | APIC_INT_ASSERT | INTR_IRQ_NUMBER);
+
+	__asm__ __volatile__(
+		"sti\n"
+		"nop\n"
+	);
+
+	GUEST_ASSERT(vintr_irq_called);
+	GUEST_ASSERT(intr_irq_called);
+
+	__asm__ __volatile__(
+		"vmcall\n"
+	);
+}
+
+static void l1_guest_code(struct svm_test_data *svm)
+{
+	#define L2_GUEST_STACK_SIZE 64
+	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+	struct vmcb *vmcb = svm->vmcb;
+
+	x2apic_enable();
+
+	/* Prepare for L2 execution. */
+	generic_svm_setup(svm, l2_guest_code,
+			  &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+	/* No virtual interrupt masking */
+	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
+
+	/* No intercepts for real and virtual interrupts */
+	vmcb->control.intercept &= ~((1ULL << INTERCEPT_INTR) | (1ULL << INTERCEPT_VINTR));
+
+	/* Make a virtual interrupt VINTR_IRQ_NUMBER pending */
+	vmcb->control.int_ctl |= V_IRQ_MASK | (0x1 << V_INTR_PRIO_SHIFT);
+	vmcb->control.int_vector = VINTR_IRQ_NUMBER;
+
+	run_guest(vmcb, svm->vmcb_gpa);
+	GUEST_ASSERT(vmcb->control.exit_code == SVM_EXIT_VMMCALL);
+	GUEST_DONE();
+}
+
+int main(int argc, char *argv[])
+{
+	vm_vaddr_t svm_gva;
+
+	nested_svm_check_supported();
+
+	vm = vm_create_default(VCPU_ID, 0, (void *) l1_guest_code);
+
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(vm, VCPU_ID);
+
+	vm_install_exception_handler(vm, VINTR_IRQ_NUMBER, vintr_irq_handler);
+	vm_install_exception_handler(vm, INTR_IRQ_NUMBER, intr_irq_handler);
+
+	vcpu_alloc_svm(vm, &svm_gva);
+	vcpu_args_set(vm, VCPU_ID, 1, svm_gva);
+
+	struct kvm_run *run = vcpu_state(vm, VCPU_ID);
+	struct ucall uc;
+
+	vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+		    "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n",
+		    run->exit_reason,
+		    exit_reason_str(run->exit_reason));
+
+	switch (get_ucall(vm, VCPU_ID, &uc)) {
+	case UCALL_ABORT:
+		TEST_FAIL("%s", (const char *)uc.args[0]);
+		break;
+		/* NOT REACHED */
+	case UCALL_DONE:
+		goto done;
+	default:
+		TEST_FAIL("Unknown ucall 0x%lx.", uc.cmd);
+	}
done:
+	kvm_vm_free(vm);
+	return 0;
+}

From patchwork Tue Sep 14 15:48:14 2021
X-Patchwork-Id: 12494013
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 03/14] KVM: x86: nSVM: test eax for 4K alignment for GP errata workaround
Date: Tue, 14 Sep 2021 18:48:14 +0300
Message-Id: <20210914154825.104886-4-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>
The GP SVM errata workaround made the #GP handler always emulate
the SVM instructions. However, these instructions #GP in case the
operand is not 4K aligned, but the workaround code didn't check this,
and we ended up emulating them anyway.

This is only an emulation-accuracy bug, as there is no harm in KVM
reading/writing unaligned vmcb images.
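In other words, the missing check is the standard page-alignment test
(a minimal sketch; the helper name is hypothetical, and PAGE_MASK is the
usual kernel definition, ~(PAGE_SIZE - 1)):

	/*
	 * SVM instructions take the physical address of a VMCB/save
	 * area in RAX, and that address must be 4K aligned, i.e. its
	 * low 12 bits must all be zero.
	 */
	static bool svm_operand_aligned(unsigned long rax)
	{
		return (rax & ~PAGE_MASK) == 0;	/* ~PAGE_MASK == 0xfff */
	}

If the address is not aligned, the #GP must simply be reinjected
instead of emulating the instruction.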
Fixes: 82a11e9c6fa2 ("KVM: SVM: Add emulation support for #GP triggered by SVM instructions")
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b2e710a3fff6..6645542df9bd 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2224,6 +2224,10 @@ static int gp_interception(struct kvm_vcpu *vcpu)
 	if (error_code)
 		goto reinject;
 
+	/* All SVM instructions expect page aligned RAX */
+	if (svm->vmcb->save.rax & ~PAGE_MASK)
+		goto reinject;
+
 	/* Decode the instruction for usage later */
 	if (x86_decode_emulated_instruction(vcpu, 0, NULL, 0) != EMULATION_OK)
 		goto reinject;

From patchwork Tue Sep 14 15:48:15 2021
X-Patchwork-Id: 12494015
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 04/14] KVM: x86: nSVM: don't copy pause related settings
Date: Tue, 14 Sep 2021 18:48:15 +0300
Message-Id: <20210914154825.104886-5-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>

According to the SDM, the CPU never modifies these settings.
It loads them on VM entry and updates an internal copy instead.

Also, don't load them from vmcb12, as we don't expose these features
to the nested guest yet.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 2545d0c61985..476e01f98035 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -551,9 +551,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 	svm->vmcb->control.event_inj = svm->nested.ctl.event_inj;
 	svm->vmcb->control.event_inj_err = svm->nested.ctl.event_inj_err;
 
-	svm->vmcb->control.pause_filter_count = svm->nested.ctl.pause_filter_count;
-	svm->vmcb->control.pause_filter_thresh = svm->nested.ctl.pause_filter_thresh;
-
 	nested_svm_transition_tlb_flush(vcpu);
 
 	/* Enter Guest-Mode */
@@ -808,11 +805,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	vmcb12->control.event_inj = svm->nested.ctl.event_inj;
 	vmcb12->control.event_inj_err = svm->nested.ctl.event_inj_err;
 
-	vmcb12->control.pause_filter_count =
-		svm->vmcb->control.pause_filter_count;
-	vmcb12->control.pause_filter_thresh =
-		svm->vmcb->control.pause_filter_thresh;
-
 	nested_svm_copy_common_state(svm->nested.vmcb02.ptr, svm->vmcb01.ptr);
 
 	svm_switch_vmcb(svm, &svm->vmcb01);

From patchwork Tue Sep 14 15:48:16 2021
X-Patchwork-Id: 12494017
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 05/14] KVM: x86: nSVM: don't copy virt_ext from vmcb12
Date: Tue, 14 Sep 2021 18:48:16 +0300
Message-Id: <20210914154825.104886-6-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>
This field corresponds to features that we don't expose yet to L2.

While currently there are no CVE-worthy features in this field,
if AMD adds more features to it, that could allow guest escapes
similar to CVE-2021-3653 and CVE-2021-3656.
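The safer pattern, already used for int_ctl a few lines above in the same
function, is to merge only an explicit allowlist of bits from vmcb12 with
the host-owned bits from vmcb01. A sketch of how virt_ext could be handled
the same way once bits are deliberately virtualized (the virt_ext_*_bits
masks are hypothetical placeholders, not existing kernel symbols):

	/*
	 * Only bits we explicitly choose to virtualize for L2 may come
	 * from vmcb12; everything else stays owned by vmcb01 (L0).
	 */
	svm->vmcb->control.virt_ext =
		(svm->nested.ctl.virt_ext & virt_ext_vmcb12_bits) |
		(svm->vmcb01.ptr->control.virt_ext & virt_ext_vmcb01_bits);

Later patches in this series do exactly this for the LBR and virtual
VMLOAD/VMSAVE enable bits.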
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 476e01f98035..4df59d9795b6 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -545,7 +545,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 		(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
 		(svm->vmcb01.ptr->control.int_ctl & int_ctl_vmcb01_bits);
 
-	svm->vmcb->control.virt_ext = svm->nested.ctl.virt_ext;
 	svm->vmcb->control.int_vector = svm->nested.ctl.int_vector;
 	svm->vmcb->control.int_state = svm->nested.ctl.int_state;
 	svm->vmcb->control.event_inj = svm->nested.ctl.event_inj;

From patchwork Tue Sep 14 15:48:17 2021
X-Patchwork-Id: 12494019
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 06/14] KVM: x86: SVM: don't set VMLOAD/VMSAVE intercepts on vCPU reset
Date: Tue, 14 Sep 2021 18:48:17 +0300
Message-Id: <20210914154825.104886-7-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>

commit adc2a23734ac ("KVM: nSVM: improve SYSENTER emulation on AMD")
made init_vmcb set the VMLOAD/VMSAVE intercepts unconditionally, and
relied on svm_vcpu_after_set_cpuid to clear them when possible.

However, init_vmcb is also called when the vCPU is reset, and it is
not followed by another call to svm_vcpu_after_set_cpuid because the
CPUID is already set. This mistake leaves the VMLOAD/VMSAVE intercepts
set when they are not needed, and harms the performance of the nested
guest.

Fixes: adc2a23734ac ("KVM: nSVM: improve SYSENTER emulation on AMD")
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 6645542df9bd..861ac9f74331 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1199,8 +1199,6 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
 	svm_set_intercept(svm, INTERCEPT_SHUTDOWN);
 	svm_set_intercept(svm, INTERCEPT_VMRUN);
 	svm_set_intercept(svm, INTERCEPT_VMMCALL);
-	svm_set_intercept(svm, INTERCEPT_VMLOAD);
-	svm_set_intercept(svm, INTERCEPT_VMSAVE);
 	svm_set_intercept(svm, INTERCEPT_STGI);
 	svm_set_intercept(svm, INTERCEPT_CLGI);
 	svm_set_intercept(svm, INTERCEPT_SKINIT);
@@ -1377,6 +1375,10 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	svm->guest_state_loaded = false;
 
 	svm_switch_vmcb(svm, &svm->vmcb01);
+
+	svm_set_intercept(svm, INTERCEPT_VMLOAD);
+	svm_set_intercept(svm, INTERCEPT_VMSAVE);
+
 	init_vmcb(vcpu);
 
 	svm_vcpu_init_msrpm(vcpu, svm->msrpm);

From patchwork Tue Sep 14 15:48:18 2021
X-Patchwork-Id: 12494021
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 07/14] KVM: x86: SVM: add warning for CVE-2021-3656
Date: Tue, 14 Sep 2021 18:48:18 +0300
Message-Id: <20210914154825.104886-8-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>

Just in case, add a warning ensuring that on guest entry either both
the VMLOAD and VMSAVE intercepts are enabled, or virtual VMLOAD/VMSAVE
is enabled.
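Stated as an invariant, the condition being asserted is roughly the
following (a sketch only; the helper name is hypothetical, and the actual
patch open-codes the check in svm_vcpu_run):

	/*
	 * Either L0 intercepts VMLOAD/VMSAVE (and emulates them safely),
	 * or the hardware virtualizes them so that they can only touch
	 * the guest's own save area.  If neither holds, a VMLOAD/VMSAVE
	 * executed by the guest could access host state, which is
	 * exactly the CVE-2021-3656 scenario.
	 */
	static bool vmload_vmsave_safe(struct vcpu_svm *svm)
	{
		return (svm_is_intercept(svm, INTERCEPT_VMLOAD) &&
			svm_is_intercept(svm, INTERCEPT_VMSAVE)) ||
		       (svm->vmcb->control.virt_ext &
			VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK);
	}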
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 861ac9f74331..deeebd05f682 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3784,6 +3784,12 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
 
+	/* Check that CVE-2021-3656 can't happen again */
+	if (!svm_is_intercept(svm, INTERCEPT_VMLOAD) ||
+	    !svm_is_intercept(svm, INTERCEPT_VMSAVE))
+		WARN_ON(!(svm->vmcb->control.virt_ext &
+			  VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK));
+
 	sync_lapic_to_cr8(vcpu);
 
 	if (unlikely(svm->asid != svm->vmcb->control.asid)) {

From patchwork Tue Sep 14 15:48:19 2021
X-Patchwork-Id: 12494023
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 08/14] KVM: x86: SVM: add module param to control LBR virtualization
Date: Tue, 14 Sep 2021 18:48:19 +0300
Message-Id: <20210914154825.104886-9-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>

This is useful for debugging, and also makes it consistent with the
rest of the optional SVM features.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/svm.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index deeebd05f682..b08c5d826208 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -186,6 +186,11 @@ module_param(vls, int, 0444);
 static int vgif = true;
 module_param(vgif, int, 0444);
 
+/* enable/disable LBR virtualization */
+static int lbrv = true;
+module_param(lbrv, int, 0444);
+
+
 /*
  * enable / disable AVIC.  Because the defaults differ for APICv
  * support between VMX and SVM we cannot use module_param_named.
@@ -1059,6 +1064,13 @@ static __init int svm_hardware_setup(void)
 			pr_info("Virtual GIF supported\n");
 	}
 
+	if (lbrv) {
+		if (!boot_cpu_has(X86_FEATURE_LBRV))
+			lbrv = false;
+		else
+			pr_info("LBR virtualization supported\n");
+	}
+
 	svm_set_cpu_caps();
 
 	/*
@@ -2920,7 +2932,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		svm->tsc_aux = data;
 		break;
 	case MSR_IA32_DEBUGCTLMSR:
-		if (!boot_cpu_has(X86_FEATURE_LBRV)) {
+		if (!lbrv) {
 			vcpu_unimpl(vcpu, "%s: MSR_IA32_DEBUGCTL 0x%llx, nop\n",
 				    __func__, data);
 			break;

From patchwork Tue Sep 14 15:48:20 2021
X-Patchwork-Id: 12494025
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 09/14] KVM: x86: nSVM: correctly virtualize LBR msrs when L2 is running
Date: Tue, 14 Sep 2021 18:48:20 +0300
Message-Id: <20210914154825.104886-10-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>

When L2 is running without LBR virtualization, we should ensure
that L1's LBR MSRs continue to update as usual.
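The rule this patch implements can be summarized as follows (a condensed
sketch of the vmcb-selection logic introduced below):

	/*
	 * Which VMCB holds the live LBR state?
	 *
	 * - LBR virtualization off: always vmcb01, so that L1's LBR
	 *   MSRs keep updating even while L2 runs, and nothing has to
	 *   be copied on nested transitions.
	 * - LBR virtualization on:  the current VMCB (vmcb02 while L2
	 *   runs); the MSRs are moved between vmcb01 and vmcb02 when
	 *   the feature is toggled.
	 */
	struct vmcb *vmcb =
		(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) ?
			svm->vmcb : svm->vmcb01.ptr;

All LBR MSR reads and the DEBUGCTL write below are routed through this
selection, instead of unconditionally using the current VMCB.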
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 11 +++++
 arch/x86/kvm/svm/svm.c    | 98 +++++++++++++++++++++++++++++++--------
 arch/x86/kvm/svm/svm.h    |  2 +
 3 files changed, 92 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 4df59d9795b6..c6f09b591696 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -502,6 +502,9 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
 		svm->vcpu.arch.dr6  = vmcb12->save.dr6 | DR6_ACTIVE_LOW;
 		vmcb_mark_dirty(svm->vmcb, VMCB_DR);
 	}
+
+	if (unlikely(svm->vmcb01.ptr->control.virt_ext & LBR_CTL_ENABLE_MASK))
+		svm_copy_lbrs(svm->vmcb01.ptr, svm->vmcb);
 }
 
 static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
@@ -550,6 +553,9 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 	svm->vmcb->control.event_inj = svm->nested.ctl.event_inj;
 	svm->vmcb->control.event_inj_err = svm->nested.ctl.event_inj_err;
 
+	svm->vmcb->control.virt_ext = svm->vmcb01.ptr->control.virt_ext &
+				      LBR_CTL_ENABLE_MASK;
+
 	nested_svm_transition_tlb_flush(vcpu);
 
 	/* Enter Guest-Mode */
@@ -808,6 +814,11 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	svm_switch_vmcb(svm, &svm->vmcb01);
 
+	if (unlikely(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK)) {
+		svm_copy_lbrs(svm->nested.vmcb02.ptr, svm->vmcb);
+		svm_update_lbrv(vcpu);
+	}
+
 	/*
 	 * On vmexit the GIF is set to false and
 	 * no event can be injected in L1.
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b08c5d826208..981cc9765b95 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -792,6 +792,17 @@ static void init_msrpm_offsets(void)
 	}
 }
 
+void svm_copy_lbrs(struct vmcb *from_vmcb, struct vmcb *to_vmcb)
+{
+	to_vmcb->save.dbgctl         = from_vmcb->save.dbgctl;
+	to_vmcb->save.br_from        = from_vmcb->save.br_from;
+	to_vmcb->save.br_to          = from_vmcb->save.br_to;
+	to_vmcb->save.last_excp_from = from_vmcb->save.last_excp_from;
+	to_vmcb->save.last_excp_to   = from_vmcb->save.last_excp_to;
+
+	vmcb_mark_dirty(to_vmcb, VMCB_LBR);
+}
+
 static void svm_enable_lbrv(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -801,6 +812,10 @@ static void svm_enable_lbrv(struct kvm_vcpu *vcpu)
 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1);
 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1);
 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);
+
+	/* Move the LBR msrs to the vmcb02 so that the guest can see them. */
+	if (is_guest_mode(vcpu))
+		svm_copy_lbrs(svm->vmcb01.ptr, svm->vmcb);
 }
 
 static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
@@ -812,6 +827,63 @@ static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 0, 0);
 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 0, 0);
 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
+
+	/*
+	 * Move the LBR msrs back to the vmcb01 to avoid copying them
+	 * on nested guest entries.
+	 */
+	if (is_guest_mode(vcpu))
+		svm_copy_lbrs(svm->vmcb, svm->vmcb01.ptr);
+}
+
+static u64 svm_get_lbr_msr(struct vcpu_svm *svm, u32 index)
+{
+	/*
+	 * If the LBR virtualization is disabled, the LBR msrs are always
+	 * kept in the vmcb01 to avoid copying them on nested guest entries.
+	 *
+	 * If nested, and the LBR virtualization is enabled/disabled, the msrs
+	 * are moved between the vmcb01 and vmcb02 as needed.
+	 */
+	struct vmcb *vmcb =
+		(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) ?
+			svm->vmcb : svm->vmcb01.ptr;
+
+	switch (index) {
+	case MSR_IA32_DEBUGCTLMSR:
+		return vmcb->save.dbgctl;
+	case MSR_IA32_LASTBRANCHFROMIP:
+		return vmcb->save.br_from;
+	case MSR_IA32_LASTBRANCHTOIP:
+		return vmcb->save.br_to;
+	case MSR_IA32_LASTINTFROMIP:
+		return vmcb->save.last_excp_from;
+	case MSR_IA32_LASTINTTOIP:
+		return vmcb->save.last_excp_to;
+	default:
+		KVM_BUG(false, svm->vcpu.kvm,
+			"%s: Unknown MSR 0x%x", __func__, index);
+		return 0;
+	}
+}
+
+void svm_update_lbrv(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	bool enable_lbrv = svm_get_lbr_msr(svm, MSR_IA32_DEBUGCTLMSR) &
+			   DEBUGCTLMSR_LBR;
+
+	bool current_enable_lbrv = !!(svm->vmcb->control.virt_ext &
+				      LBR_CTL_ENABLE_MASK);
+
+	if (enable_lbrv == current_enable_lbrv)
+		return;
+
+	if (enable_lbrv)
+		svm_enable_lbrv(vcpu);
+	else
+		svm_disable_lbrv(vcpu);
 }
 
 void disable_nmi_singlestep(struct vcpu_svm *svm)
@@ -2704,25 +2776,12 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_TSC_AUX:
 		msr_info->data = svm->tsc_aux;
 		break;
-	/*
-	 * Nobody will change the following 5 values in the VMCB so we can
-	 * safely return them on rdmsr. They will always be 0 until LBRV is
-	 * implemented.
-	 */
 	case MSR_IA32_DEBUGCTLMSR:
-		msr_info->data = svm->vmcb->save.dbgctl;
-		break;
 	case MSR_IA32_LASTBRANCHFROMIP:
-		msr_info->data = svm->vmcb->save.br_from;
-		break;
 	case MSR_IA32_LASTBRANCHTOIP:
-		msr_info->data = svm->vmcb->save.br_to;
-		break;
 	case MSR_IA32_LASTINTFROMIP:
-		msr_info->data = svm->vmcb->save.last_excp_from;
-		break;
 	case MSR_IA32_LASTINTTOIP:
-		msr_info->data = svm->vmcb->save.last_excp_to;
+		msr_info->data = svm_get_lbr_msr(svm, msr_info->index);
 		break;
 	case MSR_VM_HSAVE_PA:
 		msr_info->data = svm->nested.hsave_msr;
@@ -2940,12 +2999,13 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		if (data & DEBUGCTL_RESERVED_BITS)
 			return 1;
 
-		svm->vmcb->save.dbgctl = data;
-		vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
-		if (data & (1ULL<<0))
-			svm_enable_lbrv(vcpu);
+		if (svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK)
+			svm->vmcb->save.dbgctl = data;
 		else
-			svm_disable_lbrv(vcpu);
+			svm->vmcb01.ptr->save.dbgctl = data;
+
+		svm_update_lbrv(vcpu);
+
 		break;
 	case MSR_VM_HSAVE_PA:
 		/*
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 524d943f3efc..0c351e9c4d6d 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -417,6 +417,8 @@ u32 svm_msrpm_offset(u32 msr);
 u32 *svm_vcpu_alloc_msrpm(void);
 void svm_vcpu_init_msrpm(struct kvm_vcpu *vcpu, u32 *msrpm);
 void svm_vcpu_free_msrpm(u32 *msrpm);
+void svm_copy_lbrs(struct vmcb *from_vmcb, struct vmcb *to_vmcb);
+void svm_update_lbrv(struct kvm_vcpu *vcpu);
 
 int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
 void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);

From patchwork Tue Sep 14 15:48:21 2021
X-Patchwork-Id: 12494027
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 10/14] KVM: x86: nSVM: implement nested LBR virtualization
Date: Tue, 14 Sep 2021 18:48:21 +0300
Message-Id: <20210914154825.104886-11-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>
This was tested with a kvm-unit-test that was developed for this
purpose.
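With this patch, the effective LBR-virtualization state of vmcb02 becomes
the OR of what L0 wants for L1 and what L1 requested for L2; conceptually
(a sketch of the decision implemented below, not code from the patch):

	/*
	 * LBR virtualization must stay enabled in vmcb02 if either:
	 *  - L0 enabled it for L1 (vmcb01 has LBR_CTL_ENABLE set), or
	 *  - L1 enabled it for L2 and LBRV is exposed to the guest.
	 */
	bool vmcb02_lbrv =
		(svm->vmcb01.ptr->control.virt_ext & LBR_CTL_ENABLE_MASK) ||
		(svm->lbrv_enabled &&
		 (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK));

The nested entry/exit paths then copy the LBR MSRs from/to vmcb12 when
L1 enabled the feature for L2, and from/to vmcb01 otherwise.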
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/svm/nested.c | 21 +++++++++++++++++++--
 arch/x86/kvm/svm/svm.c    |  9 +++++++++
 arch/x86/kvm/svm/svm.h    |  1 +
 3 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index c6f09b591696..aadbff9b6514 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -503,8 +503,19 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
 		vmcb_mark_dirty(svm->vmcb, VMCB_DR);
 	}
 
-	if (unlikely(svm->vmcb01.ptr->control.virt_ext & LBR_CTL_ENABLE_MASK))
+	if (unlikely(svm->lbrv_enabled && (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
+		svm_copy_lbrs(vmcb12, svm->vmcb);
+		/*
+		 * TODO: to avoid TOC/TOU race,
+		 * make sure that we only pick LBR enable bit from
+		 * the guest.
+		 */
+		svm->vmcb->save.dbgctl &= LBR_CTL_ENABLE_MASK;
+		svm_update_lbrv(&svm->vcpu);
+
+	} else if (unlikely(svm->vmcb01.ptr->control.virt_ext & LBR_CTL_ENABLE_MASK)) {
 		svm_copy_lbrs(svm->vmcb01.ptr, svm->vmcb);
+	}
 }
 
 static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
@@ -555,6 +566,9 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 
 	svm->vmcb->control.virt_ext = svm->vmcb01.ptr->control.virt_ext &
 				      LBR_CTL_ENABLE_MASK;
+	if (svm->lbrv_enabled)
+		svm->vmcb->control.virt_ext |=
+			(svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK);
 
 	nested_svm_transition_tlb_flush(vcpu);
 
@@ -814,7 +828,10 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	svm_switch_vmcb(svm, &svm->vmcb01);
 
-	if (unlikely(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK)) {
+	if (unlikely(svm->lbrv_enabled && (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
+		svm_copy_lbrs(svm->nested.vmcb02.ptr, vmcb12);
+		svm_update_lbrv(vcpu);
+	} else if (unlikely(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK)) {
 		svm_copy_lbrs(svm->nested.vmcb02.ptr, svm->vmcb);
 		svm_update_lbrv(vcpu);
 	}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 981cc9765b95..66f99e8d804c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -877,6 +877,10 @@ void svm_update_lbrv(struct kvm_vcpu *vcpu)
 	bool current_enable_lbrv = !!(svm->vmcb->control.virt_ext &
 				      LBR_CTL_ENABLE_MASK);
 
+	if (unlikely(is_guest_mode(vcpu) && svm->lbrv_enabled))
+		if (unlikely(svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))
+			enable_lbrv = true;
+
 	if (enable_lbrv == current_enable_lbrv)
 		return;
 
@@ -1006,6 +1010,9 @@ static __init void svm_set_cpu_caps(void)
 		if (npt_enabled)
 			kvm_cpu_cap_set(X86_FEATURE_NPT);
 
+		if (lbrv)
+			kvm_cpu_cap_set(X86_FEATURE_LBRV);
+
 		/* Nested VM can receive #VMEXIT instead of triggering #GP */
 		kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK);
 	}
@@ -4081,6 +4088,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	svm->nrips_enabled = kvm_cpu_cap_has(X86_FEATURE_NRIPS) &&
 			     guest_cpuid_has(vcpu, X86_FEATURE_NRIPS);
 
+	svm->lbrv_enabled = lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV);
+
 	svm_recalc_instruction_intercepts(vcpu, svm);
 
 	/* For sev guests, the memory encryption bit is not reserved in CR3. */
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0c351e9c4d6d..c9a81e18707d 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -161,6 +161,7 @@ struct vcpu_svm {
 
 	/* cached guest cpuid flags for faster access */
 	bool nrips_enabled : 1;
+	bool lbrv_enabled : 1;
 
 	u32 ldr_reg;
 	u32 dfr_reg;

From patchwork Tue Sep 14 15:48:22 2021
X-Patchwork-Id: 12494029
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH 11/14] KVM: x86: nSVM: implement nested VMLOAD/VMSAVE
Date: Tue, 14 Sep 2021 18:48:22 +0300
Message-Id: <20210914154825.104886-12-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>
Peter Anvin" , Jim Mattson , Thomas Gleixner , x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)), Shuah Khan , Wanpeng Li Subject: [PATCH 11/14] KVM: x86: nSVM: implement nested VMLOAD/VMSAVE Date: Tue, 14 Sep 2021 18:48:22 +0300 Message-Id: <20210914154825.104886-12-mlevitsk@redhat.com> In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com> References: <20210914154825.104886-1-mlevitsk@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org This was tested by booting L1,L2,L3 (all Linux) and checking that no VMLOAD/VMSAVE vmexits happened. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/svm/nested.c | 35 +++++++++++++++++++++++++++++------ arch/x86/kvm/svm/svm.c | 7 +++++++ arch/x86/kvm/svm/svm.h | 12 +++++++++--- 3 files changed, 45 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index aadbff9b6514..29b5d0f85960 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -119,6 +119,20 @@ static void nested_svm_uninit_mmu_context(struct kvm_vcpu *vcpu) vcpu->arch.walk_mmu = &vcpu->arch.root_mmu; } +static bool nested_vmcb_needs_vls_intercept(struct vcpu_svm *svm) +{ + if (!svm->v_vmload_vmsave_enabled) + return true; + + if (!nested_npt_enabled(svm)) + return true; + + if (!(svm->nested.ctl.virt_ext & VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK)) + return true; + + return false; +} + void recalc_intercepts(struct vcpu_svm *svm) { struct vmcb_control_area *c, *h, *g; @@ -159,8 +173,17 @@ void recalc_intercepts(struct vcpu_svm *svm) if (!intercept_smi) vmcb_clr_intercept(c, INTERCEPT_SMI); - vmcb_set_intercept(c, INTERCEPT_VMLOAD); - vmcb_set_intercept(c, INTERCEPT_VMSAVE); + if (nested_vmcb_needs_vls_intercept(svm)) { + /* + * If the virtual VMLOAD/VMSAVE is not enabled for the L2, + * we must intercept these instructions to correctly + * emulate them in case L1 doesn't intercept them. 
+		 */
+		vmcb_set_intercept(c, INTERCEPT_VMLOAD);
+		vmcb_set_intercept(c, INTERCEPT_VMSAVE);
+	} else {
+		WARN_ON(!(c->virt_ext & VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK));
+	}
 }
 
 static void copy_vmcb_control_area(struct vmcb_control_area *dst,
@@ -387,10 +410,7 @@ static void nested_save_pending_event_to_vmcb12(struct vcpu_svm *svm,
 	vmcb12->control.exit_int_info = exit_int_info;
 }
 
-static inline bool nested_npt_enabled(struct vcpu_svm *svm)
-{
-	return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE;
-}
+
 
 static void nested_svm_transition_tlb_flush(struct kvm_vcpu *vcpu)
 {
@@ -570,6 +590,9 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 		svm->vmcb->control.virt_ext |=
 			(svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK);
 
+	if (!nested_vmcb_needs_vls_intercept(svm))
+		svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
+
 	nested_svm_transition_tlb_flush(vcpu);
 
 	/* Enter Guest-Mode */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 66f99e8d804c..6504e40e0985 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1013,6 +1013,9 @@ static __init void svm_set_cpu_caps(void)
 		if (lbrv)
 			kvm_cpu_cap_set(X86_FEATURE_LBRV);
 
+		if (vls)
+			kvm_cpu_cap_set(X86_FEATURE_V_VMSAVE_VMLOAD);
+
 		/* Nested VM can receive #VMEXIT instead of triggering #GP */
 		kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK);
 	}
@@ -4090,6 +4093,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	svm->lbrv_enabled = lbrv && guest_cpuid_has(vcpu, X86_FEATURE_LBRV);
 
+	svm->v_vmload_vmsave_enabled = vls && guest_cpuid_has(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
+
 	svm_recalc_instruction_intercepts(vcpu, svm);
 
 	/* For sev guests, the memory encryption bit is not reserved in CR3. */
@@ -4129,6 +4134,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 0, 0);
 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 0, 0);
+
+		svm->v_vmload_vmsave_enabled = false;
 	} else {
 		/*
 		 * If hardware supports Virtual VMLOAD VMSAVE then enable it
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index c9a81e18707d..029340a7fbcc 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -159,9 +159,10 @@ struct vcpu_svm {
 	unsigned int3_injected;
 	unsigned long int3_rip;
 
-	/* cached guest cpuid flags for faster access */
-	bool nrips_enabled	: 1;
-	bool lbrv_enabled	: 1;
+	/* optional nested SVM features that are enabled for this guest */
+	bool nrips_enabled	: 1;
+	bool lbrv_enabled	: 1;
+	bool v_vmload_vmsave_enabled : 1;
 
 	u32 ldr_reg;
 	u32 dfr_reg;
@@ -409,6 +410,11 @@ static inline bool gif_set(struct vcpu_svm *svm)
 	return !!(svm->vcpu.arch.hflags & HF_GIF_MASK);
 }
 
+static inline bool nested_npt_enabled(struct vcpu_svm *svm)
+{
+	return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE;
+}
+
 /* svm.c */
 #define MSR_INVALID				0xffffffffU
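
As background: VMLOAD/VMSAVE transfer the VMCB state that VMRUN/#VMEXIT
do not switch (the hidden parts of FS/GS/TR/LDTR plus KERNEL_GS_BASE,
STAR/LSTAR/CSTAR/SFMASK and the SYSENTER MSRs), so without the virtual
VMLOAD/VMSAVE extension every such L2 context switch costs a vmexit.
The condition for leaving them to hardware is just the inverse of
nested_vmcb_needs_vls_intercept() above; as an illustrative restatement
(l2_vls_in_hardware is a hypothetical name, not in the patch):

static bool l2_vls_in_hardware(struct vcpu_svm *svm)
{
	return svm->v_vmload_vmsave_enabled &&	/* host has vls and L1's CPUID shows it */
	       nested_npt_enabled(svm) &&	/* L1 runs L2 with nested paging        */
	       (svm->nested.ctl.virt_ext & VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK);
}
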
From patchwork Tue Sep 14 15:48:23 2021
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 12/14] KVM: x86: SVM: add module param to control TSC scaling
Date: Tue, 14 Sep 2021 18:48:23 +0300
Message-Id: <20210914154825.104886-13-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>

This makes it easy to simulate a CPU that lacks this feature.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/svm.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 6504e40e0985..001a5af842ba 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -191,6 +191,9 @@
 static int lbrv = true;
 module_param(lbrv, int, 0444);
 
+static int tsc_scaling = true;
+module_param(tsc_scaling, int, 0444);
+
 /*
  * enable / disable AVIC. Because the defaults differ for APICv
  * support between VMX and SVM we cannot use module_param_named.
@@ -471,7 +474,7 @@ static int has_svm(void)
 static void svm_hardware_disable(void)
 {
 	/* Make sure we clean up behind us */
-	if (static_cpu_has(X86_FEATURE_TSCRATEMSR))
+	if (tsc_scaling)
 		wrmsrl(MSR_AMD64_TSC_RATIO, TSC_RATIO_DEFAULT);
 
 	cpu_svm_disable();
@@ -514,6 +517,10 @@ static int svm_hardware_enable(void)
 	wrmsrl(MSR_VM_HSAVE_PA, __sme_page_pa(sd->save_area));
 
 	if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) {
+		/*
+		 * Set the default value even if we don't use TSC scaling,
+		 * to avoid leaving a stale value in the MSR.
+		 */
 		wrmsrl(MSR_AMD64_TSC_RATIO, TSC_RATIO_DEFAULT);
 		__this_cpu_write(current_tsc_ratio, TSC_RATIO_DEFAULT);
 	}
@@ -1063,10 +1070,15 @@ static __init int svm_hardware_setup(void)
 	if (boot_cpu_has(X86_FEATURE_FXSR_OPT))
 		kvm_enable_efer_bits(EFER_FFXSR);
 
-	if (boot_cpu_has(X86_FEATURE_TSCRATEMSR)) {
-		kvm_has_tsc_control = true;
-		kvm_max_tsc_scaling_ratio = TSC_RATIO_MAX;
-		kvm_tsc_scaling_ratio_frac_bits = 32;
+	if (tsc_scaling) {
+		if (!boot_cpu_has(X86_FEATURE_TSCRATEMSR)) {
+			tsc_scaling = false;
+		} else {
+			pr_info("TSC scaling supported\n");
+			kvm_has_tsc_control = true;
+			kvm_max_tsc_scaling_ratio = TSC_RATIO_MAX;
+			kvm_tsc_scaling_ratio_frac_bits = 32;
+		}
 	}
 
 	tsc_aux_uret_slot = kvm_add_user_return_msr(MSR_TSC_AUX);
@@ -1543,7 +1555,7 @@ static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
 		vmsave(__sme_page_pa(sd->save_area));
 	}
 
-	if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) {
+	if (tsc_scaling) {
 		u64 tsc_ratio = vcpu->arch.tsc_scaling_ratio;
 
 		if (tsc_ratio != __this_cpu_read(current_tsc_ratio)) {
 			__this_cpu_write(current_tsc_ratio, tsc_ratio);
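
For context (my summary, not from the patch): MSR_AMD64_TSC_RATIO holds
the guest/host TSC frequency ratio in 8.32 fixed point, which is why the
setup code above reports 32 fractional bits, and why TSC_RATIO_DEFAULT
(the encoding of 1.0) is written back on both the enable and disable
paths to avoid a stale multiplier. A small freestanding sketch of the
encoding (make_tsc_ratio is a hypothetical helper):

#include <stdint.h>

/* 8.32 fixed point: bits 39:32 are the integer part, bits 31:0 the fraction. */
static uint64_t make_tsc_ratio(uint32_t int_part, uint32_t frac_part)
{
	return ((uint64_t)int_part << 32) | frac_part;
}

/*
 * make_tsc_ratio(1, 0)          == 1ULL << 32: guest TSC at host speed
 * make_tsc_ratio(0, 0x80000000) == 0.5:        guest TSC at half speed
 */
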
From patchwork Tue Sep 14 15:48:24 2021
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 13/14] KVM: x86: nSVM: implement nested TSC scaling
Date: Tue, 14 Sep 2021 18:48:24 +0300
Message-Id: <20210914154825.104886-14-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>

This was tested by booting a nested guest with a 1 GHz TSC, observing
the clocks, and doing about 100 cycles of migration.

Note that a QEMU patch is needed to support migration, because the new
MSR has to be included in the migration state; that patch will be sent
to the QEMU mailing list soon.

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/nested.c | 29 +++++++++++++++++++++++++++--
 arch/x86/kvm/svm/svm.c    | 30 ++++++++++++++++++++++++++++--
 arch/x86/kvm/svm/svm.h    |  5 +++++
 arch/x86/kvm/vmx/vmx.c    |  1 +
 arch/x86/kvm/x86.c        |  1 +
 5 files changed, 62 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 29b5d0f85960..4c26417f36b8 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -572,8 +572,17 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 	if (nested_npt_enabled(svm))
 		nested_svm_init_mmu_context(vcpu);
 
-	svm->vmcb->control.tsc_offset = vcpu->arch.tsc_offset =
-		vcpu->arch.l1_tsc_offset + svm->nested.ctl.tsc_offset;
+	vcpu->arch.tsc_offset = kvm_calc_nested_tsc_offset(
+			vcpu->arch.l1_tsc_offset,
+			svm->nested.ctl.tsc_offset,
+			svm->tsc_ratio_msr);
+
+	svm->vmcb->control.tsc_offset = vcpu->arch.tsc_offset;
+
+	if (svm->tsc_ratio_msr != kvm_default_tsc_scaling_ratio) {
+		WARN_ON(!svm->tsc_scaling_enabled);
+		nested_svm_update_tsc_ratio_msr(vcpu);
+	}
 
 	svm->vmcb->control.int_ctl =
 		(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
@@ -872,6 +881,12 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 		vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
 	}
 
+	if (svm->tsc_ratio_msr != kvm_default_tsc_scaling_ratio) {
+		WARN_ON(!svm->tsc_scaling_enabled);
+		vcpu->arch.tsc_scaling_ratio = vcpu->arch.l1_tsc_scaling_ratio;
+		svm_write_tsc_multiplier(vcpu, vcpu->arch.tsc_scaling_ratio);
+	}
+
 	svm->nested.ctl.nested_cr3 = 0;
 
 	/*
@@ -1259,6 +1274,16 @@ int nested_svm_exit_special(struct vcpu_svm *svm)
 	return NESTED_EXIT_CONTINUE;
 }
 
+void nested_svm_update_tsc_ratio_msr(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	vcpu->arch.tsc_scaling_ratio =
+		kvm_calc_nested_tsc_multiplier(vcpu->arch.l1_tsc_scaling_ratio,
+					       svm->tsc_ratio_msr);
+	svm_write_tsc_multiplier(vcpu, vcpu->arch.tsc_scaling_ratio);
+}
+
 static int svm_get_nested_state(struct kvm_vcpu *vcpu,
 				struct kvm_nested_state __user *user_kvm_nested_state,
 				u32 user_data_size)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 001a5af842ba..0b797351cfb9 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1023,6 +1023,9 @@ static __init void svm_set_cpu_caps(void)
 		if (vls)
 			kvm_cpu_cap_set(X86_FEATURE_V_VMSAVE_VMLOAD);
 
+		if (tsc_scaling)
+			kvm_cpu_cap_set(X86_FEATURE_TSCRATEMSR);
+
 		/* Nested VM can receive #VMEXIT instead of triggering #GP */
 		kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK);
 	}
@@ -1215,7 +1218,9 @@ static u64 svm_get_l2_tsc_offset(struct kvm_vcpu *vcpu)
 
 static u64 svm_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu)
 {
-	return kvm_default_tsc_scaling_ratio;
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	return svm->tsc_ratio_msr;
 }
 
 static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
@@ -1227,7 +1232,7 @@ static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 	vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
 }
 
-static void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
+void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
 {
 	wrmsrl(MSR_AMD64_TSC_RATIO, multiplier);
 }
@@ -1405,6 +1410,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
 
 	enable_gif(svm);
 
+	svm->tsc_ratio_msr = kvm_default_tsc_scaling_ratio;
 }
 
 static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
@@ -2765,6 +2771,11 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	switch (msr_info->index) {
+	case MSR_AMD64_TSC_RATIO:
+		if (!msr_info->host_initiated && !svm->tsc_scaling_enabled)
+			return 1;
+		msr_info->data = svm->tsc_ratio_msr;
+		break;
 	case MSR_STAR:
 		msr_info->data = svm->vmcb01.ptr->save.star;
 		break;
@@ -2901,6 +2912,19 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	u32 ecx = msr->index;
 	u64 data = msr->data;
 
 	switch (ecx) {
+	case MSR_AMD64_TSC_RATIO:
+		if (!msr->host_initiated && !svm->tsc_scaling_enabled)
+			return 1;
+
+		if (data & TSC_RATIO_RSVD)
+			return 1;
+
+		svm->tsc_ratio_msr = data;
+
+		if (svm->tsc_scaling_enabled && is_guest_mode(vcpu))
+			nested_svm_update_tsc_ratio_msr(vcpu);
+
+		break;
 	case MSR_IA32_CR_PAT:
 		if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
 			return 1;
@@ -4107,6 +4131,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	svm->v_vmload_vmsave_enabled = vls && guest_cpuid_has(vcpu, X86_FEATURE_V_VMSAVE_VMLOAD);
 
+	svm->tsc_scaling_enabled = tsc_scaling && guest_cpuid_has(vcpu, X86_FEATURE_TSCRATEMSR);
+
 	svm_recalc_instruction_intercepts(vcpu, svm);
 
 	/* For sev guests, the memory encryption bit is not reserved in CR3. */
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 029340a7fbcc..c8ea3b14da73 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -140,6 +140,8 @@ struct vcpu_svm {
 	u64 next_rip;
 
 	u64 spec_ctrl;
+
+	u64 tsc_ratio_msr;
 	/*
 	 * Contains guest-controlled bits of VIRT_SPEC_CTRL, which will be
 	 * translated into the appropriate L2_CFG bits on the host to
@@ -163,6 +165,7 @@ struct vcpu_svm {
 	bool nrips_enabled	: 1;
 	bool lbrv_enabled	: 1;
 	bool v_vmload_vmsave_enabled : 1;
+	bool tsc_scaling_enabled : 1;
 
 	u32 ldr_reg;
 	u32 dfr_reg;
@@ -491,6 +494,8 @@ int nested_svm_check_permissions(struct kvm_vcpu *vcpu);
 int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 			       bool has_error_code, u32 error_code);
 int nested_svm_exit_special(struct vcpu_svm *svm);
+void nested_svm_update_tsc_ratio_msr(struct kvm_vcpu *vcpu);
+void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier);
 void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
 				     struct vmcb_control_area *control);
 void nested_sync_control_from_vmcb02(struct vcpu_svm *svm);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fada1055f325..e1d8f0df8172 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6404,6 +6404,7 @@ static bool vmx_has_emulated_msr(struct kvm *kvm, u32 index)
 	case MSR_IA32_VMX_BASIC ... MSR_IA32_VMX_VMFUNC:
 		return nested;
 	case MSR_AMD64_VIRT_SPEC_CTRL:
+	case MSR_AMD64_TSC_RATIO:
 		/* This is AMD only.  */
 		return false;
 	default:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 86539c1686fa..1b7881c7a516 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1374,6 +1374,7 @@ static const u32 emulated_msrs_all[] = {
 	MSR_PLATFORM_INFO,
 	MSR_MISC_FEATURES_ENABLES,
 	MSR_AMD64_VIRT_SPEC_CTRL,
+	MSR_AMD64_TSC_RATIO,
 	MSR_IA32_POWER_CTL,
 	MSR_IA32_UCODE_REV,
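
The arithmetic behind this patch, as I read the common helpers it calls
(kvm_calc_nested_tsc_offset()/kvm_calc_nested_tsc_multiplier()): the
hardware computes guest_tsc = host_tsc * ratio + offset, so nesting
composes as L2_tsc = (host_tsc * r1 + o1) * r2 + o2, meaning vmcb02
needs ratio r1*r2 and offset o1*r2 + o2, with each fixed-point product
shifted down by the 32 fractional bits. An illustrative sketch (the
example_* names are hypothetical; the series relies on the common
helpers for this):

#include <linux/math64.h>

static u64 example_nested_tsc_ratio(u64 r1, u64 r2)
{
	/* 64x64->128 bit multiply, then drop the 32 fractional bits. */
	return mul_u64_u64_shr(r1, r2, 32);
}

static s64 example_nested_tsc_offset(s64 o1, s64 o2, u64 r2)
{
	/* L1's offset is itself scaled by the ratio L1 gave to L2. */
	return mul_s64_u64_shr(o1, r2, 32) + o2;
}
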
From patchwork Tue Sep 14 15:48:25 2021
From: Maxim Levitsky
To: kvm@vger.kernel.org
Subject: [PATCH 14/14] KVM: x86: nSVM: support PAUSE filter threshold and count
Date: Tue, 14 Sep 2021 18:48:25 +0300
Message-Id: <20210914154825.104886-15-mlevitsk@redhat.com>
In-Reply-To: <20210914154825.104886-1-mlevitsk@redhat.com>
References: <20210914154825.104886-1-mlevitsk@redhat.com>

Allow L1 to use these settings when L0 disables PAUSE interception
(i.e. when running with cpu_pm=on).

Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/nested.c |  6 ++++++
 arch/x86/kvm/svm/svm.c    | 18 ++++++++++++++++++
 arch/x86/kvm/svm/svm.h    |  2 ++
 3 files changed, 26 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 4c26417f36b8..b47c745ec659 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -602,6 +602,12 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 	if (!nested_vmcb_needs_vls_intercept(svm))
 		svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
 
+	if (svm->pause_filter_enabled)
+		svm->vmcb->control.pause_filter_count = svm->nested.ctl.pause_filter_count;
+
+	if (svm->pause_threshold_enabled)
+		svm->vmcb->control.pause_filter_thresh = svm->nested.ctl.pause_filter_thresh;
+
 	nested_svm_transition_tlb_flush(vcpu);
 
 	/* Enter Guest-Mode */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0b797351cfb9..95125cd2e666 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1026,6 +1026,12 @@ static __init void svm_set_cpu_caps(void)
 		if (tsc_scaling)
 			kvm_cpu_cap_set(X86_FEATURE_TSCRATEMSR);
 
+		if (pause_filter_count)
+			kvm_cpu_cap_set(X86_FEATURE_PAUSEFILTER);
+
+		if (pause_filter_thresh)
+			kvm_cpu_cap_set(X86_FEATURE_PFTHRESHOLD);
+
 		/* Nested VM can receive #VMEXIT instead of triggering #GP */
 		kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK);
 	}
@@ -4133,6 +4140,17 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	svm->tsc_scaling_enabled = tsc_scaling && guest_cpuid_has(vcpu, X86_FEATURE_TSCRATEMSR);
 
+	if (kvm_pause_in_guest(vcpu->kvm)) {
+		svm->pause_filter_enabled = pause_filter_count > 0 &&
+			guest_cpuid_has(vcpu, X86_FEATURE_PAUSEFILTER);
+
+		svm->pause_threshold_enabled = pause_filter_thresh > 0 &&
+			guest_cpuid_has(vcpu, X86_FEATURE_PFTHRESHOLD);
+	} else {
+		svm->pause_filter_enabled = false;
+		svm->pause_threshold_enabled = false;
+	}
+
 	svm_recalc_instruction_intercepts(vcpu, svm);
 
 	/* For sev guests, the memory encryption bit is not reserved in CR3. */
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index c8ea3b14da73..7e679dd7a6e4 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -166,6 +166,8 @@ struct vcpu_svm {
 	bool lbrv_enabled	: 1;
 	bool v_vmload_vmsave_enabled : 1;
 	bool tsc_scaling_enabled : 1;
+	bool pause_filter_enabled : 1;
+	bool pause_threshold_enabled : 1;
 
 	u32 ldr_reg;
 	u32 dfr_reg;
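
For reference, the semantics of the two fields, as I read the APM (my
summary, hedged): pause_filter_count arms a counter that is decremented
on each guest PAUSE and raises the PAUSE intercept when it runs out,
while pause_filter_thresh defines a TSC-cycle gap above which the
counter is re-armed, so only tight spin loops ever trip the exit. A toy
model, not kernel code:

#include <linux/types.h>

struct pause_filter_model {
	u16 count;	/* remaining PAUSEs in the current window   */
	u16 reload;	/* value programmed in pause_filter_count   */
	u16 thresh;	/* pause_filter_thresh, in TSC cycles       */
	u64 last_tsc;	/* timestamp of the previous PAUSE          */
};

static bool pause_trips_intercept(struct pause_filter_model *f, u64 now)
{
	/* A long gap since the last PAUSE re-arms the filter. */
	if (f->thresh && now - f->last_tsc > f->thresh)
		f->count = f->reload;
	f->last_tsc = now;

	if (f->count && --f->count == 0) {
		f->count = f->reload;
		return true;	/* tight PAUSE loop: deliver the intercept */
	}
	return false;
}

This also explains the kvm_pause_in_guest() gate above: only when L0
itself does not intercept PAUSE are the VMCB filter fields free for L1
to program.
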