From patchwork Wed Nov 10 10:00:16 2021
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12611675
X-Mailing-List: kvm@vger.kernel.org
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Wanpeng Li, Borislav Petkov, Ingo Molnar, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Sean Christopherson, Joerg Roedel,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Thomas Gleixner, Paolo Bonzini, Jim Mattson,
    Maxim Levitsky
Subject: [PATCH 1/3] KVM: nVMX: extract calculation of the L1's EFER
Date: Wed, 10 Nov 2021 12:00:16 +0200
Message-Id: <20211110100018.367426-2-mlevitsk@redhat.com>
In-Reply-To: <20211110100018.367426-1-mlevitsk@redhat.com>
References: <20211110100018.367426-1-mlevitsk@redhat.com>

This will be useful in the next patch. No functional change intended.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/vmx/nested.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b4ee5e9f9e201..49ae96c0cc4d1 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4228,6 +4228,21 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	kvm_clear_interrupt_queue(vcpu);
 }
 
+/*
+ * Given vmcs12, return the expected L1 value of IA32_EFER
+ * after VM exit from that vmcs12
+ */
+static inline u64 nested_vmx_get_vmcs12_host_efer(struct kvm_vcpu *vcpu,
+						  struct vmcs12 *vmcs12)
+{
+	if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_EFER)
+		return vmcs12->host_ia32_efer;
+	else if (vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE)
+		return vcpu->arch.efer | (EFER_LMA | EFER_LME);
+	else
+		return vcpu->arch.efer & ~(EFER_LMA | EFER_LME);
+}
+
 /*
  * A part of what we need to when the nested L2 guest exits and we want to
  * run its L1 parent, is to reset L1's guest state to the host state specified
@@ -4243,12 +4258,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	enum vm_entry_failure_code ignored;
 	struct kvm_segment seg;
 
-	if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_EFER)
-		vcpu->arch.efer = vmcs12->host_ia32_efer;
-	else if (vmcs12->vm_exit_controls & VM_EXIT_HOST_ADDR_SPACE_SIZE)
-		vcpu->arch.efer |= (EFER_LMA | EFER_LME);
-	else
-		vcpu->arch.efer &= ~(EFER_LMA | EFER_LME);
+	vcpu->arch.efer = nested_vmx_get_vmcs12_host_efer(vcpu, vmcs12);
 	vmx_set_efer(vcpu, vcpu->arch.efer);
 
 	kvm_rsp_write(vcpu, vmcs12->host_rsp);

From patchwork Wed Nov 10 10:00:17 2021
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12611677
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Wanpeng Li, Borislav Petkov, Ingo Molnar, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Sean Christopherson, Joerg Roedel,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Thomas Gleixner, Paolo Bonzini, Jim Mattson,
    Maxim Levitsky
Subject: [PATCH 2/3] KVM: nVMX: restore L1's EFER prior to setting the nested state
Date: Wed, 10 Nov 2021 12:00:17 +0200
Message-Id: <20211110100018.367426-3-mlevitsk@redhat.com>
In-Reply-To: <20211110100018.367426-1-mlevitsk@redhat.com>
References: <20211110100018.367426-1-mlevitsk@redhat.com>

It is known that some KVM users (e.g. QEMU) load part of L2's register
state before setting the nested state after a migration. If a 32-bit L2
guest is running in a 64-bit L1 guest and a nested migration happens,
QEMU will restore L2's EFER, and the nested state load function will
then use it as if it were L1's EFER.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/vmx/nested.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 49ae96c0cc4d1..28e270824e5b1 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6404,6 +6404,17 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
 			kvm_state->hdr.vmx.preemption_timer_deadline;
 	}
 
+	/*
+	 * The vcpu might currently contain L2's IA32_EFER, due to the way
+	 * some userspace kvm users (e.g qemu) restore nested state.
+	 *
+	 * To fix this, restore its IA32_EFER to the value it would have
+	 * after VM exit from the nested guest.
+	 *
+	 */
+
+	vcpu->arch.efer = nested_vmx_get_vmcs12_host_efer(vcpu, vmcs12);
+
 	if (nested_vmx_check_controls(vcpu, vmcs12) ||
 	    nested_vmx_check_host_state(vcpu, vmcs12) ||
 	    nested_vmx_check_guest_state(vcpu, vmcs12, &ignored))

From patchwork Wed Nov 10 10:00:18 2021
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12611679
From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Wanpeng Li, Borislav Petkov, Ingo Molnar, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, Sean Christopherson, Joerg Roedel,
    x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)),
    Vitaly Kuznetsov, Thomas Gleixner, Paolo Bonzini, Jim Mattson,
    Maxim Levitsky
Subject: [PATCH 3/3] KVM: x86/mmu: don't skip mmu initialization when mmu root level changes
Date: Wed, 10 Nov 2021 12:00:18 +0200
Message-Id: <20211110100018.367426-4-mlevitsk@redhat.com>
In-Reply-To: <20211110100018.367426-1-mlevitsk@redhat.com>
References: <20211110100018.367426-1-mlevitsk@redhat.com>

When running a mix of 32-bit and 64-bit guests, it is possible for the
MMU to be reset with the same mmu role but a different root level
(32-bit vs 64-bit paging).

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 354d2ca92df4d..763867475860f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4745,7 +4745,10 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	union kvm_mmu_role new_role =
 		kvm_calc_tdp_mmu_root_page_role(vcpu, &regs, false);
 
-	if (new_role.as_u64 == context->mmu_role.as_u64)
+	u8 new_root_level = role_regs_to_root_level(&regs);
+
+	if (new_role.as_u64 == context->mmu_role.as_u64 &&
+	    context->root_level == new_root_level)
 		return;
 
 	context->mmu_role.as_u64 = new_role.as_u64;
@@ -4757,7 +4760,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->get_guest_pgd = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
-	context->root_level = role_regs_to_root_level(&regs);
+	context->root_level = new_root_level;
 
 	if (!is_cr0_pg(context))
 		context->gva_to_gpa = nonpaging_gva_to_gpa;
@@ -4806,7 +4809,10 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
 				    struct kvm_mmu_role_regs *regs,
 				    union kvm_mmu_role new_role)
 {
-	if (new_role.as_u64 == context->mmu_role.as_u64)
+	u8 new_root_level = role_regs_to_root_level(regs);
+
+	if (new_role.as_u64 == context->mmu_role.as_u64 &&
+	    context->root_level == new_root_level)
 		return;
 
 	context->mmu_role.as_u64 = new_role.as_u64;
@@ -4817,8 +4823,8 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
 		paging64_init_context(context);
 	else
 		paging32_init_context(context);
-	context->root_level = role_regs_to_root_level(regs);
 
+	context->root_level = new_root_level;
 	reset_guest_paging_metadata(vcpu, context);
 	context->shadow_root_level = new_role.base.level;