From patchwork Tue Aug 13 13:53:29 2019
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 11092307
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Paolo Bonzini, Radim Krčmář, Joerg Roedel, Jim Mattson, Sean Christopherson
Subject: [PATCH v4 1/7] x86: KVM: svm: don't pretend to advance RIP in case wrmsr_interception() results in #GP
Date: Tue, 13 Aug 2019 15:53:29 +0200
Message-Id: <20190813135335.25197-2-vkuznets@redhat.com>
In-Reply-To: <20190813135335.25197-1-vkuznets@redhat.com>
References: <20190813135335.25197-1-vkuznets@redhat.com>

svm->next_rip is only used by skip_emulated_instruction(), and when
kvm_set_msr() fails we rightfully don't do that. Move the svm->next_rip
advancement to the 'else' branch to avoid creating the false impression
that it is always advanced (and to make wrmsr_interception() look like
rdmsr_interception()).

This is a preparatory change for removing hardcoded RIP advancement
from instruction intercepts; no functional change.

Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Sean Christopherson
---
 arch/x86/kvm/svm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7eafc6907861..7e843b340490 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4447,13 +4447,13 @@ static int wrmsr_interception(struct vcpu_svm *svm)
 	msr.index = ecx;
 	msr.host_initiated = false;
 
-	svm->next_rip = kvm_rip_read(&svm->vcpu) + 2;
 	if (kvm_set_msr(&svm->vcpu, &msr)) {
 		trace_kvm_msr_write_ex(ecx, data);
 		kvm_inject_gp(&svm->vcpu, 0);
 		return 1;
 	} else {
 		trace_kvm_msr_write(ecx, data);
+		svm->next_rip = kvm_rip_read(&svm->vcpu) + 2;
 		return kvm_skip_emulated_instruction(&svm->vcpu);
 	}
 }
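A stand-alone illustration (not part of the patch; every struct and helper
name below is invented for the example): the rule this series converges on
is that the guest's RIP may only be advanced once the intercepted operation
has actually succeeded, otherwise the injected #GP must point at the
original instruction. It compiles as an ordinary userspace program.

/* Toy model, not KVM code: skip the instruction only on success. */
#include <stdint.h>
#include <stdio.h>

struct toy_vcpu {
	uint64_t rip;
	int pending_gp;		/* stands in for kvm_inject_gp() */
};

/* hypothetical MSR write that can be refused */
static int toy_set_msr(uint32_t index, uint64_t data)
{
	(void)data;
	return index == 0xdeadbeef ? -1 : 0;	/* unknown MSR -> fail */
}

static void toy_wrmsr_intercept(struct toy_vcpu *vcpu, uint32_t ecx, uint64_t data)
{
	if (toy_set_msr(ecx, data)) {
		vcpu->pending_gp = 1;	/* RIP still points at the WRMSR */
		return;
	}
	/* 0F 30 is two bytes; patch 5 replaces even this constant with real decoding */
	vcpu->rip += 2;
}

int main(void)
{
	struct toy_vcpu v = { .rip = 0x1000, .pending_gp = 0 };

	toy_wrmsr_intercept(&v, 0xdeadbeef, 0);		/* refused write */
	printf("failed WRMSR: rip=0x%llx gp=%d\n",
	       (unsigned long long)v.rip, v.pending_gp);

	v.pending_gp = 0;
	toy_wrmsr_intercept(&v, 0x10, 0);		/* accepted write */
	printf("good WRMSR:   rip=0x%llx gp=%d\n",
	       (unsigned long long)v.rip, v.pending_gp);
	return 0;
}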
From patchwork Tue Aug 13 13:53:30 2019
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 11092319
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Paolo Bonzini, Radim Krčmář, Joerg Roedel, Jim Mattson, Sean Christopherson
Subject: [PATCH v4 2/7] x86: kvm: svm: propagate errors from skip_emulated_instruction()
Date: Tue, 13 Aug 2019 15:53:30 +0200
Message-Id: <20190813135335.25197-3-vkuznets@redhat.com>
In-Reply-To: <20190813135335.25197-1-vkuznets@redhat.com>
References: <20190813135335.25197-1-vkuznets@redhat.com>

On AMD, kvm_x86_ops->skip_emulated_instruction(vcpu) can, in theory,
fail: in the !nrips case we fall back to
kvm_emulate_instruction(EMULTYPE_SKIP). Currently, we only do a
printk(KERN_DEBUG) when this happens, which is not ideal. Propagate the
error up the stack instead.

On VMX, skip_emulated_instruction() doesn't fail. Its two explicit call
sites, handle_exception_nmi() and handle_task_switch(), can simply
ignore the result.

On SVM, there are also two explicit call sites. In svm_queue_exception()
nothing extra seems to be needed, since we already check whether RIP was
advanced. In task_switch_interception(), however, we are better off not
proceeding to kvm_task_switch() when skip_emulated_instruction() fails.
Suggested-by: Sean Christopherson
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/svm.c              | 36 ++++++++++++++++++---------------
 arch/x86/kvm/vmx/vmx.c          | 16 ++++++++++++---
 arch/x86/kvm/x86.c              |  6 ++++--
 4 files changed, 38 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7b0a4ee77313..f9e6d0b0f581 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1068,7 +1068,7 @@ struct kvm_x86_ops {
 
 	void (*run)(struct kvm_vcpu *vcpu);
 	int (*handle_exit)(struct kvm_vcpu *vcpu);
-	void (*skip_emulated_instruction)(struct kvm_vcpu *vcpu);
+	int (*skip_emulated_instruction)(struct kvm_vcpu *vcpu);
 	void (*set_interrupt_shadow)(struct kvm_vcpu *vcpu, int mask);
 	u32 (*get_interrupt_shadow)(struct kvm_vcpu *vcpu);
 	void (*patch_hypercall)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7e843b340490..8299b0de06e2 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -770,7 +770,7 @@ static void svm_set_interrupt_shadow(struct kvm_vcpu *vcpu, int mask)
 
 }
 
-static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
+static int skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -779,18 +779,17 @@ static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
 		svm->next_rip = svm->vmcb->control.next_rip;
 	}
 
-	if (!svm->next_rip) {
-		if (kvm_emulate_instruction(vcpu, EMULTYPE_SKIP) !=
-				EMULATE_DONE)
-			printk(KERN_DEBUG "%s: NOP\n", __func__);
-		return;
-	}
+	if (!svm->next_rip)
+		return kvm_emulate_instruction(vcpu, EMULTYPE_SKIP);
+
 	if (svm->next_rip - kvm_rip_read(vcpu) > MAX_INST_SIZE)
 		printk(KERN_ERR "%s: ip 0x%lx next 0x%llx\n", __func__,
 		       kvm_rip_read(vcpu), svm->next_rip);
 
 	kvm_rip_write(vcpu, svm->next_rip);
 	svm_set_interrupt_shadow(vcpu, 0);
+
+	return EMULATE_DONE;
 }
 
 static void svm_queue_exception(struct kvm_vcpu *vcpu)
@@ -821,7 +820,7 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu)
 		 * raises a fault that is not intercepted. Still better than
 		 * failing in all cases.
 		 */
-		skip_emulated_instruction(&svm->vcpu);
+		(void)skip_emulated_instruction(&svm->vcpu);
 		rip = kvm_rip_read(&svm->vcpu);
 		svm->int3_rip = rip + svm->vmcb->save.cs.base;
 		svm->int3_injected = rip - old_rip;
@@ -3899,20 +3898,25 @@ static int task_switch_interception(struct vcpu_svm *svm)
 	if (reason != TASK_SWITCH_GATE ||
 	    int_type == SVM_EXITINTINFO_TYPE_SOFT ||
 	    (int_type == SVM_EXITINTINFO_TYPE_EXEPT &&
-	     (int_vec == OF_VECTOR || int_vec == BP_VECTOR)))
-		skip_emulated_instruction(&svm->vcpu);
+	     (int_vec == OF_VECTOR || int_vec == BP_VECTOR))) {
+		if (skip_emulated_instruction(&svm->vcpu) != EMULATE_DONE)
+			goto fail;
+	}
 
 	if (int_type != SVM_EXITINTINFO_TYPE_SOFT)
 		int_vec = -1;
 
 	if (kvm_task_switch(&svm->vcpu, tss_selector, int_vec, reason,
-			    has_error_code, error_code) == EMULATE_FAIL) {
-		svm->vcpu.run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
-		svm->vcpu.run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
-		svm->vcpu.run->internal.ndata = 0;
-		return 0;
-	}
+			    has_error_code, error_code) == EMULATE_FAIL)
+		goto fail;
+
 	return 1;
+
+fail:
+	svm->vcpu.run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+	svm->vcpu.run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
+	svm->vcpu.run->internal.ndata = 0;
+	return 0;
 }
 
 static int cpuid_interception(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 074385c86c09..358827b5bc44 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1472,8 +1472,11 @@ static int vmx_rtit_ctl_check(struct kvm_vcpu *vcpu, u64 data)
 	return 0;
 }
 
-
-static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
+/*
+ * Returns an int to be compatible with SVM implementation (which can fail).
+ * Do not use directly, use skip_emulated_instruction() instead.
+ */
+static int __skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
 	unsigned long rip;
 
@@ -1483,6 +1486,13 @@ static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
 
 	/* skipping an emulated instruction also counts */
 	vmx_set_interrupt_shadow(vcpu, 0);
+
+	return EMULATE_DONE;
+}
+
+static inline void skip_emulated_instruction(struct kvm_vcpu *vcpu)
+{
+	(void)__skip_emulated_instruction(vcpu);
 }
 
 static void vmx_clear_hlt(struct kvm_vcpu *vcpu)
@@ -7700,7 +7710,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 
 	.run = vmx_vcpu_run,
 	.handle_exit = vmx_handle_exit,
-	.skip_emulated_instruction = skip_emulated_instruction,
+	.skip_emulated_instruction = __skip_emulated_instruction,
 	.set_interrupt_shadow = vmx_set_interrupt_shadow,
 	.get_interrupt_shadow = vmx_get_interrupt_shadow,
 	.patch_hypercall = vmx_patch_hypercall,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c6d951cbd76c..e8f797fe9d9e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6383,9 +6383,11 @@ static void kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu, int *r)
 int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
 	unsigned long rflags = kvm_x86_ops->get_rflags(vcpu);
-	int r = EMULATE_DONE;
+	int r;
 
-	kvm_x86_ops->skip_emulated_instruction(vcpu);
+	r = kvm_x86_ops->skip_emulated_instruction(vcpu);
+	if (unlikely(r != EMULATE_DONE))
+		return 0;
 
 	/*
 	 * rflags is the old, "raw" value of the flags.  The new value has
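A stand-alone illustration (not part of the patch): a toy model of the
calling convention introduced here. The EMULATE_* stand-ins and helpers are
local to the example and deliberately not the KVM definitions; the point is
that a failed skip now propagates up and becomes an exit to userspace
instead of a printk.

/* Toy model, not KVM code: a failed "skip" propagates instead of being
 * swallowed by a debug message. */
#include <stdio.h>

enum toy_emul { TOY_EMULATE_DONE, TOY_EMULATE_FAIL };	/* local stand-ins */

static enum toy_emul toy_skip(int decoder_works)
{
	/* models the !nrips fallback to kvm_emulate_instruction(EMULTYPE_SKIP) */
	return decoder_works ? TOY_EMULATE_DONE : TOY_EMULATE_FAIL;
}

/* 1 = resume the guest, 0 = report an internal error to userspace */
static int toy_task_switch_intercept(int decoder_works)
{
	if (toy_skip(decoder_works) != TOY_EMULATE_DONE)
		return 0;	/* would set KVM_EXIT_INTERNAL_ERROR */
	/* ... the actual task switch emulation would run here ... */
	return 1;
}

int main(void)
{
	printf("skip ok:     handler returns %d\n", toy_task_switch_intercept(1));
	printf("skip failed: handler returns %d\n", toy_task_switch_intercept(0));
	return 0;
}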
From patchwork Tue Aug 13 13:53:31 2019
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 11092321
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Paolo Bonzini, Radim Krčmář, Joerg Roedel, Jim Mattson, Sean Christopherson
Subject: [PATCH v4 3/7] x86: KVM: clear interrupt shadow on EMULTYPE_SKIP
Date: Tue, 13 Aug 2019 15:53:31 +0200
Message-Id: <20190813135335.25197-4-vkuznets@redhat.com>
In-Reply-To: <20190813135335.25197-1-vkuznets@redhat.com>
References: <20190813135335.25197-1-vkuznets@redhat.com>

When doing x86_emulate_instruction(EMULTYPE_SKIP), the interrupt shadow
has to be cleared if and only if the skipping is successful.

There are two immediate issues:

- In SVM's skip_emulated_instruction() we do not zap the interrupt
  shadow when kvm_emulate_instruction(EMULTYPE_SKIP) is used to advance
  RIP (the !nrips case).
- In VMX's handle_ept_misconfig(), when running as a nested hypervisor
  (the static_cpu_has(X86_FEATURE_HYPERVISOR) case), we forget to clear
  the interrupt shadow.

Note that we intentionally don't handle the case when the skipped
instruction is supposed to prolong the interrupt shadow ("MOV/POP SS"),
as skip-emulation of those instructions should not happen under normal
circumstances.

Suggested-by: Sean Christopherson
Reviewed-by: Sean Christopherson
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/x86.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e8f797fe9d9e..c2409d06c114 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6539,6 +6539,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
 		kvm_rip_write(vcpu, ctxt->_eip);
 		if (ctxt->eflags & X86_EFLAGS_RF)
 			kvm_set_rflags(vcpu, ctxt->eflags & ~X86_EFLAGS_RF);
+		kvm_x86_ops->set_interrupt_shadow(vcpu, 0);
 		return EMULATE_DONE;
 	}
 
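A stand-alone illustration (not part of the patch; the struct and helpers
are invented for the example): the architectural detail behind the fix is
that MOV SS/POP SS/STI block interrupt delivery for exactly one
instruction, so successfully skipping an instruction on the guest's behalf
has to drop that shadow just as real execution would.

/* Toy model, not KVM code: a successful EMULTYPE_SKIP must clear any
 * pending interrupt shadow, exactly like executing the instruction would. */
#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
	unsigned long rip;
	bool shadow;		/* set by MOV SS / POP SS / STI */
};

static void toy_emulate_skip(struct toy_vcpu *v, unsigned int len, bool decoded)
{
	if (!decoded)
		return;		/* failed skip: leave RIP and shadow alone */
	v->rip += len;
	v->shadow = false;	/* the one-line fix in this patch */
}

int main(void)
{
	struct toy_vcpu v = { .rip = 0x100, .shadow = true };	/* right after MOV SS */

	toy_emulate_skip(&v, 3, true);
	printf("rip=0x%lx shadow=%d (interrupts deliverable again)\n",
	       v.rip, (int)v.shadow);
	return 0;
}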
From patchwork Tue Aug 13 13:53:32 2019
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 11092315
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Paolo Bonzini, Radim Krčmář, Joerg Roedel, Jim Mattson, Sean Christopherson
Subject: [PATCH v4 4/7] x86: KVM: add xsetbv to the emulator
Date: Tue, 13 Aug 2019 15:53:32 +0200
Message-Id: <20190813135335.25197-5-vkuznets@redhat.com>
In-Reply-To: <20190813135335.25197-1-vkuznets@redhat.com>
References: <20190813135335.25197-1-vkuznets@redhat.com>

To avoid hardcoding the xsetbv instruction length to '3' we need to
support decoding it in the emulator.
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/kvm_emulate.h |  3 ++-
 arch/x86/kvm/emulate.c             | 23 ++++++++++++++++++++++-
 arch/x86/kvm/svm.c                 |  1 +
 arch/x86/kvm/x86.c                 |  6 ++++++
 4 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index feab24cac610..77cf6c11f66b 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -229,7 +229,7 @@ struct x86_emulate_ops {
 	int (*pre_leave_smm)(struct x86_emulate_ctxt *ctxt,
 			     const char *smstate);
 	void (*post_leave_smm)(struct x86_emulate_ctxt *ctxt);
-
+	int (*set_xcr)(struct x86_emulate_ctxt *ctxt, u32 index, u64 xcr);
 };
 
 typedef u32 __attribute__((vector_size(16))) sse128_t;
@@ -429,6 +429,7 @@ enum x86_intercept {
 	x86_intercept_ins,
 	x86_intercept_out,
 	x86_intercept_outs,
+	x86_intercept_xsetbv,
 
 	nr_x86_intercepts
 };
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 718f7d9afedc..f9e843dd992a 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -4156,6 +4156,20 @@ static int em_fxrstor(struct x86_emulate_ctxt *ctxt)
 	return rc;
 }
 
+static int em_xsetbv(struct x86_emulate_ctxt *ctxt)
+{
+	u32 eax, ecx, edx;
+
+	eax = reg_read(ctxt, VCPU_REGS_RAX);
+	edx = reg_read(ctxt, VCPU_REGS_RDX);
+	ecx = reg_read(ctxt, VCPU_REGS_RCX);
+
+	if (ctxt->ops->set_xcr(ctxt, ecx, ((u64)edx << 32) | eax))
+		return emulate_gp(ctxt, 0);
+
+	return X86EMUL_CONTINUE;
+}
+
 static bool valid_cr(int nr)
 {
 	switch (nr) {
@@ -4409,6 +4423,12 @@ static const struct opcode group7_rm1[] = {
 	N, N, N, N, N, N,
 };
 
+static const struct opcode group7_rm2[] = {
+	N,
+	II(ImplicitOps | Priv, em_xsetbv, xsetbv),
+	N, N, N, N, N, N,
+};
+
 static const struct opcode group7_rm3[] = {
 	DIP(SrcNone | Prot | Priv, vmrun, check_svme_pa),
 	II(SrcNone | Prot | EmulateOnUD, em_hypercall, vmmcall),
@@ -4498,7 +4518,8 @@ static const struct group_dual group7 = { {
 }, {
 	EXT(0, group7_rm0),
 	EXT(0, group7_rm1),
-	N, EXT(0, group7_rm3),
+	EXT(0, group7_rm2),
+	EXT(0, group7_rm3),
 	II(SrcNone | DstMem | Mov, em_smsw, smsw),
 	N, II(SrcMem16 | Mov | Priv, em_lmsw, lmsw),
 	EXT(0, group7_rm7),
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8299b0de06e2..858feeac01a4 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6067,6 +6067,7 @@ static const struct __x86_intercept {
 	[x86_intercept_ins]		= POST_EX(SVM_EXIT_IOIO),
 	[x86_intercept_out]		= POST_EX(SVM_EXIT_IOIO),
 	[x86_intercept_outs]		= POST_EX(SVM_EXIT_IOIO),
+	[x86_intercept_xsetbv]		= PRE_EX(SVM_EXIT_XSETBV),
 };
 
 #undef PRE_EX
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c2409d06c114..337559294169 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6068,6 +6068,11 @@ static void emulator_post_leave_smm(struct x86_emulate_ctxt *ctxt)
 	kvm_smm_changed(emul_to_vcpu(ctxt));
 }
 
+static int emulator_set_xcr(struct x86_emulate_ctxt *ctxt, u32 index, u64 xcr)
+{
+	return __kvm_set_xcr(emul_to_vcpu(ctxt), index, xcr);
+}
+
 static const struct x86_emulate_ops emulate_ops = {
 	.read_gpr = emulator_read_gpr,
 	.write_gpr = emulator_write_gpr,
@@ -6109,6 +6114,7 @@ static const struct x86_emulate_ops emulate_ops = {
 	.set_hflags = emulator_set_hflags,
 	.pre_leave_smm = emulator_pre_leave_smm,
 	.post_leave_smm = emulator_post_leave_smm,
+	.set_xcr = emulator_set_xcr,
 };
 
 static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask)
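A stand-alone illustration (not part of the patch): the operand handling
em_xsetbv() performs, modelled as a userspace program. set_xcr_cb() is a
made-up stand-in for the new ->set_xcr() callback (which the patch routes
to __kvm_set_xcr()); its acceptance policy here is arbitrary.

/* Toy model, not KVM code: XSETBV takes the XCR index from ECX and the
 * 64-bit value from EDX:EAX; a refused write means #GP(0) for the guest. */
#include <stdint.h>
#include <stdio.h>

static int set_xcr_cb(uint32_t index, uint64_t value)
{
	/* arbitrary toy policy: only XCR0 with the x87 bit set is accepted */
	return (index == 0 && (value & 1)) ? 0 : -1;
}

/* returns 0 on success, -1 when #GP(0) should be injected */
static int toy_em_xsetbv(uint32_t eax, uint32_t ecx, uint32_t edx)
{
	return set_xcr_cb(ecx, ((uint64_t)edx << 32) | eax);
}

int main(void)
{
	printf("xsetbv(xcr=0, edx:eax=0x7): %s\n",
	       toy_em_xsetbv(0x7, 0, 0) ? "#GP(0)" : "ok");
	printf("xsetbv(xcr=1, edx:eax=0x0): %s\n",
	       toy_em_xsetbv(0x0, 1, 0) ? "#GP(0)" : "ok");
	return 0;
}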
From patchwork Tue Aug 13 13:53:33 2019
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 11092317
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Paolo Bonzini, Radim Krčmář, Joerg Roedel, Jim Mattson, Sean Christopherson
Subject: [PATCH v4 5/7] x86: KVM: svm: remove hardcoded instruction length from intercepts
Date: Tue, 13 Aug 2019 15:53:33 +0200
Message-Id: <20190813135335.25197-6-vkuznets@redhat.com>
In-Reply-To: <20190813135335.25197-1-vkuznets@redhat.com>
References: <20190813135335.25197-1-vkuznets@redhat.com>

Various intercepts hard-code the respective instruction lengths to
optimize skip_emulated_instruction(): when next_rip is pre-set we skip
kvm_emulate_instruction(vcpu, EMULTYPE_SKIP).
The optimization is, however, incorrect: different (redundant) prefixes
could be used to enlarge the instruction. We can't really avoid decoding.

svm->next_rip is not used when the CPU supports the 'nrips'
(X86_FEATURE_NRIPS) feature: the next RIP is provided in the VMCB. The
feature is not really new (Opteron G3s already had it) and the change
should have zero effect.

Remove the manual svm->next_rip setting with hard-coded instruction
lengths. The only case where we now use svm->next_rip is EXIT_IOIO: the
instruction length is provided to us by hardware.

Hardcoded RIP advancement remains in vmrun_interception(); this is going
to be taken care of separately.

Reported-by: Jim Mattson
Reviewed-by: Sean Christopherson
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/svm.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 858feeac01a4..6d16d1898810 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2903,13 +2903,11 @@ static int nop_on_interception(struct vcpu_svm *svm)
 
 static int halt_interception(struct vcpu_svm *svm)
 {
-	svm->next_rip = kvm_rip_read(&svm->vcpu) + 1;
 	return kvm_emulate_halt(&svm->vcpu);
 }
 
 static int vmmcall_interception(struct vcpu_svm *svm)
 {
-	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 	return kvm_emulate_hypercall(&svm->vcpu);
 }
 
@@ -3697,7 +3695,6 @@ static int vmload_interception(struct vcpu_svm *svm)
 
 	nested_vmcb = map.hva;
 
-	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 	ret = kvm_skip_emulated_instruction(&svm->vcpu);
 
 	nested_svm_vmloadsave(nested_vmcb, svm->vmcb);
@@ -3724,7 +3721,6 @@ static int vmsave_interception(struct vcpu_svm *svm)
 
 	nested_vmcb = map.hva;
 
-	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 	ret = kvm_skip_emulated_instruction(&svm->vcpu);
 
 	nested_svm_vmloadsave(svm->vmcb, nested_vmcb);
@@ -3775,7 +3771,6 @@ static int stgi_interception(struct vcpu_svm *svm)
 	if (vgif_enabled(svm))
 		clr_intercept(svm, INTERCEPT_STGI);
 
-	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 	ret = kvm_skip_emulated_instruction(&svm->vcpu);
 	kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 
@@ -3791,7 +3786,6 @@ static int clgi_interception(struct vcpu_svm *svm)
 	if (nested_svm_check_permissions(svm))
 		return 1;
 
-	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 	ret = kvm_skip_emulated_instruction(&svm->vcpu);
 
 	disable_gif(svm);
@@ -3816,7 +3810,6 @@ static int invlpga_interception(struct vcpu_svm *svm)
 	/* Let's treat INVLPGA the same as INVLPG (can be optimized!) */
 	kvm_mmu_invlpg(vcpu, kvm_rax_read(&svm->vcpu));
 
-	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 	return kvm_skip_emulated_instruction(&svm->vcpu);
 }
 
@@ -3839,7 +3832,6 @@ static int xsetbv_interception(struct vcpu_svm *svm)
 	u32 index = kvm_rcx_read(&svm->vcpu);
 
 	if (kvm_set_xcr(&svm->vcpu, index, new_bv) == 0) {
-		svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 		return kvm_skip_emulated_instruction(&svm->vcpu);
 	}
 
@@ -3921,7 +3913,6 @@ static int task_switch_interception(struct vcpu_svm *svm)
 
 static int cpuid_interception(struct vcpu_svm *svm)
 {
-	svm->next_rip = kvm_rip_read(&svm->vcpu) + 2;
 	return kvm_emulate_cpuid(&svm->vcpu);
 }
 
@@ -4251,7 +4242,6 @@ static int rdmsr_interception(struct vcpu_svm *svm)
 
 		kvm_rax_write(&svm->vcpu, msr_info.data & 0xffffffff);
 		kvm_rdx_write(&svm->vcpu, msr_info.data >> 32);
-		svm->next_rip = kvm_rip_read(&svm->vcpu) + 2;
 		return kvm_skip_emulated_instruction(&svm->vcpu);
 	}
 }
@@ -4457,7 +4447,6 @@ static int wrmsr_interception(struct vcpu_svm *svm)
 		return 1;
 	} else {
 		trace_kvm_msr_write(ecx, data);
-		svm->next_rip = kvm_rip_read(&svm->vcpu) + 2;
 		return kvm_skip_emulated_instruction(&svm->vcpu);
 	}
 }
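A stand-alone illustration (not part of the patch): why fixed lengths are
unsafe. x86 allows redundant prefixes that the CPU ignores but that still
count toward the instruction length, e.g. a segment-override byte in front
of WRMSR or CPUID; the prefixed encodings assumed below are therefore
longer than the '+ 2'/'+ 3' constants the old code skipped.

/* Toy model, not KVM code: redundant prefixes make fixed-length skips wrong. */
#include <stdio.h>

struct encoding {
	const char   *what;
	unsigned int  len;		/* actual encoded length */
	unsigned int  hardcoded;	/* what the removed code added to RIP */
};

int main(void)
{
	/* 2E is a segment-override prefix that has no effect here but is still decoded */
	const struct encoding e[] = {
		{ "wrmsr            (0F 30)",    2, 2 },
		{ "wrmsr w/ prefix  (2E 0F 30)", 3, 2 },
		{ "cpuid            (0F A2)",    2, 2 },
		{ "cpuid w/ prefix  (2E 0F A2)", 3, 2 },
	};

	for (unsigned int i = 0; i < sizeof(e) / sizeof(e[0]); i++)
		printf("%-30s real len %u, old skip %u%s\n",
		       e[i].what, e[i].len, e[i].hardcoded,
		       e[i].len != e[i].hardcoded ?
		       "  <-- RIP ends up mid-instruction" : "");
	return 0;
}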
From patchwork Tue Aug 13 13:53:34 2019
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 11092313
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Paolo Bonzini, Radim Krčmář, Joerg Roedel, Jim Mattson, Sean Christopherson
Subject: [PATCH v4 6/7] x86: KVM: svm: eliminate weird goto from vmrun_interception()
Date: Tue, 13 Aug 2019 15:53:34 +0200
Message-Id: <20190813135335.25197-7-vkuznets@redhat.com>
In-Reply-To: <20190813135335.25197-1-vkuznets@redhat.com>
References: <20190813135335.25197-1-vkuznets@redhat.com>

Regardless of whether or not nested_svm_vmrun_msrpm() fails, we return 1
from vmrun_interception(), so there's no point in the goto. Also, the
nested_svm_vmrun_msrpm() call can be made from nested_svm_vmrun(), where
other nested launch issues are handled.

nested_svm_vmrun() returns a bool; however, its result is ignored in
vmrun_interception() as we always return '1'. As a preparatory change
for putting kvm_skip_emulated_instruction() inside nested_svm_vmrun(),
make nested_svm_vmrun() return an int (always '1' for now).

Suggested-by: Sean Christopherson
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Sean Christopherson
---
 arch/x86/kvm/svm.c | 36 ++++++++++++++----------------------
 1 file changed, 14 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 6d16d1898810..51c39b608ef7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3586,7 +3586,7 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 	mark_all_dirty(svm->vmcb);
 }
 
-static bool nested_svm_vmrun(struct vcpu_svm *svm)
+static int nested_svm_vmrun(struct vcpu_svm *svm)
 {
 	int rc;
 	struct vmcb *nested_vmcb;
@@ -3601,7 +3601,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 	if (rc) {
 		if (rc == -EINVAL)
 			kvm_inject_gp(&svm->vcpu, 0);
-		return false;
+		return 1;
 	}
 
 	nested_vmcb = map.hva;
@@ -3614,7 +3614,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 
 		kvm_vcpu_unmap(&svm->vcpu, &map, true);
 
-		return false;
+		return 1;
 	}
 
 	trace_kvm_nested_vmrun(svm->vmcb->save.rip, vmcb_gpa,
@@ -3658,7 +3658,16 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
 
 	enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb, &map);
 
-	return true;
+	if (!nested_svm_vmrun_msrpm(svm)) {
+		svm->vmcb->control.exit_code = SVM_EXIT_ERR;
+		svm->vmcb->control.exit_code_hi = 0;
+		svm->vmcb->control.exit_info_1 = 0;
+		svm->vmcb->control.exit_info_2 = 0;
+
+		nested_svm_vmexit(svm);
+	}
+
+	return 1;
 }
 
 static void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb)
@@ -3737,24 +3746,7 @@ static int vmrun_interception(struct vcpu_svm *svm)
 	/* Save rip after vmrun instruction */
 	kvm_rip_write(&svm->vcpu, kvm_rip_read(&svm->vcpu) + 3);
 
-	if (!nested_svm_vmrun(svm))
-		return 1;
-
-	if (!nested_svm_vmrun_msrpm(svm))
-		goto failed;
-
-	return 1;
-
-failed:
-
-	svm->vmcb->control.exit_code = SVM_EXIT_ERR;
-	svm->vmcb->control.exit_code_hi = 0;
-	svm->vmcb->control.exit_info_1 = 0;
-	svm->vmcb->control.exit_info_2 = 0;
-
-	nested_svm_vmexit(svm);
-
-	return 1;
+	return nested_svm_vmrun(svm);
 }
 
 static int stgi_interception(struct vcpu_svm *svm)

From patchwork Tue Aug 13 13:53:35 2019
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 11092311
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Paolo Bonzini, Radim Krčmář, Joerg Roedel, Jim Mattson, Sean Christopherson
Subject: [PATCH v4 7/7] x86: KVM: svm: eliminate hardcoded RIP advancement from vmrun_interception()
Date: Tue, 13 Aug 2019 15:53:35 +0200
Message-Id: <20190813135335.25197-8-vkuznets@redhat.com>
In-Reply-To: <20190813135335.25197-1-vkuznets@redhat.com>
References: <20190813135335.25197-1-vkuznets@redhat.com>

Just like we do with other intercepts, in vmrun_interception() we should
be doing kvm_skip_emulated_instruction() and not just RIP += 3. Also, it
is wrong to increment RIP before nested_svm_vmrun(), as it can result in
kvm_inject_gp().

We can't call kvm_skip_emulated_instruction() after nested_svm_vmrun(),
so move it inside.

Suggested-by: Sean Christopherson
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Sean Christopherson
---
 arch/x86/kvm/svm.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 51c39b608ef7..8473cbea7e8b 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3588,7 +3588,7 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 
 static int nested_svm_vmrun(struct vcpu_svm *svm)
 {
-	int rc;
+	int ret;
 	struct vmcb *nested_vmcb;
 	struct vmcb *hsave = svm->nested.hsave;
 	struct vmcb *vmcb = svm->vmcb;
@@ -3597,13 +3597,16 @@ static int nested_svm_vmrun(struct vcpu_svm *svm)
 
 	vmcb_gpa = svm->vmcb->save.rax;
 
-	rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map);
-	if (rc) {
-		if (rc == -EINVAL)
-			kvm_inject_gp(&svm->vcpu, 0);
+	ret = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map);
+	if (ret == -EINVAL) {
+		kvm_inject_gp(&svm->vcpu, 0);
 		return 1;
+	} else if (ret) {
+		return kvm_skip_emulated_instruction(&svm->vcpu);
 	}
 
+	ret = kvm_skip_emulated_instruction(&svm->vcpu);
+
 	nested_vmcb = map.hva;
 
 	if (!nested_vmcb_checks(nested_vmcb)) {
@@ -3614,7 +3617,7 @@ static int nested_svm_vmrun(struct vcpu_svm *svm)
 
 		kvm_vcpu_unmap(&svm->vcpu, &map, true);
 
-		return 1;
+		return ret;
 	}
 
 	trace_kvm_nested_vmrun(svm->vmcb->save.rip, vmcb_gpa,
@@ -3667,7 +3670,7 @@ static int nested_svm_vmrun(struct vcpu_svm *svm)
 		nested_svm_vmexit(svm);
 	}
 
-	return 1;
+	return ret;
 }
 
 static void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb)
@@ -3743,9 +3746,6 @@ static int vmrun_interception(struct vcpu_svm *svm)
 	if (nested_svm_check_permissions(svm))
 		return 1;
 
-	/* Save rip after vmrun instruction */
-	kvm_rip_write(&svm->vcpu, kvm_rip_read(&svm->vcpu) + 3);
-
 	return nested_svm_vmrun(svm);
 }
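A stand-alone illustration (not part of the series; names are invented for
the example) of the ordering argument in the changelog: if RIP is bumped
before the VMCB mapping is validated, a later kvm_inject_gp() makes the
guest's fault frame point past VMRUN; skipping only on the non-#GP paths
keeps the fault pointing at the VMRUN itself.

/* Toy model, not KVM code: advance RIP only on the paths that do not
 * inject #GP. */
#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
	unsigned long rip;
	unsigned long gp_rip;	/* RIP the guest sees when #GP is injected */
	bool gp;
};

static void toy_inject_gp(struct toy_vcpu *v)
{
	v->gp = true;
	v->gp_rip = v->rip;
}

static void toy_vmrun_old(struct toy_vcpu *v, bool map_fails)
{
	v->rip += 3;			/* old code: skip VMRUN up front */
	if (map_fails)
		toy_inject_gp(v);	/* too late, RIP already moved */
}

static void toy_vmrun_new(struct toy_vcpu *v, bool map_fails)
{
	if (map_fails) {
		toy_inject_gp(v);	/* #GP points at the VMRUN */
		return;
	}
	v->rip += 3;			/* stands in for kvm_skip_emulated_instruction() */
}

int main(void)
{
	struct toy_vcpu a = { .rip = 0x100 }, b = { .rip = 0x100 };

	toy_vmrun_old(&a, true);
	toy_vmrun_new(&b, true);
	printf("old ordering: #GP at 0x%lx (past VMRUN)\n", a.gp_rip);
	printf("new ordering: #GP at 0x%lx (the VMRUN)\n", b.gp_rip);
	return 0;
}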