From patchwork Mon Mar 2 19:43:31 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Joel Schopp
X-Patchwork-Id: 5916921
Subject: [PATCH v3 1/2] kvm: x86: make kvm_emulate_* consistent
From: Joel Schopp
To: Gleb Natapov, Paolo Bonzini,
CC: Joerg Roedel, Borislav Petkov, David Kaplan,
Date: Mon, 2 Mar 2015 13:43:31 -0600
Message-ID: <20150302194331.2121.28466.stgit@joelvmguard2.amd.com>
In-Reply-To: <20150302193943.2121.60575.stgit@joelvmguard2.amd.com>
References: <20150302193943.2121.60575.stgit@joelvmguard2.amd.com>
User-Agent: StGit/0.17.1-dirty
X-Mailing-List: kvm@vger.kernel.org

Currently kvm_emulate() skips the instruction but kvm_emulate_* sometimes don't. The end result is that the callers end up doing the skip themselves. Let's make them consistent.

Signed-off-by: Joel Schopp
Reviewed-by: Radim Krčmář
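To illustrate the convention this patch moves to, here is a minimal standalone sketch -- not kernel code: struct kvm_vcpu, its rip field, and skip_emulated_instruction() below are stand-in stubs for the real vendor hook kvm_x86_ops->skip_emulated_instruction(). After the patch, kvm_emulate_halt() performs the skip itself, while kvm_vcpu_halt() is the no-skip variant for paths that have already advanced the instruction pointer:

/*
 * Standalone sketch only -- not kernel code. The type and the skip
 * helper are stand-in stubs used to show the calling convention.
 */
#include <stdio.h>

struct kvm_vcpu { unsigned long rip; };

/* stand-in for kvm_x86_ops->skip_emulated_instruction() */
static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
{
	vcpu->rip += 1;		/* advance past the emulated instruction */
}

/* does only the halt work; assumes the instruction was already skipped */
static int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
{
	printf("vcpu halted, rip=%lu\n", vcpu->rip);
	return 1;
}

/* skips the instruction and then halts -- the convention after this patch */
static int kvm_emulate_halt(struct kvm_vcpu *vcpu)
{
	skip_emulated_instruction(vcpu);
	return kvm_vcpu_halt(vcpu);
}

int main(void)
{
	struct kvm_vcpu vcpu = { .rip = 100 };

	/* an exit handler (e.g. handle_halt/halt_interception) now just calls: */
	kvm_emulate_halt(&vcpu);	/* rip advances, then the vcpu halts */

	/* a path that already advanced rip (e.g. the in-kernel emulator) calls: */
	kvm_vcpu_halt(&vcpu);		/* halt only, no extra skip */
	return 0;
}

With this split an exit handler can no longer forget the skip, and emulator-internal paths such as emulator_wbinvd() keep using the *_noskip variant so the instruction is not skipped twice.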
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/svm.c              |    2 --
 arch/x86/kvm/vmx.c              |    9 +++------
 arch/x86/kvm/x86.c              |   23 ++++++++++++++++++++---
 4 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a236e39..bf5a160 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -933,6 +933,7 @@ struct x86_emulate_ctxt;
 int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size, unsigned short port);
 void kvm_emulate_cpuid(struct kvm_vcpu *vcpu);
 int kvm_emulate_halt(struct kvm_vcpu *vcpu);
+int kvm_vcpu_halt(struct kvm_vcpu *vcpu);
 int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu);
 void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index d319e0c..0c9e377 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1929,14 +1929,12 @@ static int nop_on_interception(struct vcpu_svm *svm)
 static int halt_interception(struct vcpu_svm *svm)
 {
 	svm->next_rip = kvm_rip_read(&svm->vcpu) + 1;
-	skip_emulated_instruction(&svm->vcpu);
 	return kvm_emulate_halt(&svm->vcpu);
 }
 
 static int vmmcall_interception(struct vcpu_svm *svm)
 {
 	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
-	skip_emulated_instruction(&svm->vcpu);
 	kvm_emulate_hypercall(&svm->vcpu);
 	return 1;
 }
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 14c1a18..eef7f53 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4995,7 +4995,7 @@ static int handle_rmode_exception(struct kvm_vcpu *vcpu,
 		if (emulate_instruction(vcpu, 0) == EMULATE_DONE) {
 			if (vcpu->arch.halt_request) {
 				vcpu->arch.halt_request = 0;
-				return kvm_emulate_halt(vcpu);
+				return kvm_vcpu_halt(vcpu);
 			}
 			return 1;
 		}
@@ -5522,13 +5522,11 @@ static int handle_interrupt_window(struct kvm_vcpu *vcpu)
 
 static int handle_halt(struct kvm_vcpu *vcpu)
 {
-	skip_emulated_instruction(vcpu);
 	return kvm_emulate_halt(vcpu);
 }
 
 static int handle_vmcall(struct kvm_vcpu *vcpu)
 {
-	skip_emulated_instruction(vcpu);
 	kvm_emulate_hypercall(vcpu);
 	return 1;
 }
@@ -5559,7 +5557,6 @@ static int handle_rdpmc(struct kvm_vcpu *vcpu)
 
 static int handle_wbinvd(struct kvm_vcpu *vcpu)
 {
-	skip_emulated_instruction(vcpu);
 	kvm_emulate_wbinvd(vcpu);
 	return 1;
 }
@@ -5898,7 +5895,7 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
 
 		if (vcpu->arch.halt_request) {
 			vcpu->arch.halt_request = 0;
-			ret = kvm_emulate_halt(vcpu);
+			ret = kvm_vcpu_halt(vcpu);
 			goto out;
 		}
 
@@ -9513,7 +9510,7 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	vmcs12->launch_state = 1;
 
 	if (vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT)
-		return kvm_emulate_halt(vcpu);
+		return kvm_vcpu_halt(vcpu);
 
 	vmx->nested.nested_run_pending = 1;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bd7a70b..6ff90f7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4706,7 +4706,7 @@ static void emulator_invlpg(struct x86_emulate_ctxt *ctxt, ulong address)
 	kvm_mmu_invlpg(emul_to_vcpu(ctxt), address);
 }
 
-int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
+int kvm_emulate_wbinvd_noskip(struct kvm_vcpu *vcpu)
 {
 	if (!need_emulate_wbinvd(vcpu))
 		return X86EMUL_CONTINUE;
@@ -4723,11 +4723,19 @@ int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
 	wbinvd();
 	return X86EMUL_CONTINUE;
 }
+
+int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
+{
+	kvm_x86_ops->skip_emulated_instruction(vcpu);
+	return kvm_emulate_wbinvd_noskip(vcpu);
+}
 EXPORT_SYMBOL_GPL(kvm_emulate_wbinvd);
 
+
+
 static void emulator_wbinvd(struct x86_emulate_ctxt *ctxt)
 {
-	kvm_emulate_wbinvd(emul_to_vcpu(ctxt));
+	kvm_emulate_wbinvd_noskip(emul_to_vcpu(ctxt));
 }
 
 int emulator_get_dr(struct x86_emulate_ctxt *ctxt, int dr, unsigned long *dest)
@@ -5817,7 +5825,7 @@ void kvm_arch_exit(void)
 	free_percpu(shared_msrs);
 }
 
-int kvm_emulate_halt(struct kvm_vcpu *vcpu)
+int kvm_vcpu_halt(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.halt_exits;
 	if (irqchip_in_kernel(vcpu->kvm)) {
@@ -5828,6 +5836,13 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
 		return 0;
 	}
 }
+EXPORT_SYMBOL_GPL(kvm_vcpu_halt);
+
+int kvm_emulate_halt(struct kvm_vcpu *vcpu)
+{
+	kvm_x86_ops->skip_emulated_instruction(vcpu);
+	return kvm_vcpu_halt(vcpu);
+}
 EXPORT_SYMBOL_GPL(kvm_emulate_halt);
 
 int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
@@ -5912,6 +5927,8 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	unsigned long nr, a0, a1, a2, a3, ret;
 	int op_64_bit, r = 1;
 
+	kvm_x86_ops->skip_emulated_instruction(vcpu);
+
 	if (kvm_hv_hypercall_enabled(vcpu->kvm))
 		return kvm_hv_hypercall(vcpu);