From patchwork Tue May 13 14:55:40 2014
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 4168411
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org
Cc: jan.kiszka@siemens.com, kvm@vger.kernel.org, gleb@kernel.org,
	avi.kivity@gmail.com
Subject: [PATCH 4/5] KVM: vmx: force CPL=0 on real->protected mode transition
Date: Tue, 13 May 2014 16:55:40 +0200
Message-Id: <1399992941-11600-5-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1399992941-11600-1-git-send-email-pbonzini@redhat.com>
References: <1399992941-11600-1-git-send-email-pbonzini@redhat.com>

When writing 1 to CR0.PE, the CPL remains 0 even if bits 0-1 of CS
disagree.  Before calling vmx_set_cr0, VCPU_EXREG_CPL was cleared by
vmx_vcpu_run, so set it again and make sure that enter_pmode does not
change it.

Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/vmx.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 7dc5fdd30d7f..7440cce3bf30 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3102,7 +3102,7 @@ static void fix_pmode_seg(struct kvm_vcpu *vcpu, int seg,
 		 * CS and SS RPL should be equal during guest entry according
 		 * to VMX spec, but in reality it is not always so. Since vcpu
 		 * is in the middle of the transition from real mode to
-		 * protected mode it is safe to assume that RPL 0 is a good
+		 * protected mode, the CPL is 0; thus RPL 0 is a good
 		 * default value.
 		 */
 		if (seg == VCPU_SREG_CS || seg == VCPU_SREG_SS)
@@ -3110,7 +3110,7 @@ static void fix_pmode_seg(struct kvm_vcpu *vcpu, int seg,
 		save->dpl = save->selector & SELECTOR_RPL_MASK;
 		save->s = 1;
 	}
-	vmx_set_segment(vcpu, save, seg, false);
+	vmx_set_segment(vcpu, save, seg, true);
 }

 static void enter_pmode(struct kvm_vcpu *vcpu)
@@ -3151,10 +3151,6 @@ static void enter_pmode(struct kvm_vcpu *vcpu)
 	fix_pmode_seg(vcpu, VCPU_SREG_DS, &vmx->rmode.segs[VCPU_SREG_DS]);
 	fix_pmode_seg(vcpu, VCPU_SREG_FS, &vmx->rmode.segs[VCPU_SREG_FS]);
 	fix_pmode_seg(vcpu, VCPU_SREG_GS, &vmx->rmode.segs[VCPU_SREG_GS]);
-
-	/* CPL is always 0 when CPU enters protected mode */
-	__set_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail);
-	vmx->cpl = 0;
 }

 static void fix_rmode_seg(int seg, struct kvm_segment *save)
@@ -3397,11 +3393,21 @@ static void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	else {
 		hw_cr0 |= KVM_VM_CR0_ALWAYS_ON;

-		if (vmx->rmode.vm86_active && (cr0 & X86_CR0_PE))
-			enter_pmode(vcpu);
+		if (cr0 & X86_CR0_PE) {
+			/*
+			 * CPL is always 0 when CPU enters protected
+			 * mode, bits 0-1 of CS do not matter.
+			 */
+			__set_bit(VCPU_EXREG_CPL,
+				  (ulong *)&vcpu->arch.regs_avail);
+			vmx->cpl = 0;

-		if (!vmx->rmode.vm86_active && !(cr0 & X86_CR0_PE))
-			enter_rmode(vcpu);
+			if (vmx->rmode.vm86_active)
+				enter_pmode(vcpu);
+		} else {
+			if (!vmx->rmode.vm86_active)
+				enter_rmode(vcpu);
+		}
 	}

 #ifdef CONFIG_X86_64