From patchwork Wed Dec 12 17:10:52 2012
X-Patchwork-Submitter: Gleb Natapov
X-Patchwork-Id: 1868591
From: Gleb Natapov
To: kvm@vger.kernel.org
Cc: mtosatti@redhat.com
Subject: [PATCH 4/7] KVM: VMX: use fix_rmode_seg() to fix all code/data segments
Date: Wed, 12 Dec 2012 19:10:52 +0200
Message-Id: <1355332255-26612-5-git-send-email-gleb@redhat.com>
In-Reply-To: <1355332255-26612-1-git-send-email-gleb@redhat.com>
References: <1355332255-26612-1-git-send-email-gleb@redhat.com>

The code for SS and CS does the same thing fix_rmode_seg() is doing.
Use it instead of the hand-crafted code.

Signed-off-by: Gleb Natapov
---
 arch/x86/kvm/vmx.c | 26 ++------------------------
 1 file changed, 2 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index a01fbed..35cafa9 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3315,30 +3315,8 @@ static void vmx_set_segment(struct kvm_vcpu *vcpu,
 	 * unrestricted guest like Westmere to older host that don't have
 	 * unrestricted guest like Nehelem.
 	 */
-	if (vmx->rmode.vm86_active) {
-		switch (seg) {
-		case VCPU_SREG_CS:
-			vmcs_write32(GUEST_CS_AR_BYTES, 0xf3);
-			vmcs_write32(GUEST_CS_LIMIT, 0xffff);
-			if (vmcs_readl(GUEST_CS_BASE) == 0xffff0000)
-				vmcs_writel(GUEST_CS_BASE, 0xf0000);
-			vmcs_write16(GUEST_CS_SELECTOR,
-				     vmcs_readl(GUEST_CS_BASE) >> 4);
-			break;
-		case VCPU_SREG_ES:
-		case VCPU_SREG_DS:
-		case VCPU_SREG_GS:
-		case VCPU_SREG_FS:
-			fix_rmode_seg(seg, &vmx->rmode.segs[seg]);
-			break;
-		case VCPU_SREG_SS:
-			vmcs_write16(GUEST_SS_SELECTOR,
-				     vmcs_readl(GUEST_SS_BASE) >> 4);
-			vmcs_write32(GUEST_SS_LIMIT, 0xffff);
-			vmcs_write32(GUEST_SS_AR_BYTES, 0xf3);
-			break;
-		}
-	}
+	if (vmx->rmode.vm86_active && var->s)
+		fix_rmode_seg(seg, &vmx->rmode.segs[seg]);
 }
 
 static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
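
For context, the values the removed CS/SS branches write (selector = base >> 4, limit = 0xffff, access rights 0xf3) are the standard vm86 real-mode segment defaults that fix_rmode_seg() applies for the data segments. Below is a minimal standalone sketch of that relationship only; the struct and function names are hypothetical, not the kernel's, and the real code writes these values straight into the VMCS with vmcs_write16/32() as the diff shows.

```c
#include <stdint.h>

/*
 * Illustrative sketch only -- not the kernel's fix_rmode_seg().
 * It shows how the cached protected-mode segment base maps onto the
 * real-mode selector/limit/access-rights values seen in the diff.
 */
struct rmode_seg {
	uint16_t selector;
	uint32_t base;
	uint32_t limit;
	uint32_t ar_bytes;
};

static void rmode_fixup_sketch(struct rmode_seg *s, uint32_t saved_base)
{
	/* In real mode the selector is the paragraph number of the base. */
	s->selector = (uint16_t)(saved_base >> 4);
	s->base     = saved_base & 0xffff0;  /* paragraph-aligned base   */
	s->limit    = 0xffff;                /* 64KiB real-mode limit    */
	s->ar_bytes = 0xf3;                  /* present, DPL=3, r/w data */
}
```

With var->s as the new guard, only code and data segments (descriptor S bit set) take the real-mode fixup path, so system segments such as TR and LDTR are left untouched, which matches the segments the old switch statement handled.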