From patchwork Fri Oct 30 15:36:26 2015
From: Radim Krčmář
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Paolo Bonzini, Laszlo Ersek
Subject: [PATCH 3/3] KVM: x86: simplify RSM into 64-bit protected mode
Date: Fri, 30 Oct 2015 16:36:26 +0100
Message-Id: <1446219386-23937-4-git-send-email-rkrcmar@redhat.com>
In-Reply-To: <1446219386-23937-1-git-send-email-rkrcmar@redhat.com>
References: <1446219386-23937-1-git-send-email-rkrcmar@redhat.com>
X-Patchwork-Id: 7527771

This reverts 0123456789abc ("KVM: x86: fix RSM into 64-bit protected mode,
round 2").  We've achieved the same by treating SMBASE as a physical address
in the previous patch.

Signed-off-by: Radim Krčmář
---
 arch/x86/kvm/emulate.c | 37 +++++++------------------------------
 1 file changed, 7 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 59e80e0de865..b60fed56671b 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2311,16 +2311,7 @@ static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 	return X86EMUL_CONTINUE;
 }
 
-struct rsm_stashed_seg_64 {
-	u16 selector;
-	struct desc_struct desc;
-	u32 base3;
-};
-
-static int rsm_stash_seg_64(struct x86_emulate_ctxt *ctxt,
-			    struct rsm_stashed_seg_64 *stash,
-			    u64 smbase,
-			    int n)
+static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 {
 	struct desc_struct desc;
 	int offset;
@@ -2335,20 +2326,10 @@ static int rsm_stash_seg_64(struct x86_emulate_ctxt *ctxt,
 	set_desc_base(&desc, GET_SMSTATE(u32, smbase, offset + 8));
 	base3 = GET_SMSTATE(u32, smbase, offset + 12);
-	stash[n].selector = selector;
-	stash[n].desc = desc;
-	stash[n].base3 = base3;
+	ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
 
 	return X86EMUL_CONTINUE;
 }
 
-static inline void rsm_load_seg_64(struct x86_emulate_ctxt *ctxt,
-				   struct rsm_stashed_seg_64 *stash,
-				   int n)
-{
-	ctxt->ops->set_segment(ctxt, stash[n].selector, &stash[n].desc,
-			       stash[n].base3, n);
-}
-
 static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
 				    u64 cr0, u64 cr4)
 {
@@ -2438,7 +2419,6 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	u32 base3;
 	u16 selector;
 	int i, r;
-	struct rsm_stashed_seg_64 stash[6];
 
 	for (i = 0; i < 16; i++)
 		*reg_write(ctxt, i) = GET_SMSTATE(u64, smbase, 0x7ff8 - i * 8);
@@ -2480,18 +2460,15 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	dt.address = GET_SMSTATE(u64, smbase, 0x7e68);
 	ctxt->ops->set_gdt(ctxt, &dt);
 
-	for (i = 0; i < ARRAY_SIZE(stash); i++) {
-		r = rsm_stash_seg_64(ctxt, stash, smbase, i);
-		if (r != X86EMUL_CONTINUE)
-			return r;
-	}
-
 	r = rsm_enter_protected_mode(ctxt, cr0, cr4);
 	if (r != X86EMUL_CONTINUE)
 		return r;
 
-	for (i = 0; i < ARRAY_SIZE(stash); i++)
-		rsm_load_seg_64(ctxt, stash, i);
+	for (i = 0; i < 6; i++) {
+		r = rsm_load_seg_64(ctxt, smbase, i);
+		if (r != X86EMUL_CONTINUE)
+			return r;
+	}
 
 	return X86EMUL_CONTINUE;
 }