From patchwork Fri Oct 23 21:43:55 2015
X-Patchwork-Submitter: Laszlo Ersek
X-Patchwork-Id: 7478671
From: Laszlo Ersek
To: kvm@vger.kernel.org, lersek@redhat.com
Cc: Paolo Bonzini, Radim Krčmář, Jordan Justen, Michael Kinney,
    stable@vger.kernel.org
Subject: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2
Date: Fri, 23 Oct 2015 23:43:55 +0200
Message-Id: <1445636635-32260-1-git-send-email-lersek@redhat.com>

Commit b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
reordered the rsm_load_seg_64() and rsm_enter_protected_mode() calls
relative to each other. The argument that said commit made was correct;
however, putting rsm_enter_protected_mode() first wholesale violated the
following (correct) invariant from em_rsm():

 * Get back to real mode, to prepare a safe state in which to load
 * CR0/CR3/CR4/EFER. Also this will ensure that addresses passed
 * to read_std/write_std are not virtual.

Namely, rsm_enter_protected_mode() may re-enable paging, *after* which
the call chain

  rsm_load_seg_64()
    GET_SMSTATE()
      read_std()

will try to interpret the (smbase + offset) address as a virtual one.
This results in unexpected page faults being injected into the guest in
response to the RSM instruction.

Split rsm_load_seg_64() in two parts:

- The first part, rsm_stash_seg_64(), shall call GET_SMSTATE() while
  still in real mode, and save the relevant segment state from SMRAM
  into an array local to rsm_load_state_64().

- The second part, rsm_load_seg_64(), shall occur after entering
  protected mode, but the segment details shall come from the local
  array, not the guest's SMRAM.
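
For clarity, here is a condensed sketch of the resulting ordering inside
rsm_load_state_64(); the surrounding register/descriptor-table restoration
and the cr0/cr4 setup are elided, see the full diff below for the actual
code:

  static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
  {
          struct rsm_stashed_seg_64 stash[6];
          int i, r;

          /* ... GPRs, IDT/GDT, cr0/cr4/etc. read from SMRAM (elided) ... */

          /* Step 1: still in real mode, so GET_SMSTATE() reads from
           * (smbase + offset) are physical, not virtual. */
          for (i = 0; i < ARRAY_SIZE(stash); i++) {
                  r = rsm_stash_seg_64(ctxt, stash, smbase, i);
                  if (r != X86EMUL_CONTINUE)
                          return r;
          }

          /* Step 2: may re-enable paging via CR0/CR4. */
          r = rsm_enter_protected_mode(ctxt, cr0, cr4);
          if (r != X86EMUL_CONTINUE)
                  return r;

          /* Step 3: no SMRAM accesses here; only the stashed copies
           * are written into the segment registers. */
          for (i = 0; i < ARRAY_SIZE(stash); i++)
                  rsm_load_seg_64(ctxt, stash, i);

          return X86EMUL_CONTINUE;
  }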
Fixes: b10d92a54dac25a6152f1aa1ffc95c12908035ce
Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Jordan Justen
Cc: Michael Kinney
Cc: stable@vger.kernel.org
Signed-off-by: Laszlo Ersek
Reviewed-by: Radim Krčmář
---
 arch/x86/kvm/emulate.c | 37 ++++++++++++++++++++++++++++++-------
 1 file changed, 30 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 9da95b9..25e16b6 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2311,7 +2311,16 @@ static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 	return X86EMUL_CONTINUE;
 }
 
-static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
+struct rsm_stashed_seg_64 {
+	u16 selector;
+	struct desc_struct desc;
+	u32 base3;
+};
+
+static int rsm_stash_seg_64(struct x86_emulate_ctxt *ctxt,
+			    struct rsm_stashed_seg_64 *stash,
+			    u64 smbase,
+			    int n)
 {
 	struct desc_struct desc;
 	int offset;
@@ -2326,10 +2335,20 @@ static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 	set_desc_base(&desc, GET_SMSTATE(u32, smbase, offset + 8));
 	base3 = GET_SMSTATE(u32, smbase, offset + 12);
 
-	ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
+	stash[n].selector = selector;
+	stash[n].desc = desc;
+	stash[n].base3 = base3;
 	return X86EMUL_CONTINUE;
 }
 
+static inline void rsm_load_seg_64(struct x86_emulate_ctxt *ctxt,
+				   struct rsm_stashed_seg_64 *stash,
+				   int n)
+{
+	ctxt->ops->set_segment(ctxt, stash[n].selector, &stash[n].desc,
+			       stash[n].base3, n);
+}
+
 static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
 				    u64 cr0, u64 cr4)
 {
@@ -2419,6 +2438,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	u32 base3;
 	u16 selector;
 	int i, r;
+	struct rsm_stashed_seg_64 stash[6];
 
 	for (i = 0; i < 16; i++)
 		*reg_write(ctxt, i) = GET_SMSTATE(u64, smbase, 0x7ff8 - i * 8);
@@ -2460,15 +2480,18 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	dt.address = GET_SMSTATE(u64, smbase, 0x7e68);
 	ctxt->ops->set_gdt(ctxt, &dt);
 
+	for (i = 0; i < ARRAY_SIZE(stash); i++) {
+		r = rsm_stash_seg_64(ctxt, stash, smbase, i);
+		if (r != X86EMUL_CONTINUE)
+			return r;
+	}
+
 	r = rsm_enter_protected_mode(ctxt, cr0, cr4);
 	if (r != X86EMUL_CONTINUE)
 		return r;
 
-	for (i = 0; i < 6; i++) {
-		r = rsm_load_seg_64(ctxt, smbase, i);
-		if (r != X86EMUL_CONTINUE)
-			return r;
-	}
+	for (i = 0; i < ARRAY_SIZE(stash); i++)
+		rsm_load_seg_64(ctxt, stash, i);
 
 	return X86EMUL_CONTINUE;
 }