From patchwork Mon Sep 4 17:21:01 2017
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 9937439
From: Andrew Cooper
To: Xen-devel
Date: Mon, 4 Sep 2017 18:21:01 +0100
Message-ID: <1504545661-24626-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.1.4
Cc: Petre Pircalabu, Andrew Cooper, Jan Beulich
Subject: [Xen-devel] [PATCH RFC] x86/emul: Fix the handling of unimplemented Grp7 instructions
List-Id: Xen developer discussion

Grp7 is abnormally complicated to decode, even by x86's standards, with
{s,l}msw being the problematic cases.

Previously, any value which fell through the first switch statement
(looking for instructions with entirely implicit operands) would be
interpreted by the second switch statement (handling instructions with
memory operands).  Unimplemented instructions would then hit the #UD
case for having a non-memory operand, rather than taking the
cannot_emulate path.

Place a big if/else around the two switch statements (accounting for
{s,l}msw which need handling in the else clause), so both switch
statements can have a default goto cannot_emulate path.

Reported-by: Petre Pircalabu
Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Petre Pircalabu

RFC as I've only done light testing so far.
---
 xen/arch/x86/x86_emulate/x86_emulate.c | 353 +++++++++++++++++----------------
 1 file changed, 184 insertions(+), 169 deletions(-)

diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index 2201852..af3d8da 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -4987,197 +4987,212 @@ x86_emulate(
         }
         break;
 
-    case X86EMUL_OPC(0x0f, 0x01): /* Grp7 */ {
-        unsigned long base, limit, cr0, cr0w;
+    case X86EMUL_OPC(0x0f, 0x01): /* Grp7 */
+    {
+        unsigned long base, limit;
 
-        switch( modrm )
+        if ( (modrm & 0xc0) == 0xc0 &&
+             (modrm_reg & 7) != 4 /* smsw */ &&
+             (modrm_reg & 7) != 6 /* lmsw */ )
         {
-        case 0xca: /* clac */
-        case 0xcb: /* stac */
-            vcpu_must_have(smap);
-            generate_exception_if(vex.pfx || !mode_ring0(), EXC_UD);
-
-            _regs.eflags &= ~X86_EFLAGS_AC;
-            if ( modrm == 0xcb )
-                _regs.eflags |= X86_EFLAGS_AC;
-            goto complete_insn;
+            switch ( modrm )
+            {
+            case 0xca: /* clac */
+            case 0xcb: /* stac */
+                vcpu_must_have(smap);
+                generate_exception_if(vex.pfx || !mode_ring0(), EXC_UD);
+
+                _regs.eflags &= ~X86_EFLAGS_AC;
+                if ( modrm == 0xcb )
+                    _regs.eflags |= X86_EFLAGS_AC;
+                goto complete_insn;
 
 #ifdef __XEN__
-        case 0xd1: /* xsetbv */
-            generate_exception_if(vex.pfx, EXC_UD);
-            if ( !ops->read_cr ||
-                 ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
-                cr4 = 0;
-            generate_exception_if(!(cr4 & X86_CR4_OSXSAVE), EXC_UD);
-            generate_exception_if(!mode_ring0() ||
-                                  handle_xsetbv(_regs.ecx,
-                                                _regs.eax | (_regs.rdx << 32)),
-                                  EXC_GP, 0);
-            goto complete_insn;
+            case 0xd1: /* xsetbv */
+                generate_exception_if(vex.pfx, EXC_UD);
+                if ( !ops->read_cr ||
+                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+                    cr4 = 0;
+                generate_exception_if(!(cr4 & X86_CR4_OSXSAVE), EXC_UD);
+                generate_exception_if(!mode_ring0() ||
+                                      handle_xsetbv(_regs.ecx, _regs.eax |
+                                                    (_regs.rdx << 32)),
+                                      EXC_GP, 0);
+                goto complete_insn;
 #endif
 
-        case 0xd4: /* vmfunc */
-            generate_exception_if(vex.pfx, EXC_UD);
-            fail_if(!ops->vmfunc);
-            if ( (rc = ops->vmfunc(ctxt)) != X86EMUL_OKAY )
-                goto done;
-            goto complete_insn;
-
-        case 0xd5: /* xend */
-            generate_exception_if(vex.pfx, EXC_UD);
-            generate_exception_if(!vcpu_has_rtm(), EXC_UD);
-            generate_exception_if(vcpu_has_rtm(), EXC_GP, 0);
-            break;
-
-        case 0xd6: /* xtest */
-            generate_exception_if(vex.pfx, EXC_UD);
-            generate_exception_if(!vcpu_has_rtm() && !vcpu_has_hle(),
-                                  EXC_UD);
-            /* Neither HLE nor RTM can be active when we get here. */
-            _regs.eflags |= X86_EFLAGS_ZF;
-            goto complete_insn;
+            case 0xd4: /* vmfunc */
+                generate_exception_if(vex.pfx, EXC_UD);
+                fail_if(!ops->vmfunc);
+                if ( (rc = ops->vmfunc(ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                goto complete_insn;
 
-        case 0xdf: /* invlpga */
-            generate_exception_if(!in_protmode(ctxt, ops), EXC_UD);
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
-            fail_if(ops->invlpg == NULL);
-            if ( (rc = ops->invlpg(x86_seg_none, truncate_ea(_regs.r(ax)),
-                                   ctxt)) )
-                goto done;
-            goto complete_insn;
+            case 0xd5: /* xend */
+                generate_exception_if(vex.pfx, EXC_UD);
+                generate_exception_if(!vcpu_has_rtm(), EXC_UD);
+                generate_exception_if(vcpu_has_rtm(), EXC_GP, 0);
+                break;
 
-        case 0xf9: /* rdtscp */
-            fail_if(ops->read_msr == NULL);
-            if ( (rc = ops->read_msr(MSR_TSC_AUX,
-                                     &msr_val, ctxt)) != X86EMUL_OKAY )
-                goto done;
-            _regs.r(cx) = (uint32_t)msr_val;
-            goto rdtsc;
+            case 0xd6: /* xtest */
+                generate_exception_if(vex.pfx, EXC_UD);
+                generate_exception_if(!vcpu_has_rtm() && !vcpu_has_hle(),
+                                      EXC_UD);
+                /* Neither HLE nor RTM can be active when we get here. */
+                _regs.eflags |= X86_EFLAGS_ZF;
+                goto complete_insn;
 
-        case 0xfc: /* clzero */
-        {
-            unsigned long zero = 0;
+            case 0xdf: /* invlpga */
+                generate_exception_if(!in_protmode(ctxt, ops), EXC_UD);
+                generate_exception_if(!mode_ring0(), EXC_GP, 0);
+                fail_if(ops->invlpg == NULL);
+                if ( (rc = ops->invlpg(x86_seg_none, truncate_ea(_regs.r(ax)),
+                                       ctxt)) )
+                    goto done;
+                goto complete_insn;
 
-            vcpu_must_have(clzero);
+            case 0xf9: /* rdtscp */
+                fail_if(ops->read_msr == NULL);
+                if ( (rc = ops->read_msr(MSR_TSC_AUX,
+                                         &msr_val, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                _regs.r(cx) = (uint32_t)msr_val;
+                goto rdtsc;
 
-            base = ad_bytes == 8 ? _regs.r(ax) :
-                   ad_bytes == 4 ? _regs.eax : _regs.ax;
-            limit = 0;
-            if ( vcpu_has_clflush() &&
-                 ops->cpuid(1, 0, &cpuid_leaf, ctxt) == X86EMUL_OKAY )
-                limit = ((cpuid_leaf.b >> 8) & 0xff) * 8;
-            generate_exception_if(limit < sizeof(long) ||
-                                  (limit & (limit - 1)), EXC_UD);
-            base &= ~(limit - 1);
-            if ( ops->rep_stos )
+            case 0xfc: /* clzero */
             {
-                unsigned long nr_reps = limit / sizeof(zero);
+                unsigned long zero = 0;
+
+                vcpu_must_have(clzero);
+
+                base = ad_bytes == 8 ? _regs.r(ax) :
+                       ad_bytes == 4 ? _regs.eax : _regs.ax;
+                limit = 0;
+                if ( vcpu_has_clflush() &&
+                     ops->cpuid(1, 0, &cpuid_leaf, ctxt) == X86EMUL_OKAY )
+                    limit = ((cpuid_leaf.b >> 8) & 0xff) * 8;
+                generate_exception_if(limit < sizeof(long) ||
+                                      (limit & (limit - 1)), EXC_UD);
+                base &= ~(limit - 1);
+                if ( ops->rep_stos )
+                {
+                    unsigned long nr_reps = limit / sizeof(zero);
 
-                rc = ops->rep_stos(&zero, ea.mem.seg, base, sizeof(zero),
-                                   &nr_reps, ctxt);
-                if ( rc == X86EMUL_OKAY )
+                    rc = ops->rep_stos(&zero, ea.mem.seg, base, sizeof(zero),
+                                       &nr_reps, ctxt);
+                    if ( rc == X86EMUL_OKAY )
+                    {
+                        base += nr_reps * sizeof(zero);
+                        limit -= nr_reps * sizeof(zero);
+                    }
+                    else if ( rc != X86EMUL_UNHANDLEABLE )
+                        goto done;
+                }
+                fail_if(limit && !ops->write);
+                while ( limit )
                 {
-                    base += nr_reps * sizeof(zero);
-                    limit -= nr_reps * sizeof(zero);
+                    rc = ops->write(ea.mem.seg, base, &zero,
+                                    sizeof(zero), ctxt);
+                    if ( rc != X86EMUL_OKAY )
+                        goto done;
+                    base += sizeof(zero);
+                    limit -= sizeof(zero);
                 }
-                else if ( rc != X86EMUL_UNHANDLEABLE )
-                    goto done;
+                goto complete_insn;
             }
-            fail_if(limit && !ops->write);
-            while ( limit )
-            {
-                rc = ops->write(ea.mem.seg, base, &zero, sizeof(zero), ctxt);
-                if ( rc != X86EMUL_OKAY )
-                    goto done;
-                base += sizeof(zero);
-                limit -= sizeof(zero);
+
+            default:
+                goto cannot_emulate;
             }
-            goto complete_insn;
-        }
         }
+        else
+        {
+            unsigned long cr0, cr0w;
 
-        seg = (modrm_reg & 1) ? x86_seg_idtr : x86_seg_gdtr;
+            seg = (modrm_reg & 1) ? x86_seg_idtr : x86_seg_gdtr;
 
-        switch ( modrm_reg & 7 )
-        {
-        case 0: /* sgdt */
-        case 1: /* sidt */
-            generate_exception_if(ea.type != OP_MEM, EXC_UD);
-            generate_exception_if(umip_active(ctxt, ops), EXC_GP, 0);
-            fail_if(!ops->read_segment || !ops->write);
-            if ( (rc = ops->read_segment(seg, &sreg, ctxt)) )
-                goto done;
-            if ( mode_64bit() )
-                op_bytes = 8;
-            else if ( op_bytes == 2 )
-            {
-                sreg.base &= 0xffffff;
-                op_bytes = 4;
-            }
-            if ( (rc = ops->write(ea.mem.seg, ea.mem.off, &sreg.limit,
-                                  2, ctxt)) != X86EMUL_OKAY ||
-                 (rc = ops->write(ea.mem.seg, ea.mem.off + 2, &sreg.base,
-                                  op_bytes, ctxt)) != X86EMUL_OKAY )
-                goto done;
-            break;
-        case 2: /* lgdt */
-        case 3: /* lidt */
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
-            generate_exception_if(ea.type != OP_MEM, EXC_UD);
-            fail_if(ops->write_segment == NULL);
-            memset(&sreg, 0, sizeof(sreg));
-            if ( (rc = read_ulong(ea.mem.seg, ea.mem.off+0,
-                                  &limit, 2, ctxt, ops)) ||
-                 (rc = read_ulong(ea.mem.seg, ea.mem.off+2,
-                                  &base, mode_64bit() ? 8 : 4, ctxt, ops)) )
-                goto done;
-            generate_exception_if(!is_canonical_address(base), EXC_GP, 0);
-            sreg.base = base;
-            sreg.limit = limit;
-            if ( !mode_64bit() && op_bytes == 2 )
-                sreg.base &= 0xffffff;
-            if ( (rc = ops->write_segment(seg, &sreg, ctxt)) )
-                goto done;
-            break;
-        case 4: /* smsw */
-            generate_exception_if(umip_active(ctxt, ops), EXC_GP, 0);
-            if ( ea.type == OP_MEM )
+            switch ( modrm_reg & 7 )
             {
-                fail_if(!ops->write);
-                d |= Mov; /* force writeback */
-                ea.bytes = 2;
+            case 0: /* sgdt */
+            case 1: /* sidt */
+                generate_exception_if(ea.type != OP_MEM, EXC_UD);
+                generate_exception_if(umip_active(ctxt, ops), EXC_GP, 0);
+                fail_if(!ops->read_segment || !ops->write);
+                if ( (rc = ops->read_segment(seg, &sreg, ctxt)) )
+                    goto done;
+                if ( mode_64bit() )
+                    op_bytes = 8;
+                else if ( op_bytes == 2 )
+                {
+                    sreg.base &= 0xffffff;
+                    op_bytes = 4;
+                }
+                if ( (rc = ops->write(ea.mem.seg, ea.mem.off, &sreg.limit,
+                                      2, ctxt)) != X86EMUL_OKAY ||
+                     (rc = ops->write(ea.mem.seg, ea.mem.off + 2, &sreg.base,
+                                      op_bytes, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                break;
+            case 2: /* lgdt */
+            case 3: /* lidt */
+                generate_exception_if(!mode_ring0(), EXC_GP, 0);
+                generate_exception_if(ea.type != OP_MEM, EXC_UD);
+                fail_if(ops->write_segment == NULL);
+                memset(&sreg, 0, sizeof(sreg));
+                if ( (rc = read_ulong(ea.mem.seg, ea.mem.off+0,
+                                      &limit, 2, ctxt, ops)) ||
+                     (rc = read_ulong(ea.mem.seg, ea.mem.off+2,
+                                      &base, mode_64bit() ? 8 : 4, ctxt, ops)) )
+                    goto done;
+                generate_exception_if(!is_canonical_address(base), EXC_GP, 0);
+                sreg.base = base;
+                sreg.limit = limit;
+                if ( !mode_64bit() && op_bytes == 2 )
+                    sreg.base &= 0xffffff;
+                if ( (rc = ops->write_segment(seg, &sreg, ctxt)) )
+                    goto done;
+                break;
+            case 4: /* smsw */
+                generate_exception_if(umip_active(ctxt, ops), EXC_GP, 0);
+                if ( ea.type == OP_MEM )
+                {
+                    fail_if(!ops->write);
+                    d |= Mov; /* force writeback */
+                    ea.bytes = 2;
+                }
+                else
+                    ea.bytes = op_bytes;
+                dst = ea;
+                fail_if(ops->read_cr == NULL);
+                if ( (rc = ops->read_cr(0, &dst.val, ctxt)) )
+                    goto done;
+                break;
+            case 6: /* lmsw */
+                fail_if(ops->read_cr == NULL);
+                fail_if(ops->write_cr == NULL);
+                generate_exception_if(!mode_ring0(), EXC_GP, 0);
+                if ( (rc = ops->read_cr(0, &cr0, ctxt)) )
+                    goto done;
+                if ( ea.type == OP_REG )
+                    cr0w = *ea.reg;
+                else if ( (rc = read_ulong(ea.mem.seg, ea.mem.off,
+                                           &cr0w, 2, ctxt, ops)) )
+                    goto done;
+                /* LMSW can: (1) set bits 0-3; (2) clear bits 1-3. */
+                cr0 = (cr0 & ~0xe) | (cr0w & 0xf);
+                if ( (rc = ops->write_cr(0, cr0, ctxt)) )
+                    goto done;
+                break;
+            case 7: /* invlpg */
+                generate_exception_if(!mode_ring0(), EXC_GP, 0);
+                generate_exception_if(ea.type != OP_MEM, EXC_UD);
+                fail_if(ops->invlpg == NULL);
+                if ( (rc = ops->invlpg(ea.mem.seg, ea.mem.off, ctxt)) )
+                    goto done;
+                break;
+            default:
+                goto cannot_emulate;
             }
-            else
-                ea.bytes = op_bytes;
-            dst = ea;
-            fail_if(ops->read_cr == NULL);
-            if ( (rc = ops->read_cr(0, &dst.val, ctxt)) )
-                goto done;
-            break;
-        case 6: /* lmsw */
-            fail_if(ops->read_cr == NULL);
-            fail_if(ops->write_cr == NULL);
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
-            if ( (rc = ops->read_cr(0, &cr0, ctxt)) )
-                goto done;
-            if ( ea.type == OP_REG )
-                cr0w = *ea.reg;
-            else if ( (rc = read_ulong(ea.mem.seg, ea.mem.off,
-                                       &cr0w, 2, ctxt, ops)) )
-                goto done;
-            /* LMSW can: (1) set bits 0-3; (2) clear bits 1-3. */
-            cr0 = (cr0 & ~0xe) | (cr0w & 0xf);
-            if ( (rc = ops->write_cr(0, cr0, ctxt)) )
-                goto done;
-            break;
-        case 7: /* invlpg */
-            generate_exception_if(!mode_ring0(), EXC_GP, 0);
-            generate_exception_if(ea.type != OP_MEM, EXC_UD);
-            fail_if(ops->invlpg == NULL);
-            if ( (rc = ops->invlpg(ea.mem.seg, ea.mem.off, ctxt)) )
-                goto done;
-            break;
-        default:
-            goto cannot_emulate;
         }
         break;
     }