From patchwork Fri Oct 14 19:47:36 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kyle Huey
X-Patchwork-Id: 9377349
From: Kyle Huey
To: xen-devel@lists.xen.org
Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, Jan Beulich, Robert O'Callahan
Date: Fri, 14 Oct 2016 12:47:36 -0700
Message-Id: <20161014194736.5913-3-khuey@kylehuey.com>
X-Mailer: git-send-email 2.10.1
In-Reply-To: <20161014194736.5913-1-khuey@kylehuey.com>
References: <20161014194736.5913-1-khuey@kylehuey.com>
Subject: [Xen-devel] [PATCH v3 2/2] x86/Intel: virtualize support for cpuid faulting

On HVM guests, the cpuid instruction triggers a VM exit, so we can check the
emulated faulting state in vmx_do_cpuid and inject a GP(0) if CPL > 0.
Notably, no hardware support for faulting on cpuid is necessary to emulate
support for an HVM guest.

On PV guests, hardware support is required so that userspace cpuid traps to
Xen. Xen already enables cpuid faulting on supported CPUs for PV guests that
are not the control domain (see the comment in intel_ctxt_switch_levelling).
Every PV guest cpuid therefore traps via a GP(0) to emulate_privileged_op
(via do_general_protection). Once there, we simply decline to emulate cpuid
when CPL > 0 and faulting is enabled, leaving the GP(0) for the guest kernel
to handle.
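For reference, the guest-visible interface being virtualized is the
architectural one: MSR_PLATFORM_INFO (0xce) bit 31 advertises CPUID
faulting, and MSR_MISC_FEATURES_ENABLES (0x140) bit 0 switches it on. A
minimal guest-kernel-side sketch of probing and enabling it could look like
the code below; rdmsr64/wrmsr64 and the bit macros are local illustrative
helpers (not Xen or guest-OS APIs), and the code has to run at CPL 0.

#include <stdint.h>

#define MSR_PLATFORM_INFO              0x000000ce
#define PLATFORM_INFO_CPUID_FAULTING   (1ull << 31)
#define MSR_MISC_FEATURES_ENABLES      0x00000140
#define MISC_FEATURES_CPUID_FAULTING   (1ull << 0)

static inline uint64_t rdmsr64(uint32_t msr)
{
    uint32_t lo, hi;
    asm volatile ( "rdmsr" : "=a" (lo), "=d" (hi) : "c" (msr) );
    return ((uint64_t)hi << 32) | lo;
}

static inline void wrmsr64(uint32_t msr, uint64_t val)
{
    asm volatile ( "wrmsr" :: "c" (msr),
                   "a" ((uint32_t)val), "d" ((uint32_t)(val >> 32)) );
}

/* Returns 1 if CPUID faulting is advertised and was enabled, 0 otherwise. */
static int enable_cpuid_faulting(void)
{
    if ( !(rdmsr64(MSR_PLATFORM_INFO) & PLATFORM_INFO_CPUID_FAULTING) )
        return 0;
    wrmsr64(MSR_MISC_FEATURES_ENABLES,
            rdmsr64(MSR_MISC_FEATURES_ENABLES) |
            MISC_FEATURES_CPUID_FAULTING);
    return 1;
}

Once enabled, a cpuid executed at CPL > 0 in the guest takes a GP(0) instead
of completing, which is the behaviour vmx_do_cpuid (HVM) and
emulate_privileged_op (PV) emulate below.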
Signed-off-by: Kyle Huey
---
 xen/arch/x86/hvm/vmx/vmx.c   | 24 ++++++++++++++++++++++--
 xen/arch/x86/traps.c         | 34 ++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/domain.h |  3 +++
 3 files changed, 59 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b9102ce..c038393 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2427,16 +2427,25 @@ static void vmx_cpuid_intercept(
 
     HVMTRACE_5D (CPUID, input, *eax, *ebx, *ecx, *edx);
 }
 
 static int vmx_do_cpuid(struct cpu_user_regs *regs)
 {
     unsigned int eax, ebx, ecx, edx;
     unsigned int leaf, subleaf;
+    struct segment_register sreg;
+    struct vcpu *v = current;
+
+    hvm_get_segment_register(v, x86_seg_ss, &sreg);
+    if ( v->arch.cpuid_fault && sreg.attr.fields.dpl > 0 )
+    {
+        hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        return 1; /* Don't advance the guest IP! */
+    }
 
     eax = regs->eax;
     ebx = regs->ebx;
     ecx = regs->ecx;
     edx = regs->edx;
 
     leaf = regs->eax;
     subleaf = regs->ecx;
@@ -2694,19 +2703,23 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
     case MSR_IA32_PEBS_ENABLE:
     case MSR_IA32_DS_AREA:
         if ( vpmu_do_rdmsr(msr, msr_content) )
             goto gp_fault;
         break;
 
     case MSR_INTEL_PLATFORM_INFO:
-        if ( rdmsr_safe(MSR_INTEL_PLATFORM_INFO, *msr_content) )
-            goto gp_fault;
+        *msr_content = MSR_PLATFORM_INFO_CPUID_FAULTING;
+        break;
+
+    case MSR_INTEL_MISC_FEATURES_ENABLES:
         *msr_content = 0;
+        if ( current->arch.cpuid_fault )
+            *msr_content |= MSR_MISC_FEATURES_CPUID_FAULTING;
         break;
 
     default:
         if ( passive_domain_do_rdmsr(msr, msr_content) )
             goto done;
         switch ( long_mode_do_msr_read(msr, msr_content) )
         {
         case HNDL_unhandled:
@@ -2925,16 +2938,23 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         break;
 
     case MSR_INTEL_PLATFORM_INFO:
         if ( msr_content ||
              rdmsr_safe(MSR_INTEL_PLATFORM_INFO, msr_content) )
             goto gp_fault;
         break;
 
+    case MSR_INTEL_MISC_FEATURES_ENABLES:
+        if ( msr_content & ~MSR_MISC_FEATURES_CPUID_FAULTING )
+            goto gp_fault;
+        v->arch.cpuid_fault =
+            !!(msr_content & MSR_MISC_FEATURES_CPUID_FAULTING);
+        break;
+
     default:
         if ( passive_domain_do_wrmsr(msr, msr_content) )
             return X86EMUL_OKAY;
 
         if ( wrmsr_viridian_regs(msr, msr_content) )
             break;
 
         switch ( long_mode_do_msr_write(msr, msr_content) )
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 293ff8d..12322bd 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1315,16 +1315,24 @@ static int emulate_forced_invalid_op(struct cpu_user_regs *regs)
 
     /* We only emulate CPUID. */
     if ( ( rc = copy_from_user(instr, (char *)eip, sizeof(instr))) != 0 )
     {
         propagate_page_fault(eip + sizeof(instr) - rc, 0);
         return EXCRET_fault_fixed;
     }
     if ( memcmp(instr, "\xf\xa2", sizeof(instr)) )
         return 0;
+
+    /* If cpuid faulting is enabled and CPL>0 inject a #GP in place of #UD. */
+    if ( current->arch.cpuid_fault && !guest_kernel_mode(current, regs) ) {
+        regs->eip = eip;
+        do_guest_trap(TRAP_gp_fault, regs);
+        return EXCRET_fault_fixed;
+    }
+
     eip += sizeof(instr);
 
     pv_cpuid(regs);
 
     instruction_done(regs, eip, 0);
 
     trace_trap_one_addr(TRC_PV_FORCED_INVALID_OP, regs->eip);
@@ -2474,16 +2482,27 @@ static int priv_op_read_msr(unsigned int reg, uint64_t *val,
         *val = 0;
         return X86EMUL_OKAY;
 
     case MSR_INTEL_PLATFORM_INFO:
         if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
              rdmsr_safe(MSR_INTEL_PLATFORM_INFO, *val) )
             break;
         *val = 0;
+        if ( this_cpu(cpuid_faulting_enabled) )
+            *val |= MSR_PLATFORM_INFO_CPUID_FAULTING;
+        return X86EMUL_OKAY;
+
+    case MSR_INTEL_MISC_FEATURES_ENABLES:
+        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
+             rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, *val) )
+            break;
+        *val = 0;
+        if ( curr->arch.cpuid_fault )
+            *val |= MSR_MISC_FEATURES_CPUID_FAULTING;
         return X86EMUL_OKAY;
 
     case MSR_P6_PERFCTR(0)...MSR_P6_PERFCTR(7):
     case MSR_P6_EVNTSEL(0)...MSR_P6_EVNTSEL(3):
     case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
     case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
         {
@@ -2677,16 +2696,27 @@ static int priv_op_write_msr(unsigned int reg, uint64_t val,
         return X86EMUL_OKAY;
 
     case MSR_INTEL_PLATFORM_INFO:
         if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
              val || rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val) )
             break;
         return X86EMUL_OKAY;
 
+    case MSR_INTEL_MISC_FEATURES_ENABLES:
+        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
+             (val & ~MSR_MISC_FEATURES_CPUID_FAULTING) ||
+             rdmsr_safe(MSR_INTEL_MISC_FEATURES_ENABLES, temp) )
+            break;
+        if ( (val & MSR_MISC_FEATURES_CPUID_FAULTING) &&
+             !this_cpu(cpuid_faulting_enabled) )
+            break;
+        curr->arch.cpuid_fault = !!(val & MSR_MISC_FEATURES_CPUID_FAULTING);
+        return X86EMUL_OKAY;
+
     case MSR_P6_PERFCTR(0)...MSR_P6_PERFCTR(7):
     case MSR_P6_EVNTSEL(0)...MSR_P6_EVNTSEL(3):
     case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
     case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
         {
             vpmu_msr = true;
     case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
@@ -3186,16 +3216,20 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
         if ( priv_op_read_msr(regs->_ecx, &val, NULL) != X86EMUL_OKAY )
             goto fail;
  rdmsr_writeback:
         regs->eax = (uint32_t)val;
         regs->edx = (uint32_t)(val >> 32);
         break;
 
     case 0xa2: /* CPUID */
+        /* If cpuid faulting is enabled and CPL>0 leave the #GP untouched. */
+        if ( v->arch.cpuid_fault && !guest_kernel_mode(v, regs) )
+            goto fail;
+
         pv_cpuid(regs);
         break;
 
     default:
         goto fail;
     }
 
 #undef wr_ad
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 5807a1f..27c20cc 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -552,16 +552,19 @@ struct arch_vcpu
      * However, processor should not be able to touch eXtended states before
      * it explicitly enables it via xcr0.
      */
     uint64_t xcr0_accum;
 
     /* This variable determines whether nonlazy extended state has been used,
      * and thus should be saved/restored. */
     bool_t nonlazy_xstate_used;
 
+    /* Has the guest enabled CPUID faulting? */
+    bool cpuid_fault;
+
     /*
      * The SMAP check policy when updating runstate_guest(v) and the
      * secondary system time.
      */
     smap_check_policy_t smap_check_policy;
 
     struct vmce vmce;