From patchwork Mon Jan 17 18:34:12 2022
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 12715664
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian
Subject: [PATCH v2 1/4] x86/guest: Introduce {get,set}_reg() infrastructure
Date: Mon, 17 Jan 2022 18:34:12 +0000
Message-ID: <20220117183415.11150-2-andrew.cooper3@citrix.com>
In-Reply-To: <20220117183415.11150-1-andrew.cooper3@citrix.com>
References: <20220117183415.11150-1-andrew.cooper3@citrix.com>

Various registers have per-guest-type or per-vendor locations or access
requirements.  To support their use from common code, provide accessors which
allow for per-guest-type behaviour.

For now, this is just the infrastructure, handling default cases and
expectations.  Subsequent patches will start handling registers using it.

Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
CC: Jun Nakajima
CC: Kevin Tian

It is deliberately {get,set}_reg() because, in the fullness of time, it will
handle more than just MSRs.  There's loads of space in the MSR index range
which we can reuse for non-MSRs.
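For illustration, a sketch of the intended calling convention (not part of
the patch - read_reg() is a made-up wrapper; the accessors themselves are
introduced below):

    /* Hypothetical common-code helper: guest-type-agnostic register read. */
    static uint64_t read_reg(struct vcpu *v, unsigned int reg)
    {
        /* 'reg' uses the MSR index space; non-MSRs can reuse gaps in it. */
        return is_pv_domain(v->domain) ? pv_get_reg(v, reg)
                                       : hvm_get_reg(v, reg);
    }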
v2:
 * New
---
 xen/arch/x86/hvm/hvm.c               | 22 ++++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c           | 30 ++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vmx.c           | 31 +++++++++++++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/hvm.h   | 24 ++++++++++++++++++++++++
 xen/arch/x86/include/asm/pv/domain.h | 13 +++++++++++++
 xen/arch/x86/pv/emulate.c            | 31 +++++++++++++++++++++++++++++++
 6 files changed, 151 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 3b87506ac4b3..b530e986e86c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3744,6 +3744,28 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
     return X86EMUL_EXCEPTION;
 }
 
+uint64_t hvm_get_reg(struct vcpu *v, unsigned int reg)
+{
+    ASSERT(v == current || !vcpu_runnable(v));
+
+    switch ( reg )
+    {
+    default:
+        return alternative_call(hvm_funcs.get_reg, v, reg);
+    }
+}
+
+void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    ASSERT(v == current || !vcpu_runnable(v));
+
+    switch ( reg )
+    {
+    default:
+        return alternative_vcall(hvm_funcs.set_reg, v, reg, val);
+    }
+}
+
 static bool is_sysdesc_access(const struct x86_emulate_state *state,
                               const struct x86_emulate_ctxt *ctxt)
 {
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index fae39c4b4cbd..bb6b8e560a9f 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2469,6 +2469,33 @@ static bool svm_get_pending_event(struct vcpu *v, struct x86_event *info)
     return true;
 }
 
+static uint64_t svm_get_reg(struct vcpu *v, unsigned int reg)
+{
+    struct domain *d = v->domain;
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
+               __func__, v, reg);
+        domain_crash(d);
+        return 0;
+    }
+}
+
+static void svm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    struct domain *d = v->domain;
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
+               __func__, v, reg, val);
+        domain_crash(d);
+    }
+}
+
 static struct hvm_function_table __initdata svm_function_table = {
     .name = "SVM",
     .cpu_up_prepare = svm_cpu_up_prepare,
@@ -2518,6 +2545,9 @@ static struct hvm_function_table __initdata svm_function_table = {
     .nhvm_intr_blocked = nsvm_intr_blocked,
     .nhvm_hap_walk_L1_p2m = nsvm_hap_walk_L1_p2m,
 
+    .get_reg = svm_get_reg,
+    .set_reg = svm_set_reg,
+
     .tsc_scaling = {
         .max_ratio = ~TSC_RATIO_RSVD_BITS,
     },
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index a7a0d662342a..4ff92ab4e94e 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2404,6 +2404,33 @@ static int vmtrace_reset(struct vcpu *v)
     return 0;
 }
 
+static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
+{
+    struct domain *d = v->domain;
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
+               __func__, v, reg);
+        domain_crash(d);
+        return 0;
+    }
+}
+
+static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    struct domain *d = v->domain;
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
+               __func__, v, reg, val);
+        domain_crash(d);
+    }
+}
+
 static struct hvm_function_table __initdata vmx_function_table = {
     .name = "VMX",
     .cpu_up_prepare = vmx_cpu_up_prepare,
@@ -2464,6 +2491,10 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .vmtrace_set_option = vmtrace_set_option,
     .vmtrace_get_option = vmtrace_get_option,
     .vmtrace_reset = vmtrace_reset,
+
+    .get_reg = vmx_get_reg,
+    .set_reg = vmx_set_reg,
+
     .tsc_scaling = {
         .max_ratio = VMX_TSC_MULTIPLIER_MAX,
     },
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index b26302d9e769..c8b62b514b42 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -223,6 +223,9 @@ struct hvm_function_table {
     int (*vmtrace_get_option)(struct vcpu *v, uint64_t key, uint64_t *value);
     int (*vmtrace_reset)(struct vcpu *v);
 
+    uint64_t (*get_reg)(struct vcpu *v, unsigned int reg);
+    void (*set_reg)(struct vcpu *v, unsigned int reg, uint64_t val);
+
     /*
      * Parameters and callbacks for hardware-assisted TSC scaling,
      * which are valid only when the hardware feature is available.
@@ -730,6 +733,18 @@ static inline int hvm_vmtrace_reset(struct vcpu *v)
 }
 
 /*
+ * Accessors for registers which have per-guest-type or per-vendor locations
+ * (e.g. VMCS, msr load/save lists, VMCB, VMLOAD lazy, etc).
+ *
+ * The caller is responsible for all auditing - these accessors do not fail,
+ * but do use domain_crash() for usage errors.
+ *
+ * Must cope with being called in non-current context.
+ */
+uint64_t hvm_get_reg(struct vcpu *v, unsigned int reg);
+void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val);
+
+/*
  * This must be defined as a macro instead of an inline function,
  * because it uses 'struct vcpu' and 'struct domain' which have
  * not been defined yet.
@@ -852,6 +867,15 @@ static inline int hvm_vmtrace_get_option(
     return -EOPNOTSUPP;
 }
 
+static inline uint64_t hvm_get_reg(struct vcpu *v, unsigned int reg)
+{
+    ASSERT_UNREACHABLE();
+}
+static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    ASSERT_UNREACHABLE();
+}
+
 #define is_viridian_domain(d) ((void)(d), false)
 #define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
diff --git a/xen/arch/x86/include/asm/pv/domain.h b/xen/arch/x86/include/asm/pv/domain.h
index df9716ff26a8..5fbf4043e0d9 100644
--- a/xen/arch/x86/include/asm/pv/domain.h
+++ b/xen/arch/x86/include/asm/pv/domain.h
@@ -72,6 +72,10 @@ int pv_vcpu_initialise(struct vcpu *v);
 void pv_domain_destroy(struct domain *d);
 int pv_domain_initialise(struct domain *d);
 
+/* See hvm_{get,set}_reg() for description. */
+uint64_t pv_get_reg(struct vcpu *v, unsigned int reg);
+void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val);
+
 /*
  * Bits which a PV guest can toggle in its view of cr4.  Some are loaded into
  * hardware, while some are fully emulated.
@@ -100,6 +104,15 @@ static inline int pv_vcpu_initialise(struct vcpu *v) { return -EOPNOTSUPP; }
 static inline void pv_domain_destroy(struct domain *d) {}
 static inline int pv_domain_initialise(struct domain *d) { return -EOPNOTSUPP; }
 
+static inline uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
+{
+    ASSERT_UNREACHABLE();
+}
+static inline void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    ASSERT_UNREACHABLE();
+}
+
 static inline unsigned long pv_make_cr4(const struct vcpu *v) { return ~0ul; }
 
 #endif /* CONFIG_PV */
diff --git a/xen/arch/x86/pv/emulate.c b/xen/arch/x86/pv/emulate.c
index e8bb326efdfe..ae049b60f2fc 100644
--- a/xen/arch/x86/pv/emulate.c
+++ b/xen/arch/x86/pv/emulate.c
@@ -90,6 +90,37 @@ void pv_emul_instruction_done(struct cpu_user_regs *regs, unsigned long rip)
     }
 }
 
+uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
+{
+    struct domain *d = v->domain;
+
+    ASSERT(v == current || !vcpu_runnable(v));
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
+               __func__, v, reg);
+        domain_crash(d);
+        return 0;
+    }
+}
+
+void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
+{
+    struct domain *d = v->domain;
+
+    ASSERT(v == current || !vcpu_runnable(v));
+
+    switch ( reg )
+    {
+    default:
+        printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
+               __func__, v, reg, val);
+        domain_crash(d);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
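For illustration, a sketch of the caller contract documented in hvm.h above
(hypothetical function, not part of the series; MSR_SPEC_CTRL handling only
arrives in the next patch):

    static uint64_t read_guest_spec_ctrl(struct vcpu *v)
    {
        const struct cpuid_policy *cp = v->domain->arch.cpuid;

        if ( !cp->feat.ibrsb )  /* Audit: MSR not exposed to this guest. */
            return 0;

        /* Cannot fail; an unhandled index is a Xen bug => domain_crash(). */
        return hvm_get_reg(v, MSR_SPEC_CTRL);
    }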
From patchwork Mon Jan 17 18:34:13 2022
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 12715662
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu
Subject: [PATCH v2 2/4] x86/msr: Split MSR_SPEC_CTRL handling
Date: Mon, 17 Jan 2022 18:34:13 +0000
Message-ID: <20220117183415.11150-3-andrew.cooper3@citrix.com>
In-Reply-To: <20220117183415.11150-1-andrew.cooper3@citrix.com>
References: <20220117183415.11150-1-andrew.cooper3@citrix.com>

In order to fix a VT-x bug, and to support MSR_SPEC_CTRL on AMD, move
MSR_SPEC_CTRL handling into the new {get,set}_reg() infrastructure.

Duplicate the msrs->spec_ctrl.raw accesses in the PV and VT-x paths for now.
The SVM path is currently unreachable because of the CPUID policy.

No functional change.
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu

v2:
 * Rework on top of {get,set}_reg()
---
 xen/arch/x86/hvm/vmx/vmx.c |  7 +++++++
 xen/arch/x86/msr.c         | 21 +++++++++++++++++----
 xen/arch/x86/pv/emulate.c  |  9 +++++++++
 3 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4ff92ab4e94e..c32967f190ff 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2410,6 +2410,9 @@ static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
 
     switch ( reg )
     {
+    case MSR_SPEC_CTRL:
+        return v->arch.msrs->spec_ctrl.raw;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
                __func__, v, reg);
@@ -2424,6 +2427,10 @@ static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
 
     switch ( reg )
     {
+    case MSR_SPEC_CTRL:
+        v->arch.msrs->spec_ctrl.raw = val;
+        break;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
                __func__, v, reg, val);
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index b834456c7b02..fd4012808472 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -28,6 +28,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -265,8 +266,7 @@
     case MSR_SPEC_CTRL:
         if ( !cp->feat.ibrsb )
             goto gp_fault;
-        *val = msrs->spec_ctrl.raw;
-        break;
+        goto get_reg;
 
     case MSR_INTEL_PLATFORM_INFO:
         *val = mp->platform_info.raw;
@@ -424,6 +424,13 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
 
     return ret;
 
+ get_reg: /* Delegate register access to per-vm-type logic. */
+    if ( is_pv_domain(d) )
+        *val = pv_get_reg(v, msr);
+    else
+        *val = hvm_get_reg(v, msr);
+    return X86EMUL_OKAY;
+
  gp_fault:
     return X86EMUL_EXCEPTION;
 }
@@ -514,8 +521,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         if ( val & rsvd )
             goto gp_fault; /* Rsvd bit set? */
 
-        msrs->spec_ctrl.raw = val;
-        break;
+        goto set_reg;
 
     case MSR_PRED_CMD:
         if ( !cp->feat.ibrsb && !cp->extd.ibpb )
@@ -663,6 +669,13 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 
     return ret;
 
+ set_reg: /* Delegate register access to per-vm-type logic. */
+    if ( is_pv_domain(d) )
+        pv_set_reg(v, msr, val);
+    else
+        hvm_set_reg(v, msr, val);
+    return X86EMUL_OKAY;
+
  gp_fault:
     return X86EMUL_EXCEPTION;
 }
diff --git a/xen/arch/x86/pv/emulate.c b/xen/arch/x86/pv/emulate.c
index ae049b60f2fc..0a7907ec5e84 100644
--- a/xen/arch/x86/pv/emulate.c
+++ b/xen/arch/x86/pv/emulate.c
@@ -92,12 +92,16 @@ void pv_emul_instruction_done(struct cpu_user_regs *regs, unsigned long rip)
 
 uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
 {
+    const struct vcpu_msrs *msrs = v->arch.msrs;
     struct domain *d = v->domain;
 
     ASSERT(v == current || !vcpu_runnable(v));
 
     switch ( reg )
     {
+    case MSR_SPEC_CTRL:
+        return msrs->spec_ctrl.raw;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
                __func__, v, reg);
@@ -108,12 +112,17 @@ uint64_t pv_get_reg(struct vcpu *v, unsigned int reg)
 
 void pv_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
 {
+    struct vcpu_msrs *msrs = v->arch.msrs;
     struct domain *d = v->domain;
 
     ASSERT(v == current || !vcpu_runnable(v));
 
     switch ( reg )
    {
+    case MSR_SPEC_CTRL:
+        msrs->spec_ctrl.raw = val;
+        break;
+
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x, 0x%016"PRIx64") Bad register\n",
                __func__, v, reg, val);
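For illustration, the resulting MSR_SPEC_CTRL read path, pulled together from
the hunks above (a sketch; error handling elided, and the write path is
symmetric via set_reg):

    /*
     * guest_rdmsr(v, MSR_SPEC_CTRL, &val)
     *   -> goto get_reg                    (after the cp->feat.ibrsb audit)
     *      PV:  pv_get_reg(v, msr)         -> msrs->spec_ctrl.raw
     *      HVM: hvm_get_reg(v, msr)
     *           -> alternative_call(hvm_funcs.get_reg, ...)
     *           -> vmx_get_reg(v, msr)     -> msrs->spec_ctrl.raw
     */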
From patchwork Mon Jan 17 18:34:14 2022
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 12715665
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian
Subject: [PATCH v2 3/4] x86/spec-ctrl: Drop SPEC_CTRL_{ENTRY_FROM,EXIT_TO}_HVM
Date: Mon, 17 Jan 2022 18:34:14 +0000
Message-ID: <20220117183415.11150-4-andrew.cooper3@citrix.com>
In-Reply-To: <20220117183415.11150-1-andrew.cooper3@citrix.com>
References: <20220117183415.11150-1-andrew.cooper3@citrix.com>

These were written before Spectre/Meltdown went public, and there was large
uncertainty about how the protections would evolve.  As it turns out, they're
very specific to Intel hardware, and not very suitable for AMD.

Drop the macros, open-coding the relevant subset of functionality, and
leaving grep-fodder to locate the logic.  No change at all for VT-x.

For AMD, the only relevant piece of functionality is DO_OVERWRITE_RSB,
although we will soon be adding (different) logic to handle MSR_SPEC_CTRL.
This brings a marginal improvement: it removes an unconditional pile of
long-nops from the vmentry/exit path.

Signed-off-by: Andrew Cooper
Reviewed-by: Roger Pau Monné
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
CC: Jun Nakajima
CC: Kevin Tian

v2:
 * Tweak text.
---
 xen/arch/x86/hvm/svm/entry.S             |  5 +++--
 xen/arch/x86/hvm/vmx/entry.S             |  8 ++++++--
 xen/arch/x86/include/asm/spec_ctrl_asm.h | 19 ++++---------------
 3 files changed, 13 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index e208a4b32ae7..276215d36aff 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -59,7 +59,7 @@ __UNLIKELY_END(nsvm_hap)
         mov VCPUMSR_spec_ctrl_raw(%rax), %eax
 
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
-        SPEC_CTRL_EXIT_TO_HVM   /* Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
+        /* SPEC_CTRL_EXIT_TO_SVM   (nothing currently) */
 
         pop  %r15
         pop  %r14
@@ -86,7 +86,8 @@ __UNLIKELY_END(nsvm_hap)
 
         GET_CURRENT(bx)
 
-        SPEC_CTRL_ENTRY_FROM_HVM    /* Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
+        /* SPEC_CTRL_ENTRY_FROM_SVM    Req: b=curr %rsp=regs/cpuinfo, Clob: ac */
+        ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
         stgi
diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
index 27c8c5ca4943..30139ae58e9d 100644
--- a/xen/arch/x86/hvm/vmx/entry.S
+++ b/xen/arch/x86/hvm/vmx/entry.S
@@ -33,7 +33,9 @@ ENTRY(vmx_asm_vmexit_handler)
         movb $1,VCPU_vmx_launched(%rbx)
         mov  %rax,VCPU_hvm_guest_cr2(%rbx)
 
-        SPEC_CTRL_ENTRY_FROM_HVM    /* Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
+        /* SPEC_CTRL_ENTRY_FROM_VMX    Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
+        ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
+        ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
         /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
@@ -80,7 +82,9 @@ UNLIKELY_END(realmode)
         mov VCPUMSR_spec_ctrl_raw(%rax), %eax
 
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
-        SPEC_CTRL_EXIT_TO_HVM   /* Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
+        /* SPEC_CTRL_EXIT_TO_VMX   Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
+        ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
+        ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)), X86_FEATURE_SC_VERW_HVM
 
         mov  VCPU_hvm_guest_cr2(%rbx),%rax
diff --git a/xen/arch/x86/include/asm/spec_ctrl_asm.h b/xen/arch/x86/include/asm/spec_ctrl_asm.h
index cb34299a865b..2b3f123cb501 100644
--- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
+++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
@@ -68,14 +68,16 @@
  *
  * The following ASM fragments implement this algorithm.  See their local
  * comments for further details.
- *  - SPEC_CTRL_ENTRY_FROM_HVM
  *  - SPEC_CTRL_ENTRY_FROM_PV
  *  - SPEC_CTRL_ENTRY_FROM_INTR
  *  - SPEC_CTRL_ENTRY_FROM_INTR_IST
  *  - SPEC_CTRL_EXIT_TO_XEN_IST
  *  - SPEC_CTRL_EXIT_TO_XEN
  *  - SPEC_CTRL_EXIT_TO_PV
- *  - SPEC_CTRL_EXIT_TO_HVM
+ *
+ * Additionally, the following grep-fodder exists to find the HVM logic.
+ *  - SPEC_CTRL_ENTRY_FROM_{SVM,VMX}
+ *  - SPEC_CTRL_EXIT_TO_{SVM,VMX}
  */
 
 .macro DO_OVERWRITE_RSB tmp=rax
@@ -225,12 +227,6 @@
     wrmsr
 .endm
 
-/* Use after a VMEXIT from an HVM guest. */
-#define SPEC_CTRL_ENTRY_FROM_HVM                                        \
-    ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM;           \
-    ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM,                        \
-        X86_FEATURE_SC_MSR_HVM
-
 /* Use after an entry from PV context (syscall/sysenter/int80/int82/etc). */
 #define SPEC_CTRL_ENTRY_FROM_PV                                         \
     ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_PV;            \
@@ -255,13 +251,6 @@
     ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)),           \
         X86_FEATURE_SC_VERW_PV
 
-/* Use when exiting to HVM guest context. */
-#define SPEC_CTRL_EXIT_TO_HVM                                           \
-    ALTERNATIVE "",                                                     \
-        DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM;             \
-    ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)),           \
-        X86_FEATURE_SC_VERW_HVM
-
 /*
  * Use in IST interrupt/exception context.  May interrupt Xen or PV context.
  * Fine grain control of SCF_ist_wrmsr is needed for safety in the S3 resume
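For readers unfamiliar with the patching mechanism, a simplified sketch of
what one of the open-coded sites above does (details of Xen's alternatives
framework elided):

    /*
     * ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
     *
     * assembles to a run of nops big enough to hold DO_OVERWRITE_RSB, and
     * gets the real sequence patched in at boot iff the CPU has
     * X86_FEATURE_SC_RSB_HVM.  Deleting the never-enabled MSR_SPEC_CTRL
     * alternative from the SVM path is what removes the "unconditional pile
     * of long-nops" mentioned above.
     */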
From patchwork Mon Jan 17 18:34:15 2022
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 12715663
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian
Subject: [PATCH v2 4/4] x86/spec-ctrl: Fix NMI race condition with VT-x MSR_SPEC_CTRL handling
Date: Mon, 17 Jan 2022 18:34:15 +0000
Message-ID: <20220117183415.11150-5-andrew.cooper3@citrix.com>
In-Reply-To: <20220117183415.11150-1-andrew.cooper3@citrix.com>
References: <20220117183415.11150-1-andrew.cooper3@citrix.com>

The logic was based on a mistaken understanding of how NMI blocking on vmexit
works.  NMIs are only blocked for EXIT_REASON_NMI, and not for general exits.
Therefore, an NMI can in general hit early in the vmx_asm_vmexit_handler
path, and the guest's value will be clobbered before it is saved.

Switch to using MSR load/save lists.  This causes the guest value to be saved
atomically with respect to NMIs/MCEs/etc.

First, update vmx_cpuid_policy_changed() to configure the load/save lists at
the same time as configuring the intercepts.  This function is always used in
remote context, so extend the vmx_vmcs_{enter,exit}() block to cover the
whole function, rather than having multiple remote acquisitions of the same
VMCS.

Both of vmx_{add,del}_guest_msr() can fail.
The -ESRCH delete case is fine, but all others are fatal to the running of
the VM, so handle them using domain_crash() - this path is only used during
domain construction anyway.

Second, update vmx_{get,set}_reg() to use the MSR load/save lists rather than
vcpu_msrs, and update the vcpu_msrs comment to describe the new state
location.

Finally, adjust the entry/exit asm.  Because the guest value is saved and
loaded atomically, we do not need to manually load the guest value, nor do we
need to enable SCF_use_shadow.  This lets us remove the use of
DO_SPEC_CTRL_EXIT_TO_GUEST.  Additionally, SPEC_CTRL_ENTRY_FROM_PV gets
removed too, because on an early entry failure, we're no longer in the guest
MSR_SPEC_CTRL context needing to switch back to Xen's context.

The only action remaining is to load Xen's MSR_SPEC_CTRL value on vmexit.  We
could in principle use the host MSR list, but that is expected to complicate
future work.  Delete DO_SPEC_CTRL_ENTRY_FROM_HVM entirely, and use a shorter
code sequence to simply reload Xen's setting from the top-of-stack block.

Adjust the comment at the top of spec_ctrl_asm.h in light of this bugfix.

Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
CC: Jun Nakajima
CC: Kevin Tian

Needs backporting as far as people can tolerate.

If the entry/exit logic were in C, I'd ASSERT() that shadow tracking is off,
but this is awkward to arrange in asm.

v2:
 * Rework on top of {get,set}_reg() infrastructure.
 * Future-proof against other vmx_del_guest_msr() failures.
 * Rewrite the commit message to explain things better.
---
 xen/arch/x86/hvm/vmx/entry.S             | 21 +++++++++------
 xen/arch/x86/hvm/vmx/vmx.c               | 44 +++++++++++++++++++++++++++++---
 xen/arch/x86/include/asm/msr.h           | 10 +++++++-
 xen/arch/x86/include/asm/spec_ctrl_asm.h | 32 +++--------------------
 4 files changed, 67 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
index 30139ae58e9d..ce7b48558ee1 100644
--- a/xen/arch/x86/hvm/vmx/entry.S
+++ b/xen/arch/x86/hvm/vmx/entry.S
@@ -35,7 +35,14 @@ ENTRY(vmx_asm_vmexit_handler)
 
         /* SPEC_CTRL_ENTRY_FROM_VMX    Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         ALTERNATIVE "", DO_OVERWRITE_RSB, X86_FEATURE_SC_RSB_HVM
-        ALTERNATIVE "", DO_SPEC_CTRL_ENTRY_FROM_HVM, X86_FEATURE_SC_MSR_HVM
+
+        .macro restore_spec_ctrl
+            mov    $MSR_SPEC_CTRL, %ecx
+            movzbl CPUINFO_xen_spec_ctrl(%rsp), %eax
+            xor    %edx, %edx
+            wrmsr
+        .endm
+        ALTERNATIVE "", restore_spec_ctrl, X86_FEATURE_SC_MSR_HVM
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
         /* Hardware clears MSR_DEBUGCTL on VMExit.  Reinstate it if debugging Xen. */
@@ -82,8 +89,7 @@ UNLIKELY_END(realmode)
         mov VCPUMSR_spec_ctrl_raw(%rax), %eax
 
         /* WARNING! `ret`, `call *`, `jmp *` not safe beyond this point. */
-        /* SPEC_CTRL_EXIT_TO_VMX   Req: a=spec_ctrl %rsp=regs/cpuinfo, Clob: cd */
-        ALTERNATIVE "", DO_SPEC_CTRL_EXIT_TO_GUEST, X86_FEATURE_SC_MSR_HVM
+        /* SPEC_CTRL_EXIT_TO_VMX   Req: %rsp=regs/cpuinfo Clob: */
         ALTERNATIVE "", __stringify(verw CPUINFO_verw_sel(%rsp)), X86_FEATURE_SC_VERW_HVM
 
         mov  VCPU_hvm_guest_cr2(%rbx),%rax
@@ -119,12 +125,11 @@ UNLIKELY_END(realmode)
         SAVE_ALL
 
         /*
-         * PV variant needed here as no guest code has executed (so
-         * MSR_SPEC_CTRL can't have changed value), and NMIs/MCEs are liable
-         * to hit (in which case the HVM variant might corrupt things).
+         * SPEC_CTRL_ENTRY notes
+         *
+         * If we end up here, no guest code has executed.  We still have Xen's
+         * choice of MSR_SPEC_CTRL in context, and the RSB is safe.
          */
-        SPEC_CTRL_ENTRY_FROM_PV /* Req: %rsp=regs/cpuinfo Clob: acd */
-        /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
         call vmx_vmentry_failure
         jmp  .Lvmx_process_softirqs
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index c32967f190ff..69e38d0fa8f9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -592,6 +592,7 @@ void vmx_update_exception_bitmap(struct vcpu *v)
 static void vmx_cpuid_policy_changed(struct vcpu *v)
 {
     const struct cpuid_policy *cp = v->domain->arch.cpuid;
+    int rc = 0;
 
     if ( opt_hvm_fep ||
          (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
@@ -601,17 +602,29 @@ static void vmx_cpuid_policy_changed(struct vcpu *v)
 
     vmx_vmcs_enter(v);
     vmx_update_exception_bitmap(v);
-    vmx_vmcs_exit(v);
 
     /*
      * We can safely pass MSR_SPEC_CTRL through to the guest, even if STIBP
      * isn't enumerated in hardware, as SPEC_CTRL_STIBP is ignored.
      */
     if ( cp->feat.ibrsb )
+    {
         vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+
+        rc = vmx_add_guest_msr(v, MSR_SPEC_CTRL, 0);
+        if ( rc )
+            goto out;
+    }
     else
+    {
         vmx_set_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
 
+        rc = vmx_del_msr(v, MSR_SPEC_CTRL, VMX_MSR_GUEST);
+        if ( rc && rc != -ESRCH )
+            goto out;
+        rc = 0; /* Tolerate -ESRCH */
+    }
+
     /* MSR_PRED_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.ibrsb || cp->extd.ibpb )
         vmx_clear_msr_intercept(v, MSR_PRED_CMD, VMX_MSR_RW);
@@ -623,6 +636,15 @@ static void vmx_cpuid_policy_changed(struct vcpu *v)
         vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
     else
         vmx_set_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+
+ out:
+    vmx_vmcs_exit(v);
+
+    if ( rc )
+    {
+        printk(XENLOG_G_ERR "%pv MSR list error: %d", v, rc);
+        domain_crash(v->domain);
+    }
 }
 
 int vmx_guest_x86_mode(struct vcpu *v)
@@ -2407,11 +2429,20 @@ static int vmtrace_reset(struct vcpu *v)
 static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
 {
     struct domain *d = v->domain;
+    uint64_t val = 0;
+    int rc;
 
     switch ( reg )
     {
     case MSR_SPEC_CTRL:
-        return v->arch.msrs->spec_ctrl.raw;
+        rc = vmx_read_guest_msr(v, reg, &val);
+        if ( rc )
+        {
+            printk(XENLOG_G_ERR "%s(%pv, 0x%08x) MSR list error: %d\n",
+                   __func__, v, reg, rc);
+            domain_crash(d);
+        }
+        return val;
 
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
@@ -2424,11 +2455,18 @@ static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
 static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
 {
     struct domain *d = v->domain;
+    int rc;
 
     switch ( reg )
     {
     case MSR_SPEC_CTRL:
-        v->arch.msrs->spec_ctrl.raw = val;
+        rc = vmx_write_guest_msr(v, reg, val);
+        if ( rc )
+        {
+            printk(XENLOG_G_ERR "%s(%pv, 0x%08x) MSR list error: %d\n",
+                   __func__, v, reg, rc);
+            domain_crash(d);
+        }
         break;
 
     default:
diff --git a/xen/arch/x86/include/asm/msr.h b/xen/arch/x86/include/asm/msr.h
index 1d3eca9063a2..10039c2d227b 100644
--- a/xen/arch/x86/include/asm/msr.h
+++ b/xen/arch/x86/include/asm/msr.h
@@ -287,7 +287,15 @@ extern struct msr_policy raw_msr_policy,
 /* Container object for per-vCPU MSRs */
 struct vcpu_msrs
 {
-    /* 0x00000048 - MSR_SPEC_CTRL */
+    /*
+     * 0x00000048 - MSR_SPEC_CTRL
+     *
+     * For PV guests, this holds the guest kernel value.  It is accessed on
+     * every entry/exit path.
+     *
+     * For VT-x guests, the guest value is held in the MSR guest load/save
+     * list.
+     */
     struct {
         uint32_t raw;
     } spec_ctrl;
diff --git a/xen/arch/x86/include/asm/spec_ctrl_asm.h b/xen/arch/x86/include/asm/spec_ctrl_asm.h
index 2b3f123cb501..bf82528a12ae 100644
--- a/xen/arch/x86/include/asm/spec_ctrl_asm.h
+++ b/xen/arch/x86/include/asm/spec_ctrl_asm.h
@@ -42,9 +42,10 @@
  *     path, or late in the exit path after restoring the guest value.  This
  *     will corrupt the guest value.
  *
- * Factor 1 is dealt with by relying on NMIs/MCEs being blocked immediately
- * after VMEXIT.  The VMEXIT-specific code reads MSR_SPEC_CTRL and updates
- * current before loading Xen's MSR_SPEC_CTRL setting.
+ * Factor 1 is dealt with:
+ *   - On VMX by using MSR load/save lists to have vmentry/exit atomically
+ *     load/save the guest value.  Xen's value is loaded in regular code, and
+ *     there is no need to use the shadow logic (below).
 *
 * Factor 2 is harder.  We maintain a shadow_spec_ctrl value, and a use_shadow
 * boolean in the per cpu spec_ctrl_flags.  The synchronous use is:
@@ -128,31 +129,6 @@
 #endif
 .endm
 
-.macro DO_SPEC_CTRL_ENTRY_FROM_HVM
-/*
- * Requires %rbx=current, %rsp=regs/cpuinfo
- * Clobbers %rax, %rcx, %rdx
- *
- * The common case is that a guest has direct access to MSR_SPEC_CTRL, at
- * which point we need to save the guest value before setting IBRS for Xen.
- * Unilaterally saving the guest value is shorter and faster than checking.
- */
-    mov $MSR_SPEC_CTRL, %ecx
-    rdmsr
-
-    /* Stash the value from hardware. */
-    mov VCPU_arch_msrs(%rbx), %rdx
-    mov %eax, VCPUMSR_spec_ctrl_raw(%rdx)
-    xor %edx, %edx
-
-    /* Clear SPEC_CTRL shadowing *before* loading Xen's value. */
-    andb $~SCF_use_shadow, CPUINFO_spec_ctrl_flags(%rsp)
-
-    /* Load Xen's intended value. */
-    movzbl CPUINFO_xen_spec_ctrl(%rsp), %eax
-    wrmsr
-.endm
-
 .macro DO_SPEC_CTRL_ENTRY maybexen:req
 /*
  * Requires %rsp=regs (also cpuinfo if !maybexen)
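For reference, the guest's SPEC_CTRL value now lives in a slot of the VMX MSR
load/save lists rather than in struct vcpu_msrs.  Per the Intel SDM, an entry
in those lists has the following layout (a sketch; field names after Xen's
struct vmx_msr_entry, quoted from memory rather than from this series):

    struct vmx_msr_entry {
        uint32_t index;    /* MSR index, e.g. 0x00000048 for MSR_SPEC_CTRL. */
        uint32_t mbz;      /* Reserved; must be zero. */
        uint64_t data;     /* Saved on vmexit / loaded on vmentry,
                            * atomically with respect to NMIs/MCEs. */
    };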
From patchwork Mon Jan 17 19:25:33 2022
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 12715669
From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian
Subject: [PATCH v2 5/4] x86/hvm: Drop hvm_{get,set}_guest_bndcfgs() and use {get,set}_regs() instead
Date: Mon, 17 Jan 2022 19:25:33 +0000
Message-ID: <20220117192533.6048-1-andrew.cooper3@citrix.com>
In-Reply-To: <20220117183415.11150-1-andrew.cooper3@citrix.com>
References: <20220117183415.11150-1-andrew.cooper3@citrix.com>

hvm_{get,set}_guest_bndcfgs() are thin wrappers around accessing MSR_BNDCFGS.

MPX was implemented on Skylake uarch CPUs and dropped in subsequent CPUs, and
is disabled by default in Xen VMs.

It would be nice to move all the logic into vmx_msr_{read,write}_intercept(),
but the common HVM migration code uses guest_{rd,wr}msr().  Therefore, use
{get,set}_regs() to reduce the quantity of "common" HVM code.

In lieu of having hvm_set_guest_bndcfgs() split out, use some #ifdef
CONFIG_HVM in guest_wrmsr().  In vmx_{get,set}_regs(), split the switch
statements into two, depending on whether they require remote VMCS
acquisition or not.

Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
CC: Jun Nakajima
CC: Kevin Tian

This counteracts the hvm_funcs size increase from {get,set}_regs(), and shows
how to use the new functionality to clean the HVM logic up.
---
 xen/arch/x86/hvm/hvm.c             | 37 --------------------------
 xen/arch/x86/hvm/vmx/vmx.c         | 54 ++++++++++++++++++--------------------
 xen/arch/x86/include/asm/hvm/hvm.h | 12 ---------
 xen/arch/x86/msr.c                 | 34 +++++++++++++++++++-----
 4 files changed, 53 insertions(+), 84 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b530e986e86c..d7d3299b431e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -324,43 +324,6 @@ int hvm_set_guest_pat(struct vcpu *v, uint64_t guest_pat)
     return 1;
 }
 
-bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
-{
-    if ( !hvm_funcs.set_guest_bndcfgs ||
-         !is_canonical_address(val) ||
-         (val & IA32_BNDCFGS_RESERVED) )
-        return false;
-
-    /*
-     * While MPX instructions are supposed to be gated on XCR0.BND*, let's
-     * nevertheless force the relevant XCR0 bits on when the feature is being
-     * enabled in BNDCFGS.
-     */
-    if ( (val & IA32_BNDCFGS_ENABLE) &&
-         !(v->arch.xcr0_accum & (X86_XCR0_BNDREGS | X86_XCR0_BNDCSR)) )
-    {
-        uint64_t xcr0 = get_xcr0();
-        int rc;
-
-        if ( v != current )
-            return false;
-
-        rc = handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
-                           xcr0 | X86_XCR0_BNDREGS | X86_XCR0_BNDCSR);
-
-        if ( rc )
-        {
-            HVM_DBG_LOG(DBG_LEVEL_1, "Failed to force XCR0.BND*: %d", rc);
-            return false;
-        }
-
-        if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK, xcr0) )
-            /* nothing, best effort only */;
-    }
-
-    return alternative_call(hvm_funcs.set_guest_bndcfgs, v, val);
-}
-
 /*
  * Get the ratio to scale host TSC frequency to gtsc_khz. zero will be
  * returned if TSC scaling is unavailable or ratio cannot be handled
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 69e38d0fa8f9..8c55e56cbddb 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1212,28 +1212,6 @@ static int vmx_get_guest_pat(struct vcpu *v, u64 *gpat)
     return 1;
 }
 
-static bool vmx_set_guest_bndcfgs(struct vcpu *v, u64 val)
-{
-    ASSERT(cpu_has_mpx && cpu_has_vmx_mpx);
-
-    vmx_vmcs_enter(v);
-    __vmwrite(GUEST_BNDCFGS, val);
-    vmx_vmcs_exit(v);
-
-    return true;
-}
-
-static bool vmx_get_guest_bndcfgs(struct vcpu *v, u64 *val)
-{
-    ASSERT(cpu_has_mpx && cpu_has_vmx_mpx);
-
-    vmx_vmcs_enter(v);
-    __vmread(GUEST_BNDCFGS, val);
-    vmx_vmcs_exit(v);
-
-    return true;
-}
-
 static void vmx_handle_cd(struct vcpu *v, unsigned long value)
 {
     if ( !paging_mode_hap(v->domain) )
@@ -2432,6 +2410,7 @@ static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
     uint64_t val = 0;
     int rc;
 
+    /* Logic which doesn't require remote VMCS acquisition. */
     switch ( reg )
     {
     case MSR_SPEC_CTRL:
@@ -2443,13 +2422,25 @@ static uint64_t vmx_get_reg(struct vcpu *v, unsigned int reg)
             domain_crash(d);
         }
         return val;
+    }
+
+    /* Logic which maybe requires remote VMCS acquisition. */
+    vmx_vmcs_enter(v);
+    switch ( reg )
+    {
+    case MSR_IA32_BNDCFGS:
+        __vmread(GUEST_BNDCFGS, &val);
+        break;
 
     default:
         printk(XENLOG_G_ERR "%s(%pv, 0x%08x) Bad register\n",
                __func__, v, reg);
         domain_crash(d);
-        return 0;
+        break;
     }
+    vmx_vmcs_exit(v);
+
+    return val;
 }
 
 static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
@@ -2457,6 +2448,7 @@ static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
     struct domain *d = v->domain;
     int rc;
 
+    /* Logic which doesn't require remote VMCS acquisition. */
     switch ( reg )
     {
     case MSR_SPEC_CTRL:
@@ -2467,6 +2459,15 @@ static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
                 __func__, v, reg, rc);
             domain_crash(d);
         }
+        return;
+    }
+
+    /* Logic which maybe requires remote VMCS acquisition. */
+    vmx_vmcs_enter(v);
+    switch ( reg )
+    {
+    case MSR_IA32_BNDCFGS:
+        __vmwrite(GUEST_BNDCFGS, val);
         break;
 
     default:
@@ -2474,6 +2475,7 @@ static void vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
                __func__, v, reg, val);
         domain_crash(d);
     }
+    vmx_vmcs_exit(v);
 }
 
 static struct hvm_function_table __initdata vmx_function_table = {
@@ -2796,12 +2798,6 @@ const struct hvm_function_table * __init start_vmx(void)
         vmx_function_table.tsc_scaling.setup = vmx_setup_tsc_scaling;
     }
 
-    if ( cpu_has_mpx && cpu_has_vmx_mpx )
-    {
-        vmx_function_table.set_guest_bndcfgs = vmx_set_guest_bndcfgs;
-        vmx_function_table.get_guest_bndcfgs = vmx_get_guest_bndcfgs;
-    }
-
     lbr_tsx_fixup_check();
     ler_to_fixup_check();
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index c8b62b514b42..7bb7d0b77d32 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -148,9 +148,6 @@ struct hvm_function_table {
     int (*get_guest_pat)(struct vcpu *v, u64 *);
     int (*set_guest_pat)(struct vcpu *v, u64);
 
-    bool (*get_guest_bndcfgs)(struct vcpu *v, u64 *);
-    bool (*set_guest_bndcfgs)(struct vcpu *v, u64);
-
     void (*set_tsc_offset)(struct vcpu *v, u64 offset, u64 at_tsc);
 
     void (*inject_event)(const struct x86_event *event);
@@ -291,8 +288,6 @@ void hvm_set_segment_register(struct vcpu *v, enum x86_segment seg,
 
 void hvm_set_info_guest(struct vcpu *v);
 
-bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val);
-
 int hvm_vmexit_cpuid(struct cpu_user_regs *regs, unsigned int inst_len);
 void hvm_migrate_timers(struct vcpu *v);
 void hvm_do_resume(struct vcpu *v);
@@ -479,12 +474,6 @@ static inline unsigned long hvm_get_shadow_gs_base(struct vcpu *v)
     return alternative_call(hvm_funcs.get_shadow_gs_base, v);
 }
 
-static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
-{
-    return hvm_funcs.get_guest_bndcfgs &&
-           alternative_call(hvm_funcs.get_guest_bndcfgs, v, val);
-}
-
 #define has_hvm_params(d) \
     ((d)->arch.hvm.params != NULL)
 
@@ -768,7 +757,6 @@ int hvm_guest_x86_mode(struct vcpu *v);
 unsigned long hvm_get_shadow_gs_base(struct vcpu *v);
 void hvm_cpuid_policy_changed(struct vcpu *v);
 void hvm_set_tsc_offset(struct vcpu *v, uint64_t offset, uint64_t at_tsc);
-bool hvm_get_guest_bndcfgs(struct vcpu *v, uint64_t *val);
 
 /* End of prototype list */
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index fd4012808472..9e22404eb24a 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -30,6 +30,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -323,10 +324,9 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         break;
 
     case MSR_IA32_BNDCFGS:
-        if ( !cp->feat.mpx || !is_hvm_domain(d) ||
-             !hvm_get_guest_bndcfgs(v, val) )
+        if ( !cp->feat.mpx ) /* Implies Intel HVM only */
             goto gp_fault;
-        break;
+        goto get_reg;
 
     case MSR_IA32_XSS:
         if ( !cp->xstate.xsaves )
@@ -594,11 +594,33 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         ret = guest_wrmsr_x2apic(v, msr, val);
         break;
 
+#ifdef CONFIG_HVM
     case MSR_IA32_BNDCFGS:
-        if ( !cp->feat.mpx || !is_hvm_domain(d) ||
-             !hvm_set_guest_bndcfgs(v, val) )
+        if ( !cp->feat.mpx || /* Implies Intel HVM only */
+             !is_canonical_address(val) || (val & IA32_BNDCFGS_RESERVED) )
             goto gp_fault;
-        break;
+
+        /*
+         * While MPX instructions are supposed to be gated on XCR0.BND*, let's
+         * nevertheless force the relevant XCR0 bits on when the feature is
+         * being enabled in BNDCFGS.
+         */
+        if ( (val & IA32_BNDCFGS_ENABLE) &&
+             !(v->arch.xcr0_accum & (X86_XCR0_BNDREGS | X86_XCR0_BNDCSR)) )
+        {
+            uint64_t xcr0 = get_xcr0();
+
+            if ( v != current ||
+                 handle_xsetbv(XCR_XFEATURE_ENABLED_MASK,
+                               xcr0 | X86_XCR0_BNDREGS | X86_XCR0_BNDCSR) )
+                goto gp_fault;
+
+            if ( handle_xsetbv(XCR_XFEATURE_ENABLED_MASK, xcr0) )
+                /* nothing, best effort only */;
+        }
+
+        goto set_reg;
+#endif /* CONFIG_HVM */
 
     case MSR_IA32_XSS:
         if ( !cp->xstate.xsaves )