From patchwork Mon Feb 27 07:56:49 2023
X-Patchwork-Submitter: Xenia Ragiadakou
X-Patchwork-Id: 13152912
From: Xenia Ragiadakou
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian
Subject: [PATCH 1/4] x86/vpmu: rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise
Date: Mon, 27 Feb 2023 09:56:49 +0200
Message-Id: <20230227075652.3782973-2-burzalodowa@gmail.com>
In-Reply-To: <20230227075652.3782973-1-burzalodowa@gmail.com>
References: <20230227075652.3782973-1-burzalodowa@gmail.com>

PMU virtualization does not depend on hardware virtualization support.
Rename {svm,vmx}_vpmu_initialise to {amd,core2}_vpmu_initialise, because
the {svm,vmx} prefix is misleading. Take the opportunity to remove the
equally misleading comment stating that the vPMU is specific to HVM
guests, and to correct the file name in the header comment.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
Acked-by: Jan Beulich
---
 xen/arch/x86/cpu/vpmu_amd.c   | 6 +++---
 xen/arch/x86/cpu/vpmu_intel.c | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 58794a16f0..9df739aa3f 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -1,5 +1,5 @@
 /*
- * vpmu.c: PMU virtualization for HVM domain.
+ * vpmu_amd.c: AMD specific PMU virtualization.
  *
  * Copyright (c) 2010, Advanced Micro Devices, Inc.
  * Parts of this code are Copyright (c) 2007, Intel Corporation
@@ -480,7 +480,7 @@ static void cf_check amd_vpmu_dump(const struct vcpu *v)
     }
 }
 
-static int cf_check svm_vpmu_initialise(struct vcpu *v)
+static int cf_check amd_vpmu_initialise(struct vcpu *v)
 {
     struct xen_pmu_amd_ctxt *ctxt;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
@@ -527,7 +527,7 @@ static int cf_check amd_allocate_context(struct vcpu *v)
 #endif
 
 static const struct arch_vpmu_ops __initconst_cf_clobber amd_vpmu_ops = {
-    .initialise = svm_vpmu_initialise,
+    .initialise = amd_vpmu_initialise,
     .do_wrmsr = amd_vpmu_do_wrmsr,
     .do_rdmsr = amd_vpmu_do_rdmsr,
     .do_interrupt = amd_vpmu_do_interrupt,
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index a8df52579d..bcfa187a14 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -1,5 +1,5 @@
 /*
- * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
+ * vpmu_intel.c: CORE 2 specific PMU virtualization.
  *
  * Copyright (c) 2007, Intel Corporation.
  *
@@ -833,7 +833,7 @@ static void cf_check core2_vpmu_destroy(struct vcpu *v)
     vpmu_clear(vpmu);
 }
 
-static int cf_check vmx_vpmu_initialise(struct vcpu *v)
+static int cf_check core2_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     u64 msr_content;
@@ -898,7 +898,7 @@ static int cf_check vmx_vpmu_initialise(struct vcpu *v)
 }
 
 static const struct arch_vpmu_ops __initconst_cf_clobber core2_vpmu_ops = {
-    .initialise = vmx_vpmu_initialise,
+    .initialise = core2_vpmu_initialise,
    .do_wrmsr = core2_vpmu_do_wrmsr,
    .do_rdmsr = core2_vpmu_do_rdmsr,
    .do_interrupt = core2_vpmu_do_interrupt,

From patchwork Mon Feb 27 07:56:50 2023
X-Patchwork-Submitter: Xenia Ragiadakou
X-Patchwork-Id: 13152913
From: Xenia Ragiadakou
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [PATCH 2/4] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()
Date: Mon, 27 Feb 2023 09:56:50 +0200
Message-Id: <20230227075652.3782973-3-burzalodowa@gmail.com>
In-Reply-To: <20230227075652.3782973-1-burzalodowa@gmail.com>
References: <20230227075652.3782973-1-burzalodowa@gmail.com>

This change renders the control interface for MSR intercepts identical
between the SVM and VMX code, so that MSR intercepts can be controlled
from common code through an hvm_funcs callback.
Create two new functions:

- svm_set_msr_intercept(), which enables interception of read/write
  accesses to the given MSR, by setting the corresponding read/write
  bits in the MSRPM based on the flags
- svm_clear_msr_intercept(), which disables interception of read/write
  accesses to the given MSR, by clearing the corresponding read/write
  bits in the MSRPM based on the flags

More specifically:

- if the flag is MSR_R, the functions {set,clear} the MSRPM bit that
  controls read access to the MSR
- if the flag is MSR_W, the functions {set,clear} the MSRPM bit that
  controls write access to the MSR
- if the flag is MSR_RW, the functions {set,clear} both MSRPM bits

(A standalone sketch of this two-bits-per-MSR layout follows the patch.)

Place the definitions of the flags in asm/hvm/hvm.h, because they are
intended to be used by the VMX code as well.

Remove svm_intercept_msr() and the MSR_INTERCEPT_* definitions, and use
the new functions and flags instead.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
---
 xen/arch/x86/cpu/vpmu_amd.c             |  9 +--
 xen/arch/x86/hvm/svm/svm.c              | 82 ++++++++++++++++---------
 xen/arch/x86/include/asm/hvm/hvm.h      |  4 ++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h | 13 ++--
 4 files changed, 68 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 9df739aa3f..ed6706959e 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -165,8 +165,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+        svm_clear_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_W);
+        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -179,8 +180,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index a43bcf2e92..eb144272f4 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -288,23 +288,36 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
     return msr_bit;
 }
 
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int flags)
+void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
 {
-    unsigned long *msr_bit;
-    const struct domain *d = v->domain;
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
+
+    if ( msr_bit == NULL )
+        return;
 
-    msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
-    BUG_ON(msr_bit == NULL);
     msr &= 0x1fff;
 
-    if ( flags & MSR_INTERCEPT_READ )
+    if ( flags & MSR_R )
         __set_bit(msr * 2, msr_bit);
-    else if ( !monitored_msr(d, msr) )
-        __clear_bit(msr * 2, msr_bit);
-
-    if ( flags & MSR_INTERCEPT_WRITE )
+    if ( flags & MSR_W )
         __set_bit(msr * 2 + 1, msr_bit);
-    else if ( !monitored_msr(d, msr) )
+}
+
+void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
+{
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
+
+    if ( msr_bit == NULL )
+        return;
+
+    msr &= 0x1fff;
+
+    if ( monitored_msr(v->domain, msr) )
+        return;
+
+    if ( flags & MSR_R )
+        __clear_bit(msr * 2, msr_bit);
+    if ( flags & MSR_W )
         __clear_bit(msr * 2 + 1, msr_bit);
 }
 
@@ -312,8 +325,10 @@ static void cf_check svm_enable_msr_interception(struct domain *d, uint32_t msr)
 {
     struct vcpu *v;
 
-    for_each_vcpu ( d, v )
-        svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE);
+    for_each_vcpu ( d, v ) {
+        svm_set_msr_intercept(v, msr, MSR_W);
+        svm_clear_msr_intercept(v, msr, MSR_R);
+    }
 }
 
 static void svm_save_dr(struct vcpu *v)
@@ -330,10 +345,10 @@ static void svm_save_dr(struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         rdmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         rdmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -361,10 +376,10 @@ static void __restore_debug_registers(struct vmcb_struct *vmcb, struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
    {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -595,22 +610,31 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     vmcb_set_exception_intercepts(vmcb, bitmap);
 
     /* Give access to MSR_SPEC_CTRL if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_SPEC_CTRL,
-                      cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibrs )
+        svm_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
     /*
      * Always trap write accesses to VIRT_SPEC_CTRL in order to cache the guest
      * setting and avoid having to perform a rdmsr on vmexit to get the guest
      * setting even if VIRT_SSBD is offered to Xen itself.
      */
-    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
-                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
-                      !cpu_has_amd_ssbd ?
-                      MSR_INTERCEPT_WRITE : MSR_INTERCEPT_RW);
+    if ( cp->extd.virt_ssbd && cpu_has_virt_ssbd && !cpu_has_amd_ssbd )
+    {
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_W);
+        svm_clear_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_R);
+    }
+    else
+    {
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
+    }
 
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_PRED_CMD,
-                      cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibpb )
+        svm_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 }
 
 void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 43d3fc2498..f853e2f3e8 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -261,6 +261,10 @@ extern struct hvm_function_table hvm_funcs;
 extern bool_t hvm_enabled;
 extern s8 hvm_port80_allowed;
 
+#define MSR_R  BIT(0, U)
+#define MSR_W  BIT(1, U)
+#define MSR_RW (MSR_W | MSR_R)
+
 extern const struct hvm_function_table *start_svm(void);
 extern const struct hvm_function_table *start_vmx(void);
 
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index e87728fa81..ed2e55e5cf 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -585,13 +585,12 @@ void svm_destroy_vmcb(struct vcpu *v);
 
 void setup_vmcb_dump(void);
 
-#define MSR_INTERCEPT_NONE    0
-#define MSR_INTERCEPT_READ    1
-#define MSR_INTERCEPT_WRITE   2
-#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
-#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
-#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
+void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
+void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
+#define svm_disable_intercept_for_msr(v, msr) \
+    svm_clear_msr_intercept((v), (msr), MSR_RW)
+#define svm_enable_intercept_for_msr(v, msr) \
+    svm_set_msr_intercept((v), (msr), MSR_RW)
 
 /*
  * VMCB accessor functions.
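The MSRPM scheme the two new helpers manipulate pairs two adjacent bits
per MSR: bit 2*n gates reads of MSR n, bit 2*n+1 gates writes. The
following minimal standalone C sketch models that layout; the toy
msrpm[] array, the helper names and the main() driver are illustrative
assumptions, not Xen code:

    /* Standalone illustration of the two-bits-per-MSR permission map. */
    #include <stdio.h>

    #define MSR_R  (1u << 0)
    #define MSR_W  (1u << 1)
    #define MSR_RW (MSR_R | MSR_W)

    static unsigned long msrpm[64];           /* toy permission bitmap */

    static void set_msr_intercept(unsigned int msr, int flags)
    {
        if ( flags & MSR_R )                  /* bit 2*msr: read intercept */
            msrpm[msr * 2 / 64] |= 1UL << (msr * 2 % 64);
        if ( flags & MSR_W )                  /* bit 2*msr+1: write intercept */
            msrpm[(msr * 2 + 1) / 64] |= 1UL << ((msr * 2 + 1) % 64);
    }

    static void clear_msr_intercept(unsigned int msr, int flags)
    {
        if ( flags & MSR_R )
            msrpm[msr * 2 / 64] &= ~(1UL << (msr * 2 % 64));
        if ( flags & MSR_W )
            msrpm[(msr * 2 + 1) / 64] &= ~(1UL << ((msr * 2 + 1) % 64));
    }

    int main(void)
    {
        set_msr_intercept(5, MSR_RW);         /* trap reads and writes of MSR 5 */
        clear_msr_intercept(5, MSR_R);        /* ...then let reads through */
        printf("read bit %lu, write bit %lu\n",   /* prints 0 and 1 */
               (msrpm[0] >> 10) & 1UL, (msrpm[0] >> 11) & 1UL);
        return 0;
    }

Splitting set and clear into separate functions, rather than one
svm_intercept_msr() taking NONE/READ/WRITE/RW, also lets the clear path
alone honour monitored MSRs, as svm_clear_msr_intercept() does with its
monitored_msr() early return.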
From patchwork Mon Feb 27 07:56:51 2023
X-Patchwork-Submitter: Xenia Ragiadakou
X-Patchwork-Id: 13152915
From: Xenia Ragiadakou
To: xen-devel@lists.xenproject.org
Cc: Jun Nakajima, Kevin Tian, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu
Subject: [PATCH 3/4] x86/vmx: replace enum vmx_msr_intercept_type with the msr access flags
Date: Mon, 27 Feb 2023 09:56:51 +0200
Message-Id: <20230227075652.3782973-4-burzalodowa@gmail.com>
In-Reply-To: <20230227075652.3782973-1-burzalodowa@gmail.com>
References: <20230227075652.3782973-1-burzalodowa@gmail.com>

Replace enum vmx_msr_intercept_type with the MSR access flags defined
in hvm.h, so that the functions {svm,vmx}_{set,clear}_msr_intercept()
share the same prototype.

No functional change intended.

Signed-off-by: Xenia Ragiadakou
---
 xen/arch/x86/cpu/vpmu_intel.c           | 24 +++++++-------
 xen/arch/x86/hvm/vmx/vmcs.c             | 38 ++++++++++-----------
 xen/arch/x86/hvm/vmx/vmx.c              | 44 ++++++++++++-------------
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 14 ++------
 4 files changed, 54 insertions(+), 66 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index bcfa187a14..bd91c79a36 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -230,22 +230,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 
     /* Allow Read/Write PMU Counters MSR Directly. */
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
-    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
+    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
@@ -253,21 +253,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
     unsigned int i;
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, VMX_MSR_R);
-    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, VMX_MSR_R);
+    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index ed71ecfb62..22c12509d5 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -902,8 +902,7 @@ static void vmx_set_host_env(struct vcpu *v)
               (unsigned long)&get_cpu_info()->guest_cpu_user_regs.error_code);
 }
 
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             enum vmx_msr_intercept_type type)
+void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -917,25 +916,24 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
 
     if ( msr <= 0x1fff )
     {
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             clear_bit(msr, msr_bitmap->read_low);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             clear_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             clear_bit(msr, msr_bitmap->read_high);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             clear_bit(msr, msr_bitmap->write_high);
     }
     else
         ASSERT(!"MSR out of range for interception\n");
 }
 
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           enum vmx_msr_intercept_type type)
+void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
 
@@ -945,17 +943,17 @@ void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
 
     if ( msr <= 0x1fff )
     {
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             set_bit(msr, msr_bitmap->read_low);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             set_bit(msr, msr_bitmap->write_low);
     }
     else if ( (msr >= 0xc0000000) && (msr <= 0xc0001fff) )
     {
         msr &= 0x1fff;
-        if ( type & VMX_MSR_R )
+        if ( type & MSR_R )
             set_bit(msr, msr_bitmap->read_high);
-        if ( type & VMX_MSR_W )
+        if ( type & MSR_W )
             set_bit(msr, msr_bitmap->write_high);
     }
     else
@@ -1162,17 +1160,17 @@ static int construct_vmcs(struct vcpu *v)
         v->arch.hvm.vmx.msr_bitmap = msr_bitmap;
         __vmwrite(MSR_BITMAP, virt_to_maddr(msr_bitmap));
 
-        vmx_clear_msr_intercept(v, MSR_FS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_GS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, VMX_MSR_RW);
-        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_FS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_GS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SHADOW_GS_BASE, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, MSR_RW);
 
         if ( paging_mode_hap(d) && (!is_iommu_enabled(d) || iommu_snoop) )
-            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
 
         if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&
              (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) )
-            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, MSR_RW);
     }
 
     /* I/O access bitmap. */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 0ec33bcc18..87c47c002c 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -802,7 +802,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
      */
     if ( cp->feat.ibrsb )
     {
-        vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
         rc = vmx_add_guest_msr(v, MSR_SPEC_CTRL, 0);
         if ( rc )
@@ -810,7 +810,7 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
     }
     else
     {
-        vmx_set_msr_intercept(v, MSR_SPEC_CTRL, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
         rc = vmx_del_msr(v, MSR_SPEC_CTRL, VMX_MSR_GUEST);
         if ( rc && rc != -ESRCH )
@@ -820,20 +820,20 @@ static void cf_check vmx_cpuid_policy_changed(struct vcpu *v)
 
     /* MSR_PRED_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.ibrsb || cp->extd.ibpb )
-        vmx_clear_msr_intercept(v, MSR_PRED_CMD, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_PRED_CMD, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 
     /* MSR_FLUSH_CMD is safe to pass through if the guest knows about it. */
     if ( cp->feat.l1d_flush )
-        vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_FLUSH_CMD, MSR_RW);
 
     if ( cp->feat.pks )
-        vmx_clear_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+        vmx_clear_msr_intercept(v, MSR_PKRS, MSR_RW);
     else
-        vmx_set_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
+        vmx_set_msr_intercept(v, MSR_PKRS, MSR_RW);
 
  out:
     vmx_vmcs_exit(v);
@@ -1429,7 +1429,7 @@ static void cf_check vmx_handle_cd(struct vcpu *v, unsigned long value)
 
             vmx_get_guest_pat(v, pat);
             vmx_set_guest_pat(v, uc_pat);
-            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+            vmx_set_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
 
             wbinvd();               /* flush possibly polluted cache */
             hvm_asid_flush_vcpu(v); /* invalidate memory type cached in TLB */
@@ -1440,7 +1440,7 @@ static void cf_check vmx_handle_cd(struct vcpu *v, unsigned long value)
             v->arch.hvm.cache_mode = NORMAL_CACHE_MODE;
             vmx_set_guest_pat(v, *pat);
             if ( !is_iommu_enabled(v->domain) || iommu_snoop )
-                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+                vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, MSR_RW);
             hvm_asid_flush_vcpu(v); /* no need to flush cache */
         }
 }
@@ -1906,9 +1906,9 @@ static void cf_check vmx_update_guest_efer(struct vcpu *v)
      * into hardware, clear the read intercept to avoid unnecessary VMExits.
      */
     if ( guest_efer == v->arch.hvm.guest_efer )
-        vmx_clear_msr_intercept(v, MSR_EFER, VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_EFER, MSR_R);
     else
-        vmx_set_msr_intercept(v, MSR_EFER, VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_EFER, MSR_R);
 }
 
 void nvmx_enqueue_n2_exceptions(struct vcpu *v,
@@ -2335,7 +2335,7 @@ static void cf_check vmx_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        vmx_set_msr_intercept(v, msr, VMX_MSR_W);
+        vmx_set_msr_intercept(v, msr, MSR_W);
 }
 
 static void cf_check vmx_vcpu_update_eptp(struct vcpu *v)
@@ -3502,17 +3502,17 @@ void cf_check vmx_vlapic_msr_changed(struct vcpu *v)
         {
             for ( msr = MSR_X2APIC_FIRST;
                   msr <= MSR_X2APIC_LAST; msr++ )
-                vmx_clear_msr_intercept(v, msr, VMX_MSR_R);
+                vmx_clear_msr_intercept(v, msr, MSR_R);
 
-            vmx_set_msr_intercept(v, MSR_X2APIC_PPR, VMX_MSR_R);
-            vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, VMX_MSR_R);
-            vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, VMX_MSR_R);
+            vmx_set_msr_intercept(v, MSR_X2APIC_PPR, MSR_R);
+            vmx_set_msr_intercept(v, MSR_X2APIC_TMICT, MSR_R);
+            vmx_set_msr_intercept(v, MSR_X2APIC_TMCCT, MSR_R);
         }
         if ( cpu_has_vmx_virtual_intr_delivery )
         {
-            vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, VMX_MSR_W);
-            vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, VMX_MSR_W);
-            vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, VMX_MSR_W);
+            vmx_clear_msr_intercept(v, MSR_X2APIC_TPR, MSR_W);
+            vmx_clear_msr_intercept(v, MSR_X2APIC_EOI, MSR_W);
+            vmx_clear_msr_intercept(v, MSR_X2APIC_SELF, MSR_W);
         }
     }
     else
@@ -3523,7 +3523,7 @@ void cf_check vmx_vlapic_msr_changed(struct vcpu *v)
               SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
             for ( msr = MSR_X2APIC_FIRST;
                   msr <= MSR_X2APIC_LAST; msr++ )
-                vmx_set_msr_intercept(v, msr, VMX_MSR_RW);
+                vmx_set_msr_intercept(v, msr, MSR_RW);
 
     vmx_update_secondary_exec_control(v);
     vmx_vmcs_exit(v);
@@ -3659,7 +3659,7 @@ static int cf_check vmx_msr_write_intercept(
                 return X86EMUL_OKAY;
             }
 
-            vmx_clear_msr_intercept(v, lbr->base + i, VMX_MSR_RW);
+            vmx_clear_msr_intercept(v, lbr->base + i, MSR_RW);
         }
     }
 
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index 0a84e74478..e08c506be5 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -644,18 +644,8 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
     return 0;
 }
 
-
-/* MSR intercept bitmap infrastructure. */
-enum vmx_msr_intercept_type {
-    VMX_MSR_R = 1,
-    VMX_MSR_W = 2,
-    VMX_MSR_RW = VMX_MSR_R | VMX_MSR_W,
-};
-
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
-                             enum vmx_msr_intercept_type type);
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
-                           enum vmx_msr_intercept_type type);
+void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type);
+void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
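The point of flattening the enum to a plain int shows up when the two
vendor implementations are placed behind one function pointer, as the
next patch does. A standalone sketch, not Xen code; the struct vcpu
stand-in, the MSR number and the flag values are illustrative
assumptions:

    #include <stdint.h>
    #include <stdio.h>

    struct vcpu { int id; };                  /* stand-in for Xen's struct vcpu */

    #define MSR_R  (1 << 0)
    #define MSR_W  (1 << 1)
    #define MSR_RW (MSR_R | MSR_W)

    /* With both helpers taking (struct vcpu *, uint32_t, int), one pointer
     * type fits either vendor; an SVM-side int against a VMX-side enum
     * parameter would not match. */
    typedef void (*set_intercept_fn)(struct vcpu *v, uint32_t msr, int flags);

    static void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
    { printf("svm: vcpu%d msr %#x flags %#x\n", v->id, msr, flags); }

    static void vmx_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
    { printf("vmx: vcpu%d msr %#x flags %#x\n", v->id, msr, flags); }

    int main(void)
    {
        struct vcpu v = { 0 };
        set_intercept_fn hook = vmx_set_msr_intercept; /* or the svm_ variant */

        hook(&v, 0x48 /* IA32_SPEC_CTRL */, MSR_RW);
        return 0;
    }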
From patchwork Mon Feb 27 07:56:52 2023
X-Patchwork-Submitter: Xenia Ragiadakou
X-Patchwork-Id: 13152914
From: Xenia Ragiadakou
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian
Subject: [PATCH 4/4] x86/hvm: create hvm_funcs for {svm,vmx}_{set,clear}_msr_intercept()
Date: Mon, 27 Feb 2023 09:56:52 +0200
Message-Id: <20230227075652.3782973-5-burzalodowa@gmail.com>
In-Reply-To: <20230227075652.3782973-1-burzalodowa@gmail.com>
References: <20230227075652.3782973-1-burzalodowa@gmail.com>

Add hvm_funcs hooks for {set,clear}_msr_intercept(), so that the MSR
intercepts can be controlled from the common vPMU code.

No functional change intended.
Signed-off-by: Xenia Ragiadakou
---
 xen/arch/x86/cpu/vpmu_amd.c             | 10 ++++-----
 xen/arch/x86/cpu/vpmu_intel.c           | 24 ++++++++++-----------
 xen/arch/x86/hvm/svm/svm.c              |  4 ++--
 xen/arch/x86/hvm/vmx/vmcs.c             |  4 ++--
 xen/arch/x86/hvm/vmx/vmx.c              |  2 ++
 xen/arch/x86/include/asm/hvm/hvm.h      | 28 +++++++++++++++++++++++++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h |  4 ++--
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h |  4 ++--
 8 files changed, 55 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index ed6706959e..a306297a69 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -165,9 +165,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_clear_msr_intercept(v, counters[i], MSR_RW);
-        svm_set_msr_intercept(v, ctrls[i], MSR_W);
-        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
+        hvm_clear_msr_intercept(v, counters[i], MSR_RW);
+        hvm_set_msr_intercept(v, ctrls[i], MSR_W);
+        hvm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -180,8 +180,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_set_msr_intercept(v, counters[i], MSR_RW);
-        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
+        hvm_set_msr_intercept(v, counters[i], MSR_RW);
+        hvm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index bd91c79a36..46ae38a326 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -230,22 +230,22 @@ static void core2_vpmu_set_msr_bitmap(struct vcpu *v)
 
     /* Allow Read/Write PMU Counters MSR Directly. */
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
+        hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
+        hvm_clear_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
+            hvm_clear_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
+        hvm_clear_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
-    vmx_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
+    hvm_clear_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    hvm_clear_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
@@ -253,21 +253,21 @@ static void core2_vpmu_unset_msr_bitmap(struct vcpu *v)
     unsigned int i;
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
+        hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR0 + i, MSR_RW);
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        vmx_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
+        hvm_set_msr_intercept(v, MSR_IA32_PERFCTR0 + i, MSR_RW);
 
         if ( full_width_write )
-            vmx_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
+            hvm_set_msr_intercept(v, MSR_IA32_A_PERFCTR0 + i, MSR_RW);
     }
 
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        vmx_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
+        hvm_set_msr_intercept(v, MSR_P6_EVNTSEL(i), MSR_R);
 
-    vmx_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
-    vmx_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
+    hvm_set_msr_intercept(v, MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_R);
+    hvm_set_msr_intercept(v, MSR_IA32_DS_AREA, MSR_R);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index eb144272f4..e54dc08e8a 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -288,7 +288,7 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
     return msr_bit;
 }
 
-void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
+void cf_check svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
 {
     unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
@@ -305,7 +305,7 @@ void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
         __set_bit(msr * 2 + 1, msr_bit);
 }
 
-void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
+void cf_check svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
 {
     unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 22c12509d5..3d0022f392 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -902,7 +902,7 @@ static void vmx_set_host_env(struct vcpu *v)
               (unsigned long)&get_cpu_info()->guest_cpu_user_regs.error_code);
 }
 
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type)
+void cf_check vmx_clear_msr_intercept(struct vcpu *v, uint32_t msr, int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
     struct domain *d = v->domain;
@@ -933,7 +933,7 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type)
         ASSERT(!"MSR out of range for interception\n");
 }
 
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type)
+void cf_check vmx_set_msr_intercept(struct vcpu *v, uint32_t msr, int type)
 {
     struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 87c47c002c..d3f2b3add4 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2765,6 +2765,8 @@ static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
     .nhvm_domain_relinquish_resources = nvmx_domain_relinquish_resources,
     .update_vlapic_mode = vmx_vlapic_msr_changed,
     .nhvm_hap_walk_L1_p2m = nvmx_hap_walk_L1_p2m,
+    .set_msr_intercept = vmx_set_msr_intercept,
+    .clear_msr_intercept = vmx_clear_msr_intercept,
     .enable_msr_interception = vmx_enable_msr_interception,
     .altp2m_vcpu_update_p2m = vmx_vcpu_update_eptp,
     .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve,
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index f853e2f3e8..dd9aa42d0a 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -224,6 +224,8 @@ struct hvm_function_table {
                                 paddr_t *L1_gpa, unsigned int *page_order,
                                 uint8_t *p2m_acc, struct npfec npfec);
 
+    void (*set_msr_intercept)(struct vcpu *v, uint32_t msr, int flags);
+    void (*clear_msr_intercept)(struct vcpu *v, uint32_t msr, int flags);
     void (*enable_msr_interception)(struct domain *d, uint32_t msr);
 
     /* Alternate p2m */
@@ -658,6 +660,20 @@ static inline int nhvm_hap_walk_L1_p2m(
         v, L2_gpa, L1_gpa, page_order, p2m_acc, npfec);
 }
 
+static inline void hvm_set_msr_intercept(struct vcpu *v, uint32_t msr,
+                                         int flags)
+{
+    if ( hvm_funcs.set_msr_intercept )
+        alternative_vcall(hvm_funcs.set_msr_intercept, v, msr, flags);
+}
+
+static inline void hvm_clear_msr_intercept(struct vcpu *v, uint32_t msr,
+                                           int flags)
+{
+    if ( hvm_funcs.clear_msr_intercept )
+        alternative_vcall(hvm_funcs.clear_msr_intercept, v, msr, flags);
+}
+
 static inline void hvm_enable_msr_interception(struct domain *d, uint32_t msr)
 {
     alternative_vcall(hvm_funcs.enable_msr_interception, d, msr);
@@ -916,6 +932,18 @@ static inline void hvm_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
     ASSERT_UNREACHABLE();
 }
 
+static inline void hvm_set_msr_intercept(struct vcpu *v, uint32_t msr,
+                                         int flags)
+{
+    ASSERT_UNREACHABLE();
+}
+
+static inline void hvm_clear_msr_intercept(struct vcpu *v, uint32_t msr,
+                                           int flags)
+{
+    ASSERT_UNREACHABLE();
+}
+
 #define is_viridian_domain(d) ((void)(d), false)
 #define is_viridian_vcpu(v) ((void)(v), false)
 #define has_viridian_time_ref_count(d) ((void)(d), false)
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index ed2e55e5cf..dbe8ba89cc 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -585,8 +585,8 @@ void svm_destroy_vmcb(struct vcpu *v);
 
 void setup_vmcb_dump(void);
 
-void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
-void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
+void cf_check svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
+void cf_check svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags);
 #define svm_disable_intercept_for_msr(v, msr) \
     svm_clear_msr_intercept((v), (msr), MSR_RW)
 #define svm_enable_intercept_for_msr(v, msr) \
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index e08c506be5..f2880c8122 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -644,8 +644,8 @@ static inline int vmx_write_guest_msr(struct vcpu *v, uint32_t msr,
     return 0;
 }
 
-void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr, int type);
-void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr, int type);
+void cf_check vmx_clear_msr_intercept(struct vcpu *v, uint32_t msr, int type);
+void cf_check vmx_set_msr_intercept(struct vcpu *v, uint32_t msr, int type);
 void vmx_vmcs_switch(paddr_t from, paddr_t to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
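A rough standalone model of the dispatch this patch sets up: common
code calls a wrapper, which forwards through a per-vendor ops table
only when the hook is populated, mirroring the patch's
`if ( hvm_funcs.set_msr_intercept )` guard. The struct layout, names
and printf bodies below are assumptions for illustration; Xen
dispatches via alternative_vcall() rather than a plain indirect call:

    #include <stdint.h>
    #include <stdio.h>

    struct vcpu { const char *name; };

    #define MSR_W (1 << 1)

    struct hvm_function_table {
        void (*set_msr_intercept)(struct vcpu *v, uint32_t msr, int flags);
        void (*clear_msr_intercept)(struct vcpu *v, uint32_t msr, int flags);
    };

    static void svm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
    { printf("%s: svm set intercept, msr %#x flags %#x\n", v->name, msr, flags); }

    static void svm_clear_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
    { printf("%s: svm clear intercept, msr %#x flags %#x\n", v->name, msr, flags); }

    /* Filled in at start of day by the detected vendor's setup code. */
    static struct hvm_function_table hvm_funcs = {
        .set_msr_intercept = svm_set_msr_intercept,
        .clear_msr_intercept = svm_clear_msr_intercept,
    };

    static void hvm_set_msr_intercept(struct vcpu *v, uint32_t msr, int flags)
    {
        if ( hvm_funcs.set_msr_intercept )    /* hook may be absent */
            hvm_funcs.set_msr_intercept(v, msr, flags);
    }

    int main(void)
    {
        struct vcpu v = { "d0v0" };
        /* Common, vendor-agnostic code can now flip an intercept: */
        hvm_set_msr_intercept(&v, 0xc0010200 /* example: PerfEvtSel0 */, MSR_W);
        return 0;
    }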