
[XEN,v1,2/7] x86/intel: guard vmce_has_lmce() with INTEL option

Message ID 5e26895d84f8b7750799740ac2324b2cb92fa97e.1713860310.git.Sergiy_Kibrik@epam.com (mailing list archive)
State Superseded
Series x86: make Intel/AMD vPMU & MCE support configurable

Commit Message

Sergiy Kibrik April 23, 2024, 8:50 a.m. UTC
Since the MCG_LMCE_P bit is specific to Intel CPUs, the code that checks
it can be excluded from the build if !CONFIG_INTEL. With these guards in
place, calls to vmce_has_lmce() are eliminated and mce_intel.c can end
up not being built.

Also replace the boilerplate code that checks the MCG_LMCE_P flag with
vmce_has_lmce(), which should improve readability a bit.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
 xen/arch/x86/cpu/mcheck/vmce.c | 4 ++--
 xen/arch/x86/msr.c             | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
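
The build-time elimination described above relies on IS_ENABLED() expanding
to a compile-time 0 or 1: with CONFIG_INTEL=n the && short-circuits during
constant folding, the compiler emits no call to vmce_has_lmce(), and no
reference remains to pull mce_intel.o into the link. Below is a minimal
standalone sketch of the pattern; the CONFIG_INTEL and IS_ENABLED()
definitions are simplified stand-ins for Xen's Kconfig plumbing, not the
real macros.

#include <stdbool.h>
#include <stdio.h>

/*
 * Simplified stand-ins: Xen's IS_ENABLED() is more involved, but it
 * likewise evaluates to a compile-time 0 or 1.
 */
#define CONFIG_INTEL 0
#define IS_ENABLED(option) (option)

/* Stand-in for the real helper, which lives in mce_intel.c. */
static bool vmce_has_lmce(void)
{
    return true;
}

int main(void)
{
    /*
     * With CONFIG_INTEL == 0 the condition folds to false at compile
     * time, so the vmce_has_lmce() call is dropped; in Xen, the same
     * dead-code elimination removes the last references into
     * mce_intel.c, which is what lets it stay out of the build.
     */
    if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce() )
        puts("LMCE exposed to the guest");
    else
        puts("LMCE not exposed");

    return 0;
}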

Comments

Stefano Stabellini April 26, 2024, 11:04 p.m. UTC | #1
On Tue, 23 Apr 2024, Sergiy Kibrik wrote:
> Since the MCG_LMCE_P bit is specific to Intel CPUs, the code that checks
> it can be excluded from the build if !CONFIG_INTEL. With these guards in
> place, calls to vmce_has_lmce() are eliminated and mce_intel.c can end
> up not being built.
> 
> Also replace the boilerplate code that checks the MCG_LMCE_P flag with
> vmce_has_lmce(), which should improve readability a bit.
> 
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Jan Beulich April 29, 2024, 3:34 p.m. UTC | #2
On 23.04.2024 10:50, Sergiy Kibrik wrote:
> Since the MCG_LMCE_P bit is specific to Intel CPUs,

That's the case now. It could change going forward, and an underlying hypervisor
might also have (obscure?) reasons to surface it elsewhere.

> the code that checks it can
> be excluded from the build if !CONFIG_INTEL. With these guards in
> place, calls to vmce_has_lmce() are eliminated and mce_intel.c can end
> up not being built.
> 
> Also replace the boilerplate code that checks the MCG_LMCE_P flag with
> vmce_has_lmce(), which should improve readability a bit.

Alternatively, have you considered making that function an inline one in a
suitable header? Besides addressing your build issue (I think), ...

> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -199,7 +199,7 @@ int vmce_rdmsr(uint32_t msr, uint64_t *val)
>           * bits are always set in guest MSR_IA32_FEATURE_CONTROL by Xen, so it
>           * does not need to check them here.
>           */
> -        if ( cur->arch.vmce.mcg_cap & MCG_LMCE_P )
> +        if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) )

... doing so would alternatively also permit integrating the IS_ENABLED()
into the function, rather than repeating the same ...

> @@ -324,7 +324,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
>          break;
>  
>      case MSR_IA32_MCG_EXT_CTL:
> -        if ( (cur->arch.vmce.mcg_cap & MCG_LMCE_P) &&
> +        if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) &&
>               !(val & ~MCG_EXT_CTL_LMCE_EN) )
>              cur->arch.vmce.mcg_ext_ctl = val;
>          else
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -86,7 +86,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>              goto gp_fault;
>  
>          *val = IA32_FEATURE_CONTROL_LOCK;
> -        if ( vmce_has_lmce(v) )
> +        if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(v) )
>              *val |= IA32_FEATURE_CONTROL_LMCE_ON;
>          if ( cp->basic.vmx )
>              *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;

... three times.

Jan
Sergiy Kibrik April 30, 2024, 9:42 a.m. UTC | #3
29.04.24 18:34, Jan Beulich:
> On 23.04.2024 10:50, Sergiy Kibrik wrote:
>> Since the MCG_LMCE_P bit is specific to Intel CPUs,
> 
> That's the case now. It could change going forward, and an underlying hypervisor
> might also have (obscure?) reasons to surface it elsewhere.
> 
>> the code that checks it can
>> be excluded from the build if !CONFIG_INTEL. With these guards in
>> place, calls to vmce_has_lmce() are eliminated and mce_intel.c can end
>> up not being built.
>>
>> Also replace the boilerplate code that checks the MCG_LMCE_P flag with
>> vmce_has_lmce(), which should improve readability a bit.
> 
> Alternatively, have you considered making that function an inline one in a
> suitable header? Besides addressing your build issue (I think), ...
> 
>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> @@ -199,7 +199,7 @@ int vmce_rdmsr(uint32_t msr, uint64_t *val)
>>            * bits are always set in guest MSR_IA32_FEATURE_CONTROL by Xen, so it
>>            * does not need to check them here.
>>            */
>> -        if ( cur->arch.vmce.mcg_cap & MCG_LMCE_P )
>> +        if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) )
> 
> ... doing so would alternatively also permit integrating the IS_ENABLED()
> into the function, rather than repeating the same ...
> 
>> @@ -324,7 +324,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
>>           break;
>>   
>>       case MSR_IA32_MCG_EXT_CTL:
>> -        if ( (cur->arch.vmce.mcg_cap & MCG_LMCE_P) &&
>> +        if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) &&
>>                !(val & ~MCG_EXT_CTL_LMCE_EN) )
>>               cur->arch.vmce.mcg_ext_ctl = val;
>>           else
>> --- a/xen/arch/x86/msr.c
>> +++ b/xen/arch/x86/msr.c
>> @@ -86,7 +86,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>               goto gp_fault;
>>   
>>           *val = IA32_FEATURE_CONTROL_LOCK;
>> -        if ( vmce_has_lmce(v) )
>> +        if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(v) )
>>               *val |= IA32_FEATURE_CONTROL_LMCE_ON;
>>           if ( cp->basic.vmx )
>>               *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;
> 
> ... three times.
> 

I think I'll move vmce_has_lmce() to arch/x86/cpu/mcheck/mce.h then, if 
no objections.

   -Sergiy
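
For reference, a minimal sketch of what that follow-up could look like,
assuming vmce_has_lmce() becomes a static inline in
xen/arch/x86/cpu/mcheck/mce.h with the CONFIG_INTEL check folded in as
Jan suggests above (the exact shape is whatever the next revision of the
series carries):

/* In xen/arch/x86/cpu/mcheck/mce.h (sketch, not the applied patch): */
static inline bool vmce_has_lmce(const struct vcpu *v)
{
    /*
     * Folding the Kconfig check in here keeps IS_ENABLED() out of the
     * three call sites while still letting the compiler discard the
     * Intel-only path when !CONFIG_INTEL.
     */
    return IS_ENABLED(CONFIG_INTEL) &&
           (v->arch.vmce.mcg_cap & MCG_LMCE_P);
}

Callers such as vmce_rdmsr(), vmce_wrmsr() and guest_rdmsr() would then
shrink back to a plain if ( vmce_has_lmce(v) ).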

Patch

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 353d4f19b2..c437f62c0a 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -199,7 +199,7 @@  int vmce_rdmsr(uint32_t msr, uint64_t *val)
          * bits are always set in guest MSR_IA32_FEATURE_CONTROL by Xen, so it
          * does not need to check them here.
          */
-        if ( cur->arch.vmce.mcg_cap & MCG_LMCE_P )
+        if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) )
         {
             *val = cur->arch.vmce.mcg_ext_ctl;
             mce_printk(MCE_VERBOSE, "MCE: %pv: rd MCG_EXT_CTL %#"PRIx64"\n",
@@ -324,7 +324,7 @@  int vmce_wrmsr(uint32_t msr, uint64_t val)
         break;
 
     case MSR_IA32_MCG_EXT_CTL:
-        if ( (cur->arch.vmce.mcg_cap & MCG_LMCE_P) &&
+        if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(cur) &&
              !(val & ~MCG_EXT_CTL_LMCE_EN) )
             cur->arch.vmce.mcg_ext_ctl = val;
         else
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 9babd441f9..4010c87d93 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -86,7 +86,7 @@  int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
             goto gp_fault;
 
         *val = IA32_FEATURE_CONTROL_LOCK;
-        if ( vmce_has_lmce(v) )
+        if ( IS_ENABLED(CONFIG_INTEL) && vmce_has_lmce(v) )
             *val |= IA32_FEATURE_CONTROL_LMCE_ON;
         if ( cp->basic.vmx )
             *val |= IA32_FEATURE_CONTROL_ENABLE_VMXON_OUTSIDE_SMX;