
[1/3] x86/vpmu: Calculate vpmu_enabled() based on vpmu_mode value

Message ID 1486952997-19445-2-git-send-email-boris.ostrovsky@oracle.com (mailing list archive)
State New, archived

Commit Message

Boris Ostrovsky Feb. 13, 2017, 2:29 a.m. UTC
vpmu_enabled() is currently reported based on VPMU_CONTEXT_ALLOCATED
bit of VPMU's flags. This presents a problem on Intel processors where
VPMU context is allocated lazily, during the first access to VPMU MSRs.
With such delayed allocation we, for example, cannot properly report
CPUID's leaf 0xa since it is likely that the leaf is queried by a
guest before the guest attempts to read or write the MSR.

Instead of keying vpmu_enabled() off the flags we should compute it
based on vpmu_mode.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---

vpmu_enabled() cannot be made an inline function in vpmu.h since
is_hardware_domain() is defined in sched.h, which includes asm-x86/domain.h,
which, in turn, needs vpmu.h.

Making it a macro would work, but then we'd depend on whoever includes vpmu.h
to also include sched.h, and I didn't like that.

 xen/arch/x86/cpu/vpmu.c    | 6 ++++++
 xen/include/asm-x86/vpmu.h | 2 +-
 2 files changed, 7 insertions(+), 1 deletion(-)

Comments

Andrew Cooper Feb. 13, 2017, 10:33 a.m. UTC | #1
On 13/02/17 02:29, Boris Ostrovsky wrote:
> vpmu_enabled() is currently reported based on VPMU_CONTEXT_ALLOCATED
> bit of VPMU's flags. This presents a problem on Intel processors where
> VPMU context is allocated lazily, during the first access to VPMU MSRs.
> With such delayed allocation we, for example, cannot properly report
> CPUID's leaf 0xa since it is likely that the leaf is queried by a
> guest before the guest attempts to read or write the MSR.
>
> Instead of keying vpmu_enabled() off the flags we should compute it
> based on vpmu_mode.
>
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

s/bool_t/bool/

Otherwise, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

I can fix this up on commit if there are no other issues.

~Andrew
Jan Beulich Feb. 13, 2017, 12:50 p.m. UTC | #2
>>> On 13.02.17 at 03:29, <boris.ostrovsky@oracle.com> wrote:
> vpmu_enabled() is currently reported based on VPMU_CONTEXT_ALLOCATED
> bit of VPMU's flags. This presents a problem on Intel processors where
> VPMU context is allocated lazily, during the first access to VPMU MSRs.
> With such delayed allocation we, for example, cannot properly report
> CPUID's leaf 0xa since it is likely that the leaf is queried by a
> guest before the guest attempts to read or write the MSR.
> 
> Instead of keying vpmu_enabled() off the flags we should compute it
> based on vpmu_mode.

Doesn't this have its own downsides? What if the mode changes
behind the back of a DomU? While certain mode changes are
disallowed while there are active users, there's nothing preventing
mode or features from changing between a DomU querying CPUID
and enabling the vPMU.

> --- a/xen/arch/x86/cpu/vpmu.c
> +++ b/xen/arch/x86/cpu/vpmu.c
> @@ -110,6 +110,12 @@ static void __init parse_vpmu_params(char *s)
>      printk("VPMU: unknown flags: %s - vpmu disabled!\n", s);
>  }
>  
> +bool_t vpmu_enabled (const struct vcpu *v)

Stray blank.

Jan
Boris Ostrovsky Feb. 13, 2017, 2:38 p.m. UTC | #3
On 02/13/2017 07:50 AM, Jan Beulich wrote:
>>>> On 13.02.17 at 03:29, <boris.ostrovsky@oracle.com> wrote:
>> vpmu_enabled() is currently reported based on VPMU_CONTEXT_ALLOCATED
>> bit of VPMU's flags. This presents a problem on Intel processors where
>> VPMU context is allocated lazily, during the first access to VPMU MSRs.
>> With such delayed allocation we, for example, cannot properly report
>> CPUID's leaf 0xa since it is likely that the leaf is queried by a
>> guest before the guest attempts to read or write the MSR.
>>
>> Instead of keying vpmu_enabled() off the flags we should compute it
>> based on vpmu_mode.
>
> Doesn't this have its own downsides? What if the mode changes
> behind the back of a DomU? While certain mode changes are
> disallowed while there are active users, there's nothing preventing
> mode or features from changing between a DomU querying CPUID
> and enabling the vPMU.


As you said, the mode can't change after a domU VCPU has been 
initialized, i.e. before the VCPU started running.

Of course, this was broken, which is what patch 2 (not quite 
successfully) tried to fix. And I proposed in my reply to Andrew
to instead define vpmu_enabled() as !!vcpu_vpmu(v)->arch_vpmu_ops.
That should solve the problems that both patches try to address.

-boris


>
>> --- a/xen/arch/x86/cpu/vpmu.c
>> +++ b/xen/arch/x86/cpu/vpmu.c
>> @@ -110,6 +110,12 @@ static void __init parse_vpmu_params(char *s)
>>      printk("VPMU: unknown flags: %s - vpmu disabled!\n", s);
>>  }
>>
>> +bool_t vpmu_enabled (const struct vcpu *v)
>
> Stray blank.
>
> Jan
>
Jan Beulich Feb. 13, 2017, 2:44 p.m. UTC | #4
>>> On 13.02.17 at 15:38, <boris.ostrovsky@oracle.com> wrote:
> On 02/13/2017 07:50 AM, Jan Beulich wrote:
>>>>> On 13.02.17 at 03:29, <boris.ostrovsky@oracle.com> wrote:
>>> vpmu_enabled() is currently reported based on VPMU_CONTEXT_ALLOCATED
>>> bit of VPMU's flags. This presents a problem on Intel processors where
>>> VPMU context is allocated lazily, during the first access to VPMU MSRs.
>>> With such delayed allocation we, for example, cannot properly report
>>> CPUID's leaf 0xa since it is likely that the leaf is queried by a
>>> guest before the guest attempts to read or write the MSR.
>>>
>>> Instead of keying vpmu_enabled() off the flags we should compute it
>>> based on vpmu_mode.
>>
>> Doesn't this have its own downsides? What if the mode changes
>> behind the back of a DomU? While certain mode changes are
>> disallowed while there are active users, there's nothing preventing
>> mode or features from changing between a DomU querying CPUID
>> and enabling the vPMU.
> 
> As you said, the mode can't change after a domU VCPU has been 
> initialized, i.e. before the VCPU started running.

I don't understand this last part - how is this related to the vCPU
starting to run? Doesn't at least PV initiate the initialization via
hypercall?

Jan
Boris Ostrovsky Feb. 13, 2017, 3:02 p.m. UTC | #5
On 02/13/2017 09:44 AM, Jan Beulich wrote:
>>>> On 13.02.17 at 15:38, <boris.ostrovsky@oracle.com> wrote:
>> On 02/13/2017 07:50 AM, Jan Beulich wrote:
>>>>>> On 13.02.17 at 03:29, <boris.ostrovsky@oracle.com> wrote:
>>>> vpmu_enabled() is currently reported based on VPMU_CONTEXT_ALLOCATED
>>>> bit of VPMU's flags. This presents a problem on Intel processors where
>>>> VPMU context is allocated lazily, during the first access to VPMU MSRs.
>>>> With such delayed allocation we, for example, cannot properly report
>>>> CPUID's leaf 0xa since it is likely that the leaf is queried by a
>>>> guest before the guest attempts to read or write the MSR.
>>>>
>>>> Instead of keying vpmu_enabled() off the flags we should compute it
>>>> based on vpmu_mode.
>>>
>>> Doesn't this have its own downsides? What if the mode changes
>>> behind the back of a DomU? While certain mode changes are
>>> disallowed while there are active users, there's nothing preventing
>>> mode or features from changing between a DomU querying CPUID
>>> and enabling the vPMU.
>>
>> As you said, the mode can't change after a domU VCPU has been
>> initialized, i.e. before the VCPU started running.
>
> I don't understand this last part - how is this related to the vCPU
> starting to run? Doesn't at least PV initiate the initialization via
> hypercall?

Oh, right. Then looking at arch_vpmu_ops will not work either, for 
exactly the same reason --- the hypercall initializing the VPMU may come 
after CPUID.

-boris

Patch

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 35a9403..0252171 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -110,6 +110,12 @@  static void __init parse_vpmu_params(char *s)
     printk("VPMU: unknown flags: %s - vpmu disabled!\n", s);
 }
 
+bool_t vpmu_enabled (const struct vcpu *v)
+{
+    return ((vpmu_mode & (XENPMU_MODE_SELF | XENPMU_MODE_HV)) ||
+            ((vpmu_mode & XENPMU_MODE_ALL) && is_hardware_domain(v->domain)));
+}
+
 void vpmu_lvtpc_update(uint32_t val)
 {
     struct vpmu_struct *vpmu;
diff --git a/xen/include/asm-x86/vpmu.h b/xen/include/asm-x86/vpmu.h
index e50618f..30a0476 100644
--- a/xen/include/asm-x86/vpmu.h
+++ b/xen/include/asm-x86/vpmu.h
@@ -25,7 +25,6 @@ 
 
 #define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
 #define vpmu_vcpu(vpmu)   container_of((vpmu), struct vcpu, arch.vpmu)
-#define vpmu_enabled(vcpu) vpmu_is_set(vcpu_vpmu(vcpu), VPMU_CONTEXT_ALLOCATED)
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
@@ -101,6 +100,7 @@  static inline bool_t vpmu_are_all_set(const struct vpmu_struct *vpmu,
     return !!((vpmu->flags & mask) == mask);
 }
 
+bool_t vpmu_enabled(const struct vcpu *v);
 void vpmu_lvtpc_update(uint32_t val);
 int vpmu_do_msr(unsigned int msr, uint64_t *msr_content,
                 uint64_t supported, bool_t is_write);