
[v3,4/6] KVM: x86: Make use of kvm_read_cr*_bits() when testing bits

Message ID 20230201194604.11135-5-minipli@grsecurity.net (mailing list archive)
State New, archived
Series KVM: MMU: performance tweaks for heavy CR0.WP users

Commit Message

Mathias Krause Feb. 1, 2023, 7:46 p.m. UTC
Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
want to know the state of certain bits instead of the whole register.

This not only makes the intend cleaner, it also avoids a VMREAD in case
the tested bits aren't guest owned.

Signed-off-by: Mathias Krause <minipli@grsecurity.net>
---
 arch/x86/kvm/pmu.c     | 4 ++--
 arch/x86/kvm/vmx/vmx.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

Comments

Zhi Wang Feb. 7, 2023, 1:05 p.m. UTC | #1
On Wed,  1 Feb 2023 20:46:02 +0100
Mathias Krause <minipli@grsecurity.net> wrote:

> Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
> want to know the state of certain bits instead of the whole register.
> 
> This not only makes the intend cleaner, it also avoids a VMREAD in case
> the tested bits aren't guest owned.
                    ^
The patch comment is a little confusing. Not sure if I misunderstood here:

Check the code of kvm_read_cr0_bits

static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
{
        ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
        if ((tmask & vcpu->arch.cr0_guest_owned_bits) &&
            !kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
                static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR0);
        return vcpu->arch.cr0 & mask;
}

I suppose the condition that can avoid a VMREAD is to avoid the vmread in
static_call(kvm_x86_cache_reg):

Conditions that do not trigger a vmread:

1) The tested bits are guest-owned bits and the cached register is available.
2) The tested bits are *not* guest-owned bits.

I agree that this makes the intent cleaner, but I'm not sure the latter
statement in the patch comment is true. If the tested bits are not guest
owned, it will not reach static_call(kvm_x86_cache_reg).
> 
> Signed-off-by: Mathias Krause <minipli@grsecurity.net>
> ---
>  arch/x86/kvm/pmu.c     | 4 ++--
>  arch/x86/kvm/vmx/vmx.c | 4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index d939d3b84e6f..d9922277df67 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -439,9 +439,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
>  	if (!pmc)
>  		return 1;
>  
> -	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
> +	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&
>  	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
> -	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
> +	    (kvm_read_cr0_bits(vcpu, X86_CR0_PE)))
>  		return 1;
>  
>  	*data = pmc_read_counter(pmc) & mask;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index c8198c8a9b55..d3b49e0b6c32 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -5487,7 +5487,7 @@ static int handle_cr(struct kvm_vcpu *vcpu)
>  		break;
>  	case 3: /* lmsw */
>  		val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
> -		trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val);
> +		trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val));
>  		kvm_lmsw(vcpu, val);
>  
>  		return kvm_skip_emulated_instruction(vcpu);
> @@ -7547,7 +7547,7 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
>  	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
>  		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
>  
> -	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
> +	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
>  		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
>  			cache = MTRR_TYPE_WRBACK;
>  		else
Mathias Krause Feb. 8, 2023, 9:11 a.m. UTC | #2
On 07.02.23 14:05, Zhi Wang wrote:
> On Wed,  1 Feb 2023 20:46:02 +0100
> Mathias Krause <minipli@grsecurity.net> wrote:
> 
>> Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
>> want to know the state of certain bits instead of the whole register.
>>
>> This not only makes the intend cleaner, it also avoids a VMREAD in case
                           ~~~~~~
Oh, this should have been "intent". Will fix in v4, if there's a need for it.

>> the tested bits aren't guest owned.
>                     ^
> The patch comment is a little confusing. Not sure if I misunderstood here:

Sorry, let's try to clarify.

> Check the code of kvm_read_cr0_bits
> 
> static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
> {
>         ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
>         if ((tmask & vcpu->arch.cr0_guest_owned_bits) &&
>             !kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
>                 static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR0);
>         return vcpu->arch.cr0 & mask;
> }
> 
> I suppose the condition that can avoid a VMREAD is to avoid the vmread in
> static_call(kvm_x86_cache_reg):

Correct, that's what this patch is trying to do: It tries to avoid the
static_call(kvm_x86_cache_reg)(...) by making the compiler aware of the
actually used bits in 'mask'. If those don't intersect with the guest
owned bits, the first part of the condition won't be true and we can
simply make use of 'vcpu->arch.cr0'.

Maybe it gets clearer when looking at kvm_read_cr0() too which is just this:

static inline ulong kvm_read_cr0(struct kvm_vcpu *vcpu)
{
    return kvm_read_cr0_bits(vcpu, ~0UL);
}

So the 'mask' passed to kvm_read_cr0_bits() will always include all
(possible) guest owned bits (KVM_POSSIBLE_CR0_GUEST_BITS & ~0UL ==
KVM_POSSIBLE_CR0_GUEST_BITS) and the compiler cannot do the optimization
mentioned above.

If we, however, use kvm_read_cr0_bits(..., MASK) directly instead of
using kvm_read_cr0() & MASK, it can, e.g. for all bits not in
KVM_POSSIBLE_CR0_GUEST_BITS & vcpu->arch.cr0_guest_owned_bits.
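
To make that folding concrete, here's a rough stand-alone model of the
pattern. This is only a sketch, not the real KVM code: the 'vcpu_model'
type and the 'refresh_cr0_from_vmcs()' helper are made-up stand-ins for
illustration.

/* Simplified model of the kvm_read_cr0_bits() pattern -- not kernel code. */
#include <stdio.h>

#define X86_CR0_PE (1UL <<  0)
#define X86_CR0_TS (1UL <<  3)
#define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS	/* as before patch 6 */

struct vcpu_model {
	unsigned long cr0;			/* cached CR0 value */
	unsigned long cr0_guest_owned_bits;
	int cr0_cached;				/* "register available" flag */
};

/* Stand-in for the static_call(kvm_x86_cache_reg) slow path, i.e. the VMREAD. */
static unsigned long refresh_cr0_from_vmcs(struct vcpu_model *v)
{
	puts("VMREAD");
	v->cr0_cached = 1;
	return v->cr0;
}

static inline unsigned long read_cr0_bits(struct vcpu_model *v, unsigned long mask)
{
	unsigned long tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;

	/*
	 * With a constant mask like X86_CR0_PE, tmask is a compile-time 0,
	 * so this branch is provably dead and the slow path vanishes.  With
	 * mask == ~0UL (i.e. kvm_read_cr0()) tmask is nonzero and the branch
	 * has to stay.
	 */
	if ((tmask & v->cr0_guest_owned_bits) && !v->cr0_cached)
		refresh_cr0_from_vmcs(v);

	return v->cr0 & mask;
}

int main(void)
{
	struct vcpu_model v = {
		.cr0 = X86_CR0_PE,
		.cr0_guest_owned_bits = X86_CR0_TS,
	};

	/* Folds to a plain load + AND, never prints "VMREAD": */
	printf("PE set: %d\n", read_cr0_bits(&v, X86_CR0_PE) != 0);

	/* Keeps the conditional slow path and prints "VMREAD" here: */
	printf("full CR0: %#lx\n", read_cr0_bits(&v, ~0UL));
	return 0;
}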

> Conditions that do not trigger a vmread:
> 
> 1) The tested bits are guest-owned bits and the cached register is available.
> 2) The tested bits are *not* guest-owned bits.

For case 1 the patch would make only a minor difference, by concluding
earlier that it can simply make use of vcpu->arch.cr0. But it's case 2
I'm after.

If you look up KVM_POSSIBLE_CR0_GUEST_BITS, which is the upper bound for
guest owned CR0 bits, you'll find before patch 6:

#define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS

and after patch 6:

#define KVM_LAZY_CR0_GUEST_BITS     X86_CR0_WP
#define KVM_POSSIBLE_CR0_GUEST_BITS (X86_CR0_TS|KVM_LAZY_CR0_GUEST_BITS)

So the upper bound would be 'X86_CR0_TS|X86_CR0_WP'. Every bit outside
that set can directly be read from the 'vcpu' cached register value and
that's (mostly) the case for the users this patch is changing, see below.
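
For reference when checking the per-hunk remarks below, these are the
architectural bit positions involved (standard x86 definitions), with the
intersections being relied on spelled out as compile-time checks. The
POSSIBLE_CR0_GUEST_BITS_V3/_V6 names are just shorthand for "before/after
patch 6 of this series", not real macros.

/* Architectural CR0/CR4 bit positions (standard x86 definitions). */
#define X86_CR0_PE	(1UL <<  0)	/* Protection Enable */
#define X86_CR0_TS	(1UL <<  3)	/* Task Switched */
#define X86_CR0_WP	(1UL << 16)	/* Write Protect */
#define X86_CR0_CD	(1UL << 30)	/* Cache Disable */
#define X86_CR4_PCE	(1UL <<  8)	/* Performance-monitoring Counter Enable */

/* Upper bound on guest-owned CR0 bits, before and after patch 6. */
#define POSSIBLE_CR0_GUEST_BITS_V3	X86_CR0_TS
#define POSSIBLE_CR0_GUEST_BITS_V6	(X86_CR0_TS | X86_CR0_WP)

/* The intersections used in the analysis below: */
_Static_assert((X86_CR0_PE & POSSIBLE_CR0_GUEST_BITS_V6) == 0,
	       "CR0.PE is never guest owned -> no VMREAD needed");
_Static_assert((X86_CR0_CD & POSSIBLE_CR0_GUEST_BITS_V6) == 0,
	       "CR0.CD is never guest owned -> no VMREAD needed");
_Static_assert((~0xfUL & POSSIBLE_CR0_GUEST_BITS_V3) == 0,
	       "the lmsw trace mask misses TS (bit 3) -> no VMREAD before patch 6");
_Static_assert((~0xfUL & POSSIBLE_CR0_GUEST_BITS_V6) == X86_CR0_WP,
	       "after patch 6 the lmsw trace mask still overlaps WP");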

> I agree that this makes the intent cleaner, but I'm not sure the latter
> statement in the patch comment is true. If the tested bits are not guest
> owned, it will not reach static_call(kvm_x86_cache_reg).

Correct, but that's no different from what I'm saying. My description
just treats 'static_call(kvm_x86_cache_reg)' as mentally equivalent to a
VMREAD, which abstracts the static_call quite well, IMHO. But maybe I should
clarify that 'tested bits' means the bits used by the changed call site?
Though, I think that's rather obvious from the change itself. I can
factor in the caching aspect, though.

Maybe something like this?:

    This not only makes the intent cleaner, it also avoids a potential
    VMREAD in case the tested bits aren't guest owned.

I've added "potential" but left the remainder as is.

>>
>> Signed-off-by: Mathias Krause <minipli@grsecurity.net>
>> ---
>>  arch/x86/kvm/pmu.c     | 4 ++--
>>  arch/x86/kvm/vmx/vmx.c | 4 ++--
>>  2 files changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
>> index d939d3b84e6f..d9922277df67 100644
>> --- a/arch/x86/kvm/pmu.c
>> +++ b/arch/x86/kvm/pmu.c
>> @@ -439,9 +439,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
>>  	if (!pmc)
>>  		return 1;
>>  
>> -	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
>> +	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&

X86_CR4_PCE & KVM_POSSIBLE_CR4_GUEST_BITS == X86_CR4_PCE, therefore this can
only be optimized if X86_CR4_PCE were dropped from
'vcpu->arch.cr4_guest_owned_bits' as well. But AFAICS we don't do that.
So here you're right that this only clears up the intent, not the actual
behavior at runtime.

>>  	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
>> -	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
>> +	    (kvm_read_cr0_bits(vcpu, X86_CR0_PE)))

X86_CR0_PE & KVM_POSSIBLE_CR0_GUEST_BITS == 0, therefore this can be
optimized.

>>  		return 1;
>>  
>>  	*data = pmc_read_counter(pmc) & mask;
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index c8198c8a9b55..d3b49e0b6c32 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -5487,7 +5487,7 @@ static int handle_cr(struct kvm_vcpu *vcpu)
>>  		break;
>>  	case 3: /* lmsw */
>>  		val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
>> -		trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val);
>> +		trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val));

~0xful & KVM_POSSIBLE_CR0_GUEST_BITS is 0 prior to patch 6 and X86_CR0_WP
afterwards. So this gets optimized prior to patch 6; afterwards it depends on
the runtime setting of 'enable_lazy_cr0', which may cap the guest-owned CR0
bits to exclude X86_CR0_WP again.

>>  		kvm_lmsw(vcpu, val);
>>  
>>  		return kvm_skip_emulated_instruction(vcpu);
>> @@ -7547,7 +7547,7 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
>>  	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
>>  		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
>>  
>> -	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
>> +	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {

X86_CR0_CD & KVM_POSSIBLE_CR0_GUEST_BITS == 0, therefore this can be
optimized as well.

>>  		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
>>  			cache = MTRR_TYPE_WRBACK;
>>  		else
> 

Thanks,
Mathias
Zhi Wang Feb. 14, 2023, 11:08 a.m. UTC | #3
On Wed, 8 Feb 2023 10:11:30 +0100
Mathias Krause <minipli@grsecurity.net> wrote:

> On 07.02.23 14:05, Zhi Wang wrote:
> > On Wed,  1 Feb 2023 20:46:02 +0100
> > Mathias Krause <minipli@grsecurity.net> wrote:
> >   
> >> Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
> >> want to know the state of certain bits instead of the whole register.
> >>
> >> This not only makes the intend cleaner, it also avoids a VMREAD in case  
>                            ~~~~~~
> Oh, this should have been "intent". Will fix in v4, if there's a need for it.
> 
> >> the tested bits aren't guest owned.  
> >                     ^
> > The patch comment is a little confusing. Not sure if I misunderstood here:  
> 
> Sorry, let's try to clarify.
> 
> > Check the code of kvm_read_cr0_bits
> > 
> > static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
> > {
> >         ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
> >         if ((tmask & vcpu->arch.cr0_guest_owned_bits) &&
> >             !kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
> >                 static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR0);
> >         return vcpu->arch.cr0 & mask;
> > }
> > 
> > I suppose the condition that can avoid a VMREAD is to avoid the vmread in
> > static_call(kvm_x86_cache_reg):
> 
> Correct, that's what this patch is trying to do: It tries to avoid the
> static_call(kvm_x86_cache_reg)(...) by making the compiler aware of the
> actually used bits in 'mask'. If those don't intersect with the guest
> owned bits, the first part of the condition won't be true and we can
> simply make use of 'vcpu->arch.cr0'.
> 
> Maybe it gets clearer when looking at kvm_read_cr0() too which is just this:
> 
> static inline ulong kvm_read_cr0(struct kvm_vcpu *vcpu)
> {
>     return kvm_read_cr0_bits(vcpu, ~0UL);
> }
> 
> So the 'mask' passed to kvm_read_cr0_bits() will always include all
> (possible) guest owned bits (KVM_POSSIBLE_CR0_GUEST_BITS & ~0UL ==
> KVM_POSSIBLE_CR0_GUEST_BITS) and the compiler cannot do the optimization
> mentioned above.
> 
> If we, however, use kvm_read_cr0_bits(..., MASK) directly instead of
> using kvm_read_cr0() & MASK, it can, e.g. for all bits not in
> KVM_POSSIBLE_CR0_GUEST_BITS & vcpu->arch.cr0_guest_owned_bits.
> 
> > Conditions that do not trigger a vmread:
> > 
> > 1) The tested bits are guest-owned bits and the cached register is available.
> > 2) The tested bits are *not* guest-owned bits.
> 
> For case 1 the patch would make only a minor difference, by concluding
> earlier that it can simply make use of vcpu->arch.cr0. But it's case 2
> I'm after.
>

Thanks for the explanation. Now I got it. 
 
> If you look up KVM_POSSIBLE_CR0_GUEST_BITS, which is the upper bound for
> guest owned CR0 bits, you'll find before patch 6:
> 
> #define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS
> 
> and after patch 6:
> 
> #define KVM_LAZY_CR0_GUEST_BITS     X86_CR0_WP
> #define KVM_POSSIBLE_CR0_GUEST_BITS (X86_CR0_TS|KVM_LAZY_CR0_GUEST_BITS)
> 
> So the upper bound would be 'X86_CR0_TS|X86_CR0_WP'. Every bit outside
> that set can directly be read from the 'vcpu' cached register value and
> that's (mostly) the case for the users this patch is changing, see below.
> 
> > I agree that this makes the intent cleaner, but I'm not sure the latter
> > statement in the patch comment is true. If the tested bits are not guest
> > owned, it will not reach static_call(kvm_x86_cache_reg).
> 
> Correct, but that's no different from what I'm saying. My description
> just treats 'static_call(kvm_x86_cache_reg)' as mentally equivalent to a
> VMREAD, which abstracts the static_call quite well, IMHO. But maybe I should
> clarify that 'tested bits' means the bits used by the changed call site?
> Though, I think that's rather obvious from the change itself. I can
> factor in the caching aspect, though.
> 
> Maybe something like this?:
> 
>     This not only makes the intent cleaner, it also avoids a potential
>     VMREAD in case the tested bits aren't guest owned.
> 
> I've added "potential" but left the remainder as is.
> 
> >>
> >> Signed-off-by: Mathias Krause <minipli@grsecurity.net>
> >> ---
> >>  arch/x86/kvm/pmu.c     | 4 ++--
> >>  arch/x86/kvm/vmx/vmx.c | 4 ++--
> >>  2 files changed, 4 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> >> index d939d3b84e6f..d9922277df67 100644
> >> --- a/arch/x86/kvm/pmu.c
> >> +++ b/arch/x86/kvm/pmu.c
> >> @@ -439,9 +439,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
> >>  	if (!pmc)
> >>  		return 1;
> >>  
> >> -	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
> >> +	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&  
> 
> X86_CR4_PCE & KVM_POSSIBLE_CR4_GUEST_BITS == X86_CR4_PCE, therefore this can
> only be optimized if X86_CR4_PCE were dropped from
> 'vcpu->arch.cr4_guest_owned_bits' as well. But AFAICS we don't do that.
> So here you're right that this only clears up the intent, not the actual
> behavior at runtime.
> 
> >>  	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
> >> -	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
> >> +	    (kvm_read_cr0_bits(vcpu, X86_CR0_PE)))  
> 
> X86_CR0_PE & KVM_POSSIBLE_CR0_GUEST_BITS == 0, therefore this can be
> optimized.
> 
> >>  		return 1;
> >>  
> >>  	*data = pmc_read_counter(pmc) & mask;
> >> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> >> index c8198c8a9b55..d3b49e0b6c32 100644
> >> --- a/arch/x86/kvm/vmx/vmx.c
> >> +++ b/arch/x86/kvm/vmx/vmx.c
> >> @@ -5487,7 +5487,7 @@ static int handle_cr(struct kvm_vcpu *vcpu)
> >>  		break;
> >>  	case 3: /* lmsw */
> >>  		val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
> >> -		trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val);
> >> +		trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val));  
> 
> ~0xful & KVM_POSSIBLE_CR0_GUEST_BITS is 0 prior to patch 6 and X86_CR0_WP
> afterwards. So this gets optimized prior to patch 6; afterwards it depends on
> the runtime setting of 'enable_lazy_cr0', which may cap the guest-owned CR0
> bits to exclude X86_CR0_WP again.
> 
> >>  		kvm_lmsw(vcpu, val);
> >>  
> >>  		return kvm_skip_emulated_instruction(vcpu);
> >> @@ -7547,7 +7547,7 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> >>  	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
> >>  		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
> >>  
> >> -	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
> >> +	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {  
> 
> X86_CR0_CD & KVM_POSSIBLE_CR0_GUEST_BITS == 0, therefore this can be
> optimized as well.
> 
> >>  		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
> >>  			cache = MTRR_TYPE_WRBACK;
> >>  		else  
> >   
> 
> Thanks,
> Mathias
Sean Christopherson March 15, 2023, 10:18 p.m. UTC | #4
On Wed, Feb 01, 2023, Mathias Krause wrote:
> Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
> want to know the state of certain bits instead of the whole register.
> 
> This not only makes the intend cleaner, it also avoids a VMREAD in case
> the tested bits aren't guest owned.
> 
> Signed-off-by: Mathias Krause <minipli@grsecurity.net>
> ---
>  arch/x86/kvm/pmu.c     | 4 ++--
>  arch/x86/kvm/vmx/vmx.c | 4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index d939d3b84e6f..d9922277df67 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -439,9 +439,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
>  	if (!pmc)
>  		return 1;
>  
> -	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
> +	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&

Purely as an FYI, I proposed adding helpers to query single CR0/CR4 bits in a
separate thread[*].  No need to do anything on your end, I'll plan on applying
this patch first and will handle whatever conflicts arise.

[*] https://lore.kernel.org/all/ZAuRec2NkC3+4jvD@google.com
Mathias Krause March 20, 2023, 9:34 p.m. UTC | #5
On 15.03.23 23:18, Sean Christopherson wrote:
> On Wed, Feb 01, 2023, Mathias Krause wrote:
>> Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
>> want to know the state of certain bits instead of the whole register.
>>
>> This not only makes the intend cleaner, it also avoids a VMREAD in case
>> the tested bits aren't guest owned.
>>
>> Signed-off-by: Mathias Krause <minipli@grsecurity.net>
>> ---
>>  arch/x86/kvm/pmu.c     | 4 ++--
>>  arch/x86/kvm/vmx/vmx.c | 4 ++--
>>  2 files changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
>> index d939d3b84e6f..d9922277df67 100644
>> --- a/arch/x86/kvm/pmu.c
>> +++ b/arch/x86/kvm/pmu.c
>> @@ -439,9 +439,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
>>  	if (!pmc)
>>  		return 1;
>>  
>> -	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
>> +	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&
> 
> Purely as an FYI, I proposed adding helpers to query single CR0/CR4 bits in a
> separate thread[*].  No need to do anything on your end, I'll plan on applying
> this patch first and will handle whatever conflicts arise.
> 
> [*] https://lore.kernel.org/all/ZAuRec2NkC3+4jvD@google.com

Unfortunately, not all users of kvm_read_cr*_bits() only want to read a
single bit. There are a very few that read bit masks -- but you're
probably fully aware of this.

Thanks,
Mathias
Sean Christopherson March 21, 2023, 3:57 p.m. UTC | #6
On Mon, Mar 20, 2023, Mathias Krause wrote:
> On 15.03.23 23:18, Sean Christopherson wrote:
> > On Wed, Feb 01, 2023, Mathias Krause wrote:
> >> Make use of the kvm_read_cr{0,4}_bits() helper functions when we only
> >> want to know the state of certain bits instead of the whole register.
> >>
> >> This not only makes the intend cleaner, it also avoids a VMREAD in case
> >> the tested bits aren't guest owned.
> >>
> >> Signed-off-by: Mathias Krause <minipli@grsecurity.net>
> >> ---
> >>  arch/x86/kvm/pmu.c     | 4 ++--
> >>  arch/x86/kvm/vmx/vmx.c | 4 ++--
> >>  2 files changed, 4 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> >> index d939d3b84e6f..d9922277df67 100644
> >> --- a/arch/x86/kvm/pmu.c
> >> +++ b/arch/x86/kvm/pmu.c
> >> @@ -439,9 +439,9 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
> >>  	if (!pmc)
> >>  		return 1;
> >>  
> >> -	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
> >> +	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&
> > 
> > Purely as an FYI, I proposed adding helpers to query single CR0/CR4 bits in a
> > separate thread[*].  No need to do anything on your end, I'll plan on applying
> > this patch first and will handle whatever conflicts arise.
> > 
> > [*] https://lore.kernel.org/all/ZAuRec2NkC3+4jvD@google.com
> 
> Unfortunately, not all users of kvm_read_cr*_bits() only want to read a
> single bit. There are a very few that read bit masks -- but you're
> probably fully aware of this.

Yeah, we won't be able to get rid of kvm_read_cr*_bits() entirely; the goal is
purely to make code that queries a single bit slightly more readable/obvious.
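
For completeness, here is a minimal sketch of what such single-bit wrappers
could look like, layered on top of kvm_read_cr{0,4}_bits(). The names and the
power-of-two guard are assumptions based on the proposal in the linked thread,
not a quote of what was merged.

/* Sketch only -- assumed names and shape, built on the existing helpers. */
static __always_inline bool kvm_is_cr0_bit_set(struct kvm_vcpu *vcpu,
					       unsigned long cr0_bit)
{
	BUILD_BUG_ON(!is_power_of_2(cr0_bit));	/* reject multi-bit masks */

	return !!kvm_read_cr0_bits(vcpu, cr0_bit);
}

static __always_inline bool kvm_is_cr4_bit_set(struct kvm_vcpu *vcpu,
					       unsigned long cr4_bit)
{
	BUILD_BUG_ON(!is_power_of_2(cr4_bit));	/* reject multi-bit masks */

	return !!kvm_read_cr4_bits(vcpu, cr4_bit);
}

With something along those lines, the pmu.c check above could read
'if (!kvm_is_cr4_bit_set(vcpu, X86_CR4_PCE) && ... && kvm_is_cr0_bit_set(vcpu, X86_CR0_PE))',
while multi-bit users such as the lmsw tracepoint keep using
kvm_read_cr0_bits() directly.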

Patch

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index d939d3b84e6f..d9922277df67 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -439,9 +439,9 @@  int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (!pmc)
 		return 1;
 
-	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
+	if (!(kvm_read_cr4_bits(vcpu, X86_CR4_PCE)) &&
 	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
-	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
+	    (kvm_read_cr0_bits(vcpu, X86_CR0_PE)))
 		return 1;
 
 	*data = pmc_read_counter(pmc) & mask;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c8198c8a9b55..d3b49e0b6c32 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5487,7 +5487,7 @@  static int handle_cr(struct kvm_vcpu *vcpu)
 		break;
 	case 3: /* lmsw */
 		val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
-		trace_kvm_cr_write(0, (kvm_read_cr0(vcpu) & ~0xful) | val);
+		trace_kvm_cr_write(0, (kvm_read_cr0_bits(vcpu, ~0xful) | val));
 		kvm_lmsw(vcpu, val);
 
 		return kvm_skip_emulated_instruction(vcpu);
@@ -7547,7 +7547,7 @@  static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
 		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
 
-	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
+	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
 		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
 			cache = MTRR_TYPE_WRBACK;
 		else