[RFC,4/7] KVM: MMU: Refactor pkr_mask to cache condition

Message ID 20200807084841.7112-5-chenyi.qiang@intel.com (mailing list archive)
State New, archived
Series KVM: PKS Virtualization support

Commit Message

Chenyi Qiang Aug. 7, 2020, 8:48 a.m. UTC
The pkr_mask bitmap currently indicates whether protection key checks are
needed for user pages. It is indexed by page fault error code bits [4:1],
with PFEC.RSVD replaced by ACC_USER_MASK from the page tables. Refactor it
to revert to using PFEC.RSVD, so that PKS and PKU can share the same
bitmap.

Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
---
 arch/x86/kvm/mmu.h     | 10 ++++++----
 arch/x86/kvm/mmu/mmu.c | 16 ++++++++++------
 2 files changed, 16 insertions(+), 10 deletions(-)
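
For reference, a minimal userspace sketch (not kernel code) of the indexing
change described above: the PFEC_* constants mirror the x86 error code bit
layout, pte_user stands in for ACC_USER_MASK from the walked PTE, and the
function names are invented for illustration only.

#include <stdbool.h>
#include <stdio.h>

/* x86 page-fault error code bits, same values as KVM's PFERR_* masks. */
#define PFEC_PRESENT (1u << 0)
#define PFEC_WRITE   (1u << 1)
#define PFEC_USER    (1u << 2)
#define PFEC_RSVD    (1u << 3)
#define PFEC_FETCH   (1u << 4)

/*
 * pkr_mask holds one 2-bit entry per PFEC[4:1] value, so the shift into
 * it is just the error code with the present bit cleared.
 */

/* Old indexing: the RSVD slot carries the PTE's user bit instead
 * (the real code relies on PFEC.RSVD not being set on this path). */
static unsigned old_offset(unsigned pfec, bool pte_user)
{
	return (pfec & ~PFEC_PRESENT) + (pte_user ? PFEC_RSVD : 0);
}

/* Indexing after this patch: PFEC.RSVD is kept as-is. */
static unsigned new_offset(unsigned pfec)
{
	return pfec & ~PFEC_PRESENT;
}

int main(void)
{
	unsigned pfec = PFEC_PRESENT | PFEC_WRITE | PFEC_USER;

	printf("old offset %u, new offset %u\n",
	       old_offset(pfec, true), new_offset(pfec));
	return 0;
}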

Comments

Paolo Bonzini Jan. 26, 2021, 6:16 p.m. UTC | #1
On 07/08/20 10:48, Chenyi Qiang wrote:
> 
>  		* index of the protection domain, so pte_pkey * 2 is
>  		* the index of the first bit for the domain.
>  		*/
> -		pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
> +		if (pte_access & PT_USER_MASK)
> +			pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
> +		else
> +			pkr_bits = 0;
>  
> -		/* clear present bit, replace PFEC.RSVD with ACC_USER_MASK. */
> -		offset = (pfec & ~1) +
> -			((pte_access & PT_USER_MASK) << (PFERR_RSVD_BIT - PT_USER_SHIFT));
> +		/* clear present bit */
> +		offset = (pfec & ~1);
>  
>  		pkr_bits &= mmu->pkr_mask >> offset;
>  		errcode |= -pkr_bits & PFERR_PK_MASK;

I think this is incorrect.  mmu->pkr_mask must cover both clear and set 
ACC_USER_MASK, in order to cover all combinations of CR4.PKE and CR4.PKS.
Right now, check_pkey is !ff && pte_user, but you need to make it 
something like

	check_pkey = !ff && (pte_user ? cr4_pke : cr4_pks);

Paolo
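
For illustration, a minimal userspace sketch of the mask-filling loop with
that condition folded in; cr4_pke/cr4_pks arrive as plain booleans here
rather than being read from the vCPU, and the index keeps the
ACC_USER_MASK substitution, so this is a model of the suggestion, not the
eventual patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PFEC_PRESENT (1u << 0)
#define PFEC_WRITE   (1u << 1)
#define PFEC_USER    (1u << 2)
#define PFEC_RSVD    (1u << 3)   /* slot reused for the PTE's user bit */
#define PFEC_FETCH   (1u << 4)

#define PKR_AD 0x1   /* access-disable bit of a key's PKR pair */
#define PKR_WD 0x2   /* write-disable bit of a key's PKR pair */

/*
 * Model of the mask-filling loop with the suggested condition: PKU covers
 * user PTEs when CR4.PKE is set, PKS covers supervisor PTEs when CR4.PKS
 * is set.
 */
static uint32_t build_pkr_mask(bool cr4_pke, bool cr4_pks, bool cr0_wp)
{
	uint32_t mask = 0;
	unsigned bit;

	for (bit = 0; bit < 16; ++bit) {
		unsigned pfec = bit << 1, pkey_bits = 0;
		bool ff = pfec & PFEC_FETCH;
		bool uf = pfec & PFEC_USER;
		bool wf = pfec & PFEC_WRITE;
		bool pte_user = pfec & PFEC_RSVD;
		bool check_pkey, check_write;

		/* The suggested condition: pick PKE or PKS by PTE type. */
		check_pkey = !ff && (pte_user ? cr4_pke : cr4_pks);
		/* Writes are PK-restricted for user accesses or when CR0.WP=1. */
		check_write = check_pkey && wf && (uf || cr0_wp);

		if (check_pkey)
			pkey_bits |= PKR_AD;
		if (check_write)
			pkey_bits |= PKR_WD;

		mask |= pkey_bits << pfec;
	}
	return mask;
}

int main(void)
{
	printf("PKE only: %08x\n", (unsigned)build_pkr_mask(true, false, true));
	printf("PKE+PKS : %08x\n", (unsigned)build_pkr_mask(true, true, true));
	return 0;
}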
Chenyi Qiang Jan. 27, 2021, 3:14 a.m. UTC | #2
On 1/27/2021 2:16 AM, Paolo Bonzini wrote:
> On 07/08/20 10:48, Chenyi Qiang wrote:
>>
>>          * index of the protection domain, so pte_pkey * 2 is
>>          * the index of the first bit for the domain.
>>          */
>> -        pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
>> +        if (pte_access & PT_USER_MASK)
>> +            pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
>> +        else
>> +            pkr_bits = 0;
>>
>> -        /* clear present bit, replace PFEC.RSVD with ACC_USER_MASK. */
>> -        offset = (pfec & ~1) +
>> -            ((pte_access & PT_USER_MASK) << (PFERR_RSVD_BIT - 
>> PT_USER_SHIFT));
>> +        /* clear present bit */
>> +        offset = (pfec & ~1);
>>
>>          pkr_bits &= mmu->pkr_mask >> offset;
>>          errcode |= -pkr_bits & PFERR_PK_MASK;
> 
> I think this is incorrect.  mmu->pkr_mask must cover both clear and set 
> ACC_USER_MASK, in order to cover all combinations of CR4.PKE and CR4.PKS.
> Right now, check_pkey is !ff && pte_user, but you need to make it 
> something like
> 
>      check_pkey = !ff && (pte_user ? cr4_pke : cr4_pks);
> 
> Paolo

Oh, I didn't distinguish the cr4_pke/cr4_pks check. Will fix this issue.

>

Patch

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 0c2fdf0abf22..7fb4c63d5704 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -202,11 +202,13 @@  static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		* index of the protection domain, so pte_pkey * 2 is
 		* the index of the first bit for the domain.
 		*/
-		pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
+		if (pte_access & PT_USER_MASK)
+			pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
+		else
+			pkr_bits = 0;
 
-		/* clear present bit, replace PFEC.RSVD with ACC_USER_MASK. */
-		offset = (pfec & ~1) +
-			((pte_access & PT_USER_MASK) << (PFERR_RSVD_BIT - PT_USER_SHIFT));
+		/* clear present bit */
+		offset = (pfec & ~1);
 
 		pkr_bits &= mmu->pkr_mask >> offset;
 		errcode |= -pkr_bits & PFERR_PK_MASK;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 481442f5e27a..333b4da739f8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4737,21 +4737,25 @@  static void update_pkr_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 
 	for (bit = 0; bit < ARRAY_SIZE(mmu->permissions); ++bit) {
 		unsigned pfec, pkey_bits;
-		bool check_pkey, check_write, ff, uf, wf, pte_user;
+		bool check_pkey, check_write, ff, uf, wf, rsvdf;
 
 		pfec = bit << 1;
 		ff = pfec & PFERR_FETCH_MASK;
 		uf = pfec & PFERR_USER_MASK;
 		wf = pfec & PFERR_WRITE_MASK;
 
-		/* PFEC.RSVD is replaced by ACC_USER_MASK. */
-		pte_user = pfec & PFERR_RSVD_MASK;
+		/*
+		 * PFERR_RSVD_MASK bit is not set if the
+		 * access is subject to PK restrictions.
+		 */
+		rsvdf = pfec & PFERR_RSVD_MASK;
 
 		/*
-		 * Only need to check the access which is not an
-		 * instruction fetch and is to a user page.
+		 * Only need to check the access which is not an
+		 * instruction fetch and is not a reserved-bit fault.
 		 */
-		check_pkey = (!ff && pte_user);
+		check_pkey = (!ff && !rsvdf);
+
 		/*
 		 * write access is controlled by PKRU if it is a
 		 * user access or CR0.WP = 1.
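
Taken on its own, the mmu.h hunk earlier in the patch reduces to roughly
the following standalone model of the fault-time check; pkru is passed as
a plain value rather than read from vcpu->arch.pkru, the helper name
pk_errcode() is invented for this sketch, and the supervisor branch yields
0 just as in the posted hunk (presumably a later patch in the series wires
PKRS into it).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PFEC_PRESENT (1u << 0)
#define PFEC_USER    (1u << 2)
#define PFEC_PK      (1u << 5)   /* protection-key violation bit */

/*
 * Model of the PK portion of permission_fault() as posted: pull the two
 * bits (AD, WD) of the faulting key out of PKRU for user pages, then
 * filter them through pkr_mask indexed by PFEC[4:1].  Supervisor pages
 * yield 0 here, exactly as in the posted hunk.
 */
static unsigned pk_errcode(uint32_t pkru, uint32_t pkr_mask,
			   unsigned pfec, unsigned pte_pkey, bool pte_user)
{
	unsigned pkr_bits, offset;

	/* Each key owns two adjacent bits in PKRU, starting at key * 2. */
	pkr_bits = pte_user ? (pkru >> (pte_pkey * 2)) & 3 : 0;

	/* Clear the present bit; the remaining PFEC bits index pkr_mask. */
	offset = pfec & ~PFEC_PRESENT;

	pkr_bits &= pkr_mask >> offset;
	return pkr_bits ? PFEC_PK : 0;
}

int main(void)
{
	/* Hypothetical values: key 1 has AD set in PKRU, and the mask entry
	 * for a present user read fault (offset 4) checks the AD bit. */
	uint32_t pkru = 0x1u << 2, pkr_mask = 0x1u << 4;
	unsigned pfec = PFEC_PRESENT | PFEC_USER;

	printf("errcode bits: %#x\n", pk_errcode(pkru, pkr_mask, pfec, 1, true));
	return 0;
}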