[v2,1/4] KVM: x86/mmu: Fix write-protection of PTs mapped by the TDP MMU

Message ID 20220113233020.3986005-2-dmatlack@google.com (mailing list archive)
State New, archived
Series KVM: x86/mmu: Fix write-protection bug in the TDP MMU

Commit Message

David Matlack Jan. 13, 2022, 11:30 p.m. UTC
When the TDP MMU is write-protection GFNs for page table protection (as
opposed to for dirty logging, or due to the HVA not being writable), it
checks if the SPTE is already write-protected and if so skips modifying
the SPTE and the TLB flush.

This behavior is incorrect because the SPTE may be write-protected for
dirty logging. This implies that the SPTE could be locklessly be made
writable on the next write access, and that vCPUs could still be running
with writable SPTEs cached in their TLB.

Fix this by only skipping setting the SPTE if the SPTE is already
write-protected *and* MMU-writable is already clear.

Fixes: 46044f72c382 ("kvm: x86/mmu: Support write protection for nesting in tdp MMU")
Cc: stable@vger.kernel.org
Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)


base-commit: fea31d1690945e6dd6c3e89ec5591490857bc3d4
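
For context, a minimal standalone sketch of the SPTE states involved (the
bit positions and names below are placeholders chosen for illustration, not
the kernel's actual mask definitions):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder bit positions; illustrative only. */
#define SPTE_WRITABLE       (1ull << 1)  /* hardware Writable bit */
#define SPTE_MMU_WRITABLE   (1ull << 57) /* KVM may make the SPTE writable */
#define SPTE_HOST_WRITABLE  (1ull << 58) /* the backing HVA is writable */

int main(void)
{
        /*
         * A SPTE write-protected for dirty logging only: Writable is
         * clear, but MMU-writable (and Host-writable) are still set.
         */
        uint64_t spte = SPTE_HOST_WRITABLE | SPTE_MMU_WRITABLE;

        /* The old code skipped the SPTE as soon as Writable was clear... */
        bool skipped_by_old_check = !(spte & SPTE_WRITABLE);

        /*
         * ...but with MMU-writable still set, the SPTE can be locklessly
         * made writable again on the next write access, so the GFN is not
         * actually write-protected for page table protection.
         */
        bool lockless_writable = !!(spte & SPTE_MMU_WRITABLE);

        printf("old check skips: %d, still lockless-writable: %d\n",
               skipped_by_old_check, lockless_writable);
        return 0;
}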

Comments

Sean Christopherson Jan. 14, 2022, 11:38 p.m. UTC | #1
On Thu, Jan 13, 2022, David Matlack wrote:
> When the TDP MMU is write-protection GFNs for page table protection (as
                      ^^^^^^^^^^^^^^^^
                      write-protecting

> opposed to for dirty logging, or due to the HVA not being writable), it
> checks if the SPTE is already write-protected and if so skips modifying
> the SPTE and the TLB flush.
> 
> This behavior is incorrect because the SPTE may be write-protected for
> dirty logging. This implies that the SPTE could be locklessly be made

Spurious "be" between could and locklessly.

Hmm, it doesn't imply anything, the behavior of MMU-writable is quite explicit.
If the bug occurs, then _that_ implies the SPTE was write-protected for dirty
logging, otherwise MMU-Writable would have been '0' due to HOST-Writable also
being '0'.

I think what you're trying to say is:

  This behavior is incorrect because it fails to check if the SPTE is
  write-protected for page table protection, i.e. fails to check that
  MMU-writable is '0'.  If the SPTE was write-protected for dirty logging
  but not page table protection, the SPTE could locklessly be made
  writable, and vCPUs could still be running with writable mappings
  cached in their TLB.

> writable on the next write access, and that vCPUs could still be running
> with writable SPTEs cached in their TLB.

Nit, it's technically the mapping, not the SPTE itself, that's cached in the TLB.

> Fix this by only skipping setting the SPTE if the SPTE is already
> write-protected *and* MMU-writable is already clear.

Might also be worth adding:

  Note, technically checking only MMU-writable would suffice, as a SPTE
  cannot be writable without MMU-writable being set, but check both to
  be paranoid and because it arguably yields more readable code.
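
For illustration, a standalone sketch of that equivalence (the masks below
are placeholders standing in for PT_WRITABLE_MASK and
shadow_mmu_writable_mask, not the kernel's definitions):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define WRITABLE_MASK      (1ull << 1)   /* placeholder value */
#define MMU_WRITABLE_MASK  (1ull << 57)  /* placeholder value */

int main(void)
{
        /*
         * A SPTE cannot be Writable without MMU-writable being set, so
         * only these three combinations of the two bits are legal.
         */
        uint64_t legal[] = {
                0,
                MMU_WRITABLE_MASK,
                MMU_WRITABLE_MASK | WRITABLE_MASK,
        };

        for (int i = 0; i < 3; i++) {
                uint64_t old_spte = legal[i];
                uint64_t new_spte = old_spte &
                                    ~(WRITABLE_MASK | MMU_WRITABLE_MASK);

                /* The patch skips the update when clearing both bits is a
                 * no-op... */
                bool skip_checking_both = (new_spte == old_spte);

                /* ...which, given the invariant above, is equivalent to
                 * checking MMU-writable alone. */
                bool skip_checking_mmu_only = !(old_spte & MMU_WRITABLE_MASK);

                assert(skip_checking_both == skip_checking_mmu_only);
        }
        return 0;
}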

Pedantry aside,

Reviewed-by: Sean Christopherson <seanjc@google.com>

> Fixes: 46044f72c382 ("kvm: x86/mmu: Support write protection for nesting in tdp MMU")
> Cc: stable@vger.kernel.org
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 7b1bc816b7c3..bc9e3553fba2 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1442,12 +1442,12 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
>  		    !is_last_spte(iter.old_spte, iter.level))
>  			continue;
>  
> -		if (!is_writable_pte(iter.old_spte))
> -			break;
> -
>  		new_spte = iter.old_spte &
>  			~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
>  
> +		if (new_spte == iter.old_spte)
> +			break;
> +
>  		tdp_mmu_set_spte(kvm, &iter, new_spte);
>  		spte_set = true;
>  	}
> 
> base-commit: fea31d1690945e6dd6c3e89ec5591490857bc3d4
> -- 
> 2.34.1.703.g22d0c6ccf7-goog
>

Patch

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7b1bc816b7c3..bc9e3553fba2 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1442,12 +1442,12 @@  static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
-		if (!is_writable_pte(iter.old_spte))
-			break;
-
 		new_spte = iter.old_spte &
 			~(PT_WRITABLE_MASK | shadow_mmu_writable_mask);
 
+		if (new_spte == iter.old_spte)
+			break;
+
 		tdp_mmu_set_spte(kvm, &iter, new_spte);
 		spte_set = true;
 	}