
arm/arm64: KVM: Enforce Break-Before-Make on Stage-2 page tables

Message ID: 1461856591-5751-1-git-send-email-marc.zyngier@arm.com
State: New, archived

Commit Message

Marc Zyngier April 28, 2016, 3:16 p.m. UTC
The ARM architecture mandates that when changing a page table entry
from a valid entry to another valid entry, an invalid entry must first
be written, the TLB invalidated, and only then the new entry written.

The current code doesn't respect this: it writes the new entry
directly and only then invalidates the TLB. Let's fix it up.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/kvm/mmu.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)
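
For reference, the Break-Before-Make sequence the patch enforces has
the following shape. This is a minimal, self-contained C sketch of the
technique, not the kernel code itself: pte_t, ENTRY_VALID,
invalidate_tlb_entry() and bbm_write_entry() are illustrative
placeholders standing in for the real stage-2 helpers.

	/*
	 * Minimal sketch of Break-Before-Make (BBM) for a translation
	 * table entry. All names here are illustrative placeholders,
	 * not kernel APIs.
	 */
	typedef unsigned long pte_t;

	#define ENTRY_VALID	0x1UL	/* bit 0: entry is valid */
	#define ENTRY_INVALID	0x0UL

	static void invalidate_tlb_entry(volatile pte_t *slot)
	{
		/*
		 * A real implementation would issue a TLB invalidate
		 * for the address this entry translates; stubbed out
		 * here so the sketch stands alone.
		 */
		(void)slot;
	}

	static void bbm_write_entry(volatile pte_t *slot, pte_t new_entry)
	{
		if (*slot & ENTRY_VALID) {
			/* 1. Break: install an invalid entry first... */
			*slot = ENTRY_INVALID;
			/*
			 * 2. ...and invalidate any TLB entry that may
			 * still cache the old translation.
			 */
			invalidate_tlb_entry(slot);
		}
		/* 3. Make: only now write the new, valid entry. */
		*slot = new_entry;
	}

Skipping the break step lets the TLB transiently hold translations for
both the old and the new entry at the same address, which the
architecture does not guarantee to handle gracefully (it can result in
TLB conflict aborts). That ordering is exactly what the diff below
restores.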

Comments

Mark Rutland April 28, 2016, 4:07 p.m. UTC | #1
On Thu, Apr 28, 2016 at 04:16:31PM +0100, Marc Zyngier wrote:
> The ARM architecture mandates that when changing a page table entry
> from a valid entry to another valid entry, an invalid entry must first
> be written, the TLB invalidated, and only then the new entry written.
> 
> The current code doesn't respect this: it writes the new entry
> directly and only then invalidates the TLB. Let's fix it up.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

FWIW, this looks correct to me.

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm/kvm/mmu.c | 17 +++++++++++------
>  1 file changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 58dbd5c..edf1cd1 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -893,11 +893,14 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
>  	VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
>  
>  	old_pmd = *pmd;
> -	kvm_set_pmd(pmd, *new_pmd);
> -	if (pmd_present(old_pmd))
> +	if (pmd_present(old_pmd)) {
> +		pmd_clear(pmd);
>  		kvm_tlb_flush_vmid_ipa(kvm, addr);
> -	else
> +	} else {
>  		get_page(virt_to_page(pmd));
> +	}
> +
> +	kvm_set_pmd(pmd, *new_pmd);
>  	return 0;
>  }
>  
> @@ -946,12 +949,14 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
>  
>  	/* Create 2nd stage page table mapping - Level 3 */
>  	old_pte = *pte;
> -	kvm_set_pte(pte, *new_pte);
> -	if (pte_present(old_pte))
> +	if (pte_present(old_pte)) {
> +		kvm_set_pte(pte, __pte(0));
>  		kvm_tlb_flush_vmid_ipa(kvm, addr);
> -	else
> +	} else {
>  		get_page(virt_to_page(pte));
> +	}
>  
> +	kvm_set_pte(pte, *new_pte);
>  	return 0;
>  }
>  
> -- 
> 2.1.4
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
>
Christoffer Dall April 29, 2016, 11:31 a.m. UTC | #2
On Thu, Apr 28, 2016 at 04:16:31PM +0100, Marc Zyngier wrote:
> The ARM architecture mandates that when changing a page table entry
> from a valid entry to another valid entry, an invalid entry must first
> be written, the TLB invalidated, and only then the new entry written.
> 
> The current code doesn't respect this: it writes the new entry
> directly and only then invalidates the TLB. Let's fix it up.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Thanks for fixing this, I've applied it to next.

-Christoffer

Patch

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 58dbd5c..edf1cd1 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -893,11 +893,14 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
 
 	old_pmd = *pmd;
-	kvm_set_pmd(pmd, *new_pmd);
-	if (pmd_present(old_pmd))
+	if (pmd_present(old_pmd)) {
+		pmd_clear(pmd);
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
-	else
+	} else {
 		get_page(virt_to_page(pmd));
+	}
+
+	kvm_set_pmd(pmd, *new_pmd);
 	return 0;
 }
 
@@ -946,12 +949,14 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 
 	/* Create 2nd stage page table mapping - Level 3 */
 	old_pte = *pte;
-	kvm_set_pte(pte, *new_pte);
-	if (pte_present(old_pte))
+	if (pte_present(old_pte)) {
+		kvm_set_pte(pte, __pte(0));
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
-	else
+	} else {
 		get_page(virt_to_page(pte));
+	}
 
+	kvm_set_pte(pte, *new_pte);
 	return 0;
 }