
[RFC] KVM: arm64: Add prejudgement for relaxing permissions only case in stage2 translation fault handler

Message ID 20201211080115.21460-2-wangyanan55@huawei.com (mailing list archive)
State New, archived
Series Add prejudgement for relaxing permissions only case

Commit Message

Yanan Wang Dec. 11, 2020, 8:01 a.m. UTC
During dirty logging, after dirty logging has been stopped, or even
during normal operation of a guest configured with huge mappings and
many vCPUs, translation faults on the same GPA can be triggered by
different vCPUs almost simultaneously. There are two reasons for this.

(1) If several vCPUs access the same GPA at the same time and the leaf
PTE is not yet set, they will all take translation faults. The first
vCPU to hold mmu_lock will install the valid leaf PTE, and the others
will later decide whether or not to update it.

(2) When a leaf entry or a table entry is changed with
break-before-make, vCPUs that happen to access the same GPA during the
window in which the target PTE is invalid will all take translation
faults and will later decide whether or not to update the leaf PTE.

The worst case looks like this: several vCPUs take translation faults
on the same GPA with different prots and fight each other, repeatedly
flipping the access permissions of the PTE back and forth with
break-before-make. Each BBM-invalid window can then trigger yet more
unnecessary translation faults. As a result, useless small loops occur,
which can leave vCPUs stuck.

To avoid the unnecessary updates and these small loops, add a
prejudgement to the translation fault handler: skip updating the valid
leaf PTE if we are trying to recreate exactly the same mapping or only
to reduce the access permissions (such as RW-->RO), and update the
valid leaf PTE without break-before-make if we are only trying to add
permissions.
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
---
 arch/arm64/kvm/hyp/pgtable.c | 73 +++++++++++++++++++++++++-----------
 1 file changed, 52 insertions(+), 21 deletions(-)

Comments

Marc Zyngier Dec. 11, 2020, 9:49 a.m. UTC | #1
Hi Yanan,

On 2020-12-11 08:01, Yanan Wang wrote:
> In dirty-logging, or dirty-logging-stopped time, even normal running
> time of a guest configed with huge mappings and numbers of vCPUs,
> translation faults by different vCPUs on the same GPA could occur
> successively almost at the same time. There are two reasons for it.
> 
> (1) If there are some vCPUs accessing the same GPA at the same time
> and the leaf PTE is not set yet, then they will all cause translation
> faults and the first vCPU holding mmu_lock will set valid leaf PTE,
> and the others will later choose to update the leaf PTE or not.
> 
> (2) When changing a leaf entry or a table entry with break-before-make,
> if there are some vCPUs accessing the same GPA just catch the moment
> when the target PTE is set invalid in a BBM procedure coincidentally,
> they will all cause translation faults and will later choose to update
> the leaf PTE or not.
> 
> The worst case can be like this: some vCPUs cause translation faults
> on the same GPA with different prots, they will fight each other by
> changing back access permissions of the PTE with break-before-make.
> And the BBM-invalid moment might trigger more unnecessary translation
> faults. As a result, some useless small loops will occur, which could
> lead to vCPU stuck.
> 
> To avoid unnecessary update and small loops, add prejudgement in the
> translation fault handler: Skip updating the valid leaf PTE if we are
> trying to recreate exactly the same mapping or to reduce access
> permissions only(such as RW-->RO). And update the valid leaf PTE without
> break-before-make if we are trying to add more permissions only.

I'm a bit perplexed with this: why are you skipping the update if the
permissions need to be reduced? Even more, how can we reduce the
permissions from a vCPU fault? I can't really think of a scenario where
that happens.

Or are you describing a case where two vcpus fault simultaneously with
conflicting permissions:

- Both vcpus fault on translation fault
- vcpu A wants W access
- vcpu B wants R access

and 'A' gets in first, sets the permissions to RW (because R is
implicitly added to W), followed by 'B' which downgrades it to RO?

If that's what you are describing, then I agree we could do better.

> 
> Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 73 +++++++++++++++++++++++++-----------
>  1 file changed, 52 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 23a01dfcb27a..f8b3248cef1c 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -45,6 +45,8 @@
> 
>  #define KVM_PTE_LEAF_ATTR_HI_S2_XN	BIT(54)
> 
> +#define KVM_PTE_LEAF_ATTR_PERMS	(GENMASK(7, 6) | BIT(54))
> +
>  struct kvm_pgtable_walk_data {
>  	struct kvm_pgtable		*pgt;
>  	struct kvm_pgtable_walker	*walker;
> @@ -170,10 +172,9 @@ static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
>  	smp_store_release(ptep, pte);
>  }
> 
> -static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
> -				   u32 level)
> +static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
>  {
> -	kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(pa);
> +	kvm_pte_t pte = kvm_phys_to_pte(pa);
> 	u64 type = (level == KVM_PGTABLE_MAX_LEVELS - 1) ? KVM_PTE_TYPE_PAGE :
> 							   KVM_PTE_TYPE_BLOCK;
> 
> @@ -181,12 +182,7 @@ static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
>  	pte |= FIELD_PREP(KVM_PTE_TYPE, type);
>  	pte |= KVM_PTE_VALID;
> 
> -	/* Tolerate KVM recreating the exact same mapping. */
> -	if (kvm_pte_valid(old))
> -		return old == pte;
> -
> -	smp_store_release(ptep, pte);
> -	return true;
> +	return pte;
>  }
> 
>  static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
> @@ -341,12 +337,17 @@ static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
>  static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
>  				    kvm_pte_t *ptep, struct hyp_map_data *data)
>  {
> +	kvm_pte_t new, old = *ptep;
>  	u64 granule = kvm_granule_size(level), phys = data->phys;
> 
>  	if (!kvm_block_mapping_supported(addr, end, phys, level))
>  		return false;
> 
> -	WARN_ON(!kvm_set_valid_leaf_pte(ptep, phys, data->attr, level));
> +	/* Tolerate KVM recreating the exact same mapping. */
> +	new = kvm_init_valid_leaf_pte(phys, data->attr, level);
> +	if (old != new && !WARN_ON(kvm_pte_valid(old)))
> +		smp_store_release(ptep, new);
> +
>  	data->phys += granule;
>  	return true;
>  }
> @@ -461,25 +462,56 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
>  	return 0;
>  }
> 
> +static bool stage2_set_valid_leaf_pte_pre(u64 addr, u32 level,
> +					  kvm_pte_t *ptep, kvm_pte_t new,
> +					  struct stage2_map_data *data)
> +{
> +	kvm_pte_t old = *ptep, old_attr, new_attr;
> +
> +	if ((old ^ new) & (~KVM_PTE_LEAF_ATTR_PERMS))
> +		return false;
> +
> +	/*
> +	 * Skip updating if we are trying to recreate exactly the same mapping
> +	 * or to reduce the access permissions only. And update the valid leaf
> +	 * PTE without break-before-make if we are trying to add more access
> +	 * permissions only.
> +	 */
> +	old_attr = (old & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
> +	new_attr = (new & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
> +	if (new_attr <= old_attr)
> +		return true;
> +
> +	WRITE_ONCE(*ptep, new);
> +	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);

I think what bothers me the most here is that we are turning a mapping into
a permission update, which makes the code really hard to read, and mixes
two things that were so far separate.

I wonder whether we should instead abort the update and simply take the fault
again, if we ever need to do it.

Thanks,

         M.
Will Deacon Dec. 11, 2020, 9:53 a.m. UTC | #2
Hi Yanan,

On Fri, Dec 11, 2020 at 04:01:15PM +0800, Yanan Wang wrote:
> In dirty-logging, or dirty-logging-stopped time, even normal running
> time of a guest configed with huge mappings and numbers of vCPUs,
> translation faults by different vCPUs on the same GPA could occur
> successively almost at the same time. There are two reasons for it.
> 
> (1) If there are some vCPUs accessing the same GPA at the same time
> and the leaf PTE is not set yet, then they will all cause translation
> faults and the first vCPU holding mmu_lock will set valid leaf PTE,
> and the others will later choose to update the leaf PTE or not.
> 
> (2) When changing a leaf entry or a table entry with break-before-make,
> if there are some vCPUs accessing the same GPA just catch the moment
> when the target PTE is set invalid in a BBM procedure coincidentally,
> they will all cause translation faults and will later choose to update
> the leaf PTE or not.
> 
> The worst case can be like this: some vCPUs cause translation faults
> on the same GPA with different prots, they will fight each other by
> changing back access permissions of the PTE with break-before-make.
> And the BBM-invalid moment might trigger more unnecessary translation
> faults. As a result, some useless small loops will occur, which could
> lead to vCPU stuck.
> 
> To avoid unnecessary update and small loops, add prejudgement in the
> translation fault handler: Skip updating the valid leaf PTE if we are
> trying to recreate exactly the same mapping or to reduce access
> permissions only(such as RW-->RO). And update the valid leaf PTE without
> break-before-make if we are trying to add more permissions only.
> 
> Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 73 +++++++++++++++++++++++++-----------
>  1 file changed, 52 insertions(+), 21 deletions(-)

Cheers for this. Given that this patch is solving a few different problems,
do you think you could split it up please? That would certainly make it much
easier to review, as there's quite a lot going on here. A chunk of the
changes seem to be the diff I posted previously:

https://lore.kernel.org/r/20201201141632.GC26973@willie-the-truck

so maybe that could be its own patch?

> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 23a01dfcb27a..f8b3248cef1c 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -45,6 +45,8 @@
>  
>  #define KVM_PTE_LEAF_ATTR_HI_S2_XN	BIT(54)
>  
> +#define KVM_PTE_LEAF_ATTR_PERMS	(GENMASK(7, 6) | BIT(54))

You only use this on the S2 path, so how about:

#define KVM_PTE_LEAF_ATTR_S2_PERMS	KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R | \
					KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
					KVM_PTE_LEAF_ATTR_HI_S2_XN

or something like that?

>  struct kvm_pgtable_walk_data {
>  	struct kvm_pgtable		*pgt;
>  	struct kvm_pgtable_walker	*walker;
> @@ -170,10 +172,9 @@ static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
>  	smp_store_release(ptep, pte);
>  }
>  
> -static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
> -				   u32 level)
> +static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
>  {
> -	kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(pa);
> +	kvm_pte_t pte = kvm_phys_to_pte(pa);
>  	u64 type = (level == KVM_PGTABLE_MAX_LEVELS - 1) ? KVM_PTE_TYPE_PAGE :
>  							   KVM_PTE_TYPE_BLOCK;
>  
> @@ -181,12 +182,7 @@ static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
>  	pte |= FIELD_PREP(KVM_PTE_TYPE, type);
>  	pte |= KVM_PTE_VALID;
>  
> -	/* Tolerate KVM recreating the exact same mapping. */
> -	if (kvm_pte_valid(old))
> -		return old == pte;
> -
> -	smp_store_release(ptep, pte);
> -	return true;
> +	return pte;
>  }
>  
>  static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
> @@ -341,12 +337,17 @@ static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
>  static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
>  				    kvm_pte_t *ptep, struct hyp_map_data *data)
>  {
> +	kvm_pte_t new, old = *ptep;
>  	u64 granule = kvm_granule_size(level), phys = data->phys;
>  
>  	if (!kvm_block_mapping_supported(addr, end, phys, level))
>  		return false;
>  
> -	WARN_ON(!kvm_set_valid_leaf_pte(ptep, phys, data->attr, level));
> +	/* Tolerate KVM recreating the exact same mapping. */
> +	new = kvm_init_valid_leaf_pte(phys, data->attr, level);
> +	if (old != new && !WARN_ON(kvm_pte_valid(old)))
> +		smp_store_release(ptep, new);
> +
>  	data->phys += granule;
>  	return true;
>  }
> @@ -461,25 +462,56 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
>  	return 0;
>  }
>  
> +static bool stage2_set_valid_leaf_pte_pre(u64 addr, u32 level,
> +					  kvm_pte_t *ptep, kvm_pte_t new,
> +					  struct stage2_map_data *data)
> +{
> +	kvm_pte_t old = *ptep, old_attr, new_attr;
> +
> +	if ((old ^ new) & (~KVM_PTE_LEAF_ATTR_PERMS))
> +		return false;
> +
> +	/*
> +	 * Skip updating if we are trying to recreate exactly the same mapping
> +	 * or to reduce the access permissions only. And update the valid leaf
> +	 * PTE without break-before-make if we are trying to add more access
> +	 * permissions only.
> +	 */
> +	old_attr = (old & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
> +	new_attr = (new & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
> +	if (new_attr <= old_attr)
> +		return true;

I think this is a significant change in behaviour for
kvm_pgtable_stage2_map() and I worry that it could catch somebody out in the
future. Please can you update the kerneldoc in kvm_pgtable.h with a note
about this?

Will
Will Deacon Dec. 11, 2020, 10 a.m. UTC | #3
On Fri, Dec 11, 2020 at 09:49:28AM +0000, Marc Zyngier wrote:
> On 2020-12-11 08:01, Yanan Wang wrote:
> > @@ -461,25 +462,56 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
> >  	return 0;
> >  }
> > 
> > +static bool stage2_set_valid_leaf_pte_pre(u64 addr, u32 level,
> > +					  kvm_pte_t *ptep, kvm_pte_t new,
> > +					  struct stage2_map_data *data)
> > +{
> > +	kvm_pte_t old = *ptep, old_attr, new_attr;
> > +
> > +	if ((old ^ new) & (~KVM_PTE_LEAF_ATTR_PERMS))
> > +		return false;
> > +
> > +	/*
> > +	 * Skip updating if we are trying to recreate exactly the same mapping
> > +	 * or to reduce the access permissions only. And update the valid leaf
> > +	 * PTE without break-before-make if we are trying to add more access
> > +	 * permissions only.
> > +	 */
> > +	old_attr = (old & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
> > +	new_attr = (new & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
> > +	if (new_attr <= old_attr)
> > +		return true;
> > +
> > +	WRITE_ONCE(*ptep, new);
> > +	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
> 
> I think what bothers me the most here is that we are turning a mapping into
> a permission update, which makes the code really hard to read, and mixes
> two things that were so far separate.
> 
> I wonder whether we should instead abort the update and simply take the fault
> again, if we ever need to do it.

That's a nice idea. If we could enforce that we don't alter permissions on
the map path, and instead just return e.g. -EAGAIN then that would be a
very neat solution and would cement the permission vs translation fault
division.

Will
Yanan Wang Dec. 14, 2020, 7:20 a.m. UTC | #4
On 2020/12/11 17:53, Will Deacon wrote:
> Hi Yanan,
>
> On Fri, Dec 11, 2020 at 04:01:15PM +0800, Yanan Wang wrote:
>> In dirty-logging, or dirty-logging-stopped time, even normal running
>> time of a guest configed with huge mappings and numbers of vCPUs,
>> translation faults by different vCPUs on the same GPA could occur
>> successively almost at the same time. There are two reasons for it.
>>
>> (1) If there are some vCPUs accessing the same GPA at the same time
>> and the leaf PTE is not set yet, then they will all cause translation
>> faults and the first vCPU holding mmu_lock will set valid leaf PTE,
>> and the others will later choose to update the leaf PTE or not.
>>
>> (2) When changing a leaf entry or a table entry with break-before-make,
>> if there are some vCPUs accessing the same GPA just catch the moment
>> when the target PTE is set invalid in a BBM procedure coincidentally,
>> they will all cause translation faults and will later choose to update
>> the leaf PTE or not.
>>
>> The worst case can be like this: some vCPUs cause translation faults
>> on the same GPA with different prots, they will fight each other by
>> changing back access permissions of the PTE with break-before-make.
>> And the BBM-invalid moment might trigger more unnecessary translation
>> faults. As a result, some useless small loops will occur, which could
>> lead to vCPU stuck.
>>
>> To avoid unnecessary update and small loops, add prejudgement in the
>> translation fault handler: Skip updating the valid leaf PTE if we are
>> trying to recreate exactly the same mapping or to reduce access
>> permissions only(such as RW-->RO). And update the valid leaf PTE without
>> break-before-make if we are trying to add more permissions only.
>>
>> Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
>> ---
>>   arch/arm64/kvm/hyp/pgtable.c | 73 +++++++++++++++++++++++++-----------
>>   1 file changed, 52 insertions(+), 21 deletions(-)
> Cheers for this. Given that this patch is solving a few different problems,
> do you think you could split it up please? That would certainly make it much
> easier to review, as there's quite a lot going on here. A chunk of the
> changes seem to be the diff I posted previously:
>
> https://lore.kernel.org/r/20201201141632.GC26973@willie-the-truck
>
> so maybe that could be its own patch?
Yeah, I will split the diff into two patches in the next version, thanks.
>
>> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
>> index 23a01dfcb27a..f8b3248cef1c 100644
>> --- a/arch/arm64/kvm/hyp/pgtable.c
>> +++ b/arch/arm64/kvm/hyp/pgtable.c
>> @@ -45,6 +45,8 @@
>>   
>>   #define KVM_PTE_LEAF_ATTR_HI_S2_XN	BIT(54)
>>   
>> +#define KVM_PTE_LEAF_ATTR_PERMS	(GENMASK(7, 6) | BIT(54))
> You only use this on the S2 path, so how about:
>
> #define KVM_PTE_LEAF_ATTR_S2_PERMS	KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R | \
> 					KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
> 					KVM_PTE_LEAF_ATTR_HI_S2_XN
>
> or something like that?
Yes, it's more reasonable.
>>   struct kvm_pgtable_walk_data {
>>   	struct kvm_pgtable		*pgt;
>>   	struct kvm_pgtable_walker	*walker;
>> @@ -170,10 +172,9 @@ static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
>>   	smp_store_release(ptep, pte);
>>   }
>>   
>> -static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
>> -				   u32 level)
>> +static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
>>   {
>> -	kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(pa);
>> +	kvm_pte_t pte = kvm_phys_to_pte(pa);
>>   	u64 type = (level == KVM_PGTABLE_MAX_LEVELS - 1) ? KVM_PTE_TYPE_PAGE :
>>   							   KVM_PTE_TYPE_BLOCK;
>>   
>> @@ -181,12 +182,7 @@ static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
>>   	pte |= FIELD_PREP(KVM_PTE_TYPE, type);
>>   	pte |= KVM_PTE_VALID;
>>   
>> -	/* Tolerate KVM recreating the exact same mapping. */
>> -	if (kvm_pte_valid(old))
>> -		return old == pte;
>> -
>> -	smp_store_release(ptep, pte);
>> -	return true;
>> +	return pte;
>>   }
>>   
>>   static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
>> @@ -341,12 +337,17 @@ static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
>>   static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
>>   				    kvm_pte_t *ptep, struct hyp_map_data *data)
>>   {
>> +	kvm_pte_t new, old = *ptep;
>>   	u64 granule = kvm_granule_size(level), phys = data->phys;
>>   
>>   	if (!kvm_block_mapping_supported(addr, end, phys, level))
>>   		return false;
>>   
>> -	WARN_ON(!kvm_set_valid_leaf_pte(ptep, phys, data->attr, level));
>> +	/* Tolerate KVM recreating the exact same mapping. */
>> +	new = kvm_init_valid_leaf_pte(phys, data->attr, level);
>> +	if (old != new && !WARN_ON(kvm_pte_valid(old)))
>> +		smp_store_release(ptep, new);
>> +
>>   	data->phys += granule;
>>   	return true;
>>   }
>> @@ -461,25 +462,56 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
>>   	return 0;
>>   }
>>   
>> +static bool stage2_set_valid_leaf_pte_pre(u64 addr, u32 level,
>> +					  kvm_pte_t *ptep, kvm_pte_t new,
>> +					  struct stage2_map_data *data)
>> +{
>> +	kvm_pte_t old = *ptep, old_attr, new_attr;
>> +
>> +	if ((old ^ new) & (~KVM_PTE_LEAF_ATTR_PERMS))
>> +		return false;
>> +
>> +	/*
>> +	 * Skip updating if we are trying to recreate exactly the same mapping
>> +	 * or to reduce the access permissions only. And update the valid leaf
>> +	 * PTE without break-before-make if we are trying to add more access
>> +	 * permissions only.
>> +	 */
>> +	old_attr = (old & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
>> +	new_attr = (new & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
>> +	if (new_attr <= old_attr)
>> +		return true;
> I think this is a significant change in behaviour for
> kvm_pgtable_stage2_map() and I worry that it could catch somebody out in the
> future. Please can you update the kerneldoc in kvm_pgtable.h with a note
> about this?
>
> Will
> .
Yanan Wang Dec. 14, 2020, 7:20 a.m. UTC | #5
On 2020/12/11 17:49, Marc Zyngier wrote:
> Hi Yanan,
>
> On 2020-12-11 08:01, Yanan Wang wrote:
>> In dirty-logging, or dirty-logging-stopped time, even normal running
>> time of a guest configed with huge mappings and numbers of vCPUs,
>> translation faults by different vCPUs on the same GPA could occur
>> successively almost at the same time. There are two reasons for it.
>>
>> (1) If there are some vCPUs accessing the same GPA at the same time
>> and the leaf PTE is not set yet, then they will all cause translation
>> faults and the first vCPU holding mmu_lock will set valid leaf PTE,
>> and the others will later choose to update the leaf PTE or not.
>>
>> (2) When changing a leaf entry or a table entry with break-before-make,
>> if there are some vCPUs accessing the same GPA just catch the moment
>> when the target PTE is set invalid in a BBM procedure coincidentally,
>> they will all cause translation faults and will later choose to update
>> the leaf PTE or not.
>>
>> The worst case can be like this: some vCPUs cause translation faults
>> on the same GPA with different prots, they will fight each other by
>> changing back access permissions of the PTE with break-before-make.
>> And the BBM-invalid moment might trigger more unnecessary translation
>> faults. As a result, some useless small loops will occur, which could
>> lead to vCPU stuck.
>>
>> To avoid unnecessary update and small loops, add prejudgement in the
>> translation fault handler: Skip updating the valid leaf PTE if we are
>> trying to recreate exactly the same mapping or to reduce access
>> permissions only(such as RW-->RO). And update the valid leaf PTE without
>> break-before-make if we are trying to add more permissions only.
>
> I'm a bit perplexed with this: why are you skipping the update if the
> permissions need to be reduced? Even more, how can we reduce the
> permissions from a vCPU fault? I can't really think of a scenario where
> that happens.
>
> Or are you describing a case where two vcpus fault simultaneously with
> conflicting permissions:
>
> - Both vcpus fault on translation fault
> - vcpu A wants W access
> - vpcu B wants R access
>
> and 'A' gets in first, set the permissions to RW (because R is
> implicitly added to W), followed by 'B' which downgrades it to RO?
>
> If that's what you are describing, then I agree we could do better.
Yes, this is exactly what I want to describe.
>
>>
>> Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
>> ---
>>  arch/arm64/kvm/hyp/pgtable.c | 73 +++++++++++++++++++++++++-----------
>>  1 file changed, 52 insertions(+), 21 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
>> index 23a01dfcb27a..f8b3248cef1c 100644
>> --- a/arch/arm64/kvm/hyp/pgtable.c
>> +++ b/arch/arm64/kvm/hyp/pgtable.c
>> @@ -45,6 +45,8 @@
>>
>>  #define KVM_PTE_LEAF_ATTR_HI_S2_XN    BIT(54)
>>
>> +#define KVM_PTE_LEAF_ATTR_PERMS    (GENMASK(7, 6) | BIT(54))
>> +
>>  struct kvm_pgtable_walk_data {
>>      struct kvm_pgtable        *pgt;
>>      struct kvm_pgtable_walker    *walker;
>> @@ -170,10 +172,9 @@ static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
>>      smp_store_release(ptep, pte);
>>  }
>>
>> -static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
>> -                   u32 level)
>> +static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
>>  {
>> -    kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(pa);
>> +    kvm_pte_t pte = kvm_phys_to_pte(pa);
>>      u64 type = (level == KVM_PGTABLE_MAX_LEVELS - 1) ? KVM_PTE_TYPE_PAGE :
>>                                 KVM_PTE_TYPE_BLOCK;
>>
>> @@ -181,12 +182,7 @@ static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
>>      pte |= FIELD_PREP(KVM_PTE_TYPE, type);
>>      pte |= KVM_PTE_VALID;
>>
>> -    /* Tolerate KVM recreating the exact same mapping. */
>> -    if (kvm_pte_valid(old))
>> -        return old == pte;
>> -
>> -    smp_store_release(ptep, pte);
>> -    return true;
>> +    return pte;
>>  }
>>
>>  static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
>> @@ -341,12 +337,17 @@ static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
>>  static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
>>                      kvm_pte_t *ptep, struct hyp_map_data *data)
>>  {
>> +    kvm_pte_t new, old = *ptep;
>>      u64 granule = kvm_granule_size(level), phys = data->phys;
>>
>>      if (!kvm_block_mapping_supported(addr, end, phys, level))
>>          return false;
>>
>> -    WARN_ON(!kvm_set_valid_leaf_pte(ptep, phys, data->attr, level));
>> +    /* Tolerate KVM recreating the exact same mapping. */
>> +    new = kvm_init_valid_leaf_pte(phys, data->attr, level);
>> +    if (old != new && !WARN_ON(kvm_pte_valid(old)))
>> +        smp_store_release(ptep, new);
>> +
>>      data->phys += granule;
>>      return true;
>>  }
>> @@ -461,25 +462,56 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
>>      return 0;
>>  }
>>
>> +static bool stage2_set_valid_leaf_pte_pre(u64 addr, u32 level,
>> +                      kvm_pte_t *ptep, kvm_pte_t new,
>> +                      struct stage2_map_data *data)
>> +{
>> +    kvm_pte_t old = *ptep, old_attr, new_attr;
>> +
>> +    if ((old ^ new) & (~KVM_PTE_LEAF_ATTR_PERMS))
>> +        return false;
>> +
>> +    /*
>> +     * Skip updating if we are trying to recreate exactly the same mapping
>> +     * or to reduce the access permissions only. And update the valid leaf
>> +     * PTE without break-before-make if we are trying to add more access
>> +     * permissions only.
>> +     */
>> +    old_attr = (old & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
>> +    new_attr = (new & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
>> +    if (new_attr <= old_attr)
>> +        return true;
>> +
>> +    WRITE_ONCE(*ptep, new);
>> +    kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
>
> I think what bothers me the most here is that we are turning a mapping into
> a permission update, which makes the code really hard to read, and mixes
> two things that were so far separate.
>
> I wonder whether we should instead abort the update and simply take the fault
> again, if we ever need to do it.
>
> Thanks,
>
>         M.
Yanan Wang Dec. 14, 2020, 7:20 a.m. UTC | #6
Hi Will, Marc,

On 2020/12/11 18:00, Will Deacon wrote:
> On Fri, Dec 11, 2020 at 09:49:28AM +0000, Marc Zyngier wrote:
>> On 2020-12-11 08:01, Yanan Wang wrote:
>>> @@ -461,25 +462,56 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
>>>   	return 0;
>>>   }
>>>
>>> +static bool stage2_set_valid_leaf_pte_pre(u64 addr, u32 level,
>>> +					  kvm_pte_t *ptep, kvm_pte_t new,
>>> +					  struct stage2_map_data *data)
>>> +{
>>> +	kvm_pte_t old = *ptep, old_attr, new_attr;
>>> +
>>> +	if ((old ^ new) & (~KVM_PTE_LEAF_ATTR_PERMS))
>>> +		return false;
>>> +
>>> +	/*
>>> +	 * Skip updating if we are trying to recreate exactly the same mapping
>>> +	 * or to reduce the access permissions only. And update the valid leaf
>>> +	 * PTE without break-before-make if we are trying to add more access
>>> +	 * permissions only.
>>> +	 */
>>> +	old_attr = (old & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
>>> +	new_attr = (new & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
>>> +	if (new_attr <= old_attr)
>>> +		return true;
>>> +
>>> +	WRITE_ONCE(*ptep, new);
>>> +	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
>> I think what bothers me the most here is that we are turning a mapping into
>> a permission update, which makes the code really hard to read, and mixes
>> two things that were so far separate.
>>
>> I wonder whether we should instead abort the update and simply take the fault
>> again, if we ever need to do it.
> That's a nice idea. If we could enforce that we don't alter permissions on
> the map path, and instead just return e.g. -EAGAIN then that would be a
> very neat solution and would cement the permission vs translation fault
> division.

I agree that we can simplify the code and keep permission relaxing
separate from mapping by returning directly, although the cost is
possibly one more vCPU trap on a permission fault next time.

So how about the two new diffs below? I split them into two patches
with different aims.

Thanks,

Yanan.


diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 23a01dfcb27a..a74a62283012 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -170,10 +170,9 @@ static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
         smp_store_release(ptep, pte);
  }

-static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
-                                  u32 level)
+static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
  {
-       kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(pa);
+       kvm_pte_t pte = kvm_phys_to_pte(pa);
         u64 type = (level == KVM_PGTABLE_MAX_LEVELS - 1) ? KVM_PTE_TYPE_PAGE :
                                                            KVM_PTE_TYPE_BLOCK;

@@ -181,12 +180,7 @@ static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
         pte |= FIELD_PREP(KVM_PTE_TYPE, type);
         pte |= KVM_PTE_VALID;

-       /* Tolerate KVM recreating the exact same mapping. */
-       if (kvm_pte_valid(old))
-               return old == pte;
-
-       smp_store_release(ptep, pte);
-       return true;
+       return pte;
  }

  static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
@@ -341,12 +335,17 @@ static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
  static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
                                     kvm_pte_t *ptep, struct hyp_map_data *data)
  {
+       kvm_pte_t new, old = *ptep;
         u64 granule = kvm_granule_size(level), phys = data->phys;

         if (!kvm_block_mapping_supported(addr, end, phys, level))
                 return false;

-       WARN_ON(!kvm_set_valid_leaf_pte(ptep, phys, data->attr, level));
+       /* Tolerate KVM recreating the exact same mapping. */
+       new = kvm_init_valid_leaf_pte(phys, data->attr, level);
+       if (old != new && !WARN_ON(kvm_pte_valid(old)))
+               smp_store_release(ptep, new);
+
         data->phys += granule;
         return true;
  }
@@ -465,21 +464,29 @@ static bool stage2_map_walker_try_leaf(u64 addr, 
u64 end, u32 level,
                                        kvm_pte_t *ptep,
                                        struct stage2_map_data *data)
  {
+       kvm_pte_t new, old = *ptep;
         u64 granule = kvm_granule_size(level), phys = data->phys;
+       struct page *page = virt_to_page(ptep);

         if (!kvm_block_mapping_supported(addr, end, phys, level))
                 return false;

-       if (kvm_pte_valid(*ptep))
-               put_page(virt_to_page(ptep));
+       new = kvm_init_valid_leaf_pte(phys, data->attr, level);
+       if (kvm_pte_valid(old)) {
+               /* Tolerate KVM recreating the exact same mapping. */
+               if (old == new)
+                       goto out;

-       if (kvm_set_valid_leaf_pte(ptep, phys, data->attr, level))
-               goto out;
+               /* There's an existing different valid leaf entry, so 
perform
+                * break-before-make.
+                */
+               kvm_set_invalid_pte(ptep);
+               kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 
level);
+               put_page(page);
+       }

-       /* There's an existing valid leaf entry, so perform 
break-before-make */
-       kvm_set_invalid_pte(ptep);
-       kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
-       kvm_set_valid_leaf_pte(ptep, phys, data->attr, level);
+       smp_store_release(ptep, new);
+       get_page(page);
  out:
         data->phys += granule;
         return true;
@@ -521,7 +528,7 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, 
u32 level, kvm_pte_t *ptep,
         }

         if (stage2_map_walker_try_leaf(addr, end, level, ptep, data))
-               goto out_get_page;
+               return 0;

         if (WARN_ON(level == KVM_PGTABLE_MAX_LEVELS - 1))
                 return -EINVAL;
@@ -545,9 +552,8 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, 
u32 level, kvm_pte_t *ptep,
         }

         kvm_set_table_pte(ptep, childp);
-
-out_get_page:
         get_page(page);
+
         return 0;
  }


diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index a74a62283012..e3c6133567c4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -45,6 +45,10 @@
 
 #define KVM_PTE_LEAF_ATTR_HI_S2_XN	BIT(54)
 
+#define KVM_PTE_LEAF_ATTR_S2_PERMS	(KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R | \
+					 KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
+					 KVM_PTE_LEAF_ATTR_HI_S2_XN)
+
 struct kvm_pgtable_walk_data {
 	struct kvm_pgtable		*pgt;
 	struct kvm_pgtable_walker	*walker;
@@ -473,8 +477,13 @@ static bool stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 
 	new = kvm_init_valid_leaf_pte(phys, data->attr, level);
 	if (kvm_pte_valid(old)) {
-		/* Tolerate KVM recreating the exact same mapping. */
-		if (old == new)
+		/*
+		 * Skip updating the PTE with break-before-make if we are
+		 * trying to recreate the exact same mapping or only change
+		 * the access permissions. Actually, change of permissions
+		 * will be handled through the relax_perms path next time
+		 * if necessary.
+		 */
+		if (!((old ^ new) & (~KVM_PTE_LEAF_ATTR_S2_PERMS)))
 			goto out;
 
 		/* There's an existing different valid leaf entry, so perform
Marc Zyngier Dec. 15, 2020, 1:18 p.m. UTC | #7
Hi Yanan,

On 2020-12-14 07:20, wangyanan (Y) wrote:

> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index a74a62283012..e3c6133567c4 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -45,6 +45,10 @@
> 
>  #define KVM_PTE_LEAF_ATTR_HI_S2_XN	BIT(54)
> 
> +#define KVM_PTE_LEAF_ATTR_S2_PERMS	(KVM_PTE_LEAF_ATTR_LO_S2_S2AP_R | \
> +					 KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W | \
> +					 KVM_PTE_LEAF_ATTR_HI_S2_XN)
> +
>  struct kvm_pgtable_walk_data {
>  	struct kvm_pgtable		*pgt;
>  	struct kvm_pgtable_walker	*walker;
> @@ -473,8 +477,13 @@ static bool stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
> 
>  	new = kvm_init_valid_leaf_pte(phys, data->attr, level);
>  	if (kvm_pte_valid(old)) {
> -		/* Tolerate KVM recreating the exact same mapping. */
> -		if (old == new)
> +		/*
> +		 * Skip updating the PTE with break-before-make if we are
> +		 * trying to recreate the exact same mapping or only change
> +		 * the access permissions. Actually, change of permissions
> +		 * will be handled through the relax_perms path next time
> +		 * if necessary.
> +		 */
> +		if (!((old ^ new) & (~KVM_PTE_LEAF_ATTR_S2_PERMS)))
>  			goto out;
> 
>  		/* There's an existing different valid leaf entry, so perform

I think there is a bit more work to do on this.

One obvious issue is that we currently flag a page as dirty before
handling the fault. With an early exit, we end up having spurious
dirty pages.

It's not a big deal, but I'd rather mark the page dirty after the
mapping or the permission update has succeeded (at the moment, it
cannot fail).

Thanks,

         M.

Patch

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 23a01dfcb27a..f8b3248cef1c 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -45,6 +45,8 @@ 
 
 #define KVM_PTE_LEAF_ATTR_HI_S2_XN	BIT(54)
 
+#define KVM_PTE_LEAF_ATTR_PERMS	(GENMASK(7, 6) | BIT(54))
+
 struct kvm_pgtable_walk_data {
 	struct kvm_pgtable		*pgt;
 	struct kvm_pgtable_walker	*walker;
@@ -170,10 +172,9 @@  static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
 	smp_store_release(ptep, pte);
 }
 
-static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
-				   u32 level)
+static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
 {
-	kvm_pte_t old = *ptep, pte = kvm_phys_to_pte(pa);
+	kvm_pte_t pte = kvm_phys_to_pte(pa);
 	u64 type = (level == KVM_PGTABLE_MAX_LEVELS - 1) ? KVM_PTE_TYPE_PAGE :
 							   KVM_PTE_TYPE_BLOCK;
 
@@ -181,12 +182,7 @@  static bool kvm_set_valid_leaf_pte(kvm_pte_t *ptep, u64 pa, kvm_pte_t attr,
 	pte |= FIELD_PREP(KVM_PTE_TYPE, type);
 	pte |= KVM_PTE_VALID;
 
-	/* Tolerate KVM recreating the exact same mapping. */
-	if (kvm_pte_valid(old))
-		return old == pte;
-
-	smp_store_release(ptep, pte);
-	return true;
+	return pte;
 }
 
 static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data, u64 addr,
@@ -341,12 +337,17 @@  static int hyp_map_set_prot_attr(enum kvm_pgtable_prot prot,
 static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 				    kvm_pte_t *ptep, struct hyp_map_data *data)
 {
+	kvm_pte_t new, old = *ptep;
 	u64 granule = kvm_granule_size(level), phys = data->phys;
 
 	if (!kvm_block_mapping_supported(addr, end, phys, level))
 		return false;
 
-	WARN_ON(!kvm_set_valid_leaf_pte(ptep, phys, data->attr, level));
+	/* Tolerate KVM recreating the exact same mapping. */
+	new = kvm_init_valid_leaf_pte(phys, data->attr, level);
+	if (old != new && !WARN_ON(kvm_pte_valid(old)))
+		smp_store_release(ptep, new);
+
 	data->phys += granule;
 	return true;
 }
@@ -461,25 +462,56 @@  static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
 	return 0;
 }
 
+static bool stage2_set_valid_leaf_pte_pre(u64 addr, u32 level,
+					  kvm_pte_t *ptep, kvm_pte_t new,
+					  struct stage2_map_data *data)
+{
+	kvm_pte_t old = *ptep, old_attr, new_attr;
+
+	if ((old ^ new) & (~KVM_PTE_LEAF_ATTR_PERMS))
+		return false;
+
+	/*
+	 * Skip updating if we are trying to recreate exactly the same mapping
+	 * or to reduce the access permissions only. And update the valid leaf
+	 * PTE without break-before-make if we are trying to add more access
+	 * permissions only.
+	 */
+	old_attr = (old & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
+	new_attr = (new & KVM_PTE_LEAF_ATTR_PERMS) ^ KVM_PTE_LEAF_ATTR_HI_S2_XN;
+	if (new_attr <= old_attr)
+		return true;
+
+	WRITE_ONCE(*ptep, new);
+	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
+
+	return true;
+}
+
 static bool stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 				       kvm_pte_t *ptep,
 				       struct stage2_map_data *data)
 {
+	kvm_pte_t new, old = *ptep;
 	u64 granule = kvm_granule_size(level), phys = data->phys;
+	struct page *page = virt_to_page(ptep);
 
 	if (!kvm_block_mapping_supported(addr, end, phys, level))
 		return false;
 
-	if (kvm_pte_valid(*ptep))
-		put_page(virt_to_page(ptep));
+	new = kvm_init_valid_leaf_pte(phys, data->attr, level);
+	if (kvm_pte_valid(old)) {
+		if (stage2_set_valid_leaf_pte_pre(addr, level, ptep, new, data))
+			goto out;
 
-	if (kvm_set_valid_leaf_pte(ptep, phys, data->attr, level))
-		goto out;
+		/* Update the PTE with break-before-make if it's necessary. */
+		kvm_set_invalid_pte(ptep);
+		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
+		put_page(page);
+	}
 
-	/* There's an existing valid leaf entry, so perform break-before-make */
-	kvm_set_invalid_pte(ptep);
-	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
-	kvm_set_valid_leaf_pte(ptep, phys, data->attr, level);
+	smp_store_release(ptep, new);
+	get_page(page);
 out:
 	data->phys += granule;
 	return true;
@@ -521,7 +553,7 @@  static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	}
 
 	if (stage2_map_walker_try_leaf(addr, end, level, ptep, data))
-		goto out_get_page;
+		return 0;
 
 	if (WARN_ON(level == KVM_PGTABLE_MAX_LEVELS - 1))
 		return -EINVAL;
@@ -545,9 +577,8 @@  static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	}
 
 	kvm_set_table_pte(ptep, childp);
-
-out_get_page:
 	get_page(page);
+
 	return 0;
 }