
arm64: KVM: Take S1 walks into account when determining S2 write faults

Message ID 1475149021-13288-1-git-send-email-will.deacon@arm.com (mailing list archive)
State New, archived

Commit Message

Will Deacon Sept. 29, 2016, 11:37 a.m. UTC
The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
generated by a read or a write instruction. For stage 2 data aborts
generated by a stage 1 translation table walk (i.e. the actual page
table access faults at EL2), the WnR bit therefore reports whether the
instruction generating the walk was a load or a store, *not* whether the
page table walker was reading or writing the entry.

For page tables marked as read-only at stage 2 (e.g. due to KSM merging
them with the tables from another guest), this could result in livelock,
where a page table walk generated by a load instruction attempts to
set the access flag in the stage 1 descriptor, but fails to trigger
CoW in the host since only a read fault is reported.

This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
take into account stage 2 faults in stage 1 walks. Since DBM cannot be
disabled at EL2 for CPUs that implement it, we assume that these faults
are always caused by writes, avoiding the livelock situation at the
expense of occasional, spurious CoWs.

We could, in theory, do a bit better by checking the guest TCR
configuration and inspecting the page table to see why the PTE faulted.
However, I doubt this is measurable in practice, and the threat of
livelock is real.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Julien Grall <julien.grall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/kvm_emulate.h | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)
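[Editor's note: for readers unfamiliar with the ESR_ELx encoding, the
standalone C sketch below illustrates the before/after behaviour of the
check. It is not kernel code: the bit positions follow the ARMv8 ARM ISS
encoding for data aborts (WnR is bit 6, S1PTW is bit 7), and the
dabt_iswrite_old/dabt_iswrite_new names are invented here for
illustration.]

```c
#include <stdbool.h>
#include <stdint.h>

/* ISS field bits of ESR_EL2 for a data abort (ARMv8 ARM). */
#define ESR_ELx_WNR   (1UL << 6)  /* Write-not-Read */
#define ESR_ELx_S1PTW (1UL << 7)  /* fault taken on a stage 1 table walk */

/* Before the patch: a walk fault triggered by a load reports WnR == 0,
 * so the host sees a read fault and never breaks CoW -- livelock. */
static bool dabt_iswrite_old(uint64_t esr)
{
	return !!(esr & ESR_ELx_WNR);
}

/* After the patch: any stage 2 fault taken on a stage 1 walk is treated
 * as a write, since a hardware AF/DBM update writes the descriptor. */
static bool dabt_iswrite_new(uint64_t esr)
{
	return !!(esr & ESR_ELx_WNR) || !!(esr & ESR_ELx_S1PTW);
}
```

A walk fault with WnR clear (the livelock case described above) is the
one input where the two versions disagree: the old check returns false,
the new one returns true.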

Comments

Marc Zyngier Sept. 29, 2016, 3:36 p.m. UTC | #1
On Thu, 29 Sep 2016 12:37:01 +0100
Will Deacon <will.deacon@arm.com> wrote:

> The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
> generated by a read or a write instruction. For stage 2 data aborts
> generated by a stage 1 translation table walk (i.e. the actual page
> table access faults at EL2), the WnR bit therefore reports whether the
> instruction generating the walk was a load or a store, *not* whether the
> page table walker was reading or writing the entry.
> 
> For page tables marked as read-only at stage 2 (e.g. due to KSM merging
> them with the tables from another guest), this could result in livelock,
> where a page table walk generated by a load instruction attempts to
> set the access flag in the stage 1 descriptor, but fails to trigger
> CoW in the host since only a read fault is reported.
> 
> This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
> take into account stage 2 faults in stage 1 walks. Since DBM cannot be
> disabled at EL2 for CPUs that implement it, we assume that these faults
> are always caused by writes, avoiding the livelock situation at the
> expense of occasional, spurious CoWs.
> 
> We could, in theory, do a bit better by checking the guest TCR
> configuration and inspecting the page table to see why the PTE faulted.
> However, I doubt this is measurable in practice, and the threat of
> livelock is real.
> 
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/kvm_emulate.h | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 4cdeae3b17c6..948a9a8a9297 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -167,11 +167,6 @@ static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
>  }
>  
> -static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
> -{
> -	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
> -}
> -
>  static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
>  {
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
> @@ -192,6 +187,12 @@ static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
>  }
>  
> +static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
> +{
> +	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
> +		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
> +}
> +
>  static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
>  {
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>

Thanks,

	M.
Mark Rutland Sept. 29, 2016, 5:16 p.m. UTC | #2
[Adding Julien, who seemed to be missing from the real Cc list]

Mark.

On Thu, Sep 29, 2016 at 12:37:01PM +0100, Will Deacon wrote:
> The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
> generated by a read or a write instruction. For stage 2 data aborts
> generated by a stage 1 translation table walk (i.e. the actual page
> table access faults at EL2), the WnR bit therefore reports whether the
> instruction generating the walk was a load or a store, *not* whether the
> page table walker was reading or writing the entry.
> 
> For page tables marked as read-only at stage 2 (e.g. due to KSM merging
> them with the tables from another guest), this could result in livelock,
> where a page table walk generated by a load instruction attempts to
> set the access flag in the stage 1 descriptor, but fails to trigger
> CoW in the host since only a read fault is reported.
> 
> This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
> take into account stage 2 faults in stage 1 walks. Since DBM cannot be
> disabled at EL2 for CPUs that implement it, we assume that these faults
> are always caused by writes, avoiding the livelock situation at the
> expense of occasional, spurious CoWs.
> 
> We could, in theory, do a bit better by checking the guest TCR
> configuration and inspecting the page table to see why the PTE faulted.
> However, I doubt this is measurable in practice, and the threat of
> livelock is real.
> 
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/kvm_emulate.h | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 4cdeae3b17c6..948a9a8a9297 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -167,11 +167,6 @@ static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
>  }
>  
> -static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
> -{
> -	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
> -}
> -
>  static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
>  {
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
> @@ -192,6 +187,12 @@ static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
>  }
>  
> +static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
> +{
> +	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
> +		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
> +}
> +
>  static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
>  {
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);
> -- 
> 2.1.4
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
>
Christoffer Dall Sept. 29, 2016, 7:14 p.m. UTC | #3
On Thu, Sep 29, 2016 at 12:37:01PM +0100, Will Deacon wrote:
> The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
> generated by a read or a write instruction. For stage 2 data aborts
> generated by a stage 1 translation table walk (i.e. the actual page
> table access faults at EL2), the WnR bit therefore reports whether the
> instruction generating the walk was a load or a store, *not* whether the
> page table walker was reading or writing the entry.
> 
> For page tables marked as read-only at stage 2 (e.g. due to KSM merging
> them with the tables from another guest), this could result in livelock,
> where a page table walk generated by a load instruction attempts to
> set the access flag in the stage 1 descriptor, but fails to trigger
> CoW in the host since only a read fault is reported.
> 
> This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
> take into account stage 2 faults in stage 1 walks. Since DBM cannot be
> disabled at EL2 for CPUs that implement it, we assume that these faults
> are always caused by writes, avoiding the livelock situation at the
> expense of occasional, spurious CoWs.
> 
> We could, in theory, do a bit better by checking the guest TCR
> configuration and inspecting the page table to see why the PTE faulted.
> However, I doubt this is measurable in practice, and the threat of
> livelock is real.
> 
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

Applied,
-Christoffer
Will Deacon Oct. 17, 2016, 10:20 a.m. UTC | #4
On Thu, Sep 29, 2016 at 09:14:32PM +0200, Christoffer Dall wrote:
> On Thu, Sep 29, 2016 at 12:37:01PM +0100, Will Deacon wrote:
> > The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
> > generated by a read or a write instruction. For stage 2 data aborts
> > generated by a stage 1 translation table walk (i.e. the actual page
> > table access faults at EL2), the WnR bit therefore reports whether the
> > instruction generating the walk was a load or a store, *not* whether the
> > page table walker was reading or writing the entry.
> > 
> > For page tables marked as read-only at stage 2 (e.g. due to KSM merging
> > them with the tables from another guest), this could result in livelock,
> > where a page table walk generated by a load instruction attempts to
> > set the access flag in the stage 1 descriptor, but fails to trigger
> > CoW in the host since only a read fault is reported.
> > 
> > This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
> > take into account stage 2 faults in stage 1 walks. Since DBM cannot be
> > disabled at EL2 for CPUs that implement it, we assume that these faults
> > are always caused by writes, avoiding the livelock situation at the
> > expense of occasional, spurious CoWs.
> > 
> > We could, in theory, do a bit better by checking the guest TCR
> > configuration and inspecting the page table to see why the PTE faulted.
> > However, I doubt this is measurable in practice, and the threat of
> > livelock is real.
> > 
> > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > Cc: Christoffer Dall <christoffer.dall@linaro.org>
> > Cc: Julien Grall <julien.grall@arm.com>
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> 
> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
> 
> Applied,

This doesn't seem to be in 4.9-rc1. Could you please dig it up?

Ta,

Will
Marc Zyngier Oct. 17, 2016, 10:28 a.m. UTC | #5
On 17/10/16 11:20, Will Deacon wrote:
> On Thu, Sep 29, 2016 at 09:14:32PM +0200, Christoffer Dall wrote:
>> On Thu, Sep 29, 2016 at 12:37:01PM +0100, Will Deacon wrote:
>>> The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
>>> generated by a read or a write instruction. For stage 2 data aborts
>>> generated by a stage 1 translation table walk (i.e. the actual page
>>> table access faults at EL2), the WnR bit therefore reports whether the
>>> instruction generating the walk was a load or a store, *not* whether the
>>> page table walker was reading or writing the entry.
>>>
>>> For page tables marked as read-only at stage 2 (e.g. due to KSM merging
>>> them with the tables from another guest), this could result in livelock,
>>> where a page table walk generated by a load instruction attempts to
>>> set the access flag in the stage 1 descriptor, but fails to trigger
>>> CoW in the host since only a read fault is reported.
>>>
>>> This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
>>> take into account stage 2 faults in stage 1 walks. Since DBM cannot be
>>> disabled at EL2 for CPUs that implement it, we assume that these faults
>>> are always caused by writes, avoiding the livelock situation at the
>>> expense of occasional, spurious CoWs.
>>>
>>> We could, in theory, do a bit better by checking the guest TCR
>>> configuration and inspecting the page table to see why the PTE faulted.
>>> However, I doubt this is measurable in practice, and the threat of
>>> livelock is real.
>>>
>>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>>> Cc: Christoffer Dall <christoffer.dall@linaro.org>
>>> Cc: Julien Grall <julien.grall@arm.com>
>>> Signed-off-by: Will Deacon <will.deacon@arm.com>
>>
>> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
>>
>> Applied,
> 
> This doesn't seem to be in 4.9-rc1. Could you please dig it up?

Looks like this patch has been lingering in -queue. I'll push it on
master as a fix for -rc2.

Thanks for the heads up.

	M.

Patch

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 4cdeae3b17c6..948a9a8a9297 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -167,11 +167,6 @@ static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
 }
 
-static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
-{
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
-}
-
 static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
 {
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
@@ -192,6 +187,12 @@ static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
 }
 
+static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
+{
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
+		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
+}
+
 static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
 {
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);