
[5/5] KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory

Message ID 20241202135504.14252-6-yangyicong@huawei.com (mailing list archive)
State New
Series Add support for FEAT_{LS64, LS64_V, LS64_ACCDATA} and related tests

Commit Message

Yicong Yang Dec. 2, 2024, 1:55 p.m. UTC
From: Yicong Yang <yangyicong@hisilicon.com>

FEAT_LS64* instructions only support access to Device/Uncacheable
memory; otherwise a data abort for unsupported Exclusive or atomic
access (0x35) is generated per the spec. It is implementation defined
which exception level the fault is taken to, and it is possible for it
to be routed to EL2 for a VHE VM. Per DDI0487K.a Section
C3.2.12.2 Single-copy atomic 64-byte load/store:

  The check is performed against the resulting memory type after all
  enabled stages of translation. In this case the fault is reported
  at the final enabled stage of translation.

If the implementation reports the fault at the final enabled stage of
translation (stage 2), inject a DABT into the guest so it can handle it.

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
---
 arch/arm64/kvm/mmu.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
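
For illustration, the fault being handled here is the one a guest takes when an
LS64 access targets Normal memory. Below is a minimal, hypothetical snippet of
such an access; it is not taken from this series, the function name and register
choices are made up, and it assumes an assembler that accepts the ls64 extension.

static inline void ld64b_read(void *addr, unsigned long out[8])
{
	/* LD64B loads x8..x15 (Xt must be even) from a 64-byte aligned address */
	register unsigned long a  asm("x0") = (unsigned long)addr;
	register unsigned long d0 asm("x8");
	register unsigned long d1 asm("x9");
	register unsigned long d2 asm("x10");
	register unsigned long d3 asm("x11");
	register unsigned long d4 asm("x12");
	register unsigned long d5 asm("x13");
	register unsigned long d6 asm("x14");
	register unsigned long d7 asm("x15");

	asm volatile(".arch armv8.7-a+ls64\n\t"
		     "ld64b x8, [x0]"
		     : "=r"(d0), "=r"(d1), "=r"(d2), "=r"(d3),
		       "=r"(d4), "=r"(d5), "=r"(d6), "=r"(d7)
		     : "r"(a)
		     : "memory");

	out[0] = d0; out[1] = d1; out[2] = d2; out[3] = d3;
	out[4] = d4; out[5] = d5; out[6] = d6; out[7] = d7;
}

If "addr" resolves to Normal memory (absent FEAT_LS64WB, see the review below),
the LD64B above generates the DFSC 0x35 abort this patch forwards to the guest
instead of completing the load.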

Comments

Marc Zyngier Dec. 3, 2024, 9:38 a.m. UTC | #1
On Mon, 02 Dec 2024 13:55:04 +0000,
Yicong Yang <yangyicong@huawei.com> wrote:
> 
> From: Yicong Yang <yangyicong@hisilicon.com>
> 
> FEAT_LS64* instructions only support access to Device/Uncacheable
> memory; otherwise a data abort for unsupported Exclusive or atomic

Not quite. FEAT_LS64WB explicitly supports Write-Back mappings.

> access (0x35) is generated per the spec. It is implementation defined
> which exception level the fault is taken to, and it is possible for it
> to be routed to EL2 for a VHE VM. Per DDI0487K.a Section
> C3.2.12.2 Single-copy atomic 64-byte load/store:
> 
>   The check is performed against the resulting memory type after all
>   enabled stages of translation. In this case the fault is reported
>   at the final enabled stage of translation.
> 
> If the implementation reports the fault at the final enabled stage of
> translation (stage 2), inject a DABT into the guest so it can handle it.
> 
> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
> ---
>  arch/arm64/kvm/mmu.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c9d46ad57e52..b7e6f0a27537 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1787,6 +1787,20 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>  		return 1;
>  	}
>  
> +	/*
> +	 * If a FEAT_{LS64, LS64_V, LS64_ACCDATA} instruction operates on an
> +	 * unsupported memory type, a DABT for unsupported Exclusive or
> +	 * atomic access is generated. It is implementation defined whether
> +	 * the fault is reported as a stage-1 DABT taken to the guest or at
> +	 * the final enabled stage of translation (stage 2, which is how we
> +	 * got here). In the latter case, inject a DABT into the guest so it
> +	 * can handle it.
> +	 */
> +	if (esr_fsc_is_excl_atomic_fault(esr)) {
> +		kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
> +		return 1;
> +	}

This doesn't seem quite right. This is injecting an *External* Data
Abort, which is not what the spec says happens, as you are emulating
the *first* acceptable behaviour:

  "The check is performed at each enabled stage of translation, and
   the fault is reported for the first stage of translation that
   provides an inappropriate memory type. In this case, the value of
   the HCR_EL2.DC bit does not cause accesses generated by the
   instructions to generate a stage 1 Data abort,"

So while the exception is reported at a different EL, the fault should
still be an "unsupported Exclusive or atomic access". But that's also
assuming that S2 has a device mapping, and it is EL1 that did
something wrong. Surely you should check the IPA against its memory
type?

Further questions: what happens when an L2 guest triggers such a fault?
I don't think you can arbitrarily route it back to L2 without
looking at why it faulted.

Thanks,

	M.
Yicong Yang Dec. 4, 2024, 9:08 a.m. UTC | #2
On 2024/12/3 17:38, Marc Zyngier wrote:
> On Mon, 02 Dec 2024 13:55:04 +0000,
> Yicong Yang <yangyicong@huawei.com> wrote:
>>
>> From: Yicong Yang <yangyicong@hisilicon.com>
>>
>> FEAT_LS64* instructions only support access to Device/Uncacheable
>> memory; otherwise a data abort for unsupported Exclusive or atomic
> 
> Not quite. FEAT_LS64WB explicitly supports Write-Back mappings.

Thanks for the information. I didn't know this since it's not listed in DDI0487K.a;
I found the feature description in [1]. Will refine the comment and mention that this
behaviour only applies when FEAT_LS64WB is not implemented.

[1] https://developer.arm.com/documentation/109697/2024_09/Feature-descriptions/The-Armv9-6-architecture-extension

> 
>> access (0x35) is generated per the spec. It is implementation defined
>> which exception level the fault is taken to, and it is possible for it
>> to be routed to EL2 for a VHE VM. Per DDI0487K.a Section
>> C3.2.12.2 Single-copy atomic 64-byte load/store:
>>
>>   The check is performed against the resulting memory type after all
>>   enabled stages of translation. In this case the fault is reported
>>   at the final enabled stage of translation.
>>
>> If the implementation reports the fault at the final enabled stage of
>> translation (stage 2), inject a DABT into the guest so it can handle it.
>>
>> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
>> ---
>>  arch/arm64/kvm/mmu.c | 14 ++++++++++++++
>>  1 file changed, 14 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index c9d46ad57e52..b7e6f0a27537 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1787,6 +1787,20 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>>  		return 1;
>>  	}
>>  
>> +	/*
>> +	 * If a FEAT_{LS64, LS64_V, LS64_ACCDATA} instruction operates on an
>> +	 * unsupported memory type, a DABT for unsupported Exclusive or
>> +	 * atomic access is generated. It is implementation defined whether
>> +	 * the fault is reported as a stage-1 DABT taken to the guest or at
>> +	 * the final enabled stage of translation (stage 2, which is how we
>> +	 * got here). In the latter case, inject a DABT into the guest so it
>> +	 * can handle it.
>> +	 */
>> +	if (esr_fsc_is_excl_atomic_fault(esr)) {
>> +		kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
>> +		return 1;
>> +	}
> 
> This doesn't seem quite right. This is injecting an *External* Data
> Abort, which is not what the spec says happens, as you are emulating
> the *first* acceptable behaviour:
> 
>   "The check is performed at each enabled stage of translation, and
>    the fault is reported for the first stage of translation that
>    provides an inappropriate memory type. In this case, the value of
>    the HCR_EL2.DC bit does not cause accesses generated by the
>    instructions to generate a stage 1 Data abort,"
> 
> So while the exception is reported at a different EL, the fault should
> still be an "unsupported Exclusive or atomic access". 

Yes, it's trying to emulate the first behaviour so that the guest OS can handle it
the same way the host OS does - send a SIGBUS to the EL0 task if it performs the
access on an unsupported memory type. Currently, without the handling here, the VM
is killed when such a fault happens. Keeping the injected FSC as "unsupported
Exclusive or atomic access" would be more proper; will make that change in the
next version.
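
As a rough illustration of what keeping the injected FSC could mean - a sketch
only, not an existing KVM helper and not necessarily what the next version will
look like. Field names come from asm/esr.h; the FAR_EL1 setup and the
pending-exception plumbing that kvm_inject_dabt() already performs are omitted.

#include <linux/types.h>
#include <asm/esr.h>

/* DFSC 0b110101: unsupported Exclusive or atomic access */
#define DFSC_EXCL_ATOMIC	0x35

/* Hypothetical helper: build an ESR_EL1 value for the fault being discussed */
static u64 build_excl_atomic_dabt_esr(bool from_lower_el, bool is_write)
{
	u64 esr = 0;

	esr |= (u64)(from_lower_el ? ESR_ELx_EC_DABT_LOW :
				     ESR_ELx_EC_DABT_CUR) << ESR_ELx_EC_SHIFT;
	esr |= ESR_ELx_IL;		/* 32-bit instruction syndrome */
	if (is_write)
		esr |= ESR_ELx_WNR;	/* write, not read */
	esr |= DFSC_EXCL_ATOMIC;	/* rather than the external-abort FSC */

	return esr;
}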

> But that's also
> assuming that S2 has a device mapping, and it is EL1 that did
> something wrong. Surely you should check the IPA against its memory
> type?

In my case it happens when both the S1 and S2 mappings are Normal memory, for
example when running the hwcap test (PATCH 3/5) in the VM. But yes, the memory
type should be checked here.
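
A rough sketch of the kind of check being discussed, not the actual next
version: fault_ipa and esr are the locals already computed earlier in
kvm_handle_guest_abort(), and gfn_to_memslot(), kvm_vcpu_get_hfar() and
kvm_inject_dabt() are existing helpers.

	if (esr_fsc_is_excl_atomic_fault(esr)) {
		/*
		 * Only reflect the fault back to the guest when the IPA is
		 * backed by a memslot, i.e. stage 2 maps it as Normal memory,
		 * so the inappropriate memory type must have come from the
		 * guest's stage-1 mapping. The FSC of the injected abort
		 * would also need to be 0x35 rather than an external abort.
		 */
		if (gfn_to_memslot(vcpu->kvm, fault_ipa >> PAGE_SHIFT)) {
			kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
			return 1;
		}
	}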

> 
> Further questions: what happens when an L2 guest triggers such a fault?
> I don't think you can arbitrarily route it back to L2 without
> looking at why it faulted.
> 

Will look into how this affects the nested VM case. But as you point out, it's
too early to inject the abort here; at the very least we should check the memory
type. Will address it.

Thanks.

Patch

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..b7e6f0a27537 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1787,6 +1787,20 @@  int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 		return 1;
 	}
 
+	/*
+	 * If a FEAT_{LS64, LS64_V, LS64_ACCDATA} instruction operates on an
+	 * unsupported memory type, a DABT for unsupported Exclusive or
+	 * atomic access is generated. It is implementation defined whether
+	 * the fault is reported as a stage-1 DABT taken to the guest or at
+	 * the final enabled stage of translation (stage 2, which is how we
+	 * got here). In the latter case, inject a DABT into the guest so it
+	 * can handle it.
+	 */
+	if (esr_fsc_is_excl_atomic_fault(esr)) {
+		kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
+		return 1;
+	}
+
 	trace_kvm_guest_fault(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
 			      kvm_vcpu_get_hfar(vcpu), fault_ipa);
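
For context, esr_fsc_is_excl_atomic_fault() is introduced earlier in this series
and is not part of this hunk. Based on the DFSC value 0x35 quoted in the commit
message, it presumably amounts to something like the following sketch (not the
actual helper from the series):

#include <asm/esr.h>

/* Sketch: DFSC 0b110101 is "unsupported Exclusive or atomic access" */
static inline bool esr_fsc_is_excl_atomic_fault(unsigned long esr)
{
	return (esr & ESR_ELx_FSC) == 0x35;
}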