Message ID | 20241202135504.14252-6-yangyicong@huawei.com (mailing list archive)
---|---
State | New |
Series | Add support for FEAT_{LS64, LS64_V, LS64_ACCDATA} and related tests
On Mon, 02 Dec 2024 13:55:04 +0000, Yicong Yang <yangyicong@huawei.com> wrote:
>
> From: Yicong Yang <yangyicong@hisilicon.com>
>
> FEAT_LS64* instructions only support access to Device/Uncacheable
> memory, otherwise a data abort for unsupported Exclusive or atomic

Not quite. FEAT_LS64WB explicitly supports Write-Back mappings.

> access (0x35) is generated per spec. It's implementation defined
> which exception level the fault is routed to, and it is possible to
> implement it as routed to EL2 on a VHE VM. Per DDI0487K.a Section
> C3.2.12.2 Single-copy atomic 64-byte load/store:
>
>   The check is performed against the resulting memory type after all
>   enabled stages of translation. In this case the fault is reported
>   at the final enabled stage of translation.
>
> If it's implemented such that the DABT is generated at the final
> enabled stage (stage-2), inject a DABT to the guest to handle it.
>
> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
> ---
>  arch/arm64/kvm/mmu.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c9d46ad57e52..b7e6f0a27537 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1787,6 +1787,20 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>  		return 1;
>  	}
>
> +	/*
> +	 * If instructions of FEAT_{LS64, LS64_V, LS64_ACCDATA} operated on
> +	 * unsupported memory regions, a DABT for unsupported Exclusive or
> +	 * atomic access is generated. It's implementation defined whether
> +	 * the exception will be taken to, a stage-1 DABT or the final enabled
> +	 * stage of translation (stage-2 in this case as we hit here). Inject
> +	 * a DABT to the guest to handle it if it's implemented as a stage-2
> +	 * DABT.
> +	 */
> +	if (esr_fsc_is_excl_atomic_fault(esr)) {
> +		kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
> +		return 1;
> +	}

This doesn't seem quite right. This is injecting an *External* Data
Abort, which is not what the spec says happens, as you are emulating
the *first* acceptable behaviour:

  "The check is performed at each enabled stage of translation, and
  the fault is reported for the first stage of translation that
  provides an inappropriate memory type. In this case, the value of
  the HCR_EL2.DC bit does not cause accesses generated by the
  instructions to generate a stage 1 Data abort,"

So while the exception is reported at a different EL, the fault should
still be an "unsupported Exclusive or atomic access".

But that's also assuming that S2 has a device mapping, and it is EL1
that did something wrong. Surely you should check the IPA against its
memory type?

Further questions: what happens when an L2 guest triggers such a
fault? I don't think you can arbitrarily route it back to L2 without
looking at why it faulted.

Thanks,

	M.
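To make the review's suggestion concrete, here is a minimal sketch (not the author's code) of what a reworked hunk in kvm_handle_guest_abort() might look like: check the memory type that stage-2 provides for the faulting IPA first, and only then reflect a fault that keeps the "unsupported Exclusive or atomic access" status code. kvm_s2_maps_device() and kvm_inject_dabt_excl_atomic() are hypothetical helper names invented for illustration; only esr_fsc_is_excl_atomic_fault() comes from this series.

```c
/*
 * Illustrative sketch only. kvm_s2_maps_device() and
 * kvm_inject_dabt_excl_atomic() are hypothetical helpers, not
 * existing KVM APIs; esr_fsc_is_excl_atomic_fault() is the
 * predicate added by this series.
 */
if (esr_fsc_is_excl_atomic_fault(esr)) {
	/*
	 * Only reflect the fault to the guest if stage-2 maps the
	 * IPA with a memory type the LS64 instructions can use
	 * (i.e. Device). If stage-2 itself provides Normal memory,
	 * the guest's stage-1 mapping is not at fault and the abort
	 * must not be blindly routed back to it.
	 */
	if (!kvm_s2_maps_device(vcpu->kvm, fault_ipa))
		return -EFAULT;	/* e.g. punt to userspace instead */

	/*
	 * Reflect a DABT that preserves the "unsupported Exclusive
	 * or atomic access" fault status code (0x35), rather than
	 * the External abort that kvm_inject_dabt() encodes.
	 */
	kvm_inject_dabt_excl_atomic(vcpu, kvm_vcpu_get_hfar(vcpu));
	return 1;
}
```

How the no-reflect path should behave, especially when the fault comes from an L2 guest, is precisely the open question the review raises. The patch as posted follows below for reference.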
```diff
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..b7e6f0a27537 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1787,6 +1787,20 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 		return 1;
 	}
 
+	/*
+	 * If instructions of FEAT_{LS64, LS64_V, LS64_ACCDATA} operated on
+	 * unsupported memory regions, a DABT for unsupported Exclusive or
+	 * atomic access is generated. It's implementation defined whether
+	 * the exception will be taken to, a stage-1 DABT or the final enabled
+	 * stage of translation (stage-2 in this case as we hit here). Inject
+	 * a DABT to the guest to handle it if it's implemented as a stage-2
+	 * DABT.
+	 */
+	if (esr_fsc_is_excl_atomic_fault(esr)) {
+		kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
+		return 1;
+	}
+
 	trace_kvm_guest_fault(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
 			      kvm_vcpu_get_hfar(vcpu), fault_ipa);
```
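For context on why the reply calls the injected fault an *External* Data Abort: kvm_inject_dabt() (arch/arm64/kvm/inject_fault.c) synthesizes an ESR whose fault status code is the external-abort code, not the 0x35 code the architecture assigns to unsupported Exclusive/atomic accesses. A paraphrased comparison of the two codes follows; ESR_ELx_FSC_EXTABT is the existing define in arch/arm64/include/asm/esr.h, while the ESR_ELx_FSC_EXCL_ATOMIC name is an assumption inferred from the series' esr_fsc_is_excl_atomic_fault() helper, not a confirmed identifier.

```c
/* Paraphrased from arch/arm64/include/asm/esr.h */
#define ESR_ELx_FSC_EXTABT	(0x10)	/* External abort: what kvm_inject_dabt() reports */

/* Assumed name for the DFSC this series handles */
#define ESR_ELx_FSC_EXCL_ATOMIC	(0x35)	/* Unsupported Exclusive or atomic access */
```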