[0/3] KVM: arm64: aarch32 ACTLR accesses

Message ID 20200526161834.29165-1-james.morse@arm.com

Message

James Morse May 26, 2020, 4:18 p.m. UTC
Hello!

Patch 1 fixes an issue where the 32bit and 64bit indexes into copro[]
and sys_regs[] are muddled.
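
(To spell the muddle out, writing the header macros from memory so the exact
values may be off: the cp15 indices in kvm_host.h are defined as twice the
64-bit enum values, e.g.

	#define c0_CSSELR	(CSSELR_EL1 * 2)	/* Cache Size Selection Register */

so using a copro[] index such as c0_CSSELR directly against sys_regs[] doesn't
land on CSSELR_EL1 at all; hence 32-bit CSSELR writes ending up in ACTLR.)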

Patch 3 adds support for aarch32 guests accessing the top 32 bits of ACTLR_EL1
via ACTLR2. Support for this register is advertised in ID_MMFR4.AC2, which
doesn't get removed by cpufeature. The register is mandatory from ARMv8.2, but
imp-def before then.

Patch 2 stops the sys_regs[] value we use for emulation from being saved/restored.
This simplifies patch 3, as the aarch32 helper can then rely on the in-memory copy.
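
To make patch 3 concrete: once patch 2 guarantees the in-memory sys_regs[]
copy is authoritative, the aarch32 helper only has to fold ACTLR2 into the top
half of ACTLR_EL1. Roughly (a sketch, not the actual patch, and the handler
name is made up):

	static bool access_actlr2(struct kvm_vcpu *vcpu,
				  struct sys_reg_params *p,
				  const struct sys_reg_desc *r)
	{
		u64 actlr = __vcpu_sys_reg(vcpu, ACTLR_EL1);

		if (p->is_write) {
			actlr &= GENMASK(31, 0);	/* keep the existing low half */
			actlr |= (u64)p->regval << 32;	/* ACTLR2 is the top half */
			__vcpu_sys_reg(vcpu, ACTLR_EL1) = actlr;
		} else {
			p->regval = upper_32_bits(actlr);
		}

		return true;
	}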

I think Patch 1 is stable material; I'm not sure about patches 2 & 3.


Bonus cans of worms!:

1. How does this copro[] thing work with a big-endian host?
The cp15_regs emulation looks fine, as nothing uses vcpu_cp15() to read the
register, but wouldn't prepare_fault32() read the wrong end of the register
when using vcpu_cp15()?
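
To illustrate what I mean with a stand-alone toy (not the kernel code, just
the aliasing pattern):

	#include <inttypes.h>
	#include <stdio.h>

	/*
	 * Toy model of the sys_regs[]/copro[] aliasing: a 32-bit view of
	 * 64-bit storage. Which copro[] index holds the low word depends
	 * on the host's endianness.
	 */
	union regs {
		uint64_t sys_regs[1];
		uint32_t copro[2];
	};

	int main(void)
	{
		union regs r = { .sys_regs = { 0x1111111122222222ULL } };

		/* little-endian host: copro[0]=0x22222222, copro[1]=0x11111111
		 * big-endian host:    copro[0]=0x11111111, copro[1]=0x22222222 */
		printf("copro[0]=0x%08" PRIx32 " copro[1]=0x%08" PRIx32 "\n",
		       r.copro[0], r.copro[1]);
		return 0;
	}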

2. How does the 32bit fault injection code work with VHE?
vcpu_cp15() modifies the in-memory copy, surely a vcpu_put() will clobber
everything it did, or fail to restore it when entering the guest.


Thanks,

James Morse (3):
  KVM: arm64: Stop writing aarch32's CSSELR into ACTLR
  KVM: arm64: Stop save/restoring ACTLR_EL1
  KVM: arm64: Add emulation for 32bit guests accessing ACTLR2

 arch/arm64/include/asm/kvm_host.h    |  1 +
 arch/arm64/kvm/hyp/sysreg-sr.c       |  2 --
 arch/arm64/kvm/sys_regs.c            |  8 +++-----
 arch/arm64/kvm/sys_regs_generic_v8.c | 16 +++++++++++++++-
 4 files changed, 19 insertions(+), 8 deletions(-)

Comments

Marc Zyngier May 31, 2020, 1:37 p.m. UTC | #1
Hi James,

On 2020-05-26 17:18, James Morse wrote:

[...]

> 1. How does this copro[] thing work with a big-endian host?
> The cp15_regs emulation looks fine, as nothing uses vcpu_cp15() to read the
> register, but wouldn't prepare_fault32() read the wrong end of the register
> when using vcpu_cp15()?

This seems pretty broken indeed. How about something like this:

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 59029e90b557..e80c0e06f235 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -404,8 +404,14 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
   * CP14 and CP15 live in the same array, as they are backed by the
   * same system registers.
   */
-#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r)])
-#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r)])
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define CPx_OFFSET	1
+#else
+#define CPx_OFFSET	0
+#endif
+
+#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])
+#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])

  struct kvm_vm_stat {
  	ulong remote_tlb_flush;

Yes, it's ugly.

> 2. How does the 32bit fault injection code work with VHE?
> vcpu_cp15() modifies the in-memory copy, surely a vcpu_put() will clobber
> everything it did, or fail to restore it when entering the guest.

Wow, you're really wading into dangerous territory! ;-)

Again, you are absolutely right. I guess nobody really ever ran
32bit guests on VHE systems, as they are both rare and mostly
have 64-bit-only EL1. This code is also mostly never used
(we run well behaved guests at all times!).

Here's a hack that should do the right thing at all times:

diff --git a/arch/arm64/kvm/aarch32.c b/arch/arm64/kvm/aarch32.c
index 0a356aa91aa1..40a62a99fbf8 100644
--- a/arch/arm64/kvm/aarch32.c
+++ b/arch/arm64/kvm/aarch32.c
@@ -33,6 +33,26 @@ static const u8 return_offsets[8][2] = {
  	[7] = { 4, 4 },		/* FIQ, unused */
  };

+static bool pre_fault_synchronize(struct kvm_vcpu *vcpu)
+{
+	preempt_disable();
+	if (vcpu->arch.sysregs_loaded_on_cpu) {
+		kvm_arch_vcpu_put(vcpu);
+		return true;
+	}
+
+	preempt_enable();
+	return false;
+}
+
+static void post_fault_synchronize(struct kvm_vcpu *vcpu, bool loaded)
+{
+	if (loaded) {
+		kvm_arch_vcpu_load(vcpu, smp_processor_id());
+		preempt_enable();
+	}
+}
+
  /*
   * When an exception is taken, most CPSR fields are left unchanged in the
   * handler. However, some are explicitly overridden (e.g. M[4:0]).
@@ -155,7 +175,10 @@ static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)

  void kvm_inject_undef32(struct kvm_vcpu *vcpu)
  {
+	bool loaded = pre_fault_synchronize(vcpu);
+
  	prepare_fault32(vcpu, PSR_AA32_MODE_UND, 4);
+	post_fault_synchronize(vcpu, loaded);
  }

  /*
@@ -168,6 +191,9 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
  	u32 vect_offset;
  	u32 *far, *fsr;
  	bool is_lpae;
+	bool loaded;
+
+	loaded = pre_fault_synchronize(vcpu);

  	if (is_pabt) {
  		vect_offset = 12;
@@ -191,6 +217,8 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
  		/* no need to shuffle FS[4] into DFSR[10] as its 0 */
  		*fsr = DFSR_FSC_EXTABT_nLPAE;
  	}
+
+	post_fault_synchronize(vcpu, loaded);
  }

  void kvm_inject_dabt32(struct kvm_vcpu *vcpu, unsigned long addr)

Of course, none of this is tested. We really should find ways to
trigger these corner cases... :-/

Thanks,

         M.