Message ID: 20200903135307.251331-9-ascull@google.com (mailing list archive)
State: New, archived
Series: Introduce separate nVHE hyp context
On Thu, 03 Sep 2020 14:52:57 +0100,
Andrew Scull <ascull@google.com> wrote:
>
> During __guest_enter, save and restore from a new hyp context rather
> than the host context. This is preparation for separation of the hyp and
> host context in nVHE.
>
> Signed-off-by: Andrew Scull <ascull@google.com>
> ---

[...]

> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index b6442c6be5ad..ae4b34f91e94 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -47,6 +47,7 @@ __asm__(".arch_extension virt");
>  #endif
>  
>  DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
> +DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);

[back to this patch after having reviewed a few of the subsequent
ones]

Given the variety of contexts you are introducing, I wonder if the
best course of action for most of this isn't simply to use the EL2
stack rather than defining ad-hoc structures.

The host save/restore you are introducing in a later patch certainly
could do without a separate data structure (hello, struct pt_regs) and
the hyp/host churn.

What do you think?

	M.
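For illustration only, the EL2-stack approach Marc is suggesting could look something like the sketch below in __guest_enter: the host's callee-saved registers are spilled to the hyp stack on entry and reloaded on exit, with no per-CPU context structure involved. This is not code from the series; the frame size and register ordering are arbitrary choices for the example.

	// Rough sketch of Marc's suggestion: keep the host's callee-saved
	// registers on the EL2 stack for the duration of the guest run.
	stp	x19, x20, [sp, #-96]!
	stp	x21, x22, [sp, #16]
	stp	x23, x24, [sp, #32]
	stp	x25, x26, [sp, #48]
	stp	x27, x28, [sp, #64]
	stp	x29, x30, [sp, #80]

	// ... switch to the guest and run it ...

	// On the way back to the host, pop the same frame.
	ldp	x21, x22, [sp, #16]
	ldp	x23, x24, [sp, #32]
	ldp	x25, x26, [sp, #48]
	ldp	x27, x28, [sp, #64]
	ldp	x29, x30, [sp, #80]
	ldp	x19, x20, [sp], #96

The trade-off is that a stack frame only lives for one entry/exit, whereas a per-CPU context persists and can also hold state (such as the ptrauth keys mentioned below) that is consulted outside __guest_enter.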
On Mon, Sep 07, 2020 at 02:29:11PM +0100, Marc Zyngier wrote:
> On Thu, 03 Sep 2020 14:52:57 +0100,
> Andrew Scull <ascull@google.com> wrote:
> >
> > During __guest_enter, save and restore from a new hyp context rather
> > than the host context. This is preparation for separation of the hyp and
> > host context in nVHE.

[...]

> >  DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
> > +DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
>
> [back to this patch after having reviewed a few of the subsequent
> ones]
>
> Given the variety of contexts you are introducing, I wonder if the
> best course of action for most of this isn't simply to use the EL2
> stack rather than defining ad-hoc structures.
>
> The host save/restore you are introducing in a later patch certainly
> could do without a separate data structure (hello, struct pt_regs) and
> the hyp/host churn.
>
> What do you think?

We could define the start of the stack to be the host context (IIRC,
TF-A does something along those lines). Maybe there is some locality
benefit?

The percpu definitions become less onerous in code with David's percpu
series, as the mapping to EL2 is done in bulk rather than per item.

Ptrauth switching is something that doesn't fall under pt_regs (it's no
longer in this series, but will need to be switched later on).

I had chosen to reuse the existing structs, but a host-specialized
context might be preferred?
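To make the first point above concrete, a purely hypothetical layout for "the start of the stack is the host context" might look like the sketch below. struct kvm_cpu_context and the per-CPU hyp stack page exist in the kernel, but the hyp_stack_page type and this arrangement are illustrative only, not something proposed in the series.

/*
 * Hypothetical sketch: carve the host context out of the top of the
 * per-CPU EL2 stack page. The stack grows down from just below
 * host_ctxt, so the context can be located from the stack pointer
 * (page base plus a fixed offset) without a separate per-CPU variable.
 */
struct hyp_stack_page {
	unsigned long		stack[(PAGE_SIZE - sizeof(struct kvm_cpu_context)) /
				      sizeof(unsigned long)];
	struct kvm_cpu_context	host_ctxt;
};

Whether the locality benefit Andrew mentions materialises would depend on how often the context is touched relative to ordinary stack traffic.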
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 1e2491da324e..0b525e05e5bf 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -12,6 +12,7 @@
 #include <asm/alternative.h>
 #include <asm/sysreg.h>
 
+DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DECLARE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 #define read_sysreg_elx(r,nvh,vh)					\
@@ -89,7 +90,7 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
 void deactivate_traps_vhe_put(void);
 #endif
 
-u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
+u64 __guest_enter(struct kvm_vcpu *vcpu);
 
 void __noreturn hyp_panic(void);
 #ifdef __KVM_NVHE_HYPERVISOR__
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 54bb0eb34b0f..9f419e4fc66b 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -71,6 +71,7 @@ KVM_NVHE_ALIAS(kvm_update_va_mask);
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(arm64_ssbd_callback_required);
 KVM_NVHE_ALIAS(kvm_host_data);
+KVM_NVHE_ALIAS(kvm_hyp_ctxt);
 KVM_NVHE_ALIAS(kvm_hyp_vector);
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b6442c6be5ad..ae4b34f91e94 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -47,6 +47,7 @@ __asm__(".arch_extension virt");
 #endif
 
 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
+DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
@@ -1542,6 +1543,7 @@ static int init_hyp_mode(void)
 
 	for_each_possible_cpu(cpu) {
 		struct kvm_host_data *cpu_data;
+		struct kvm_cpu_context *hyp_ctxt;
 		unsigned long *vector;
 
 		cpu_data = per_cpu_ptr(&kvm_host_data, cpu);
@@ -1552,6 +1554,14 @@ static int init_hyp_mode(void)
 			goto out_err;
 		}
 
+		hyp_ctxt = per_cpu_ptr(&kvm_hyp_ctxt, cpu);
+		err = create_hyp_mappings(hyp_ctxt, hyp_ctxt + 1, PAGE_HYP);
+
+		if (err) {
+			kvm_err("Cannot map hyp context: %d\n", err);
+			goto out_err;
+		}
+
 		vector = per_cpu_ptr(&kvm_hyp_vector, cpu);
 		err = create_hyp_mappings(vector, vector + 1, PAGE_HYP);
 
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 76e7eaf4675e..9551d7f186da 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -57,15 +57,15 @@
 .endm
 
 /*
- * u64 __guest_enter(struct kvm_vcpu *vcpu,
- *		     struct kvm_cpu_context *host_ctxt);
+ * u64 __guest_enter(struct kvm_vcpu *vcpu);
  */
 SYM_FUNC_START(__guest_enter)
 	// x0: vcpu
-	// x1: host context
-	// x2-x17: clobbered by macros
+	// x1-x17: clobbered by macros
 	// x29: guest context
 
+	hyp_adr_this_cpu x1, kvm_hyp_ctxt, x2
+
 	// Store the host regs
 	save_callee_saved_regs x1
 
@@ -148,7 +148,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
 	// Store the guest's sp_el0
 	save_sp_el0	x1, x2
 
-	get_host_ctxt	x2, x3
+	hyp_adr_this_cpu x2, kvm_hyp_ctxt, x3
 
 	// Macro ptrauth_switch_to_guest format:
 	// 	ptrauth_switch_to_host(guest cxt, host cxt, tmp1, tmp2, tmp3)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 96ea3fdd0c20..afe714056b97 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -381,7 +381,7 @@ static inline bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
 	    !esr_is_ptrauth_trap(kvm_vcpu_get_esr(vcpu)))
 		return false;
 
-	ctxt = &__hyp_this_cpu_ptr(kvm_host_data)->host_ctxt;
+	ctxt = __hyp_this_cpu_ptr(kvm_hyp_ctxt);
 	__ptrauth_save_key(ctxt, APIA);
 	__ptrauth_save_key(ctxt, APIB);
 	__ptrauth_save_key(ctxt, APDA);
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 1ab773bb60ca..72d3e0119299 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -209,7 +209,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	do {
 		/* Jump in the fire! */
-		exit_code = __guest_enter(vcpu, host_ctxt);
+		exit_code = __guest_enter(vcpu);
 
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 7e99a7183320..ae6b63e45638 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -135,7 +135,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	do {
 		/* Jump in the fire! */
-		exit_code = __guest_enter(vcpu, host_ctxt);
+		exit_code = __guest_enter(vcpu);
 
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
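A note on the helper used in the entry.S hunks: hyp_adr_this_cpu is not added by this patch; at this point in the series it already exists in asm/kvm_asm.h, and it is what the old get_host_ctxt macro was built on (get_host_ctxt resolved kvm_host_data the same way and then added the host_ctxt offset). Its definition is roughly:

.macro hyp_adr_this_cpu reg, sym, tmp
	adr_l	\reg, \sym		// PC-relative address of the per-CPU symbol
	mrs	\tmp, tpidr_el2		// this CPU's per-CPU offset, kept in TPIDR_EL2 at hyp
	add	\reg, \reg, \tmp	// \reg = this CPU's copy of \sym
.endm

So the change above simply swaps which per-CPU object is resolved: kvm_hyp_ctxt instead of the host_ctxt embedded in kvm_host_data.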
During __guest_enter, save and restore from a new hyp context rather
than the host context. This is preparation for separation of the hyp and
host context in nVHE.

Signed-off-by: Andrew Scull <ascull@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h        |  3 ++-
 arch/arm64/kernel/image-vars.h          |  1 +
 arch/arm64/kvm/arm.c                    | 10 ++++++++++
 arch/arm64/kvm/hyp/entry.S              | 10 +++++-----
 arch/arm64/kvm/hyp/include/hyp/switch.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/switch.c        |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c         |  2 +-
 7 files changed, 21 insertions(+), 9 deletions(-)