arm64: KVM: Invoke compute_layout() before alternatives are applied

Message ID 20191016165953.o6ogh4fdmsjmd2sw@linutronix.de (mailing list archive)
State New, archived
Series arm64: KVM: Invoke compute_layout() before alternatives are applied

Commit Message

Sebastian Andrzej Siewior Oct. 16, 2019, 4:59 p.m. UTC
compute_layout() is invoked as part of an alternative fixup under
stop_machine(). This function invokes get_random_long() which acquires a
sleeping lock on -RT which cannot be acquired in this context.

Rename compute_layout() to kvm_compute_layout() and invoke it before
stop_machine() invokes the fixups. Add a __init prefix to
kvm_compute_layout() because the caller has it, too (and so the code can
be discarded after boot).

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/arm64/include/asm/alternative.h |  6 ++++++
 arch/arm64/kernel/alternative.c      |  1 +
 arch/arm64/kvm/va_layout.c           | 11 ++++-------
 3 files changed, 11 insertions(+), 7 deletions(-)

Comments

James Morse Oct. 17, 2019, 5:41 p.m. UTC | #1
Hi Sebastian,

On 16/10/2019 17:59, Sebastian Andrzej Siewior wrote:
> compute_layout() is invoked as part of an alternative fixup under
> stop_machine(). This function invokes get_random_long() which acquires a
> sleeping lock on -RT which cannot be acquired in this context.

> Rename compute_layout() to kvm_compute_layout() and invoke it before
> stop_machine() invokes the fixups.

Nit: stop_machine() applies the alternatives.


> Add a __init prefix to
> kvm_compute_layout() because the caller has it, too (and so the code can
> be discarded after boot).


> diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
> index b9f8d787eea9f..7532f044d713b 100644
> --- a/arch/arm64/include/asm/alternative.h
> +++ b/arch/arm64/include/asm/alternative.h
> @@ -35,6 +35,12 @@ void apply_alternatives_module(void *start, size_t length);
>  static inline void apply_alternatives_module(void *start, size_t length) { }
>  #endif
>  
> +#ifdef CONFIG_KVM_ARM_HOST
> +void kvm_compute_layout(void);
> +#else
> +static inline void kvm_compute_layout(void) { }
> +#endif

I don't think alternative.h is where this belongs... Could you move it to kvm_mmu.h, which
is where the kvm_update_va_mask macro that depends on it lives?

You can avoid the #ifdef if you use if (IS_ENABLED()) in the caller.
This has the advantage that the compiler will catch invalid C regardless of the build
options. (And it's easier on the eye.)
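
For illustration, a minimal sketch of that shape (a hypothetical hunk,
not part of this series): the prototype lives in asm/kvm_mmu.h with no
#ifdef, and the caller guards the call instead:

/* arch/arm64/include/asm/kvm_mmu.h: declared unconditionally */
void kvm_compute_layout(void);

/* at the call site: constant-folded away when KVM is not built in */
if (IS_ENABLED(CONFIG_KVM_ARM_HOST))
	kvm_compute_layout();

With CONFIG_KVM_ARM_HOST=n, IS_ENABLED() is compile-time 0, so the call
is dead-code-eliminated and never creates an undefined reference, while
the compiler still type-checks it.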


> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index d1757ef1b1e74..c28652ee06f64 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -238,6 +238,7 @@ static int __apply_alternatives_multi_stop(void *unused)
>  void __init apply_alternatives_all(void)
>  {
>  	/* better not try code patching on a live SMP system */
> +	kvm_compute_layout();
>  	stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
>  }

This is a funny place to do this kvm check; it's not needed to apply the alternatives.

apply_alternatives_all() is only called from smp_cpus_done(), immediately before it calls
hyp_mode_check(). Could we move it there to live with the 'started at EL2' message?

(To save you battling the header-jungle: to include asm/kvm_mmu.h, you need to include
linux/kvm_host.h first.)
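
Putting the two suggestions together, the destination could look
something like this (a sketch against the arm64 smp.c of that era, not
a hunk from this thread):

/* arch/arm64/kernel/smp.c -- sketch only */
#include <linux/kvm_host.h>	/* must come before asm/kvm_mmu.h */
#include <asm/kvm_mmu.h>

static void __init hyp_mode_check(void)
{
	if (is_hyp_mode_available())
		pr_info("CPU: All CPU(s) started at EL2\n");
	else if (is_hyp_mode_mismatched())
		WARN_TAINT(1, TAINT_CPU_OUT_OF_SPEC,
			   "CPU: CPUs started in inconsistent modes");
	else
		pr_info("CPU: All CPU(s) started at EL1\n");

	/* run once, before stop_machine() patches in the alternatives */
	if (IS_ENABLED(CONFIG_KVM_ARM_HOST))
		kvm_compute_layout();
}

smp_cpus_done() calls hyp_mode_check() immediately before
apply_alternatives_all(), so the layout would be computed exactly once,
before any alternative is applied.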


We end up calling it unconditionally, but I don't think that matters; both callers do the
right thing.

With that:
Reviewed-by: James Morse <james.morse@arm.com>


Thanks,

James

Patch

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index b9f8d787eea9f..7532f044d713b 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -35,6 +35,12 @@ void apply_alternatives_module(void *start, size_t length);
 static inline void apply_alternatives_module(void *start, size_t length) { }
 #endif
 
+#ifdef CONFIG_KVM_ARM_HOST
+void kvm_compute_layout(void);
+#else
+static inline void kvm_compute_layout(void) { }
+#endif
+
 #define ALTINSTR_ENTRY(feature,cb)					      \
 	" .word 661b - .\n"				/* label           */ \
 	" .if " __stringify(cb) " == 0\n"				      \
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index d1757ef1b1e74..c28652ee06f64 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -238,6 +238,7 @@ static int __apply_alternatives_multi_stop(void *unused)
 void __init apply_alternatives_all(void)
 {
 	/* better not try code patching on a live SMP system */
+	kvm_compute_layout();
 	stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
 }
 
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index 2cf7d4b606c38..0b9fbdf0aaddd 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -22,12 +22,15 @@ static u8 tag_lsb
 static u64 tag_val;
 static u64 va_mask;
 
-static void compute_layout(void)
+__init void kvm_compute_layout(void)
 {
 	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
 	u64 hyp_va_msb;
 	int kva_msb;
 
+	if (WARN_ON(va_mask))
+		return;
+
 	/* Where is my RAM region? */
 	hyp_va_msb  = idmap_addr & BIT(vabits_actual - 1);
 	hyp_va_msb ^= BIT(vabits_actual - 1);
@@ -110,9 +113,6 @@ void __init kvm_update_va_mask(struct alt_instr *alt,
 
 	BUG_ON(nr_inst != 5);
 
-	if (!has_vhe() && !va_mask)
-		compute_layout();
-
 	for (i = 0; i < nr_inst; i++) {
 		u32 rd, rn, insn, oinsn;
 
@@ -156,9 +156,6 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
 		return;
 	}
 
-	if (!va_mask)
-		compute_layout();
-
 	/*
 	 * Compute HYP VA by using the same computation as kern_hyp_va()
 	 */