Message ID | 20210527150526.271941-5-pasha.tatashin@soleen.com (mailing list archive)
---|---
State | New, archived
Series | arm64: MMU enabled kexec relocation
On Thu, 27 May 2021 16:05:12 +0100,
Pavel Tatashin <pasha.tatashin@soleen.com> wrote:
>
> Replace places that contain logic like this:
> 	is_hyp_mode_available() && !is_kernel_in_hyp_mode()
>
> With a dedicated boolean function is_hyp_callable(). This will be needed
> later in kexec in order to sooner switch back to EL2.

This looks like the very definition of "run in nVHE mode", so I'd
rather you call it like this, rather than "callable", which is
extremely ambiguous (if running at EL2, I call it any time I want, for
free).

>
> Suggested-by: James Morse <james.morse@arm.com>
>
> [Fixed merging issues]
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> ---
>  arch/arm64/include/asm/virt.h | 5 +++++
>  arch/arm64/kernel/cpu-reset.h | 3 +--
>  arch/arm64/kernel/hibernate.c | 9 +++------
>  arch/arm64/kernel/sdei.c      | 2 +-
>  4 files changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 7379f35ae2c6..4216c8623538 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -128,6 +128,11 @@ static __always_inline bool is_protected_kvm_enabled(void)
>  	return cpus_have_final_cap(ARM64_KVM_PROTECTED_MODE);
>  }
>
> +static inline bool is_hyp_callable(void)
> +{
> +	return is_hyp_mode_available() && !is_kernel_in_hyp_mode();
> +}

nit: consider switching the two members of the expression so that you
don't have extra memory accesses when running at EL2.

> +
>  #endif /* __ASSEMBLY__ */
>
>  #endif /* ! __ASM__VIRT_H */
> diff --git a/arch/arm64/kernel/cpu-reset.h b/arch/arm64/kernel/cpu-reset.h
> index 9a7b1262ef17..48d0ed48c147 100644
> --- a/arch/arm64/kernel/cpu-reset.h
> +++ b/arch/arm64/kernel/cpu-reset.h
> @@ -20,8 +20,7 @@ static inline void __noreturn __nocfi cpu_soft_restart(unsigned long entry,
>  {
>  	typeof(__cpu_soft_restart) *restart;
>
> -	unsigned long el2_switch = !is_kernel_in_hyp_mode() &&
> -		is_hyp_mode_available();
> +	unsigned long el2_switch = is_hyp_callable();
>  	restart = (void *)__pa_symbol(function_nocfi(__cpu_soft_restart));
>
>  	cpu_install_idmap();
> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index b1cef371df2b..c764574a1acb 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -48,9 +48,6 @@
>   */
>  extern int in_suspend;
>
> -/* Do we need to reset el2? */
> -#define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
> -

Please keep the macro, as it explains *why* we're doing things (we
need to reset EL2), and replacing it with a generic macro drops the
documentation aspect.

>  /* temporary el2 vectors in the __hibernate_exit_text section. */
>  extern char hibernate_el2_vectors[];
>
> @@ -125,7 +122,7 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
>  	hdr->reenter_kernel = _cpu_resume;
>
>  	/* We can't use __hyp_get_vectors() because kvm may still be loaded */
> -	if (el2_reset_needed())
> +	if (is_hyp_callable())
>  		hdr->__hyp_stub_vectors = __pa_symbol(__hyp_stub_vectors);
>  	else
>  		hdr->__hyp_stub_vectors = 0;
> @@ -387,7 +384,7 @@ int swsusp_arch_suspend(void)
>  	dcache_clean_range(__idmap_text_start, __idmap_text_end);
>
>  	/* Clean kvm setup code to PoC? */
> -	if (el2_reset_needed()) {
> +	if (is_hyp_callable()) {
>  		dcache_clean_range(__hyp_idmap_text_start, __hyp_idmap_text_end);
>  		dcache_clean_range(__hyp_text_start, __hyp_text_end);
>  	}
> @@ -482,7 +479,7 @@ int swsusp_arch_resume(void)
>  	 *
>  	 * We can skip this step if we booted at EL1, or are running with VHE.
>  	 */
> -	if (el2_reset_needed()) {
> +	if (is_hyp_callable()) {
>  		phys_addr_t el2_vectors = (phys_addr_t)hibernate_exit;
>  		el2_vectors += hibernate_el2_vectors -
>  			__hibernate_exit_text_start;	/* offset */
> diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
> index 2c7ca449dd51..af0ac2f920cf 100644
> --- a/arch/arm64/kernel/sdei.c
> +++ b/arch/arm64/kernel/sdei.c
> @@ -200,7 +200,7 @@ unsigned long sdei_arch_get_entry_point(int conduit)
>  	 * dropped to EL1 because we don't support VHE, then we can't support
>  	 * SDEI.
>  	 */
> -	if (is_hyp_mode_available() && !is_kernel_in_hyp_mode()) {
> +	if (is_hyp_callable()) {
>  		pr_err("Not supported on this hardware/boot configuration\n");
>  		goto out_err;
>  	}
> --
> 2.25.1

Thanks,

M.
On Tue, Jun 1, 2021 at 8:38 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Thu, 27 May 2021 16:05:12 +0100,
> Pavel Tatashin <pasha.tatashin@soleen.com> wrote:
> >
> > Replace places that contain logic like this:
> > 	is_hyp_mode_available() && !is_kernel_in_hyp_mode()
> >
> > With a dedicated boolean function is_hyp_callable(). This will be needed
> > later in kexec in order to sooner switch back to EL2.
>
> This looks like the very definition of "run in nVHE mode", so I'd
> rather you call it like this, rather than "callable", which is
> extremely ambiguous (if running at EL2, I call it any time I want, for
> free).

Hi Marc,

Naming is hard. Are you proposing s/is_hyp_callable/run_in_nvhe_mode/ ?
This is also not a very good name because it does not sound like a
boolean, but instead that we know that there is nvhe mode available and
we can switch to it.

>
> >
> > Suggested-by: James Morse <james.morse@arm.com>
> >
> > [Fixed merging issues]
> >
> > Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> > ---
> >  arch/arm64/include/asm/virt.h | 5 +++++
> >  arch/arm64/kernel/cpu-reset.h | 3 +--
> >  arch/arm64/kernel/hibernate.c | 9 +++------
> >  arch/arm64/kernel/sdei.c      | 2 +-
> >  4 files changed, 10 insertions(+), 9 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> > index 7379f35ae2c6..4216c8623538 100644
> > --- a/arch/arm64/include/asm/virt.h
> > +++ b/arch/arm64/include/asm/virt.h
> > @@ -128,6 +128,11 @@ static __always_inline bool is_protected_kvm_enabled(void)
> >  	return cpus_have_final_cap(ARM64_KVM_PROTECTED_MODE);
> >  }
> >
> > +static inline bool is_hyp_callable(void)
> > +{
> > +	return is_hyp_mode_available() && !is_kernel_in_hyp_mode();
> > +}
>
> nit: consider switching the two members of the expression so that you
> don't have extra memory accesses when running at EL2.

Sure, I will do that.

> > -/* Do we need to reset el2? */
> > -#define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
> > -
>
> Please keep the macro, as it explains *why* we're doing things (we
> need to reset EL2), and replacing it with a generic macro drops the
> documentation aspect.

OK, I will keep the macro, and redefine it to use the common macro.

Thank you,
Pasha
On Wed, 02 Jun 2021 02:33:52 +0100,
Pavel Tatashin <pasha.tatashin@soleen.com> wrote:
>
> On Tue, Jun 1, 2021 at 8:38 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Thu, 27 May 2021 16:05:12 +0100,
> > Pavel Tatashin <pasha.tatashin@soleen.com> wrote:
> > >
> > > Replace places that contain logic like this:
> > > 	is_hyp_mode_available() && !is_kernel_in_hyp_mode()
> > >
> > > With a dedicated boolean function is_hyp_callable(). This will be needed
> > > later in kexec in order to sooner switch back to EL2.
> >
> > This looks like the very definition of "run in nVHE mode", so I'd
> > rather you call it like this, rather than "callable", which is
> > extremely ambiguous (if running at EL2, I call it any time I want, for
> > free).
>
> Hi Marc,
>
> Naming is hard.

News flash!

> Are you proposing s/is_hyp_callable/run_in_nvhe_mode/ ? This is also
> not a very good name because it does not sound like a boolean, but
> instead that we know that there is nvhe mode available and we can
> switch to it.

No, what I suggest is "is_hyp_nvhe()", or something along those lines.
It clearly identifies that we are in control of EL2, and which mode it
is in.

M.
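For readers following along, the shape the thread converges on can be sketched in a few lines of self-contained userspace C. This is only an illustration, not the actual patch revision: the two kernel predicates are stubbed with plain booleans and a counter (in the kernel, is_kernel_in_hyp_mode() is a cheap CurrentEL register read while is_hyp_mode_available() consults boot-mode state in memory), so the short-circuit benefit Marc's nit describes is observable, and el2_reset_needed() is kept as a documented alias per the hibernate discussion.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the real arm64 predicates, stubbed for illustration. */
static bool booted_at_el2 = true;	/* models __boot_cpu_mode state */
static bool running_vhe_at_el2 = true;	/* models CurrentEL == EL2 (VHE) */
static int memory_reads;		/* counts memory-backed checks */

static bool is_kernel_in_hyp_mode(void)
{
	return running_vhe_at_el2;	/* register read: essentially free */
}

static bool is_hyp_mode_available(void)
{
	memory_reads++;			/* models the extra memory access */
	return booted_at_el2;
}

/* The name Marc suggests, with the cheap register check first so the
 * memory access is short-circuited away when already running at EL2. */
static bool is_hyp_nvhe(void)
{
	return !is_kernel_in_hyp_mode() && is_hyp_mode_available();
}

/* Do we need to reset el2? Kept as an alias for its documentation value. */
#define el2_reset_needed()	is_hyp_nvhe()
```

With these stubs, a VHE kernel evaluates is_hyp_nvhe() without ever touching the memory-backed check, which is exactly the point of reordering the operands.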
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 7379f35ae2c6..4216c8623538 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -128,6 +128,11 @@ static __always_inline bool is_protected_kvm_enabled(void)
 	return cpus_have_final_cap(ARM64_KVM_PROTECTED_MODE);
 }
 
+static inline bool is_hyp_callable(void)
+{
+	return is_hyp_mode_available() && !is_kernel_in_hyp_mode();
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* ! __ASM__VIRT_H */
diff --git a/arch/arm64/kernel/cpu-reset.h b/arch/arm64/kernel/cpu-reset.h
index 9a7b1262ef17..48d0ed48c147 100644
--- a/arch/arm64/kernel/cpu-reset.h
+++ b/arch/arm64/kernel/cpu-reset.h
@@ -20,8 +20,7 @@ static inline void __noreturn __nocfi cpu_soft_restart(unsigned long entry,
 {
 	typeof(__cpu_soft_restart) *restart;
 
-	unsigned long el2_switch = !is_kernel_in_hyp_mode() &&
-		is_hyp_mode_available();
+	unsigned long el2_switch = is_hyp_callable();
 	restart = (void *)__pa_symbol(function_nocfi(__cpu_soft_restart));
 
 	cpu_install_idmap();
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index b1cef371df2b..c764574a1acb 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -48,9 +48,6 @@
  */
 extern int in_suspend;
 
-/* Do we need to reset el2? */
-#define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
-
 /* temporary el2 vectors in the __hibernate_exit_text section. */
 extern char hibernate_el2_vectors[];
 
@@ -125,7 +122,7 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
 	hdr->reenter_kernel = _cpu_resume;
 
 	/* We can't use __hyp_get_vectors() because kvm may still be loaded */
-	if (el2_reset_needed())
+	if (is_hyp_callable())
 		hdr->__hyp_stub_vectors = __pa_symbol(__hyp_stub_vectors);
 	else
 		hdr->__hyp_stub_vectors = 0;
@@ -387,7 +384,7 @@ int swsusp_arch_suspend(void)
 	dcache_clean_range(__idmap_text_start, __idmap_text_end);
 
 	/* Clean kvm setup code to PoC? */
-	if (el2_reset_needed()) {
+	if (is_hyp_callable()) {
 		dcache_clean_range(__hyp_idmap_text_start, __hyp_idmap_text_end);
 		dcache_clean_range(__hyp_text_start, __hyp_text_end);
 	}
@@ -482,7 +479,7 @@ int swsusp_arch_resume(void)
 	 *
 	 * We can skip this step if we booted at EL1, or are running with VHE.
 	 */
-	if (el2_reset_needed()) {
+	if (is_hyp_callable()) {
 		phys_addr_t el2_vectors = (phys_addr_t)hibernate_exit;
 		el2_vectors += hibernate_el2_vectors -
 			__hibernate_exit_text_start;	/* offset */
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 2c7ca449dd51..af0ac2f920cf 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -200,7 +200,7 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 	 * dropped to EL1 because we don't support VHE, then we can't support
 	 * SDEI.
 	 */
-	if (is_hyp_mode_available() && !is_kernel_in_hyp_mode()) {
+	if (is_hyp_callable()) {
 		pr_err("Not supported on this hardware/boot configuration\n");
 		goto out_err;
 	}
Replace places that contain logic like this:
	is_hyp_mode_available() && !is_kernel_in_hyp_mode()

With a dedicated boolean function is_hyp_callable(). This will be needed
later in kexec in order to sooner switch back to EL2.

Suggested-by: James Morse <james.morse@arm.com>

[Fixed merging issues]

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
 arch/arm64/include/asm/virt.h | 5 +++++
 arch/arm64/kernel/cpu-reset.h | 3 +--
 arch/arm64/kernel/hibernate.c | 9 +++------
 arch/arm64/kernel/sdei.c      | 2 +-
 4 files changed, 10 insertions(+), 9 deletions(-)