Message ID | c19cf600-5971-457b-936d-77a035ab6913@suse.com (mailing list archive)
---|---
State | New
Series | x86: arrange for ENDBR zapping from <vendor>_ctxt_switch_masking()
On 16/01/2024 4:53 pm, Jan Beulich wrote:
> While altcall is already used for them, the functions want announcing in
> .init.rodata.cf_clobber, even if the resulting static variables aren't
> otherwise used.
>
> While doing this also move ctxt_switch_masking to .data.ro_after_init.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -258,6 +258,11 @@ static void cf_check amd_ctxt_switch_mas
>  #undef LAZY
>  }
>
> +#ifdef CONFIG_XEN_IBT /* Announce the function to ENDBR clobbering logic. */
> +static const typeof(ctxt_switch_masking) __initconst_cf_clobber __used csm =
> +    amd_ctxt_switch_masking;
> +#endif

If we gain more of these, I suspect we'll want a wrapper for it.

Irritatingly you can't pass parameters into global asm, because the nice
way to do this would be an _ASM_PTR in a pushsection.

~Andrew
On 01.02.2024 22:12, Andrew Cooper wrote:
> On 16/01/2024 4:53 pm, Jan Beulich wrote:
>> While altcall is already used for them, the functions want announcing in
>> .init.rodata.cf_clobber, even if the resulting static variables aren't
>> otherwise used.
>>
>> While doing this also move ctxt_switch_masking to .data.ro_after_init.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

>> --- a/xen/arch/x86/cpu/amd.c
>> +++ b/xen/arch/x86/cpu/amd.c
>> @@ -258,6 +258,11 @@ static void cf_check amd_ctxt_switch_mas
>>  #undef LAZY
>>  }
>>
>> +#ifdef CONFIG_XEN_IBT /* Announce the function to ENDBR clobbering logic. */
>> +static const typeof(ctxt_switch_masking) __initconst_cf_clobber __used csm =
>> +    amd_ctxt_switch_masking;
>> +#endif
>
> If we gain more of these, I suspect we'll want a wrapper for it.
>
> Irritatingly you can't pass parameters into global asm, because the nice
> way to do this would be an _ASM_PTR in a pushsection.

While I'm not convinced resorting to asm() here would indeed be a good
thing, for very many years I've been carrying a gcc change to permit
exactly this. I don't even recall anymore why it wasn't liked upstream.

Jan
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -258,6 +258,11 @@ static void cf_check amd_ctxt_switch_mas
 #undef LAZY
 }
 
+#ifdef CONFIG_XEN_IBT /* Announce the function to ENDBR clobbering logic. */
+static const typeof(ctxt_switch_masking) __initconst_cf_clobber __used csm =
+    amd_ctxt_switch_masking;
+#endif
+
 /*
  * Mask the features and extended features returned by CPUID. Parameters are
  * set from the boot line via two methods:
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -119,7 +119,7 @@ static const struct cpu_dev __initconst_
 static const struct cpu_dev *this_cpu = &default_cpu;
 static DEFINE_PER_CPU(uint64_t, msr_misc_features);
 
-void (* __read_mostly ctxt_switch_masking)(const struct vcpu *next);
+void (* __ro_after_init ctxt_switch_masking)(const struct vcpu *next);
 
 bool __init probe_cpuid_faulting(void)
 {
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -220,6 +220,11 @@ static void cf_check intel_ctxt_switch_m
 #undef LAZY
 }
 
+#ifdef CONFIG_XEN_IBT /* Announce the function to ENDBR clobbering logic. */
+static const typeof(ctxt_switch_masking) __initconst_cf_clobber __used csm =
+    intel_ctxt_switch_masking;
+#endif
+
 /*
  * opt_cpuid_mask_ecx/edx: cpuid.1[ecx, edx] feature mask.
  * For example, E8400[Intel Core 2 Duo Processor series] ecx = 0x0008E3FD,
While altcall is already used for them, the functions want announcing in
.init.rodata.cf_clobber, even if the resulting static variables aren't
otherwise used.

While doing this also move ctxt_switch_masking to .data.ro_after_init.

Signed-off-by: Jan Beulich <jbeulich@suse.com>