diff mbox

[01/16] arm64: capabilities: Update prototype for enable call back

Message ID 20180123122809.16269-2-suzuki.poulose@arm.com (mailing list archive)
State New, archived

Commit Message

Suzuki K Poulose Jan. 23, 2018, 12:27 p.m. UTC
From: Dave Martin <dave.martin@arm.com>

We issue the enable() callback for all CPU hwcaps capabilities
available on the system, on all the CPUs. So far we have ignored
the argument passed to the callback, which had a prototype that
accepts a "void *" for use with on_each_cpu() and later with
stop_machine(). However, with commit 0a0d111d40fd1
("arm64: cpufeature: Pass capability structure to ->enable callback"),
there are some users of the argument that want the matching capability
struct pointer where there are multiple matching criteria for a single
capability. Update the prototype for enable to accept a const pointer.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: James Morse <james.morse@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Dave Martin <dave.martin@arm.com>
[ Rebased to for-next/core converting more users ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |  3 ++-
 arch/arm64/include/asm/fpsimd.h     |  4 +++-
 arch/arm64/include/asm/processor.h  |  7 ++++---
 arch/arm64/kernel/cpu_errata.c      | 14 ++++++--------
 arch/arm64/kernel/cpufeature.c      | 16 ++++++++++++----
 arch/arm64/kernel/fpsimd.c          |  3 ++-
 arch/arm64/kernel/traps.c           |  3 ++-
 arch/arm64/mm/fault.c               |  2 +-
 8 files changed, 32 insertions(+), 20 deletions(-)

Comments

Dave Martin Jan. 23, 2018, 2:52 p.m. UTC | #1
On Tue, Jan 23, 2018 at 12:27:54PM +0000, Suzuki K Poulose wrote:
> From: Dave Martin <dave.martin@arm.com>
> 
> We issue the enable() call back for all CPU hwcaps capabilities
> available on the system, on all the CPUs. So far we have ignored
> the argument passed to the call back, which had a prototype to
> accept a "void *" for use with on_each_cpu() and later with
> stop_machine(). However, with commit 0a0d111d40fd1
> ("arm64: cpufeature: Pass capability structure to ->enable callback"),
> there are some users of the argument who wants the matching capability
> struct pointer where there are multiple matching criteria for a single
> capability. Update the prototype for enable to accept a const pointer.
> 
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Andre Przywara <andre.przywara@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
> Signed-off-by: Dave Martin <dave.martin@arm.com>
> [ Rebased to for-next/core converting more users ]
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  arch/arm64/include/asm/cpufeature.h |  3 ++-
>  arch/arm64/include/asm/fpsimd.h     |  4 +++-
>  arch/arm64/include/asm/processor.h  |  7 ++++---
>  arch/arm64/kernel/cpu_errata.c      | 14 ++++++--------
>  arch/arm64/kernel/cpufeature.c      | 16 ++++++++++++----
>  arch/arm64/kernel/fpsimd.c          |  3 ++-
>  arch/arm64/kernel/traps.c           |  3 ++-
>  arch/arm64/mm/fault.c               |  2 +-
>  8 files changed, 32 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index ac67cfc2585a..cefbd685292c 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -97,7 +97,8 @@ struct arm64_cpu_capabilities {
>  	u16 capability;
>  	int def_scope;			/* default scope */
>  	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
> -	int (*enable)(void *);		/* Called on all active CPUs */
> +	/* Called on all active CPUs for all  "available" capabilities */

Nit: Odd spacing?  Also, "available" doesn't really make sense for errata
workarounds.

Maybe applicable would be a better word?

> +	int (*enable)(const struct arm64_cpu_capabilities *caps);

Alternatively, if the comment is liable to be ambiguous, maybe it would
be better to delete it.  The explicit argument type already makes this
more self-documenting than previously.

I don't feel that strongly either way though; probably not worth a
respin unless you have other things to change.

Also please note that I didn't test the original patch here (in case
I didn't point that out already...)

[...]

Cheers
---Dave
Suzuki K Poulose Jan. 23, 2018, 3:38 p.m. UTC | #2
On 23/01/18 14:52, Dave Martin wrote:
> On Tue, Jan 23, 2018 at 12:27:54PM +0000, Suzuki K Poulose wrote:
>> From: Dave Martin <dave.martin@arm.com>
>>
>> We issue the enable() call back for all CPU hwcaps capabilities
>> available on the system, on all the CPUs. So far we have ignored
>> the argument passed to the call back, which had a prototype to
>> accept a "void *" for use with on_each_cpu() and later with
>> stop_machine(). However, with commit 0a0d111d40fd1
>> ("arm64: cpufeature: Pass capability structure to ->enable callback"),
>> there are some users of the argument who wants the matching capability
>> struct pointer where there are multiple matching criteria for a single
>> capability. Update the prototype for enable to accept a const pointer.
>>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Robin Murphy <robin.murphy@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Andre Przywara <andre.przywara@arm.com>
>> Cc: James Morse <james.morse@arm.com>
>> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
>> Signed-off-by: Dave Martin <dave.martin@arm.com>
>> [ Rebased to for-next/core converting more users ]
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> ---
>>   arch/arm64/include/asm/cpufeature.h |  3 ++-
>>   arch/arm64/include/asm/fpsimd.h     |  4 +++-
>>   arch/arm64/include/asm/processor.h  |  7 ++++---
>>   arch/arm64/kernel/cpu_errata.c      | 14 ++++++--------
>>   arch/arm64/kernel/cpufeature.c      | 16 ++++++++++++----
>>   arch/arm64/kernel/fpsimd.c          |  3 ++-
>>   arch/arm64/kernel/traps.c           |  3 ++-
>>   arch/arm64/mm/fault.c               |  2 +-
>>   8 files changed, 32 insertions(+), 20 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index ac67cfc2585a..cefbd685292c 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -97,7 +97,8 @@ struct arm64_cpu_capabilities {
>>   	u16 capability;
>>   	int def_scope;			/* default scope */
>>   	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
>> -	int (*enable)(void *);		/* Called on all active CPUs */
>> +	/* Called on all active CPUs for all  "available" capabilities */
> 
> Nit: Odd spacing?  Also, "available" doesn't really make sense for errata
> workarounds.
> 

Thanks for spotting, will fix it.

> Maybe applicable would be a better word?
> 

There is a subtle difference. If there are two entries for a capability
and only one of them matches, we end up calling enable() for both
entries. "Applicable" could potentially be misunderstood, leading
to the assumption that enable() is called only if that "entry" matches,
which is not true. I accept that "available" doesn't sound any better either.


>> +	int (*enable)(const struct arm64_cpu_capabilities *caps);
> 
> Alternatively, if the comment is liable to be ambiguous, maybe it would
> be better to delete it.  The explicit argument type already makes this
> more self-documenting than previously.

I think we still need to make it clear that enable() is called on
all active CPUs. It is not about the argument anymore.

How about:

/*
  * Called on all active CPUs if the capability associated with
  * this entry is set.
  */


> 
> I don't feel that strongly either way though; probably not worth a
> respin unless you have other things to change.
> 
> Also please note that I didn't test the original patch here (in case
> I didn't point that out already...)

I think I did test it using the HW DBM feature (see the patch at the end).
However, I will do some more testing with it. Thanks for mentioning it.

Suzuki
Dave Martin Jan. 25, 2018, 3:36 p.m. UTC | #3
On Tue, Jan 23, 2018 at 03:38:37PM +0000, Suzuki K Poulose wrote:
> On 23/01/18 14:52, Dave Martin wrote:
> >On Tue, Jan 23, 2018 at 12:27:54PM +0000, Suzuki K Poulose wrote:
> >>From: Dave Martin <dave.martin@arm.com>
> >>
> >>We issue the enable() call back for all CPU hwcaps capabilities
> >>available on the system, on all the CPUs. So far we have ignored
> >>the argument passed to the call back, which had a prototype to
> >>accept a "void *" for use with on_each_cpu() and later with
> >>stop_machine(). However, with commit 0a0d111d40fd1
> >>("arm64: cpufeature: Pass capability structure to ->enable callback"),
> >>there are some users of the argument who wants the matching capability
> >>struct pointer where there are multiple matching criteria for a single
> >>capability. Update the prototype for enable to accept a const pointer.
> >>
> >>Cc: Will Deacon <will.deacon@arm.com>
> >>Cc: Robin Murphy <robin.murphy@arm.com>
> >>Cc: Catalin Marinas <catalin.marinas@arm.com>
> >>Cc: Mark Rutland <mark.rutland@arm.com>
> >>Cc: Andre Przywara <andre.przywara@arm.com>
> >>Cc: James Morse <james.morse@arm.com>
> >>Reviewed-by: Julien Thierry <julien.thierry@arm.com>
> >>Signed-off-by: Dave Martin <dave.martin@arm.com>
> >>[ Rebased to for-next/core converting more users ]
> >>Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> >>---
> >>  arch/arm64/include/asm/cpufeature.h |  3 ++-
> >>  arch/arm64/include/asm/fpsimd.h     |  4 +++-
> >>  arch/arm64/include/asm/processor.h  |  7 ++++---
> >>  arch/arm64/kernel/cpu_errata.c      | 14 ++++++--------
> >>  arch/arm64/kernel/cpufeature.c      | 16 ++++++++++++----
> >>  arch/arm64/kernel/fpsimd.c          |  3 ++-
> >>  arch/arm64/kernel/traps.c           |  3 ++-
> >>  arch/arm64/mm/fault.c               |  2 +-
> >>  8 files changed, 32 insertions(+), 20 deletions(-)
> >>
> >>diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> >>index ac67cfc2585a..cefbd685292c 100644
> >>--- a/arch/arm64/include/asm/cpufeature.h
> >>+++ b/arch/arm64/include/asm/cpufeature.h
> >>@@ -97,7 +97,8 @@ struct arm64_cpu_capabilities {
> >>  	u16 capability;
> >>  	int def_scope;			/* default scope */
> >>  	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
> >>-	int (*enable)(void *);		/* Called on all active CPUs */
> >>+	/* Called on all active CPUs for all  "available" capabilities */
> >
> >Nit: Odd spacing?  Also, "available" doesn't really make sense for errata
> >workarounds.
> >
> 
> Thanks for spotting, will fix it.
> 
> >Maybe applicable would be a better word?
> >
> 
> There is a subtle difference. If there are two entries for a capability,
> with only one of them matches, we end up calling the enable() for both
> the entries. "Applicable" could potentially be misunderstood, leading
> to assumption that the enable() is called only if that "entry" matches,
> which is not true. I accept that "available" doesn't sound any better either.
> 
> 
> >>+	int (*enable)(const struct arm64_cpu_capabilities *caps);

This probably shouldn't be "caps" here: this argument refers to a single
capability, not an array.  Also, this shouldn't be any random capability,
but the one corresponding to the enable method:

	cap->enable(cap)

(i.e., cap1->enable(cap2) is invalid, and the cpufeature framework won't
do that).

> >Alternatively, if the comment is liable to be ambiguous, maybe it would
> >be better to delete it.  The explicit argument type already makes this
> >more self-documenting than previously.
> 
> I think we still need to make it clear that the enable is called on
> all active CPUs. It is not about the argument anymore.
> 
> How about :
> 
> /*
>  * Called on all active CPUs if the capability associated with
>  * this entry is set.
>  */

Maybe, but now we have the new concept of "setting" a capability.

Really, this is enabling the capability for a CPU, not globally, so
maybe it could be renamed to "cpu_enable".

Could we describe the method in terms of what it is required to do,
as well as the circumstances of the call, e.g.:

/*
 * Take the appropriate actions to enable this capability for this cpu.
 * Every time a cpu is booted, this method is called under stop_machine()
 * for each globally enabled capability.
 */

(I'm hoping that "globally enabled" is meaningful wording, though
perhaps not.)

Also, what does the return value of this method mean?

Previously, the return value was ignored, but other patches in this
series might change that.

[...]

Cheers
---Dave
Suzuki K Poulose Jan. 25, 2018, 4:57 p.m. UTC | #4
On 25/01/18 15:36, Dave Martin wrote:
> On Tue, Jan 23, 2018 at 03:38:37PM +0000, Suzuki K Poulose wrote:
>> On 23/01/18 14:52, Dave Martin wrote:
>>> On Tue, Jan 23, 2018 at 12:27:54PM +0000, Suzuki K Poulose wrote:
>>>> From: Dave Martin <dave.martin@arm.com>
>>>>
>>>> We issue the enable() call back for all CPU hwcaps capabilities
>>>> available on the system, on all the CPUs. So far we have ignored
>>>> the argument passed to the call back, which had a prototype to
>>>> accept a "void *" for use with on_each_cpu() and later with
>>>> stop_machine(). However, with commit 0a0d111d40fd1
>>>> ("arm64: cpufeature: Pass capability structure to ->enable callback"),
>>>> there are some users of the argument who wants the matching capability
>>>> struct pointer where there are multiple matching criteria for a single
>>>> capability. Update the prototype for enable to accept a const pointer.
>>>>
>>>> Cc: Will Deacon <will.deacon@arm.com>
>>>> Cc: Robin Murphy <robin.murphy@arm.com>
>>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>>> Cc: Andre Przywara <andre.przywara@arm.com>
>>>> Cc: James Morse <james.morse@arm.com>
>>>> Reviewed-by: Julien Thierry <julien.thierry@arm.com>
>>>> Signed-off-by: Dave Martin <dave.martin@arm.com>
>>>> [ Rebased to for-next/core converting more users ]
>>>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>>>> ---
>>>>   arch/arm64/include/asm/cpufeature.h |  3 ++-
>>>>   arch/arm64/include/asm/fpsimd.h     |  4 +++-
>>>>   arch/arm64/include/asm/processor.h  |  7 ++++---
>>>>   arch/arm64/kernel/cpu_errata.c      | 14 ++++++--------
>>>>   arch/arm64/kernel/cpufeature.c      | 16 ++++++++++++----
>>>>   arch/arm64/kernel/fpsimd.c          |  3 ++-
>>>>   arch/arm64/kernel/traps.c           |  3 ++-
>>>>   arch/arm64/mm/fault.c               |  2 +-
>>>>   8 files changed, 32 insertions(+), 20 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>>>> index ac67cfc2585a..cefbd685292c 100644
>>>> --- a/arch/arm64/include/asm/cpufeature.h
>>>> +++ b/arch/arm64/include/asm/cpufeature.h
>>>> @@ -97,7 +97,8 @@ struct arm64_cpu_capabilities {
>>>>   	u16 capability;
>>>>   	int def_scope;			/* default scope */
>>>>   	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
>>>> -	int (*enable)(void *);		/* Called on all active CPUs */
>>>> +	/* Called on all active CPUs for all  "available" capabilities */
>>>
>>> Nit: Odd spacing?  Also, "available" doesn't really make sense for errata
>>> workarounds.
>>>
>>
>> Thanks for spotting, will fix it.
>>
>>> Maybe applicable would be a better word?
>>>
>>
>> There is a subtle difference. If there are two entries for a capability,
>> with only one of them matches, we end up calling the enable() for both
>> the entries. "Applicable" could potentially be misunderstood, leading
>> to assumption that the enable() is called only if that "entry" matches,
>> which is not true. I accept that "available" doesn't sound any better either.
>>
>>
>>>> +	int (*enable)(const struct arm64_cpu_capabilities *caps);
> 
> This probably shouldn't be "caps" here: this argument refers a single
> capability, not an array.  Also, this shouldn't be any random capability,
> but the one corresponding to the enable method:
> 
> 	cap->enable(cap)
> 
> (i.e., cap1->enable(cap2) is invalid, and the cpufeature framework won't
> do that).
> 

Yes, that's correct. I will change it.

>>> Alternatively, if the comment is liable to be ambiguous, maybe it would
>>> be better to delete it.  The explicit argument type already makes this
>>> more self-documenting than previously.
>>
>> I think we still need to make it clear that the enable is called on
>> all active CPUs. It is not about the argument anymore.
>>
>> How about :
>>
>> /*
>>   * Called on all active CPUs if the capability associated with
>>   * this entry is set.
>>   */
> 
> Maybe, but now we have the new concept of "setting" a capability.
> 
> Really, this is enabling the capability for a CPU, not globally, so
> maybe it could be renamed to "cpu_enable".
> 
> Could we describe the method in terms of what it is required to do,
> as well as the circumstances of the call, e.g.:
> 
> /*
>   * Take the appropriate actions to enable this capability for this cpu.
>   * Every time a cpu is booted, this method is called under stop_machine()
>   * for each globally enabled capability.
>   */

Makes sense, except for the "stop_machine" part. It is called under
stop_machine() for all CPUs brought up by the kernel, for capabilities
which are enabled from setup_cpu_features(), i.e., after all the CPUs are
booted. But it is called directly on the CPU if the CPU is booted after
the capability has already been enabled on the CPUs in the system (e.g.,
a CPU brought up by the user, via __verify_local_cpu_caps, or a
capability that is enabled early by the boot CPU; I will post the changes
in the next revision).

> 
> (I'm hoping that "globally enabled" is meaningful wording, though
> perhaps not.)

Hmm.. "globally enabled" kind of sounds recursive for describing "enable" :-).
How about "globally accepted" or "globally detected"?

> 
> Also, what does the return value of this method mean?

That's a good point. We ignore it. I believe it is best to leave it to the
method to decide what to do if there is a failure. It was there just to
make sure we complied with what stop_machine() expected. Now that we don't
pass it to stop_machine() directly, we could change it to void.
  
> Previously, the return value was ignored, but other patches in this
> series might change that.
> 

Cheers
Suzuki
Dave Martin Jan. 29, 2018, 4:35 p.m. UTC | #5
On Thu, Jan 25, 2018 at 04:57:22PM +0000, Suzuki K Poulose wrote:
> On 25/01/18 15:36, Dave Martin wrote:
> >On Tue, Jan 23, 2018 at 03:38:37PM +0000, Suzuki K Poulose wrote:
> >>On 23/01/18 14:52, Dave Martin wrote:
> >>>On Tue, Jan 23, 2018 at 12:27:54PM +0000, Suzuki K Poulose wrote:
> >>>>From: Dave Martin <dave.martin@arm.com>
> >>>>
> >>>>We issue the enable() call back for all CPU hwcaps capabilities
> >>>>available on the system, on all the CPUs. So far we have ignored
> >>>>the argument passed to the call back, which had a prototype to
> >>>>accept a "void *" for use with on_each_cpu() and later with
> >>>>stop_machine(). However, with commit 0a0d111d40fd1
> >>>>("arm64: cpufeature: Pass capability structure to ->enable callback"),
> >>>>there are some users of the argument who wants the matching capability
> >>>>struct pointer where there are multiple matching criteria for a single
> >>>>capability. Update the prototype for enable to accept a const pointer.

[...]

> >>>>diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> >>>>index ac67cfc2585a..cefbd685292c 100644
> >>>>--- a/arch/arm64/include/asm/cpufeature.h
> >>>>+++ b/arch/arm64/include/asm/cpufeature.h
> >>>>@@ -97,7 +97,8 @@ struct arm64_cpu_capabilities {
> >>>>  	u16 capability;
> >>>>  	int def_scope;			/* default scope */
> >>>>  	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
> >>>>-	int (*enable)(void *);		/* Called on all active CPUs */

[...]

> >>/*
> >>  * Called on all active CPUs if the capability associated with
> >>  * this entry is set.
> >>  */
> >
> >Maybe, but now we have the new concept of "setting" a capability.
> >
> >Really, this is enabling the capability for a CPU, not globally, so
> >maybe it could be renamed to "cpu_enable".
> >
> >Could we describe the method in terms of what it is required to do,
> >as well as the circumstances of the call, e.g.:
> >
> >/*
> >  * Take the appropriate actions to enable this capability for this cpu.
> >  * Every time a cpu is booted, this method is called under stop_machine()
> >  * for each globally enabled capability.
> >  */
> 
> Make sense, except for the "stop_machine" part. It is called under stop_machine
> for all CPUs brought up by kernel, for capabilities which are enabled from
> setup_cpu_features(), i.e, after all the CPUs are booted. But, it is called
> directly on the CPU, if the CPU is booted after it has been enabled on CPUs
> in the system. (e.g, a CPU brought up by the user - via __verify_local_cpu_caps -,
> or a capability that is enabled early by boot CPU - will post the changes in the next
> revision).

Fair enough.  Saying "this cpu" is probably sufficient to hint that
preemption will be disabled etc.

> >(I'm hoping that "globally enabled" is meaningful wording, though
> >perhaps not.)
> 
> Hmm.. "globally enabled" kind of sounds recursive for describing "enable" :-).
> How about "globally accepted" or "globally detected" ?

"Detected" is probably better, since this terminology at least exists
today in the code (if not very much).

Alternatively, the "enable" method could be renamed to something that
doesn't contain the word "enable" at all, like "cpu_setup" or similar.
That might avoid ambiguity about what is meant by "enable".  This has
no functional value though, and it's probably not worth the churn.

> >Also, what does the return value of this method mean?
> 
> Thats a good point. We ignore it. I believe it is best to leave it to the method
> to decide, what to do about it if there is a failure. It was there just to make
> sure we comply with what stop_machine expected. Now that we don't pass it to
> stop_machine directly, we could change it to void.
> >Previously, the return value was ignored, but other patches in this
> >series might change that.

If we don't change it to void, it would be good to have a comment saying
that it is ignored for now, and method implementations should return 0
for now.

Cheers
---Dave

Patch

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index ac67cfc2585a..cefbd685292c 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -97,7 +97,8 @@  struct arm64_cpu_capabilities {
 	u16 capability;
 	int def_scope;			/* default scope */
 	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
-	int (*enable)(void *);		/* Called on all active CPUs */
+	/* Called on all active CPUs for all  "available" capabilities */
+	int (*enable)(const struct arm64_cpu_capabilities *caps);
 	union {
 		struct {	/* To be used for erratum handling only */
 			u32 midr_model;
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 8857a0f0d0f7..2a4b5fc681a3 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -83,7 +83,9 @@  extern void sve_save_state(void *state, u32 *pfpsr);
 extern void sve_load_state(void const *state, u32 const *pfpsr,
 			   unsigned long vq_minus_1);
 extern unsigned int sve_get_vl(void);
-extern int sve_kernel_enable(void *);
+
+struct arm64_cpu_capabilities;
+extern int sve_kernel_enable(const struct arm64_cpu_capabilities *__unused);
 
 extern int __ro_after_init sve_max_vl;
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index cee4ae25a5d1..ff4c753a75fe 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -34,6 +34,7 @@ 
 #include <linux/string.h>
 
 #include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/fpsimd.h>
 #include <asm/hw_breakpoint.h>
 #include <asm/lse.h>
@@ -214,9 +215,9 @@  static inline void spin_lock_prefetch(const void *ptr)
 
 #endif
 
-int cpu_enable_pan(void *__unused);
-int cpu_enable_cache_maint_trap(void *__unused);
-int cpu_clear_disr(void *__unused);
+int cpu_enable_pan(const struct arm64_cpu_capabilities *__unused);
+int cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused);
+int cpu_clear_disr(const struct arm64_cpu_capabilities *__unused);
 
 /* Userspace interface for PR_SVE_{SET,GET}_VL prctl()s: */
 #define SVE_SET_VL(arg)	sve_set_current_vl(arg)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 54e41dfe41f6..9ae0d7e395cf 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -53,7 +53,8 @@  has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
 		(arm64_ftr_reg_ctrel0.sys_val & arm64_ftr_reg_ctrel0.strict_mask);
 }
 
-static int cpu_enable_trap_ctr_access(void *__unused)
+static int cpu_enable_trap_ctr_access(
+	const struct arm64_cpu_capabilities *__unused)
 {
 	/* Clear SCTLR_EL1.UCT */
 	config_sctlr_el1(SCTLR_EL1_UCT, 0);
@@ -144,10 +145,8 @@  static void  install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
 
 #include <linux/psci.h>
 
-static int enable_psci_bp_hardening(void *data)
+static int enable_psci_bp_hardening(const struct arm64_cpu_capabilities *entry)
 {
-	const struct arm64_cpu_capabilities *entry = data;
-
 	if (psci_ops.get_version)
 		install_bp_hardening_cb(entry,
 				       (bp_hardening_cb_t)psci_ops.get_version,
@@ -169,10 +168,9 @@  static void qcom_link_stack_sanitization(void)
 		     : "=&r" (tmp));
 }
 
-static int qcom_enable_link_stack_sanitization(void *data)
+static int qcom_enable_link_stack_sanitization(
+			const struct arm64_cpu_capabilities *entry)
 {
-	const struct arm64_cpu_capabilities *entry = data;
-
 	install_bp_hardening_cb(entry, qcom_link_stack_sanitization,
 				__qcom_hyp_sanitize_link_stack_start,
 				__qcom_hyp_sanitize_link_stack_end);
@@ -376,7 +374,7 @@  void verify_local_cpu_errata_workarounds(void)
 	for (; caps->matches; caps++) {
 		if (cpus_have_cap(caps->capability)) {
 			if (caps->enable)
-				caps->enable((void *)caps);
+				caps->enable(caps);
 		} else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
 			pr_crit("CPU%d: Requires work around for %s, not detected"
 					" at boot time\n",
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5612d6f46331..6a8dfdc532b1 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -887,7 +887,7 @@  static int __init parse_kpti(char *str)
 __setup("kpti=", parse_kpti);
 #endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
-static int cpu_copy_el2regs(void *__unused)
+static int cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
 {
 	/*
 	 * Copy register values that aren't redirected by hardware.
@@ -1183,6 +1183,14 @@  void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 	}
 }
 
+
+static int __enable_cpu_capability(void *arg)
+{
+	const struct arm64_cpu_capabilities *cap = arg;
+
+	return cap->enable(cap);
+}
+
 /*
  * Run through the enabled capabilities and enable() it on all active
  * CPUs
@@ -1205,7 +1213,7 @@  void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
 			 * uses an IPI, giving us a PSTATE that disappears when
 			 * we return.
 			 */
-			stop_machine(caps->enable, (void *)caps, cpu_online_mask);
+			stop_machine(__enable_cpu_capability, (void *)caps, cpu_online_mask);
 		}
 	}
 }
@@ -1249,7 +1257,7 @@  verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list)
 			cpu_die_early();
 		}
 		if (caps->enable)
-			caps->enable((void *)caps);
+			caps->enable(caps);
 	}
 }
 
@@ -1472,7 +1480,7 @@  static int __init enable_mrs_emulation(void)
 
 core_initcall(enable_mrs_emulation);
 
-int cpu_clear_disr(void *__unused)
+int cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 {
 	/* Firmware may have left a deferred SError in this register. */
 	write_sysreg_s(0, SYS_DISR_EL1);
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 55fb544072f6..4d7eff33c643 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -40,6 +40,7 @@ 
 #include <linux/sysctl.h>
 
 #include <asm/fpsimd.h>
+#include <asm/cpufeature.h>
 #include <asm/cputype.h>
 #include <asm/simd.h>
 #include <asm/sigcontext.h>
@@ -757,7 +758,7 @@  static void __init sve_efi_setup(void)
  * Enable SVE for EL1.
  * Intended for use by the cpufeatures code during CPU boot.
  */
-int sve_kernel_enable(void *__always_unused p)
+int sve_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
 {
 	write_sysreg(read_sysreg(CPACR_EL1) | CPACR_EL1_ZEN_EL1EN, CPACR_EL1);
 	isb();
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index bbb0fde2780e..296fab8e5c67 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -38,6 +38,7 @@ 
 
 #include <asm/atomic.h>
 #include <asm/bug.h>
+#include <asm/cpufeature.h>
 #include <asm/daifflags.h>
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
@@ -374,7 +375,7 @@  asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
 	force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
 }
 
-int cpu_enable_cache_maint_trap(void *__unused)
+int cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 {
 	config_sctlr_el1(SCTLR_EL1_UCI, 0);
 	return 0;
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 0e671ddf4855..937f89d2c353 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -813,7 +813,7 @@  asmlinkage int __exception do_debug_exception(unsigned long addr,
 NOKPROBE_SYMBOL(do_debug_exception);
 
 #ifdef CONFIG_ARM64_PAN
-int cpu_enable_pan(void *__unused)
+int cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
 {
 	/*
 	 * We modify PSTATE. This won't work from irq context as the PSTATE