
[v5,29/39] arm64: Add helpers to probe local CPU for PAC and BTI support

Message ID: 20231124101840.944737-70-ardb@google.com
State: New, archived
Series: arm64: Reorganize kernel VA space for LPA2

Commit Message

Ard Biesheuvel Nov. 24, 2023, 10:19 a.m. UTC
From: Ard Biesheuvel <ardb@kernel.org>

Add some helpers that will be used by the early kernel mapping code to
check feature support on the local CPU. This permits the early kernel
mapping to be created with the right attributes, removing the need for
tearing it down and recreating it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h | 44 ++++++++++++++++++++
 1 file changed, 44 insertions(+)

Comments

Marc Zyngier Nov. 24, 2023, 12:37 p.m. UTC | #1
On Fri, 24 Nov 2023 10:19:09 +0000,
Ard Biesheuvel <ardb@google.com> wrote:
> 
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> Add some helpers that will be used by the early kernel mapping code to
> check feature support on the local CPU. This permits the early kernel
> mapping to be created with the right attributes, removing the need for
> tearing it down and recreating it.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm64/include/asm/cpufeature.h | 44 ++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 99c01417e544..301d4d2211d5 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -921,6 +921,50 @@ static inline bool kaslr_disabled_cmdline(void)
>  u32 get_kvm_ipa_limit(void);
>  void dump_cpu_features(void);
>  
> +static inline bool cpu_has_bti(void)
> +{
> +	u64 pfr1;
> +
> +	if (!IS_ENABLED(CONFIG_ARM64_BTI))
> +		return false;
> +
> +	pfr1 = read_cpuid(ID_AA64PFR1_EL1);
> +	pfr1 &= ~id_aa64pfr1_override.mask;
> +	pfr1 |= id_aa64pfr1_override.val;

There is a potential gotcha here. If the override failed because a
filter rejected it, val will be set to full ones for the size of the
failed override field, and the corresponding mask will be set to 0
(see match_options()).

A safer pattern would be:

	pfr1 |= id_aa64pfr1_override.val & id_aa64pfr1_override.mask;

although we currently don't have such a filter for either BTI or PAC,
so the code is OK for now.

Thanks,

	M.
Ard Biesheuvel Nov. 24, 2023, 1:08 p.m. UTC | #2
On Fri, 24 Nov 2023 at 13:37, Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 24 Nov 2023 10:19:09 +0000,
> Ard Biesheuvel <ardb@google.com> wrote:
> >
> > [...]
> >
> > +static inline bool cpu_has_bti(void)
> > +{
> > +     u64 pfr1;
> > +
> > +     if (!IS_ENABLED(CONFIG_ARM64_BTI))
> > +             return false;
> > +
> > +     pfr1 = read_cpuid(ID_AA64PFR1_EL1);
> > +     pfr1 &= ~id_aa64pfr1_override.mask;
> > +     pfr1 |= id_aa64pfr1_override.val;
>
> There is a potential gotcha here. If the override failed because a
> filter rejected it, val will be set to full ones for the size of the
> failed override field, and the corresponding mask will be set to 0
> (see match_options()).
>
> A safer pattern would be:
>
>         pfr1 |= id_aa64pfr1_override.val & id_aa64pfr1_override.mask;
>
> although we currently don't have such a filter for either BTI or PAC,
> so the code is OK for now.
>

This confuses me.

What is the point of a value/mask pair if the mask is applied to the
value, and not to the quantity that is being overridden? Surely, not
setting those bits in 'value' to begin with makes the whole mask
redundant, no?

If the filter is supposed to prevent the override from taking effect,
wouldn't it be better to use 0/0 for the mask/value pair?

(/me likely misses some context here)
Marc Zyngier Nov. 24, 2023, 1:48 p.m. UTC | #3
On Fri, 24 Nov 2023 13:08:33 +0000,
Ard Biesheuvel <ardb@kernel.org> wrote:
> 
> On Fri, 24 Nov 2023 at 13:37, Marc Zyngier <maz@kernel.org> wrote:
> >
> > [...]
> >
> > There is a potential gotcha here. If the override failed because a
> > filter rejected it, val will be set to full ones for the size of the
> > failed override field, and the corresponding mask will be set to 0
> > (see match_options()).
> >
> > A safer pattern would be:
> >
> >         pfr1 |= id_aa64pfr1_override.val & id_aa64pfr1_override.mask;
> >
> > although we currently don't have such a filter for either BTI or PAC,
> > so the code is OK for now.
> >
> 
> This confuses me.
> 
> What is the point of a value/mask pair if the mask is applied to the
> value, and not to the quantity that is being overridden? Surely, not
> setting those bits in 'value' to begin with makes the whole mask
> redundant, no?
> 
> If the filter is supposed to prevent the override from taking effect,
> wouldn't it be better to use 0/0 for the mask/value pair?
> 
> (/me likely misses some context here)

0/0 would be fine for the purpose of not applying the override.
However, this hack is used to report the failed override at boot time
instead of silently ignoring it (see [1]).

Is it crap? Absolutely. But I didn't have a better idea at the time
(and still don't). Alternatively, we can drop the reporting and be
done with it.

	M.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/kernel/cpufeature.c#n923
Ard Biesheuvel Nov. 25, 2023, 8:59 a.m. UTC | #4
On Fri, 24 Nov 2023 at 14:48, Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 24 Nov 2023 13:08:33 +0000,
> Ard Biesheuvel <ardb@kernel.org> wrote:
> >
> > [...]
>
> 0/0 would be fine for the purpose of not applying the override.
> However, this hack is used to report the failed override at boot time
> instead of silently ignoring it (see [1]).
>
> Is it crap? Absolutely. But I didn't have a better idea at the time
> (and still don't). Alternatively, we can drop the reporting and be
> done with it.
>

No, I think it is fine, actually. A valid mask should cover at least
the 1 bits in value, so signalling an invalid override this way is
perfectly reasonable.

But applying the mask to value still does not make sense, IMO. Instead,
I should just check that the override is valid before applying it.

Patch

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 99c01417e544..301d4d2211d5 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -921,6 +921,50 @@ static inline bool kaslr_disabled_cmdline(void)
 u32 get_kvm_ipa_limit(void);
 void dump_cpu_features(void);
 
+static inline bool cpu_has_bti(void)
+{
+	u64 pfr1;
+
+	if (!IS_ENABLED(CONFIG_ARM64_BTI))
+		return false;
+
+	pfr1 = read_cpuid(ID_AA64PFR1_EL1);
+	pfr1 &= ~id_aa64pfr1_override.mask;
+	pfr1 |= id_aa64pfr1_override.val;
+
+	return cpuid_feature_extract_unsigned_field(pfr1,
+						    ID_AA64PFR1_EL1_BT_SHIFT);
+}
+
+static inline bool cpu_has_pac(void)
+{
+	u64 isar1, isar2;
+	u8 feat;
+
+	if (!IS_ENABLED(CONFIG_ARM64_PTR_AUTH))
+		return false;
+
+	isar1 = read_cpuid(ID_AA64ISAR1_EL1);
+	isar1 &= ~id_aa64isar1_override.mask;
+	isar1 |= id_aa64isar1_override.val;
+	feat = cpuid_feature_extract_unsigned_field(isar1,
+						    ID_AA64ISAR1_EL1_APA_SHIFT);
+	if (feat)
+		return true;
+
+	feat = cpuid_feature_extract_unsigned_field(isar1,
+						    ID_AA64ISAR1_EL1_API_SHIFT);
+	if (feat)
+		return true;
+
+	isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1);
+	isar2 &= ~id_aa64isar2_override.mask;
+	isar2 |= id_aa64isar2_override.val;
+	feat = cpuid_feature_extract_unsigned_field(isar2,
+						    ID_AA64ISAR2_EL1_APA3_SHIFT);
+	return feat;
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif