diff mbox series

[v2,3/4] arm64: cpufeature: Modify address authentication cpufeature to exact

Message ID 1586842314-19527-4-git-send-email-amit.kachhap@arm.com (mailing list archive)
State New, archived
Headers show
Series arm64: add Armv8.6 pointer authentication | expand

Commit Message

Amit Daniel Kachhap April 14, 2020, 5:31 a.m. UTC
This patch changes the address authentication cpufeature type from the
earlier LOWER_SAFE to EXACT, as the different configurations added for
Armv8.6 enhanced PAC have different behaviours and there is no tunable to
enable the lower-safe versions. The safe value is set as 0.

After this change, if there is any variation in configuration between
secondary CPUs and the boot CPU, those CPUs are marked tainted. KVM guests
may completely disable address authentication if any such variation is
detected.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Catalin Marinas May 6, 2020, 5:13 p.m. UTC | #1
On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
> This patch modifies the address authentication cpufeature type to EXACT
> from earlier LOWER_SAFE as the different configurations added for Armv8.6
> enhanced PAC have different behaviour and there is no tunable to enable the
> lower safe versions. The safe value is set as 0.
> 
> After this change, if there is any variation in configurations in secondary
> cpus from boot cpu then those cpus are marked tainted. The KVM guests may
> completely disable address authentication if there is any such variations
> detected.
> 
> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 08795025409c..599b03df2f93 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>  	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>  	ARM64_FTR_END,

Is this sufficient? If the boot CPU has already enabled ptrauth and we
get a secondary CPU with a different ISAR1 field that still matches the
address auth entries in cpufeature.c, we still allow it to boot, but we
no longer report the feature to the user: system_supports_address_auth()
is true while system_supports_generic_auth() would be false, as it
checks the sanitised feature registers.
Amit Daniel Kachhap May 8, 2020, 4:21 p.m. UTC | #2
On 5/6/20 10:43 PM, Catalin Marinas wrote:
> On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
>> This patch modifies the address authentication cpufeature type to EXACT
>> from earlier LOWER_SAFE as the different configurations added for Armv8.6
>> enhanced PAC have different behaviour and there is no tunable to enable the
>> lower safe versions. The safe value is set as 0.
>>
>> After this change, if there is any variation in configurations in secondary
>> cpus from boot cpu then those cpus are marked tainted. The KVM guests may
>> completely disable address authentication if there is any such variations
>> detected.
>>
>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>> ---
>>   arch/arm64/kernel/cpufeature.c | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 08795025409c..599b03df2f93 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>   	ARM64_FTR_END,
> 
> Is this sufficient? If we have the boot CPU already enabling the ptrauth
> and we get a secondary CPU with a different ISAR1 field that matches the
> address auth in cpufeature.c, we still allow it to boot. We no longer
> report the feature to the user system_supports_address_auth() is true
> while system_supports_generic_auth() would be false as it checks the
> sanitised feature registers.

Yes, agreed. Generic authentication also needs the EXACT cpufeature type.

>
Catalin Marinas May 12, 2020, 5:33 p.m. UTC | #3
On Fri, May 08, 2020 at 09:51:53PM +0530, Amit Kachhap wrote:
> On 5/6/20 10:43 PM, Catalin Marinas wrote:
> > On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
> > > This patch modifies the address authentication cpufeature type to EXACT
> > > from earlier LOWER_SAFE as the different configurations added for Armv8.6
> > > enhanced PAC have different behaviour and there is no tunable to enable the
> > > lower safe versions. The safe value is set as 0.
> > > 
> > > After this change, if there is any variation in configurations in secondary
> > > cpus from boot cpu then those cpus are marked tainted. The KVM guests may
> > > completely disable address authentication if there is any such variations
> > > detected.
> > > 
> > > Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
> > > ---
> > >   arch/arm64/kernel/cpufeature.c | 4 ++--
> > >   1 file changed, 2 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > > index 08795025409c..599b03df2f93 100644
> > > --- a/arch/arm64/kernel/cpufeature.c
> > > +++ b/arch/arm64/kernel/cpufeature.c
> > > @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
> > >   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
> > >   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> > >   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> > > -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
> > > +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
> > >   	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> > > -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> > > +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> > >   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
> > >   	ARM64_FTR_END,
> > 
> > Is this sufficient? If we have the boot CPU already enabling the ptrauth
> > and we get a secondary CPU with a different ISAR1 field that matches the
> > address auth in cpufeature.c, we still allow it to boot. We no longer
> > report the feature to the user system_supports_address_auth() is true
> > while system_supports_generic_auth() would be false as it checks the
> > sanitised feature registers.
> 
> Yes agreed. Generic authentication also needs EXACT cpufeature type.

I'm still not sure that's sufficient. If we boot the primary CPU with
ptrauth as detected in proc.S, we consider this a boot feature so all
secondary CPUs must have it. Subsequent CPUs are currently checked via
the arm64_features[] definitions and we allow them to boot if the ID is
at least that of the boot CPU. How does this interact with the above
FTR_EXACT changes?

My concern is that we boot with PAC enabled on all CPUs but because of
the FTR_EXACT, the sanitised ID registers no longer report the feature.
Amit Daniel Kachhap May 13, 2020, 3:42 p.m. UTC | #4
On 5/12/20 11:03 PM, Catalin Marinas wrote:
> On Fri, May 08, 2020 at 09:51:53PM +0530, Amit Kachhap wrote:
>> On 5/6/20 10:43 PM, Catalin Marinas wrote:
>>> On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
>>>> This patch modifies the address authentication cpufeature type to EXACT
>>>> from earlier LOWER_SAFE as the different configurations added for Armv8.6
>>>> enhanced PAC have different behaviour and there is no tunable to enable the
>>>> lower safe versions. The safe value is set as 0.
>>>>
>>>> After this change, if there is any variation in configurations in secondary
>>>> cpus from boot cpu then those cpus are marked tainted. The KVM guests may
>>>> completely disable address authentication if there is any such variations
>>>> detected.
>>>>
>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>> ---
>>>>    arch/arm64/kernel/cpufeature.c | 4 ++--
>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>>> index 08795025409c..599b03df2f93 100644
>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>>>>    	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>    	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>    	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>    	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>> -		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>    	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>    	ARM64_FTR_END,
>>>
>>> Is this sufficient? If we have the boot CPU already enabling the ptrauth
>>> and we get a secondary CPU with a different ISAR1 field that matches the
>>> address auth in cpufeature.c, we still allow it to boot. We no longer
>>> report the feature to the user system_supports_address_auth() is true
>>> while system_supports_generic_auth() would be false as it checks the
>>> sanitised feature registers.
>>
>> Yes agreed. Generic authentication also needs EXACT cpufeature type.
> 
> I'm still not sure that's sufficient. If we boot the primary CPU with
> ptrauth as detected in proc.S, we consider this a boot feature so all
> secondary CPUs must have it. Subsequent CPUs are currently checked via
> the arm64_features[] definitions and we allow them to boot if the ID is
> at least that of the boot CPU. How does this interact with the above
> FTR_EXACT changes?

Unfortunately FTR_EXACT does not affect the boot flow directly; it only
marks the CPU as TAINTED and carries on.

> 
> My concern is that we boot with PAC enabled on all CPUs but because of
> the FTR_EXACT, the sanitised ID registers no longer report the feature.
> 

You are right that PAC is enabled in the hardware but unreported to the
user in this case.

The issue here is in feature_matches(), which only validates against
entry->min_field_value. If we could replace this value with the boot
CPU's value for FTR_EXACT types, then a mismatching secondary CPU would
fail to come online.
Maybe we can introduce a new structure or make arm64_features[] writable
for this.

Something like the code below.
-------------------------------------------------------------------------
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3ae35aadbc20..967928568edf 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -103,6 +103,8 @@ DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
 bool arm64_use_ng_mappings = false;
 EXPORT_SYMBOL(arm64_use_ng_mappings);

+u8 min_cap_value[ARM64_NCAPS];
+
 /*
  * Flag to indicate if we have computed the system wide
  * capabilities based on the boot time active CPUs. This
@@ -616,6 +618,17 @@ static void __init sort_ftr_regs(void)
		BUG_ON(arm64_ftr_regs[i].sys_id < arm64_ftr_regs[i - 1].sys_id);
 }

+static void __init update_cpu_capability_min(u32 sys_reg, s64 new)
+{
+	const struct arm64_cpu_capabilities *caps = arm64_features;
+	for (; caps->matches; caps++) {
+		if (caps->sys_reg == sys_reg) {
+			if (caps->min_field_value)
+				min_cap_value[caps->capability] = new;
+		}
+	}
+}
+
 /*
  * Initialise the CPU feature register from Boot CPU values.
  * Also initiliases the strict_mask for the register.
@@ -649,6 +662,8 @@ static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new)
			reg->user_val = arm64_ftr_set_value(ftrp,
							    reg->user_val,
							    ftrp->safe_val);
+		if (ftrp->type == FTR_EXACT)
+			update_cpu_capability_min(sys_reg, ftr_new);
	}

	val &= valid_mask;
@@ -1021,8 +1036,10 @@ static bool
 feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
 {
	int val = cpuid_feature_extract_field(reg, entry->field_pos, entry->sign);
-
-	return val >= entry->min_field_value;
+	if (min_cap_value[entry->capability])
+		return val >= min_cap_value[entry->capability];
+	else
+		return val >= entry->min_field_value;
 }

 static bool

-------------------------------------------------------------------------
Suzuki K Poulose May 20, 2020, 1:20 p.m. UTC | #5
On 05/13/2020 04:42 PM, Amit Kachhap wrote:
> 
> 
> On 5/12/20 11:03 PM, Catalin Marinas wrote:
>> On Fri, May 08, 2020 at 09:51:53PM +0530, Amit Kachhap wrote:
>>> On 5/6/20 10:43 PM, Catalin Marinas wrote:
>>>> On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
>>>>> This patch modifies the address authentication cpufeature type to 
>>>>> EXACT
>>>>> from earlier LOWER_SAFE as the different configurations added for 
>>>>> Armv8.6
>>>>> enhanced PAC have different behaviour and there is no tunable to 
>>>>> enable the
>>>>> lower safe versions. The safe value is set as 0.
>>>>>
>>>>> After this change, if there is any variation in configurations in 
>>>>> secondary
>>>>> cpus from boot cpu then those cpus are marked tainted. The KVM 
>>>>> guests may
>>>>> completely disable address authentication if there is any such 
>>>>> variations
>>>>> detected.
>>>>>
>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>> ---
>>>>>    arch/arm64/kernel/cpufeature.c | 4 ++--
>>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/arch/arm64/kernel/cpufeature.c 
>>>>> b/arch/arm64/kernel/cpufeature.c
>>>>> index 08795025409c..599b03df2f93 100644
>>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>>> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits 
>>>>> ftr_id_aa64isar1[] = {
>>>>>        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>> ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>>        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>> ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>>        
>>>>> ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>> -               FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 
>>>>> 4, 0),
>>>>> +               FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>        
>>>>> ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>> -               FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 
>>>>> 4, 0),
>>>>> +               FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>> ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>>        ARM64_FTR_END,
>>>>
>>>> Is this sufficient? If we have the boot CPU already enabling the 
>>>> ptrauth
>>>> and we get a secondary CPU with a different ISAR1 field that matches 
>>>> the
>>>> address auth in cpufeature.c, we still allow it to boot. We no longer
>>>> report the feature to the user system_supports_address_auth() is true
>>>> while system_supports_generic_auth() would be false as it checks the
>>>> sanitised feature registers.
>>>
>>> Yes agreed. Generic authentication also needs EXACT cpufeature type.
>>
>> I'm still not sure that's sufficient. If we boot the primary CPU with
>> ptrauth as detected in proc.S, we consider this a boot feature so all
>> secondary CPUs must have it. Subsequent CPUs are currently checked via
>> the arm64_features[] definitions and we allow them to boot if the ID is
>> at least that of the boot CPU. How does this interact with the above
>> FTR_EXACT changes?
> 
> Unfortunately FTR_EXACT does not effect the bootflow directly but marks
> the cpu TAINTED and goes ahead.
> 
>>
>> My concern is that we boot with PAC enabled on all CPUs but because of
>> the FTR_EXACT, the sanitised ID registers no longer report the feature.
>>
> 
> You are right that PAC is enabled in hardware but un-reported to user in 
> this case.
> 
> The issue here is in feature_matches() which only validates with the
> entry->min_field_value. If we can modify this value to boot cpu value
> for FTR_EXACT type then this cpu will fail to online.
> May be we can introduce a new structure or make arm64_feature[] writable 
> for this.
> 
> Something like below code.

The has_cpuid_feature() is for features with "FTR_LOWER_SAFE". Hacking
it to support EXACT doesn't look ideal. You may simply add your own
"matches()" for ptr-auth.

Something like:

static bool
has_addr_auth(const struct arm64_cpu_capabilities *entry, int scope)
{
	static int boot_cpu_auth;
	int local_cpu_auth;
	u64 isar1;

	/* We don't expect to be called with SCOPE_SYSTEM */
	WARN_ON(scope == SCOPE_SYSTEM);
	isar1 = read_sysreg_s(SYS_ID_AA64ISAR1_EL1);
	local_cpu_auth = cpuid_feature_extract_unsigned_field(isar1, entry->shift);
	/*
	 * The ptr-auth feature levels are not intercompatible with
	 * lower levels. Hence we must match all the CPUs with that
	 * of the boot CPU. So cache the level of boot CPU and compare
	 * it against the secondary CPUs.
	 */
	if (scope & SCOPE_BOOT_CPU) {
		boot_cpu_auth = local_cpu_auth;
		return boot_cpu_auth > 0;
	} else if (scope & SCOPE_LOCAL_CPU) {
		return local_cpu_auth == boot_cpu_auth;
	}

	return false;
}

Suzuki
Amit Daniel Kachhap May 21, 2020, 8:09 a.m. UTC | #6
Hi Suzuki,

On 5/20/20 6:50 PM, Suzuki K Poulose wrote:
> On 05/13/2020 04:42 PM, Amit Kachhap wrote:
>>
>>
>> On 5/12/20 11:03 PM, Catalin Marinas wrote:
>>> On Fri, May 08, 2020 at 09:51:53PM +0530, Amit Kachhap wrote:
>>>> On 5/6/20 10:43 PM, Catalin Marinas wrote:
>>>>> On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
>>>>>> This patch modifies the address authentication cpufeature type to 
>>>>>> EXACT
>>>>>> from earlier LOWER_SAFE as the different configurations added for 
>>>>>> Armv8.6
>>>>>> enhanced PAC have different behaviour and there is no tunable to 
>>>>>> enable the
>>>>>> lower safe versions. The safe value is set as 0.
>>>>>>
>>>>>> After this change, if there is any variation in configurations in 
>>>>>> secondary
>>>>>> cpus from boot cpu then those cpus are marked tainted. The KVM 
>>>>>> guests may
>>>>>> completely disable address authentication if there is any such 
>>>>>> variations
>>>>>> detected.
>>>>>>
>>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>>> ---
>>>>>>    arch/arm64/kernel/cpufeature.c | 4 ++--
>>>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/arch/arm64/kernel/cpufeature.c 
>>>>>> b/arch/arm64/kernel/cpufeature.c
>>>>>> index 08795025409c..599b03df2f93 100644
>>>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>>>> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits 
>>>>>> ftr_id_aa64isar1[] = {
>>>>>>        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>>> ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>>>        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>>> ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>>> ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>>> -               FTR_STRICT, FTR_LOWER_SAFE, 
>>>>>> ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>> +               FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>> ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>>> -               FTR_STRICT, FTR_LOWER_SAFE, 
>>>>>> ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>> +               FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>>        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>>> ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>>>        ARM64_FTR_END,
>>>>>
>>>>> Is this sufficient? If we have the boot CPU already enabling the 
>>>>> ptrauth
>>>>> and we get a secondary CPU with a different ISAR1 field that 
>>>>> matches the
>>>>> address auth in cpufeature.c, we still allow it to boot. We no longer
>>>>> report the feature to the user system_supports_address_auth() is true
>>>>> while system_supports_generic_auth() would be false as it checks the
>>>>> sanitised feature registers.
>>>>
>>>> Yes agreed. Generic authentication also needs EXACT cpufeature type.
>>>
>>> I'm still not sure that's sufficient. If we boot the primary CPU with
>>> ptrauth as detected in proc.S, we consider this a boot feature so all
>>> secondary CPUs must have it. Subsequent CPUs are currently checked via
>>> the arm64_features[] definitions and we allow them to boot if the ID is
>>> at least that of the boot CPU. How does this interact with the above
>>> FTR_EXACT changes?
>>
>> Unfortunately FTR_EXACT does not effect the bootflow directly but marks
>> the cpu TAINTED and goes ahead.
>>
>>>
>>> My concern is that we boot with PAC enabled on all CPUs but because of
>>> the FTR_EXACT, the sanitised ID registers no longer report the feature.
>>>
>>
>> You are right that PAC is enabled in hardware but un-reported to user 
>> in this case.
>>
>> The issue here is in feature_matches() which only validates with the
>> entry->min_field_value. If we can modify this value to boot cpu value
>> for FTR_EXACT type then this cpu will fail to online.
>> May be we can introduce a new structure or make arm64_feature[] 
>> writable for this.
>>
>> Something like below code.
> 
> The has_cpuid_feature() is for features with "FTR_LOWER_SAFE". Hacking
> it to support EXACT doesn't look ideal. You may simply add your own
> "matches()" for ptr-auth.

Yes, it is reasonable to have a separate match() function. I was thinking
of adding a generic match function for FTR_EXACT that could be used by
other similar cpufeatures.

> 
> something like :
> 
> static bool
> has_addr_auth(const struct arm64_cpu_capabilities *entry, int scope)
> {
>      static int boot_cpu_auth;

I suppose that if this new match() is to be used for both AUTH_ARCH and
AUTH_IMP_DEF, then we may need two such static variables.

>      int local_cpu_auth;
>      u64 isar1;
> 
>      /* We don't expect to be called with SCOPE_SYSTEM */
>      WARN_ON(scope == SCOPE_SYSTEM);
>      isar1 = read_sysreg_s(SYS_ID_AA64ISAR1_EL1);
>      local_cpu_auth = cpuid_feature_extract_unsigned_field(isar1, 
> entry->shift);
> 
>      /*
>       * The ptr-auth feature levels are not intercompatible with
>       * lower levels. Hence we must match all the CPUs with that
>       * of the boot CPU. So cache the level of boot CPU and compare
>       * it against the secondary CPUs.
>       */
>      if (scope & SCOPE_BOOT_CPU) {
>          boot_cpu_auth = local_cpu_auth;
>          return boot_cpu_auth > 0;

Maybe:
return boot_cpu_auth >= entry->min_field_value;

>      } else if (scope & SCOPE_LOCAL_CPU) {
>          return local_cpu_auth == boot_cpu_auth;
>      }
> }
> 
> Suzuki

Thanks,
Amit Daniel
Suzuki K Poulose May 21, 2020, 9 a.m. UTC | #7
On 05/21/2020 09:09 AM, Amit Kachhap wrote:
> Hi Suzuki,
> 
> On 5/20/20 6:50 PM, Suzuki K Poulose wrote:
>> On 05/13/2020 04:42 PM, Amit Kachhap wrote:
>>>
>>>
>>> On 5/12/20 11:03 PM, Catalin Marinas wrote:
>>>> On Fri, May 08, 2020 at 09:51:53PM +0530, Amit Kachhap wrote:
>>>>> On 5/6/20 10:43 PM, Catalin Marinas wrote:
>>>>>> On Tue, Apr 14, 2020 at 11:01:53AM +0530, Amit Daniel Kachhap wrote:
>>>>>>> This patch modifies the address authentication cpufeature type to 
>>>>>>> EXACT
>>>>>>> from earlier LOWER_SAFE as the different configurations added for 
>>>>>>> Armv8.6
>>>>>>> enhanced PAC have different behaviour and there is no tunable to 
>>>>>>> enable the
>>>>>>> lower safe versions. The safe value is set as 0.
>>>>>>>
>>>>>>> After this change, if there is any variation in configurations in 
>>>>>>> secondary
>>>>>>> cpus from boot cpu then those cpus are marked tainted. The KVM 
>>>>>>> guests may
>>>>>>> completely disable address authentication if there is any such 
>>>>>>> variations
>>>>>>> detected.
>>>>>>>
>>>>>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
>>>>>>> ---
>>>>>>>    arch/arm64/kernel/cpufeature.c | 4 ++--
>>>>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>
>>>>>>> diff --git a/arch/arm64/kernel/cpufeature.c 
>>>>>>> b/arch/arm64/kernel/cpufeature.c
>>>>>>> index 08795025409c..599b03df2f93 100644
>>>>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>>>>> @@ -154,9 +154,9 @@ static const struct arm64_ftr_bits 
>>>>>>> ftr_id_aa64isar1[] = {
>>>>>>>        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>>>> ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>>>>>>>        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>>>> ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>>>>>>> ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>>>> -               FTR_STRICT, FTR_LOWER_SAFE, 
>>>>>>> ID_AA64ISAR1_API_SHIFT, 4, 0),
>>>>>>> +               FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 
>>>>>>> 0),
>>>>>>> ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>>>>>>> -               FTR_STRICT, FTR_LOWER_SAFE, 
>>>>>>> ID_AA64ISAR1_APA_SHIFT, 4, 0),
>>>>>>> +               FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 
>>>>>>> 0),
>>>>>>>        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
>>>>>>> ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>>>>>>>        ARM64_FTR_END,
>>>>>>
>>>>>> Is this sufficient? If we have the boot CPU already enabling the 
>>>>>> ptrauth
>>>>>> and we get a secondary CPU with a different ISAR1 field that 
>>>>>> matches the
>>>>>> address auth in cpufeature.c, we still allow it to boot. We no longer
>>>>>> report the feature to the user system_supports_address_auth() is true
>>>>>> while system_supports_generic_auth() would be false as it checks the
>>>>>> sanitised feature registers.
>>>>>
>>>>> Yes agreed. Generic authentication also needs EXACT cpufeature type.
>>>>
>>>> I'm still not sure that's sufficient. If we boot the primary CPU with
>>>> ptrauth as detected in proc.S, we consider this a boot feature so all
>>>> secondary CPUs must have it. Subsequent CPUs are currently checked via
>>>> the arm64_features[] definitions and we allow them to boot if the ID is
>>>> at least that of the boot CPU. How does this interact with the above
>>>> FTR_EXACT changes?
>>>
>>> Unfortunately FTR_EXACT does not effect the bootflow directly but marks
>>> the cpu TAINTED and goes ahead.
>>>
>>>>
>>>> My concern is that we boot with PAC enabled on all CPUs but because of
>>>> the FTR_EXACT, the sanitised ID registers no longer report the feature.
>>>>
>>>
>>> You are right that PAC is enabled in hardware but un-reported to user 
>>> in this case.
>>>
>>> The issue here is in feature_matches() which only validates with the
>>> entry->min_field_value. If we can modify this value to boot cpu value
>>> for FTR_EXACT type then this cpu will fail to online.
>>> May be we can introduce a new structure or make arm64_feature[] 
>>> writable for this.
>>>
>>> Something like below code.
>>
>> The has_cpuid_feature() is for features with "FTR_LOWER_SAFE". Hacking
>> it to support EXACT doesn't look ideal. You may simply add your own
>> "matches()" for ptr-auth.
> 
> Yes it is reasonable to have separate match() function. I was thinking
> of adding some generic match function for FTR_EXACT to be used by other
> similar cpufeatures.

Ideally, if we enabled the CPU feature sanity-check infrastructure to do
this for us (i.e. park a CPU which conflicts), we wouldn't have to
duplicate it here in the capabilities. For now, yes, let us use something
specific to ptr-auth, which may not cater for generic EXACT features.
Generalising it for the different "scope"s is going to be tricky.

> 
>>
>> something like :
>>
>> static bool
>> has_addr_auth(const struct arm64_cpu_capabilities *entry, int scope)
>> {
>>      static int boot_cpu_auth;
> 
> I suppose that is this new match() has to be used for both AUTH_ARCH and 
> AUTH_IMP_DEF then we may need 2 such static variables.
> 
>>      int local_cpu_auth;
>>      u64 isar1;
>>
>>      /* We don't expect to be called with SCOPE_SYSTEM */
>>      WARN_ON(scope == SCOPE_SYSTEM);
>>      isar1 = read_sysreg_s(SYS_ID_AA64ISAR1_EL1);
>>      local_cpu_auth = cpuid_feature_extract_unsigned_field(isar1, 
>> entry->shift);
>>
>>      /*
>>       * The ptr-auth feature levels are not intercompatible with
>>       * lower levels. Hence we must match all the CPUs with that
>>       * of the boot CPU. So cache the level of boot CPU and compare
>>       * it against the secondary CPUs.
>>       */
>>      if (scope & SCOPE_BOOT_CPU) {
>>          boot_cpu_auth = local_cpu_auth;
>>          return boot_cpu_auth > 0;
> 
> May be,
> return boot_cpu_auth >= entry->min_field_value

Yes, that's fine.

Cheers
Suzuki
diff mbox series

Patch

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 08795025409c..599b03df2f93 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -154,9 +154,9 @@  static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
-		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };