
crypto x86/camellia_aesni_avx: Fix CPU feature checks

Message ID 5615260D.7000301@sr71.net (mailing list archive)
State Not Applicable
Delegated to: Herbert Xu
Headers show

Commit Message

Dave Hansen Oct. 7, 2015, 2:02 p.m. UTC
On 10/07/2015 12:25 AM, Ingo Molnar wrote:
> * Ben Hutchings <ben@decadent.org.uk> wrote:
>> We need to explicitly check the AVX and AES CPU features, as we can't
>> infer them from the related XSAVE feature flags.  For example, the
>> Core i3 2310M passes the XSAVE feature test but does not implement
>> AES-NI.
...
>> diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c
>> index 80a0e43..bacaa13 100644
>> --- a/arch/x86/crypto/camellia_aesni_avx_glue.c
>> +++ b/arch/x86/crypto/camellia_aesni_avx_glue.c
>> @@ -554,6 +554,11 @@ static int __init camellia_aesni_init(void)
>>  {
>>  	const char *feature_name;
>>  
>> +	if (!cpu_has_avx || !cpu_has_aes || !cpu_has_osxsave) {
>> +		pr_info("AVX or AES-NI instructions are not detected.\n");
>> +		return -ENODEV;
>> +	}
>> +
>>  	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, &feature_name)) {
>>  		pr_info("CPU feature '%s' is not supported.\n", feature_name);
>>  		return -ENODEV;
> 
> Good catch!
> 
> Do we still need the cpu_has_xfeatures() check after the cpuid based check?

Practically, no.  Today, we either enable all of the XFEATUREs we know
about, or we disable XSAVE completely.  But if we ever somehow disabled
support for the YMM xstate on a CPU that still had AVX and AES support,
we would need this check.  (This is not likely.)

FWIW, the SDM also spells out that you should check cpuid bits and XCR0
state (which cpu_has_xfeatures() does implicitly).

I was actually looking at simplifying all of the CPUID/XSTATE_* checks
in arch/x86/crypto/* and I came up with a similar fix.  I also added
some sse2/avx/avx2_usable() functions that save a lot of this repetitive
copy/paste.  I need to clean those up and submit them (part of the
series is attached).

Feel free to add my acked-by on this though.  It looks good to me.

Patch


From: Dave Hansen <dave.hansen@linux.intel.com>

A bunch of crypto code tries to do the same detection logic.  Some
get it right, but most probably get it wrong.

For instance, some check for XFEATURE_MASK_YMM, but don't check
for AVX itself.  Especially if the *software* X86_FEATURE_AVX bit
is cleared, we might end up with XFEATURE_MASK_YMM set, but we
do not want to support AVX.

This also formally checks for SSE2 before checking for AVX, and for
AVX before checking for AVX2, as the SDM suggests.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
---

 b/arch/x86/include/asm/feature-checks.h |   32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff -puN /dev/null arch/x86/include/asm/feature-checks.h
--- /dev/null	2015-07-13 14:24:11.435656502 -0700
+++ b/arch/x86/include/asm/feature-checks.h	2015-09-28 10:04:20.290827923 -0700
@@ -0,0 +1,32 @@ 
+#ifndef _ASM_X86_FEATURE_CHECKS_H
+#define _ASM_X86_FEATURE_CHECKS_H
+
+/*
+ * Check CPUID feature bits and OS-enabled XSAVE state (XCR0) together.
+ */
+
+static inline bool __init sse2_usable(void)
+{
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL))
+		return false;
+	return true;
+}
+
+static inline bool __init avx_usable(void)
+{
+	if (!sse2_usable())
+		return false;
+	if (!cpu_has_avx || !cpu_has_osxsave)
+		return false;
+	return true;
+}
+
+static inline bool __init avx2_usable(void)
+{
+	if (avx_usable() && cpu_has_avx2)
+		return true;
+
+	return false;
+}
+
+#endif /* _ASM_X86_FEATURE_CHECKS_H */