From patchwork Wed Oct  7 14:02:53 2015
Subject: Re: [PATCH] crypto x86/camellia_aesni_avx: Fix CPU feature checks
From: Dave Hansen
To: Ingo Molnar, Ben Hutchings
Cc: linux-crypto@vger.kernel.org, Stéphane Glondu, stable, x86@kernel.org
Date: Wed, 7 Oct 2015 07:02:53 -0700
Message-ID: <5615260D.7000301@sr71.net>
In-Reply-To: <20151007072558.GF7837@gmail.com>
References: <1444131093.2956.122.camel@decadent.org.uk> <20151007072558.GF7837@gmail.com>
On 10/07/2015 12:25 AM, Ingo Molnar wrote:
> * Ben Hutchings wrote:
>> We need to explicitly check the AVX and AES CPU features, as we can't
>> infer them from the related XSAVE feature flags.  For example, the
>> Core i3 2310M passes the XSAVE feature test but does not implement
>> AES-NI.
...
>> diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c
>> index 80a0e43..bacaa13 100644
>> --- a/arch/x86/crypto/camellia_aesni_avx_glue.c
>> +++ b/arch/x86/crypto/camellia_aesni_avx_glue.c
>> @@ -554,6 +554,11 @@ static int __init camellia_aesni_init(void)
>>  {
>>  	const char *feature_name;
>>  
>> +	if (!cpu_has_avx || !cpu_has_aes || !cpu_has_osxsave) {
>> +		pr_info("AVX or AES-NI instructions are not detected.\n");
>> +		return -ENODEV;
>> +	}
>> +
>>  	if (!cpu_has_xfeatures(XSTATE_SSE | XSTATE_YMM, &feature_name)) {
>>  		pr_info("CPU feature '%s' is not supported.\n", feature_name);
>>  		return -ENODEV;
>
> Good catch!
>
> Do we still need the cpu_has_xfeatures() check after the cpuid based check?

Practically, no.  Today, we either enable all of the XFEATUREs we know
about or we disable XSAVE completely.  But, if we ever somehow disabled
support for the YMM xstate on a CPU that still had AVX and AES support,
we would need this check.  (This is not likely.)

FWIW, the SDM also spells out that you should check the CPUID bits and
the XCR0 state (which cpu_has_xfeatures() does implicitly).

I was actually looking at simplifying all of the CPUID/XSTATE_* checks
in arch/x86/crypto/* and I came up with a similar fix.  I also added
some sse2/avx/avx2_usable() functions that save a lot of this
repetitive copy/paste.
I need to clean those up and submit them (part of the series is
attached).  Feel free to add my Acked-by on this, though.  It looks
good to me.



From: Dave Hansen

A bunch of crypto code tries to do the same detection logic.  Some get
it right, but most probably get it wrong.  For instance, some check for
XFEATURE_MASK_YMM but do not check for AVX itself.  Especially if the
*software* X86_FEATURE_AVX bit is cleared, we might end up with
XFEATURE_MASK_YMM set even though we do not want to support AVX.

This also formally checks for SSE2 before checking for AVX, and for
AVX prior to the AVX2 checks, as the SDM suggests.

Signed-off-by: Dave Hansen
---

 b/arch/x86/include/asm/feature-checks.h |   32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff -puN /dev/null arch/x86/include/asm/feature-checks.h
--- /dev/null	2015-07-13 14:24:11.435656502 -0700
+++ b/arch/x86/include/asm/feature-checks.h	2015-09-28 10:04:20.290827923 -0700
@@ -0,0 +1,32 @@
+#ifndef _ASM_X86_FEATURE_CHECKS_H
+#define _ASM_X86_FEATURE_CHECKS_H
+
+/*
+ * Helpers for deciding whether SSE2/AVX/AVX2 code paths are usable.
+ */
+
+static inline bool __init sse2_usable(void)
+{
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+		return false;
+	return true;
+}
+
+static inline bool __init avx_usable(void)
+{
+	if (!sse2_usable())
+		return false;
+	if (!cpu_has_avx || !cpu_has_osxsave)
+		return false;
+	return cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL);
+}
+
+static inline bool __init avx2_usable(void)
+{
+	if (avx_usable() && cpu_has_avx2)
+		return true;
+
+	return false;
+}
+
+#endif /* _ASM_X86_FEATURE_CHECKS_H */
_