From patchwork Fri Mar 29 01:53:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S. Bae" X-Patchwork-Id: 13609968 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.18]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C96972C6B2; Fri, 29 Mar 2024 02:09:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.18 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1711678180; cv=none; b=RfQsP+b4wKHtXR+xl4VaG4gGjKuWvbgEXIWlycAWLcCIhiSvxasDINNUlsNkkk9sDdRYyaTY4+B/rFZIJfl5sK6hfv/bECnDxLB22KDdj5wRN/miXoq6e9/Uh5/+Re3f3zQSG/JD7ldAfC2s2pq46Auw5lUFmyEmbLRbLnuH6Q0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1711678180; c=relaxed/simple; bh=SYfo7FsBA6K+tnItKudH2rQ2gHHfa93TWwRVJIqbrE8=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=RxJrhGMxSfDdS3DSoHE3OydoNQZptQmG8tvagwLWSWYWio7nO1W6lf1n68p43vEc12EfQd0s+yfzw+w9GfYZuDdiQNTxK8o00nR78TFmcPscYJGu//YH5idxjRUzCl2VrkAg6q/VbHtPAL1mxIKs/HBhnV71GAv6izft5ka7UTk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=cirGLDIs; arc=none smtp.client-ip=192.198.163.18 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="cirGLDIs" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1711678177; 
x=1743214177; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=SYfo7FsBA6K+tnItKudH2rQ2gHHfa93TWwRVJIqbrE8=; b=cirGLDIs6Rg/JAnFQph/A9hVcUJFOjRSL1IP1DrmoWNUbMkdh1Np6Vzq vTcAvB7ipjqJUHMqVbf0U+lEu/FzCV2TqPRF+JWjn7QR+d03wWA9UciwH R7kKUtYDxeCqgB8qtG/SLxRWnY1LimBRgoodelirBtYYpxPuW6ZvCapZy wJOcJmMXEWW8/NhjF3Oy3eJW9JbDcoocQDRUArZxSMG9xH8vWRyqiEHWS JqL6LkXKFIGea5e+soPcKuWTMx15qOBqQ2O71QdpDCxr4ves4lo+era29 dAfYOBOMVGwwnhF4mpPJuDtkWFYG2vKLW4uUf5ZUt82yC9LRv0B5IRtZX w==; X-CSE-ConnectionGUID: KSH3KfosR9G7sXyEwY6Orw== X-CSE-MsgGUID: CwRmN5NfRw+vExxzPyggVA== X-IronPort-AV: E=McAfee;i="6600,9927,11027"; a="6700018" X-IronPort-AV: E=Sophos;i="6.07,162,1708416000"; d="scan'208";a="6700018" Received: from orviesa006.jf.intel.com ([10.64.159.146]) by fmvoesa112.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Mar 2024 19:09:37 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.07,162,1708416000"; d="scan'208";a="17301372" Received: from chang-linux-3.sc.intel.com ([172.25.66.175]) by orviesa006.jf.intel.com with ESMTP; 28 Mar 2024 19:09:36 -0700 From: "Chang S. 
Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com, Bagas Sanjaya , Randy Dunlap Subject: [PATCH v9 01/14] Documentation/x86: Document Key Locker Date: Thu, 28 Mar 2024 18:53:33 -0700 Message-Id: <20240329015346.635933-2-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com> References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Document an overview of the feature along with relevant considerations when provisioning dm-crypt volumes with AES-KL instead of AES-NI. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Reviewed-by: Bagas Sanjaya Cc: Randy Dunlap Reviewed-by: Randy Dunlap --- Changes from v8: * Change wording of documentation slightly. (Randy Dunlap and Bagas Sanjaya) Changes from v6: * Rebase on the upstream -- commit ff61f0791ce9 ("docs: move x86 documentation into Documentation/arch/"). (Nathan Huckleberry) * Remove a duplicated sentence -- 'But there is no AES-KL instruction to process a 192-bit key.' * Update the text for clarity and readability: - Clarify the error code and exemplify the backup failure - Use 'wrapping key' instead of less readable 'IWKey' Changes from v5: * Fix a typo: 'feature feature' -> 'feature' Changes from RFC v2: * Add as a new patch.
The preview is available here: https://htmlpreview.github.io/?https://github.com/intel-staging/keylocker/kdoc/arch/x86/keylocker.html --- Documentation/arch/x86/index.rst | 1 + Documentation/arch/x86/keylocker.rst | 96 ++++++++++++++++++++++++++++ 2 files changed, 97 insertions(+) create mode 100644 Documentation/arch/x86/keylocker.rst diff --git a/Documentation/arch/x86/index.rst b/Documentation/arch/x86/index.rst index 8ac64d7de4dc..669c239c009f 100644 --- a/Documentation/arch/x86/index.rst +++ b/Documentation/arch/x86/index.rst @@ -43,3 +43,4 @@ x86-specific Documentation features elf_auxvec xstate + keylocker diff --git a/Documentation/arch/x86/keylocker.rst b/Documentation/arch/x86/keylocker.rst new file mode 100644 index 000000000000..b28addb8eaf4 --- /dev/null +++ b/Documentation/arch/x86/keylocker.rst @@ -0,0 +1,96 @@ +.. SPDX-License-Identifier: GPL-2.0 + +============== +x86 Key Locker +============== + +Introduction +============ + +Key Locker is a CPU feature to reduce key exfiltration opportunities +while maintaining a programming interface similar to AES-NI. It +converts the AES key into an encoded form, called the 'key handle'. +The key handle is a wrapped version of the clear-text key, where the +wrapping key has limited exposure. Once converted, all subsequent data +encryption using new AES instructions (AES-KL) uses this key handle, +reducing the exposure of private key material in memory. + +CPU-internal Wrapping Key +========================= + +The CPU-internal wrapping key is an entity in a software-invisible CPU +state. On every system boot, a new key is loaded. So a key handle that +was encoded by the old wrapping key is no longer usable after system +shutdown or reboot. + +The key may also be lost in the following exceptional situation upon +wakeup: + +Wrapping Key Restore Failure +---------------------------- + +The CPU state is volatile across the ACPI S3/4 sleep states.
When the system +supports those states, the key has to be backed up so that it can be +restored on wakeup. The kernel saves the key in non-volatile media. + +If the wrapping key fails to be restored upon resume from suspend, all +established key handles become invalid. In-flight dm-crypt operations +receive errors for their pending requests. In the likely scenario that +dm-crypt is hosting the root filesystem, the recovery is identical to a +storage controller failing to resume from suspend or reboot. If the volume +impacted by a wrapping key restore failure is a data volume, then it is +possible that I/O errors on that volume do not bring down the rest of the +system. However, a reboot is still required because the kernel will have +soft-disabled Key Locker. Upon the failure, the crypto library code will +return -ENODEV on every AES-KL function call. The Key Locker implementation +only loads a new wrapping key at initial boot, not at any later point such +as resume from suspend. + +Use Case and Non-use Cases +========================== + +Bare metal disk encryption is the only intended use case. + +Userspace usage is not supported because there is no ABI provided to +communicate and coordinate wrapping-key restore failures to userspace. For +now, key restore failures are only coordinated with kernel users. But the +kernel cannot prevent userspace from using the feature's AES instructions +('AES-KL') once the feature has been enabled. So, the lack of userspace +support is only documented, not actively enforced. + +Key Locker is not expected to be advertised to guest VMs, and the kernel +implementation ignores it even if the VMM enumerates the capability. The +expectation is that a guest VM wants private wrapping-key state, but the +architecture does not provide that. An emulation of that capability, by +caching per-VM wrapping keys in memory, defeats the purpose of Key Locker.
+The backup / restore facility is also not performant enough to be suitable +for guest VM context switches. + +AES Instruction Set +=================== + +The feature comes with a new AES instruction set. This instruction set is +analogous to AES-NI. A set of AES-NI instructions can be mapped to one +AES-KL instruction. For example, AESENC128KL performs ten rounds of +transformation, which is equivalent to nine AESENC invocations plus one +AESENCLAST in AES-NI. + +But they have some notable differences: + +* AES-KL provides a secure data transformation using an encrypted key. + +* If an invalid key handle is provided, e.g. a corrupted one or one that + fails the handle restrictions, the instruction fails and sets RFLAGS.ZF. + The crypto library implementation checks this flag and returns -EINVAL. + Note that this flag is also set if the wrapping key has changed, e.g. + because of a backup error. + +* AES-KL implements support for 128-bit and 256-bit keys, but there is no + AES-KL instruction to process a 192-bit key. The AES-KL cipher + implementation logs a warning message for a 192-bit key and then falls + back to AES-NI. So, this 192-bit key-size limitation is only documented, + not enforced. It means the key will remain in clear text in memory. This + is to meet the Linux crypto-cipher expectation that each implementation + must support all the AES-compliant key sizes. + +* Some AES-KL hardware implementations may have noticeable performance + overhead when compared with AES-NI instructions. From patchwork Fri Mar 29 01:53:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S.
Bae" X-Patchwork-Id: 13609969 X-Patchwork-Delegate: herbert@gondor.apana.org.au From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com Subject: [PATCH v9 02/14] x86/cpufeature: Enumerate Key Locker feature Date: Thu, 28 Mar 2024 18:53:34 -0700 Message-Id: <20240329015346.635933-3-chang.seok.bae@intel.com> In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com> References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com> Key Locker is a CPU feature to minimize exposure of clear-text key material.
An encoded form, called a 'key handle', is referenced for data encryption or decryption instead of the clear-text key. A wrapping key loaded into the CPU's software-inaccessible state is used to transform a user key into a key handle. In the rare event of an unexpected hardware failure, the key could be lost. Enumerate this hardware capability here. It will not show up in /proc/cpuinfo, as userspace usage is not supported. This is because there is no ABI to coordinate the wrapping-key failure with userspace. The feature supports the Advanced Encryption Standard (AES) cipher algorithm with a new SIMD instruction set, like its predecessor (AES-NI). Mark the feature as depending on XMM2, as AES-NI does. The new AES implementation will be in the crypto library. Add X86_FEATURE_KEYLOCKER to the disabled-features list. It will be enabled by a new Kconfig option. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams --- Changes from v6: * Massage the changelog -- re-organize the change descriptions Changes from RFC v2: * Do not publish the feature flag to userspace. * Update the changelog. Changes from RFC v1: * Updated the changelog.
--- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/disabled-features.h | 8 +++++++- arch/x86/include/uapi/asm/processor-flags.h | 2 ++ arch/x86/kernel/cpu/cpuid-deps.c | 1 + 4 files changed, 11 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index f0337f7bcf16..dd30435af487 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -399,6 +399,7 @@ #define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* POPCNT for vectors of DW/QW */ #define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */ #define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */ +#define X86_FEATURE_KEYLOCKER (16*32+23) /* "" Key Locker */ #define X86_FEATURE_BUS_LOCK_DETECT (16*32+24) /* Bus Lock detect */ #define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */ #define X86_FEATURE_MOVDIRI (16*32+27) /* MOVDIRI instruction */ diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h index da4054fbf533..14aa6dc3b846 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -38,6 +38,12 @@ # define DISABLE_OSPKE (1<<(X86_FEATURE_OSPKE & 31)) #endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */ +#ifdef CONFIG_X86_KEYLOCKER +# define DISABLE_KEYLOCKER 0 +#else +# define DISABLE_KEYLOCKER (1<<(X86_FEATURE_KEYLOCKER & 31)) +#endif /* CONFIG_X86_KEYLOCKER */ + #ifdef CONFIG_X86_5LEVEL # define DISABLE_LA57 0 #else @@ -150,7 +156,7 @@ #define DISABLED_MASK14 0 #define DISABLED_MASK15 0 #define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP| \ - DISABLE_ENQCMD) + DISABLE_ENQCMD|DISABLE_KEYLOCKER) #define DISABLED_MASK17 0 #define DISABLED_MASK18 (DISABLE_IBT) #define DISABLED_MASK19 (DISABLE_SEV_SNP) diff --git a/arch/x86/include/uapi/asm/processor-flags.h b/arch/x86/include/uapi/asm/processor-flags.h index f1a4adc78272..a24f7cb2cd68 100644 --- 
a/arch/x86/include/uapi/asm/processor-flags.h +++ b/arch/x86/include/uapi/asm/processor-flags.h @@ -128,6 +128,8 @@ #define X86_CR4_PCIDE _BITUL(X86_CR4_PCIDE_BIT) #define X86_CR4_OSXSAVE_BIT 18 /* enable xsave and xrestore */ #define X86_CR4_OSXSAVE _BITUL(X86_CR4_OSXSAVE_BIT) +#define X86_CR4_KEYLOCKER_BIT 19 /* enable Key Locker */ +#define X86_CR4_KEYLOCKER _BITUL(X86_CR4_KEYLOCKER_BIT) #define X86_CR4_SMEP_BIT 20 /* enable SMEP support */ #define X86_CR4_SMEP _BITUL(X86_CR4_SMEP_BIT) #define X86_CR4_SMAP_BIT 21 /* enable SMAP support */ diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c index b7174209d855..820dcf35eca9 100644 --- a/arch/x86/kernel/cpu/cpuid-deps.c +++ b/arch/x86/kernel/cpu/cpuid-deps.c @@ -84,6 +84,7 @@ static const struct cpuid_dep cpuid_deps[] = { { X86_FEATURE_SHSTK, X86_FEATURE_XSAVES }, { X86_FEATURE_FRED, X86_FEATURE_LKGS }, { X86_FEATURE_FRED, X86_FEATURE_WRMSRNS }, + { X86_FEATURE_KEYLOCKER, X86_FEATURE_XMM2 }, {} }; From patchwork Fri Mar 29 01:53:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S. 
Bae" X-Patchwork-Id: 13609970 X-Patchwork-Delegate: herbert@gondor.apana.org.au From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com Subject: [PATCH v9 03/14] x86/insn: Add Key Locker instructions to the opcode map Date: Thu, 28 Mar 2024 18:53:35 -0700 Message-Id: <20240329015346.635933-4-chang.seok.bae@intel.com> In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com> References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com> The x86 instruction decoder needs to know these new instructions that are going to be used in the crypto library as well as
the x86 core code. Add the following: LOADIWKEY: Load a CPU-internal wrapping key. ENCODEKEY128: Wrap a 128-bit AES key to a key handle. ENCODEKEY256: Wrap a 256-bit AES key to a key handle. AESENC128KL: Encrypt a 128-bit block of data using a 128-bit AES key indicated by a key handle. AESENC256KL: Encrypt a 128-bit block of data using a 256-bit AES key indicated by a key handle. AESDEC128KL: Decrypt a 128-bit block of data using a 128-bit AES key indicated by a key handle. AESDEC256KL: Decrypt a 128-bit block of data using a 256-bit AES key indicated by a key handle. AESENCWIDE128KL: Encrypt 8 128-bit blocks of data using a 128-bit AES key indicated by a key handle. AESENCWIDE256KL: Encrypt 8 128-bit blocks of data using a 256-bit AES key indicated by a key handle. AESDECWIDE128KL: Decrypt 8 128-bit blocks of data using a 128-bit AES key indicated by a key handle. AESDECWIDE256KL: Decrypt 8 128-bit blocks of data using a 256-bit AES key indicated by a key handle. The details can be found in the Intel Software Developer's Manual. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams --- Changes from v6: * Massage the changelog -- add the reason a bit. Changes from RFC v1: * Separated out the LOADIWKEY addition in a new patch. * Included AES instructions to avoid warning messages when the AES Key Locker module is built.
--- arch/x86/lib/x86-opcode-map.txt | 11 +++++++---- tools/arch/x86/lib/x86-opcode-map.txt | 11 +++++++---- 2 files changed, 14 insertions(+), 8 deletions(-) diff --git a/arch/x86/lib/x86-opcode-map.txt b/arch/x86/lib/x86-opcode-map.txt index 12af572201a2..c94988d5130d 100644 --- a/arch/x86/lib/x86-opcode-map.txt +++ b/arch/x86/lib/x86-opcode-map.txt @@ -800,11 +800,12 @@ cb: sha256rnds2 Vdq,Wdq | vrcp28ss/d Vx,Hx,Wx (66),(ev) cc: sha256msg1 Vdq,Wdq | vrsqrt28ps/d Vx,Wx (66),(ev) cd: sha256msg2 Vdq,Wdq | vrsqrt28ss/d Vx,Hx,Wx (66),(ev) cf: vgf2p8mulb Vx,Wx (66) +d8: AESENCWIDE128KL Qpi (F3),(000),(00B) | AESENCWIDE256KL Qpi (F3),(000),(10B) | AESDECWIDE128KL Qpi (F3),(000),(01B) | AESDECWIDE256KL Qpi (F3),(000),(11B) db: VAESIMC Vdq,Wdq (66),(v1) -dc: vaesenc Vx,Hx,Wx (66) -dd: vaesenclast Vx,Hx,Wx (66) -de: vaesdec Vx,Hx,Wx (66) -df: vaesdeclast Vx,Hx,Wx (66) +dc: vaesenc Vx,Hx,Wx (66) | LOADIWKEY Vx,Hx (F3) | AESENC128KL Vpd,Qpi (F3) +dd: vaesenclast Vx,Hx,Wx (66) | AESDEC128KL Vpd,Qpi (F3) +de: vaesdec Vx,Hx,Wx (66) | AESENC256KL Vpd,Qpi (F3) +df: vaesdeclast Vx,Hx,Wx (66) | AESDEC256KL Vpd,Qpi (F3) f0: MOVBE Gy,My | MOVBE Gw,Mw (66) | CRC32 Gd,Eb (F2) | CRC32 Gd,Eb (66&F2) f1: MOVBE My,Gy | MOVBE Mw,Gw (66) | CRC32 Gd,Ey (F2) | CRC32 Gd,Ew (66&F2) f2: ANDN Gy,By,Ey (v) @@ -814,6 +815,8 @@ f6: ADCX Gy,Ey (66) | ADOX Gy,Ey (F3) | MULX By,Gy,rDX,Ey (F2),(v) | WRSSD/Q My, f7: BEXTR Gy,Ey,By (v) | SHLX Gy,Ey,By (66),(v) | SARX Gy,Ey,By (F3),(v) | SHRX Gy,Ey,By (F2),(v) f8: MOVDIR64B Gv,Mdqq (66) | ENQCMD Gv,Mdqq (F2) | ENQCMDS Gv,Mdqq (F3) f9: MOVDIRI My,Gy +fa: ENCODEKEY128 Ew,Ew (F3) +fb: ENCODEKEY256 Ew,Ew (F3) EndTable Table: 3-byte opcode 2 (0x0f 0x3a) diff --git a/tools/arch/x86/lib/x86-opcode-map.txt b/tools/arch/x86/lib/x86-opcode-map.txt index 12af572201a2..c94988d5130d 100644 --- a/tools/arch/x86/lib/x86-opcode-map.txt +++ b/tools/arch/x86/lib/x86-opcode-map.txt @@ -800,11 +800,12 @@ cb: sha256rnds2 Vdq,Wdq | vrcp28ss/d Vx,Hx,Wx (66),(ev) cc: sha256msg1 
Vdq,Wdq | vrsqrt28ps/d Vx,Wx (66),(ev) cd: sha256msg2 Vdq,Wdq | vrsqrt28ss/d Vx,Hx,Wx (66),(ev) cf: vgf2p8mulb Vx,Wx (66) +d8: AESENCWIDE128KL Qpi (F3),(000),(00B) | AESENCWIDE256KL Qpi (F3),(000),(10B) | AESDECWIDE128KL Qpi (F3),(000),(01B) | AESDECWIDE256KL Qpi (F3),(000),(11B) db: VAESIMC Vdq,Wdq (66),(v1) -dc: vaesenc Vx,Hx,Wx (66) -dd: vaesenclast Vx,Hx,Wx (66) -de: vaesdec Vx,Hx,Wx (66) -df: vaesdeclast Vx,Hx,Wx (66) +dc: vaesenc Vx,Hx,Wx (66) | LOADIWKEY Vx,Hx (F3) | AESENC128KL Vpd,Qpi (F3) +dd: vaesenclast Vx,Hx,Wx (66) | AESDEC128KL Vpd,Qpi (F3) +de: vaesdec Vx,Hx,Wx (66) | AESENC256KL Vpd,Qpi (F3) +df: vaesdeclast Vx,Hx,Wx (66) | AESDEC256KL Vpd,Qpi (F3) f0: MOVBE Gy,My | MOVBE Gw,Mw (66) | CRC32 Gd,Eb (F2) | CRC32 Gd,Eb (66&F2) f1: MOVBE My,Gy | MOVBE Mw,Gw (66) | CRC32 Gd,Ey (F2) | CRC32 Gd,Ew (66&F2) f2: ANDN Gy,By,Ey (v) @@ -814,6 +815,8 @@ f6: ADCX Gy,Ey (66) | ADOX Gy,Ey (F3) | MULX By,Gy,rDX,Ey (F2),(v) | WRSSD/Q My, f7: BEXTR Gy,Ey,By (v) | SHLX Gy,Ey,By (66),(v) | SARX Gy,Ey,By (F3),(v) | SHRX Gy,Ey,By (F2),(v) f8: MOVDIR64B Gv,Mdqq (66) | ENQCMD Gv,Mdqq (F2) | ENQCMDS Gv,Mdqq (F3) f9: MOVDIRI My,Gy +fa: ENCODEKEY128 Ew,Ew (F3) +fb: ENCODEKEY256 Ew,Ew (F3) EndTable Table: 3-byte opcode 2 (0x0f 0x3a) From patchwork Fri Mar 29 01:53:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S. 
Bae" X-Patchwork-Id: 13609971 X-Patchwork-Delegate: herbert@gondor.apana.org.au From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com Subject: [PATCH v9 04/14] x86/asm: Add a wrapper function for the LOADIWKEY instruction Date: Thu, 28 Mar 2024 18:53:36 -0700 Message-Id: <20240329015346.635933-5-chang.seok.bae@intel.com> In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com> References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com> Key Locker introduces a CPU-internal wrapping key to encode a user key to a key handle.
Then a key handle is referenced instead of the plain text key. LOADIWKEY loads a wrapping key in the software-inaccessible CPU state. It operates only in kernel mode. The kernel will use this to load a new key at boot time. Establish an accessor for the feature setup, and define struct iwkey to pass a key value. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams --- Changes from v6: * Massage the changelog -- clarify the reason and the changes a bit. Changes from v5: * Fix a typo: kernel_cpu_begin() -> kernel_fpu_begin() Changes from RFC v2: * Separate out the code as a new patch. * Improve the usability with the new struct as an argument. (Dan Williams) Previously, Dan questioned the necessity of 'WARN_ON(!irq_fpu_usable())' in the load_xmm_iwkey() function. However, it's worth noting that the function comment emphasizes the caller's responsibility for invoking kernel_fpu_begin(), which effectively performs the sanity check through kernel_fpu_begin_mask(). --- arch/x86/include/asm/keylocker.h | 25 +++++++++++++++++++++++++ arch/x86/include/asm/special_insns.h | 28 ++++++++++++++++++++++++++++ 2 files changed, 53 insertions(+) create mode 100644 arch/x86/include/asm/keylocker.h diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h new file mode 100644 index 000000000000..4e731f577c50 --- /dev/null +++ b/arch/x86/include/asm/keylocker.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef _ASM_KEYLOCKER_H +#define _ASM_KEYLOCKER_H + +#ifndef __ASSEMBLY__ + +#include + +/** + * struct iwkey - A temporary wrapping key storage. + * @integrity_key: A 128-bit key used to verify the integrity of + * key handles + * @encryption_key: A 256-bit encryption key used for wrapping and + * unwrapping clear text keys. + * + * This storage should be flushed immediately after being loaded. 
+ */
+struct iwkey {
+	struct reg_128_bit integrity_key;
+	struct reg_128_bit encryption_key[2];
+};
+
+#endif /*__ASSEMBLY__ */
+#endif /* _ASM_KEYLOCKER_H */

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 2e9fc5c400cd..65267013f1e1 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include

 /*
  * The compiler should not reorder volatile asm statements with respect to each
@@ -301,6 +302,33 @@ static __always_inline void tile_release(void)
 	asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0");
 }

+/**
+ * load_xmm_iwkey - Load a CPU-internal wrapping key into XMM registers.
+ * @key: A pointer to a struct iwkey containing the key data.
+ *
+ * The caller is responsible for invoking kernel_fpu_begin() beforehand.
+ */
+static inline void load_xmm_iwkey(struct iwkey *key)
+{
+	struct reg_128_bit zeros = { 0 };
+
+	asm volatile ("movdqu %0, %%xmm0; movdqu %1, %%xmm1; movdqu %2, %%xmm2;"
+		      :: "m"(key->integrity_key), "m"(key->encryption_key[0]),
+			 "m"(key->encryption_key[1]));
+
+	/*
+	 * 'LOADIWKEY %xmm1,%xmm2' loads a key from XMM0-2 into a
+	 * software-invisible CPU state. With zero in EAX, the CPU does
+	 * not perform hardware randomization and allows key backup.
+	 *
+	 * This instruction is supported by binutils >= 2.36.
+	 */
+	asm volatile (".byte 0xf3,0x0f,0x38,0xdc,0xd1" :: "a"(0));
+
+	asm volatile ("movdqu %0, %%xmm0; movdqu %0, %%xmm1; movdqu %0, %%xmm2;"
+		      :: "m"(zeros));
+}
+
 #endif /* __KERNEL__ */
 #endif /* _ASM_X86_SPECIAL_INSNS_H */

From patchwork Fri Mar 29 01:53:37 2024
X-Patchwork-Submitter: "Chang S.
Bae" X-Patchwork-Id: 13609972
From: "Chang S. Bae"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v9 05/14] x86/msr-index: Add MSRs for Key Locker wrapping key
Date: Thu, 28 Mar 2024 18:53:37 -0700
Message-Id: <20240329015346.635933-6-chang.seok.bae@intel.com>
In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com>
References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com>

The wrapping key resides in the same power domain as the CPU cache.
Consequently, any sleep state that invalidates the cache, such as S3,
also affects the wrapping key's state. However, as the wrapping key's
state is inaccessible to software, a specialized mechanism is necessary
to save and restore the key during deep sleep.

A set of new MSRs is provided as an abstract interface for saving,
restoring, and checking the wrapping key's status. The wrapping key is
securely saved in a platform-scoped state using non-volatile media.
Both the backup storage and its path from the CPU are encrypted and
integrity-protected to ensure security.

Define those MSRs for saving and restoring the key during S3/4 sleep
states.

Note that the non-volatility of the backup storage is not
architecturally guaranteed across off-states such as S5 and G3. In such
cases, the kernel may generate a new key during the next boot.

Signed-off-by: Chang S. Bae
Reviewed-by: Dan Williams
---
Changes from v8:
* Tweak the changelog.

Changes from v6:
* Tweak the changelog -- put the notes about other sleep states last.

Changes from RFC v2:
* Update the changelog. (Dan Williams)
* Rename the MSRs. (Dan Williams)
---
 arch/x86/include/asm/msr-index.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 05956bd8bacf..a451fa1e2cd9 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -1192,4 +1192,10 @@
  * a #GP
  */

+/* MSRs for managing a CPU-internal wrapping key for Key Locker. */
+#define MSR_IA32_IWKEY_COPY_STATUS		0x00000990
+#define MSR_IA32_IWKEY_BACKUP_STATUS		0x00000991
+#define MSR_IA32_BACKUP_IWKEY_TO_PLATFORM	0x00000d91
+#define MSR_IA32_COPY_IWKEY_TO_LOCAL		0x00000d92
+
 #endif /* _ASM_X86_MSR_INDEX_H */

From patchwork Fri Mar 29 01:53:38 2024
X-Patchwork-Submitter: "Chang S.
Bae" X-Patchwork-Id: 13609973
From: "Chang S. Bae"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v9 06/14] x86/keylocker: Define Key Locker CPUID leaf
Date: Thu, 28 Mar 2024 18:53:38 -0700
Message-Id: <20240329015346.635933-7-chang.seok.bae@intel.com>
In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com>
References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com>

Both Key Locker enabling code in the x86 core and AES Key Locker code
in the crypto library will need to reference feature-specific CPUID
bits. Define this CPUID leaf and bits.

Signed-off-by: Chang S. Bae
Reviewed-by: Dan Williams
---
Changes from v6:
* Tweak the changelog -- comment the reason first and then brief the change.

Changes from RFC v2:
* Separate out the code as a new patch.
---
 arch/x86/include/asm/keylocker.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
index 4e731f577c50..1213d273c369 100644
--- a/arch/x86/include/asm/keylocker.h
+++ b/arch/x86/include/asm/keylocker.h
@@ -5,6 +5,7 @@

 #ifndef __ASSEMBLY__

+#include
 #include

 /**
@@ -21,5 +22,11 @@ struct iwkey {
 	struct reg_128_bit encryption_key[2];
 };

+#define KEYLOCKER_CPUID			0x019
+#define KEYLOCKER_CPUID_EAX_SUPERVISOR	BIT(0)
+#define KEYLOCKER_CPUID_EBX_AESKLE	BIT(0)
+#define KEYLOCKER_CPUID_EBX_WIDE	BIT(2)
+#define KEYLOCKER_CPUID_EBX_BACKUP	BIT(4)
+
 #endif /*__ASSEMBLY__ */
 #endif /* _ASM_KEYLOCKER_H */

From patchwork Fri Mar 29 01:53:39 2024
X-Patchwork-Submitter: "Chang S.
Bae" X-Patchwork-Id: 13609974
From: "Chang S. Bae"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com, Dave Hansen
Subject: [PATCH v9 07/14] x86/cpu/keylocker: Load a wrapping key at boot time
Date: Thu, 28 Mar 2024 18:53:39 -0700
Message-Id: <20240329015346.635933-8-chang.seok.bae@intel.com>
In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com>
References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com>

The wrapping key is an entity to encode a clear text key into a key
handle.
This key is a pivot in protecting user keys. So the value has to be
randomized before being loaded in the software-invisible CPU state.

The wrapping key needs to be established before the first user. Given
that the only proposed Linux use case for Key Locker is dm-crypt, the
feature could be lazily enabled before the first dm-crypt user arrives.
But there is no precedent for late enabling of CPU features, and it
adds maintenance burden without demonstrable benefit beyond minimizing
the visibility of Key Locker to userspace.

Therefore, generate random bytes and load them at boot time, which
involves clobbering XMM registers. Perform this process under
arch_initcall(), ensuring that it occurs after FPU initialization.
Finally, flush out the random bytes after loading.

Given that the Linux Key Locker support is only intended for bare metal
dm-crypt use, and that switching the wrapping key per virtual machine
is impractical, explicitly skip this setup in the
X86_FEATURE_HYPERVISOR case.

Signed-off-by: Chang S. Bae
Cc: Eric Biggers
Cc: Dave Hansen
Cc: "Elliott, Robert (Servers)"
Cc: Dan Williams
---
Changes from v8:
* Invoke the setup code via arch_initcall(). The move was due to the
  upstream changes: commit b81fac906a8f ("x86/fpu: Move FPU
  initialization into arch_cpu_finalize_init()") delays the FPU setup.
* Tweak code comments and the changelog.
* Revoke the review tag as the code change is significant.

Changes from v6:
* Switch to 'static inline' for the empty functions, instead of macros
  that disallow type checks. (Eric Biggers and Dave Hansen)
* Use memzero_explicit() to wipe out the key data instead of writing
  the poison value over there. (Robert Elliott)
* Massage the changelog for better readability.

Changes from v5:
* Call out the disabling when the feature is available on a virtual
  machine. Then, it will turn off the feature flag.

Changes from RFC v2:
* Make bare metal only.
* Clean up the code (e.g. dynamically allocate the key cache).
  (Dan Williams)
* Massage the changelog.
* Move out the LOADIWKEY wrapper and the Key Locker CPUID defines.
---
 arch/x86/kernel/Makefile    |  1 +
 arch/x86/kernel/keylocker.c | 77 +++++++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+)
 create mode 100644 arch/x86/kernel/keylocker.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 74077694da7d..d105e5785b90 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -137,6 +137,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_regs.o
 obj-$(CONFIG_TRACING)			+= tracepoint.o
 obj-$(CONFIG_SCHED_MC_PRIO)		+= itmt.o
 obj-$(CONFIG_X86_UMIP)			+= umip.o
+obj-$(CONFIG_X86_KEYLOCKER)		+= keylocker.o
 obj-$(CONFIG_UNWINDER_ORC)		+= unwind_orc.o
 obj-$(CONFIG_UNWINDER_FRAME_POINTER)	+= unwind_frame.o

diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
new file mode 100644
index 000000000000..0d6b715baf1e
--- /dev/null
+++ b/arch/x86/kernel/keylocker.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * Setup Key Locker feature and support the wrapping key management.
+ */
+
+#include
+#include
+
+#include
+#include
+#include
+
+static struct iwkey wrapping_key __initdata;
+
+static void __init generate_keylocker_data(void)
+{
+	get_random_bytes(&wrapping_key.integrity_key, sizeof(wrapping_key.integrity_key));
+	get_random_bytes(&wrapping_key.encryption_key, sizeof(wrapping_key.encryption_key));
+}
+
+static void __init destroy_keylocker_data(void)
+{
+	memzero_explicit(&wrapping_key, sizeof(wrapping_key));
+}
+
+/*
+ * For loading the wrapping key into each CPU, the feature bit is set
+ * in the control register and FPU context management is performed.
+ */
+static void __init load_keylocker(struct work_struct *unused)
+{
+	cr4_set_bits(X86_CR4_KEYLOCKER);
+
+	kernel_fpu_begin();
+	load_xmm_iwkey(&wrapping_key);
+	kernel_fpu_end();
+}
+
+static int __init init_keylocker(void)
+{
+	u32 eax, ebx, ecx, edx;
+
+	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
+		goto disable;
+
+	if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) {
+		pr_debug("x86/keylocker: Not compatible with a hypervisor.\n");
+		goto clear_cap;
+	}
+
+	cr4_set_bits(X86_CR4_KEYLOCKER);
+
+	/* AESKLE depends on CR4.KEYLOCKER */
+	cpuid_count(KEYLOCKER_CPUID, 0, &eax, &ebx, &ecx, &edx);
+	if (!(ebx & KEYLOCKER_CPUID_EBX_AESKLE) ||
+	    !(eax & KEYLOCKER_CPUID_EAX_SUPERVISOR)) {
+		pr_debug("x86/keylocker: Not fully supported.\n");
+		goto clear_cap;
+	}
+
+	generate_keylocker_data();
+	schedule_on_each_cpu(load_keylocker);
+	destroy_keylocker_data();
+
+	pr_info_once("x86/keylocker: Enabled.\n");
+	return 0;
+
+clear_cap:
+	setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
+	pr_info_once("x86/keylocker: Disabled.\n");
+disable:
+	cr4_clear_bits(X86_CR4_KEYLOCKER);
+	return -ENODEV;
+}
+
+arch_initcall(init_keylocker);

From patchwork Fri Mar 29 01:53:40 2024
X-Patchwork-Submitter: "Chang S.
Bae" X-Patchwork-Id: 13609975
From: "Chang S. Bae"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com, "Rafael J .
Wysocki" , Dave Hansen , Sangwhan Moon
Subject: [PATCH v9 08/14] x86/PM/keylocker: Restore the wrapping key on the resume from ACPI S3/4
Date: Thu, 28 Mar 2024 18:53:40 -0700
Message-Id: <20240329015346.635933-9-chang.seok.bae@intel.com>
In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com>
References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com>

The primary use case for the feature is bare metal dm-crypt. The key
needs to be restored properly on wakeup, as dm-crypt does not prompt
for the key on resume from suspend. Even if a prompt is performed to
unlock the volume where the hibernation image is stored, dm-crypt
still expects to reuse the key handles within the hibernation image
once it is loaded.

== Wrapping-key Restore ==

To meet dm-crypt's expectations, the key handles in the suspend-image
have to remain valid after resuming from an S-state. However, when the
system enters ACPI S3 or S4 sleep states, the wrapping key is
discarded.

Key Locker provides a mechanism to back up the wrapping key in
non-volatile storage. Therefore, upon boot, request a backup of the
wrapping key and copy it back to each CPU upon wakeup. If the backup
mechanism is unavailable, disable the feature unless CONFIG_SUSPEND=n.

== Restore Failure ==

In the event of a key restore failure, the kernel proceeds with an
initialized wrapping key state. This action invalidates any key
handles present in the suspend-image, leading to I/O errors in
dm-crypt operations. However, data integrity remains intact, and
access is restored with new handles created by the new wrapping key at
the next boot.
At a minimum, manage a feature-specific flag to communicate with the
crypto implementation, ensuring that the AES instructions stop being
used upon a key restore failure, instead of abruptly disabling the
feature.

== Off-states ==

While the backup may persist in non-volatile media across S5 and G3
"off" states, it is neither architecturally guaranteed nor expected by
dm-crypt. Therefore, a reboot can address this scenario with a new
wrapping key, as dm-crypt prompts for the key whenever the volume is
started.

Signed-off-by: Chang S. Bae
Acked-by: Rafael J. Wysocki
Cc: Eric Biggers
Cc: Dave Hansen
Cc: Sangwhan Moon
Cc: Dan Williams
---
Changes from v8:
* Rebase on the previous patch (patch7) changes, separating the
  wrapping key restoration code from the initial load. Previously, the
  identify_cpu() -> setup_keylocker() sequence in the hotplug path
  could hit __init code, leading to an explosion. This change removes
  the initialization code from the hotplug path. (Sangwhan Moon)
* Turn copy_keylocker() to return bool for simplification.
* Rename the flag for clarity: 'valid_kl' -> 'valid_wrapping_key'.
* Don't export the symbol for valid_keylocker(), as AES-KL will be
  built-in (see patch14 for detail).
* Tweak code comments and the changelog.
* Revoke the review tag as the code change is significant.

Changes from v6:
* Limit the symbol export only when needed.
* Improve the coding style -- reduce an indent after
  'if () { ... return; }'. (Eric Biggers) Tweak the comment along with
  that.
* Improve the function prototype, instead of using a macro.
  (Eric Biggers and Dave Hansen)
* Update the documentation:
  - Massage the changelog to clarify the problem-and-solution by
    sections.
  - Clarify the comment about the key restore failure.

Changes from v5:
* Fix the 'valid_kl' flag not to be set when the feature is disabled.
  (Reported by Marvin Hsu marvin.hsu@intel.com) Add the function
  comment about this.
* Improve the error handling in setup_keylocker(). All the error cases
  fall through to the end, which disables the feature; all the
  successful cases return immediately.

Changes from v4:
* Update the changelog and title. (Rafael Wysocki)

Changes from v3:
* Fix the build issue with !X86_KEYLOCKER. (Eric Biggers)

Changes from RFC v2:
* Change the backup key failure handling. (Dan Williams)

Changes from RFC v1:
* Folded the warning message into the if condition check. (Rafael Wysocki)
* Rebase on the changes of the previous patches.
* Added error code for key restoration failures.
* Moved the restore helper.
* Added function descriptions.
---
 arch/x86/include/asm/keylocker.h | 10 ++++
 arch/x86/kernel/cpu/common.c     |  4 +-
 arch/x86/kernel/keylocker.c      | 88 ++++++++++++++++++++++++++++++++
 arch/x86/power/cpu.c             |  2 +
 4 files changed, 103 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
index 1213d273c369..c93102101c41 100644
--- a/arch/x86/include/asm/keylocker.h
+++ b/arch/x86/include/asm/keylocker.h
@@ -28,5 +28,15 @@ struct iwkey {
 #define KEYLOCKER_CPUID_EBX_WIDE	BIT(2)
 #define KEYLOCKER_CPUID_EBX_BACKUP	BIT(4)

+#ifdef CONFIG_X86_KEYLOCKER
+void setup_keylocker(void);
+void restore_keylocker(void);
+extern bool valid_keylocker(void);
+#else
+static inline void setup_keylocker(void) { }
+static inline void restore_keylocker(void) { }
+static inline bool valid_keylocker(void) { return false; }
+#endif
+
 #endif /*__ASSEMBLY__ */
 #endif /* _ASM_KEYLOCKER_H */

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 5c1e6d6be267..bfbb1ca64664 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -62,6 +62,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1826,10 +1827,11 @@ static void identify_cpu(struct cpuinfo_x86 *c)
 	/* Disable the PN if appropriate */
 	squash_the_stupid_serial_number(c);

-	/* Set up SMEP/SMAP/UMIP */
+	/* Setup various Intel-specific CPU security features */
 	setup_smep(c);
 	setup_smap(c);
 	setup_umip(c);
+	setup_keylocker();

 	/* Enable FSGSBASE instructions if available. */
 	if (cpu_has(c, X86_FEATURE_FSGSBASE)) {

diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
index 0d6b715baf1e..d5d11d0263b7 100644
--- a/arch/x86/kernel/keylocker.c
+++ b/arch/x86/kernel/keylocker.c
@@ -9,10 +9,24 @@
 #include
 #include

+#include
 #include

 static struct iwkey wrapping_key __initdata;

+/*
+ * This flag is set when a wrapping key is successfully loaded. If a key
+ * restoration fails, it is reset. This state is exported to the crypto
+ * library, indicating whether Key Locker is usable. Thus, the feature
+ * can be soft-disabled based on this flag.
+ */
+static bool valid_wrapping_key;
+
+bool valid_keylocker(void)
+{
+	return valid_wrapping_key;
+}
+
 static void __init generate_keylocker_data(void)
 {
 	get_random_bytes(&wrapping_key.integrity_key, sizeof(wrapping_key.integrity_key));
@@ -37,9 +51,69 @@
 	kernel_fpu_end();
 }

+/**
+ * copy_keylocker - Copy the wrapping key from the backup.
+ *
+ * Returns: true if successful, otherwise false.
+ */
+static bool copy_keylocker(void)
+{
+	u64 status;
+
+	wrmsrl(MSR_IA32_COPY_IWKEY_TO_LOCAL, 1);
+	rdmsrl(MSR_IA32_IWKEY_COPY_STATUS, status);
+	return !!(status & BIT(0));
+}
+
+/*
+ * On wakeup, APs copy a wrapping key after the boot CPU verifies a valid
+ * backup status through restore_keylocker(). Subsequently, they adhere
+ * to the error handling protocol by invalidating the flag.
+ */
+void setup_keylocker(void)
+{
+	if (!valid_wrapping_key)
+		return;
+
+	cr4_set_bits(X86_CR4_KEYLOCKER);
+
+	if (copy_keylocker())
+		return;
+
+	pr_err_once("x86/keylocker: Invalid copy status.\n");
+	valid_wrapping_key = false;
+}
+
+/* The boot CPU restores the wrapping key in the first place on wakeup. */
+void restore_keylocker(void)
+{
+	u64 backup_status;
+
+	if (!valid_wrapping_key)
+		return;
+
+	rdmsrl(MSR_IA32_IWKEY_BACKUP_STATUS, backup_status);
+	if (backup_status & BIT(0)) {
+		if (copy_keylocker())
+			return;
+		pr_err("x86/keylocker: Invalid copy state.\n");
+	} else {
+		pr_err("x86/keylocker: The key backup access failed with %s.\n",
+		       (backup_status & BIT(2)) ? "read error" : "invalid status");
+	}
+
+	/*
+	 * Invalidate the feature via this flag to indicate that the
+	 * crypto code should voluntarily stop using the feature, rather
+	 * than abruptly disabling it.
+	 */
+	valid_wrapping_key = false;
+}
+
 static int __init init_keylocker(void)
 {
 	u32 eax, ebx, ecx, edx;
+	bool backup_available;

 	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
 		goto disable;
@@ -59,9 +133,23 @@ static int __init init_keylocker(void)
 		goto clear_cap;
 	}

+	/*
+	 * The backup is critical for restoring the wrapping key upon
+	 * wakeup.
+	 */
+	backup_available = !!(ebx & KEYLOCKER_CPUID_EBX_BACKUP);
+	if (!backup_available && IS_ENABLED(CONFIG_SUSPEND)) {
+		pr_debug("x86/keylocker: No key backup with possible S3/4.\n");
+		goto clear_cap;
+	}
+
 	generate_keylocker_data();
 	schedule_on_each_cpu(load_keylocker);
 	destroy_keylocker_data();
+	valid_wrapping_key = true;
+
+	if (backup_available)
+		wrmsrl(MSR_IA32_BACKUP_IWKEY_TO_PLATFORM, 1);

 	pr_info_once("x86/keylocker: Enabled.\n");
 	return 0;

diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index 63230ff8cf4f..e99be45354cd 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include

 #ifdef CONFIG_X86_32
 __visible unsigned long saved_context_ebx;
@@ -264,6 +265,7 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
 	x86_platform.restore_sched_clock_state();
 	cache_bp_restore();
 	perf_restore_debug_store();
+	restore_keylocker();

 	c = &cpu_data(smp_processor_id());
 	if (cpu_has(c, X86_FEATURE_MSR_IA32_FEAT_CTL))

From patchwork Fri Mar 29 01:53:41 2024
From: "Chang S.
Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com, Sangwhan Moon Subject: [PATCH v9 09/14] x86/hotplug/keylocker: Ensure wrapping key backup capability Date: Thu, 28 Mar 2024 18:53:41 -0700 Message-Id: <20240329015346.635933-10-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com> References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 To facilitate CPU hotplug, the wrapping key needs to be loaded during CPU hotplug bringup. setup_keylocker() already establishes the routine for the wakeup path by copying the key from the backup state. Disable the feature if it's missing with CONFIG_HOTPLUG_CPU=y. Also, update the code comment to indicate support for CPU hotplug. Signed-off-by: Chang S. Bae Cc: Sangwhan Moon --- Changes from v8: * Add as a new patch. --- arch/x86/kernel/keylocker.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c index d5d11d0263b7..1b57e11d93ad 100644 --- a/arch/x86/kernel/keylocker.c +++ b/arch/x86/kernel/keylocker.c @@ -69,6 +69,8 @@ static bool copy_keylocker(void) * On wakeup, APs copy a wrapping key after the boot CPU verifies a valid * backup status through restore_keylocker(). Subsequently, they adhere * to the error handling protocol by invalidating the flag. + * + * This setup routine is also invoked in the hotplug bringup path. 
 */
 void setup_keylocker(void)
 {
@@ -135,11 +137,11 @@ static int __init init_keylocker(void)

 	/*
 	 * The backup is critical for restoring the wrapping key upon
-	 * wakeup.
+	 * wakeup or during hotplug bringup.
 	 */
 	backup_available = !!(ebx & KEYLOCKER_CPUID_EBX_BACKUP);
-	if (!backup_available && IS_ENABLED(CONFIG_SUSPEND)) {
-		pr_debug("x86/keylocker: No key backup with possible S3/4.\n");
+	if (!backup_available && (IS_ENABLED(CONFIG_SUSPEND) || IS_ENABLED(CONFIG_HOTPLUG_CPU))) {
+		pr_debug("x86/keylocker: No key backup with possible S3/4 or CPU hotplug.\n");
 		goto clear_cap;
 	}

From patchwork Fri Mar 29 01:53:42 2024
From: "Chang S.
Bae"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com, Dave Hansen , Pawan Gupta
Subject: [PATCH v9 10/14] x86/cpu/keylocker: Check Gather Data Sampling mitigation
Date: Thu, 28 Mar 2024 18:53:42 -0700
Message-Id: <20240329015346.635933-11-chang.seok.bae@intel.com>
In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com>
References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com>

Gather Data Sampling (GDS) is a transient execution side-channel issue in some CPU models. When this vulnerability is not addressed, stale data in registers is not guaranteed to remain secret. In Key Locker usage, the temporary storage of the original key in registers during AES transformations poses a risk: the key material can become stale in some implementations, leaving it susceptible to leakage.

To mitigate this vulnerability, a qualified microcode image must be applied. Software then verifies the mitigation state using MSRs. Add code to ensure that the mitigation is installed and securely locked; otherwise, disable the feature.

Signed-off-by: Chang S. Bae
Cc: Dave Hansen
Cc: Pawan Gupta
---
Changes from v8:
* Add as a new patch.

Note that the code follows the guidance from [1]: "Intel recommends that system software does not enable Key Locker (by setting CR4.KL) unless the GDS mitigation is enabled (IA32_MCU_OPT_CTRL[GDS_MITG_DIS] (bit 4) is 0) and locked (IA32_MCU_OPT_CTRL [GDS_MITG_LOCK](bit 5) is 1)."
[1] https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/gather-data-sampling.html
---
 arch/x86/kernel/keylocker.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
index 1b57e11d93ad..d4f3aa65ea8a 100644
--- a/arch/x86/kernel/keylocker.c
+++ b/arch/x86/kernel/keylocker.c
@@ -7,6 +7,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -112,6 +113,37 @@ void restore_keylocker(void)
 	valid_wrapping_key = false;
 }

+/*
+ * The mitigation is implemented at a microcode level. Ensure that the
+ * microcode update is applied and the mitigation is locked.
+ */
+static bool __init have_gds_mitigation(void)
+{
+	u64 mcu_ctrl;
+
+	/* GDS_CTRL is set if new microcode is loaded. */
+	if (!(x86_read_arch_cap_msr() & ARCH_CAP_GDS_CTRL))
+		goto vulnerable;
+
+	/* If GDS_MITG_LOCKED is set, GDS_MITG_DIS is forced to 0. */
+	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
+	if (mcu_ctrl & GDS_MITG_LOCKED)
+		return true;
+
+vulnerable:
+	pr_warn("x86/keylocker: Susceptible to the GDS vulnerability.\n");
+	return false;
+}
+
+/* Check if Key Locker is secure enough to be used. */
+static bool __init secure_keylocker(void)
+{
+	if (boot_cpu_has_bug(X86_BUG_GDS) && !have_gds_mitigation())
+		return false;
+
+	return true;
+}
+
 static int __init init_keylocker(void)
 {
 	u32 eax, ebx, ecx, edx;
@@ -125,6 +157,9 @@ static int __init init_keylocker(void)
 		goto clear_cap;
 	}

+	if (!secure_keylocker())
+		goto clear_cap;
+
 	cr4_set_bits(X86_CR4_KEYLOCKER);

 	/* AESKLE depends on CR4.KEYLOCKER */

From patchwork Fri Mar 29 01:53:43 2024
From: "Chang S. Bae"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com, Dave Hansen , Pawan Gupta
Subject: [PATCH v9 11/14] x86/cpu/keylocker: Check Register File Data Sampling mitigation
Date: Thu, 28 Mar 2024 18:53:43 -0700
Message-Id: <20240329015346.635933-12-chang.seok.bae@intel.com>
In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com>
References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com>

The Register File Data Sampling vulnerability may allow malicious userspace programs to
infer stale kernel register data, potentially exposing sensitive key values, including AES keys.

To address this vulnerability, a microcode update needs to be applied to the CPU, which modifies the VERW instruction to flush the affected CPU buffers. The kernel already has a facility to flush CPU buffers before returning to userspace, indicated by the X86_FEATURE_CLEAR_CPU_BUF flag.

Ensure the mitigation before enabling Key Locker. Do not enable the feature on CPUs that are affected by the vulnerability but lack the mitigation.

Signed-off-by: Chang S. Bae
Cc: Dave Hansen
Cc: Pawan Gupta
---
Change from v8:
* Add as a new patch.

Note that the code change follows the mitigation guidance [1]: "Software loading Key Locker keys using LOADIWKEY should execute a VERW to clear registers before transitioning to untrusted code to prevent later software from inferring the loaded key."

[1] https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/register-file-data-sampling.html
---
 arch/x86/kernel/keylocker.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
index d4f3aa65ea8a..6e805c4da76d 100644
--- a/arch/x86/kernel/keylocker.c
+++ b/arch/x86/kernel/keylocker.c
@@ -135,12 +135,29 @@ static bool __init have_gds_mitigation(void)
 	return false;
 }

+/*
+ * IA32_ARCH_CAPABILITIES MSR is retrieved during the setting of
+ * X86_BUG_RFDS. Ensure that the mitigation is applied to flush CPU
+ * buffers by checking the flag.
+ */
+static bool __init have_rfds_mitigation(void)
+{
+	if (boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
+		return true;
+
+	pr_warn("x86/keylocker: Susceptible to the RFDS vulnerability.\n");
+	return false;
+}
+
 /* Check if Key Locker is secure enough to be used.
 */
 static bool __init secure_keylocker(void)
 {
 	if (boot_cpu_has_bug(X86_BUG_GDS) && !have_gds_mitigation())
 		return false;

+	if (boot_cpu_has_bug(X86_BUG_RFDS) && !have_rfds_mitigation())
+		return false;
+
 	return true;
 }

From patchwork Fri Mar 29 01:53:44 2024
From: "Chang S.
Bae"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v9 12/14] x86/Kconfig: Add a configuration for Key Locker
Date: Thu, 28 Mar 2024 18:53:44 -0700
Message-Id: <20240329015346.635933-13-chang.seok.bae@intel.com>
In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com>
References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com>

Add CONFIG_X86_KEYLOCKER to gate whether Key Locker is initialized at boot. The option is selected by the Key Locker cipher module CRYPTO_AES_KL (to be added in a later patch).

Signed-off-by: Chang S. Bae
Reviewed-by: Dan Williams
Cc: Borislav Petkov
---
Changes from v8:
* Drop the "nokeylocker" option. (Borislav Petkov)
Changes from v6:
* Rebase on the upstream: commit a894a8a56b57 ("Documentation: kernel-parameters: sort all "no..." parameters")
Changes from RFC v2:
* Make the option selected by CRYPTO_AES_KL. (Dan Williams)
* Massage the changelog and the config option description.
---
 arch/x86/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 39886bab943a..41eb88dcfb62 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1878,6 +1878,9 @@ config X86_INTEL_MEMORY_PROTECTION_KEYS
 	  If unsure, say y.
+config X86_KEYLOCKER
+	bool
+
 choice
 	prompt "TSX enable mode"
 	depends on CPU_SUP_INTEL

From patchwork Fri Mar 29 01:53:45 2024
From: "Chang S.
Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com Subject: [PATCH v9 13/14] crypto: x86/aes - Prepare for new AES-XTS implementation Date: Thu, 28 Mar 2024 18:53:45 -0700 Message-Id: <20240329015346.635933-14-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com> References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The key Locker's AES instruction set ('AES-KL') shares a similar programming interface with AES-NI. The internal ABI in the assembly code will have the same prototype as AES-NI, and the glue code will also be identical. The upcoming AES code will exclusively support the XTS mode as disk encryption is the only intended use case. Refactor the XTS-related code to eliminate code duplication and relocate certain constant values to make them shareable. Also, introduce wrappers for data transformation functions to return an error code, as AES-KL may populate it. Introduce union x86_aes_ctx as AES-KL will reference an encoded form instead of an expanded AES key. This allows different AES context formats in the shared code. Inline the refactored code to the caller to prevent the potential overhead of indirect calls. No functional changes or performance regressions are intended. Signed-off-by: Chang S. Bae Acked-by: Dan Williams Cc: Eric Biggers Cc: Ard Biesheuvel Cc: Herbert Xu --- Changes from v8: * Rebase on the AES-NI changes in the mainline -- mostly cleanup works. 
* Introduce 'union x86_aes_ctx'. (Eric Biggers)
* Ensure 'inline' for wrapper functions.
* Tweak the changelog.
Changes from v7:
* Remove aesni_dec() as it is not referenced by the refactored helpers. But keep the ASM symbol '__aesni_dec' to make it appear consistent with its counterpart '__aesni_enc'.
* Call out 'AES-XTS' in the subject.
Changes from v6:
* Inline the helper code to avoid the indirect call. (Eric Biggers)
* Rename the filename: aes-intel* -> aes-helper*. (Eric Biggers)
* Don't export symbols here yet. Instead, do it when needed later.
* Improve the coding style:
  - Follow the symbol convention: '_' -> '__' (Eric Biggers)
  - Fix a style issue -- 'dst = src = ...' caught by checkpatch.pl: "CHECK: multiple assignments should be avoided"
* Cleanup: move some defines back to the AES-NI code as they are not used by AES-KL.
Changes from v5:
* Clean up the stale function definition -- cbc_crypt_common().
* Ensure kernel_fpu_end() for the possible error return from xts_crypt_common()->crypt1_fn().
Changes from v4:
* Drop CBC mode changes. (Eric Biggers)
Changes from v3:
* Drop ECB and CTR mode changes. (Eric Biggers)
* Export symbols. (Eric Biggers)
Changes from RFC v2:
* Massage the changelog. (Dan Williams)
Changes from RFC v1:
* Added as a new patch.
(Ard Biesheuvel) --- arch/x86/crypto/aes-helper_asm.S | 22 +++ arch/x86/crypto/aes-helper_glue.h | 167 ++++++++++++++++++++++ arch/x86/crypto/aesni-intel_asm.S | 47 +++---- arch/x86/crypto/aesni-intel_glue.c | 213 ++++++++--------------------- 4 files changed, 261 insertions(+), 188 deletions(-) create mode 100644 arch/x86/crypto/aes-helper_asm.S create mode 100644 arch/x86/crypto/aes-helper_glue.h diff --git a/arch/x86/crypto/aes-helper_asm.S b/arch/x86/crypto/aes-helper_asm.S new file mode 100644 index 000000000000..b31abcdf63cb --- /dev/null +++ b/arch/x86/crypto/aes-helper_asm.S @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +/* + * Constant values shared between AES implementations: + */ + +.pushsection .rodata +.align 16 +.Lcts_permute_table: + .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 + .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 + .byte 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 + .byte 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f + .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 + .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 +.popsection + +.section .rodata.cst16.gf128mul_x_ble_mask, "aM", @progbits, 16 +.align 16 +.Lgf128mul_x_ble_mask: + .octa 0x00000000000000010000000000000087 +.previous diff --git a/arch/x86/crypto/aes-helper_glue.h b/arch/x86/crypto/aes-helper_glue.h new file mode 100644 index 000000000000..52ba1fe5cf71 --- /dev/null +++ b/arch/x86/crypto/aes-helper_glue.h @@ -0,0 +1,167 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* + * Shared glue code between AES implementations, refactored from the AES-NI's. + * + * The helper code is inlined for a performance reason. With the mitigation + * for speculative executions like retpoline, indirect calls become very + * expensive at a cost of measurable overhead. 
+ */ + +#ifndef _AES_HELPER_GLUE_H +#define _AES_HELPER_GLUE_H + +#include +#include +#include +#include +#include +#include +#include + +#define AES_ALIGN 16 +#define AES_ALIGN_ATTR __attribute__((__aligned__(AES_ALIGN))) +#define AES_ALIGN_EXTRA ((AES_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1)) +#define XTS_AES_CTX_SIZE (sizeof(struct aes_xts_ctx) + AES_ALIGN_EXTRA) + +/* + * Preserve data types for various AES implementations available in x86 + */ +union x86_aes_ctx { + struct crypto_aes_ctx aesni; +}; + +struct aes_xts_ctx { + union x86_aes_ctx tweak_ctx AES_ALIGN_ATTR; + union x86_aes_ctx crypt_ctx AES_ALIGN_ATTR; +}; + +static inline void *aes_align_addr(void *addr) +{ + return (crypto_tfm_ctx_alignment() >= AES_ALIGN) ? addr : PTR_ALIGN(addr, AES_ALIGN); +} + +static inline struct aes_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm) +{ + return aes_align_addr(crypto_skcipher_ctx(tfm)); +} + +static inline int +xts_setkey_common(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen, + int (*fn)(union x86_aes_ctx *ctx, const u8 *in_key, unsigned int key_len)) +{ + struct aes_xts_ctx *ctx = aes_xts_ctx(tfm); + int err; + + err = xts_verify_key(tfm, key, keylen); + if (err) + return err; + + keylen /= 2; + + /* first half of xts-key is for crypt */ + err = fn(&ctx->crypt_ctx, key, keylen); + if (err) + return err; + + /* second half of xts-key is for tweak */ + return fn(&ctx->tweak_ctx, key + keylen, keylen); +} + +static inline int +xts_crypt_common(struct skcipher_request *req, + int (*crypt_fn)(const union x86_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv), + int (*crypt1_fn)(const void *ctx, u8 *out, const u8 *in)) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct aes_xts_ctx *ctx = aes_xts_ctx(tfm); + int tail = req->cryptlen % AES_BLOCK_SIZE; + struct skcipher_request subreq; + struct skcipher_walk walk; + int err; + + if (req->cryptlen < AES_BLOCK_SIZE) + return -EINVAL; + + err = skcipher_walk_virt(&walk, 
req, false); + if (!walk.nbytes) + return err; + + if (unlikely(tail > 0 && walk.nbytes < walk.total)) { + int blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2; + + skcipher_walk_abort(&walk); + + skcipher_request_set_tfm(&subreq, tfm); + skcipher_request_set_callback(&subreq, + skcipher_request_flags(req), + NULL, NULL); + skcipher_request_set_crypt(&subreq, req->src, req->dst, + blocks * AES_BLOCK_SIZE, req->iv); + req = &subreq; + + err = skcipher_walk_virt(&walk, req, false); + if (!walk.nbytes) + return err; + } else { + tail = 0; + } + + kernel_fpu_begin(); + + /* calculate first value of T */ + err = crypt1_fn(&ctx->tweak_ctx, walk.iv, walk.iv); + if (err) { + kernel_fpu_end(); + return err; + } + + while (walk.nbytes > 0) { + int nbytes = walk.nbytes; + + if (nbytes < walk.total) + nbytes &= ~(AES_BLOCK_SIZE - 1); + + err = crypt_fn(&ctx->crypt_ctx, walk.dst.virt.addr, walk.src.virt.addr, + nbytes, walk.iv); + kernel_fpu_end(); + if (err) + return err; + + err = skcipher_walk_done(&walk, walk.nbytes - nbytes); + + if (walk.nbytes > 0) + kernel_fpu_begin(); + } + + if (unlikely(tail > 0 && !err)) { + struct scatterlist sg_src[2], sg_dst[2]; + struct scatterlist *src, *dst; + + src = scatterwalk_ffwd(sg_src, req->src, req->cryptlen); + if (req->dst != req->src) + dst = scatterwalk_ffwd(sg_dst, req->dst, req->cryptlen); + else + dst = src; + + skcipher_request_set_crypt(req, src, dst, AES_BLOCK_SIZE + tail, + req->iv); + + err = skcipher_walk_virt(&walk, &subreq, false); + if (err) + return err; + + kernel_fpu_begin(); + err = crypt_fn(&ctx->crypt_ctx, walk.dst.virt.addr, walk.src.virt.addr, + walk.nbytes, walk.iv); + kernel_fpu_end(); + if (err) + return err; + + err = skcipher_walk_done(&walk, 0); + } + return err; +} + +#endif /* _AES_HELPER_GLUE_H */ diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S index 7ecb55cae3d6..1015a36a73a0 100644 --- a/arch/x86/crypto/aesni-intel_asm.S +++ 
b/arch/x86/crypto/aesni-intel_asm.S @@ -28,6 +28,7 @@ #include #include #include +#include "aes-helper_asm.S" /* * The following macros are used to move an (un)aligned 16 byte value to/from @@ -1934,9 +1935,9 @@ SYM_FUNC_START(aesni_set_key) SYM_FUNC_END(aesni_set_key) /* - * void aesni_enc(const void *ctx, u8 *dst, const u8 *src) + * void __aesni_enc(const void *ctx, u8 *dst, const u8 *src) */ -SYM_FUNC_START(aesni_enc) +SYM_FUNC_START(__aesni_enc) FRAME_BEGIN #ifndef __x86_64__ pushl KEYP @@ -1955,7 +1956,7 @@ SYM_FUNC_START(aesni_enc) #endif FRAME_END RET -SYM_FUNC_END(aesni_enc) +SYM_FUNC_END(__aesni_enc) /* * _aesni_enc1: internal ABI @@ -2123,9 +2124,9 @@ SYM_FUNC_START_LOCAL(_aesni_enc4) SYM_FUNC_END(_aesni_enc4) /* - * void aesni_dec (const void *ctx, u8 *dst, const u8 *src) + * void __aesni_dec (const void *ctx, u8 *dst, const u8 *src) */ -SYM_FUNC_START(aesni_dec) +SYM_FUNC_START(__aesni_dec) FRAME_BEGIN #ifndef __x86_64__ pushl KEYP @@ -2145,7 +2146,7 @@ SYM_FUNC_START(aesni_dec) #endif FRAME_END RET -SYM_FUNC_END(aesni_dec) +SYM_FUNC_END(__aesni_dec) /* * _aesni_dec1: internal ABI @@ -2688,22 +2689,14 @@ SYM_FUNC_START(aesni_cts_cbc_dec) RET SYM_FUNC_END(aesni_cts_cbc_dec) +#ifdef __x86_64__ + .pushsection .rodata .align 16 -.Lcts_permute_table: - .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 - .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 - .byte 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 - .byte 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f - .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 - .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 -#ifdef __x86_64__ .Lbswap_mask: .byte 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 -#endif .popsection -#ifdef __x86_64__ /* * _aesni_inc_init: internal ABI * setup registers used by _aesni_inc @@ -2818,12 +2811,6 @@ SYM_FUNC_END(aesni_ctr_enc) #endif -.section .rodata.cst16.gf128mul_x_ble_mask, "aM", @progbits, 16 -.align 16 -.Lgf128mul_x_ble_mask: - .octa 
0x00000000000000010000000000000087 -.previous - /* * _aesni_gf128mul_x_ble: internal ABI * Multiply in GF(2^128) for XTS IVs @@ -2843,10 +2830,10 @@ SYM_FUNC_END(aesni_ctr_enc) pxor KEY, IV; /* - * void aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *dst, - * const u8 *src, unsigned int len, le128 *iv) + * void __aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *dst, + * const u8 *src, unsigned int len, le128 *iv) */ -SYM_FUNC_START(aesni_xts_encrypt) +SYM_FUNC_START(__aesni_xts_encrypt) FRAME_BEGIN #ifndef __x86_64__ pushl IVP @@ -2995,13 +2982,13 @@ SYM_FUNC_START(aesni_xts_encrypt) movups STATE, (OUTP) jmp .Lxts_enc_ret -SYM_FUNC_END(aesni_xts_encrypt) +SYM_FUNC_END(__aesni_xts_encrypt) /* - * void aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *dst, - * const u8 *src, unsigned int len, le128 *iv) + * void __aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *dst, + * const u8 *src, unsigned int len, le128 *iv) */ -SYM_FUNC_START(aesni_xts_decrypt) +SYM_FUNC_START(__aesni_xts_decrypt) FRAME_BEGIN #ifndef __x86_64__ pushl IVP @@ -3157,4 +3144,4 @@ SYM_FUNC_START(aesni_xts_decrypt) movups STATE, (OUTP) jmp .Lxts_dec_ret -SYM_FUNC_END(aesni_xts_decrypt) +SYM_FUNC_END(__aesni_xts_decrypt) diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 0ea3abaaa645..4ac7b9a28967 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -36,33 +36,25 @@ #include #include +#include "aes-helper_glue.h" -#define AESNI_ALIGN 16 -#define AESNI_ALIGN_ATTR __attribute__ ((__aligned__(AESNI_ALIGN))) -#define AES_BLOCK_MASK (~(AES_BLOCK_SIZE - 1)) #define RFC4106_HASH_SUBKEY_SIZE 16 -#define AESNI_ALIGN_EXTRA ((AESNI_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1)) -#define CRYPTO_AES_CTX_SIZE (sizeof(struct crypto_aes_ctx) + AESNI_ALIGN_EXTRA) -#define XTS_AES_CTX_SIZE (sizeof(struct aesni_xts_ctx) + AESNI_ALIGN_EXTRA) +#define AES_BLOCK_MASK (~(AES_BLOCK_SIZE - 1)) +#define CRYPTO_AES_CTX_SIZE 
(sizeof(struct crypto_aes_ctx) + AES_ALIGN_EXTRA) /* This data is stored at the end of the crypto_tfm struct. * It's a type of per "session" data storage location. * This needs to be 16 byte aligned. */ struct aesni_rfc4106_gcm_ctx { - u8 hash_subkey[16] AESNI_ALIGN_ATTR; - struct crypto_aes_ctx aes_key_expanded AESNI_ALIGN_ATTR; + u8 hash_subkey[16] AES_ALIGN_ATTR; + struct crypto_aes_ctx aes_key_expanded AES_ALIGN_ATTR; u8 nonce[4]; }; struct generic_gcmaes_ctx { - u8 hash_subkey[16] AESNI_ALIGN_ATTR; - struct crypto_aes_ctx aes_key_expanded AESNI_ALIGN_ATTR; -}; - -struct aesni_xts_ctx { - struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR; - struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR; + u8 hash_subkey[16] AES_ALIGN_ATTR; + struct crypto_aes_ctx aes_key_expanded AES_ALIGN_ATTR; }; #define GCM_BLOCK_LEN 16 @@ -80,17 +72,10 @@ struct gcm_context_data { u8 hash_keys[GCM_BLOCK_LEN * 16]; }; -static inline void *aes_align_addr(void *addr) -{ - if (crypto_tfm_ctx_alignment() >= AESNI_ALIGN) - return addr; - return PTR_ALIGN(addr, AESNI_ALIGN); -} - asmlinkage void aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len); -asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in); -asmlinkage void aesni_dec(const void *ctx, u8 *out, const u8 *in); +asmlinkage void __aesni_enc(const void *ctx, u8 *out, const u8 *in); +asmlinkage void __aesni_dec(const void *ctx, u8 *out, const u8 *in); asmlinkage void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len); asmlinkage void aesni_ecb_dec(struct crypto_aes_ctx *ctx, u8 *out, @@ -104,14 +89,20 @@ asmlinkage void aesni_cts_cbc_enc(struct crypto_aes_ctx *ctx, u8 *out, asmlinkage void aesni_cts_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len, u8 *iv); +static inline int aesni_enc(const void *ctx, u8 *out, const u8 *in) +{ + __aesni_enc(ctx, out, in); + return 0; +} + #define AVX_GEN2_OPTSIZE 640 #define AVX_GEN4_OPTSIZE 4096 
-asmlinkage void aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, - const u8 *in, unsigned int len, u8 *iv); +asmlinkage void __aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, + const u8 *in, unsigned int len, u8 *iv); -asmlinkage void aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, - const u8 *in, unsigned int len, u8 *iv); +asmlinkage void __aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, + const u8 *in, unsigned int len, u8 *iv); #ifdef CONFIG_X86_64 @@ -223,11 +214,6 @@ static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx) return aes_align_addr(raw_ctx); } -static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm) -{ - return aes_align_addr(crypto_skcipher_ctx(tfm)); -} - static int aes_set_key_common(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len) { @@ -261,7 +247,7 @@ static void aesni_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) aes_encrypt(ctx, dst, src); } else { kernel_fpu_begin(); - aesni_enc(ctx, dst, src); + __aesni_enc(ctx, dst, src); kernel_fpu_end(); } } @@ -274,11 +260,31 @@ static void aesni_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) aes_decrypt(ctx, dst, src); } else { kernel_fpu_begin(); - aesni_dec(ctx, dst, src); + __aesni_dec(ctx, dst, src); kernel_fpu_end(); } } +static inline int aesni_xts_setkey(union x86_aes_ctx *ctx, + const u8 *in_key, unsigned int key_len) +{ + return aes_set_key_common(&ctx->aesni, in_key, key_len); +} + +static inline int aesni_xts_encrypt(const union x86_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv) +{ + __aesni_xts_encrypt(&ctx->aesni, out, in, len, iv); + return 0; +} + +static inline int aesni_xts_decrypt(const union x86_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv) +{ + __aesni_xts_decrypt(&ctx->aesni, out, in, len, iv); + return 0; +} + static int aesni_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int len) { @@ -524,7 +530,7 @@ 
static int ctr_crypt(struct skcipher_request *req) nbytes &= ~AES_BLOCK_MASK; if (walk.nbytes == walk.total && nbytes > 0) { - aesni_enc(ctx, keystream, walk.iv); + __aesni_enc(ctx, keystream, walk.iv); crypto_xor_cpy(walk.dst.virt.addr + walk.nbytes - nbytes, walk.src.virt.addr + walk.nbytes - nbytes, keystream, nbytes); @@ -668,8 +674,8 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, u8 *iv, void *aes_ctx, u8 *auth_tag, unsigned long auth_tag_len) { - u8 databuf[sizeof(struct gcm_context_data) + (AESNI_ALIGN - 8)] __aligned(8); - struct gcm_context_data *data = PTR_ALIGN((void *)databuf, AESNI_ALIGN); + u8 databuf[sizeof(struct gcm_context_data) + (AES_ALIGN - 8)] __aligned(8); + struct gcm_context_data *data = PTR_ALIGN((void *)databuf, AES_ALIGN); unsigned long left = req->cryptlen; struct scatter_walk assoc_sg_walk; struct skcipher_walk walk; @@ -824,8 +830,8 @@ static int helper_rfc4106_encrypt(struct aead_request *req) struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm); void *aes_ctx = &(ctx->aes_key_expanded); - u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); - u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + u8 ivbuf[16 + (AES_ALIGN - 8)] __aligned(8); + u8 *iv = PTR_ALIGN(&ivbuf[0], AES_ALIGN); unsigned int i; __be32 counter = cpu_to_be32(1); @@ -852,8 +858,8 @@ static int helper_rfc4106_decrypt(struct aead_request *req) struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm); void *aes_ctx = &(ctx->aes_key_expanded); - u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); - u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + u8 ivbuf[16 + (AES_ALIGN - 8)] __aligned(8); + u8 *iv = PTR_ALIGN(&ivbuf[0], AES_ALIGN); unsigned int i; if (unlikely(req->assoclen != 16 && req->assoclen != 20)) @@ -878,126 +884,17 @@ static int helper_rfc4106_decrypt(struct aead_request *req) static int xts_aesni_setkey(struct crypto_skcipher *tfm, 
const u8 *key, unsigned int keylen) { - struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm); - int err; - - err = xts_verify_key(tfm, key, keylen); - if (err) - return err; - - keylen /= 2; - - /* first half of xts-key is for crypt */ - err = aes_set_key_common(&ctx->crypt_ctx, key, keylen); - if (err) - return err; - - /* second half of xts-key is for tweak */ - return aes_set_key_common(&ctx->tweak_ctx, key + keylen, keylen); -} - -static int xts_crypt(struct skcipher_request *req, bool encrypt) -{ - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm); - int tail = req->cryptlen % AES_BLOCK_SIZE; - struct skcipher_request subreq; - struct skcipher_walk walk; - int err; - - if (req->cryptlen < AES_BLOCK_SIZE) - return -EINVAL; - - err = skcipher_walk_virt(&walk, req, false); - if (!walk.nbytes) - return err; - - if (unlikely(tail > 0 && walk.nbytes < walk.total)) { - int blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2; - - skcipher_walk_abort(&walk); - - skcipher_request_set_tfm(&subreq, tfm); - skcipher_request_set_callback(&subreq, - skcipher_request_flags(req), - NULL, NULL); - skcipher_request_set_crypt(&subreq, req->src, req->dst, - blocks * AES_BLOCK_SIZE, req->iv); - req = &subreq; - - err = skcipher_walk_virt(&walk, req, false); - if (!walk.nbytes) - return err; - } else { - tail = 0; - } - - kernel_fpu_begin(); - - /* calculate first value of T */ - aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv); - - while (walk.nbytes > 0) { - int nbytes = walk.nbytes; - - if (nbytes < walk.total) - nbytes &= ~(AES_BLOCK_SIZE - 1); - - if (encrypt) - aesni_xts_encrypt(&ctx->crypt_ctx, - walk.dst.virt.addr, walk.src.virt.addr, - nbytes, walk.iv); - else - aesni_xts_decrypt(&ctx->crypt_ctx, - walk.dst.virt.addr, walk.src.virt.addr, - nbytes, walk.iv); - kernel_fpu_end(); - - err = skcipher_walk_done(&walk, walk.nbytes - nbytes); - - if (walk.nbytes > 0) - kernel_fpu_begin(); - } - - if (unlikely(tail > 0 && !err)) { - 
struct scatterlist sg_src[2], sg_dst[2]; - struct scatterlist *src, *dst; - - dst = src = scatterwalk_ffwd(sg_src, req->src, req->cryptlen); - if (req->dst != req->src) - dst = scatterwalk_ffwd(sg_dst, req->dst, req->cryptlen); - - skcipher_request_set_crypt(req, src, dst, AES_BLOCK_SIZE + tail, - req->iv); - - err = skcipher_walk_virt(&walk, &subreq, false); - if (err) - return err; - - kernel_fpu_begin(); - if (encrypt) - aesni_xts_encrypt(&ctx->crypt_ctx, - walk.dst.virt.addr, walk.src.virt.addr, - walk.nbytes, walk.iv); - else - aesni_xts_decrypt(&ctx->crypt_ctx, - walk.dst.virt.addr, walk.src.virt.addr, - walk.nbytes, walk.iv); - kernel_fpu_end(); - - err = skcipher_walk_done(&walk, 0); - } - return err; + return xts_setkey_common(tfm, key, keylen, aesni_xts_setkey); } static int xts_encrypt(struct skcipher_request *req) { - return xts_crypt(req, true); + return xts_crypt_common(req, aesni_xts_encrypt, aesni_enc); } static int xts_decrypt(struct skcipher_request *req) { - return xts_crypt(req, false); + return xts_crypt_common(req, aesni_xts_decrypt, aesni_enc); } static struct crypto_alg aesni_cipher_alg = { @@ -1152,8 +1049,8 @@ static int generic_gcmaes_encrypt(struct aead_request *req) struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm); void *aes_ctx = &(ctx->aes_key_expanded); - u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); - u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + u8 ivbuf[16 + (AES_ALIGN - 8)] __aligned(8); + u8 *iv = PTR_ALIGN(&ivbuf[0], AES_ALIGN); __be32 counter = cpu_to_be32(1); memcpy(iv, req->iv, 12); @@ -1169,8 +1066,8 @@ static int generic_gcmaes_decrypt(struct aead_request *req) struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm); void *aes_ctx = &(ctx->aes_key_expanded); - u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); - u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + u8 ivbuf[16 + (AES_ALIGN - 8)] __aligned(8); + u8 
*iv = PTR_ALIGN(&ivbuf[0], AES_ALIGN); memcpy(iv, req->iv, 12); *((__be32 *)(iv+12)) = counter; From patchwork Fri Mar 29 01:53:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S. Bae" X-Patchwork-Id: 13609981 X-Patchwork-Delegate: herbert@gondor.apana.org.au From: "Chang S.
Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, elliott@hpe.com, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, chang.seok.bae@intel.com Subject: [PATCH v9 14/14] crypto: x86/aes-kl - Implement the AES-XTS algorithm Date: Thu, 28 Mar 2024 18:53:46 -0700 Message-Id: <20240329015346.635933-15-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240329015346.635933-1-chang.seok.bae@intel.com> References: <20230603152227.12335-1-chang.seok.bae@intel.com> <20240329015346.635933-1-chang.seok.bae@intel.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0

Key Locker is a CPU feature to reduce key exfiltration opportunities. It converts the AES key into an encoded form, called a 'key handle', to reduce the exposure of private key material in memory. This key conversion, as well as all subsequent data transformations, is provided by new AES instructions ('AES-KL'). AES-KL is analogous to AES-NI in that it maintains a similar programming interface.

Support the XTS mode, as the primary use case is dm-crypt. The support has some details worth mentioning, which differentiate it from AES-NI, that users may need to be aware of:

== Key Handle Restriction ==

The AES-KL instruction set supports selecting key usage restrictions at key handle creation time. Restrict all key handles created by the kernel to kernel mode use only. The AES-KL instructions themselves are executable in userspace. This restriction enforces mode consistency in its operation. If a key handle is created in userspace but referenced in the kernel, then the encrypt() and decrypt() functions will return -EINVAL.
== AES-NI Dependency for AES Compliance ==

Key Locker is not AES compliant, as it lacks 192-bit key support. However, per the expectations of Linux crypto-cipher implementations, the software cipher implementation must support all the AES-compliant key sizes. The AES-KL cipher implementation satisfies this constraint by logging a warning and falling back to AES-NI. In other words, the 192-bit key-size limitation on what can be converted into a key handle is only documented, not enforced.

This creates a rather strong dependency on AES-NI. If this driver supported a module build, the exported AES-NI functions could not be inlined. More importantly, indirect calls would impact the performance. To simplify, disallow a module build for AES-KL and always select AES-NI. This restriction can be relaxed if strong use cases arise against it.

== Wrapping Key Restore Failure Handling ==

In the event of hardware failure, the wrapping key is lost across deep sleep states. The wrapping key then turns to zero, which is an unusable state. The x86 core provides valid_keylocker() to indicate the failure. Subsequent setkey() as well as encode()/decode() calls can check it and return -ENODEV on failure. In this way, an error code can be returned instead of facing abrupt exceptions.

== Userspace Exposition ==

The Key Locker implementations so far have measurable performance penalties. So, keep AES-NI as the default. However, with a slow storage device, storage bandwidth is the bottleneck, even if disk encryption is enabled by AES-KL. Thus, selecting AES-KL is an end-user consideration. Users may pick it by the name 'xts-aes-aeskl' shown in /proc/crypto.

== 64-bit Only ==

Support 64-bit only, as the 32-bit kernel is being deprecated.

Signed-off-by: Chang S. Bae
Acked-by: Dan Williams
Cc: Eric Biggers
Cc: Ard Biesheuvel
Cc: Herbert Xu
---
Changes from v8:
* Rebase on the upstream changes.
* Combine the XTS enc/dec assembly code in a macro.
(Eric Biggers)
* Define setkey() as void instead of returning 'int'. (Eric Biggers)
* Rearrange the assembly code to reduce jumps, especially for success cases. (Eric Biggers)
* Update the changelog for clarification. (Eric Biggers)
* Exclude module build.

Changes from v7:
* Update the changelog -- remove 'API Limitation'. (Eric Biggers)
* Update the comment for valid_keylocker(). (Eric Biggers)
* Improve the code:
  - Remove the key-length check and simplify the code. (Eric Biggers)
  - Remove aeskl_dec() and __aeskl_dec() as not needed.
  - Simplify the register-function return handling. (Eric Biggers)
  - Rename setkey functions for coherent naming: aeskl_setkey() -> __aeskl_setkey(), aeskl_setkey_common() -> aeskl_setkey(), aeskl_xts_setkey() -> xts_setkey()
  - Revert an unnecessary comment.

Changes from v6:
* Merge all the AES-KL patches. (Eric Biggers)
* Make the driver 64-bit mode only. (Eric Biggers)
* Rework the key-size check code:
  - Trim unnecessary checks. (Eric Biggers)
  - Document the reason.
  - Make sure both XTS keys have the same size.
* Adjust the Kconfig change:
  - Move the location. (Robert Elliott)
  - Trim the description to follow others such as AES-NI.
* Update the changelog:
  - Explain the priority value for the common name under 'User Exposition' (renamed from 'Performance'). (Eric Biggers)
  - Trim the introduction.
  - Switch to a more imperative mood for those explaining the code change.
  - Add a new section, '64-bit Only'.
* Adjust the ASM code to return a proper error code. (Eric Biggers)
* Update assembly code macros:
  - Remove an unused one.
  - Document the reason for the duplicated ones.

Changes from v5:
* Replace the ret instruction with RET as rebased on the upstream -- commit f94909ceb1ed ("x86: Prepare asm files for straight-line-speculation").

Changes from v3:
* Exclude non-AES-KL objects. (Eric Biggers)
* Simplify the assembler dependency check. (Peter Zijlstra)
* Trim the Kconfig help text. (Dan Williams)
* Fix a defined-but-not-used warning.
Changes from RFC v2:
* Move out each mode support in new patches.
* Update the changelog to describe the limitation and the tradeoff clearly. (Andy Lutomirski)

Changes from RFC v1:
* Rebased on the refactored code. (Ard Biesheuvel)
* Dropped exporting the single block interface. (Ard Biesheuvel)
* Fixed the fallback and error handling paths. (Ard Biesheuvel)
* Revised the module description. (Dave Hansen and Peter Zijlstra)
* Made the build depend on the binutils version to support new instructions. (Borislav Petkov and Peter Zijlstra)
* Updated the changelog accordingly.
--- arch/x86/Kconfig.assembler | 5 + arch/x86/crypto/Kconfig | 17 ++ arch/x86/crypto/Makefile | 3 + arch/x86/crypto/aes-helper_glue.h | 7 +- arch/x86/crypto/aeskl-intel_asm.S | 412 +++++++++++++++++++++++++++++ arch/x86/crypto/aeskl-intel_glue.c | 187 +++++++++++++ arch/x86/crypto/aeskl-intel_glue.h | 35 +++ arch/x86/crypto/aesni-intel_glue.c | 30 +-- arch/x86/crypto/aesni-intel_glue.h | 40 +++ 9 files changed, 704 insertions(+), 32 deletions(-) create mode 100644 arch/x86/crypto/aeskl-intel_asm.S create mode 100644 arch/x86/crypto/aeskl-intel_glue.c create mode 100644 arch/x86/crypto/aeskl-intel_glue.h create mode 100644 arch/x86/crypto/aesni-intel_glue.h diff --git a/arch/x86/Kconfig.assembler b/arch/x86/Kconfig.assembler index 8ad41da301e5..0e58f2b61dd3 100644 --- a/arch/x86/Kconfig.assembler +++ b/arch/x86/Kconfig.assembler @@ -25,6 +25,11 @@ config AS_GFNI help Supported by binutils >= 2.30 and LLVM integrated assembler +config AS_HAS_KEYLOCKER + def_bool $(as-instr,encodekey256 %eax$(comma)%eax) + help + Supported by binutils >= 2.36 and LLVM integrated assembler >= V12 + config AS_WRUSS def_bool $(as-instr,wrussq %rax$(comma)(%rbx)) help diff --git a/arch/x86/crypto/Kconfig b/arch/x86/crypto/Kconfig index c9e59589a1ce..067bb149998b 100644 --- a/arch/x86/crypto/Kconfig +++ b/arch/x86/crypto/Kconfig @@ -29,6 +29,23 @@ config CRYPTO_AES_NI_INTEL Architecture: x86 (32-bit and 64-bit) using: -
AES-NI (AES new instructions) +config CRYPTO_AES_KL + bool "Ciphers: AES, modes: XTS (AES-KL)" + depends on X86 && 64BIT + depends on AS_HAS_KEYLOCKER + select CRYPTO_AES_NI_INTEL + select X86_KEYLOCKER + + help + Block cipher: AES cipher algorithms + Length-preserving ciphers: AES with XTS + + Architecture: x86 (64-bit) using: + - AES-KL (AES Key Locker) + - AES-NI for a 192-bit key + + See Documentation/arch/x86/keylocker.rst for more details. + config CRYPTO_BLOWFISH_X86_64 tristate "Ciphers: Blowfish, modes: ECB, CBC" depends on X86 && 64BIT diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index 9aa46093c91b..ae2aa7abd151 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -50,6 +50,9 @@ obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o aes_ctrby8_avx-x86_64.o +obj-$(CONFIG_CRYPTO_AES_KL) += aeskl-intel.o +aeskl-intel-y := aeskl-intel_asm.o aeskl-intel_glue.o + obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o sha1-ssse3-y := sha1_avx2_x86_64_asm.o sha1_ssse3_asm.o sha1_ssse3_glue.o sha1-ssse3-$(CONFIG_AS_SHA1_NI) += sha1_ni_asm.o diff --git a/arch/x86/crypto/aes-helper_glue.h b/arch/x86/crypto/aes-helper_glue.h index 52ba1fe5cf71..262c1cec0011 100644 --- a/arch/x86/crypto/aes-helper_glue.h +++ b/arch/x86/crypto/aes-helper_glue.h @@ -19,16 +19,17 @@ #include #include +#include "aeskl-intel_glue.h" + #define AES_ALIGN 16 #define AES_ALIGN_ATTR __attribute__((__aligned__(AES_ALIGN))) #define AES_ALIGN_EXTRA ((AES_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1)) #define XTS_AES_CTX_SIZE (sizeof(struct aes_xts_ctx) + AES_ALIGN_EXTRA) -/* - * Preserve data types for various AES implementations available in x86 - */ +/* Data types for the two AES implementations available in x86 */ union x86_aes_ctx { struct crypto_aes_ctx aesni; + struct aeskl_ctx aeskl; }; struct aes_xts_ctx { diff --git a/arch/x86/crypto/aeskl-intel_asm.S 
b/arch/x86/crypto/aeskl-intel_asm.S
new file mode 100644
index 000000000000..81af7f61aab5
--- /dev/null
+++ b/arch/x86/crypto/aeskl-intel_asm.S
@@ -0,0 +1,412 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Implement the AES algorithm using AES Key Locker instructions.
+ *
+ * Most of the code is based on the AES-NI implementation, aesni-intel_asm.S.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include "aes-helper_asm.S"
+
+.text
+
+#define STATE1	%xmm0
+#define STATE2	%xmm1
+#define STATE3	%xmm2
+#define STATE4	%xmm3
+#define STATE5	%xmm4
+#define STATE6	%xmm5
+#define STATE7	%xmm6
+#define STATE8	%xmm7
+#define STATE	STATE1
+
+#define IV	%xmm9
+#define KEY	%xmm10
+#define INC	%xmm13
+
+#define IN	%xmm8
+
+#define HANDLEP	%rdi
+#define OUTP	%rsi
+#define KLEN	%r9d
+#define INP	%rdx
+#define T1	%r10
+#define LEN	%rcx
+#define IVP	%r8
+
+#define UKEYP		OUTP
+#define GF128MUL_MASK	%xmm11
+
+/*
+ * void __aeskl_setkey(struct crypto_aes_ctx *handlep, const u8 *ukeyp,
+ *		       unsigned int key_len)
+ */
+SYM_FUNC_START(__aeskl_setkey)
+	FRAME_BEGIN
+	movl %edx, 480(HANDLEP)
+	movdqu (UKEYP), STATE1
+	mov $1, %eax
+	cmp $16, %dl
+	je .Lsetkey_128
+
+	movdqu 0x10(UKEYP), STATE2
+	encodekey256 %eax, %eax
+	movdqu STATE4, 0x30(HANDLEP)
+	jmp .Lsetkey_end
+.Lsetkey_128:
+	encodekey128 %eax, %eax
+
+.Lsetkey_end:
+	movdqu STATE1, (HANDLEP)
+	movdqu STATE2, 0x10(HANDLEP)
+	movdqu STATE3, 0x20(HANDLEP)
+
+	FRAME_END
+	RET
+SYM_FUNC_END(__aeskl_setkey)
+
+/*
+ * int __aeskl_enc(const void *handlep, u8 *outp, const u8 *inp)
+ */
+SYM_FUNC_START(__aeskl_enc)
+	FRAME_BEGIN
+	movdqu (INP), STATE
+	movl 480(HANDLEP), KLEN
+
+	cmp $16, KLEN
+	je .Lenc_128
+	aesenc256kl (HANDLEP), STATE
+	jz .Lenc_err
+	xor %rax, %rax
+	jmp .Lenc_end
+.Lenc_128:
+	aesenc128kl (HANDLEP), STATE
+	jz .Lenc_err
+	xor %rax, %rax
+	jmp .Lenc_end
+
+.Lenc_err:
+	mov $(-EINVAL), %rax
+.Lenc_end:
+	movdqu STATE, (OUTP)
+	FRAME_END
+	RET
+SYM_FUNC_END(__aeskl_enc)
+
+/*
+ * XTS implementation
+ */
+
+/*
+ * _aeskl_gf128mul_x_ble: internal ABI
+ *	Multiply in GF(2^128) for XTS IVs
+ * input:
+ *	IV:	current IV
+ *	GF128MUL_MASK == mask with 0x87 and 0x01
+ * output:
+ *	IV:	next IV
+ * changed:
+ *	KEY:	used as a temporary
+ *
+ * While based on the AES-NI code, this macro is separated here due to
+ * the register constraint. E.g., aesencwide256kl has implicit
+ * operands: XMM0-7.
+ */
+#define _aeskl_gf128mul_x_ble() \
+	pshufd $0x13, IV, KEY; \
+	paddq IV, IV; \
+	psrad $31, KEY; \
+	pand GF128MUL_MASK, KEY; \
+	pxor KEY, IV;
+
+.macro XTS_ENC_DEC operation
+	FRAME_BEGIN
+	movdqa .Lgf128mul_x_ble_mask(%rip), GF128MUL_MASK
+	movups (IVP), IV
+
+	mov 480(HANDLEP), KLEN
+
+.ifc \operation, dec
+	test $15, LEN
+	jz .Lxts_op8_\@
+	sub $16, LEN
+.endif
+
+.Lxts_op8_\@:
+	sub $128, LEN
+	jl .Lxts_op1_pre_\@
+
+	movdqa IV, STATE1
+	movdqu (INP), INC
+	pxor INC, STATE1
+	movdqu IV, (OUTP)
+
+	_aeskl_gf128mul_x_ble()
+	movdqa IV, STATE2
+	movdqu 0x10(INP), INC
+	pxor INC, STATE2
+	movdqu IV, 0x10(OUTP)
+
+	_aeskl_gf128mul_x_ble()
+	movdqa IV, STATE3
+	movdqu 0x20(INP), INC
+	pxor INC, STATE3
+	movdqu IV, 0x20(OUTP)
+
+	_aeskl_gf128mul_x_ble()
+	movdqa IV, STATE4
+	movdqu 0x30(INP), INC
+	pxor INC, STATE4
+	movdqu IV, 0x30(OUTP)
+
+	_aeskl_gf128mul_x_ble()
+	movdqa IV, STATE5
+	movdqu 0x40(INP), INC
+	pxor INC, STATE5
+	movdqu IV, 0x40(OUTP)
+
+	_aeskl_gf128mul_x_ble()
+	movdqa IV, STATE6
+	movdqu 0x50(INP), INC
+	pxor INC, STATE6
+	movdqu IV, 0x50(OUTP)
+
+	_aeskl_gf128mul_x_ble()
+	movdqa IV, STATE7
+	movdqu 0x60(INP), INC
+	pxor INC, STATE7
+	movdqu IV, 0x60(OUTP)
+
+	_aeskl_gf128mul_x_ble()
+	movdqa IV, STATE8
+	movdqu 0x70(INP), INC
+	pxor INC, STATE8
+	movdqu IV, 0x70(OUTP)
+
+	cmp $16, KLEN
+	je .Lxts_op8_128_\@
+.ifc \operation, dec
+	aesdecwide256kl (%rdi)
+.else
+	aesencwide256kl (%rdi)
+.endif
+	jz .Lxts_op_err_\@
+	jmp .Lxts_op8_end_\@
+.Lxts_op8_128_\@:
+.ifc \operation, dec
+	aesdecwide128kl (%rdi)
+.else
+	aesencwide128kl (%rdi)
+.endif
+	jz .Lxts_op_err_\@
+
+.Lxts_op8_end_\@:
+	movdqu 0x00(OUTP), INC
+	pxor INC, STATE1
+	movdqu STATE1, 0x00(OUTP)
+
+	movdqu 0x10(OUTP), INC
+	pxor INC, STATE2
+	movdqu STATE2, 0x10(OUTP)
+
+	movdqu 0x20(OUTP), INC
+	pxor INC, STATE3
+	movdqu STATE3, 0x20(OUTP)
+
+	movdqu 0x30(OUTP), INC
+	pxor INC, STATE4
+	movdqu STATE4, 0x30(OUTP)
+
+	movdqu 0x40(OUTP), INC
+	pxor INC, STATE5
+	movdqu STATE5, 0x40(OUTP)
+
+	movdqu 0x50(OUTP), INC
+	pxor INC, STATE6
+	movdqu STATE6, 0x50(OUTP)
+
+	movdqu 0x60(OUTP), INC
+	pxor INC, STATE7
+	movdqu STATE7, 0x60(OUTP)
+
+	movdqu 0x70(OUTP), INC
+	pxor INC, STATE8
+	movdqu STATE8, 0x70(OUTP)
+
+	_aeskl_gf128mul_x_ble()
+
+	add $128, INP
+	add $128, OUTP
+	test LEN, LEN
+	jnz .Lxts_op8_\@
+
+.Lxts_op_ret_\@:
+	movups IV, (IVP)
+	xor %rax, %rax
+	FRAME_END
+	RET
+
+.Lxts_op1_pre_\@:
+	add $128, LEN
+	jz .Lxts_op_ret_\@
+.ifc \operation, enc
+	sub $16, LEN
+	jl .Lxts_op_cts4_\@
+.endif
+
+.Lxts_op1_\@:
+	movdqu (INP), STATE1
+
+.ifc \operation, dec
+	add $16, INP
+	sub $16, LEN
+	jl .Lxts_op_cts1_\@
+.endif
+
+	pxor IV, STATE1
+
+	cmp $16, KLEN
+	je .Lxts_op1_128_\@
+.ifc \operation, dec
+	aesdec256kl (HANDLEP), STATE1
+.else
+	aesenc256kl (HANDLEP), STATE1
+.endif
+	jz .Lxts_op_err_\@
+	jmp .Lxts_op1_end_\@
+.Lxts_op1_128_\@:
+.ifc \operation, dec
+	aesdec128kl (HANDLEP), STATE1
+.else
+	aesenc128kl (HANDLEP), STATE1
+.endif
+	jz .Lxts_op_err_\@
+
+.Lxts_op1_end_\@:
+	pxor IV, STATE1
+	_aeskl_gf128mul_x_ble()
+
+	test LEN, LEN
+	jz .Lxts_op1_out_\@
+
+.ifc \operation, enc
+	add $16, INP
+	sub $16, LEN
+	jl .Lxts_op_cts1_\@
+.endif
+
+	movdqu STATE1, (OUTP)
+	add $16, OUTP
+	jmp .Lxts_op1_\@
+
+.Lxts_op1_out_\@:
+	movdqu STATE1, (OUTP)
+	jmp .Lxts_op_ret_\@
+
+.Lxts_op_cts4_\@:
+.ifc \operation, enc
+	movdqu STATE8, STATE1
+	sub $16, OUTP
+.endif
+
+.Lxts_op_cts1_\@:
+.ifc \operation, dec
+	movdqa IV, STATE5
+	_aeskl_gf128mul_x_ble()
+
+	pxor IV, STATE1
+
+	cmp $16, KLEN
+	je .Lxts_dec1_cts_pre_128_\@
+	aesdec256kl (HANDLEP), STATE1
+	jz .Lxts_op_err_\@
+	jmp .Lxts_dec1_cts_pre_end_\@
+.Lxts_dec1_cts_pre_128_\@:
+	aesdec128kl (HANDLEP), STATE1
+	jz .Lxts_op_err_\@
+.Lxts_dec1_cts_pre_end_\@:
+	pxor IV, STATE1
+.endif
+
+	lea .Lcts_permute_table(%rip), T1
+	add LEN, INP		/* rewind input pointer */
+	add $16, LEN		/* # bytes in final block */
+	movups (INP), IN
+
+	mov T1, IVP
+	add $32, IVP
+	add LEN, T1
+	sub LEN, IVP
+	add OUTP, LEN
+
+	movups (T1), STATE2
+	movaps STATE1, STATE3
+	pshufb STATE2, STATE1
+	movups STATE1, (LEN)
+
+	movups (IVP), STATE1
+	pshufb STATE1, IN
+	pblendvb STATE3, IN
+	movaps IN, STATE1
+
+.ifc \operation, dec
+	pxor STATE5, STATE1
+.else
+	pxor IV, STATE1
+.endif
+
+	cmp $16, KLEN
+	je .Lxts_op1_cts_128_\@
+.ifc \operation, dec
+	aesdec256kl (HANDLEP), STATE1
+.else
+	aesenc256kl (HANDLEP), STATE1
+.endif
+	jz .Lxts_op_err_\@
+	jmp .Lxts_op1_cts_end_\@
+.Lxts_op1_cts_128_\@:
+.ifc \operation, dec
+	aesdec128kl (HANDLEP), STATE1
+.else
+	aesenc128kl (HANDLEP), STATE1
+.endif
+	jz .Lxts_op_err_\@
+
+.Lxts_op1_cts_end_\@:
+.ifc \operation, dec
+	pxor STATE5, STATE1
+.else
+	pxor IV, STATE1
+.endif
+	movups STATE1, (OUTP)
+	xor %rax, %rax
+	FRAME_END
+	RET
+
+.Lxts_op_err_\@:
+	mov $(-EINVAL), %rax
+	FRAME_END
+	RET
+.endm
+
+/*
+ * int __aeskl_xts_encrypt(const struct aeskl_ctx *handlep, u8 *outp,
+ *			   const u8 *inp, unsigned int klen, le128 *ivp)
+ */
+SYM_FUNC_START(__aeskl_xts_encrypt)
+	XTS_ENC_DEC enc
+SYM_FUNC_END(__aeskl_xts_encrypt)
+
+/*
+ * int __aeskl_xts_decrypt(const struct aeskl_ctx *handlep, u8 *outp,
+ *			   const u8 *inp, unsigned int klen, le128 *ivp)
+ */
+SYM_FUNC_START(__aeskl_xts_decrypt)
+	XTS_ENC_DEC dec
+SYM_FUNC_END(__aeskl_xts_decrypt)
+
diff --git a/arch/x86/crypto/aeskl-intel_glue.c b/arch/x86/crypto/aeskl-intel_glue.c
new file mode 100644
index 000000000000..7672c4836da8
--- /dev/null
+++ b/arch/x86/crypto/aeskl-intel_glue.c
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Support for AES Key Locker instructions. This file contains glue
+ * code; the real AES implementation is in aeskl-intel_asm.S.
+ *
+ * Most of the code is based on the AES-NI glue code, aesni-intel_glue.c.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "aes-helper_glue.h"
+#include "aesni-intel_glue.h"
+
+asmlinkage void __aeskl_setkey(struct aeskl_ctx *ctx, const u8 *in_key, unsigned int keylen);
+
+asmlinkage int __aeskl_enc(const void *ctx, u8 *out, const u8 *in);
+
+asmlinkage int __aeskl_xts_encrypt(const struct aeskl_ctx *ctx, u8 *out, const u8 *in,
+				   unsigned int len, u8 *iv);
+asmlinkage int __aeskl_xts_decrypt(const struct aeskl_ctx *ctx, u8 *out, const u8 *in,
+				   unsigned int len, u8 *iv);
+
+/*
+ * If a hardware failure occurs, the wrapping key may be lost during
+ * sleep states. The state of the feature can be retrieved via
+ * valid_keylocker().
+ *
+ * Since disabling can occur preemptively, check for availability on
+ * every use along with kernel_fpu_begin().
+ */
+
+static int aeskl_setkey(union x86_aes_ctx *ctx, const u8 *in_key, unsigned int keylen)
+{
+	int err;
+
+	if (!crypto_simd_usable())
+		return -EBUSY;
+
+	err = aes_check_keylen(keylen);
+	if (err)
+		return err;
+
+	if (unlikely(keylen == AES_KEYSIZE_192)) {
+		pr_warn_once("AES-KL does not support 192-bit key. Use AES-NI.\n");
+		kernel_fpu_begin();
+		aesni_set_key(&ctx->aesni, in_key, keylen);
+		kernel_fpu_end();
+		return 0;
+	}
+
+	if (!valid_keylocker())
+		return -ENODEV;
+
+	kernel_fpu_begin();
+	__aeskl_setkey(&ctx->aeskl, in_key, keylen);
+	kernel_fpu_end();
+	return 0;
+}
+
+static inline int aeskl_enc(const void *ctx, u8 *out, const u8 *in)
+{
+	if (!valid_keylocker())
+		return -ENODEV;
+
+	return __aeskl_enc(ctx, out, in);
+}
+
+static inline int aeskl_xts_encrypt(const union x86_aes_ctx *ctx, u8 *out, const u8 *in,
+				    unsigned int len, u8 *iv)
+{
+	if (!valid_keylocker())
+		return -ENODEV;
+
+	return __aeskl_xts_encrypt(&ctx->aeskl, out, in, len, iv);
+}
+
+static inline int aeskl_xts_decrypt(const union x86_aes_ctx *ctx, u8 *out, const u8 *in,
+				    unsigned int len, u8 *iv)
+{
+	if (!valid_keylocker())
+		return -ENODEV;
+
+	return __aeskl_xts_decrypt(&ctx->aeskl, out, in, len, iv);
+}
+
+static int xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+		      unsigned int keylen)
+{
+	return xts_setkey_common(tfm, key, keylen, aeskl_setkey);
+}
+
+static inline u32 xts_keylen(struct skcipher_request *req)
+{
+	struct aes_xts_ctx *ctx = aes_xts_ctx(crypto_skcipher_reqtfm(req));
+
+	return ctx->crypt_ctx.aeskl.key_length;
+}
+
+static int xts_encrypt(struct skcipher_request *req)
+{
+	u32 keylen = xts_keylen(req);
+
+	if (likely(keylen != AES_KEYSIZE_192))
+		return xts_crypt_common(req, aeskl_xts_encrypt, aeskl_enc);
+	else
+		return xts_crypt_common(req, aesni_xts_encrypt, aesni_enc);
+}
+
+static int xts_decrypt(struct skcipher_request *req)
+{
+	u32 keylen = xts_keylen(req);
+
+	if (likely(keylen != AES_KEYSIZE_192))
+		return xts_crypt_common(req, aeskl_xts_decrypt, aeskl_enc);
+	else
+		return xts_crypt_common(req, aesni_xts_decrypt, aesni_enc);
+}
+
+static struct skcipher_alg aeskl_skciphers[] = {
+	{
+		.base = {
+			.cra_name		= "__xts(aes)",
+			.cra_driver_name	= "__xts-aes-aeskl",
+			.cra_priority		= 200,
+			.cra_flags		= CRYPTO_ALG_INTERNAL,
+			.cra_blocksize		= AES_BLOCK_SIZE,
+			.cra_ctxsize		= XTS_AES_CTX_SIZE,
+			.cra_module		= THIS_MODULE,
+		},
+		.min_keysize	= 2 * AES_MIN_KEY_SIZE,
+		.max_keysize	= 2 * AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.walksize	= 2 * AES_BLOCK_SIZE,
+		.setkey		= xts_setkey,
+		.encrypt	= xts_encrypt,
+		.decrypt	= xts_decrypt,
+	}
+};
+
+static struct simd_skcipher_alg *aeskl_simd_skciphers[ARRAY_SIZE(aeskl_skciphers)];
+
+static int __init aeskl_init(void)
+{
+	u32 eax, ebx, ecx, edx;
+
+	if (!valid_keylocker())
+		return -ENODEV;
+
+	cpuid_count(KEYLOCKER_CPUID, 0, &eax, &ebx, &ecx, &edx);
+	if (!(ebx & KEYLOCKER_CPUID_EBX_WIDE))
+		return -ENODEV;
+
+	/*
+	 * AES-KL itself does not rely on AES-NI, but it does not support
+	 * 192-bit keys. To remain a compliant AES implementation, it falls
+	 * back to AES-NI for those keys.
+	 */
+	if (!boot_cpu_has(X86_FEATURE_AES))
+		return -ENODEV;
+
+	return simd_register_skciphers_compat(aeskl_skciphers, ARRAY_SIZE(aeskl_skciphers),
+					      aeskl_simd_skciphers);
+}
+
+static void __exit aeskl_exit(void)
+{
+	simd_unregister_skciphers(aeskl_skciphers, ARRAY_SIZE(aeskl_skciphers),
+				  aeskl_simd_skciphers);
+}
+
+late_initcall(aeskl_init);
+module_exit(aeskl_exit);
+
+MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, AES Key Locker implementation");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CRYPTO("aes");
diff --git a/arch/x86/crypto/aeskl-intel_glue.h b/arch/x86/crypto/aeskl-intel_glue.h
new file mode 100644
index 000000000000..57cfd6c55a4f
--- /dev/null
+++ b/arch/x86/crypto/aeskl-intel_glue.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _AESKL_INTEL_GLUE_H
+#define _AESKL_INTEL_GLUE_H
+
+#include
+#include
+
+#define AESKL_AAD_SIZE		16
+#define AESKL_TAG_SIZE		16
+#define AESKL_CIPHERTEXT_MAX	AES_KEYSIZE_256
+
+/* The Key Locker handle is an encoded form of the AES key. */
+struct aeskl_handle {
+	u8 additional_authdata[AESKL_AAD_SIZE];
+	u8 integrity_tag[AESKL_TAG_SIZE];
+	u8 cipher_text[AESKL_CIPHERTEXT_MAX];
+};
+
+/*
+ * Key Locker does not support a 192-bit key size, so the driver needs
+ * to be able to retrieve the key size in the first place. The offset
+ * of the 'key_length' field here should be compatible with struct
+ * crypto_aes_ctx.
+ */
+#define AESKL_CTX_RESERVED	(sizeof(struct crypto_aes_ctx) - sizeof(struct aeskl_handle) \
+				 - sizeof(u32))
+
+struct aeskl_ctx {
+	struct aeskl_handle handle;
+	u8 reserved[AESKL_CTX_RESERVED];
+	u32 key_length;
+};
+
+#endif /* _AESKL_INTEL_GLUE_H */
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 4ac7b9a28967..d9c4aa055383 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -37,6 +37,7 @@
 #include
 
 #include "aes-helper_glue.h"
+#include "aesni-intel_glue.h"
 
 #define RFC4106_HASH_SUBKEY_SIZE 16
 #define AES_BLOCK_MASK (~(AES_BLOCK_SIZE - 1))
@@ -72,9 +73,6 @@ struct gcm_context_data {
 	u8 hash_keys[GCM_BLOCK_LEN * 16];
 };
 
-asmlinkage void aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
-			      unsigned int key_len);
-asmlinkage void __aesni_enc(const void *ctx, u8 *out, const u8 *in);
 asmlinkage void __aesni_dec(const void *ctx, u8 *out, const u8 *in);
 asmlinkage void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out,
 			      const u8 *in, unsigned int len);
@@ -89,21 +87,9 @@ asmlinkage void aesni_cts_cbc_enc(struct crypto_aes_ctx *ctx, u8 *out,
 asmlinkage void aesni_cts_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out,
 				  const u8 *in, unsigned int len, u8 *iv);
 
-static inline int aesni_enc(const void *ctx, u8 *out, const u8 *in)
-{
-	__aesni_enc(ctx, out, in);
-	return 0;
-}
-
 #define AVX_GEN2_OPTSIZE 640
 #define AVX_GEN4_OPTSIZE 4096
 
-asmlinkage void __aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out,
-				    const u8 *in, unsigned int len, u8 *iv);
-
-asmlinkage void __aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out,
-				    const u8 *in, unsigned int len, u8 *iv);
-
 #ifdef CONFIG_X86_64
 
 asmlinkage void aesni_ctr_enc(struct crypto_aes_ctx *ctx, u8 *out,
@@ -271,20 +257,6 @@ static inline int aesni_xts_setkey(union x86_aes_ctx *ctx,
 	return aes_set_key_common(&ctx->aesni, in_key, key_len);
 }
 
-static inline int aesni_xts_encrypt(const union x86_aes_ctx *ctx, u8 *out, const u8 *in,
-				    unsigned int len, u8 *iv)
-{
-	__aesni_xts_encrypt(&ctx->aesni, out, in, len, iv);
-	return 0;
-}
-
-static inline int aesni_xts_decrypt(const union x86_aes_ctx *ctx, u8 *out, const u8 *in,
-				    unsigned int len, u8 *iv)
-{
-	__aesni_xts_decrypt(&ctx->aesni, out, in, len, iv);
-	return 0;
-}
-
 static int aesni_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
 				 unsigned int len)
 {
diff --git a/arch/x86/crypto/aesni-intel_glue.h b/arch/x86/crypto/aesni-intel_glue.h
new file mode 100644
index 000000000000..999f81f5bcde
--- /dev/null
+++ b/arch/x86/crypto/aesni-intel_glue.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * These are AES-NI functions that are used by the AES-KL code as a
+ * fallback when it is given a 192-bit key. Key Locker does not support
+ * 192-bit keys.
+ */
+
+#ifndef _AESNI_INTEL_GLUE_H
+#define _AESNI_INTEL_GLUE_H
+
+asmlinkage void aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
+			      unsigned int key_len);
+asmlinkage void __aesni_enc(const void *ctx, u8 *out, const u8 *in);
+asmlinkage void __aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out,
+				    const u8 *in, unsigned int len, u8 *iv);
+asmlinkage void __aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out,
+				    const u8 *in, unsigned int len, u8 *iv);
+
+static inline int aesni_enc(const void *ctx, u8 *out, const u8 *in)
+{
+	__aesni_enc(ctx, out, in);
+	return 0;
+}
+
+static inline int aesni_xts_encrypt(const union x86_aes_ctx *ctx, u8 *out, const u8 *in,
+				    unsigned int len, u8 *iv)
+{
+	__aesni_xts_encrypt(&ctx->aesni, out, in, len, iv);
+	return 0;
+}
+
+static inline int aesni_xts_decrypt(const union x86_aes_ctx *ctx, u8 *out, const u8 *in,
+				    unsigned int len, u8 *iv)
+{
+	__aesni_xts_decrypt(&ctx->aesni, out, in, len, iv);
+	return 0;
+}
+
+#endif /* _AESNI_INTEL_GLUE_H */
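For reference, the `_aeskl_gf128mul_x_ble()` macro in aeskl-intel_asm.S implements the standard XTS tweak update: multiply the 128-bit tweak by x in GF(2^128), treating the block as a little-endian integer and reducing by the polynomial x^128 + x^7 + x^2 + x + 1 (the 0x87 in GF128MUL_MASK). A portable C sketch of the same operation, written byte-wise rather than with the SIMD shuffle trick used in the assembly:

```c
#include <stdint.h>

/*
 * Multiply the XTS tweak by x in GF(2^128): shift the 128-bit
 * little-endian block left by one bit and, if bit 127 fell off,
 * reduce by XORing 0x87 into the low byte. This is the same value
 * the pshufd/paddq/psrad/pand/pxor sequence computes.
 */
static void gf128mul_x_ble(uint8_t b[16])
{
	int carry = b[15] >> 7;	/* bit 127 of the tweak */
	int i;

	for (i = 15; i > 0; i--)
		b[i] = (uint8_t)((b[i] << 1) | (b[i - 1] >> 7));
	b[0] = (uint8_t)(b[0] << 1);

	if (carry)
		b[0] ^= 0x87;	/* reduction polynomial */
}
```

The SIMD version in the patch avoids the byte loop by using `paddq` for the 64-bit shifts and the mask register to propagate the two carries, but both formulations produce the identical next tweak.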
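The `AESKL_CTX_RESERVED` padding in aeskl-intel_glue.h exists so that `key_length` lands at the same offset in `struct aeskl_ctx` as in `struct crypto_aes_ctx`; that shared offset is why the assembly can read the key length at the fixed displacement `480(HANDLEP)` for either context type. A userspace sketch checking the arithmetic (the struct bodies are mirrored here for illustration, with `AES_MAX_KEYLENGTH_U32` taken as 60 per the kernel's `<crypto/aes.h>`):

```c
#include <stdint.h>
#include <stddef.h>

/* Mirror of the kernel's struct crypto_aes_ctx (60 == AES_MAX_KEYLENGTH_U32). */
struct crypto_aes_ctx {
	uint32_t key_enc[60];	/* 240 bytes of encryption round keys */
	uint32_t key_dec[60];	/* 240 bytes of decryption round keys */
	uint32_t key_length;	/* at offset 480 */
};

/* Mirror of struct aeskl_handle: 16-byte AAD + 16-byte tag + up to
 * 32 bytes of key ciphertext = 64 bytes total. */
struct aeskl_handle {
	uint8_t additional_authdata[16];
	uint8_t integrity_tag[16];
	uint8_t cipher_text[32];
};

/* Same expression as AESKL_CTX_RESERVED in the patch. */
#define AESKL_CTX_RESERVED \
	(sizeof(struct crypto_aes_ctx) - sizeof(struct aeskl_handle) - sizeof(uint32_t))

struct aeskl_ctx {
	struct aeskl_handle handle;
	uint8_t reserved[AESKL_CTX_RESERVED];
	uint32_t key_length;	/* must also land at offset 480 */
};
```

With these sizes, both structs are 484 bytes and `key_length` sits at offset 480 in each, matching the `movl %edx, 480(HANDLEP)` store in `__aeskl_setkey`.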
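The `xts_encrypt()`/`xts_decrypt()` paths in aeskl-intel_glue.c reduce to a dispatch on the stored key length: Key Locker handles 128- and 256-bit keys, and a 192-bit key falls back to the AES-NI routines. A minimal standalone sketch of that decision (the enum and helper are hypothetical, for illustration only; the kernel code dispatches by passing different function pointers to `xts_crypt_common()`):

```c
/* Hypothetical backend tags, not part of the kernel patch. */
enum aes_backend {
	BACKEND_AESKL,	/* aeskl_xts_encrypt / aeskl_xts_decrypt */
	BACKEND_AESNI,	/* aesni_xts_encrypt / aesni_xts_decrypt */
};

/*
 * AES_KEYSIZE_192 is 24 bytes. Everything Key Locker supports
 * (16- and 32-byte keys) stays on the AES-KL path; only the
 * unsupported 192-bit size is routed to AES-NI.
 */
static enum aes_backend xts_pick_backend(unsigned int keylen)
{
	return keylen == 24 ? BACKEND_AESNI : BACKEND_AESKL;
}
```

Note that even on the AES-KL path, the tweak-encryption helper passed alongside is `aeskl_enc` for both directions, since XTS always encrypts the tweak regardless of whether data is being encrypted or decrypted.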