From patchwork Sat Nov 4 21:12:56 2023
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 13445634
From: Eric Biggers
To: linux-block@vger.kernel.org, linux-fscrypt@vger.kernel.org
Cc: kernel-team@android.com, Israel Rukshin, Gaurav Kashyap, Srinivas Kandagatla, Bjorn Andersson, Peter Griffin, Daniil Lunev
Subject: [RFC PATCH v8 1/4] blk-crypto: add basic hardware-wrapped key support
Date: Sat, 4 Nov 2023 14:12:56 -0700
Message-ID: <20231104211259.17448-2-ebiggers@kernel.org>
In-Reply-To: <20231104211259.17448-1-ebiggers@kernel.org>
References: <20231104211259.17448-1-ebiggers@kernel.org>

From: Eric Biggers

To prevent keys from being compromised if an attacker acquires read access to kernel memory, some inline encryption hardware can accept keys which are wrapped by a per-boot hardware-internal key. This avoids needing to keep the raw keys in kernel memory, without limiting the number of keys that can be used. Such hardware also supports deriving a "software secret" for cryptographic tasks that can't be handled by inline encryption; this is needed for fscrypt to work properly.

To support this hardware, allow struct blk_crypto_key to represent a hardware-wrapped key as an alternative to a standard key, and make drivers set flags in struct blk_crypto_profile to indicate which types of keys they support. Also add the ->derive_sw_secret() low-level operation, which drivers supporting wrapped keys must implement.

For more information, see the detailed documentation which this patch adds to Documentation/block/inline-encryption.rst.
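[Editor's note: as an illustration of the driver-facing API this patch introduces, here is a minimal sketch of how a driver might advertise hardware-wrapped key support and wire up the new ->derive_sw_secret() hook. It is not part of the patch: the foo_* names and the foo_hw_derive_sw_secret() firmware-call helper are hypothetical; only struct blk_crypto_profile, key_types_supported, and the blk_crypto_ll_ops members come from this series.]

/*
 * Hypothetical driver sketch (not from this patch).  It assumes a driver
 * structure "struct foo_host" that embeds its blk_crypto_profile, and a
 * firmware helper foo_hw_derive_sw_secret() that asks the secure hardware
 * to unwrap the ephemerally-wrapped key and derive the software secret.
 */
static int foo_derive_sw_secret(struct blk_crypto_profile *profile,
				const u8 *eph_key, size_t eph_key_size,
				u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
{
	struct foo_host *host = container_of(profile, struct foo_host,
					     crypto_profile);

	/* May return -EBADMSG if the wrapped key blob is invalid. */
	return foo_hw_derive_sw_secret(host, eph_key, eph_key_size,
				       sw_secret);
}

static void foo_init_crypto_profile(struct foo_host *host)
{
	struct blk_crypto_profile *profile = &host->crypto_profile;

	profile->ll_ops.keyslot_program = foo_keyslot_program;
	profile->ll_ops.keyslot_evict = foo_keyslot_evict;
	profile->ll_ops.derive_sw_secret = foo_derive_sw_secret;

	/* Declare which key types the hardware accepts. */
	profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_STANDARD |
				       BLK_CRYPTO_KEY_TYPE_HW_WRAPPED;
}

[Upper layers such as fscrypt would then obtain the software secret by calling blk_crypto_derive_sw_secret() on the block device, as described in the documentation added below.]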
Signed-off-by: Eric Biggers --- Documentation/block/inline-encryption.rst | 213 +++++++++++++++++++++- block/blk-crypto-fallback.c | 5 +- block/blk-crypto-internal.h | 1 + block/blk-crypto-profile.c | 46 +++++ block/blk-crypto.c | 51 +++++- drivers/md/dm-table.c | 1 + drivers/mmc/host/cqhci-crypto.c | 2 + drivers/ufs/core/ufshcd-crypto.c | 1 + fs/crypto/inline_crypt.c | 4 +- include/linux/blk-crypto-profile.h | 20 ++ include/linux/blk-crypto.h | 74 +++++++- 11 files changed, 394 insertions(+), 24 deletions(-) diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst index 90b733422ed4..07218455a2bc 100644 --- a/Documentation/block/inline-encryption.rst +++ b/Documentation/block/inline-encryption.rst @@ -70,24 +70,24 @@ Constraints and notes layers to also evict keys from any keyslots they are present in. - When possible, device-mapper devices must be able to pass through the inline encryption support of their underlying devices. However, it doesn't make sense for device-mapper devices to have keyslots themselves. Basic design ============ We introduce ``struct blk_crypto_key`` to represent an inline encryption key and -how it will be used. This includes the actual bytes of the key; the size of the -key; the algorithm and data unit size the key will be used with; and the number -of bytes needed to represent the maximum data unit number the key will be used -with. +how it will be used. This includes the type of the key (standard or +hardware-wrapped); the actual bytes of the key; the size of the key; the +algorithm and data unit size the key will be used with; and the number of bytes +needed to represent the maximum data unit number the key will be used with. We introduce ``struct bio_crypt_ctx`` to represent an encryption context. It contains a data unit number and a pointer to a blk_crypto_key. We add pointers to a bio_crypt_ctx to ``struct bio`` and ``struct request``; this allows users of the block layer (e.g. filesystems) to provide an encryption context when creating a bio and have it be passed down the stack for processing by the block layer and device drivers. Note that the encryption context doesn't explicitly say whether to encrypt or decrypt, as that is implicit from the direction of the bio; WRITE means encrypt, and READ means decrypt. @@ -294,10 +294,215 @@ if the fallback is used, the device will receive the integrity info of the ciphertext, not that of the plaintext). Because there isn't any real hardware yet, it seems prudent to assume that hardware implementations might not implement both features together correctly, and disallow the combination for now. Whenever a device supports integrity, the kernel will pretend that the device does not support hardware inline encryption (by setting the blk_crypto_profile in the request_queue of the device to NULL). When the crypto API fallback is enabled, this means that all bios with and encryption context will use the fallback, and IO will complete as usual. When the fallback is disabled, a bio with an encryption context will be failed. + +.. _hardware_wrapped_keys: + +Hardware-wrapped keys +===================== + +Motivation and threat model +--------------------------- + +Linux storage encryption (dm-crypt, fscrypt, eCryptfs, etc.) traditionally +relies on the raw encryption key(s) being present in kernel memory so that the +encryption can be performed. 
This traditionally isn't seen as a problem because +the key(s) won't be present during an offline attack, which is the main type of +attack that storage encryption is intended to protect from. + +However, there is an increasing desire to also protect users' data from other +types of attacks (to the extent possible), including: + +- Cold boot attacks, where an attacker with physical access to a system suddenly + powers it off, then immediately dumps the system memory to extract recently + in-use encryption keys, then uses these keys to decrypt user data on-disk. + +- Online attacks where the attacker is able to read kernel memory without fully + compromising the system, followed by an offline attack where any extracted + keys can be used to decrypt user data on-disk. An example of such an online + attack would be if the attacker is able to run some code on the system that + exploits a Meltdown-like vulnerability but is unable to escalate privileges. + +- Online attacks where the attacker fully compromises the system, but their data + exfiltration is significantly time-limited and/or bandwidth-limited, so in + order to completely exfiltrate the data they need to extract the encryption + keys to use in a later offline attack. + +Hardware-wrapped keys are a feature of inline encryption hardware that is +designed to protect users' data from the above attacks (to the extent possible), +without introducing limitations such as a maximum number of keys. + +Note that it is impossible to **fully** protect users' data from these attacks. +Even in the attacks where the attacker "just" gets read access to kernel memory, +they can still extract any user data that is present in memory, including +plaintext pagecache pages of encrypted files. The focus here is just on +protecting the encryption keys, as those instantly give access to **all** user +data in any following offline attack, rather than just some of it (where which +data is included in that "some" might not be controlled by the attacker). + +Solution overview +----------------- + +Inline encryption hardware typically has "keyslots" into which software can +program keys for the hardware to use; the contents of keyslots typically can't +be read back by software. As such, the above security goals could be achieved +if the kernel simply erased its copy of the key(s) after programming them into +keyslot(s) and thereafter only referred to them via keyslot number. + +However, that naive approach runs into the problem that it limits the number of +unlocked keys to the number of keyslots, which typically is a small number. In +cases where there is only one encryption key system-wide (e.g., a full-disk +encryption key), that can be tolerable. However, in general there can be many +logged-in users with many different keys, and/or many running applications with +application-specific encrypted storage areas. This is especially true if +file-based encryption (e.g. fscrypt) is being used. + +Thus, it is important for the kernel to still have a way to "remind" the +hardware about a key, without actually having the raw key itself. This would +ensure that the number of hardware keyslots only limits the number of active I/O +requests, not other things such as the number of logged-in users, the number of +running apps, or the number of encrypted storage areas that apps can create. + +Somewhat less importantly, it is also desirable that the raw keys are never +visible to software at all, even while being initially unlocked. 
This would +ensure that a read-only compromise of system memory will never allow a key to be +extracted to be used off-system, even if it occurs when a key is being unlocked. + +To solve all these problems, some vendors of inline encryption hardware have +made their hardware support *hardware-wrapped keys*. Hardware-wrapped keys +are encrypted keys that can only be unwrapped (decrypted) and used by hardware +-- either by the inline encryption hardware itself, or by a dedicated hardware +block that can directly provision keys to the inline encryption hardware. + +(We refer to them as "hardware-wrapped keys" rather than simply "wrapped keys" +to add some clarity in cases where there could be other types of wrapped keys, +such as in file-based encryption. Key wrapping is a commonly used technique.) + +The key which wraps (encrypts) hardware-wrapped keys is a hardware-internal key +that is never exposed to software; it is either a persistent key (a "long-term +wrapping key") or a per-boot key (an "ephemeral wrapping key"). The long-term +wrapped form of the key is what is initially unlocked, but it is erased from +memory as soon as it is converted into an ephemerally-wrapped key. In-use +hardware-wrapped keys are always ephemerally-wrapped, not long-term wrapped. + +As inline encryption hardware can only be used to encrypt/decrypt data on-disk, +the hardware also includes a level of indirection; it doesn't use the unwrapped +key directly for inline encryption, but rather derives both an inline encryption +key and a "software secret" from it. Software can use the "software secret" for +tasks that can't use the inline encryption hardware, such as filenames +encryption. The software secret is not protected from memory compromise. + +Key hierarchy +------------- + +Here is the key hierarchy for a hardware-wrapped key:: + + Hardware-wrapped key + | + | + + | + ----------------------------- + | | + Inline encryption key Software secret + +The components are: + +- *Hardware-wrapped key*: a key for the hardware's KDF (Key Derivation + Function), in ephemerally-wrapped form. The key wrapping algorithm is a + hardware implementation detail that doesn't impact kernel operation, but a + strong authenticated encryption algorithm such as AES-256-GCM is recommended. + +- *Hardware KDF*: a KDF (Key Derivation Function) which the hardware uses to + derive subkeys after unwrapping the wrapped key. The hardware's choice of KDF + doesn't impact kernel operation, but it does need to be known for testing + purposes, and it's also assumed to have at least a 256-bit security strength. + All known hardware uses the SP800-108 KDF in Counter Mode with AES-256-CMAC, + with a particular choice of labels and contexts; new hardware should use this + already-vetted KDF. + +- *Inline encryption key*: a derived key which the hardware directly provisions + to a keyslot of the inline encryption hardware, without exposing it to + software. In all known hardware, this will always be an AES-256-XTS key. + However, in principle other encryption algorithms could be supported too. + Hardware must derive distinct subkeys for each supported encryption algorithm. + +- *Software secret*: a derived key which the hardware returns to software so + that software can use it for cryptographic tasks that can't use inline + encryption. This value is cryptographically isolated from the inline + encryption key, i.e. knowing one doesn't reveal the other. (The KDF ensures + this.) 
Currently, the software secret is always 32 bytes and thus is suitable + for cryptographic applications that require up to a 256-bit security strength. + Some use cases (e.g. full-disk encryption) won't require the software secret. + +Example: in the case of fscrypt, the fscrypt master key (the key that protects a +particular set of encrypted directories) is made hardware-wrapped. The inline +encryption key is used as the file contents encryption key, while the software +secret (rather than the master key directly) is used to key fscrypt's KDF +(HKDF-SHA512) to derive other subkeys such as filenames encryption keys. + +Note that currently this design assumes a single inline encryption key per +hardware-wrapped key, without any further key derivation. Thus, in the case of +fscrypt, currently hardware-wrapped keys are only compatible with the "inline +encryption optimized" settings, which use one file contents encryption key per +encryption policy rather than one per file. This design could be extended to +make the hardware derive per-file keys using per-file nonces passed down the +storage stack, and in fact some hardware already supports this; future work is +planned to remove this limitation by adding the corresponding kernel support. + +Kernel support +-------------- + +The inline encryption support of the kernel's block layer ("blk-crypto") has +been extended to support hardware-wrapped keys as an alternative to standard +keys, when hardware support is available. This works in the following way: + +- A ``key_types_supported`` field is added to the crypto capabilities in + ``struct blk_crypto_profile``. This allows device drivers to declare that + they support standard keys, hardware-wrapped keys, or both. + +- ``struct blk_crypto_key`` can now contain a hardware-wrapped key as an + alternative to a standard key; a ``key_type`` field is added to + ``struct blk_crypto_config`` to distinguish between the different key types. + This allows users of blk-crypto to en/decrypt data using a hardware-wrapped + key in a way very similar to using a standard key. + +- A new method ``blk_crypto_ll_ops::derive_sw_secret`` is added. Device drivers + that support hardware-wrapped keys must implement this method. Users of + blk-crypto can call ``blk_crypto_derive_sw_secret()`` to access this method. + +- The programming and eviction of hardware-wrapped keys happens via + ``blk_crypto_ll_ops::keyslot_program`` and + ``blk_crypto_ll_ops::keyslot_evict``, just like it does for standard keys. If + a driver supports hardware-wrapped keys, then it must handle hardware-wrapped + keys being passed to these methods. + +blk-crypto-fallback doesn't support hardware-wrapped keys. Therefore, +hardware-wrapped keys can only be used with actual inline encryption hardware. + +Testability +----------- + +Both the hardware KDF and the inline encryption itself are well-defined +algorithms that don't depend on any secrets other than the unwrapped key. +Therefore, if the unwrapped key is known to software, these algorithms can be +reproduced in software in order to verify the ciphertext that is written to disk +by the inline encryption hardware. + +However, the unwrapped key will only be known to software for testing if the +"import" functionality is used. Proper testing is not possible in the +"generate" case where the hardware generates the key itself. 
The correct +operation of the "generate" mode thus relies on the security and correctness of +the hardware RNG and its use to generate the key, as well as the testing of the +"import" mode as that should cover all parts other than the key generation. + +For an example of a test that verifies the ciphertext written to disk in the +"import" mode, see the fscrypt hardware-wrapped key tests in xfstests, or +`Android's vts_kernel_encryption_test +`_. diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c index e6468eab2681..6bce8b5d138c 100644 --- a/block/blk-crypto-fallback.c +++ b/block/blk-crypto-fallback.c @@ -80,21 +80,21 @@ static struct blk_crypto_fallback_keyslot { static struct blk_crypto_profile *blk_crypto_fallback_profile; static struct workqueue_struct *blk_crypto_wq; static mempool_t *blk_crypto_bounce_page_pool; static struct bio_set crypto_bio_split; /* * This is the key we set when evicting a keyslot. This *should* be the all 0's * key, but AES-XTS rejects that key, so we use some random bytes instead. */ -static u8 blank_key[BLK_CRYPTO_MAX_KEY_SIZE]; +static u8 blank_key[BLK_CRYPTO_MAX_STANDARD_KEY_SIZE]; static void blk_crypto_fallback_evict_keyslot(unsigned int slot) { struct blk_crypto_fallback_keyslot *slotp = &blk_crypto_keyslots[slot]; enum blk_crypto_mode_num crypto_mode = slotp->crypto_mode; int err; WARN_ON(slotp->crypto_mode == BLK_ENCRYPTION_MODE_INVALID); /* Clear the key in the skcipher */ @@ -531,21 +531,21 @@ int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key) static bool blk_crypto_fallback_inited; static int blk_crypto_fallback_init(void) { int i; int err; if (blk_crypto_fallback_inited) return 0; - get_random_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE); + get_random_bytes(blank_key, sizeof(blank_key)); err = bioset_init(&crypto_bio_split, 64, 0, 0); if (err) goto out; /* Dynamic allocation is needed because of lockdep_register_key(). */ blk_crypto_fallback_profile = kzalloc(sizeof(*blk_crypto_fallback_profile), GFP_KERNEL); if (!blk_crypto_fallback_profile) { err = -ENOMEM; @@ -553,20 +553,21 @@ static int blk_crypto_fallback_init(void) } err = blk_crypto_profile_init(blk_crypto_fallback_profile, blk_crypto_num_keyslots); if (err) goto fail_free_profile; err = -ENOMEM; blk_crypto_fallback_profile->ll_ops = blk_crypto_fallback_ll_ops; blk_crypto_fallback_profile->max_dun_bytes_supported = BLK_CRYPTO_MAX_IV_SIZE; + blk_crypto_fallback_profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_STANDARD; /* All blk-crypto modes have a crypto API fallback. 
*/ for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) blk_crypto_fallback_profile->modes_supported[i] = 0xFFFFFFFF; blk_crypto_fallback_profile->modes_supported[BLK_ENCRYPTION_MODE_INVALID] = 0; blk_crypto_wq = alloc_workqueue("blk_crypto_wq", WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM, num_online_cpus()); if (!blk_crypto_wq) diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h index 93a141979694..1893df9a8f06 100644 --- a/block/blk-crypto-internal.h +++ b/block/blk-crypto-internal.h @@ -7,20 +7,21 @@ #define __LINUX_BLK_CRYPTO_INTERNAL_H #include #include /* Represents a crypto mode supported by blk-crypto */ struct blk_crypto_mode { const char *name; /* name of this mode, shown in sysfs */ const char *cipher_str; /* crypto API name (for fallback case) */ unsigned int keysize; /* key size in bytes */ + unsigned int security_strength; /* security strength in bytes */ unsigned int ivsize; /* iv size in bytes */ }; extern const struct blk_crypto_mode blk_crypto_modes[]; #ifdef CONFIG_BLK_INLINE_ENCRYPTION int blk_crypto_sysfs_register(struct gendisk *disk); void blk_crypto_sysfs_unregister(struct gendisk *disk); diff --git a/block/blk-crypto-profile.c b/block/blk-crypto-profile.c index 7fabc883e39f..1b92276ed2fc 100644 --- a/block/blk-crypto-profile.c +++ b/block/blk-crypto-profile.c @@ -345,20 +345,22 @@ void blk_crypto_put_keyslot(struct blk_crypto_keyslot *slot) */ bool __blk_crypto_cfg_supported(struct blk_crypto_profile *profile, const struct blk_crypto_config *cfg) { if (!profile) return false; if (!(profile->modes_supported[cfg->crypto_mode] & cfg->data_unit_size)) return false; if (profile->max_dun_bytes_supported < cfg->dun_bytes) return false; + if (!(profile->key_types_supported & cfg->key_type)) + return false; return true; } /* * This is an internal function that evicts a key from an inline encryption * device that can be either a real device or the blk-crypto-fallback "device". * It is used only by blk_crypto_evict_key(); see that function for details. */ int __blk_crypto_evict_key(struct blk_crypto_profile *profile, const struct blk_crypto_key *key) @@ -455,20 +457,58 @@ bool blk_crypto_register(struct blk_crypto_profile *profile, { if (blk_integrity_queue_supports_integrity(q)) { pr_warn("Integrity and hardware inline encryption are not supported together. Disabling hardware inline encryption.\n"); return false; } q->crypto_profile = profile; return true; } EXPORT_SYMBOL_GPL(blk_crypto_register); +/** + * blk_crypto_derive_sw_secret() - Derive software secret from wrapped key + * @bdev: a block device that supports hardware-wrapped keys + * @eph_key: the hardware-wrapped key in ephemerally-wrapped form + * @eph_key_size: size of @eph_key in bytes + * @sw_secret: (output) the software secret + * + * Given a hardware-wrapped key in ephemerally-wrapped form (the same form that + * it is used for I/O), ask the hardware to derive the secret which software can + * use for cryptographic tasks other than inline encryption. This secret is + * guaranteed to be cryptographically isolated from the inline encryption key, + * i.e. derived with a different KDF context. + * + * Return: 0 on success, -EOPNOTSUPP if the block device doesn't support + * hardware-wrapped keys, -EBADMSG if the key isn't a valid + * hardware-wrapped key, or another -errno code. 
+ */ +int blk_crypto_derive_sw_secret(struct block_device *bdev, + const u8 *eph_key, size_t eph_key_size, + u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]) +{ + struct blk_crypto_profile *profile = + bdev_get_queue(bdev)->crypto_profile; + int err; + + if (!profile) + return -EOPNOTSUPP; + if (!(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_HW_WRAPPED)) + return -EOPNOTSUPP; + if (!profile->ll_ops.derive_sw_secret) + return -EOPNOTSUPP; + blk_crypto_hw_enter(profile); + err = profile->ll_ops.derive_sw_secret(profile, eph_key, eph_key_size, + sw_secret); + blk_crypto_hw_exit(profile); + return err; +} + /** * blk_crypto_intersect_capabilities() - restrict supported crypto capabilities * by child device * @parent: the crypto profile for the parent device * @child: the crypto profile for the child device, or NULL * * This clears all crypto capabilities in @parent that aren't set in @child. If * @child is NULL, then this clears all parent capabilities. * * Only use this when setting up the crypto profile for a layered device, before @@ -478,24 +518,26 @@ void blk_crypto_intersect_capabilities(struct blk_crypto_profile *parent, const struct blk_crypto_profile *child) { if (child) { unsigned int i; parent->max_dun_bytes_supported = min(parent->max_dun_bytes_supported, child->max_dun_bytes_supported); for (i = 0; i < ARRAY_SIZE(child->modes_supported); i++) parent->modes_supported[i] &= child->modes_supported[i]; + parent->key_types_supported &= child->key_types_supported; } else { parent->max_dun_bytes_supported = 0; memset(parent->modes_supported, 0, sizeof(parent->modes_supported)); + parent->key_types_supported = 0; } } EXPORT_SYMBOL_GPL(blk_crypto_intersect_capabilities); /** * blk_crypto_has_capabilities() - Check whether @target supports at least all * the crypto capabilities that @reference does. * @target: the target profile * @reference: the reference profile * @@ -514,20 +556,23 @@ bool blk_crypto_has_capabilities(const struct blk_crypto_profile *target, for (i = 0; i < ARRAY_SIZE(target->modes_supported); i++) { if (reference->modes_supported[i] & ~target->modes_supported[i]) return false; } if (reference->max_dun_bytes_supported > target->max_dun_bytes_supported) return false; + if (reference->key_types_supported & ~target->key_types_supported) + return false; + return true; } EXPORT_SYMBOL_GPL(blk_crypto_has_capabilities); /** * blk_crypto_update_capabilities() - Update the capabilities of a crypto * profile to match those of another crypto * profile. * @dst: The crypto profile whose capabilities to update. * @src: The crypto profile whose capabilities this function will update @dst's @@ -548,12 +593,13 @@ EXPORT_SYMBOL_GPL(blk_crypto_has_capabilities); * might result in blk-crypto-fallback being used if available, or the bio being * failed). 
*/ void blk_crypto_update_capabilities(struct blk_crypto_profile *dst, const struct blk_crypto_profile *src) { memcpy(dst->modes_supported, src->modes_supported, sizeof(dst->modes_supported)); dst->max_dun_bytes_supported = src->max_dun_bytes_supported; + dst->key_types_supported = src->key_types_supported; } EXPORT_SYMBOL_GPL(blk_crypto_update_capabilities); diff --git a/block/blk-crypto.c b/block/blk-crypto.c index 4d760b092deb..5a09d0ef1a01 100644 --- a/block/blk-crypto.c +++ b/block/blk-crypto.c @@ -16,38 +16,42 @@ #include #include #include "blk-crypto-internal.h" const struct blk_crypto_mode blk_crypto_modes[] = { [BLK_ENCRYPTION_MODE_AES_256_XTS] = { .name = "AES-256-XTS", .cipher_str = "xts(aes)", .keysize = 64, + .security_strength = 32, .ivsize = 16, }, [BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV] = { .name = "AES-128-CBC-ESSIV", .cipher_str = "essiv(cbc(aes),sha256)", .keysize = 16, + .security_strength = 16, .ivsize = 16, }, [BLK_ENCRYPTION_MODE_ADIANTUM] = { .name = "Adiantum", .cipher_str = "adiantum(xchacha12,aes)", .keysize = 32, + .security_strength = 32, .ivsize = 32, }, [BLK_ENCRYPTION_MODE_SM4_XTS] = { .name = "SM4-XTS", .cipher_str = "xts(sm4)", .keysize = 32, + .security_strength = 16, .ivsize = 16, }, }; /* * This number needs to be at least (the number of threads doing IO * concurrently) * (maximum recursive depth of a bio), so that we don't * deadlock on crypt_ctx allocations. The default is chosen to be the same * as the default number of post read contexts in both EXT4 and F2FS. */ @@ -69,23 +73,29 @@ static int __init bio_crypt_ctx_init(void) goto out_no_mem; bio_crypt_ctx_pool = mempool_create_slab_pool(num_prealloc_crypt_ctxs, bio_crypt_ctx_cache); if (!bio_crypt_ctx_pool) goto out_no_mem; /* This is assumed in various places. */ BUILD_BUG_ON(BLK_ENCRYPTION_MODE_INVALID != 0); - /* Sanity check that no algorithm exceeds the defined limits. */ + /* + * Validate the crypto mode properties. This ideally would be done with + * static assertions, but boot-time checks are the next best thing. + */ for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) { - BUG_ON(blk_crypto_modes[i].keysize > BLK_CRYPTO_MAX_KEY_SIZE); + BUG_ON(blk_crypto_modes[i].keysize > + BLK_CRYPTO_MAX_STANDARD_KEY_SIZE); + BUG_ON(blk_crypto_modes[i].security_strength > + blk_crypto_modes[i].keysize); BUG_ON(blk_crypto_modes[i].ivsize > BLK_CRYPTO_MAX_IV_SIZE); } return 0; out_no_mem: panic("Failed to allocate mem for bio crypt ctxs\n"); } subsys_initcall(bio_crypt_ctx_init); void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key, @@ -308,79 +318,96 @@ int __blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio, if (!rq->crypt_ctx) return -ENOMEM; } *rq->crypt_ctx = *bio->bi_crypt_context; return 0; } /** * blk_crypto_init_key() - Prepare a key for use with blk-crypto * @blk_key: Pointer to the blk_crypto_key to initialize. - * @raw_key: Pointer to the raw key. Must be the correct length for the chosen - * @crypto_mode; see blk_crypto_modes[]. + * @raw_key: the raw bytes of the key + * @raw_key_size: size of the raw key in bytes + * @key_type: type of the key -- either standard or hardware-wrapped * @crypto_mode: identifier for the encryption algorithm to use * @dun_bytes: number of bytes that will be used to specify the DUN when this * key is used * @data_unit_size: the data unit size to use for en/decryption * * Return: 0 on success, -errno on failure. The caller is responsible for * zeroizing both blk_key and raw_key when done with them. 
*/ -int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key, +int blk_crypto_init_key(struct blk_crypto_key *blk_key, + const u8 *raw_key, size_t raw_key_size, + enum blk_crypto_key_type key_type, enum blk_crypto_mode_num crypto_mode, unsigned int dun_bytes, unsigned int data_unit_size) { const struct blk_crypto_mode *mode; memset(blk_key, 0, sizeof(*blk_key)); if (crypto_mode >= ARRAY_SIZE(blk_crypto_modes)) return -EINVAL; mode = &blk_crypto_modes[crypto_mode]; - if (mode->keysize == 0) + switch (key_type) { + case BLK_CRYPTO_KEY_TYPE_STANDARD: + if (raw_key_size != mode->keysize) + return -EINVAL; + break; + case BLK_CRYPTO_KEY_TYPE_HW_WRAPPED: + if (raw_key_size < mode->security_strength || + raw_key_size > BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE) + return -EINVAL; + break; + default: return -EINVAL; + } if (dun_bytes == 0 || dun_bytes > mode->ivsize) return -EINVAL; if (!is_power_of_2(data_unit_size)) return -EINVAL; blk_key->crypto_cfg.crypto_mode = crypto_mode; blk_key->crypto_cfg.dun_bytes = dun_bytes; blk_key->crypto_cfg.data_unit_size = data_unit_size; + blk_key->crypto_cfg.key_type = key_type; blk_key->data_unit_size_bits = ilog2(data_unit_size); - blk_key->size = mode->keysize; - memcpy(blk_key->raw, raw_key, mode->keysize); + blk_key->size = raw_key_size; + memcpy(blk_key->raw, raw_key, raw_key_size); return 0; } bool blk_crypto_config_supported_natively(struct block_device *bdev, const struct blk_crypto_config *cfg) { return __blk_crypto_cfg_supported(bdev_get_queue(bdev)->crypto_profile, cfg); } /* * Check if bios with @cfg can be en/decrypted by blk-crypto (i.e. either the * block_device it's submitted to supports inline crypto, or the * blk-crypto-fallback is enabled and supports the cfg). */ bool blk_crypto_config_supported(struct block_device *bdev, const struct blk_crypto_config *cfg) { - return IS_ENABLED(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) || - blk_crypto_config_supported_natively(bdev, cfg); + if (IS_ENABLED(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) && + cfg->key_type == BLK_CRYPTO_KEY_TYPE_STANDARD) + return true; + return blk_crypto_config_supported_natively(bdev, cfg); } /** * blk_crypto_start_using_key() - Start using a blk_crypto_key on a device * @bdev: block device to operate on * @key: A key to use on the device * * Upper layers must call this function to ensure that either the hardware * supports the key's crypto settings, or the crypto API fallback has transforms * for the needed mode allocated and ready to go. This function may allocate @@ -389,20 +416,24 @@ bool blk_crypto_config_supported(struct block_device *bdev, * * Return: 0 on success; -ENOPKG if the hardware doesn't support the key and * blk-crypto-fallback is either disabled or the needed algorithm * is disabled in the crypto API; or another -errno code. 
*/ int blk_crypto_start_using_key(struct block_device *bdev, const struct blk_crypto_key *key) { if (blk_crypto_config_supported_natively(bdev, &key->crypto_cfg)) return 0; + if (key->crypto_cfg.key_type != BLK_CRYPTO_KEY_TYPE_STANDARD) { + pr_warn_once("tried to use wrapped key, but hardware doesn't support it\n"); + return -EOPNOTSUPP; + } return blk_crypto_fallback_start_using_mode(key->crypto_cfg.crypto_mode); } /** * blk_crypto_evict_key() - Evict a blk_crypto_key from a block_device * @bdev: a block_device on which I/O using the key may have been done * @key: the key to evict * * For a given block_device, this function removes the given blk_crypto_key from * the keyslot management structures and evicts it from any underlying hardware diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index 198d38b53322..2137e4e284f3 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -1303,20 +1303,21 @@ static int dm_table_construct_crypto_profile(struct dm_table *t) if (!dmcp) return -ENOMEM; dmcp->md = t->md; profile = &dmcp->profile; blk_crypto_profile_init(profile, 0); profile->ll_ops.keyslot_evict = dm_keyslot_evict; profile->max_dun_bytes_supported = UINT_MAX; memset(profile->modes_supported, 0xFF, sizeof(profile->modes_supported)); + profile->key_types_supported = ~0; for (i = 0; i < t->num_targets; i++) { struct dm_target *ti = dm_table_get_target(t, i); if (!dm_target_passes_crypto(ti->type)) { blk_crypto_intersect_capabilities(profile, NULL); break; } if (!ti->type->iterate_devices) continue; diff --git a/drivers/mmc/host/cqhci-crypto.c b/drivers/mmc/host/cqhci-crypto.c index d5f4b6972f63..6652982410ec 100644 --- a/drivers/mmc/host/cqhci-crypto.c +++ b/drivers/mmc/host/cqhci-crypto.c @@ -203,20 +203,22 @@ int cqhci_crypto_init(struct cqhci_host *cq_host) err = devm_blk_crypto_profile_init(dev, profile, num_keyslots); if (err) goto out; profile->ll_ops = cqhci_crypto_ops; profile->dev = dev; /* Unfortunately, CQHCI crypto only supports 32 DUN bits. */ profile->max_dun_bytes_supported = 4; + profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_STANDARD; + /* * Cache all the crypto capabilities and advertise the supported crypto * modes and data unit sizes to the block layer. */ for (cap_idx = 0; cap_idx < cq_host->crypto_capabilities.num_crypto_cap; cap_idx++) { cq_host->crypto_cap_array[cap_idx].reg_val = cpu_to_le32(cqhci_readl(cq_host, CQHCI_CRYPTOCAP + cap_idx * sizeof(__le32))); diff --git a/drivers/ufs/core/ufshcd-crypto.c b/drivers/ufs/core/ufshcd-crypto.c index f2c4422cab86..f4cc54d82281 100644 --- a/drivers/ufs/core/ufshcd-crypto.c +++ b/drivers/ufs/core/ufshcd-crypto.c @@ -183,20 +183,21 @@ int ufshcd_hba_init_crypto_capabilities(struct ufs_hba *hba) /* The actual number of configurations supported is (CFGC+1) */ err = devm_blk_crypto_profile_init( hba->dev, &hba->crypto_profile, hba->crypto_capabilities.config_count + 1); if (err) goto out; hba->crypto_profile.ll_ops = ufshcd_crypto_ops; /* UFS only supports 8 bytes for any DUN */ hba->crypto_profile.max_dun_bytes_supported = 8; + hba->crypto_profile.key_types_supported = BLK_CRYPTO_KEY_TYPE_STANDARD; hba->crypto_profile.dev = hba->dev; /* * Cache all the UFS crypto capabilities and advertise the supported * crypto modes and data unit sizes to the block layer. 
*/ for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap; cap_idx++) { hba->crypto_cap_array[cap_idx].reg_val = cpu_to_le32(ufshcd_readl(hba, diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c index b4002aea7cdb..821a800c9a89 100644 --- a/fs/crypto/inline_crypt.c +++ b/fs/crypto/inline_crypt.c @@ -123,20 +123,21 @@ int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci) sb->s_blocksize != PAGE_SIZE) return 0; /* * On all the filesystem's block devices, blk-crypto must support the * crypto configuration that the file would use. */ crypto_cfg.crypto_mode = ci->ci_mode->blk_crypto_mode; crypto_cfg.data_unit_size = 1U << ci->ci_data_unit_bits; crypto_cfg.dun_bytes = fscrypt_get_dun_bytes(ci); + crypto_cfg.key_type = BLK_CRYPTO_KEY_TYPE_STANDARD; devs = fscrypt_get_devices(sb, &num_devs); if (IS_ERR(devs)) return PTR_ERR(devs); for (i = 0; i < num_devs; i++) { if (!blk_crypto_config_supported(devs[i], &crypto_cfg)) goto out_free_devs; } @@ -159,21 +160,22 @@ int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key, struct blk_crypto_key *blk_key; struct block_device **devs; unsigned int num_devs; unsigned int i; int err; blk_key = kmalloc(sizeof(*blk_key), GFP_KERNEL); if (!blk_key) return -ENOMEM; - err = blk_crypto_init_key(blk_key, raw_key, crypto_mode, + err = blk_crypto_init_key(blk_key, raw_key, ci->ci_mode->keysize, + BLK_CRYPTO_KEY_TYPE_STANDARD, crypto_mode, fscrypt_get_dun_bytes(ci), 1U << ci->ci_data_unit_bits); if (err) { fscrypt_err(inode, "error %d initializing blk-crypto key", err); goto fail; } /* Start using blk-crypto on all the filesystem's block devices. */ devs = fscrypt_get_devices(sb, &num_devs); if (IS_ERR(devs)) { diff --git a/include/linux/blk-crypto-profile.h b/include/linux/blk-crypto-profile.h index 90ab33cb5d0e..229287a7f451 100644 --- a/include/linux/blk-crypto-profile.h +++ b/include/linux/blk-crypto-profile.h @@ -50,20 +50,34 @@ struct blk_crypto_ll_ops { * @key from any underlying devices. @slot won't be valid in this case. * * If there are no keyslots and no underlying devices, this function * isn't required. * * Must return 0 on success, or -errno on failure. */ int (*keyslot_evict)(struct blk_crypto_profile *profile, const struct blk_crypto_key *key, unsigned int slot); + + /** + * @derive_sw_secret: Derive the software secret from a hardware-wrapped + * key in ephemerally-wrapped form. + * + * This only needs to be implemented if BLK_CRYPTO_KEY_TYPE_HW_WRAPPED + * is supported. + * + * Must return 0 on success, -EBADMSG if the key is invalid, or another + * -errno code on other errors. + */ + int (*derive_sw_secret)(struct blk_crypto_profile *profile, + const u8 *eph_key, size_t eph_key_size, + u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]); }; /** * struct blk_crypto_profile - inline encryption profile for a device * * This struct contains a storage device's inline encryption capabilities (e.g. * the supported crypto algorithms), driver-provided functions to control the * inline encryption hardware (e.g. programming and evicting keys), and optional * device-independent keyslot management data. */ @@ -77,20 +91,26 @@ struct blk_crypto_profile { */ struct blk_crypto_ll_ops ll_ops; /** * @max_dun_bytes_supported: The maximum number of bytes supported for * specifying the data unit number (DUN). Specifically, the range of * supported DUNs is 0 through (1 << (8 * max_dun_bytes_supported)) - 1. 
*/ unsigned int max_dun_bytes_supported; + /** + * @key_types_supported: A bitmask of the supported key types: + * BLK_CRYPTO_KEY_TYPE_STANDARD and/or BLK_CRYPTO_KEY_TYPE_HW_WRAPPED. + */ + unsigned int key_types_supported; + /** * @modes_supported: Array of bitmasks that specifies whether each * combination of crypto mode and data unit size is supported. * Specifically, the i'th bit of modes_supported[crypto_mode] is set if * crypto_mode can be used with a data unit size of (1 << i). Note that * only data unit sizes that are powers of 2 can be supported. */ unsigned int modes_supported[BLK_ENCRYPTION_MODE_MAX]; /** diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h index 5e5822c18ee4..19066d86ecbf 100644 --- a/include/linux/blk-crypto.h +++ b/include/linux/blk-crypto.h @@ -10,53 +10,107 @@ enum blk_crypto_mode_num { BLK_ENCRYPTION_MODE_INVALID, BLK_ENCRYPTION_MODE_AES_256_XTS, BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV, BLK_ENCRYPTION_MODE_ADIANTUM, BLK_ENCRYPTION_MODE_SM4_XTS, BLK_ENCRYPTION_MODE_MAX, }; -#define BLK_CRYPTO_MAX_KEY_SIZE 64 +/* + * Supported types of keys. Must be bitflags due to their use in + * blk_crypto_profile::key_types_supported. + */ +enum blk_crypto_key_type { + /* + * Standard keys (i.e. "software keys"). These keys are simply kept in + * raw, plaintext form in kernel memory. + */ + BLK_CRYPTO_KEY_TYPE_STANDARD = 1 << 0, + + /* + * Hardware-wrapped keys. These keys are only present in kernel memory + * in ephemerally-wrapped form, and they can only be unwrapped by + * dedicated hardware. For details, see the "Hardware-wrapped keys" + * section of Documentation/block/inline-encryption.rst. + */ + BLK_CRYPTO_KEY_TYPE_HW_WRAPPED = 1 << 1, +}; + +/* + * Currently the maximum standard key size is 64 bytes, as that is the key size + * of BLK_ENCRYPTION_MODE_AES_256_XTS which takes the longest key. + * + * The maximum hardware-wrapped key size depends on the hardware's key wrapping + * algorithm, which is a hardware implementation detail, so it isn't precisely + * specified. But currently 128 bytes is plenty in practice. Implementations + * are recommended to wrap a 32-byte key for the hardware KDF with AES-256-GCM, + * which should result in a size closer to 64 bytes than 128. + * + * Both of these values can trivially be increased if ever needed. + */ +#define BLK_CRYPTO_MAX_STANDARD_KEY_SIZE 64 +#define BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE 128 + +/* This should use max(), but max() doesn't work in a struct definition. */ +#define BLK_CRYPTO_MAX_ANY_KEY_SIZE \ + (BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE > \ + BLK_CRYPTO_MAX_STANDARD_KEY_SIZE ? \ + BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE : BLK_CRYPTO_MAX_STANDARD_KEY_SIZE) + +/* + * Size of the "software secret" which can be derived from a hardware-wrapped + * key. This is currently always 32 bytes. Note, the choice of 32 bytes + * assumes that the software secret is only used directly for algorithms that + * don't require more than a 256-bit key to get the desired security strength. + * If it were to be used e.g. directly as an AES-256-XTS key, then this would + * need to be increased (which is possible if hardware supports it, but care + * would need to be taken to avoid breaking users who need exactly 32 bytes). + */ +#define BLK_CRYPTO_SW_SECRET_SIZE 32 + /** * struct blk_crypto_config - an inline encryption key's crypto configuration * @crypto_mode: encryption algorithm this key is for * @data_unit_size: the data unit size for all encryption/decryptions with this * key. 
This is the size in bytes of each individual plaintext and * ciphertext. This is always a power of 2. It might be e.g. the * filesystem block size or the disk sector size. * @dun_bytes: the maximum number of bytes of DUN used when using this key + * @key_type: the type of this key -- either standard or hardware-wrapped */ struct blk_crypto_config { enum blk_crypto_mode_num crypto_mode; unsigned int data_unit_size; unsigned int dun_bytes; + enum blk_crypto_key_type key_type; }; /** * struct blk_crypto_key - an inline encryption key - * @crypto_cfg: the crypto configuration (like crypto_mode, key size) for this - * key + * @crypto_cfg: the crypto mode, data unit size, key type, and other + * characteristics of this key and how it will be used * @data_unit_size_bits: log2 of data_unit_size - * @size: size of this key in bytes (determined by @crypto_cfg.crypto_mode) - * @raw: the raw bytes of this key. Only the first @size bytes are used. + * @size: size of this key in bytes. The size of a standard key is fixed for a + * given crypto mode, but the size of a hardware-wrapped key can vary. + * @raw: the bytes of this key. Only the first @size bytes are significant. * * A blk_crypto_key is immutable once created, and many bios can reference it at * the same time. It must not be freed until all bios using it have completed * and it has been evicted from all devices on which it may have been used. */ struct blk_crypto_key { struct blk_crypto_config crypto_cfg; unsigned int data_unit_size_bits; unsigned int size; - u8 raw[BLK_CRYPTO_MAX_KEY_SIZE]; + u8 raw[BLK_CRYPTO_MAX_ANY_KEY_SIZE]; }; #define BLK_CRYPTO_MAX_IV_SIZE 32 #define BLK_CRYPTO_DUN_ARRAY_SIZE (BLK_CRYPTO_MAX_IV_SIZE / sizeof(u64)) /** * struct bio_crypt_ctx - an inline encryption context * @bc_key: the key, algorithm, and data unit size to use * @bc_dun: the data unit number (starting IV) to use * @@ -80,36 +134,42 @@ static inline bool bio_has_crypt_ctx(struct bio *bio) } void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key, const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE], gfp_t gfp_mask); bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc, unsigned int bytes, const u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]); -int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key, +int blk_crypto_init_key(struct blk_crypto_key *blk_key, + const u8 *raw_key, size_t raw_key_size, + enum blk_crypto_key_type key_type, enum blk_crypto_mode_num crypto_mode, unsigned int dun_bytes, unsigned int data_unit_size); int blk_crypto_start_using_key(struct block_device *bdev, const struct blk_crypto_key *key); void blk_crypto_evict_key(struct block_device *bdev, const struct blk_crypto_key *key); bool blk_crypto_config_supported_natively(struct block_device *bdev, const struct blk_crypto_config *cfg); bool blk_crypto_config_supported(struct block_device *bdev, const struct blk_crypto_config *cfg); +int blk_crypto_derive_sw_secret(struct block_device *bdev, + const u8 *eph_key, size_t eph_key_size, + u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]); + #else /* CONFIG_BLK_INLINE_ENCRYPTION */ static inline bool bio_has_crypt_ctx(struct bio *bio) { return false; } #endif /* CONFIG_BLK_INLINE_ENCRYPTION */ int __bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask); From patchwork Sat Nov 4 21:12:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13445632 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) 
From: Eric Biggers
To: linux-block@vger.kernel.org, linux-fscrypt@vger.kernel.org
Cc: kernel-team@android.com, Israel Rukshin, Gaurav Kashyap, Srinivas Kandagatla, Bjorn Andersson, Peter Griffin, Daniil Lunev
Subject: [RFC PATCH v8 2/4] blk-crypto: show supported key types in sysfs
Date: Sat, 4 Nov 2023 14:12:57 -0700
Message-ID: <20231104211259.17448-3-ebiggers@kernel.org>
In-Reply-To: <20231104211259.17448-1-ebiggers@kernel.org>
References: <20231104211259.17448-1-ebiggers@kernel.org>

From: Eric Biggers

Add sysfs files that indicate which type(s) of keys are supported by the inline encryption hardware associated with a particular request queue: /sys/block/$disk/queue/crypto/hw_wrapped_keys /sys/block/$disk/queue/crypto/standard_keys Userspace can use the presence or absence of these files to decide what encryption settings to use. Don't use a single key_type file, as devices might support both key types at the same time. Signed-off-by: Eric Biggers --- Documentation/ABI/stable/sysfs-block | 18 ++++++++++++++ block/blk-crypto-sysfs.c | 35 ++++++++++++++++++++++++++++ 2 files changed, 53 insertions(+) diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block index 1fe9a553c37b..e396c905800c 100644 --- a/Documentation/ABI/stable/sysfs-block +++ b/Documentation/ABI/stable/sysfs-block @@ -159,20 +159,30 @@ What: /sys/block//queue/crypto/ Date: February 2022 Contact: linux-block@vger.kernel.org Description: The presence of this subdirectory of /sys/block//queue/ indicates that the device supports inline encryption. This subdirectory contains files which describe the inline encryption capabilities of the device. For more information about inline encryption, refer to Documentation/block/inline-encryption.rst. +What: /sys/block//queue/crypto/hw_wrapped_keys +Contact: linux-block@vger.kernel.org +Description: + [RO] The presence of this file indicates that the device + supports hardware-wrapped inline encryption keys, i.e. key blobs + that can only be unwrapped and used by dedicated hardware.
For + more information about hardware-wrapped inline encryption keys, + see Documentation/block/inline-encryption.rst. + + What: /sys/block//queue/crypto/max_dun_bits Date: February 2022 Contact: linux-block@vger.kernel.org Description: [RO] This file shows the maximum length, in bits, of data unit numbers accepted by the device in inline encryption requests. What: /sys/block//queue/crypto/modes/ Date: February 2022 @@ -197,20 +207,28 @@ Description: What: /sys/block//queue/crypto/num_keyslots Date: February 2022 Contact: linux-block@vger.kernel.org Description: [RO] This file shows the number of keyslots the device has for use with inline encryption. +What: /sys/block//queue/crypto/standard_keys +Contact: linux-block@vger.kernel.org +Description: + [RO] The presence of this file indicates that the device + supports standard inline encryption keys, i.e. keys that are + managed in raw, plaintext form in software. + + What: /sys/block//queue/dax Date: June 2016 Contact: linux-block@vger.kernel.org Description: [RO] This file indicates whether the device supports Direct Access (DAX), used by CPU-addressable storage to bypass the pagecache. It shows '1' if true, '0' if not. What: /sys/block//queue/discard_granularity diff --git a/block/blk-crypto-sysfs.c b/block/blk-crypto-sysfs.c index a304434489ba..acab50493f2c 100644 --- a/block/blk-crypto-sysfs.c +++ b/block/blk-crypto-sysfs.c @@ -24,46 +24,81 @@ struct blk_crypto_attr { static struct blk_crypto_profile *kobj_to_crypto_profile(struct kobject *kobj) { return container_of(kobj, struct blk_crypto_kobj, kobj)->profile; } static struct blk_crypto_attr *attr_to_crypto_attr(struct attribute *attr) { return container_of(attr, struct blk_crypto_attr, attr); } +static ssize_t hw_wrapped_keys_show(struct blk_crypto_profile *profile, + struct blk_crypto_attr *attr, char *page) +{ + /* Always show supported, since the file doesn't exist otherwise. */ + return sysfs_emit(page, "supported\n"); +} + static ssize_t max_dun_bits_show(struct blk_crypto_profile *profile, struct blk_crypto_attr *attr, char *page) { return sysfs_emit(page, "%u\n", 8 * profile->max_dun_bytes_supported); } static ssize_t num_keyslots_show(struct blk_crypto_profile *profile, struct blk_crypto_attr *attr, char *page) { return sysfs_emit(page, "%u\n", profile->num_slots); } +static ssize_t standard_keys_show(struct blk_crypto_profile *profile, + struct blk_crypto_attr *attr, char *page) +{ + /* Always show supported, since the file doesn't exist otherwise. 
*/ + return sysfs_emit(page, "supported\n"); +} + #define BLK_CRYPTO_RO_ATTR(_name) \ static struct blk_crypto_attr _name##_attr = __ATTR_RO(_name) +BLK_CRYPTO_RO_ATTR(hw_wrapped_keys); BLK_CRYPTO_RO_ATTR(max_dun_bits); BLK_CRYPTO_RO_ATTR(num_keyslots); +BLK_CRYPTO_RO_ATTR(standard_keys); + +static umode_t blk_crypto_is_visible(struct kobject *kobj, + struct attribute *attr, int n) +{ + struct blk_crypto_profile *profile = kobj_to_crypto_profile(kobj); + struct blk_crypto_attr *a = attr_to_crypto_attr(attr); + + if (a == &hw_wrapped_keys_attr && + !(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_HW_WRAPPED)) + return 0; + if (a == &standard_keys_attr && + !(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_STANDARD)) + return 0; + + return 0444; +} static struct attribute *blk_crypto_attrs[] = { + &hw_wrapped_keys_attr.attr, &max_dun_bits_attr.attr, &num_keyslots_attr.attr, + &standard_keys_attr.attr, NULL, }; static const struct attribute_group blk_crypto_attr_group = { .attrs = blk_crypto_attrs, + .is_visible = blk_crypto_is_visible, }; /* * The encryption mode attributes. To avoid hard-coding the list of encryption * modes, these are initialized at boot time by blk_crypto_sysfs_init(). */ static struct blk_crypto_attr __blk_crypto_mode_attrs[BLK_ENCRYPTION_MODE_MAX]; static struct attribute *blk_crypto_mode_attrs[BLK_ENCRYPTION_MODE_MAX + 1]; static umode_t blk_crypto_mode_is_visible(struct kobject *kobj, From patchwork Sat Nov 4 21:12:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13445633 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C1CA6C4167D for ; Sat, 4 Nov 2023 21:17:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230420AbjKDVRP (ORCPT ); Sat, 4 Nov 2023 17:17:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46170 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230330AbjKDVRO (ORCPT ); Sat, 4 Nov 2023 17:17:14 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9A301123; Sat, 4 Nov 2023 14:17:09 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F048FC433D9; Sat, 4 Nov 2023 21:17:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1699132629; bh=4VXF1vwT1I72NW6Ayw/79rkx9ju0YknqguvoPNxCLTE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=YnwanQEW+kdZS3DnMZthAbHkyNIQwP9cL7mWvh0XRPlfKrSHLkRPaJmA7AYY+XMF4 kG+9xHxcGAD17zE+SFIHOUWe05SPzwdeLBHUAkMLMP0e9qnGD1aGl4W0RJI4xyIzhA FkYxWSVUYIoDqztsFRT08eFovB5obESXuhFxQu5co8r2oLtr+lRzXcUGfIb9Cp/hy5 2WUiPZ6EQawliTccjvPlSek+gOckVPn72nONw69wMftdpvW4C6Mvh99x7k9LvBFsFv 9zTM+li3giONUilN8DISx9OCOyQhIDF+lpARptDg/Krg/eDWmfcpuMexZ5rJdZ7SjV p5AxNZbVWx4Pw== From: Eric Biggers To: linux-block@vger.kernel.org, linux-fscrypt@vger.kernel.org Cc: kernel-team@android.com, Israel Rukshin , Gaurav Kashyap , Srinivas Kandagatla , Bjorn Andersson , Peter Griffin , Daniil Lunev Subject: [RFC PATCH v8 3/4] blk-crypto: add ioctls to create and prepare hardware-wrapped keys Date: Sat, 4 Nov 2023 14:12:58 -0700 Message-ID: <20231104211259.17448-4-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: 
<20231104211259.17448-1-ebiggers@kernel.org> References: <20231104211259.17448-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org From: Eric Biggers Until this point, the kernel can use hardware-wrapped keys to do encryption if userspace provides one -- specifically a key in ephemerally-wrapped form. However, no generic way has been provided for userspace to get such a key in the first place. Getting such a key is a two-step process. First, the key needs to be imported from a raw key or generated by the hardware, producing a key in long-term wrapped form. This happens once in the whole lifetime of the key. Second, the long-term wrapped key needs to be converted into ephemerally-wrapped form. This happens each time the key is "unlocked". In Android, these operations are supported in a generic way through KeyMint, a userspace abstraction layer. However, that method is Android-specific and can't be used on other Linux systems, may rely on proprietary libraries, and also misleads people into supporting KeyMint features like rollback resistance that make sense for other KeyMint keys but don't make sense for hardware-wrapped inline encryption keys. Therefore, this patch provides a generic kernel interface for these operations by introducing new block device ioctls: - BLKCRYPTOIMPORTKEY: convert a raw key to long-term wrapped form. - BLKCRYPTOGENERATEKEY: have the hardware generate a new key, then return it in long-term wrapped form. - BLKCRYPTOPREPAREKEY: convert a key from long-term wrapped form to ephemerally-wrapped form. These ioctls are implemented using new operations in blk_crypto_ll_ops. Signed-off-by: Eric Biggers --- Documentation/block/inline-encryption.rst | 32 ++++ .../userspace-api/ioctl/ioctl-number.rst | 4 +- block/blk-crypto-internal.h | 9 ++ block/blk-crypto-profile.c | 57 +++++++ block/blk-crypto.c | 143 ++++++++++++++++++ block/ioctl.c | 5 + include/linux/blk-crypto-profile.h | 53 +++++++ include/linux/blk-crypto.h | 1 + include/uapi/linux/blk-crypto.h | 44 ++++++ include/uapi/linux/fs.h | 6 +- 10 files changed, 348 insertions(+), 6 deletions(-) create mode 100644 include/uapi/linux/blk-crypto.h diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst index 07218455a2bc..e31b32495f66 100644 --- a/Documentation/block/inline-encryption.rst +++ b/Documentation/block/inline-encryption.rst @@ -479,20 +479,52 @@ keys, when hardware support is available. This works in the following way: - The programming and eviction of hardware-wrapped keys happens via ``blk_crypto_ll_ops::keyslot_program`` and ``blk_crypto_ll_ops::keyslot_evict``, just like it does for standard keys. If a driver supports hardware-wrapped keys, then it must handle hardware-wrapped keys being passed to these methods. blk-crypto-fallback doesn't support hardware-wrapped keys. Therefore, hardware-wrapped keys can only be used with actual inline encryption hardware. +All the above deals with hardware-wrapped keys in ephemerally-wrapped form only. +To get such keys in the first place, new block device ioctls have been added to +provide a generic interface to creating and preparing such keys: + +- ``BLKCRYPTOIMPORTKEY`` converts a raw key to long-term wrapped form. It takes + in a pointer to a ``struct blk_crypto_import_key_arg``. The caller must set + ``raw_key_ptr`` and ``raw_key_size`` to the pointer and size (in bytes) of the + raw key to import. 
On success, ``BLKCRYPTOIMPORTKEY`` returns 0 and writes + the resulting long-term wrapped key blob to the buffer pointed to by + ``lt_key_ptr``, which is of maximum size ``lt_key_size``. It also updates + ``lt_key_size`` to be the actual size of the key. On failure, it returns -1 + and sets errno. + +- ``BLKCRYPTOGENERATEKEY`` is like ``BLKCRYPTOIMPORTKEY``, but it has the + hardware generate the key instead of importing one. It takes in a pointer to + a ``struct blk_crypto_generate_key_arg``. + +- ``BLKCRYPTOPREPAREKEY`` converts a key from long-term wrapped form to + ephemerally-wrapped form. It takes in a pointer to a ``struct + blk_crypto_prepare_key_arg``. The caller must set ``lt_key_ptr`` and + ``lt_key_size`` to the pointer and size (in bytes) of the long-term wrapped + key blob to convert. On success, ``BLKCRYPTOPREPAREKEY`` returns 0 and writes + the resulting ephemerally-wrapped key blob to the buffer pointed to by + ``eph_key_ptr``, which is of maximum size ``eph_key_size``. It also updates + ``eph_key_size`` to be the actual size of the key. On failure, it returns -1 + and sets errno. + +Userspace needs to use either ``BLKCRYPTOIMPORTKEY`` or ``BLKCRYPTOGENERATEKEY`` +once to create a key, and then ``BLKCRYPTOPREPAREKEY`` each time the key is +unlocked and added to the kernel. Note that these ioctls have no relevance for +standard keys; they are only for hardware-wrapped keys. + Testability ----------- Both the hardware KDF and the inline encryption itself are well-defined algorithms that don't depend on any secrets other than the unwrapped key. Therefore, if the unwrapped key is known to software, these algorithms can be reproduced in software in order to verify the ciphertext that is written to disk by the inline encryption hardware. However, the unwrapped key will only be known to software for testing if the diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst index 4ea5b837399a..27d431f0e4b9 100644 --- a/Documentation/userspace-api/ioctl/ioctl-number.rst +++ b/Documentation/userspace-api/ioctl/ioctl-number.rst @@ -75,22 +75,22 @@ Code Seq# Include File Comments 0x00 00-1F linux/fb.h conflict! 0x00 00-1F linux/wavefront.h conflict! 0x02 all linux/fd.h 0x03 all linux/hdreg.h 0x04 D2-DC linux/umsdos_fs.h Dead since 2.6.11, but don't reuse these. 0x06 all linux/lp.h 0x09 all linux/raid/md_u.h 0x10 00-0F drivers/char/s390/vmcp.h 0x10 10-1F arch/s390/include/uapi/sclp_ctl.h 0x10 20-2F arch/s390/include/uapi/asm/hypfs.h -0x12 all linux/fs.h - linux/blkpg.h +0x12 all linux/fs.h, linux/blkpg.h, linux/blkzoned.h, + linux/blk-crypto.h 0x1b all InfiniBand Subsystem 0x20 all drivers/cdrom/cm206.h 0x22 all scsi/sg.h 0x3E 00-0F linux/counter.h '!' 
00-1F uapi/linux/seccomp.h '#' 00-3F IEEE 1394 Subsystem Block for the entire subsystem '$' 00-0F linux/perf_counter.h, linux/perf_event.h '%' 00-0F include/uapi/linux/stm.h System Trace Module subsystem diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h index 1893df9a8f06..ccf6dff6ff6b 100644 --- a/block/blk-crypto-internal.h +++ b/block/blk-crypto-internal.h @@ -76,20 +76,23 @@ blk_status_t blk_crypto_get_keyslot(struct blk_crypto_profile *profile, struct blk_crypto_keyslot **slot_ptr); void blk_crypto_put_keyslot(struct blk_crypto_keyslot *slot); int __blk_crypto_evict_key(struct blk_crypto_profile *profile, const struct blk_crypto_key *key); bool __blk_crypto_cfg_supported(struct blk_crypto_profile *profile, const struct blk_crypto_config *cfg); +int blk_crypto_ioctl(struct block_device *bdev, unsigned int cmd, + void __user *argp); + #else /* CONFIG_BLK_INLINE_ENCRYPTION */ static inline int blk_crypto_sysfs_register(struct gendisk *disk) { return 0; } static inline void blk_crypto_sysfs_unregister(struct gendisk *disk) { } @@ -123,20 +126,26 @@ static inline void blk_crypto_rq_set_defaults(struct request *rq) { } static inline bool blk_crypto_rq_is_encrypted(struct request *rq) { return false; } static inline bool blk_crypto_rq_has_keyslot(struct request *rq) { return false; } +static inline int blk_crypto_ioctl(struct block_device *bdev, unsigned int cmd, + void __user *argp) +{ + return -ENOTTY; +} + #endif /* CONFIG_BLK_INLINE_ENCRYPTION */ void __bio_crypt_advance(struct bio *bio, unsigned int bytes); static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes) { if (bio_has_crypt_ctx(bio)) __bio_crypt_advance(bio, bytes); } void __bio_crypt_free_ctx(struct bio *bio); diff --git a/block/blk-crypto-profile.c b/block/blk-crypto-profile.c index 1b92276ed2fc..f6419502fcbe 100644 --- a/block/blk-crypto-profile.c +++ b/block/blk-crypto-profile.c @@ -495,20 +495,77 @@ int blk_crypto_derive_sw_secret(struct block_device *bdev, return -EOPNOTSUPP; if (!profile->ll_ops.derive_sw_secret) return -EOPNOTSUPP; blk_crypto_hw_enter(profile); err = profile->ll_ops.derive_sw_secret(profile, eph_key, eph_key_size, sw_secret); blk_crypto_hw_exit(profile); return err; } +int blk_crypto_import_key(struct blk_crypto_profile *profile, + const u8 *raw_key, size_t raw_key_size, + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]) +{ + int ret; + + if (!profile) + return -EOPNOTSUPP; + if (!(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_HW_WRAPPED)) + return -EOPNOTSUPP; + if (!profile->ll_ops.import_key) + return -EOPNOTSUPP; + blk_crypto_hw_enter(profile); + ret = profile->ll_ops.import_key(profile, raw_key, raw_key_size, + lt_key); + blk_crypto_hw_exit(profile); + return ret; +} + +int blk_crypto_generate_key(struct blk_crypto_profile *profile, + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]) +{ + int ret; + + if (!profile) + return -EOPNOTSUPP; + if (!(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_HW_WRAPPED)) + return -EOPNOTSUPP; + if (!profile->ll_ops.generate_key) + return -EOPNOTSUPP; + + blk_crypto_hw_enter(profile); + ret = profile->ll_ops.generate_key(profile, lt_key); + blk_crypto_hw_exit(profile); + return ret; +} + +int blk_crypto_prepare_key(struct blk_crypto_profile *profile, + const u8 *lt_key, size_t lt_key_size, + u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]) +{ + int ret; + + if (!profile) + return -EOPNOTSUPP; + if (!(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_HW_WRAPPED)) + return -EOPNOTSUPP; + if (!profile->ll_ops.prepare_key) + 
return -EOPNOTSUPP; + + blk_crypto_hw_enter(profile); + ret = profile->ll_ops.prepare_key(profile, lt_key, lt_key_size, + eph_key); + blk_crypto_hw_exit(profile); + return ret; +} + /** * blk_crypto_intersect_capabilities() - restrict supported crypto capabilities * by child device * @parent: the crypto profile for the parent device * @child: the crypto profile for the child device, or NULL * * This clears all crypto capabilities in @parent that aren't set in @child. If * @child is NULL, then this clears all parent capabilities. * * Only use this when setting up the crypto profile for a layered device, before diff --git a/block/blk-crypto.c b/block/blk-crypto.c index 5a09d0ef1a01..2270a88e2e4d 100644 --- a/block/blk-crypto.c +++ b/block/blk-crypto.c @@ -460,10 +460,153 @@ void blk_crypto_evict_key(struct block_device *bdev, * keyslot (due to a hardware or driver issue) or is allegedly still in * use by I/O (due to a kernel bug). Even in these cases, the key is * still unlinked from the keyslot management structures, and the caller * is allowed and expected to free it right away. There's nothing * callers can do to handle errors, so just log them and return void. */ if (err) pr_warn_ratelimited("%pg: error %d evicting key\n", bdev, err); } EXPORT_SYMBOL_GPL(blk_crypto_evict_key); + +static int blk_crypto_ioctl_import_key(struct blk_crypto_profile *profile, + void __user *argp) +{ + struct blk_crypto_import_key_arg arg; + u8 raw_key[BLK_CRYPTO_MAX_STANDARD_KEY_SIZE]; + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]; + int ret; + + if (copy_from_user(&arg, argp, sizeof(arg))) + return -EFAULT; + + if (memchr_inv(arg.reserved, 0, sizeof(arg.reserved))) + return -EINVAL; + + if (arg.raw_key_size < 16 || arg.raw_key_size > sizeof(raw_key)) + return -EINVAL; + + if (copy_from_user(raw_key, u64_to_user_ptr(arg.raw_key_ptr), + arg.raw_key_size)) { + ret = -EFAULT; + goto out; + } + ret = blk_crypto_import_key(profile, raw_key, arg.raw_key_size, lt_key); + if (ret < 0) + goto out; + if (ret > arg.lt_key_size) { + ret = -EOVERFLOW; + goto out; + } + arg.lt_key_size = ret; + if (copy_to_user(u64_to_user_ptr(arg.lt_key_ptr), lt_key, + arg.lt_key_size) || + copy_to_user(argp, &arg, sizeof(arg))) { + ret = -EFAULT; + goto out; + } + ret = 0; + +out: + memzero_explicit(raw_key, sizeof(raw_key)); + memzero_explicit(lt_key, sizeof(lt_key)); + return ret; +} + +static int blk_crypto_ioctl_generate_key(struct blk_crypto_profile *profile, + void __user *argp) +{ + struct blk_crypto_generate_key_arg arg; + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]; + int ret; + + if (copy_from_user(&arg, argp, sizeof(arg))) + return -EFAULT; + + if (memchr_inv(arg.reserved, 0, sizeof(arg.reserved))) + return -EINVAL; + + ret = blk_crypto_generate_key(profile, lt_key); + if (ret < 0) + goto out; + if (ret > arg.lt_key_size) { + ret = -EOVERFLOW; + goto out; + } + arg.lt_key_size = ret; + if (copy_to_user(u64_to_user_ptr(arg.lt_key_ptr), lt_key, + arg.lt_key_size) || + copy_to_user(argp, &arg, sizeof(arg))) { + ret = -EFAULT; + goto out; + } + ret = 0; + +out: + memzero_explicit(lt_key, sizeof(lt_key)); + return ret; +} + +static int blk_crypto_ioctl_prepare_key(struct blk_crypto_profile *profile, + void __user *argp) +{ + struct blk_crypto_prepare_key_arg arg; + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]; + u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]; + int ret; + + if (copy_from_user(&arg, argp, sizeof(arg))) + return -EFAULT; + + if (memchr_inv(arg.reserved, 0, sizeof(arg.reserved))) + return -EINVAL; + + if 
(arg.lt_key_size > sizeof(lt_key)) + return -EINVAL; + + if (copy_from_user(lt_key, u64_to_user_ptr(arg.lt_key_ptr), + arg.lt_key_size)) { + ret = -EFAULT; + goto out; + } + ret = blk_crypto_prepare_key(profile, lt_key, arg.lt_key_size, eph_key); + if (ret < 0) + goto out; + if (ret > arg.eph_key_size) { + ret = -EOVERFLOW; + goto out; + } + arg.eph_key_size = ret; + if (copy_to_user(u64_to_user_ptr(arg.eph_key_ptr), eph_key, + arg.eph_key_size) || + copy_to_user(argp, &arg, sizeof(arg))) { + ret = -EFAULT; + goto out; + } + ret = 0; + +out: + memzero_explicit(lt_key, sizeof(lt_key)); + memzero_explicit(eph_key, sizeof(eph_key)); + return ret; +} + +int blk_crypto_ioctl(struct block_device *bdev, unsigned int cmd, + void __user *argp) +{ + struct blk_crypto_profile *profile = + bdev_get_queue(bdev)->crypto_profile; + + if (!profile) + return -EOPNOTSUPP; + + switch (cmd) { + case BLKCRYPTOIMPORTKEY: + return blk_crypto_ioctl_import_key(profile, argp); + case BLKCRYPTOGENERATEKEY: + return blk_crypto_ioctl_generate_key(profile, argp); + case BLKCRYPTOPREPAREKEY: + return blk_crypto_ioctl_prepare_key(profile, argp); + default: + return -ENOTTY; + } +} diff --git a/block/ioctl.c b/block/ioctl.c index 4160f4e6bd5b..a895d8547dd4 100644 --- a/block/ioctl.c +++ b/block/ioctl.c @@ -5,20 +5,21 @@ #include #include #include #include #include #include #include #include #include #include "blk.h" +#include "blk-crypto-internal.h" static int blkpg_do_ioctl(struct block_device *bdev, struct blkpg_partition __user *upart, int op) { struct gendisk *disk = bdev->bd_disk; struct blkpg_partition p; long long start, length; if (disk->flags & GENHD_FL_NO_PART) return -EINVAL; @@ -553,20 +554,24 @@ static int blkdev_common_ioctl(struct block_device *bdev, blk_mode_t mode, case BLKRRPART: if (!capable(CAP_SYS_ADMIN)) return -EACCES; if (bdev_is_partition(bdev)) return -EINVAL; return disk_scan_partitions(bdev->bd_disk, mode); case BLKTRACESTART: case BLKTRACESTOP: case BLKTRACETEARDOWN: return blk_trace_ioctl(bdev, cmd, argp); + case BLKCRYPTOIMPORTKEY: + case BLKCRYPTOGENERATEKEY: + case BLKCRYPTOPREPAREKEY: + return blk_crypto_ioctl(bdev, cmd, argp); case IOC_PR_REGISTER: return blkdev_pr_register(bdev, mode, argp); case IOC_PR_RESERVE: return blkdev_pr_reserve(bdev, mode, argp); case IOC_PR_RELEASE: return blkdev_pr_release(bdev, mode, argp); case IOC_PR_PREEMPT: return blkdev_pr_preempt(bdev, mode, argp, false); case IOC_PR_PREEMPT_ABORT: return blkdev_pr_preempt(bdev, mode, argp, true); diff --git a/include/linux/blk-crypto-profile.h b/include/linux/blk-crypto-profile.h index 229287a7f451..a3eef098f3c3 100644 --- a/include/linux/blk-crypto-profile.h +++ b/include/linux/blk-crypto-profile.h @@ -64,20 +64,62 @@ struct blk_crypto_ll_ops { * * This only needs to be implemented if BLK_CRYPTO_KEY_TYPE_HW_WRAPPED * is supported. * * Must return 0 on success, -EBADMSG if the key is invalid, or another * -errno code on other errors. */ int (*derive_sw_secret)(struct blk_crypto_profile *profile, const u8 *eph_key, size_t eph_key_size, u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]); + + /** + * @import_key: Create a hardware-wrapped key by importing a raw key. + * + * This only needs to be implemented if BLK_CRYPTO_KEY_TYPE_HW_WRAPPED + * is supported. + * + * On success, must write the new key in long-term wrapped form to + * @lt_key and return its size in bytes. On failure, must return a + * -errno value. 
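+	 * (The long-term wrapped form is created only once in the lifetime of
+	 * the key and is what userspace persists; it must be converted to
+	 * ephemerally-wrapped form via @prepare_key each time before the key
+	 * can be used.)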
+ */ + int (*import_key)(struct blk_crypto_profile *profile, + const u8 *raw_key, size_t raw_key_size, + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]); + + /** + * @generate_key: Generate a hardware-wrapped key. + * + * This only needs to be implemented if BLK_CRYPTO_KEY_TYPE_HW_WRAPPED + * is supported. + * + * On success, must write the new key in long-term wrapped form to + * @lt_key and return its size in bytes. On failure, must return a + * -errno value. + */ + int (*generate_key)(struct blk_crypto_profile *profile, + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]); + + /** + * @prepare_key: Prepare a hardware-wrapped key to be used. + * + * Prepare a hardware-wrapped key to be used by converting it from + * long-term wrapped form to ephemerally-wrapped form. This only needs + * to be implemented if BLK_CRYPTO_KEY_TYPE_HW_WRAPPED is supported. + * + * On success, must write the key in ephemerally-wrapped form to + * @eph_key and return its size in bytes. On failure, must return a + * -errno value. + */ + int (*prepare_key)(struct blk_crypto_profile *profile, + const u8 *lt_key, size_t lt_key_size, + u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]); }; /** * struct blk_crypto_profile - inline encryption profile for a device * * This struct contains a storage device's inline encryption capabilities (e.g. * the supported crypto algorithms), driver-provided functions to control the * inline encryption hardware (e.g. programming and evicting keys), and optional * device-independent keyslot management data. */ @@ -156,20 +198,31 @@ int blk_crypto_profile_init(struct blk_crypto_profile *profile, int devm_blk_crypto_profile_init(struct device *dev, struct blk_crypto_profile *profile, unsigned int num_slots); unsigned int blk_crypto_keyslot_index(struct blk_crypto_keyslot *slot); void blk_crypto_reprogram_all_keys(struct blk_crypto_profile *profile); void blk_crypto_profile_destroy(struct blk_crypto_profile *profile); +int blk_crypto_import_key(struct blk_crypto_profile *profile, + const u8 *raw_key, size_t raw_key_size, + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]); + +int blk_crypto_generate_key(struct blk_crypto_profile *profile, + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]); + +int blk_crypto_prepare_key(struct blk_crypto_profile *profile, + const u8 *lt_key, size_t lt_key_size, + u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]); + void blk_crypto_intersect_capabilities(struct blk_crypto_profile *parent, const struct blk_crypto_profile *child); bool blk_crypto_has_capabilities(const struct blk_crypto_profile *target, const struct blk_crypto_profile *reference); void blk_crypto_update_capabilities(struct blk_crypto_profile *dst, const struct blk_crypto_profile *src); #endif /* __LINUX_BLK_CRYPTO_PROFILE_H */ diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h index 19066d86ecbf..e61008c23668 100644 --- a/include/linux/blk-crypto.h +++ b/include/linux/blk-crypto.h @@ -1,19 +1,20 @@ /* SPDX-License-Identifier: GPL-2.0 */ /* * Copyright 2019 Google LLC */ #ifndef __LINUX_BLK_CRYPTO_H #define __LINUX_BLK_CRYPTO_H #include +#include enum blk_crypto_mode_num { BLK_ENCRYPTION_MODE_INVALID, BLK_ENCRYPTION_MODE_AES_256_XTS, BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV, BLK_ENCRYPTION_MODE_ADIANTUM, BLK_ENCRYPTION_MODE_SM4_XTS, BLK_ENCRYPTION_MODE_MAX, }; diff --git a/include/uapi/linux/blk-crypto.h b/include/uapi/linux/blk-crypto.h new file mode 100644 index 000000000000..97302c6eb6af --- /dev/null +++ b/include/uapi/linux/blk-crypto.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: 
GPL-2.0 WITH Linux-syscall-note */ +#ifndef _UAPI_LINUX_BLK_CRYPTO_H +#define _UAPI_LINUX_BLK_CRYPTO_H + +#include +#include + +struct blk_crypto_import_key_arg { + /* Raw key (input) */ + __u64 raw_key_ptr; + __u64 raw_key_size; + /* Long-term wrapped key blob (output) */ + __u64 lt_key_ptr; + __u64 lt_key_size; + __u64 reserved[4]; +}; + +struct blk_crypto_generate_key_arg { + /* Long-term wrapped key blob (output) */ + __u64 lt_key_ptr; + __u64 lt_key_size; + __u64 reserved[4]; +}; + +struct blk_crypto_prepare_key_arg { + /* Long-term wrapped key blob (input) */ + __u64 lt_key_ptr; + __u64 lt_key_size; + /* Ephemerally-wrapped key blob (output) */ + __u64 eph_key_ptr; + __u64 eph_key_size; + __u64 reserved[4]; +}; + +/* + * These ioctls share the block device ioctl space; see uapi/linux/fs.h. + * 140-141 are reserved for future blk-crypto ioctls; any more than that would + * require an additional allocation from the block device ioctl space. + */ +#define BLKCRYPTOIMPORTKEY _IOWR(0x12, 137, struct blk_crypto_import_key_arg) +#define BLKCRYPTOGENERATEKEY _IOWR(0x12, 138, struct blk_crypto_generate_key_arg) +#define BLKCRYPTOPREPAREKEY _IOWR(0x12, 139, struct blk_crypto_prepare_key_arg) + +#endif /* _UAPI_LINUX_BLK_CRYPTO_H */ diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index da43810b7485..d22f9518059e 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -178,24 +178,22 @@ struct fsxattr { #define BLKDISCARD _IO(0x12,119) #define BLKIOMIN _IO(0x12,120) #define BLKIOOPT _IO(0x12,121) #define BLKALIGNOFF _IO(0x12,122) #define BLKPBSZGET _IO(0x12,123) #define BLKDISCARDZEROES _IO(0x12,124) #define BLKSECDISCARD _IO(0x12,125) #define BLKROTATIONAL _IO(0x12,126) #define BLKZEROOUT _IO(0x12,127) #define BLKGETDISKSEQ _IOR(0x12,128,__u64) -/* - * A jump here: 130-136 are reserved for zoned block devices - * (see uapi/linux/blkzoned.h) - */ +/* 130-136 are used by zoned block device ioctls (uapi/linux/blkzoned.h) */ +/* 137-141 are used by blk-crypto ioctls (uapi/linux/blk-crypto.h) */ #define BMAP_IOCTL 1 /* obsolete - kept for compatibility */ #define FIBMAP _IO(0x00,1) /* bmap access */ #define FIGETBSZ _IO(0x00,2) /* get the block size used for bmap */ #define FIFREEZE _IOWR('X', 119, int) /* Freeze */ #define FITHAW _IOWR('X', 120, int) /* Thaw */ #define FITRIM _IOWR('X', 121, struct fstrim_range) /* Trim */ #define FICLONE _IOW(0x94, 9, int) #define FICLONERANGE _IOW(0x94, 13, struct file_clone_range) #define FIDEDUPERANGE _IOWR(0x94, 54, struct file_dedupe_range) From patchwork Sat Nov 4 21:12:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13445635 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66082C41535 for ; Sat, 4 Nov 2023 21:17:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230330AbjKDVRS (ORCPT ); Sat, 4 Nov 2023 17:17:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46198 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231134AbjKDVRR (ORCPT ); Sat, 4 Nov 2023 17:17:17 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 14F09B0; Sat, 4 Nov 2023 14:17:10 -0700 (PDT) Received: by 
smtp.kernel.org (Postfix) with ESMTPSA id 6C57EC433C8; Sat, 4 Nov 2023 21:17:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1699132629; bh=amnqdzaRr5HWv3IVXjNJNbr2D/Ga92eQZ3W65cAVlr8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LHx9yV7Nvs+rmH01ST+2Re6qpvX5pDDFDrRbjZS3Fu07+mCz7HQLPzj6JMhNOYn2U G6b7lk1n/DknKz3Hc6MAGL7PmtTnBrplhoc6iz/FTK8EYAJZeFZlLN5hooYhZdfxzt jatDnCwxzU1DMWFq8xxNBvlLioBThI6Wl1yESlWFvb1rQ97nWsZbkXkswbVLemrRnm BsfEZ5p2GhJ9j2+MHF6WjSCHaDzzt6VIwUAIXKtFCrz3Mak11Wn/vefhh27IpY7kaU JFi75uJINZMNQmGkZt9TIw6ayGDZjns6lM5mwbWj948c2xWQ97hikkmSyiVl4/EAW1 dDAHQ/+EYfQFQ== From: Eric Biggers To: linux-block@vger.kernel.org, linux-fscrypt@vger.kernel.org Cc: kernel-team@android.com, Israel Rukshin , Gaurav Kashyap , Srinivas Kandagatla , Bjorn Andersson , Peter Griffin , Daniil Lunev Subject: [RFC PATCH v8 4/4] fscrypt: add support for hardware-wrapped keys Date: Sat, 4 Nov 2023 14:12:59 -0700 Message-ID: <20231104211259.17448-5-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231104211259.17448-1-ebiggers@kernel.org> References: <20231104211259.17448-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org From: Eric Biggers Add support for hardware-wrapped keys to fscrypt. Such keys are protected from certain attacks, such as cold boot attacks. For more information, see the "Hardware-wrapped keys" section of Documentation/block/inline-encryption.rst. To support hardware-wrapped keys in fscrypt, we allow the fscrypt master keys to be hardware-wrapped, and we allow encryption policies to be flagged as needing a hardware-wrapped key. File contents encryption is done by passing the wrapped key to the inline encryption hardware via blk-crypto. Other fscrypt operations such as filenames encryption continue to be done by the kernel, using the "software secret" which the hardware derives. For more information, see the documentation which this patch adds to Documentation/filesystems/fscrypt.rst. Note that this feature doesn't require any filesystem-specific changes. However it does depend on inline encryption support, and thus currently it is only applicable to ext4 and f2fs. This feature is intentionally not UAPI or on-disk format compatible with the version of this feature in the Android Common Kernels, as that version was meant as a temporary solution and it took some shortcuts. Once upstreamed, this new version should be used going forwards. This patch has been heavily rewritten from the original version by Gaurav Kashyap and Barani Muthukumaran . Signed-off-by: Eric Biggers --- Documentation/filesystems/fscrypt.rst | 154 +++++++++++++++++++++++--- fs/crypto/fscrypt_private.h | 71 ++++++++++-- fs/crypto/hkdf.c | 4 +- fs/crypto/inline_crypt.c | 46 ++++++-- fs/crypto/keyring.c | 122 ++++++++++++++------ fs/crypto/keysetup.c | 54 ++++++++- fs/crypto/keysetup_v1.c | 5 +- fs/crypto/policy.c | 11 +- include/uapi/linux/fscrypt.h | 7 +- 9 files changed, 401 insertions(+), 73 deletions(-) diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst index 1b84f818e574..08b499dd8e80 100644 --- a/Documentation/filesystems/fscrypt.rst +++ b/Documentation/filesystems/fscrypt.rst @@ -63,21 +63,21 @@ of holes (unallocated blocks which logically contain all zeroes) in files is not protected. 
fscrypt is not guaranteed to protect confidentiality or authenticity if an attacker is able to manipulate the filesystem offline prior to an authorized user later accessing the filesystem. Online attacks -------------- fscrypt (and storage encryption in general) can only provide limited -protection, if any at all, against online attacks. In detail: +protection against online attacks. In detail: Side-channel attacks ~~~~~~~~~~~~~~~~~~~~ fscrypt is only resistant to side-channel attacks, such as timing or electromagnetic attacks, to the extent that the underlying Linux Cryptographic API algorithms or inline encryption hardware are. If a vulnerable algorithm is used, such as a table-based implementation of AES, it may be possible for an attacker to mount a side channel attack against the online system. Side channel attacks may also be mounted @@ -92,30 +92,37 @@ system. Instead, existing access control mechanisms such as file mode bits, POSIX ACLs, LSMs, or namespaces should be used for this purpose. (For the reasoning behind this, understand that while the key is added, the confidentiality of the data, from the perspective of the system itself, is *not* protected by the mathematical properties of encryption but rather only by the correctness of the kernel. Therefore, any encryption-specific access control checks would merely be enforced by kernel *code* and therefore would be largely redundant with the wide variety of access control mechanisms already available.) -Kernel memory compromise -~~~~~~~~~~~~~~~~~~~~~~~~ +Read-only kernel memory compromise +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Unless `hardware-wrapped keys`_ are used, an attacker who gains the +ability to read from arbitrary kernel memory, e.g. by mounting a +physical attack or by exploiting a kernel security vulnerability, can +compromise all fscrypt keys that are currently in-use. This also +extends to cold boot attacks; if the system is suddenly powered off, +keys the system was using may remain in memory for a short time. -An attacker who compromises the system enough to read from arbitrary -memory, e.g. by mounting a physical attack or by exploiting a kernel -security vulnerability, can compromise all encryption keys that are -currently in use. +However, if hardware-wrapped keys are used, then the fscrypt master +keys and file contents encryption keys (but not other types of fscrypt +subkeys such as filenames encryption keys) are protected from +compromises of arbitrary kernel memory. -However, fscrypt allows encryption keys to be removed from the kernel, -which may protect them from later compromise. +In addition, fscrypt allows encryption keys to be removed from the +kernel, which may protect them from later compromise. In more detail, the FS_IOC_REMOVE_ENCRYPTION_KEY ioctl (or the FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS ioctl) can wipe a master encryption key from kernel memory. If it does so, it will also try to evict all cached inodes which had been "unlocked" using the key, thereby wiping their per-file keys and making them once again appear "locked", i.e. in ciphertext or encrypted form. However, these ioctls have some limitations: @@ -138,20 +145,38 @@ However, these ioctls have some limitations: caches are freed but not wiped. Therefore, portions thereof may be recoverable from freed memory, even after the corresponding key(s) were wiped. To partially solve this, you can set CONFIG_PAGE_POISONING=y in your kernel config and add page_poison=1 to your kernel command line. However, this has a performance cost. 
- Secret keys might still exist in CPU registers, in crypto accelerator hardware (if used by the crypto API to implement any of the algorithms), or in other places not explicitly considered here. +Full system compromise +~~~~~~~~~~~~~~~~~~~~~~ + +An attacker who gains "root" access and/or the ability to execute +arbitrary kernel code can freely exfiltrate data that is protected by +any in-use fscrypt keys. Thus, usually fscrypt provides no meaningful +protection in this scenario. (Data that is protected by a key that is +absent throughout the entire attack remains protected, modulo the +limitations of key removal mentioned above in the case where the key +was removed prior to the attack.) + +However, if `hardware-wrapped keys`_ are used, such attackers will be +unable to exfiltrate the master keys or file contents keys in a form +that will be usable after the system is powered off. This may be +useful if the attacker is significantly time-limited and/or +bandwidth-limited, so they can only exfiltrate some data and need to +rely on a later offline attack to exfiltrate the rest of it. + Limitations of v1 policies ~~~~~~~~~~~~~~~~~~~~~~~~~~ v1 encryption policies have some weaknesses with respect to online attacks: - There is no verification that the provided master key is correct. Therefore, a malicious user can temporarily associate the wrong key with another user's encrypted files to which they have read-only access. Because of filesystem caching, the wrong key will then be @@ -164,20 +189,25 @@ attacks: - Non-root users cannot securely remove encryption keys. All the above problems are fixed with v2 encryption policies. For this reason among others, it is recommended to use v2 encryption policies on all new encrypted directories. Key hierarchy ============= +Note: this section assumes the use of standard keys (i.e. "software +keys") rather than hardware-wrapped keys. The use of hardware-wrapped +keys modifies the key hierarchy slightly. For details, see the +`Hardware-wrapped keys`_ section. + Master Keys ----------- Each encrypted directory tree is protected by a *master key*. Master keys can be up to 64 bytes long, and must be at least as long as the greater of the security strength of the contents and filenames encryption modes being used. For example, if any AES-256 mode is used, the master key must be at least 256 bits, i.e. 32 bytes. A stricter requirement applies if the key is used by a v1 encryption policy and AES-256-XTS is used; such keys must be 64 bytes. @@ -604,20 +634,22 @@ This structure must be initialized as follows: - ``flags`` contains optional flags from ````: - FSCRYPT_POLICY_FLAGS_PAD_*: The amount of NUL padding to use when encrypting filenames. If unsure, use FSCRYPT_POLICY_FLAGS_PAD_32 (0x3). - FSCRYPT_POLICY_FLAG_DIRECT_KEY: See `DIRECT_KEY policies`_. - FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64: See `IV_INO_LBLK_64 policies`_. - FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32: See `IV_INO_LBLK_32 policies`_. + - FSCRYPT_POLICY_FLAG_HW_WRAPPED_KEY: This flag denotes that this + policy uses a hardware-wrapped key. See `Hardware-wrapped keys`_. v1 encryption policies only support the PAD_* and DIRECT_KEY flags. The other flags are only supported by v2 encryption policies. The DIRECT_KEY, IV_INO_LBLK_64, and IV_INO_LBLK_32 flags are mutually exclusive. - ``log2_data_unit_size`` is the log2 of the data unit size in bytes, or 0 to select the default data unit size. The data unit size is the granularity of file contents encryption. 
For example, setting @@ -826,40 +858,41 @@ The FS_IOC_ADD_ENCRYPTION_KEY ioctl adds a master encryption key to the filesystem, making all files on the filesystem which were encrypted using that key appear "unlocked", i.e. in plaintext form. It can be executed on any file or directory on the target filesystem, but using the filesystem's root directory is recommended. It takes in a pointer to struct fscrypt_add_key_arg, defined as follows:: struct fscrypt_add_key_arg { struct fscrypt_key_specifier key_spec; __u32 raw_size; __u32 key_id; - __u32 __reserved[8]; + __u32 flags; + __u32 __reserved[7]; __u8 raw[]; }; #define FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR 1 #define FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER 2 struct fscrypt_key_specifier { __u32 type; /* one of FSCRYPT_KEY_SPEC_TYPE_* */ __u32 __reserved; union { __u8 __reserved[32]; /* reserve some extra space */ __u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE]; __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE]; } u; }; struct fscrypt_provisioning_key_payload { __u32 type; - __u32 __reserved; + __u32 flags; __u8 raw[]; }; struct fscrypt_add_key_arg must be zeroed, then initialized as follows: - If the key is being added for use by v1 encryption policies, then ``key_spec.type`` must contain FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR, and ``key_spec.u.descriptor`` must contain the descriptor of the key being added, corresponding to the value in the @@ -873,20 +906,26 @@ as follows: an *output* field which the kernel fills in with a cryptographic hash of the key. To add this type of key, the calling process does not need any privileges. However, the number of keys that can be added is limited by the user's quota for the keyrings service (see ``Documentation/security/keys/core.rst``). - ``raw_size`` must be the size of the ``raw`` key provided, in bytes. Alternatively, if ``key_id`` is nonzero, this field must be 0, since in that case the size is implied by the specified Linux keyring key. +- ``flags`` contains optional flags from ````: + + - FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED: This denotes that the key is a + hardware-wrapped key. See `Hardware-wrapped keys`_. This flag + can't be used if FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR is used. + - ``key_id`` is 0 if the raw key is given directly in the ``raw`` field. Otherwise ``key_id`` is the ID of a Linux keyring key of type "fscrypt-provisioning" whose payload is struct fscrypt_provisioning_key_payload whose ``raw`` field contains the raw key and whose ``type`` field matches ``key_spec.type``. Since ``raw`` is variable-length, the total size of this key's payload must be ``sizeof(struct fscrypt_provisioning_key_payload)`` plus the raw key size. The process must have Search permission on this key. @@ -914,32 +953,36 @@ must still be provided, as a proof of knowledge). FS_IOC_ADD_ENCRYPTION_KEY returns 0 if either the key or a claim to the key was either added or already exists. FS_IOC_ADD_ENCRYPTION_KEY can fail with the following errors: - ``EACCES``: FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR was specified, but the caller does not have the CAP_SYS_ADMIN capability in the initial user namespace; or the raw key was specified by Linux key ID but the process lacks Search permission on the key. 
+- ``EBADMSG``: FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED was specified, but the + key isn't a valid hardware-wrapped key - ``EDQUOT``: the key quota for this user would be exceeded by adding the key - ``EINVAL``: invalid key size or key specifier type, or reserved bits were set - ``EKEYREJECTED``: the raw key was specified by Linux key ID, but the key has the wrong type - ``ENOKEY``: the raw key was specified by Linux key ID, but no key exists with that ID - ``ENOTTY``: this type of filesystem does not implement encryption - ``EOPNOTSUPP``: the kernel was not configured with encryption support for this filesystem, or the filesystem superblock has not - had encryption enabled on it + had encryption enabled on it, or FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED was + specified but the filesystem and/or the hardware doesn't support + hardware-wrapped keys Legacy method ~~~~~~~~~~~~~ For v1 encryption policies, a master encryption key can also be provided by adding it to a process-subscribed keyring, e.g. to a session keyring, or to a user keyring if the user keyring is linked into the session keyring. This method is deprecated (and not supported for v2 encryption @@ -988,23 +1031,22 @@ Two ioctls are available for removing a key that was added by - `FS_IOC_REMOVE_ENCRYPTION_KEY`_ - `FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS`_ These two ioctls differ only in cases where v2 policy keys are added or removed by non-root users. These ioctls don't work on keys that were added via the legacy process-subscribed keyrings mechanism. -Before using these ioctls, read the `Kernel memory compromise`_ -section for a discussion of the security goals and limitations of -these ioctls. +Before using these ioctls, read the `Online attacks`_ section for a +discussion of the security goals and limitations of these ioctls. FS_IOC_REMOVE_ENCRYPTION_KEY ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The FS_IOC_REMOVE_ENCRYPTION_KEY ioctl removes a claim to a master encryption key from the filesystem, and possibly removes the key itself. It can be executed on any file or directory on the target filesystem, but using the filesystem's root directory is recommended. It takes in a pointer to struct fscrypt_remove_key_arg, defined as follows:: @@ -1310,29 +1352,107 @@ contents. To enable this, set CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y in the kernel configuration, and specify the "inlinecrypt" mount option when mounting the filesystem. Note that the "inlinecrypt" mount option just specifies to use inline encryption when possible; it doesn't force its use. fscrypt will still fall back to using the kernel crypto API on files where the inline encryption hardware doesn't have the needed crypto capabilities (e.g. support for the needed encryption algorithm and data unit size) and where blk-crypto-fallback is unusable. (For blk-crypto-fallback to be usable, it must be enabled in the kernel configuration with -CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y.) +CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y, and the file must be +protected by a standard key rather than a hardware-wrapped key.) Currently fscrypt always uses the filesystem block size (which is usually 4096 bytes) as the data unit size. Therefore, it can only use inline encryption hardware that supports that data unit size. Inline encryption doesn't affect the ciphertext or other aspects of the on-disk format, so users may freely switch back and forth between -using "inlinecrypt" and not using "inlinecrypt". +using "inlinecrypt" and not using "inlinecrypt". 
An exception is that +files that are protected by a hardware-wrapped key can only be +encrypted/decrypted by the inline encryption hardware and therefore +can only be accessed when the "inlinecrypt" mount option is used. For +more information about hardware-wrapped keys, see below. + +Hardware-wrapped keys +--------------------- + +fscrypt supports using *hardware-wrapped keys* when the inline +encryption hardware supports it. Such keys are only present in kernel +memory in wrapped (encrypted) form; they can only be unwrapped +(decrypted) by the inline encryption hardware and are temporally bound +to the current boot. This prevents the keys from being compromised if +kernel memory is leaked. This is done without limiting the number of +keys that can be used and while still allowing the execution of +cryptographic tasks that are tied to the same key but can't use inline +encryption hardware, e.g. filenames encryption. + +Note that hardware-wrapped keys aren't specific to fscrypt; they are a +block layer feature (part of *blk-crypto*). For more details about +hardware-wrapped keys, see the block layer documentation at +:ref:`Documentation/block/inline-encryption.rst +`. Below, we just focus on the details of how +fscrypt can use hardware-wrapped keys. + +fscrypt supports hardware-wrapped keys by allowing the fscrypt master +keys to be hardware-wrapped keys as an alternative to standard keys. +To add a hardware-wrapped key with `FS_IOC_ADD_ENCRYPTION_KEY`_, +userspace must specify FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED in the +``flags`` field of struct fscrypt_add_key_arg and also in the +``flags`` field of struct fscrypt_provisioning_key_payload when +applicable. The key must be in ephemerally-wrapped form, not +long-term wrapped form. + +To specify that files will be protected by a hardware-wrapped key, +userspace must specify FSCRYPT_POLICY_FLAG_HW_WRAPPED_KEY in the +encryption policy. (Note that this flag is somewhat redundant, as the +encryption policy also contains the key identifier, and +hardware-wrapped keys and standard keys will have different key +identifiers. However, it is sometimes helpful to make it explicit +that an encryption policy is supposed to use a hardware-wrapped key.) + +Some limitations apply. First, files protected by a hardware-wrapped +key are tied to the system's inline encryption hardware. Therefore +they can only be accessed when the "inlinecrypt" mount option is used, +and they can't be included in portable filesystem images. Second, +currently the hardware-wrapped key support is only compatible with +`IV_INO_LBLK_64 policies`_ and `IV_INO_LBLK_32 policies`_, as it +assumes that there is just one file contents encryption key per +fscrypt master key rather than one per file. Future work may address +this limitation by passing per-file nonces down the storage stack to +allow the hardware to derive per-file keys. + +Implementation-wise, to encrypt/decrypt the contents of files that are +protected by a hardware-wrapped key, fscrypt uses blk-crypto, +attaching the hardware-wrapped key to the bio crypt contexts. As is +the case with standard keys, the block layer will program the key into +a keyslot when it isn't already in one. However, when programming a +hardware-wrapped key, the hardware doesn't program the given key +directly into a keyslot but rather unwraps it (using the hardware's +ephemeral wrapping key) and derives the inline encryption key from it. 
+The inline encryption key is the key that actually gets programmed +into a keyslot, and it is never exposed to software. + +However, fscrypt doesn't just do file contents encryption; it also +uses its master keys to derive filenames encryption keys, key +identifiers, and sometimes some more obscure types of subkeys such as +dirhash keys. So even with file contents encryption out of the +picture, fscrypt still needs a raw key to work with. To get such a +key from a hardware-wrapped key, fscrypt asks the inline encryption +hardware to derive a cryptographically isolated "software secret" from +the hardware-wrapped key. fscrypt uses this "software secret" to key +its KDF to derive all subkeys other than file contents keys. + +Note that this implies that the hardware-wrapped key feature only +protects the file contents encryption keys. It doesn't protect other +fscrypt subkeys such as filenames encryption keys. Direct I/O support ================== For direct I/O on an encrypted file to work, the following conditions must be met (in addition to the conditions for direct I/O on an unencrypted file): * The file must be using inline encryption. Usually this means that the filesystem must be mounted with ``-o inlinecrypt`` and inline diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h index 1892356cf924..57c5f3a7f48a 100644 --- a/fs/crypto/fscrypt_private.h +++ b/fs/crypto/fscrypt_private.h @@ -20,20 +20,41 @@ #define FSCRYPT_FILE_NONCE_SIZE 16 /* * Minimum size of an fscrypt master key. Note: a longer key will be required * if ciphers with a 256-bit security strength are used. This is just the * absolute minimum, which applies when only 128-bit encryption is used. */ #define FSCRYPT_MIN_KEY_SIZE 16 +/* Maximum size of a standard fscrypt master key */ +#define FSCRYPT_MAX_STANDARD_KEY_SIZE 64 + +/* Maximum size of a hardware-wrapped fscrypt master key */ +#define FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE + +/* + * Maximum size of an fscrypt master key across both key types. + * This should just use max(), but max() doesn't work in a struct definition. + */ +#define FSCRYPT_MAX_ANY_KEY_SIZE \ + (FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE > FSCRYPT_MAX_STANDARD_KEY_SIZE ? \ + FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE : FSCRYPT_MAX_STANDARD_KEY_SIZE) + +/* + * FSCRYPT_MAX_KEY_SIZE is defined in the UAPI header, but the addition of + * hardware-wrapped keys has made it misleading as it's only for standard keys. + * Don't use it in kernel code; use one of the above constants instead. + */ +#undef FSCRYPT_MAX_KEY_SIZE + #define FSCRYPT_CONTEXT_V1 1 #define FSCRYPT_CONTEXT_V2 2 /* Keep this in sync with include/uapi/linux/fscrypt.h */ #define FSCRYPT_MODE_MAX FSCRYPT_MODE_AES_256_HCTR2 struct fscrypt_context_v1 { u8 version; /* FSCRYPT_CONTEXT_V1 */ u8 contents_encryption_mode; u8 filenames_encryption_mode; @@ -351,51 +372,59 @@ struct fscrypt_hkdf { int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key, unsigned int master_key_size); /* * The list of contexts in which fscrypt uses HKDF. These values are used as * the first byte of the HKDF application-specific info string to guarantee that * info strings are never repeated between contexts. This ensures that all HKDF * outputs are unique and cryptographically isolated, i.e. knowledge of one * output doesn't reveal another. 
*/ -#define HKDF_CONTEXT_KEY_IDENTIFIER 1 /* info= */ +#define HKDF_CONTEXT_KEY_IDENTIFIER_FOR_STANDARD_KEY \ + 1 /* info= */ #define HKDF_CONTEXT_PER_FILE_ENC_KEY 2 /* info=file_nonce */ #define HKDF_CONTEXT_DIRECT_KEY 3 /* info=mode_num */ #define HKDF_CONTEXT_IV_INO_LBLK_64_KEY 4 /* info=mode_num||fs_uuid */ #define HKDF_CONTEXT_DIRHASH_KEY 5 /* info=file_nonce */ #define HKDF_CONTEXT_IV_INO_LBLK_32_KEY 6 /* info=mode_num||fs_uuid */ #define HKDF_CONTEXT_INODE_HASH_KEY 7 /* info= */ +#define HKDF_CONTEXT_KEY_IDENTIFIER_FOR_HW_WRAPPED_KEY \ + 8 /* info= */ int fscrypt_hkdf_expand(const struct fscrypt_hkdf *hkdf, u8 context, const u8 *info, unsigned int infolen, u8 *okm, unsigned int okmlen); void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf); /* inline_crypt.c */ #ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci); static inline bool fscrypt_using_inline_encryption(const struct fscrypt_inode_info *ci) { return ci->ci_inlinecrypt; } int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key, - const u8 *raw_key, + const u8 *raw_key, size_t raw_key_size, + bool is_hw_wrapped, const struct fscrypt_inode_info *ci); void fscrypt_destroy_inline_crypt_key(struct super_block *sb, struct fscrypt_prepared_key *prep_key); +int fscrypt_derive_sw_secret(struct super_block *sb, + const u8 *wrapped_key, size_t wrapped_key_size, + u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]); + /* * Check whether the crypto transform or blk-crypto key has been allocated in * @prep_key, depending on which encryption implementation the file will use. */ static inline bool fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key, const struct fscrypt_inode_info *ci) { /* * The two smp_load_acquire()'s here pair with the smp_store_release()'s @@ -418,63 +447,91 @@ static inline int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci) } static inline bool fscrypt_using_inline_encryption(const struct fscrypt_inode_info *ci) { return false; } static inline int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key, - const u8 *raw_key, + const u8 *raw_key, size_t raw_key_size, + bool is_hw_wrapped, const struct fscrypt_inode_info *ci) { WARN_ON_ONCE(1); return -EOPNOTSUPP; } static inline void fscrypt_destroy_inline_crypt_key(struct super_block *sb, struct fscrypt_prepared_key *prep_key) { } +static inline int +fscrypt_derive_sw_secret(struct super_block *sb, + const u8 *wrapped_key, size_t wrapped_key_size, + u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]) +{ + fscrypt_warn(NULL, "kernel doesn't support hardware-wrapped keys"); + return -EOPNOTSUPP; +} + static inline bool fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key, const struct fscrypt_inode_info *ci) { return smp_load_acquire(&prep_key->tfm) != NULL; } #endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ /* keyring.c */ /* * fscrypt_master_key_secret - secret key material of an in-use master key */ struct fscrypt_master_key_secret { /* - * For v2 policy keys: HKDF context keyed by this master key. - * For v1 policy keys: not set (hkdf.hmac_tfm == NULL). + * The KDF with which subkeys of this key can be derived. + * + * For v1 policy keys, this isn't applicable and won't be set. + * Otherwise, this KDF will be keyed by this master key if + * ->is_hw_wrapped=false, or by the "software secret" that hardware + * derived from this master key if ->is_hw_wrapped=true. 
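+	 * (For hardware-wrapped keys, the software secret is obtained from
+	 * the inline encryption hardware via fscrypt_derive_sw_secret() when
+	 * the key is added; see add_master_key() in keyring.c.)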
*/ struct fscrypt_hkdf hkdf; + /* + * True if this key is a hardware-wrapped key; false if this key is a + * standard key (i.e. a "software key"). For v1 policy keys this will + * always be false, as v1 policy support is a legacy feature which + * doesn't support newer functionality such as hardware-wrapped keys. + */ + bool is_hw_wrapped; + /* * Size of the raw key in bytes. This remains set even if ->raw was * zeroized due to no longer being needed. I.e. we still remember the * size of the key even if we don't need to remember the key itself. */ u32 size; - /* For v1 policy keys: the raw key. Wiped for v2 policy keys. */ - u8 raw[FSCRYPT_MAX_KEY_SIZE]; + /* + * The raw key which userspace provided, when still needed. This can be + * either a standard key or a hardware-wrapped key, as indicated by + * ->is_hw_wrapped. In the case of a standard, v2 policy key, there is + * no need to remember the raw key separately from ->hkdf so this field + * will be zeroized as soon as ->hkdf is initialized. + */ + u8 raw[FSCRYPT_MAX_ANY_KEY_SIZE]; } __randomize_layout; /* * fscrypt_master_key - an in-use master key * * This represents a master encryption key which has been added to the * filesystem. There are three high-level states that a key can be in: * * FSCRYPT_KEY_STATUS_PRESENT diff --git a/fs/crypto/hkdf.c b/fs/crypto/hkdf.c index 5a384dad2c72..7e007810e434 100644 --- a/fs/crypto/hkdf.c +++ b/fs/crypto/hkdf.c @@ -1,17 +1,19 @@ // SPDX-License-Identifier: GPL-2.0 /* * Implementation of HKDF ("HMAC-based Extract-and-Expand Key Derivation * Function"), aka RFC 5869. See also the original paper (Krawczyk 2010): * "Cryptographic Extraction and Key Derivation: The HKDF Scheme". * - * This is used to derive keys from the fscrypt master keys. + * This is used to derive keys from the fscrypt master keys (or from the + * "software secrets" which hardware derives from the fscrypt master keys, in + * the case that the fscrypt master keys are hardware-wrapped keys). * * Copyright 2019 Google LLC */ #include #include #include "fscrypt_private.h" /* diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c index 821a800c9a89..79b338f824b1 100644 --- a/fs/crypto/inline_crypt.c +++ b/fs/crypto/inline_crypt.c @@ -86,20 +86,21 @@ static void fscrypt_log_blk_crypto_impl(struct fscrypt_mode *mode, mode->friendly_name); } } } /* Enable inline encryption for this file if supported. */ int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci) { const struct inode *inode = ci->ci_inode; struct super_block *sb = inode->i_sb; + unsigned int policy_flags = fscrypt_policy_flags(&ci->ci_policy); struct blk_crypto_config crypto_cfg; struct block_device **devs; unsigned int num_devs; unsigned int i; /* The file must need contents encryption, not filenames encryption */ if (!S_ISREG(inode->i_mode)) return 0; /* The crypto mode must have a blk-crypto counterpart */ @@ -111,72 +112,75 @@ int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci) return 0; /* * When a page contains multiple logically contiguous filesystem blocks, * some filesystem code only calls fscrypt_mergeable_bio() for the first * block in the page. This is fine for most of fscrypt's IV generation * strategies, where contiguous blocks imply contiguous IVs. But it * doesn't work with IV_INO_LBLK_32. For now, simply exclude * IV_INO_LBLK_32 with blocksize != PAGE_SIZE from inline encryption. 
*/ - if ((fscrypt_policy_flags(&ci->ci_policy) & - FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32) && + if ((policy_flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32) && sb->s_blocksize != PAGE_SIZE) return 0; /* * On all the filesystem's block devices, blk-crypto must support the * crypto configuration that the file would use. */ crypto_cfg.crypto_mode = ci->ci_mode->blk_crypto_mode; crypto_cfg.data_unit_size = 1U << ci->ci_data_unit_bits; crypto_cfg.dun_bytes = fscrypt_get_dun_bytes(ci); - crypto_cfg.key_type = BLK_CRYPTO_KEY_TYPE_STANDARD; + crypto_cfg.key_type = + (policy_flags & FSCRYPT_POLICY_FLAG_HW_WRAPPED_KEY) ? + BLK_CRYPTO_KEY_TYPE_HW_WRAPPED : BLK_CRYPTO_KEY_TYPE_STANDARD; devs = fscrypt_get_devices(sb, &num_devs); if (IS_ERR(devs)) return PTR_ERR(devs); for (i = 0; i < num_devs; i++) { if (!blk_crypto_config_supported(devs[i], &crypto_cfg)) goto out_free_devs; } fscrypt_log_blk_crypto_impl(ci->ci_mode, devs, num_devs, &crypto_cfg); ci->ci_inlinecrypt = true; out_free_devs: kfree(devs); return 0; } int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key, - const u8 *raw_key, + const u8 *raw_key, size_t raw_key_size, + bool is_hw_wrapped, const struct fscrypt_inode_info *ci) { const struct inode *inode = ci->ci_inode; struct super_block *sb = inode->i_sb; enum blk_crypto_mode_num crypto_mode = ci->ci_mode->blk_crypto_mode; + enum blk_crypto_key_type key_type = is_hw_wrapped ? + BLK_CRYPTO_KEY_TYPE_HW_WRAPPED : BLK_CRYPTO_KEY_TYPE_STANDARD; struct blk_crypto_key *blk_key; struct block_device **devs; unsigned int num_devs; unsigned int i; int err; blk_key = kmalloc(sizeof(*blk_key), GFP_KERNEL); if (!blk_key) return -ENOMEM; - err = blk_crypto_init_key(blk_key, raw_key, ci->ci_mode->keysize, - BLK_CRYPTO_KEY_TYPE_STANDARD, crypto_mode, - fscrypt_get_dun_bytes(ci), + err = blk_crypto_init_key(blk_key, raw_key, raw_key_size, key_type, + crypto_mode, fscrypt_get_dun_bytes(ci), 1U << ci->ci_data_unit_bits); if (err) { fscrypt_err(inode, "error %d initializing blk-crypto key", err); goto fail; } /* Start using blk-crypto on all the filesystem's block devices. */ devs = fscrypt_get_devices(sb, &num_devs); if (IS_ERR(devs)) { err = PTR_ERR(devs); @@ -221,20 +225,48 @@ void fscrypt_destroy_inline_crypt_key(struct super_block *sb, /* Evict the key from all the filesystem's block devices. */ devs = fscrypt_get_devices(sb, &num_devs); if (!IS_ERR(devs)) { for (i = 0; i < num_devs; i++) blk_crypto_evict_key(devs[i], blk_key); kfree(devs); } kfree_sensitive(blk_key); } +/* + * Ask the inline encryption hardware to derive the software secret from a + * hardware-wrapped key. Returns -EOPNOTSUPP if hardware-wrapped keys aren't + * supported on this filesystem or hardware. + */ +int fscrypt_derive_sw_secret(struct super_block *sb, + const u8 *wrapped_key, size_t wrapped_key_size, + u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]) +{ + int err; + + /* The filesystem must be mounted with -o inlinecrypt. 
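+	 * (Hardware-wrapped keys can only be used by the inline encryption
+	 * hardware; blk-crypto-fallback doesn't support them, so there is
+	 * nothing to fall back to without inline encryption.)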
*/ + if (!(sb->s_flags & SB_INLINECRYPT)) { + fscrypt_warn(NULL, + "%s: filesystem not mounted with inlinecrypt\n", + sb->s_id); + return -EOPNOTSUPP; + } + + err = blk_crypto_derive_sw_secret(sb->s_bdev, wrapped_key, + wrapped_key_size, sw_secret); + if (err == -EOPNOTSUPP) + fscrypt_warn(NULL, + "%s: block device doesn't support hardware-wrapped keys\n", + sb->s_id); + return err; +} + bool __fscrypt_inode_uses_inline_crypto(const struct inode *inode) { return inode->i_crypt_info->ci_inlinecrypt; } EXPORT_SYMBOL_GPL(__fscrypt_inode_uses_inline_crypto); static void fscrypt_generate_dun(const struct fscrypt_inode_info *ci, u64 lblk_num, u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE]) { diff --git a/fs/crypto/keyring.c b/fs/crypto/keyring.c index f34a9b0b9e92..cd371eaa8a01 100644 --- a/fs/crypto/keyring.c +++ b/fs/crypto/keyring.c @@ -137,25 +137,25 @@ static inline bool valid_key_spec(const struct fscrypt_key_specifier *spec) { if (spec->__reserved) return false; return master_key_spec_len(spec) != 0; } static int fscrypt_user_key_instantiate(struct key *key, struct key_preparsed_payload *prep) { /* - * We just charge FSCRYPT_MAX_KEY_SIZE bytes to the user's key quota for - * each key, regardless of the exact key size. The amount of memory - * actually used is greater than the size of the raw key anyway. + * We just charge FSCRYPT_MAX_STANDARD_KEY_SIZE bytes to the user's key + * quota for each key, regardless of the exact key size. The amount of + * memory actually used is greater than the size of the raw key anyway. */ - return key_payload_reserve(key, FSCRYPT_MAX_KEY_SIZE); + return key_payload_reserve(key, FSCRYPT_MAX_STANDARD_KEY_SIZE); } static void fscrypt_user_key_describe(const struct key *key, struct seq_file *m) { seq_puts(m, key->description); } /* * Type of key in ->mk_users. Each key of this type represents a particular * user who has added a particular master key. @@ -546,55 +546,97 @@ static int do_add_master_key(struct super_block *sb, return err; } static int add_master_key(struct super_block *sb, struct fscrypt_master_key_secret *secret, struct fscrypt_key_specifier *key_spec) { int err; if (key_spec->type == FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER) { - err = fscrypt_init_hkdf(&secret->hkdf, secret->raw, - secret->size); - if (err) - return err; + u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]; + u8 *kdf_key = secret->raw; + unsigned int kdf_key_size = secret->size; + u8 keyid_kdf_ctx = HKDF_CONTEXT_KEY_IDENTIFIER_FOR_STANDARD_KEY; /* - * Now that the HKDF context is initialized, the raw key is no - * longer needed. + * For standard keys, the fscrypt master key is used directly as + * the fscrypt KDF key. For hardware-wrapped keys, we have to + * pass the master key to the hardware to derive the KDF key, + * which is then only used to derive non-file-contents subkeys. + */ + if (secret->is_hw_wrapped) { + err = fscrypt_derive_sw_secret(sb, secret->raw, + secret->size, sw_secret); + if (err) + return err; + kdf_key = sw_secret; + kdf_key_size = sizeof(sw_secret); + /* + * To avoid weird behavior if someone manages to + * determine sw_secret and add it as a standard key, + * ensure that hardware-wrapped keys and standard keys + * will have different key identifiers by deriving their + * key identifiers using different KDF contexts. + */ + keyid_kdf_ctx = + HKDF_CONTEXT_KEY_IDENTIFIER_FOR_HW_WRAPPED_KEY; + } + err = fscrypt_init_hkdf(&secret->hkdf, kdf_key, kdf_key_size); + /* + * Now that the KDF context is initialized, the raw KDF key is + * no longer needed. 
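+		 * (For a hardware-wrapped key, only the derived software
+		 * secret is wiped here; the wrapped master key itself remains
+		 * in secret->raw, since it is still needed to program the
+		 * inline encryption hardware for file contents encryption.)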
*/ - memzero_explicit(secret->raw, secret->size); + memzero_explicit(kdf_key, kdf_key_size); + if (err) + return err; /* Calculate the key identifier */ - err = fscrypt_hkdf_expand(&secret->hkdf, - HKDF_CONTEXT_KEY_IDENTIFIER, NULL, 0, + err = fscrypt_hkdf_expand(&secret->hkdf, keyid_kdf_ctx, NULL, 0, key_spec->u.identifier, FSCRYPT_KEY_IDENTIFIER_SIZE); if (err) return err; } return do_add_master_key(sb, secret, key_spec); } +/* + * Validate the size of an fscrypt master key being added. Note that this is + * just an initial check, as we don't know which ciphers will be used yet. + * There is a stricter size check later when the key is actually used by a file. + */ +static inline bool fscrypt_valid_key_size(size_t size, u32 add_key_flags) +{ + u32 max_size = (add_key_flags & FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED) ? + FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE : + FSCRYPT_MAX_STANDARD_KEY_SIZE; + + return size >= FSCRYPT_MIN_KEY_SIZE && size <= max_size; +} + static int fscrypt_provisioning_key_preparse(struct key_preparsed_payload *prep) { const struct fscrypt_provisioning_key_payload *payload = prep->data; - if (prep->datalen < sizeof(*payload) + FSCRYPT_MIN_KEY_SIZE || - prep->datalen > sizeof(*payload) + FSCRYPT_MAX_KEY_SIZE) + if (prep->datalen < sizeof(*payload)) + return -EINVAL; + + if (!fscrypt_valid_key_size(prep->datalen - sizeof(*payload), + payload->flags)) return -EINVAL; if (payload->type != FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR && payload->type != FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER) return -EINVAL; - if (payload->__reserved) + if (payload->flags & ~FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED) return -EINVAL; prep->payload.data[0] = kmemdup(payload, prep->datalen, GFP_KERNEL); if (!prep->payload.data[0]) return -ENOMEM; prep->quotalen = prep->datalen; return 0; } @@ -627,50 +669,54 @@ static struct key_type key_type_fscrypt_provisioning = { .free_preparse = fscrypt_provisioning_key_free_preparse, .instantiate = generic_key_instantiate, .describe = fscrypt_provisioning_key_describe, .destroy = fscrypt_provisioning_key_destroy, }; /* * Retrieve the raw key from the Linux keyring key specified by 'key_id', and * store it into 'secret'. * - * The key must be of type "fscrypt-provisioning" and must have the field - * fscrypt_provisioning_key_payload::type set to 'type', indicating that it's - * only usable with fscrypt with the particular KDF version identified by - * 'type'. We don't use the "logon" key type because there's no way to - * completely restrict the use of such keys; they can be used by any kernel API - * that accepts "logon" keys and doesn't require a specific service prefix. + * The key must be of type "fscrypt-provisioning" and must have the 'type' and + * 'flags' field of the payload set to the given values, indicating that the key + * is intended for use for the specified purpose. We don't use the "logon" key + * type because there's no way to completely restrict the use of such keys; they + * can be used by any kernel API that accepts "logon" keys and doesn't require a + * specific service prefix. * * The ability to specify the key via Linux keyring key is intended for cases * where userspace needs to re-add keys after the filesystem is unmounted and * re-mounted. Most users should just provide the raw key directly instead. 
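To make the 'type' and 'flags' matching requirement concrete, here is a rough userspace sketch of stashing a hardware-wrapped key in an "fscrypt-provisioning" key. It assumes the updated <linux/fscrypt.h> uapi header and libkeyutils; the key description and the source of the wrapped-key blob are illustrative.

#include <keyutils.h>
#include <linux/fscrypt.h>
#include <stdlib.h>
#include <string.h>

/* Stash a hardware-wrapped key blob in the session keyring. */
static key_serial_t stash_hw_wrapped_key(const void *blob, size_t blob_size)
{
	size_t len = sizeof(struct fscrypt_provisioning_key_payload) + blob_size;
	struct fscrypt_provisioning_key_payload *payload = calloc(1, len);
	key_serial_t id;

	if (!payload)
		return -1;
	payload->type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
	payload->flags = FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED;
	memcpy(payload->raw, blob, blob_size);

	id = add_key("fscrypt-provisioning", "example-hw-key", payload, len,
		     KEY_SPEC_SESSION_KEYRING);
	free(payload);
	return id;
}

The resulting key id is what fscrypt_add_key_arg::key_id refers to; get_keyring_key() below rejects it unless fscrypt_add_key_arg::flags also has FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED set, so a wrapped key can never silently be treated as a standard one or vice versa.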
*/ -static int get_keyring_key(u32 key_id, u32 type, +static int get_keyring_key(u32 key_id, u32 type, u32 flags, struct fscrypt_master_key_secret *secret) { key_ref_t ref; struct key *key; const struct fscrypt_provisioning_key_payload *payload; int err; ref = lookup_user_key(key_id, 0, KEY_NEED_SEARCH); if (IS_ERR(ref)) return PTR_ERR(ref); key = key_ref_to_ptr(ref); if (key->type != &key_type_fscrypt_provisioning) goto bad_key; payload = key->payload.data[0]; - /* Don't allow fscrypt v1 keys to be used as v2 keys and vice versa. */ - if (payload->type != type) + /* + * Don't allow fscrypt v1 keys to be used as v2 keys and vice versa. + * Similarly, don't allow hardware-wrapped keys to be used as + * non-hardware-wrapped keys and vice versa. + */ + if (payload->type != type || payload->flags != flags) goto bad_key; secret->size = key->datalen - sizeof(*payload); memcpy(secret->raw, payload->raw, secret->size); err = 0; goto out_put; bad_key: err = -EKEYREJECTED; out_put: @@ -722,29 +768,38 @@ int fscrypt_ioctl_add_key(struct file *filp, void __user *_uarg) /* * Only root can add keys that are identified by an arbitrary descriptor * rather than by a cryptographic hash --- since otherwise a malicious * user could add the wrong key. */ if (arg.key_spec.type == FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR && !capable(CAP_SYS_ADMIN)) return -EACCES; memset(&secret, 0, sizeof(secret)); + + if (arg.flags) { + if (arg.flags & ~FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED) + return -EINVAL; + if (arg.key_spec.type != FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER) + return -EINVAL; + secret.is_hw_wrapped = true; + } + if (arg.key_id) { if (arg.raw_size != 0) return -EINVAL; - err = get_keyring_key(arg.key_id, arg.key_spec.type, &secret); + err = get_keyring_key(arg.key_id, arg.key_spec.type, arg.flags, + &secret); if (err) goto out_wipe_secret; } else { - if (arg.raw_size < FSCRYPT_MIN_KEY_SIZE || - arg.raw_size > FSCRYPT_MAX_KEY_SIZE) + if (!fscrypt_valid_key_size(arg.raw_size, arg.flags)) return -EINVAL; secret.size = arg.raw_size; err = -EFAULT; if (copy_from_user(secret.raw, uarg->raw, secret.size)) goto out_wipe_secret; } err = add_master_key(sb, &secret, &arg.key_spec); if (err) goto out_wipe_secret; @@ -758,41 +813,42 @@ int fscrypt_ioctl_add_key(struct file *filp, void __user *_uarg) err = 0; out_wipe_secret: wipe_master_key_secret(&secret); return err; } EXPORT_SYMBOL_GPL(fscrypt_ioctl_add_key); static void fscrypt_get_test_dummy_secret(struct fscrypt_master_key_secret *secret) { - static u8 test_key[FSCRYPT_MAX_KEY_SIZE]; + static u8 test_key[FSCRYPT_MAX_STANDARD_KEY_SIZE]; - get_random_once(test_key, FSCRYPT_MAX_KEY_SIZE); + get_random_once(test_key, sizeof(test_key)); memset(secret, 0, sizeof(*secret)); - secret->size = FSCRYPT_MAX_KEY_SIZE; - memcpy(secret->raw, test_key, FSCRYPT_MAX_KEY_SIZE); + secret->size = sizeof(test_key); + memcpy(secret->raw, test_key, sizeof(test_key)); } int fscrypt_get_test_dummy_key_identifier( u8 key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE]) { struct fscrypt_master_key_secret secret; int err; fscrypt_get_test_dummy_secret(&secret); err = fscrypt_init_hkdf(&secret.hkdf, secret.raw, secret.size); if (err) goto out; - err = fscrypt_hkdf_expand(&secret.hkdf, HKDF_CONTEXT_KEY_IDENTIFIER, + err = fscrypt_hkdf_expand(&secret.hkdf, + HKDF_CONTEXT_KEY_IDENTIFIER_FOR_STANDARD_KEY, NULL, 0, key_identifier, FSCRYPT_KEY_IDENTIFIER_SIZE); out: wipe_master_key_secret(&secret); return err; } /** * fscrypt_add_test_dummy_key() - add the test dummy encryption key * @sb: the filesystem instance to add the key to 
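The direct (non-keyring) path is the common case. Below is a minimal userspace sketch of adding a hardware-wrapped key blob with the new flag, assuming the updated <linux/fscrypt.h>; obtaining the blob itself is a vendor-specific provisioning step outside the scope of this patch.

#include <linux/fscrypt.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>

/*
 * Add a hardware-wrapped key to the filesystem containing 'fs_fd'.  On
 * success the kernel writes the computed key identifier into
 * key_spec.u.identifier, which the caller then uses in the encryption
 * policy.  blob_size must be within the wrapped-key limits enforced by
 * fscrypt_valid_key_size().
 */
static int add_hw_wrapped_key(int fs_fd, const void *blob, size_t blob_size,
			      __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
{
	struct fscrypt_add_key_arg *arg;
	int ret;

	arg = calloc(1, sizeof(*arg) + blob_size);
	if (!arg)
		return -1;
	arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
	arg->raw_size = blob_size;
	arg->flags = FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED;
	memcpy(arg->raw, blob, blob_size);

	ret = ioctl(fs_fd, FS_IOC_ADD_ENCRYPTION_KEY, arg);
	if (ret == 0)
		memcpy(identifier, arg->key_spec.u.identifier,
		       FSCRYPT_KEY_IDENTIFIER_SIZE);
	free(arg);
	return ret;
}

Note that the identifier returned here is derived, via the _FOR_HW_WRAPPED_KEY HKDF context added above, from the hardware-derived software secret rather than from the wrapped bytes, so it is domain-separated from standard-key identifiers.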
diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c index d71f7c799e79..2cff1f840d2d 100644 --- a/fs/crypto/keysetup.c +++ b/fs/crypto/keysetup.c @@ -146,21 +146,23 @@ fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key, * raw key, encryption mode (@ci->ci_mode), flag indicating which encryption * implementation (fs-layer or blk-crypto) will be used (@ci->ci_inlinecrypt), * and IV generation method (@ci->ci_policy.flags). */ int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key, const u8 *raw_key, const struct fscrypt_inode_info *ci) { struct crypto_skcipher *tfm; if (fscrypt_using_inline_encryption(ci)) - return fscrypt_prepare_inline_crypt_key(prep_key, raw_key, ci); + return fscrypt_prepare_inline_crypt_key(prep_key, raw_key, + ci->ci_mode->keysize, + false, ci); tfm = fscrypt_allocate_skcipher(ci->ci_mode, raw_key, ci->ci_inode); if (IS_ERR(tfm)) return PTR_ERR(tfm); /* * Pairs with the smp_load_acquire() in fscrypt_is_key_prepared(). * I.e., here we publish ->tfm with a RELEASE barrier so that * concurrent tasks can ACQUIRE it. Note that this concurrency is only * possible for per-mode keys, not for per-file keys. */ @@ -188,39 +190,64 @@ int fscrypt_set_per_file_enc_key(struct fscrypt_inode_info *ci, static int setup_per_mode_enc_key(struct fscrypt_inode_info *ci, struct fscrypt_master_key *mk, struct fscrypt_prepared_key *keys, u8 hkdf_context, bool include_fs_uuid) { const struct inode *inode = ci->ci_inode; const struct super_block *sb = inode->i_sb; struct fscrypt_mode *mode = ci->ci_mode; const u8 mode_num = mode - fscrypt_modes; struct fscrypt_prepared_key *prep_key; - u8 mode_key[FSCRYPT_MAX_KEY_SIZE]; + u8 mode_key[FSCRYPT_MAX_STANDARD_KEY_SIZE]; u8 hkdf_info[sizeof(mode_num) + sizeof(sb->s_uuid)]; unsigned int hkdf_infolen = 0; + bool use_hw_wrapped_key = false; int err; if (WARN_ON_ONCE(mode_num > FSCRYPT_MODE_MAX)) return -EINVAL; + if (mk->mk_secret.is_hw_wrapped && S_ISREG(inode->i_mode)) { + /* Using a hardware-wrapped key for file contents encryption */ + if (!fscrypt_using_inline_encryption(ci)) { + if (sb->s_flags & SB_INLINECRYPT) + fscrypt_warn(ci->ci_inode, + "Hardware-wrapped key required, but no suitable inline encryption capabilities are available"); + else + fscrypt_warn(ci->ci_inode, + "Hardware-wrapped keys require inline encryption (-o inlinecrypt)"); + return -EINVAL; + } + use_hw_wrapped_key = true; + } + prep_key = &keys[mode_num]; if (fscrypt_is_key_prepared(prep_key, ci)) { ci->ci_enc_key = *prep_key; return 0; } mutex_lock(&fscrypt_mode_key_setup_mutex); if (fscrypt_is_key_prepared(prep_key, ci)) goto done_unlock; + if (use_hw_wrapped_key) { + err = fscrypt_prepare_inline_crypt_key(prep_key, + mk->mk_secret.raw, + mk->mk_secret.size, true, + ci); + if (err) + goto out_unlock; + goto done_unlock; + } + BUILD_BUG_ON(sizeof(mode_num) != 1); BUILD_BUG_ON(sizeof(sb->s_uuid) != 16); BUILD_BUG_ON(sizeof(hkdf_info) != 17); hkdf_info[hkdf_infolen++] = mode_num; if (include_fs_uuid) { memcpy(&hkdf_info[hkdf_infolen], &sb->s_uuid, sizeof(sb->s_uuid)); hkdf_infolen += sizeof(sb->s_uuid); } err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf, @@ -329,20 +356,33 @@ static int fscrypt_setup_iv_ino_lblk_32_key(struct fscrypt_inode_info *ci, fscrypt_hash_inode_number(ci, mk); return 0; } static int fscrypt_setup_v2_file_key(struct fscrypt_inode_info *ci, struct fscrypt_master_key *mk, bool need_dirhash_key) { int err; + if (mk->mk_secret.is_hw_wrapped && + !(ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_HW_WRAPPED_KEY)) { + 
fscrypt_warn(ci->ci_inode, + "Given key is hardware-wrapped, but file isn't protected by a hardware-wrapped key"); + return -EINVAL; + } + if ((ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_HW_WRAPPED_KEY) && + !mk->mk_secret.is_hw_wrapped) { + fscrypt_warn(ci->ci_inode, + "File is protected by a hardware-wrapped key, but given key isn't hardware-wrapped"); + return -EINVAL; + } + if (ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) { /* * DIRECT_KEY: instead of deriving per-file encryption keys, the * per-file nonce will be included in all the IVs. But unlike * v1 policies, for v2 policies in this case we don't encrypt * with the master key directly but rather derive a per-mode * encryption key. This ensures that the master key is * consistently used only for HKDF, avoiding key reuse issues. */ err = setup_per_mode_enc_key(ci, mk, mk->mk_direct_keys, @@ -355,21 +395,21 @@ static int fscrypt_setup_v2_file_key(struct fscrypt_inode_info *ci, * the IVs. This format is optimized for use with inline * encryption hardware compliant with the UFS standard. */ err = setup_per_mode_enc_key(ci, mk, mk->mk_iv_ino_lblk_64_keys, HKDF_CONTEXT_IV_INO_LBLK_64_KEY, true); } else if (ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32) { err = fscrypt_setup_iv_ino_lblk_32_key(ci, mk); } else { - u8 derived_key[FSCRYPT_MAX_KEY_SIZE]; + u8 derived_key[FSCRYPT_MAX_STANDARD_KEY_SIZE]; err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf, HKDF_CONTEXT_PER_FILE_ENC_KEY, ci->ci_nonce, FSCRYPT_FILE_NONCE_SIZE, derived_key, ci->ci_mode->keysize); if (err) return err; err = fscrypt_set_per_file_enc_key(ci, derived_key); memzero_explicit(derived_key, ci->ci_mode->keysize); @@ -492,20 +532,28 @@ static int setup_file_encryption_key(struct fscrypt_inode_info *ci, goto out_release_key; } if (!fscrypt_valid_master_key_size(mk, ci)) { err = -ENOKEY; goto out_release_key; } switch (ci->ci_policy.version) { case FSCRYPT_POLICY_V1: + if (WARN_ON(mk->mk_secret.is_hw_wrapped)) { + /* + * This should never happen, as adding a v1 policy key + * that is hardware-wrapped isn't allowed. + */ + err = -EINVAL; + goto out_release_key; + } err = fscrypt_setup_v1_file_key(ci, mk->mk_secret.raw); break; case FSCRYPT_POLICY_V2: err = fscrypt_setup_v2_file_key(ci, mk, need_dirhash_key); break; default: WARN_ON_ONCE(1); err = -EINVAL; break; } diff --git a/fs/crypto/keysetup_v1.c b/fs/crypto/keysetup_v1.c index cf3b58ec32cc..8f2d44e6726a 100644 --- a/fs/crypto/keysetup_v1.c +++ b/fs/crypto/keysetup_v1.c @@ -111,21 +111,22 @@ find_and_lock_process_key(const char *prefix, down_read(&key->sem); ukp = user_key_payload_locked(key); if (!ukp) /* was the key revoked before we acquired its semaphore? 
*/ goto invalid; payload = (const struct fscrypt_key *)ukp->data; if (ukp->datalen != sizeof(struct fscrypt_key) || - payload->size < 1 || payload->size > FSCRYPT_MAX_KEY_SIZE) { + payload->size < 1 || + payload->size > FSCRYPT_MAX_STANDARD_KEY_SIZE) { fscrypt_warn(NULL, "key with description '%s' has invalid payload", key->description); goto invalid; } if (payload->size < min_keysize) { fscrypt_warn(NULL, "key with description '%s' is too short (got %u bytes, need %u+ bytes)", key->description, payload->size, min_keysize); @@ -142,21 +143,21 @@ find_and_lock_process_key(const char *prefix, } /* Master key referenced by DIRECT_KEY policy */ struct fscrypt_direct_key { struct super_block *dk_sb; struct hlist_node dk_node; refcount_t dk_refcount; const struct fscrypt_mode *dk_mode; struct fscrypt_prepared_key dk_key; u8 dk_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE]; - u8 dk_raw[FSCRYPT_MAX_KEY_SIZE]; + u8 dk_raw[FSCRYPT_MAX_STANDARD_KEY_SIZE]; }; static void free_direct_key(struct fscrypt_direct_key *dk) { if (dk) { fscrypt_destroy_prepared_key(dk->dk_sb, &dk->dk_key); kfree_sensitive(dk); } } diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c index 701259991277..91102635e98a 100644 --- a/fs/crypto/policy.c +++ b/fs/crypto/policy.c @@ -222,21 +222,22 @@ static bool fscrypt_supported_v2_policy(const struct fscrypt_policy_v2 *policy, fscrypt_warn(inode, "Unsupported encryption modes (contents %d, filenames %d)", policy->contents_encryption_mode, policy->filenames_encryption_mode); return false; } if (policy->flags & ~(FSCRYPT_POLICY_FLAGS_PAD_MASK | FSCRYPT_POLICY_FLAG_DIRECT_KEY | FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 | - FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32)) { + FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32 | + FSCRYPT_POLICY_FLAG_HW_WRAPPED_KEY)) { fscrypt_warn(inode, "Unsupported encryption flags (0x%02x)", policy->flags); return false; } count += !!(policy->flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY); count += !!(policy->flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64); count += !!(policy->flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32); if (count > 1) { fscrypt_warn(inode, "Mutually exclusive encryption flags (0x%02x)", @@ -262,20 +263,28 @@ static bool fscrypt_supported_v2_policy(const struct fscrypt_policy_v2 *policy, /* * Not safe to enable yet, as we need to ensure that DUN * wraparound can only occur on a FS block boundary. 
*/ fscrypt_warn(inode, "Sub-block data units not yet supported with IV_INO_LBLK_32"); return false; } } + if ((policy->flags & FSCRYPT_POLICY_FLAG_HW_WRAPPED_KEY) && + !(policy->flags & (FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 | + FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32))) { + fscrypt_warn(inode, + "HW_WRAPPED_KEY flag can only be used with IV_INO_LBLK_64 or IV_INO_LBLK_32"); + return false; + } + if ((policy->flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) && !supported_direct_key_modes(inode, policy->contents_encryption_mode, policy->filenames_encryption_mode)) return false; if ((policy->flags & (FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 | FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32)) && !supported_iv_ino_lblk_policy(policy, inode)) return false; diff --git a/include/uapi/linux/fscrypt.h b/include/uapi/linux/fscrypt.h index 7a8f4c290187..2724febca08f 100644 --- a/include/uapi/linux/fscrypt.h +++ b/include/uapi/linux/fscrypt.h @@ -13,20 +13,21 @@ /* Encryption policy flags */ #define FSCRYPT_POLICY_FLAGS_PAD_4 0x00 #define FSCRYPT_POLICY_FLAGS_PAD_8 0x01 #define FSCRYPT_POLICY_FLAGS_PAD_16 0x02 #define FSCRYPT_POLICY_FLAGS_PAD_32 0x03 #define FSCRYPT_POLICY_FLAGS_PAD_MASK 0x03 #define FSCRYPT_POLICY_FLAG_DIRECT_KEY 0x04 #define FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 0x08 #define FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32 0x10 +#define FSCRYPT_POLICY_FLAG_HW_WRAPPED_KEY 0x20 /* Encryption algorithms */ #define FSCRYPT_MODE_AES_256_XTS 1 #define FSCRYPT_MODE_AES_256_CTS 4 #define FSCRYPT_MODE_AES_128_CBC 5 #define FSCRYPT_MODE_AES_128_CTS 6 #define FSCRYPT_MODE_SM4_XTS 7 #define FSCRYPT_MODE_SM4_CTS 8 #define FSCRYPT_MODE_ADIANTUM 9 #define FSCRYPT_MODE_AES_256_HCTR2 10 @@ -112,30 +113,32 @@ struct fscrypt_key_specifier { __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE]; } u; }; /* * Payload of Linux keyring key of type "fscrypt-provisioning", referenced by * fscrypt_add_key_arg::key_id as an alternative to fscrypt_add_key_arg::raw. */ struct fscrypt_provisioning_key_payload { __u32 type; - __u32 __reserved; + __u32 flags; __u8 raw[]; }; /* Struct passed to FS_IOC_ADD_ENCRYPTION_KEY */ struct fscrypt_add_key_arg { struct fscrypt_key_specifier key_spec; __u32 raw_size; __u32 key_id; - __u32 __reserved[8]; +#define FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED 0x00000001 + __u32 flags; + __u32 __reserved[7]; __u8 raw[]; }; /* Struct passed to FS_IOC_REMOVE_ENCRYPTION_KEY */ struct fscrypt_remove_key_arg { struct fscrypt_key_specifier key_spec; #define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY 0x00000001 #define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS 0x00000002 __u32 removal_status_flags; /* output */ __u32 __reserved[5];
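To complete the picture from the policy side: fscrypt_supported_v2_policy() above requires the new flag to be combined with IV_INO_LBLK_64 or IV_INO_LBLK_32, and file contents only reach the inline encryption hardware when the filesystem is mounted with -o inlinecrypt. A userspace sketch follows, with the modes and padding chosen purely for illustration:

#include <linux/fscrypt.h>
#include <string.h>
#include <sys/ioctl.h>

/* Apply a v2 policy that uses the hardware-wrapped key added earlier. */
static int set_hw_wrapped_policy(int dir_fd,
				 const __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
{
	struct fscrypt_policy_v2 policy = {
		.version = FSCRYPT_POLICY_V2,
		.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS,
		.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS,
		/* HW_WRAPPED_KEY is only accepted with IV_INO_LBLK_64/_32 */
		.flags = FSCRYPT_POLICY_FLAGS_PAD_16 |
			 FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 |
			 FSCRYPT_POLICY_FLAG_HW_WRAPPED_KEY,
	};

	memcpy(policy.master_key_identifier, identifier,
	       FSCRYPT_KEY_IDENTIFIER_SIZE);
	return ioctl(dir_fd, FS_IOC_SET_ENCRYPTION_POLICY, &policy);
}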