From patchwork Mon Oct 28 07:20:24 2019
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 11214713
Date: Mon, 28 Oct 2019 00:20:24 -0700
In-Reply-To: <20191028072032.6911-1-satyat@google.com>
Message-Id: <20191028072032.6911-2-satyat@google.com>
Subject: [PATCH v5 1/9] block: Keyslot Manager for Inline Encryption
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
    linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

Inline encryption hardware allows software to specify an encryption context
(an encryption key, crypto algorithm, data unit num, data unit size, etc.)
along with a data transfer request to a storage device, and the inline
encryption hardware will use that context to en/decrypt the data. The inline
encryption hardware is part of the storage device, and it conceptually sits
on the data path between system memory and the storage device.

Inline encryption hardware implementations often function around the concept
of "keyslots": a limited number of slots, each of which can hold an
encryption context (we say that an encryption context can be "programmed"
into a keyslot). Requests made to the storage device may have a keyslot
associated with them, and the inline encryption hardware will en/decrypt the
data in those requests using the encryption context programmed into the
associated keyslot.

Since keyslots are limited, programming keys may be expensive in many
implementations, and multiple requests may use exactly the same encryption
context, we introduce a Keyslot Manager to manage keyslots efficiently. The
Keyslot Manager also functions as the interface that upper layers use to
program keys into inline encryption hardware. For more information on the
Keyslot Manager, refer to the documentation in block/keyslot-manager.c and
linux/keyslot-manager.h.

Signed-off-by: Satya Tangirala
---
 block/Kconfig                   |   8 +
 block/Makefile                  |   1 +
 block/keyslot-manager.c         | 352 ++++++++++++++++++++++++++++++++
 include/linux/bio.h             |   5 +
 include/linux/blkdev.h          |   6 +
 include/linux/keyslot-manager.h |  98 +++++++++
 6 files changed, 470 insertions(+)
 create mode 100644 block/keyslot-manager.c
 create mode 100644 include/linux/keyslot-manager.h

diff --git a/block/Kconfig b/block/Kconfig
index 41c0917ce622..ae52d42b783b 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -177,6 +177,14 @@ config BLK_SED_OPAL
 	Enabling this option enables users to setup/unlock/lock
 	Locking ranges for SED devices using the Opal protocol.

+config BLK_INLINE_ENCRYPTION
+	bool "Enable inline encryption support in block layer"
+	help
+	  Build the blk-crypto subsystem.
+	  Enabling this lets the block layer handle encryption,
+	  so users can take advantage of inline encryption
+	  hardware if present.
+
 menu "Partition Types"

 source "block/partitions/Kconfig"

diff --git a/block/Makefile b/block/Makefile
index 9ef57ace90d4..e922844219c2 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -36,3 +36,4 @@ obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o

diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
new file mode 100644
index 000000000000..020931fc9f7d
--- /dev/null
+++ b/block/keyslot-manager.c
@@ -0,0 +1,352 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * keyslot-manager.c
+ *
+ * Copyright 2019 Google LLC
+ */
+
+/**
+ * DOC: The Keyslot Manager
+ *
+ * Many devices with inline encryption support have a limited number of "slots"
+ * into which encryption contexts may be programmed, and requests can be tagged
+ * with a slot number to specify the key to use for en/decryption.
+ *
+ * As the number of slots is limited, and programming keys is expensive on
+ * many inline encryption hardware implementations, we don't want to program
+ * the same key into multiple slots - if multiple requests are using the same
+ * key, we want to program just one slot with that key and use that slot for
+ * all requests.
+ * + * The keyslot manager manages these keyslots appropriately, and also acts as + * an abstraction between the inline encryption hardware and the upper layers. + * + * Lower layer devices will set up a keyslot manager in their request queue + * and tell it how to perform device specific operations like programming/ + * evicting keys from keyslots. + * + * Upper layers will call keyslot_manager_get_slot_for_key() to program a + * key into some slot in the inline encryption hardware. + */ +#include +#include +#include +#include +#include + +struct keyslot { + atomic_t slot_refs; + struct list_head idle_slot_node; +}; + +struct keyslot_manager { + unsigned int num_slots; + atomic_t num_idle_slots; + struct keyslot_mgmt_ll_ops ksm_ll_ops; + void *ll_priv_data; + + /* Protects programming and evicting keys from the device */ + struct rw_semaphore lock; + + /* List of idle slots, with least recently used slot at front */ + wait_queue_head_t idle_slots_wait_queue; + struct list_head idle_slots; + spinlock_t idle_slots_lock; + + /* Per-keyslot data */ + struct keyslot slots[]; +}; + +/** + * keyslot_manager_create() - Create a keyslot manager + * @num_slots: The number of key slots to manage. + * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops for the device that this keyslot + * manager will use to perform operations like programming and + * evicting keys. + * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops. + * + * Allocate memory for and initialize a keyslot manager. Called by e.g. + * storage drivers to set up a keyslot manager in their request_queue. + * + * Context: May sleep + * Return: Pointer to constructed keyslot manager or NULL on error. + */ +struct keyslot_manager *keyslot_manager_create(unsigned int num_slots, + const struct keyslot_mgmt_ll_ops *ksm_ll_ops, + void *ll_priv_data) +{ + struct keyslot_manager *ksm; + int slot; + + if (num_slots == 0) + return NULL; + + /* Check that all ops are specified */ + if (ksm_ll_ops->keyslot_program == NULL || + ksm_ll_ops->keyslot_evict == NULL || + ksm_ll_ops->crypto_mode_supported == NULL || + ksm_ll_ops->keyslot_find == NULL) + return NULL; + + ksm = kvzalloc(struct_size(ksm, slots, num_slots), GFP_KERNEL); + if (!ksm) + return NULL; + + ksm->num_slots = num_slots; + atomic_set(&ksm->num_idle_slots, num_slots); + ksm->ksm_ll_ops = *ksm_ll_ops; + ksm->ll_priv_data = ll_priv_data; + + init_rwsem(&ksm->lock); + + init_waitqueue_head(&ksm->idle_slots_wait_queue); + INIT_LIST_HEAD(&ksm->idle_slots); + + for (slot = 0; slot < num_slots; slot++) { + list_add_tail(&ksm->slots[slot].idle_slot_node, + &ksm->idle_slots); + } + + spin_lock_init(&ksm->idle_slots_lock); + + return ksm; +} +EXPORT_SYMBOL(keyslot_manager_create); + +static void remove_slot_from_lru_list(struct keyslot_manager *ksm, int slot) +{ + unsigned long flags; + + spin_lock_irqsave(&ksm->idle_slots_lock, flags); + list_del(&ksm->slots[slot].idle_slot_node); + spin_unlock_irqrestore(&ksm->idle_slots_lock, flags); + + atomic_dec(&ksm->num_idle_slots); +} + +static int find_and_grab_keyslot(struct keyslot_manager *ksm, const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + int slot; + + slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key, + crypto_mode, data_unit_size); + if (slot < 0) + return slot; + if (WARN_ON(slot >= ksm->num_slots)) + return -EINVAL; + if (atomic_inc_return(&ksm->slots[slot].slot_refs) == 1) { + /* Took first reference to this slot; remove it from LRU list */ + remove_slot_from_lru_list(ksm, slot); 
+ } + return slot; +} + +/** + * keyslot_manager_get_slot_for_key() - Program a key into a keyslot. + * @ksm: The keyslot manager to program the key into. + * @key: Pointer to the bytes of the key to program. Must be the correct length + * for the chosen @crypto_mode; see blk_crypto_modes in blk-crypto.c. + * @crypto_mode: Identifier for the encryption algorithm to use. + * @data_unit_size: The data unit size to use for en/decryption. + * + * Get a keyslot that's been programmed with the specified key, crypto_mode, and + * data_unit_size. If one already exists, return it with incremented refcount. + * Otherwise, wait for a keyslot to become idle and program it. + * + * Context: Process context. Takes and releases ksm->lock. + * Return: The keyslot on success, else a -errno value. + */ +int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm, + const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + int slot; + int err; + struct keyslot *idle_slot; + + down_read(&ksm->lock); + slot = find_and_grab_keyslot(ksm, key, crypto_mode, data_unit_size); + up_read(&ksm->lock); + if (slot != -ENOKEY) + return slot; + + for (;;) { + down_write(&ksm->lock); + slot = find_and_grab_keyslot(ksm, key, crypto_mode, + data_unit_size); + if (slot != -ENOKEY) { + up_write(&ksm->lock); + return slot; + } + + /* + * If we're here, that means there wasn't a slot that was + * already programmed with the key. So try to program it. + */ + if (atomic_read(&ksm->num_idle_slots) > 0) + break; + + up_write(&ksm->lock); + wait_event(ksm->idle_slots_wait_queue, + (atomic_read(&ksm->num_idle_slots) > 0)); + } + + idle_slot = list_first_entry(&ksm->idle_slots, struct keyslot, + idle_slot_node); + slot = idle_slot - ksm->slots; + + err = ksm->ksm_ll_ops.keyslot_program(ksm->ll_priv_data, key, + crypto_mode, + data_unit_size, + slot); + + if (err) { + wake_up(&ksm->idle_slots_wait_queue); + up_write(&ksm->lock); + return err; + } + + atomic_set(&ksm->slots[slot].slot_refs, 1); + remove_slot_from_lru_list(ksm, slot); + + up_write(&ksm->lock); + return slot; + +} +EXPORT_SYMBOL(keyslot_manager_get_slot_for_key); + +/** + * keyslot_manager_get_slot() - Increment the refcount on the specified slot. + * @ksm - The keyslot manager that we want to modify. + * @slot - The slot to increment the refcount of. + * + * This function assumes that there is already an active reference to that slot + * and simply increments the refcount. This is useful when cloning a bio that + * already has a reference to a keyslot, and we want the cloned bio to also have + * its own reference. + * + * Context: Any context. + */ +void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot) +{ + if (WARN_ON(slot >= ksm->num_slots)) + return; + + WARN_ON(atomic_inc_return(&ksm->slots[slot].slot_refs) < 2); +} +EXPORT_SYMBOL(keyslot_manager_get_slot); + +/** + * keyslot_manager_put_slot() - Release a reference to a slot + * @ksm: The keyslot manager to release the reference from. + * @slot: The slot to release the reference from. + * + * Context: Any context. 
+ */ +void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot) +{ + unsigned long flags; + + if (WARN_ON(slot >= ksm->num_slots)) + return; + + if (atomic_dec_and_lock_irqsave(&ksm->slots[slot].slot_refs, + &ksm->idle_slots_lock, flags)) { + list_add_tail(&ksm->slots[slot].idle_slot_node, + &ksm->idle_slots); + spin_unlock_irqrestore(&ksm->idle_slots_lock, flags); + atomic_inc(&ksm->num_idle_slots); + wake_up(&ksm->idle_slots_wait_queue); + } +} +EXPORT_SYMBOL(keyslot_manager_put_slot); + +/** + * keyslot_manager_crypto_mode_supported() - Find out if a crypto_mode/data + * unit size combination is supported + * by a ksm. + * @ksm - The keyslot manager to check + * @crypto_mode - The crypto mode to check for. + * @data_unit_size - The data_unit_size for the mode. + * + * Calls and returns the result of the crypto_mode_supported function specified + * by the ksm. + * + * Context: Process context. + * Return: Whether or not this ksm supports the specified crypto_mode/ + * data_unit_size combo. + */ +bool keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + if (!ksm) + return false; + return ksm->ksm_ll_ops.crypto_mode_supported(ksm->ll_priv_data, + crypto_mode, + data_unit_size); +} +EXPORT_SYMBOL(keyslot_manager_crypto_mode_supported); + +bool keyslot_manager_rq_crypto_mode_supported(struct request_queue *q, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + return keyslot_manager_crypto_mode_supported(q->ksm, crypto_mode, + data_unit_size); +} +EXPORT_SYMBOL(keyslot_manager_rq_crypto_mode_supported); + +/** + * keyslot_manager_evict_key() - Evict a key from the lower layer device. + * @ksm - The keyslot manager to evict from + * @key - The key to evict + * @crypto_mode - The crypto algorithm the key was programmed with. + * @data_unit_size - The data_unit_size the key was programmed with. + * + * Finds the slot that the specified key, crypto_mode, data_unit_size combo + * was programmed into, and evicts that slot from the lower layer device if + * the refcount on the slot is 0. Returns -EBUSY if the refcount is not 0, and + * -errno on error. + * + * Context: Process context. Takes and releases ksm->lock. + */ +int keyslot_manager_evict_key(struct keyslot_manager *ksm, + const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + int slot; + int err = 0; + + down_write(&ksm->lock); + slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key, + crypto_mode, + data_unit_size); + + if (slot < 0) { + up_write(&ksm->lock); + return slot; + } + + if (atomic_read(&ksm->slots[slot].slot_refs) == 0) { + err = ksm->ksm_ll_ops.keyslot_evict(ksm->ll_priv_data, key, + crypto_mode, + data_unit_size, + slot); + } else { + err = -EBUSY; + } + + up_write(&ksm->lock); + return err; +} +EXPORT_SYMBOL(keyslot_manager_evict_key); + +void keyslot_manager_destroy(struct keyslot_manager *ksm) +{ + kvfree(ksm); +} +EXPORT_SYMBOL(keyslot_manager_destroy); diff --git a/include/linux/bio.h b/include/linux/bio.h index 3cdb84cdc488..d0cb7c350cdc 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -564,6 +564,11 @@ static inline void bvec_kunmap_irq(char *buffer, unsigned long *flags) } #endif +enum blk_crypto_mode_num { + BLK_ENCRYPTION_MODE_INVALID = 0, + BLK_ENCRYPTION_MODE_AES_256_XTS = 1, +}; + /* * BIO list management for use by remapping drivers (e.g. DM or MD) and loop. 
* diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index f3ea78b0c91c..244e81a8f5d2 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -43,6 +43,7 @@ struct pr_ops; struct rq_qos; struct blk_queue_stats; struct blk_stat_callback; +struct keyslot_manager; #define BLKDEV_MIN_RQ 4 #define BLKDEV_MAX_RQ 128 /* Default maximum */ @@ -481,6 +482,11 @@ struct request_queue { unsigned int dma_pad_mask; unsigned int dma_alignment; +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + /* Inline crypto capabilities */ + struct keyslot_manager *ksm; +#endif + unsigned int rq_timeout; int poll_nsec; diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h new file mode 100644 index 000000000000..0777ade7907c --- /dev/null +++ b/include/linux/keyslot-manager.h @@ -0,0 +1,98 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2019 Google LLC + */ + +#include + +#ifdef CONFIG_BLOCK + +#ifndef __LINUX_KEYSLOT_MANAGER_H +#define __LINUX_KEYSLOT_MANAGER_H + +/** + * struct keyslot_mgmt_ll_ops - functions to manage keyslots in hardware + * @keyslot_program: Program the specified key and algorithm into the + * specified slot in the inline encryption hardware. + * @keyslot_evict: Evict key from the specified keyslot in the hardware. + * The key, crypto_mode and data_unit_size are also passed + * down so that e.g. dm layers can evict keys from + * the devices that they map over. + * Returns 0 on success, -errno otherwise. + * @crypto_mode_supported: Check whether a crypto_mode and data_unit_size + * combo is supported. + * @keyslot_find: Returns the slot number that matches the key, + * or -ENOKEY if no match found, or -errno on + * error. + * + * This structure should be provided by storage device drivers when they set up + * a keyslot manager - this structure holds the function ptrs that the keyslot + * manager will use to manipulate keyslots in the hardware. 
+ */
+struct keyslot_mgmt_ll_ops {
+	int (*keyslot_program)(void *ll_priv_data, const u8 *key,
+			       enum blk_crypto_mode_num crypto_mode,
+			       unsigned int data_unit_size,
+			       unsigned int slot);
+	int (*keyslot_evict)(void *ll_priv_data, const u8 *key,
+			     enum blk_crypto_mode_num crypto_mode,
+			     unsigned int data_unit_size,
+			     unsigned int slot);
+	bool (*crypto_mode_supported)(void *ll_priv_data,
+				      enum blk_crypto_mode_num crypto_mode,
+				      unsigned int data_unit_size);
+	int (*keyslot_find)(void *ll_priv_data, const u8 *key,
+			    enum blk_crypto_mode_num crypto_mode,
+			    unsigned int data_unit_size);
+};
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+struct keyslot_manager;
+
+extern struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+		const struct keyslot_mgmt_ll_ops *ksm_ops,
+		void *ll_priv_data);
+
+extern int
+keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				 const u8 *key,
+				 enum blk_crypto_mode_num crypto_mode,
+				 unsigned int data_unit_size);
+
+extern void keyslot_manager_get_slot(struct keyslot_manager *ksm,
+				     unsigned int slot);
+
+extern void keyslot_manager_put_slot(struct keyslot_manager *ksm,
+				     unsigned int slot);
+
+extern bool
+keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
+				      enum blk_crypto_mode_num crypto_mode,
+				      unsigned int data_unit_size);
+
+extern bool
+keyslot_manager_rq_crypto_mode_supported(struct request_queue *q,
+					 enum blk_crypto_mode_num crypto_mode,
+					 unsigned int data_unit_size);
+
+extern int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+				     const u8 *key,
+				     enum blk_crypto_mode_num crypto_mode,
+				     unsigned int data_unit_size);
+
+extern void keyslot_manager_destroy(struct keyslot_manager *ksm);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline bool
+keyslot_manager_rq_crypto_mode_supported(struct request_queue *q,
+					 enum blk_crypto_mode_num crypto_mode,
+					 unsigned int data_unit_size)
+{
+	return false;
+}
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */

+#endif /* __LINUX_KEYSLOT_MANAGER_H */

+#endif /* CONFIG_BLOCK */

From patchwork Mon Oct 28 07:20:25 2019
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 11214719
Date: Mon, 28 Oct 2019 00:20:25 -0700
In-Reply-To: <20191028072032.6911-1-satyat@google.com>
Message-Id: <20191028072032.6911-3-satyat@google.com>
Subject: [PATCH v5 2/9] block: Add encryption context to struct bio
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
    linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

We must have some way of letting a storage device driver know what
encryption context it should use for en/decrypting a request. However,
it's the filesystem/fscrypt that knows about and manages encryption
contexts. As such, when the filesystem layer submits a bio to the block
layer, and this bio eventually reaches a device driver with support for
inline encryption, the device driver will need to have been told the
encryption context for that bio.

We want to communicate the encryption context from the filesystem layer
to the storage device along with the bio, when the bio is submitted to the
block layer. To do this, we add a struct bio_crypt_ctx to struct bio,
which can represent an encryption context (note that we can't use the
bi_private field in struct bio to do this because that field does not
function to pass information across layers in the storage stack). We also
introduce various functions to manipulate the bio_crypt_ctx and make the
bio/request merging logic aware of the bio_crypt_ctx.
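As a concrete illustration of the intended call pattern (an editorial sketch,
not part of the patch): a filesystem-level submitter would attach the context
with the bio_crypt_set_ctx() helper this patch adds, then submit the bio as
usual. The function name fs_submit_encrypted_bio, the 4096-byte data unit
size, and the error handling are illustrative assumptions.

	/*
	 * Sketch only: attach an encryption context to a bio before
	 * submission. Assumes 4096-byte data units, so
	 * data_unit_size_bits == 12. fs_submit_encrypted_bio is a
	 * hypothetical caller, not a function in this series.
	 */
	static int fs_submit_encrypted_bio(struct bio *bio, const u8 *raw_key,
					   u64 data_unit_num)
	{
		int err;

		err = bio_crypt_set_ctx(bio, raw_key,
					BLK_ENCRYPTION_MODE_AES_256_XTS,
					data_unit_num, 12, GFP_NOIO);
		if (err)
			return err;

		/* The context now travels with the bio down the stack. */
		submit_bio(bio);
		return 0;
	}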
Signed-off-by: Satya Tangirala --- block/Makefile | 2 +- block/bio-crypt-ctx.c | 137 +++++++++++++++++++++ block/bio.c | 18 +-- block/blk-core.c | 3 + block/blk-merge.c | 35 +++++- block/bounce.c | 15 +-- drivers/md/dm.c | 15 ++- include/linux/bio-crypt-ctx.h | 219 ++++++++++++++++++++++++++++++++++ include/linux/bio.h | 6 +- include/linux/blk_types.h | 6 + 10 files changed, 426 insertions(+), 30 deletions(-) create mode 100644 block/bio-crypt-ctx.c create mode 100644 include/linux/bio-crypt-ctx.h diff --git a/block/Makefile b/block/Makefile index e922844219c2..f39611ed151f 100644 --- a/block/Makefile +++ b/block/Makefile @@ -36,4 +36,4 @@ obj-$(CONFIG_BLK_DEBUG_FS) += blk-mq-debugfs.o obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o obj-$(CONFIG_BLK_SED_OPAL) += sed-opal.o obj-$(CONFIG_BLK_PM) += blk-pm.o -obj-$(CONFIG_BLK_INLINE_ENCRYPTION) += keyslot-manager.o +obj-$(CONFIG_BLK_INLINE_ENCRYPTION) += keyslot-manager.o bio-crypt-ctx.o diff --git a/block/bio-crypt-ctx.c b/block/bio-crypt-ctx.c new file mode 100644 index 000000000000..aa3571f72ee7 --- /dev/null +++ b/block/bio-crypt-ctx.c @@ -0,0 +1,137 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2019 Google LLC + */ + +#include +#include +#include +#include + +static int num_prealloc_crypt_ctxs = 128; +static struct kmem_cache *bio_crypt_ctx_cache; +static mempool_t *bio_crypt_ctx_pool; + +int bio_crypt_ctx_init(void) +{ + bio_crypt_ctx_cache = KMEM_CACHE(bio_crypt_ctx, 0); + if (!bio_crypt_ctx_cache) + return -ENOMEM; + + bio_crypt_ctx_pool = mempool_create_slab_pool( + num_prealloc_crypt_ctxs, + bio_crypt_ctx_cache); + + if (!bio_crypt_ctx_pool) + return -ENOMEM; + + return 0; +} + +struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask) +{ + return mempool_alloc(bio_crypt_ctx_pool, gfp_mask); +} +EXPORT_SYMBOL(bio_crypt_alloc_ctx); + +void bio_crypt_free_ctx(struct bio *bio) +{ + mempool_free(bio->bi_crypt_context, bio_crypt_ctx_pool); + bio->bi_crypt_context = NULL; +} +EXPORT_SYMBOL(bio_crypt_free_ctx); + +int bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask) +{ + if (!bio_has_crypt_ctx(src)) + return 0; + + dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask); + if (!dst->bi_crypt_context) + return -ENOMEM; + + *dst->bi_crypt_context = *src->bi_crypt_context; + + if (bio_crypt_has_keyslot(src)) + keyslot_manager_get_slot(src->bi_crypt_context->processing_ksm, + src->bi_crypt_context->keyslot); + + return 0; +} +EXPORT_SYMBOL(bio_crypt_clone); + +bool bio_crypt_should_process(struct bio *bio, struct request_queue *q) +{ + if (!bio_has_crypt_ctx(bio)) + return false; + + WARN_ON(!bio_crypt_has_keyslot(bio)); + return q->ksm == bio->bi_crypt_context->processing_ksm; +} +EXPORT_SYMBOL(bio_crypt_should_process); + +/* + * Checks that two bio crypt contexts are compatible - i.e. that + * they are mergeable except for data_unit_num continuity. 
+ */ +bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2) +{ + struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context; + struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context; + + if (bio_has_crypt_ctx(b_1) != bio_has_crypt_ctx(b_2)) + return false; + + if (!bio_has_crypt_ctx(b_1)) + return true; + + return bc1->keyslot == bc2->keyslot && + bc1->data_unit_size_bits == bc2->data_unit_size_bits; +} + +/* + * Checks that two bio crypt contexts are compatible, and also + * that their data_unit_nums are continuous (and can hence be merged) + */ +bool bio_crypt_ctx_back_mergeable(struct bio *b_1, + unsigned int b1_sectors, + struct bio *b_2) +{ + struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context; + struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context; + + if (!bio_crypt_ctx_compatible(b_1, b_2)) + return false; + + return !bio_has_crypt_ctx(b_1) || + (bc1->data_unit_num + + (b1_sectors >> (bc1->data_unit_size_bits - 9)) == + bc2->data_unit_num); +} + +void bio_crypt_ctx_release_keyslot(struct bio *bio) +{ + struct bio_crypt_ctx *crypt_ctx = bio->bi_crypt_context; + + keyslot_manager_put_slot(crypt_ctx->processing_ksm, crypt_ctx->keyslot); + bio->bi_crypt_context->processing_ksm = NULL; + bio->bi_crypt_context->keyslot = -1; +} + +int bio_crypt_ctx_acquire_keyslot(struct bio *bio, struct keyslot_manager *ksm) +{ + int slot; + enum blk_crypto_mode_num crypto_mode = bio_crypto_mode(bio); + + if (!ksm) + return -ENOMEM; + + slot = keyslot_manager_get_slot_for_key(ksm, + bio_crypt_raw_key(bio), crypto_mode, + 1 << bio->bi_crypt_context->data_unit_size_bits); + if (slot < 0) + return slot; + + bio_crypt_set_keyslot(bio, slot, ksm); + return 0; +} diff --git a/block/bio.c b/block/bio.c index 8f0ed6228fc5..ce8003aadf07 100644 --- a/block/bio.c +++ b/block/bio.c @@ -241,6 +241,7 @@ static void bio_free(struct bio *bio) struct bio_set *bs = bio->bi_pool; void *p; + bio_crypt_free_ctx(bio); bio_uninit(bio); if (bs) { @@ -612,15 +613,15 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs) __bio_clone_fast(b, bio); - if (bio_integrity(bio)) { - int ret; - - ret = bio_integrity_clone(b, bio, gfp_mask); + if (bio_crypt_clone(b, bio, gfp_mask) < 0) { + bio_put(b); + return NULL; + } - if (ret < 0) { - bio_put(b); - return NULL; - } + if (bio_integrity(bio) && + bio_integrity_clone(b, bio, gfp_mask) < 0) { + bio_put(b); + return NULL; } return b; @@ -992,6 +993,7 @@ void bio_advance(struct bio *bio, unsigned bytes) if (bio_integrity(bio)) bio_integrity_advance(bio, bytes); + bio_crypt_advance(bio, bytes); bio_advance_iter(bio, &bio->bi_iter, bytes); } EXPORT_SYMBOL(bio_advance); diff --git a/block/blk-core.c b/block/blk-core.c index d5e668ec751b..3b5959d386fb 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -1807,5 +1807,8 @@ int __init blk_dev_init(void) blk_debugfs_root = debugfs_create_dir("block", NULL); #endif + if (bio_crypt_ctx_init() < 0) + panic("Failed to allocate mem for bio crypt ctxs\n"); + return 0; } diff --git a/block/blk-merge.c b/block/blk-merge.c index 48e6725b32ee..c97c02a20c6a 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -557,6 +557,9 @@ static inline int ll_new_hw_segment(struct request *req, struct bio *bio, if (blk_integrity_merge_bio(req->q, req, bio) == false) goto no_merge; + if (WARN_ON_ONCE(!bio_crypt_ctx_compatible(bio, req->bio))) + goto no_merge; + /* * This will form the start of a new hw segment. Bump both * counters. 
@@ -711,8 +714,14 @@ static enum elv_merge blk_try_req_merge(struct request *req, { if (blk_discard_mergable(req)) return ELEVATOR_DISCARD_MERGE; - else if (blk_rq_pos(req) + blk_rq_sectors(req) == blk_rq_pos(next)) + else if (blk_rq_pos(req) + blk_rq_sectors(req) == blk_rq_pos(next)) { + if (!bio_crypt_ctx_back_mergeable(req->bio, + blk_rq_sectors(req), + next->bio)) { + return ELEVATOR_NO_MERGE; + } return ELEVATOR_BACK_MERGE; + } return ELEVATOR_NO_MERGE; } @@ -748,6 +757,9 @@ static struct request *attempt_merge(struct request_queue *q, if (req->ioprio != next->ioprio) return NULL; + if (!bio_crypt_ctx_compatible(req->bio, next->bio)) + return NULL; + /* * If we are allowed to merge, then append bio list * from next to rq and release next. merge_requests_fn @@ -880,16 +892,31 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio) if (rq->ioprio != bio_prio(bio)) return false; + /* Only merge if the crypt contexts are compatible */ + if (!bio_crypt_ctx_compatible(bio, rq->bio)) + return false; + return true; } enum elv_merge blk_try_merge(struct request *rq, struct bio *bio) { - if (blk_discard_mergable(rq)) + if (blk_discard_mergable(rq)) { return ELEVATOR_DISCARD_MERGE; - else if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector) + } else if (blk_rq_pos(rq) + blk_rq_sectors(rq) == + bio->bi_iter.bi_sector) { + if (!bio_crypt_ctx_back_mergeable(rq->bio, + blk_rq_sectors(rq), bio)) { + return ELEVATOR_NO_MERGE; + } return ELEVATOR_BACK_MERGE; - else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector) + } else if (blk_rq_pos(rq) - bio_sectors(bio) == + bio->bi_iter.bi_sector) { + if (!bio_crypt_ctx_back_mergeable(bio, + bio_sectors(bio), rq->bio)) { + return ELEVATOR_NO_MERGE; + } return ELEVATOR_FRONT_MERGE; + } return ELEVATOR_NO_MERGE; } diff --git a/block/bounce.c b/block/bounce.c index f8ed677a1bf7..6f9a2359b22a 100644 --- a/block/bounce.c +++ b/block/bounce.c @@ -267,14 +267,15 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask, break; } - if (bio_integrity(bio_src)) { - int ret; + if (bio_crypt_clone(bio, bio_src, gfp_mask) < 0) { + bio_put(bio); + return NULL; + } - ret = bio_integrity_clone(bio, bio_src, gfp_mask); - if (ret < 0) { - bio_put(bio); - return NULL; - } + if (bio_integrity(bio_src) && + bio_integrity_clone(bio, bio_src, gfp_mask) < 0) { + bio_put(bio); + return NULL; } bio_clone_blkg_association(bio, bio_src); diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 1a5e328c443a..67c24294d7c8 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1322,12 +1322,15 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio, sector_t sector, unsigned len) { struct bio *clone = &tio->clone; + int ret; __bio_clone_fast(clone, bio); - if (bio_integrity(bio)) { - int r; + ret = bio_crypt_clone(clone, bio, GFP_NOIO); + if (ret < 0) + return ret; + if (bio_integrity(bio)) { if (unlikely(!dm_target_has_integrity(tio->ti->type) && !dm_target_passes_integrity(tio->ti->type))) { DMWARN("%s: the target %s doesn't support integrity data.", @@ -1336,9 +1339,11 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio, return -EIO; } - r = bio_integrity_clone(clone, bio, GFP_NOIO); - if (r < 0) - return r; + ret = bio_integrity_clone(clone, bio, GFP_NOIO); + if (ret < 0) { + bio_crypt_free_ctx(clone); + return ret; + } } bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector)); diff --git a/include/linux/bio-crypt-ctx.h b/include/linux/bio-crypt-ctx.h new file mode 100644 index 000000000000..5cd569f77c31 --- 
/dev/null +++ b/include/linux/bio-crypt-ctx.h @@ -0,0 +1,219 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2019 Google LLC + */ +#ifndef __LINUX_BIO_CRYPT_CTX_H +#define __LINUX_BIO_CRYPT_CTX_H + +enum blk_crypto_mode_num { + BLK_ENCRYPTION_MODE_INVALID = 0, + BLK_ENCRYPTION_MODE_AES_256_XTS = 1, +}; + +#ifdef CONFIG_BLOCK +#include + +#ifdef CONFIG_BLK_INLINE_ENCRYPTION +struct bio_crypt_ctx { + int keyslot; + const u8 *raw_key; + enum blk_crypto_mode_num crypto_mode; + u64 data_unit_num; + unsigned int data_unit_size_bits; + + /* + * The keyslot manager where the key has been programmed + * with keyslot. + */ + struct keyslot_manager *processing_ksm; + + /* + * Copy of the bvec_iter when this bio was submitted. + * We only want to en/decrypt the part of the bio + * as described by the bvec_iter upon submission because + * bio might be split before being resubmitted + */ + struct bvec_iter crypt_iter; + u64 sw_data_unit_num; +}; + +extern int bio_crypt_clone(struct bio *dst, struct bio *src, + gfp_t gfp_mask); + +static inline bool bio_has_crypt_ctx(struct bio *bio) +{ + return bio->bi_crypt_context; +} + +static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes) +{ + if (bio_has_crypt_ctx(bio)) { + bio->bi_crypt_context->data_unit_num += + bytes >> bio->bi_crypt_context->data_unit_size_bits; + } +} + +static inline bool bio_crypt_has_keyslot(struct bio *bio) +{ + return bio->bi_crypt_context->keyslot >= 0; +} + +extern int bio_crypt_ctx_init(void); + +extern struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask); + +extern void bio_crypt_free_ctx(struct bio *bio); + +static inline int bio_crypt_set_ctx(struct bio *bio, + const u8 *raw_key, + enum blk_crypto_mode_num crypto_mode, + u64 dun, + unsigned int dun_bits, + gfp_t gfp_mask) +{ + struct bio_crypt_ctx *crypt_ctx; + + crypt_ctx = bio_crypt_alloc_ctx(gfp_mask); + if (!crypt_ctx) + return -ENOMEM; + + crypt_ctx->raw_key = raw_key; + crypt_ctx->data_unit_num = dun; + crypt_ctx->data_unit_size_bits = dun_bits; + crypt_ctx->crypto_mode = crypto_mode; + crypt_ctx->processing_ksm = NULL; + crypt_ctx->keyslot = -1; + bio->bi_crypt_context = crypt_ctx; + + return 0; +} + +static inline void bio_set_data_unit_num(struct bio *bio, u64 dun) +{ + bio->bi_crypt_context->data_unit_num = dun; +} + +static inline int bio_crypt_get_keyslot(struct bio *bio) +{ + return bio->bi_crypt_context->keyslot; +} + +static inline void bio_crypt_set_keyslot(struct bio *bio, + unsigned int keyslot, + struct keyslot_manager *ksm) +{ + bio->bi_crypt_context->keyslot = keyslot; + bio->bi_crypt_context->processing_ksm = ksm; +} + +extern void bio_crypt_ctx_release_keyslot(struct bio *bio); + +extern int bio_crypt_ctx_acquire_keyslot(struct bio *bio, + struct keyslot_manager *ksm); + +static inline const u8 *bio_crypt_raw_key(struct bio *bio) +{ + return bio->bi_crypt_context->raw_key; +} + +static inline enum blk_crypto_mode_num bio_crypto_mode(struct bio *bio) +{ + return bio->bi_crypt_context->crypto_mode; +} + +static inline u64 bio_crypt_data_unit_num(struct bio *bio) +{ + return bio->bi_crypt_context->data_unit_num; +} + +static inline u64 bio_crypt_sw_data_unit_num(struct bio *bio) +{ + return bio->bi_crypt_context->sw_data_unit_num; +} + +extern bool bio_crypt_should_process(struct bio *bio, struct request_queue *q); + +extern bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2); + +extern bool bio_crypt_ctx_back_mergeable(struct bio *b_1, + unsigned int b1_sectors, + struct bio *b_2); + +#else /* 
CONFIG_BLK_INLINE_ENCRYPTION */ +struct keyslot_manager; + +static inline int bio_crypt_ctx_init(void) +{ + return 0; +} + +static inline int bio_crypt_clone(struct bio *dst, struct bio *src, + gfp_t gfp_mask) +{ + return 0; +} + +static inline void bio_crypt_advance(struct bio *bio, + unsigned int bytes) { } + +static inline bool bio_has_crypt_ctx(struct bio *bio) +{ + return false; +} + +static inline void bio_crypt_free_ctx(struct bio *bio) { } + +static inline void bio_crypt_set_ctx(struct bio *bio, + u8 *raw_key, + enum blk_crypto_mode_num crypto_mode, + u64 dun, + unsigned int dun_bits, + gfp_t gfp_mask) { } + +static inline void bio_set_data_unit_num(struct bio *bio, u64 dun) { } + +static inline bool bio_crypt_has_keyslot(struct bio *bio) +{ + return false; +} + +static inline void bio_crypt_set_keyslot(struct bio *bio, + unsigned int keyslot, + struct keyslot_manager *ksm) { } + +static inline int bio_crypt_get_keyslot(struct bio *bio) +{ + return -1; +} + +static inline u8 *bio_crypt_raw_key(struct bio *bio) +{ + return NULL; +} + +static inline u64 bio_crypt_data_unit_num(struct bio *bio) +{ + return 0; +} + +static inline bool bio_crypt_should_process(struct bio *bio, + struct request_queue *q) +{ + return false; +} + +static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2) +{ + return true; +} + +static inline bool bio_crypt_ctx_back_mergeable(struct bio *b_1, + unsigned int b1_sectors, + struct bio *b_2) +{ + return true; +} + +#endif /* CONFIG_BLK_INLINE_ENCRYPTION */ +#endif /* CONFIG_BLOCK */ +#endif /* __LINUX_BIO_CRYPT_CTX_H */ diff --git a/include/linux/bio.h b/include/linux/bio.h index d0cb7c350cdc..63d0fee423fa 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -8,6 +8,7 @@ #include #include #include +#include #ifdef CONFIG_BLOCK /* struct bio, bio_vec and BIO_* flags are defined in blk_types.h */ @@ -564,11 +565,6 @@ static inline void bvec_kunmap_irq(char *buffer, unsigned long *flags) } #endif -enum blk_crypto_mode_num { - BLK_ENCRYPTION_MODE_INVALID = 0, - BLK_ENCRYPTION_MODE_AES_256_XTS = 1, -}; - /* * BIO list management for use by remapping drivers (e.g. DM or MD) and loop. * diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index d688b96d1d63..d3ee2dcb634d 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -18,6 +18,7 @@ struct block_device; struct io_context; struct cgroup_subsys_state; typedef void (bio_end_io_t) (struct bio *); +struct bio_crypt_ctx; /* * Block error status values. See block/blk-core:blk_errors for the details. 
@@ -173,6 +174,11 @@ struct bio {
 			u64 bi_iocost_cost;
 #endif
 #endif
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct bio_crypt_ctx *bi_crypt_context;
+#endif
+
 	union {
 #if defined(CONFIG_BLK_DEV_INTEGRITY)
 		struct bio_integrity_payload *bi_integrity; /* data integrity */

From patchwork Mon Oct 28 07:20:26 2019
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 11214731
Date: Mon, 28 Oct 2019 00:20:26 -0700
In-Reply-To: <20191028072032.6911-1-satyat@google.com>
Message-Id: <20191028072032.6911-4-satyat@google.com>
Subject: [PATCH v5 3/9] block: blk-crypto for Inline Encryption
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
    linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala
We introduce blk-crypto, which manages programming keyslots for struct bios.
With blk-crypto, filesystems only need to call bio_crypt_set_ctx with the
encryption key, algorithm and data_unit_num; they don't have to worry about
getting a keyslot for each encryption context, as blk-crypto handles that.
Blk-crypto also makes it possible for layered devices like device mapper to
make use of inline encryption hardware.

Blk-crypto delegates crypto operations to inline encryption hardware when
available, and also contains a software fallback to the kernel crypto API.
For more details, refer to Documentation/block/inline-encryption.rst.

Signed-off-by: Satya Tangirala
---
 Documentation/block/index.rst             |   1 +
 Documentation/block/inline-encryption.rst | 183 +++
 block/Kconfig                             |   2 +
 block/Makefile                            |   3 +-
 block/bio-crypt-ctx.c                     |   7 +-
 block/bio.c                               |   5 +
 block/blk-core.c                          |  11 +-
 block/blk-crypto.c                        | 798 ++++++++++++++++
 include/linux/bio-crypt-ctx.h             |   7 +
 include/linux/blk-crypto.h                |  62 ++
 10 files changed, 1076 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/block/inline-encryption.rst
 create mode 100644 block/blk-crypto.c
 create mode 100644 include/linux/blk-crypto.h

diff --git a/Documentation/block/index.rst b/Documentation/block/index.rst
index 3fa7a52fafa4..026addfc69bc 100644
--- a/Documentation/block/index.rst
+++ b/Documentation/block/index.rst
@@ -14,6 +14,7 @@ Block
    cmdline-partition
    data-integrity
    deadline-iosched
+   inline-encryption
    ioprio
    kyber-iosched
    null_blk

diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst
new file mode 100644
index 000000000000..202826cee95e
--- /dev/null
+++ b/Documentation/block/inline-encryption.rst
@@ -0,0 +1,183 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================
+Inline Encryption
+=================
+
+Objective
+=========
+
+We want to support inline encryption (IE) in the kernel.
+To allow for testing, we also want a crypto API fallback when actual
+IE hardware is absent. We also want IE to work with layered devices
+like dm and loopback (i.e. we want to be able to use the IE hardware
+of the underlying devices if present, or else fall back to crypto API
+en/decryption).
+
+
+Constraints and notes
+=====================
+
+- IE hardware has a limited number of "keyslots" that can be programmed
+  with an encryption context (key, algorithm, data unit size, etc.) at any time.
+  One can specify a keyslot in a data request made to the device, and the
+  device will en/decrypt the data using the encryption context programmed into
+  that specified keyslot. When possible, we want to make multiple requests with
+  the same encryption context share the same keyslot.
+
+- We need a way for filesystems to specify an encryption context to use for
+  en/decrypting a struct bio, and a device driver (like UFS) needs to be able
+  to use that encryption context when it processes the bio.
+
+- We need a way for device drivers to expose their capabilities in a unified
+  way to the upper layers.
+
+
+Design
+======
+
+We add a struct bio_crypt_ctx to struct bio that can represent an
+encryption context, because we need to be able to pass this encryption
+context from the FS layer to the device driver to act upon.
+
+While IE hardware works on the notion of keyslots, the FS layer has no
+knowledge of keyslots - it simply wants to specify an encryption context to
+use while en/decrypting a bio.
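The paragraphs that follow introduce the keyslot manager (KSM) that performs
this translation. As an editorial sketch of the slot lifecycle it manages
(not part of the patch; the surrounding variables and the 4096-byte data unit
size are assumptions, the functions are from patch 1 of this series):

	/* Translate an encryption context into a keyslot, use it, release it. */
	int slot = keyslot_manager_get_slot_for_key(q->ksm, raw_key,
						BLK_ENCRYPTION_MODE_AES_256_XTS,
						4096);
	if (slot < 0)
		return slot;	/* no slot could be found or programmed */
	/* ... tag requests to the device with 'slot' ... */
	keyslot_manager_put_slot(q->ksm, slot);	/* slot may become idle again */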
+
+We introduce a keyslot manager (KSM) that handles the translation from
+encryption contexts specified by the FS to keyslots on the IE hardware.
+The KSM also serves as the way IE hardware can expose its capabilities to
+upper layers. The generic mode of operation is: each device driver that wants
+to support IE will construct a KSM and set it up in its struct request_queue.
+Upper layers that want to use IE on this device can then use this KSM in
+the device's struct request_queue to translate an encryption context into
+a keyslot. The presence of the KSM in the request queue shall be used to mean
+that the device supports IE.
+
+On the device driver end of the interface, the device driver needs to tell the
+KSM how to actually manipulate the IE hardware in the device to do things like
+programming the crypto key into the IE hardware into a particular keyslot. All
+this is achieved through the :c:type:`struct keyslot_mgmt_ll_ops` that the
+device driver passes to the KSM when creating it.
+
+The KSM uses refcounts to track which keyslots are idle (either they have no
+encryption context programmed, or there are no in-flight struct bios
+referencing that keyslot). When a new encryption context needs a keyslot, the
+KSM tries to find a keyslot that has already been programmed with the same
+encryption context, and if there is no such keyslot, it evicts the least
+recently used idle keyslot and programs the new encryption context into that
+one. If no idle keyslots are available, then the caller will sleep until there
+is at least one.
+
+
+Blk-crypto
+==========
+
+The above is sufficient for simple cases, but does not work if there is a
+need for a crypto API fallback, or if we want to use IE with layered
+devices. To these ends, we introduce blk-crypto. Blk-crypto allows us to
+present a unified view of encryption to the FS (so the FS only needs to
+specify an encryption context and not worry about keyslots at all), and
+blk-crypto can decide whether to delegate the en/decryption to IE hardware or
+to the crypto API. Blk-crypto maintains an internal KSM that serves as the
+crypto API fallback.
+
+Blk-crypto needs to ensure that the encryption context is programmed into the
+"correct" keyslot manager for IE. If a bio is submitted to a layered device
+that eventually passes the bio down to a device that really does support IE, we
+want the encryption context to be programmed into a keyslot for the KSM of the
+device with IE support. However, blk-crypto does not know a priori whether a
+particular device is the final device in the layering structure for a bio or
+not. So in the case that a particular device does not support IE, since it is
+possibly the final destination device for the bio, if the bio requires
+encryption (i.e. the bio is doing a write operation), blk-crypto must fall back
+to the crypto API *before* sending the bio to the device.
+
+Blk-crypto ensures that:
+
+- The bio's encryption context is programmed into a keyslot in the KSM of the
+  request queue that the bio is being submitted to (or the crypto API fallback
+  KSM if the request queue doesn't have a KSM), and that the ``processing_ksm``
+  in the ``bi_crypt_context`` is set to this KSM
+
+- That the bio has its own individual reference to the keyslot in this KSM.
+  Once the bio passes through blk-crypto, its encryption context is programmed
+  in some KSM.
The "its own individual reference to the keyslot" ensures that + keyslots can be released by each bio independently of other bios while + ensuring that the bio has a valid reference to the keyslot when, for e.g., the + crypto API fallback KSM in blk-crypto performs crypto on the device's behalf. + The individual references are ensured by increasing the refcount for the + keyslot in the ``processing_ksm`` when a bio with a programmed encryption + context is cloned. + + +What blk-crypto does on bio submission +-------------------------------------- + +**Case 1:** blk-crypto is given a bio with only an encryption context that hasn't +been programmed into any keyslot in any KSM (for e.g. a bio from the FS). + In this case, blk-crypto will program the encryption context into the KSM of the + request queue the bio is being submitted to (and if this KSM does not exist, + then it will program it into blk-crypto's internal KSM for crypto API + fallback). The KSM that this encryption context was programmed into is stored + as the ``processing_ksm`` in the bio's ``bi_crypt_context``. + +**Case 2:** blk-crypto is given a bio whose encryption context has already been +programmed into a keyslot in the *crypto API fallback* KSM. + In this case, blk-crypto does nothing; it treats the bio as not having + specified an encryption context. Note that we cannot do here what we will do + in Case 3 because we would have already encrypted the bio via the crypto API + by this point. + +**Case 3:** blk-crypto is given a bio whose encryption context has already been +programmed into a keyslot in some KSM (that is *not* the crypto API fallback +KSM). + In this case, blk-crypto first releases that keyslot from that KSM and then + treats the bio as in Case 1. + +This way, when a device driver is processing a bio, it can be sure that +the bio's encryption context has been programmed into some KSM (either the +device driver's request queue's KSM, or blk-crypto's crypto API fallback KSM). +It then simply needs to check if the bio's processing_ksm is the device's +request queue's KSM. If so, then it should proceed with IE. If not, it should +simply do nothing with respect to crypto, because some other KSM (perhaps the +blk-crypto crypto API fallback KSM) is handling the en/decryption. + +Blk-crypto will release the keyslot that is being held by the bio (and also +decrypt it if the bio is using the crypto API fallback KSM) once +``bio_remaining_done`` returns true for the bio. + + +Layered Devices +=============== + +Layered devices that wish to support IE need to create their own keyslot +manager for their request queue, and expose whatever functionality they choose. +When a layered device wants to pass a bio to another layer (either by +resubmitting the same bio, or by submitting a clone), it doesn't need to do +anything special because the bio (or the clone) will once again pass through +blk-crypto, which will work as described in Case 3. If a layered device wants +for some reason to do the IO by itself instead of passing it on to a child +device, but it also chose to expose IE capabilities by setting up a KSM in its +request queue, it is then responsible for en/decrypting the data itself. In +such cases, the device can choose to call the blk-crypto function +``blk_crypto_fallback_to_kernel_crypto_api`` (TODO: Not yet implemented), which will +cause the en/decryption to be done via the crypto API fallback. 
+
+
+Future Optimizations for layered devices
+========================================
+
+Creating a keyslot manager for the layered device uses up memory for each
+keyslot, and in general, a layered device (like dm-linear) merely passes the
+request on to a "child" device, so the keyslots in the layered device itself
+might be completely unused. We can instead define a new type of KSM: the
+"passthrough KSM", which layered devices can use to let blk-crypto know that
+this layered device *will* pass the bio to some child device (and hence
+through blk-crypto again, at which point blk-crypto can program the encryption
+context, instead of programming it into the layered device's KSM). Again, if
+the device "lies" and decides to do the IO itself instead of passing it on to
+a child device, it is responsible for doing the en/decryption (and can choose
+to call ``blk_crypto_fallback_to_kernel_crypto_api``). Another use case for the
+"passthrough KSM" is for IE devices that want to manage their own keyslots, or
+that do not have a limited number of keyslots.

diff --git a/block/Kconfig b/block/Kconfig
index ae52d42b783b..606a67e47e68 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -179,6 +179,8 @@ config BLK_SED_OPAL

 config BLK_INLINE_ENCRYPTION
 	bool "Enable inline encryption support in block layer"
+	select CRYPTO
+	select CRYPTO_BLKCIPHER
 	help
 	  Build the blk-crypto subsystem.
 	  Enabling this lets the block layer handle encryption,

diff --git a/block/Makefile b/block/Makefile
index f39611ed151f..8932c7e4fd07 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -36,4 +36,5 @@ obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
-obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o bio-crypt-ctx.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o bio-crypt-ctx.o \
+					   blk-crypto.o

diff --git a/block/bio-crypt-ctx.c b/block/bio-crypt-ctx.c
index aa3571f72ee7..6a2b061865c6 100644
--- a/block/bio-crypt-ctx.c
+++ b/block/bio-crypt-ctx.c
@@ -43,7 +43,12 @@ EXPORT_SYMBOL(bio_crypt_free_ctx);

 int bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask)
 {
-	if (!bio_has_crypt_ctx(src))
+	/*
+	 * If a bio is swhandled, then it will be decrypted when bio_endio
+	 * is called. As we only want the data to be decrypted once, copies
+	 * of the bio must not have a crypt context.
+ */ + if (!bio_has_crypt_ctx(src) || bio_crypt_swhandled(src)) return 0; dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask); diff --git a/block/bio.c b/block/bio.c index ce8003aadf07..36a1712328d0 100644 --- a/block/bio.c +++ b/block/bio.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include "blk.h" @@ -1788,6 +1789,10 @@ void bio_endio(struct bio *bio) again: if (!bio_remaining_done(bio)) return; + + if (!blk_crypto_endio(bio)) + return; + if (!bio_integrity_endio(bio)) return; diff --git a/block/blk-core.c b/block/blk-core.c index 3b5959d386fb..0f7e81dbe2ee 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -38,6 +38,7 @@ #include #include #include +#include #define CREATE_TRACE_POINTS #include @@ -1061,7 +1062,9 @@ blk_qc_t generic_make_request(struct bio *bio) /* Create a fresh bio_list for all subordinate requests */ bio_list_on_stack[1] = bio_list_on_stack[0]; bio_list_init(&bio_list_on_stack[0]); - ret = q->make_request_fn(q, bio); + + if (!blk_crypto_submit_bio(&bio)) + ret = q->make_request_fn(q, bio); blk_queue_exit(q); @@ -1114,6 +1117,9 @@ blk_qc_t direct_make_request(struct bio *bio) if (!generic_make_request_checks(bio)) return BLK_QC_T_NONE; + if (blk_crypto_submit_bio(&bio)) + return BLK_QC_T_NONE; + if (unlikely(blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0))) { if (nowait && !blk_queue_dying(q)) bio->bi_status = BLK_STS_AGAIN; @@ -1810,5 +1816,8 @@ int __init blk_dev_init(void) if (bio_crypt_ctx_init() < 0) panic("Failed to allocate mem for bio crypt ctxs\n"); + if (blk_crypto_init() < 0) + panic("Failed to init blk-crypto\n"); + return 0; } diff --git a/block/blk-crypto.c b/block/blk-crypto.c new file mode 100644 index 000000000000..89649655bf4b --- /dev/null +++ b/block/blk-crypto.c @@ -0,0 +1,798 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2019 Google LLC + */ + +/* + * Refer to Documentation/block/inline-encryption.rst for detailed explanation. + */ + +#define pr_fmt(fmt) "blk-crypto: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* Represents a crypto mode supported by blk-crypto */ +struct blk_crypto_mode { + const char *cipher_str; /* crypto API name (for fallback case) */ + size_t keysize; /* key size in bytes */ +}; + +static const struct blk_crypto_mode blk_crypto_modes[] = { + [BLK_ENCRYPTION_MODE_AES_256_XTS] = { + .cipher_str = "xts(aes)", + .keysize = 64, + }, +}; + +static unsigned int num_prealloc_bounce_pg = 32; +module_param(num_prealloc_bounce_pg, uint, 0); +MODULE_PARM_DESC(num_prealloc_bounce_pg, + "Number of preallocated bounce pages for blk-crypto to use during crypto API fallback encryption"); + +#define BLK_CRYPTO_MAX_KEY_SIZE 64 +static int blk_crypto_num_keyslots = 100; +module_param_named(num_keyslots, blk_crypto_num_keyslots, int, 0); +MODULE_PARM_DESC(num_keyslots, + "Number of keyslots for crypto API fallback in blk-crypto."); + +static struct blk_crypto_keyslot { + struct crypto_skcipher *tfm; + enum blk_crypto_mode_num crypto_mode; + u8 key[BLK_CRYPTO_MAX_KEY_SIZE]; + struct crypto_skcipher *tfms[ARRAY_SIZE(blk_crypto_modes)]; +} *blk_crypto_keyslots; + +/* + * Allocating a crypto tfm during I/O can deadlock, so we have to preallocate + * all of a mode's tfms when that mode starts being used. Since each mode may + * need all the keyslots at some point, each mode needs its own tfm for each + * keyslot; thus, a keyslot may contain tfms for multiple modes. 
However, to + * match the behavior of real inline encryption hardware (which only supports a + * single encryption context per keyslot), we only allow one tfm per keyslot to + * be used at a time - the rest of the unused tfms have their keys cleared. + */ +static struct mutex tfms_lock[ARRAY_SIZE(blk_crypto_modes)]; +static bool tfms_inited[ARRAY_SIZE(blk_crypto_modes)]; + +struct work_mem { + struct work_struct crypto_work; + struct bio *bio; +}; + +/* The following few vars are only used during the crypto API fallback */ +static struct keyslot_manager *blk_crypto_ksm; +static struct workqueue_struct *blk_crypto_wq; +static mempool_t *blk_crypto_page_pool; +static struct kmem_cache *blk_crypto_work_mem_cache; + +bool bio_crypt_swhandled(struct bio *bio) +{ + return bio_has_crypt_ctx(bio) && + bio->bi_crypt_context->processing_ksm == blk_crypto_ksm; +} + +static u8 blank_key[BLK_CRYPTO_MAX_KEY_SIZE]; +static void evict_keyslot(unsigned int slot) +{ + struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot]; + enum blk_crypto_mode_num crypto_mode = slotp->crypto_mode; + int err; + + WARN_ON(slotp->crypto_mode == BLK_ENCRYPTION_MODE_INVALID); + + /* Clear the key in the skcipher */ + err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], blank_key, + blk_crypto_modes[crypto_mode].keysize); + WARN_ON(err); + memzero_explicit(slotp->key, BLK_CRYPTO_MAX_KEY_SIZE); + slotp->crypto_mode = BLK_ENCRYPTION_MODE_INVALID; +} + +static int blk_crypto_keyslot_program(void *priv, const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size, + unsigned int slot) +{ + struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot]; + const struct blk_crypto_mode *mode = &blk_crypto_modes[crypto_mode]; + size_t keysize = mode->keysize; + int err; + + if (crypto_mode != slotp->crypto_mode && + slotp->crypto_mode != BLK_ENCRYPTION_MODE_INVALID) { + evict_keyslot(slot); + } + + if (!slotp->tfms[crypto_mode]) + return -ENOMEM; + slotp->crypto_mode = crypto_mode; + err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key, keysize); + + if (err) { + evict_keyslot(slot); + return err; + } + + memcpy(slotp->key, key, keysize); + + return 0; +} + +static int blk_crypto_keyslot_evict(void *priv, const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size, + unsigned int slot) +{ + evict_keyslot(slot); + return 0; +} + +static int blk_crypto_keyslot_find(void *priv, + const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size_bytes) +{ + int slot; + const size_t keysize = blk_crypto_modes[crypto_mode].keysize; + + for (slot = 0; slot < blk_crypto_num_keyslots; slot++) { + if (blk_crypto_keyslots[slot].crypto_mode == crypto_mode && + !crypto_memneq(blk_crypto_keyslots[slot].key, key, keysize)) + return slot; + } + + return -ENOKEY; +} + +static bool blk_crypto_mode_supported(void *priv, + enum blk_crypto_mode_num crypt_mode, + unsigned int data_unit_size) +{ + /* All blk_crypto_modes are required to have a crypto API fallback. */ + return true; +} + +/* + * The crypto API fallback KSM ops - only used for a bio when it specifies a + * blk_crypto_mode for which we failed to get a keyslot in the device's inline + * encryption hardware (which probably means the device doesn't have inline + * encryption hardware that supports that crypto mode). 
+ */ +static const struct keyslot_mgmt_ll_ops blk_crypto_ksm_ll_ops = { + .keyslot_program = blk_crypto_keyslot_program, + .keyslot_evict = blk_crypto_keyslot_evict, + .keyslot_find = blk_crypto_keyslot_find, + .crypto_mode_supported = blk_crypto_mode_supported, +}; + +static void blk_crypto_encrypt_endio(struct bio *enc_bio) +{ + struct bio *src_bio = enc_bio->bi_private; + int i; + + for (i = 0; i < enc_bio->bi_vcnt; i++) + mempool_free(enc_bio->bi_io_vec[i].bv_page, + blk_crypto_page_pool); + + src_bio->bi_status = enc_bio->bi_status; + + bio_put(enc_bio); + bio_endio(src_bio); +} + +static struct bio *blk_crypto_clone_bio(struct bio *bio_src) +{ + struct bvec_iter iter; + struct bio_vec bv; + struct bio *bio; + + bio = bio_alloc_bioset(GFP_NOIO, bio_segments(bio_src), NULL); + if (!bio) + return NULL; + bio->bi_disk = bio_src->bi_disk; + bio->bi_opf = bio_src->bi_opf; + bio->bi_ioprio = bio_src->bi_ioprio; + bio->bi_write_hint = bio_src->bi_write_hint; + bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector; + bio->bi_iter.bi_size = bio_src->bi_iter.bi_size; + + bio_for_each_segment(bv, bio_src, iter) + bio->bi_io_vec[bio->bi_vcnt++] = bv; + + if (bio_integrity(bio_src) && + bio_integrity_clone(bio, bio_src, GFP_NOIO) < 0) { + bio_put(bio); + return NULL; + } + + bio_clone_blkg_association(bio, bio_src); + blkcg_bio_issue_init(bio); + + return bio; +} + +/* Check that all I/O segments are data unit aligned */ +static int bio_crypt_check_alignment(struct bio *bio) +{ + int data_unit_size = 1 << bio->bi_crypt_context->data_unit_size_bits; + struct bvec_iter iter; + struct bio_vec bv; + + bio_for_each_segment(bv, bio, iter) { + if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size)) + return -EIO; + } + return 0; +} + +static int blk_crypto_alloc_cipher_req(struct bio *src_bio, + struct skcipher_request **ciph_req_ptr, + struct crypto_wait *wait) +{ + int slot; + struct skcipher_request *ciph_req; + struct blk_crypto_keyslot *slotp; + + slot = bio_crypt_get_keyslot(src_bio); + slotp = &blk_crypto_keyslots[slot]; + ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode], + GFP_NOIO); + if (!ciph_req) { + src_bio->bi_status = BLK_STS_RESOURCE; + return -ENOMEM; + } + + skcipher_request_set_callback(ciph_req, + CRYPTO_TFM_REQ_MAY_BACKLOG | + CRYPTO_TFM_REQ_MAY_SLEEP, + crypto_req_done, wait); + *ciph_req_ptr = ciph_req; + return 0; +} + +static int blk_crypto_split_bio_if_needed(struct bio **bio_ptr) +{ + struct bio *bio = *bio_ptr; + unsigned int i = 0; + unsigned int num_sectors = 0; + struct bio_vec bv; + struct bvec_iter iter; + + bio_for_each_segment(bv, bio, iter) { + num_sectors += bv.bv_len >> SECTOR_SHIFT; + if (++i == BIO_MAX_PAGES) + break; + } + if (num_sectors < bio_sectors(bio)) { + struct bio *split_bio; + + split_bio = bio_split(bio, num_sectors, GFP_NOIO, NULL); + if (!split_bio) { + bio->bi_status = BLK_STS_RESOURCE; + return -ENOMEM; + } + bio_chain(split_bio, bio); + generic_make_request(bio); + *bio_ptr = split_bio; + } + return 0; +} + +/* + * The crypto API fallback's encryption routine. + * Allocate a bounce bio for encryption, encrypt the input bio using + * crypto API, and replace *bio_ptr with the bounce bio. May split input + * bio if it's too large. 
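+ *
+ * (The split is needed because the bounce bio allocates one bounce page per
+ * segment and can hold at most BIO_MAX_PAGES segments.)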
+ */ +static int blk_crypto_encrypt_bio(struct bio **bio_ptr) +{ + struct bio *src_bio; + struct skcipher_request *ciph_req = NULL; + DECLARE_CRYPTO_WAIT(wait); + int err = 0; + u64 curr_dun; + union { + __le64 dun; + u8 bytes[16]; + } iv; + struct scatterlist src, dst; + struct bio *enc_bio; + struct bio_vec *enc_bvec; + int i, j; + int data_unit_size; + + /* Split the bio if it's too big for single page bvec */ + err = blk_crypto_split_bio_if_needed(bio_ptr); + if (err) + return err; + + src_bio = *bio_ptr; + data_unit_size = 1 << src_bio->bi_crypt_context->data_unit_size_bits; + + /* Allocate bounce bio for encryption */ + enc_bio = blk_crypto_clone_bio(src_bio); + if (!enc_bio) { + src_bio->bi_status = BLK_STS_RESOURCE; + return -ENOMEM; + } + + /* + * Use the crypto API fallback keyslot manager to get a crypto_skcipher + * for the algorithm and key specified for this bio. + */ + err = bio_crypt_ctx_acquire_keyslot(src_bio, blk_crypto_ksm); + if (err) { + src_bio->bi_status = BLK_STS_IOERR; + goto out_put_enc_bio; + } + + /* and then allocate an skcipher_request for it */ + err = blk_crypto_alloc_cipher_req(src_bio, &ciph_req, &wait); + if (err) + goto out_release_keyslot; + + curr_dun = bio_crypt_data_unit_num(src_bio); + sg_init_table(&src, 1); + sg_init_table(&dst, 1); + + skcipher_request_set_crypt(ciph_req, &src, &dst, + data_unit_size, iv.bytes); + + /* Encrypt each page in the bounce bio */ + for (i = 0, enc_bvec = enc_bio->bi_io_vec; i < enc_bio->bi_vcnt; + enc_bvec++, i++) { + struct page *plaintext_page = enc_bvec->bv_page; + struct page *ciphertext_page = + mempool_alloc(blk_crypto_page_pool, GFP_NOIO); + + enc_bvec->bv_page = ciphertext_page; + + if (!ciphertext_page) { + src_bio->bi_status = BLK_STS_RESOURCE; + err = -ENOMEM; + goto out_free_bounce_pages; + } + + sg_set_page(&src, plaintext_page, data_unit_size, + enc_bvec->bv_offset); + sg_set_page(&dst, ciphertext_page, data_unit_size, + enc_bvec->bv_offset); + + /* Encrypt each data unit in this page */ + for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) { + memset(&iv, 0, sizeof(iv)); + iv.dun = cpu_to_le64(curr_dun); + + err = crypto_wait_req(crypto_skcipher_encrypt(ciph_req), + &wait); + if (err) { + i++; + src_bio->bi_status = BLK_STS_RESOURCE; + goto out_free_bounce_pages; + } + curr_dun++; + src.offset += data_unit_size; + dst.offset += data_unit_size; + } + } + + enc_bio->bi_private = src_bio; + enc_bio->bi_end_io = blk_crypto_encrypt_endio; + *bio_ptr = enc_bio; + + enc_bio = NULL; + err = 0; + goto out_free_ciph_req; + +out_free_bounce_pages: + while (i > 0) + mempool_free(enc_bio->bi_io_vec[--i].bv_page, + blk_crypto_page_pool); +out_free_ciph_req: + skcipher_request_free(ciph_req); +out_release_keyslot: + bio_crypt_ctx_release_keyslot(src_bio); +out_put_enc_bio: + if (enc_bio) + bio_put(enc_bio); + + return err; +} + +/* + * The crypto API fallback's main decryption routine. + * Decrypts input bio in place. 
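+ *
+ * (Runs from blk_crypto_wq in process context, since crypto_wait_req() may
+ * sleep while the crypto API performs the decryption.)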
+ */ +static void blk_crypto_decrypt_bio(struct work_struct *w) +{ + struct work_mem *work_mem = + container_of(w, struct work_mem, crypto_work); + struct bio *bio = work_mem->bio; + struct skcipher_request *ciph_req = NULL; + DECLARE_CRYPTO_WAIT(wait); + struct bio_vec bv; + struct bvec_iter iter; + u64 curr_dun; + union { + __le64 dun; + u8 bytes[16]; + } iv; + struct scatterlist sg; + int data_unit_size = 1 << bio->bi_crypt_context->data_unit_size_bits; + int i; + int err; + + /* + * Use the crypto API fallback keyslot manager to get a crypto_skcipher + * for the algorithm and key specified for this bio. + */ + if (bio_crypt_ctx_acquire_keyslot(bio, blk_crypto_ksm)) { + bio->bi_status = BLK_STS_RESOURCE; + goto out_no_keyslot; + } + + /* and then allocate an skcipher_request for it */ + err = blk_crypto_alloc_cipher_req(bio, &ciph_req, &wait); + if (err) + goto out; + + curr_dun = bio_crypt_sw_data_unit_num(bio); + sg_init_table(&sg, 1); + skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size, + iv.bytes); + + /* Decrypt each segment in the bio */ + __bio_for_each_segment(bv, bio, iter, + bio->bi_crypt_context->crypt_iter) { + struct page *page = bv.bv_page; + + sg_set_page(&sg, page, data_unit_size, bv.bv_offset); + + /* Decrypt each data unit in the segment */ + for (i = 0; i < bv.bv_len; i += data_unit_size) { + memset(&iv, 0, sizeof(iv)); + iv.dun = cpu_to_le64(curr_dun); + if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req), + &wait)) { + bio->bi_status = BLK_STS_IOERR; + goto out; + } + curr_dun++; + sg.offset += data_unit_size; + } + } + +out: + skcipher_request_free(ciph_req); + bio_crypt_ctx_release_keyslot(bio); +out_no_keyslot: + kmem_cache_free(blk_crypto_work_mem_cache, work_mem); + bio_endio(bio); +} + +/* Queue bio for decryption */ +static void blk_crypto_queue_decrypt_bio(struct bio *bio) +{ + struct work_mem *work_mem = + kmem_cache_zalloc(blk_crypto_work_mem_cache, GFP_ATOMIC); + + if (!work_mem) { + bio->bi_status = BLK_STS_RESOURCE; + bio_endio(bio); + return; + } + + INIT_WORK(&work_mem->crypto_work, blk_crypto_decrypt_bio); + work_mem->bio = bio; + queue_work(blk_crypto_wq, &work_mem->crypto_work); +} + +/** + * blk_crypto_submit_bio - handle submitting bio for inline encryption + * + * @bio_ptr: pointer to original bio pointer + * + * If the bio doesn't have inline encryption enabled or the submitter already + * specified a keyslot for the target device, do nothing. Else, a raw key must + * have been provided, so acquire a device keyslot for it if supported. Else, + * use the crypto API fallback. + * + * When the crypto API fallback is used for encryption, blk-crypto may choose to + * split the bio into 2 - the first one that will continue to be processed and + * the second one that will be resubmitted via generic_make_request. + * A bounce bio will be allocated to encrypt the contents of the aforementioned + * "first one", and *bio_ptr will be updated to this bounce bio. + * + * Return: 0 if bio submission should continue; nonzero if bio_endio() was + * already called so bio submission should abort. 
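+ *
+ * The expected caller pattern, as in generic_make_request(), is:
+ *
+ *	if (!blk_crypto_submit_bio(&bio))
+ *		ret = q->make_request_fn(q, bio);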
+ */
+int blk_crypto_submit_bio(struct bio **bio_ptr)
+{
+	struct bio *bio = *bio_ptr;
+	struct request_queue *q;
+	int err;
+	struct bio_crypt_ctx *crypt_ctx;
+
+	if (!bio_has_crypt_ctx(bio) || !bio_has_data(bio))
+		return 0;
+
+	/*
+	 * When a read bio is marked for sw decryption, its bi_iter is saved
+	 * so that when we decrypt the bio later, we know what part of it was
+	 * marked for sw decryption (when the bio is passed down after
+	 * blk_crypto_submit_bio, it may be split or advanced so we cannot rely
+	 * on the bi_iter while decrypting in blk_crypto_endio).
+	 */
+	if (bio_crypt_swhandled(bio))
+		return 0;
+
+	err = bio_crypt_check_alignment(bio);
+	if (err) {
+		bio->bi_status = BLK_STS_IOERR;
+		goto out;
+	}
+
+	crypt_ctx = bio->bi_crypt_context;
+	q = bio->bi_disk->queue;
+
+	if (bio_crypt_has_keyslot(bio)) {
+		/* Key already programmed into device? */
+		if (q->ksm == crypt_ctx->processing_ksm)
+			return 0;
+
+		/* Nope, release the existing keyslot. */
+		bio_crypt_ctx_release_keyslot(bio);
+	}
+
+	/* Get device keyslot if supported */
+	if (q->ksm) {
+		err = bio_crypt_ctx_acquire_keyslot(bio, q->ksm);
+		if (!err)
+			return 0;
+
+		pr_warn_once("Failed to acquire keyslot for %s (err=%d). Falling back to crypto API.\n",
+			     bio->bi_disk->disk_name, err);
+	}
+
+	/* Fallback to crypto API */
+	if (!READ_ONCE(tfms_inited[bio->bi_crypt_context->crypto_mode])) {
+		err = -EIO;
+		bio->bi_status = BLK_STS_IOERR;
+		goto out;
+	}
+
+	if (bio_data_dir(bio) == WRITE) {
+		/* Encrypt the data now */
+		err = blk_crypto_encrypt_bio(bio_ptr);
+		if (err)
+			goto out;
+	} else {
+		/* Mark bio as swhandled */
+		bio->bi_crypt_context->processing_ksm = blk_crypto_ksm;
+		bio->bi_crypt_context->crypt_iter = bio->bi_iter;
+		bio->bi_crypt_context->sw_data_unit_num =
+			bio->bi_crypt_context->data_unit_num;
+	}
+	return 0;
+out:
+	bio_endio(*bio_ptr);
+	return err;
+}
+
+/**
+ * blk_crypto_endio - clean up bio w.r.t. inline encryption during bio_endio
+ *
+ * @bio - the bio to clean up
+ *
+ * If blk_crypto_submit_bio decided to fall back to the crypto API for this
+ * bio, we queue the bio for decryption into a workqueue and return false,
+ * and call bio_endio(bio) at a later time (after the bio has been decrypted).
+ *
+ * If the bio is not to be decrypted by the crypto API, this function releases
+ * the reference to the keyslot that blk_crypto_submit_bio got.
+ *
+ * Return: true if bio_endio should continue; false otherwise (bio_endio will
+ * be called again when bio has been decrypted).
+ */
+bool blk_crypto_endio(struct bio *bio)
+{
+	if (!bio_has_crypt_ctx(bio))
+		return true;
+
+	if (bio_crypt_swhandled(bio)) {
+		/*
+		 * The only bios that are swhandled when they reach here
+		 * are those with bio_data_dir(bio) == READ, since WRITE
+		 * bios that are encrypted by the crypto API fallback are
+		 * handled by blk_crypto_encrypt_endio.
+		 */
+
+		/* If there was an IO error, don't decrypt. */
+		if (bio->bi_status)
+			return true;
+
+		blk_crypto_queue_decrypt_bio(bio);
+		return false;
+	}
+
+	if (bio_crypt_has_keyslot(bio))
+		bio_crypt_ctx_release_keyslot(bio);
+
+	return true;
+}
+
+/**
+ * blk_crypto_start_using_mode() - Allocate skciphers for a
+ *				   mode_num for all keyslots
+ * @mode_num - the blk_crypto_mode we want to allocate ciphers for.
+ * @data_unit_size - the data unit size that will be used with this mode
+ * @q - the request queue for the device
+ *
+ * Upper layers (filesystems) should call this function to ensure that the
+ * crypto API fallback has transforms for this algorithm, if they become
+ * necessary.
+ *
+ * Return: 0 on success and -err on error.
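+ *
+ * Illustrative call (the 4096-byte data unit size is hypothetical):
+ *
+ *	blk_crypto_start_using_mode(BLK_ENCRYPTION_MODE_AES_256_XTS, 4096, q);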
+ */
+int blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num,
+				unsigned int data_unit_size,
+				struct request_queue *q)
+{
+	struct blk_crypto_keyslot *slotp;
+	int err = 0;
+	int i;
+
+	/*
+	 * Fast path
+	 * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
+	 * for each i are visible before we try to access them.
+	 */
+	if (likely(smp_load_acquire(&tfms_inited[mode_num])))
+		return 0;
+
+	/*
+	 * If the keyslot manager of the request queue supports this
+	 * crypto mode, then we don't need to allocate this mode.
+	 */
+	if (keyslot_manager_crypto_mode_supported(q->ksm, mode_num,
+						  data_unit_size)) {
+		return 0;
+	}
+
+	mutex_lock(&tfms_lock[mode_num]);
+	if (likely(tfms_inited[mode_num]))
+		goto out;
+
+	for (i = 0; i < blk_crypto_num_keyslots; i++) {
+		slotp = &blk_crypto_keyslots[i];
+		slotp->tfms[mode_num] = crypto_alloc_skcipher(
+					blk_crypto_modes[mode_num].cipher_str,
+					0, 0);
+		if (IS_ERR(slotp->tfms[mode_num])) {
+			err = PTR_ERR(slotp->tfms[mode_num]);
+			slotp->tfms[mode_num] = NULL;
+			goto out_free_tfms;
+		}
+
+		crypto_skcipher_set_flags(slotp->tfms[mode_num],
+					  CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
+	}
+
+	/*
+	 * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
+	 * for each i are visible before we set tfms_inited[mode_num].
+	 */
+	smp_store_release(&tfms_inited[mode_num], true);
+	goto out;
+
+out_free_tfms:
+	for (i = 0; i < blk_crypto_num_keyslots; i++) {
+		slotp = &blk_crypto_keyslots[i];
+		crypto_free_skcipher(slotp->tfms[mode_num]);
+		slotp->tfms[mode_num] = NULL;
+	}
+out:
+	mutex_unlock(&tfms_lock[mode_num]);
+	return err;
+}
+EXPORT_SYMBOL(blk_crypto_start_using_mode);
+
+/**
+ * blk_crypto_evict_key() - Evict a key from any inline encryption hardware
+ *			    it may have been programmed into
+ * @q - The request queue whose keyslot manager this key might have been
+ *	programmed into
+ * @key - The key to evict
+ * @mode - The blk_crypto_mode_num used with this key
+ * @data_unit_size - The data unit size used with this key
+ *
+ * Upper layers (filesystems) should call this function to ensure that a key
+ * is evicted from hardware that it might have been programmed into. This
+ * will call keyslot_manager_evict_key on the queue's keyslot manager, if one
+ * exists and supports the crypto algorithm with the specified data unit size.
+ * Otherwise, it will evict the key from the blk_crypto_ksm.
+ *
+ * Return: 0 on success, -err on error.
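+ *
+ * Illustrative call (the 4096-byte data unit size is hypothetical), e.g. when
+ * a filesystem key is being destroyed:
+ *
+ *	blk_crypto_evict_key(q, raw_key, BLK_ENCRYPTION_MODE_AES_256_XTS, 4096);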
+ */ +int blk_crypto_evict_key(struct request_queue *q, const u8 *key, + enum blk_crypto_mode_num mode, + unsigned int data_unit_size) +{ + struct keyslot_manager *ksm = blk_crypto_ksm; + + if (q && q->ksm && keyslot_manager_crypto_mode_supported(q->ksm, mode, + data_unit_size)) { + ksm = q->ksm; + } + + return keyslot_manager_evict_key(ksm, key, mode, data_unit_size); +} +EXPORT_SYMBOL(blk_crypto_evict_key); + +int __init blk_crypto_init(void) +{ + int i; + int err = -ENOMEM; + + prandom_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE); + + blk_crypto_ksm = keyslot_manager_create(blk_crypto_num_keyslots, + &blk_crypto_ksm_ll_ops, + NULL); + if (!blk_crypto_ksm) + goto out; + + blk_crypto_wq = alloc_workqueue("blk_crypto_wq", + WQ_UNBOUND | WQ_HIGHPRI | + WQ_MEM_RECLAIM, + num_online_cpus()); + if (!blk_crypto_wq) + goto out_free_ksm; + + blk_crypto_keyslots = kcalloc(blk_crypto_num_keyslots, + sizeof(*blk_crypto_keyslots), + GFP_KERNEL); + if (!blk_crypto_keyslots) + goto out_free_workqueue; + + for (i = 0; i < blk_crypto_num_keyslots; i++) { + blk_crypto_keyslots[i].crypto_mode = + BLK_ENCRYPTION_MODE_INVALID; + } + + for (i = 0; i < ARRAY_SIZE(blk_crypto_modes); i++) + mutex_init(&tfms_lock[i]); + + blk_crypto_page_pool = + mempool_create_page_pool(num_prealloc_bounce_pg, 0); + if (!blk_crypto_page_pool) + goto out_free_keyslots; + + blk_crypto_work_mem_cache = KMEM_CACHE(work_mem, SLAB_RECLAIM_ACCOUNT); + if (!blk_crypto_work_mem_cache) + goto out_free_page_pool; + + return 0; + +out_free_page_pool: + mempool_destroy(blk_crypto_page_pool); + blk_crypto_page_pool = NULL; +out_free_keyslots: + kzfree(blk_crypto_keyslots); + blk_crypto_keyslots = NULL; +out_free_workqueue: + destroy_workqueue(blk_crypto_wq); + blk_crypto_wq = NULL; +out_free_ksm: + keyslot_manager_destroy(blk_crypto_ksm); + blk_crypto_ksm = NULL; +out: + pr_warn("No memory for blk-crypto crypto API fallback."); + return err; +} diff --git a/include/linux/bio-crypt-ctx.h b/include/linux/bio-crypt-ctx.h index 5cd569f77c31..7c389f310bab 100644 --- a/include/linux/bio-crypt-ctx.h +++ b/include/linux/bio-crypt-ctx.h @@ -53,6 +53,8 @@ static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes) } } +extern bool bio_crypt_swhandled(struct bio *bio); + static inline bool bio_crypt_has_keyslot(struct bio *bio) { return bio->bi_crypt_context->keyslot >= 0; @@ -170,6 +172,11 @@ static inline void bio_crypt_set_ctx(struct bio *bio, unsigned int dun_bits, gfp_t gfp_mask) { } +static inline bool bio_crypt_swhandled(struct bio *bio) +{ + return false; +} + static inline void bio_set_data_unit_num(struct bio *bio, u64 dun) { } static inline bool bio_crypt_has_keyslot(struct bio *bio) diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h new file mode 100644 index 000000000000..2a07401244a6 --- /dev/null +++ b/include/linux/blk-crypto.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2019 Google LLC + */ + +#ifndef __LINUX_BLK_CRYPTO_H +#define __LINUX_BLK_CRYPTO_H + +#include +#include + +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + +int blk_crypto_init(void); + +int blk_crypto_submit_bio(struct bio **bio_ptr); + +bool blk_crypto_endio(struct bio *bio); + +int blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num, + unsigned int data_unit_size, + struct request_queue *q); + +int blk_crypto_evict_key(struct request_queue *q, const u8 *key, + enum blk_crypto_mode_num mode, + unsigned int data_unit_size); + +#else /* CONFIG_BLK_INLINE_ENCRYPTION */ + +static inline int blk_crypto_init(void) 
+{ + return 0; +} + +static inline int blk_crypto_submit_bio(struct bio **bio_ptr) +{ + return 0; +} + +static inline bool blk_crypto_endio(struct bio *bio) +{ + return true; +} + +static inline int +blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num, + unsigned int data_unit_size, + struct request_queue *q) +{ + return -EOPNOTSUPP; +} + +static inline int blk_crypto_evict_key(struct request_queue *q, const u8 *key, + enum blk_crypto_mode_num mode, + unsigned int data_unit_size) +{ + return 0; +} + +#endif /* CONFIG_BLK_INLINE_ENCRYPTION */ + +#endif /* __LINUX_BLK_CRYPTO_H */ From patchwork Mon Oct 28 07:20:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 11214737 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BD2DD1864 for ; Mon, 28 Oct 2019 07:20:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9A67621783 for ; Mon, 28 Oct 2019 07:20:50 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="VU4jy3aD" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732744AbfJ1HUu (ORCPT ); Mon, 28 Oct 2019 03:20:50 -0400 Received: from mail-pf1-f202.google.com ([209.85.210.202]:42016 "EHLO mail-pf1-f202.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732717AbfJ1HUt (ORCPT ); Mon, 28 Oct 2019 03:20:49 -0400 Received: by mail-pf1-f202.google.com with SMTP id c8so2281493pfo.9 for ; Mon, 28 Oct 2019 00:20:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=bRKwTgSfXk7fwOVKWxovzUGX9YKiZ0rPGH1X8N6VbYE=; b=VU4jy3aDdPb0XgHsMa09XgTLMQjb+mXE60C3hudDweBaY1XvV1Z4zty8hyuKL0bt/x ambvCGLE+1tbm2DFluDteoH3eMaEo8ieefV5eUDS3NaAHpSwVSF0E9uxp+Q7RcYhMGBo X0UOPy9NSYynVK7Q6hUfZMoNAPZRV994EO+LouueLJYU/1qf0g200hUMhitdjm0823j1 U+TMIILKh52cqSmnoCHbPNqrO772bTPYKkfw4yFMbaRDpsVpJi1DM9YvCnk2W+3z+7AD lh9G/qCk+m572cXUgmU59jq/HSF5WI/s/JX0U+dbS2qwJRYOvGqy+iUnlfCosvtAemZa nW1Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=bRKwTgSfXk7fwOVKWxovzUGX9YKiZ0rPGH1X8N6VbYE=; b=IkhvfoCTEbxno/X1DCbsrJuBWJUm8ZpHIepQTfiHDTLF+AFw+TpIjTHVa8sCUmkoYx mApKroO5dXmHMJK/jjOIUua42yy8L99b9Q8z7kcl0SxvqJxnCpOPW/8n5XUbi92jXYyt ldyVHNP8s4QXzn7I1TOghJfVOwUDZt3dblX1XjtYRMOicjU5OeoPfBBvIqRssVL5k1nB /C5B04/z/+7EratGSzyivCCNfkmUL8+skBpImXFKyJvailza4zJnN1oiFaiqyDOwJYwk q/lElj9kDb9BBzNuhUfXSLRIGpvqPfwyU6YH5fDqf9SY7bABbQqagtiD3bQb9sfxNBUP JTMQ== X-Gm-Message-State: APjAAAVlbsa36YI/5KNp+GctdwLzI/tSgJL0eqGNm7jwB48kVUU4Ovew jWgzoeGPm3KkljkoGWZO9wzOo9XgJ+4= X-Google-Smtp-Source: APXvYqy3NeWQK99ybjv4dDgeoSiNONN6fh7fT0Ui9h0LyMmKLPvI7XfvVxXalHwla71EgYT0Xt/qAdGIyWs= X-Received: by 2002:a65:4bc3:: with SMTP id p3mr19222891pgr.188.1572247248176; Mon, 28 Oct 2019 00:20:48 -0700 (PDT) Date: Mon, 28 Oct 2019 00:20:27 -0700 In-Reply-To: <20191028072032.6911-1-satyat@google.com> Message-Id: <20191028072032.6911-5-satyat@google.com> Mime-Version: 1.0 References: <20191028072032.6911-1-satyat@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v5 4/9] scsi: ufs: UFS driver 
v2.1 spec crypto additions From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-fscrypt-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org Add the crypto registers and structs defined in v2.1 of the JEDEC UFSHCI specification in preparation to add support for inline encryption to UFS. Signed-off-by: Satya Tangirala --- drivers/scsi/ufs/ufshcd.c | 2 ++ drivers/scsi/ufs/ufshcd.h | 5 +++ drivers/scsi/ufs/ufshci.h | 67 +++++++++++++++++++++++++++++++++++++-- 3 files changed, 72 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 11a87f51c442..e66eb7a39a02 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -4769,6 +4769,8 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) case OCS_MISMATCH_RESP_UPIU_SIZE: case OCS_PEER_COMM_FAILURE: case OCS_FATAL_ERROR: + case OCS_INVALID_CRYPTO_CONFIG: + case OCS_GENERAL_CRYPTO_ERROR: default: result |= DID_ERROR << 16; dev_err(hba->dev, diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index c94cfda52829..8f2329b4fe79 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -716,6 +716,11 @@ struct ufs_hba { * the performance of ongoing read/write operations. */ #define UFSHCD_CAP_KEEP_AUTO_BKOPS_ENABLED_EXCEPT_SUSPEND (1 << 5) + /* + * This capability allows the host controller driver to use the + * inline crypto engine, if it is present + */ +#define UFSHCD_CAP_CRYPTO (1 << 6) struct devfreq *devfreq; struct ufs_clk_scaling clk_scaling; diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h index dbb75cd28dc8..291f6c3e79db 100644 --- a/drivers/scsi/ufs/ufshci.h +++ b/drivers/scsi/ufs/ufshci.h @@ -90,6 +90,7 @@ enum { MASK_64_ADDRESSING_SUPPORT = 0x01000000, MASK_OUT_OF_ORDER_DATA_DELIVERY_SUPPORT = 0x02000000, MASK_UIC_DME_TEST_MODE_SUPPORT = 0x04000000, + MASK_CRYPTO_SUPPORT = 0x10000000, }; #define UFS_MASK(mask, offset) ((mask) << (offset)) @@ -143,6 +144,7 @@ enum { #define DEVICE_FATAL_ERROR 0x800 #define CONTROLLER_FATAL_ERROR 0x10000 #define SYSTEM_BUS_FATAL_ERROR 0x20000 +#define CRYPTO_ENGINE_FATAL_ERROR 0x40000 #define UFSHCD_UIC_HIBERN8_MASK (UIC_HIBERNATE_ENTER |\ UIC_HIBERNATE_EXIT) @@ -155,11 +157,13 @@ enum { #define UFSHCD_ERROR_MASK (UIC_ERROR |\ DEVICE_FATAL_ERROR |\ CONTROLLER_FATAL_ERROR |\ - SYSTEM_BUS_FATAL_ERROR) + SYSTEM_BUS_FATAL_ERROR |\ + CRYPTO_ENGINE_FATAL_ERROR) #define INT_FATAL_ERRORS (DEVICE_FATAL_ERROR |\ CONTROLLER_FATAL_ERROR |\ - SYSTEM_BUS_FATAL_ERROR) + SYSTEM_BUS_FATAL_ERROR |\ + CRYPTO_ENGINE_FATAL_ERROR) /* HCS - Host Controller Status 30h */ #define DEVICE_PRESENT 0x1 @@ -318,6 +322,61 @@ enum { INTERRUPT_MASK_ALL_VER_21 = 0x71FFF, }; +/* CCAP - Crypto Capability 100h */ +union ufs_crypto_capabilities { + __le32 reg_val; + struct { + u8 num_crypto_cap; + u8 config_count; + u8 reserved; + u8 config_array_ptr; + }; +}; + +enum ufs_crypto_key_size { + UFS_CRYPTO_KEY_SIZE_INVALID = 0x0, + UFS_CRYPTO_KEY_SIZE_128 = 0x1, + UFS_CRYPTO_KEY_SIZE_192 = 0x2, + UFS_CRYPTO_KEY_SIZE_256 = 0x3, + UFS_CRYPTO_KEY_SIZE_512 = 0x4, +}; + +enum ufs_crypto_alg { + UFS_CRYPTO_ALG_AES_XTS = 0x0, + UFS_CRYPTO_ALG_BITLOCKER_AES_CBC = 0x1, + UFS_CRYPTO_ALG_AES_ECB = 0x2, + UFS_CRYPTO_ALG_ESSIV_AES_CBC = 0x3, +}; + +/* x-CRYPTOCAP - Crypto 
Capability X */ +union ufs_crypto_cap_entry { + __le32 reg_val; + struct { + u8 algorithm_id; + u8 sdus_mask; /* Supported data unit size mask */ + u8 key_size; + u8 reserved; + }; +}; + +#define UFS_CRYPTO_CONFIGURATION_ENABLE (1 << 7) +#define UFS_CRYPTO_KEY_MAX_SIZE 64 +/* x-CRYPTOCFG - Crypto Configuration X */ +union ufs_crypto_cfg_entry { + __le32 reg_val[32]; + struct { + u8 crypto_key[UFS_CRYPTO_KEY_MAX_SIZE]; + u8 data_unit_size; + u8 crypto_cap_idx; + u8 reserved_1; + u8 config_enable; + u8 reserved_multi_host; + u8 reserved_2; + u8 vsb[2]; + u8 reserved_3[56]; + }; +}; + /* * Request Descriptor Definitions */ @@ -339,6 +398,7 @@ enum { UTP_NATIVE_UFS_COMMAND = 0x10000000, UTP_DEVICE_MANAGEMENT_FUNCTION = 0x20000000, UTP_REQ_DESC_INT_CMD = 0x01000000, + UTP_REQ_DESC_CRYPTO_ENABLE_CMD = 0x00800000, }; /* UTP Transfer Request Data Direction (DD) */ @@ -358,6 +418,9 @@ enum { OCS_PEER_COMM_FAILURE = 0x5, OCS_ABORTED = 0x6, OCS_FATAL_ERROR = 0x7, + OCS_DEVICE_FATAL_ERROR = 0x8, + OCS_INVALID_CRYPTO_CONFIG = 0x9, + OCS_GENERAL_CRYPTO_ERROR = 0xA, OCS_INVALID_COMMAND_STATUS = 0x0F, MASK_OCS = 0x0F, }; From patchwork Mon Oct 28 07:20:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 11214745 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C56A517D5 for ; Mon, 28 Oct 2019 07:20:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 996D521783 for ; Mon, 28 Oct 2019 07:20:52 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="GKZGgMi/" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732753AbfJ1HUw (ORCPT ); Mon, 28 Oct 2019 03:20:52 -0400 Received: from mail-pl1-f202.google.com ([209.85.214.202]:34775 "EHLO mail-pl1-f202.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732751AbfJ1HUv (ORCPT ); Mon, 28 Oct 2019 03:20:51 -0400 Received: by mail-pl1-f202.google.com with SMTP id 70so5479099ple.1 for ; Mon, 28 Oct 2019 00:20:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=YuLRBT8VeKFjeFAxW/RFbwDFQDt87L5ymIHlQ4T2tKE=; b=GKZGgMi/B6ycgEcPXCNsdNf93Eck666Y5VeC5+v8EzPCvEBsbJztoAv5JnltzkNazX 5atVX3AUEQVUdaDz2uncJeVahLolQ7TfOgdHnbnBiL2vc/UWEdp2JTnu07kXYsBJji/Q Zqk9FtVOR/LLqsEcFxarl6fyZpWocebg40i8cWWDap4nd+287AALYb07uOnC7YEYNooj 2xzOttjR/b8n4HGVT2voMOb7nF5ejTFw2D9eUkDjHq4Yu7KzERECuI3aflhauYzgTXbF VxLXTO3pQCRB89qNktQpwP0M41RRJ5v9QM66CZtT9aKvxjx/lTHsQCF1c0IFEuYZa9/W ZUlA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=YuLRBT8VeKFjeFAxW/RFbwDFQDt87L5ymIHlQ4T2tKE=; b=thwNSNFdQA1nPnb5H7vKlFBgW7DEBS/n58i/D3lOIViOZxB0GUttSk1DEW7AgTLwOK is4nVyFS9uZkUXvLS+/FigR+qYvjvj+VCYstojo11A83qrf2W0uF3EBGpEHc0KDHIkQ9 LTJJcDv9acho6ZQOhMj39dlS2CjNoXT+jCitUsfiUxuYzx9scMD3rBxKJrNJY2yW/CSh vm63rm0TSu0b5TYdXz3P8RcYyKolT3n1XYqe3I5O+c0qpOcBBdHCZQbzqZHo0jArf17O uTnI67I37mjt7tHpjrcY10KnzRKf6/GkmxzUZBIrpF9QN73vUguV/KqqbYgpkpJwg9ow GTeA== X-Gm-Message-State: APjAAAVHcujLFAZU8asqn6NP32PNsf1djPj9jHe+OKkqLtFNkqFRldqp vxISmtwAdd/rYlgK/k/ZNOCIhXxuSzg= 
X-Google-Smtp-Source: APXvYqwlVuJpn+Z1ioz3x/N3OYX0dwcv7dyCP7U4ND/nJgH4yaOVqF/AQ0+ZA9A5Z6rprH0WX9QpmN/gtBc= X-Received: by 2002:a63:bc11:: with SMTP id q17mr10427556pge.223.1572247250972; Mon, 28 Oct 2019 00:20:50 -0700 (PDT) Date: Mon, 28 Oct 2019 00:20:28 -0700 In-Reply-To: <20191028072032.6911-1-satyat@google.com> Message-Id: <20191028072032.6911-6-satyat@google.com> Mime-Version: 1.0 References: <20191028072032.6911-1-satyat@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v5 5/9] scsi: ufs: UFS crypto API From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-fscrypt-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org Introduce functions to manipulate UFS inline encryption hardware in line with the JEDEC UFSHCI v2.1 specification and to work with the block keyslot manager. Signed-off-by: Satya Tangirala --- drivers/scsi/ufs/Kconfig | 9 + drivers/scsi/ufs/Makefile | 1 + drivers/scsi/ufs/ufshcd-crypto.c | 391 +++++++++++++++++++++++++++++++ drivers/scsi/ufs/ufshcd-crypto.h | 86 +++++++ drivers/scsi/ufs/ufshcd.h | 14 ++ 5 files changed, 501 insertions(+) create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig index 0b845ab7c3bf..3ee5cede823e 100644 --- a/drivers/scsi/ufs/Kconfig +++ b/drivers/scsi/ufs/Kconfig @@ -150,3 +150,12 @@ config SCSI_UFS_BSG Select this if you need a bsg device node for your UFS controller. If unsure, say N. + +config SCSI_UFS_CRYPTO + bool "UFS Crypto Engine Support" + depends on SCSI_UFSHCD && BLK_INLINE_ENCRYPTION + help + Enable Crypto Engine Support in UFS. + Enabling this makes it possible for the kernel to use the crypto + capabilities of the UFS device (if present) to perform crypto + operations on data being transferred to/from the device. 
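For orientation, the keyslot manager hookup that ufshcd-crypto.c implements
can be sketched as follows (a condensed sketch of code from the diff below,
not additional API; error handling elided):

	static const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = {
		.keyslot_program	= ufshcd_crypto_keyslot_program,
		.keyslot_evict		= ufshcd_crypto_keyslot_evict,
		.keyslot_find		= ufshcd_crypto_keyslot_find,
		.crypto_mode_supported	= ufshcd_crypto_mode_supported,
	};

	hba->ksm = keyslot_manager_create(NUM_KEYSLOTS(hba), &ufshcd_ksm_ops,
					  hba);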
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile index 2a9097939bcb..094c39989a37 100644 --- a/drivers/scsi/ufs/Makefile +++ b/drivers/scsi/ufs/Makefile @@ -11,3 +11,4 @@ obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o obj-$(CONFIG_SCSI_UFS_MEDIATEK) += ufs-mediatek.o +ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c new file mode 100644 index 000000000000..3900a07a7e9b --- /dev/null +++ b/drivers/scsi/ufs/ufshcd-crypto.c @@ -0,0 +1,391 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2019 Google LLC + */ + +#include + +#include "ufshcd.h" +#include "ufshcd-crypto.h" + +static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx) +{ + return cap_idx < hba->crypto_capabilities.num_crypto_cap; +} + +static u8 get_data_unit_size_mask(unsigned int data_unit_size) +{ + if (data_unit_size < 512 || data_unit_size > 65536 || + !is_power_of_2(data_unit_size)) + return 0; + + return data_unit_size / 512; +} + +static size_t get_keysize_bytes(enum ufs_crypto_key_size size) +{ + switch (size) { + case UFS_CRYPTO_KEY_SIZE_128: return 16; + case UFS_CRYPTO_KEY_SIZE_192: return 24; + case UFS_CRYPTO_KEY_SIZE_256: return 32; + case UFS_CRYPTO_KEY_SIZE_512: return 64; + default: return 0; + } +} + +static int ufshcd_crypto_cap_find(void *hba_p, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + struct ufs_hba *hba = hba_p; + enum ufs_crypto_alg ufs_alg; + u8 data_unit_mask; + int cap_idx; + enum ufs_crypto_key_size ufs_key_size; + union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array; + + if (!ufshcd_hba_is_crypto_supported(hba)) + return -EINVAL; + + switch (crypto_mode) { + case BLK_ENCRYPTION_MODE_AES_256_XTS: + ufs_alg = UFS_CRYPTO_ALG_AES_XTS; + ufs_key_size = UFS_CRYPTO_KEY_SIZE_256; + break; + default: return -EINVAL; + } + + data_unit_mask = get_data_unit_size_mask(data_unit_size); + + for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap; + cap_idx++) { + if (ccap_array[cap_idx].algorithm_id == ufs_alg && + (ccap_array[cap_idx].sdus_mask & data_unit_mask) && + ccap_array[cap_idx].key_size == ufs_key_size) + return cap_idx; + } + + return -EINVAL; +} + +/** + * ufshcd_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry + * + * Writes the key with the appropriate format - for AES_XTS, + * the first half of the key is copied as is, the second half is + * copied with an offset halfway into the cfg->crypto_key array. + * For the other supported crypto algs, the key is just copied. 
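+ * For example, the 64-byte AES-256-XTS key used by this driver ends up as:
+ * key[0..31] copied to crypto_key[0..31] and key[32..63] copied to
+ * crypto_key[32..63], since UFS_CRYPTO_KEY_MAX_SIZE is 64.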
+ * + * @cfg: The crypto config to write to + * @key: The key to write + * @cap: The crypto capability (which specifies the crypto alg and key size) + * + * Returns 0 on success, or -EINVAL + */ +static int ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg, + const u8 *key, + union ufs_crypto_cap_entry cap) +{ + size_t key_size_bytes = get_keysize_bytes(cap.key_size); + + if (key_size_bytes == 0) + return -EINVAL; + + switch (cap.algorithm_id) { + case UFS_CRYPTO_ALG_AES_XTS: + key_size_bytes *= 2; + if (key_size_bytes > UFS_CRYPTO_KEY_MAX_SIZE) + return -EINVAL; + + memcpy(cfg->crypto_key, key, key_size_bytes/2); + memcpy(cfg->crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2, + key + key_size_bytes/2, key_size_bytes/2); + return 0; + case UFS_CRYPTO_ALG_BITLOCKER_AES_CBC: // fallthrough + case UFS_CRYPTO_ALG_AES_ECB: // fallthrough + case UFS_CRYPTO_ALG_ESSIV_AES_CBC: + memcpy(cfg->crypto_key, key, key_size_bytes); + return 0; + } + + return -EINVAL; +} + +static void program_key(struct ufs_hba *hba, + const union ufs_crypto_cfg_entry *cfg, + int slot) +{ + int i; + u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg); + + /* Clear the dword 16 */ + ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0])); + /* Ensure that CFGE is cleared before programming the key */ + wmb(); + for (i = 0; i < 16; i++) { + ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[i]), + slot_offset + i * sizeof(cfg->reg_val[0])); + /* Spec says each dword in key must be written sequentially */ + wmb(); + } + /* Write dword 17 */ + ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[17]), + slot_offset + 17 * sizeof(cfg->reg_val[0])); + /* Dword 16 must be written last */ + wmb(); + /* Write dword 16 */ + ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[16]), + slot_offset + 16 * sizeof(cfg->reg_val[0])); + wmb(); +} + +static int ufshcd_crypto_keyslot_program(void *hba_p, const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size, + unsigned int slot) +{ + struct ufs_hba *hba = hba_p; + int err = 0; + u8 data_unit_mask; + union ufs_crypto_cfg_entry cfg; + union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs; + int cap_idx; + + cap_idx = ufshcd_crypto_cap_find(hba_p, crypto_mode, + data_unit_size); + + if (!ufshcd_is_crypto_enabled(hba) || + !ufshcd_keyslot_valid(hba, slot) || + !ufshcd_cap_idx_valid(hba, cap_idx)) + return -EINVAL; + + data_unit_mask = get_data_unit_size_mask(data_unit_size); + + if (!(data_unit_mask & hba->crypto_cap_array[cap_idx].sdus_mask)) + return -EINVAL; + + memset(&cfg, 0, sizeof(cfg)); + cfg.data_unit_size = data_unit_mask; + cfg.crypto_cap_idx = cap_idx; + cfg.config_enable |= UFS_CRYPTO_CONFIGURATION_ENABLE; + + err = ufshcd_crypto_cfg_entry_write_key(&cfg, key, + hba->crypto_cap_array[cap_idx]); + if (err) + return err; + + program_key(hba, &cfg, slot); + + memcpy(&cfg_arr[slot], &cfg, sizeof(cfg)); + memzero_explicit(&cfg, sizeof(cfg)); + + return 0; +} + +static int ufshcd_crypto_keyslot_find(void *hba_p, + const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + struct ufs_hba *hba = hba_p; + int err = 0; + int slot; + u8 data_unit_mask; + union ufs_crypto_cfg_entry cfg; + union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs; + int cap_idx; + + cap_idx = ufshcd_crypto_cap_find(hba_p, crypto_mode, + data_unit_size); + + if (!ufshcd_is_crypto_enabled(hba) || + !ufshcd_cap_idx_valid(hba, cap_idx)) + return -EINVAL; + + data_unit_mask = get_data_unit_size_mask(data_unit_size); + + if (!(data_unit_mask & 
hba->crypto_cap_array[cap_idx].sdus_mask)) + return -EINVAL; + + memset(&cfg, 0, sizeof(cfg)); + err = ufshcd_crypto_cfg_entry_write_key(&cfg, key, + hba->crypto_cap_array[cap_idx]); + + if (err) + return -EINVAL; + + for (slot = 0; slot < NUM_KEYSLOTS(hba); slot++) { + if ((cfg_arr[slot].config_enable & + UFS_CRYPTO_CONFIGURATION_ENABLE) && + data_unit_mask == cfg_arr[slot].data_unit_size && + cap_idx == cfg_arr[slot].crypto_cap_idx && + !crypto_memneq(&cfg.crypto_key, cfg_arr[slot].crypto_key, + UFS_CRYPTO_KEY_MAX_SIZE)) { + memzero_explicit(&cfg, sizeof(cfg)); + return slot; + } + } + + memzero_explicit(&cfg, sizeof(cfg)); + return -ENOKEY; +} + +static int ufshcd_crypto_keyslot_evict(void *hba_p, const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size, + unsigned int slot) +{ + struct ufs_hba *hba = hba_p; + int i = 0; + u32 reg_base; + union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs; + + if (!ufshcd_is_crypto_enabled(hba) || + !ufshcd_keyslot_valid(hba, slot)) + return -EINVAL; + + memset(&cfg_arr[slot], 0, sizeof(cfg_arr[slot])); + reg_base = hba->crypto_cfg_register + slot * sizeof(cfg_arr[0]); + + /* + * Clear the crypto cfg on the device. Clearing CFGE + * might not be sufficient, so just clear the entire cfg. + */ + for (i = 0; i < sizeof(cfg_arr[0]); i += sizeof(__le32)) + ufshcd_writel(hba, 0, reg_base + i); + wmb(); + + return 0; +} + +static bool ufshcd_crypto_mode_supported(void *hba_p, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + return ufshcd_crypto_cap_find(hba_p, crypto_mode, data_unit_size) >= 0; +} + +void ufshcd_crypto_enable(struct ufs_hba *hba) +{ + union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs; + int slot; + + if (!ufshcd_hba_is_crypto_supported(hba)) + return; + + hba->caps |= UFSHCD_CAP_CRYPTO; + /* + * Reset might clear all keys, so reprogram all the keys. + * Also serves to clear keys on driver init. + */ + for (slot = 0; slot < NUM_KEYSLOTS(hba); slot++) + program_key(hba, &cfg_arr[slot], slot); +} + +void ufshcd_crypto_disable(struct ufs_hba *hba) +{ + hba->caps &= ~UFSHCD_CAP_CRYPTO; +} + +static const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = { + .keyslot_program = ufshcd_crypto_keyslot_program, + .keyslot_evict = ufshcd_crypto_keyslot_evict, + .keyslot_find = ufshcd_crypto_keyslot_find, + .crypto_mode_supported = ufshcd_crypto_mode_supported, +}; + +/** + * ufshcd_hba_init_crypto - Read crypto capabilities, init crypto fields in hba + * @hba: Per adapter instance + * + * Returns 0 on success. Returns -ENODEV if such capabilities don't exist, and + * -ENOMEM upon OOM. + */ +int ufshcd_hba_init_crypto(struct ufs_hba *hba) +{ + int cap_idx = 0; + int err = 0; + + /* Default to disabling crypto */ + hba->caps &= ~UFSHCD_CAP_CRYPTO; + + if (!(hba->capabilities & MASK_CRYPTO_SUPPORT)) { + err = -ENODEV; + goto out; + } + + /* + * Crypto Capabilities should never be 0, because the + * config_array_ptr > 04h. So we use a 0 value to indicate that + * crypto init failed, and can't be enabled. 
+ */ + hba->crypto_capabilities.reg_val = + cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP)); + hba->crypto_cfg_register = + (u32)hba->crypto_capabilities.config_array_ptr * 0x100; + hba->crypto_cap_array = + devm_kcalloc(hba->dev, + hba->crypto_capabilities.num_crypto_cap, + sizeof(hba->crypto_cap_array[0]), + GFP_KERNEL); + if (!hba->crypto_cap_array) { + err = -ENOMEM; + goto out; + } + + hba->crypto_cfgs = + devm_kcalloc(hba->dev, + NUM_KEYSLOTS(hba), + sizeof(hba->crypto_cfgs[0]), + GFP_KERNEL); + if (!hba->crypto_cfgs) { + err = -ENOMEM; + goto out_free_cfg_mem; + } + + /* + * Store all the capabilities now so that we don't need to repeatedly + * access the device each time we want to know its capabilities + */ + for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap; + cap_idx++) { + hba->crypto_cap_array[cap_idx].reg_val = + cpu_to_le32(ufshcd_readl(hba, + REG_UFS_CRYPTOCAP + + cap_idx * sizeof(__le32))); + } + + hba->ksm = keyslot_manager_create(NUM_KEYSLOTS(hba), &ufshcd_ksm_ops, + hba); + + if (!hba->ksm) { + err = -ENOMEM; + goto out_free_crypto_cfgs; + } + + return 0; +out_free_crypto_cfgs: + devm_kfree(hba->dev, hba->crypto_cfgs); +out_free_cfg_mem: + devm_kfree(hba->dev, hba->crypto_cap_array); +out: + // TODO: print error? + /* Indicate that init failed by setting crypto_capabilities to 0 */ + hba->crypto_capabilities.reg_val = 0; + return err; +} + +void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q) +{ + if (!ufshcd_hba_is_crypto_supported(hba) || !q) + return; + + q->ksm = hba->ksm; +} + +void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q) +{ + keyslot_manager_destroy(hba->ksm); +} diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h new file mode 100644 index 000000000000..73ddc8e493fb --- /dev/null +++ b/drivers/scsi/ufs/ufshcd-crypto.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2019 Google LLC + */ + +#ifndef _UFSHCD_CRYPTO_H +#define _UFSHCD_CRYPTO_H + +struct ufs_hba; + +#ifdef CONFIG_SCSI_UFS_CRYPTO +#include + +#include "ufshci.h" + +#define NUM_KEYSLOTS(hba) (hba->crypto_capabilities.config_count + 1) + +static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot) +{ + /* + * The actual number of configurations supported is (CFGC+1), so slot + * numbers range from 0 to config_count inclusive. 
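+ * For example (hypothetical value): config_count == 31 would mean 32 valid
+ * slots, numbered 0..31.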
+ */ + return slot < NUM_KEYSLOTS(hba); +} + +static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba) +{ + return hba->crypto_capabilities.reg_val != 0; +} + +static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba) +{ + return hba->caps & UFSHCD_CAP_CRYPTO; +} + +void ufshcd_crypto_enable(struct ufs_hba *hba); + +void ufshcd_crypto_disable(struct ufs_hba *hba); + +int ufshcd_hba_init_crypto(struct ufs_hba *hba); + +void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q); + +void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q); + +#else /* CONFIG_SCSI_UFS_CRYPTO */ + +static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba, + unsigned int slot) +{ + return false; +} + +static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba) +{ + return false; +} + +static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba) +{ + return false; +} + +static inline void ufshcd_crypto_enable(struct ufs_hba *hba) { } + +static inline void ufshcd_crypto_disable(struct ufs_hba *hba) { } + +static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba) +{ + return 0; +} + +static inline void ufshcd_crypto_setup_rq_keyslot_manager( + struct ufs_hba *hba, + struct request_queue *q) { } + +static inline void ufshcd_crypto_destroy_rq_keyslot_manager( + struct ufs_hba *hba, + struct request_queue *q) { } + +#endif /* CONFIG_SCSI_UFS_CRYPTO */ + +#endif /* _UFSHCD_CRYPTO_H */ diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 8f2329b4fe79..31e64f4267dd 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -525,6 +525,11 @@ struct ufs_stats { * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for * device is known or not. * @scsi_block_reqs_cnt: reference counting for scsi block requests + * @crypto_capabilities: Content of crypto capabilities register (0x100) + * @crypto_cap_array: Array of crypto capabilities + * @crypto_cfg_register: Start of the crypto cfg array + * @crypto_cfgs: Array of crypto configurations (i.e. config for each slot) + * @ksm: the keyslot manager tied to this hba */ struct ufs_hba { void __iomem *mmio_base; @@ -735,6 +740,15 @@ struct ufs_hba { struct device bsg_dev; struct request_queue *bsg_queue; + +#ifdef CONFIG_SCSI_UFS_CRYPTO + /* crypto */ + union ufs_crypto_capabilities crypto_capabilities; + union ufs_crypto_cap_entry *crypto_cap_array; + u32 crypto_cfg_register; + union ufs_crypto_cfg_entry *crypto_cfgs; + struct keyslot_manager *ksm; +#endif /* CONFIG_SCSI_UFS_CRYPTO */ }; /* Returns true if clocks can be gated. 
Otherwise false */ From patchwork Mon Oct 28 07:20:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 11214753 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4CFC3139A for ; Mon, 28 Oct 2019 07:20:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 20DF5214AF for ; Mon, 28 Oct 2019 07:20:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="pakzYc6X" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732770AbfJ1HUz (ORCPT ); Mon, 28 Oct 2019 03:20:55 -0400 Received: from mail-pg1-f202.google.com ([209.85.215.202]:52255 "EHLO mail-pg1-f202.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732763AbfJ1HUy (ORCPT ); Mon, 28 Oct 2019 03:20:54 -0400 Received: by mail-pg1-f202.google.com with SMTP id e15so7632174pgh.19 for ; Mon, 28 Oct 2019 00:20:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=2wF1FL2um71TZXTPllMyCv0xpSXF1RkAdV1vOmnGVGY=; b=pakzYc6XGQh8JJBkU4zMyfqMOmrVrcl/WRoKpjbj4CR8/r1ll5Sc/09c+qBgJ9Vv+/ V3iFR+fwDeecGH+5OMSlK3d4Kx8A5N2zOaByVI/2sx4cKubt9L8HxWPIFLz4HYn48tou 7G6hBQ5QjfIibd/tOiR84zroVq4xYHgy6z90RcY+bL/Npfkd9zrRuj/1i+JUy99s5/v5 5YdZ+GeNEb5ejKsK29JfeykIpWe9Cw4OYbXYN+F+oghX2V/Lt7LZhViP7YZMh2CzV3xE Iv8mY0vgL68SNWXAHohbWA4rHNGA2LFfLCvg9sLN16ELQwvqWhGO/mZlC1UdWBeWR2qb sjGw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=2wF1FL2um71TZXTPllMyCv0xpSXF1RkAdV1vOmnGVGY=; b=Fn56jx+4O4cKwR4dD4b7ZMDKh9HOn1QyjWdEaKSi0SXjeLOssCf3PWE10gmKtnMwKV PKHU3Bk+ZNZQV7tnjUuv4Hl7+GQbrsGWRQW7q1e10XwbV0xP2aQxWQ9q/WMkpAjJ1npP M6AoR8sysU628Zv5vFcPk/oxULX8aDjZeKYn5UhzIDeOqKviscjVGsuc/NU5Br1paoRZ k10WCcedWSi5sPmgwAIXLA75Ay3b2SCm2WvWbtbBdlE42wpw3rREbkuKftioPOj32rfL jh9htkwBtw025Wj/lTbYNUE3az2/T8jgInSG4u0D2SgZSVTOlo/HLNZa/Kro5aKR4zF6 D+Tg== X-Gm-Message-State: APjAAAX2HQx+oz/jYrAPMhVxYMX+dwshXq987f+LaHAN0LfKqb2PnFN0 wB4gNcNMXoZ1KQLwdbMNql5vTQmJAX0= X-Google-Smtp-Source: APXvYqy3InTY+RdLpcR9tmFHR45buosecQvoYzfMkh7GKWF+l2h443+9Dqs0Id39jUvJQqmLzfTv4bBn4dQ= X-Received: by 2002:a65:4608:: with SMTP id v8mr18872025pgq.366.1572247253428; Mon, 28 Oct 2019 00:20:53 -0700 (PDT) Date: Mon, 28 Oct 2019 00:20:29 -0700 In-Reply-To: <20191028072032.6911-1-satyat@google.com> Message-Id: <20191028072032.6911-7-satyat@google.com> Mime-Version: 1.0 References: <20191028072032.6911-1-satyat@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v5 6/9] scsi: ufs: Add inline encryption support to UFS From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-fscrypt-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org Wire up ufshcd.c with the UFS Crypto API, the block layer inline encryption additions and the keyslot manager. 
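In outline (condensed from the diff below, not additional API): for each
request, ufshcd_prepare_lrbp_crypto() checks bio_crypt_should_process() and,
when inline crypto applies, stashes the keyslot and data unit number in the
LRB; ufshcd_prepare_req_desc_hdr() then fills in the UTP transfer request
descriptor header:

	dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD | lrbp->crypto_key_slot;
	req_desc->header.dword_1 = cpu_to_le32((u32)lrbp->data_unit_num);
	req_desc->header.dword_3 = cpu_to_le32((u32)(lrbp->data_unit_num >> 32));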
Signed-off-by: Satya Tangirala --- drivers/scsi/ufs/ufshcd.c | 83 ++++++++++++++++++++++++++++++++++++--- drivers/scsi/ufs/ufshcd.h | 6 +++ 2 files changed, 84 insertions(+), 5 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index e66eb7a39a02..3a1190f0c672 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -47,6 +47,7 @@ #include "unipro.h" #include "ufs-sysfs.h" #include "ufs_bsg.h" +#include "ufshcd-crypto.h" #define CREATE_TRACE_POINTS #include @@ -857,7 +858,14 @@ static void ufshcd_enable_run_stop_reg(struct ufs_hba *hba) */ static inline void ufshcd_hba_start(struct ufs_hba *hba) { - ufshcd_writel(hba, CONTROLLER_ENABLE, REG_CONTROLLER_ENABLE); + u32 val = CONTROLLER_ENABLE; + + if (ufshcd_hba_is_crypto_supported(hba)) { + ufshcd_crypto_enable(hba); + val |= CRYPTO_GENERAL_ENABLE; + } + + ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE); } /** @@ -2211,9 +2219,21 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp, dword_0 |= UTP_REQ_DESC_INT_CMD; /* Transfer request descriptor header fields */ + if (lrbp->crypto_enable) { + dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD; + dword_0 |= lrbp->crypto_key_slot; + req_desc->header.dword_1 = + cpu_to_le32((u32)lrbp->data_unit_num); + req_desc->header.dword_3 = + cpu_to_le32((u32)(lrbp->data_unit_num >> 32)); + } else { + /* dword_1 and dword_3 are reserved, hence they are set to 0 */ + req_desc->header.dword_1 = 0; + req_desc->header.dword_3 = 0; + } + req_desc->header.dword_0 = cpu_to_le32(dword_0); - /* dword_1 is reserved, hence it is set to 0 */ - req_desc->header.dword_1 = 0; + /* * assigning invalid value for command status. Controller * updates OCS on command completion, with the command @@ -2221,8 +2241,6 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp, */ req_desc->header.dword_2 = cpu_to_le32(OCS_INVALID_COMMAND_STATUS); - /* dword_3 is reserved, hence it is set to 0 */ - req_desc->header.dword_3 = 0; req_desc->prd_table_length = 0; } @@ -2382,6 +2400,37 @@ static inline u16 ufshcd_upiu_wlun_to_scsi_wlun(u8 upiu_wlun_id) return (upiu_wlun_id & ~UFS_UPIU_WLUN_ID) | SCSI_W_LUN_BASE; } +static inline int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba, + struct scsi_cmnd *cmd, + struct ufshcd_lrb *lrbp) +{ + int key_slot; + + if (!cmd->request->bio || + !bio_crypt_should_process(cmd->request->bio, cmd->request->q)) { + lrbp->crypto_enable = false; + return 0; + } + + if (WARN_ON(!ufshcd_is_crypto_enabled(hba))) { + /* + * Upper layer asked us to do inline encryption + * but that isn't enabled, so we fail this request. + */ + return -EINVAL; + } + key_slot = bio_crypt_get_keyslot(cmd->request->bio); + if (!ufshcd_keyslot_valid(hba, key_slot)) + return -EINVAL; + + lrbp->crypto_enable = true; + lrbp->crypto_key_slot = key_slot; + lrbp->data_unit_num = bio_crypt_data_unit_num(cmd->request->bio); + + return 0; +} + + /** * ufshcd_queuecommand - main entry point for SCSI requests * @host: SCSI host pointer @@ -2469,6 +2518,13 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) lrbp->task_tag = tag; lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun); lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? 
true : false; + + err = ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp); + if (err) { + lrbp->cmd = NULL; + clear_bit_unlock(tag, &hba->lrb_in_use); + goto out; + } lrbp->req_abort_skip = false; ufshcd_comp_scsi_upiu(hba, lrbp); @@ -2502,6 +2558,7 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba, lrbp->task_tag = tag; lrbp->lun = 0; /* device management cmd is not specific to any LUN */ lrbp->intr_cmd = true; /* No interrupt aggregation */ + lrbp->crypto_enable = false; /* No crypto operations */ hba->dev_cmd.type = cmd_type; return ufshcd_comp_devman_upiu(hba, lrbp); @@ -4229,6 +4286,8 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep) { int err; + ufshcd_crypto_disable(hba); + ufshcd_writel(hba, CONTROLLER_DISABLE, REG_CONTROLLER_ENABLE); err = ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE, CONTROLLER_ENABLE, CONTROLLER_DISABLE, @@ -4632,8 +4691,12 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth) static int ufshcd_slave_configure(struct scsi_device *sdev) { struct request_queue *q = sdev->request_queue; + struct ufs_hba *hba = shost_priv(sdev->host); blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1); + + ufshcd_crypto_setup_rq_keyslot_manager(hba, q); + return 0; } @@ -4644,6 +4707,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev) static void ufshcd_slave_destroy(struct scsi_device *sdev) { struct ufs_hba *hba; + struct request_queue *q = sdev->request_queue; hba = shost_priv(sdev->host); /* Drop the reference as it won't be needed anymore */ @@ -4654,6 +4718,8 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev) hba->sdev_ufs_device = NULL; spin_unlock_irqrestore(hba->host->host_lock, flags); } + + ufshcd_crypto_destroy_rq_keyslot_manager(hba, q); } /** @@ -8380,6 +8446,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq) /* Reset the attached device */ ufshcd_vops_device_reset(hba); + /* Init crypto */ + err = ufshcd_hba_init_crypto(hba); + if (err) { + dev_err(hba->dev, "crypto setup failed\n"); + goto out_remove_scsi_host; + } + /* Host controller enable */ err = ufshcd_hba_enable(hba); if (err) { diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 31e64f4267dd..a106b45f8358 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -167,6 +167,9 @@ struct ufs_pm_lvl_states { * @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation) * @issue_time_stamp: time stamp for debug purposes * @compl_time_stamp: time stamp for statistics + * @crypto_enable: whether or not the request needs inline crypto operations + * @crypto_key_slot: the key slot to use for inline crypto + * @data_unit_num: the data unit number for the first block for inline crypto * @req_abort_skip: skip request abort task flag */ struct ufshcd_lrb { @@ -191,6 +194,9 @@ struct ufshcd_lrb { bool intr_cmd; ktime_t issue_time_stamp; ktime_t compl_time_stamp; + bool crypto_enable; + u8 crypto_key_slot; + u64 data_unit_num; bool req_abort_skip; };

From patchwork Mon Oct 28 07:20:30 2019
Subject: [PATCH v5 7/9] fscrypt: add inline encryption support
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala , Eric Biggers
Date: Mon, 28 Oct 2019 00:20:30 -0700
Message-Id: <20191028072032.6911-8-satyat@google.com>
In-Reply-To: <20191028072032.6911-1-satyat@google.com>

Add support for inline encryption to fs/crypto/. With "inline encryption", the block layer handles the decryption/encryption as part of the bio, instead of the filesystem doing the crypto itself via Linux's crypto API. This model is needed in order to take advantage of the inline encryption hardware present on most modern mobile SoCs. To use inline encryption, the filesystem needs to be mounted with '-o inlinecrypt'. The contents of any AES-256-XTS encrypted files will then be encrypted using blk-crypto, instead of using the traditional filesystem-layer crypto.
fscrypt still provides the key and IV to use, and the actual ciphertext on-disk is still the same; therefore it's testable using the existing fscrypt ciphertext verification tests. Note that since blk-crypto has a fallback to Linux's crypto API, this feature is usable and testable even without actual inline encryption hardware. Per-filesystem changes will be needed to set encryption contexts when submitting bios and to implement the 'inlinecrypt' mount option. This patch just adds the common code. Co-developed-by: Eric Biggers Signed-off-by: Eric Biggers Signed-off-by: Satya Tangirala --- fs/crypto/Kconfig | 6 + fs/crypto/Makefile | 1 + fs/crypto/bio.c | 31 ++- fs/crypto/fscrypt_private.h | 72 +++++++ fs/crypto/inline_crypt.c | 390 ++++++++++++++++++++++++++++++++++++ fs/crypto/keyring.c | 2 + fs/crypto/keysetup.c | 18 +- include/linux/fscrypt.h | 60 ++++++ 8 files changed, 566 insertions(+), 14 deletions(-) create mode 100644 fs/crypto/inline_crypt.c diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig index ff5a1746cbae..5061aa546202 100644 --- a/fs/crypto/Kconfig +++ b/fs/crypto/Kconfig @@ -16,3 +16,9 @@ config FS_ENCRYPTION efficient since it avoids caching the encrypted and decrypted pages in the page cache. Currently Ext4, F2FS and UBIFS make use of this feature. + +config FS_ENCRYPTION_INLINE_CRYPT + bool "Enable fscrypt to use inline crypto" + depends on FS_ENCRYPTION && BLK_INLINE_ENCRYPTION + help + Enable fscrypt to use inline encryption hardware if available. diff --git a/fs/crypto/Makefile b/fs/crypto/Makefile index 232e2bb5a337..652c7180ec6d 100644 --- a/fs/crypto/Makefile +++ b/fs/crypto/Makefile @@ -11,3 +11,4 @@ fscrypto-y := crypto.o \ policy.o fscrypto-$(CONFIG_BLOCK) += bio.o +fscrypto-$(CONFIG_FS_ENCRYPTION_INLINE_CRYPT) += inline_crypt.o diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c index 1f4b8a277060..956798debf71 100644 --- a/fs/crypto/bio.c +++ b/fs/crypto/bio.c @@ -46,26 +46,38 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk, { const unsigned int blockbits = inode->i_blkbits; const unsigned int blocksize = 1 << blockbits; + const bool inlinecrypt = fscrypt_inode_uses_inline_crypto(inode); struct page *ciphertext_page; struct bio *bio; int ret, err = 0; - ciphertext_page = fscrypt_alloc_bounce_page(GFP_NOWAIT); - if (!ciphertext_page) - return -ENOMEM; + if (inlinecrypt) { + ciphertext_page = ZERO_PAGE(0); + } else { + ciphertext_page = fscrypt_alloc_bounce_page(GFP_NOWAIT); + if (!ciphertext_page) + return -ENOMEM; + } while (len--) { - err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk, - ZERO_PAGE(0), ciphertext_page, - blocksize, 0, GFP_NOFS); - if (err) - goto errout; + if (!inlinecrypt) { + err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk, + ZERO_PAGE(0), ciphertext_page, + blocksize, 0, GFP_NOFS); + if (err) + goto errout; + } bio = bio_alloc(GFP_NOWAIT, 1); if (!bio) { err = -ENOMEM; goto errout; } + err = fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOIO); + if (err) { + bio_put(bio); + goto errout; + } bio_set_dev(bio, inode->i_sb->s_bdev); bio->bi_iter.bi_sector = pblk << (blockbits - 9); bio_set_op_attrs(bio, REQ_OP_WRITE, 0); @@ -87,7 +99,8 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk, } err = 0; errout: - fscrypt_free_bounce_page(ciphertext_page); + if (!inlinecrypt) + fscrypt_free_bounce_page(ciphertext_page); return err; } EXPORT_SYMBOL(fscrypt_zeroout_range); diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h index b44e445b43a8..c731bd4245c5 100644 --- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h @@ -13,6 +13,9 @@ #include #include +#include + +struct fscrypt_master_key; #define CONST_STRLEN(str) (sizeof(str) - 1) @@ -163,6 +166,14 @@ struct fscrypt_info { /* The actual crypto transform used for encryption and decryption */ struct crypto_skcipher *ci_ctfm; +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT + /* + * The raw key for inline encryption, if this file is using inline + * encryption rather than the traditional filesystem layer encryption. + */ + const u8 *ci_inline_crypt_key; +#endif + /* True if the key should be freed when this fscrypt_info is freed */ bool ci_owns_key; @@ -293,6 +304,54 @@ extern int fscrypt_hkdf_expand(struct fscrypt_hkdf *hkdf, u8 context, extern void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf); +/* inline_crypt.c */ +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT +extern bool fscrypt_should_use_inline_encryption(const struct fscrypt_info *ci); + +extern int fscrypt_set_inline_crypt_key(struct fscrypt_info *ci, + const u8 *derived_key); + +extern void fscrypt_free_inline_crypt_key(struct fscrypt_info *ci); + +extern int fscrypt_setup_per_mode_inline_crypt_key( + struct fscrypt_info *ci, + struct fscrypt_master_key *mk); + +extern void fscrypt_evict_inline_crypt_keys(struct fscrypt_master_key *mk); + +#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ + +static inline bool fscrypt_should_use_inline_encryption( + const struct fscrypt_info *ci) +{ + return false; +} + +static inline int fscrypt_set_inline_crypt_key(struct fscrypt_info *ci, + const u8 *derived_key) +{ + WARN_ON(1); + return -EOPNOTSUPP; +} + +static inline void fscrypt_free_inline_crypt_key(struct fscrypt_info *ci) +{ +} + +static inline int fscrypt_setup_per_mode_inline_crypt_key( + struct fscrypt_info *ci, + struct fscrypt_master_key *mk) +{ + WARN_ON(1); + return -EOPNOTSUPP; +} + +static inline void fscrypt_evict_inline_crypt_keys( + struct fscrypt_master_key *mk) +{ +} +#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ + /* keyring.c */ /* @@ -391,6 +450,16 @@ struct fscrypt_master_key { */ struct crypto_skcipher *mk_iv_ino_lblk_64_tfms[__FSCRYPT_MODE_MAX + 1]; +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT + /* Raw keys for IV_INO_LBLK_64 policies, allocated on-demand */ + u8 *mk_iv_ino_lblk_64_raw_keys[__FSCRYPT_MODE_MAX + 1]; + + /* The data unit size being used for inline encryption */ + unsigned int mk_data_unit_size; + + /* The filesystem's block device */ + struct block_device *mk_bdev; +#endif } __randomize_layout; static inline bool @@ -445,9 +514,12 @@ struct fscrypt_mode { const char *cipher_str; int keysize; int ivsize; + enum blk_crypto_mode_num blk_crypto_mode; bool logged_impl_name; }; +extern struct fscrypt_mode fscrypt_modes[]; + static inline bool fscrypt_mode_supports_direct_key(const struct fscrypt_mode *mode) { diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c new file mode 100644 index 000000000000..e41c6d66ff0d --- /dev/null +++ b/fs/crypto/inline_crypt.c @@ -0,0 +1,390 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Inline encryption support for fscrypt + * + * Copyright 2019 Google LLC + */ + +/* + * With "inline encryption", the block layer handles the decryption/encryption + * as part of the bio, instead of the filesystem doing the crypto itself via + * crypto API. See Documentation/block/inline-encryption.rst. fscrypt still + * provides the key and IV to use. 
+ */ + +#include +#include +#include +#include + +#include "fscrypt_private.h" + +/* Return true iff inline encryption should be used for this file */ +bool fscrypt_should_use_inline_encryption(const struct fscrypt_info *ci) +{ + const struct inode *inode = ci->ci_inode; + struct super_block *sb = inode->i_sb; + + /* The file must need contents encryption, not filenames encryption */ + if (!S_ISREG(inode->i_mode)) + return false; + + /* blk-crypto must implement the needed encryption algorithm */ + if (ci->ci_mode->blk_crypto_mode == BLK_ENCRYPTION_MODE_INVALID) + return false; + + /* DIRECT_KEY needs a 24+ byte IV, so it can't work with 8-byte DUNs */ + if (fscrypt_is_direct_key_policy(&ci->ci_policy)) + return false; + + /* The filesystem must be mounted with -o inlinecrypt */ + if (!sb->s_cop->inline_crypt_enabled || + !sb->s_cop->inline_crypt_enabled(sb)) + return false; + + return true; +} + +/* Set a per-file inline encryption key (for passing to blk-crypto) */ +int fscrypt_set_inline_crypt_key(struct fscrypt_info *ci, const u8 *derived_key) +{ + const struct fscrypt_mode *mode = ci->ci_mode; + const struct super_block *sb = ci->ci_inode->i_sb; + + ci->ci_inline_crypt_key = kmemdup(derived_key, mode->keysize, GFP_NOFS); + if (!ci->ci_inline_crypt_key) + return -ENOMEM; + ci->ci_owns_key = true; + + return blk_crypto_start_using_mode(mode->blk_crypto_mode, + sb->s_blocksize, + sb->s_bdev->bd_queue); +} + +/* Free a per-file inline encryption key and evict it from blk-crypto */ +void fscrypt_free_inline_crypt_key(struct fscrypt_info *ci) +{ + if (ci->ci_inline_crypt_key != NULL) { + const struct fscrypt_mode *mode = ci->ci_mode; + const struct super_block *sb = ci->ci_inode->i_sb; + + blk_crypto_evict_key(sb->s_bdev->bd_queue, + ci->ci_inline_crypt_key, + mode->blk_crypto_mode, sb->s_blocksize); + kzfree(ci->ci_inline_crypt_key); + } +} + +/* + * Set up ->inline_crypt_key (for passing to blk-crypto) for inodes which use an + * IV_INO_LBLK_64 encryption policy. 
+ * + * Return: 0 on success, -errno on failure + */ +int fscrypt_setup_per_mode_inline_crypt_key(struct fscrypt_info *ci, + struct fscrypt_master_key *mk) +{ + static DEFINE_MUTEX(inline_crypt_setup_mutex); + const struct super_block *sb = ci->ci_inode->i_sb; + struct block_device *bdev = sb->s_bdev; + const struct fscrypt_mode *mode = ci->ci_mode; + const u8 mode_num = mode - fscrypt_modes; + u8 *raw_key; + u8 hkdf_info[sizeof(mode_num) + sizeof(sb->s_uuid)]; + int err; + + if (WARN_ON(mode_num > __FSCRYPT_MODE_MAX)) + return -EINVAL; + + /* pairs with smp_store_release() below */ + raw_key = smp_load_acquire(&mk->mk_iv_ino_lblk_64_raw_keys[mode_num]); + if (raw_key) { + err = 0; + goto out; + } + + mutex_lock(&inline_crypt_setup_mutex); + + raw_key = mk->mk_iv_ino_lblk_64_raw_keys[mode_num]; + if (raw_key) { + err = 0; + goto out_unlock; + } + + raw_key = kmalloc(mode->keysize, GFP_NOFS); + if (!raw_key) { + err = -ENOMEM; + goto out_unlock; + } + + BUILD_BUG_ON(sizeof(mode_num) != 1); + BUILD_BUG_ON(sizeof(sb->s_uuid) != 16); + BUILD_BUG_ON(sizeof(hkdf_info) != 17); + hkdf_info[0] = mode_num; + memcpy(&hkdf_info[1], &sb->s_uuid, sizeof(sb->s_uuid)); + + err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf, + HKDF_CONTEXT_IV_INO_LBLK_64_KEY, + hkdf_info, sizeof(hkdf_info), + raw_key, mode->keysize); + if (err) + goto out_unlock; + + err = blk_crypto_start_using_mode(mode->blk_crypto_mode, + sb->s_blocksize, bdev->bd_queue); + if (err) + goto out_unlock; + + /* + * When a master key's first inline encryption key is set up, save a + * reference to the filesystem's block device so that the inline + * encryption keys can be evicted when the master key is destroyed. + */ + if (!mk->mk_bdev) { + mk->mk_bdev = bdgrab(bdev); + mk->mk_data_unit_size = sb->s_blocksize; + } + + /* pairs with smp_load_acquire() above */ + smp_store_release(&mk->mk_iv_ino_lblk_64_raw_keys[mode_num], raw_key); + err = 0; +out_unlock: + mutex_unlock(&inline_crypt_setup_mutex); +out: + if (err == 0) { + ci->ci_inline_crypt_key = raw_key; + /* + * Since each struct fscrypt_master_key belongs to a particular + * filesystem (a struct super_block), there should be only one + * block device, and only one data unit size as it should equal + * the filesystem's blocksize (i.e. s_blocksize). + */ + if (WARN_ON(mk->mk_bdev != bdev)) + err = -EINVAL; + if (WARN_ON(mk->mk_data_unit_size != sb->s_blocksize)) + err = -EINVAL; + } else { + kzfree(raw_key); + } + return err; +} + +/* + * Evict per-mode inline encryption keys from blk-crypto when a master key is + * destroyed. + */ +void fscrypt_evict_inline_crypt_keys(struct fscrypt_master_key *mk) +{ + struct block_device *bdev = mk->mk_bdev; + size_t i; + + if (!bdev) /* No inline encryption keys? */ + return; + + for (i = 0; i < ARRAY_SIZE(mk->mk_iv_ino_lblk_64_raw_keys); i++) { + u8 *raw_key = mk->mk_iv_ino_lblk_64_raw_keys[i]; + + if (raw_key != NULL) { + blk_crypto_evict_key(bdev->bd_queue, raw_key, + fscrypt_modes[i].blk_crypto_mode, + mk->mk_data_unit_size); + kzfree(raw_key); + } + } + bdput(bdev); +} + +/** + * fscrypt_inode_uses_inline_crypto - test whether an inode uses inline encryption + * @inode: an inode + * + * Return: true if the inode requires file contents encryption and if the + * encryption should be done in the block layer via blk-crypto rather + * than in the filesystem layer. 
+ */ +bool fscrypt_inode_uses_inline_crypto(const struct inode *inode) +{ + return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && + inode->i_crypt_info->ci_inline_crypt_key != NULL; +} +EXPORT_SYMBOL_GPL(fscrypt_inode_uses_inline_crypto); + +/** + * fscrypt_inode_uses_fs_layer_crypto - test whether an inode uses fs-layer encryption + * @inode: an inode + * + * Return: true if the inode requires file contents encryption and if the + * encryption should be done in the filesystem layer rather than in the + * block layer via blk-crypto. + */ +bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode) +{ + return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && + inode->i_crypt_info->ci_inline_crypt_key == NULL; +} +EXPORT_SYMBOL_GPL(fscrypt_inode_uses_fs_layer_crypto); + +static inline u64 fscrypt_generate_dun(const struct fscrypt_info *ci, + u64 lblk_num) +{ + union fscrypt_iv iv; + + fscrypt_generate_iv(&iv, lblk_num, ci); + /* + * fscrypt_should_use_inline_encryption() ensures we never get here if + * more than the first 8 bytes of the IV are nonzero. + */ + BUG_ON(memchr_inv(&iv.raw[8], 0, ci->ci_mode->ivsize - 8)); + return le64_to_cpu(iv.lblk_num); +} + +/** + * fscrypt_set_bio_crypt_ctx - prepare a file contents bio for inline encryption + * @bio: a bio which will eventually be submitted to the file + * @inode: the file's inode + * @first_lblk: the first file logical block number in the I/O + * @gfp_mask: memory allocation flags + * + * If the contents of the file should be encrypted (or decrypted) with inline + * encryption, then assign the appropriate encryption context to the bio. + * + * Normally the bio should be newly allocated (i.e. no pages added yet), as + * otherwise fscrypt_mergeable_bio() won't work as intended. + * + * The encryption context will be freed automatically when the bio is freed. + * + * Return: 0 on success, -errno on failure. If __GFP_NOFAIL is specified, this + * is guaranteed to succeed. + */ +int fscrypt_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode, + u64 first_lblk, gfp_t gfp_mask) +{ + const struct fscrypt_info *ci = inode->i_crypt_info; + u64 dun; + + if (!fscrypt_inode_uses_inline_crypto(inode)) + return 0; + + dun = fscrypt_generate_dun(ci, first_lblk); + + return bio_crypt_set_ctx(bio, ci->ci_inline_crypt_key, + ci->ci_mode->blk_crypto_mode, + dun, inode->i_blkbits, gfp_mask); +} +EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx); + +/* Extract the inode and logical block number from a buffer_head. */ +static bool bh_get_inode_and_lblk_num(const struct buffer_head *bh, + const struct inode **inode_ret, + u64 *lblk_num_ret) +{ + struct page *page = bh->b_page; + const struct address_space *mapping; + const struct inode *inode; + + /* + * The ext4 journal (jbd2) can submit a buffer_head it directly created + * for a non-pagecache page. fscrypt doesn't care about these. + */ + mapping = page_mapping(page); + if (!mapping) + return false; + inode = mapping->host; + + *inode_ret = inode; + *lblk_num_ret = ((u64)page->index << (PAGE_SHIFT - inode->i_blkbits)) + + (bh_offset(bh) >> inode->i_blkbits); + return true; +} + +/** + * fscrypt_set_bio_crypt_ctx_bh - prepare a file contents bio for inline encryption + * @bio: a bio which will eventually be submitted to the file + * @first_bh: the first buffer_head for which I/O will be submitted + * @gfp_mask: memory allocation flags + * + * Same as fscrypt_set_bio_crypt_ctx(), except this takes a buffer_head instead + * of an inode and block number directly. 
+ * + * Return: 0 on success, -errno on failure + */ +int fscrypt_set_bio_crypt_ctx_bh(struct bio *bio, + const struct buffer_head *first_bh, + gfp_t gfp_mask) +{ + const struct inode *inode; + u64 first_lblk; + + if (!bh_get_inode_and_lblk_num(first_bh, &inode, &first_lblk)) + return 0; + + return fscrypt_set_bio_crypt_ctx(bio, inode, first_lblk, gfp_mask); +} +EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx_bh); + +/** + * fscrypt_mergeable_bio - test whether data can be added to a bio + * @bio: the bio being built up + * @inode: the inode for the next part of the I/O + * @next_lblk: the next file logical block number in the I/O + * + * When building a bio which may contain data which should undergo inline + * encryption (or decryption) via fscrypt, filesystems should call this function + * to ensure that the resulting bio contains only logically contiguous data. + * This will return false if the next part of the I/O cannot be merged with the + * bio because either the encryption key would be different or the encryption + * data unit numbers would be discontiguous. + * + * fscrypt_set_bio_crypt_ctx() must have already been called on the bio. + * + * Return: true iff the I/O is mergeable + */ +bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode, + u64 next_lblk) +{ + const struct bio_crypt_ctx *bc; + const u8 *next_key; + u64 next_dun; + + if (bio_has_crypt_ctx(bio) != fscrypt_inode_uses_inline_crypto(inode)) + return false; + if (!bio_has_crypt_ctx(bio)) + return true; + bc = bio->bi_crypt_context; + next_key = inode->i_crypt_info->ci_inline_crypt_key; + next_dun = fscrypt_generate_dun(inode->i_crypt_info, next_lblk); + + /* + * Comparing the key pointers is good enough, as all I/O for each key + * uses the same pointer. I.e., there's currently no need to support + * merging requests where the keys are the same but the pointers differ. + */ + return next_key == bc->raw_key && + next_dun == bc->data_unit_num + + (bio_sectors(bio) >> + (bc->data_unit_size_bits - SECTOR_SHIFT)); +} +EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio); + +/** + * fscrypt_mergeable_bio_bh - test whether data can be added to a bio + * @bio: the bio being built up + * @next_bh: the next buffer_head for which I/O will be submitted + * + * Same as fscrypt_mergeable_bio(), except this takes a buffer_head instead of + * an inode and block number directly. 
+ * + * Return: true iff the I/O is mergeable + */ +bool fscrypt_mergeable_bio_bh(struct bio *bio, + const struct buffer_head *next_bh) +{ + const struct inode *inode; + u64 next_lblk; + + if (!bh_get_inode_and_lblk_num(next_bh, &inode, &next_lblk)) + return !bio_has_crypt_ctx(bio); + + return fscrypt_mergeable_bio(bio, inode, next_lblk); +} +EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio_bh); diff --git a/fs/crypto/keyring.c b/fs/crypto/keyring.c index 040df1f5e1c8..7788adfa2dc4 100644 --- a/fs/crypto/keyring.c +++ b/fs/crypto/keyring.c @@ -48,6 +48,8 @@ static void free_master_key(struct fscrypt_master_key *mk) crypto_free_skcipher(mk->mk_iv_ino_lblk_64_tfms[i]); } + fscrypt_evict_inline_crypt_keys(mk); + key_put(mk->mk_users); kzfree(mk); } diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c index f87ab930b92a..8070dad9a541 100644 --- a/fs/crypto/keysetup.c +++ b/fs/crypto/keysetup.c @@ -13,12 +13,13 @@ #include "fscrypt_private.h" -static struct fscrypt_mode available_modes[] = { +struct fscrypt_mode fscrypt_modes[] = { [FSCRYPT_MODE_AES_256_XTS] = { .friendly_name = "AES-256-XTS", .cipher_str = "xts(aes)", .keysize = 64, .ivsize = 16, + .blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS, }, [FSCRYPT_MODE_AES_256_CTS] = { .friendly_name = "AES-256-CTS-CBC", @@ -51,10 +52,10 @@ select_encryption_mode(const union fscrypt_policy *policy, const struct inode *inode) { if (S_ISREG(inode->i_mode)) - return &available_modes[fscrypt_policy_contents_mode(policy)]; + return &fscrypt_modes[fscrypt_policy_contents_mode(policy)]; if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode)) - return &available_modes[fscrypt_policy_fnames_mode(policy)]; + return &fscrypt_modes[fscrypt_policy_fnames_mode(policy)]; WARN_ONCE(1, "fscrypt: filesystem tried to load encryption info for inode %lu, which is not encryptable (file type %d)\n", inode->i_ino, (inode->i_mode & S_IFMT)); @@ -111,6 +112,9 @@ int fscrypt_set_derived_key(struct fscrypt_info *ci, const u8 *derived_key) { struct crypto_skcipher *tfm; + if (fscrypt_should_use_inline_encryption(ci)) + return fscrypt_set_inline_crypt_key(ci, derived_key); + tfm = fscrypt_allocate_skcipher(ci->ci_mode, derived_key, ci->ci_inode); if (IS_ERR(tfm)) return PTR_ERR(tfm); @@ -128,7 +132,7 @@ static int setup_per_mode_key(struct fscrypt_info *ci, const struct inode *inode = ci->ci_inode; const struct super_block *sb = inode->i_sb; struct fscrypt_mode *mode = ci->ci_mode; - u8 mode_num = mode - available_modes; + const u8 mode_num = mode - fscrypt_modes; struct crypto_skcipher *tfm, *prev_tfm; u8 mode_key[FSCRYPT_MAX_KEY_SIZE]; u8 hkdf_info[sizeof(mode_num) + sizeof(sb->s_uuid)]; @@ -204,6 +208,8 @@ static int fscrypt_setup_v2_file_key(struct fscrypt_info *ci, * the IVs. This format is optimized for use with inline * encryption hardware compliant with the UFS or eMMC standards. 
*/ + if (fscrypt_should_use_inline_encryption(ci)) + return fscrypt_setup_per_mode_inline_crypt_key(ci, mk); return setup_per_mode_key(ci, mk, mk->mk_iv_ino_lblk_64_tfms, HKDF_CONTEXT_IV_INO_LBLK_64_KEY, true); @@ -330,8 +336,10 @@ static void put_crypt_info(struct fscrypt_info *ci) if (ci->ci_direct_key) fscrypt_put_direct_key(ci->ci_direct_key); - else if (ci->ci_owns_key) + else if (ci->ci_owns_key) { crypto_free_skcipher(ci->ci_ctfm); + fscrypt_free_inline_crypt_key(ci); + } key = ci->ci_master_key; if (key) { diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h index 1a7bffe78ed5..9583837ca37b 100644 --- a/include/linux/fscrypt.h +++ b/include/linux/fscrypt.h @@ -64,6 +64,7 @@ struct fscrypt_operations { bool (*has_stable_inodes)(struct super_block *sb); void (*get_ino_and_lblk_bits)(struct super_block *sb, int *ino_bits_ret, int *lblk_bits_ret); + bool (*inline_crypt_enabled)(struct super_block *sb); }; static inline bool fscrypt_has_encryption_key(const struct inode *inode) @@ -529,6 +530,65 @@ static inline void fscrypt_set_ops(struct super_block *sb, #endif /* !CONFIG_FS_ENCRYPTION */ +/* inline_crypt.c */ +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT +extern bool fscrypt_inode_uses_inline_crypto(const struct inode *inode); + +extern bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode); + +extern int fscrypt_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode, + u64 first_lblk, gfp_t gfp_mask); + +extern int fscrypt_set_bio_crypt_ctx_bh(struct bio *bio, + const struct buffer_head *first_bh, + gfp_t gfp_mask); + +extern bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode, + u64 next_lblk); + +extern bool fscrypt_mergeable_bio_bh(struct bio *bio, + const struct buffer_head *next_bh); + +#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ +static inline bool fscrypt_inode_uses_inline_crypto(const struct inode *inode) +{ + return false; +} + +static inline bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode) +{ + return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode); +} + +static inline int fscrypt_set_bio_crypt_ctx(struct bio *bio, + const struct inode *inode, + u64 first_lblk, gfp_t gfp_mask) +{ + return 0; +} + +static inline int fscrypt_set_bio_crypt_ctx_bh( + struct bio *bio, + const struct buffer_head *first_bh, + gfp_t gfp_mask) +{ + return 0; +} + +static inline bool fscrypt_mergeable_bio(struct bio *bio, + const struct inode *inode, + u64 next_lblk) +{ + return true; +} + +static inline bool fscrypt_mergeable_bio_bh(struct bio *bio, + const struct buffer_head *next_bh) +{ + return true; +} +#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ + /** * fscrypt_require_key - require an inode's encryption key * @inode: the inode we need the key for

From patchwork Mon Oct 28 07:20:31 2019
Subject: [PATCH v5 8/9] f2fs: add inline encryption support
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala , Eric Biggers
Date: Mon, 28 Oct 2019 00:20:31 -0700
Message-Id: <20191028072032.6911-9-satyat@google.com>
In-Reply-To: <20191028072032.6911-1-satyat@google.com>

Wire up f2fs to support inline encryption via the helper functions which fs/crypto/ now provides. This includes:
- Adding a mount option 'inlinecrypt' which enables inline encryption on encrypted files where it can be used.
- Setting the bio_crypt_ctx on bios that will be submitted to an inline-encrypted file.
- Not adding logically discontiguous data to bios that will be submitted to an inline-encrypted file.
- Not doing filesystem-layer crypto on inline-encrypted files.
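As a concrete sketch of the second and third bullets, this is the pattern a filesystem follows with the new fs/crypto/ helpers (the helper below is hypothetical, not actual f2fs code; it assumes block size == PAGE_SIZE so a page index equals a logical block number, and it omits the error checking the real code below performs):

static struct bio *fs_grab_bio_for_page(struct inode *inode, pgoff_t index,
					struct bio *bio)
{
	/* Submit the current bio if the next page can't be merged into it,
	 * e.g. because its data unit number would be discontiguous. */
	if (bio && !fscrypt_mergeable_bio(bio, inode, index)) {
		submit_bio(bio);
		bio = NULL;
	}
	if (!bio) {
		bio = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
		/* Attach the inline encryption context before any pages are
		 * added; this is a no-op for inodes that don't use inline
		 * crypto. */
		fscrypt_set_bio_crypt_ctx(bio, inode, index, GFP_NOFS);
	}
	return bio;
}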
Co-developed-by: Eric Biggers Signed-off-by: Eric Biggers Signed-off-by: Satya Tangirala Acked-by: Jaegeuk Kim --- fs/f2fs/data.c | 76 +++++++++++++++++++++++++++++++++++++++++++------ fs/f2fs/f2fs.h | 3 ++ fs/f2fs/super.c | 20 +++++++++++++ 3 files changed, 91 insertions(+), 8 deletions(-) diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index 5755e897a5f0..b5a7b540e630 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -306,6 +306,35 @@ static struct bio *__bio_alloc(struct f2fs_io_info *fio, int npages) return bio; } +static int f2fs_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode, + pgoff_t first_idx, + const struct f2fs_io_info *fio, + gfp_t gfp_mask) +{ + /* + * The f2fs garbage collector sets ->encrypted_page when it wants to + * read/write raw data without encryption. + */ + if (fio && fio->encrypted_page) + return 0; + + return fscrypt_set_bio_crypt_ctx(bio, inode, first_idx, gfp_mask); +} + +static bool f2fs_crypt_mergeable_bio(struct bio *bio, const struct inode *inode, + pgoff_t next_idx, + const struct f2fs_io_info *fio) +{ + /* + * The f2fs garbage collector sets ->encrypted_page when it wants to + * read/write raw data without encryption. + */ + if (fio && fio->encrypted_page) + return true; + + return fscrypt_mergeable_bio(bio, inode, next_idx); +} + static inline void __submit_bio(struct f2fs_sb_info *sbi, struct bio *bio, enum page_type type) { @@ -477,6 +506,7 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio) struct bio *bio; struct page *page = fio->encrypted_page ? fio->encrypted_page : fio->page; + int err; if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr, fio->is_por ? META_POR : (__is_meta_io(fio) ? @@ -489,6 +519,13 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio) /* Allocate a new bio */ bio = __bio_alloc(fio, 1); + err = f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host, + fio->page->index, fio, GFP_NOIO); + if (err) { + bio_put(bio); + return err; + } + if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) { bio_put(bio); return -EFAULT; @@ -556,14 +593,19 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio) trace_f2fs_submit_page_bio(page, fio); f2fs_trace_ios(fio, 0); - if (bio && !page_is_mergeable(fio->sbi, bio, *fio->last_block, - fio->new_blkaddr)) { + if (bio && (!page_is_mergeable(fio->sbi, bio, *fio->last_block, + fio->new_blkaddr) || + !f2fs_crypt_mergeable_bio(bio, fio->page->mapping->host, + fio->page->index, fio))) { __submit_bio(fio->sbi, bio, fio->type); bio = NULL; } alloc_new: if (!bio) { bio = __bio_alloc(fio, BIO_MAX_PAGES); + f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host, + fio->page->index, fio, + GFP_NOIO | __GFP_NOFAIL); bio_set_op_attrs(bio, fio->op, fio->op_flags); } @@ -629,8 +671,11 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio) inc_page_count(sbi, WB_DATA_TYPE(bio_page)); - if (io->bio && !io_is_mergeable(sbi, io->bio, io, fio, - io->last_block_in_bio, fio->new_blkaddr)) + if (io->bio && + (!io_is_mergeable(sbi, io->bio, io, fio, io->last_block_in_bio, + fio->new_blkaddr) || + !f2fs_crypt_mergeable_bio(io->bio, fio->page->mapping->host, + fio->page->index, fio))) __submit_merged_bio(io); alloc_new: if (io->bio == NULL) { @@ -642,6 +687,9 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio) goto skip; } io->bio = __bio_alloc(fio, BIO_MAX_PAGES); + f2fs_set_bio_crypt_ctx(io->bio, fio->page->mapping->host, + fio->page->index, fio, + GFP_NOIO | __GFP_NOFAIL); io->fio = *fio; } @@ -681,15 +729,23 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr, struct bio *bio; 
struct bio_post_read_ctx *ctx; unsigned int post_read_steps = 0; + int err; bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false); if (!bio) return ERR_PTR(-ENOMEM); + + err = f2fs_set_bio_crypt_ctx(bio, inode, first_idx, NULL, GFP_NOFS); + if (err) { + bio_put(bio); + return ERR_PTR(err); + } + f2fs_target_device(sbi, blkaddr, bio); bio->bi_end_io = f2fs_read_end_io; bio_set_op_attrs(bio, REQ_OP_READ, op_flag); - if (f2fs_encrypted_file(inode)) + if (fscrypt_inode_uses_fs_layer_crypto(inode)) post_read_steps |= 1 << STEP_DECRYPT; if (f2fs_need_verity(inode, first_idx)) @@ -1726,8 +1782,9 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page, * This page will go to BIO. Do we need to send this * BIO off first? */ - if (bio && !page_is_mergeable(F2FS_I_SB(inode), bio, - *last_block_in_bio, block_nr)) { + if (bio && (!page_is_mergeable(F2FS_I_SB(inode), bio, + *last_block_in_bio, block_nr) || + !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) { submit_and_realloc: __submit_bio(F2FS_I_SB(inode), bio, DATA); bio = NULL; @@ -1867,6 +1924,9 @@ static int encrypt_one_page(struct f2fs_io_info *fio) /* wait for GCed page writeback via META_MAPPING */ f2fs_wait_on_block_writeback(inode, fio->old_blkaddr); + if (fscrypt_inode_uses_inline_crypto(inode)) + return 0; + retry_encrypt: fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(fio->page, PAGE_SIZE, 0, @@ -2041,7 +2101,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio) f2fs_unlock_op(fio->sbi); err = f2fs_inplace_write_data(fio); if (err) { - if (f2fs_encrypted_file(inode)) + if (fscrypt_inode_uses_fs_layer_crypto(inode)) fscrypt_finalize_bounce_page(&fio->encrypted_page); if (PageWriteback(page)) end_page_writeback(page); diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 4024790028aa..e04fda00b4ef 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -137,6 +137,9 @@ struct f2fs_mount_info { int alloc_mode; /* segment allocation policy */ int fsync_mode; /* fsync policy */ bool test_dummy_encryption; /* test dummy encryption */ +#ifdef CONFIG_FS_ENCRYPTION + bool inlinecrypt; /* inline encryption enabled */ +#endif block_t unusable_cap; /* Amount of space allowed to be * unusable when disabling checkpoint */ diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index 851ac9522926..850a2a2394d8 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -137,6 +137,7 @@ enum { Opt_alloc, Opt_fsync, Opt_test_dummy_encryption, + Opt_inlinecrypt, Opt_checkpoint_disable, Opt_checkpoint_disable_cap, Opt_checkpoint_disable_cap_perc, @@ -199,6 +200,7 @@ static match_table_t f2fs_tokens = { {Opt_alloc, "alloc_mode=%s"}, {Opt_fsync, "fsync_mode=%s"}, {Opt_test_dummy_encryption, "test_dummy_encryption"}, + {Opt_inlinecrypt, "inlinecrypt"}, {Opt_checkpoint_disable, "checkpoint=disable"}, {Opt_checkpoint_disable_cap, "checkpoint=disable:%u"}, {Opt_checkpoint_disable_cap_perc, "checkpoint=disable:%u%%"}, @@ -783,6 +785,13 @@ static int parse_options(struct super_block *sb, char *options) f2fs_info(sbi, "Test dummy encryption mode enabled"); #else f2fs_info(sbi, "Test dummy encryption mount option ignored"); +#endif + break; + case Opt_inlinecrypt: +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT + F2FS_OPTION(sbi).inlinecrypt = true; +#else + f2fs_info(sbi, "inline encryption not supported"); #endif break; case Opt_checkpoint_disable_cap_perc: @@ -1438,6 +1447,8 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root) #ifdef CONFIG_FS_ENCRYPTION if (F2FS_OPTION(sbi).test_dummy_encryption) seq_puts(seq, 
,test_dummy_encryption"); + if (F2FS_OPTION(sbi).inlinecrypt) + seq_puts(seq, ",inlinecrypt"); #endif if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_DEFAULT) @@ -1466,6 +1477,9 @@ static void default_options(struct f2fs_sb_info *sbi) F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_DEFAULT; F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_POSIX; F2FS_OPTION(sbi).test_dummy_encryption = false; +#ifdef CONFIG_FS_ENCRYPTION + F2FS_OPTION(sbi).inlinecrypt = false; +#endif F2FS_OPTION(sbi).s_resuid = make_kuid(&init_user_ns, F2FS_DEF_RESUID); F2FS_OPTION(sbi).s_resgid = make_kgid(&init_user_ns, F2FS_DEF_RESGID); @@ -2320,6 +2334,11 @@ static void f2fs_get_ino_and_lblk_bits(struct super_block *sb, *lblk_bits_ret = 8 * sizeof(block_t); } +static bool f2fs_inline_crypt_enabled(struct super_block *sb) +{ + return F2FS_OPTION(F2FS_SB(sb)).inlinecrypt; +} + static const struct fscrypt_operations f2fs_cryptops = { .key_prefix = "f2fs:", .get_context = f2fs_get_context, @@ -2329,6 +2348,7 @@ static const struct fscrypt_operations f2fs_cryptops = { .max_namelen = F2FS_NAME_LEN, .has_stable_inodes = f2fs_has_stable_inodes, .get_ino_and_lblk_bits = f2fs_get_ino_and_lblk_bits, + .inline_crypt_enabled = f2fs_inline_crypt_enabled, }; #endif

From patchwork Mon Oct 28 07:20:32 2019
Subject: [PATCH v5 9/9] ext4: add inline encryption support
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Eric Biggers , Satya Tangirala
Date: Mon, 28 Oct 2019 00:20:32 -0700
Message-Id: <20191028072032.6911-10-satyat@google.com>
In-Reply-To: <20191028072032.6911-1-satyat@google.com>

From: Eric Biggers

Wire up ext4 to support inline encryption via the helper functions which fs/crypto/ now provides. This includes:
- Adding a mount option 'inlinecrypt' which enables inline encryption on encrypted files where it can be used.
- Setting the bio_crypt_ctx on bios that will be submitted to an inline-encrypted file. Note: submit_bh_wbc() in fs/buffer.c also needed to be patched for this part, since ext4 sometimes uses ll_rw_block() on file data.
- Not adding logically discontiguous data to bios that will be submitted to an inline-encrypted file.
- Not doing filesystem-layer crypto on inline-encrypted files.

Signed-off-by: Eric Biggers Signed-off-by: Satya Tangirala --- fs/buffer.c | 3 +++ fs/ext4/ext4.h | 1 + fs/ext4/inode.c | 4 ++-- fs/ext4/page-io.c | 11 +++++++++-- fs/ext4/readpage.c | 15 ++++++++++++--- fs/ext4/super.c | 13 +++++++++++++ 6 files changed, 40 insertions(+), 7 deletions(-) diff --git a/fs/buffer.c b/fs/buffer.c index 86a38b979323..5d1f420de95b 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -47,6 +47,7 @@ #include #include #include +#include static int fsync_buffers_list(spinlock_t *lock, struct list_head *list); static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh, @@ -3068,6 +3069,8 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh, */ bio = bio_alloc(GFP_NOIO, 1); + fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO | __GFP_NOFAIL); + bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9); bio_set_dev(bio, bh->b_bdev); bio->bi_write_hint = write_hint; diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index b3a2cc7c0252..ce493e360814 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -1148,6 +1148,7 @@ struct ext4_inode_info { #define EXT4_MOUNT_JOURNAL_CHECKSUM 0x800000 /* Journal checksums */ #define EXT4_MOUNT_JOURNAL_ASYNC_COMMIT 0x1000000 /* Journal Async Commit */ #define EXT4_MOUNT_WARN_ON_ERROR 0x2000000 /* Trigger WARN_ON on error */ +#define EXT4_MOUNT_INLINECRYPT 0x4000000 /* Inline encryption support */ #define EXT4_MOUNT_DELALLOC 0x8000000 /* Delalloc support */ #define EXT4_MOUNT_DATA_ERR_ABORT 0x10000000 /* Abort on file data write */ #define EXT4_MOUNT_BLOCK_VALIDITY 0x20000000 /* Block validity checking */ diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 516faa280ced..43a844affc57 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -1237,7 +1237,7 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len, } if (unlikely(err)) { page_zero_new_buffers(page, from,
to); - } else if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)) { + } else if (fscrypt_inode_uses_fs_layer_crypto(inode)) { for (i = 0; i < nr_wait; i++) { int err2; @@ -4034,7 +4034,7 @@ static int __ext4_block_zero_page_range(handle_t *handle, /* Uhhuh. Read error. Complain and punt. */ if (!buffer_uptodate(bh)) goto unlock; - if (S_ISREG(inode->i_mode) && IS_ENCRYPTED(inode)) { + if (fscrypt_inode_uses_fs_layer_crypto(inode)) { /* We expect the key to be set. */ BUG_ON(!fscrypt_has_encryption_key(inode)); WARN_ON_ONCE(fscrypt_decrypt_pagecache_blocks( diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c index 12ceadef32c5..46a4aeef8275 100644 --- a/fs/ext4/page-io.c +++ b/fs/ext4/page-io.c @@ -362,10 +362,16 @@ static int io_submit_init_bio(struct ext4_io_submit *io, struct buffer_head *bh) { struct bio *bio; + int err; bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES); if (!bio) return -ENOMEM; + err = fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO); + if (err) { + bio_put(bio); + return err; + } bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9); bio_set_dev(bio, bh->b_bdev); bio->bi_end_io = ext4_end_bio; @@ -383,7 +389,8 @@ static int io_submit_add_bh(struct ext4_io_submit *io, { int ret; - if (io->io_bio && bh->b_blocknr != io->io_next_block) { + if (io->io_bio && (bh->b_blocknr != io->io_next_block || + !fscrypt_mergeable_bio_bh(io->io_bio, bh))) { submit_and_retry: ext4_io_submit(io); } @@ -474,7 +481,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io, * (e.g. holes) to be unnecessarily encrypted, but this is rare and * can't happen in the common case of blocksize == PAGE_SIZE. */ - if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && nr_to_submit) { + if (fscrypt_inode_uses_fs_layer_crypto(inode) && nr_to_submit) { gfp_t gfp_flags = GFP_NOFS; unsigned int enc_bytes = round_up(len, i_blocksize(inode)); diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c index a30b203fa461..643f271b0b8e 100644 --- a/fs/ext4/readpage.c +++ b/fs/ext4/readpage.c @@ -183,7 +183,7 @@ static struct bio_post_read_ctx *get_bio_post_read_ctx(struct inode *inode, unsigned int post_read_steps = 0; struct bio_post_read_ctx *ctx = NULL; - if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)) + if (fscrypt_inode_uses_fs_layer_crypto(inode)) post_read_steps |= 1 << STEP_DECRYPT; if (ext4_need_verity(inode, first_idx)) @@ -220,6 +220,7 @@ int ext4_mpage_readpages(struct address_space *mapping, const unsigned blkbits = inode->i_blkbits; const unsigned blocks_per_page = PAGE_SIZE >> blkbits; const unsigned blocksize = 1 << blkbits; + sector_t next_block; sector_t block_in_file; sector_t last_block; sector_t last_block_in_file; @@ -252,7 +253,8 @@ int ext4_mpage_readpages(struct address_space *mapping, if (page_has_buffers(page)) goto confused; - block_in_file = (sector_t)page->index << (PAGE_SHIFT - blkbits); + block_in_file = next_block = + (sector_t)page->index << (PAGE_SHIFT - blkbits); last_block = block_in_file + nr_pages * blocks_per_page; last_block_in_file = (ext4_readpage_limit(inode) + blocksize - 1) >> blkbits; @@ -352,7 +354,8 @@ int ext4_mpage_readpages(struct address_space *mapping, * This page will go to BIO. Do we need to send this * BIO off first? 
*/ - if (bio && (last_block_in_bio != blocks[0] - 1)) { + if (bio && (last_block_in_bio != blocks[0] - 1 || + !fscrypt_mergeable_bio(bio, inode, next_block))) { submit_and_realloc: submit_bio(bio); bio = NULL; @@ -364,6 +367,12 @@ int ext4_mpage_readpages(struct address_space *mapping, min_t(int, nr_pages, BIO_MAX_PAGES)); if (!bio) goto set_error_page; + if (fscrypt_set_bio_crypt_ctx(bio, inode, next_block, + GFP_KERNEL) != 0) { + bio_put(bio); + bio = NULL; + goto set_error_page; + } ctx = get_bio_post_read_ctx(inode, bio, page->index); if (IS_ERR(ctx)) { bio_put(bio); diff --git a/fs/ext4/super.c b/fs/ext4/super.c index b3cbf8622eab..3415bce51a36 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1357,6 +1357,11 @@ static void ext4_get_ino_and_lblk_bits(struct super_block *sb, *lblk_bits_ret = 8 * sizeof(ext4_lblk_t); } +static bool ext4_inline_crypt_enabled(struct super_block *sb) +{ + return test_opt(sb, INLINECRYPT); +} + static const struct fscrypt_operations ext4_cryptops = { .key_prefix = "ext4:", .get_context = ext4_get_context, @@ -1366,6 +1371,7 @@ static const struct fscrypt_operations ext4_cryptops = { .max_namelen = EXT4_NAME_LEN, .has_stable_inodes = ext4_has_stable_inodes, .get_ino_and_lblk_bits = ext4_get_ino_and_lblk_bits, + .inline_crypt_enabled = ext4_inline_crypt_enabled, }; #endif @@ -1461,6 +1467,7 @@ enum { Opt_journal_path, Opt_journal_checksum, Opt_journal_async_commit, Opt_abort, Opt_data_journal, Opt_data_ordered, Opt_data_writeback, Opt_data_err_abort, Opt_data_err_ignore, Opt_test_dummy_encryption, + Opt_inlinecrypt, Opt_usrjquota, Opt_grpjquota, Opt_offusrjquota, Opt_offgrpjquota, Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_jqfmt_vfsv1, Opt_quota, Opt_noquota, Opt_barrier, Opt_nobarrier, Opt_err, @@ -1557,6 +1564,7 @@ static const match_table_t tokens = { {Opt_noinit_itable, "noinit_itable"}, {Opt_max_dir_size_kb, "max_dir_size_kb=%u"}, {Opt_test_dummy_encryption, "test_dummy_encryption"}, + {Opt_inlinecrypt, "inlinecrypt"}, {Opt_nombcache, "nombcache"}, {Opt_nombcache, "no_mbcache"}, /* for backward compatibility */ {Opt_removed, "check=none"}, /* mount option from ext2/3 */ @@ -1768,6 +1776,11 @@ static const struct mount_opts { {Opt_jqfmt_vfsv1, QFMT_VFS_V1, MOPT_QFMT}, {Opt_max_dir_size_kb, 0, MOPT_GTE0}, {Opt_test_dummy_encryption, 0, MOPT_GTE0}, +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT + {Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_SET}, +#else + {Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_NOSUPPORT}, +#endif {Opt_nombcache, EXT4_MOUNT_NO_MBCACHE, MOPT_SET}, {Opt_err, 0, 0} };
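With the filesystem hooks above in place, inline encryption is opted into at mount time. For example, on a kernel built with CONFIG_BLK_INLINE_ENCRYPTION and CONFIG_FS_ENCRYPTION_INLINE_CRYPT, an ext4 or f2fs filesystem would be mounted with (device path hypothetical):

    mount -o inlinecrypt /dev/sdX /mnt

AES-256-XTS encrypted files on such a mount are then en/decrypted via blk-crypto, using inline encryption hardware when present and otherwise falling back to the kernel's crypto API.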