From patchwork Wed Aug 21 07:57:07 2019
X-Patchwork-Submitter: Satya Tangirala <satyat@google.com>
X-Patchwork-Id: 11105883
Date: Wed, 21 Aug 2019 00:57:07 -0700
In-Reply-To: <20190821075714.65140-1-satyat@google.com>
Message-Id: <20190821075714.65140-2-satyat@google.com>
References: <20190821075714.65140-1-satyat@google.com>
Subject: [PATCH v4 1/8] block: Keyslot Manager for Inline Encryption
From: Satya Tangirala <satyat@google.com>
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
    linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

Inline encryption hardware allows software to specify an encryption context
(an encryption key, crypto algorithm, data unit number, data unit size, etc.)
along with a data transfer request to a storage device, and the inline
encryption hardware will use that context to en/decrypt the data. The inline
encryption hardware is part of the storage device, and it conceptually sits
on the data path between system memory and the storage device.

Inline encryption hardware implementations often function around the concept
of "keyslots". These implementations often have a limited number of keyslots,
each of which can hold an encryption context (we say that an encryption
context can be "programmed" into a keyslot). Requests made to the storage
device may have a keyslot associated with them, and the inline encryption
hardware will en/decrypt the data in those requests using the encryption
context programmed into the associated keyslot.

Keyslots are limited, programming keys may be expensive in many
implementations, and multiple requests may use exactly the same encryption
context, so we introduce a Keyslot Manager to manage keyslots efficiently.
The Keyslot Manager also functions as the interface that upper layers use to
program keys into inline encryption hardware. For more information on the
Keyslot Manager, refer to the documentation in block/keyslot-manager.c and
include/linux/keyslot-manager.h.
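As a minimal sketch of the upper-layer calling convention (the wrapper
function and the 512-byte data unit size are assumptions for illustration;
only the keyslot_manager_* calls and the crypto mode come from this patch):

	/* Illustrative only: acquire a slot for a key, do I/O, release it. */
	static int example_use_keyslot(struct keyslot_manager *ksm,
				       const u8 *raw_key)
	{
		int slot;

		/*
		 * Finds a slot already programmed with this key, or programs
		 * an idle one; may sleep until a slot becomes idle.
		 */
		slot = keyslot_manager_get_slot_for_key(ksm, raw_key,
					BLK_ENCRYPTION_MODE_AES_256_XTS, 512);
		if (slot < 0)
			return slot;

		/* ... tag requests with @slot and submit them ... */

		/*
		 * Drop our reference; a slot with no references becomes idle
		 * and re-enters the LRU list, so it may later be evicted.
		 */
		keyslot_manager_put_slot(ksm, slot);
		return 0;
	}
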
Signed-off-by: Satya Tangirala <satyat@google.com>
---
 block/Kconfig                   |   8 +
 block/Makefile                  |   1 +
 block/keyslot-manager.c         | 351 ++++++++++++++++++++++++++++++++
 include/linux/bio.h             |  12 ++
 include/linux/blkdev.h          |   6 +
 include/linux/keyslot-manager.h |  94 +++++++++
 6 files changed, 472 insertions(+)
 create mode 100644 block/keyslot-manager.c
 create mode 100644 include/linux/keyslot-manager.h

diff --git a/block/Kconfig b/block/Kconfig
index 8b5f8e560eb4..1469efdd385b 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -164,6 +164,14 @@ config BLK_SED_OPAL
 	Enabling this option enables users to setup/unlock/lock
 	Locking ranges for SED devices using the Opal protocol.
 
+config BLK_INLINE_ENCRYPTION
+	bool "Enable inline encryption support in block layer"
+	help
+	  Build the blk-crypto subsystem.
+	  Enabling this lets the block layer handle encryption,
+	  so users can take advantage of inline encryption
+	  hardware if present.
+
 menu "Partition Types"
 
 source "block/partitions/Kconfig"

diff --git a/block/Makefile b/block/Makefile
index eee1b4ceecf9..a72abd61b220 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -35,3 +35,4 @@ obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o

diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
new file mode 100644
index 000000000000..2f8582eb4e65
--- /dev/null
+++ b/block/keyslot-manager.c
@@ -0,0 +1,351 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * keyslot-manager.c
+ *
+ * Copyright 2019 Google LLC
+ */
+
+/**
+ * DOC: The Keyslot Manager
+ *
+ * Many devices with inline encryption support have a limited number of "slots"
+ * into which encryption contexts may be programmed, and requests can be tagged
+ * with a slot number to specify the key to use for en/decryption.
+ *
+ * As the number of slots is limited, and programming keys is expensive on
+ * many inline encryption hardware implementations, we don't want to program
+ * the same key into multiple slots - if multiple requests are using the same
+ * key, we want to program just one slot with that key and use that slot for
+ * all requests.
+ *
+ * The keyslot manager manages these keyslots appropriately, and also acts as
+ * an abstraction between the inline encryption hardware and the upper layers.
+ *
+ * Lower layer devices will set up a keyslot manager in their request queue
+ * and tell it how to perform device specific operations like programming/
+ * evicting keys from keyslots.
+ *
+ * Upper layers will call keyslot_manager_get_slot_for_key() to program a
+ * key into some slot in the inline encryption hardware.
+ */
+#include
+#include
+#include
+#include
+#include
+
+struct keyslot {
+	atomic_t slot_refs;
+	struct list_head idle_slot_node;
+};
+
+struct keyslot_manager {
+	unsigned int num_slots;
+	atomic_t num_idle_slots;
+	struct keyslot_mgmt_ll_ops ksm_ll_ops;
+	void *ll_priv_data;
+
+	/* Protects programming and evicting keys from the device */
+	struct rw_semaphore lock;
+
+	/* List of idle slots, with least recently used slot at front */
+	wait_queue_head_t idle_slots_wait_queue;
+	struct list_head idle_slots;
+	spinlock_t idle_slots_lock;
+
+	/* Per-keyslot data */
+	struct keyslot slots[];
+};
+
+/**
+ * keyslot_manager_create() - Create a keyslot manager
+ * @num_slots: The number of key slots to manage.
+ * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops for the device that this
+ *		keyslot manager will use to perform operations like
+ *		programming and evicting keys.
+ * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
+ *
+ * Allocate memory for and initialize a keyslot manager. Called by e.g.
+ * storage drivers to set up a keyslot manager in their request_queue.
+ *
+ * Context: May sleep
+ * Return: Pointer to constructed keyslot manager or NULL on error.
+ */
+struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+		const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
+		void *ll_priv_data)
+{
+	struct keyslot_manager *ksm;
+	int slot;
+
+	if (num_slots == 0)
+		return NULL;
+
+	/* Check that all ops are specified */
+	if (ksm_ll_ops->keyslot_program == NULL ||
+	    ksm_ll_ops->keyslot_evict == NULL ||
+	    ksm_ll_ops->crypto_mode_supported == NULL ||
+	    ksm_ll_ops->keyslot_find == NULL)
+		return NULL;
+
+	ksm = kvzalloc(struct_size(ksm, slots, num_slots), GFP_KERNEL);
+	if (!ksm)
+		return NULL;
+
+	ksm->num_slots = num_slots;
+	atomic_set(&ksm->num_idle_slots, num_slots);
+	ksm->ksm_ll_ops = *ksm_ll_ops;
+	ksm->ll_priv_data = ll_priv_data;
+
+	init_rwsem(&ksm->lock);
+
+	init_waitqueue_head(&ksm->idle_slots_wait_queue);
+	INIT_LIST_HEAD(&ksm->idle_slots);
+
+	for (slot = 0; slot < num_slots; slot++)
+		list_add(&ksm->slots[slot].idle_slot_node, &ksm->idle_slots);
+
+	spin_lock_init(&ksm->idle_slots_lock);
+
+	return ksm;
+}
+EXPORT_SYMBOL(keyslot_manager_create);
+
+static int find_and_grab_keyslot(struct keyslot_manager *ksm, const u8 *key,
+				 enum blk_crypto_mode_num crypto_mode,
+				 unsigned int data_unit_size)
+{
+	int slot;
+	unsigned long flags;
+
+	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
+					    crypto_mode, data_unit_size);
+	if (slot < 0)
+		return slot;
+	if (WARN_ON(slot >= ksm->num_slots))
+		return -EINVAL;
+	if (atomic_inc_return(&ksm->slots[slot].slot_refs) == 1) {
+		/* Took first reference to this slot; remove it from LRU list */
+		spin_lock_irqsave(&ksm->idle_slots_lock, flags);
+		list_del(&ksm->slots[slot].idle_slot_node);
+		spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
+		atomic_dec(&ksm->num_idle_slots);
+	}
+	return slot;
+}
+
+/**
+ * keyslot_manager_get_slot_for_key() - Program a key into a keyslot.
+ * @ksm: The keyslot manager to program the key into.
+ * @key: Pointer to the bytes of the key to program. Must be the correct
+ *	 length for the chosen @crypto_mode; see blk_crypto_modes in
+ *	 blk-crypto.c.
+ * @crypto_mode: Identifier for the encryption algorithm to use.
+ * @data_unit_size: The data unit size to use for en/decryption.
+ *
+ * Get a keyslot that's been programmed with the specified key, crypto_mode,
+ * and data_unit_size. If one already exists, return it with incremented
+ * refcount. Otherwise, wait for a keyslot to become idle and program it.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ * Return: The keyslot on success, else a -errno value.
+ */
+int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				     const u8 *key,
+				     enum blk_crypto_mode_num crypto_mode,
+				     unsigned int data_unit_size)
+{
+	int slot;
+	int err;
+	struct keyslot *idle_slot;
+	unsigned long flags;
+
+	down_read(&ksm->lock);
+	slot = find_and_grab_keyslot(ksm, key, crypto_mode, data_unit_size);
+	up_read(&ksm->lock);
+	if (slot != -ENOKEY)
+		return slot;
+
+	while (true) {
+		down_write(&ksm->lock);
+		slot = find_and_grab_keyslot(ksm, key, crypto_mode,
+					     data_unit_size);
+		if (slot != -ENOKEY) {
+			up_write(&ksm->lock);
+			return slot;
+		}
+
+		/*
+		 * If we're here, that means there wasn't a slot that was
+		 * already programmed with the key. So try to program it.
+		 */
+		if (atomic_read(&ksm->num_idle_slots) > 0)
+			break;
+
+		up_write(&ksm->lock);
+		wait_event(ksm->idle_slots_wait_queue,
+			   (atomic_read(&ksm->num_idle_slots) > 0));
+	}
+
+	idle_slot = list_first_entry(&ksm->idle_slots, struct keyslot,
+				     idle_slot_node);
+	slot = idle_slot - ksm->slots;
+
+	err = ksm->ksm_ll_ops.keyslot_program(ksm->ll_priv_data, key,
+					      crypto_mode, data_unit_size,
+					      slot);
+	if (err) {
+		wake_up(&ksm->idle_slots_wait_queue);
+		up_write(&ksm->lock);
+		return err;
+	}
+
+	atomic_inc(&ksm->slots[slot].slot_refs);
+	spin_lock_irqsave(&ksm->idle_slots_lock, flags);
+	list_del(&idle_slot->idle_slot_node);
+	spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
+	atomic_dec(&ksm->num_idle_slots);
+
+	up_write(&ksm->lock);
+	return slot;
+}
+EXPORT_SYMBOL(keyslot_manager_get_slot_for_key);
+
+/**
+ * keyslot_manager_get_slot() - Increment the refcount on the specified slot.
+ * @ksm: The keyslot manager that we want to modify.
+ * @slot: The slot to increment the refcount of.
+ *
+ * This function assumes that there is already an active reference to that
+ * slot and simply increments the refcount. This is useful when cloning a bio
+ * that already has a reference to a keyslot, and we want the cloned bio to
+ * also have its own reference.
+ *
+ * Context: Any context.
+ */
+void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+	if (WARN_ON(slot >= ksm->num_slots))
+		return;
+
+	WARN_ON(atomic_inc_return(&ksm->slots[slot].slot_refs) < 2);
+}
+EXPORT_SYMBOL(keyslot_manager_get_slot);
+
+/**
+ * keyslot_manager_put_slot() - Release a reference to a slot
+ * @ksm: The keyslot manager to release the reference from.
+ * @slot: The slot to release the reference from.
+ *
+ * Context: Any context.
+ */
+void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+	unsigned long flags;
+
+	if (WARN_ON(slot >= ksm->num_slots))
+		return;
+
+	spin_lock_irqsave(&ksm->idle_slots_lock, flags);
+	if (atomic_dec_and_test(&ksm->slots[slot].slot_refs)) {
+		list_add_tail(&ksm->slots[slot].idle_slot_node,
+			      &ksm->idle_slots);
+		spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
+		atomic_inc(&ksm->num_idle_slots);
+		wake_up(&ksm->idle_slots_wait_queue);
+	} else {
+		spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
+	}
+}
+EXPORT_SYMBOL(keyslot_manager_put_slot);
+
+/**
+ * keyslot_manager_crypto_mode_supported() - Find out if a crypto_mode/data
+ *					     unit size combination is supported
+ *					     by a ksm.
+ * @ksm: The keyslot manager to check.
+ * @crypto_mode: The crypto mode to check for.
+ * @data_unit_size: The data_unit_size for the mode.
+ *
+ * Calls and returns the result of the crypto_mode_supported function
+ * specified by the ksm.
+ *
+ * Context: Process context.
+ * Return: Whether or not this ksm supports the specified crypto_mode/
+ *	   data_unit_size combo.
+ */
+bool keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
+					   enum blk_crypto_mode_num crypto_mode,
+					   unsigned int data_unit_size)
+{
+	if (!ksm)
+		return false;
+	return ksm->ksm_ll_ops.crypto_mode_supported(ksm->ll_priv_data,
+						     crypto_mode,
+						     data_unit_size);
+}
+EXPORT_SYMBOL(keyslot_manager_crypto_mode_supported);
+
+bool keyslot_manager_rq_crypto_mode_supported(struct request_queue *q,
+					      enum blk_crypto_mode_num crypto_mode,
+					      unsigned int data_unit_size)
+{
+	return keyslot_manager_crypto_mode_supported(q->ksm, crypto_mode,
+						     data_unit_size);
+}
+EXPORT_SYMBOL(keyslot_manager_rq_crypto_mode_supported);
+
+/**
+ * keyslot_manager_evict_key() - Evict a key from the lower layer device.
+ * @ksm: The keyslot manager to evict from.
+ * @key: The key to evict.
+ * @crypto_mode: The crypto algorithm the key was programmed with.
+ * @data_unit_size: The data_unit_size the key was programmed with.
+ *
+ * Finds the slot that the specified key, crypto_mode, data_unit_size combo
+ * was programmed into, and evicts that slot from the lower layer device if
+ * the refcount on the slot is 0.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ * Return: 0 on success, -EBUSY if the refcount is not 0, and -errno on error.
+ */
+int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+			      const u8 *key,
+			      enum blk_crypto_mode_num crypto_mode,
+			      unsigned int data_unit_size)
+{
+	int slot;
+	int err = 0;
+
+	down_write(&ksm->lock);
+	slot = ksm->ksm_ll_ops.keyslot_find(ksm->ll_priv_data, key,
+					    crypto_mode, data_unit_size);
+	if (slot < 0) {
+		up_write(&ksm->lock);
+		return slot;
+	}
+
+	if (atomic_read(&ksm->slots[slot].slot_refs) == 0) {
+		err = ksm->ksm_ll_ops.keyslot_evict(ksm->ll_priv_data, key,
+						    crypto_mode,
+						    data_unit_size, slot);
+	} else {
+		err = -EBUSY;
+	}
+
+	up_write(&ksm->lock);
+	return err;
+}
+EXPORT_SYMBOL(keyslot_manager_evict_key);
+
+void keyslot_manager_destroy(struct keyslot_manager *ksm)
+{
+	if (!ksm)
+		return;
+	kvfree(ksm);
+}
+EXPORT_SYMBOL(keyslot_manager_destroy);

diff --git a/include/linux/bio.h b/include/linux/bio.h
index 3cdb84cdc488..6c9228dd3156 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -564,6 +564,18 @@ static inline void bvec_kunmap_irq(char *buffer, unsigned long *flags)
 }
 #endif
 
+enum blk_crypto_mode_num {
+	BLK_ENCRYPTION_MODE_INVALID	= -1,
+	BLK_ENCRYPTION_MODE_AES_256_XTS	= 0,
+	/*
+	 * TODO: Support these too
+	 * BLK_ENCRYPTION_MODE_AES_256_CTS	= 1,
+	 * BLK_ENCRYPTION_MODE_AES_128_CBC	= 2,
+	 * BLK_ENCRYPTION_MODE_AES_128_CTS	= 3,
+	 * BLK_ENCRYPTION_MODE_ADIANTUM		= 4,
+	 */
+};
+
 /*
  * BIO list management for use by remapping drivers (e.g. DM or MD) and loop.
  *

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1ef375dafb1c..afcca4c1ff64 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -43,6 +43,7 @@ struct pr_ops;
 struct rq_qos;
 struct blk_queue_stats;
 struct blk_stat_callback;
+struct keyslot_manager;
 
 #define BLKDEV_MIN_RQ	4
 #define BLKDEV_MAX_RQ	128	/* Default maximum */
@@ -478,6 +479,11 @@ struct request_queue {
 	unsigned int		dma_pad_mask;
 	unsigned int		dma_alignment;
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	/* Inline crypto capabilities */
+	struct keyslot_manager *ksm;
+#endif
+
 	unsigned int		rq_timeout;
 	int			poll_nsec;

diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
new file mode 100644
index 000000000000..284f53973271
--- /dev/null
+++ b/include/linux/keyslot-manager.h
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include <linux/bio.h>
+
+#ifndef __LINUX_KEYSLOT_MANAGER_H
+#define __LINUX_KEYSLOT_MANAGER_H
+
+/**
+ * struct keyslot_mgmt_ll_ops - functions to manage keyslots in hardware
+ * @keyslot_program:	Program the specified key and algorithm into the
+ *			specified slot in the inline encryption hardware.
+ * @keyslot_evict:	Evict key from the specified keyslot in the hardware.
+ *			The key, crypto_mode and data_unit_size are also passed
+ *			down so that e.g. dm layers can evict keys from
+ *			the devices that they map over.
+ *			Returns 0 on success, -errno otherwise.
+ * @crypto_mode_supported:	Check whether a crypto_mode and data_unit_size
+ *				combo is supported.
+ * @keyslot_find:	Returns the slot number that matches the key,
+ *			or -ENOKEY if no match found, or -errno on
+ *			error.
+ *
+ * This structure should be provided by storage device drivers when they set up
+ * a keyslot manager - this structure holds the function ptrs that the keyslot
+ * manager will use to manipulate keyslots in the hardware.
+ */
+struct keyslot_mgmt_ll_ops {
+	int (*keyslot_program)(void *ll_priv_data, const u8 *key,
+			       enum blk_crypto_mode_num crypto_mode,
+			       unsigned int data_unit_size,
+			       unsigned int slot);
+	int (*keyslot_evict)(void *ll_priv_data, const u8 *key,
+			     enum blk_crypto_mode_num crypto_mode,
+			     unsigned int data_unit_size,
+			     unsigned int slot);
+	bool (*crypto_mode_supported)(void *ll_priv_data,
+				      enum blk_crypto_mode_num crypto_mode,
+				      unsigned int data_unit_size);
+	int (*keyslot_find)(void *ll_priv_data, const u8 *key,
+			    enum blk_crypto_mode_num crypto_mode,
+			    unsigned int data_unit_size);
+};
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+struct keyslot_manager;
+
+extern struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+				const struct keyslot_mgmt_ll_ops *ksm_ops,
+				void *ll_priv_data);
+
+extern int
+keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				 const u8 *key,
+				 enum blk_crypto_mode_num crypto_mode,
+				 unsigned int data_unit_size);
+
+extern void keyslot_manager_get_slot(struct keyslot_manager *ksm,
+				     unsigned int slot);
+
+extern void keyslot_manager_put_slot(struct keyslot_manager *ksm,
+				     unsigned int slot);
+
+extern bool
+keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
+				      enum blk_crypto_mode_num crypto_mode,
+				      unsigned int data_unit_size);
+
+extern bool
+keyslot_manager_rq_crypto_mode_supported(struct request_queue *q,
+					 enum blk_crypto_mode_num crypto_mode,
+					 unsigned int data_unit_size);
+
+extern int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+				     const u8 *key,
+				     enum blk_crypto_mode_num crypto_mode,
+				     unsigned int data_unit_size);
+
+extern void keyslot_manager_destroy(struct keyslot_manager *ksm);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline bool
+keyslot_manager_rq_crypto_mode_supported(struct request_queue *q,
+					 enum blk_crypto_mode_num crypto_mode,
+					 unsigned int data_unit_size)
+{
+	return false;
+}
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+#endif /* __LINUX_KEYSLOT_MANAGER_H */
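For context before the next patch, here is the driver-facing half of the
interface as a rough sketch: how a driver might wire up the four required
callbacks and attach a KSM to its request queue. The mydev_* names, the
callback bodies, and the slot count of 32 are hypothetical, not part of this
patch; only the keyslot_mgmt_ll_ops layout and keyslot_manager_create() are.

	/* Hypothetical driver callbacks; real bodies would talk to hardware. */
	static int mydev_keyslot_program(void *priv, const u8 *key,
					 enum blk_crypto_mode_num mode,
					 unsigned int data_unit_size,
					 unsigned int slot)
	{
		/* ... write @key into hardware keyslot @slot ... */
		return 0;
	}

	static int mydev_keyslot_evict(void *priv, const u8 *key,
				       enum blk_crypto_mode_num mode,
				       unsigned int data_unit_size,
				       unsigned int slot)
	{
		/* ... clear hardware keyslot @slot ... */
		return 0;
	}

	static bool mydev_crypto_mode_supported(void *priv,
						enum blk_crypto_mode_num mode,
						unsigned int data_unit_size)
	{
		return mode == BLK_ENCRYPTION_MODE_AES_256_XTS;
	}

	static int mydev_keyslot_find(void *priv, const u8 *key,
				      enum blk_crypto_mode_num mode,
				      unsigned int data_unit_size)
	{
		/* ... search the driver's shadow copy of programmed keys ... */
		return -ENOKEY;
	}

	static const struct keyslot_mgmt_ll_ops mydev_ksm_ops = {
		.keyslot_program	= mydev_keyslot_program,
		.keyslot_evict		= mydev_keyslot_evict,
		.crypto_mode_supported	= mydev_crypto_mode_supported,
		.keyslot_find		= mydev_keyslot_find,
	};

	/* All four ops must be non-NULL, or keyslot_manager_create()
	 * returns NULL. */
	static int mydev_init_ksm(struct request_queue *q, void *priv)
	{
		q->ksm = keyslot_manager_create(32, &mydev_ksm_ops, priv);
		return q->ksm ? 0 : -ENOMEM;
	}
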
From patchwork Wed Aug 21 07:57:08 2019
X-Patchwork-Submitter: Satya Tangirala <satyat@google.com>
X-Patchwork-Id: 11105901
Date: Wed, 21 Aug 2019 00:57:08 -0700
In-Reply-To: <20190821075714.65140-1-satyat@google.com>
Message-Id: <20190821075714.65140-3-satyat@google.com>
References: <20190821075714.65140-1-satyat@google.com>
Subject: [PATCH v4 2/8] block: Add encryption context to struct bio
From: Satya Tangirala <satyat@google.com>
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
    linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

We must have some way of letting a storage device driver know what
encryption context it should use for en/decrypting a request. However,
it's the filesystem/fscrypt that knows about and manages encryption
contexts. As such, when the filesystem layer submits a bio to the block
layer, and this bio eventually reaches a device driver with support for
inline encryption, the device driver will need to have been told the
encryption context for that bio.

We want to communicate the encryption context from the filesystem layer to
the storage device along with the bio, when the bio is submitted to the
block layer. To do this, we add a struct bio_crypt_ctx to struct bio, which
can represent an encryption context (note that we can't use the bi_private
field in struct bio to do this because that field is not meant for passing
information across layers in the storage stack). We also introduce various
functions to manipulate the bio_crypt_ctx and make the bio/request merging
logic aware of the bio_crypt_ctx.
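To make the intended calling convention concrete, here is a rough
filesystem-side sketch. The wrapper function and its error handling are
invented for illustration; bio_crypt_set_ctx() and its parameters are the
ones this patch adds, with dun_bits being log2 of the data unit size.

	/* Illustrative only: attach an encryption context, then submit. */
	static int example_submit_encrypted_bio(struct bio *bio, u8 *raw_key,
						u64 dun)
	{
		int err;

		/* 4096-byte data units => data_unit_size_bits == 12 */
		err = bio_crypt_set_ctx(bio, raw_key,
					BLK_ENCRYPTION_MODE_AES_256_XTS,
					dun, 12, GFP_NOIO);
		if (err)
			return err; /* -ENOMEM if mempool allocation failed */

		submit_bio(bio);
		return 0;
	}
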
Signed-off-by: Satya Tangirala <satyat@google.com>
---
 block/Makefile                |   2 +-
 block/bio-crypt-ctx.c         | 137 +++++++++++++++++++++
 block/bio.c                   |  18 +--
 block/blk-core.c              |   3 +
 block/blk-merge.c             |  35 +++++-
 block/bounce.c                |  15 +--
 drivers/md/dm.c               |  15 ++-
 include/linux/bio-crypt-ctx.h | 226 ++++++++++++++++++++++++++++++++++
 include/linux/bio.h           |  13 +-
 include/linux/blk_types.h     |   6 +
 10 files changed, 433 insertions(+), 37 deletions(-)
 create mode 100644 block/bio-crypt-ctx.c
 create mode 100644 include/linux/bio-crypt-ctx.h

diff --git a/block/Makefile b/block/Makefile
index a72abd61b220..4147ffa63631 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -35,4 +35,4 @@ obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
-obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o bio-crypt-ctx.o

diff --git a/block/bio-crypt-ctx.c b/block/bio-crypt-ctx.c
new file mode 100644
index 000000000000..aa3571f72ee7
--- /dev/null
+++ b/block/bio-crypt-ctx.c
@@ -0,0 +1,137 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include
+#include
+#include
+#include
+
+static int num_prealloc_crypt_ctxs = 128;
+static struct kmem_cache *bio_crypt_ctx_cache;
+static mempool_t *bio_crypt_ctx_pool;
+
+int bio_crypt_ctx_init(void)
+{
+	bio_crypt_ctx_cache = KMEM_CACHE(bio_crypt_ctx, 0);
+	if (!bio_crypt_ctx_cache)
+		return -ENOMEM;
+
+	bio_crypt_ctx_pool = mempool_create_slab_pool(num_prealloc_crypt_ctxs,
+						      bio_crypt_ctx_cache);
+	if (!bio_crypt_ctx_pool)
+		return -ENOMEM;
+
+	return 0;
+}
+
+struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
+{
+	return mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
+}
+EXPORT_SYMBOL(bio_crypt_alloc_ctx);
+
+void bio_crypt_free_ctx(struct bio *bio)
+{
+	mempool_free(bio->bi_crypt_context, bio_crypt_ctx_pool);
+	bio->bi_crypt_context = NULL;
+}
+EXPORT_SYMBOL(bio_crypt_free_ctx);
+
+int bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask)
+{
+	if (!bio_has_crypt_ctx(src))
+		return 0;
+
+	dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask);
+	if (!dst->bi_crypt_context)
+		return -ENOMEM;
+
+	*dst->bi_crypt_context = *src->bi_crypt_context;
+
+	if (bio_crypt_has_keyslot(src))
+		keyslot_manager_get_slot(src->bi_crypt_context->processing_ksm,
+					 src->bi_crypt_context->keyslot);
+
+	return 0;
+}
+EXPORT_SYMBOL(bio_crypt_clone);
+
+bool bio_crypt_should_process(struct bio *bio, struct request_queue *q)
+{
+	if (!bio_has_crypt_ctx(bio))
+		return false;
+
+	WARN_ON(!bio_crypt_has_keyslot(bio));
+	return q->ksm == bio->bi_crypt_context->processing_ksm;
+}
+EXPORT_SYMBOL(bio_crypt_should_process);
+
+/*
+ * Checks that two bio crypt contexts are compatible - i.e. that
+ * they are mergeable except for data_unit_num continuity.
+ */
+bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
+{
+	struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
+	struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
+
+	if (bio_has_crypt_ctx(b_1) != bio_has_crypt_ctx(b_2))
+		return false;
+
+	if (!bio_has_crypt_ctx(b_1))
+		return true;
+
+	return bc1->keyslot == bc2->keyslot &&
+	       bc1->data_unit_size_bits == bc2->data_unit_size_bits;
+}
+
+/*
+ * Checks that two bio crypt contexts are compatible, and also
+ * that their data_unit_nums are continuous (and can hence be merged)
+ */
+bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
+				  unsigned int b1_sectors,
+				  struct bio *b_2)
+{
+	struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
+	struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
+
+	if (!bio_crypt_ctx_compatible(b_1, b_2))
+		return false;
+
+	return !bio_has_crypt_ctx(b_1) ||
+	       (bc1->data_unit_num +
+		(b1_sectors >> (bc1->data_unit_size_bits - 9)) ==
+		bc2->data_unit_num);
+}
+
+void bio_crypt_ctx_release_keyslot(struct bio *bio)
+{
+	struct bio_crypt_ctx *crypt_ctx = bio->bi_crypt_context;
+
+	keyslot_manager_put_slot(crypt_ctx->processing_ksm, crypt_ctx->keyslot);
+	bio->bi_crypt_context->processing_ksm = NULL;
+	bio->bi_crypt_context->keyslot = -1;
+}
+
+int bio_crypt_ctx_acquire_keyslot(struct bio *bio, struct keyslot_manager *ksm)
+{
+	int slot;
+	enum blk_crypto_mode_num crypto_mode = bio_crypto_mode(bio);
+
+	if (!ksm)
+		return -ENOMEM;
+
+	slot = keyslot_manager_get_slot_for_key(ksm,
+			bio_crypt_raw_key(bio), crypto_mode,
+			1 << bio->bi_crypt_context->data_unit_size_bits);
+	if (slot < 0)
+		return slot;
+
+	bio_crypt_set_keyslot(bio, slot, ksm);
+	return 0;
+}

diff --git a/block/bio.c b/block/bio.c
index 299a0e7651ec..ada9850c90dc 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -241,6 +241,7 @@ static void bio_free(struct bio *bio)
 	struct bio_set *bs = bio->bi_pool;
 	void *p;
 
+	bio_crypt_free_ctx(bio);
 	bio_uninit(bio);
 
 	if (bs) {
@@ -612,15 +613,15 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs)
 
 	__bio_clone_fast(b, bio);
 
-	if (bio_integrity(bio)) {
-		int ret;
-
-		ret = bio_integrity_clone(b, bio, gfp_mask);
+	if (bio_crypt_clone(b, bio, gfp_mask) < 0) {
+		bio_put(b);
+		return NULL;
+	}
 
-		if (ret < 0) {
-			bio_put(b);
-			return NULL;
-		}
+	if (bio_integrity(bio) &&
+	    bio_integrity_clone(b, bio, gfp_mask) < 0) {
+		bio_put(b);
+		return NULL;
 	}
 
 	return b;
@@ -1007,6 +1008,7 @@ void bio_advance(struct bio *bio, unsigned bytes)
 	if (bio_integrity(bio))
 		bio_integrity_advance(bio, bytes);
 
+	bio_crypt_advance(bio, bytes);
 	bio_advance_iter(bio, &bio->bi_iter, bytes);
 }
 EXPORT_SYMBOL(bio_advance);

diff --git a/block/blk-core.c b/block/blk-core.c
index d0cc6e14d2f0..35027e80e27d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1769,5 +1769,8 @@ int __init blk_dev_init(void)
 	blk_debugfs_root = debugfs_create_dir("block", NULL);
 #endif
 
+	if (bio_crypt_ctx_init() < 0)
+		panic("Failed to allocate mem for bio crypt ctxs\n");
+
 	return 0;
 }

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 57f7990b342d..ebfb26b536d2 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -504,6 +504,9 @@ static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
 	if (blk_integrity_merge_bio(req->q, req, bio) == false)
 		goto no_merge;
 
+	if (WARN_ON_ONCE(!bio_crypt_ctx_compatible(bio, req->bio)))
+		goto no_merge;
+
 	/*
 	 * This will form the start of a new hw segment.  Bump both
 	 * counters.
 	 */
@@ -658,8 +661,14 @@ static enum elv_merge blk_try_req_merge(struct request *req,
 {
 	if (blk_discard_mergable(req))
 		return ELEVATOR_DISCARD_MERGE;
-	else if (blk_rq_pos(req) + blk_rq_sectors(req) == blk_rq_pos(next))
+	else if (blk_rq_pos(req) + blk_rq_sectors(req) == blk_rq_pos(next)) {
+		if (!bio_crypt_ctx_back_mergeable(req->bio,
+						  blk_rq_sectors(req),
+						  next->bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_BACK_MERGE;
+	}
 
 	return ELEVATOR_NO_MERGE;
 }
@@ -695,6 +704,9 @@ static struct request *attempt_merge(struct request_queue *q,
 	if (req->ioprio != next->ioprio)
 		return NULL;
 
+	if (!bio_crypt_ctx_compatible(req->bio, next->bio))
+		return NULL;
+
 	/*
 	 * If we are allowed to merge, then append bio list
 	 * from next to rq and release next. merge_requests_fn
@@ -827,16 +839,31 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
 	if (rq->ioprio != bio_prio(bio))
 		return false;
 
+	/* Only merge if the crypt contexts are compatible */
+	if (!bio_crypt_ctx_compatible(bio, rq->bio))
+		return false;
+
 	return true;
 }
 
 enum elv_merge blk_try_merge(struct request *rq, struct bio *bio)
 {
-	if (blk_discard_mergable(rq))
+	if (blk_discard_mergable(rq)) {
 		return ELEVATOR_DISCARD_MERGE;
-	else if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector)
+	} else if (blk_rq_pos(rq) + blk_rq_sectors(rq) ==
+		   bio->bi_iter.bi_sector) {
+		if (!bio_crypt_ctx_back_mergeable(rq->bio,
+						  blk_rq_sectors(rq), bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_BACK_MERGE;
-	else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector)
+	} else if (blk_rq_pos(rq) - bio_sectors(bio) ==
+		   bio->bi_iter.bi_sector) {
+		if (!bio_crypt_ctx_back_mergeable(bio,
+						  bio_sectors(bio), rq->bio)) {
+			return ELEVATOR_NO_MERGE;
+		}
 		return ELEVATOR_FRONT_MERGE;
+	}
 
 	return ELEVATOR_NO_MERGE;
 }

diff --git a/block/bounce.c b/block/bounce.c
index f8ed677a1bf7..6f9a2359b22a 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -267,14 +267,15 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 		break;
 	}
 
-	if (bio_integrity(bio_src)) {
-		int ret;
+	if (bio_crypt_clone(bio, bio_src, gfp_mask) < 0) {
+		bio_put(bio);
+		return NULL;
+	}
 
-		ret = bio_integrity_clone(bio, bio_src, gfp_mask);
-		if (ret < 0) {
-			bio_put(bio);
-			return NULL;
-		}
+	if (bio_integrity(bio_src) &&
+	    bio_integrity_clone(bio, bio_src, gfp_mask) < 0) {
+		bio_put(bio);
+		return NULL;
 	}
 
 	bio_clone_blkg_association(bio, bio_src);

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index d0beef033e2f..ce378c0b9ebd 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1322,12 +1322,15 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
 		     sector_t sector, unsigned len)
 {
 	struct bio *clone = &tio->clone;
+	int ret;
 
 	__bio_clone_fast(clone, bio);
 
-	if (bio_integrity(bio)) {
-		int r;
+	ret = bio_crypt_clone(clone, bio, GFP_NOIO);
+	if (ret < 0)
+		return ret;
 
+	if (bio_integrity(bio)) {
 		if (unlikely(!dm_target_has_integrity(tio->ti->type) &&
 			     !dm_target_passes_integrity(tio->ti->type))) {
 			DMWARN("%s: the target %s doesn't support integrity data.",
@@ -1336,9 +1339,11 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
 			return -EIO;
 		}
 
-		r = bio_integrity_clone(clone, bio, GFP_NOIO);
-		if (r < 0)
-			return r;
+		ret = bio_integrity_clone(clone, bio, GFP_NOIO);
+		if (ret < 0) {
+			bio_crypt_free_ctx(clone);
+			return ret;
+		}
 	}
 
 	bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));

diff --git a/include/linux/bio-crypt-ctx.h b/include/linux/bio-crypt-ctx.h
new file mode 100644
index 000000000000..ebe456289338
--- /dev/null
+++ b/include/linux/bio-crypt-ctx.h
@@ -0,0 +1,226 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+#ifndef __LINUX_BIO_CRYPT_CTX_H
+#define __LINUX_BIO_CRYPT_CTX_H
+
+#ifdef CONFIG_BLOCK
+#include
+
+enum blk_crypto_mode_num {
+	BLK_ENCRYPTION_MODE_INVALID	= -1,
+	BLK_ENCRYPTION_MODE_AES_256_XTS	= 0,
+	/*
+	 * TODO: Support these too
+	 * BLK_ENCRYPTION_MODE_AES_256_CTS	= 1,
+	 * BLK_ENCRYPTION_MODE_AES_128_CBC	= 2,
+	 * BLK_ENCRYPTION_MODE_AES_128_CTS	= 3,
+	 * BLK_ENCRYPTION_MODE_ADIANTUM		= 4,
+	 */
+};
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+struct bio_crypt_ctx {
+	int keyslot;
+	u8 *raw_key;
+	enum blk_crypto_mode_num crypto_mode;
+	u64 data_unit_num;
+	unsigned int data_unit_size_bits;
+
+	/*
+	 * The keyslot manager where the key has been programmed
+	 * with keyslot.
+	 */
+	struct keyslot_manager *processing_ksm;
+
+	/*
+	 * Copy of the bvec_iter when this bio was submitted.
+	 * We only want to en/decrypt the part of the bio as described by the
+	 * bvec_iter upon submission, because the bio might be split before
+	 * being resubmitted.
+	 */
+	struct bvec_iter crypt_iter;
+	u64 sw_data_unit_num;
+};
+
+extern int bio_crypt_clone(struct bio *dst, struct bio *src,
+			   gfp_t gfp_mask);
+
+static inline bool bio_has_crypt_ctx(struct bio *bio)
+{
+	return bio->bi_crypt_context;
+}
+
+static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
+{
+	if (bio_has_crypt_ctx(bio)) {
+		bio->bi_crypt_context->data_unit_num +=
+			bytes >> bio->bi_crypt_context->data_unit_size_bits;
+	}
+}
+
+static inline bool bio_crypt_has_keyslot(struct bio *bio)
+{
+	return bio->bi_crypt_context->keyslot >= 0;
+}
+
+extern int bio_crypt_ctx_init(void);
+
+extern struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask);
+
+extern void bio_crypt_free_ctx(struct bio *bio);
+
+static inline int bio_crypt_set_ctx(struct bio *bio,
+				    u8 *raw_key,
+				    enum blk_crypto_mode_num crypto_mode,
+				    u64 dun,
+				    unsigned int dun_bits,
+				    gfp_t gfp_mask)
+{
+	struct bio_crypt_ctx *crypt_ctx;
+
+	crypt_ctx = bio_crypt_alloc_ctx(gfp_mask);
+	if (!crypt_ctx)
+		return -ENOMEM;
+
+	crypt_ctx->raw_key = raw_key;
+	crypt_ctx->data_unit_num = dun;
+	crypt_ctx->data_unit_size_bits = dun_bits;
+	crypt_ctx->crypto_mode = crypto_mode;
+	crypt_ctx->processing_ksm = NULL;
+	crypt_ctx->keyslot = -1;
+	bio->bi_crypt_context = crypt_ctx;
+
+	return 0;
+}
+
+static inline void bio_set_data_unit_num(struct bio *bio, u64 dun)
+{
+	bio->bi_crypt_context->data_unit_num = dun;
+}
+
+static inline int bio_crypt_get_keyslot(struct bio *bio)
+{
+	return bio->bi_crypt_context->keyslot;
+}
+
+static inline void bio_crypt_set_keyslot(struct bio *bio,
+					 unsigned int keyslot,
+					 struct keyslot_manager *ksm)
+{
+	bio->bi_crypt_context->keyslot = keyslot;
+	bio->bi_crypt_context->processing_ksm = ksm;
+}
+
+extern void bio_crypt_ctx_release_keyslot(struct bio *bio);
+
+extern int bio_crypt_ctx_acquire_keyslot(struct bio *bio,
+					 struct keyslot_manager *ksm);
+
+static inline u8 *bio_crypt_raw_key(struct bio *bio)
+{
+	return bio->bi_crypt_context->raw_key;
+}
+
+static inline enum blk_crypto_mode_num bio_crypto_mode(struct bio *bio)
+{
+	return bio->bi_crypt_context->crypto_mode;
+}
+
+static inline u64 bio_crypt_data_unit_num(struct bio *bio)
+{
+	return bio->bi_crypt_context->data_unit_num;
+}
+
+static inline u64 bio_crypt_sw_data_unit_num(struct bio *bio)
+{
+	return bio->bi_crypt_context->sw_data_unit_num;
+}
+
+extern bool bio_crypt_should_process(struct bio *bio, struct request_queue *q);
+
+extern bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2);
+
+extern bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
+					 unsigned int b1_sectors,
+					 struct bio *b_2);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+struct keyslot_manager;
+
+static inline int bio_crypt_ctx_init(void)
+{
+	return 0;
+}
+
+static inline int bio_crypt_clone(struct bio *dst, struct bio *src,
+				  gfp_t gfp_mask)
+{
+	return 0;
+}
+
+static inline void bio_crypt_advance(struct bio *bio,
+				     unsigned int bytes) { }
+
+static inline bool bio_has_crypt_ctx(struct bio *bio)
+{
+	return false;
+}
+
+static inline void bio_crypt_free_ctx(struct bio *bio) { }
+
+static inline void bio_crypt_set_ctx(struct bio *bio,
+				     u8 *raw_key,
+				     enum blk_crypto_mode_num crypto_mode,
+				     u64 dun,
+				     unsigned int dun_bits,
+				     gfp_t gfp_mask) { }
+
+static inline void bio_set_data_unit_num(struct bio *bio, u64 dun) { }
+
+static inline bool bio_crypt_has_keyslot(struct bio *bio)
+{
+	return false;
+}
+
+static inline void bio_crypt_set_keyslot(struct bio *bio,
+					 unsigned int keyslot,
+					 struct keyslot_manager *ksm) { }
+
+static inline int bio_crypt_get_keyslot(struct bio *bio)
+{
+	return -1;
+}
+
+static inline u8 *bio_crypt_raw_key(struct bio *bio)
+{
+	return NULL;
+}
+
+static inline u64 bio_crypt_data_unit_num(struct bio *bio)
+{
+	return 0;
+}
+
+static inline bool bio_crypt_should_process(struct bio *bio,
+					    struct request_queue *q)
+{
+	return false;
+}
+
+static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
+{
+	return true;
+}
+
+static inline bool bio_crypt_ctx_back_mergeable(struct bio *b_1,
+						unsigned int b1_sectors,
+						struct bio *b_2)
+{
+	return true;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+#endif /* CONFIG_BLOCK */
+#endif /* __LINUX_BIO_CRYPT_CTX_H */

diff --git a/include/linux/bio.h b/include/linux/bio.h
index 6c9228dd3156..9dddcd2af3e9 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -12,6 +12,7 @@
 #ifdef CONFIG_BLOCK
 /* struct bio, bio_vec and BIO_* flags are defined in blk_types.h */
 #include <linux/blk_types.h>
+#include <linux/bio-crypt-ctx.h>
 
 #define BIO_DEBUG
 
@@ -564,18 +565,6 @@ static inline void bvec_kunmap_irq(char *buffer, unsigned long *flags)
 }
 #endif
 
-enum blk_crypto_mode_num {
-	BLK_ENCRYPTION_MODE_INVALID	= -1,
-	BLK_ENCRYPTION_MODE_AES_256_XTS	= 0,
-	/*
-	 * TODO: Support these too
-	 * BLK_ENCRYPTION_MODE_AES_256_CTS	= 1,
-	 * BLK_ENCRYPTION_MODE_AES_128_CBC	= 2,
-	 * BLK_ENCRYPTION_MODE_AES_128_CTS	= 3,
-	 * BLK_ENCRYPTION_MODE_ADIANTUM		= 4,
-	 */
-};
-
 /*
  * BIO list management for use by remapping drivers (e.g. DM or MD) and loop.
  *

diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index feff3fe4467e..e59ab0186441 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -18,6 +18,7 @@ struct block_device;
 struct io_context;
 struct cgroup_subsys_state;
 typedef void (bio_end_io_t) (struct bio *);
+struct bio_crypt_ctx;
 
 /*
  * Block error status values. See block/blk-core:blk_errors for the details.
 */
@@ -170,6 +171,11 @@ struct bio {
 		struct blkcg_gq		*bi_blkg;
 		struct bio_issue	bi_issue;
 #endif
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct bio_crypt_ctx	*bi_crypt_context;
+#endif
+
 	union {
 #if defined(CONFIG_BLK_DEV_INTEGRITY)
 		struct bio_integrity_payload *bi_integrity; /* data integrity */
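A quick worked example of the DUN-continuity rule in
bio_crypt_ctx_back_mergeable() above; the numbers are made up for
illustration, the arithmetic is the patch's:

	/*
	 * Assume 4096-byte data units, so data_unit_size_bits = 12 and one
	 * data unit spans 1 << (12 - 9) = 8 sectors.
	 *
	 * bio A: data_unit_num = 100, length = 16 sectors (DUNs 100 and 101)
	 * bio B: data_unit_num = 102
	 *
	 * Continuity check: 100 + (16 >> (12 - 9)) = 100 + 2 = 102, which
	 * equals B's DUN, so A and B are back-mergeable (provided the
	 * contexts are also compatible: same keyslot, same data unit size).
	 */
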
From patchwork Wed Aug 21 07:57:09 2019
X-Patchwork-Submitter: Satya Tangirala <satyat@google.com>
X-Patchwork-Id: 11105903
Date: Wed, 21 Aug 2019 00:57:09 -0700
In-Reply-To: <20190821075714.65140-1-satyat@google.com>
Message-Id: <20190821075714.65140-4-satyat@google.com>
References: <20190821075714.65140-1-satyat@google.com>
Subject: [PATCH v4 3/8] block: blk-crypto for Inline Encryption
From: Satya Tangirala <satyat@google.com>
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
    linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

We introduce blk-crypto, which manages programming keyslots for struct
bios. With blk-crypto, filesystems only need to call bio_crypt_set_ctx
with the encryption key, algorithm and data_unit_num; they don't have to
worry about getting a keyslot for each encryption context, as blk-crypto
handles that. Blk-crypto also makes it possible for layered devices like
device mapper to make use of inline encryption hardware.

Blk-crypto delegates crypto operations to inline encryption hardware when
available, and also contains a software fallback to the kernel crypto API.
For more details, refer to Documentation/block/inline-encryption.txt.
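The driver-visible contract this establishes can be sketched as follows.
The mydev_* wrapper is hypothetical; bio_crypt_should_process() and
bio_crypt_get_keyslot() come from the previous patch, and the decision
logic mirrors the documentation added below:

	/*
	 * Hypothetical driver-side use: decide whether to engage the inline
	 * encryption hardware for a given bio.
	 */
	static void mydev_prep_crypto(struct request_queue *q, struct bio *bio)
	{
		/*
		 * True only if the bio's context was programmed into *this*
		 * queue's KSM. If the crypto API fallback KSM owns it,
		 * blk-crypto already handled (or will handle) the crypto in
		 * software, so the driver must do nothing crypto-related.
		 */
		if (bio_crypt_should_process(bio, q)) {
			/*
			 * ... build the command using the slot from
			 * bio_crypt_get_keyslot(bio) and the IV from
			 * bio_crypt_data_unit_num(bio) ...
			 */
		}
	}
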
Signed-off-by: Satya Tangirala <satyat@google.com>
---
 Documentation/block/inline-encryption.txt | 186 ++++++
 block/Kconfig                             |   2 +
 block/Makefile                            |   3 +-
 block/bio-crypt-ctx.c                     |   7 +-
 block/bio.c                               |   5 +
 block/blk-core.c                          |  11 +-
 block/blk-crypto.c                        | 737 ++++++++++++++++++++++
 include/linux/bio-crypt-ctx.h             |   7 +
 include/linux/blk-crypto.h                |  47 ++
 9 files changed, 1002 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/block/inline-encryption.txt
 create mode 100644 block/blk-crypto.c
 create mode 100644 include/linux/blk-crypto.h

diff --git a/Documentation/block/inline-encryption.txt b/Documentation/block/inline-encryption.txt
new file mode 100644
index 000000000000..925611a5ea65
--- /dev/null
+++ b/Documentation/block/inline-encryption.txt
@@ -0,0 +1,186 @@
+BLK-CRYPTO and KEYSLOT MANAGER
+==============================
+
+CONTENTS
+1. Objective
+2. Constraints and notes
+3. Design
+4. Blk-crypto
+   4-1 What does blk-crypto do on bio submission
+5. Layered Devices
+6. Future optimizations for layered devices
+
+1. Objective
+============
+
+We want to support inline encryption (IE) in the kernel.
+To allow for testing, we also want a crypto API fallback when actual
+IE hardware is absent. We also want IE to work with layered devices
+like dm and loopback (i.e. we want to be able to use the IE hardware
+of the underlying devices if present, or else fall back to crypto API
+en/decryption).
+
+
+2. Constraints and notes
+========================
+
+1) IE hardware has a limited number of "keyslots" that can be programmed
+with an encryption context (key, algorithm, data unit size, etc.) at any time.
+One can specify a keyslot in a data request made to the device, and the
+device will en/decrypt the data using the encryption context programmed into
+that specified keyslot. When possible, we want to make multiple requests with
+the same encryption context share the same keyslot.
+
+2) We need a way for filesystems to specify an encryption context to use for
+en/decrypting a struct bio, and a device driver (like UFS) needs to be able
+to use that encryption context when it processes the bio.
+
+3) We need a way for device drivers to expose their capabilities in a unified
+way to the upper layers.
+
+
+3. Design
+=========
+
+We add a struct bio_crypt_ctx to struct bio that can represent an
+encryption context, because we need to be able to pass this encryption
+context from the FS layer to the device driver to act upon.
+
+While IE hardware works on the notion of keyslots, the FS layer has no
+knowledge of keyslots - it simply wants to specify an encryption context to
+use while en/decrypting a bio.
+
+We introduce a keyslot manager (KSM) that handles the translation from
+encryption contexts specified by the FS to keyslots on the IE hardware.
+This KSM also serves as the way IE hardware can expose its capabilities to
+upper layers. The generic mode of operation is: each device driver that wants
+to support IE will construct a KSM and set it up in its struct request_queue.
+Upper layers that want to use IE on this device can then use this KSM in
+the device's struct request_queue to translate an encryption context into
+a keyslot. The presence of the KSM in the request queue shall be used to mean
+that the device supports IE.
+
+On the device driver end of the interface, the device driver needs to tell
+the KSM how to actually manipulate the IE hardware in the device to do things
+like programming the crypto key into the IE hardware into a particular
+keyslot. All this is achieved through the struct keyslot_mgmt_ll_ops that the
+device driver passes to the KSM when creating it.
+
+The KSM uses refcounts to track which keyslots are idle (either they have no
+encryption context programmed, or there are no in-flight struct bios
+referencing that keyslot). When a new encryption context needs a keyslot, the
+KSM tries to find a keyslot that has already been programmed with the same
+encryption context, and if there is no such keyslot, it evicts the least
+recently used idle keyslot and programs the new encryption context into that
+one. If no idle keyslots are available, then the caller will sleep until
+there is at least one.
+
+
+4. Blk-crypto
+=============
+
+The above is sufficient for simple cases, but does not work if there is a
+need for a crypto API fallback, or if we want to use IE with layered
+devices. To these ends, we introduce blk-crypto. Blk-crypto allows us to
+present a unified view of encryption to the FS (so the FS only needs to
+specify an encryption context and not worry about keyslots at all), and
+blk-crypto can decide whether to delegate the en/decryption to IE hardware
+or to the crypto API. Blk-crypto maintains an internal KSM that serves as
+the crypto API fallback.
+
+Blk-crypto needs to ensure that the encryption context is programmed into the
+"correct" keyslot manager for IE. If a bio is submitted to a layered device
+that eventually passes the bio down to a device that really does support IE,
+we want the encryption context to be programmed into a keyslot for the KSM of
+the device with IE support. However, blk-crypto does not know a priori
+whether a particular device is the final device in the layering structure for
+a bio or not. So in the case that a particular device does not support IE,
+since it is possibly the final destination device for the bio, if the bio
+requires encryption (i.e. the bio is doing a write operation), blk-crypto
+must fall back to the crypto API *before* sending the bio to the device.
+
+Blk-crypto ensures that:
+
+1) The bio's encryption context is programmed into a keyslot in the KSM of
+the request queue that the bio is being submitted to (or the crypto API
+fallback KSM if the request queue doesn't have a KSM), and that the
+processing_ksm in the bi_crypt_context is set to this KSM.
+
+2) The bio has its own individual reference to the keyslot in this KSM.
+Once the bio passes through blk-crypto, its encryption context is programmed
+in some KSM. The "its own individual reference to the keyslot" ensures that
+keyslots can be released by each bio independently of other bios while
+ensuring that the bio has a valid reference to the keyslot when, e.g., the
+crypto API fallback KSM in blk-crypto performs crypto on the device's behalf.
+The individual references are ensured by increasing the refcount for the
+keyslot in the processing_ksm when a bio with a programmed encryption
+context is cloned.
+
+
+4-1. What blk-crypto does on bio submission
+-------------------------------------------
+
+Case 1: blk-crypto is given a bio with only an encryption context that hasn't
+been programmed into any keyslot in any KSM (e.g. a bio from the FS). In
+this case, blk-crypto will program the encryption context into the KSM of the
+request queue the bio is being submitted to (and if this KSM does not exist,
+then it will program it into blk-crypto's internal KSM for crypto API
+fallback). The KSM that this encryption context was programmed into is stored
+as the processing_ksm in the bio's bi_crypt_context.
+
+Case 2: blk-crypto is given a bio whose encryption context has already been
+programmed into a keyslot in the *crypto API fallback KSM*. In this case,
+blk-crypto does nothing; it treats the bio as not having specified an
+encryption context. Note that we cannot do here what we will do in Case 3
+because we would have already encrypted the bio via the crypto API by this
+point.
+
+Case 3: blk-crypto is given a bio whose encryption context has already been
+programmed into a keyslot in some KSM (that is *not* the crypto API fallback
+KSM). In this case, blk-crypto first releases that keyslot from that KSM and
+then treats the bio as in Case 1.
+
+This way, when a device driver is processing a bio, it can be sure that
+the bio's encryption context has been programmed into some KSM (either the
+device driver's request queue's KSM, or blk-crypto's crypto API fallback
+KSM). It then simply needs to check if the bio's processing_ksm is the
+device's request queue's KSM. If so, then it should proceed with IE. If not,
+it should simply do nothing with respect to crypto, because some other KSM
+(perhaps the blk-crypto crypto API fallback KSM) is handling the
+en/decryption.
+
+Blk-crypto will release the keyslot that is being held by the bio (and also
+decrypt it if the bio is using the crypto API fallback KSM) once
+bio_remaining_done returns true for the bio.
+
+
+5. Layered Devices
+==================
+
+Layered devices that wish to support IE need to create their own keyslot
+manager for their request queue, and expose whatever functionality they
+choose. When a layered device wants to pass a bio to another layer (either
+by resubmitting the same bio, or by submitting a clone), it doesn't need to
+do anything special because the bio (or the clone) will once again pass
+through blk-crypto, which will work as described in Case 3. If a layered
+device wants for some reason to do the IO by itself instead of passing it on
+to a child device, but it also chose to expose IE capabilities by setting up
+a KSM in its request queue, it is then responsible for en/decrypting the
+data itself. In such cases, the device can choose to call the blk-crypto
+function blk_crypto_fallback_to_kernel_crypto_api (TODO: Not yet
+implemented), which will cause the en/decryption to be done via the crypto
+API fallback.
+
+
+6. Future Optimizations for layered devices
+===========================================
+
+Creating a keyslot manager for a layered device uses up memory for each
+keyslot, and in general, a layered device (like dm-linear) merely passes the
+request on to a "child" device, so the keyslots in the layered device itself
+might be completely unused.
+We can instead define a new type of KSM, the "passthrough KSM", that layered
+devices can use to let blk-crypto know that this layered device *will* pass
+the bio to some child device (and hence through blk-crypto again, at which
+point blk-crypto can program the encryption context, instead of programming
+it into the layered device's KSM). Again, if the device "lies" and decides
+to do the IO itself instead of passing it on to a child device, it is
+responsible for doing the en/decryption (and can choose to call
+blk_crypto_fallback_to_kernel_crypto_api). Another use case for the
+"passthrough KSM" is for IE devices that want to manage their own keyslots/do
+not have a limited number of keyslots.

diff --git a/block/Kconfig b/block/Kconfig
index 1469efdd385b..4f7e593d0a6d 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -166,6 +166,8 @@ config BLK_SED_OPAL
 
 config BLK_INLINE_ENCRYPTION
 	bool "Enable inline encryption support in block layer"
+	select CRYPTO
+	select CRYPTO_BLKCIPHER
 	help
 	  Build the blk-crypto subsystem.
 	  Enabling this lets the block layer handle encryption,

diff --git a/block/Makefile b/block/Makefile
index 4147ffa63631..1ba7de84dbaf 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -35,4 +35,5 @@ obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
 obj-$(CONFIG_BLK_PM)		+= blk-pm.o
-obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o bio-crypt-ctx.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o bio-crypt-ctx.o \
+					   blk-crypto.o

diff --git a/block/bio-crypt-ctx.c b/block/bio-crypt-ctx.c
index aa3571f72ee7..6a2b061865c6 100644
--- a/block/bio-crypt-ctx.c
+++ b/block/bio-crypt-ctx.c
@@ -43,7 +43,12 @@ EXPORT_SYMBOL(bio_crypt_free_ctx);
 
 int bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask)
 {
-	if (!bio_has_crypt_ctx(src))
+	/*
+	 * If a bio is swhandled, then it will be decrypted when bio_endio
+	 * is called. As we only want the data to be decrypted once, copies
+	 * of the bio must not have a crypt context.
+	 */
+	if (!bio_has_crypt_ctx(src) || bio_crypt_swhandled(src))
 		return 0;
 
 	dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask);

diff --git a/block/bio.c b/block/bio.c
index ada9850c90dc..e2537e5588ac 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include <linux/blk-crypto.h>
 #include
 
 #include "blk.h"
@@ -1800,6 +1801,10 @@ void bio_endio(struct bio *bio)
 again:
 	if (!bio_remaining_done(bio))
 		return;
+
+	if (!blk_crypto_endio(bio))
+		return;
+
 	if (!bio_integrity_endio(bio))
 		return;

diff --git a/block/blk-core.c b/block/blk-core.c
index 35027e80e27d..f699ecd9ca2e 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -36,6 +36,7 @@
 #include
 #include
 #include
+#include <linux/blk-crypto.h>
 
 #define CREATE_TRACE_POINTS
 #include
 
@@ -1049,7 +1050,9 @@ blk_qc_t generic_make_request(struct bio *bio)
 			/* Create a fresh bio_list for all subordinate requests */
 			bio_list_on_stack[1] = bio_list_on_stack[0];
 			bio_list_init(&bio_list_on_stack[0]);
-			ret = q->make_request_fn(q, bio);
+
+			if (!blk_crypto_submit_bio(&bio))
+				ret = q->make_request_fn(q, bio);
 
 			blk_queue_exit(q);
 
@@ -1102,6 +1105,9 @@ blk_qc_t direct_make_request(struct bio *bio)
 	if (!generic_make_request_checks(bio))
 		return BLK_QC_T_NONE;
 
+	if (blk_crypto_submit_bio(&bio))
+		return BLK_QC_T_NONE;
+
 	if (unlikely(blk_queue_enter(q, nowait ?
BLK_MQ_REQ_NOWAIT : 0))) { if (nowait && !blk_queue_dying(q)) bio->bi_status = BLK_STS_AGAIN; @@ -1772,5 +1778,8 @@ int __init blk_dev_init(void) if (bio_crypt_ctx_init() < 0) panic("Failed to allocate mem for bio crypt ctxs\n"); + if (blk_crypto_init() < 0) + panic("Failed to init blk-crypto\n"); + return 0; } diff --git a/block/blk-crypto.c b/block/blk-crypto.c new file mode 100644 index 000000000000..c8f06264a0f5 --- /dev/null +++ b/block/blk-crypto.c @@ -0,0 +1,737 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2019 Google LLC + */ + +/* + * Refer to Documentation/block/inline-encryption.txt for detailed explanation. + */ + +#ifdef pr_fmt +#undef pr_fmt +#endif + +#define pr_fmt(fmt) "blk-crypto: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* Represents a crypto mode supported by blk-crypto */ +struct blk_crypto_mode { + const char *cipher_str; /* crypto API name (for fallback case) */ + size_t keysize; /* key size in bytes */ +}; + +static const struct blk_crypto_mode blk_crypto_modes[] = { + [BLK_ENCRYPTION_MODE_AES_256_XTS] = { + .cipher_str = "xts(aes)", + .keysize = 64, + }, +}; + +static unsigned int num_prealloc_bounce_pg = 32; +module_param(num_prealloc_bounce_pg, uint, 0); +MODULE_PARM_DESC(num_prealloc_bounce_pg, + "Number of preallocated bounce pages for blk-crypto to use during crypto API fallback encryption"); + +#define BLK_CRYPTO_MAX_KEY_SIZE 64 +static int blk_crypto_num_keyslots = 100; +module_param_named(num_keyslots, blk_crypto_num_keyslots, int, 0); +MODULE_PARM_DESC(num_keyslots, + "Number of keyslots for crypto API fallback in blk-crypto."); + +static struct blk_crypto_keyslot { + struct crypto_skcipher *tfm; + enum blk_crypto_mode_num crypto_mode; + u8 key[BLK_CRYPTO_MAX_KEY_SIZE]; + struct crypto_skcipher *tfms[ARRAY_SIZE(blk_crypto_modes)]; +} *blk_crypto_keyslots; + +static struct mutex tfms_lock[ARRAY_SIZE(blk_crypto_modes)]; +static bool tfms_inited[ARRAY_SIZE(blk_crypto_modes)]; + +struct work_mem { + struct work_struct crypto_work; + struct bio *bio; +}; + +/* The following few vars are only used during the crypto API fallback */ +static struct keyslot_manager *blk_crypto_ksm; +static struct workqueue_struct *blk_crypto_wq; +static mempool_t *blk_crypto_page_pool; +static struct kmem_cache *blk_crypto_work_mem_cache; + +bool bio_crypt_swhandled(struct bio *bio) +{ + return bio_has_crypt_ctx(bio) && + bio->bi_crypt_context->processing_ksm == blk_crypto_ksm; +} + +static const u8 zeroes[BLK_CRYPTO_MAX_KEY_SIZE]; +static void evict_keyslot(unsigned int slot) +{ + struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot]; + enum blk_crypto_mode_num crypto_mode = slotp->crypto_mode; + + /* Clear the key in the skcipher */ + crypto_skcipher_setkey(slotp->tfms[crypto_mode], zeroes, + blk_crypto_modes[crypto_mode].keysize); + memzero_explicit(slotp->key, BLK_CRYPTO_MAX_KEY_SIZE); +} + +static int blk_crypto_keyslot_program(void *priv, const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size, + unsigned int slot) +{ + struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot]; + const struct blk_crypto_mode *mode = &blk_crypto_modes[crypto_mode]; + size_t keysize = mode->keysize; + int err; + + if (crypto_mode != slotp->crypto_mode) { + evict_keyslot(slot); + slotp->crypto_mode = crypto_mode; + } + + if (!slotp->tfms[crypto_mode]) + return -ENOMEM; + err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key, keysize); + + if (err) { + evict_keyslot(slot); + 
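+		/* setkey failed: evict_keyslot() above has already cleared
+		 * any stale key material from this slot */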
return err; + } + + memcpy(slotp->key, key, keysize); + + return 0; +} + +static int blk_crypto_keyslot_evict(void *priv, const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size, + unsigned int slot) +{ + evict_keyslot(slot); + return 0; +} + +static int blk_crypto_keyslot_find(void *priv, + const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size_bytes) +{ + int slot; + const size_t keysize = blk_crypto_modes[crypto_mode].keysize; + + for (slot = 0; slot < blk_crypto_num_keyslots; slot++) { + if (blk_crypto_keyslots[slot].crypto_mode == crypto_mode && + !crypto_memneq(blk_crypto_keyslots[slot].key, key, keysize)) + return slot; + } + + return -ENOKEY; +} + +static bool blk_crypto_mode_supported(void *priv, + enum blk_crypto_mode_num crypt_mode, + unsigned int data_unit_size) +{ + /* All blk_crypto_modes are required to have a crypto API fallback. */ + return true; +} + +/* + * The crypto API fallback KSM ops - only used for a bio when it specifies a + * blk_crypto_mode for which we failed to get a keyslot in the device's inline + * encryption hardware (which probably means the device doesn't have inline + * encryption hardware that supports that crypto mode). + */ +static const struct keyslot_mgmt_ll_ops blk_crypto_ksm_ll_ops = { + .keyslot_program = blk_crypto_keyslot_program, + .keyslot_evict = blk_crypto_keyslot_evict, + .keyslot_find = blk_crypto_keyslot_find, + .crypto_mode_supported = blk_crypto_mode_supported, +}; + +static void blk_crypto_encrypt_endio(struct bio *enc_bio) +{ + struct bio *src_bio = enc_bio->bi_private; + int i; + + for (i = 0; i < enc_bio->bi_vcnt; i++) + mempool_free(enc_bio->bi_io_vec[i].bv_page, + blk_crypto_page_pool); + + src_bio->bi_status = enc_bio->bi_status; + + bio_put(enc_bio); + bio_endio(src_bio); +} + +static struct bio *blk_crypto_clone_bio(struct bio *bio_src) +{ + struct bvec_iter iter; + struct bio_vec bv; + struct bio *bio; + + bio = bio_alloc_bioset(GFP_NOIO, bio_segments(bio_src), NULL); + if (!bio) + return NULL; + bio->bi_disk = bio_src->bi_disk; + bio->bi_opf = bio_src->bi_opf; + bio->bi_ioprio = bio_src->bi_ioprio; + bio->bi_write_hint = bio_src->bi_write_hint; + bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector; + bio->bi_iter.bi_size = bio_src->bi_iter.bi_size; + + bio_for_each_segment(bv, bio_src, iter) + bio->bi_io_vec[bio->bi_vcnt++] = bv; + + if (bio_integrity(bio_src) && + bio_integrity_clone(bio, bio_src, GFP_NOIO) < 0) { + bio_put(bio); + return NULL; + } + + bio_clone_blkg_association(bio, bio_src); + blkcg_bio_issue_init(bio); + + return bio; +} + +/* Check that all I/O segments are data unit aligned */ +static int bio_crypt_check_alignment(struct bio *bio) +{ + int data_unit_size = 1 << bio->bi_crypt_context->data_unit_size_bits; + struct bvec_iter iter; + struct bio_vec bv; + + bio_for_each_segment(bv, bio, iter) { + if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size)) + return -EIO; + } + return 0; +} + +static int blk_crypto_alloc_cipher_req(struct bio *src_bio, + struct skcipher_request **ciph_req_ptr, + struct crypto_wait *wait) +{ + int slot; + struct skcipher_request *ciph_req; + struct blk_crypto_keyslot *slotp; + + slot = bio_crypt_get_keyslot(src_bio); + slotp = &blk_crypto_keyslots[slot]; + ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode], + GFP_NOIO); + if (!ciph_req) { + src_bio->bi_status = BLK_STS_RESOURCE; + return -ENOMEM; + } + + skcipher_request_set_callback(ciph_req, + CRYPTO_TFM_REQ_MAY_BACKLOG | + CRYPTO_TFM_REQ_MAY_SLEEP, + 
crypto_req_done, wait); + *ciph_req_ptr = ciph_req; + return 0; +} + +static int blk_crypto_split_bio_if_needed(struct bio **bio_ptr) +{ + struct bio *bio = *bio_ptr; + unsigned int i = 0; + unsigned int num_sectors = 0; + struct bio_vec bv; + struct bvec_iter iter; + + bio_for_each_segment(bv, bio, iter) { + num_sectors += bv.bv_len >> SECTOR_SHIFT; + if (++i == BIO_MAX_PAGES) + break; + } + if (num_sectors < bio_sectors(bio)) { + struct bio *split_bio; + + split_bio = bio_split(bio, num_sectors, GFP_NOIO, NULL); + if (!split_bio) { + bio->bi_status = BLK_STS_RESOURCE; + return -ENOMEM; + } + bio_chain(split_bio, bio); + generic_make_request(bio); + *bio_ptr = split_bio; + } + return 0; +} + +/* + * The crypto API fallback's encryption routine. + * Allocate a bounce bio for encryption, encrypt the input bio using + * crypto API, and replace *bio_ptr with the bounce bio. May split input + * bio if it's too large. + */ +static int blk_crypto_encrypt_bio(struct bio **bio_ptr) +{ + struct bio *src_bio; + struct skcipher_request *ciph_req = NULL; + DECLARE_CRYPTO_WAIT(wait); + int err = 0; + u64 curr_dun; + union { + __le64 dun; + u8 bytes[16]; + } iv; + struct scatterlist src, dst; + struct bio *enc_bio; + struct bio_vec *enc_bvec; + int i, j; + int data_unit_size; + + /* Split the bio if it's too big for single page bvec */ + err = blk_crypto_split_bio_if_needed(bio_ptr); + if (err) + return err; + + src_bio = *bio_ptr; + data_unit_size = 1 << src_bio->bi_crypt_context->data_unit_size_bits; + + /* Allocate bounce bio for encryption */ + enc_bio = blk_crypto_clone_bio(src_bio); + if (!enc_bio) { + src_bio->bi_status = BLK_STS_RESOURCE; + return -ENOMEM; + } + + /* + * Use the crypto API fallback keyslot manager to get a crypto_skcipher + * for the algorithm and key specified for this bio. 
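+	 * (These fallback keyslots are purely a software construct; their
+	 * count is set by the num_keyslots module parameter defined above.)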
+ */ + err = bio_crypt_ctx_acquire_keyslot(src_bio, blk_crypto_ksm); + if (err) { + src_bio->bi_status = BLK_STS_IOERR; + goto out_put_enc_bio; + } + + /* and then allocate an skcipher_request for it */ + err = blk_crypto_alloc_cipher_req(src_bio, &ciph_req, &wait); + if (err) + goto out_release_keyslot; + + curr_dun = bio_crypt_data_unit_num(src_bio); + sg_init_table(&src, 1); + sg_init_table(&dst, 1); + + skcipher_request_set_crypt(ciph_req, &src, &dst, + data_unit_size, iv.bytes); + + /* Encrypt each page in the bounce bio */ + for (i = 0, enc_bvec = enc_bio->bi_io_vec; i < enc_bio->bi_vcnt; + enc_bvec++, i++) { + struct page *plaintext_page = enc_bvec->bv_page; + struct page *ciphertext_page = + mempool_alloc(blk_crypto_page_pool, GFP_NOIO); + + enc_bvec->bv_page = ciphertext_page; + + if (!ciphertext_page) { + src_bio->bi_status = BLK_STS_RESOURCE; + err = -ENOMEM; + goto out_free_bounce_pages; + } + + sg_set_page(&src, plaintext_page, data_unit_size, + enc_bvec->bv_offset); + sg_set_page(&dst, ciphertext_page, data_unit_size, + enc_bvec->bv_offset); + + /* Encrypt each data unit in this page */ + for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) { + memset(&iv, 0, sizeof(iv)); + iv.dun = cpu_to_le64(curr_dun); + + err = crypto_wait_req(crypto_skcipher_encrypt(ciph_req), + &wait); + if (err) { + i++; + src_bio->bi_status = BLK_STS_RESOURCE; + goto out_free_bounce_pages; + } + curr_dun++; + src.offset += data_unit_size; + dst.offset += data_unit_size; + } + } + + enc_bio->bi_private = src_bio; + enc_bio->bi_end_io = blk_crypto_encrypt_endio; + *bio_ptr = enc_bio; + + enc_bio = NULL; + err = 0; + goto out_free_ciph_req; + +out_free_bounce_pages: + while (i > 0) + mempool_free(enc_bio->bi_io_vec[--i].bv_page, + blk_crypto_page_pool); +out_free_ciph_req: + skcipher_request_free(ciph_req); +out_release_keyslot: + bio_crypt_ctx_release_keyslot(src_bio); +out_put_enc_bio: + if (enc_bio) + bio_put(enc_bio); + + return err; +} + +/* + * The crypto API fallback's main decryption routine. + * Decrypts input bio in place. + */ +static void blk_crypto_decrypt_bio(struct work_struct *w) +{ + struct work_mem *work_mem = + container_of(w, struct work_mem, crypto_work); + struct bio *bio = work_mem->bio; + struct skcipher_request *ciph_req = NULL; + DECLARE_CRYPTO_WAIT(wait); + struct bio_vec bv; + struct bvec_iter iter; + u64 curr_dun; + union { + __le64 dun; + u8 bytes[16]; + } iv; + struct scatterlist sg; + int data_unit_size = 1 << bio->bi_crypt_context->data_unit_size_bits; + int i; + int err; + + /* + * Use the crypto API fallback keyslot manager to get a crypto_skcipher + * for the algorithm and key specified for this bio. 
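+	 * This decryption runs asynchronously in process context on
+	 * blk_crypto_wq (it is queued by blk_crypto_queue_decrypt_bio below),
+	 * so sleeping in crypto_wait_req() is fine here.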
+ */
+	if (bio_crypt_ctx_acquire_keyslot(bio, blk_crypto_ksm)) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		goto out_no_keyslot;
+	}
+
+	/* and then allocate an skcipher_request for it */
+	err = blk_crypto_alloc_cipher_req(bio, &ciph_req, &wait);
+	if (err)
+		goto out;
+
+	curr_dun = bio_crypt_sw_data_unit_num(bio);
+	sg_init_table(&sg, 1);
+	skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size,
+				   iv.bytes);
+
+	/* Decrypt each segment in the bio */
+	__bio_for_each_segment(bv, bio, iter,
+			       bio->bi_crypt_context->crypt_iter) {
+		struct page *page = bv.bv_page;
+
+		sg_set_page(&sg, page, data_unit_size, bv.bv_offset);
+
+		/* Decrypt each data unit in the segment */
+		for (i = 0; i < bv.bv_len; i += data_unit_size) {
+			memset(&iv, 0, sizeof(iv));
+			iv.dun = cpu_to_le64(curr_dun);
+			if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req),
+					    &wait)) {
+				bio->bi_status = BLK_STS_IOERR;
+				goto out;
+			}
+			curr_dun++;
+			sg.offset += data_unit_size;
+		}
+	}
+
+out:
+	skcipher_request_free(ciph_req);
+	bio_crypt_ctx_release_keyslot(bio);
+out_no_keyslot:
+	kmem_cache_free(blk_crypto_work_mem_cache, work_mem);
+	bio_endio(bio);
+}
+
+/* Queue bio for decryption */
+static void blk_crypto_queue_decrypt_bio(struct bio *bio)
+{
+	struct work_mem *work_mem =
+		kmem_cache_zalloc(blk_crypto_work_mem_cache, GFP_ATOMIC);
+
+	if (!work_mem) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		bio_endio(bio);
+		return;
+	}
+
+	INIT_WORK(&work_mem->crypto_work, blk_crypto_decrypt_bio);
+	work_mem->bio = bio;
+	queue_work(blk_crypto_wq, &work_mem->crypto_work);
+}
+
+/**
+ * blk_crypto_submit_bio - handle submitting bio for inline encryption
+ *
+ * @bio_ptr: pointer to original bio pointer
+ *
+ * If the bio doesn't have inline encryption enabled or the submitter already
+ * specified a keyslot for the target device, do nothing. Else, a raw key must
+ * have been provided, so acquire a device keyslot for it if supported. Else,
+ * use the crypto API fallback.
+ *
+ * When the crypto API fallback is used for encryption, blk-crypto may choose
+ * to split the bio into 2 - the first one that will continue to be processed
+ * and the second one that will be resubmitted via generic_make_request.
+ * A bounce bio will be allocated to encrypt the contents of the
+ * aforementioned "first one", and *bio_ptr will be updated to this bounce
+ * bio.
+ *
+ * Return: 0 if bio submission should continue; nonzero if bio_endio() was
+ * already called so bio submission should abort.
+ */
+int blk_crypto_submit_bio(struct bio **bio_ptr)
+{
+	struct bio *bio = *bio_ptr;
+	struct request_queue *q;
+	int err;
+	struct bio_crypt_ctx *crypt_ctx;
+
+	if (!bio_has_crypt_ctx(bio) || !bio_has_data(bio))
+		return 0;
+
+	/*
+	 * When a read bio is marked for sw decryption, its bi_iter is saved
+	 * so that when we decrypt the bio later, we know what part of it was
+	 * marked for sw decryption (when the bio is passed down after
+	 * blk_crypto_submit_bio, it may be split or advanced, so we cannot
+	 * rely on the bi_iter while decrypting in blk_crypto_endio).
+	 */
+	if (bio_crypt_swhandled(bio))
+		return 0;
+
+	err = bio_crypt_check_alignment(bio);
+	if (err)
+		goto out;
+
+	crypt_ctx = bio->bi_crypt_context;
+	q = bio->bi_disk->queue;
+
+	if (bio_crypt_has_keyslot(bio)) {
+		/* Key already programmed into device? */
+		if (q->ksm == crypt_ctx->processing_ksm)
+			return 0;
+
+		/* Nope, release the existing keyslot. */
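+		/* (This is Case 3 from the documentation: the context is
+		 * currently programmed into some other KSM, so release it
+		 * here and re-program it below.) */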
+		bio_crypt_ctx_release_keyslot(bio);
+	}
+
+	/* Get device keyslot if supported */
+	if (q->ksm) {
+		err = bio_crypt_ctx_acquire_keyslot(bio, q->ksm);
+		if (!err)
+			return 0;
+
+		pr_warn_once("Failed to acquire keyslot for %s (err=%d). Falling back to crypto API.\n",
+			     bio->bi_disk->disk_name, err);
+	}
+
+	/* Fallback to crypto API */
+	if (!READ_ONCE(tfms_inited[bio->bi_crypt_context->crypto_mode])) {
+		err = -EIO;
+		bio->bi_status = BLK_STS_IOERR;
+		goto out;
+	}
+
+	if (bio_data_dir(bio) == WRITE) {
+		/* Encrypt the data now */
+		err = blk_crypto_encrypt_bio(bio_ptr);
+		if (err)
+			goto out;
+	} else {
+		/* Mark bio as swhandled */
+		bio->bi_crypt_context->processing_ksm = blk_crypto_ksm;
+		bio->bi_crypt_context->crypt_iter = bio->bi_iter;
+		bio->bi_crypt_context->sw_data_unit_num =
+			bio->bi_crypt_context->data_unit_num;
+	}
+	return 0;
+out:
+	bio_endio(*bio_ptr);
+	return err;
+}
+
+/**
+ * blk_crypto_endio - clean up bio w.r.t inline encryption during bio_endio
+ *
+ * @bio - the bio to clean up
+ *
+ * If blk_crypto_submit_bio decided to fall back to the crypto API for this
+ * bio, we queue the bio for decryption into a workqueue and return false,
+ * and call bio_endio(bio) at a later time (after the bio has been decrypted).
+ *
+ * If the bio is not to be decrypted by the crypto API, this function releases
+ * the reference to the keyslot that blk_crypto_submit_bio got.
+ *
+ * Return: true if bio_endio should continue; false otherwise (bio_endio will
+ * be called again when bio has been decrypted).
+ */
+bool blk_crypto_endio(struct bio *bio)
+{
+	if (!bio_has_crypt_ctx(bio))
+		return true;
+
+	if (bio_crypt_swhandled(bio)) {
+		/*
+		 * The only bios that are swhandled when they reach here
+		 * are those with bio_data_dir(bio) == READ, since WRITE
+		 * bios that are encrypted by the crypto API fallback are
+		 * handled by blk_crypto_encrypt_endio.
+		 */
+
+		/* If there was an IO error, don't decrypt. */
+		if (bio->bi_status)
+			return true;
+
+		blk_crypto_queue_decrypt_bio(bio);
+		return false;
+	}
+
+	if (bio_has_crypt_ctx(bio) && bio_crypt_has_keyslot(bio))
+		bio_crypt_ctx_release_keyslot(bio);
+
+	return true;
+}
+
+/*
+ * blk_crypto_mode_alloc_ciphers() - Allocate skciphers for a
+ *				     mode_num for all keyslots
+ * @mode_num - the blk_crypto_mode we want to allocate ciphers for.
+ *
+ * Upper layers (filesystems) should call this function to ensure that the
+ * crypto API fallback has transforms for this algorithm, if they become
+ * necessary.
+ */
+int blk_crypto_mode_alloc_ciphers(enum blk_crypto_mode_num mode_num)
+{
+	struct blk_crypto_keyslot *slotp;
+	int err = 0;
+	int i;
+
+	/* Fast path */
+	if (likely(READ_ONCE(tfms_inited[mode_num]))) {
+		/*
+		 * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
+		 * for each i are visible before we try to access them.
+		 */
+		smp_rmb();
+		return 0;
+	}
+
+	mutex_lock(&tfms_lock[mode_num]);
+	if (likely(tfms_inited[mode_num]))
+		goto out;
+
+	for (i = 0; i < blk_crypto_num_keyslots; i++) {
+		slotp = &blk_crypto_keyslots[i];
+		slotp->tfms[mode_num] = crypto_alloc_skcipher(
+					blk_crypto_modes[mode_num].cipher_str,
+					0, 0);
+		if (IS_ERR(slotp->tfms[mode_num])) {
+			err = PTR_ERR(slotp->tfms[mode_num]);
+			slotp->tfms[mode_num] = NULL;
+			goto out_free_tfms;
+		}
+
+		crypto_skcipher_set_flags(slotp->tfms[mode_num],
+					  CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
+	}
+
+	/*
+	 * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
+	 * for each i are visible before we set tfms_inited[mode_num].
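+	 * (This smp_wmb() pairs with the smp_rmb() in the fast path above.)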
+ */ + smp_wmb(); + WRITE_ONCE(tfms_inited[mode_num], true); + goto out; + +out_free_tfms: + for (i = 0; i < blk_crypto_num_keyslots; i++) { + slotp = &blk_crypto_keyslots[i]; + crypto_free_skcipher(slotp->tfms[mode_num]); + slotp->tfms[mode_num] = NULL; + } +out: + mutex_unlock(&tfms_lock[mode_num]); + return err; +} +EXPORT_SYMBOL(blk_crypto_mode_alloc_ciphers); + +int __init blk_crypto_init(void) +{ + int i; + int err = -ENOMEM; + + blk_crypto_ksm = keyslot_manager_create(blk_crypto_num_keyslots, + &blk_crypto_ksm_ll_ops, + NULL); + if (!blk_crypto_ksm) + goto out; + + blk_crypto_wq = alloc_workqueue("blk_crypto_wq", + WQ_UNBOUND | WQ_HIGHPRI | + WQ_MEM_RECLAIM, + num_online_cpus()); + if (!blk_crypto_wq) + goto out_free_ksm; + + blk_crypto_keyslots = kcalloc(blk_crypto_num_keyslots, + sizeof(*blk_crypto_keyslots), + GFP_KERNEL); + if (!blk_crypto_keyslots) + goto out_free_workqueue; + + for (i = 0; i < ARRAY_SIZE(blk_crypto_modes); i++) + mutex_init(&tfms_lock[i]); + + blk_crypto_page_pool = + mempool_create_page_pool(num_prealloc_bounce_pg, 0); + if (!blk_crypto_page_pool) + goto out_free_keyslots; + + blk_crypto_work_mem_cache = KMEM_CACHE(work_mem, SLAB_RECLAIM_ACCOUNT); + if (!blk_crypto_work_mem_cache) + goto out_free_page_pool; + + return 0; + +out_free_page_pool: + mempool_destroy(blk_crypto_page_pool); + blk_crypto_page_pool = NULL; +out_free_keyslots: + kzfree(blk_crypto_keyslots); + blk_crypto_keyslots = NULL; +out_free_workqueue: + destroy_workqueue(blk_crypto_wq); + blk_crypto_wq = NULL; +out_free_ksm: + keyslot_manager_destroy(blk_crypto_ksm); + blk_crypto_ksm = NULL; +out: + pr_warn("No memory for blk-crypto crypto API fallback."); + return err; +} diff --git a/include/linux/bio-crypt-ctx.h b/include/linux/bio-crypt-ctx.h index ebe456289338..b9e0515143a4 100644 --- a/include/linux/bio-crypt-ctx.h +++ b/include/linux/bio-crypt-ctx.h @@ -60,6 +60,8 @@ static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes) } } +extern bool bio_crypt_swhandled(struct bio *bio); + static inline bool bio_crypt_has_keyslot(struct bio *bio) { return bio->bi_crypt_context->keyslot >= 0; @@ -177,6 +179,11 @@ static inline void bio_crypt_set_ctx(struct bio *bio, unsigned int dun_bits, gfp_t gfp_mask) { } +static inline bool bio_crypt_swhandled(struct bio *bio) +{ + return false; +} + static inline void bio_set_data_unit_num(struct bio *bio, u64 dun) { } static inline bool bio_crypt_has_keyslot(struct bio *bio) diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h new file mode 100644 index 000000000000..42dbba33598f --- /dev/null +++ b/include/linux/blk-crypto.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2019 Google LLC + */ + +#ifndef __LINUX_BLK_CRYPTO_H +#define __LINUX_BLK_CRYPTO_H + +#include +#include + +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + +int blk_crypto_init(void); + +int blk_crypto_submit_bio(struct bio **bio_ptr); + +bool blk_crypto_endio(struct bio *bio); + +int blk_crypto_mode_alloc_ciphers(enum blk_crypto_mode_num mode_num); + +#else /* CONFIG_BLK_INLINE_ENCRYPTION */ + +static inline int blk_crypto_init(void) +{ + return 0; +} + +static inline int blk_crypto_submit_bio(struct bio **bio_ptr) +{ + return 0; +} + +static inline bool blk_crypto_endio(struct bio *bio) +{ + return true; +} + +static inline int +blk_crypto_mode_alloc_ciphers(enum blk_crypto_mode_num mode_num) +{ + return -EOPNOTSUPP; +} + +#endif /* CONFIG_BLK_INLINE_ENCRYPTION */ + +#endif /* __LINUX_BLK_CRYPTO_H */ From patchwork Wed Aug 21 07:57:10 2019 
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 11105921 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5A37A184E for ; Wed, 21 Aug 2019 07:57:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 386E52332A for ; Wed, 21 Aug 2019 07:57:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="OWdD2E2n" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727038AbfHUH5c (ORCPT ); Wed, 21 Aug 2019 03:57:32 -0400 Received: from mail-pf1-f202.google.com ([209.85.210.202]:53521 "EHLO mail-pf1-f202.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727018AbfHUH53 (ORCPT ); Wed, 21 Aug 2019 03:57:29 -0400 Received: by mail-pf1-f202.google.com with SMTP id 191so1032881pfy.20 for ; Wed, 21 Aug 2019 00:57:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=dlCXskObrs6I/GXMJskEtfG45JphLBZCR9QbVAGGe44=; b=OWdD2E2n8s4ALffo3R1tzDMPwr2B5uSVYaJuiKll7w8gonewPnWws7SxvPXctV3dKj d4NryU1gOd4/TInj+5HouqXzVGnkIaScOI79BCHHIwYOQJAazHl9LWl6IAYsJg0yG0vk 6/+I+yy5au+zlKKER5Q5i2hGLWNH0dKUeARswHYrV0EvXoaSJBff0WaKQXGAgicfTRcu u2SWTU9hQZYnltD9guTJ9/mi9MtxP3qFQ15aQfPRxKoDScnQ33dqO8/vId32KKsWcvCm itC/LKsY3D55/fpraqO+eiw2N886xR3hlqmkTPBNjnrA55JHQe8uM78txKyqsmnJIini vV7Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=dlCXskObrs6I/GXMJskEtfG45JphLBZCR9QbVAGGe44=; b=FS8RUzV1wwox7DBBpFFwNFRZR3B2gm9yWlqqVEKllw7gS6BFtO/hZPVb6a9MtX1PGu 8sTdQwrY+vdxbQ5KDjVf3zlmQjhTmZ/d9VA9Sr5GyklO0XvmM655Rkfm2O0Cc94KF4vt HCa0nSeugSmD/zL3seWQ+d/wjZhk+eYwHxo+5NPuRNUUD2qufZbMuoC/BkC3EaA/wTYe SVjb6+f14UdZ85Fj06YhuDyrO7i5v+Im2VLWurEn3jlHTGuRgMcuYvabLAbOmwC9Wf0A 2KB4x+CrycVoWnthdY9ahNBtPbAmFNG52WF4iXqIk2pGKgiM63gaIsN/BTvykY/pZ/XE OIvg== X-Gm-Message-State: APjAAAU4X6vxFCfDWoBriPGNmD8sqGdBME9/Ew0szNUCJIZZXqxmI1lH tX5zz5eSioo7EV2kOHB+GZHyvgM9Nws= X-Google-Smtp-Source: APXvYqy0AY04z7iMj58E06vhJi4bzUYZWHzmdURFvAJpZ7vmflIw1KAf1t//phsYx4q9FlZ0Fu2x03gIyUs= X-Received: by 2002:a63:2157:: with SMTP id s23mr4828335pgm.167.1566374248499; Wed, 21 Aug 2019 00:57:28 -0700 (PDT) Date: Wed, 21 Aug 2019 00:57:10 -0700 In-Reply-To: <20190821075714.65140-1-satyat@google.com> Message-Id: <20190821075714.65140-5-satyat@google.com> Mime-Version: 1.0 References: <20190821075714.65140-1-satyat@google.com> X-Mailer: git-send-email 2.23.0.rc1.153.gdeed80330f-goog Subject: [PATCH v4 4/8] scsi: ufs: UFS driver v2.1 spec crypto additions From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-fscrypt-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org Add the crypto registers and structs defined in v2.1 of the JEDEC UFSHCI specification in preparation to add support for inline encryption to UFS. 
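As a reference for how these definitions will be consumed (a sketch only; the
decoding below is what the UFS crypto support added later in this series does,
and the names follow those patches), the capability register and the start of
the crypto configuration array are obtained as:

	union ufs_crypto_capabilities ccap;

	ccap.reg_val = cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
	/* the crypto cfg array starts at config_array_ptr * 0x100 */
	crypto_cfg_register = (u32)ccap.config_array_ptr * 0x100;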
Signed-off-by: Satya Tangirala --- drivers/scsi/ufs/ufshcd.c | 2 ++ drivers/scsi/ufs/ufshcd.h | 5 +++ drivers/scsi/ufs/ufshci.h | 67 +++++++++++++++++++++++++++++++++++++-- 3 files changed, 72 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index e274053109d0..212f7653c4c5 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -4722,6 +4722,8 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) case OCS_MISMATCH_RESP_UPIU_SIZE: case OCS_PEER_COMM_FAILURE: case OCS_FATAL_ERROR: + case OCS_INVALID_CRYPTO_CONFIG: + case OCS_GENERAL_CRYPTO_ERROR: default: result |= DID_ERROR << 16; dev_err(hba->dev, diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 994d73d03207..10b5cd26a020 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -692,6 +692,11 @@ struct ufs_hba { * the performance of ongoing read/write operations. */ #define UFSHCD_CAP_KEEP_AUTO_BKOPS_ENABLED_EXCEPT_SUSPEND (1 << 5) + /* + * This capability allows the host controller driver to use the + * inline crypto engine, if it is present + */ +#define UFSHCD_CAP_CRYPTO (1 << 6) struct devfreq *devfreq; struct ufs_clk_scaling clk_scaling; diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h index dbb75cd28dc8..291f6c3e79db 100644 --- a/drivers/scsi/ufs/ufshci.h +++ b/drivers/scsi/ufs/ufshci.h @@ -90,6 +90,7 @@ enum { MASK_64_ADDRESSING_SUPPORT = 0x01000000, MASK_OUT_OF_ORDER_DATA_DELIVERY_SUPPORT = 0x02000000, MASK_UIC_DME_TEST_MODE_SUPPORT = 0x04000000, + MASK_CRYPTO_SUPPORT = 0x10000000, }; #define UFS_MASK(mask, offset) ((mask) << (offset)) @@ -143,6 +144,7 @@ enum { #define DEVICE_FATAL_ERROR 0x800 #define CONTROLLER_FATAL_ERROR 0x10000 #define SYSTEM_BUS_FATAL_ERROR 0x20000 +#define CRYPTO_ENGINE_FATAL_ERROR 0x40000 #define UFSHCD_UIC_HIBERN8_MASK (UIC_HIBERNATE_ENTER |\ UIC_HIBERNATE_EXIT) @@ -155,11 +157,13 @@ enum { #define UFSHCD_ERROR_MASK (UIC_ERROR |\ DEVICE_FATAL_ERROR |\ CONTROLLER_FATAL_ERROR |\ - SYSTEM_BUS_FATAL_ERROR) + SYSTEM_BUS_FATAL_ERROR |\ + CRYPTO_ENGINE_FATAL_ERROR) #define INT_FATAL_ERRORS (DEVICE_FATAL_ERROR |\ CONTROLLER_FATAL_ERROR |\ - SYSTEM_BUS_FATAL_ERROR) + SYSTEM_BUS_FATAL_ERROR |\ + CRYPTO_ENGINE_FATAL_ERROR) /* HCS - Host Controller Status 30h */ #define DEVICE_PRESENT 0x1 @@ -318,6 +322,61 @@ enum { INTERRUPT_MASK_ALL_VER_21 = 0x71FFF, }; +/* CCAP - Crypto Capability 100h */ +union ufs_crypto_capabilities { + __le32 reg_val; + struct { + u8 num_crypto_cap; + u8 config_count; + u8 reserved; + u8 config_array_ptr; + }; +}; + +enum ufs_crypto_key_size { + UFS_CRYPTO_KEY_SIZE_INVALID = 0x0, + UFS_CRYPTO_KEY_SIZE_128 = 0x1, + UFS_CRYPTO_KEY_SIZE_192 = 0x2, + UFS_CRYPTO_KEY_SIZE_256 = 0x3, + UFS_CRYPTO_KEY_SIZE_512 = 0x4, +}; + +enum ufs_crypto_alg { + UFS_CRYPTO_ALG_AES_XTS = 0x0, + UFS_CRYPTO_ALG_BITLOCKER_AES_CBC = 0x1, + UFS_CRYPTO_ALG_AES_ECB = 0x2, + UFS_CRYPTO_ALG_ESSIV_AES_CBC = 0x3, +}; + +/* x-CRYPTOCAP - Crypto Capability X */ +union ufs_crypto_cap_entry { + __le32 reg_val; + struct { + u8 algorithm_id; + u8 sdus_mask; /* Supported data unit size mask */ + u8 key_size; + u8 reserved; + }; +}; + +#define UFS_CRYPTO_CONFIGURATION_ENABLE (1 << 7) +#define UFS_CRYPTO_KEY_MAX_SIZE 64 +/* x-CRYPTOCFG - Crypto Configuration X */ +union ufs_crypto_cfg_entry { + __le32 reg_val[32]; + struct { + u8 crypto_key[UFS_CRYPTO_KEY_MAX_SIZE]; + u8 data_unit_size; + u8 crypto_cap_idx; + u8 reserved_1; + u8 config_enable; + u8 reserved_multi_host; + u8 reserved_2; + u8 
vsb[2]; + u8 reserved_3[56]; + }; +}; + /* * Request Descriptor Definitions */ @@ -339,6 +398,7 @@ enum { UTP_NATIVE_UFS_COMMAND = 0x10000000, UTP_DEVICE_MANAGEMENT_FUNCTION = 0x20000000, UTP_REQ_DESC_INT_CMD = 0x01000000, + UTP_REQ_DESC_CRYPTO_ENABLE_CMD = 0x00800000, }; /* UTP Transfer Request Data Direction (DD) */ @@ -358,6 +418,9 @@ enum { OCS_PEER_COMM_FAILURE = 0x5, OCS_ABORTED = 0x6, OCS_FATAL_ERROR = 0x7, + OCS_DEVICE_FATAL_ERROR = 0x8, + OCS_INVALID_CRYPTO_CONFIG = 0x9, + OCS_GENERAL_CRYPTO_ERROR = 0xA, OCS_INVALID_COMMAND_STATUS = 0x0F, MASK_OCS = 0x0F, }; From patchwork Wed Aug 21 07:57:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 11105923 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8F7C214F7 for ; Wed, 21 Aug 2019 07:57:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 63F5C2332A for ; Wed, 21 Aug 2019 07:57:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="BCi2mC+2" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727087AbfHUH5f (ORCPT ); Wed, 21 Aug 2019 03:57:35 -0400 Received: from mail-pf1-f201.google.com ([209.85.210.201]:50815 "EHLO mail-pf1-f201.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727041AbfHUH5c (ORCPT ); Wed, 21 Aug 2019 03:57:32 -0400 Received: by mail-pf1-f201.google.com with SMTP id b21so1040167pfb.17 for ; Wed, 21 Aug 2019 00:57:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=fA2UXmDikRvcLZrf/BqE7ho8JRRlilYy8SGfgnGYbi4=; b=BCi2mC+20w5N+zNpYWAqnl3OLa7sGF9NYqQymsw3pKM27U9g1dk897V+yEt8tkuAe6 ceAu6GgylS/c2+MiabcQ7GQUpw3UKjB1gH6Bv9nFni+AkgT9HoASdjJ2dm3v+yL2gmrQ Y855BGce0NfeBx5c7O/a8hefODO/BqchfBIkF5QHWsPRNP0uTrv9pZJiVxVrBFrBcWjV 6eO0vFziVEbaRJaZF5OfhPsCCYXdiaYB7rD6TrEaUciu6bN+kgsRROH9Vf5wMiPP1yyC AoHg93D45k0kcNrmb5i8APmlvDXmOVoGeQ63PZoLFxIY9FBW7X/LClh/99v5EXYqjkKu qC5g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=fA2UXmDikRvcLZrf/BqE7ho8JRRlilYy8SGfgnGYbi4=; b=BsSi+FMngXS0yS9UsbpE8dYhiEUWrEwCFtjbvzSUGHhFaE/H1pyFeuXrxs6NMHL/94 N472n6C5nQVimdAEQnDg39z2EQDezLeD1+2tB37aJ8zUrSyAq1LBttmOJcBxbX8t6bdq 2K13GAivDziPaqKDmTY8Rkw5Lu8O2REiZDMs2vY4pFVIkaX0uczJTVx65M2tUo+ggi5+ p4frU3vSUzEkEqCmxAqnSmh6CbgEJbcoLBTxW/oL7x5Qm4yxklfd8Mtvr10yNkcTiJho seymbRTf/PYd0LZd50W4pAeTMBI9PdLGSIYplfS7bBuDmP+VyaHHp2w/xBznwRzX51N1 J59Q== X-Gm-Message-State: APjAAAW5If2ILUhksJvZkgGH+RTKoy9Z1GhKpq5+135xL/IR/+Ej6xlG BbWrspOrGd6DN01ZBZ6zyMw/mGzdHQI= X-Google-Smtp-Source: APXvYqzC8CoZweF87ZvQlp1cKwYdF56hZv5lmzuFuV0i/0ehwOjdV1EfGnbFYblwutaQqJebzYMZrOtnRo8= X-Received: by 2002:a65:43c2:: with SMTP id n2mr28366337pgp.110.1566374250947; Wed, 21 Aug 2019 00:57:30 -0700 (PDT) Date: Wed, 21 Aug 2019 00:57:11 -0700 In-Reply-To: <20190821075714.65140-1-satyat@google.com> Message-Id: <20190821075714.65140-6-satyat@google.com> Mime-Version: 1.0 References: <20190821075714.65140-1-satyat@google.com> X-Mailer: git-send-email 2.23.0.rc1.153.gdeed80330f-goog Subject: [PATCH v4 5/8] scsi: ufs: UFS crypto API From: Satya 
Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-fscrypt-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org Introduce functions to manipulate UFS inline encryption hardware in line with the JEDEC UFSHCI v2.1 specification and to work with the block keyslot manager. Signed-off-by: Satya Tangirala --- drivers/scsi/ufs/Kconfig | 10 + drivers/scsi/ufs/Makefile | 1 + drivers/scsi/ufs/ufshcd-crypto.c | 429 +++++++++++++++++++++++++++++++ drivers/scsi/ufs/ufshcd-crypto.h | 86 +++++++ drivers/scsi/ufs/ufshcd.h | 18 ++ 5 files changed, 544 insertions(+) create mode 100644 drivers/scsi/ufs/ufshcd-crypto.c create mode 100644 drivers/scsi/ufs/ufshcd-crypto.h diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig index 0b845ab7c3bf..861aabfe791b 100644 --- a/drivers/scsi/ufs/Kconfig +++ b/drivers/scsi/ufs/Kconfig @@ -150,3 +150,13 @@ config SCSI_UFS_BSG Select this if you need a bsg device node for your UFS controller. If unsure, say N. + +config SCSI_UFS_CRYPTO + bool "UFS Crypto Engine Support" + depends on SCSI_UFSHCD && BLK_INLINE_ENCRYPTION + help + Enable Crypto Engine Support in UFS. + Enabling this makes it possible for the kernel to use the crypto + capabilities of the UFS device (if present) to perform crypto + operations on data being transferred to/from the device. + diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile index 2a9097939bcb..094c39989a37 100644 --- a/drivers/scsi/ufs/Makefile +++ b/drivers/scsi/ufs/Makefile @@ -11,3 +11,4 @@ obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o obj-$(CONFIG_SCSI_UFS_MEDIATEK) += ufs-mediatek.o +ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c new file mode 100644 index 000000000000..c069a75b245f --- /dev/null +++ b/drivers/scsi/ufs/ufshcd-crypto.c @@ -0,0 +1,429 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2019 Google LLC + */ + +#include + +#include "ufshcd.h" +#include "ufshcd-crypto.h" + +static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx) +{ + return cap_idx < hba->crypto_capabilities.num_crypto_cap; +} + +static u8 get_data_unit_size_mask(unsigned int data_unit_size) +{ + if (data_unit_size < 512 || data_unit_size > 65536 || + !is_power_of_2(data_unit_size)) + return 0; + + return data_unit_size / 512; +} + +static size_t get_keysize_bytes(enum ufs_crypto_key_size size) +{ + switch (size) { + case UFS_CRYPTO_KEY_SIZE_128: return 16; + case UFS_CRYPTO_KEY_SIZE_192: return 24; + case UFS_CRYPTO_KEY_SIZE_256: return 32; + case UFS_CRYPTO_KEY_SIZE_512: return 64; + default: return 0; + } +} + +static int ufshcd_crypto_cap_find(void *hba_p, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + struct ufs_hba *hba = hba_p; + enum ufs_crypto_alg ufs_alg; + u8 data_unit_mask; + int cap_idx; + enum ufs_crypto_key_size ufs_key_size; + union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array; + + if (!ufshcd_hba_is_crypto_supported(hba)) + return -EINVAL; + + switch (crypto_mode) { + case BLK_ENCRYPTION_MODE_AES_256_XTS: + ufs_alg = UFS_CRYPTO_ALG_AES_XTS; + ufs_key_size = UFS_CRYPTO_KEY_SIZE_256; + break; + /* + * case 
BLK_CRYPTO_ALG_BITLOCKER_AES_CBC: + * ufs_alg = UFS_CRYPTO_ALG_BITLOCKER_AES_CBC; + * break; + * case BLK_CRYPTO_ALG_AES_ECB: + * ufs_alg = UFS_CRYPTO_ALG_AES_ECB; + * break; + * case BLK_CRYPTO_ALG_ESSIV_AES_CBC: + * ufs_alg = UFS_CRYPTO_ALG_ESSIV_AES_CBC; + * break; + */ + default: return -EINVAL; + } + + data_unit_mask = get_data_unit_size_mask(data_unit_size); + + for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap; + cap_idx++) { + if (ccap_array[cap_idx].algorithm_id == ufs_alg && + (ccap_array[cap_idx].sdus_mask & data_unit_mask) && + ccap_array[cap_idx].key_size == ufs_key_size) + return cap_idx; + } + + return -EINVAL; +} + +/** + * ufshcd_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry + * + * Writes the key with the appropriate format - for AES_XTS, + * the first half of the key is copied as is, the second half is + * copied with an offset halfway into the cfg->crypto_key array. + * For the other supported crypto algs, the key is just copied. + * + * @cfg: The crypto config to write to + * @key: The key to write + * @cap: The crypto capability (which specifies the crypto alg and key size) + * + * Returns 0 on success, or -EINVAL + */ +static int ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg, + const u8 *key, + union ufs_crypto_cap_entry cap) +{ + size_t key_size_bytes = get_keysize_bytes(cap.key_size); + + if (key_size_bytes == 0) + return -EINVAL; + + switch (cap.algorithm_id) { + case UFS_CRYPTO_ALG_AES_XTS: + key_size_bytes *= 2; + if (key_size_bytes > UFS_CRYPTO_KEY_MAX_SIZE) + return -EINVAL; + + memcpy(cfg->crypto_key, key, key_size_bytes/2); + memcpy(cfg->crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2, + key + key_size_bytes/2, key_size_bytes/2); + return 0; + case UFS_CRYPTO_ALG_BITLOCKER_AES_CBC: // fallthrough + case UFS_CRYPTO_ALG_AES_ECB: // fallthrough + case UFS_CRYPTO_ALG_ESSIV_AES_CBC: + memcpy(cfg->crypto_key, key, key_size_bytes); + return 0; + } + + return -EINVAL; +} + +static void program_key(struct ufs_hba *hba, + const union ufs_crypto_cfg_entry *cfg, + int slot) +{ + int i; + u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg); + + /* Clear the dword 16 */ + ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0])); + /* Ensure that CFGE is cleared before programming the key */ + wmb(); + for (i = 0; i < 16; i++) { + ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[i]), + slot_offset + i * sizeof(cfg->reg_val[0])); + /* Spec says each dword in key must be written sequentially */ + wmb(); + } + /* Write dword 17 */ + ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[17]), + slot_offset + 17 * sizeof(cfg->reg_val[0])); + /* Dword 16 must be written last */ + wmb(); + /* Write dword 16 */ + ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[16]), + slot_offset + 16 * sizeof(cfg->reg_val[0])); + wmb(); +} + +static int ufshcd_crypto_keyslot_program(void *hba_p, const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size, + unsigned int slot) +{ + struct ufs_hba *hba = hba_p; + int err = 0; + u8 data_unit_mask; + union ufs_crypto_cfg_entry cfg; + union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs; + int cap_idx; + + cap_idx = ufshcd_crypto_cap_find(hba_p, crypto_mode, + data_unit_size); + + if (!ufshcd_is_crypto_enabled(hba) || + !ufshcd_keyslot_valid(hba, slot) || + !ufshcd_cap_idx_valid(hba, cap_idx)) + return -EINVAL; + + data_unit_mask = get_data_unit_size_mask(data_unit_size); + + if (!(data_unit_mask & hba->crypto_cap_array[cap_idx].sdus_mask)) + return -EINVAL; + + memset(&cfg, 
0, sizeof(cfg)); + cfg.data_unit_size = data_unit_mask; + cfg.crypto_cap_idx = cap_idx; + cfg.config_enable |= UFS_CRYPTO_CONFIGURATION_ENABLE; + + err = ufshcd_crypto_cfg_entry_write_key(&cfg, key, + hba->crypto_cap_array[cap_idx]); + if (err) + return err; + + program_key(hba, &cfg, slot); + + memcpy(&cfg_arr[slot], &cfg, sizeof(cfg)); + memzero_explicit(&cfg, sizeof(cfg)); + + return 0; +} + +static int ufshcd_crypto_keyslot_find(void *hba_p, + const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + struct ufs_hba *hba = hba_p; + int err = 0; + int slot; + u8 data_unit_mask; + union ufs_crypto_cfg_entry cfg; + union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs; + int cap_idx; + + cap_idx = ufshcd_crypto_cap_find(hba_p, crypto_mode, + data_unit_size); + + if (!ufshcd_is_crypto_enabled(hba) || + !ufshcd_cap_idx_valid(hba, cap_idx)) + return -EINVAL; + + data_unit_mask = get_data_unit_size_mask(data_unit_size); + + if (!(data_unit_mask & hba->crypto_cap_array[cap_idx].sdus_mask)) + return -EINVAL; + + memset(&cfg, 0, sizeof(cfg)); + err = ufshcd_crypto_cfg_entry_write_key(&cfg, key, + hba->crypto_cap_array[cap_idx]); + + if (err) + return -EINVAL; + + for (slot = 0; slot < NUM_KEYSLOTS(hba); slot++) { + if ((cfg_arr[slot].config_enable & + UFS_CRYPTO_CONFIGURATION_ENABLE) && + data_unit_mask == cfg_arr[slot].data_unit_size && + cap_idx == cfg_arr[slot].crypto_cap_idx && + !crypto_memneq(&cfg.crypto_key, cfg_arr[slot].crypto_key, + UFS_CRYPTO_KEY_MAX_SIZE)) { + memzero_explicit(&cfg, sizeof(cfg)); + return slot; + } + } + + memzero_explicit(&cfg, sizeof(cfg)); + return -ENOKEY; +} + +static int ufshcd_crypto_keyslot_evict(void *hba_p, const u8 *key, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size, + unsigned int slot) +{ + struct ufs_hba *hba = hba_p; + int i = 0; + u32 reg_base; + union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs; + + if (!ufshcd_is_crypto_enabled(hba) || + !ufshcd_keyslot_valid(hba, slot)) + return -EINVAL; + + memset(&cfg_arr[slot], 0, sizeof(cfg_arr[slot])); + reg_base = hba->crypto_cfg_register + slot * sizeof(cfg_arr[0]); + + /* + * Clear the crypto cfg on the device. Clearing CFGE + * might not be sufficient, so just clear the entire cfg. + */ + for (i = 0; i < sizeof(cfg_arr[0]); i += sizeof(__le32)) + ufshcd_writel(hba, 0, reg_base + i); + wmb(); + + return 0; +} + +static bool ufshcd_crypto_mode_supported(void *hba_p, + enum blk_crypto_mode_num crypto_mode, + unsigned int data_unit_size) +{ + return ufshcd_crypto_cap_find(hba_p, crypto_mode, data_unit_size) >= 0; +} + +void ufshcd_crypto_enable(struct ufs_hba *hba) +{ + union ufs_crypto_cfg_entry *cfg_arr = hba->crypto_cfgs; + int slot; + + if (!ufshcd_hba_is_crypto_supported(hba)) + return; + + hba->caps |= UFSHCD_CAP_CRYPTO; + /* + * Reset might clear all keys, so reprogram all the keys. + * Also serves to clear keys on driver init. + */ + for (slot = 0; slot < NUM_KEYSLOTS(hba); slot++) + program_key(hba, &cfg_arr[slot], slot); +} + +void ufshcd_crypto_disable(struct ufs_hba *hba) +{ + hba->caps &= ~UFSHCD_CAP_CRYPTO; +} + + +/** + * ufshcd_hba_init_crypto - Read crypto capabilities, init crypto fields in hba + * @hba: Per adapter instance + * + * Returns 0 on success. Returns -ENODEV if such capabilities don't exist, and + * -ENOMEM upon OOM. 
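+ *
+ * This also initializes the hba keyslot manager bookkeeping (hba->ksm,
+ * hba->ksm_lock and hba->ksm_num_refs); the keyslot manager itself is
+ * created lazily in ufshcd_crypto_setup_rq_keyslot_manager().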
+ */ +int ufshcd_hba_init_crypto(struct ufs_hba *hba) +{ + int cap_idx = 0; + int err = 0; + + /* Default to disabling crypto */ + hba->caps &= ~UFSHCD_CAP_CRYPTO; + + if (!(hba->capabilities & MASK_CRYPTO_SUPPORT)) { + err = -ENODEV; + goto out; + } + + /* + * Crypto Capabilities should never be 0, because the + * config_array_ptr > 04h. So we use a 0 value to indicate that + * crypto init failed, and can't be enabled. + */ + hba->crypto_capabilities.reg_val = + cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP)); + hba->crypto_cfg_register = + (u32)hba->crypto_capabilities.config_array_ptr * 0x100; + hba->crypto_cap_array = + devm_kcalloc(hba->dev, + hba->crypto_capabilities.num_crypto_cap, + sizeof(hba->crypto_cap_array[0]), + GFP_KERNEL); + if (!hba->crypto_cap_array) { + err = -ENOMEM; + goto out; + } + + hba->crypto_cfgs = + devm_kcalloc(hba->dev, + hba->crypto_capabilities.config_count + 1, + sizeof(hba->crypto_cfgs[0]), + GFP_KERNEL); + if (!hba->crypto_cfgs) { + err = -ENOMEM; + goto out_cfg_mem; + } + + /* + * Store all the capabilities now so that we don't need to repeatedly + * access the device each time we want to know its capabilities + */ + for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap; + cap_idx++) { + hba->crypto_cap_array[cap_idx].reg_val = + cpu_to_le32(ufshcd_readl(hba, + REG_UFS_CRYPTOCAP + + cap_idx * sizeof(__le32))); + } + + hba->ksm = NULL; + mutex_init(&hba->ksm_lock); + hba->ksm_num_refs = 0; + + return 0; +out_cfg_mem: + devm_kfree(hba->dev, hba->crypto_cap_array); +out: + // TODO: print error? + /* Indicate that init failed by setting crypto_capabilities to 0 */ + hba->crypto_capabilities.reg_val = 0; + return err; +} + +static const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = { + .keyslot_program = ufshcd_crypto_keyslot_program, + .keyslot_evict = ufshcd_crypto_keyslot_evict, + .keyslot_find = ufshcd_crypto_keyslot_find, + .crypto_mode_supported = ufshcd_crypto_mode_supported, +}; + +void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q) +{ + if (!ufshcd_hba_is_crypto_supported(hba)) + return; + + if (q) { + mutex_lock(&hba->ksm_lock); + if (!hba->ksm) { + hba->ksm = keyslot_manager_create( + hba->crypto_capabilities.config_count + 1, + &ufshcd_ksm_ops, hba); + hba->ksm_num_refs = 0; + } + hba->ksm_num_refs++; + mutex_unlock(&hba->ksm_lock); + q->ksm = hba->ksm; + } + /* + * If we fail we make it look like + * crypto is not supported, which will avoid issues + * with reset + */ + if (!q || !q->ksm) { + ufshcd_crypto_disable(hba); + hba->crypto_capabilities.reg_val = 0; + devm_kfree(hba->dev, hba->crypto_cap_array); + devm_kfree(hba->dev, hba->crypto_cfgs); + } +} + +void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q) +{ + if (q && q->ksm) { + q->ksm = NULL; + mutex_lock(&hba->ksm_lock); + hba->ksm_num_refs--; + if (hba->ksm_num_refs == 0) { + keyslot_manager_destroy(hba->ksm); + hba->ksm = NULL; + } + mutex_unlock(&hba->ksm_lock); + } +} + diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h new file mode 100644 index 000000000000..73ddc8e493fb --- /dev/null +++ b/drivers/scsi/ufs/ufshcd-crypto.h @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright 2019 Google LLC + */ + +#ifndef _UFSHCD_CRYPTO_H +#define _UFSHCD_CRYPTO_H + +struct ufs_hba; + +#ifdef CONFIG_SCSI_UFS_CRYPTO +#include + +#include "ufshci.h" + +#define NUM_KEYSLOTS(hba) (hba->crypto_capabilities.config_count + 1) + +static inline bool 
ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot) +{ + /* + * The actual number of configurations supported is (CFGC+1), so slot + * numbers range from 0 to config_count inclusive. + */ + return slot < NUM_KEYSLOTS(hba); +} + +static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba) +{ + return hba->crypto_capabilities.reg_val != 0; +} + +static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba) +{ + return hba->caps & UFSHCD_CAP_CRYPTO; +} + +void ufshcd_crypto_enable(struct ufs_hba *hba); + +void ufshcd_crypto_disable(struct ufs_hba *hba); + +int ufshcd_hba_init_crypto(struct ufs_hba *hba); + +void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q); + +void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba, + struct request_queue *q); + +#else /* CONFIG_SCSI_UFS_CRYPTO */ + +static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba, + unsigned int slot) +{ + return false; +} + +static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba) +{ + return false; +} + +static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba) +{ + return false; +} + +static inline void ufshcd_crypto_enable(struct ufs_hba *hba) { } + +static inline void ufshcd_crypto_disable(struct ufs_hba *hba) { } + +static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba) +{ + return 0; +} + +static inline void ufshcd_crypto_setup_rq_keyslot_manager( + struct ufs_hba *hba, + struct request_queue *q) { } + +static inline void ufshcd_crypto_destroy_rq_keyslot_manager( + struct ufs_hba *hba, + struct request_queue *q) { } + +#endif /* CONFIG_SCSI_UFS_CRYPTO */ + +#endif /* _UFSHCD_CRYPTO_H */ diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 10b5cd26a020..34e9849f00f0 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -501,6 +501,13 @@ struct ufs_stats { * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for * device is known or not. * @scsi_block_reqs_cnt: reference counting for scsi block requests + * @crypto_capabilities: Content of crypto capabilities register (0x100) + * @crypto_cap_array: Array of crypto capabilities + * @crypto_cfg_register: Start of the crypto cfg array + * @crypto_cfgs: Array of crypto configurations (i.e. config for each slot) + * @ksm: the keyslot manager tied to this hba + * @ksm_lock: lock to protect initialization and refcount of ksm + * @ksm_num_refs: refcount for ksm */ struct ufs_hba { void __iomem *mmio_base; @@ -711,6 +718,17 @@ struct ufs_hba { struct device bsg_dev; struct request_queue *bsg_queue; + +#ifdef CONFIG_SCSI_UFS_CRYPTO + /* crypto */ + union ufs_crypto_capabilities crypto_capabilities; + union ufs_crypto_cap_entry *crypto_cap_array; + u32 crypto_cfg_register; + union ufs_crypto_cfg_entry *crypto_cfgs; + struct keyslot_manager *ksm; + struct mutex ksm_lock; + unsigned int ksm_num_refs; +#endif /* CONFIG_SCSI_UFS_CRYPTO */ }; /* Returns true if clocks can be gated. 
Otherwise false */ From patchwork Wed Aug 21 07:57:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 11105933 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 33BC814F7 for ; Wed, 21 Aug 2019 07:57:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 0FAB122DA7 for ; Wed, 21 Aug 2019 07:57:39 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="fWI4uwOO" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727095AbfHUH5h (ORCPT ); Wed, 21 Aug 2019 03:57:37 -0400 Received: from mail-yb1-f202.google.com ([209.85.219.202]:39411 "EHLO mail-yb1-f202.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726974AbfHUH5e (ORCPT ); Wed, 21 Aug 2019 03:57:34 -0400 Received: by mail-yb1-f202.google.com with SMTP id f71so935109ybg.6 for ; Wed, 21 Aug 2019 00:57:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=SfY5XRW311zdC5RUZv3f1YyeJJxz4XjcRlTY+ZjAHvc=; b=fWI4uwOOZULs38dJ/e4/Rov0rnXIpdgH46de98nsV/ACBXN1r1TbY8E4Nd7KuoWjBH g6wf7ZEj1i06r6eNeZEXyNidpiodgNdKwMma2Iiu4iwalBguttIU91udt3U+LY/0vK5L dN4xXQSkXRPxlGv8vodUe3SK3sWdv5m/q15C/L/5mient8PcVeiKuJJ8qXFDITGBo1Gq R4sBj7cqy9OvvTUGuvriaDdR5KlQKPWtTGVGv2Bxq41P48wo5toxF2doQ2uIMfkfe5Vg MB6ZUENCkq11qGTZvWQAIrwQi7QdqCSPirn78ushHpBMIYlxbXL7mZ3P3P9VEgIICuSC zhaA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=SfY5XRW311zdC5RUZv3f1YyeJJxz4XjcRlTY+ZjAHvc=; b=ba3gfnvzV35l8Ea9G74JHlYsSDcBPC7TbBV3kwqcRMYNUI27m3gz3Sf9/JuzHpOGAY vit7MueOy2dasJwzDze7X2/sePXONzJBeqEMbdsmjGGBJOlMfOE3r7N/HZgxT4DEBzX6 cjyA8uwL9vBZa2/vYfTfbFBIAC0u0qm+H0OTM7tKojv5A2JqNDTC7ZG6KmSKivgimS++ vvUoTeuu1I07XsFbvk7B2ghePOD0+cvLbUxxyCPP1DH09wHU4cKxufcdyU1GwKHMxdd9 gW4bp56TnRO+qJa2tamnj2HWE0zJur/t30RF9LeCDQ56w8x0sUI3xQkc+avZwMeaL2NV 9glw== X-Gm-Message-State: APjAAAWOQywS2IccDK/M12IMcUMoPmwUnX82AAeSmxC49ddfUPvb8G/y T3e5HNvkrG6EnJ6hLB5fYi8DufPJMpE= X-Google-Smtp-Source: APXvYqxis6xCLsHhX8VMjHW7pKCBcr3N5pGkoeuirvOMEpcmqSmucQXX36A/XD2uzZlmNEeh88CJW3Uksa0= X-Received: by 2002:a81:7850:: with SMTP id t77mr170499ywc.129.1566374253554; Wed, 21 Aug 2019 00:57:33 -0700 (PDT) Date: Wed, 21 Aug 2019 00:57:12 -0700 In-Reply-To: <20190821075714.65140-1-satyat@google.com> Message-Id: <20190821075714.65140-7-satyat@google.com> Mime-Version: 1.0 References: <20190821075714.65140-1-satyat@google.com> X-Mailer: git-send-email 2.23.0.rc1.153.gdeed80330f-goog Subject: [PATCH v4 6/8] scsi: ufs: Add inline encryption support to UFS From: Satya Tangirala To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net Cc: Barani Muthukumaran , Kuohong Wang , Kim Boojin , Satya Tangirala Sender: linux-fscrypt-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fscrypt@vger.kernel.org Wire up ufshcd.c with the UFS Crypto API, the block layer inline encryption additions and the keyslot manager. 
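For a request that carries an encryption context, the UTP transfer request
descriptor header ends up filled in roughly as follows (a sketch of the
ufshcd_prepare_req_desc_hdr() changes below; lower_32_bits()/upper_32_bits()
stand in for the open-coded casts and shift):

	dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD | lrbp->crypto_key_slot;
	req_desc->header.dword_1 =
		cpu_to_le32(lower_32_bits(lrbp->data_unit_num));
	req_desc->header.dword_3 =
		cpu_to_le32(upper_32_bits(lrbp->data_unit_num));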
Signed-off-by: Satya Tangirala --- drivers/scsi/ufs/ufshcd.c | 83 ++++++++++++++++++++++++++++++++++++--- drivers/scsi/ufs/ufshcd.h | 6 +++ 2 files changed, 84 insertions(+), 5 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 212f7653c4c5..dcb841c6be7e 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -47,6 +47,7 @@ #include "unipro.h" #include "ufs-sysfs.h" #include "ufs_bsg.h" +#include "ufshcd-crypto.h" #define CREATE_TRACE_POINTS #include @@ -855,7 +856,14 @@ static void ufshcd_enable_run_stop_reg(struct ufs_hba *hba) */ static inline void ufshcd_hba_start(struct ufs_hba *hba) { - ufshcd_writel(hba, CONTROLLER_ENABLE, REG_CONTROLLER_ENABLE); + u32 val = CONTROLLER_ENABLE; + + if (ufshcd_hba_is_crypto_supported(hba)) { + ufshcd_crypto_enable(hba); + val |= CRYPTO_GENERAL_ENABLE; + } + + ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE); } /** @@ -2209,9 +2217,21 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp, dword_0 |= UTP_REQ_DESC_INT_CMD; /* Transfer request descriptor header fields */ + if (lrbp->crypto_enable) { + dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD; + dword_0 |= lrbp->crypto_key_slot; + req_desc->header.dword_1 = + cpu_to_le32((u32)lrbp->data_unit_num); + req_desc->header.dword_3 = + cpu_to_le32((u32)(lrbp->data_unit_num >> 32)); + } else { + /* dword_1 and dword_3 are reserved, hence they are set to 0 */ + req_desc->header.dword_1 = 0; + req_desc->header.dword_3 = 0; + } + req_desc->header.dword_0 = cpu_to_le32(dword_0); - /* dword_1 is reserved, hence it is set to 0 */ - req_desc->header.dword_1 = 0; + /* * assigning invalid value for command status. Controller * updates OCS on command completion, with the command @@ -2219,8 +2239,6 @@ static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp, */ req_desc->header.dword_2 = cpu_to_le32(OCS_INVALID_COMMAND_STATUS); - /* dword_3 is reserved, hence it is set to 0 */ - req_desc->header.dword_3 = 0; req_desc->prd_table_length = 0; } @@ -2380,6 +2398,37 @@ static inline u16 ufshcd_upiu_wlun_to_scsi_wlun(u8 upiu_wlun_id) return (upiu_wlun_id & ~UFS_UPIU_WLUN_ID) | SCSI_W_LUN_BASE; } +static inline int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba, + struct scsi_cmnd *cmd, + struct ufshcd_lrb *lrbp) +{ + int key_slot; + + if (!cmd->request->bio || + !bio_crypt_should_process(cmd->request->bio, cmd->request->q)) { + lrbp->crypto_enable = false; + return 0; + } + + if (WARN_ON(!ufshcd_is_crypto_enabled(hba))) { + /* + * Upper layer asked us to do inline encryption + * but that isn't enabled, so we fail this request. + */ + return -EINVAL; + } + key_slot = bio_crypt_get_keyslot(cmd->request->bio); + if (!ufshcd_keyslot_valid(hba, key_slot)) + return -EINVAL; + + lrbp->crypto_enable = true; + lrbp->crypto_key_slot = key_slot; + lrbp->data_unit_num = bio_crypt_data_unit_num(cmd->request->bio); + + return 0; +} + + /** * ufshcd_queuecommand - main entry point for SCSI requests * @host: SCSI host pointer @@ -2467,6 +2516,13 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) lrbp->task_tag = tag; lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun); lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? 
true : false; + + err = ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp); + if (err) { + lrbp->cmd = NULL; + clear_bit_unlock(tag, &hba->lrb_in_use); + goto out; + } lrbp->req_abort_skip = false; ufshcd_comp_scsi_upiu(hba, lrbp); @@ -2500,6 +2556,7 @@ static int ufshcd_compose_dev_cmd(struct ufs_hba *hba, lrbp->task_tag = tag; lrbp->lun = 0; /* device management cmd is not specific to any LUN */ lrbp->intr_cmd = true; /* No interrupt aggregation */ + lrbp->crypto_enable = false; /* No crypto operations */ hba->dev_cmd.type = cmd_type; return ufshcd_comp_devman_upiu(hba, lrbp); @@ -4192,6 +4249,8 @@ static inline void ufshcd_hba_stop(struct ufs_hba *hba, bool can_sleep) { int err; + ufshcd_crypto_disable(hba); + ufshcd_writel(hba, CONTROLLER_DISABLE, REG_CONTROLLER_ENABLE); err = ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE, CONTROLLER_ENABLE, CONTROLLER_DISABLE, @@ -4585,8 +4644,12 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth) static int ufshcd_slave_configure(struct scsi_device *sdev) { struct request_queue *q = sdev->request_queue; + struct ufs_hba *hba = shost_priv(sdev->host); blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1); + + ufshcd_crypto_setup_rq_keyslot_manager(hba, q); + return 0; } @@ -4597,6 +4660,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev) static void ufshcd_slave_destroy(struct scsi_device *sdev) { struct ufs_hba *hba; + struct request_queue *q = sdev->request_queue; hba = shost_priv(sdev->host); /* Drop the reference as it won't be needed anymore */ @@ -4607,6 +4671,8 @@ static void ufshcd_slave_destroy(struct scsi_device *sdev) hba->sdev_ufs_device = NULL; spin_unlock_irqrestore(hba->host->host_lock, flags); } + + ufshcd_crypto_destroy_rq_keyslot_manager(hba, q); } /** @@ -8323,6 +8389,13 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq) goto exit_gating; } + /* Init crypto */ + err = ufshcd_hba_init_crypto(hba); + if (err) { + dev_err(hba->dev, "crypto setup failed\n"); + goto out_remove_scsi_host; + } + /* Host controller enable */ err = ufshcd_hba_enable(hba); if (err) { diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 34e9849f00f0..c3e1f409e63d 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -167,6 +167,9 @@ struct ufs_pm_lvl_states { * @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation) * @issue_time_stamp: time stamp for debug purposes * @compl_time_stamp: time stamp for statistics + * @crypto_enable: whether or not the request needs inline crypto operations + * @crypto_key_slot: the key slot to use for inline crypto + * @data_unit_num: the data unit number for the first block for inline crypto * @req_abort_skip: skip request abort task flag */ struct ufshcd_lrb { @@ -191,6 +194,9 @@ struct ufshcd_lrb { bool intr_cmd; ktime_t issue_time_stamp; ktime_t compl_time_stamp; + bool crypto_enable; + u8 crypto_key_slot; + u64 data_unit_num; bool req_abort_skip; }; From patchwork Wed Aug 21 07:57:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Satya Tangirala X-Patchwork-Id: 11105931 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 05EBF1864 for ; Wed, 21 Aug 2019 07:57:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id CDBA922DA7 for ; Wed, 21 Aug 
Date: Wed, 21 Aug 2019 00:57:13 -0700
In-Reply-To: <20190821075714.65140-1-satyat@google.com>
Message-Id: <20190821075714.65140-8-satyat@google.com>
References: <20190821075714.65140-1-satyat@google.com>
Subject: [PATCH v4 7/8] fscrypt: wire up fscrypt to use blk-crypto
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

Introduce fscrypt_set_bio_crypt_ctx for filesystems to call to set up encryption contexts in bios, and fscrypt_evict_crypt_key to evict the encryption context associated with an inode.

Inline encryption is controlled by a policy flag in the inode's fscrypt_info, and filesystems may check whether an inode should use inline encryption by calling fscrypt_inode_is_inline_crypted.

Files can be marked as inline encrypted from userspace by OR-ing FS_POLICY_FLAGS_INLINE_CRYPT_OPTIMIZED (the flag this patch adds to the uapi) into the flags field of the fscrypt_policy passed to fscrypt_ioctl_set_policy.

To test inline encryption with the fscrypt dummy context, add ctx.flags |= FS_POLICY_FLAGS_INLINE_CRYPT_OPTIMIZED when setting up the dummy context in fs/crypto/keyinfo.c.
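As an illustrative sketch (not part of this patch), a userspace tool could request the new policy flag roughly as below, reusing the existing v1 fscrypt_policy layout from <linux/fs.h>. The helper name and mode choices are assumptions for the example, and it only builds once this patch's uapi change is applied:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/* Hypothetical helper: mark a directory's policy as inline-crypt capable. */
static int set_inline_crypt_policy(int dirfd,
				   const __u8 key_desc[FS_KEY_DESCRIPTOR_SIZE])
{
	struct fscrypt_policy policy;

	memset(&policy, 0, sizeof(policy));
	policy.version = 0;
	policy.contents_encryption_mode = FS_ENCRYPTION_MODE_AES_256_XTS;
	policy.filenames_encryption_mode = FS_ENCRYPTION_MODE_AES_256_CTS;
	/* The flag added by this patch; see the include/uapi/linux/fs.h hunk below. */
	policy.flags = FS_POLICY_FLAGS_INLINE_CRYPT_OPTIMIZED;
	memcpy(policy.master_key_descriptor, key_desc, FS_KEY_DESCRIPTOR_SIZE);

	return ioctl(dirfd, FS_IOC_SET_ENCRYPTION_POLICY, &policy);
}

Note that the policy.c hunk below rejects this flag with -EINVAL when the filesystem does not set inline_crypt_supp in its fscrypt_operations.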
Note that blk-crypto will fall back to software en/decryption in the absence of inline crypto hardware, so setting up the ctx.flags in the dummy context without inline crypto hardware serves as a test for the software fallback in blk-crypto. Signed-off-by: Satya Tangirala --- fs/crypto/Kconfig | 6 ++ fs/crypto/bio.c | 137 ++++++++++++++++++++++++++++++++---- fs/crypto/fscrypt_private.h | 23 ++++++ fs/crypto/keyinfo.c | 107 +++++++++++++++++++++------- fs/crypto/policy.c | 6 ++ include/linux/fscrypt.h | 72 +++++++++++++++++++ include/uapi/linux/fs.h | 3 +- 7 files changed, 316 insertions(+), 38 deletions(-) diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig index 5fdf24877c17..8191e0ff5014 100644 --- a/fs/crypto/Kconfig +++ b/fs/crypto/Kconfig @@ -14,3 +14,9 @@ config FS_ENCRYPTION efficient since it avoids caching the encrypted and decrypted pages in the page cache. Currently Ext4, F2FS and UBIFS make use of this feature. + +config FS_ENCRYPTION_INLINE_CRYPT + bool "Enable fscrypt to use inline crypto" + depends on FS_ENCRYPTION && BLK_INLINE_ENCRYPTION + help + Enables fscrypt to use inline crypto hardware if available. diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c index 82da2510721f..d3c3f63ec109 100644 --- a/fs/crypto/bio.c +++ b/fs/crypto/bio.c @@ -24,6 +24,9 @@ #include #include #include +#include +#include +#include #include "fscrypt_private.h" static void __fscrypt_decrypt_bio(struct bio *bio, bool done) @@ -76,17 +79,24 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk, struct page *ciphertext_page; struct bio *bio; int ret, err = 0; + bool need_fscrypt_crypto = fscrypt_needs_fs_layer_crypto(inode); - ciphertext_page = fscrypt_alloc_bounce_page(GFP_NOWAIT); - if (!ciphertext_page) - return -ENOMEM; + if (need_fscrypt_crypto) { + ciphertext_page = fscrypt_alloc_bounce_page(GFP_NOWAIT); + if (!ciphertext_page) + return -ENOMEM; + } else { + ciphertext_page = ZERO_PAGE(0); + } while (len--) { - err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk, - ZERO_PAGE(0), ciphertext_page, - blocksize, 0, GFP_NOFS); - if (err) - goto errout; + if (need_fscrypt_crypto) { + err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk, + ZERO_PAGE(0), ciphertext_page, + blocksize, 0, GFP_NOFS); + if (err) + goto errout; + } bio = bio_alloc(GFP_NOWAIT, 1); if (!bio) { @@ -103,9 +113,12 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk, err = -EIO; goto errout; } - err = submit_bio_wait(bio); - if (err == 0 && bio->bi_status) - err = -EIO; + err = fscrypt_set_bio_crypt_ctx(bio, inode, pblk, GFP_NOIO); + if (!err) { + err = submit_bio_wait(bio); + if (err == 0 && bio->bi_status) + err = -EIO; + } bio_put(bio); if (err) goto errout; @@ -114,7 +127,107 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk, } err = 0; errout: - fscrypt_free_bounce_page(ciphertext_page); + if (need_fscrypt_crypto) + fscrypt_free_bounce_page(ciphertext_page); return err; } EXPORT_SYMBOL(fscrypt_zeroout_range); + +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT +enum blk_crypto_mode_num +get_blk_crypto_mode_for_fscryptalg(u8 fscrypt_alg) +{ + switch (fscrypt_alg) { + case FS_ENCRYPTION_MODE_AES_256_XTS: + return BLK_ENCRYPTION_MODE_AES_256_XTS; + default: return BLK_ENCRYPTION_MODE_INVALID; + } +} + +int fscrypt_set_bio_crypt_ctx(struct bio *bio, + const struct inode *inode, + u64 data_unit_num, + gfp_t gfp_mask) +{ + struct fscrypt_info *ci = inode->i_crypt_info; + int err; + enum blk_crypto_mode_num blk_crypto_mode; + + + /* If inode is not inline encrypted, nothing to do. 
*/ + if (!fscrypt_inode_is_inline_crypted(inode)) + return 0; + + blk_crypto_mode = get_blk_crypto_mode_for_fscryptalg(ci->ci_data_mode); + if (blk_crypto_mode == BLK_ENCRYPTION_MODE_INVALID) + return -EINVAL; + + err = bio_crypt_set_ctx(bio, ci->ci_master_key->mk_raw, + blk_crypto_mode, + data_unit_num, + inode->i_blkbits, + gfp_mask); + if (err) + return err; + + return 0; +} +EXPORT_SYMBOL(fscrypt_set_bio_crypt_ctx); + +void fscrypt_unset_bio_crypt_ctx(struct bio *bio) +{ + bio_crypt_free_ctx(bio); +} +EXPORT_SYMBOL(fscrypt_unset_bio_crypt_ctx); + +int fscrypt_evict_crypt_key(struct inode *inode) +{ + struct request_queue *q; + struct fscrypt_info *ci; + + if (!inode) + return 0; + + q = inode->i_sb->s_bdev->bd_queue; + ci = inode->i_crypt_info; + + if (!q || !q->ksm || !ci || + !fscrypt_inode_is_inline_crypted(inode)) { + return 0; + } + + return keyslot_manager_evict_key(q->ksm, + ci->ci_master_key->mk_raw, + get_blk_crypto_mode_for_fscryptalg( + ci->ci_data_mode), + 1 << inode->i_blkbits); +} +EXPORT_SYMBOL(fscrypt_evict_crypt_key); + +bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1, + const struct inode *inode_2) +{ + struct fscrypt_info *ci_1, *ci_2; + bool enc_1 = !inode_1 || fscrypt_inode_is_inline_crypted(inode_1); + bool enc_2 = !inode_2 || fscrypt_inode_is_inline_crypted(inode_2); + + if (enc_1 != enc_2) + return false; + + if (!enc_1) + return true; + + if (inode_1 == inode_2) + return true; + + ci_1 = inode_1->i_crypt_info; + ci_2 = inode_2->i_crypt_info; + + return ci_1->ci_data_mode == ci_2->ci_data_mode && + crypto_memneq(ci_1->ci_master_key->mk_raw, + ci_2->ci_master_key->mk_raw, + ci_1->ci_master_key->mk_mode->keysize) == 0; +} +EXPORT_SYMBOL(fscrypt_inode_crypt_mergeable); + +#endif /* FS_ENCRYPTION_INLINE_CRYPT */ diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h index 8978eec9d766..3079405a2b12 100644 --- a/fs/crypto/fscrypt_private.h +++ b/fs/crypto/fscrypt_private.h @@ -14,6 +14,7 @@ #include #include +#include /* Encryption parameters */ #define FS_KEY_DERIVATION_NONCE_SIZE 16 @@ -49,6 +50,17 @@ struct fscrypt_symlink_data { char encrypted_path[1]; } __packed; +/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */ +struct fscrypt_master_key { + struct hlist_node mk_node; + refcount_t mk_refcount; + const struct fscrypt_mode *mk_mode; + struct crypto_skcipher *mk_ctfm; + u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE]; + u8 mk_raw[FS_MAX_KEY_SIZE]; + struct super_block *mk_sb; +}; + /* * fscrypt_info - the "encryption key" for an inode * @@ -113,6 +125,17 @@ static inline bool fscrypt_valid_enc_modes(u32 contents_mode, return false; } +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT +extern enum blk_crypto_mode_num +get_blk_crypto_mode_for_fscryptalg(u8 fscrypt_alg); +#else +static inline enum blk_crypto_mode_num +get_blk_crypto_mode_for_fscryptalg(u8 fscrypt_alg) +{ + return BLK_ENCRYPTION_MODE_INVALID; +} +#endif + /* crypto.c */ extern struct kmem_cache *fscrypt_info_cachep; extern int fscrypt_initialize(unsigned int cop_flags); diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c index 207ebed918c1..989cf12217df 100644 --- a/fs/crypto/keyinfo.c +++ b/fs/crypto/keyinfo.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include #include @@ -24,6 +25,21 @@ static struct crypto_shash *essiv_hash_tfm; static DEFINE_HASHTABLE(fscrypt_master_keys, 6); /* 6 bits = 64 buckets */ static DEFINE_SPINLOCK(fscrypt_master_keys_lock); +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT +static inline bool flags_inline_crypted(u8 flags, + 
const struct inode *inode) +{ + return (flags & FS_POLICY_FLAGS_INLINE_CRYPT_OPTIMIZED) && + S_ISREG(inode->i_mode); +} +#else +static inline bool flags_inline_crypted(u8 flags, + const struct inode *inode) +{ + return false; +} +#endif /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ + /* * Key derivation function. This generates the derived key by encrypting the * master key with AES-128-ECB using the inode's nonce as the AES key. @@ -219,6 +235,9 @@ static int find_and_derive_key(const struct inode *inode, memcpy(derived_key, payload->raw, mode->keysize); err = 0; } + } else if (flags_inline_crypted(ctx->flags, inode)) { + memcpy(derived_key, payload->raw, mode->keysize); + err = 0; } else { err = derive_key_aes(payload->raw, ctx, derived_key, mode->keysize); @@ -268,16 +287,6 @@ allocate_skcipher_for_mode(struct fscrypt_mode *mode, const u8 *raw_key, return ERR_PTR(err); } -/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */ -struct fscrypt_master_key { - struct hlist_node mk_node; - refcount_t mk_refcount; - const struct fscrypt_mode *mk_mode; - struct crypto_skcipher *mk_ctfm; - u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE]; - u8 mk_raw[FS_MAX_KEY_SIZE]; -}; - static void free_master_key(struct fscrypt_master_key *mk) { if (mk) { @@ -286,13 +295,15 @@ static void free_master_key(struct fscrypt_master_key *mk) } } -static void put_master_key(struct fscrypt_master_key *mk) +static void put_master_key(struct fscrypt_master_key *mk, + struct inode *inode) { if (!refcount_dec_and_lock(&mk->mk_refcount, &fscrypt_master_keys_lock)) return; hash_del(&mk->mk_node); spin_unlock(&fscrypt_master_keys_lock); + fscrypt_evict_crypt_key(inode); free_master_key(mk); } @@ -305,7 +316,9 @@ static void put_master_key(struct fscrypt_master_key *mk) static struct fscrypt_master_key * find_or_insert_master_key(struct fscrypt_master_key *to_insert, const u8 *raw_key, const struct fscrypt_mode *mode, - const struct fscrypt_info *ci) + const struct fscrypt_info *ci, + bool should_have_ctfm, + struct super_block *sb) { unsigned long hash_key; struct fscrypt_master_key *mk; @@ -328,6 +341,10 @@ find_or_insert_master_key(struct fscrypt_master_key *to_insert, continue; if (crypto_memneq(raw_key, mk->mk_raw, mode->keysize)) continue; + if (should_have_ctfm != (bool)mk->mk_ctfm) + continue; + if (sb != mk->mk_sb) + continue; /* using existing tfm with same (descriptor, mode, raw_key) */ refcount_inc(&mk->mk_refcount); spin_unlock(&fscrypt_master_keys_lock); @@ -347,9 +364,11 @@ fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode, { struct fscrypt_master_key *mk; int err; + bool inline_crypted = flags_inline_crypted(ci->ci_flags, inode); /* Is there already a tfm for this key? 
*/ - mk = find_or_insert_master_key(NULL, raw_key, mode, ci); + mk = find_or_insert_master_key(NULL, raw_key, mode, ci, !inline_crypted, + inode->i_sb); if (mk) return mk; @@ -359,17 +378,21 @@ fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode, return ERR_PTR(-ENOMEM); refcount_set(&mk->mk_refcount, 1); mk->mk_mode = mode; - mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode); - if (IS_ERR(mk->mk_ctfm)) { - err = PTR_ERR(mk->mk_ctfm); - mk->mk_ctfm = NULL; - goto err_free_mk; + if (!inline_crypted) { + mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode); + if (IS_ERR(mk->mk_ctfm)) { + err = PTR_ERR(mk->mk_ctfm); + mk->mk_ctfm = NULL; + goto err_free_mk; + } } memcpy(mk->mk_descriptor, ci->ci_master_key_descriptor, FS_KEY_DESCRIPTOR_SIZE); memcpy(mk->mk_raw, raw_key, mode->keysize); + mk->mk_sb = inode->i_sb; - return find_or_insert_master_key(mk, raw_key, mode, ci); + return find_or_insert_master_key(mk, raw_key, mode, ci, !inline_crypted, + inode->i_sb); err_free_mk: free_master_key(mk); @@ -455,7 +478,8 @@ static int setup_crypto_transform(struct fscrypt_info *ci, struct crypto_skcipher *ctfm; int err; - if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) { + if ((ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) || + flags_inline_crypted(ci->ci_flags, inode)) { mk = fscrypt_get_master_key(ci, mode, raw_key, inode); if (IS_ERR(mk)) return PTR_ERR(mk); @@ -485,13 +509,13 @@ static int setup_crypto_transform(struct fscrypt_info *ci, return 0; } -static void put_crypt_info(struct fscrypt_info *ci) +static void put_crypt_info(struct fscrypt_info *ci, struct inode *inode) { if (!ci) return; if (ci->ci_master_key) { - put_master_key(ci->ci_master_key); + put_master_key(ci->ci_master_key, inode); } else { crypto_free_skcipher(ci->ci_ctfm); crypto_free_cipher(ci->ci_essiv_tfm); @@ -506,6 +530,7 @@ int fscrypt_get_encryption_info(struct inode *inode) struct fscrypt_mode *mode; u8 *raw_key = NULL; int res; + enum blk_crypto_mode_num blk_crypto_mode; if (fscrypt_has_encryption_key(inode)) return 0; @@ -571,12 +596,26 @@ int fscrypt_get_encryption_info(struct inode *inode) if (res) goto out; - if (cmpxchg_release(&inode->i_crypt_info, NULL, crypt_info) == NULL) + if (cmpxchg_release(&inode->i_crypt_info, NULL, crypt_info) == NULL) { crypt_info = NULL; + if (!flags_inline_crypted(ctx.flags, inode)) + goto out; + blk_crypto_mode = get_blk_crypto_mode_for_fscryptalg( + inode->i_crypt_info->ci_mode - available_modes); + + if (keyslot_manager_rq_crypto_mode_supported( + inode->i_sb->s_bdev->bd_queue, + blk_crypto_mode, + (1 << inode->i_blkbits))) { + goto out; + } + + blk_crypto_mode_alloc_ciphers(blk_crypto_mode); + } out: if (res == -ENOKEY) res = 0; - put_crypt_info(crypt_info); + put_crypt_info(crypt_info, NULL); kzfree(raw_key); return res; } @@ -590,7 +629,7 @@ EXPORT_SYMBOL(fscrypt_get_encryption_info); */ void fscrypt_put_encryption_info(struct inode *inode) { - put_crypt_info(inode->i_crypt_info); + put_crypt_info(inode->i_crypt_info, inode); inode->i_crypt_info = NULL; } EXPORT_SYMBOL(fscrypt_put_encryption_info); @@ -609,3 +648,21 @@ void fscrypt_free_inode(struct inode *inode) } } EXPORT_SYMBOL(fscrypt_free_inode); + +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT +bool fscrypt_inode_is_inline_crypted(const struct inode *inode) +{ + return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && + flags_inline_crypted(inode->i_crypt_info->ci_flags, inode); +} +EXPORT_SYMBOL(fscrypt_inode_is_inline_crypted); + +#endif /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ + +bool 
fscrypt_needs_fs_layer_crypto(const struct inode *inode) +{ + return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && + !fscrypt_inode_is_inline_crypted(inode); +} +EXPORT_SYMBOL(fscrypt_needs_fs_layer_crypto); + diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c index 4941fe8471ce..573407a2cda3 100644 --- a/fs/crypto/policy.c +++ b/fs/crypto/policy.c @@ -36,6 +36,7 @@ static int create_encryption_context_from_policy(struct inode *inode, struct fscrypt_context ctx; ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1; + memcpy(ctx.master_key_descriptor, policy->master_key_descriptor, FS_KEY_DESCRIPTOR_SIZE); @@ -46,8 +47,13 @@ static int create_encryption_context_from_policy(struct inode *inode, if (policy->flags & ~FS_POLICY_FLAGS_VALID) return -EINVAL; + if (!inode->i_sb->s_cop->inline_crypt_supp && + (policy->flags & FS_POLICY_FLAGS_INLINE_CRYPT_OPTIMIZED)) + return -EINVAL; + ctx.contents_encryption_mode = policy->contents_encryption_mode; ctx.filenames_encryption_mode = policy->filenames_encryption_mode; + ctx.flags = policy->flags; BUILD_BUG_ON(sizeof(ctx.nonce) != FS_KEY_DERIVATION_NONCE_SIZE); get_random_bytes(ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE); diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h index bd8f207a2fb6..6db1c7c5009d 100644 --- a/include/linux/fscrypt.h +++ b/include/linux/fscrypt.h @@ -61,6 +61,7 @@ struct fscrypt_operations { bool (*dummy_context)(struct inode *); bool (*empty_dir)(struct inode *); unsigned int max_namelen; + bool inline_crypt_supp; }; /* Decryption work */ @@ -141,6 +142,23 @@ extern int fscrypt_inherit_context(struct inode *, struct inode *, extern int fscrypt_get_encryption_info(struct inode *); extern void fscrypt_put_encryption_info(struct inode *); extern void fscrypt_free_inode(struct inode *); +extern bool fscrypt_needs_fs_layer_crypto(const struct inode *inode); + +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT +extern bool fscrypt_inode_is_inline_crypted(const struct inode *inode); +extern bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1, + const struct inode *inode_2); +#else +static inline bool fscrypt_inode_is_inline_crypted(const struct inode *inode) +{ + return false; +} +static inline bool fscrypt_inode_crypt_mergeable(const struct inode *inode_1, + const struct inode *inode_2) +{ + return true; +} +#endif /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */ /* fname.c */ extern int fscrypt_setup_filename(struct inode *, const struct qstr *, @@ -237,6 +255,29 @@ extern void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx, struct bio *bio); extern int fscrypt_zeroout_range(const struct inode *, pgoff_t, sector_t, unsigned int); +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT +extern int fscrypt_set_bio_crypt_ctx(struct bio *bio, + const struct inode *inode, + u64 data_unit_num, + gfp_t gfp_mask); +extern void fscrypt_unset_bio_crypt_ctx(struct bio *bio); +extern int fscrypt_evict_crypt_key(struct inode *inode); +#else +static inline int fscrypt_set_bio_crypt_ctx(struct bio *bio, + const struct inode *inode, + u64 data_unit_num, + gfp_t gfp_mask) +{ + return 0; +} + +static inline void fscrypt_unset_bio_crypt_ctx(struct bio *bio) { } + +static inline int fscrypt_evict_crypt_key(struct inode *inode) +{ + return 0; +} +#endif /* hooks.c */ extern int fscrypt_file_open(struct inode *inode, struct file *filp); @@ -381,6 +422,17 @@ static inline void fscrypt_free_inode(struct inode *inode) { } +static inline bool fscrypt_inode_is_inline_crypted(const struct inode *inode) +{ + return false; +} + +static inline bool 
fscrypt_inode_crypt_mergeable(const struct inode *inode_1, + const struct inode *inode_2) +{ + return true; +} + /* fname.c */ static inline int fscrypt_setup_filename(struct inode *dir, const struct qstr *iname, @@ -446,6 +498,26 @@ static inline int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk, return -EOPNOTSUPP; } +static inline bool fscrypt_needs_fs_layer_crypto(const struct inode *inode) +{ + return false; +} + +static inline int fscrypt_set_bio_crypt_ctx(struct bio *bio, + const struct inode *inode, + u64 data_unit_num, + gfp_t gfp_mask) +{ + return -EOPNOTSUPP; +} + +static inline void fscrypt_unset_bio_crypt_ctx(struct bio *bio) { } + +static inline int fscrypt_evict_crypt_key(struct inode *inode) +{ + return 0; +} + /* hooks.c */ static inline int fscrypt_file_open(struct inode *inode, struct file *filp) diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index 59c71fa8c553..dea16d0f9d2e 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -224,7 +224,8 @@ struct fsxattr { #define FS_POLICY_FLAGS_PAD_32 0x03 #define FS_POLICY_FLAGS_PAD_MASK 0x03 #define FS_POLICY_FLAG_DIRECT_KEY 0x04 /* use master key directly */ -#define FS_POLICY_FLAGS_VALID 0x07 +#define FS_POLICY_FLAGS_INLINE_CRYPT_OPTIMIZED 0x08 +#define FS_POLICY_FLAGS_VALID 0x0F /* Encryption algorithms */ #define FS_ENCRYPTION_MODE_INVALID 0

From patchwork Wed Aug 21 07:57:14 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Satya Tangirala
X-Patchwork-Id: 11105945
Date: Wed, 21 Aug 2019 00:57:14 -0700
In-Reply-To: <20190821075714.65140-1-satyat@google.com>
Message-Id: <20190821075714.65140-9-satyat@google.com>
References: <20190821075714.65140-1-satyat@google.com>
Subject: [PATCH v4 8/8] f2fs: Wire up f2fs to use inline encryption via fscrypt
From: Satya Tangirala
To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-fscrypt@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net
Cc: Barani Muthukumaran, Kuohong Wang, Kim Boojin, Satya Tangirala

Signed-off-by: Satya Tangirala --- fs/f2fs/data.c | 127 ++++++++++++++++++++++++++++++++++++++++++++---- fs/f2fs/super.c | 15 +++--- 2 files changed, 126 insertions(+), 16 deletions(-) diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index abbf14e9bd72..a7294edce173 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -143,6 +143,8 @@ static bool f2fs_bio_post_read_required(struct bio *bio) static void f2fs_read_end_io(struct bio *bio) { + fscrypt_unset_bio_crypt_ctx(bio); + if (time_to_inject(F2FS_P_SB(bio_first_page_all(bio)), FAULT_READ_IO)) { f2fs_show_injection_info(FAULT_READ_IO); @@ -166,6 +168,8 @@ static void f2fs_write_end_io(struct bio *bio) struct bio_vec *bvec; struct bvec_iter_all iter_all; + fscrypt_unset_bio_crypt_ctx(bio); + if (time_to_inject(sbi, FAULT_WRITE_IO)) { f2fs_show_injection_info(FAULT_WRITE_IO); bio->bi_status = BLK_STS_IOERR; @@ -283,6 +287,53 @@ static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr, return bio; } +static int f2fs_init_bio_crypt_ctx(struct bio *bio, struct inode *inode, + struct f2fs_io_info *fio, bool no_fail) +{ + gfp_t gfp_mask = GFP_NOIO; + /* + * This function should be called with NULL inode iff the pages + * being added to the bio are to be handled without inline + * en/decryption. + */ + if (!inode) + return 0; + + /* + * If the fio has an encrypted page, that means we want to read/write + * it without inline encryption (e.g., when moving blocks) + */ + if (fio && fio->encrypted_page) + return 0; + + if (no_fail) + gfp_mask |= __GFP_NOFAIL; + + return fscrypt_set_bio_crypt_ctx(bio, inode, 0, gfp_mask); +} + +static inline u64 inline_crypt_dun(struct inode *inode, pgoff_t offset) +{ + return (((u64)inode->i_ino) << 32) | lower_32_bits(offset); +} + +bool f2fs_page_crypt_back_mergeable(const struct bio *bio, + const struct page *page) +{ + struct inode *bio_inode = bio && bio_page(bio)->mapping ? bio_page(bio)->mapping->host : NULL; + struct inode *page_inode = page && page->mapping ?
page->mapping->host + : NULL; + + if (!fscrypt_inode_crypt_mergeable(page_inode, bio_inode)) + return false; + if (!bio_inode || !fscrypt_inode_is_inline_crypted(bio_inode)) + return true; + return inline_crypt_dun(bio_inode, bio_page(bio)->index) + + (bio->bi_iter.bi_size >> PAGE_SHIFT) == + inline_crypt_dun(page_inode, page->index); +} + static inline void __submit_bio(struct f2fs_sb_info *sbi, struct bio *bio, enum page_type type) { @@ -327,6 +378,14 @@ static inline void __submit_bio(struct f2fs_sb_info *sbi, trace_f2fs_submit_read_bio(sbi->sb, type, bio); else trace_f2fs_submit_write_bio(sbi->sb, type, bio); + + if (bio_has_data(bio) && bio_has_crypt_ctx(bio)) { + struct page *page = bio_page(bio); + + bio_set_data_unit_num(bio, inline_crypt_dun(page->mapping->host, + page->index)); + } + submit_bio(bio); } @@ -451,6 +510,7 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio) struct bio *bio; struct page *page = fio->encrypted_page ? fio->encrypted_page : fio->page; + int err; if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr, fio->is_por ? META_POR : (__is_meta_io(fio) ? @@ -464,9 +524,15 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio) bio = __bio_alloc(fio->sbi, fio->new_blkaddr, fio->io_wbc, 1, is_read_io(fio->op), fio->type, fio->temp); - if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) { + err = f2fs_init_bio_crypt_ctx(bio, fio->page->mapping->host, fio, + false); + + if (!err && bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) + err = -EFAULT; + + if (err) { bio_put(bio); - return -EFAULT; + return err; } if (fio->io_wbc && !is_read_io(fio->op)) @@ -486,6 +552,7 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio) struct bio *bio = *fio->bio; struct page *page = fio->encrypted_page ? fio->encrypted_page : fio->page; + int err; if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr, __is_meta_io(fio) ? 
META_GENERIC : DATA_GENERIC)) @@ -495,7 +562,8 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio) f2fs_trace_ios(fio, 0); if (bio && (*fio->last_block + 1 != fio->new_blkaddr || - !__same_bdev(fio->sbi, fio->new_blkaddr, bio))) { + !__same_bdev(fio->sbi, fio->new_blkaddr, bio) || + !f2fs_page_crypt_back_mergeable(bio, page))) { __submit_bio(fio->sbi, bio, fio->type); bio = NULL; } @@ -503,6 +571,12 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio) if (!bio) { bio = __bio_alloc(fio->sbi, fio->new_blkaddr, fio->io_wbc, BIO_MAX_PAGES, false, fio->type, fio->temp); + err = f2fs_init_bio_crypt_ctx(bio, fio->page->mapping->host, + fio, false); + if (err) { + bio_put(bio); + return err; + } bio_set_op_attrs(bio, fio->op, fio->op_flags); } @@ -570,8 +644,10 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio) if (io->bio && (io->last_block_in_bio != fio->new_blkaddr - 1 || (io->fio.op != fio->op || io->fio.op_flags != fio->op_flags) || - !__same_bdev(sbi, fio->new_blkaddr, io->bio))) + !__same_bdev(sbi, fio->new_blkaddr, io->bio) || + !f2fs_page_crypt_back_mergeable(io->bio, bio_page))) __submit_merged_bio(io); + alloc_new: if (io->bio == NULL) { if ((fio->type == DATA || fio->type == NODE) && @@ -583,6 +659,8 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio) io->bio = __bio_alloc(sbi, fio->new_blkaddr, fio->io_wbc, BIO_MAX_PAGES, false, fio->type, fio->temp); + f2fs_init_bio_crypt_ctx(io->bio, fio->page->mapping->host, + fio, true); io->fio = *fio; } @@ -615,16 +693,24 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr, struct bio *bio; struct bio_post_read_ctx *ctx; unsigned int post_read_steps = 0; + int err; bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false); if (!bio) return ERR_PTR(-ENOMEM); + err = f2fs_init_bio_crypt_ctx(bio, inode, NULL, false); + if (err) { + bio_put(bio); + return ERR_PTR(err); + } + f2fs_target_device(sbi, blkaddr, bio); bio->bi_end_io = f2fs_read_end_io; bio_set_op_attrs(bio, REQ_OP_READ, op_flag); - if (f2fs_encrypted_file(inode)) + if (fscrypt_needs_fs_layer_crypto(inode)) post_read_steps |= 1 << STEP_DECRYPT; + if (post_read_steps) { ctx = mempool_alloc(bio_post_read_ctx_pool, GFP_NOFS); if (!ctx) { @@ -1574,6 +1660,7 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page, struct f2fs_map_blocks *map, struct bio **bio_ret, sector_t *last_block_in_bio, + u64 *next_dun, bool is_readahead) { struct bio *bio = *bio_ret; @@ -1648,6 +1735,13 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page, __submit_bio(F2FS_I_SB(inode), bio, DATA); bio = NULL; } + + if (bio && fscrypt_inode_is_inline_crypted(inode) && + *next_dun != inline_crypt_dun(inode, page->index)) { + __submit_bio(F2FS_I_SB(inode), bio, DATA); + bio = NULL; + } + if (bio == NULL) { bio = f2fs_grab_read_bio(inode, block_nr, nr_pages, is_readahead ? 
REQ_RAHEAD : 0); @@ -1667,6 +1761,9 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page, if (bio_add_page(bio, page, blocksize, 0) < blocksize) goto submit_and_realloc; + if (fscrypt_inode_is_inline_crypted(inode)) + *next_dun = inline_crypt_dun(inode, page->index) + 1; + inc_page_count(F2FS_I_SB(inode), F2FS_RD_DATA); ClearPageError(page); *last_block_in_bio = block_nr; @@ -1700,6 +1797,7 @@ static int f2fs_mpage_readpages(struct address_space *mapping, struct inode *inode = mapping->host; struct f2fs_map_blocks map; int ret = 0; + u64 next_dun = 0; map.m_pblk = 0; map.m_lblk = 0; @@ -1723,7 +1821,8 @@ static int f2fs_mpage_readpages(struct address_space *mapping, } ret = f2fs_read_single_page(inode, page, nr_pages, &map, &bio, - &last_block_in_bio, is_readahead); + &last_block_in_bio, &next_dun, + is_readahead); if (ret) { SetPageError(page); zero_user_segment(page, 0, PAGE_SIZE); @@ -1777,12 +1876,12 @@ static int encrypt_one_page(struct f2fs_io_info *fio) struct page *mpage; gfp_t gfp_flags = GFP_NOFS; - if (!f2fs_encrypted_file(inode)) - return 0; - /* wait for GCed page writeback via META_MAPPING */ f2fs_wait_on_block_writeback(inode, fio->old_blkaddr); + if (!fscrypt_needs_fs_layer_crypto(inode)) + return 0; + retry_encrypt: fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(fio->page, PAGE_SIZE, 0, @@ -1957,7 +2056,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio) f2fs_unlock_op(fio->sbi); err = f2fs_inplace_write_data(fio); if (err) { - if (f2fs_encrypted_file(inode)) + if (fscrypt_needs_fs_layer_crypto(inode)) fscrypt_finalize_bounce_page(&fio->encrypted_page); if (PageWriteback(page)) end_page_writeback(page); @@ -2692,6 +2791,8 @@ static void f2fs_dio_end_io(struct bio *bio) { struct f2fs_private_dio *dio = bio->bi_private; + fscrypt_unset_bio_crypt_ctx(bio); + dec_page_count(F2FS_I_SB(dio->inode), dio->write ? F2FS_DIO_WRITE : F2FS_DIO_READ); @@ -2708,12 +2809,18 @@ static void f2fs_dio_submit_bio(struct bio *bio, struct inode *inode, { struct f2fs_private_dio *dio; bool write = (bio_op(bio) == REQ_OP_WRITE); + u64 data_unit_num = inline_crypt_dun(inode, file_offset >> PAGE_SHIFT); dio = f2fs_kzalloc(F2FS_I_SB(inode), sizeof(struct f2fs_private_dio), GFP_NOFS); if (!dio) goto out; + if (fscrypt_set_bio_crypt_ctx(bio, inode, data_unit_num, GFP_NOIO)) { + kvfree(dio); + goto out; + } + dio->inode = inode; dio->orig_end_io = bio->bi_end_io; dio->orig_private = bio->bi_private; diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index 78a1b873e48a..196145cb42f9 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -2223,12 +2223,15 @@ static bool f2fs_dummy_context(struct inode *inode) } static const struct fscrypt_operations f2fs_cryptops = { - .key_prefix = "f2fs:", - .get_context = f2fs_get_context, - .set_context = f2fs_set_context, - .dummy_context = f2fs_dummy_context, - .empty_dir = f2fs_empty_dir, - .max_namelen = F2FS_NAME_LEN, + .key_prefix = "f2fs:", + .get_context = f2fs_get_context, + .set_context = f2fs_set_context, + .dummy_context = f2fs_dummy_context, + .empty_dir = f2fs_empty_dir, + .max_namelen = F2FS_NAME_LEN, +#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT + .inline_crypt_supp = true, +#endif }; #endif
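As an illustrative aside (a self-contained sketch of the scheme used above, not code from the patch; all names below are hypothetical): the DUN produced by inline_crypt_dun() packs the inode number into the high 32 bits and the page index into the low 32 bits, so the back-merge test reduces to one addition and one comparison:

#include <stdbool.h>
#include <stdint.h>

/* Mirrors the bit layout of inline_crypt_dun() in this patch. */
static uint64_t dun_of(uint64_t ino, uint64_t page_index)
{
	return (ino << 32) | (uint32_t)page_index;
}

/* A page can be back-merged only if the DUN sequence stays contiguous. */
static bool dun_back_mergeable(uint64_t bio_first_dun,
			       unsigned int pages_in_bio,
			       uint64_t next_page_dun)
{
	return bio_first_dun + pages_in_bio == next_page_dun;
}

For example, pages 5 and 6 of inode 42 produce contiguous DUNs and may share a bio, while a hole in the page index or a page from another inode breaks contiguity and forces the current bio to be submitted first, which is exactly what the added !f2fs_page_crypt_back_mergeable() checks enforce.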