From patchwork Mon Jan 16 13:05:48 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13103056
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Israel Rukshin, Bryan Tan, Christoph Hellwig, Eric Dumazet, Jakub Kicinski, Jens Axboe, Keith Busch, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu, Max Gurtovoy, netdev@vger.kernel.org, Paolo Abeni, Saeed Mahameed, Sagi Grimberg, Selvin Xavier, Steven Rostedt, Vishnu Dasa, Yishai Hadas
Subject: [PATCH mlx5-next 01/13] net/mlx5: Introduce crypto IFC bits and structures
Date: Mon, 16 Jan 2023 15:05:48 +0200
Message-Id: <92da0db17a6106230c9a1938bc43071c119b7e7f.1673873422.git.leon@kernel.org>

From: Israel Rukshin

Add crypto related IFC structs, layouts and enumerations.
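For illustration only (not part of this patch): the mlx5_ifc layouts added below follow the usual convention that each "u8 name[0xN]" entry is an N-bit field read with the generic MLX5_GET() accessor. A minimal sketch, assuming the CRYPTO capability page has already been queried into a local buffer (the MLX5_CAP_CRYPTO() convenience macro only appears in the next patch); the helper name is made up for the example:

static bool example_wrapped_aes_xts_supported(void *crypto_caps)
{
	/* Field names follow struct mlx5_ifc_crypto_cap_bits */
	if (!MLX5_GET(crypto_cap, crypto_caps, wrapped_crypto_operational))
		return false;

	return MLX5_GET(crypto_cap, crypto_caps, wrapped_import_method) &
	       MLX5_CRYPTO_WRAPPED_IMPORT_METHOD_CAP_AES_XTS;
}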
Signed-off-by: Israel Rukshin Signed-off-by: Leon Romanovsky --- include/linux/mlx5/mlx5_ifc.h | 36 ++++++++++++++++++++++++++++++++--- 1 file changed, 33 insertions(+), 3 deletions(-) diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 8bbf15433bb2..170fe1081820 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -1331,6 +1331,29 @@ struct mlx5_ifc_macsec_cap_bits { u8 reserved_at_40[0x7c0]; }; +enum { + MLX5_CRYPTO_WRAPPED_IMPORT_METHOD_CAP_AES_XTS = 0x4, +}; + +struct mlx5_ifc_crypto_cap_bits { + u8 wrapped_crypto_operational[0x1]; + u8 reserved_at_1[0x17]; + u8 wrapped_import_method[0x8]; + + u8 reserved_at_20[0xb]; + u8 log_max_num_deks[0x5]; + u8 reserved_at_30[0x3]; + u8 log_max_num_import_keks[0x5]; + u8 reserved_at_38[0x3]; + u8 log_max_num_creds[0x5]; + + u8 failed_selftests[0x10]; + u8 num_nv_import_keks[0x8]; + u8 num_nv_credentials[0x8]; + + u8 reserved_at_60[0x7a0]; +}; + enum { MLX5_WQ_TYPE_LINKED_LIST = 0x0, MLX5_WQ_TYPE_CYCLIC = 0x1, @@ -1758,7 +1781,9 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 reserved_at_3e8[0x2]; u8 vhca_state[0x1]; u8 log_max_vlan_list[0x5]; - u8 reserved_at_3f0[0x3]; + u8 reserved_at_3f0[0x1]; + u8 aes_xts_single_block_le_tweak[0x1]; + u8 aes_xts_multi_block_be_tweak[0x1]; u8 log_max_current_mc_list[0x5]; u8 reserved_at_3f8[0x3]; u8 log_max_current_uc_list[0x5]; @@ -1774,7 +1799,8 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 ats[0x1]; u8 reserved_at_462[0x1]; u8 log_max_uctx[0x5]; - u8 reserved_at_468[0x2]; + u8 aes_xts_multi_block_le_tweak[0x1]; + u8 crypto[0x1]; u8 ipsec_offload[0x1]; u8 log_max_umem[0x5]; u8 max_num_eqs[0x10]; @@ -3377,6 +3403,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_virtio_emulation_cap_bits virtio_emulation_cap; struct mlx5_ifc_shampo_cap_bits shampo_cap; struct mlx5_ifc_macsec_cap_bits macsec_cap; + struct mlx5_ifc_crypto_cap_bits crypto_cap; u8 reserved_at_0[0x8000]; }; @@ -3995,7 +4022,9 @@ struct mlx5_ifc_mkc_bits { u8 reserved_at_1d9[0x1]; u8 log_page_size[0x5]; - u8 reserved_at_1e0[0x20]; + u8 reserved_at_1e0[0x3]; + u8 crypto_en[0x2]; + u8 reserved_at_1e5[0x1b]; }; struct mlx5_ifc_pkey_bits { @@ -11978,6 +12007,7 @@ enum { enum { MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_TLS = 0x1, MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_IPSEC = 0x2, + MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_AES_XTS = 0x3, MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_MACSEC = 0x4, }; From patchwork Mon Jan 16 13:05:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13103066 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B4851C677F1 for ; Mon, 16 Jan 2023 13:07:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229908AbjAPNHX (ORCPT ); Mon, 16 Jan 2023 08:07:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56840 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231312AbjAPNGj (ORCPT ); Mon, 16 Jan 2023 08:06:39 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 05E0FAD; Mon, 16 Jan 2023 05:06:25 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id A4272B80D20; Mon, 16 Jan 2023 13:06:23 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 91A16C433D2; Mon, 16 Jan 2023 13:06:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1673874382; bh=vAkRimTAOToGcO2FbKESWn7KypzSRbBcz5r9JH5CQXA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ADlWSkwHACNAO2DLud+o3x07DseJaWthyhMStPks9JKa3AMvFD7TCTSQ8TZM0fQ3R TXWvlPdUnDpnYl24ITlNcRdLUTeTLOuuVf9xcSA9ISNpqlNKsN/d5rGkznbmjvxyE9 qZiBv/tQlQfwjsPz960Yrq13RsUph72exN6wRTnzpQyRbuVxqrZA+czP5JviwZapIH 8BWLxK82/4kZPDyD8SyFfgxbcdcubFqUEE97PWhIiPLmUqRmsm2JDukeuGGnDcrUma aPQAEvN4YMtPALZapqy/W/8UppovupAUSNeAIgEwmJ4niNNuj8tJ9dq2bl4kwefxCs 3BkpzLZuxGDVw== From: Leon Romanovsky To: Jason Gunthorpe Cc: Israel Rukshin , Bryan Tan , Christoph Hellwig , Eric Dumazet , Jakub Kicinski , Jens Axboe , Keith Busch , linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu , Max Gurtovoy , netdev@vger.kernel.org, Paolo Abeni , Saeed Mahameed , Sagi Grimberg , Selvin Xavier , Steven Rostedt , Vishnu Dasa , Yishai Hadas Subject: [PATCH mlx5-next 02/13] net/mlx5: Introduce crypto capabilities macro Date: Mon, 16 Jan 2023 15:05:49 +0200 Message-Id: <561d70fff0ab0ebc5594e371d9faec2a2c934972.1673873422.git.leon@kernel.org> X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Israel Rukshin Add MLX5_CAP_CRYPTO() macro to the list of capabilities. Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy Signed-off-by: Leon Romanovsky --- drivers/net/ethernet/mellanox/mlx5/core/fw.c | 6 ++++++ drivers/net/ethernet/mellanox/mlx5/core/main.c | 1 + include/linux/mlx5/device.h | 4 ++++ 3 files changed, 11 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c index f34e758a2f1f..4603f7ffd8d6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fw.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fw.c @@ -147,6 +147,12 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev) if (err) return err; + if (MLX5_CAP_GEN(dev, crypto)) { + err = mlx5_core_get_caps(dev, MLX5_CAP_CRYPTO); + if (err) + return err; + } + if (MLX5_CAP_GEN(dev, port_selection_cap)) { err = mlx5_core_get_caps(dev, MLX5_CAP_PORT_SELECTION); if (err) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c index df134f6d32dc..81348a009666 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -1555,6 +1555,7 @@ static const int types[] = { MLX5_CAP_DEV_SHAMPO, MLX5_CAP_MACSEC, MLX5_CAP_ADV_VIRTUALIZATION, + MLX5_CAP_CRYPTO, }; static void mlx5_hca_caps_free(struct mlx5_core_dev *dev) diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h index 29d4b201c7b2..fd095f0ed3ec 100644 --- a/include/linux/mlx5/device.h +++ b/include/linux/mlx5/device.h @@ -1204,6 +1204,7 @@ enum mlx5_cap_type { MLX5_CAP_VDPA_EMULATION = 0x13, MLX5_CAP_DEV_EVENT = 0x14, MLX5_CAP_IPSEC, + MLX5_CAP_CRYPTO = 0x1a, MLX5_CAP_DEV_SHAMPO = 0x1d, MLX5_CAP_MACSEC = 0x1f, MLX5_CAP_GENERAL_2 = 0x20, @@ -1466,6 +1467,9 @@ enum mlx5_qcam_feature_groups { #define MLX5_CAP_MACSEC(mdev, cap)\ MLX5_GET(macsec_cap, (mdev)->caps.hca[MLX5_CAP_MACSEC]->cur, cap) 
+#define MLX5_CAP_CRYPTO(mdev, cap)\
+	MLX5_GET(crypto_cap, (mdev)->caps.hca[MLX5_CAP_CRYPTO]->cur, cap)
+
 enum {
 	MLX5_CMD_STAT_OK = 0x0,
 	MLX5_CMD_STAT_INT_ERR = 0x1,

From patchwork Mon Jan 16 13:05:50 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13103057
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Israel Rukshin, Bryan Tan, Christoph Hellwig, Eric Dumazet, Jakub Kicinski, Jens Axboe, Keith Busch, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu, Max Gurtovoy, netdev@vger.kernel.org, Paolo Abeni, Saeed Mahameed, Sagi Grimberg, Selvin Xavier, Steven Rostedt, Vishnu Dasa, Yishai Hadas
Subject: [PATCH rdma-next 03/13] RDMA: Split kernel-only create QP flags from uverbs create QP flags
Date: Mon, 16 Jan 2023 15:05:50 +0200
Message-Id: <6e46859a58645d9f16a63ff76592487aabc9971d.1673873422.git.leon@kernel.org>

From: Israel Rukshin

The current code limits the create_flags to being the same bitmap that will be copied to user space. Extend the create QP flags to 64 bits and move all the flags that are only used internally in the kernel to the upper 32 bits. This cleanly splits out the uverbs flags from the kernel flags and avoids confusion in the flags bitmap. Also add short comments describing what each of the kernel-only flags is used for.
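For illustration only (not part of this patch): a minimal sketch of how a kernel ULP fills the now 64-bit create_flags, assuming an already created CQ; the helper name and queue sizes are made up for the example. uverbs-visible flags stay in the low 32 bits, while kernel-only flags such as IB_QP_CREATE_INTEGRITY_EN now live above bit 31:

static void example_fill_qp_init_attr(struct ib_qp_init_attr *attr,
				      struct ib_cq *cq)
{
	memset(attr, 0, sizeof(*attr));
	attr->qp_type = IB_QPT_RC;
	attr->send_cq = cq;
	attr->recv_cq = cq;
	attr->cap.max_send_wr = 16;
	attr->cap.max_recv_wr = 16;
	attr->cap.max_send_sge = 1;
	attr->cap.max_recv_sge = 1;
	/* kernel-only flag, now defined as 1ULL << 34 */
	attr->create_flags = IB_QP_CREATE_INTEGRITY_EN;
}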
Signed-off-by: Israel Rukshin Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/uverbs_std_types_qp.c | 12 +++++----- drivers/infiniband/hw/bnxt_re/ib_verbs.c | 2 +- drivers/infiniband/hw/mlx4/mlx4_ib.h | 4 ++-- drivers/infiniband/hw/mlx4/qp.c | 4 ++-- drivers/infiniband/hw/mlx5/mlx5_ib.h | 2 +- drivers/infiniband/hw/mlx5/qp.c | 9 ++++---- drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c | 2 +- include/rdma/ib_verbs.h | 22 ++++++++++++++----- 8 files changed, 35 insertions(+), 22 deletions(-) diff --git a/drivers/infiniband/core/uverbs_std_types_qp.c b/drivers/infiniband/core/uverbs_std_types_qp.c index 7b4773fa4bc0..fe2c7427eac4 100644 --- a/drivers/infiniband/core/uverbs_std_types_qp.c +++ b/drivers/infiniband/core/uverbs_std_types_qp.c @@ -97,6 +97,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)( struct ib_uobject *xrcd_uobj = NULL; struct ib_device *device; u64 user_handle; + u32 create_flags; int ret; ret = uverbs_copy_from_or_zero(&cap, attrs, @@ -191,7 +192,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)( return -EINVAL; } - ret = uverbs_get_flags32(&attr.create_flags, attrs, + ret = uverbs_get_flags32(&create_flags, attrs, UVERBS_ATTR_CREATE_QP_FLAGS, IB_UVERBS_QP_CREATE_BLOCK_MULTICAST_LOOPBACK | IB_UVERBS_QP_CREATE_SCATTER_FCS | @@ -201,7 +202,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)( if (ret) return ret; - ret = check_creation_flags(attr.qp_type, attr.create_flags); + ret = check_creation_flags(attr.qp_type, create_flags); if (ret) return ret; @@ -211,7 +212,7 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)( UVERBS_ATTR_CREATE_QP_SOURCE_QPN); if (ret) return ret; - attr.create_flags |= IB_QP_CREATE_SOURCE_QPN; + create_flags |= IB_QP_CREATE_SOURCE_QPN; } srq = uverbs_attr_get_obj(attrs, @@ -234,16 +235,17 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QP_CREATE)( attr.send_cq = send_cq; attr.recv_cq = recv_cq; attr.xrcd = xrcd; - if (attr.create_flags & IB_UVERBS_QP_CREATE_SQ_SIG_ALL) { + if (create_flags & IB_UVERBS_QP_CREATE_SQ_SIG_ALL) { /* This creation bit is uverbs one, need to mask before * calling drivers. It was added to prevent an extra user attr * only for that when using ioctl. 
*/ - attr.create_flags &= ~IB_UVERBS_QP_CREATE_SQ_SIG_ALL; + create_flags &= ~IB_UVERBS_QP_CREATE_SQ_SIG_ALL; attr.sq_sig_type = IB_SIGNAL_ALL_WR; } else { attr.sq_sig_type = IB_SIGNAL_REQ_WR; } + attr.create_flags = create_flags; set_caps(&attr, &cap, true); mutex_init(&obj->mcast_lock); diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c index 989edc789633..1493ee9ed2b8 100644 --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c @@ -1273,7 +1273,7 @@ static int bnxt_re_init_qp_attr(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd, qplqp->dpi = &rdev->dpi_privileged; /* Doorbell page */ if (init_attr->create_flags) { ibdev_dbg(&rdev->ibdev, - "QP create flags 0x%x not supported", + "QP create flags 0x%llx not supported", init_attr->create_flags); return -EOPNOTSUPP; } diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h index 17fee1e73a45..c553bf0eb257 100644 --- a/drivers/infiniband/hw/mlx4/mlx4_ib.h +++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h @@ -184,7 +184,7 @@ enum mlx4_ib_qp_flags { /* Mellanox specific flags start from IB_QP_CREATE_RESERVED_START */ MLX4_IB_ROCE_V2_GSI_QP = MLX4_IB_QP_CREATE_ROCE_V2_GSI, MLX4_IB_SRIOV_TUNNEL_QP = 1 << 30, - MLX4_IB_SRIOV_SQP = 1 << 31, + MLX4_IB_SRIOV_SQP = 1ULL << 31, }; struct mlx4_ib_gid_entry { @@ -342,7 +342,7 @@ struct mlx4_ib_qp { int buf_size; struct mutex mutex; u16 xrcdn; - u32 flags; + u64 flags; u8 port; u8 alt_port; u8 atomic_rd_en; diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c index 884825b2e5f7..f3ad436bf5c9 100644 --- a/drivers/infiniband/hw/mlx4/qp.c +++ b/drivers/infiniband/hw/mlx4/qp.c @@ -287,7 +287,7 @@ static void mlx4_ib_wq_event(struct mlx4_qp *qp, enum mlx4_event type) type, qp->qpn); } -static int send_wqe_overhead(enum mlx4_ib_qp_type type, u32 flags) +static int send_wqe_overhead(enum mlx4_ib_qp_type type, u64 flags) { /* * UD WQEs must have a datagram segment. @@ -1514,7 +1514,7 @@ static int _mlx4_ib_create_qp(struct ib_pd *pd, struct mlx4_ib_qp *qp, struct ib_udata *udata) { int err; - int sup_u_create_flags = MLX4_IB_QP_BLOCK_MULTICAST_LOOPBACK; + u64 sup_u_create_flags = MLX4_IB_QP_BLOCK_MULTICAST_LOOPBACK; u16 xrcdn = 0; if (init_attr->rwq_ind_tbl) diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index ddb36c757074..295502692da2 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -500,7 +500,7 @@ struct mlx5_ib_qp { */ struct mutex mutex; /* cached variant of create_flags from struct ib_qp_init_attr */ - u32 flags; + u64 flags; u32 port; u8 state; int max_inline_data; diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index 80504db92ee9..f04adc18e63b 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ b/drivers/infiniband/hw/mlx5/qp.c @@ -2938,7 +2938,7 @@ static int process_vendor_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, return (flags) ? 
-EINVAL : 0; } -static void process_create_flag(struct mlx5_ib_dev *dev, int *flags, int flag, +static void process_create_flag(struct mlx5_ib_dev *dev, u64 *flags, u64 flag, bool cond, struct mlx5_ib_qp *qp) { if (!(*flags & flag)) @@ -2958,7 +2958,8 @@ static void process_create_flag(struct mlx5_ib_dev *dev, int *flags, int flag, *flags &= ~MLX5_IB_QP_CREATE_WC_TEST; return; } - mlx5_ib_dbg(dev, "Verbs create QP flag 0x%X is not supported\n", flag); + mlx5_ib_dbg(dev, "Verbs create QP flag 0x%llX is not supported\n", + flag); } static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, @@ -2966,7 +2967,7 @@ static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, { enum ib_qp_type qp_type = qp->type; struct mlx5_core_dev *mdev = dev->mdev; - int create_flags = attr->create_flags; + u64 create_flags = attr->create_flags; bool cond; if (qp_type == MLX5_IB_QPT_DCT) @@ -3024,7 +3025,7 @@ static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, true, qp); if (create_flags) { - mlx5_ib_dbg(dev, "Create QP has unsupported flags 0x%X\n", + mlx5_ib_dbg(dev, "Create QP has unsupported flags 0x%llX\n", create_flags); return -EOPNOTSUPP; } diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c index f83cd4a9d992..0dbfc3c9e274 100644 --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c @@ -206,7 +206,7 @@ int pvrdma_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr, if (init_attr->create_flags) { dev_warn(&dev->pdev->dev, - "invalid create queuepair flags %#x\n", + "invalid create queuepair flags 0x%llx\n", init_attr->create_flags); return -EOPNOTSUPP; } diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 949cf4ffc536..cc2ddd4e6c12 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -1140,16 +1140,15 @@ enum ib_qp_type { IB_QPT_RESERVED10, }; +/* + * bits 0, 5, 6 and 7 may be set by old kernels and should not be used. + */ enum ib_qp_create_flags { - IB_QP_CREATE_IPOIB_UD_LSO = 1 << 0, IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK = IB_UVERBS_QP_CREATE_BLOCK_MULTICAST_LOOPBACK, IB_QP_CREATE_CROSS_CHANNEL = 1 << 2, IB_QP_CREATE_MANAGED_SEND = 1 << 3, IB_QP_CREATE_MANAGED_RECV = 1 << 4, - IB_QP_CREATE_NETIF_QP = 1 << 5, - IB_QP_CREATE_INTEGRITY_EN = 1 << 6, - IB_QP_CREATE_NETDEV_USE = 1 << 7, IB_QP_CREATE_SCATTER_FCS = IB_UVERBS_QP_CREATE_SCATTER_FCS, IB_QP_CREATE_CVLAN_STRIPPING = @@ -1159,7 +1158,18 @@ enum ib_qp_create_flags { IB_UVERBS_QP_CREATE_PCI_WRITE_END_PADDING, /* reserve bits 26-31 for low level drivers' internal use */ IB_QP_CREATE_RESERVED_START = 1 << 26, - IB_QP_CREATE_RESERVED_END = 1 << 31, + IB_QP_CREATE_RESERVED_END = 1ULL << 31, + + /* The below flags are used only by the kernel */ + + /* The created QP will be used for IPoIB UD LSO */ + IB_QP_CREATE_IPOIB_UD_LSO = 1ULL << 32, + /* Create a QP that supports flow-steering */ + IB_QP_CREATE_NETIF_QP = 1ULL << 33, + /* The created QP can carry out integrity handover operations */ + IB_QP_CREATE_INTEGRITY_EN = 1ULL << 34, + /* Create an accelerated UD QP */ + IB_QP_CREATE_NETDEV_USE = 1ULL << 35, }; /* @@ -1179,7 +1189,7 @@ struct ib_qp_init_attr { struct ib_qp_cap cap; enum ib_sig_type sq_sig_type; enum ib_qp_type qp_type; - u32 create_flags; + u64 create_flags; /* * Only needed for special QP types, or when using the RW API. 
From patchwork Mon Jan 16 13:05:51 2023
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13103058
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Israel Rukshin, Bryan Tan, Christoph Hellwig, Eric Dumazet, Jakub Kicinski, Jens Axboe, Keith Busch, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu, Max Gurtovoy, netdev@vger.kernel.org, Paolo Abeni, Saeed Mahameed, Sagi Grimberg, Selvin Xavier, Steven Rostedt, Vishnu Dasa, Yishai Hadas
Subject: [PATCH rdma-next 04/13] RDMA/core: Add cryptographic device capabilities
Date: Mon, 16 Jan 2023 15:05:51 +0200
Message-Id: <4be0048cfe54548acc3730d733009237d8a896f8.1673873422.git.leon@kernel.org>

From: Israel Rukshin

Some advanced RDMA devices have HW engines with cryptographic capabilities. Those devices can encrypt/decrypt data when transmitting from memory domain to wire domain and when receiving data from wire domain to memory domain. Expose these capabilities via common RDMA device attributes. For now, add only AES-XTS cryptographic support.
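For illustration only (not part of this patch): a minimal sketch of how a ULP would probe the new attribute before trying to use inline crypto; the helper name is made up for the example:

static bool example_dev_supports_aes_xts(struct ib_device *dev)
{
	const struct ib_crypto_caps *caps = &dev->attrs.crypto_caps;

	return (caps->crypto_engines & IB_CRYPTO_ENGINES_CAP_AES_XTS) &&
	       caps->max_num_deks > 0;
}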
Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy Signed-off-by: Leon Romanovsky --- include/rdma/crypto.h | 37 +++++++++++++++++++++++++++++++++++++ include/rdma/ib_verbs.h | 2 ++ 2 files changed, 39 insertions(+) create mode 100644 include/rdma/crypto.h diff --git a/include/rdma/crypto.h b/include/rdma/crypto.h new file mode 100644 index 000000000000..4779eacb000e --- /dev/null +++ b/include/rdma/crypto.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* + * Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. + */ + +#ifndef _RDMA_CRYPTO_H_ +#define _RDMA_CRYPTO_H_ + +#include + +/** + * Encryption and decryption operations are done by attaching crypto properties + * to a memory region. Once done, every access to the memory via the crypto + * memory region will result in inline encryption or decryption of the data + * by the RDMA device. The crypto properties contain the Data Encryption Key + * (DEK) and the crypto standard that should be used and its attributes. + */ + +/** + * Cryptographic engines in clear text mode capabilities. + * @IB_CRYPTO_ENGINES_CAP_AES_XTS: Support AES-XTS engine. + */ +enum { + IB_CRYPTO_ENGINES_CAP_AES_XTS = 1 << 0, +}; + +/** + * struct ib_crypto_caps - Cryptographic capabilities + * @crypto_engines: From enum ib_crypto_engines_cap_bits. + * @max_num_deks: Maximum number of Data Encryption Keys. + */ +struct ib_crypto_caps { + u32 crypto_engines; + u32 max_num_deks; +}; + +#endif /* _RDMA_CRYPTO_H_ */ diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index cc2ddd4e6c12..83be7e49c5f7 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -40,6 +40,7 @@ #include #include #include +#include #include #include @@ -450,6 +451,7 @@ struct ib_device_attr { u64 max_dm_size; /* Max entries for sgl for optimized performance per READ */ u32 max_sgl_rd; + struct ib_crypto_caps crypto_caps; }; enum ib_mtu { From patchwork Mon Jan 16 13:05:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13103067 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2F983C678D7 for ; Mon, 16 Jan 2023 13:07:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231450AbjAPNHY (ORCPT ); Mon, 16 Jan 2023 08:07:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56720 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229810AbjAPNGm (ORCPT ); Mon, 16 Jan 2023 08:06:42 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EDB2B10AB7; Mon, 16 Jan 2023 05:06:35 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 78B6360F97; Mon, 16 Jan 2023 13:06:35 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 17886C433D2; Mon, 16 Jan 2023 13:06:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1673874394; bh=AUs3KMfwul5/qoENqpBBsgdpaDQZs+4sjPEYd3EWOgw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=SZ2qMwj8IZWq9M1yXgCtDldTsT2NjF+yGxfuxQc6UhuH6wTSCTXdN8hzxp4iR5+kx TAs8fTSTQeD+D8cq/Ge6NnfU1ObfO0vh8sL+fE1NrUgSUPSB/hIfwg8T8DR6cXgAN8 HaZm5yn7T4ObQhbtMtAuzAPR5t3jE2eMTo9BsF3IDPB/6FuplJNv+huJKbmeS7gjdU SeeCkTOFp03faiW1jae/4mHefqoY/AR6QUy9NgxR+Co+CNVB5Y+zKNylpEHNx23pCI Nrk4md70FmjZ+U4kzzG/nNSdyRbf6vgaoj+wE+KZoYnYo0MvKynZ8kvqgNXJvkpqGQ tJOQAlFnKwrmQ== From: Leon Romanovsky To: Jason Gunthorpe Cc: Israel Rukshin , Bryan Tan , Christoph Hellwig , Eric Dumazet , Jakub Kicinski , Jens Axboe , Keith Busch , linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu , Max Gurtovoy , netdev@vger.kernel.org, Paolo Abeni , Saeed Mahameed , Sagi Grimberg , Selvin Xavier , Steven Rostedt , Vishnu Dasa , Yishai Hadas Subject: [PATCH rdma-next 05/13] RDMA/core: Add DEK management API Date: Mon, 16 Jan 2023 15:05:52 +0200 Message-Id: <58e678103d910efbe3481d698169af9dadf70d4b.1673873422.git.leon@kernel.org> X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Israel Rukshin Add an API to manage Data Encryption Keys (DEKs). The API allows creating and destroying a DEK. DEKs allow encryption and decryption of transmitted data and are used in MKeys for crypto operations. A crypto setter for the MKey configuration API will be added in the following commit. Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/device.c | 2 ++ drivers/infiniband/core/verbs.c | 32 +++++++++++++++++++++++++++ include/rdma/crypto.h | 38 ++++++++++++++++++++++++++++++++ include/rdma/ib_verbs.h | 3 +++ 4 files changed, 75 insertions(+) diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c index a666847bd714..b2016725c3d8 100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c @@ -2615,6 +2615,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops) SET_DEVICE_OP(dev_ops, create_ah); SET_DEVICE_OP(dev_ops, create_counters); SET_DEVICE_OP(dev_ops, create_cq); + SET_DEVICE_OP(dev_ops, create_dek); SET_DEVICE_OP(dev_ops, create_flow); SET_DEVICE_OP(dev_ops, create_qp); SET_DEVICE_OP(dev_ops, create_rwq_ind_table); @@ -2632,6 +2633,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops) SET_DEVICE_OP(dev_ops, destroy_ah); SET_DEVICE_OP(dev_ops, destroy_counters); SET_DEVICE_OP(dev_ops, destroy_cq); + SET_DEVICE_OP(dev_ops, destroy_dek); SET_DEVICE_OP(dev_ops, destroy_flow); SET_DEVICE_OP(dev_ops, destroy_flow_action); SET_DEVICE_OP(dev_ops, destroy_qp); diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 26b021f43ba4..03633d706106 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -2306,6 +2306,38 @@ struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, } EXPORT_SYMBOL(ib_alloc_mr_integrity); +/** + * ib_create_dek - Create a DEK (Data Encryption Key) associated with the + * specific protection domain. + * @pd: The protection domain associated with the DEK. + * @attr: The attributes of the DEK. + * + * Return: Allocated DEK in case of success; IS_ERR() is true in case of an + * error, PTR_ERR() returns the error code. 
+ */ +struct ib_dek *ib_create_dek(struct ib_pd *pd, struct ib_dek_attr *attr) +{ + struct ib_device *device = pd->device; + + if (!device->ops.create_dek || !device->ops.destroy_dek) + return ERR_PTR(-EOPNOTSUPP); + + return device->ops.create_dek(pd, attr); +} +EXPORT_SYMBOL(ib_create_dek); + +/** + * ib_destroy_dek - Destroys the specified DEK. + * @dek: The DEK to destroy. + */ +void ib_destroy_dek(struct ib_dek *dek) +{ + struct ib_device *device = dek->pd->device; + + device->ops.destroy_dek(dek); +} +EXPORT_SYMBOL(ib_destroy_dek); + /* Multicast groups */ static bool is_valid_mcast_lid(struct ib_qp *qp, u16 lid) diff --git a/include/rdma/crypto.h b/include/rdma/crypto.h index 4779eacb000e..cdf287c94737 100644 --- a/include/rdma/crypto.h +++ b/include/rdma/crypto.h @@ -34,4 +34,42 @@ struct ib_crypto_caps { u32 max_num_deks; }; +/** + * enum ib_crypto_key_type - Cryptographic key types + * @IB_CRYPTO_KEY_TYPE_AES_XTS: Key of type AES-XTS, which can be used when + * IB_CRYPTO_AES_XTS is supported. + */ +enum ib_crypto_key_type { + IB_CRYPTO_KEY_TYPE_AES_XTS, +}; + +/** + * struct ib_dek_attr - Parameters for DEK (Data Encryption Key) + * @key_blob: the key blob that will be used for encryption and decryption of + * transmitted data. Actual size and layout of this field depends on the + * provided key_type and key_blob_size. + * The layout of AES_XTS key is: key1_128b + key2_128b or key1_256b + + * key2_256b. + * @key_blob_size: size of the key blob in bytes. + * @key_type: specific cryptographic key type. + */ +struct ib_dek_attr { + const void *key_blob; + u32 key_blob_size; + enum ib_crypto_key_type key_type; +}; + +/** + * struct ib_dek - Data Encryption Key + * @pd: The protection domain associated with the DEK. + * @id: DEK identifier. + */ +struct ib_dek { + struct ib_pd *pd; + u32 id; +}; + +struct ib_dek *ib_create_dek(struct ib_pd *pd, struct ib_dek_attr *attr); +void ib_destroy_dek(struct ib_dek *dek); + #endif /* _RDMA_CRYPTO_H_ */ diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 83be7e49c5f7..5fb42d553ca1 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -2512,6 +2512,9 @@ struct ib_device_ops { struct ib_mr *(*alloc_mr_integrity)(struct ib_pd *pd, u32 max_num_data_sg, u32 max_num_meta_sg); + struct ib_dek *(*create_dek)(struct ib_pd *pd, + struct ib_dek_attr *attr); + void (*destroy_dek)(struct ib_dek *dek); int (*advise_mr)(struct ib_pd *pd, enum ib_uverbs_advise_mr_advice advice, u32 flags, struct ib_sge *sg_list, u32 num_sge, From patchwork Mon Jan 16 13:05:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13103060 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24B79C677F1 for ; Mon, 16 Jan 2023 13:07:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230342AbjAPNHC (ORCPT ); Mon, 16 Jan 2023 08:07:02 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56846 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231454AbjAPNGk (ORCPT ); Mon, 16 Jan 2023 08:06:40 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8F5674690; Mon, 16 Jan 2023 05:06:27 -0800 (PST) Received: from 
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Israel Rukshin, Bryan Tan, Christoph Hellwig, Eric Dumazet, Jakub Kicinski, Jens Axboe, Keith Busch, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu, Max Gurtovoy, netdev@vger.kernel.org, Paolo Abeni, Saeed Mahameed, Sagi Grimberg, Selvin Xavier, Steven Rostedt, Vishnu Dasa, Yishai Hadas
Subject: [PATCH rdma-next 06/13] RDMA/core: Introduce MR type for crypto operations
Date: Mon, 16 Jan 2023 15:05:53 +0200
Message-Id: <5b8fadc00c0fcc0c0ba3a5dcc9e7b9012c6b5859.1673873422.git.leon@kernel.org>

From: Israel Rukshin

Add crypto attributes for MKey. With those attributes, the device can encrypt/decrypt data when transmitting from memory domain to wire domain and when receiving data from wire domain to memory domain.
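For illustration only (not part of this patch): a minimal sketch of how a kernel ULP could combine the DEK API from the earlier patch with the new crypto MR type. The function name, max_num_sg and data_unit_size values are made up for the example, and actually registering the MR uses the IB_WR_REG_MR_CRYPTO work request introduced later in this series:

static struct ib_mr *example_alloc_aes_xts_mr(struct ib_pd *pd,
					      const void *key, u32 key_len,
					      struct ib_dek **dek_out)
{
	struct ib_dek_attr dek_attr = {
		.key_blob = key,
		.key_blob_size = key_len,	/* 32 or 64 bytes for AES-XTS */
		.key_type = IB_CRYPTO_KEY_TYPE_AES_XTS,
	};
	struct ib_crypto_attrs *attrs;
	struct ib_dek *dek;
	struct ib_mr *mr;

	dek = ib_create_dek(pd, &dek_attr);
	if (IS_ERR(dek))
		return ERR_CAST(dek);

	mr = ib_alloc_mr_crypto(pd, 4 /* max_num_sg */);
	if (IS_ERR(mr)) {
		ib_destroy_dek(dek);
		return mr;
	}

	attrs = mr->crypto_attrs;
	/* encrypted data lives on the wire, plaintext in memory */
	attrs->encrypt_domain = IB_CRYPTO_ENCRYPTED_WIRE_DOMAIN;
	attrs->encrypt_standard = IB_CRYPTO_AES_XTS;
	attrs->data_unit_size = 512;	/* e.g. the logical block size */
	attrs->dek = dek->id;

	*dek_out = dek;	/* caller later releases it with ib_destroy_dek() */
	return mr;
}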
Signed-off-by: Israel Rukshin Reviewed-by: Max Gurtovoy Signed-off-by: Leon Romanovsky Reviewed-by: Steven Rostedt (Google) --- drivers/infiniband/core/device.c | 1 + drivers/infiniband/core/mr_pool.c | 2 ++ drivers/infiniband/core/verbs.c | 56 ++++++++++++++++++++++++++++++- include/rdma/crypto.h | 43 ++++++++++++++++++++++++ include/rdma/ib_verbs.h | 7 ++++ include/trace/events/rdma_core.h | 33 ++++++++++++++++++ 6 files changed, 141 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c index b2016725c3d8..d9e98fe92b3c 100644 --- a/drivers/infiniband/core/device.c +++ b/drivers/infiniband/core/device.c @@ -2599,6 +2599,7 @@ void ib_set_device_ops(struct ib_device *dev, const struct ib_device_ops *ops) SET_DEVICE_OP(dev_ops, alloc_hw_device_stats); SET_DEVICE_OP(dev_ops, alloc_hw_port_stats); SET_DEVICE_OP(dev_ops, alloc_mr); + SET_DEVICE_OP(dev_ops, alloc_mr_crypto); SET_DEVICE_OP(dev_ops, alloc_mr_integrity); SET_DEVICE_OP(dev_ops, alloc_mw); SET_DEVICE_OP(dev_ops, alloc_pd); diff --git a/drivers/infiniband/core/mr_pool.c b/drivers/infiniband/core/mr_pool.c index c0e2df128b34..d102cb4caefd 100644 --- a/drivers/infiniband/core/mr_pool.c +++ b/drivers/infiniband/core/mr_pool.c @@ -44,6 +44,8 @@ int ib_mr_pool_init(struct ib_qp *qp, struct list_head *list, int nr, if (type == IB_MR_TYPE_INTEGRITY) mr = ib_alloc_mr_integrity(qp->pd, max_num_sg, max_num_meta_sg); + else if (type == IB_MR_TYPE_CRYPTO) + mr = ib_alloc_mr_crypto(qp->pd, max_num_sg); else mr = ib_alloc_mr(qp->pd, type, max_num_sg); if (IS_ERR(mr)) { diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 03633d706106..61473fee4b54 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -2179,6 +2179,7 @@ int ib_dereg_mr_user(struct ib_mr *mr, struct ib_udata *udata) struct ib_pd *pd = mr->pd; struct ib_dm *dm = mr->dm; struct ib_sig_attrs *sig_attrs = mr->sig_attrs; + struct ib_crypto_attrs *crypto_attrs = mr->crypto_attrs; int ret; trace_mr_dereg(mr); @@ -2189,6 +2190,7 @@ int ib_dereg_mr_user(struct ib_mr *mr, struct ib_udata *udata) if (dm) atomic_dec(&dm->usecnt); kfree(sig_attrs); + kfree(crypto_attrs); } return ret; @@ -2217,7 +2219,8 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, goto out; } - if (mr_type == IB_MR_TYPE_INTEGRITY) { + if (mr_type == IB_MR_TYPE_INTEGRITY || + mr_type == IB_MR_TYPE_CRYPTO) { WARN_ON_ONCE(1); mr = ERR_PTR(-EINVAL); goto out; @@ -2294,6 +2297,7 @@ struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, mr->uobject = NULL; atomic_inc(&pd->usecnt); mr->need_inval = false; + mr->crypto_attrs = NULL; mr->type = IB_MR_TYPE_INTEGRITY; mr->sig_attrs = sig_attrs; @@ -2306,6 +2310,56 @@ struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, } EXPORT_SYMBOL(ib_alloc_mr_integrity); +/** + * ib_alloc_mr_crypto() - Allocates a crypto memory region + * @pd: protection domain associated with the region + * @max_num_sg: maximum sg entries available for registration + * + * Notes: + * Memory registration page/sg lists must not exceed max_num_sg. 
+ * + */ +struct ib_mr *ib_alloc_mr_crypto(struct ib_pd *pd, u32 max_num_sg) +{ + struct ib_mr *mr; + struct ib_crypto_attrs *crypto_attrs; + + if (!pd->device->ops.alloc_mr_crypto) { + mr = ERR_PTR(-EOPNOTSUPP); + goto out; + } + + crypto_attrs = kzalloc(sizeof(*crypto_attrs), GFP_KERNEL); + if (!crypto_attrs) { + mr = ERR_PTR(-ENOMEM); + goto out; + } + + mr = pd->device->ops.alloc_mr_crypto(pd, max_num_sg); + if (IS_ERR(mr)) { + kfree(crypto_attrs); + goto out; + } + + mr->device = pd->device; + mr->pd = pd; + mr->dm = NULL; + mr->uobject = NULL; + atomic_inc(&pd->usecnt); + mr->need_inval = false; + mr->sig_attrs = NULL; + mr->type = IB_MR_TYPE_CRYPTO; + mr->crypto_attrs = crypto_attrs; + + rdma_restrack_new(&mr->res, RDMA_RESTRACK_MR); + rdma_restrack_parent_name(&mr->res, &pd->res); + rdma_restrack_add(&mr->res); +out: + trace_mr_crypto_alloc(pd, max_num_sg, mr); + return mr; +} +EXPORT_SYMBOL(ib_alloc_mr_crypto); + /** * ib_create_dek - Create a DEK (Data Encryption Key) associated with the * specific protection domain. diff --git a/include/rdma/crypto.h b/include/rdma/crypto.h index cdf287c94737..ba1c6576a8ba 100644 --- a/include/rdma/crypto.h +++ b/include/rdma/crypto.h @@ -34,6 +34,49 @@ struct ib_crypto_caps { u32 max_num_deks; }; +/** + * enum ib_crypto_domain - Encryption domain + * According to the encryption domain and the data direction, the HW can + * conclude if need to encrypt or decrypt the data. + * @IB_CRYPTO_ENCRYPTED_WIRE_DOMAIN: encrypted data is in the wire domain. + * @IB_CRYPTO_ENCRYPTED_MEM_DOMAIN: encrypted data is in the memory domain. + */ +enum ib_crypto_domain { + IB_CRYPTO_ENCRYPTED_WIRE_DOMAIN, + IB_CRYPTO_ENCRYPTED_MEM_DOMAIN, +}; + +/** + * enum ib_crypto_standard - Encryption standard + * @IB_CRYPTO_AES_XTS: AES-XTS encryption. + */ +enum ib_crypto_standard { + IB_CRYPTO_AES_XTS, +}; + +/* XTS initial tweak size is up to 128 bits, 16 bytes. */ +#define IB_CRYPTO_XTS_TWEAK_MAX_SIZE 16 + +/** + * struct ib_crypto_attrs - Parameters for crypto handover operation + * @encrypt_domain: specific encryption domain. + * @encrypt_standard: specific encryption standard. + * @data_unit_size: data unit size in bytes. It might be e.g., the filesystem + * block size or the disk sector size. + * @xts_init_tweak: a value to be used during encryption of each data unit. + * This value is incremented by the device for every data_unit_size in the + * message. + * @dek: Data Encryption Key index. 
+ */ +struct ib_crypto_attrs { + enum ib_crypto_domain encrypt_domain; + enum ib_crypto_standard encrypt_standard; + int data_unit_size; + /* Today we support only AES-XTS */ + u32 xts_init_tweak[IB_CRYPTO_XTS_TWEAK_MAX_SIZE / sizeof(u32)]; + u32 dek; +}; + /** * enum ib_crypto_key_type - Cryptographic key types * @IB_CRYPTO_KEY_TYPE_AES_XTS: Key of type AES-XTS, which can be used when diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 5fb42d553ca1..7507661c78d0 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -876,6 +876,8 @@ __attribute_const__ int ib_rate_to_mbps(enum ib_rate rate); * without address translations (VA=PA) * @IB_MR_TYPE_INTEGRITY: memory region that is used for * data integrity operations + * @IB_MR_TYPE_CRYPTO: memory region that is used for cryptographic + * operations */ enum ib_mr_type { IB_MR_TYPE_MEM_REG, @@ -884,6 +886,7 @@ enum ib_mr_type { IB_MR_TYPE_USER, IB_MR_TYPE_DMA, IB_MR_TYPE_INTEGRITY, + IB_MR_TYPE_CRYPTO, }; enum ib_mr_status_check { @@ -1854,6 +1857,7 @@ struct ib_mr { struct ib_dm *dm; struct ib_sig_attrs *sig_attrs; /* only for IB_MR_TYPE_INTEGRITY MRs */ + struct ib_crypto_attrs *crypto_attrs; /* only for IB_MR_TYPE_CRYPTO */ /* * Implementation details of the RDMA core, don't use in drivers: */ @@ -2512,6 +2516,7 @@ struct ib_device_ops { struct ib_mr *(*alloc_mr_integrity)(struct ib_pd *pd, u32 max_num_data_sg, u32 max_num_meta_sg); + struct ib_mr *(*alloc_mr_crypto)(struct ib_pd *pd, u32 max_num_sg); struct ib_dek *(*create_dek)(struct ib_pd *pd, struct ib_dek_attr *attr); void (*destroy_dek)(struct ib_dek *dek); @@ -4295,6 +4300,8 @@ struct ib_mr *ib_alloc_mr_integrity(struct ib_pd *pd, u32 max_num_data_sg, u32 max_num_meta_sg); +struct ib_mr *ib_alloc_mr_crypto(struct ib_pd *pd, u32 max_num_sg); + /** * ib_update_fast_reg_key - updates the key portion of the fast_reg MR * R_Key and L_Key. 
diff --git a/include/trace/events/rdma_core.h b/include/trace/events/rdma_core.h index 17642aa54437..b6a3d82b89ca 100644 --- a/include/trace/events/rdma_core.h +++ b/include/trace/events/rdma_core.h @@ -371,6 +371,39 @@ TRACE_EVENT(mr_integ_alloc, __entry->max_num_meta_sg, __entry->rc) ); +TRACE_EVENT(mr_crypto_alloc, + TP_PROTO( + const struct ib_pd *pd, + u32 max_num_sg, + const struct ib_mr *mr + ), + + TP_ARGS(pd, max_num_sg, mr), + + TP_STRUCT__entry( + __field(u32, pd_id) + __field(u32, mr_id) + __field(u32, max_num_sg) + __field(int, rc) + ), + + TP_fast_assign( + __entry->pd_id = pd->res.id; + if (IS_ERR(mr)) { + __entry->mr_id = 0; + __entry->rc = PTR_ERR(mr); + } else { + __entry->mr_id = mr->res.id; + __entry->rc = 0; + } + __entry->max_num_sg = max_num_sg; + ), + + TP_printk("pd.id=%u mr.id=%u max_num_sg=%u rc=%d", + __entry->pd_id, __entry->mr_id, __entry->max_num_sg, + __entry->rc) +); + TRACE_EVENT(mr_dereg, TP_PROTO( const struct ib_mr *mr From patchwork Mon Jan 16 13:05:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13103059 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 98C13C678DE for ; Mon, 16 Jan 2023 13:07:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231439AbjAPNHD (ORCPT ); Mon, 16 Jan 2023 08:07:03 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56848 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231478AbjAPNGk (ORCPT ); Mon, 16 Jan 2023 08:06:40 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8CEF146AD; Mon, 16 Jan 2023 05:06:31 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 298E360F8C; Mon, 16 Jan 2023 13:06:31 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C5839C433EF; Mon, 16 Jan 2023 13:06:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1673874390; bh=IB+veTmklBKw7PiVo8v/NfpEJifNCfbu55nd+mTVPUg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Zq1PeBxWasNH9Dc8TC5hkRkJlGnfrUF4cWKlF4/fG9vW4QUdGsdaxsghZtSzGekgL Ql33dt1b+c8IiQ26vu0nGPhqhXCrfk9adXxFZXDxdtwDHR4x6iJ3SaSyZYBepeCwBZ Kc1h7HaqLwA7JpvpcHUpLHDiYeVw/LVwcm3Qkh0MIpAuak9cURPQnFRsQWsR/l+0Yk wUKGvViiPRzzYkrTDFpzf2TnxIG8excmNjlptAdTQM7ps0xm/RB3itb2zPmaoGw6tz ZlhHeHiLEOPWtUTi0h9Lawmhs2MdwmLgHHbEwdPoDN5CS39q3fayP03G9NKAQ6Vrgl DHhkfMNo2J7fQ== From: Leon Romanovsky To: Jason Gunthorpe Cc: Israel Rukshin , Bryan Tan , Christoph Hellwig , Eric Dumazet , Jakub Kicinski , Jens Axboe , Keith Busch , linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu , Max Gurtovoy , netdev@vger.kernel.org, Paolo Abeni , Saeed Mahameed , Sagi Grimberg , Selvin Xavier , Steven Rostedt , Vishnu Dasa , Yishai Hadas Subject: [PATCH rdma-next 07/13] RDMA/core: Add support for creating crypto enabled QPs Date: Mon, 16 Jan 2023 15:05:54 +0200 Message-Id: 
<7a772388d517a28052fa5f0b8ea507cb3fe471fe.1673873422.git.leon@kernel.org> X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Israel Rukshin Add a list of crypto MRs and introduce a crypto WR type to post on those QPs. Signed-off-by: Israel Rukshin Signed-off-by: Leon Romanovsky --- drivers/infiniband/core/verbs.c | 3 +++ include/rdma/ib_verbs.h | 12 +++++++++++- 2 files changed, 14 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c index 61473fee4b54..01aefff6760e 100644 --- a/drivers/infiniband/core/verbs.c +++ b/drivers/infiniband/core/verbs.c @@ -1223,6 +1223,7 @@ static struct ib_qp *create_qp(struct ib_device *dev, struct ib_pd *pd, spin_lock_init(&qp->mr_lock); INIT_LIST_HEAD(&qp->rdma_mrs); INIT_LIST_HEAD(&qp->sig_mrs); + INIT_LIST_HEAD(&qp->crypto_mrs); qp->send_cq = attr->send_cq; qp->recv_cq = attr->recv_cq; @@ -1363,6 +1364,8 @@ struct ib_qp *ib_create_qp_kernel(struct ib_pd *pd, device->attrs.max_sge_rd); if (qp_init_attr->create_flags & IB_QP_CREATE_INTEGRITY_EN) qp->integrity_en = true; + if (qp_init_attr->create_flags & IB_QP_CREATE_CRYPTO_EN) + qp->crypto_en = true; return qp; diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 7507661c78d0..1770cd30c0f0 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -1175,6 +1175,8 @@ enum ib_qp_create_flags { IB_QP_CREATE_INTEGRITY_EN = 1ULL << 34, /* Create an accelerated UD QP */ IB_QP_CREATE_NETDEV_USE = 1ULL << 35, + /* The created QP can carry out cryptographic handover operations */ + IB_QP_CREATE_CRYPTO_EN = 1ULL << 36, }; /* @@ -1352,6 +1354,12 @@ enum ib_wr_opcode { /* These are kernel only and can not be issued by userspace */ IB_WR_REG_MR = 0x20, IB_WR_REG_MR_INTEGRITY, + /* + * It is used to assign crypto properties to a MKey. Use the MKey in + * any RDMA transaction (SEND/RECV/READ/WRITE) to encrypt/decrypt data + * on-the-fly. + */ + IB_WR_REG_MR_CRYPTO, /* reserve values for low level drivers' internal use. * These values will not be used at all in the ib core layer. 
@@ -1800,6 +1808,7 @@ struct ib_qp { int mrs_used; struct list_head rdma_mrs; struct list_head sig_mrs; + struct list_head crypto_mrs; struct ib_srq *srq; struct ib_xrcd *xrcd; /* XRC TGT QPs only */ struct list_head xrcd_list; @@ -1822,7 +1831,8 @@ struct ib_qp { struct ib_qp_security *qp_sec; u32 port; - bool integrity_en; + u8 integrity_en:1; + u8 crypto_en:1; /* * Implementation details of the RDMA core, don't use in drivers: */ From patchwork Mon Jan 16 13:05:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13103064 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AC47BC678D7 for ; Mon, 16 Jan 2023 13:07:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230166AbjAPNHM (ORCPT ); Mon, 16 Jan 2023 08:07:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55576 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229817AbjAPNGx (ORCPT ); Mon, 16 Jan 2023 08:06:53 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5A2803A90; Mon, 16 Jan 2023 05:06:52 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id E4F1060F97; Mon, 16 Jan 2023 13:06:51 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 88B85C433D2; Mon, 16 Jan 2023 13:06:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1673874411; bh=yXankd3ADXRZerTodbpyETr0H4Qj01JdsQgtwpw0Dpo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fgIjibUjt1g+aWE/ux7w8lldiqti9MPTOYrg664GTgeLLIko4iPlQIWTchtK4PPS2 E/Nj7bdo+U8PXiZB22s69BYxYSo2jkj8gYzgB/2jWsva2ntMUuI2Aq3657DptC5dE5 4Jr0jMc+OseLWn678RNGP/CbyWwuU5rWZ4YS+wtBZ9xex7dzm7e7WMNr0g149oyV5g bV1mGiJJ9b8fsOyoNu+nM/BzhRhn01fxJYL0KHLx7WVH9RdBd24jGTRwxpOzoj8iDi A66sE/zByovan3DalETwgdA4lfwZfOcUxm25T0qqmVAvszBo4pnTSvZLqnjof6c1O+ WQcxe4oVbyRIg== From: Leon Romanovsky To: Jason Gunthorpe Cc: Israel Rukshin , Bryan Tan , Christoph Hellwig , Eric Dumazet , Jakub Kicinski , Jens Axboe , Keith Busch , linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu , Max Gurtovoy , netdev@vger.kernel.org, Paolo Abeni , Saeed Mahameed , Sagi Grimberg , Selvin Xavier , Steven Rostedt , Vishnu Dasa , Yishai Hadas Subject: [PATCH rdma-next 08/13] RDMA/mlx5: Add cryptographic device capabilities Date: Mon, 16 Jan 2023 15:05:55 +0200 Message-Id: <39ba2f3cd1786e47f2541f4a7be59cc5af4b03c7.1673873422.git.leon@kernel.org> X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Israel Rukshin The capabilities provide information on general cryptographic support, maximum number of DEKs and status for RDMA devices. Also, they include the supported cryptographic engines and their import method (wrapped or plaintext). Wrapped crypto operational flag indicates the import method mode that can be used. 
For now, add only AES-XTS cryptographic support. Signed-off-by: Israel Rukshin Signed-off-by: Leon Romanovsky --- drivers/infiniband/hw/mlx5/Makefile | 1 + drivers/infiniband/hw/mlx5/crypto.c | 31 ++++++++++++++++++++++++++++ drivers/infiniband/hw/mlx5/crypto.h | 11 ++++++++++ drivers/infiniband/hw/mlx5/main.c | 5 +++++ drivers/infiniband/hw/mlx5/mlx5_ib.h | 2 ++ 5 files changed, 50 insertions(+) create mode 100644 drivers/infiniband/hw/mlx5/crypto.c create mode 100644 drivers/infiniband/hw/mlx5/crypto.h diff --git a/drivers/infiniband/hw/mlx5/Makefile b/drivers/infiniband/hw/mlx5/Makefile index 612ee8190a2d..d6ae1a08b5b2 100644 --- a/drivers/infiniband/hw/mlx5/Makefile +++ b/drivers/infiniband/hw/mlx5/Makefile @@ -6,6 +6,7 @@ mlx5_ib-y := ah.o \ cong.o \ counters.o \ cq.o \ + crypto.o \ dm.o \ doorbell.o \ gsi.o \ diff --git a/drivers/infiniband/hw/mlx5/crypto.c b/drivers/infiniband/hw/mlx5/crypto.c new file mode 100644 index 000000000000..6fad9084877e --- /dev/null +++ b/drivers/infiniband/hw/mlx5/crypto.c @@ -0,0 +1,31 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. */ + +#include "crypto.h" + +void mlx5r_crypto_caps_init(struct mlx5_ib_dev *dev) +{ + struct ib_crypto_caps *caps = &dev->crypto_caps; + struct mlx5_core_dev *mdev = dev->mdev; + + if (!(MLX5_CAP_GEN_64(dev->mdev, general_obj_types) & + MLX5_HCA_CAP_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY)) + return; + + if (!MLX5_CAP_GEN(mdev, aes_xts_multi_block_le_tweak) && + !MLX5_CAP_GEN(mdev, aes_xts_multi_block_be_tweak)) + return; + + if (MLX5_CAP_CRYPTO(mdev, wrapped_import_method) & + MLX5_CRYPTO_WRAPPED_IMPORT_METHOD_CAP_AES_XTS) + return; + + if (MLX5_CAP_CRYPTO(mdev, failed_selftests)) { + mlx5_ib_warn(dev, "crypto self-tests failed with error 0x%x\n", + MLX5_CAP_CRYPTO(mdev, failed_selftests)); + return; + } + + caps->crypto_engines |= IB_CRYPTO_ENGINES_CAP_AES_XTS; + caps->max_num_deks = 1 << MLX5_CAP_CRYPTO(mdev, log_max_num_deks); +} diff --git a/drivers/infiniband/hw/mlx5/crypto.h b/drivers/infiniband/hw/mlx5/crypto.h new file mode 100644 index 000000000000..8686ac6fb0b0 --- /dev/null +++ b/drivers/infiniband/hw/mlx5/crypto.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. 
*/ + +#ifndef _MLX5_IB_CRYPTO_H +#define _MLX5_IB_CRYPTO_H + +#include "mlx5_ib.h" + +void mlx5r_crypto_caps_init(struct mlx5_ib_dev *dev); + +#endif /* _MLX5_IB_CRYPTO_H */ diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index fb0d97bd4074..10f12e9a4dc3 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -39,6 +39,7 @@ #include "srq.h" #include "qp.h" #include "wr.h" +#include "crypto.h" #include "restrack.h" #include "counters.h" #include "umr.h" @@ -989,6 +990,7 @@ static int mlx5_ib_query_device(struct ib_device *ibdev, props->max_ah = INT_MAX; props->hca_core_clock = MLX5_CAP_GEN(mdev, device_frequency_khz); props->timestamp_mask = 0x7FFFFFFFFFFFFFFFULL; + props->crypto_caps = dev->crypto_caps; if (IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING)) { if (dev->odp_caps.general_caps & IB_ODP_SUPPORT) @@ -3826,6 +3828,9 @@ static int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev) if (MLX5_CAP_GEN(mdev, xrc)) ib_set_device_ops(&dev->ib_dev, &mlx5_ib_dev_xrc_ops); + if (MLX5_CAP_GEN(mdev, crypto)) + mlx5r_crypto_caps_init(dev); + if (MLX5_CAP_DEV_MEM(mdev, memic) || MLX5_CAP_GEN_64(dev->mdev, general_obj_types) & MLX5_GENERAL_OBJ_TYPES_CAP_SW_ICM) diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 295502692da2..8f6850539542 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -1100,6 +1100,8 @@ struct mlx5_ib_dev { struct mlx5_ib_delay_drop delay_drop; const struct mlx5_ib_profile *profile; + struct ib_crypto_caps crypto_caps; + struct mlx5_ib_lb_state lb; u8 umr_fence; struct list_head ib_dev_list; From patchwork Mon Jan 16 13:05:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13103068 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80BB9C46467 for ; Mon, 16 Jan 2023 13:07:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231488AbjAPNHZ (ORCPT ); Mon, 16 Jan 2023 08:07:25 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55260 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231393AbjAPNGn (ORCPT ); Mon, 16 Jan 2023 08:06:43 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 372237AA8; Mon, 16 Jan 2023 05:06:40 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id C9FCD60F97; Mon, 16 Jan 2023 13:06:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 73C5AC433D2; Mon, 16 Jan 2023 13:06:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1673874399; bh=e1ONfQusN53JOuI2VALrT8a03P5hTd6F5/2Z57t4AgQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NjeEAf6K1ZTvA5sAmlWSOfHcOZLUrT+sap+XphlaKKggOYN+QPNxrmcUQFaf8Dl37 jll6CyVdwpaeyFDzoSXUsGerkF+82sSLOpgHsFSW2WtWgtwJMLcHaAiA6RTD5GBCbx 86cQry3okdTrEYP8hjBbdqF6lPFBVPXghRJi7RFvZ6PTTQYXvTBN4IltOejDi8eJk6 Nta/DpHh7p8t9iM8YgyfWjXykc+Ukxya3aBzhuKzE4xgJOrt6VAaL5D3fHIxi4KxrX 
MAeRmaIk3WdH9vIkaRjmhLj0Q91G5hk3gQwxrf7Tj4CBTH0laj42RSCxQqTJ/xNxCR UDxsDd3CQeeFQ== From: Leon Romanovsky To: Jason Gunthorpe Cc: Israel Rukshin , Bryan Tan , Christoph Hellwig , Eric Dumazet , Jakub Kicinski , Jens Axboe , Keith Busch , linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu , Max Gurtovoy , netdev@vger.kernel.org, Paolo Abeni , Saeed Mahameed , Sagi Grimberg , Selvin Xavier , Steven Rostedt , Vishnu Dasa , Yishai Hadas Subject: [PATCH rdma-next 09/13] RDMA/mlx5: Add DEK management API Date: Mon, 16 Jan 2023 15:05:56 +0200 Message-Id: <447a02ca42116a422d5727e725bf90551dd0c8ba.1673873422.git.leon@kernel.org> X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Israel Rukshin Add an API to manage Data Encryption Keys (DEKs). The API allows creating and destroying a DEK. DEKs allow encryption and decryption of transmitted data and are used in MKeys for crypto operations. Signed-off-by: Israel Rukshin Signed-off-by: Leon Romanovsky --- drivers/infiniband/hw/mlx5/crypto.c | 83 +++++++++++++++++++++++++++++ drivers/infiniband/hw/mlx5/crypto.h | 8 +++ 2 files changed, 91 insertions(+) diff --git a/drivers/infiniband/hw/mlx5/crypto.c b/drivers/infiniband/hw/mlx5/crypto.c index 6fad9084877e..36e978c0fb85 100644 --- a/drivers/infiniband/hw/mlx5/crypto.c +++ b/drivers/infiniband/hw/mlx5/crypto.c @@ -3,6 +3,87 @@ #include "crypto.h" +static struct ib_dek *mlx5r_create_dek(struct ib_pd *pd, + struct ib_dek_attr *attr) +{ + u32 in[MLX5_ST_SZ_DW(create_encryption_key_in)] = {}; + u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; + struct mlx5_ib_dev *dev = to_mdev(pd->device); + u32 key_blob_size = attr->key_blob_size; + void *ptr, *key_addr; + struct ib_dek *dek; + u8 key_size; + int err; + + if (attr->key_type != IB_CRYPTO_KEY_TYPE_AES_XTS) + return ERR_PTR(-EOPNOTSUPP); + + switch (key_blob_size) { + case MLX5_IB_CRYPTO_AES_128_XTS_KEY_SIZE: + key_size = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_128; + break; + case MLX5_IB_CRYPTO_AES_256_XTS_KEY_SIZE: + key_size = MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_KEY_SIZE_256; + break; + default: + return ERR_PTR(-EOPNOTSUPP); + } + + dek = kzalloc(sizeof(*dek), GFP_KERNEL); + if (!dek) + return ERR_PTR(-ENOMEM); + + ptr = MLX5_ADDR_OF(create_encryption_key_in, in, + general_obj_in_cmd_hdr); + MLX5_SET(general_obj_in_cmd_hdr, ptr, opcode, + MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, ptr, obj_type, + MLX5_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY); + ptr = MLX5_ADDR_OF(create_encryption_key_in, in, encryption_key_object); + MLX5_SET(encryption_key_obj, ptr, key_size, key_size); + MLX5_SET(encryption_key_obj, ptr, key_type, + MLX5_GENERAL_OBJECT_TYPE_ENCRYPTION_KEY_TYPE_AES_XTS); + MLX5_SET(encryption_key_obj, ptr, pd, to_mpd(pd)->pdn); + key_addr = MLX5_ADDR_OF(encryption_key_obj, ptr, key); + memcpy(key_addr, attr->key_blob, key_blob_size); + + err = mlx5_cmd_exec(dev->mdev, in, sizeof(in), out, sizeof(out)); + /* avoid leaking key on the stack */ + memzero_explicit(in, sizeof(in)); + if (err) + goto err_free; + + dek->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + dek->pd = pd; + + return dek; + +err_free: + kfree(dek); + return ERR_PTR(err); +} + +static void mlx5r_destroy_dek(struct ib_dek *dek) +{ + struct mlx5_ib_dev *dev = to_mdev(dek->pd->device); + u32 in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {}; + u32 
out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; + + MLX5_SET(general_obj_in_cmd_hdr, in, opcode, + MLX5_CMD_OP_DESTROY_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, + MLX5_GENERAL_OBJECT_TYPES_ENCRYPTION_KEY); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, dek->id); + + mlx5_cmd_exec(dev->mdev, in, sizeof(in), out, sizeof(out)); + kfree(dek); +} + +static const struct ib_device_ops mlx5r_dev_crypto_ops = { + .create_dek = mlx5r_create_dek, + .destroy_dek = mlx5r_destroy_dek, +}; + void mlx5r_crypto_caps_init(struct mlx5_ib_dev *dev) { struct ib_crypto_caps *caps = &dev->crypto_caps; @@ -28,4 +109,6 @@ void mlx5r_crypto_caps_init(struct mlx5_ib_dev *dev) caps->crypto_engines |= IB_CRYPTO_ENGINES_CAP_AES_XTS; caps->max_num_deks = 1 << MLX5_CAP_CRYPTO(mdev, log_max_num_deks); + + ib_set_device_ops(&dev->ib_dev, &mlx5r_dev_crypto_ops); } diff --git a/drivers/infiniband/hw/mlx5/crypto.h b/drivers/infiniband/hw/mlx5/crypto.h index 8686ac6fb0b0..b132b780030f 100644 --- a/drivers/infiniband/hw/mlx5/crypto.h +++ b/drivers/infiniband/hw/mlx5/crypto.h @@ -6,6 +6,14 @@ #include "mlx5_ib.h" +/* + * The standard AES-XTS key blob composed of two keys. + * AES-128-XTS key blob composed of two 128-bit keys, which is 32 bytes and + * AES-256-XTS key blob composed of two 256-bit keys, which is 64 bytes. + */ +#define MLX5_IB_CRYPTO_AES_128_XTS_KEY_SIZE 32 +#define MLX5_IB_CRYPTO_AES_256_XTS_KEY_SIZE 64 + void mlx5r_crypto_caps_init(struct mlx5_ib_dev *dev); #endif /* _MLX5_IB_CRYPTO_H */ From patchwork Mon Jan 16 13:05:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13103065 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E419AC678DA for ; Mon, 16 Jan 2023 13:07:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231544AbjAPNHN (ORCPT ); Mon, 16 Jan 2023 08:07:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55416 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229700AbjAPNGs (ORCPT ); Mon, 16 Jan 2023 08:06:48 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F1C0A468D; Mon, 16 Jan 2023 05:06:45 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 81FD9B80E37; Mon, 16 Jan 2023 13:06:44 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 63900C433EF; Mon, 16 Jan 2023 13:06:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1673874403; bh=Wfrh+cFzRa6bL8VsIjXTHMZp54Ti5ONFcFNaROn2N0s=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BE6tEMsI+O5FK8qa8wAtpmy+ZOkSqtR5gHiDdYaBYb0ant+YSdskbqREQ1w4EPXxG EIy7m1WWsgybCSyttOwAHsYEn1pcFeUYPTmV6vKmbWU5XmfbcXlRXohT3y3fx02ZNg S8etQ60IlT5YaDoKVuwr0g1cy/N52o0bRNI5ut1FPTydIwGP15BtgV4rK9dsyUlAm9 qMdZQlN1S2k9WiLELz5sNoHnXjsjxp/OvZurZhneSDNe8SiMUI8r7oGb0AP3S6CTnw J8HkCMJ+BJrYJhhhU2os4McmuIj4uoe8HhAUpyQXfD73IaruVVIKzXqybteHruAcEo ETpmtsfRT8Gfg== From: Leon Romanovsky To: Jason Gunthorpe Cc: Israel Rukshin , Bryan Tan , Christoph Hellwig , 
Eric Dumazet , Jakub Kicinski , Jens Axboe , Keith Busch , linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu , Max Gurtovoy , netdev@vger.kernel.org, Paolo Abeni , Saeed Mahameed , Sagi Grimberg , Selvin Xavier , Steven Rostedt , Vishnu Dasa , Yishai Hadas Subject: [PATCH rdma-next 10/13] RDMA/mlx5: Add AES-XTS crypto support Date: Mon, 16 Jan 2023 15:05:57 +0200 Message-Id: <0a7ea7b225b3a62be84de852b1fc126508d07ee6.1673873422.git.leon@kernel.org> X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Israel Rukshin Add crypto attributes for MKey and QP. With this, the device can encrypt/decrypt data when transmitting data from memory to network and when receiving data from network to memory. Signed-off-by: Israel Rukshin Signed-off-by: Leon Romanovsky --- drivers/infiniband/hw/mlx5/crypto.c | 1 + drivers/infiniband/hw/mlx5/crypto.h | 27 +++++ drivers/infiniband/hw/mlx5/mlx5_ib.h | 1 + drivers/infiniband/hw/mlx5/mr.c | 33 ++++++ drivers/infiniband/hw/mlx5/qp.c | 6 + drivers/infiniband/hw/mlx5/wr.c | 164 +++++++++++++++++++++++++-- 6 files changed, 222 insertions(+), 10 deletions(-) diff --git a/drivers/infiniband/hw/mlx5/crypto.c b/drivers/infiniband/hw/mlx5/crypto.c index 36e978c0fb85..28bc661bd5bc 100644 --- a/drivers/infiniband/hw/mlx5/crypto.c +++ b/drivers/infiniband/hw/mlx5/crypto.c @@ -82,6 +82,7 @@ static void mlx5r_destroy_dek(struct ib_dek *dek) static const struct ib_device_ops mlx5r_dev_crypto_ops = { .create_dek = mlx5r_create_dek, .destroy_dek = mlx5r_destroy_dek, + .alloc_mr_crypto = mlx5r_alloc_mr_crypto, }; void mlx5r_crypto_caps_init(struct mlx5_ib_dev *dev) diff --git a/drivers/infiniband/hw/mlx5/crypto.h b/drivers/infiniband/hw/mlx5/crypto.h index b132b780030f..33f7e3b8bcce 100644 --- a/drivers/infiniband/hw/mlx5/crypto.h +++ b/drivers/infiniband/hw/mlx5/crypto.h @@ -6,6 +6,33 @@ #include "mlx5_ib.h" +enum { + MLX5_CRYPTO_ENCRYPTED_WIRE = 0x0, + MLX5_CRYPTO_ENCRYPTED_MEM = 0x1, +}; + +enum { + MLX5_CRYPTO_AES_XTS = 0x0, +}; + +struct mlx5_bsf_crypto { + u8 bsf_size_type; + u8 encryption_order; + u8 rsvd0; + u8 encryption_standard; + __be32 raw_data_size; + u8 block_size_p; + u8 rsvd1[7]; + union { + __be32 be_xts_init_tweak[4]; + __le32 le_xts_init_tweak[4]; + }; + __be32 rsvd_dek_pointer; + u8 rsvd2[4]; + u8 keytag[8]; + u8 rsvd3[16]; +}; + /* * The standard AES-XTS key blob composed of two keys. 
* AES-128-XTS key blob composed of two 128-bit keys, which is 32 bytes and diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h index 8f6850539542..6e5b1a65f91b 100644 --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h @@ -1291,6 +1291,7 @@ struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, struct ib_mr *mlx5_ib_alloc_mr_integrity(struct ib_pd *pd, u32 max_num_sg, u32 max_num_meta_sg); +struct ib_mr *mlx5r_alloc_mr_crypto(struct ib_pd *pd, u32 max_num_sg); int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, unsigned int *sg_offset); int mlx5_ib_map_mr_sg_pi(struct ib_mr *ibmr, struct scatterlist *data_sg, diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c index e3b0f41aef0d..8e2f32ae4196 100644 --- a/drivers/infiniband/hw/mlx5/mr.c +++ b/drivers/infiniband/hw/mlx5/mr.c @@ -40,6 +40,7 @@ #include #include #include +#include "crypto.h" #include "dm.h" #include "mlx5_ib.h" #include "umr.h" @@ -1553,6 +1554,8 @@ mlx5_alloc_priv_descs(struct ib_device *device, int add_size; int ret; + if (mr->ibmr.type == IB_MR_TYPE_CRYPTO) + size += sizeof(struct mlx5_bsf_crypto); add_size = max_t(int, MLX5_UMR_ALIGN - ARCH_KMALLOC_MINALIGN, 0); mr->descs_alloc = kzalloc(size + add_size, GFP_KERNEL); @@ -1582,6 +1585,8 @@ mlx5_free_priv_descs(struct mlx5_ib_mr *mr) int size = mr->max_descs * mr->desc_size; struct mlx5_ib_dev *dev = to_mdev(device); + if (mr->ibmr.type == IB_MR_TYPE_CRYPTO) + size += sizeof(struct mlx5_bsf_crypto); dma_unmap_single(&dev->mdev->pdev->dev, mr->desc_map, size, DMA_TO_DEVICE); kfree(mr->descs_alloc); @@ -1862,6 +1867,21 @@ static int mlx5_alloc_integrity_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, return err; } +static int mlx5_alloc_crypto_descs(struct ib_pd *pd, struct mlx5_ib_mr *mr, + int ndescs, u32 *in, int inlen) +{ + void *mkc; + + mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry); + MLX5_SET(mkc, mkc, crypto_en, 1); + /* Set bsf descriptors for mkey */ + MLX5_SET(mkc, mkc, bsf_en, 1); + MLX5_SET(mkc, mkc, bsf_octword_size, MLX5_MKEY_BSF_OCTO_SIZE); + + return _mlx5_alloc_mkey_descs(pd, mr, ndescs, sizeof(struct mlx5_klm), + 0, MLX5_MKC_ACCESS_MODE_KLMS, in, inlen); +} + static struct ib_mr *__mlx5_ib_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type, u32 max_num_sg, u32 max_num_meta_sg) @@ -1897,6 +1917,9 @@ static struct ib_mr *__mlx5_ib_alloc_mr(struct ib_pd *pd, err = mlx5_alloc_integrity_descs(pd, mr, max_num_sg, max_num_meta_sg, in, inlen); break; + case IB_MR_TYPE_CRYPTO: + err = mlx5_alloc_crypto_descs(pd, mr, ndescs, in, inlen); + break; default: mlx5_ib_warn(dev, "Invalid mr type %d\n", mr_type); err = -EINVAL; @@ -1929,6 +1952,11 @@ struct ib_mr *mlx5_ib_alloc_mr_integrity(struct ib_pd *pd, max_num_meta_sg); } +struct ib_mr *mlx5r_alloc_mr_crypto(struct ib_pd *pd, u32 max_num_sg) +{ + return __mlx5_ib_alloc_mr(pd, IB_MR_TYPE_CRYPTO, max_num_sg, 0); +} + int mlx5_ib_alloc_mw(struct ib_mw *ibmw, struct ib_udata *udata) { struct mlx5_ib_dev *dev = to_mdev(ibmw->device); @@ -2368,6 +2396,11 @@ int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents, mr->desc_size * mr->max_descs, DMA_TO_DEVICE); + if (ibmr->type == IB_MR_TYPE_CRYPTO) { + /* This is zero-based memory region */ + ibmr->iova = 0; + } + return n; } diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c index f04adc18e63b..8eb777d422e4 100644 --- a/drivers/infiniband/hw/mlx5/qp.c +++ 
b/drivers/infiniband/hw/mlx5/qp.c @@ -40,6 +40,7 @@ #include "ib_rep.h" #include "counters.h" #include "cmd.h" +#include "crypto.h" #include "umr.h" #include "qp.h" #include "wr.h" @@ -554,6 +555,8 @@ static int calc_send_wqe(struct ib_qp_init_attr *attr) } size += attr->cap.max_send_sge * sizeof(struct mlx5_wqe_data_seg); + if (attr->create_flags & IB_QP_CREATE_CRYPTO_EN) + size += sizeof(struct mlx5_bsf_crypto); if (attr->create_flags & IB_QP_CREATE_INTEGRITY_EN && ALIGN(max_t(int, inl_size, size), MLX5_SEND_WQE_BB) < MLX5_SIG_WQE_SIZE) return MLX5_SIG_WQE_SIZE; @@ -3024,6 +3027,9 @@ static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, process_create_flag(dev, &create_flags, MLX5_IB_QP_CREATE_SQPN_QP1, true, qp); + process_create_flag(dev, &create_flags, IB_QP_CREATE_CRYPTO_EN, + !!dev->crypto_caps.crypto_engines, qp); + if (create_flags) { mlx5_ib_dbg(dev, "Create QP has unsupported flags 0x%llX\n", create_flags); diff --git a/drivers/infiniband/hw/mlx5/wr.c b/drivers/infiniband/hw/mlx5/wr.c index df1d1b0a3ef7..e047b8aabceb 100644 --- a/drivers/infiniband/hw/mlx5/wr.c +++ b/drivers/infiniband/hw/mlx5/wr.c @@ -6,6 +6,7 @@ #include #include #include +#include "crypto.h" #include "wr.h" #include "umr.h" @@ -21,6 +22,7 @@ static const u32 mlx5_ib_opcode[] = { [IB_WR_SEND_WITH_INV] = MLX5_OPCODE_SEND_INVAL, [IB_WR_LOCAL_INV] = MLX5_OPCODE_UMR, [IB_WR_REG_MR] = MLX5_OPCODE_UMR, + [IB_WR_REG_MR_CRYPTO] = MLX5_OPCODE_UMR, [IB_WR_MASKED_ATOMIC_CMP_AND_SWP] = MLX5_OPCODE_ATOMIC_MASKED_CS, [IB_WR_MASKED_ATOMIC_FETCH_AND_ADD] = MLX5_OPCODE_ATOMIC_MASKED_FA, [MLX5_IB_WR_UMR] = MLX5_OPCODE_UMR, @@ -115,7 +117,7 @@ static void set_data_ptr_seg(struct mlx5_wqe_data_seg *dseg, struct ib_sge *sg) dseg->addr = cpu_to_be64(sg->addr); } -static __be64 frwr_mkey_mask(bool atomic) +static __be64 frwr_mkey_mask(bool atomic, bool crypto) { u64 result; @@ -134,6 +136,9 @@ static __be64 frwr_mkey_mask(bool atomic) if (atomic) result |= MLX5_MKEY_MASK_A; + if (crypto) + result |= MLX5_MKEY_MASK_BSF_EN; + return cpu_to_be64(result); } @@ -159,7 +164,8 @@ static __be64 sig_mkey_mask(void) } static void set_reg_umr_seg(struct mlx5_wqe_umr_ctrl_seg *umr, - struct mlx5_ib_mr *mr, u8 flags, bool atomic) + struct mlx5_ib_mr *mr, u8 flags, bool atomic, + bool crypto) { int size = (mr->mmkey.ndescs + mr->meta_ndescs) * mr->desc_size; @@ -167,7 +173,9 @@ static void set_reg_umr_seg(struct mlx5_wqe_umr_ctrl_seg *umr, umr->flags = flags; umr->xlt_octowords = cpu_to_be16(mlx5r_umr_get_xlt_octo(size)); - umr->mkey_mask = frwr_mkey_mask(atomic); + umr->mkey_mask = frwr_mkey_mask(atomic, crypto); + if (crypto) + umr->bsf_octowords = cpu_to_be16(MLX5_MKEY_BSF_OCTO_SIZE); } static void set_linv_umr_seg(struct mlx5_wqe_umr_ctrl_seg *umr) @@ -188,7 +196,7 @@ static u8 get_umr_flags(int acc) static void set_reg_mkey_seg(struct mlx5_mkey_seg *seg, struct mlx5_ib_mr *mr, - u32 key, int access) + u32 key, int access, bool crypto) { int ndescs = ALIGN(mr->mmkey.ndescs + mr->meta_ndescs, 8) >> 1; @@ -203,6 +211,8 @@ static void set_reg_mkey_seg(struct mlx5_mkey_seg *seg, seg->flags = get_umr_flags(access) | mr->access_mode; seg->qpn_mkey7_0 = cpu_to_be32((key & 0xff) | 0xffffff00); seg->flags_pd = cpu_to_be32(MLX5_MKEY_REMOTE_INVAL); + if (crypto) + seg->flags_pd |= cpu_to_be32(MLX5_MKEY_BSF_EN); seg->start_addr = cpu_to_be64(mr->ibmr.iova); seg->len = cpu_to_be64(mr->ibmr.length); seg->xlt_oct_size = cpu_to_be32(ndescs); @@ -353,6 +363,66 @@ static void mlx5_fill_inl_bsf(struct ib_sig_domain *domain, 
cpu_to_be16(domain->sig.dif.apptag_check_mask); } +static void mlx5_set_xts_tweak(struct ib_crypto_attrs *crypto_attrs, + struct mlx5_bsf_crypto *bsf, bool is_be) +{ + int tweak_array_size = sizeof(bsf->be_xts_init_tweak) / sizeof(u32); + int i, j; + + /* The endianness of the initial tweak in the kernel is LE */ + if (is_be) { + for (i = 0; i < tweak_array_size; i++) { + j = tweak_array_size - i - 1; + bsf->be_xts_init_tweak[i] = + cpu_to_be32(crypto_attrs->xts_init_tweak[j]); + } + } else { + for (i = 0; i < tweak_array_size; i++) + bsf->le_xts_init_tweak[i] = + cpu_to_le32(crypto_attrs->xts_init_tweak[i]); + } +} + +static int mlx5_set_bsf_crypto(struct ib_mr *ibmr, struct mlx5_bsf_crypto *bsf) +{ + struct mlx5_ib_dev *dev = to_mdev(ibmr->device); + struct mlx5_core_dev *mdev = dev->mdev; + struct ib_crypto_attrs *crypto_attrs = ibmr->crypto_attrs; + u64 data_size = ibmr->length; + + if (crypto_attrs->encrypt_standard != IB_CRYPTO_AES_XTS) + return -EINVAL; + + memset(bsf, 0, sizeof(*bsf)); + + /* Crypto only */ + bsf->bsf_size_type = 1 << 7; + /* Crypto type */ + bsf->bsf_size_type |= 1; + + switch (crypto_attrs->encrypt_domain) { + case IB_CRYPTO_ENCRYPTED_WIRE_DOMAIN: + bsf->encryption_order = MLX5_CRYPTO_ENCRYPTED_WIRE; + break; + case IB_CRYPTO_ENCRYPTED_MEM_DOMAIN: + bsf->encryption_order = MLX5_CRYPTO_ENCRYPTED_MEM; + break; + default: + WARN_ONCE(1, "Bad encryption domain (%d) is given.\n", + crypto_attrs->encrypt_domain); + return -EINVAL; + } + + bsf->encryption_standard = MLX5_CRYPTO_AES_XTS; + bsf->raw_data_size = cpu_to_be32(data_size); + bsf->block_size_p = bs_selector(crypto_attrs->data_unit_size); + mlx5_set_xts_tweak(crypto_attrs, bsf, + MLX5_CAP_GEN(mdev, aes_xts_multi_block_be_tweak)); + bsf->rsvd_dek_pointer |= cpu_to_be32(crypto_attrs->dek & 0xffffff); + + return 0; +} + static int mlx5_set_bsf(struct ib_mr *sig_mr, struct ib_sig_attrs *sig_attrs, struct mlx5_bsf *bsf, u32 data_size) @@ -632,10 +702,53 @@ static int set_psv_wr(struct ib_sig_domain *domain, return 0; } +static int set_crypto_data_segment(struct mlx5_ib_qp *qp, struct mlx5_ib_mr *mr, + void **seg, int *size, void **cur_edge) +{ + int mr_list_size = (mr->mmkey.ndescs + mr->meta_ndescs) * mr->desc_size; + int tmp_size; + + mlx5r_memcpy_send_wqe(&qp->sq, cur_edge, seg, size, mr->descs, + mr_list_size); + tmp_size = *size; + *size = ALIGN(*size, MLX5_SEND_WQE_BB >> 4); + *seg += (*size - tmp_size) * 16; + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); + + if (mlx5_set_bsf_crypto(&mr->ibmr, *seg)) + return -EINVAL; + + *seg += sizeof(struct mlx5_bsf_crypto); + *size += sizeof(struct mlx5_bsf_crypto) / 16; + handle_post_send_edge(&qp->sq, seg, *size, cur_edge); + + return 0; +} + +static int set_dma_bsf_crypto(struct mlx5_ib_mr *mr) +{ + int mr_list_size = (mr->mmkey.ndescs + mr->meta_ndescs) * mr->desc_size; + int aligned_size = ALIGN(mr_list_size, MLX5_SEND_WQE_BB); + + ib_dma_sync_single_for_cpu(mr->ibmr.device, mr->desc_map + aligned_size, + sizeof(struct mlx5_bsf_crypto), + DMA_TO_DEVICE); + + if (mlx5_set_bsf_crypto(&mr->ibmr, mr->descs + aligned_size)) + return -EINVAL; + + ib_dma_sync_single_for_device(mr->ibmr.device, + mr->desc_map + aligned_size, + sizeof(struct mlx5_bsf_crypto), + DMA_TO_DEVICE); + + return 0; +} + static int set_reg_wr(struct mlx5_ib_qp *qp, const struct ib_reg_wr *wr, void **seg, int *size, void **cur_edge, - bool check_not_free) + bool check_not_free, bool crypto) { struct mlx5_ib_mr *mr = to_mmr(wr->mr); struct mlx5_ib_pd *pd = to_mpd(qp->ibqp.pd); @@ -667,17 +780,21 
@@ static int set_reg_wr(struct mlx5_ib_qp *qp, if (umr_inline) flags |= MLX5_UMR_INLINE; - set_reg_umr_seg(*seg, mr, flags, atomic); + set_reg_umr_seg(*seg, mr, flags, atomic, crypto); *seg += sizeof(struct mlx5_wqe_umr_ctrl_seg); *size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16; handle_post_send_edge(&qp->sq, seg, *size, cur_edge); - set_reg_mkey_seg(*seg, mr, wr->key, wr->access); + set_reg_mkey_seg(*seg, mr, wr->key, wr->access, crypto); *seg += sizeof(struct mlx5_mkey_seg); *size += sizeof(struct mlx5_mkey_seg) / 16; handle_post_send_edge(&qp->sq, seg, *size, cur_edge); if (umr_inline) { + if (crypto) + return set_crypto_data_segment(qp, mr, seg, size, + cur_edge); + mlx5r_memcpy_send_wqe(&qp->sq, cur_edge, seg, size, mr->descs, mr_list_size); *size = ALIGN(*size, MLX5_SEND_WQE_BB >> 4); @@ -685,6 +802,9 @@ static int set_reg_wr(struct mlx5_ib_qp *qp, set_reg_data_seg(*seg, mr, pd); *seg += sizeof(struct mlx5_wqe_data_seg); *size += (sizeof(struct mlx5_wqe_data_seg) / 16); + + if (crypto) + return set_dma_bsf_crypto(mr); } return 0; } @@ -806,7 +926,7 @@ static int handle_reg_mr(struct mlx5_ib_qp *qp, const struct ib_send_wr *wr, { qp->sq.wr_data[idx] = IB_WR_REG_MR; (*ctrl)->imm = cpu_to_be32(reg_wr(wr)->key); - return set_reg_wr(qp, reg_wr(wr), seg, size, cur_edge, true); + return set_reg_wr(qp, reg_wr(wr), seg, size, cur_edge, true, false); } static int handle_psv(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, @@ -870,7 +990,8 @@ static int handle_reg_mr_integrity(struct mlx5_ib_dev *dev, (*ctrl)->imm = cpu_to_be32(reg_pi_wr.key); /* UMR for data + prot registration */ - err = set_reg_wr(qp, ®_pi_wr, seg, size, cur_edge, false); + err = set_reg_wr(qp, ®_pi_wr, seg, size, cur_edge, false, + false); if (unlikely(err)) goto out; @@ -928,6 +1049,20 @@ static int handle_reg_mr_integrity(struct mlx5_ib_dev *dev, return err; } +static int handle_reg_mr_crypto(struct mlx5_ib_qp *qp, + const struct ib_send_wr *wr, + struct mlx5_wqe_ctrl_seg **ctrl, void **seg, + int *size, void **cur_edge, unsigned int idx) +{ + qp->sq.wr_data[idx] = IB_WR_REG_MR_CRYPTO; + (*ctrl)->imm = cpu_to_be32(reg_wr(wr)->key); + + if (unlikely(!qp->ibqp.crypto_en)) + return -EINVAL; + + return set_reg_wr(qp, reg_wr(wr), seg, size, cur_edge, true, true); +} + static int handle_qpt_rc(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, const struct ib_send_wr *wr, struct mlx5_wqe_ctrl_seg **ctrl, void **seg, int *size, @@ -971,6 +1106,14 @@ static int handle_qpt_rc(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp, *num_sge = 0; break; + case IB_WR_REG_MR_CRYPTO: + err = handle_reg_mr_crypto(qp, wr, ctrl, seg, size, cur_edge, + *idx); + if (unlikely(err)) + goto out; + *num_sge = 0; + break; + default: break; } @@ -1105,7 +1248,8 @@ int mlx5_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr, } if (wr->opcode == IB_WR_REG_MR || - wr->opcode == IB_WR_REG_MR_INTEGRITY) { + wr->opcode == IB_WR_REG_MR_INTEGRITY || + wr->opcode == IB_WR_REG_MR_CRYPTO) { fence = dev->umr_fence; next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL; } else { From patchwork Mon Jan 16 13:05:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13103061 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 063A3C46467 for ; Mon, 16 Jan 2023 13:07:07 +0000 (UTC) Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231474AbjAPNHE (ORCPT ); Mon, 16 Jan 2023 08:07:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55526 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230213AbjAPNGu (ORCPT ); Mon, 16 Jan 2023 08:06:50 -0500 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E3164A5E5; Mon, 16 Jan 2023 05:06:49 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 9DA52B80D31; Mon, 16 Jan 2023 13:06:48 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 84A50C433EF; Mon, 16 Jan 2023 13:06:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1673874407; bh=JlISh7ZpJM1JouHMtly+4IcB63O1MxsyN1Eywkp8Wsg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=aEBoKJSxCjguG2DHqyA7O8qM4GE3x282g/b/CwTfZAS4zWD4B6wASQ+jEB5v/9zR/ ghrEPqNMvev2QZmBhfekV3oRIuWjuTMKqu+Tzfqn8kldJwTA8QYAxUbuyHvhGL80BM QdA4d1/nx3Ly5CVTs3zgz2eyIgWBsTZ5eAcizz+q/laOZkFfW1DiI9wgCVeZrkKahJ +g3SoDr4gJw/IVa0yBpVcsJEhstqA9mYnolCld9y2SO7DzLDOhhyLv3ueEl/plNU8I RLCCMIed/IZ9bzeQdgF5gf/J2ek6Vtx4EP26J7GKfgPLdqlLQWH17T/t1+hT8LCDHf vrmptUx/xttaw== From: Leon Romanovsky To: Jason Gunthorpe Cc: Israel Rukshin , Bryan Tan , Christoph Hellwig , Eric Dumazet , Jakub Kicinski , Jens Axboe , Keith Busch , linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu , Max Gurtovoy , netdev@vger.kernel.org, Paolo Abeni , Saeed Mahameed , Sagi Grimberg , Selvin Xavier , Steven Rostedt , Vishnu Dasa , Yishai Hadas Subject: [PATCH rdma-next 11/13] nvme: Introduce a local variable Date: Mon, 16 Jan 2023 15:05:58 +0200 Message-Id: X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Israel Rukshin The patch doesn't change any logic. 
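A minimal, self-contained sketch (editor's illustration, not part of the patch) of the refactor this change performs: the repeated ns->ctrl pointer chain is read once into a local variable, which the following patch in the series then reuses when it registers the crypto profile in the same function. The struct and field names below are hypothetical stand-ins, not the real nvme core types.

	#include <stdio.h>

	struct subsys { int awupf; };
	struct ctrl   { struct subsys *subsys; int max_zeroes_sectors; };
	struct ns     { struct ctrl *ctrl; };

	static void update_disk_info(struct ns *ns, int bs)
	{
		/* hoist the repeated ns->ctrl dereference into one local */
		struct ctrl *ctrl = ns->ctrl;
		int atomic_bs = (1 + ctrl->subsys->awupf) * bs;

		printf("atomic_bs=%d max_zeroes=%d\n",
		       atomic_bs, ctrl->max_zeroes_sectors);
	}

	int main(void)
	{
		struct subsys s = { .awupf = 0 };
		struct ctrl c = { .subsys = &s, .max_zeroes_sectors = 8 };
		struct ns n = { .ctrl = &c };

		update_disk_info(&n, 512);
		return 0;
	}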
Signed-off-by: Israel Rukshin Signed-off-by: Leon Romanovsky --- drivers/nvme/host/core.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 7be562a4e1aa..51a9880db6ce 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -1870,6 +1870,7 @@ static void nvme_update_disk_info(struct gendisk *disk, sector_t capacity = nvme_lba_to_sect(ns, le64_to_cpu(id->nsze)); unsigned short bs = 1 << ns->lba_shift; u32 atomic_bs, phys_bs, io_opt = 0; + struct nvme_ctrl *ctrl = ns->ctrl; /* * The block layer can't support LBA sizes larger than the page size @@ -1892,7 +1893,7 @@ static void nvme_update_disk_info(struct gendisk *disk, if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs; else - atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs; + atomic_bs = (1 + ctrl->subsys->awupf) * bs; } if (id->nsfeat & NVME_NS_FEAT_IO_OPT) { @@ -1922,7 +1923,7 @@ static void nvme_update_disk_info(struct gendisk *disk, if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) && (ns->features & NVME_NS_METADATA_SUPPORTED)) nvme_init_integrity(disk, ns, - ns->ctrl->max_integrity_segments); + ctrl->max_integrity_segments); else if (!nvme_ns_has_pi(ns)) capacity = 0; } @@ -1931,7 +1932,7 @@ static void nvme_update_disk_info(struct gendisk *disk, nvme_config_discard(disk, ns); blk_queue_max_write_zeroes_sectors(disk->queue, - ns->ctrl->max_zeroes_sectors); + ctrl->max_zeroes_sectors); } static bool nvme_ns_is_readonly(struct nvme_ns *ns, struct nvme_ns_info *info) From patchwork Mon Jan 16 13:05:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13103062 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 57C1CC54EBE for ; Mon, 16 Jan 2023 13:07:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231338AbjAPNHJ (ORCPT ); Mon, 16 Jan 2023 08:07:09 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56720 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231420AbjAPNHB (ORCPT ); Mon, 16 Jan 2023 08:07:01 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EAA661710; Mon, 16 Jan 2023 05:07:00 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 645F660F9B; Mon, 16 Jan 2023 13:07:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DA0DFC433F1; Mon, 16 Jan 2023 13:06:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1673874419; bh=Z9+KomyeN4vAjModCpaAbKOjkyqP4m8dRccCmgRLn8k=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OPeQnGdMKL0qvPIWYWzLqm//vIJ6xVrQ+zSrw9NRNMJnbSzuJa9VEEYupUejVnt06 HRNhIbjJVVOzzBSXSeoUjnDrYIDLqtF3CjSOmEgHFu37MIVWOwUbq/DFyGfdnJ++Yg BvhvBUhhKztQYlOb92vy7VS8Uiq44148kyxwVg4auh30C5QonsBIXdBs6dnWQlVAaH p/08idib6ImCyqQwz1xJ8CyMsMgn2pwOnLcSRdNel6waQkTHTGQTYUJsOWFJsf4Ywz ++JUX3bjsvRlVxxLqSrPNBs7farZKtwyKf2xjpumDEoLbmQvbxXcM2CM0TkhY3sduc 8JxubTZg4TsUw== From: Leon Romanovsky To: Jason Gunthorpe 
Cc: Israel Rukshin , Bryan Tan , Christoph Hellwig , Eric Dumazet , Jakub Kicinski , Jens Axboe , Keith Busch , linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu , Max Gurtovoy , netdev@vger.kernel.org, Paolo Abeni , Saeed Mahameed , Sagi Grimberg , Selvin Xavier , Steven Rostedt , Vishnu Dasa , Yishai Hadas Subject: [PATCH rdma-next 12/13] nvme: Add crypto profile at nvme controller Date: Mon, 16 Jan 2023 15:05:59 +0200 Message-Id: X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Israel Rukshin The crypto profile will be filled by the transport drivers. This is a preparation patch for adding support of inline encryption at nvme-rdma driver. Signed-off-by: Israel Rukshin Signed-off-by: Leon Romanovsky --- drivers/nvme/host/core.c | 3 +++ drivers/nvme/host/nvme.h | 4 ++++ 2 files changed, 7 insertions(+) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 51a9880db6ce..f09e4e0216b3 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -1928,6 +1928,9 @@ static void nvme_update_disk_info(struct gendisk *disk, capacity = 0; } + if (ctrl->crypto_enable) + blk_crypto_register(&ctrl->crypto_profile, disk->queue); + set_capacity_and_notify(disk, capacity); nvme_config_discard(disk, ns); diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 424c8a467a0c..591380f53744 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -16,6 +16,7 @@ #include #include #include +#include #include @@ -374,6 +375,9 @@ struct nvme_ctrl { enum nvme_ctrl_type cntrltype; enum nvme_dctype dctype; + + bool crypto_enable; + struct blk_crypto_profile crypto_profile; }; enum nvme_iopolicy { From patchwork Mon Jan 16 13:06:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13103063 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BF0EC46467 for ; Mon, 16 Jan 2023 13:07:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231533AbjAPNHL (ORCPT ); Mon, 16 Jan 2023 08:07:11 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55186 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231287AbjAPNG6 (ORCPT ); Mon, 16 Jan 2023 08:06:58 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 920897EE1; Mon, 16 Jan 2023 05:06:56 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 203C460F9B; Mon, 16 Jan 2023 13:06:56 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A8A0EC433F0; Mon, 16 Jan 2023 13:06:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1673874415; bh=3onQ/rBBcurHQvS6iIFYjN6t3VLGjjleEvgmJokrbgA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NUJkHjzulOFJcMkWyEr4QZGeeBrMIcy+EHSm7ZNEzgiWesNYaqRow8CXR3AlkAQ8i 
dJvjT8FNvlrkqVq9fs5JF1v5O8p2AJXQrGk7x8i+newd1L8Qc7oAM4Y8faxFFuMm3I qqHPzLr8ApM9FLcq0iPB+rYjUNNpKdpQ1pstknMWu7AfW0R7Y8HpZXx4Em51+CPR8b ZB3Fbt5NLbLtolW/kHQfgw1BIVr7TB3wHpobxPSgBfk55jPas04n8Oz1TpsI+e0FKC rLnD8WKyrb3HBRB++nAeBaQIanu1gD4gW6dZCc76bECmRRIMEPgYDKKGHewU0YMchi fee5Llt2FnYWA== From: Leon Romanovsky To: Jason Gunthorpe Cc: Israel Rukshin , Bryan Tan , Christoph Hellwig , Eric Dumazet , Jakub Kicinski , Jens Axboe , Keith Busch , linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, Masami Hiramatsu , Max Gurtovoy , netdev@vger.kernel.org, Paolo Abeni , Saeed Mahameed , Sagi Grimberg , Selvin Xavier , Steven Rostedt , Vishnu Dasa , Yishai Hadas Subject: [PATCH rdma-next 13/13] nvme-rdma: Add inline encryption support Date: Mon, 16 Jan 2023 15:06:00 +0200 Message-Id: X-Mailer: git-send-email 2.39.0 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Israel Rukshin Add support for inline encryption/decryption of the data at a similar way like integrity is used via a unique Mkey. The feature is enabled only when CONFIG_BLK_INLINE_ENCRYPTION is configured. There is no special configuration to enable crypto at nvme modules. The code was tested with fscrypt and BF-3 HW, which had more than 50% CPU utilization improvement at 64K and bigger I/O sizes when comparing to the SW only solution at this case. Signed-off-by: Israel Rukshin Signed-off-by: Leon Romanovsky --- drivers/nvme/host/rdma.c | 236 ++++++++++++++++++++++++++++++++++++++- 1 file changed, 232 insertions(+), 4 deletions(-) diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index bbad26b82b56..8bea38eb976f 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -40,6 +40,10 @@ #define NVME_RDMA_METADATA_SGL_SIZE \ (sizeof(struct scatterlist) * NVME_INLINE_METADATA_SG_CNT) +#define NVME_RDMA_NUM_CRYPTO_KEYSLOTS (32U) +/* Bitmask for 512B and 4K data unit sizes */ +#define NVME_RDMA_DATA_UNIT_SIZES (512U | 4096U) + struct nvme_rdma_device { struct ib_device *dev; struct ib_pd *pd; @@ -75,6 +79,7 @@ struct nvme_rdma_request { struct nvme_rdma_sgl data_sgl; struct nvme_rdma_sgl *metadata_sgl; bool use_sig_mr; + bool use_crypto; }; enum nvme_rdma_queue_flags { @@ -97,6 +102,7 @@ struct nvme_rdma_queue { int cm_error; struct completion cm_done; bool pi_support; + bool crypto_support; int cq_size; struct mutex queue_lock; }; @@ -126,6 +132,8 @@ struct nvme_rdma_ctrl { struct nvme_ctrl ctrl; bool use_inline_data; u32 io_queues[HCTX_MAX_TYPES]; + + struct ib_dek *dek[NVME_RDMA_NUM_CRYPTO_KEYSLOTS]; }; static inline struct nvme_rdma_ctrl *to_rdma_ctrl(struct nvme_ctrl *ctrl) @@ -275,6 +283,8 @@ static int nvme_rdma_create_qp(struct nvme_rdma_queue *queue, const int factor) init_attr.recv_cq = queue->ib_cq; if (queue->pi_support) init_attr.create_flags |= IB_QP_CREATE_INTEGRITY_EN; + if (queue->crypto_support) + init_attr.create_flags |= IB_QP_CREATE_CRYPTO_EN; init_attr.qp_context = queue; ret = rdma_create_qp(queue->cm_id, dev->pd, &init_attr); @@ -364,6 +374,77 @@ static int nvme_rdma_dev_get(struct nvme_rdma_device *dev) return kref_get_unless_zero(&dev->ref); } +static int nvme_rdma_crypto_keyslot_program(struct blk_crypto_profile *profile, + const struct blk_crypto_key *key, + unsigned int slot) +{ + struct nvme_ctrl *nctrl = + container_of(profile, struct nvme_ctrl, crypto_profile); + struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl); + struct ib_dek_attr dek_attr = { }; 
+ int err = 0; + + if (slot > NVME_RDMA_NUM_CRYPTO_KEYSLOTS) { + dev_err(nctrl->device, "slot index reached the limit %d/%d", + slot, NVME_RDMA_NUM_CRYPTO_KEYSLOTS); + return -EOPNOTSUPP; + } + + if (WARN_ON_ONCE(key->crypto_cfg.crypto_mode != + BLK_ENCRYPTION_MODE_AES_256_XTS)) + return -EOPNOTSUPP; + + if (ctrl->dek[slot]) { + dev_dbg(nctrl->device, "slot %d is taken, free DEK ID %d\n", + slot, ctrl->dek[slot]->id); + ib_destroy_dek(ctrl->dek[slot]); + } + + dek_attr.key_blob = key->raw; + dek_attr.key_blob_size = key->size; + dek_attr.key_type = IB_CRYPTO_KEY_TYPE_AES_XTS; + ctrl->dek[slot] = ib_create_dek(ctrl->device->pd, &dek_attr); + if (IS_ERR(ctrl->dek[slot])) { + err = PTR_ERR(ctrl->dek[slot]); + dev_err(nctrl->device, + "failed to create DEK at slot %d, err %d\n", slot, err); + ctrl->dek[slot] = NULL; + } else { + dev_dbg(nctrl->device, "DEK ID %d was created at slot %d\n", + ctrl->dek[slot]->id, slot); + } + + return err; +} + +static int nvme_rdma_crypto_keyslot_evict(struct blk_crypto_profile *profile, + const struct blk_crypto_key *key, + unsigned int slot) +{ + struct nvme_ctrl *nctrl = + container_of(profile, struct nvme_ctrl, crypto_profile); + struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl); + + if (slot > NVME_RDMA_NUM_CRYPTO_KEYSLOTS) { + dev_err(nctrl->device, "slot index reached the limit %d/%d\n", + slot, NVME_RDMA_NUM_CRYPTO_KEYSLOTS); + return -EOPNOTSUPP; + } + + if (!ctrl->dek[slot]) { + dev_err(nctrl->device, "slot %d is empty\n", slot); + return -EINVAL; + } + + dev_dbg(nctrl->device, "Destroy DEK ID %d slot %d\n", + ctrl->dek[slot]->id, slot); + + ib_destroy_dek(ctrl->dek[slot]); + ctrl->dek[slot] = NULL; + + return 0; +} + static struct nvme_rdma_device * nvme_rdma_find_get_device(struct rdma_cm_id *cm_id) { @@ -430,6 +511,8 @@ static void nvme_rdma_destroy_queue_ib(struct nvme_rdma_queue *queue) dev = queue->device; ibdev = dev->dev; + if (queue->crypto_support) + ib_mr_pool_destroy(queue->qp, &queue->qp->crypto_mrs); if (queue->pi_support) ib_mr_pool_destroy(queue->qp, &queue->qp->sig_mrs); ib_mr_pool_destroy(queue->qp, &queue->qp->rdma_mrs); @@ -553,10 +636,24 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue) } } + if (queue->crypto_support) { + ret = ib_mr_pool_init(queue->qp, &queue->qp->crypto_mrs, + queue->queue_size, IB_MR_TYPE_CRYPTO, + pages_per_mr, 0); + if (ret) { + dev_err(queue->ctrl->ctrl.device, + "failed to initialize crypto MR pool sized %d for QID %d\n", + queue->queue_size, nvme_rdma_queue_idx(queue)); + goto out_destroy_sig_mr_pool; + } + } + set_bit(NVME_RDMA_Q_TR_READY, &queue->flags); return 0; +out_destroy_sig_mr_pool: + ib_mr_pool_destroy(queue->qp, &queue->qp->sig_mrs); out_destroy_mr_pool: ib_mr_pool_destroy(queue->qp, &queue->qp->rdma_mrs); out_destroy_ring: @@ -585,6 +682,9 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl, queue->pi_support = true; else queue->pi_support = false; + + queue->crypto_support = idx && ctrl->ctrl.crypto_enable; + init_completion(&queue->cm_done); if (idx > 0) @@ -805,15 +905,109 @@ static int nvme_rdma_alloc_tag_set(struct nvme_ctrl *ctrl) static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl) { + unsigned int i; + if (ctrl->async_event_sqe.data) { cancel_work_sync(&ctrl->ctrl.async_event_work); nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, sizeof(struct nvme_command), DMA_TO_DEVICE); ctrl->async_event_sqe.data = NULL; } + + for (i = 0; i < NVME_RDMA_NUM_CRYPTO_KEYSLOTS; i++) { + if (!ctrl->dek[i]) + continue; + 
ib_destroy_dek(ctrl->dek[i]); + ctrl->dek[i] = NULL; + } + nvme_rdma_free_queue(&ctrl->queues[0]); } +#ifdef CONFIG_BLK_INLINE_ENCRYPTION +static const struct blk_crypto_ll_ops nvme_rdma_crypto_ops = { + .keyslot_program = nvme_rdma_crypto_keyslot_program, + .keyslot_evict = nvme_rdma_crypto_keyslot_evict, +}; + +static int nvme_rdma_crypto_profile_init(struct nvme_rdma_ctrl *ctrl, bool new) +{ + struct blk_crypto_profile *profile = &ctrl->ctrl.crypto_profile; + int ret; + + if (!new) { + blk_crypto_reprogram_all_keys(profile); + return 0; + } + + ret = devm_blk_crypto_profile_init(ctrl->ctrl.device, profile, + NVME_RDMA_NUM_CRYPTO_KEYSLOTS); + if (ret) { + dev_err(ctrl->ctrl.device, + "devm_blk_crypto_profile_init failed err %d\n", ret); + return ret; + } + + profile->ll_ops = nvme_rdma_crypto_ops; + profile->dev = ctrl->ctrl.device; + profile->max_dun_bytes_supported = IB_CRYPTO_XTS_TWEAK_MAX_SIZE; + profile->modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] = + NVME_RDMA_DATA_UNIT_SIZES; + + return 0; +} + +static void nvme_rdma_set_crypto_attrs(struct ib_crypto_attrs *crypto_attrs, + struct nvme_rdma_queue *queue, struct nvme_rdma_request *req) +{ + struct request *rq = blk_mq_rq_from_pdu(req); + u32 slot_idx = blk_crypto_keyslot_index(rq->crypt_keyslot); + + memset(crypto_attrs, 0, sizeof(*crypto_attrs)); + + crypto_attrs->encrypt_domain = IB_CRYPTO_ENCRYPTED_WIRE_DOMAIN; + crypto_attrs->encrypt_standard = IB_CRYPTO_AES_XTS; + crypto_attrs->data_unit_size = + rq->crypt_ctx->bc_key->crypto_cfg.data_unit_size; + crypto_attrs->dek = queue->ctrl->dek[slot_idx]->id; + memcpy(crypto_attrs->xts_init_tweak, rq->crypt_ctx->bc_dun, + sizeof(crypto_attrs->xts_init_tweak)); +} + +static int nvme_rdma_fill_crypto_caps(struct nvme_rdma_ctrl *ctrl, bool new) +{ + struct ib_crypto_caps *caps = &ctrl->device->dev->attrs.crypto_caps; + + if (caps->crypto_engines & IB_CRYPTO_ENGINES_CAP_AES_XTS) { + ctrl->ctrl.crypto_enable = true; + return 0; + } + + if (!new && ctrl->ctrl.crypto_enable) { + dev_err(ctrl->ctrl.device, "Must support crypto!\n"); + return -EOPNOTSUPP; + } + ctrl->ctrl.crypto_enable = false; + + return 0; +} +#else +static int nvme_rdma_crypto_profile_init(struct nvme_rdma_ctrl *ctrl, bool new) +{ + return -EOPNOTSUPP; +} + +static void nvme_rdma_set_crypto_attrs(struct ib_crypto_attrs *crypto_attrs, + struct nvme_rdma_queue *queue, struct nvme_rdma_request *req) +{ +} + +static int nvme_rdma_fill_crypto_caps(struct nvme_rdma_ctrl *ctrl, bool new) +{ + return 0; +} +#endif /* CONFIG_BLK_INLINE_ENCRYPTION */ + static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl, bool new) { @@ -835,6 +1029,10 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl, ctrl->max_fr_pages = nvme_rdma_get_max_fr_pages(ctrl->device->dev, pi_capable); + error = nvme_rdma_fill_crypto_caps(ctrl, new); + if (error) + goto out_free_queue; + /* * Bind the async event SQE DMA mapping to the admin queue lifetime. 
* It's safe, since any chage in the underlying RDMA device will issue @@ -870,6 +1068,12 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl, else ctrl->ctrl.max_integrity_segments = 0; + if (ctrl->ctrl.crypto_enable) { + error = nvme_rdma_crypto_profile_init(ctrl, new); + if (error) + goto out_quiesce_queue; + } + nvme_unquiesce_admin_queue(&ctrl->ctrl); error = nvme_init_ctrl_finish(&ctrl->ctrl, false); @@ -1268,6 +1472,8 @@ static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue, if (req->use_sig_mr) pool = &queue->qp->sig_mrs; + else if (req->use_crypto) + pool = &queue->qp->crypto_mrs; if (req->mr) { ib_mr_pool_put(queue->qp, pool, req->mr); @@ -1331,9 +1537,13 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue, int count) { struct nvme_keyed_sgl_desc *sg = &c->common.dptr.ksgl; + struct list_head *pool = &queue->qp->rdma_mrs; int nr; - req->mr = ib_mr_pool_get(queue->qp, &queue->qp->rdma_mrs); + if (req->use_crypto) + pool = &queue->qp->crypto_mrs; + + req->mr = ib_mr_pool_get(queue->qp, pool); if (WARN_ON_ONCE(!req->mr)) return -EAGAIN; @@ -1344,18 +1554,24 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue, nr = ib_map_mr_sg(req->mr, req->data_sgl.sg_table.sgl, count, NULL, SZ_4K); if (unlikely(nr < count)) { - ib_mr_pool_put(queue->qp, &queue->qp->rdma_mrs, req->mr); + ib_mr_pool_put(queue->qp, pool, req->mr); req->mr = NULL; if (nr < 0) return nr; return -EINVAL; } + if (req->use_crypto) + nvme_rdma_set_crypto_attrs(req->mr->crypto_attrs, queue, req); + ib_update_fast_reg_key(req->mr, ib_inc_rkey(req->mr->rkey)); req->reg_cqe.done = nvme_rdma_memreg_done; memset(&req->reg_wr, 0, sizeof(req->reg_wr)); - req->reg_wr.wr.opcode = IB_WR_REG_MR; + if (req->use_crypto) + req->reg_wr.wr.opcode = IB_WR_REG_MR_CRYPTO; + else + req->reg_wr.wr.opcode = IB_WR_REG_MR; req->reg_wr.wr.wr_cqe = &req->reg_cqe; req->reg_wr.wr.num_sge = 0; req->reg_wr.mr = req->mr; @@ -1571,7 +1787,7 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue, goto out; } - if (count <= dev->num_inline_segments) { + if (count <= dev->num_inline_segments && !req->use_crypto) { if (rq_data_dir(rq) == WRITE && nvme_rdma_queue_idx(queue) && queue->ctrl->use_inline_data && blk_rq_payload_bytes(rq) <= @@ -2052,6 +2268,18 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx, else req->use_sig_mr = false; +#ifdef CONFIG_BLK_INLINE_ENCRYPTION + if (rq->crypt_ctx) { + if (WARN_ON_ONCE(!queue->crypto_support)) { + err = -EIO; + goto err; + } + req->use_crypto = true; + } else { + req->use_crypto = false; + } +#endif + err = nvme_rdma_map_data(queue, rq, c); if (unlikely(err < 0)) { dev_err(queue->ctrl->ctrl.device,