From patchwork Wed Mar 18 12:43:26 2020
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Yishai Hadas, linux-rdma@vger.kernel.org, Michael Guralnik
Subject: [PATCH rdma-next 1/4] IB/mlx5: Expose UAR object and its alloc/destroy commands
Date: Wed, 18 Mar 2020 14:43:26 +0200
Message-Id: <20200318124329.52111-2-leon@kernel.org>
In-Reply-To: <20200318124329.52111-1-leon@kernel.org>

From: Yishai Hadas

Expose a UAR (User Access Region) object, together with its alloc/destroy
commands, so that user space applications can use it over the ioctl
interface. The API supports both BF (blue-flame, write-combining) and NC
(non-cached) modes and enables dynamic allocation of UARs only once they
are actually needed.

Since the number of driver object slots is bounded by the slots reserved
for core objects once the merged tree is prepared, the core reservation
had to be decreased (from 24 to 20) to make room for the new UAR object.

Signed-off-by: Yishai Hadas
Reviewed-by: Michael Guralnik
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/main.c         | 172 ++++++++++++++++++++--
 drivers/infiniband/hw/mlx5/mlx5_ib.h      |   2 +
 include/rdma/uverbs_ioctl.h               |   2 +-
 include/uapi/rdma/mlx5_user_ioctl_cmds.h  |  18 +++
 include/uapi/rdma/mlx5_user_ioctl_verbs.h |   5 +
 5 files changed, 189 insertions(+), 10 deletions(-)
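For orientation before the diff: user space is expected to reach
MLX5_IB_METHOD_UAR_OBJ_ALLOC through rdma-core rather than raw ioctls. A
minimal sketch, assuming an rdma-core release whose mlx5dv_devx_alloc_uar()
and mlx5dv_devx_free_uar() entry points are backed by this new object and
whose MLX5DV_UAR_ALLOC_TYPE_* flags mirror the uapi enum added below:

    /*
     * Hedged sketch, not part of the patch: assumes rdma-core's mlx5dv
     * DEVX UAR helpers drive the new UAR object. The returned page_id
     * can later be fed to CQ/QP creation (patches 2 and 3 of this series).
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>
    #include <infiniband/mlx5dv.h>

    static int demo_dynamic_uar(struct ibv_context *ctx)
    {
            struct mlx5dv_devx_uar *uar;

            /* Ask for a non-cached (NC) mapping; blue-flame users would
             * pass MLX5DV_UAR_ALLOC_TYPE_BF instead.
             */
            uar = mlx5dv_devx_alloc_uar(ctx, MLX5DV_UAR_ALLOC_TYPE_NC);
            if (!uar)
                    return -1;

            printf("UAR page id %u mapped at %p\n",
                   uar->page_id, uar->reg_addr);

            mlx5dv_devx_free_uar(uar);
            return 0;
    }

Under the hood, the kernel hands back a mmap offset and length for the
entry inserted below, and the library mmap()s that range on the command FD;
the page protection follows the mmap_flag (write-combining for BF,
non-cached for NC).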
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 05804e4ba292..e8787af2d74d 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2022,6 +2022,17 @@ static phys_addr_t uar_index2pfn(struct mlx5_ib_dev *dev,
 	return (dev->mdev->bar_addr >> PAGE_SHIFT) + uar_idx / fw_uars_per_page;
 }
 
+static u64 uar_index2paddress(struct mlx5_ib_dev *dev,
+			      int uar_idx)
+{
+	unsigned int fw_uars_per_page;
+
+	fw_uars_per_page = MLX5_CAP_GEN(dev->mdev, uar_4k) ?
+				MLX5_UARS_IN_PAGE : 1;
+
+	return (dev->mdev->bar_addr + (uar_idx / fw_uars_per_page) * PAGE_SIZE);
+}
+
 static int get_command(unsigned long offset)
 {
 	return (offset >> MLX5_IB_MMAP_CMD_SHIFT) & MLX5_IB_MMAP_CMD_MASK;
@@ -2106,6 +2117,11 @@ static void mlx5_ib_mmap_free(struct rdma_user_mmap_entry *entry)
 		mutex_unlock(&var_table->bitmap_lock);
 		kfree(mentry);
 		break;
+	case MLX5_IB_MMAP_TYPE_UAR_WC:
+	case MLX5_IB_MMAP_TYPE_UAR_NC:
+		mlx5_cmd_free_uar(dev->mdev, mentry->page_idx);
+		kfree(mentry);
+		break;
 	default:
 		WARN_ON(true);
 	}
@@ -2257,7 +2273,8 @@ static int mlx5_ib_mmap_offset(struct mlx5_ib_dev *dev,
 	mentry = to_mmmap(entry);
 	pfn = (mentry->address >> PAGE_SHIFT);
-	if (mentry->mmap_flag == MLX5_IB_MMAP_TYPE_VAR)
+	if (mentry->mmap_flag == MLX5_IB_MMAP_TYPE_VAR ||
+	    mentry->mmap_flag == MLX5_IB_MMAP_TYPE_UAR_NC)
 		prot = pgprot_noncached(vma->vm_page_prot);
 	else
 		prot = pgprot_writecombine(vma->vm_page_prot);
@@ -6079,9 +6096,9 @@ static void mlx5_ib_cleanup_multiport_master(struct mlx5_ib_dev *dev)
 	mlx5_nic_vport_disable_roce(dev->mdev);
 }
 
-static int var_obj_cleanup(struct ib_uobject *uobject,
-			   enum rdma_remove_reason why,
-			   struct uverbs_attr_bundle *attrs)
+static int mmap_obj_cleanup(struct ib_uobject *uobject,
+			    enum rdma_remove_reason why,
+			    struct uverbs_attr_bundle *attrs)
 {
 	struct mlx5_user_mmap_entry *obj = uobject->object;
 
@@ -6089,6 +6106,16 @@ static int var_obj_cleanup(struct ib_uobject *uobject,
 	return 0;
 }
 
+static int mlx5_rdma_user_mmap_entry_insert(struct mlx5_ib_ucontext *c,
+					    struct mlx5_user_mmap_entry *entry,
+					    size_t length)
+{
+	return rdma_user_mmap_entry_insert_range(
+		&c->ibucontext, &entry->rdma_entry, length,
+		(MLX5_IB_MMAP_OFFSET_START << 16),
+		((MLX5_IB_MMAP_OFFSET_END << 16) + (1UL << 16) - 1));
+}
+
 static struct mlx5_user_mmap_entry *
 alloc_var_entry(struct mlx5_ib_ucontext *c)
 {
@@ -6119,10 +6146,8 @@ alloc_var_entry(struct mlx5_ib_ucontext *c)
 	entry->page_idx = page_idx;
 	entry->mmap_flag = MLX5_IB_MMAP_TYPE_VAR;
 
-	err = rdma_user_mmap_entry_insert_range(
-		&c->ibucontext, &entry->rdma_entry, var_table->stride_size,
-		MLX5_IB_MMAP_OFFSET_START << 16,
-		(MLX5_IB_MMAP_OFFSET_END << 16) + (1UL << 16) - 1);
+	err = mlx5_rdma_user_mmap_entry_insert(c, entry,
+					       var_table->stride_size);
 	if (err)
 		goto err_insert;
 
@@ -6206,7 +6231,7 @@ DECLARE_UVERBS_NAMED_METHOD_DESTROY(
 			UA_MANDATORY));
 
 DECLARE_UVERBS_NAMED_OBJECT(MLX5_IB_OBJECT_VAR,
-			    UVERBS_TYPE_ALLOC_IDR(var_obj_cleanup),
+			    UVERBS_TYPE_ALLOC_IDR(mmap_obj_cleanup),
 			    &UVERBS_METHOD(MLX5_IB_METHOD_VAR_OBJ_ALLOC),
 			    &UVERBS_METHOD(MLX5_IB_METHOD_VAR_OBJ_DESTROY));
 
@@ -6218,6 +6243,134 @@ static bool var_is_supported(struct ib_device *device)
 			MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q);
 }
 
+static struct mlx5_user_mmap_entry *
+alloc_uar_entry(struct mlx5_ib_ucontext *c,
+		enum mlx5_ib_uapi_uar_alloc_type alloc_type)
+{
+	struct mlx5_user_mmap_entry *entry;
+	struct mlx5_ib_dev *dev;
+	u32 uar_index;
+	int err;
+
+	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry)
+		return ERR_PTR(-ENOMEM);
+
+	dev = to_mdev(c->ibucontext.device);
+	err = mlx5_cmd_alloc_uar(dev->mdev, &uar_index);
+	if (err)
+		goto end;
+
+	entry->page_idx = uar_index;
+	entry->address = uar_index2paddress(dev, uar_index);
+	if (alloc_type == MLX5_IB_UAPI_UAR_ALLOC_TYPE_BF)
+		entry->mmap_flag = MLX5_IB_MMAP_TYPE_UAR_WC;
+	else
+		entry->mmap_flag = MLX5_IB_MMAP_TYPE_UAR_NC;
+
+	err = mlx5_rdma_user_mmap_entry_insert(c, entry, PAGE_SIZE);
+	if (err)
+		goto err_insert;
+
+	return entry;
+
+err_insert:
+	mlx5_cmd_free_uar(dev->mdev, uar_index);
+end:
+	kfree(entry);
+	return ERR_PTR(err);
+}
+
+static int UVERBS_HANDLER(MLX5_IB_METHOD_UAR_OBJ_ALLOC)(
+	struct uverbs_attr_bundle *attrs)
+{
+	struct ib_uobject *uobj = uverbs_attr_get_uobject(
+		attrs, MLX5_IB_ATTR_UAR_OBJ_ALLOC_HANDLE);
+	enum mlx5_ib_uapi_uar_alloc_type alloc_type;
+	struct mlx5_ib_ucontext *c;
+	struct mlx5_user_mmap_entry *entry;
+	u64 mmap_offset;
+	u32 length;
+	int err;
+
+	c = to_mucontext(ib_uverbs_get_ucontext(attrs));
+	if (IS_ERR(c))
+		return PTR_ERR(c);
+
+	err = uverbs_get_const(&alloc_type, attrs,
+			       MLX5_IB_ATTR_UAR_OBJ_ALLOC_TYPE);
+	if (err)
+		return err;
+
+	if (alloc_type != MLX5_IB_UAPI_UAR_ALLOC_TYPE_BF &&
+	    alloc_type != MLX5_IB_UAPI_UAR_ALLOC_TYPE_NC)
+		return -EOPNOTSUPP;
+
+	if (!to_mdev(c->ibucontext.device)->wc_support &&
+	    alloc_type == MLX5_IB_UAPI_UAR_ALLOC_TYPE_BF)
+		return -EOPNOTSUPP;
+
+	entry = alloc_uar_entry(c, alloc_type);
+	if (IS_ERR(entry))
+		return PTR_ERR(entry);
+
+	mmap_offset = mlx5_entry_to_mmap_offset(entry);
+	length = entry->rdma_entry.npages * PAGE_SIZE;
+	uobj->object = entry;
+
+	err = uverbs_copy_to(attrs, MLX5_IB_ATTR_UAR_OBJ_ALLOC_MMAP_OFFSET,
+			     &mmap_offset, sizeof(mmap_offset));
+	if (err)
+		goto err;
+
+	err = uverbs_copy_to(attrs, MLX5_IB_ATTR_UAR_OBJ_ALLOC_PAGE_ID,
+			     &entry->page_idx, sizeof(entry->page_idx));
+	if (err)
+		goto err;
+
+	err = uverbs_copy_to(attrs, MLX5_IB_ATTR_UAR_OBJ_ALLOC_MMAP_LENGTH,
+			     &length, sizeof(length));
+	if (err)
+		goto err;
+
+	return 0;
+
+err:
+	rdma_user_mmap_entry_remove(&entry->rdma_entry);
+	return err;
+}
+
+DECLARE_UVERBS_NAMED_METHOD(
+	MLX5_IB_METHOD_UAR_OBJ_ALLOC,
+	UVERBS_ATTR_IDR(MLX5_IB_ATTR_UAR_OBJ_ALLOC_HANDLE,
+			MLX5_IB_OBJECT_UAR,
+			UVERBS_ACCESS_NEW,
+			UA_MANDATORY),
+	UVERBS_ATTR_CONST_IN(MLX5_IB_ATTR_UAR_OBJ_ALLOC_TYPE,
+			     enum mlx5_ib_uapi_uar_alloc_type,
+			     UA_MANDATORY),
+	UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_UAR_OBJ_ALLOC_PAGE_ID,
+			    UVERBS_ATTR_TYPE(u32),
+			    UA_MANDATORY),
+	UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_UAR_OBJ_ALLOC_MMAP_LENGTH,
+			    UVERBS_ATTR_TYPE(u32),
+			    UA_MANDATORY),
+	UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_UAR_OBJ_ALLOC_MMAP_OFFSET,
+			    UVERBS_ATTR_TYPE(u64),
+			    UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD_DESTROY(
+	MLX5_IB_METHOD_UAR_OBJ_DESTROY,
+	UVERBS_ATTR_IDR(MLX5_IB_ATTR_UAR_OBJ_DESTROY_HANDLE,
+			MLX5_IB_OBJECT_UAR,
+			UVERBS_ACCESS_DESTROY,
+			UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_OBJECT(MLX5_IB_OBJECT_UAR,
+			    UVERBS_TYPE_ALLOC_IDR(mmap_obj_cleanup),
+			    &UVERBS_METHOD(MLX5_IB_METHOD_UAR_OBJ_ALLOC),
+			    &UVERBS_METHOD(MLX5_IB_METHOD_UAR_OBJ_DESTROY));
+
 ADD_UVERBS_ATTRIBUTES_SIMPLE(
 	mlx5_ib_dm,
 	UVERBS_OBJECT_DM,
@@ -6249,6 +6402,7 @@ static const struct uapi_definition mlx5_ib_defs[] = {
 	UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_DM, &mlx5_ib_dm),
 	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(MLX5_IB_OBJECT_VAR,
 				      UAPI_DEF_IS_OBJ_SUPPORTED(var_is_supported)),
+	UAPI_DEF_CHAIN_OBJ_TREE_NAMED(MLX5_IB_OBJECT_UAR),
 	{}
 };
 
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 85d4f3958e32..370b03db366b 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -124,6 +124,8 @@ enum {
 enum mlx5_ib_mmap_type {
 	MLX5_IB_MMAP_TYPE_MEMIC = 1,
 	MLX5_IB_MMAP_TYPE_VAR = 2,
+	MLX5_IB_MMAP_TYPE_UAR_WC = 3,
+	MLX5_IB_MMAP_TYPE_UAR_NC = 4,
 };
 
 struct mlx5_ib_ucontext {
diff --git a/include/rdma/uverbs_ioctl.h b/include/rdma/uverbs_ioctl.h
index 28570ac2b6a0..9f3b1e004046 100644
--- a/include/rdma/uverbs_ioctl.h
+++ b/include/rdma/uverbs_ioctl.h
@@ -173,7 +173,7 @@ enum uapi_radix_data {
 	UVERBS_API_OBJ_KEY_BITS = 5,
 	UVERBS_API_OBJ_KEY_SHIFT =
 		UVERBS_API_METHOD_KEY_BITS + UVERBS_API_METHOD_KEY_SHIFT,
-	UVERBS_API_OBJ_KEY_NUM_CORE = 24,
+	UVERBS_API_OBJ_KEY_NUM_CORE = 20,
 	UVERBS_API_OBJ_KEY_NUM_DRIVER =
 		(1 << UVERBS_API_OBJ_KEY_BITS) - UVERBS_API_OBJ_KEY_NUM_CORE,
 	UVERBS_API_OBJ_KEY_MASK = GENMASK(31, UVERBS_API_OBJ_KEY_SHIFT),
diff --git a/include/uapi/rdma/mlx5_user_ioctl_cmds.h b/include/uapi/rdma/mlx5_user_ioctl_cmds.h
index 8f4a417fc70a..24f3388c3182 100644
--- a/include/uapi/rdma/mlx5_user_ioctl_cmds.h
+++ b/include/uapi/rdma/mlx5_user_ioctl_cmds.h
@@ -131,6 +131,23 @@ enum mlx5_ib_var_obj_methods {
 	MLX5_IB_METHOD_VAR_OBJ_DESTROY,
 };
 
+enum mlx5_ib_uar_alloc_attrs {
+	MLX5_IB_ATTR_UAR_OBJ_ALLOC_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+	MLX5_IB_ATTR_UAR_OBJ_ALLOC_TYPE,
+	MLX5_IB_ATTR_UAR_OBJ_ALLOC_MMAP_OFFSET,
+	MLX5_IB_ATTR_UAR_OBJ_ALLOC_MMAP_LENGTH,
+	MLX5_IB_ATTR_UAR_OBJ_ALLOC_PAGE_ID,
+};
+
+enum mlx5_ib_uar_obj_destroy_attrs {
+	MLX5_IB_ATTR_UAR_OBJ_DESTROY_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+};
+
+enum mlx5_ib_uar_obj_methods {
+	MLX5_IB_METHOD_UAR_OBJ_ALLOC = (1U << UVERBS_ID_NS_SHIFT),
+	MLX5_IB_METHOD_UAR_OBJ_DESTROY,
+};
+
 enum mlx5_ib_devx_umem_reg_attrs {
 	MLX5_IB_ATTR_DEVX_UMEM_REG_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
 	MLX5_IB_ATTR_DEVX_UMEM_REG_ADDR,
@@ -190,6 +207,7 @@ enum mlx5_ib_objects {
 	MLX5_IB_OBJECT_DEVX_ASYNC_EVENT_FD,
 	MLX5_IB_OBJECT_VAR,
 	MLX5_IB_OBJECT_PP,
+	MLX5_IB_OBJECT_UAR,
 };
 
 enum mlx5_ib_flow_matcher_create_attrs {
diff --git a/include/uapi/rdma/mlx5_user_ioctl_verbs.h b/include/uapi/rdma/mlx5_user_ioctl_verbs.h
index b4641a7865f7..3f7a97c28045 100644
--- a/include/uapi/rdma/mlx5_user_ioctl_verbs.h
+++ b/include/uapi/rdma/mlx5_user_ioctl_verbs.h
@@ -77,5 +77,10 @@ enum mlx5_ib_uapi_pp_alloc_flags {
 	MLX5_IB_UAPI_PP_ALLOC_FLAGS_DEDICATED_INDEX = 1 << 0,
 };
 
+enum mlx5_ib_uapi_uar_alloc_type {
+	MLX5_IB_UAPI_UAR_ALLOC_TYPE_BF = 0x0,
+	MLX5_IB_UAPI_UAR_ALLOC_TYPE_NC = 0x1,
+};
+
 #endif
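The include/rdma/uverbs_ioctl.h hunk above is the "decrease the number of
core objects" part of the commit message. The object key is 5 bits wide, so
core and driver objects share 32 slots; shrinking the core reservation from
24 to 20 grows the driver space from 8 to 12 slots. A standalone
illustration of the arithmetic (the constants are copied from the header;
the program itself is not part of the patch):

    #include <stdio.h>

    #define UVERBS_API_OBJ_KEY_BITS       5
    #define UVERBS_API_OBJ_KEY_NUM_CORE   20  /* was 24 before this patch */
    #define UVERBS_API_OBJ_KEY_NUM_DRIVER \
            ((1 << UVERBS_API_OBJ_KEY_BITS) - UVERBS_API_OBJ_KEY_NUM_CORE)

    int main(void)
    {
            /* 32 total slots - 20 core = 12 driver slots, making room for
             * MLX5_IB_OBJECT_UAR among the mlx5 driver objects.
             */
            printf("driver object slots: %d\n", UVERBS_API_OBJ_KEY_NUM_DRIVER);
            return 0;
    }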
From patchwork Wed Mar 18 12:43:27 2020
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Yishai Hadas, linux-rdma@vger.kernel.org, Michael Guralnik
Subject: [PATCH rdma-next 2/4] IB/mlx5: Extend CQ creation to get uar page index from user space
Date: Wed, 18 Mar 2020 14:43:27 +0200
Message-Id: <20200318124329.52111-3-leon@kernel.org>
In-Reply-To: <20200318124329.52111-1-leon@kernel.org>

From: Yishai Hadas

Extend CQ creation to accept a UAR page index from user space. This mode
can be used together with the dynamic UAR APIs that allocate and destroy
a UAR object.

Signed-off-by: Yishai Hadas
Reviewed-by: Michael Guralnik
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/cq.c | 17 +++++++++++------
 include/uapi/rdma/mlx5-abi.h    |  4 ++++
 2 files changed, 15 insertions(+), 6 deletions(-)
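For orientation: the new fields are filled by the user-space provider
(rdma-core's mlx5 driver) when building the create-CQ command. A hedged
sketch of populating them — the helper and its parameters are illustrative;
only the struct, the flag, and the validation rules come from this patch:

    #include <string.h>
    #include <rdma/mlx5-abi.h>  /* kernel uapi header */

    /* uar_page_id would come from the dynamic UAR allocation (patch 1). */
    static void fill_create_cq_cmd(struct mlx5_ib_create_cq *ucmd,
                                   __u64 buf_addr, __u64 db_addr,
                                   __u32 cqe_size, __u16 uar_page_id)
    {
            memset(ucmd, 0, sizeof(*ucmd)); /* reserved0/reserved1 must be 0 */
            ucmd->buf_addr = buf_addr;
            ucmd->db_addr = db_addr;
            ucmd->cqe_size = cqe_size;      /* kernel accepts only 64 or 128 */
            ucmd->flags = MLX5_IB_CREATE_CQ_FLAGS_UAR_PAGE_INDEX;
            ucmd->uar_page_index = uar_page_id;
    }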
diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index 3dec3de903b7..eafedc2f697b 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -715,17 +715,19 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 	struct mlx5_ib_ucontext *context = rdma_udata_to_drv_context(
 		udata, struct mlx5_ib_ucontext, ibucontext);
 
-	ucmdlen = udata->inlen < sizeof(ucmd) ?
-		  (sizeof(ucmd) - sizeof(ucmd.flags)) : sizeof(ucmd);
+	ucmdlen = min(udata->inlen, sizeof(ucmd));
+	if (ucmdlen < offsetof(struct mlx5_ib_create_cq, flags))
+		return -EINVAL;
 
 	if (ib_copy_from_udata(&ucmd, udata, ucmdlen))
 		return -EFAULT;
 
-	if (ucmdlen == sizeof(ucmd) &&
-	    (ucmd.flags & ~(MLX5_IB_CREATE_CQ_FLAGS_CQE_128B_PAD)))
+	if ((ucmd.flags & ~(MLX5_IB_CREATE_CQ_FLAGS_CQE_128B_PAD |
+			    MLX5_IB_CREATE_CQ_FLAGS_UAR_PAGE_INDEX)))
 		return -EINVAL;
 
-	if (ucmd.cqe_size != 64 && ucmd.cqe_size != 128)
+	if ((ucmd.cqe_size != 64 && ucmd.cqe_size != 128) ||
+	    ucmd.reserved0 || ucmd.reserved1)
 		return -EINVAL;
 
 	*cqe_size = ucmd.cqe_size;
@@ -762,7 +764,10 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 	MLX5_SET(cqc, cqc, log_page_size,
 		 page_shift - MLX5_ADAPTER_PAGE_SHIFT);
 
-	*index = context->bfregi.sys_pages[0];
+	if (ucmd.flags & MLX5_IB_CREATE_CQ_FLAGS_UAR_PAGE_INDEX)
+		*index = ucmd.uar_page_index;
+	else
+		*index = context->bfregi.sys_pages[0];
 
 	if (ucmd.cqe_comp_en == 1) {
 		int mini_cqe_format;
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index 624f5b53eb1f..e900f9a64feb 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -266,6 +266,7 @@ struct mlx5_ib_query_device_resp {
 
 enum mlx5_ib_create_cq_flags {
 	MLX5_IB_CREATE_CQ_FLAGS_CQE_128B_PAD	= 1 << 0,
+	MLX5_IB_CREATE_CQ_FLAGS_UAR_PAGE_INDEX  = 1 << 1,
 };
 
 struct mlx5_ib_create_cq {
@@ -275,6 +276,9 @@ struct mlx5_ib_create_cq {
 	__u8    cqe_comp_en;
 	__u8    cqe_comp_res_format;
 	__u16	flags;
+	__u16	uar_page_index;
+	__u16	reserved0;
+	__u32	reserved1;
 };
 
 struct mlx5_ib_create_cq_resp {

From patchwork Wed Mar 18 12:43:28 2020
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Yishai Hadas, linux-rdma@vger.kernel.org, Michael Guralnik
Subject: [PATCH rdma-next 3/4] IB/mlx5: Extend QP creation to get uar page index from user space
Date: Wed, 18 Mar 2020 14:43:28 +0200
Message-Id: <20200318124329.52111-4-leon@kernel.org>
In-Reply-To: <20200318124329.52111-1-leon@kernel.org>

From: Yishai Hadas

Extend QP creation to accept a UAR page index from user space. This mode
can be used together with the dynamic UAR APIs that allocate and destroy
a UAR object.

As part of enabling this option, block the unsupported cross-channel
combination, which hard-codes UAR index 0. That QP flag was never exposed
to user space in any formal upstream release; the dynamic option allows
supplying a valid UAR page index instead.

Signed-off-by: Yishai Hadas
Reviewed-by: Michael Guralnik
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/qp.c | 27 +++++++++++++++++----------
 include/uapi/rdma/mlx5-abi.h    |  1 +
 2 files changed, 18 insertions(+), 10 deletions(-)
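For orientation: with the new flag there is no new field — the existing
bfreg_index member is reinterpreted as a UAR page index, as the switch in
create_user_qp() below shows. A hedged sketch of the provider side (the
helper is illustrative; the struct, the flag, and the reuse of bfreg_index
come from the patch):

    #include <rdma/mlx5-abi.h>  /* kernel uapi header */

    static void set_qp_dynamic_uar(struct mlx5_ib_create_qp *ucmd,
                                   __u32 uar_page_id)
    {
            /* Mutually exclusive with MLX5_QP_FLAG_BFREG_INDEX; the kernel
             * rejects the combination via the switch on uar_flags.
             */
            ucmd->flags |= MLX5_QP_FLAG_UAR_PAGE_INDEX;
            ucmd->bfreg_index = uar_page_id; /* carries the UAR page index */
    }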
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 88db580f7272..380ba3321851 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -919,6 +919,7 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 	void *qpc;
 	int err;
 	u16 uid;
+	u32 uar_flags;
 
 	err = ib_copy_from_udata(&ucmd, udata, sizeof(ucmd));
 	if (err) {
@@ -928,24 +929,29 @@ static int create_user_qp(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 
 	context = rdma_udata_to_drv_context(udata, struct mlx5_ib_ucontext,
 					    ibucontext);
-	if (ucmd.flags & MLX5_QP_FLAG_BFREG_INDEX) {
+	uar_flags = ucmd.flags & (MLX5_QP_FLAG_UAR_PAGE_INDEX |
+				  MLX5_QP_FLAG_BFREG_INDEX);
+	switch (uar_flags) {
+	case MLX5_QP_FLAG_UAR_PAGE_INDEX:
+		uar_index = ucmd.bfreg_index;
+		bfregn = MLX5_IB_INVALID_BFREG;
+		break;
+	case MLX5_QP_FLAG_BFREG_INDEX:
 		uar_index = bfregn_to_uar_index(dev, &context->bfregi,
 						ucmd.bfreg_index, true);
 		if (uar_index < 0)
 			return uar_index;
-
 		bfregn = MLX5_IB_INVALID_BFREG;
-	} else if (qp->flags & MLX5_IB_QP_CROSS_CHANNEL) {
-		/*
-		 * TBD: should come from the verbs when we have the API
-		 */
-		/* In CROSS_CHANNEL CQ and QP must use the same UAR */
-		bfregn = MLX5_CROSS_CHANNEL_BFREG;
-	}
-	else {
+		break;
+	case 0:
+		if (qp->flags & MLX5_IB_QP_CROSS_CHANNEL)
+			return -EINVAL;
 		bfregn = alloc_bfreg(dev, &context->bfregi);
 		if (bfregn < 0)
 			return bfregn;
+		break;
+	default:
+		return -EINVAL;
 	}
 
 	mlx5_ib_dbg(dev, "bfregn 0x%x, uar_index 0x%x\n", bfregn, uar_index);
@@ -2100,6 +2106,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
 			      MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC |
 			      MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_UC |
 			      MLX5_QP_FLAG_TUNNEL_OFFLOADS |
+			      MLX5_QP_FLAG_UAR_PAGE_INDEX |
 			      MLX5_QP_FLAG_TYPE_DCI |
 			      MLX5_QP_FLAG_TYPE_DCT))
 		return -EINVAL;
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index e900f9a64feb..a65d60b44829 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -49,6 +49,7 @@ enum {
 	MLX5_QP_FLAG_TIR_ALLOW_SELF_LB_MC	= 1 << 7,
 	MLX5_QP_FLAG_ALLOW_SCATTER_CQE	= 1 << 8,
 	MLX5_QP_FLAG_PACKET_BASED_CREDIT_MODE	= 1 << 9,
+	MLX5_QP_FLAG_UAR_PAGE_INDEX = 1 << 10,
 };
 
 enum {

From patchwork Wed Mar 18 12:43:29 2020
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Yishai Hadas, linux-rdma@vger.kernel.org, Michael Guralnik, netdev@vger.kernel.org, Saeed Mahameed
Subject: [PATCH mlx5-next 4/4] IB/mlx5: Move to fully dynamic UAR mode once user space supports it
Date: Wed, 18 Mar 2020 14:43:29 +0200
Message-Id: <20200318124329.52111-5-leon@kernel.org>
In-Reply-To: <20200318124329.52111-1-leon@kernel.org>

From: Yishai Hadas

Move to fully dynamic UAR mode once user space declares support for it.
In this case, any legacy use of UARs is prevented on the allocated
context, and the redundant allocation of the static ones is avoided.

Signed-off-by: Yishai Hadas
Reviewed-by: Michael Guralnik
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/cq.c   |  8 ++++++--
 drivers/infiniband/hw/mlx5/main.c | 13 ++++++++++++-
 drivers/infiniband/hw/mlx5/qp.c   |  6 ++++++
 include/linux/mlx5/driver.h       |  1 +
 include/uapi/rdma/mlx5-abi.h      |  1 +
 5 files changed, 26 insertions(+), 3 deletions(-)
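For orientation: a provider that allocates all of its UARs dynamically
opts in through lib_caps in the ucontext request, after which the kernel
skips the static bfreg/UAR setup and reports tot_bfregs = 0 back. A hedged
sketch (the helper is illustrative; the flag and the struct come from the
patch):

    #include <rdma/mlx5-abi.h>  /* kernel uapi header */

    static void request_dynamic_uar_mode(struct mlx5_ib_alloc_ucontext_req_v2 *req)
    {
            /* With this bit set, legacy UAR/bfreg usage on the context is
             * rejected (alloc_bfreg()/bfregn_to_uar_index() return -EINVAL).
             */
            req->lib_caps |= MLX5_LIB_CAP_DYN_UAR;
    }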
diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index eafedc2f697b..146ba2966744 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -764,10 +764,14 @@ static int create_cq_user(struct mlx5_ib_dev *dev, struct ib_udata *udata,
 	MLX5_SET(cqc, cqc, log_page_size,
 		 page_shift - MLX5_ADAPTER_PAGE_SHIFT);
 
-	if (ucmd.flags & MLX5_IB_CREATE_CQ_FLAGS_UAR_PAGE_INDEX)
+	if (ucmd.flags & MLX5_IB_CREATE_CQ_FLAGS_UAR_PAGE_INDEX) {
 		*index = ucmd.uar_page_index;
-	else
+	} else if (context->bfregi.lib_uar_dyn) {
+		err = -EINVAL;
+		goto err_cqb;
+	} else {
 		*index = context->bfregi.sys_pages[0];
+	}
 
 	if (ucmd.cqe_comp_en == 1) {
 		int mini_cqe_format;
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index e8787af2d74d..e355e06bf3ac 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1787,6 +1787,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
 						     max_cqe_version);
 	u32 dump_fill_mkey;
 	bool lib_uar_4k;
+	bool lib_uar_dyn;
 
 	if (!dev->ib_active)
 		return -EAGAIN;
@@ -1845,8 +1846,14 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
 	}
 
 	lib_uar_4k = req.lib_caps & MLX5_LIB_CAP_4K_UAR;
+	lib_uar_dyn = req.lib_caps & MLX5_LIB_CAP_DYN_UAR;
 	bfregi = &context->bfregi;
 
+	if (lib_uar_dyn) {
+		bfregi->lib_uar_dyn = lib_uar_dyn;
+		goto uar_done;
+	}
+
 	/* updates req->total_num_bfregs */
 	err = calc_total_bfregs(dev, lib_uar_4k, &req, bfregi);
 	if (err)
@@ -1873,6 +1880,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
 	if (err)
 		goto out_sys_pages;
 
+uar_done:
 	if (req.flags & MLX5_IB_ALLOC_UCTX_DEVX) {
 		err = mlx5_ib_devx_create(dev, true);
 		if (err < 0)
@@ -1894,7 +1902,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
 	INIT_LIST_HEAD(&context->db_page_list);
 	mutex_init(&context->db_page_mutex);
 
-	resp.tot_bfregs = req.total_num_bfregs;
+	resp.tot_bfregs = lib_uar_dyn ? 0 : req.total_num_bfregs;
 	resp.num_ports = dev->num_ports;
 
 	if (offsetofend(typeof(resp), cqe_version) <= udata->outlen)
@@ -2142,6 +2150,9 @@ static int uar_mmap(struct mlx5_ib_dev *dev, enum mlx5_ib_mmap_cmd cmd,
 	int max_valid_idx = dyn_uar ?
 			bfregi->num_sys_pages : bfregi->num_static_sys_pages;
 
+	if (bfregi->lib_uar_dyn)
+		return -EINVAL;
+
 	if (vma->vm_end - vma->vm_start != PAGE_SIZE)
 		return -EINVAL;
 
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 380ba3321851..319d514a2223 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -697,6 +697,9 @@ static int alloc_bfreg(struct mlx5_ib_dev *dev,
 {
 	int bfregn = -ENOMEM;
 
+	if (bfregi->lib_uar_dyn)
+		return -EINVAL;
+
 	mutex_lock(&bfregi->lock);
 	if (bfregi->ver >= 2) {
 		bfregn = alloc_high_class_bfreg(dev, bfregi);
@@ -768,6 +771,9 @@ int bfregn_to_uar_index(struct mlx5_ib_dev *dev,
 	u32 index_of_sys_page;
 	u32 offset;
 
+	if (bfregi->lib_uar_dyn)
+		return -EINVAL;
+
 	bfregs_per_sys_page = get_uars_per_sys_page(dev, bfregi->lib_uar_4k) *
 				MLX5_NON_FP_BFREGS_PER_UAR;
 	index_of_sys_page = bfregn / bfregs_per_sys_page;
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 3f10a9633012..e4ab0eb9d202 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -224,6 +224,7 @@ struct mlx5_bfreg_info {
 	struct mutex lock;
 	u32 ver;
 	bool lib_uar_4k;
+	u8 lib_uar_dyn : 1;
 	u32 num_sys_pages;
 	u32 num_static_sys_pages;
 	u32 total_num_bfregs;
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index a65d60b44829..df1cc3641bda 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -79,6 +79,7 @@ struct mlx5_ib_alloc_ucontext_req {
 
 enum mlx5_lib_caps {
 	MLX5_LIB_CAP_4K_UAR	= (__u64)1 << 0,
+	MLX5_LIB_CAP_DYN_UAR = (__u64)1 << 1,
 };
 
 enum mlx5_ib_alloc_uctx_v2_flags {