From patchwork Wed Jan 17 13:51:29 2018
X-Patchwork-Submitter: Yishai Hadas <yishaih@mellanox.com>
X-Patchwork-Id: 10169295
From: Yishai Hadas <yishaih@mellanox.com>
To: linux-rdma@vger.kernel.org
Cc: yishaih@mellanox.com, ferasda@mellanox.com, jgg@mellanox.com,
	majd@mellanox.com, valex@mellanox.com, Eitan Rabin
Subject: [PATCH rdma-core 1/5] mlx5: Support for user space clock info
Date: Wed, 17 Jan 2018 15:51:29 +0200
Message-Id: <1516197093-20699-2-git-send-email-yishaih@mellanox.com>
In-Reply-To: <1516197093-20699-1-git-send-email-yishaih@mellanox.com>
References: <1516197093-20699-1-git-send-email-yishaih@mellanox.com>

From: Feras Daoud <ferasda@mellanox.com>

This patch maps the kernel driver's clock info page to userspace. With
this, users are able to read the clock info directly from the shared
page without any system calls.
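[Editor's illustration, not part of the patch: a minimal consumer-side
sketch of how the mapped page can be read without system calls. It
assumes the kernel publishes updates to the page seqlock-style through
the 'sig' field (odd while an update is in flight), which is the usual
convention for shared clock pages; the local struct mirrors the ABI
struct added below, and the helper name is hypothetical.]

	#include <stdint.h>

	/* Mirrors struct mlx5_ib_clock_info from providers/mlx5/mlx5-abi.h below */
	struct clock_info {
		uint32_t sig;
		uint32_t resv;
		uint64_t nsec;
		uint64_t last_cycles;
		uint64_t frac;
		uint32_t mult;
		uint32_t shift;
		uint64_t mask;
		uint64_t overflow_period;
	};

	/* Convert a raw HCA cycle count to nanoseconds using the shared page. */
	static uint64_t clock_info_cycles_to_ns(const volatile struct clock_info *ci,
						uint64_t cycles)
	{
		uint64_t nsec, frac, last, mask;
		uint32_t sig, mult, shift;

		do {
			/* Wait out any in-flight kernel update (odd 'sig'). */
			while ((sig = ci->sig) & 1)
				;
			__atomic_thread_fence(__ATOMIC_ACQUIRE);

			nsec  = ci->nsec;
			last  = ci->last_cycles;
			frac  = ci->frac;
			mult  = ci->mult;
			shift = ci->shift;
			mask  = ci->mask;

			__atomic_thread_fence(__ATOMIC_ACQUIRE);
			/* Retry if the kernel republished while we were reading. */
		} while (ci->sig != sig);

		/* Timecounter math: ns = nsec + ((delta * mult + frac) >> shift) */
		return nsec + ((((cycles - last) & mask) * mult + frac) >> shift);
	}

The retry loop is what makes the page safe to read concurrently with
kernel updates, so the whole conversion can run in user space.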
Signed-off-by: Feras Daoud <ferasda@mellanox.com>
Signed-off-by: Eitan Rabin
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
---
 providers/mlx5/mlx5-abi.h | 20 +++++++++++++++++++-
 providers/mlx5/mlx5.c     | 27 +++++++++++++++++++++++++++
 providers/mlx5/mlx5.h     |  2 ++
 3 files changed, 48 insertions(+), 1 deletion(-)

diff --git a/providers/mlx5/mlx5-abi.h b/providers/mlx5/mlx5-abi.h
index 7b96429..c5d323e 100644
--- a/providers/mlx5/mlx5-abi.h
+++ b/providers/mlx5/mlx5-abi.h
@@ -81,6 +81,23 @@ enum mlx5_ib_alloc_ucontext_resp_mask {
 	MLX5_IB_ALLOC_UCONTEXT_RESP_MASK_CORE_CLOCK_OFFSET = 1UL << 0,
 };
 
+/* Bit indexes for the mlx5_alloc_ucontext_resp.clock_info_versions bitmap */
+enum {
+	MLX5_IB_CLOCK_INFO_V1 = 0,
+};
+
+struct mlx5_ib_clock_info {
+	__u32 sig;
+	__u32 resv;
+	__u64 nsec;
+	__u64 last_cycles;
+	__u64 frac;
+	__u32 mult;
+	__u32 shift;
+	__u64 mask;
+	__u64 overflow_period;
+};
+
 struct mlx5_alloc_ucontext_resp {
 	struct ibv_get_context_resp	ibv_resp;
 	__u32				qp_tab_size;
@@ -98,7 +115,8 @@ struct mlx5_alloc_ucontext_resp {
 	__u32				response_length;
 	__u8				cqe_version;
 	__u8				cmds_supp_uhw;
-	__u16				reserved2;
+	__u8				reserved2;
+	__u8				clock_info_versions;
 	__u64				hca_core_clock_offset;
 	__u32				log_uar_size;
 	__u32				num_uars_per_page;
diff --git a/providers/mlx5/mlx5.c b/providers/mlx5/mlx5.c
index 52eb4f9..28bb320 100644
--- a/providers/mlx5/mlx5.c
+++ b/providers/mlx5/mlx5.c
@@ -607,6 +607,23 @@ static int mlx5_map_internal_clock(struct mlx5_device *mdev,
 	return 0;
 }
 
+static void mlx5_map_clock_info(struct mlx5_device *mdev,
+				struct ibv_context *ibv_ctx)
+{
+	struct mlx5_context *context = to_mctx(ibv_ctx);
+	void *clock_info_page;
+	off_t offset = 0;
+
+	set_command(MLX5_MMAP_GET_CLOCK_INFO_CMD, &offset);
+	set_index(MLX5_IB_CLOCK_INFO_V1, &offset);
+	clock_info_page = mmap(NULL, mdev->page_size,
+			       PROT_READ, MAP_SHARED, ibv_ctx->cmd_fd,
+			       offset * mdev->page_size);
+
+	if (clock_info_page != MAP_FAILED)
+		context->clock_info_page = clock_info_page;
+}
+
 int mlx5dv_query_device(struct ibv_context *ctx_in,
 			struct mlx5dv_context *attrs_out)
 {
@@ -1027,6 +1044,14 @@ static int mlx5_init_context(struct verbs_device *vdev,
 		mlx5_map_internal_clock(mdev, ctx);
 	}
 
+	context->clock_info_page = NULL;
+	if (resp.response_length + sizeof(resp.ibv_resp) >=
+	    offsetof(struct mlx5_alloc_ucontext_resp, clock_info_versions) +
+	    sizeof(resp.clock_info_versions) &&
+	    (resp.clock_info_versions & (1 << MLX5_IB_CLOCK_INFO_V1))) {
+		mlx5_map_clock_info(mdev, ctx);
+	}
+
 	mlx5_read_env(&vdev->device, context);
 
 	mlx5_spinlock_init(&context->hugetlb_lock);
@@ -1107,6 +1132,8 @@ static void mlx5_cleanup_context(struct verbs_device *device,
 	if (context->hca_core_clock)
 		munmap(context->hca_core_clock - context->core_clock.offset,
 		       page_size);
+	if (context->clock_info_page)
+		munmap((void *)context->clock_info_page, page_size);
 	close_debug_file(context);
 }
 
diff --git a/providers/mlx5/mlx5.h b/providers/mlx5/mlx5.h
index d8b1858..c0f342d 100644
--- a/providers/mlx5/mlx5.h
+++ b/providers/mlx5/mlx5.h
@@ -60,6 +60,7 @@ enum {
 	MLX5_MMAP_GET_CONTIGUOUS_PAGES_CMD = 1,
 	MLX5_MMAP_GET_CORE_CLOCK_CMD = 5,
 	MLX5_MMAP_ALLOC_WC = 6,
+	MLX5_MMAP_GET_CLOCK_INFO_CMD = 7,
 };
 
 enum {
@@ -287,6 +288,7 @@ struct mlx5_context {
 		uint64_t			mask;
 	} core_clock;
 	void			       *hca_core_clock;
+	const struct mlx5_ib_clock_info *clock_info_page;
 	struct ibv_tso_caps	cached_tso_caps;
 	int			cmds_supp_uhw;
 	uint32_t		uar_size;
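
[Editor's illustration, not part of the patch: a sketch of how
set_command()/set_index() compose the mmap offset consumed by the
kernel side in mlx5_map_clock_info() above. The 8-bit command shift
mirrors the kernel's MLX5_IB_MMAP_CMD_SHIFT; treat the exact bit
layout and the helper name as assumptions of this illustration.]

	#include <sys/types.h>

	#define MLX5_IB_MMAP_CMD_SHIFT		8
	#define MLX5_MMAP_GET_CLOCK_INFO_CMD	7	/* from providers/mlx5/mlx5.h above */
	#define MLX5_IB_CLOCK_INFO_V1		0	/* from providers/mlx5/mlx5-abi.h above */

	static off_t clock_info_mmap_offset(long page_size)
	{
		off_t offset = 0;

		/* Command in the upper bits, clock info version in the low bits */
		offset |= MLX5_MMAP_GET_CLOCK_INFO_CMD << MLX5_IB_MMAP_CMD_SHIFT;
		offset |= MLX5_IB_CLOCK_INFO_V1;

		/* mmap() takes a byte offset, so scale by the page size */
		return offset * page_size;
	}

mlx5_map_clock_info() passes the resulting offset to mmap() with
PROT_READ and MAP_SHARED; a failed mapping is non-fatal and simply
leaves context->clock_info_page NULL.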