From patchwork Sun Nov 15 12:30:32 2015
X-Patchwork-Submitter: Matan Barak
X-Patchwork-Id: 7618901
From: Matan Barak
To: Eli Cohen
Cc: linux-rdma@vger.kernel.org, Doug Ledford, Matan Barak,
 Eran Ben Elisha, Christoph Lameter
Subject: [PATCH libmlx5 5/7] Add ibv_query_values support
Date: Sun, 15 Nov 2015 14:30:32 +0200
Message-Id: <1447590634-12858-6-git-send-email-matanb@mellanox.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1447590634-12858-1-git-send-email-matanb@mellanox.com>
References: <1447590634-12858-1-git-send-email-matanb@mellanox.com>
In order to query the HCA's current core clock, libmlx5 needs to support the
ibv_query_values verb. The hardware's cycles register is queried by mmap()ing
it into user-space, so we map the register when libmlx5 initializes its
context. This assumes the machine's architecture places PCI and memory in the
same address space.

Signed-off-by: Matan Barak
---
 src/mlx5.c  | 38 ++++++++++++++++++++++++++++++++++++++
 src/mlx5.h  |  6 +++++-
 src/verbs.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/src/mlx5.c b/src/mlx5.c
index 229d99d..81d1da2 100644
--- a/src/mlx5.c
+++ b/src/mlx5.c
@@ -524,6 +524,30 @@ static int single_threaded_app(void)
 	return 0;
 }
 
+static int mlx5_map_internal_clock(struct mlx5_device *mdev,
+				   struct ibv_context *ibv_ctx)
+{
+	struct mlx5_context *context = to_mctx(ibv_ctx);
+	void *hca_clock_page;
+	off_t offset = 0;
+
+	set_command(MLX5_MMAP_GET_CORE_CLOCK_CMD, &offset);
+	hca_clock_page = mmap(NULL, mdev->page_size,
+			      PROT_READ, MAP_SHARED, ibv_ctx->cmd_fd,
+			      mdev->page_size * offset);
+
+	if (hca_clock_page == MAP_FAILED) {
+		fprintf(stderr, PFX
+			"Warning: Timestamp available,\n"
+			"but failed to mmap() hca core clock page.\n");
+		return -1;
+	}
+
+	context->hca_core_clock = hca_clock_page +
+		(context->core_clock.offset & (mdev->page_size - 1));
+	return 0;
+}
+
 static int mlx5_init_context(struct verbs_device *vdev,
 			     struct ibv_context *ctx, int cmd_fd)
 {
@@ -539,6 +563,10 @@ static int mlx5_init_context(struct verbs_device *vdev,
 	off_t offset;
 	struct mlx5_device *mdev;
 	struct verbs_context *v_ctx;
+	int ret;
+	uint32_t comp_mask;
+	struct ibv_device_attr_ex dev_attrs;
+	struct ibv_query_device_ex_input dev_attrs_input = {.comp_mask = 0};
 
 	mdev = to_mdev(&vdev->device);
 	v_ctx = verbs_get_ctx(ctx);
@@ -647,6 +675,12 @@ static int mlx5_init_context(struct verbs_device *vdev,
 		context->bfs[j].uuarn = j;
 	}
 
+	context->hca_core_clock = NULL;
+	ret = _mlx5_query_device_ex(ctx, &dev_attrs_input, &dev_attrs,
+				    sizeof(dev_attrs), &comp_mask);
+	if (!ret && comp_mask & QUERY_DEVICE_RESP_MASK_TIMESTAMP)
+		mlx5_map_internal_clock(mdev, ctx);
+
 	mlx5_spinlock_init(&context->lock32);
 
 	context->prefer_bf = get_always_bf();
@@ -664,6 +698,7 @@ static int mlx5_init_context(struct verbs_device *vdev,
 	verbs_set_ctx_op(v_ctx, create_srq_ex, mlx5_create_srq_ex);
 	verbs_set_ctx_op(v_ctx, get_srq_num, mlx5_get_srq_num);
 	verbs_set_ctx_op(v_ctx, query_device_ex, mlx5_query_device_ex);
+	verbs_set_ctx_op(v_ctx, query_values, mlx5_query_values);
 	verbs_set_ctx_op(v_ctx, create_cq_ex, mlx5_create_cq_ex);
 	if (context->cqe_version && context->cqe_version == 1)
 		verbs_set_ctx_op(v_ctx, poll_cq_ex, mlx5_poll_cq_v1_ex);
@@ -697,6 +732,9 @@ static void mlx5_cleanup_context(struct verbs_device *device,
 		if (context->uar[i])
 			munmap(context->uar[i], page_size);
 	}
+	if (context->hca_core_clock)
+		munmap(context->hca_core_clock - context->core_clock.offset,
+		       page_size);
 
 	close_debug_file(context);
 }
diff --git a/src/mlx5.h b/src/mlx5.h
index 66dc4a9..818fe85 100644
--- a/src/mlx5.h
+++ b/src/mlx5.h
@@ -117,7 +117,8 @@ enum {
 enum {
 	MLX5_MMAP_GET_REGULAR_PAGES_CMD = 0,
-	MLX5_MMAP_GET_CONTIGUOUS_PAGES_CMD = 1
+	MLX5_MMAP_GET_CONTIGUOUS_PAGES_CMD = 1,
+	MLX5_MMAP_GET_CORE_CLOCK_CMD = 5
 };
 
 #define MLX5_CQ_PREFIX "MLX_CQ"
@@ -311,6 +312,7 @@ struct mlx5_context {
 		uint64_t	offset;
 		uint64_t	mask;
 	} core_clock;
+	void		       *hca_core_clock;
 };
 
 struct mlx5_bitmap {
@@ -593,6 +595,8 @@ int mlx5_query_device_ex(struct ibv_context *context,
 			 const struct ibv_query_device_ex_input *input,
 			 struct ibv_device_attr_ex *attr,
 			 size_t attr_size);
+int mlx5_query_values(struct ibv_context *context,
+		      struct ibv_values_ex *values);
 struct ibv_qp *mlx5_create_qp_ex(struct ibv_context *context,
 				 struct ibv_qp_init_attr_ex *attr);
 int mlx5_query_port(struct ibv_context *context, uint8_t port,
diff --git a/src/verbs.c b/src/verbs.c
index 76885f3..50955ae 100644
--- a/src/verbs.c
+++ b/src/verbs.c
@@ -79,6 +79,52 @@ int mlx5_query_device(struct ibv_context *context, struct ibv_device_attr *attr)
 	return 0;
 }
 
+#define READL(ptr) (*((uint32_t *)(ptr)))
+static int mlx5_read_clock(struct ibv_context *context, uint64_t *cycles)
+{
+	unsigned int clockhi, clocklo, clockhi1;
+	int i;
+	struct mlx5_context *ctx = to_mctx(context);
+
+	if (!ctx->hca_core_clock)
+		return -EOPNOTSUPP;
+
+	/* Handle wraparound */
+	for (i = 0; i < 2; i++) {
+		clockhi = ntohl(READL(ctx->hca_core_clock));
+		clocklo = ntohl(READL(ctx->hca_core_clock + 4));
+		clockhi1 = ntohl(READL(ctx->hca_core_clock));
+		if (clockhi == clockhi1)
+			break;
+	}
+
+	*cycles = (uint64_t)clockhi << 32 | (uint64_t)clocklo;
+
+	return 0;
+}
+
+int mlx5_query_values(struct ibv_context *context,
+		      struct ibv_values_ex *values)
+{
+	uint32_t comp_mask = 0;
+	int err = 0;
+
+	if (values->comp_mask & IBV_VALUES_MASK_RAW_CLOCK) {
+		uint64_t cycles;
+
+		err = mlx5_read_clock(context, &cycles);
+		if (!err) {
+			values->raw_clock.tv_sec = 0;
+			values->raw_clock.tv_nsec = cycles;
+			comp_mask |= IBV_VALUES_MASK_RAW_CLOCK;
+		}
+	}
+
+	values->comp_mask = comp_mask;
+
+	return err;
+}
+
 int mlx5_query_port(struct ibv_context *context, uint8_t port,
 		    struct ibv_port_attr *attr)
 {