From patchwork Tue Nov 17 21:52:19 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Haggai Abramovsky
X-Patchwork-Id: 7642491
From: Haggai Abramovsky
To: Eli Cohen
Cc: linux-rdma@vger.kernel.org, Haggai Abramovsky
Subject: [PATCH libmlx5 v2 2/5] libmlx5: Add QPs and XSRQs resource tracking
Date: Tue, 17 Nov 2015 23:52:19 +0200
Message-Id: <1447797142-24149-3-git-send-email-hagaya@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1447797142-24149-1-git-send-email-hagaya@mellanox.com>
References: <1447797142-24149-1-git-send-email-hagaya@mellanox.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Haggai Abramovsky

Add a new database that stores the contexts of all QPs and XSRQs.
Insertions into and deletions from the database are keyed by the
object's user-index. This database will allow us to retrieve these
objects (QPs and XSRQs) by their user-index in poll_one().
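
For readers unfamiliar with the layout, here is a minimal standalone
sketch (not part of the patch; names and values are local to the
example) of how a 24-bit user-index splits into a bucket index and a
slot index with the constants this patch introduces below:

	#include <stdio.h>
	#include <stdint.h>

	enum {
		UIDX_TABLE_SHIFT = 12,				/* low 12 bits: slot in a bucket */
		UIDX_TABLE_MASK  = (1 << UIDX_TABLE_SHIFT) - 1,	/* 0xfff */
		UIDX_TABLE_SIZE  = 1 << (24 - UIDX_TABLE_SHIFT),/* 4096 buckets */
	};

	int main(void)
	{
		uint32_t uidx = 0x123456;			/* example 24-bit index */
		uint32_t tind = uidx >> UIDX_TABLE_SHIFT;	/* top 12 bits: bucket */
		uint32_t slot = uidx & UIDX_TABLE_MASK;		/* low 12 bits: slot */

		printf("uidx 0x%06x -> bucket %u, slot %u\n", uidx, tind, slot);
		return 0;
	}

The two levels mean the table for a bucket is only allocated once the
first index in that bucket is actually in use.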
Signed-off-by: Haggai Abramovsky
---
 src/mlx5.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 src/mlx5.h | 24 ++++++++++++++++++++++
 2 files changed, 91 insertions(+)

diff --git a/src/mlx5.c b/src/mlx5.c
index e44898a..dc4c5c4 100644
--- a/src/mlx5.c
+++ b/src/mlx5.c
@@ -128,6 +128,73 @@ static int read_number_from_line(const char *line, int *value)
 	return 0;
 }
 
+static int32_t get_free_uidx(struct mlx5_context *ctx)
+{
+	int32_t tind;
+	int32_t i;
+
+	for (tind = 0; tind < MLX5_UIDX_TABLE_SIZE; tind++) {
+		if (ctx->uidx_table[tind].refcnt < MLX5_UIDX_TABLE_MASK)
+			break;
+	}
+
+	if (tind == MLX5_UIDX_TABLE_SIZE)
+		return -1;
+
+	if (!ctx->uidx_table[tind].refcnt)
+		return tind << MLX5_UIDX_TABLE_SHIFT;
+
+	for (i = 0; i < MLX5_UIDX_TABLE_MASK + 1; i++) {
+		if (!ctx->uidx_table[tind].table[i])
+			break;
+	}
+
+	return (tind << MLX5_UIDX_TABLE_SHIFT) | i;
+}
+
+int32_t mlx5_store_uidx(struct mlx5_context *ctx, void *rsc)
+{
+	int32_t tind;
+	int32_t uidx;
+
+	pthread_mutex_lock(&ctx->uidx_table_mutex);
+	uidx = get_free_uidx(ctx);
+	if (uidx < 0)
+		goto out;
+
+	tind = uidx >> MLX5_UIDX_TABLE_SHIFT;
+
+	if (!ctx->uidx_table[tind].refcnt) {
+		ctx->uidx_table[tind].table = calloc(MLX5_UIDX_TABLE_MASK + 1,
+						     sizeof(void *));
+		if (!ctx->uidx_table[tind].table) {
+			uidx = -1;
+			goto out;
+		}
+	}
+
+	++ctx->uidx_table[tind].refcnt;
+	ctx->uidx_table[tind].table[uidx & MLX5_UIDX_TABLE_MASK] = rsc;
+
+out:
+	pthread_mutex_unlock(&ctx->uidx_table_mutex);
+	return uidx;
+}
+
+void mlx5_clear_uidx(struct mlx5_context *ctx, uint32_t uidx)
+{
+	int tind = uidx >> MLX5_UIDX_TABLE_SHIFT;
+
+	pthread_mutex_lock(&ctx->uidx_table_mutex);
+
+	if (!--ctx->uidx_table[tind].refcnt)
+		free(ctx->uidx_table[tind].table);
+	else
+		ctx->uidx_table[tind].table[uidx & MLX5_UIDX_TABLE_MASK] = NULL;
+
+	pthread_mutex_unlock(&ctx->uidx_table_mutex);
+}
+
 static int mlx5_is_sandy_bridge(int *num_cores)
 {
 	char line[128];
diff --git a/src/mlx5.h b/src/mlx5.h
index d8ce908..dd618bd 100644
--- a/src/mlx5.h
+++ b/src/mlx5.h
@@ -165,6 +165,12 @@ enum {
 };
 
 enum {
+	MLX5_UIDX_TABLE_SHIFT	= 12,
+	MLX5_UIDX_TABLE_MASK	= (1 << MLX5_UIDX_TABLE_SHIFT) - 1,
+	MLX5_UIDX_TABLE_SIZE	= 1 << (24 - MLX5_UIDX_TABLE_SHIFT),
+};
+
+enum {
 	MLX5_SRQ_TABLE_SHIFT	= 12,
 	MLX5_SRQ_TABLE_MASK	= (1 << MLX5_SRQ_TABLE_SHIFT) - 1,
 	MLX5_SRQ_TABLE_SIZE	= 1 << (24 - MLX5_SRQ_TABLE_SHIFT),
@@ -275,6 +281,12 @@ struct mlx5_context {
 	}				srq_table[MLX5_SRQ_TABLE_SIZE];
 	pthread_mutex_t			srq_table_mutex;
 
+	struct {
+		struct mlx5_resource  **table;
+		int			refcnt;
+	}				uidx_table[MLX5_UIDX_TABLE_SIZE];
+	pthread_mutex_t			uidx_table_mutex;
+
 	void			       *uar[MLX5_MAX_UAR_PAGES];
 	struct mlx5_spinlock		lock32;
 	struct mlx5_db_page	       *db_list;
@@ -616,6 +628,8 @@ void mlx5_set_sq_sizes(struct mlx5_qp *qp, struct ibv_qp_cap *cap,
 struct mlx5_qp *mlx5_find_qp(struct mlx5_context *ctx, uint32_t qpn);
 int mlx5_store_qp(struct mlx5_context *ctx, uint32_t qpn, struct mlx5_qp *qp);
 void mlx5_clear_qp(struct mlx5_context *ctx, uint32_t qpn);
+int32_t mlx5_store_uidx(struct mlx5_context *ctx, void *rsc);
+void mlx5_clear_uidx(struct mlx5_context *ctx, uint32_t uidx);
 struct mlx5_srq *mlx5_find_srq(struct mlx5_context *ctx, uint32_t srqn);
 int mlx5_store_srq(struct mlx5_context *ctx, uint32_t srqn,
 		   struct mlx5_srq *srq);
@@ -640,6 +654,16 @@ int mlx5_close_xrcd(struct ibv_xrcd *ib_xrcd);
 struct ibv_srq *mlx5_create_srq_ex(struct ibv_context *context,
 				   struct ibv_srq_init_attr_ex *attr);
 
+static inline void *mlx5_find_uidx(struct mlx5_context *ctx, uint32_t uidx)
+{
+	int tind = uidx >> MLX5_UIDX_TABLE_SHIFT;
+
+	if (likely(ctx->uidx_table[tind].refcnt))
+		return ctx->uidx_table[tind].table[uidx & MLX5_UIDX_TABLE_MASK];
+
+	return NULL;
+}
+
 static inline int mlx5_spin_lock(struct mlx5_spinlock *lock)
 {
 	if (!mlx5_single_threaded)
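
A hypothetical caller-side sketch, not part of the patch: how a QP
create/poll/destroy path would use the new helpers. It assumes the
declarations from src/mlx5.h above; example_track_qp is an illustrative
name only, and error handling is reduced to the essentials.

	static int example_track_qp(struct mlx5_context *ctx, struct mlx5_qp *qp)
	{
		int32_t uidx;

		/* Create path: reserve a free user-index and point it at the QP. */
		uidx = mlx5_store_uidx(ctx, qp);
		if (uidx < 0)
			return -1;	/* all indices in use, or bucket calloc failed */

		/* Poll path: a CQE carries the user-index; map it back to the QP. */
		if (mlx5_find_uidx(ctx, (uint32_t)uidx) != qp)
			return -1;

		/* Destroy path: release the slot; the bucket is freed once its
		 * last entry is cleared. */
		mlx5_clear_uidx(ctx, (uint32_t)uidx);
		return 0;
	}

Because buckets are calloc'ed on demand and reference-counted, the
2^24-entry index space costs memory only in proportion to the objects
actually alive.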