From patchwork Sat Jan 16 15:55:57 2016
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 8049241
From: Leon Romanovsky
To: yishaih@mellanox.com
Cc: linux-rdma@vger.kernel.org, Leon Romanovsky
Subject: [PATCH libmlx5 V1 1/2] Add CQ ignore overrun creation flag
Date: Sat, 16 Jan 2016 17:55:57 +0200
Message-Id: <1452959758-29611-2-git-send-email-leon@leon.nu>
In-Reply-To: <1452959758-29611-1-git-send-email-leon@leon.nu>
References: <1452959758-29611-1-git-send-email-leon@leon.nu>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Leon Romanovsky

In cross-channel mode, the send/receive queues forward their completions
to the managing QP. This can cause overrun errors in the managed
send/receive queues. This patch adds the ability to provide CQ flags to
ibv_create_cq_ex() calls, and a new flag to disable CQ overrun checks.
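From the consumer side, the intent is that an application creating a CQ for a
managed queue in cross-channel mode sets the new flag in the extended creation
attributes. The sketch below illustrates only the flag semantics with a
simplified stand-in struct and an assumed bit value; the real attribute
structure and flag constants come from libibverbs' extended-verbs headers, not
from this patch:

```c
/* Assumed bit value for illustration only; the real constant is defined
 * by libibverbs (infiniband/verbs.h), not by this patch. */
#define IBV_CREATE_CQ_ATTR_IGNORE_OVERRUN (1 << 1)

/* Simplified, hypothetical stand-in for the extended CQ-creation
 * attributes (the real struct is ibv_create_cq_attr_ex). */
struct cq_attr_sketch {
	unsigned int cqe;         /* requested CQ depth */
	unsigned int comp_vector; /* completion vector */
	unsigned int flags;       /* creation flags, e.g. ignore-overrun */
};

/* Returns nonzero when the attributes ask the driver to skip CQ
 * overrun checks, as a managed queue in cross-channel mode would. */
static int wants_ignore_overrun(const struct cq_attr_sketch *attr)
{
	return (attr->flags & IBV_CREATE_CQ_ATTR_IGNORE_OVERRUN) != 0;
}
```

An application would populate such attributes and pass them to
ibv_create_cq_ex(); the driver then validates the flags against its supported
mask before issuing the kernel command.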
Signed-off-by: Leon Romanovsky
Reviewed-by: Sagi Grimberg
---
 src/mlx5-abi.h | 14 ++++++++++++++
 src/verbs.c    | 57 ++++++++++++++++++++++++++++++++++++++++++---------------
 2 files changed, 56 insertions(+), 15 deletions(-)

diff --git a/src/mlx5-abi.h b/src/mlx5-abi.h
index 769ea811d26b..85f6ee3f691e 100644
--- a/src/mlx5-abi.h
+++ b/src/mlx5-abi.h
@@ -91,6 +91,20 @@ struct mlx5_create_cq_resp {
 	__u32 cqn;
 };
 
+struct mlx5_create_cq_ex {
+	struct ibv_create_cq_ex	ibv_cmd;
+	__u64			buf_addr;
+	__u64			db_addr;
+	__u32			cqe_size;
+	__u32			comp_mask;
+};
+
+struct mlx5_create_cq_resp_ex {
+	struct ibv_create_cq_resp_ex	ibv_resp;
+	__u32				cqn;
+	__u32				comp_mask;
+};
+
 struct mlx5_create_srq {
 	struct ibv_create_srq	ibv_cmd;
 	__u64			buf_addr;
diff --git a/src/verbs.c b/src/verbs.c
index 94b4d8f2424f..064a500b0a06 100644
--- a/src/verbs.c
+++ b/src/verbs.c
@@ -250,17 +250,26 @@ enum {
 };
 
 enum {
-	CREATE_CQ_SUPPORTED_FLAGS = IBV_CREATE_CQ_ATTR_COMPLETION_TIMESTAMP
+	CREATE_CQ_SUPPORTED_FLAGS = IBV_CREATE_CQ_ATTR_COMPLETION_TIMESTAMP |
+				    IBV_CREATE_CQ_ATTR_IGNORE_OVERRUN
+};
+
+enum mlx5_cmd_type {
+	MLX5_LEGACY_CMD,
+	MLX5_EXTENDED_CMD
 };
 
 static struct ibv_cq *create_cq(struct ibv_context *context,
-				const struct ibv_create_cq_attr_ex *cq_attr)
+				struct ibv_create_cq_attr_ex *cq_attr,
+				enum mlx5_cmd_type ctype)
 {
 	struct mlx5_create_cq		cmd;
+	struct mlx5_create_cq_ex	cmd_ex;
 	struct mlx5_create_cq_resp	resp;
+	struct mlx5_create_cq_resp_ex	resp_ex;
 	struct mlx5_cq			*cq;
 	int				cqe_sz;
-	int				ret;
+	int				ret = -1;
 	int				ncqe;
 #ifdef MLX5_DEBUG
 	FILE *fp = to_mctx(context)->dbg_fp;
@@ -299,7 +308,6 @@ static struct ibv_cq *create_cq(struct ibv_context *context,
 		return NULL;
 	}
 
-	memset(&cmd, 0, sizeof cmd);
 	cq->cons_index = 0;
 
 	if (mlx5_spinlock_init(&cq->lock))
@@ -342,22 +350,41 @@ static struct ibv_cq *create_cq(struct ibv_context *context,
 	cq->arm_sn			= 0;
 	cq->cqe_sz			= cqe_sz;
 
-	cmd.buf_addr = (uintptr_t) cq->buf_a.buf;
-	cmd.db_addr  = (uintptr_t) cq->dbrec;
-	cmd.cqe_size = cqe_sz;
+	if (ctype == MLX5_LEGACY_CMD) {
+		memset(&cmd, 0, sizeof(cmd));
+		cmd.buf_addr = (uintptr_t) cq->buf_a.buf;
+		cmd.db_addr = (uintptr_t) cq->dbrec;
+		cmd.cqe_size = cqe_sz;
+
+		ret = ibv_cmd_create_cq(context, ncqe - 1, cq_attr->channel,
+					cq_attr->comp_vector,
+					&cq->ibv_cq, &cmd.ibv_cmd, sizeof cmd,
+					&resp.ibv_resp, sizeof resp);
+		cq->cqn = resp.cqn;
+	} else if (ctype == MLX5_EXTENDED_CMD) {
+		memset(&cmd_ex, 0, sizeof(cmd_ex));
+		cmd_ex.buf_addr = (uintptr_t) cq->buf_a.buf;
+		cmd_ex.db_addr = (uintptr_t) cq->dbrec;
+		cmd_ex.cqe_size = cqe_sz;
+
+		ret = ibv_cmd_create_cq_ex(context, cq_attr,
+					   &cq->ibv_cq, &cmd_ex.ibv_cmd,
+					   sizeof(cmd_ex.ibv_cmd), sizeof(cmd_ex),
+					   &resp_ex.ibv_resp,
+					   sizeof(resp_ex.ibv_resp), sizeof(resp_ex));
+		cq->cqn = resp_ex.cqn;
+	}
 
-	ret = ibv_cmd_create_cq(context, ncqe - 1, cq_attr->channel,
-				cq_attr->comp_vector,
-				&cq->ibv_cq, &cmd.ibv_cmd, sizeof cmd,
-				&resp.ibv_resp, sizeof resp);
 	if (ret) {
-		mlx5_dbg(fp, MLX5_DBG_CQ, "ret %d\n", ret);
+		mlx5_dbg(fp, MLX5_DBG_CQ, "ret %d, ctype = %d\n", ret, ctype);
 		goto err_db;
 	}
 
 	cq->active_buf = &cq->buf_a;
 	cq->resize_buf = NULL;
-	cq->cqn = resp.cqn;
+
 	cq->stall_enable = to_mctx(context)->stall_enable;
 	cq->stall_adaptive_enable = to_mctx(context)->stall_adaptive_enable;
 	cq->stall_cycles = to_mctx(context)->stall_cycles;
@@ -390,13 +417,13 @@ struct ibv_cq *mlx5_create_cq(struct ibv_context *context, int cqe,
 				   .comp_vector = comp_vector,
 				   .wc_flags = IBV_WC_STANDARD_FLAGS};
 
-	return create_cq(context, &cq_attr);
+	return create_cq(context, &cq_attr, MLX5_LEGACY_CMD);
 }
 
 struct ibv_cq *mlx5_create_cq_ex(struct ibv_context *context,
				 struct ibv_create_cq_attr_ex *cq_attr)
 {
-	return create_cq(context, cq_attr);
+	return create_cq(context, cq_attr, MLX5_EXTENDED_CMD);
 }
 
 int mlx5_resize_cq(struct ibv_cq *ibcq, int cqe)
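
For reference, the validation-and-dispatch shape the patch gives create_cq() can
be sketched in isolation as below. The flag bit values are assumptions chosen
for illustration (the real constants live in libibverbs), and select_cmd() is a
hypothetical helper standing in for the flag check plus legacy/extended command
selection, not code from the patch:

```c
/* Hypothetical stand-ins for the patch's types. */
enum mlx5_cmd_type {
	MLX5_LEGACY_CMD,
	MLX5_EXTENDED_CMD
};

/* Assumed bit values -- the real ones come from infiniband/verbs.h. */
#define IBV_CREATE_CQ_ATTR_COMPLETION_TIMESTAMP (1 << 0)
#define IBV_CREATE_CQ_ATTR_IGNORE_OVERRUN       (1 << 1)

#define CREATE_CQ_SUPPORTED_FLAGS \
	(IBV_CREATE_CQ_ATTR_COMPLETION_TIMESTAMP | \
	 IBV_CREATE_CQ_ATTR_IGNORE_OVERRUN)

/* Reject any flag outside the supported mask, then pick the legacy or
 * extended kernel command path; returns -1 on unsupported flags,
 * 0 for the legacy command, 1 for the extended command. */
static int select_cmd(unsigned int flags, enum mlx5_cmd_type ctype)
{
	if (flags & ~CREATE_CQ_SUPPORTED_FLAGS)
		return -1;
	return (ctype == MLX5_EXTENDED_CMD) ? 1 : 0;
}
```

This mirrors the design choice in the patch: one create_cq() body shared by
both entry points, with the command type decided by the caller
(mlx5_create_cq() passes MLX5_LEGACY_CMD, mlx5_create_cq_ex() passes
MLX5_EXTENDED_CMD) rather than duplicated setup code.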