From patchwork Mon Nov 3 08:02:42 2014
From: Eli Cohen <eli@mellanox.com>
To: roland@kernel.org
Cc: linux-rdma@vger.kernel.org, ogerlitz@mellanox.com, yevgenyp@mellanox.com,
 Eli Cohen <eli@mellanox.com>
Subject: [PATCH for-next 1/5] IB/mlx5: Fix sparse warnings
Date: Mon, 3 Nov 2014 10:02:42 +0200
Message-Id: <1415001766-8366-2-git-send-email-eli@mellanox.com>
In-Reply-To: <1415001766-8366-1-git-send-email-eli@mellanox.com>
References: <1415001766-8366-1-git-send-email-eli@mellanox.com>

1. Add the __acquire/__release annotations required to balance conditional
   spinlock usage; this silences sparse context-imbalance warnings.
2. Change the index parameter of begin_wqe() to unsigned to match the type
   of the supplied argument.
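For reviewers less familiar with sparse's context tracking, here is a
minimal sketch of the annotation pattern this patch applies (struct
hypo_buf and the helper names are hypothetical; only the idiom mirrors
the bf->need_lock code in the diff below). sparse counts lock
acquisitions per execution path, so a branch that skips the real
spin_lock() must still bump the counter with __acquire(), and the
matching unlock path must drop it with __release():

#include <linux/spinlock.h>

struct hypo_buf {
	bool		need_lock;
	spinlock_t	lock;
};

static void hypo_buf_lock(struct hypo_buf *bf)
	__acquires(&bf->lock)
{
	if (bf->need_lock)
		spin_lock(&bf->lock);
	else
		/* No-op at run time; only adjusts sparse's lock counter. */
		__acquire(&bf->lock);
}

static void hypo_buf_unlock(struct hypo_buf *bf)
	__releases(&bf->lock)
{
	if (bf->need_lock)
		spin_unlock(&bf->lock);
	else
		/* Balances the fake acquire on the unlocked path. */
		__release(&bf->lock);
}

Without the two else branches, sparse warns along the lines of "context
imbalance - different lock contexts for basic block", because one path
exits with the lock context at 1 and the other at 0. When sparse is not
running (__CHECKER__ undefined), __acquire()/__release() expand to
nothing, so runtime behavior is unchanged.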
Signed-off-by: Eli Cohen <eli@mellanox.com>
---
 drivers/infiniband/hw/mlx5/qp.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index e261a53f9a02..9ca39ad68cb8 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1011,9 +1011,14 @@ static void mlx5_ib_lock_cqs(struct mlx5_ib_cq *send_cq, struct mlx5_ib_cq *recv
 			}
 		} else {
 			spin_lock_irq(&send_cq->lock);
+			__acquire(&recv_cq->lock);
 		}
 	} else if (recv_cq) {
 		spin_lock_irq(&recv_cq->lock);
+		__acquire(&send_cq->lock);
+	} else {
+		__acquire(&send_cq->lock);
+		__acquire(&recv_cq->lock);
 	}
 }
 
@@ -1033,10 +1038,15 @@ static void mlx5_ib_unlock_cqs(struct mlx5_ib_cq *send_cq, struct mlx5_ib_cq *re
 				spin_unlock_irq(&recv_cq->lock);
 			}
 		} else {
+			__release(&recv_cq->lock);
 			spin_unlock_irq(&send_cq->lock);
 		}
 	} else if (recv_cq) {
+		__release(&send_cq->lock);
 		spin_unlock_irq(&recv_cq->lock);
+	} else {
+		__release(&recv_cq->lock);
+		__release(&send_cq->lock);
 	}
 }
 
@@ -2411,7 +2421,7 @@ static u8 get_fence(u8 fence, struct ib_send_wr *wr)
 
 static int begin_wqe(struct mlx5_ib_qp *qp, void **seg,
 		     struct mlx5_wqe_ctrl_seg **ctrl,
-		     struct ib_send_wr *wr, int *idx,
+		     struct ib_send_wr *wr, unsigned *idx,
 		     int *size, int nreq)
 {
 	int err = 0;
@@ -2737,6 +2747,8 @@ out:
 		if (bf->need_lock)
 			spin_lock(&bf->lock);
+		else
+			__acquire(&bf->lock);
 
 		/* TBD enable WC */
 		if (0 && nreq == 1 && bf->uuarn && inl && size > 1 &&
 		    size <= bf->buf_size / 16) {
@@ -2753,6 +2765,8 @@ out:
 		bf->offset ^= bf->buf_size;
 		if (bf->need_lock)
 			spin_unlock(&bf->lock);
+		else
+			__release(&bf->lock);
 	}
 
 	spin_unlock_irqrestore(&qp->sq.lock, flags);
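The begin_wqe() hunk addresses the other warning class: the caller passes
the address of an unsigned index, and an int * parameter is a
pointer-signedness mismatch. A hypothetical reduction (names not from the
driver):

static void fill_idx(unsigned *idx)	/* was: int *idx */
{
	*idx = 0;
}

static void hypo_caller(void)
{
	unsigned idx;

	/*
	 * With the old int *idx parameter, sparse reported a warning
	 * along the lines of "incorrect type in argument (different
	 * signedness)"; matching the parameter to the caller's type
	 * removes the warning with no behavioral change.
	 */
	fill_idx(&idx);
}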