From patchwork Tue Dec 2 10:26:18 2014
X-Patchwork-Submitter: Eli Cohen
X-Patchwork-Id: 5419181
From: Eli Cohen
To: davem@davemloft.net
Cc: roland@kernel.org, linux-rdma@vger.kernel.org, netdev@vger.kernel.org,
    ogerlitz@mellanox.com, amirv@mellanox.com, Eli Cohen
Subject: [PATCH net-next 8/9] mlx5: Fix sparse warnings
Date: Tue, 2 Dec 2014 12:26:18 +0200
Message-Id: <1417515979-22418-9-git-send-email-eli@mellanox.com>
In-Reply-To: <1417515979-22418-1-git-send-email-eli@mellanox.com>
References: <1417515979-22418-1-git-send-email-eli@mellanox.com>
X-Mailer: git-send-email 2.1.3

1. Add the __acquire/__release statements required to balance spinlock
   usage.
2. Change the index parameter of begin_wqe() to be unsigned, to match
   the type of the supplied argument.
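[Editorial note, not part of the patch: sparse tracks a per-function lock
"context" counter and emits a "context imbalance" warning when a function
can exit with that counter in different states depending on the path
taken. When a lock is taken only conditionally, the path that skips the
real spin_lock()/spin_unlock() must adjust the counter with
__acquire()/__release(), which compile away at runtime. A minimal sketch
of the pattern, using hypothetical helpers maybe_lock()/maybe_unlock():

	#include <linux/spinlock.h>
	#include <linux/types.h>

	/* Hypothetical helper: takes @lock only when @need_lock is set.
	 * The __acquires() annotation tells sparse this function always
	 * exits with the lock context raised by one, whichever branch
	 * is taken.
	 */
	static void maybe_lock(spinlock_t *lock, bool need_lock)
		__acquires(lock)
	{
		if (need_lock)
			spin_lock(lock);
		else
			__acquire(lock);	/* no-op at runtime; balances the context */
	}

	/* Hypothetical counterpart: drops the (possibly never taken) lock. */
	static void maybe_unlock(spinlock_t *lock, bool need_lock)
		__releases(lock)
	{
		if (need_lock)
			spin_unlock(lock);
		else
			__release(lock);	/* no-op at runtime; balances the context */
	}

This is the same shape as the bf->need_lock changes in the diff below.]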
Signed-off-by: Eli Cohen
---
 drivers/infiniband/hw/mlx5/qp.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 0e2ef9fe0e29..1cae1c7132b4 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1011,9 +1011,14 @@ static void mlx5_ib_lock_cqs(struct mlx5_ib_cq *send_cq, struct mlx5_ib_cq *recv
 			}
 		} else {
 			spin_lock_irq(&send_cq->lock);
+			__acquire(&recv_cq->lock);
 		}
 	} else if (recv_cq) {
 		spin_lock_irq(&recv_cq->lock);
+		__acquire(&send_cq->lock);
+	} else {
+		__acquire(&send_cq->lock);
+		__acquire(&recv_cq->lock);
 	}
 }
 
@@ -1033,10 +1038,15 @@ static void mlx5_ib_unlock_cqs(struct mlx5_ib_cq *send_cq, struct mlx5_ib_cq *re
 				spin_unlock_irq(&recv_cq->lock);
 			}
 		} else {
+			__release(&recv_cq->lock);
 			spin_unlock_irq(&send_cq->lock);
 		}
 	} else if (recv_cq) {
+		__release(&send_cq->lock);
 		spin_unlock_irq(&recv_cq->lock);
+	} else {
+		__release(&recv_cq->lock);
+		__release(&send_cq->lock);
 	}
 }
 
@@ -2411,7 +2421,7 @@ static u8 get_fence(u8 fence, struct ib_send_wr *wr)
 
 static int begin_wqe(struct mlx5_ib_qp *qp, void **seg,
 		     struct mlx5_wqe_ctrl_seg **ctrl,
-		     struct ib_send_wr *wr, int *idx,
+		     struct ib_send_wr *wr, unsigned *idx,
 		     int *size, int nreq)
 {
 	int err = 0;
@@ -2737,6 +2747,8 @@ out:
 		if (bf->need_lock)
 			spin_lock(&bf->lock);
+		else
+			__acquire(&bf->lock);
 
 		/* TBD enable WC */
 		if (0 && nreq == 1 && bf->uuarn && inl && size > 1 &&
 		    size <= bf->buf_size / 16) {
@@ -2753,6 +2765,8 @@ out:
 		bf->offset ^= bf->buf_size;
 		if (bf->need_lock)
 			spin_unlock(&bf->lock);
+		else
+			__release(&bf->lock);
 	}
 
 	spin_unlock_irqrestore(&qp->sq.lock, flags);
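[Editorial note: warnings like these surface when the file is checked with
sparse during a kernel build, e.g.:

	make C=2 drivers/infiniband/hw/mlx5/qp.o

C=1 runs sparse only on files that are being recompiled; C=2 forces a
sparse check of every file that gets compiled.]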