From patchwork Fri Feb 22 12:16:19 2019
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 10825787
X-Patchwork-Delegate: jgg@ziepe.ca
From: Devesh Sharma
To: linux-rdma@vger.kernel.org
Cc: dledford@redhat.com, jgg@mellanox.com, Devesh Sharma, Selvin Xavier
Subject: [for-next V2] bnxt_re: fix the regression due to changes in alloc_pbl
Date: Fri, 22 Feb 2019 07:16:19 -0500
Message-Id: <1550837779-16793-1-git-send-email-devesh.sharma@broadcom.com>
X-Mailer: git-send-email 1.8.3.1

While adding the use of the for_each_sg_dma_page iterator to Broadcom's
rdma driver, a regression was introduced in the __alloc_pbl path. The
change left bnxt_re in a DOA state in the for-next branch.

Fix the regression to avoid the host crash when a user space object is
created. Restrict the unconditional access to hwq.pg_arr when the hwq is
initialized for user space objects.

Fixes: 161ebe2498d4 ("RDMA/bnxt_re: Use for_each_sg_dma_page iterator on umem SGL")
Reported-by: Gal Pressman
Signed-off-by: Selvin Xavier
Signed-off-by: Devesh Sharma
---
 drivers/infiniband/hw/bnxt_re/ib_verbs.c  | 11 +++++++----
 drivers/infiniband/hw/bnxt_re/qplib_fp.c  | 20 ++++++--------------
 drivers/infiniband/hw/bnxt_re/qplib_res.c |  5 +----
 3 files changed, 14 insertions(+), 22 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 83bf6f5d..fc65751 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -793,8 +793,8 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp)
 {
 	struct bnxt_re_qp *qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
 	struct bnxt_re_dev *rdev = qp->rdev;
-	int rc;
 	unsigned int flags;
+	int rc;
 
 	bnxt_qplib_flush_cqn_wq(&qp->qplib_qp);
 	rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp);
@@ -803,9 +803,12 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp)
 		return rc;
 	}
 
-	flags = bnxt_re_lock_cqs(qp);
-	bnxt_qplib_clean_qp(&qp->qplib_qp);
-	bnxt_re_unlock_cqs(qp, flags);
+	if (!rdma_is_kernel_res(&qp->res)) {
+		flags = bnxt_re_lock_cqs(qp);
+		bnxt_qplib_clean_qp(&qp->qplib_qp);
+		bnxt_re_unlock_cqs(qp, flags);
+	}
+
 	bnxt_qplib_free_qp_res(&rdev->qplib_res, &qp->qplib_qp);
 
 	if (ib_qp->qp_type == IB_QPT_GSI && rdev->qp1_sqp) {
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
index 77eb3d5..71c34d5 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
@@ -862,18 +862,18 @@ int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 {
 	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
-	struct sq_send *hw_sq_send_hdr, **hw_sq_send_ptr;
-	struct cmdq_create_qp req;
-	struct creq_create_qp_resp resp;
-	struct bnxt_qplib_pbl *pbl;
-	struct sq_psn_search **psn_search_ptr;
 	unsigned long int psn_search, poff = 0;
+	struct sq_psn_search **psn_search_ptr;
 	struct bnxt_qplib_q *sq = &qp->sq;
 	struct bnxt_qplib_q *rq = &qp->rq;
 	int i, rc, req_size, psn_sz = 0;
+	struct sq_send **hw_sq_send_ptr;
+	struct creq_create_qp_resp resp;
 	struct bnxt_qplib_hwq *xrrq;
 	u16 cmd_flags = 0, max_ssge;
-	u32 sw_prod, qp_flags = 0;
+	struct cmdq_create_qp req;
+	struct bnxt_qplib_pbl *pbl;
+	u32 qp_flags = 0;
 	u16 max_rsge;
 
 	RCFW_CMD_PREP(req, CREATE_QP, cmd_flags);
@@ -948,14 +948,6 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 			CMDQ_CREATE_QP_SQ_PG_SIZE_PG_1G :
 			CMDQ_CREATE_QP_SQ_PG_SIZE_PG_4K);
 
-	/* initialize all SQ WQEs to LOCAL_INVALID (sq prep for hw fetch) */
-	hw_sq_send_ptr = (struct sq_send **)sq->hwq.pbl_ptr;
-	for (sw_prod = 0; sw_prod < sq->hwq.max_elements; sw_prod++) {
-		hw_sq_send_hdr = &hw_sq_send_ptr[get_sqe_pg(sw_prod)]
-						[get_sqe_idx(sw_prod)];
-		hw_sq_send_hdr->wqe_type = SQ_BASE_WQE_TYPE_LOCAL_INVALID;
-	}
-
 	if (qp->scq)
 		req.scq_cid = cpu_to_le32(qp->scq->id);
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
index d08b9d9..0bc24f9 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
@@ -119,11 +119,8 @@ static int __alloc_pbl(struct pci_dev *pdev, struct bnxt_qplib_pbl *pbl,
 		for_each_sg_dma_page (sghead, &sg_iter, pages, 0) {
 			pbl->pg_map_arr[i] = sg_page_iter_dma_address(&sg_iter);
 			pbl->pg_arr[i] = NULL;
-			if (!pbl->pg_arr[i])
-				goto fail;
-
-			i++;
 			pbl->pg_count++;
+			i++;
 		}
 	}
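
As a side note for reviewers, below is a minimal user-space sketch of the
__alloc_pbl() fill loop after the fix. It is not driver code: toy_pbl,
toy_fill_user_pbl and dma_addrs are invented stand-ins, and the plain for
loop stands in for the for_each_sg_dma_page() iterator. The point it
illustrates is that for umem-backed queues only the DMA addresses are
recorded and pg_arr[] entries stay NULL, so the regressed
"if (!pbl->pg_arr[i]) goto fail;" check rejected every user-space mapping.

/*
 * Illustrative user-space model of the fixed __alloc_pbl() fill loop.
 * All names here (toy_pbl, toy_fill_user_pbl, dma_addrs) are invented
 * for the example; they are not part of the bnxt_re driver.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_pbl {
	void **pg_arr;        /* kernel mappings; stay NULL for umem pages */
	uint64_t *pg_map_arr; /* DMA addresses of the pages */
	int pg_count;
};

/*
 * Fixed behaviour: record the DMA address of every page, leave pg_arr[i]
 * NULL and keep going.  The regressed code set pg_arr[i] = NULL and then
 * bailed out on "if (!pg_arr[i])", so no user-space PBL could be built.
 */
static int toy_fill_user_pbl(struct toy_pbl *pbl, const uint64_t *dma_addrs,
			     int pages)
{
	int i;

	pbl->pg_arr = calloc(pages, sizeof(*pbl->pg_arr));
	pbl->pg_map_arr = calloc(pages, sizeof(*pbl->pg_map_arr));
	if (!pbl->pg_arr || !pbl->pg_map_arr)
		return -1;

	for (i = 0; i < pages; i++) {	/* stands in for for_each_sg_dma_page() */
		pbl->pg_map_arr[i] = dma_addrs[i];
		pbl->pg_arr[i] = NULL;
		pbl->pg_count++;
	}
	return 0;
}

int main(void)
{
	const uint64_t dma_addrs[] = { 0x1000, 0x2000, 0x3000 };
	struct toy_pbl pbl = { 0 };

	if (toy_fill_user_pbl(&pbl, dma_addrs, 3))
		return 1;
	printf("pg_count=%d first_dma=0x%llx\n", pbl.pg_count,
	       (unsigned long long)pbl.pg_map_arr[0]);
	free(pbl.pg_arr);
	free(pbl.pg_map_arr);
	return 0;
}

Compiled with any C compiler, the sketch simply shows a user-space page
list being populated without tripping an error path, which mirrors what
the qplib_res.c hunk above restores.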