From patchwork Thu Feb 21 09:13:28 2019
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 10823315
X-Patchwork-Delegate: jgg@ziepe.ca
From: Devesh Sharma
To: linux-rdma@vger.kernel.org
Cc: dledford@redhat.com, jgg@mellanox.com, Devesh Sharma
Subject: [for-next] bnxt_re: fix the regression due to changes in alloc_pbl
Date: Thu, 21 Feb 2019 04:13:28 -0500
Message-Id: <1550740408-25003-1-git-send-email-devesh.sharma@broadcom.com>
X-Mailing-List: linux-rdma@vger.kernel.org

In the wake of the ODP changes recently made to the kernel IB stack and
drivers, a regression was introduced in the __alloc_pbl path. The change
left bnxt_re dead on arrival (DOA) in the for-next branch.
Fix the regression to avoid a host crash when a user-space object is
created, by restricting the previously unconditional access to
hwq.pg_arr when the hwq is initialized for a user-space object.

Fixes: 161ebe2498d4 ("RDMA/bnxt_re: Use for_each_sg_dma_page iterator on umem SGL")
Signed-off-by: Devesh Sharma
---
 drivers/infiniband/hw/bnxt_re/ib_verbs.c  | 13 +++++++++----
 drivers/infiniband/hw/bnxt_re/qplib_fp.c  | 20 ++++++--------------
 drivers/infiniband/hw/bnxt_re/qplib_res.c |  5 +----
 3 files changed, 16 insertions(+), 22 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 2ed7786..6150a2f 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -793,9 +793,11 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp)
 {
 	struct bnxt_re_qp *qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
 	struct bnxt_re_dev *rdev = qp->rdev;
-	int rc;
+	struct ib_pd *ibpd;
 	unsigned int flags;
+	int rc;
 
+	ibpd = qp->ib_qp.pd;
 	bnxt_qplib_flush_cqn_wq(&qp->qplib_qp);
 	rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp);
 	if (rc) {
@@ -803,9 +805,12 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp)
 		return rc;
 	}
 
-	flags = bnxt_re_lock_cqs(qp);
-	bnxt_qplib_clean_qp(&qp->qplib_qp);
-	bnxt_re_unlock_cqs(qp, flags);
+	if (!ibpd->uobject) {
+		flags = bnxt_re_lock_cqs(qp);
+		bnxt_qplib_clean_qp(&qp->qplib_qp);
+		bnxt_re_unlock_cqs(qp, flags);
+	}
+
 	bnxt_qplib_free_qp_res(&rdev->qplib_res, &qp->qplib_qp);
 
 	if (ib_qp->qp_type == IB_QPT_GSI && rdev->qp1_sqp) {
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
index 77eb3d5..71c34d5 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
@@ -862,18 +862,18 @@ int bnxt_qplib_create_qp1(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 {
 	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
-	struct sq_send *hw_sq_send_hdr, **hw_sq_send_ptr;
-	struct cmdq_create_qp req;
-	struct creq_create_qp_resp resp;
-	struct bnxt_qplib_pbl *pbl;
-	struct sq_psn_search **psn_search_ptr;
 	unsigned long int psn_search, poff = 0;
+	struct sq_psn_search **psn_search_ptr;
 	struct bnxt_qplib_q *sq = &qp->sq;
 	struct bnxt_qplib_q *rq = &qp->rq;
 	int i, rc, req_size, psn_sz = 0;
+	struct sq_send **hw_sq_send_ptr;
+	struct creq_create_qp_resp resp;
 	struct bnxt_qplib_hwq *xrrq;
 	u16 cmd_flags = 0, max_ssge;
-	u32 sw_prod, qp_flags = 0;
+	struct cmdq_create_qp req;
+	struct bnxt_qplib_pbl *pbl;
+	u32 qp_flags = 0;
 	u16 max_rsge;
 
 	RCFW_CMD_PREP(req, CREATE_QP, cmd_flags);
@@ -948,14 +948,6 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 			       CMDQ_CREATE_QP_SQ_PG_SIZE_PG_1G :
 			       CMDQ_CREATE_QP_SQ_PG_SIZE_PG_4K);
 
-	/* initialize all SQ WQEs to LOCAL_INVALID (sq prep for hw fetch) */
-	hw_sq_send_ptr = (struct sq_send **)sq->hwq.pbl_ptr;
-	for (sw_prod = 0; sw_prod < sq->hwq.max_elements; sw_prod++) {
-		hw_sq_send_hdr = &hw_sq_send_ptr[get_sqe_pg(sw_prod)]
-				 [get_sqe_idx(sw_prod)];
-		hw_sq_send_hdr->wqe_type = SQ_BASE_WQE_TYPE_LOCAL_INVALID;
-	}
-
 	if (qp->scq)
 		req.scq_cid = cpu_to_le32(qp->scq->id);
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
index d08b9d9..0bc24f9 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
@@ -119,11 +119,8 @@ static int __alloc_pbl(struct pci_dev *pdev, struct bnxt_qplib_pbl *pbl,
 		for_each_sg_dma_page (sghead, &sg_iter, pages, 0) {
 			pbl->pg_map_arr[i] = sg_page_iter_dma_address(&sg_iter);
 			pbl->pg_arr[i] = NULL;
-			if (!pbl->pg_arr[i])
-				goto fail;
-
-			i++;
 			pbl->pg_count++;
+			i++;
 		}
 	}