From patchwork Thu Aug 3 08:45:23 2023
X-Patchwork-Submitter: Selvin Xavier
X-Patchwork-Id: 13339559
From: Selvin Xavier
To: jgg@ziepe.ca, leon@kernel.org
Cc: linux-rdma@vger.kernel.org, andrew.gospodarek@broadcom.com,
    Selvin Xavier, Kashyap Desai, Hongguang Gao
Subject: [PATCH for-next v2 3/6] RDMA/bnxt_re: Fix the sideband buffer size handling for FW commands
Date: Thu, 3 Aug 2023 01:45:23 -0700
Message-Id: <1691052326-32143-4-git-send-email-selvin.xavier@broadcom.com>
In-Reply-To: <1691052326-32143-1-git-send-email-selvin.xavier@broadcom.com>
References: <1691052326-32143-1-git-send-email-selvin.xavier@broadcom.com>
X-Mailing-List: linux-rdma@vger.kernel.org

bnxt_qplib_rcfw_alloc_sbuf allocates only a 24-byte descriptor, which fits
comfortably in a stack variable; keeping it on the stack avoids an
unnecessary kmalloc call. Call dma_alloc_coherent directly instead of going
through the bnxt_qplib_rcfw_alloc_sbuf wrapper. Also, the FW expects the
side buffer to be aligned to BNXT_QPLIB_CMDQE_UNITS (16 bytes), so round
the buffer size up so that it includes the required padding bytes.
Signed-off-by: Kashyap Desai
Signed-off-by: Hongguang Gao
Signed-off-by: Selvin Xavier
---
 drivers/infiniband/hw/bnxt_re/qplib_fp.c   | 38 ++++++++++--------
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.c | 34 +---------------
 drivers/infiniband/hw/bnxt_re/qplib_sp.c   | 63 +++++++++++++++---------------
 3 files changed, 55 insertions(+), 80 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
index f9dee0d..282e34e 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
@@ -709,7 +709,7 @@ int bnxt_qplib_query_srq(struct bnxt_qplib_res *res,
 	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
 	struct creq_query_srq_resp resp = {};
 	struct bnxt_qplib_cmdqmsg msg = {};
-	struct bnxt_qplib_rcfw_sbuf *sbuf;
+	struct bnxt_qplib_rcfw_sbuf sbuf;
 	struct creq_query_srq_resp_sb *sb;
 	struct cmdq_query_srq req = {};
 	int rc = 0;
@@ -719,17 +719,20 @@ int bnxt_qplib_query_srq(struct bnxt_qplib_res *res,
 				 sizeof(req));
 
 	/* Configure the request */
-	sbuf = bnxt_qplib_rcfw_alloc_sbuf(rcfw, sizeof(*sb));
-	if (!sbuf)
+	sbuf.size = ALIGN(sizeof(*sb), BNXT_QPLIB_CMDQE_UNITS);
+	sbuf.sb = dma_alloc_coherent(&rcfw->pdev->dev, sbuf.size,
+				     &sbuf.dma_addr, GFP_KERNEL);
+	if (!sbuf.sb)
 		return -ENOMEM;
-	req.resp_size = sizeof(*sb) / BNXT_QPLIB_CMDQE_UNITS;
+	req.resp_size = sbuf.size / BNXT_QPLIB_CMDQE_UNITS;
 	req.srq_cid = cpu_to_le32(srq->id);
-	sb = sbuf->sb;
-	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, sbuf, sizeof(req),
+	sb = sbuf.sb;
+	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, &sbuf, sizeof(req),
 				sizeof(resp), 0);
 	rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
 	srq->threshold = le16_to_cpu(sb->srq_limit);
-	bnxt_qplib_rcfw_free_sbuf(rcfw, sbuf);
+	dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
+			  sbuf.sb, sbuf.dma_addr);
 
 	return rc;
 }
@@ -1347,24 +1350,26 @@ int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 	struct bnxt_qplib_rcfw *rcfw = res->rcfw;
 	struct creq_query_qp_resp resp = {};
 	struct bnxt_qplib_cmdqmsg msg = {};
-	struct bnxt_qplib_rcfw_sbuf *sbuf;
+	struct bnxt_qplib_rcfw_sbuf sbuf;
 	struct creq_query_qp_resp_sb *sb;
 	struct cmdq_query_qp req = {};
 	u32 temp32[4];
 	int i, rc = 0;
 
+	sbuf.size = ALIGN(sizeof(*sb), BNXT_QPLIB_CMDQE_UNITS);
+	sbuf.sb = dma_alloc_coherent(&rcfw->pdev->dev, sbuf.size,
+				     &sbuf.dma_addr, GFP_KERNEL);
+	if (!sbuf.sb)
+		return -ENOMEM;
+	sb = sbuf.sb;
+
 	bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
 				 CMDQ_BASE_OPCODE_QUERY_QP,
 				 sizeof(req));
 
-	sbuf = bnxt_qplib_rcfw_alloc_sbuf(rcfw, sizeof(*sb));
-	if (!sbuf)
-		return -ENOMEM;
-	sb = sbuf->sb;
-
 	req.qp_cid = cpu_to_le32(qp->id);
-	req.resp_size = sizeof(*sb) / BNXT_QPLIB_CMDQE_UNITS;
-	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, sbuf, sizeof(req),
+	req.resp_size = sbuf.size / BNXT_QPLIB_CMDQE_UNITS;
+	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, &sbuf, sizeof(req),
 				sizeof(resp), 0);
 	rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
 	if (rc)
@@ -1423,7 +1428,8 @@ int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
 	memcpy(qp->smac, sb->src_mac, 6);
 	qp->vlan_id = le16_to_cpu(sb->vlan_pcp_vlan_dei_vlan_id);
 bail:
-	bnxt_qplib_rcfw_free_sbuf(rcfw, sbuf);
+	dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
+			  sbuf.sb, sbuf.dma_addr);
 	return rc;
 }
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
index b30e66b..9d26871 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
@@ -335,7 +335,8 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
 					  cpu_to_le64(sbuf->dma_addr));
 		__set_cmdq_base_resp_size(msg->req, msg->req_sz,
 					  ALIGN(sbuf->size,
-						BNXT_QPLIB_CMDQE_UNITS));
+						BNXT_QPLIB_CMDQE_UNITS) /
+					  BNXT_QPLIB_CMDQE_UNITS);
 	}
 	preq = (u8 *)msg->req;
@@ -1196,34 +1197,3 @@ int bnxt_qplib_enable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw,
 
 	return 0;
 }
-
-struct bnxt_qplib_rcfw_sbuf *bnxt_qplib_rcfw_alloc_sbuf(
-				struct bnxt_qplib_rcfw *rcfw,
-				u32 size)
-{
-	struct bnxt_qplib_rcfw_sbuf *sbuf;
-
-	sbuf = kzalloc(sizeof(*sbuf), GFP_KERNEL);
-	if (!sbuf)
-		return NULL;
-
-	sbuf->size = size;
-	sbuf->sb = dma_alloc_coherent(&rcfw->pdev->dev, sbuf->size,
-				      &sbuf->dma_addr, GFP_KERNEL);
-	if (!sbuf->sb)
-		goto bail;
-
-	return sbuf;
-bail:
-	kfree(sbuf);
-	return NULL;
-}
-
-void bnxt_qplib_rcfw_free_sbuf(struct bnxt_qplib_rcfw *rcfw,
-			       struct bnxt_qplib_rcfw_sbuf *sbuf)
-{
-	if (sbuf->sb)
-		dma_free_coherent(&rcfw->pdev->dev, sbuf->size,
-				  sbuf->sb, sbuf->dma_addr);
-	kfree(sbuf);
-}
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
index b77928a..05ee8fd 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
@@ -94,7 +94,7 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
 	struct creq_query_func_resp resp = {};
 	struct bnxt_qplib_cmdqmsg msg = {};
 	struct creq_query_func_resp_sb *sb;
-	struct bnxt_qplib_rcfw_sbuf *sbuf;
+	struct bnxt_qplib_rcfw_sbuf sbuf;
 	struct cmdq_query_func req = {};
 	u8 *tqm_alloc;
 	int i, rc = 0;
@@ -104,16 +104,14 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
 				 CMDQ_BASE_OPCODE_QUERY_FUNC,
 				 sizeof(req));
 
-	sbuf = bnxt_qplib_rcfw_alloc_sbuf(rcfw, sizeof(*sb));
-	if (!sbuf) {
-		dev_err(&rcfw->pdev->dev,
-			"SP: QUERY_FUNC alloc side buffer failed\n");
+	sbuf.size = ALIGN(sizeof(*sb), BNXT_QPLIB_CMDQE_UNITS);
+	sbuf.sb = dma_alloc_coherent(&rcfw->pdev->dev, sbuf.size,
+				     &sbuf.dma_addr, GFP_KERNEL);
+	if (!sbuf.sb)
 		return -ENOMEM;
-	}
-
-	sb = sbuf->sb;
-	req.resp_size = sizeof(*sb) / BNXT_QPLIB_CMDQE_UNITS;
-	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, sbuf, sizeof(req),
+	sb = sbuf.sb;
+	req.resp_size = sbuf.size / BNXT_QPLIB_CMDQE_UNITS;
+	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, &sbuf, sizeof(req),
 				sizeof(resp), 0);
 	rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
 	if (rc)
@@ -174,7 +172,8 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
 	attr->is_atomic = bnxt_qplib_is_atomic_cap(rcfw);
 bail:
-	bnxt_qplib_rcfw_free_sbuf(rcfw, sbuf);
+	dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
+			  sbuf.sb, sbuf.dma_addr);
 	return rc;
 }
@@ -717,23 +716,22 @@ int bnxt_qplib_get_roce_stats(struct bnxt_qplib_rcfw *rcfw,
 	struct creq_query_roce_stats_resp_sb *sb;
 	struct cmdq_query_roce_stats req = {};
 	struct bnxt_qplib_cmdqmsg msg = {};
-	struct bnxt_qplib_rcfw_sbuf *sbuf;
+	struct bnxt_qplib_rcfw_sbuf sbuf;
 	int rc = 0;
 
 	bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
 				 CMDQ_BASE_OPCODE_QUERY_ROCE_STATS,
 				 sizeof(req));
 
-	sbuf = bnxt_qplib_rcfw_alloc_sbuf(rcfw, sizeof(*sb));
-	if (!sbuf) {
-		dev_err(&rcfw->pdev->dev,
-			"SP: QUERY_ROCE_STATS alloc side buffer failed\n");
+	sbuf.size = ALIGN(sizeof(*sb), BNXT_QPLIB_CMDQE_UNITS);
+	sbuf.sb = dma_alloc_coherent(&rcfw->pdev->dev, sbuf.size,
+				     &sbuf.dma_addr, GFP_KERNEL);
+	if (!sbuf.sb)
 		return -ENOMEM;
-	}
+	sb = sbuf.sb;
 
-	sb = sbuf->sb;
-	req.resp_size = sizeof(*sb) / BNXT_QPLIB_CMDQE_UNITS;
-	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, sbuf, sizeof(req),
+	req.resp_size = sbuf.size / BNXT_QPLIB_CMDQE_UNITS;
+	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, &sbuf, sizeof(req),
 				sizeof(resp), 0);
 	rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
 	if (rc)
@@ -789,7 +787,8 @@ int bnxt_qplib_get_roce_stats(struct bnxt_qplib_rcfw *rcfw,
 	}
 bail:
-	bnxt_qplib_rcfw_free_sbuf(rcfw, sbuf);
+	dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
+			  sbuf.sb, sbuf.dma_addr);
 	return rc;
 }
@@ -800,32 +799,31 @@ int bnxt_qplib_qext_stat(struct bnxt_qplib_rcfw *rcfw, u32 fid,
 	struct creq_query_roce_stats_ext_resp_sb *sb;
 	struct cmdq_query_roce_stats_ext req = {};
 	struct bnxt_qplib_cmdqmsg msg = {};
-	struct bnxt_qplib_rcfw_sbuf *sbuf;
+	struct bnxt_qplib_rcfw_sbuf sbuf;
 	int rc;
 
-	sbuf = bnxt_qplib_rcfw_alloc_sbuf(rcfw, sizeof(*sb));
-	if (!sbuf) {
-		dev_err(&rcfw->pdev->dev,
-			"SP: QUERY_ROCE_STATS_EXT alloc sb failed");
+	sbuf.size = ALIGN(sizeof(*sb), BNXT_QPLIB_CMDQE_UNITS);
+	sbuf.sb = dma_alloc_coherent(&rcfw->pdev->dev, sbuf.size,
+				     &sbuf.dma_addr, GFP_KERNEL);
+	if (!sbuf.sb)
 		return -ENOMEM;
-	}
+	sb = sbuf.sb;
 
 	bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
 				 CMDQ_QUERY_ROCE_STATS_EXT_OPCODE_QUERY_ROCE_STATS,
 				 sizeof(req));
 
-	req.resp_size = ALIGN(sizeof(*sb), BNXT_QPLIB_CMDQE_UNITS);
-	req.resp_addr = cpu_to_le64(sbuf->dma_addr);
+	req.resp_size = sbuf.size / BNXT_QPLIB_CMDQE_UNITS;
+	req.resp_addr = cpu_to_le64(sbuf.dma_addr);
 	req.function_id = cpu_to_le32(fid);
 	req.flags = cpu_to_le16(CMDQ_QUERY_ROCE_STATS_EXT_FLAGS_FUNCTION_ID);
 
-	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, sbuf, sizeof(req),
+	bnxt_qplib_fill_cmdqmsg(&msg, &req, &resp, &sbuf, sizeof(req),
 				sizeof(resp), 0);
 	rc = bnxt_qplib_rcfw_send_message(rcfw, &msg);
 	if (rc)
 		goto bail;
 
-	sb = sbuf->sb;
 	estat->tx_atomic_req = le64_to_cpu(sb->tx_atomic_req_pkts);
 	estat->tx_read_req = le64_to_cpu(sb->tx_read_req_pkts);
 	estat->tx_read_res = le64_to_cpu(sb->tx_read_res_pkts);
@@ -849,7 +847,8 @@ int bnxt_qplib_qext_stat(struct bnxt_qplib_rcfw *rcfw, u32 fid,
 	estat->rx_ecn_marked = le64_to_cpu(sb->rx_ecn_marked_pkts);
 bail:
-	bnxt_qplib_rcfw_free_sbuf(rcfw, sbuf);
+	dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
+			  sbuf.sb, sbuf.dma_addr);
 	return rc;
 }