From patchwork Fri Jun 9 11:01:45 2023
X-Patchwork-Submitter: Selvin Xavier
X-Patchwork-Id: 13273724
From: Selvin Xavier <selvin.xavier@broadcom.com>
To: jgg@ziepe.ca, leon@kernel.org
Cc: linux-rdma@vger.kernel.org, andrew.gospodarek@broadcom.com,
    kashyap.desai@broadcom.com, Selvin Xavier <selvin.xavier@broadcom.com>
Subject: [PATCH v2 for-next 08/17] RDMA/bnxt_re: Simplify the function that
 sends the FW commands
Date: Fri, 9 Jun 2023 04:01:45 -0700
Message-Id: <1686308514-11996-9-git-send-email-selvin.xavier@broadcom.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1686308514-11996-1-git-send-email-selvin.xavier@broadcom.com>
References: <1686308514-11996-1-git-send-email-selvin.xavier@broadcom.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Kashyap Desai <kashyap.desai@broadcom.com>

- Use the __send_message_basic_sanity() helper function.
- Do not retry posting the same command if a queue-full condition is
  detected.
- ENXIO is used to indicate controller recovery.
- In the ERR_DEVICE_DETACHED state, the driver should not post commands
  to the firmware, but it should also return a fabricated return code.

Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
---
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.c | 125 +++++++++++++++--------------
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.h |  22 +++
 2 files changed, 86 insertions(+), 61 deletions(-)
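The queue-full check in __send_message() no longer compares the raw cmd_size
against HWQ_FREE_SLOTS(); it first converts the request into the number of
command-queue slots it will occupy (via the new, TLV-aware
bnxt_qplib_get_cmd_slots() helper in qplib_rcfw.h) and compares that against
the free slots. A minimal standalone sketch of that arithmetic follows,
assuming a 16-byte cmdqe unit (standing in for BNXT_QPLIB_CMDQE_UNITS); the
names below are illustrative only, not driver code:

/* Standalone illustration of the slot accounting behind the new
 * queue-full check. CMDQE_UNITS is an assumed stand-in for the
 * driver's BNXT_QPLIB_CMDQE_UNITS (one 16-byte cmdq entry).
 */
#include <stdio.h>

#define CMDQE_UNITS	16

/* Round a request size in bytes up to whole cmdq units, as
 * bnxt_qplib_get_cmd_slots() does for the non-TLV case.
 */
static unsigned int cmd_units_needed(unsigned int cmd_size)
{
	return (cmd_size + CMDQE_UNITS - 1) / CMDQE_UNITS;
}

int main(void)
{
	unsigned int free_slots = 4;	/* pretend HWQ_FREE_SLOTS() reported 4 */
	unsigned int sizes[] = { 16, 24, 64, 80 };
	unsigned int i, need;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		need = cmd_units_needed(sizes[i]);
		printf("cmd_size %3u -> %u slot(s): %s\n", sizes[i], need,
		       need >= free_slots ? "queue full, -EAGAIN" : "posted");
	}
	return 0;
}
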
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
index f7d1238..15f6793 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
@@ -170,34 +170,22 @@ static int __block_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie, u8 opcode)
 static int __send_message(struct bnxt_qplib_rcfw *rcfw,
 			  struct bnxt_qplib_cmdqmsg *msg)
 {
-	struct bnxt_qplib_cmdq_ctx *cmdq = &rcfw->cmdq;
-	struct bnxt_qplib_hwq *hwq = &cmdq->hwq;
+	u32 bsize, opcode, free_slots, required_slots;
+	struct bnxt_qplib_cmdq_ctx *cmdq;
 	struct bnxt_qplib_crsqe *crsqe;
 	struct bnxt_qplib_cmdqe *cmdqe;
+	struct bnxt_qplib_hwq *hwq;
 	u32 sw_prod, cmdq_prod;
 	struct pci_dev *pdev;
 	unsigned long flags;
-	u32 bsize, opcode;
 	u16 cookie, cbit;
 	u8 *preq;
 
+	cmdq = &rcfw->cmdq;
+	hwq = &cmdq->hwq;
 	pdev = rcfw->pdev;
 	opcode = __get_cmdq_base_opcode(msg->req, msg->req_sz);
-	if (!test_bit(FIRMWARE_INITIALIZED_FLAG, &cmdq->flags) &&
-	    (opcode != CMDQ_BASE_OPCODE_QUERY_FUNC &&
-	     opcode != CMDQ_BASE_OPCODE_INITIALIZE_FW &&
-	     opcode != CMDQ_BASE_OPCODE_QUERY_VERSION)) {
-		dev_err(&pdev->dev,
-			"RCFW not initialized, reject opcode 0x%x\n", opcode);
-		return -EINVAL;
-	}
-
-	if (test_bit(FIRMWARE_INITIALIZED_FLAG, &cmdq->flags) &&
-	    opcode == CMDQ_BASE_OPCODE_INITIALIZE_FW) {
-		dev_err(&pdev->dev, "RCFW already initialized!\n");
-		return -EINVAL;
-	}
 
 	if (test_bit(FIRMWARE_TIMED_OUT, &cmdq->flags))
 		return -ETIMEDOUT;
@@ -206,40 +194,37 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
 	 * cmdqe
 	 */
 	spin_lock_irqsave(&hwq->lock, flags);
-	if (msg->req->cmd_size >= HWQ_FREE_SLOTS(hwq)) {
-		dev_err(&pdev->dev, "RCFW: CMDQ is full!\n");
+	required_slots = bnxt_qplib_get_cmd_slots(msg->req);
+	free_slots = HWQ_FREE_SLOTS(hwq);
+	cookie = cmdq->seq_num & RCFW_MAX_COOKIE_VALUE;
+	cbit = cookie % rcfw->cmdq_depth;
+
+	if (required_slots >= free_slots ||
+	    test_bit(cbit, cmdq->cmdq_bitmap)) {
+		dev_info_ratelimited(&pdev->dev,
+				     "CMDQ is full req/free %d/%d!",
+				     required_slots, free_slots);
 		spin_unlock_irqrestore(&hwq->lock, flags);
 		return -EAGAIN;
 	}
-
-
-	cookie = cmdq->seq_num & RCFW_MAX_COOKIE_VALUE;
-	cbit = cookie % rcfw->cmdq_depth;
 	if (msg->block)
 		cookie |= RCFW_CMD_IS_BLOCKING;
-
 	set_bit(cbit, cmdq->cmdq_bitmap);
 	__set_cmdq_base_cookie(msg->req, msg->req_sz, cpu_to_le16(cookie));
 	crsqe = &rcfw->crsqe_tbl[cbit];
-	if (crsqe->resp) {
-		spin_unlock_irqrestore(&hwq->lock, flags);
-		return -EBUSY;
-	}
-
-	/* change the cmd_size to the number of 16byte cmdq unit.
-	 * req->cmd_size is modified here
-	 */
 	bsize = bnxt_qplib_set_cmd_slots(msg->req);
-
-	memset(msg->resp, 0, sizeof(*msg->resp));
+	crsqe->free_slots = free_slots;
 	crsqe->resp = (struct creq_qp_event *)msg->resp;
 	crsqe->resp->cookie = cpu_to_le16(cookie);
 	crsqe->req_size = __get_cmdq_base_cmd_size(msg->req, msg->req_sz);
 	if (__get_cmdq_base_resp_size(msg->req, msg->req_sz) && msg->sb) {
 		struct bnxt_qplib_rcfw_sbuf *sbuf = msg->sb;
-		__set_cmdq_base_resp_addr(msg->req, msg->req_sz, cpu_to_le64(sbuf->dma_addr));
+
+		__set_cmdq_base_resp_addr(msg->req, msg->req_sz,
+					  cpu_to_le64(sbuf->dma_addr));
 		__set_cmdq_base_resp_size(msg->req, msg->req_sz,
-					  ALIGN(sbuf->size, BNXT_QPLIB_CMDQE_UNITS));
+					  ALIGN(sbuf->size,
+						BNXT_QPLIB_CMDQE_UNITS));
 	}
 
 	preq = (u8 *)msg->req;
@@ -247,11 +232,6 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
 		/* Locate the next cmdq slot */
 		sw_prod = HWQ_CMP(hwq->prod, hwq);
 		cmdqe = bnxt_qplib_get_qe(hwq, sw_prod, NULL);
-		if (!cmdqe) {
-			dev_err(&pdev->dev,
-				"RCFW request failed with no cmdqe!\n");
-			goto done;
-		}
 		/* Copy a segment of the req cmd to the cmdq */
 		memset(cmdqe, 0, sizeof(*cmdqe));
 		memcpy(cmdqe, preq, min_t(u32, bsize, sizeof(*cmdqe)));
@@ -275,12 +255,43 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
 	wmb();
 	writel(cmdq_prod, cmdq->cmdq_mbox.prod);
 	writel(RCFW_CMDQ_TRIG_VAL, cmdq->cmdq_mbox.db);
-done:
 	spin_unlock_irqrestore(&hwq->lock, flags);
 	/* Return the CREQ response pointer */
 	return 0;
 }
 
+static int __send_message_basic_sanity(struct bnxt_qplib_rcfw *rcfw,
+				       struct bnxt_qplib_cmdqmsg *msg)
+{
+	struct bnxt_qplib_cmdq_ctx *cmdq;
+	u32 opcode;
+
+	cmdq = &rcfw->cmdq;
+	opcode = __get_cmdq_base_opcode(msg->req, msg->req_sz);
+
+	/* Prevent posting if f/w is not in a state to process */
+	if (test_bit(ERR_DEVICE_DETACHED, &rcfw->cmdq.flags))
+		return -ENXIO;
+
+	if (test_bit(FIRMWARE_INITIALIZED_FLAG, &cmdq->flags) &&
+	    opcode == CMDQ_BASE_OPCODE_INITIALIZE_FW) {
+		dev_err(&rcfw->pdev->dev, "QPLIB: RCFW already initialized!");
+		return -EINVAL;
+	}
+
+	if (!test_bit(FIRMWARE_INITIALIZED_FLAG, &cmdq->flags) &&
+	    (opcode != CMDQ_BASE_OPCODE_QUERY_FUNC &&
+	     opcode != CMDQ_BASE_OPCODE_INITIALIZE_FW &&
+	     opcode != CMDQ_BASE_OPCODE_QUERY_VERSION)) {
+		dev_err(&rcfw->pdev->dev,
+			"QPLIB: RCFW not initialized, reject opcode 0x%x",
+			opcode);
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
 /**
  * __bnxt_qplib_rcfw_send_message - qplib interface to send
  * and complete rcfw command.
@@ -299,29 +310,21 @@ static int __bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
 {
 	struct creq_qp_event *evnt = (struct creq_qp_event *)msg->resp;
 	u16 cookie;
-	u8 opcode, retry_cnt = 0xFF;
 	int rc = 0;
+	u8 opcode;
 
-	/* Prevent posting if f/w is not in a state to process */
-	if (test_bit(ERR_DEVICE_DETACHED, &rcfw->cmdq.flags))
-		return 0;
+	opcode = __get_cmdq_base_opcode(msg->req, msg->req_sz);
 
-	do {
-		opcode = __get_cmdq_base_opcode(msg->req, msg->req_sz);
-		rc = __send_message(rcfw, msg);
-		cookie = le16_to_cpu(__get_cmdq_base_cookie(msg->req, msg->req_sz)) &
-				RCFW_MAX_COOKIE_VALUE;
-		if (!rc)
-			break;
-		if (!retry_cnt || (rc != -EAGAIN && rc != -EBUSY)) {
-			/* send failed */
-			dev_err(&rcfw->pdev->dev, "cmdq[%#x]=%#x send failed\n",
-				cookie, opcode);
-			return rc;
-		}
-		msg->block ? mdelay(1) : usleep_range(500, 1000);
+	rc = __send_message_basic_sanity(rcfw, msg);
+	if (rc)
+		return rc == -ENXIO ? bnxt_qplib_map_rc(opcode) : rc;
+
+	rc = __send_message(rcfw, msg);
+	if (rc)
+		return rc;
 
-	} while (retry_cnt--);
+	cookie = le16_to_cpu(__get_cmdq_base_cookie(msg->req, msg->req_sz))
+		& RCFW_MAX_COOKIE_VALUE;
 
 	if (msg->block)
 		rc = __block_for_resp(rcfw, cookie, opcode);
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
index 862bfbf..b7bbbae 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
@@ -89,6 +89,26 @@ static inline u32 bnxt_qplib_cmdqe_page_size(u32 depth)
 	return (bnxt_qplib_cmdqe_npages(depth) * PAGE_SIZE);
 }
 
+/* Get the number of command units required for the req. The
+ * function returns correct value only if called before
+ * setting using bnxt_qplib_set_cmd_slots
+ */
+static inline u32 bnxt_qplib_get_cmd_slots(struct cmdq_base *req)
+{
+	u32 cmd_units = 0;
+
+	if (HAS_TLV_HEADER(req)) {
+		struct roce_tlv *tlv_req = (struct roce_tlv *)req;
+
+		cmd_units = tlv_req->total_size;
+	} else {
+		cmd_units = (req->cmd_size + BNXT_QPLIB_CMDQE_UNITS - 1) /
+			    BNXT_QPLIB_CMDQE_UNITS;
+	}
+
+	return cmd_units;
+}
+
 static inline u32 bnxt_qplib_set_cmd_slots(struct cmdq_base *req)
 {
 	u32 cmd_byte = 0;
@@ -130,6 +150,8 @@ typedef int (*aeq_handler_t)(struct bnxt_qplib_rcfw *, void *, void *);
 struct bnxt_qplib_crsqe {
 	struct creq_qp_event	*resp;
 	u32			req_size;
+	/* Free slots at the time of submission */
+	u32			free_slots;
 };
 
 struct bnxt_qplib_rcfw_sbuf {
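
With the retry loop gone from __bnxt_qplib_rcfw_send_message(), a full command
queue now surfaces as -EAGAIN to the caller after a single posting attempt, and
an -ENXIO from the sanity check (controller recovery / device detached) is
turned into a fabricated return code through bnxt_qplib_map_rc(), introduced
elsewhere in this series. The condensed, userspace-runnable sketch below models
only that control flow; the fake_* helpers and the behaviour of fake_map_rc()
are assumptions for illustration, not the driver's implementation:

/* Condensed model of the post-patch send path; all helpers are stubs. */
#include <errno.h>
#include <stdio.h>

static int fake_basic_sanity(int device_detached)
{
	/* Stands in for __send_message_basic_sanity(): detached -> -ENXIO */
	return device_detached ? -ENXIO : 0;
}

static int fake_send_message(int queue_full)
{
	/* Stands in for __send_message(): full queue -> -EAGAIN, no retry */
	return queue_full ? -EAGAIN : 0;
}

static int fake_map_rc(void)
{
	/* Stands in for bnxt_qplib_map_rc(); returning 0 here is only an
	 * assumption about what a fabricated return code could look like.
	 */
	return 0;
}

static int send_cmd(int device_detached, int queue_full)
{
	int rc;

	rc = fake_basic_sanity(device_detached);
	if (rc)
		return rc == -ENXIO ? fake_map_rc() : rc;

	rc = fake_send_message(queue_full);
	if (rc)
		return rc;	/* single attempt: -EAGAIN reaches the caller */

	return 0;		/* the real driver would now wait for the CREQ event */
}

int main(void)
{
	printf("normal:   %d\n", send_cmd(0, 0));	/* 0 */
	printf("full:     %d\n", send_cmd(0, 1));	/* -EAGAIN */
	printf("detached: %d\n", send_cmd(1, 0));	/* fabricated code */
	return 0;
}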