From patchwork Thu Jun 8 10:25:03 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Selvin Xavier
X-Patchwork-Id: 13271930
From: Selvin Xavier
To: jgg@ziepe.ca, leon@kernel.org
Cc: linux-rdma@vger.kernel.org, andrew.gospodarek@broadcom.com,
    Kashyap Desai, Selvin Xavier
Subject: [PATCH for-next 12/17] RDMA/bnxt_re: post destroy_ah for delayed completion of AH creation
Date: Thu, 8 Jun 2023 03:25:03 -0700
Message-Id: <1686219908-11181-13-git-send-email-selvin.xavier@broadcom.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1686219908-11181-1-git-send-email-selvin.xavier@broadcom.com>
References: <1686219908-11181-1-git-send-email-selvin.xavier@broadcom.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Kashyap Desai

AH create may be called from interrupt context, and the driver has a special
timeout (8 sec) for this command to avoid soft lockups when the FW command
takes more time. The driver returns -ETIMEDOUT and fails the create AH,
without waiting for the actual completion from firmware. When the FW
completion is received, use the is_waiter_alive flag to avoid the regular
completion path.

If a create_ah opcode is detected in the completion path and its waiter is no
longer alive, the driver fetches the ah_id from the successful firmware
completion in the interrupt context and sends a destroy_ah command for the
same ah_id. This special post is done in a quick manner using the helper
function __send_message_no_waiter.

timeout_send is used only for debugging purposes. If the timeout_send value
keeps incrementing, it indicates an out-of-sync active AH counter between
driver and firmware. This is a limitation, but graceful handling is possible
in the future.

Signed-off-by: Kashyap Desai
Signed-off-by: Selvin Xavier
---
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.c | 108 +++++++++++++++++++++++++++++
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.h |   2 +
 2 files changed, 110 insertions(+)

diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
index 4d208ac..3461e3b 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
@@ -177,6 +177,73 @@ static int __block_for_resp(struct bnxt_qplib_rcfw *rcfw, u16 cookie, u8 opcode)
 	return -ETIMEDOUT;
 };
 
+/* __send_message_no_waiter - get cookie and post the message.
+ * @rcfw - rcfw channel instance of rdev
+ * @msg - qplib message internal
+ *
+ * This function will just post and not bother about completion.
+ * Current design of this function is -
+ * user must hold the completion queue hwq->lock.
+ * user must have used existing completion and freed the resources.
+ * this function will not check queue full condition.
+ * this function will explicitly set is_waiter_alive=false.
+ * current use case is - send destroy_ah if create_ah completes
+ * after the waiter of create_ah is lost. It can be extended for other
+ * use cases as well.
+ *
+ * Returns: Nothing
+ *
+ */
+static void __send_message_no_waiter(struct bnxt_qplib_rcfw *rcfw,
+				     struct bnxt_qplib_cmdqmsg *msg)
+{
+	struct bnxt_qplib_cmdq_ctx *cmdq = &rcfw->cmdq;
+	struct bnxt_qplib_hwq *hwq = &cmdq->hwq;
+	struct bnxt_qplib_crsqe *crsqe;
+	struct bnxt_qplib_cmdqe *cmdqe;
+	u32 sw_prod, cmdq_prod;
+	u16 cookie, cbit;
+	u32 bsize;
+	u8 *preq;
+
+	cookie = cmdq->seq_num & RCFW_MAX_COOKIE_VALUE;
+	cbit = cookie % rcfw->cmdq_depth;
+
+	set_bit(cbit, cmdq->cmdq_bitmap);
+	__set_cmdq_base_cookie(msg->req, msg->req_sz, cpu_to_le16(cookie));
+	crsqe = &rcfw->crsqe_tbl[cbit];
+
+	/* Set cmd_size in terms of 16B slots in req. */
+	bsize = bnxt_qplib_set_cmd_slots(msg->req);
+	/* GET_CMD_SIZE would return number of slots in either case of tlv
+	 * and non-tlv commands after call to bnxt_qplib_set_cmd_slots()
+	 */
+	crsqe->is_internal_cmd = true;
+	crsqe->is_waiter_alive = false;
+	crsqe->req_size = __get_cmdq_base_cmd_size(msg->req, msg->req_sz);
+
+	preq = (u8 *)msg->req;
+	do {
+		/* Locate the next cmdq slot */
+		sw_prod = HWQ_CMP(hwq->prod, hwq);
+		cmdqe = bnxt_qplib_get_qe(hwq, sw_prod, NULL);
+		/* Copy a segment of the req cmd to the cmdq */
+		memset(cmdqe, 0, sizeof(*cmdqe));
+		memcpy(cmdqe, preq, min_t(u32, bsize, sizeof(*cmdqe)));
+		preq += min_t(u32, bsize, sizeof(*cmdqe));
+		bsize -= min_t(u32, bsize, sizeof(*cmdqe));
+		hwq->prod++;
+	} while (bsize > 0);
+	cmdq->seq_num++;
+
+	cmdq_prod = hwq->prod;
+	atomic_inc(&rcfw->timeout_send);
+	/* ring CMDQ DB */
+	wmb();
+	writel(cmdq_prod, cmdq->cmdq_mbox.prod);
+	writel(RCFW_CMDQ_TRIG_VAL, cmdq->cmdq_mbox.db);
+}
+
 static int __send_message(struct bnxt_qplib_rcfw *rcfw,
 			  struct bnxt_qplib_cmdqmsg *msg)
 {
@@ -223,6 +290,7 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
 	crsqe->free_slots = free_slots;
 	crsqe->resp = (struct creq_qp_event *)msg->resp;
 	crsqe->resp->cookie = cpu_to_le16(cookie);
+	crsqe->is_internal_cmd = false;
 	crsqe->is_waiter_alive = true;
 	crsqe->req_size = __get_cmdq_base_cmd_size(msg->req, msg->req_sz);
 	if (__get_cmdq_base_resp_size(msg->req, msg->req_sz) && msg->sb) {
@@ -347,6 +415,26 @@ static int __send_message_basic_sanity(struct bnxt_qplib_rcfw *rcfw,
 	return 0;
 }
 
+/* This function will just post and not bother about completion */
+static void __destroy_timedout_ah(struct bnxt_qplib_rcfw *rcfw,
+				  struct creq_create_ah_resp *create_ah_resp)
+{
+	struct bnxt_qplib_cmdqmsg msg = {};
+	struct cmdq_destroy_ah req = {};
+
+	bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
+				 CMDQ_BASE_OPCODE_DESTROY_AH,
+				 sizeof(req));
+	req.ah_cid = create_ah_resp->xid;
+	msg.req = (struct cmdq_base *)&req;
+	msg.req_sz = sizeof(req);
+	__send_message_no_waiter(rcfw, &msg);
+	dev_info_ratelimited(&rcfw->pdev->dev,
+			     "From %s: ah_cid = %d timeout_send %d\n",
+			     __func__, req.ah_cid,
+			     atomic_read(&rcfw->timeout_send));
+}
+
 /**
  * __bnxt_qplib_rcfw_send_message - qplib interface to send
  * and complete rcfw command.
@@ -568,6 +656,8 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
 	if (!test_and_clear_bit(cbit, rcfw->cmdq.cmdq_bitmap))
 		dev_warn(&pdev->dev, "CMD bit %d was not requested\n", cbit);
+	if (crsqe->is_internal_cmd && !qp_event->status)
+		atomic_dec(&rcfw->timeout_send);
 	if (crsqe->is_waiter_alive) {
 		if (crsqe->resp)
@@ -584,6 +674,24 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
 		crsqe->resp = NULL;
 		hwq->cons += req_size;
+
+		/* This is a case to handle the below scenario -
+		 * Create AH is completed successfully by firmware,
+		 * but the completion took more time and the driver already
+		 * lost the context of create_ah from the caller.
+		 * We have already returned failure for the create_ah verb,
+		 * so let's destroy the same address vector since it is
+		 * no longer used in the stack. We don't care about completion
+		 * in __send_message_no_waiter.
+		 * If destroy_ah fails in firmware, there will be an AH
+		 * resource leak, which is relatively non-critical and an
+		 * unlikely scenario. The current design does not handle it.
+		 */
+		if (!is_waiter_alive && !qp_event->status &&
+		    qp_event->event == CREQ_QP_EVENT_EVENT_CREATE_AH)
+			__destroy_timedout_ah(rcfw,
+					      (struct creq_create_ah_resp *)
+					      qp_event);
 		spin_unlock_irqrestore(&hwq->lock, flags);
 	}
 	*num_wait += wait_cmds;
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
index 54576f1..338bf6a 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
@@ -153,6 +153,7 @@ struct bnxt_qplib_crsqe {
 	/* Free slots at the time of submission */
 	u32 free_slots;
 	bool is_waiter_alive;
+	bool is_internal_cmd;
 };
 
 struct bnxt_qplib_rcfw_sbuf {
@@ -225,6 +226,7 @@ struct bnxt_qplib_rcfw {
 	u32 cmdq_depth;
 	atomic_t rcfw_intr_enabled;
 	struct semaphore rcfw_inflight;
+	atomic_t timeout_send;
 };
 
 struct bnxt_qplib_cmdqmsg {
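
For readers outside the driver, here is a minimal, self-contained C sketch of the
scheme the commit message describes: a completion handler that, on seeing a
successful CREATE_AH completion whose waiter has already timed out, immediately
posts an internal fire-and-forget DESTROY_AH for the returned ah_id. All names,
types and the printf-based "post" are hypothetical illustrations for this sketch
only; they are not the bnxt_re driver's actual API.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, simplified model of the flow described above. */
enum fw_opcode { FW_CREATE_AH, FW_DESTROY_AH, FW_OTHER };

struct fw_completion {
	enum fw_opcode opcode;
	int status;          /* 0 on success */
	unsigned int ah_id;  /* valid for a successful CREATE_AH */
};

struct cmd_slot {
	bool is_waiter_alive;   /* false once the caller gave up (timeout) */
	bool is_internal_cmd;   /* posted by the driver itself, no waiter */
};

static int timeout_send;        /* debug counter, as in the patch */

/* Fire-and-forget post; nobody waits for this command's completion. */
static void send_no_waiter(enum fw_opcode op, unsigned int ah_id)
{
	timeout_send++;
	printf("posted internal opcode %d for ah_id %u (timeout_send=%d)\n",
	       op, ah_id, timeout_send);
}

static void handle_completion(struct cmd_slot *slot,
			      const struct fw_completion *cmpl)
{
	/* A successful internal command completed: drop the debug counter. */
	if (slot->is_internal_cmd && !cmpl->status)
		timeout_send--;

	if (slot->is_waiter_alive)
		return;         /* normal path: wake the waiter (omitted) */

	/*
	 * The waiter already timed out.  If firmware nevertheless created
	 * the AH, destroy it right away so driver and firmware stay in sync.
	 */
	if (!cmpl->status && cmpl->opcode == FW_CREATE_AH)
		send_no_waiter(FW_DESTROY_AH, cmpl->ah_id);
}

int main(void)
{
	struct cmd_slot slot = { .is_waiter_alive = false };
	struct fw_completion late = { FW_CREATE_AH, 0, 42 };

	handle_completion(&slot, &late);   /* triggers the internal destroy */
	return 0;
}
```

As in the patch, the counter only decrements on a successful internal completion,
so a steadily growing timeout_send hints that destroy_ah posts are being lost and
the driver's and firmware's active-AH counts have drifted apart.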