From patchwork Thu Jun 8 10:24:58 2023
X-Patchwork-Submitter: Selvin Xavier
X-Patchwork-Id: 13271925
From: Selvin Xavier
To: jgg@ziepe.ca, leon@kernel.org
Cc: linux-rdma@vger.kernel.org, andrew.gospodarek@broadcom.com,
 Kashyap Desai, Selvin Xavier
Subject: [PATCH for-next 07/17] RDMA/bnxt_re: use shadow qd while posting non blocking rcfw command
Date: Thu, 8 Jun 2023 03:24:58 -0700
Message-Id: <1686219908-11181-8-git-send-email-selvin.xavier@broadcom.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1686219908-11181-1-git-send-email-selvin.xavier@broadcom.com>
References: <1686219908-11181-1-git-send-email-selvin.xavier@broadcom.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Kashyap Desai

When fast path IO is running and resources are created or destroyed
from the slow path in parallel, we may notice high latency for slow
path command completion.

Introduce a shadow queue depth to limit the number of outstanding
requests to the FW. The driver will not post more than
RCFW_CMD_NON_BLOCKING_SHADOW_QD non-blocking commands to the firmware.
The shadow queue depth is a soft limit that applies only to
non-blocking commands; blocking commands are posted to the firmware as
long as there is a free slot.

Signed-off-by: Kashyap Desai
Signed-off-by: Selvin Xavier
---
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.c | 60 +++++++++++++++++++++++++++++-
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.h |  3 ++
 2 files changed, 61 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
index 3c4f72a..f7d1238 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
@@ -281,8 +281,21 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
 	return 0;
 }
 
-int bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
-				 struct bnxt_qplib_cmdqmsg *msg)
+/**
+ * __bnxt_qplib_rcfw_send_message - qplib interface to send
+ * and complete rcfw command.
+ * @rcfw - rcfw channel instance of rdev
+ * @msg - qplib message internal
+ *
+ * This function does not account for the shadow queue depth. It will
+ * post the command unconditionally as long as the send queue is not full.
+ *
+ * Returns:
+ * 0 if command completed by firmware.
+ * Non zero if the command is not completed by firmware.
+ */
+static int __bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+					  struct bnxt_qplib_cmdqmsg *msg)
 {
 	struct creq_qp_event *evnt = (struct creq_qp_event *)msg->resp;
 	u16 cookie;
@@ -331,6 +344,48 @@ int bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
 
 	return rc;
 }
+
+/**
+ * bnxt_qplib_rcfw_send_message - qplib interface to send
+ * and complete rcfw command.
+ * @rcfw - rcfw channel instance of rdev
+ * @msg - qplib message internal
+ *
+ * Driver interacts with firmware through rcfw channel/slow path in two ways:
+ * a. Blocking rcfw command send. In this path, the driver cannot hold
+ * the context for a longer period since it holds the cpu until the
+ * command is completed.
+ * b. Non-blocking rcfw command send. In this path, the driver can hold
+ * the context for a longer period. Many commands may be pending
+ * completion because of the non-blocking nature.
+ *
+ * The driver uses a shadow queue depth. The current queue depth of 8K
+ * (due to the rcfw message size, ~4K rcfw commands can be outstanding)
+ * is not optimal for rcfw command processing in firmware.
+ *
+ * Restrict at most #RCFW_CMD_NON_BLOCKING_SHADOW_QD non-blocking rcfw
+ * commands. Allow all blocking commands as long as the queue is not full.
+ *
+ * Returns:
+ * 0 if command completed by firmware.
+ * Non zero if the command is not completed by firmware.
+ */
+int bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+				 struct bnxt_qplib_cmdqmsg *msg)
+{
+	int ret;
+
+	if (!msg->block) {
+		down(&rcfw->rcfw_inflight);
+		ret = __bnxt_qplib_rcfw_send_message(rcfw, msg);
+		up(&rcfw->rcfw_inflight);
+	} else {
+		ret = __bnxt_qplib_rcfw_send_message(rcfw, msg);
+	}
+
+	return ret;
+}
+
 /* Completions */
 static int bnxt_qplib_process_func_event(struct bnxt_qplib_rcfw *rcfw,
 					 struct creq_func_event *func_event)
@@ -932,6 +987,7 @@ int bnxt_qplib_enable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw,
 		return rc;
 	}
 
+	sema_init(&rcfw->rcfw_inflight, RCFW_CMD_NON_BLOCKING_SHADOW_QD);
 	bnxt_qplib_start_rcfw(rcfw);
 
 	return 0;
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
index 32e5b67..862bfbf 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
@@ -66,6 +66,8 @@ static inline void bnxt_qplib_rcfw_cmd_prep(struct cmdq_base *req,
 	req->cmd_size = cmd_size;
 }
 
+/* Shadow queue depth for non blocking command */
+#define RCFW_CMD_NON_BLOCKING_SHADOW_QD 64
 #define RCFW_CMD_WAIT_TIME_MS			20000 /* 20 Seconds timeout */
 
 /* CMDQ elements */
@@ -197,6 +199,7 @@ struct bnxt_qplib_rcfw {
 	u64 oos_prev;
 	u32 init_oos_stats;
 	u32 cmdq_depth;
+	struct semaphore rcfw_inflight;
 };
 
 struct bnxt_qplib_cmdqmsg {
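
For readers who want to try the throttling idea outside the kernel tree, here is a
minimal standalone sketch using POSIX semaphores. It mirrors the pattern in the patch:
a counting semaphore initialized to the shadow queue depth caps how many non-blocking
submissions are in flight at once, while blocking submissions bypass the cap. All names
in the sketch (SHADOW_QD, submit_cmd, send_cmd) are illustrative stand-ins and not part
of the driver.

#include <semaphore.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for RCFW_CMD_NON_BLOCKING_SHADOW_QD. */
#define SHADOW_QD	64

static sem_t shadow_qd;

/* Stand-in for the low-level send that only needs a free queue slot. */
static int submit_cmd(int cmd_id)
{
	printf("submitting command %d\n", cmd_id);
	return 0;
}

/*
 * Non-blocking senders take the counting semaphore first, so at most
 * SHADOW_QD of them are in flight; blocking senders are not throttled.
 */
static int send_cmd(int cmd_id, bool block)
{
	int rc;

	if (!block) {
		sem_wait(&shadow_qd);
		rc = submit_cmd(cmd_id);
		sem_post(&shadow_qd);
	} else {
		rc = submit_cmd(cmd_id);
	}

	return rc;
}

int main(void)
{
	sem_init(&shadow_qd, 0, SHADOW_QD);

	send_cmd(1, false);	/* throttled, non-blocking path */
	send_cmd(2, true);	/* blocking path, bypasses the cap */

	sem_destroy(&shadow_qd);
	return 0;
}

With concurrent callers, a non-blocking caller that finds SHADOW_QD commands already in
flight sleeps in sem_wait() until one of them completes, which is the same soft
back-pressure the patch applies with down()/up() on rcfw_inflight.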