From patchwork Tue Nov 21 00:00:36 2017
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 10067523
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Dick Kennedy, James Smart
Subject: [PATCH v3 09/17] lpfc: Adjust default value of lpfc_nvmet_mrq
Date: Mon, 20 Nov 2017 16:00:36 -0800
Message-Id: <20171121000044.27702-10-jsmart2021@gmail.com>
X-Mailer: git-send-email 2.13.1
In-Reply-To: <20171121000044.27702-1-jsmart2021@gmail.com>
References: <20171121000044.27702-1-jsmart2021@gmail.com>

The current default for async hw receive queues is 1, which presents
issues under heavy load, as the number of queues influences the
available async receive buffer limits.

Raise the default to either the current hw limit (16) or the number of
hw queues configured (the io channel value), whichever is smaller.

Revise the attribute definition for mrq to better reflect what is done
for hw queues: 0 means default to the optimal value (the number of
cpus), while a non-zero value specifies an explicit limit.

Before this change, mrq=0 meant target mode was disabled. As 0 now has
a different meaning, rework the if tests to use the more appropriate
nvmet_support check.

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
---
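As a stand-alone sketch (not part of the patch), the defaulting and
clamping behavior described above might look like the following; the
LPFC_NVMET_MRQ_* names match the macros added in lpfc_nvmet.h below,
while struct hba, resolve_nvmet_mrq() and main() are invented for the
illustration:

#include <stdio.h>

#define LPFC_NVMET_MRQ_OFF	0xffff
#define LPFC_NVMET_MRQ_AUTO	0
#define LPFC_NVMET_MRQ_MAX	16

struct hba {
	int nvmet_support;
	unsigned int cfg_nvmet_mrq;
	unsigned int cfg_nvme_io_channel;
};

static void resolve_nvmet_mrq(struct hba *phba)
{
	if (phba->nvmet_support) {
		/* AUTO (0): default to one RQ pair per io channel */
		if (phba->cfg_nvmet_mrq == LPFC_NVMET_MRQ_AUTO)
			phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
		/* never more RQ pairs than io channels ... */
		if (phba->cfg_nvmet_mrq > phba->cfg_nvme_io_channel)
			phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
		/* ... and never more than the hw limit */
		if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
			phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;
	} else {
		/* initiator only: MRQ is off, no longer encoded as 0 */
		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_OFF;
	}
}

int main(void)
{
	struct hba h = { 1, LPFC_NVMET_MRQ_AUTO, 32 };

	resolve_nvmet_mrq(&h);
	printf("mrq=%u\n", h.cfg_nvmet_mrq); /* 16: auto, capped at hw max */
	return 0;
}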
 drivers/scsi/lpfc/lpfc_attr.c    | 11 ++++++++--
 drivers/scsi/lpfc/lpfc_debugfs.c |  2 +-
 drivers/scsi/lpfc/lpfc_init.c    | 47 ++++++++++++++++++++++++----------------
 drivers/scsi/lpfc/lpfc_nvmet.h   |  4 ++++
 4 files changed, 42 insertions(+), 22 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index 4dcd129ca901..598e07f43912 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -3361,12 +3361,13 @@ LPFC_ATTR_R(suppress_rsp, 1, 0, 1,
 
 /*
  * lpfc_nvmet_mrq: Specify number of RQ pairs for processing NVMET cmds
+ * lpfc_nvmet_mrq = 0  driver will calculate optimal number of RQ pairs
  * lpfc_nvmet_mrq = 1  use a single RQ pair
  * lpfc_nvmet_mrq >= 2  use specified RQ pairs for MRQ
  *
  */
 LPFC_ATTR_R(nvmet_mrq,
-	    1, 1, 16,
+	    LPFC_NVMET_MRQ_AUTO, LPFC_NVMET_MRQ_AUTO, LPFC_NVMET_MRQ_MAX,
 	    "Specify number of RQ pairs for processing NVMET cmds");
 
 /*
@@ -6357,6 +6358,9 @@ lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
 			phba->cfg_nvmet_fb_size = LPFC_NVMET_FB_SZ_MAX;
 		}
 
+		if (!phba->cfg_nvmet_mrq)
+			phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
+
 		/* Adjust lpfc_nvmet_mrq to avoid running out of WQE slots */
 		if (phba->cfg_nvmet_mrq > phba->cfg_nvme_io_channel) {
 			phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
@@ -6364,10 +6368,13 @@ lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
 					"6018 Adjust lpfc_nvmet_mrq to %d\n",
 					phba->cfg_nvmet_mrq);
 		}
+		if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
+			phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;
+
 	} else {
 		/* Not NVME Target mode.  Turn off Target parameters. */
 		phba->nvmet_support = 0;
-		phba->cfg_nvmet_mrq = 0;
+		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_OFF;
 		phba->cfg_nvmet_fb_size = 0;
 	}
 
diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
index 4df5a21bd93b..b7f57492aefc 100644
--- a/drivers/scsi/lpfc/lpfc_debugfs.c
+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
@@ -3213,7 +3213,7 @@ lpfc_idiag_cqs_for_eq(struct lpfc_hba *phba, char *pbuffer,
 		return 1;
 	}
 
-	if (eqidx < phba->cfg_nvmet_mrq) {
+	if ((eqidx < phba->cfg_nvmet_mrq) && phba->nvmet_support) {
 		/* NVMET CQset */
 		qp = phba->sli4_hba.nvmet_cqset[eqidx];
 		*len = __lpfc_idiag_print_cq(qp, "NVMET CQset", pbuffer, *len);
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index fc9f91327724..7a06f23a3baf 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -7939,8 +7939,12 @@ lpfc_sli4_queue_verify(struct lpfc_hba *phba)
 	phba->cfg_fcp_io_channel = io_channel;
 	if (phba->cfg_nvme_io_channel > io_channel)
 		phba->cfg_nvme_io_channel = io_channel;
-	if (phba->cfg_nvme_io_channel < phba->cfg_nvmet_mrq)
-		phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
+	if (phba->nvmet_support) {
+		if (phba->cfg_nvme_io_channel < phba->cfg_nvmet_mrq)
+			phba->cfg_nvmet_mrq = phba->cfg_nvme_io_channel;
+	}
+	if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
+		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;
 
 	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
 			"2574 IO channels: irqs %d fcp %d nvme %d MRQ: %d\n",
@@ -8454,13 +8458,15 @@ lpfc_sli4_queue_destroy(struct lpfc_hba *phba)
 	/* Release NVME CQ mapping array */
 	lpfc_sli4_release_queue_map(&phba->sli4_hba.nvme_cq_map);
 
-	lpfc_sli4_release_queues(&phba->sli4_hba.nvmet_cqset,
-				 phba->cfg_nvmet_mrq);
+	if (phba->nvmet_support) {
+		lpfc_sli4_release_queues(&phba->sli4_hba.nvmet_cqset,
+					 phba->cfg_nvmet_mrq);
 
-	lpfc_sli4_release_queues(&phba->sli4_hba.nvmet_mrq_hdr,
-				 phba->cfg_nvmet_mrq);
-	lpfc_sli4_release_queues(&phba->sli4_hba.nvmet_mrq_data,
-				 phba->cfg_nvmet_mrq);
+		lpfc_sli4_release_queues(&phba->sli4_hba.nvmet_mrq_hdr,
+					 phba->cfg_nvmet_mrq);
+		lpfc_sli4_release_queues(&phba->sli4_hba.nvmet_mrq_data,
+					 phba->cfg_nvmet_mrq);
+	}
 
 	/* Release mailbox command work queue */
 	__lpfc_sli4_release_queue(&phba->sli4_hba.mbx_wq);
@@ -9015,19 +9021,22 @@ lpfc_sli4_queue_unset(struct lpfc_hba *phba)
 		for (qidx = 0; qidx < phba->cfg_nvme_io_channel; qidx++)
 			lpfc_cq_destroy(phba, phba->sli4_hba.nvme_cq[qidx]);
 
-	/* Unset NVMET MRQ queue */
-	if (phba->sli4_hba.nvmet_mrq_hdr) {
-		for (qidx = 0; qidx < phba->cfg_nvmet_mrq; qidx++)
-			lpfc_rq_destroy(phba,
+	if (phba->nvmet_support) {
+		/* Unset NVMET MRQ queue */
+		if (phba->sli4_hba.nvmet_mrq_hdr) {
+			for (qidx = 0; qidx < phba->cfg_nvmet_mrq; qidx++)
+				lpfc_rq_destroy(
+					phba,
 					phba->sli4_hba.nvmet_mrq_hdr[qidx],
 					phba->sli4_hba.nvmet_mrq_data[qidx]);
-	}
+		}
 
-	/* Unset NVMET CQ Set complete queue */
-	if (phba->sli4_hba.nvmet_cqset) {
-		for (qidx = 0; qidx < phba->cfg_nvmet_mrq; qidx++)
-			lpfc_cq_destroy(phba,
-					phba->sli4_hba.nvmet_cqset[qidx]);
+		/* Unset NVMET CQ Set complete queue */
+		if (phba->sli4_hba.nvmet_cqset) {
+			for (qidx = 0; qidx < phba->cfg_nvmet_mrq; qidx++)
+				lpfc_cq_destroy(
+					phba, phba->sli4_hba.nvmet_cqset[qidx]);
+		}
 	}
 
 	/* Unset FCP response complete queue */
@@ -10403,7 +10412,7 @@ lpfc_get_sli4_parameters(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
 	    !phba->nvme_support) {
 		phba->nvme_support = 0;
 		phba->nvmet_support = 0;
-		phba->cfg_nvmet_mrq = 0;
+		phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_OFF;
 		phba->cfg_nvme_io_channel = 0;
 		phba->io_channel_irqs = phba->cfg_fcp_io_channel;
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_NVME,
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.h b/drivers/scsi/lpfc/lpfc_nvmet.h
index 25a65b0bb7f3..6723e7b81946 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.h
+++ b/drivers/scsi/lpfc/lpfc_nvmet.h
@@ -25,6 +25,10 @@
 #define LPFC_NVMET_RQE_DEF_COUNT	512
 #define LPFC_NVMET_SUCCESS_LEN	12
 
+#define LPFC_NVMET_MRQ_OFF	0xffff
+#define LPFC_NVMET_MRQ_AUTO	0
+#define LPFC_NVMET_MRQ_MAX	16
+
 /* Used for NVME Target */
 struct lpfc_nvmet_tgtport {
 	struct lpfc_hba *phba;
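For reference, LPFC_ATTR_R also exposes the attribute as a module
parameter, so the reworked semantics can be exercised at load time;
the values below are only illustrative:

	# 0 (LPFC_NVMET_MRQ_AUTO, the new default): size the RQ pairs
	# from the io channel count, capped at the hw limit of 16
	modprobe lpfc lpfc_nvmet_mrq=0

	# explicit number of RQ pairs
	modprobe lpfc lpfc_nvmet_mrq=4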