From patchwork Tue Nov 21 00:00:43 2017
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 10067511
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Dick Kennedy, James Smart
Subject: [PATCH v3 16/17] lpfc: small sg cnt cleanup
Date: Mon, 20 Nov 2017 16:00:43 -0800
Message-Id: <20171121000044.27702-17-jsmart2021@gmail.com>
X-Mailer: git-send-email 2.13.1
In-Reply-To: <20171121000044.27702-1-jsmart2021@gmail.com>
References: <20171121000044.27702-1-jsmart2021@gmail.com>

The logic for sg_seg_cnt is a bit convoluted. This patch cleans up a
couple of areas, especially around the +2 and +1 logic.

This patch:
- Cleans up the lpfc_sg_seg_cnt attribute to specify a real minimum
  rather than making the minimum whatever the default happens to be.
- Removes the hardcoded +2 (the number of SGL elements used for the
  cmd IU and rsp IU) and +1 (an additional entry to compensate for
  NVME's reduction of I/O size based on a possible partial page) logic
  from SG list initialization. Where the +1 logic is referenced in the
  host and target I/O checks, the values set in the transport templates
  are used instead, as those values were already set correctly.

More can certainly be done in this area; it will be addressed in the
combined host/target driver effort.

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
---
(For reference, a small hypothetical sketch of the resulting SGL sizing
arithmetic follows the diff below.)

 drivers/scsi/lpfc/lpfc.h       |  1 +
 drivers/scsi/lpfc/lpfc_attr.c  |  2 +-
 drivers/scsi/lpfc/lpfc_init.c  | 19 ++++++++++++++-----
 drivers/scsi/lpfc/lpfc_nvme.c  |  3 ++-
 drivers/scsi/lpfc/lpfc_nvmet.c |  2 +-
 5 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
index 46a89bdff8e4..dd2191c83052 100644
--- a/drivers/scsi/lpfc/lpfc.h
+++ b/drivers/scsi/lpfc/lpfc.h
@@ -55,6 +55,7 @@ struct lpfc_sli2_slim;
 #define LPFC_MAX_SG_SLI4_SEG_CNT_DIF 128 /* sg element count per scsi cmnd */
 #define LPFC_MAX_SG_SEG_CNT_DIF 512  /* sg element count per scsi cmnd */
 #define LPFC_MAX_SG_SEG_CNT 4096     /* sg element count per scsi cmnd */
+#define LPFC_MIN_SG_SEG_CNT 32       /* sg element count per scsi cmnd */
 #define LPFC_MAX_SGL_SEG_CNT 512     /* SGL element count per scsi cmnd */
 #define LPFC_MAX_BPL_SEG_CNT 4096    /* BPL element count per scsi cmnd */
 #define LPFC_MAX_NVME_SEG_CNT 256    /* max SGL element cnt per NVME cmnd */
diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index 598e07f43912..74d6fe984df4 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -5135,7 +5135,7 @@ LPFC_ATTR(delay_discovery, 0, 0, 1,
  * this parameter will be limited to 128 if BlockGuard is enabled under SLI4
  * and will be limited to 512 if BlockGuard is enabled under SLI3.
  */
-LPFC_ATTR_R(sg_seg_cnt, LPFC_DEFAULT_SG_SEG_CNT, LPFC_DEFAULT_SG_SEG_CNT,
+LPFC_ATTR_R(sg_seg_cnt, LPFC_DEFAULT_SG_SEG_CNT, LPFC_MIN_SG_SEG_CNT,
 	    LPFC_MAX_SG_SEG_CNT, "Max Scatter Gather Segment Count");

 /*
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index c466ceb43bc9..92dc865ca52c 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -5812,6 +5812,7 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
 	struct lpfc_mqe *mqe;
 	int longs;
 	int fof_vectors = 0;
+	int extra;
 	uint64_t wwn;

 	phba->sli4_hba.num_online_cpu = num_online_cpus();
@@ -5867,13 +5868,21 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
 	 */

 	/*
+	 * 1 for cmd, 1 for rsp, NVME adds an extra one
+	 * for boundary conditions in its max_sgl_segment template.
+	 */
+	extra = 2;
+	if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
+		extra++;
+
+	/*
 	 * It doesn't matter what family our adapter is in, we are
 	 * limited to 2 Pages, 512 SGEs, for our SGL.
 	 * There are going to be 2 reserved SGEs: 1 FCP cmnd + 1 FCP rsp
 	 */
 	max_buf_size = (2 * SLI4_PAGE_SIZE);
-	if (phba->cfg_sg_seg_cnt > LPFC_MAX_SGL_SEG_CNT - 2)
-		phba->cfg_sg_seg_cnt = LPFC_MAX_SGL_SEG_CNT - 2;
+	if (phba->cfg_sg_seg_cnt > LPFC_MAX_SGL_SEG_CNT - extra)
+		phba->cfg_sg_seg_cnt = LPFC_MAX_SGL_SEG_CNT - extra;

 	/*
 	 * Since lpfc_sg_seg_cnt is module param, the sg_dma_buf_size
@@ -5906,14 +5915,14 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
 		 */
 		phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd) +
 				sizeof(struct fcp_rsp) +
-				((phba->cfg_sg_seg_cnt + 2) *
+				((phba->cfg_sg_seg_cnt + extra) *
 				sizeof(struct sli4_sge));

 		/* Total SGEs for scsi_sg_list */
-		phba->cfg_total_seg_cnt = phba->cfg_sg_seg_cnt + 2;
+		phba->cfg_total_seg_cnt = phba->cfg_sg_seg_cnt + extra;

 		/*
-		 * NOTE: if (phba->cfg_sg_seg_cnt + 2) <= 256 we only
+		 * NOTE: if (phba->cfg_sg_seg_cnt + extra) <= 256 we only
 		 * need to post 1 page for the SGL.
 		 */
 	}
diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index 50bbc61bfe5d..ce2186673dad 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -62,6 +62,7 @@ lpfc_get_nvme_buf(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp);
 static void
 lpfc_release_nvme_buf(struct lpfc_hba *, struct lpfc_nvme_buf *);

+static struct nvme_fc_port_template lpfc_nvme_template;

 /**
  * lpfc_nvme_create_queue -
@@ -1174,7 +1175,7 @@ lpfc_nvme_prep_io_dma(struct lpfc_vport *vport,
 		first_data_sgl = sgl;
 		lpfc_ncmd->seg_cnt = nCmd->sg_cnt;
-		if (lpfc_ncmd->seg_cnt > phba->cfg_nvme_seg_cnt + 1) {
+		if (lpfc_ncmd->seg_cnt > lpfc_nvme_template.max_sgl_segments) {
 			lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 					"6058 Too many sg segments from "
 					"NVME Transport. Max %d, "
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index 2b50aecc2722..d80cd1def3b9 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -2003,7 +2003,7 @@ lpfc_nvmet_prep_fcp_wqe(struct lpfc_hba *phba,
 		return NULL;
 	}

-	if (rsp->sg_cnt > phba->cfg_nvme_seg_cnt) {
+	if (rsp->sg_cnt > lpfc_tgttemplate.max_sgl_segments) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR,
 				"6109 NVMET prep FCP wqe: seg cnt err: "
 				"NPORT x%x oxid x%x ste %d cnt %d\n",
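
For anyone tracing the arithmetic above, here is a small stand-alone sketch of
how the "extra" count feeds the SLI4 SGL sizing. It is illustrative only, not
driver code: the sizeof() stand-ins, the NVME-enable flag, and the configured
segment count are hypothetical values; the constants and the <= 256 one-page
note mirror the hunks above.

#include <stdio.h>

/* Hypothetical stand-ins for the driver constants referenced in the patch. */
#define SLI4_PAGE_SIZE		4096
#define LPFC_MAX_SGL_SEG_CNT	512	/* SGL element count per scsi cmnd */
#define SLI4_SGE_SIZE		16	/* assumed sizeof(struct sli4_sge) */
#define FCP_CMND_SIZE		32	/* assumed sizeof(struct fcp_cmnd) */
#define FCP_RSP_SIZE		96	/* assumed sizeof(struct fcp_rsp) */

int main(void)
{
	int cfg_sg_seg_cnt = 64;	/* hypothetical lpfc_sg_seg_cnt value */
	int nvme_enabled = 1;		/* hypothetical LPFC_ENABLE_NVME state */

	/* 1 SGE for the cmd IU, 1 for the rsp IU; NVME reserves one more
	 * for the partial-page case in its max_sgl_segments template. */
	int extra = 2;
	if (nvme_enabled)
		extra++;

	/* 2-page SGL limit, and the same clamp the patch applies. */
	int max_buf_size = 2 * SLI4_PAGE_SIZE;
	if (cfg_sg_seg_cnt > LPFC_MAX_SGL_SEG_CNT - extra)
		cfg_sg_seg_cnt = LPFC_MAX_SGL_SEG_CNT - extra;

	/* Per-command DMA buffer: cmd IU + rsp IU + all SGEs. */
	int sg_dma_buf_size = FCP_CMND_SIZE + FCP_RSP_SIZE +
			      (cfg_sg_seg_cnt + extra) * SLI4_SGE_SIZE;
	int total_seg_cnt = cfg_sg_seg_cnt + extra;

	printf("extra=%d max_buf_size=%d sg_dma_buf_size=%d total_seg_cnt=%d (%s)\n",
	       extra, max_buf_size, sg_dma_buf_size, total_seg_cnt,
	       total_seg_cnt <= 256 ? "1 SGL page" : "2 SGL pages");
	return 0;
}

With the hypothetical inputs above this prints extra=3 and a total segment
count of 67, i.e. the common case where a single SGL page suffices.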