From patchwork Wed Jan 24 22:45:46 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 10183217
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Dick Kennedy, James Smart
Subject: [PATCH 17/19] lpfc: Fix nonrecovery of NVME controller after cable swap.
Date: Wed, 24 Jan 2018 14:45:46 -0800
Message-Id: <20180124224548.9530-18-jsmart2021@gmail.com>
X-Mailer: git-send-email 2.13.1
In-Reply-To: <20180124224548.9530-1-jsmart2021@gmail.com>
References: <20180124224548.9530-1-jsmart2021@gmail.com>
Sender: linux-scsi-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-scsi@vger.kernel.org

In a test that is doing large numbers of cable swaps on the target, the
nvme controllers wouldn't reconnect.

During the cable swaps, the target's n_port_id would change. This
information was passed to the nvme-fc transport in the new remoteport
registration. However, the nvme-fc transport didn't update the n_port_id
value in the remoteport struct when it reused an existing structure.
Later, when a new association was attempted on the remoteport, the
driver's NVME LS routine would use the stale n_port_id from the
remoteport struct to address the LS. As the device is no longer at that
address, the LS would go into never-never land.

Separately, the nvme-fc transport will be corrected to update the
n_port_id value on a re-registration. However, for now, there's no
reason to use the transport's values. The private pointer points to the
driver's node structure and the node structure is up to date. Therefore,
revise the LS routine to use the driver's data structures for the LS.

Augmented the debug messages for better debugging in the future. Also
removed a duplicate if check that seems to have slipped in.

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
---
 drivers/scsi/lpfc/lpfc_nvme.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index 92643ffa79c3..fc6d85f0bfcf 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -241,10 +241,11 @@ lpfc_nvme_cmpl_gen_req(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
 	ndlp = (struct lpfc_nodelist *)cmdwqe->context1;
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
 			 "6047 nvme cmpl Enter "
-			 "Data %p DID %x Xri: %x status %x cmd:%p lsreg:%p "
-			 "bmp:%p ndlp:%p\n",
+			 "Data %p DID %x Xri: %x status %x reason x%x cmd:%p "
+			 "lsreg:%p bmp:%p ndlp:%p\n",
 			 pnvme_lsreq, ndlp ? ndlp->nlp_DID : 0,
 			 cmdwqe->sli4_xritag, status,
+			 (wcqe->parameter & 0xffff),
 			 cmdwqe, pnvme_lsreq, cmdwqe->context3, ndlp);
 
 	lpfc_nvmeio_data(phba, "NVME LS CMPL: xri x%x stat x%x parm x%x\n",
@@ -419,6 +420,7 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
 {
 	int ret = 0;
 	struct lpfc_nvme_lport *lport;
+	struct lpfc_nvme_rport *rport;
 	struct lpfc_vport *vport;
 	struct lpfc_nodelist *ndlp;
 	struct ulp_bde64 *bpl;
@@ -437,19 +439,18 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
 	 */
 	lport = (struct lpfc_nvme_lport *)pnvme_lport->private;
+	rport = (struct lpfc_nvme_rport *)pnvme_rport->private;
 	vport = lport->vport;
 
 	if (vport->load_flag & FC_UNLOADING)
 		return -ENODEV;
 
-	if (vport->load_flag & FC_UNLOADING)
-		return -ENODEV;
-
-	ndlp = lpfc_findnode_did(vport, pnvme_rport->port_id);
+	/* Need the ndlp. It is stored in the driver's rport. */
+	ndlp = rport->ndlp;
 	if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) {
 		lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE | LOG_NVME_IOERR,
-				 "6051 DID x%06x not an active rport.\n",
-				 pnvme_rport->port_id);
+				 "6051 Remoteport %p, rport has invalid ndlp. "
+				 "Failing LS Req\n", pnvme_rport);
 		return -ENODEV;
 	}
 
@@ -500,8 +501,9 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
 
 	/* Expand print to include key fields. */
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
-			 "6149 ENTER. lport %p, rport %p lsreq%p rqstlen:%d "
-			 "rsplen:%d %pad %pad\n",
+			 "6149 Issue LS Req to DID 0x%06x lport %p, rport %p "
+			 "lsreq%p rqstlen:%d rsplen:%d %pad %pad\n",
+			 ndlp->nlp_DID,
 			 pnvme_lport, pnvme_rport, pnvme_lsreq,
 			 pnvme_lsreq->rqstlen, pnvme_lsreq->rsplen,
 			 &pnvme_lsreq->rqstdma,
@@ -517,7 +519,7 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
 				ndlp, 2, 30, 0);
 	if (ret != WQE_SUCCESS) {
 		atomic_inc(&lport->xmt_ls_err);
-		lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC,
 				 "6052 EXIT. issue ls wqe failed lport %p, "
 				 "rport %p lsreq%p Status %x DID %x\n",
 				 pnvme_lport, pnvme_rport, pnvme_lsreq,