From patchwork Fri Jan 26 19:22:28 2018
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 10186907
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Dick Kennedy, James Smart
Subject: [PATCH v2 08/19] lpfc: Fix RQ empty firmware trap
Date: Fri, 26 Jan 2018 11:22:28 -0800
Message-Id: <20180126192239.10727-9-jsmart2021@gmail.com>
X-Mailer: git-send-email 2.13.1
In-Reply-To: <20180126192239.10727-1-jsmart2021@gmail.com>
References: <20180126192239.10727-1-jsmart2021@gmail.com>

When the nvme target deferred receive logic waits for exchange resources,
the corresponding receive buffer is not replenished to the hardware. This
can result in a lack of asynchronous receive buffer resources in the
hardware, producing a "2885 Port Status Event: ... error 1=0x52004a01 ..."
message.

Correct by replenishing the buffer whenever the deferred logic kicks in.
Update the corresponding debug messages and statistics as well.

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
---
 drivers/scsi/lpfc/lpfc_attr.c  |  6 ++++++
 drivers/scsi/lpfc/lpfc_mem.c   |  8 ++++++--
 drivers/scsi/lpfc/lpfc_nvmet.c | 31 +++++++++++++++++++++----------
 drivers/scsi/lpfc/lpfc_nvmet.h |  7 +++++--
 drivers/scsi/lpfc/lpfc_sli.c   | 12 ++++++++++++
 5 files changed, 50 insertions(+), 14 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index d188fb565a32..91df2b4e9fce 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -259,6 +259,12 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
 				atomic_read(&tgtp->xmt_abort_rsp),
 				atomic_read(&tgtp->xmt_abort_rsp_error));
 
+		len += snprintf(buf + len, PAGE_SIZE - len,
+				"DELAY: ctx %08x fod %08x wqfull %08x\n",
+				atomic_read(&tgtp->defer_ctx),
+				atomic_read(&tgtp->defer_fod),
+				atomic_read(&tgtp->defer_wqfull));
+
 		/* Calculate outstanding IOs */
 		tot = atomic_read(&tgtp->rcv_fcp_cmd_drop);
 		tot += atomic_read(&tgtp->xmt_fcp_release);
diff --git a/drivers/scsi/lpfc/lpfc_mem.c b/drivers/scsi/lpfc/lpfc_mem.c
index 56faeb049b4a..60078e61da5e 100644
--- a/drivers/scsi/lpfc/lpfc_mem.c
+++ b/drivers/scsi/lpfc/lpfc_mem.c
@@ -755,10 +755,14 @@ lpfc_rq_buf_free(struct lpfc_hba *phba, struct lpfc_dmabuf *mp)
 	if (rc < 0) {
 		(rqbp->rqb_free_buffer)(phba, rqb_entry);
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-				"6409 Cannot post to RQ %d: %x %x\n",
+				"6409 Cannot post to HRQ %d: %x %x %x "
+				"DRQ %x %x\n",
 				rqb_entry->hrq->queue_id,
 				rqb_entry->hrq->host_index,
-				rqb_entry->hrq->hba_index);
+				rqb_entry->hrq->hba_index,
+				rqb_entry->hrq->entry_count,
+				rqb_entry->drq->host_index,
+				rqb_entry->drq->hba_index);
 	} else {
 		list_add_tail(&rqb_entry->hbuf.list, &rqbp->rqb_buffer_list);
 		rqbp->buffer_count++;
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index 9c2acf90212c..0539585d32d4 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -270,8 +270,6 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
 			"NVMET RCV BUSY: xri x%x sz %d "
 			"from %06x\n",
 			oxid, size, sid);
-		/* defer repost rcv buffer till .defer_rcv callback */
-		ctxp->flag &= ~LPFC_NVMET_DEFER_RCV_REPOST;
 		atomic_inc(&tgtp->rcv_fcp_cmd_out);
 		return;
 	}
@@ -837,6 +835,7 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
 		list_add_tail(&nvmewqeq->list, &wq->wqfull_list);
 		wq->q_flag |= HBA_NVMET_WQFULL;
 		spin_unlock_irqrestore(&pring->ring_lock, iflags);
+		atomic_inc(&lpfc_nvmep->defer_wqfull);
 		return 0;
 	}
 
@@ -975,11 +974,9 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
 
 	tgtp = phba->targetport->private;
 	atomic_inc(&tgtp->rcv_fcp_cmd_defer);
-	if (ctxp->flag & LPFC_NVMET_DEFER_RCV_REPOST)
-		lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */
-	else
-		nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
-	ctxp->flag &= ~LPFC_NVMET_DEFER_RCV_REPOST;
+
+	/* Free the nvmebuf since a new buffer already replaced it */
+	nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
 }
 
 static struct nvmet_fc_target_template lpfc_tgttemplate = {
@@ -1309,6 +1306,9 @@ lpfc_nvmet_create_targetport(struct lpfc_hba *phba)
 		atomic_set(&tgtp->xmt_abort_sol, 0);
 		atomic_set(&tgtp->xmt_abort_rsp, 0);
 		atomic_set(&tgtp->xmt_abort_rsp_error, 0);
+		atomic_set(&tgtp->defer_ctx, 0);
+		atomic_set(&tgtp->defer_fod, 0);
+		atomic_set(&tgtp->defer_wqfull, 0);
 	}
 	return error;
 }
@@ -1810,6 +1810,8 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 	lpfc_nvmeio_data(phba, "NVMET FCP RCV: xri x%x sz %d CPU %02x\n",
 			 oxid, size, smp_processor_id());
 
+	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
+
 	if (!ctx_buf) {
 		/* Queue this NVME IO to process later */
 		spin_lock_irqsave(&phba->sli4_hba.nvmet_io_wait_lock, iflag);
@@ -1825,10 +1827,11 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 		lpfc_post_rq_buffer(
 			phba, phba->sli4_hba.nvmet_mrq_hdr[qno],
 			phba->sli4_hba.nvmet_mrq_data[qno], 1, qno);
+
+		atomic_inc(&tgtp->defer_ctx);
 		return;
 	}
 
-	tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
 	payload = (uint32_t *)(nvmebuf->dbuf.virt);
 	sid = sli4_sid_from_fc_hdr(fc_hdr);
 
@@ -1892,12 +1895,20 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
 
 	/* Processing of FCP command is deferred */
 	if (rc == -EOVERFLOW) {
+		/*
+		 * Post a brand new DMA buffer to RQ and defer
+		 * freeing rcv buffer till .defer_rcv callback
+		 */
+		qno = nvmebuf->idx;
+		lpfc_post_rq_buffer(
+			phba, phba->sli4_hba.nvmet_mrq_hdr[qno],
+			phba->sli4_hba.nvmet_mrq_data[qno], 1, qno);
+
 		lpfc_nvmeio_data(phba,
 				 "NVMET RCV BUSY: xri x%x sz %d from %06x\n",
 				 oxid, size, sid);
-		/* defer reposting rcv buffer till .defer_rcv callback */
-		ctxp->flag |= LPFC_NVMET_DEFER_RCV_REPOST;
 		atomic_inc(&tgtp->rcv_fcp_cmd_out);
+		atomic_inc(&tgtp->defer_fod);
 		return;
 	}
 	ctxp->rqb_buffer = nvmebuf;
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.h b/drivers/scsi/lpfc/lpfc_nvmet.h
index 354cce443c9f..5da35de5ea45 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.h
+++ b/drivers/scsi/lpfc/lpfc_nvmet.h
@@ -72,7 +72,6 @@ struct lpfc_nvmet_tgtport {
 	atomic_t xmt_fcp_rsp_aborted;
 	atomic_t xmt_fcp_rsp_drop;
 
-
 	/* Stats counters - lpfc_nvmet_xmt_fcp_abort */
 	atomic_t xmt_fcp_xri_abort_cqe;
 	atomic_t xmt_fcp_abort;
@@ -81,6 +80,11 @@ struct lpfc_nvmet_tgtport {
 	atomic_t xmt_abort_unsol;
 	atomic_t xmt_abort_rsp;
 	atomic_t xmt_abort_rsp_error;
+
+	/* Stats counters - defer IO */
+	atomic_t defer_ctx;
+	atomic_t defer_fod;
+	atomic_t defer_wqfull;
 };
 
 struct lpfc_nvmet_ctx_info {
@@ -131,7 +135,6 @@ struct lpfc_nvmet_rcv_ctx {
 #define LPFC_NVMET_XBUSY		0x4  /* XB bit set on IO cmpl */
 #define LPFC_NVMET_CTX_RLS		0x8  /* ctx free requested */
 #define LPFC_NVMET_ABTS_RCV		0x10  /* ABTS received on exchange */
-#define LPFC_NVMET_DEFER_RCV_REPOST	0x20  /* repost to RQ on defer rcv */
 #define LPFC_NVMET_DEFER_WQFULL		0x40  /* Waiting on a free WQE */
 	struct rqb_dmabuf *rqb_buffer;
 	struct lpfc_nvmet_ctxbuf *ctxbuf;
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index fbda2fbcbfec..8b2919a553d6 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -6535,9 +6535,11 @@ lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hrq,
 	struct lpfc_rqe hrqe;
 	struct lpfc_rqe drqe;
 	struct lpfc_rqb *rqbp;
+	unsigned long flags;
 	struct rqb_dmabuf *rqb_buffer;
 	LIST_HEAD(rqb_buf_list);
 
+	spin_lock_irqsave(&phba->hbalock, flags);
 	rqbp = hrq->rqbp;
 	for (i = 0; i < count; i++) {
 		/* IF RQ is already full, don't bother */
@@ -6561,6 +6563,15 @@ lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hrq,
 		drqe.address_hi = putPaddrHigh(rqb_buffer->dbuf.phys);
 		rc = lpfc_sli4_rq_put(hrq, drq, &hrqe, &drqe);
 		if (rc < 0) {
+			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+					"6421 Cannot post to HRQ %d: %x %x %x "
+					"DRQ %x %x\n",
+					hrq->queue_id,
+					hrq->host_index,
+					hrq->hba_index,
+					hrq->entry_count,
+					drq->host_index,
+					drq->hba_index);
 			rqbp->rqb_free_buffer(phba, rqb_buffer);
 		} else {
 			list_add_tail(&rqb_buffer->hbuf.list,
@@ -6568,6 +6579,7 @@ lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hrq,
 			rqbp->buffer_count++;
 		}
 	}
+	spin_unlock_irqrestore(&phba->hbalock, flags);
 	return 1;
 }
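
Note (not part of the patch): below is a minimal, self-contained C sketch of
the repost-on-defer pattern this change adopts, for readers unfamiliar with
it. All names in the sketch (post_rq_buffer, receive_cmd, handle_deferred_rcv,
defer_rcv_done, rq_depth, defer_fod) are hypothetical illustrations, not lpfc
code; the real driver uses lpfc_post_rq_buffer() and the rqb_free_buffer()
callback exactly as shown in the diff above.

/*
 * Hypothetical, self-contained sketch (not lpfc code): a receive queue is
 * replenished immediately when a command is deferred, so the queue never
 * runs empty while the target layer waits for resources.
 */
#include <stdio.h>
#include <stdlib.h>

struct rq_buf {
	void *data;		/* payload the simulated hardware wrote */
};

static int rq_depth;		/* buffers currently posted to the RQ */
static int defer_fod;		/* stat: commands deferred */

/* Post one fresh buffer to the simulated receive queue. */
static void post_rq_buffer(void)
{
	rq_depth++;
}

/* A command arrived: the hardware consumed one posted buffer. */
static struct rq_buf *receive_cmd(void)
{
	struct rq_buf *buf = malloc(sizeof(*buf));

	buf->data = malloc(64);
	rq_depth--;
	return buf;
}

/* The target layer cannot take the command yet: defer it. */
static void handle_deferred_rcv(struct rq_buf *buf)
{
	(void)buf;		/* kept until processing resumes */
	post_rq_buffer();	/* replenish the RQ right now */
	defer_fod++;
}

/* Processing resumed: the old buffer is simply freed, not reposted. */
static void defer_rcv_done(struct rq_buf *buf)
{
	free(buf->data);
	free(buf);
}

int main(void)
{
	struct rq_buf *buf;

	post_rq_buffer();		/* initial fill: depth 1 */
	buf = receive_cmd();		/* depth drops to 0 */
	handle_deferred_rcv(buf);	/* depth back to 1 immediately */
	defer_rcv_done(buf);		/* later: deferred buffer freed */

	printf("rq_depth=%d deferred=%d\n", rq_depth, defer_fod);
	return 0;
}

The design point mirrored from the patch: the receive queue is replenished at
the moment the command is deferred, so the hardware never runs out of posted
buffers while waiting for exchange resources; the deferred buffer itself is
freed later in the .defer_rcv callback instead of being reposted there.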