From patchwork Tue Nov 21 00:00:30 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 10067493
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Dick Kennedy, James Smart
Subject: [PATCH v3 03/17] lpfc: Handle XRI_ABORTED_CQE in soft IRQ
Date: Mon, 20 Nov 2017 16:00:30 -0800
Message-Id: <20171121000044.27702-4-jsmart2021@gmail.com>
X-Mailer: git-send-email 2.13.1
In-Reply-To: <20171121000044.27702-1-jsmart2021@gmail.com>
References: <20171121000044.27702-1-jsmart2021@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

XRI_ABORTED_CQE completions were not being handled in the fast path.
Instead, they were queued and deferred to the lpfc worker thread for
processing. This is an artifact of the driver design from before queue
processing was moved out of the ISR and into a work queue element. Now
that queue processing already runs in a deferred context, remove this
artifact and process the aborted-XRI completions directly.

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
---
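For context, the sketch below is a simplified, standalone C model of the
control-flow change described above. It is not lpfc code; names such as
xri_abort_cqe and handle_xri_aborted() are hypothetical stand-ins, and a
plain linked list replaces the driver's locked work queues. Before this
patch the CQ handler allocated an event, queued it, and left the lpfc
worker thread to drain the queue; with CQ processing already running in
a deferred context, the handler can call the notification routine
directly.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the driver's CQE and event bookkeeping. */
struct xri_abort_cqe {
	unsigned int xri;
};

struct xri_abort_event {
	struct xri_abort_cqe cqe;
	struct xri_abort_event *next;
};

/* The actual notification work, common to both schemes. */
static void handle_xri_aborted(const struct xri_abort_cqe *cqe)
{
	printf("XRI x%x aborted\n", cqe->xri);
}

/* Old scheme: defer from the CQ handler to the worker thread. */
static struct xri_abort_event *pending;	/* models sp_*_xri_aborted_work_queue */

static int cq_handler_deferred(const struct xri_abort_cqe *cqe)
{
	struct xri_abort_event *ev = malloc(sizeof(*ev));

	if (!ev)
		return -1;
	ev->cqe = *cqe;
	ev->next = pending;
	pending = ev;		/* the worker thread is then flagged and woken */
	return 0;
}

static void worker_drain_xri_abort_events(void)
{
	while (pending) {
		struct xri_abort_event *ev = pending;

		pending = ev->next;
		handle_xri_aborted(&ev->cqe);
		free(ev);
	}
}

/* New scheme: CQ processing already runs in a deferred (workqueue)
 * context, so the handler performs the notification in place. */
static void cq_handler_direct(const struct xri_abort_cqe *cqe)
{
	handle_xri_aborted(cqe);
}

int main(void)
{
	struct xri_abort_cqe cqe = { .xri = 0x1a2b };

	cq_handler_deferred(&cqe);		/* old: queue the event ...   */
	worker_drain_xri_abort_events();	/* ... and process it later    */

	cq_handler_direct(&cqe);		/* new: process it immediately */
	return 0;
}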
 drivers/scsi/lpfc/lpfc.h         |  1 -
 drivers/scsi/lpfc/lpfc_hbadisc.c |  2 -
 drivers/scsi/lpfc/lpfc_init.c    |  8 ----
 drivers/scsi/lpfc/lpfc_sli.c     | 97 +++++++++++++++-------------------
 drivers/scsi/lpfc/lpfc_sli4.h    |  2 -
 5 files changed, 35 insertions(+), 75 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
index 231302273257..7219b6ce5dc7 100644
--- a/drivers/scsi/lpfc/lpfc.h
+++ b/drivers/scsi/lpfc/lpfc.h
@@ -705,7 +705,6 @@ struct lpfc_hba {
					 * capability
					 */
 #define HBA_NVME_IOQ_FLUSH      0x80000 /* NVME IO queues flushed. */
-#define NVME_XRI_ABORT_EVENT	0x100000
 
 	uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/
 	struct lpfc_dmabuf slim2p;
diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
index d9a03beb76a4..3468257bda02 100644
--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
+++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
@@ -640,8 +640,6 @@ lpfc_work_done(struct lpfc_hba *phba)
			lpfc_handle_rrq_active(phba);
		if (phba->hba_flag & FCP_XRI_ABORT_EVENT)
			lpfc_sli4_fcp_xri_abort_event_proc(phba);
-		if (phba->hba_flag & NVME_XRI_ABORT_EVENT)
-			lpfc_sli4_nvme_xri_abort_event_proc(phba);
		if (phba->hba_flag & ELS_XRI_ABORT_EVENT)
			lpfc_sli4_els_xri_abort_event_proc(phba);
		if (phba->hba_flag & ASYNC_EVENT)
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 52c039e9f4a4..745aff753396 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -5954,9 +5954,6 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
		INIT_LIST_HEAD(&phba->sli4_hba.lpfc_abts_nvme_buf_list);
		INIT_LIST_HEAD(&phba->sli4_hba.lpfc_abts_nvmet_ctx_list);
		INIT_LIST_HEAD(&phba->sli4_hba.lpfc_nvmet_io_wait_list);
-
-		/* Fast-path XRI aborted CQ Event work queue list */
-		INIT_LIST_HEAD(&phba->sli4_hba.sp_nvme_xri_aborted_work_queue);
	}
 
	/* This abort list used by worker thread */
@@ -9199,11 +9196,6 @@ lpfc_sli4_cq_event_release_all(struct lpfc_hba *phba)
	/* Pending ELS XRI abort events */
	list_splice_init(&phba->sli4_hba.sp_els_xri_aborted_work_queue,
			 &cqelist);
-	if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) {
-		/* Pending NVME XRI abort events */
-		list_splice_init(&phba->sli4_hba.sp_nvme_xri_aborted_work_queue,
-				 &cqelist);
-	}
	/* Pending asynnc events */
	list_splice_init(&phba->sli4_hba.sp_asynce_work_queue,
			 &cqelist);
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index 2ad444ce7529..4b76db19ef73 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -12318,41 +12318,6 @@ void lpfc_sli4_fcp_xri_abort_event_proc(struct lpfc_hba *phba)
 }
 
 /**
- * lpfc_sli4_nvme_xri_abort_event_proc - Process nvme xri abort event
- * @phba: pointer to lpfc hba data structure.
- *
- * This routine is invoked by the worker thread to process all the pending
- * SLI4 NVME abort XRI events.
- **/
-void lpfc_sli4_nvme_xri_abort_event_proc(struct lpfc_hba *phba)
-{
-	struct lpfc_cq_event *cq_event;
-
-	/* First, declare the fcp xri abort event has been handled */
-	spin_lock_irq(&phba->hbalock);
-	phba->hba_flag &= ~NVME_XRI_ABORT_EVENT;
-	spin_unlock_irq(&phba->hbalock);
-	/* Now, handle all the fcp xri abort events */
-	while (!list_empty(&phba->sli4_hba.sp_nvme_xri_aborted_work_queue)) {
-		/* Get the first event from the head of the event queue */
-		spin_lock_irq(&phba->hbalock);
-		list_remove_head(&phba->sli4_hba.sp_nvme_xri_aborted_work_queue,
-				 cq_event, struct lpfc_cq_event, list);
-		spin_unlock_irq(&phba->hbalock);
-		/* Notify aborted XRI for NVME work queue */
-		if (phba->nvmet_support) {
-			lpfc_sli4_nvmet_xri_aborted(phba,
-						    &cq_event->cqe.wcqe_axri);
-		} else {
-			lpfc_sli4_nvme_xri_aborted(phba,
-						   &cq_event->cqe.wcqe_axri);
-		}
-		/* Free the event processed back to the free pool */
-		lpfc_sli4_cq_event_release(phba, cq_event);
-	}
-}
-
-/**
  * lpfc_sli4_els_xri_abort_event_proc - Process els xri abort event
  * @phba: pointer to lpfc hba data structure.
  *
@@ -12548,6 +12513,24 @@ lpfc_sli4_els_wcqe_to_rspiocbq(struct lpfc_hba *phba,
	return irspiocbq;
 }
 
+inline struct lpfc_cq_event *
+lpfc_cq_event_setup(struct lpfc_hba *phba, void *entry, int size)
+{
+	struct lpfc_cq_event *cq_event;
+
+	/* Allocate a new internal CQ_EVENT entry */
+	cq_event = lpfc_sli4_cq_event_alloc(phba);
+	if (!cq_event) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+				"0602 Failed to alloc CQ_EVENT entry\n");
+		return NULL;
+	}
+
+	/* Move the CQE into the event */
+	memcpy(&cq_event->cqe, entry, size);
+	return cq_event;
+}
+
 /**
  * lpfc_sli4_sp_handle_async_event - Handle an asynchroous event
  * @phba: Pointer to HBA context object.
@@ -12569,16 +12552,9 @@ lpfc_sli4_sp_handle_async_event(struct lpfc_hba *phba, struct lpfc_mcqe *mcqe)
			"word2:x%x, word3:x%x\n", mcqe->word0,
			mcqe->mcqe_tag0, mcqe->mcqe_tag1,
			mcqe->trailer);
-	/* Allocate a new internal CQ_EVENT entry */
-	cq_event = lpfc_sli4_cq_event_alloc(phba);
-	if (!cq_event) {
-		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-				"0394 Failed to allocate CQ_EVENT entry\n");
+	cq_event = lpfc_cq_event_setup(phba, mcqe, sizeof(struct lpfc_mcqe));
+	if (!cq_event)
		return false;
-	}
-
-	/* Move the CQE into an asynchronous event entry */
-	memcpy(&cq_event->cqe, mcqe, sizeof(struct lpfc_mcqe));
	spin_lock_irqsave(&phba->hbalock, iflags);
	list_add_tail(&cq_event->list, &phba->sli4_hba.sp_asynce_work_queue);
	/* Set the async event flag */
@@ -12824,18 +12800,12 @@ lpfc_sli4_sp_handle_abort_xri_wcqe(struct lpfc_hba *phba,
	struct lpfc_cq_event *cq_event;
	unsigned long iflags;
 
-	/* Allocate a new internal CQ_EVENT entry */
-	cq_event = lpfc_sli4_cq_event_alloc(phba);
-	if (!cq_event) {
-		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-				"0602 Failed to allocate CQ_EVENT entry\n");
-		return false;
-	}
-
-	/* Move the CQE into the proper xri abort event list */
-	memcpy(&cq_event->cqe, wcqe, sizeof(struct sli4_wcqe_xri_aborted));
	switch (cq->subtype) {
	case LPFC_FCP:
+		cq_event = lpfc_cq_event_setup(
+			phba, wcqe, sizeof(struct sli4_wcqe_xri_aborted));
+		if (!cq_event)
+			return false;
		spin_lock_irqsave(&phba->hbalock, iflags);
		list_add_tail(&cq_event->list,
			      &phba->sli4_hba.sp_fcp_xri_aborted_work_queue);
@@ -12845,6 +12815,10 @@ lpfc_sli4_sp_handle_abort_xri_wcqe(struct lpfc_hba *phba,
		workposted = true;
		break;
	case LPFC_ELS:
+		cq_event = lpfc_cq_event_setup(
+			phba, wcqe, sizeof(struct sli4_wcqe_xri_aborted));
+		if (!cq_event)
+			return false;
		spin_lock_irqsave(&phba->hbalock, iflags);
		list_add_tail(&cq_event->list,
			      &phba->sli4_hba.sp_els_xri_aborted_work_queue);
@@ -12854,13 +12828,13 @@ lpfc_sli4_sp_handle_abort_xri_wcqe(struct lpfc_hba *phba,
		workposted = true;
		break;
	case LPFC_NVME:
-		spin_lock_irqsave(&phba->hbalock, iflags);
-		list_add_tail(&cq_event->list,
-			      &phba->sli4_hba.sp_nvme_xri_aborted_work_queue);
-		/* Set the nvme xri abort event flag */
-		phba->hba_flag |= NVME_XRI_ABORT_EVENT;
-		spin_unlock_irqrestore(&phba->hbalock, iflags);
-		workposted = true;
+		/* Notify aborted XRI for NVME work queue */
+		if (phba->nvmet_support)
+			lpfc_sli4_nvmet_xri_aborted(phba, wcqe);
+		else
+			lpfc_sli4_nvme_xri_aborted(phba, wcqe);
+
+		workposted = false;
		break;
	default:
		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
@@ -12868,7 +12842,6 @@ lpfc_sli4_sp_handle_abort_xri_wcqe(struct lpfc_hba *phba,
				"%08x %08x %08x %08x\n",
				cq->subtype, wcqe->word0, wcqe->parameter,
				wcqe->word2, wcqe->word3);
-		lpfc_sli4_cq_event_release(phba, cq_event);
		workposted = false;
		break;
	}
diff --git a/drivers/scsi/lpfc/lpfc_sli4.h b/drivers/scsi/lpfc/lpfc_sli4.h
index 301ce46d2d70..da302bfb0223 100644
--- a/drivers/scsi/lpfc/lpfc_sli4.h
+++ b/drivers/scsi/lpfc/lpfc_sli4.h
@@ -672,7 +672,6 @@ struct lpfc_sli4_hba {
	struct list_head sp_asynce_work_queue;
	struct list_head sp_fcp_xri_aborted_work_queue;
	struct list_head sp_els_xri_aborted_work_queue;
-	struct list_head sp_nvme_xri_aborted_work_queue;
	struct list_head sp_unsol_work_queue;
	struct lpfc_sli4_link link_state;
	struct lpfc_sli4_lnk_info lnk_info;
@@ -824,7 +823,6 @@ void lpfc_sli4_fcf_redisc_event_proc(struct lpfc_hba *);
 int lpfc_sli4_resume_rpi(struct lpfc_nodelist *,
			 void (*)(struct lpfc_hba *, LPFC_MBOXQ_t *), void *);
 void lpfc_sli4_fcp_xri_abort_event_proc(struct lpfc_hba *);
-void lpfc_sli4_nvme_xri_abort_event_proc(struct lpfc_hba *phba);
 void lpfc_sli4_els_xri_abort_event_proc(struct lpfc_hba *);
 void lpfc_sli4_fcp_xri_aborted(struct lpfc_hba *,
			       struct sli4_wcqe_xri_aborted *);
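Beyond removing the deferral, the patch also folds the repeated
"allocate a CQ event and copy the CQE into it" sequence into the new
lpfc_cq_event_setup() helper, used by the async-event path and the
FCP/ELS abort cases, while the NVME abort case no longer queues an
event at all. Below is a minimal standalone sketch of that helper
pattern; it is not driver code, the names cq_event and cq_event_setup()
are hypothetical, and plain malloc() stands in for the driver's event
pool.

#include <stdlib.h>
#include <string.h>

/* Hypothetical event wrapper; the real driver copies the CQE into a
 * union embedded in struct lpfc_cq_event. */
struct cq_event {
	unsigned char cqe[64];	/* sized for the completion entries modeled here */
};

/* Allocate an event and copy the raw completion entry into it,
 * returning NULL on failure so callers can bail out early. */
static struct cq_event *cq_event_setup(const void *entry, size_t size)
{
	struct cq_event *ev;

	if (size > sizeof(ev->cqe))
		return NULL;

	ev = malloc(sizeof(*ev));
	if (!ev)
		return NULL;

	memcpy(ev->cqe, entry, size);
	return ev;
}

int main(void)
{
	unsigned int fake_cqe[4] = { 0x1, 0x2, 0x3, 0x4 };	/* stand-in CQE */
	struct cq_event *ev = cq_event_setup(fake_cqe, sizeof(fake_cqe));

	free(ev);	/* the driver instead returns it to its event pool */
	return 0;
}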