From patchwork Fri Jun 30 08:02:51 2017
X-Patchwork-Submitter: Dan Carpenter
X-Patchwork-Id: 9818659
Date: Fri, 30 Jun 2017 11:02:51 +0300
From: Dan Carpenter
To: James Smart
Cc: Dick Kennedy, "James E.J. Bottomley", "Martin K. Petersen",
	linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-janitors@vger.kernel.org
Subject: [PATCH 1/2] scsi: lpfc: spin_lock_irq() is not nestable
Message-ID: <20170630080250.mjbosf64qlytrsii@mwanda>

We're calling spin_lock_irq() multiple times; the problem is that the first
spin_unlock_irq() will re-enable IRQs while we are still inside the outer
spin_lock_irqsave() critical section, and we don't want that.  Since IRQs
are already disabled there, use plain spin_lock()/spin_unlock() for the
nested locks.
Fixes: 966bb5b71196 ("scsi: lpfc: Break up IO ctx list into a separate get and put list")
Signed-off-by: Dan Carpenter
Signed-off-by: James Smart
---
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index 7dc061a14f95..afc523209845 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -866,44 +866,44 @@ lpfc_nvmet_cleanup_io_context(struct lpfc_hba *phba)
 	unsigned long flags;

 	spin_lock_irqsave(&phba->sli4_hba.nvmet_ctx_get_lock, flags);
-	spin_lock_irq(&phba->sli4_hba.nvmet_ctx_put_lock);
+	spin_lock(&phba->sli4_hba.nvmet_ctx_put_lock);
 	list_for_each_entry_safe(ctx_buf, next_ctx_buf,
			&phba->sli4_hba.lpfc_nvmet_ctx_get_list, list) {
-		spin_lock_irq(&phba->sli4_hba.abts_nvme_buf_list_lock);
+		spin_lock(&phba->sli4_hba.abts_nvme_buf_list_lock);
 		list_del_init(&ctx_buf->list);
-		spin_unlock_irq(&phba->sli4_hba.abts_nvme_buf_list_lock);
+		spin_unlock(&phba->sli4_hba.abts_nvme_buf_list_lock);
 		__lpfc_clear_active_sglq(phba, ctx_buf->sglq->sli4_lxritag);
 		ctx_buf->sglq->state = SGL_FREED;
 		ctx_buf->sglq->ndlp = NULL;
-		spin_lock_irq(&phba->sli4_hba.sgl_list_lock);
+		spin_lock(&phba->sli4_hba.sgl_list_lock);
 		list_add_tail(&ctx_buf->sglq->list,
			      &phba->sli4_hba.lpfc_nvmet_sgl_list);
-		spin_unlock_irq(&phba->sli4_hba.sgl_list_lock);
+		spin_unlock(&phba->sli4_hba.sgl_list_lock);
 		lpfc_sli_release_iocbq(phba, ctx_buf->iocbq);
 		kfree(ctx_buf->context);
 	}
 	list_for_each_entry_safe(ctx_buf, next_ctx_buf,
			&phba->sli4_hba.lpfc_nvmet_ctx_put_list, list) {
-		spin_lock_irq(&phba->sli4_hba.abts_nvme_buf_list_lock);
+		spin_lock(&phba->sli4_hba.abts_nvme_buf_list_lock);
 		list_del_init(&ctx_buf->list);
-		spin_unlock_irq(&phba->sli4_hba.abts_nvme_buf_list_lock);
+		spin_unlock(&phba->sli4_hba.abts_nvme_buf_list_lock);
 		__lpfc_clear_active_sglq(phba, ctx_buf->sglq->sli4_lxritag);
 		ctx_buf->sglq->state = SGL_FREED;
 		ctx_buf->sglq->ndlp = NULL;
-		spin_lock_irq(&phba->sli4_hba.sgl_list_lock);
+		spin_lock(&phba->sli4_hba.sgl_list_lock);
 		list_add_tail(&ctx_buf->sglq->list,
			      &phba->sli4_hba.lpfc_nvmet_sgl_list);
-		spin_unlock_irq(&phba->sli4_hba.sgl_list_lock);
+		spin_unlock(&phba->sli4_hba.sgl_list_lock);
 		lpfc_sli_release_iocbq(phba, ctx_buf->iocbq);
 		kfree(ctx_buf->context);
 	}
-	spin_unlock_irq(&phba->sli4_hba.nvmet_ctx_put_lock);
+	spin_unlock(&phba->sli4_hba.nvmet_ctx_put_lock);
 	spin_unlock_irqrestore(&phba->sli4_hba.nvmet_ctx_get_lock, flags);
 }
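
For context, here is a minimal standalone sketch of the locking rule the
patch relies on.  The locks lock_a/lock_b and the function cleanup_example()
are made-up names for illustration, not lpfc identifiers; only the spinlock
API calls themselves are real kernel interfaces.

/*
 * Sketch (not lpfc code) of why nesting spin_lock_irq() inside
 * spin_lock_irqsave() is broken: spin_unlock_irq() unconditionally
 * re-enables IRQs, wiping out the state the outer lock saved.
 * lock_a, lock_b and cleanup_example() are hypothetical.
 */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock_a);	/* outer lock, taken with IRQs saved */
static DEFINE_SPINLOCK(lock_b);	/* inner lock, nested under lock_a */

static void cleanup_example(void)
{
	unsigned long flags;

	spin_lock_irqsave(&lock_a, flags);	/* disables IRQs, saves state */

	/*
	 * BUG pattern:
	 *	spin_lock_irq(&lock_b);
	 *	...
	 *	spin_unlock_irq(&lock_b);	<-- re-enables IRQs here
	 * The rest of the lock_a section would then run with IRQs on.
	 *
	 * Correct: IRQs are already off, so plain spin_lock() is enough.
	 */
	spin_lock(&lock_b);
	/* ... work under both locks ... */
	spin_unlock(&lock_b);

	spin_unlock_irqrestore(&lock_a, flags);	/* restores saved IRQ state */
}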