From patchwork Mon May 14 17:47:40 2018
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 10399059
X-Patchwork-Delegate: dledford@redhat.com
From: Devesh Sharma
To: dledford@redhat.com, jgg@mellanox.com
Cc: linux-rdma@vger.kernel.org, jtoppins@redhat.com, ddutile@redhat.com,
 Devesh Sharma
Subject: [rdma-next 2/2] bnxt_re: pause RoCE IRQs when L2 needs reshuffling
Date: Mon, 14 May 2018 13:47:40 -0400
Message-Id: <1526320060-9517-3-git-send-email-devesh.sharma@broadcom.com>
In-Reply-To: <1526320060-9517-1-git-send-email-devesh.sharma@broadcom.com>
References: <1526320060-9517-1-git-send-email-devesh.sharma@broadcom.com>
X-Mailing-List: linux-rdma@vger.kernel.org

When the L2 driver needs to change the number of available IRQs, it
informs the RoCE driver via a new API to free all of its IRQs so that
it can proceed with disabling MSI-X on all the vectors. Once the L2
driver is done reshuffling the IRQs, it tells the RoCE driver to resume
all the IRQs it was using. The L2 driver guarantees that none of the
Ring-ID-to-vector mappings change as a result of the reshuffle, so the
RoCE driver gets back the same vectors it was using prior to the change
and only has to re-enable them when the L2 driver tells it to do so via
a second new API.

Signed-off-by: Devesh Sharma
---
 drivers/infiniband/hw/bnxt_re/main.c       | 55 +++++++++++++++++++++++++++++-
 drivers/infiniband/hw/bnxt_re/qplib_fp.c   | 20 ++++++-----
 drivers/infiniband/hw/bnxt_re/qplib_fp.h   |  3 ++
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.c | 22 +++++++-----
 drivers/infiniband/hw/bnxt_re/qplib_rcfw.h |  3 ++
 5 files changed, 86 insertions(+), 17 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
index f6c739e..20b9f31 100644
--- a/drivers/infiniband/hw/bnxt_re/main.c
+++ b/drivers/infiniband/hw/bnxt_re/main.c
@@ -185,12 +185,65 @@ static void bnxt_re_shutdown(void *p)
 	bnxt_re_ib_unreg(rdev, false);
 }
 
+static void bnxt_re_stop_irq(void *handle)
+{
+	struct bnxt_re_dev *rdev = (struct bnxt_re_dev *)handle;
+	struct bnxt_qplib_rcfw *rcfw = &rdev->rcfw;
+	struct bnxt_qplib_nq *nq;
+	int indx;
+
+	for (indx = BNXT_RE_NQ_IDX; indx < rdev->num_msix; indx++) {
+		nq = &rdev->nq[indx - 1];
+		bnxt_qplib_nq_stop_irq(nq, false);
+	}
+
+	bnxt_qplib_rcfw_stop_irq(rcfw, false);
+}
+
+static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
+{
+	struct bnxt_re_dev *rdev = (struct bnxt_re_dev *)handle;
+	struct bnxt_msix_entry *msix_ent = rdev->msix_entries;
+	struct bnxt_qplib_rcfw *rcfw = &rdev->rcfw;
+	struct bnxt_qplib_nq *nq;
+	int indx, rc;
+
+	if (!ent) {
+		/* Not setting the f/w timeout bit in rcfw.
+		 * During the driver unload the first command
+		 * to f/w will timeout and that will set the
+		 * timeout bit.
+		 */
+		dev_err(rdev_to_dev(rdev), "Failed to re-start IRQs\n");
+		return;
+	}
+
+	/* Vectors may change after restart, so update with new vectors
+	 * in device structure.
+	 */
+	for (indx = 0; indx < rdev->num_msix; indx++)
+		rdev->msix_entries[indx].vector = ent[indx].vector;
+
+	bnxt_qplib_rcfw_start_irq(rcfw, msix_ent[BNXT_RE_AEQ_IDX].vector,
+				  false);
+	for (indx = BNXT_RE_NQ_IDX; indx < rdev->num_msix; indx++) {
+		nq = &rdev->nq[indx - 1];
+		rc = bnxt_qplib_nq_start_irq(nq, indx - 1,
+					     msix_ent[indx].vector, false);
+		if (rc)
+			dev_warn(rdev_to_dev(rdev),
+				 "Failed to reinit NQ index %d\n", indx - 1);
+	}
+}
+
 static struct bnxt_ulp_ops bnxt_re_ulp_ops = {
 	.ulp_async_notifier = NULL,
 	.ulp_stop = bnxt_re_stop,
 	.ulp_start = bnxt_re_start,
 	.ulp_sriov_config = bnxt_re_sriov_config,
-	.ulp_shutdown = bnxt_re_shutdown
+	.ulp_shutdown = bnxt_re_shutdown,
+	.ulp_irq_stop = bnxt_re_stop_irq,
+	.ulp_irq_restart = bnxt_re_start_irq
 };
 
 /* RoCE -> Net driver */
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
index e56f063..b0d343d 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
@@ -336,14 +336,15 @@ static irqreturn_t bnxt_qplib_nq_irq(int irq, void *dev_instance)
 	return IRQ_HANDLED;
 }
 
-static void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq)
+void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill)
 {
 	tasklet_disable(&nq->worker);
 	/* Mask h/w interrupt */
 	NQ_DB(nq->bar_reg_iomem, nq->hwq.cons, nq->hwq.max_elements);
 	/* Sync with last running IRQ handler */
 	synchronize_irq(nq->vector);
-	tasklet_kill(&nq->worker);
+	if (kill)
+		tasklet_kill(&nq->worker);
 	if (nq->requested) {
 		irq_set_affinity_hint(nq->vector, NULL);
 		free_irq(nq->vector, nq);
@@ -359,7 +360,7 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
 	}
 
 	/* Make sure the HW is stopped!
 	 */
-	bnxt_qplib_nq_stop_irq(nq);
+	bnxt_qplib_nq_stop_irq(nq, true);
 
 	if (nq->bar_reg_iomem)
 		iounmap(nq->bar_reg_iomem);
@@ -370,8 +371,8 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
 	nq->vector = 0;
 }
 
-static int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq,
-				   int nq_indx, int msix_vector)
+int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+			    int msix_vector, bool need_init)
 {
 	int rc;
 
@@ -379,8 +380,11 @@ static int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq,
 		return -EFAULT;
 
 	nq->vector = msix_vector;
-	tasklet_init(&nq->worker, bnxt_qplib_service_nq,
-		     (unsigned long)nq);
+	if (need_init)
+		tasklet_init(&nq->worker, bnxt_qplib_service_nq,
+			     (unsigned long)nq);
+	else
+		tasklet_enable(&nq->worker);
 	memset(nq->name, 0, 32);
 	sprintf(nq->name, "bnxt_qplib_nq-%d", nq_indx);
@@ -437,7 +441,7 @@ int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
 		goto fail;
 	}
 
-	rc = bnxt_qplib_nq_start_irq(nq, nq_idx, msix_vector);
+	rc = bnxt_qplib_nq_start_irq(nq, nq_idx, msix_vector, true);
 	if (rc) {
 		dev_err(&nq->pdev->dev,
 			"QPLIB: Failed to request irq for nq-idx %d", nq_idx);
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
index ade9f13..72352ca 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
@@ -467,7 +467,10 @@ struct bnxt_qplib_nq_work {
 	struct bnxt_qplib_cq *cq;
 };
 
+void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill);
 void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq);
+int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+			    int msix_vector, bool need_init);
 int bnxt_qplib_enable_nq(struct pci_dev *pdev, struct bnxt_qplib_nq *nq,
 			 int nq_idx, int msix_vector, int bar_reg_offset,
 			 int (*cqn_handler)(struct bnxt_qplib_nq *nq,
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
index fe9e2aa..2852d35 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
@@ -582,7 +582,7 @@ int bnxt_qplib_alloc_rcfw_channel(struct pci_dev *pdev,
 	return -ENOMEM;
 }
 
-static void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw)
+void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill)
 {
 	tasklet_disable(&rcfw->worker);
 	/* Mask h/w interrupts */
@@ -590,7 +590,8 @@ static void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw)
 		rcfw->creq.max_elements);
 	/* Sync with last running IRQ-handler */
 	synchronize_irq(rcfw->vector);
-	tasklet_kill(&rcfw->worker);
+	if (kill)
+		tasklet_kill(&rcfw->worker);
 
 	if (rcfw->requested) {
 		free_irq(rcfw->vector, rcfw);
@@ -602,7 +603,7 @@ void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
 {
 	unsigned long indx;
 
-	bnxt_qplib_rcfw_stop_irq(rcfw);
+	bnxt_qplib_rcfw_stop_irq(rcfw, true);
 
 	if (rcfw->cmdq_bar_reg_iomem)
 		iounmap(rcfw->cmdq_bar_reg_iomem);
@@ -623,16 +624,20 @@ void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
 	rcfw->vector = 0;
 }
 
-static int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw,
-				     int msix_vector)
+int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
+			      bool need_init)
 {
 	int rc;
 
 	if (rcfw->requested)
 		return -EFAULT;
+
 	rcfw->vector = msix_vector;
-	tasklet_init(&rcfw->worker,
-		     bnxt_qplib_service_creq, (unsigned long)rcfw);
+	if (need_init)
+		tasklet_init(&rcfw->worker,
+			     bnxt_qplib_service_creq, (unsigned long)rcfw);
+	else
+		tasklet_enable(&rcfw->worker);
 	rc = request_irq(rcfw->vector, bnxt_qplib_creq_irq, 0,
 			 "bnxt_qplib_creq", rcfw);
 	if (rc)
@@ -640,6 +645,7 @@ static int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw,
 	rcfw->requested = true;
 	CREQ_DB_REARM(rcfw->creq_bar_reg_iomem, rcfw->creq.cons,
 		      rcfw->creq.max_elements);
+
 	return 0;
 }
 
@@ -708,7 +714,7 @@ int bnxt_qplib_enable_rcfw_channel(struct pci_dev *pdev,
 	rcfw->aeq_handler = aeq_handler;
 	init_waitqueue_head(&rcfw->waitq);
 
-	rc = bnxt_qplib_rcfw_start_irq(rcfw, msix_vector);
+	rc = bnxt_qplib_rcfw_start_irq(rcfw, msix_vector, true);
 	if (rc) {
 		dev_err(&rcfw->pdev->dev,
 			"QPLIB: Failed to request IRQ for CREQ rc = 0x%x", rc);
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
index c7cce2e..46416df 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
@@ -195,7 +195,10 @@ struct bnxt_qplib_rcfw {
 void bnxt_qplib_free_rcfw_channel(struct bnxt_qplib_rcfw *rcfw);
 int bnxt_qplib_alloc_rcfw_channel(struct pci_dev *pdev,
 				  struct bnxt_qplib_rcfw *rcfw, int qp_tbl_sz);
+void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill);
 void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw);
+int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
+			      bool need_init);
 int bnxt_qplib_enable_rcfw_channel(struct pci_dev *pdev,
 				   struct bnxt_qplib_rcfw *rcfw,
 				   int msix_vector,