From patchwork Wed Oct 2 10:49:12 2013
X-Patchwork-Submitter: Alexander Gordeev
X-Patchwork-Id: 2976521
X-Patchwork-Delegate: bhelgaas@google.com
From: Alexander Gordeev
To: linux-kernel@vger.kernel.org
Cc: Alexander Gordeev, Bjorn Helgaas, Ralf Baechle, Michael Ellerman,
    Benjamin Herrenschmidt, Martin Schwidefsky, Ingo Molnar, Tejun Heo,
    Dan Williams, Andy King, Jon Mason, Matt Porter,
    linux-pci@vger.kernel.org, linux-mips@linux-mips.org,
    linuxppc-dev@lists.ozlabs.org, linux390@de.ibm.com,
    linux-s390@vger.kernel.org, x86@kernel.org, linux-ide@vger.kernel.org,
    iss_storagedev@hp.com, linux-nvme@lists.infradead.org,
    linux-rdma@vger.kernel.org, netdev@vger.kernel.org,
    e1000-devel@lists.sourceforge.net, linux-driver@qlogic.com,
    Solarflare linux maintainers, "VMware, Inc.", linux-scsi@vger.kernel.org
Subject: [PATCH RFC 56/77] nvme: Update MSI/MSI-X interrupts enablement code
Date: Wed, 2 Oct 2013 12:49:12 +0200
X-Mailer: git-send-email 1.7.7.6

As a result of the recent redesign of the MSI/MSI-X interrupt enablement
pattern, this driver has to be updated to use the new technique to obtain
the optimal number of MSI/MSI-X interrupts required.
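For reference, below is a minimal sketch of the MSI-X-first enablement flow
the patch switches to, built only from the interfaces named in the diff
(pci_msix_table_size(), pci_enable_msix(), pci_get_msi_cap(),
pci_enable_msi_block()). The helper name enable_vectors(), the entries
array and nr_wanted are illustrative, not taken from the driver.

#include <linux/kernel.h>
#include <linux/pci.h>

/*
 * Illustrative sketch only: query the vector budget first, clamp it to
 * what the caller wants, then try MSI-X; fall back to plain MSI, and
 * finally to a single legacy INTx vector.
 */
static int enable_vectors(struct pci_dev *pdev, struct msix_entry *entries,
			  int nr_wanted)
{
	int i, vecs, ret;

	/* How many MSI-X vectors can the device expose? */
	ret = pci_msix_table_size(pdev);
	if (ret > 0) {
		vecs = min(ret, nr_wanted);
		for (i = 0; i < vecs; i++)
			entries[i].entry = i;
		if (pci_enable_msix(pdev, entries, vecs) == 0)
			return vecs;		/* MSI-X enabled */
	}

	/* Fall back to plain MSI, again clamped to the device's capability. */
	ret = pci_get_msi_cap(pdev);
	if (ret > 0) {
		vecs = min(ret, nr_wanted);
		if (pci_enable_msi_block(pdev, vecs) == 0) {
			for (i = 0; i < vecs; i++)
				entries[i].vector = pdev->irq + i;
			return vecs;		/* MSI enabled */
		}
	}

	return 1;	/* neither worked: single legacy INTx vector */
}

In the patch itself the same flow is expressed with goto labels (msi:,
no_msi:, irq:) so that the queue setup code following the labels is shared
between the MSI-X, MSI and legacy cases.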
Signed-off-by: Alexander Gordeev
---
 drivers/block/nvme-core.c | 48 +++++++++++++++++++++++---------------------
 1 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index da52092..f69d7af 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -1774,34 +1774,36 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
 
 	/* Deregister the admin queue's interrupt */
 	free_irq(dev->entry[0].vector, dev->queues[0]);
 
-	vecs = nr_io_queues;
+	result = pci_msix_table_size(pdev);
+	if (result < 0)
+		goto msi;
+
+	vecs = min(result, nr_io_queues);
 	for (i = 0; i < vecs; i++)
 		dev->entry[i].entry = i;
-	for (;;) {
-		result = pci_enable_msix(pdev, dev->entry, vecs);
-		if (result <= 0)
-			break;
-		vecs = result;
-	}
 
-	if (result < 0) {
-		vecs = nr_io_queues;
-		if (vecs > 32)
-			vecs = 32;
-		for (;;) {
-			result = pci_enable_msi_block(pdev, vecs);
-			if (result == 0) {
-				for (i = 0; i < vecs; i++)
-					dev->entry[i].vector = i + pdev->irq;
-				break;
-			} else if (result < 0) {
-				vecs = 1;
-				break;
-			}
-			vecs = result;
-		}
+	result = pci_enable_msix(pdev, dev->entry, vecs);
+	if (result == 0)
+		goto irq;
+
+msi:
+	result = pci_get_msi_cap(pdev);
+	if (result < 0)
+		goto no_msi;
+
+	vecs = min(result, nr_io_queues);
+
+	result = pci_enable_msi_block(pdev, vecs);
+	if (result == 0) {
+		for (i = 0; i < vecs; i++)
+			dev->entry[i].vector = i + pdev->irq;
+		goto irq;
 	}
+no_msi:
+	vecs = 1;
+
+irq:
 	/*
 	 * Should investigate if there's a performance win from allocating
 	 * more queues than interrupt vectors; it might allow the submission