From patchwork Tue Feb 18 15:09:02 2014
X-Patchwork-Submitter: Alexander Gordeev
X-Patchwork-Id: 3672211
X-Patchwork-Delegate: bhelgaas@google.com
From: Alexander Gordeev
To: linux-kernel@vger.kernel.org
Cc: Alexander Gordeev, Mike Marciniszyn, Roland Dreier, Sean Hefty,
    Hal Rosenstock, linux-rdma@vger.kernel.org, linux-pci@vger.kernel.org
Subject: [PATCH 2/2] qib: Use pci_enable_msix_range() instead of pci_enable_msix()
Date: Tue, 18 Feb 2014 16:09:02 +0100

As a result of the deprecation of the MSI-X/MSI enablement functions
pci_enable_msix() and pci_enable_msi_block(), all drivers using these two
interfaces need to be updated to use the new pci_enable_msi_range() and
pci_enable_msix_range() interfaces.
Signed-off-by: Alexander Gordeev
Cc: Mike Marciniszyn
Cc: Roland Dreier
Cc: Sean Hefty
Cc: Hal Rosenstock
Cc: linux-rdma@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Acked-by: Mike Marciniszyn
---
 drivers/infiniband/hw/qib/qib_pcie.c | 55 +++++++++++++++++----------------
 1 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_pcie.c b/drivers/infiniband/hw/qib/qib_pcie.c
index c8d9c4a..61a0046 100644
--- a/drivers/infiniband/hw/qib/qib_pcie.c
+++ b/drivers/infiniband/hw/qib/qib_pcie.c
@@ -197,46 +197,47 @@ static void qib_msix_setup(struct qib_devdata *dd, int pos, u32 *msixcnt,
 			   struct qib_msix_entry *qib_msix_entry)
 {
 	int ret;
-	u32 tabsize = 0;
-	u16 msix_flags;
+	int nvec = *msixcnt;
 	struct msix_entry *msix_entry;
 	int i;
 
+	ret = pci_msix_vec_count(dd->pcidev);
+	if (ret < 0)
+		goto do_intx;
+
+	nvec = min(nvec, ret);
+
 	/* We can't pass qib_msix_entry array to qib_msix_setup
 	 * so use a dummy msix_entry array and copy the allocated
 	 * irq back to the qib_msix_entry array. */
-	msix_entry = kmalloc(*msixcnt * sizeof(*msix_entry), GFP_KERNEL);
-	if (!msix_entry) {
-		ret = -ENOMEM;
+	msix_entry = kmalloc(nvec * sizeof(*msix_entry), GFP_KERNEL);
+	if (!msix_entry)
 		goto do_intx;
-	}
-	for (i = 0; i < *msixcnt; i++)
+
+	for (i = 0; i < nvec; i++)
 		msix_entry[i] = qib_msix_entry[i].msix;
 
-	pci_read_config_word(dd->pcidev, pos + PCI_MSIX_FLAGS, &msix_flags);
-	tabsize = 1 + (msix_flags & PCI_MSIX_FLAGS_QSIZE);
-	if (tabsize > *msixcnt)
-		tabsize = *msixcnt;
-	ret = pci_enable_msix(dd->pcidev, msix_entry, tabsize);
-	if (ret > 0) {
-		tabsize = ret;
-		ret = pci_enable_msix(dd->pcidev, msix_entry, tabsize);
-	}
-do_intx:
-	if (ret) {
-		qib_dev_err(dd,
-			"pci_enable_msix %d vectors failed: %d, falling back to INTx\n",
-			tabsize, ret);
-		tabsize = 0;
-	}
-	for (i = 0; i < tabsize; i++)
+	ret = pci_enable_msix_range(dd->pcidev, msix_entry, 1, nvec);
+	if (ret < 0)
+		goto free_msix_entry;
+	else
+		nvec = ret;
+
+	for (i = 0; i < nvec; i++)
 		qib_msix_entry[i].msix = msix_entry[i];
+
 	kfree(msix_entry);
-	*msixcnt = tabsize;
+	*msixcnt = nvec;
+	return;
 
-	if (ret)
-		qib_enable_intx(dd->pcidev);
+free_msix_entry:
+	kfree(msix_entry);
+do_intx:
+	qib_dev_err(dd, "pci_enable_msix_range %d vectors failed: %d, "
+		    "falling back to INTx\n", nvec, ret);
+	*msixcnt = 0;
+	qib_enable_intx(dd->pcidev);
 }
 
 /**
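
[Editor's note, not part of the patch: a minimal, self-contained sketch of the
pci_enable_msix_range() calling convention this series converts drivers to.
The helper name my_request_msix_vectors() and the bare "pdev" parameter are
hypothetical; the qib code above does the same negotiation but round-trips the
vectors through its qib_msix_entry wrapper and falls back to INTx itself.]

/*
 * Sketch only -- generic pdev, hypothetical helper name; it uses the same
 * kernel APIs as the patch: pci_msix_vec_count(), pci_enable_msix_range().
 */
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/slab.h>

static int my_request_msix_vectors(struct pci_dev *pdev, int nvec)
{
	struct msix_entry *entries;
	int i, ret;

	/* Never ask for more vectors than the device's MSI-X table holds. */
	ret = pci_msix_vec_count(pdev);
	if (ret < 0)
		return ret;
	nvec = min(nvec, ret);

	entries = kcalloc(nvec, sizeof(*entries), GFP_KERNEL);
	if (!entries)
		return -ENOMEM;
	for (i = 0; i < nvec; i++)
		entries[i].entry = i;

	/*
	 * pci_enable_msix_range() negotiates within [1, nvec] internally and
	 * returns the number of vectors actually enabled, or a negative errno.
	 * The old pci_enable_msix() instead returned a positive "this many are
	 * available" hint that every driver had to retry in its own loop.
	 */
	ret = pci_enable_msix_range(pdev, entries, 1, nvec);
	if (ret < 0) {
		kfree(entries);
		return ret;		/* caller may fall back to INTx */
	}

	/* entries[0..ret-1].vector now hold the IRQs for request_irq(). */
	kfree(entries);
	return ret;
}

The design point of the new interface is that the minvec/maxvec negotiation
happens inside the PCI core, which is why the patch can drop both the
MSI-X-table-size read via PCI_MSIX_FLAGS_QSIZE and the second, retried
pci_enable_msix() call from qib_msix_setup().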