From patchwork Wed Dec 26 23:33:28 2018
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 10743371
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Dick Kennedy
Subject: [PATCH 19/25] lpfc: Utilize new IRQ API when allocating MSI-X vectors
Date: Wed, 26 Dec 2018 15:33:28 -0800
Message-Id: <20181226233334.27518-20-jsmart2021@gmail.com>
X-Mailer: git-send-email 2.13.7
In-Reply-To: <20181226233334.27518-1-jsmart2021@gmail.com>
References: <20181226233334.27518-1-jsmart2021@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

The current driver uses the older IRQ API for MSI-X allocation. Change the
driver to use pci_alloc_irq_vectors() when allocating IRQ vectors.

Make lpfc_cpu_affinity_check use pci_irq_get_affinity to determine how the
kernel mapped all the IRQs.

Remove msix_entries from the SLI4 structure; replace its use with
pci_irq_vector().

Signed-off-by: Dick Kennedy
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
---
 drivers/scsi/lpfc/lpfc_init.c | 162 ++++--------------------------------------
 1 file changed, 13 insertions(+), 149 deletions(-)
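As background for the hunks below, here is a minimal, self-contained sketch
(not lpfc code; the helper name example_setup_irqs, its 'want' parameter, and
the pr_info output are illustrative assumptions) of the pci_alloc_irq_vectors()
/ pci_irq_get_affinity() pattern the driver is being moved to:

#include <linux/pci.h>
#include <linux/cpumask.h>
#include <linux/printk.h>

/* Illustrative sketch: let the PCI core spread MSI-X vectors across CPUs,
 * then read back the per-vector affinity masks it chose.
 */
static int example_setup_irqs(struct pci_dev *pdev, unsigned int want)
{
        const struct cpumask *maskp;
        int nvec, idx, cpu;

        /* Accept anywhere from 1 to 'want' vectors; PCI_IRQ_AFFINITY asks
         * the core to distribute the vectors over the present CPUs.
         */
        nvec = pci_alloc_irq_vectors(pdev, 1, want,
                                     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
        if (nvec < 0)
                return nvec;

        for (idx = 0; idx < nvec; idx++) {
                /* Affinity mask the kernel assigned to this vector */
                maskp = pci_irq_get_affinity(pdev, idx);
                if (!maskp)
                        continue;

                /* Walk the present CPUs serviced by this vector; a driver
                 * would record the Linux IRQ number per CPU here.
                 */
                for_each_cpu_and(cpu, maskp, cpu_present_mask)
                        pr_info("vector %d (irq %d) -> CPU %d\n",
                                idx, pci_irq_vector(pdev, idx), cpu);
        }
        return nvec;
}

Because PCI_IRQ_AFFINITY gives each vector a fixed, kernel-managed affinity,
the explicit irq_set_affinity_hint()/IRQ_NO_BALANCING calls in the old code
can simply be deleted rather than reworked.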
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 0e9c7292ef8d..309383c0cb35 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -10531,103 +10531,6 @@ lpfc_find_eq_handle(struct lpfc_hba *phba, uint16_t hdwq)
         return 0;
 }
 
-/**
- * lpfc_find_phys_id_eq - Find the next EQ that corresponds to the specified
- * Physical Id.
- * @phba: pointer to lpfc hba data structure.
- * @eqidx: EQ index
- * @phys_id: CPU package physical id
- */
-static uint16_t
-lpfc_find_phys_id_eq(struct lpfc_hba *phba, uint16_t eqidx, uint16_t phys_id)
-{
-        struct lpfc_vector_map_info *cpup;
-        int cpu, desired_phys_id;
-
-        desired_phys_id = LPFC_VECTOR_MAP_EMPTY;
-
-        /* Find the desired phys_id for the specified EQ */
-        cpup = phba->sli4_hba.cpu_map;
-        for (cpu = 0; cpu < phba->sli4_hba.num_present_cpu; cpu++) {
-                if ((cpup->irq != LPFC_VECTOR_MAP_EMPTY) &&
-                    (cpup->eq == eqidx)) {
-                        desired_phys_id = cpup->phys_id;
-                        break;
-                }
-                cpup++;
-        }
-        if (phys_id == desired_phys_id)
-                return eqidx;
-
-        /* Find a EQ thats on the specified phys_id */
-        cpup = phba->sli4_hba.cpu_map;
-        for (cpu = 0; cpu < phba->sli4_hba.num_present_cpu; cpu++) {
-                if ((cpup->irq != LPFC_VECTOR_MAP_EMPTY) &&
-                    (cpup->phys_id == phys_id))
-                        return cpup->eq;
-                cpup++;
-        }
-        return 0;
-}
-
-/**
- * lpfc_find_cpu_map - Find next available CPU map entry that matches the
- * phys_id and core_id.
- * @phba: pointer to lpfc hba data structure.
- * @phys_id: CPU package physical id
- * @core_id: CPU core id
- * @hdwqidx: Hardware Queue index
- * @eqidx: EQ index
- * @isr_avail: Should an IRQ be associated with this entry
- */
-static struct lpfc_vector_map_info *
-lpfc_find_cpu_map(struct lpfc_hba *phba, uint16_t phys_id, uint16_t core_id,
-                  uint16_t hdwqidx, uint16_t eqidx, int isr_avail)
-{
-        struct lpfc_vector_map_info *cpup;
-        int cpu;
-
-        cpup = phba->sli4_hba.cpu_map;
-        for (cpu = 0; cpu < phba->sli4_hba.num_present_cpu; cpu++) {
-                /* Does the cpup match the one we are looking for */
-                if ((cpup->phys_id == phys_id) &&
-                    (cpup->core_id == core_id)) {
-                        /* If it has been already assigned, then skip it */
-                        if (cpup->hdwq != LPFC_VECTOR_MAP_EMPTY) {
-                                cpup++;
-                                continue;
-                        }
-                        /* Ensure we are on the same phys_id as the first one */
-                        if (!isr_avail)
-                                cpup->eq = lpfc_find_phys_id_eq(phba, eqidx,
-                                                                phys_id);
-                        else
-                                cpup->eq = eqidx;
-
-                        cpup->hdwq = hdwqidx;
-                        if (isr_avail) {
-                                cpup->irq =
-                                    pci_irq_vector(phba->pcidev, eqidx);
-
-                                /* Now affinitize to the selected CPU */
-                                irq_set_affinity_hint(cpup->irq,
-                                                      get_cpu_mask(cpu));
-                                irq_set_status_flags(cpup->irq,
-                                                     IRQ_NO_BALANCING);
-
-                                lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
-                                                "3330 Set Affinity: CPU %d "
-                                                "EQ %d irq %d (HDWQ %x)\n",
-                                                cpu, cpup->eq,
-                                                cpup->irq, cpup->hdwq);
-                        }
-                        return cpup;
-                }
-                cpup++;
-        }
-        return 0;
-}
-
 #ifdef CONFIG_X86
 /**
  * lpfc_find_hyper - Determine if the CPU map entry is hyper-threaded
@@ -10670,11 +10573,11 @@ lpfc_find_hyper(struct lpfc_hba *phba, int cpu,
 static void
 lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
 {
-        int i, j, idx, phys_id;
+        int i, cpu, idx, phys_id;
         int max_phys_id, min_phys_id;
         int max_core_id, min_core_id;
         struct lpfc_vector_map_info *cpup;
-        int cpu, eqidx, hdwqidx, isr_avail;
+        const struct cpumask *maskp;
 #ifdef CONFIG_X86
         struct cpuinfo_x86 *cpuinfo;
 #endif
@@ -10731,60 +10634,21 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
                 eqi->icnt = 0;
         }
 
-        /*
-         * If the number of IRQ vectors == number of CPUs,
-         * mapping is pretty simple: 1 to 1.
-         * This is the desired path if NVME is enabled.
-         */
-        if (vectors == phba->sli4_hba.num_present_cpu) {
-                cpup = phba->sli4_hba.cpu_map;
-                for (idx = 0; idx < vectors; idx++) {
+        for (idx = 0; idx < phba->cfg_irq_chann; idx++) {
+                maskp = pci_irq_get_affinity(phba->pcidev, idx);
+                if (!maskp)
+                        continue;
+
+                for_each_cpu_and(cpu, maskp, cpu_present_mask) {
+                        cpup = &phba->sli4_hba.cpu_map[cpu];
                         cpup->eq = idx;
                         cpup->hdwq = idx;
                         cpup->irq = pci_irq_vector(phba->pcidev, idx);
 
-                        /* Now affinitize to the selected CPU */
-                        irq_set_affinity_hint(
-                                pci_irq_vector(phba->pcidev, idx),
-                                get_cpu_mask(idx));
-                        irq_set_status_flags(cpup->irq, IRQ_NO_BALANCING);
-
-                        lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+                        lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
                                         "3336 Set Affinity: CPU %d "
-                                        "EQ %d irq %d\n",
-                                        idx, cpup->eq,
-                                        pci_irq_vector(phba->pcidev, idx));
-                        cpup++;
-                }
-                return;
-        }
-
-        idx = 0;
-        isr_avail = 1;
-        eqidx = 0;
-        hdwqidx = 0;
-
-        /* Mapping is more complicated for this case. Hardware Queues are
-         * assigned in a "ping pong" fashion, ping pong-ing between the
-         * available phys_id's.
-         */
-        while (idx < phba->sli4_hba.num_present_cpu) {
-                for (i = min_core_id; i <= max_core_id; i++) {
-                        for (j = min_phys_id; j <= max_phys_id; j++) {
-                                cpup = lpfc_find_cpu_map(phba, j, i, hdwqidx,
-                                                         eqidx, isr_avail);
-                                if (!cpup)
-                                        continue;
-                                idx++;
-                                hdwqidx++;
-                                if (hdwqidx >= phba->cfg_hdw_queue)
-                                        hdwqidx = 0;
-                                eqidx++;
-                                if (eqidx >= phba->cfg_irq_chann) {
-                                        isr_avail = 0;
-                                        eqidx = 0;
-                                }
-                        }
+                                        "hdwq %d irq %d\n",
+                                        cpu, cpup->hdwq, cpup->irq);
                 }
         }
         return;
@@ -10811,7 +10675,7 @@ lpfc_sli4_enable_msix(struct lpfc_hba *phba)
         vectors = phba->cfg_irq_chann;
 
         rc = pci_alloc_irq_vectors(phba->pcidev,
-                                (phba->nvmet_support) ? 1 : 2,
+                                1,
                                 vectors, PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
         if (rc < 0) {
                 lpfc_printf_log(phba, KERN_INFO, LOG_INIT,