From patchwork Tue Dec 4 15:51:20 2018
X-Patchwork-Submitter: Dou Liyang
X-Patchwork-Id: 10712145
X-Patchwork-Delegate: bhelgaas@google.com
From: Dou Liyang
To: linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org
Cc: tglx@linutronix.de, kashyap.desai@broadcom.com,
    shivasharan.srikanteshwara@broadcom.com, sumit.saxena@broadcom.com,
    ming.lei@redhat.com, hch@lst.de, bhelgaas@google.com,
    douliyang1@huawei.com, Dou Liyang
Subject: [PATCH 1/3] genirq/core: Add a new interrupt affinity descriptor
Date: Tue, 4 Dec 2018 23:51:20 +0800
Message-Id: <20181204155122.6327-2-douliyangs@gmail.com>
In-Reply-To: <20181204155122.6327-1-douliyangs@gmail.com>
References: <20181204155122.6327-1-douliyangs@gmail.com>

Currently, Linux passes the interrupt affinity information around as a bare
cpumask pointer and marks the interrupt as managed if that cpumask is not
NULL. If additional information has to be passed along, this design does not
scale: adding new arguments is the most straightforward approach, but it
would touch a large number of functions.

Add a new interrupt affinity descriptor, struct irq_affinity_desc, and
replace the cpumask pointer with a pointer to it. This allows the descriptor
to be extended in the future without touching all those functions ever
again; only the irq_affinity_desc structure itself needs to be modified.

No functional change, just preparation for spreading the managed flags.

Suggested-by: Thomas Gleixner
Suggested-by: Bjorn Helgaas
Signed-off-by: Dou Liyang
---
 drivers/pci/msi.c         |  9 ++++-----
 include/linux/interrupt.h | 14 ++++++++++++--
 include/linux/irq.h       |  6 ++++--
 include/linux/irqdomain.h |  6 ++++--
 include/linux/msi.h       |  4 ++--
 kernel/irq/affinity.c     | 22 ++++++++++++----------
 kernel/irq/devres.c       |  4 ++--
 kernel/irq/irqdesc.c      | 16 ++++++++++------
 kernel/irq/irqdomain.c    |  4 ++--
 kernel/irq/msi.c          |  7 ++++---
 10 files changed, 56 insertions(+), 36 deletions(-)

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 265ed3e4c920..7a1c8a09efa5 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -534,14 +534,13 @@ static int populate_msi_sysfs(struct pci_dev *pdev)
 static struct msi_desc *
 msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
 {
-	struct cpumask *masks = NULL;
+	struct irq_affinity_desc *masks = NULL;
 	struct msi_desc *entry;
 	u16 control;
 
 	if (affd)
 		masks = irq_create_affinity_masks(nvec, affd);
 
-	/* MSI Entry Initialization */
 	entry = alloc_msi_entry(&dev->dev, nvec, masks);
 	if (!entry)
@@ -672,7 +671,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
 			      struct msix_entry *entries, int nvec,
 			      const struct irq_affinity *affd)
 {
-	struct cpumask *curmsk, *masks = NULL;
+	struct irq_affinity_desc *curmsk, *masks = NULL;
 	struct msi_desc *entry;
 	int ret, i;
 
@@ -1264,7 +1263,7 @@ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
 
 		for_each_pci_msi_entry(entry, dev) {
 			if (i == nr)
-				return entry->affinity;
+				return &entry->affinity->mask;
 			i++;
 		}
 		WARN_ON_ONCE(1);
@@ -1276,7 +1275,7 @@ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
 			 nr >= entry->nvec_used))
 			return NULL;
 
-		return &entry->affinity[nr];
+		return &entry->affinity[nr].mask;
 	} else {
 		return cpu_possible_mask;
 	}
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index ca397ff40836..71be303231e9 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -257,6 +257,14 @@ struct irq_affinity {
 	int	*sets;
 };
 
+/**
+ * struct irq_affinity_desc - Interrupt affinity descriptor
+ * @mask:	It's one cpumask per descriptor.
+ */
+struct irq_affinity_desc {
+	struct cpumask	mask;
+};
+
 #if defined(CONFIG_SMP)
 
 extern cpumask_var_t irq_default_affinity;
 
@@ -303,7 +311,9 @@ extern int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m);
 extern int
 irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);
 
-struct cpumask *irq_create_affinity_masks(int nvec, const struct irq_affinity *affd);
+struct irq_affinity_desc *
+irq_create_affinity_masks(int nvec, const struct irq_affinity *affd);
+
 int irq_calc_affinity_vectors(int minvec, int maxvec, const struct irq_affinity *affd);
 
 #else /* CONFIG_SMP */
@@ -337,7 +347,7 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
 	return 0;
 }
 
-static inline struct cpumask *
+static inline struct irq_affinity_desc *
 irq_create_affinity_masks(int nvec, const struct irq_affinity *affd)
 {
 	return NULL;
diff --git a/include/linux/irq.h b/include/linux/irq.h
index c9bffda04a45..def2b2aac8b1 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -27,6 +27,7 @@
 struct seq_file;
 struct module;
 struct msi_msg;
+struct irq_affinity_desc;
 enum irqchip_irq_state;
 
 /*
@@ -834,11 +835,12 @@ struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
 unsigned int arch_dynirq_lower_bound(unsigned int from);
 
 int __irq_alloc_descs(int irq, unsigned int from, unsigned int cnt, int node,
-		      struct module *owner, const struct cpumask *affinity);
+		      struct module *owner,
+		      const struct irq_affinity_desc *affinity);
 
 int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
 			   unsigned int cnt, int node, struct module *owner,
-			   const struct cpumask *affinity);
+			   const struct irq_affinity_desc *affinity);
 
 /* use macros to avoid needing export.h for THIS_MODULE */
 #define irq_alloc_descs(irq, from, cnt, node)	\
diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
index 068aa46f0d55..35965f41d7be 100644
--- a/include/linux/irqdomain.h
+++ b/include/linux/irqdomain.h
@@ -43,6 +43,7 @@ struct irq_chip;
 struct irq_data;
 struct cpumask;
 struct seq_file;
+struct irq_affinity_desc;
 
 /* Number of irqs reserved for a legacy isa controller */
 #define NUM_ISA_INTERRUPTS	16
@@ -266,7 +267,7 @@ extern bool irq_domain_check_msi_remap(void);
 extern void irq_set_default_host(struct irq_domain *host);
 extern int irq_domain_alloc_descs(int virq, unsigned int nr_irqs,
 				  irq_hw_number_t hwirq, int node,
-				  const struct cpumask *affinity);
+				  const struct irq_affinity_desc *affinity);
 
 static inline struct fwnode_handle *of_node_to_fwnode(struct device_node *node)
 {
@@ -449,7 +450,8 @@ static inline struct irq_domain *irq_domain_add_hierarchy(struct irq_domain *par
 
 extern int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
 				   unsigned int nr_irqs, int node, void *arg,
-				   bool realloc, const struct cpumask *affinity);
+				   bool realloc,
+				   const struct irq_affinity_desc *affinity);
 extern void irq_domain_free_irqs(unsigned int virq, unsigned int nr_irqs);
 extern int irq_domain_activate_irq(struct irq_data *irq_data, bool early);
 extern void irq_domain_deactivate_irq(struct irq_data *irq_data);
diff --git a/include/linux/msi.h b/include/linux/msi.h
index 0e9c50052ff3..7ba4c230181c 100644
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -76,7 +76,7 @@ struct msi_desc {
 	unsigned int			nvec_used;
 	struct device			*dev;
 	struct msi_msg			msg;
-	struct cpumask			*affinity;
+	struct irq_affinity_desc	*affinity;
 
 	union {
 		/* PCI MSI/X specific data */
@@ -136,7 +136,7 @@ static inline void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg)
 #endif /* CONFIG_PCI_MSI */
 
 struct msi_desc *alloc_msi_entry(struct device *dev, int nvec,
-				 const struct cpumask *affinity);
+				 const struct irq_affinity_desc *affinity);
 void free_msi_entry(struct msi_desc *entry);
 void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
 void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 08c904eb7279..1562a36e7c0f 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -99,7 +99,7 @@ static int __irq_build_affinity_masks(const struct irq_affinity *affd,
 				      cpumask_var_t *node_to_cpumask,
 				      const struct cpumask *cpu_mask,
 				      struct cpumask *nmsk,
-				      struct cpumask *masks)
+				      struct irq_affinity_desc *masks)
 {
 	int n, nodes, cpus_per_vec, extra_vecs, done = 0;
 	int last_affv = firstvec + numvecs;
@@ -117,7 +117,9 @@ static int __irq_build_affinity_masks(const struct irq_affinity *affd,
 	 */
 	if (numvecs <= nodes) {
 		for_each_node_mask(n, nodemsk) {
-			cpumask_or(masks + curvec, masks + curvec, node_to_cpumask[n]);
+			cpumask_or(&masks[curvec].mask,
+				   &masks[curvec].mask,
+				   node_to_cpumask[n]);
 			if (++curvec == last_affv)
 				curvec = firstvec;
 		}
@@ -150,7 +152,8 @@ static int __irq_build_affinity_masks(const struct irq_affinity *affd,
 				cpus_per_vec++;
 				--extra_vecs;
 			}
-			irq_spread_init_one(masks + curvec, nmsk, cpus_per_vec);
+			irq_spread_init_one(&masks[curvec].mask, nmsk,
+					    cpus_per_vec);
 		}
 
 		done += v;
@@ -173,7 +176,7 @@ static int __irq_build_affinity_masks(const struct irq_affinity *affd,
 static int irq_build_affinity_masks(const struct irq_affinity *affd,
 				    int startvec, int numvecs, int firstvec,
 				    cpumask_var_t *node_to_cpumask,
-				    struct cpumask *masks)
+				    struct irq_affinity_desc *masks)
 {
 	int curvec = startvec, nr_present, nr_others;
 	int ret = -ENOMEM;
@@ -226,15 +229,15 @@ static int irq_build_affinity_masks(const struct irq_affinity *affd,
  * @nvecs:	The total number of vectors
  * @affd:	Description of the affinity requirements
  *
- * Returns the masks pointer or NULL if allocation failed.
+ * Returns the irq_affinity_desc pointer or NULL if allocation failed.
  */
-struct cpumask *
+struct irq_affinity_desc *
 irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 {
 	int affvecs = nvecs - affd->pre_vectors - affd->post_vectors;
 	int curvec, usedvecs;
 	cpumask_var_t *node_to_cpumask;
-	struct cpumask *masks = NULL;
+	struct irq_affinity_desc *masks = NULL;
 	int i, nr_sets;
 
 	/*
@@ -254,8 +257,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 
 	/* Fill out vectors at the beginning that don't need affinity */
 	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
-		cpumask_copy(masks + curvec, irq_default_affinity);
-
+		cpumask_copy(&masks[curvec].mask, irq_default_affinity);
 	/*
 	 * Spread on present CPUs starting from affd->pre_vectors. If we
 	 * have multiple sets, build each sets affinity mask separately.
@@ -285,7 +287,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	else
 		curvec = affd->pre_vectors + usedvecs;
 	for (; curvec < nvecs; curvec++)
-		cpumask_copy(masks + curvec, irq_default_affinity);
+		cpumask_copy(&masks[curvec].mask, irq_default_affinity);
 
 outnodemsk:
 	free_node_to_cpumask(node_to_cpumask);
diff --git a/kernel/irq/devres.c b/kernel/irq/devres.c
index 6a682c229e10..5d5378ea0afe 100644
--- a/kernel/irq/devres.c
+++ b/kernel/irq/devres.c
@@ -169,7 +169,7 @@ static void devm_irq_desc_release(struct device *dev, void *res)
  * @cnt:	Number of consecutive irqs to allocate
  * @node:	Preferred node on which the irq descriptor should be allocated
  * @owner:	Owning module (can be NULL)
- * @affinity:	Optional pointer to an affinity mask array of size @cnt
+ * @affinity:	Optional pointer to an irq_affinity_desc array of size @cnt
  *		which hints where the irq descriptors should be allocated
  *		and which default affinities to use
  *
@@ -179,7 +179,7 @@ static void devm_irq_desc_release(struct device *dev, void *res)
  */
 int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
 			   unsigned int cnt, int node, struct module *owner,
-			   const struct cpumask *affinity)
+			   const struct irq_affinity_desc *affinity)
 {
 	struct irq_desc_devres *dr;
 	int base;
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 578d0e5f1b5b..f87fa2b9935a 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -449,8 +449,10 @@ static void free_desc(unsigned int irq)
 }
 
 static int alloc_descs(unsigned int start, unsigned int cnt, int node,
-		       const struct cpumask *affinity, struct module *owner)
+		       const struct irq_affinity_desc *affinity,
+		       struct module *owner)
 {
+	const struct irq_affinity_desc *cur_affinity= affinity;
 	const struct cpumask *mask = NULL;
 	struct irq_desc *desc;
 	unsigned int flags;
@@ -458,9 +460,11 @@ static int alloc_descs(unsigned int start, unsigned int cnt, int node,
 
 	/* Validate affinity mask(s) */
 	if (affinity) {
-		for (i = 0, mask = affinity; i < cnt; i++, mask++) {
+		for (i = 0; i < cnt; i++) {
+			mask = &cur_affinity->mask;
 			if (cpumask_empty(mask))
 				return -EINVAL;
+			cur_affinity++;
 		}
 	}
 
@@ -469,8 +473,8 @@ static int alloc_descs(unsigned int start, unsigned int cnt, int node,
 
 	for (i = 0; i < cnt; i++) {
 		if (affinity) {
-			node = cpu_to_node(cpumask_first(affinity));
-			mask = affinity;
+			mask = &affinity->mask;
+			node = cpu_to_node(cpumask_first(mask));
 			affinity++;
 		}
 		desc = alloc_desc(start + i, node, flags, mask, owner);
@@ -575,7 +579,7 @@ static void free_desc(unsigned int irq)
 }
 
 static inline int alloc_descs(unsigned int start, unsigned int cnt, int node,
-			      const struct cpumask *affinity,
+			      const struct irq_affinity_desc *affinity,
 			      struct module *owner)
 {
 	u32 i;
@@ -705,7 +709,7 @@ EXPORT_SYMBOL_GPL(irq_free_descs);
  */
 int __ref
 __irq_alloc_descs(int irq, unsigned int from, unsigned int cnt, int node,
-		  struct module *owner, const struct cpumask *affinity)
+		  struct module *owner, const struct irq_affinity_desc *affinity)
 {
 	int start, ret;
 
diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 3366d11c3e02..8b0be4bd6565 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -969,7 +969,7 @@ const struct irq_domain_ops irq_domain_simple_ops = {
 EXPORT_SYMBOL_GPL(irq_domain_simple_ops);
 
 int irq_domain_alloc_descs(int virq, unsigned int cnt, irq_hw_number_t hwirq,
-			   int node, const struct cpumask *affinity)
+			   int node, const struct irq_affinity_desc *affinity)
 {
 	unsigned int hint;
 
@@ -1281,7 +1281,7 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
  */
 int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
 			    unsigned int nr_irqs, int node, void *arg,
-			    bool realloc, const struct cpumask *affinity)
+			    bool realloc, const struct irq_affinity_desc *affinity)
 {
 	int i, ret, virq;
 
diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
index 4ca2fd46645d..36b7f92fcff0 100644
--- a/kernel/irq/msi.c
+++ b/kernel/irq/msi.c
@@ -23,11 +23,12 @@
  * @nvec:	The number of vectors used in this entry
  * @affinity:	Optional pointer to an affinity mask array size of @nvec
  *
- * If @affinity is not NULL then a an affinity array[@nvec] is allocated
- * and the affinity masks from @affinity are copied.
+ * If @affinity is not NULL then an affinity array[@nvec] is allocated
+ * and the affinity masks and flags from @affinity are copied.
  */
 struct msi_desc *
-alloc_msi_entry(struct device *dev, int nvec, const struct cpumask *affinity)
+alloc_msi_entry(struct device *dev, int nvec,
+		const struct irq_affinity_desc *affinity)
 {
 	struct msi_desc *desc;
 

From patchwork Tue Dec 4 15:51:21 2018
X-Patchwork-Submitter: Dou Liyang
X-Patchwork-Id: 10712147
X-Patchwork-Delegate: bhelgaas@google.com
From: Dou Liyang
To: linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org
Cc: tglx@linutronix.de, kashyap.desai@broadcom.com,
    shivasharan.srikanteshwara@broadcom.com, sumit.saxena@broadcom.com,
    ming.lei@redhat.com, hch@lst.de, bhelgaas@google.com,
    douliyang1@huawei.com, Dou Liyang
Subject: [PATCH 2/3] irq/affinity: Add is_managed into struct irq_affinity_desc
Date: Tue, 4 Dec 2018 23:51:21 +0800
Message-Id: <20181204155122.6327-3-douliyangs@gmail.com>
In-Reply-To: <20181204155122.6327-1-douliyangs@gmail.com>
References: <20181204155122.6327-1-douliyangs@gmail.com>

Linux now uses struct irq_affinity_desc to convey the affinity information.
As Kashyap and Sumit reported, in the MSI/MSI-X subsystem the pre/post
vectors may be used for some extra reply queues to improve performance:

  https://marc.info/?l=linux-kernel&m=153543887027997&w=2

Their affinities are not NULL, but they should be mapped as unmanaged
interrupts. So transferring only the interrupt affinity assignments is not
enough.

Add a new bit "is_managed" to struct irq_affinity_desc to convey that
information and use it in alloc_descs().
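To make the resulting layout concrete, here is a small, compilable user-space
model of the descriptor this series introduces. Only the field names mask and
is_managed mirror struct irq_affinity_desc from patches 1 and 2; the toy
cpumask type, the helper and the vector counts below are invented for
illustration and are not part of the patch:

#include <stdio.h>

/* Toy stand-in for the kernel's struct cpumask: one bit per CPU. */
typedef unsigned long toy_cpumask;

/* Mirrors the shape of struct irq_affinity_desc after patches 1 and 2. */
struct toy_irq_affinity_desc {
	toy_cpumask	mask;
	unsigned int	is_managed : 1;
};

/*
 * Rough model of the classification done in irq_create_affinity_masks():
 * the pre/post vectors are left unmanaged, the spread vectors in the
 * middle are marked as managed. The real per-vector spreading is replaced
 * by a placeholder mask here.
 */
static void toy_create_masks(struct toy_irq_affinity_desc *descs, int nvecs,
			     int pre_vectors, int post_vectors,
			     toy_cpumask all_cpus)
{
	for (int i = 0; i < nvecs; i++) {
		descs[i].mask = all_cpus;	/* placeholder spreading */
		descs[i].is_managed =
			(i >= pre_vectors && i < nvecs - post_vectors);
	}
}

int main(void)
{
	struct toy_irq_affinity_desc descs[6];

	/* 1 pre vector + 4 queue vectors + 1 post vector, 4 CPUs (0xf). */
	toy_create_masks(descs, 6, 1, 1, 0xf);

	for (int i = 0; i < 6; i++)
		printf("vector %d: mask=0x%lx is_managed=%u\n",
		       i, descs[i].mask, descs[i].is_managed);
	return 0;
}

With one pre and one post vector, vectors 0 and 5 come out unmanaged while
vectors 1-4 are managed, which is exactly the distinction alloc_descs()
starts honouring with this patch.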
Reported-by: Kashyap Desai
Reported-by: Sumit Saxena
Signed-off-by: Dou Liyang
---
 include/linux/interrupt.h | 1 +
 kernel/irq/affinity.c     | 7 +++++++
 kernel/irq/irqdesc.c      | 9 +++++++--
 3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 71be303231e9..a12b3dbbc45e 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -263,6 +263,7 @@ struct irq_affinity {
  */
 struct irq_affinity_desc {
 	struct cpumask	mask;
+	unsigned int	is_managed : 1;
 };
 
 #if defined(CONFIG_SMP)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 1562a36e7c0f..d122575ba1b4 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -289,6 +289,13 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	for (; curvec < nvecs; curvec++)
 		cpumask_copy(&masks[curvec].mask, irq_default_affinity);
 
+	/* Setup complementary information */
+	for (i = 0; i < nvecs; i++) {
+		if (i >= affd->pre_vectors && i < nvecs - affd->post_vectors)
+			masks[i].is_managed = 1;
+		else
+			masks[i].is_managed = 0;
+	}
 outnodemsk:
 	free_node_to_cpumask(node_to_cpumask);
 	return masks;
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index f87fa2b9935a..6b0821c144c0 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -455,7 +455,7 @@ static int alloc_descs(unsigned int start, unsigned int cnt, int node,
 	const struct irq_affinity_desc *cur_affinity= affinity;
 	const struct cpumask *mask = NULL;
 	struct irq_desc *desc;
-	unsigned int flags;
+	unsigned int flags = 0;
 	int i;
 
 	/* Validate affinity mask(s) */
@@ -468,11 +468,16 @@ static int alloc_descs(unsigned int start, unsigned int cnt, int node,
 		}
 	}
 
-	flags = affinity ? IRQD_AFFINITY_MANAGED | IRQD_MANAGED_SHUTDOWN : 0;
 	mask = NULL;
 
 	for (i = 0; i < cnt; i++) {
 		if (affinity) {
+			if (affinity->is_managed) {
+				flags = IRQD_AFFINITY_MANAGED |
+					IRQD_MANAGED_SHUTDOWN;
+			} else {
+				flags = 0;
+			}
 			mask = &affinity->mask;
 			node = cpu_to_node(cpumask_first(mask));
 			affinity++;

From patchwork Tue Dec 4 15:51:22 2018
X-Patchwork-Submitter: Dou Liyang
X-Patchwork-Id: 10712149
X-Patchwork-Delegate: bhelgaas@google.com
From: Dou Liyang
To: linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org
Cc: tglx@linutronix.de, kashyap.desai@broadcom.com,
    shivasharan.srikanteshwara@broadcom.com, sumit.saxena@broadcom.com,
    ming.lei@redhat.com, hch@lst.de, bhelgaas@google.com,
    douliyang1@huawei.com, Dou Liyang
Subject: [PATCH 3/3] irq/affinity: Fix a possible breakage
Date: Tue, 4 Dec 2018 23:51:22 +0800
Message-Id: <20181204155122.6327-4-douliyangs@gmail.com>
In-Reply-To: <20181204155122.6327-1-douliyangs@gmail.com>
References: <20181204155122.6327-1-douliyangs@gmail.com>

In case of irq_default_affinity != cpu_possible_mask, setting the affinity
of the pre/post vectors to irq_default_affinity is a breakage. Just set the
pre/post vectors to cpu_possible_mask and be done with it.

Suggested-by: Thomas Gleixner
Signed-off-by: Dou Liyang
---
 kernel/irq/affinity.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index d122575ba1b4..aaa1dd82c3df 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -257,7 +257,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 
 	/* Fill out vectors at the beginning that don't need affinity */
 	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
-		cpumask_copy(&masks[curvec].mask, irq_default_affinity);
+		cpumask_copy(&masks[curvec].mask, cpu_possible_mask);
 	/*
 	 * Spread on present CPUs starting from affd->pre_vectors. If we
 	 * have multiple sets, build each sets affinity mask separately.
@@ -282,12 +282,15 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	}
 
 	/* Fill out vectors at the end that don't need affinity */
-	if (usedvecs >= affvecs)
+	if (usedvecs >= affvecs) {
 		curvec = affd->pre_vectors + affvecs;
-	else
+	} else {
 		curvec = affd->pre_vectors + usedvecs;
+		for (; curvec < affd->pre_vectors + affvecs; curvec++)
+			cpumask_copy(&masks[curvec].mask, irq_default_affinity);
+	}
 	for (; curvec < nvecs; curvec++)
-		cpumask_copy(&masks[curvec].mask, irq_default_affinity);
+		cpumask_copy(&masks[curvec].mask, cpu_possible_mask);
 
 	/* Setup complementary information */
 	for (i = 0; i < nvecs; i++) {
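For illustration only, a compilable toy model of the mask policy that results
from this patch: pre/post vectors get the possible mask, successfully spread
vectors keep their managed spread, and managed slots that could not be filled
fall back to the default affinity. The policy follows the hunk above; the
enum, the helper and the example numbers are invented:

#include <stdio.h>

enum toy_mask { SPREAD_MASK, DEFAULT_AFFINITY, POSSIBLE_MASK };

/*
 * Which mask a vector index ends up with after this patch:
 *  - pre/post vectors:                     cpu_possible_mask
 *  - successfully spread managed vectors:  their per-vector spread mask
 *  - managed slots left over when usedvecs < affvecs:
 *                                          irq_default_affinity
 */
static enum toy_mask toy_mask_for(int vec, int nvecs, int pre, int post,
				  int usedvecs)
{
	int affvecs = nvecs - pre - post;

	if (vec < pre || vec >= pre + affvecs)
		return POSSIBLE_MASK;
	if (vec < pre + usedvecs)
		return SPREAD_MASK;
	return DEFAULT_AFFINITY;
}

int main(void)
{
	static const char *name[] = { "spread", "default", "possible" };

	/* 8 vectors: 1 pre, 1 post, 6 managed slots of which 4 were spread. */
	for (int vec = 0; vec < 8; vec++)
		printf("vector %d -> %s\n", vec,
		       name[toy_mask_for(vec, 8, 1, 1, 4)]);
	return 0;
}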