From patchwork Fri Jul 1 20:00:49 2022
From: Samuel Holland
To: Marc Zyngier, Thomas Gleixner
Subject: [PATCH v3 1/8] irqchip/mips-gic: Only register IPI domain when SMP is enabled
Date: Fri, 1 Jul 2022 15:00:49 -0500
Message-Id: <20220701200056.46555-2-samuel@sholland.org>
In-Reply-To: <20220701200056.46555-1-samuel@sholland.org>

The MIPS GIC irqchip driver may be selected in a uniprocessor
configuration, but it unconditionally registers an IPI domain. Limit
the part of the driver dealing with IPIs to only be compiled when
GENERIC_IRQ_IPI is enabled, which corresponds to an SMP configuration.
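To make the structure of the change easier to follow, here is a condensed
sketch of the compile-time gating pattern the patch applies (function
signatures are simplified and the real body of gic_register_ipi_domain(),
which creates the IPI domain and reserves vectors, is omitted): the IPI
setup moves into one helper that only exists when CONFIG_GENERIC_IRQ_IPI is
set, and a static inline stub keeps the caller free of #ifdefs.

```c
/* Condensed illustration of the pattern; not the driver's full code. */
struct device_node;

#ifdef CONFIG_GENERIC_IRQ_IPI
static int gic_register_ipi_domain(struct device_node *node)
{
	/* create the IPI irq_domain and reserve IPI vectors */
	return 0;
}
#else /* !CONFIG_GENERIC_IRQ_IPI */
static inline int gic_register_ipi_domain(struct device_node *node)
{
	return 0;	/* uniprocessor build: nothing to register */
}
#endif /* !CONFIG_GENERIC_IRQ_IPI */

static int gic_of_init(struct device_node *node)
{
	int ret;

	/* ... shared/local interrupt setup ... */

	ret = gic_register_ipi_domain(node);
	if (ret)
		return ret;

	/* ... remaining controller setup ... */
	return 0;
}
```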
Reported-by: kernel test robot Signed-off-by: Samuel Holland --- Changes in v3: - New patch to fix build errors in uniprocessor MIPS configs drivers/irqchip/Kconfig | 3 +- drivers/irqchip/irq-mips-gic.c | 80 +++++++++++++++++++++++----------- 2 files changed, 56 insertions(+), 27 deletions(-) diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig index 1f23a6be7d88..d26a4ff7c99f 100644 --- a/drivers/irqchip/Kconfig +++ b/drivers/irqchip/Kconfig @@ -322,7 +322,8 @@ config KEYSTONE_IRQ config MIPS_GIC bool - select GENERIC_IRQ_IPI + select GENERIC_IRQ_IPI if SMP + select IRQ_DOMAIN_HIERARCHY select MIPS_CM config INGENIC_IRQ diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c index ff89b36267dd..8a9efb6ae587 100644 --- a/drivers/irqchip/irq-mips-gic.c +++ b/drivers/irqchip/irq-mips-gic.c @@ -52,13 +52,15 @@ static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[GIC_MAX_LONGS], pcpu_masks); static DEFINE_SPINLOCK(gic_lock); static struct irq_domain *gic_irq_domain; -static struct irq_domain *gic_ipi_domain; static int gic_shared_intrs; static unsigned int gic_cpu_pin; static unsigned int timer_cpu_pin; static struct irq_chip gic_level_irq_controller, gic_edge_irq_controller; + +#ifdef CONFIG_GENERIC_IRQ_IPI static DECLARE_BITMAP(ipi_resrv, GIC_MAX_INTRS); static DECLARE_BITMAP(ipi_available, GIC_MAX_INTRS); +#endif /* CONFIG_GENERIC_IRQ_IPI */ static struct gic_all_vpes_chip_data { u32 map; @@ -472,9 +474,11 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq, u32 map; if (hwirq >= GIC_SHARED_HWIRQ_BASE) { +#ifdef CONFIG_GENERIC_IRQ_IPI /* verify that shared irqs don't conflict with an IPI irq */ if (test_bit(GIC_HWIRQ_TO_SHARED(hwirq), ipi_resrv)) return -EBUSY; +#endif /* CONFIG_GENERIC_IRQ_IPI */ err = irq_domain_set_hwirq_and_chip(d, virq, hwirq, &gic_level_irq_controller, @@ -567,6 +571,8 @@ static const struct irq_domain_ops gic_irq_domain_ops = { .map = gic_irq_domain_map, }; +#ifdef CONFIG_GENERIC_IRQ_IPI + static int gic_ipi_domain_xlate(struct irq_domain *d, struct device_node *ctrlr, const u32 *intspec, unsigned int intsize, irq_hw_number_t *out_hwirq, @@ -670,6 +676,48 @@ static const struct irq_domain_ops gic_ipi_domain_ops = { .match = gic_ipi_domain_match, }; +static int gic_register_ipi_domain(struct device_node *node) +{ + struct irq_domain *gic_ipi_domain; + unsigned int v[2], num_ipis; + + gic_ipi_domain = irq_domain_add_hierarchy(gic_irq_domain, + IRQ_DOMAIN_FLAG_IPI_PER_CPU, + GIC_NUM_LOCAL_INTRS + gic_shared_intrs, + node, &gic_ipi_domain_ops, NULL); + if (!gic_ipi_domain) { + pr_err("Failed to add IPI domain"); + return -ENXIO; + } + + irq_domain_update_bus_token(gic_ipi_domain, DOMAIN_BUS_IPI); + + if (node && + !of_property_read_u32_array(node, "mti,reserved-ipi-vectors", v, 2)) { + bitmap_set(ipi_resrv, v[0], v[1]); + } else { + /* + * Reserve 2 interrupts per possible CPU/VP for use as IPIs, + * meeting the requirements of arch/mips SMP. 
+ */ + num_ipis = 2 * num_possible_cpus(); + bitmap_set(ipi_resrv, gic_shared_intrs - num_ipis, num_ipis); + } + + bitmap_copy(ipi_available, ipi_resrv, GIC_MAX_INTRS); + + return 0; +} + +#else /* !CONFIG_GENERIC_IRQ_IPI */ + +static inline int gic_register_ipi_domain(struct device_node *node) +{ + return 0; +} + +#endif /* !CONFIG_GENERIC_IRQ_IPI */ + static int gic_cpu_startup(unsigned int cpu) { /* Enable or disable EIC */ @@ -688,11 +736,12 @@ static int gic_cpu_startup(unsigned int cpu) static int __init gic_of_init(struct device_node *node, struct device_node *parent) { - unsigned int cpu_vec, i, gicconfig, v[2], num_ipis; + unsigned int cpu_vec, i, gicconfig; unsigned long reserved; phys_addr_t gic_base; struct resource res; size_t gic_len; + int ret; /* Find the first available CPU vector. */ i = 0; @@ -780,30 +829,9 @@ static int __init gic_of_init(struct device_node *node, return -ENXIO; } - gic_ipi_domain = irq_domain_add_hierarchy(gic_irq_domain, - IRQ_DOMAIN_FLAG_IPI_PER_CPU, - GIC_NUM_LOCAL_INTRS + gic_shared_intrs, - node, &gic_ipi_domain_ops, NULL); - if (!gic_ipi_domain) { - pr_err("Failed to add IPI domain"); - return -ENXIO; - } - - irq_domain_update_bus_token(gic_ipi_domain, DOMAIN_BUS_IPI); - - if (node && - !of_property_read_u32_array(node, "mti,reserved-ipi-vectors", v, 2)) { - bitmap_set(ipi_resrv, v[0], v[1]); - } else { - /* - * Reserve 2 interrupts per possible CPU/VP for use as IPIs, - * meeting the requirements of arch/mips SMP. - */ - num_ipis = 2 * num_possible_cpus(); - bitmap_set(ipi_resrv, gic_shared_intrs - num_ipis, num_ipis); - } - - bitmap_copy(ipi_available, ipi_resrv, GIC_MAX_INTRS); + ret = gic_register_ipi_domain(node); + if (ret) + return ret; board_bind_eic_interrupt = &gic_bind_eic_interrupt; From patchwork Fri Jul 1 20:00:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Samuel Holland X-Patchwork-Id: 12903727 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9E419CCA47F for ; Fri, 1 Jul 2022 20:10:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232024AbiGAUKy (ORCPT ); Fri, 1 Jul 2022 16:10:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45606 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231676AbiGAUKu (ORCPT ); Fri, 1 Jul 2022 16:10:50 -0400 Received: from new2-smtp.messagingengine.com (new2-smtp.messagingengine.com [66.111.4.224]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E547724A; Fri, 1 Jul 2022 13:10:48 -0700 (PDT) Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailnew.nyi.internal (Postfix) with ESMTP id D5ADC580285; Fri, 1 Jul 2022 16:01:29 -0400 (EDT) Received: from mailfrontend2 ([10.202.2.163]) by compute3.internal (MEProxy); Fri, 01 Jul 2022 16:01:29 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sholland.org; h= cc:cc:content-transfer-encoding:date:date:from:from:in-reply-to :in-reply-to:message-id:mime-version:references:reply-to:sender :subject:subject:to:to; s=fm3; t=1656705689; x=1656712889; bh=f2 gTOnVVlawVp6N5bLg8smmhsDaH+rFeOulG3ok0Jrc=; b=S00usAZDc/YM64VNCn FjrEnQNeYOIV4LNXFxrNe+DvmbJ1spyaNknAlkMKCOYdB8xZAB6BY9sx7dI8cRwD aNrKuvCkIxj/PKyZRFpy+snzjxdT/wGYVUKvGizh4ybfVco0jpNkt9KuqI3lODmv 
From: Samuel Holland
To: Marc Zyngier, Thomas Gleixner
Subject: [PATCH v3 2/8] genirq: GENERIC_IRQ_IPI depends on SMP
Date: Fri, 1 Jul 2022 15:00:50 -0500
Message-Id: <20220701200056.46555-3-samuel@sholland.org>
In-Reply-To: <20220701200056.46555-1-samuel@sholland.org>

The generic IPI code depends on the IRQ affinity mask being allocated
and initialized. This will not be the case if SMP is disabled.

Fix up the remaining driver that selected GENERIC_IRQ_IPI in a non-SMP
config.

Reported-by: kernel test robot
Signed-off-by: Samuel Holland
---

(no changes since v2)

Changes in v2:
 - New patch to prevent GENERIC_IRQ_IPI from being selected on !SMP

 drivers/irqchip/Kconfig | 2 +-
 kernel/irq/Kconfig      | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index d26a4ff7c99f..5dd98a81efc8 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -177,7 +177,7 @@ config MADERA_IRQ
 config IRQ_MIPS_CPU
 	bool
 	select GENERIC_IRQ_CHIP
-	select GENERIC_IRQ_IPI if SYS_SUPPORTS_MULTITHREADING
+	select GENERIC_IRQ_IPI if SMP && SYS_SUPPORTS_MULTITHREADING
 	select IRQ_DOMAIN
 	select GENERIC_IRQ_EFFECTIVE_AFF_MASK

diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 10929eda9825..fc760d064a65 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -82,6 +82,7 @@ config IRQ_FASTEOI_HIERARCHY_HANDLERS
 # Generic IRQ IPI support
 config GENERIC_IRQ_IPI
 	bool
+	depends on SMP
 	select IRQ_DOMAIN_HIERARCHY

 # Generic MSI interrupt support
From patchwork Fri Jul 1 20:00:51 2022
From: Samuel Holland
To: Marc Zyngier, Thomas Gleixner
Subject: [PATCH v3 3/8] genirq: GENERIC_IRQ_EFFECTIVE_AFF_MASK depends on SMP
Date: Fri, 1 Jul 2022 15:00:51 -0500
Message-Id: <20220701200056.46555-4-samuel@sholland.org>
In-Reply-To: <20220701200056.46555-1-samuel@sholland.org>

An IRQ's effective affinity can only be different from its configured
affinity if there are multiple CPUs. Make it clear that this option is
only meaningful when SMP is enabled. Most of the relevant code in
irqdesc.c is already hidden behind CONFIG_SMP anyway.
Signed-off-by: Samuel Holland --- (no changes since v1) arch/arm/mach-hisi/Kconfig | 2 +- drivers/irqchip/Kconfig | 14 +++++++------- kernel/irq/Kconfig | 1 + 3 files changed, 9 insertions(+), 8 deletions(-) diff --git a/arch/arm/mach-hisi/Kconfig b/arch/arm/mach-hisi/Kconfig index 75cccbd3f05f..7b3440687176 100644 --- a/arch/arm/mach-hisi/Kconfig +++ b/arch/arm/mach-hisi/Kconfig @@ -40,7 +40,7 @@ config ARCH_HIP04 select HAVE_ARM_ARCH_TIMER select MCPM if SMP select MCPM_QUAD_CLUSTER if SMP - select GENERIC_IRQ_EFFECTIVE_AFF_MASK + select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP help Support for Hisilicon HiP04 SoC family diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig index 5dd98a81efc8..462adac905a6 100644 --- a/drivers/irqchip/Kconfig +++ b/drivers/irqchip/Kconfig @@ -8,7 +8,7 @@ config IRQCHIP config ARM_GIC bool select IRQ_DOMAIN_HIERARCHY - select GENERIC_IRQ_EFFECTIVE_AFF_MASK + select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP config ARM_GIC_PM bool @@ -34,7 +34,7 @@ config ARM_GIC_V3 bool select IRQ_DOMAIN_HIERARCHY select PARTITION_PERCPU - select GENERIC_IRQ_EFFECTIVE_AFF_MASK + select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP config ARM_GIC_V3_ITS bool @@ -76,7 +76,7 @@ config ARMADA_370_XP_IRQ bool select GENERIC_IRQ_CHIP select PCI_MSI if PCI - select GENERIC_IRQ_EFFECTIVE_AFF_MASK + select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP config ALPINE_MSI bool @@ -112,7 +112,7 @@ config BCM6345_L1_IRQ bool select GENERIC_IRQ_CHIP select IRQ_DOMAIN - select GENERIC_IRQ_EFFECTIVE_AFF_MASK + select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP config BCM7038_L1_IRQ tristate "Broadcom STB 7038-style L1/L2 interrupt controller driver" @@ -120,7 +120,7 @@ config BCM7038_L1_IRQ default ARCH_BRCMSTB || BMIPS_GENERIC select GENERIC_IRQ_CHIP select IRQ_DOMAIN - select GENERIC_IRQ_EFFECTIVE_AFF_MASK + select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP config BCM7120_L2_IRQ tristate "Broadcom STB 7120-style L2 interrupt controller driver" @@ -179,7 +179,7 @@ config IRQ_MIPS_CPU select GENERIC_IRQ_CHIP select GENERIC_IRQ_IPI if SMP && SYS_SUPPORTS_MULTITHREADING select IRQ_DOMAIN - select GENERIC_IRQ_EFFECTIVE_AFF_MASK + select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP config CLPS711X_IRQCHIP bool @@ -294,7 +294,7 @@ config VERSATILE_FPGA_IRQ_NR config XTENSA_MX bool select IRQ_DOMAIN - select GENERIC_IRQ_EFFECTIVE_AFF_MASK + select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP config XILINX_INTC bool "Xilinx Interrupt Controller IP" diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig index fc760d064a65..db3d174c53d4 100644 --- a/kernel/irq/Kconfig +++ b/kernel/irq/Kconfig @@ -24,6 +24,7 @@ config GENERIC_IRQ_SHOW_LEVEL # Supports effective affinity mask config GENERIC_IRQ_EFFECTIVE_AFF_MASK + depends on SMP bool # Support for delayed migration from interrupt context From patchwork Fri Jul 1 20:00:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Samuel Holland X-Patchwork-Id: 12903730 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9943CCA479 for ; Fri, 1 Jul 2022 20:11:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232306AbiGAULW (ORCPT ); Fri, 1 Jul 2022 16:11:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45618 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
From: Samuel Holland
To: Marc Zyngier, Thomas Gleixner
Subject: [PATCH v3 4/8] genirq: Drop redundant irq_init_effective_affinity
Date: Fri, 1 Jul 2022 15:00:52 -0500
Message-Id: <20220701200056.46555-5-samuel@sholland.org>
In-Reply-To: <20220701200056.46555-1-samuel@sholland.org>

irq_init_effective_affinity does exactly the same thing as
irq_data_update_effective_affinity.

Signed-off-by: Samuel Holland
---

Changes in v3:
 - New patch to drop irq_init_effective_affinity

 kernel/irq/manage.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 8c396319d5ac..40fe7806cc8c 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -205,16 +205,8 @@ static void irq_validate_effective_affinity(struct irq_data *data)
 	pr_warn_once("irq_chip %s did not update eff. affinity mask of irq %u\n",
 		     chip->name, data->irq);
 }
-
-static inline void irq_init_effective_affinity(struct irq_data *data,
-					       const struct cpumask *mask)
-{
-	cpumask_copy(irq_data_get_effective_affinity_mask(data), mask);
-}
 #else
 static inline void irq_validate_effective_affinity(struct irq_data *data) { }
-static inline void irq_init_effective_affinity(struct irq_data *data,
-					       const struct cpumask *mask) { }
 #endif
 
 int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
@@ -347,7 +339,7 @@ static bool irq_set_affinity_deactivated(struct irq_data *data,
 		return false;
 
 	cpumask_copy(desc->irq_common_data.affinity, mask);
-	irq_init_effective_affinity(data, mask);
+	irq_data_update_effective_affinity(data, mask);
 	irqd_set(data, IRQD_AFFINITY_SET);
 	return true;
 }
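For context on why the helper is redundant, here is a sketch reconstructed
from the surrounding patches (not verbatim kernel source): with
CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK enabled, both helpers boil down to the
same cpumask_copy() into irq_data's effective-affinity mask, so callers can
use the generic one directly.

```c
/* Sketch of the two equivalent helpers. irq_data_get_effective_affinity_mask()
 * returns d->common->effective_affinity, so both copies target the same mask.
 */
static inline void irq_data_update_effective_affinity(struct irq_data *d,
						      const struct cpumask *m)
{
	cpumask_copy(d->common->effective_affinity, m);
}

/* Removed by this patch: an exact duplicate of the helper above. */
static inline void irq_init_effective_affinity(struct irq_data *data,
					       const struct cpumask *mask)
{
	cpumask_copy(irq_data_get_effective_affinity_mask(data), mask);
}
```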
From patchwork Fri Jul 1 20:00:53 2022
From: Samuel Holland
To: Marc Zyngier, Thomas Gleixner
Subject: [PATCH v3 5/8] genirq: Refactor accessors to use irq_data_get_affinity_mask
Date: Fri, 1 Jul 2022 15:00:53 -0500
Message-Id: <20220701200056.46555-6-samuel@sholland.org>
In-Reply-To: <20220701200056.46555-1-samuel@sholland.org>

A couple of functions directly reference the affinity mask. Route them
through irq_data_get_affinity_mask so they will pick up any refactoring
done there.

Signed-off-by: Samuel Holland
---

(no changes since v1)

 include/linux/irq.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/irq.h b/include/linux/irq.h
index 505308253d23..69ee4e2f36ce 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -879,16 +879,16 @@ static inline int irq_data_get_node(struct irq_data *d)
 	return irq_common_data_get_node(d->common);
 }
 
-static inline struct cpumask *irq_get_affinity_mask(int irq)
+static inline struct cpumask *irq_data_get_affinity_mask(struct irq_data *d)
 {
-	struct irq_data *d = irq_get_irq_data(irq);
-
-	return d ? d->common->affinity : NULL;
+	return d->common->affinity;
 }
 
-static inline struct cpumask *irq_data_get_affinity_mask(struct irq_data *d)
+static inline struct cpumask *irq_get_affinity_mask(int irq)
 {
-	return d->common->affinity;
+	struct irq_data *d = irq_get_irq_data(irq);
+
+	return d ? irq_data_get_affinity_mask(d) : NULL;
 }
 
 #ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
@@ -910,7 +910,7 @@ static inline void irq_data_update_effective_affinity(struct irq_data *d,
 static inline
 struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
 {
-	return d->common->affinity;
+	return irq_data_get_affinity_mask(d);
 }
 #endif
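A brief illustration of what this layering buys, using a hypothetical
read-only consumer of the API (not code from this series): callers that only
read the mask keep working regardless of whether the inner accessor later
returns a mutable or a const pointer, so a follow-up such as patch 7 only has
to touch the accessors in include/linux/irq.h.

```c
#include <linux/cpumask.h>
#include <linux/irq.h>

/* Hypothetical consumer: how many CPUs may this IRQ currently target? */
static unsigned int example_irq_target_count(unsigned int irq)
{
	const struct cpumask *mask = irq_get_affinity_mask(irq);

	return mask ? cpumask_weight(mask) : 0;
}
```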
From patchwork Fri Jul 1 20:00:54 2022
From: Samuel Holland
To: Marc Zyngier, Thomas Gleixner
Subject: [PATCH v3 6/8] genirq: Add and use an irq_data_update_affinity helper
Date: Fri, 1 Jul 2022 15:00:54 -0500
Message-Id: <20220701200056.46555-7-samuel@sholland.org>
In-Reply-To: <20220701200056.46555-1-samuel@sholland.org>

Some architectures and irqchip drivers modify the cpumask returned by
irq_data_get_affinity_mask, usually by copying into it. This is
problematic for uniprocessor configurations, where the affinity mask
should be constant, as it is known at compile time.

Add and use a setter for the affinity mask, following the pattern of
irq_data_update_effective_affinity. This allows the getter function to
return a const cpumask pointer.

Signed-off-by: Samuel Holland
Reviewed-by: Oleksandr Tyshchenko # Xen bits
---

Changes in v3:
 - New patch to introduce irq_data_update_affinity

 arch/alpha/kernel/irq.c          | 2 +-
 arch/ia64/kernel/iosapic.c       | 2 +-
 arch/ia64/kernel/irq.c           | 4 ++--
 arch/ia64/kernel/msi_ia64.c      | 4 ++--
 arch/parisc/kernel/irq.c         | 2 +-
 drivers/irqchip/irq-bcm6345-l1.c | 4 ++--
 drivers/parisc/iosapic.c         | 2 +-
 drivers/sh/intc/chip.c           | 2 +-
 drivers/xen/events/events_base.c | 7 ++++---
 include/linux/irq.h              | 6 ++++++
 10 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/arch/alpha/kernel/irq.c b/arch/alpha/kernel/irq.c
index f6d2946edbd2..15f2effd6baf 100644
--- a/arch/alpha/kernel/irq.c
+++ b/arch/alpha/kernel/irq.c
@@ -60,7 +60,7 @@ int irq_select_affinity(unsigned int irq)
 	cpu = (cpu < (NR_CPUS-1) ?
cpu + 1 : 0); last_cpu = cpu; - cpumask_copy(irq_data_get_affinity_mask(data), cpumask_of(cpu)); + irq_data_update_affinity(data, cpumask_of(cpu)); chip->irq_set_affinity(data, cpumask_of(cpu), false); return 0; } diff --git a/arch/ia64/kernel/iosapic.c b/arch/ia64/kernel/iosapic.c index 35adcf89035a..99300850abc1 100644 --- a/arch/ia64/kernel/iosapic.c +++ b/arch/ia64/kernel/iosapic.c @@ -834,7 +834,7 @@ iosapic_unregister_intr (unsigned int gsi) if (iosapic_intr_info[irq].count == 0) { #ifdef CONFIG_SMP /* Clear affinity */ - cpumask_setall(irq_get_affinity_mask(irq)); + irq_data_update_affinity(irq_get_irq_data(irq), cpu_all_mask); #endif /* Clear the interrupt information */ iosapic_intr_info[irq].dest = 0; diff --git a/arch/ia64/kernel/irq.c b/arch/ia64/kernel/irq.c index ecef17c7c35b..275b9ea58c64 100644 --- a/arch/ia64/kernel/irq.c +++ b/arch/ia64/kernel/irq.c @@ -57,8 +57,8 @@ static char irq_redir [NR_IRQS]; // = { [0 ... NR_IRQS-1] = 1 }; void set_irq_affinity_info (unsigned int irq, int hwid, int redir) { if (irq < NR_IRQS) { - cpumask_copy(irq_get_affinity_mask(irq), - cpumask_of(cpu_logical_id(hwid))); + irq_data_update_affinity(irq_get_irq_data(irq), + cpumask_of(cpu_logical_id(hwid))); irq_redir[irq] = (char) (redir & 0xff); } } diff --git a/arch/ia64/kernel/msi_ia64.c b/arch/ia64/kernel/msi_ia64.c index df5c28f252e3..025e5133c860 100644 --- a/arch/ia64/kernel/msi_ia64.c +++ b/arch/ia64/kernel/msi_ia64.c @@ -37,7 +37,7 @@ static int ia64_set_msi_irq_affinity(struct irq_data *idata, msg.data = data; pci_write_msi_msg(irq, &msg); - cpumask_copy(irq_data_get_affinity_mask(idata), cpumask_of(cpu)); + irq_data_update_affinity(idata, cpumask_of(cpu)); return 0; } @@ -132,7 +132,7 @@ static int dmar_msi_set_affinity(struct irq_data *data, msg.address_lo |= MSI_ADDR_DEST_ID_CPU(cpu_physical_id(cpu)); dmar_msi_write(irq, &msg); - cpumask_copy(irq_data_get_affinity_mask(data), mask); + irq_data_update_affinity(data, mask); return 0; } diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c index 0fe2d79fb123..5ebb1771b4ab 100644 --- a/arch/parisc/kernel/irq.c +++ b/arch/parisc/kernel/irq.c @@ -315,7 +315,7 @@ unsigned long txn_affinity_addr(unsigned int irq, int cpu) { #ifdef CONFIG_SMP struct irq_data *d = irq_get_irq_data(irq); - cpumask_copy(irq_data_get_affinity_mask(d), cpumask_of(cpu)); + irq_data_update_affinity(d, cpumask_of(cpu)); #endif return per_cpu(cpu_data, cpu).txn_addr; diff --git a/drivers/irqchip/irq-bcm6345-l1.c b/drivers/irqchip/irq-bcm6345-l1.c index 142a7431745f..6899e37810a8 100644 --- a/drivers/irqchip/irq-bcm6345-l1.c +++ b/drivers/irqchip/irq-bcm6345-l1.c @@ -216,11 +216,11 @@ static int bcm6345_l1_set_affinity(struct irq_data *d, enabled = intc->cpus[old_cpu]->enable_cache[word] & mask; if (enabled) __bcm6345_l1_mask(d); - cpumask_copy(irq_data_get_affinity_mask(d), dest); + irq_data_update_affinity(d, dest); if (enabled) __bcm6345_l1_unmask(d); } else { - cpumask_copy(irq_data_get_affinity_mask(d), dest); + irq_data_update_affinity(d, dest); } raw_spin_unlock_irqrestore(&intc->lock, flags); diff --git a/drivers/parisc/iosapic.c b/drivers/parisc/iosapic.c index 8a3b0c3a1e92..3a8c98615634 100644 --- a/drivers/parisc/iosapic.c +++ b/drivers/parisc/iosapic.c @@ -677,7 +677,7 @@ static int iosapic_set_affinity_irq(struct irq_data *d, if (dest_cpu < 0) return -1; - cpumask_copy(irq_data_get_affinity_mask(d), cpumask_of(dest_cpu)); + irq_data_update_affinity(d, cpumask_of(dest_cpu)); vi->txn_addr = txn_affinity_addr(d->irq, dest_cpu); 
 	spin_lock_irqsave(&iosapic_lock, flags);

diff --git a/drivers/sh/intc/chip.c b/drivers/sh/intc/chip.c
index 358df7510186..828d81e02b37 100644
--- a/drivers/sh/intc/chip.c
+++ b/drivers/sh/intc/chip.c
@@ -72,7 +72,7 @@ static int intc_set_affinity(struct irq_data *data,
 	if (!cpumask_intersects(cpumask, cpu_online_mask))
 		return -1;
 
-	cpumask_copy(irq_data_get_affinity_mask(data), cpumask);
+	irq_data_update_affinity(data, cpumask);
 
 	return IRQ_SET_MASK_OK_NOCOPY;
 }
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 46d9295d9a6e..5e8321f43cbd 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -528,9 +528,10 @@ static void bind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
 	BUG_ON(irq == -1);
 
 	if (IS_ENABLED(CONFIG_SMP) && force_affinity) {
-		cpumask_copy(irq_get_affinity_mask(irq), cpumask_of(cpu));
-		cpumask_copy(irq_get_effective_affinity_mask(irq),
-			     cpumask_of(cpu));
+		struct irq_data *data = irq_get_irq_data(irq);
+
+		irq_data_update_affinity(data, cpumask_of(cpu));
+		irq_data_update_effective_affinity(data, cpumask_of(cpu));
 	}
 
 	xen_evtchn_port_bind_to_cpu(evtchn, cpu, info->cpu);
diff --git a/include/linux/irq.h b/include/linux/irq.h
index 69ee4e2f36ce..adcfebceb777 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -884,6 +884,12 @@ static inline struct cpumask *irq_data_get_affinity_mask(struct irq_data *d)
 	return d->common->affinity;
 }
 
+static inline void irq_data_update_affinity(struct irq_data *d,
+					    const struct cpumask *m)
+{
+	cpumask_copy(d->common->affinity, m);
+}
+
 static inline struct cpumask *irq_get_affinity_mask(int irq)
 {
 	struct irq_data *d = irq_get_irq_data(irq);
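To summarize the conversion that patches 6 and 7 ask of drivers, here is a
hedged before/after sketch of a hypothetical .irq_set_affinity callback
(illustrative only, not taken from this series): writes go through the new
irq_data_update_affinity() setter, while read-only uses of the getter remain
valid once it returns a const pointer.

```c
#include <linux/cpumask.h>
#include <linux/irq.h>

/* Hypothetical irqchip .irq_set_affinity callback; hardware programming is
 * omitted, only the affinity-mask bookkeeping is shown.
 */
static int example_irq_set_affinity(struct irq_data *d,
				    const struct cpumask *dest, bool force)
{
	if (!cpumask_intersects(dest, cpu_online_mask))
		return -EINVAL;

	/*
	 * Previously a driver might have written straight into the mask:
	 *	cpumask_copy(irq_data_get_affinity_mask(d), dest);
	 * That stops compiling once the getter returns a const cpumask,
	 * so the dedicated setter is used instead.
	 */
	irq_data_update_affinity(d, dest);

	return IRQ_SET_MASK_OK_NOCOPY;
}
```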
From patchwork Fri Jul 1 20:00:55 2022
From: Samuel Holland
To: Marc Zyngier, Thomas Gleixner
Subject: [PATCH v3 7/8] genirq: Return a const cpumask from irq_data_get_affinity_mask
Date: Fri, 1 Jul 2022 15:00:55 -0500
Message-Id: <20220701200056.46555-8-samuel@sholland.org>
In-Reply-To: <20220701200056.46555-1-samuel@sholland.org>

Now that the irq_data_update_affinity helper exists, enforce its use by
returning a const cpumask from irq_data_get_affinity_mask.
Since the previous commit already updated places that needed to call irq_data_update_affinity, this commit updates the remaining code that either did not modify the cpumask or immediately passed the modified mask to irq_set_affinity. Signed-off-by: Samuel Holland Reviewed-by: Michael Kelley --- Changes in v3: - New patch to make the returned cpumasks const arch/mips/cavium-octeon/octeon-irq.c | 4 ++-- arch/sh/kernel/irq.c | 7 ++++--- arch/x86/hyperv/irqdomain.c | 2 +- arch/xtensa/kernel/irq.c | 7 ++++--- drivers/iommu/hyperv-iommu.c | 2 +- drivers/pci/controller/pci-hyperv.c | 10 +++++----- include/linux/irq.h | 12 +++++++----- kernel/irq/chip.c | 8 +++++--- kernel/irq/debugfs.c | 2 +- kernel/irq/ipi.c | 16 +++++++++------- 10 files changed, 39 insertions(+), 31 deletions(-) diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c index 6cdcbf4de763..9cb9ed44bcaf 100644 --- a/arch/mips/cavium-octeon/octeon-irq.c +++ b/arch/mips/cavium-octeon/octeon-irq.c @@ -263,7 +263,7 @@ static int next_cpu_for_irq(struct irq_data *data) #ifdef CONFIG_SMP int cpu; - struct cpumask *mask = irq_data_get_affinity_mask(data); + const struct cpumask *mask = irq_data_get_affinity_mask(data); int weight = cpumask_weight(mask); struct octeon_ciu_chip_data *cd = irq_data_get_irq_chip_data(data); @@ -758,7 +758,7 @@ static void octeon_irq_cpu_offline_ciu(struct irq_data *data) { int cpu = smp_processor_id(); cpumask_t new_affinity; - struct cpumask *mask = irq_data_get_affinity_mask(data); + const struct cpumask *mask = irq_data_get_affinity_mask(data); if (!cpumask_test_cpu(cpu, mask)) return; diff --git a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c index ef0f0827cf57..56269c2c3414 100644 --- a/arch/sh/kernel/irq.c +++ b/arch/sh/kernel/irq.c @@ -230,16 +230,17 @@ void migrate_irqs(void) struct irq_data *data = irq_get_irq_data(irq); if (irq_data_get_node(data) == cpu) { - struct cpumask *mask = irq_data_get_affinity_mask(data); + const struct cpumask *mask = irq_data_get_affinity_mask(data); unsigned int newcpu = cpumask_any_and(mask, cpu_online_mask); if (newcpu >= nr_cpu_ids) { pr_info_ratelimited("IRQ%u no longer affine to CPU%u\n", irq, cpu); - cpumask_setall(mask); + irq_set_affinity(irq, cpu_all_mask); + } else { + irq_set_affinity(irq, mask); } - irq_set_affinity(irq, mask); } } } diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c index 7e0f6bedc248..42c70d28ef27 100644 --- a/arch/x86/hyperv/irqdomain.c +++ b/arch/x86/hyperv/irqdomain.c @@ -192,7 +192,7 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) struct pci_dev *dev; struct hv_interrupt_entry out_entry, *stored_entry; struct irq_cfg *cfg = irqd_cfg(data); - cpumask_t *affinity; + const cpumask_t *affinity; int cpu; u64 status; diff --git a/arch/xtensa/kernel/irq.c b/arch/xtensa/kernel/irq.c index 529fe9245821..42f106004400 100644 --- a/arch/xtensa/kernel/irq.c +++ b/arch/xtensa/kernel/irq.c @@ -169,7 +169,7 @@ void migrate_irqs(void) for_each_active_irq(i) { struct irq_data *data = irq_get_irq_data(i); - struct cpumask *mask; + const struct cpumask *mask; unsigned int newcpu; if (irqd_is_per_cpu(data)) @@ -185,9 +185,10 @@ void migrate_irqs(void) pr_info_ratelimited("IRQ%u no longer affine to CPU%u\n", i, cpu); - cpumask_setall(mask); + irq_set_affinity(i, cpu_all_mask); + } else { + irq_set_affinity(i, mask); } - irq_set_affinity(i, mask); } } #endif /* CONFIG_HOTPLUG_CPU */ diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-iommu.c index 
index e285a220c913..51bd66a45a11 100644
--- a/drivers/iommu/hyperv-iommu.c
+++ b/drivers/iommu/hyperv-iommu.c
@@ -194,7 +194,7 @@ hyperv_root_ir_compose_msi_msg(struct irq_data *irq_data, struct msi_msg *msg)
 	u32 vector;
 	struct irq_cfg *cfg;
 	int ioapic_id;
-	struct cpumask *affinity;
+	const struct cpumask *affinity;
 	int cpu;
 	struct hv_interrupt_entry entry;
 	struct hyperv_root_ir_data *data = irq_data->chip_data;
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index db814f7b93ba..aebada45569b 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -642,7 +642,7 @@ static void hv_arch_irq_unmask(struct irq_data *data)
 	struct hv_retarget_device_interrupt *params;
 	struct tran_int_desc *int_desc;
 	struct hv_pcibus_device *hbus;
-	struct cpumask *dest;
+	const struct cpumask *dest;
 	cpumask_var_t tmp;
 	struct pci_bus *pbus;
 	struct pci_dev *pdev;
@@ -1613,7 +1613,7 @@ static void hv_pci_compose_compl(void *context, struct pci_response *resp,
 }
 
 static u32 hv_compose_msi_req_v1(
-	struct pci_create_interrupt *int_pkt, struct cpumask *affinity,
+	struct pci_create_interrupt *int_pkt, const struct cpumask *affinity,
 	u32 slot, u8 vector, u8 vector_count)
 {
 	int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE;
@@ -1641,7 +1641,7 @@ static int hv_compose_msi_req_get_cpu(struct cpumask *affinity)
 }
 
 static u32 hv_compose_msi_req_v2(
-	struct pci_create_interrupt2 *int_pkt, struct cpumask *affinity,
+	struct pci_create_interrupt2 *int_pkt, const struct cpumask *affinity,
 	u32 slot, u8 vector, u8 vector_count)
 {
 	int cpu;
@@ -1660,7 +1660,7 @@ static u32 hv_compose_msi_req_v2(
 }
 
 static u32 hv_compose_msi_req_v3(
-	struct pci_create_interrupt3 *int_pkt, struct cpumask *affinity,
+	struct pci_create_interrupt3 *int_pkt, const struct cpumask *affinity,
 	u32 slot, u32 vector, u8 vector_count)
 {
 	int cpu;
@@ -1697,7 +1697,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 	struct hv_pci_dev *hpdev;
 	struct pci_bus *pbus;
 	struct pci_dev *pdev;
-	struct cpumask *dest;
+	const struct cpumask *dest;
 	struct compose_comp_ctxt comp;
 	struct tran_int_desc *int_desc;
 	struct msi_desc *msi_desc;
diff --git a/include/linux/irq.h b/include/linux/irq.h
index adcfebceb777..02073f7a156e 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -879,7 +879,8 @@ static inline int irq_data_get_node(struct irq_data *d)
 	return irq_common_data_get_node(d->common);
 }
 
-static inline struct cpumask *irq_data_get_affinity_mask(struct irq_data *d)
+static inline
+const struct cpumask *irq_data_get_affinity_mask(struct irq_data *d)
 {
 	return d->common->affinity;
 }
@@ -890,7 +891,7 @@ static inline void irq_data_update_affinity(struct irq_data *d,
 	cpumask_copy(d->common->affinity, m);
 }
 
-static inline struct cpumask *irq_get_affinity_mask(int irq)
+static inline const struct cpumask *irq_get_affinity_mask(int irq)
 {
 	struct irq_data *d = irq_get_irq_data(irq);
 
@@ -899,7 +900,7 @@ static inline struct cpumask *irq_get_affinity_mask(int irq)
 #ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
 static inline
-struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
+const struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
 {
 	return d->common->effective_affinity;
 }
@@ -914,13 +915,14 @@ static inline void irq_data_update_effective_affinity(struct irq_data *d,
 {
 }
 static inline
-struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
+const struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
 {
 	return irq_data_get_affinity_mask(d);
 }
 #endif
 
-static inline struct cpumask *irq_get_effective_affinity_mask(unsigned int irq)
+static inline
+const struct cpumask *irq_get_effective_affinity_mask(unsigned int irq)
 {
 	struct irq_data *d = irq_get_irq_data(irq);
 
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 886789dcee43..9c7ad2266317 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -188,7 +188,8 @@ enum {
 #ifdef CONFIG_SMP
 static int
-__irq_startup_managed(struct irq_desc *desc, struct cpumask *aff, bool force)
+__irq_startup_managed(struct irq_desc *desc, const struct cpumask *aff,
+		      bool force)
 {
 	struct irq_data *d = irq_desc_get_irq_data(desc);
@@ -224,7 +225,8 @@ __irq_startup_managed(struct irq_desc *desc, struct cpumask *aff, bool force)
 }
 #else
 static __always_inline int
-__irq_startup_managed(struct irq_desc *desc, struct cpumask *aff, bool force)
+__irq_startup_managed(struct irq_desc *desc, const struct cpumask *aff,
+		      bool force)
 {
 	return IRQ_STARTUP_NORMAL;
 }
@@ -252,7 +254,7 @@ static int __irq_startup(struct irq_desc *desc)
 int irq_startup(struct irq_desc *desc, bool resend, bool force)
 {
 	struct irq_data *d = irq_desc_get_irq_data(desc);
-	struct cpumask *aff = irq_data_get_affinity_mask(d);
+	const struct cpumask *aff = irq_data_get_affinity_mask(d);
 	int ret = 0;
 
 	desc->depth = 0;
diff --git a/kernel/irq/debugfs.c b/kernel/irq/debugfs.c
index bc8e40cf2b65..bbcaac64038e 100644
--- a/kernel/irq/debugfs.c
+++ b/kernel/irq/debugfs.c
@@ -30,7 +30,7 @@ static void irq_debug_show_bits(struct seq_file *m, int ind, unsigned int state,
 static void irq_debug_show_masks(struct seq_file *m, struct irq_desc *desc)
 {
 	struct irq_data *data = irq_desc_get_irq_data(desc);
-	struct cpumask *msk;
+	const struct cpumask *msk;
 
 	msk = irq_data_get_affinity_mask(data);
 	seq_printf(m, "affinity: %*pbl\n", cpumask_pr_args(msk));
diff --git a/kernel/irq/ipi.c b/kernel/irq/ipi.c
index 08ce7da3b57c..bbd945bacef0 100644
--- a/kernel/irq/ipi.c
+++ b/kernel/irq/ipi.c
@@ -115,11 +115,11 @@ int irq_reserve_ipi(struct irq_domain *domain,
 int irq_destroy_ipi(unsigned int irq, const struct cpumask *dest)
 {
 	struct irq_data *data = irq_get_irq_data(irq);
-	struct cpumask *ipimask = data ? irq_data_get_affinity_mask(data) : NULL;
+	const struct cpumask *ipimask;
 	struct irq_domain *domain;
 	unsigned int nr_irqs;
 
-	if (!irq || !data || !ipimask)
+	if (!irq || !data)
 		return -EINVAL;
 
 	domain = data->domain;
@@ -131,7 +131,8 @@ int irq_destroy_ipi(unsigned int irq, const struct cpumask *dest)
 		return -EINVAL;
 	}
 
-	if (WARN_ON(!cpumask_subset(dest, ipimask)))
+	ipimask = irq_data_get_affinity_mask(data);
+	if (!ipimask || WARN_ON(!cpumask_subset(dest, ipimask)))
 		/*
 		 * Must be destroying a subset of CPUs to which this IPI
 		 * was set up to target
@@ -162,12 +163,13 @@ int irq_destroy_ipi(unsigned int irq, const struct cpumask *dest)
 irq_hw_number_t ipi_get_hwirq(unsigned int irq, unsigned int cpu)
 {
 	struct irq_data *data = irq_get_irq_data(irq);
-	struct cpumask *ipimask = data ? irq_data_get_affinity_mask(data) : NULL;
+	const struct cpumask *ipimask;
 
-	if (!data || !ipimask || cpu >= nr_cpu_ids)
+	if (!data || cpu >= nr_cpu_ids)
 		return INVALID_HWIRQ;
 
-	if (!cpumask_test_cpu(cpu, ipimask))
+	ipimask = irq_data_get_affinity_mask(data);
+	if (!ipimask || !cpumask_test_cpu(cpu, ipimask))
 		return INVALID_HWIRQ;
 
 	/*
@@ -186,7 +188,7 @@ EXPORT_SYMBOL_GPL(ipi_get_hwirq);
 static int ipi_send_verify(struct irq_chip *chip, struct irq_data *data,
 			   const struct cpumask *dest, unsigned int cpu)
 {
-	struct cpumask *ipimask = irq_data_get_affinity_mask(data);
+	const struct cpumask *ipimask = irq_data_get_affinity_mask(data);
 
 	if (!chip || !ipimask)
 		return -EINVAL;

From patchwork Fri Jul 1 20:00:56 2022
X-Patchwork-Submitter: Samuel Holland
X-Patchwork-Id: 12903725
From: Samuel Holland
To: Marc Zyngier, Thomas Gleixner
Subject: [PATCH v3 8/8] genirq: Provide an IRQ affinity mask in non-SMP configs
Date: Fri, 1 Jul 2022 15:00:56 -0500
Message-Id: <20220701200056.46555-9-samuel@sholland.org>
In-Reply-To: <20220701200056.46555-1-samuel@sholland.org>
References: <20220701200056.46555-1-samuel@sholland.org>

IRQ affinity masks are not allocated in uniprocessor configurations.
This requires special-case non-SMP code in drivers for irqchips which
have per-CPU enable or mask registers.

Since IRQ affinity is always the same in a uniprocessor configuration,
we can provide a correct affinity mask without allocating one per IRQ.
By returning a real cpumask from irq_data_get_affinity_mask even when
SMP is disabled, irqchip drivers which iterate over that mask will
automatically do the right thing.

Signed-off-by: Samuel Holland
---
Changes in v3:
 - Use cpumask_of(0) instead of cpu_possible_mask
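To illustrate the intended effect on drivers, the sketch below shows
an irqchip with per-CPU enable registers iterating the affinity mask
the same way on SMP and !SMP kernels; example_chip_unmask and
example_hw_enable_on_cpu are made-up names, not code from this series.

#include <linux/cpumask.h>
#include <linux/irq.h>

/* Hypothetical hardware accessor, assumed only for this sketch. */
static void example_hw_enable_on_cpu(irq_hw_number_t hwirq, unsigned int cpu);

static void example_chip_unmask(struct irq_data *d)
{
	const struct cpumask *mask = irq_data_get_affinity_mask(d);
	unsigned int cpu;

	/*
	 * On !SMP kernels this mask is cpumask_of(0), so the loop simply
	 * visits CPU 0; no #ifdef CONFIG_SMP special case is needed.
	 */
	for_each_cpu(cpu, mask)
		example_hw_enable_on_cpu(d->hwirq, cpu);
}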
 include/linux/irq.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/linux/irq.h b/include/linux/irq.h
index 02073f7a156e..996e22744edd 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -151,7 +151,9 @@ struct irq_common_data {
 #endif
 	void		*handler_data;
 	struct msi_desc	*msi_desc;
+#ifdef CONFIG_SMP
 	cpumask_var_t	affinity;
+#endif
 #ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
 	cpumask_var_t	effective_affinity;
 #endif
@@ -882,13 +884,19 @@ static inline int irq_data_get_node(struct irq_data *d)
 static inline
 const struct cpumask *irq_data_get_affinity_mask(struct irq_data *d)
 {
+#ifdef CONFIG_SMP
 	return d->common->affinity;
+#else
+	return cpumask_of(0);
+#endif
 }
 
 static inline void irq_data_update_affinity(struct irq_data *d,
 					    const struct cpumask *m)
 {
+#ifdef CONFIG_SMP
 	cpumask_copy(d->common->affinity, m);
+#endif
 }
 
 static inline const struct cpumask *irq_get_affinity_mask(int irq)