From patchwork Sun Dec 6 17:43:06 2020
X-Patchwork-Submitter: Igor Druzhinin
X-Patchwork-Id: 11954211
From: Igor Druzhinin
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 1/2] x86/IRQ: make max number of guests for a shared IRQ configurable
Date: Sun, 6 Dec 2020 17:43:06 +0000
Message-ID: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4

... and increase the default to 16.

The current limit of 7 is too restrictive for modern systems, where one
GSI could be shared by potentially many PCI INTx sources, each of which
corresponds to a device passed through to its own guest. Some systems do
not apply due diligence in swizzling INTx links when e.g. INTA is declared
as the interrupt pin for the majority of PCI devices behind a single
router, resulting in overuse of a GSI.

Introduce a new command line option to configure that limit and
dynamically allocate an array of the necessary size. Set the default to
16, which is higher than 7 and can later be raised further if necessary.

Signed-off-by: Igor Druzhinin
Reviewed-by: Jan Beulich
---
Changes in v2:
- introduced a command line option as suggested
- set initial default limit to 16

Changes in v3:
- changed option name to use '-' instead of '_'
- used uchar instead of uint to utilize integer_param overflow handling logic
- used xmalloc_flex_struct
- restructured printk as suggested
---
 docs/misc/xen-command-line.pandoc | 10 ++++++++++
 xen/arch/x86/irq.c                | 22 +++++++++++++++-------
 2 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index b4a0d60..53e676b 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1641,6 +1641,16 @@ This option is ignored in **pv-shim** mode.
 ### nr_irqs (x86)
 > `= <integer>`
 
+### irq-max-guests (x86)
+> `= <integer>`
+
+> Default: `16`
+
+Maximum number of guests any individual IRQ could be shared between,
+i.e. a limit on the number of guests it is possible to start each having
+assigned a device sharing a common interrupt line. Accepts values between
+1 and 255.
+
 ### numa (x86)
 > `= on | off | fake=<integer> | noacpi`
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8d1f9a9..4fd0578 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -42,6 +42,10 @@ integer_param("nr_irqs", nr_irqs);
 int __read_mostly opt_irq_vector_map = OPT_IRQ_VECTOR_MAP_DEFAULT;
 custom_param("irq_vector_map", parse_irq_vector_map_param);
 
+/* Max number of guests IRQ could be shared with */
+static unsigned char __read_mostly irq_max_guests;
+integer_param("irq-max-guests", irq_max_guests);
+
 vmask_t global_used_vector_map;
 
 struct irq_desc __read_mostly *irq_desc = NULL;
@@ -435,6 +439,9 @@ int __init init_irq_data(void)
     for ( ; irq < nr_irqs; irq++ )
         irq_to_desc(irq)->irq = irq;
 
+    if ( !irq_max_guests )
+        irq_max_guests = 16;
+
 #ifdef CONFIG_PV
     /* Never allocate the hypercall vector or Linux/BSD fast-trap vector. */
     set_bit(LEGACY_SYSCALL_VECTOR, used_vectors);
@@ -1028,7 +1035,6 @@ int __init setup_irq(unsigned int irq, unsigned int irqflags,
  * HANDLING OF GUEST-BOUND PHYSICAL IRQS
  */
 
-#define IRQ_MAX_GUESTS 7
 typedef struct {
     u8 nr_guests;
     u8 in_flight;
@@ -1039,7 +1045,7 @@ typedef struct {
 #define ACKTYPE_EOI    2     /* EOI on the CPU that was interrupted  */
     cpumask_var_t cpu_eoi_map; /* CPUs that need to EOI this interrupt */
     struct timer eoi_timer;
-    struct domain *guest[IRQ_MAX_GUESTS];
+    struct domain *guest[];
 } irq_guest_action_t;
 
 /*
@@ -1564,7 +1570,8 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
     if ( newaction == NULL )
     {
         spin_unlock_irq(&desc->lock);
-        if ( (newaction = xmalloc(irq_guest_action_t)) != NULL &&
+        if ( (newaction = xmalloc_flex_struct(irq_guest_action_t, guest,
+                                              irq_max_guests)) != NULL &&
              zalloc_cpumask_var(&newaction->cpu_eoi_map) )
             goto retry;
         xfree(newaction);
@@ -1633,11 +1640,12 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         goto retry;
     }
 
-    if ( action->nr_guests == IRQ_MAX_GUESTS )
+    if ( action->nr_guests == irq_max_guests )
     {
-        printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
-               "Already at max share.\n",
-               pirq->pirq, v->domain->domain_id);
+        printk(XENLOG_G_INFO
+               "Cannot bind IRQ%d to dom%pd: already at max share %u ",
+               pirq->pirq, v->domain, irq_max_guests);
+        printk("(increase with irq-max-guests= option)\n");
         rc = -EBUSY;
         goto unlock_out;
     }
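A brief aside on xmalloc_flex_struct() for readers following along: it
allocates a structure together with its run-time-sized flexible tail
array in a single allocation, which is what lets this patch drop the
compile-time IRQ_MAX_GUESTS bound. Below is a minimal standard-C sketch
of the underlying pattern; demo_action_t and demo_alloc_action() are
illustrative names only, not Xen code, and this is not the actual macro
definition.

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct domain;                          /* opaque for this sketch */

    /* Cut-down stand-in for irq_guest_action_t: the tail array is a
     * flexible array member instead of guest[IRQ_MAX_GUESTS]. */
    typedef struct {
        unsigned char nr_guests;
        unsigned char in_flight;
        struct domain *guest[];             /* sized at allocation time */
    } demo_action_t;

    /* What a flex-struct allocation amounts to: the struct header plus
     * nr tail slots, zero-initialized. */
    static demo_action_t *demo_alloc_action(unsigned int nr)
    {
        return calloc(1, offsetof(demo_action_t, guest) +
                         nr * sizeof(struct domain *));
    }

    int main(void)
    {
        unsigned int nr = 16;               /* the new default limit */
        demo_action_t *a = demo_alloc_action(nr);

        if ( a )
        {
            printf("room for %u guest slots\n", nr);
            free(a);
        }
        return 0;
    }

The design point is that sizing the tail at allocation time turns the
share limit into a boot-time tunable rather than a baked-in constant.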
From patchwork Sun Dec 6 17:43:07 2020
X-Patchwork-Submitter: Igor Druzhinin
X-Patchwork-Id: 11954209
From: Igor Druzhinin
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 2/2] x86/IRQ: allocate guest array of max size only for shareable IRQs
Date: Sun, 6 Dec 2020 17:43:07 +0000
Message-ID: <1607276587-19231-2-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>
References: <1607276587-19231-1-git-send-email-igor.druzhinin@citrix.com>

... and increase the default "irq-max-guests" to 32.

It is not necessary to have an array of size greater than 1 for
non-shareable IRQs, and doing so might impact scalability when high
"irq-max-guests" values are used: every IRQ in the system, including
MSIs, would be supplied with an array of that size. Since using a higher
"irq-max-guests" value is now less costly, bump the default to 32; that
should give more headroom for future systems.

Signed-off-by: Igor Druzhinin
Reviewed-by: Jan Beulich
---
New in v2. Based on Jan's suggestion.

Changes in v3:
- almost none since I prefer the clarity of the code as is
---
 docs/misc/xen-command-line.pandoc | 2 +-
 xen/arch/x86/irq.c                | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)
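The heart of this patch is a per-IRQ sizing rule: a non-shareable IRQ
only ever has one guest bound to it, so its guest[] tail needs exactly
one slot regardless of "irq-max-guests". A minimal sketch of that rule
follows; demo_guest_array_size() is a hypothetical helper name, since
the real diff below computes the value inline in pirq_guest_bind().

    #include <assert.h>

    /* Hypothetical helper mirroring the inline computation in
     * pirq_guest_bind(): will_share ? irq_max_guests : 1 */
    static unsigned char demo_guest_array_size(int will_share,
                                               unsigned char irq_max_guests)
    {
        return will_share ? irq_max_guests : 1;
    }

    int main(void)
    {
        assert(demo_guest_array_size(1, 32) == 32); /* shareable: full array */
        assert(demo_guest_array_size(0, 32) == 1);  /* exclusive: one slot   */
        return 0;
    }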
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 53e676b..f7db2b6 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1644,7 +1644,7 @@ This option is ignored in **pv-shim** mode.
 ### irq-max-guests (x86)
 > `= <integer>`
 
-> Default: `16`
+> Default: `32`
 
 Maximum number of guests any individual IRQ could be shared between,
 i.e. a limit on the number of guests it is possible to start each having
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 4fd0578..a088818 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -440,7 +440,7 @@ int __init init_irq_data(void)
         irq_to_desc(irq)->irq = irq;
 
     if ( !irq_max_guests )
-        irq_max_guests = 16;
+        irq_max_guests = 32;
 
 #ifdef CONFIG_PV
     /* Never allocate the hypercall vector or Linux/BSD fast-trap vector. */
@@ -1540,6 +1540,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
     unsigned int irq;
     struct irq_desc *desc;
     irq_guest_action_t *action, *newaction = NULL;
+    unsigned char max_nr_guests = will_share ? irq_max_guests : 1;
     int rc = 0;
 
     WARN_ON(!spin_is_locked(&v->domain->event_lock));
@@ -1571,7 +1572,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
     {
         spin_unlock_irq(&desc->lock);
         if ( (newaction = xmalloc_flex_struct(irq_guest_action_t, guest,
-                                              irq_max_guests)) != NULL &&
+                                              max_nr_guests)) != NULL &&
              zalloc_cpumask_var(&newaction->cpu_eoi_map) )
             goto retry;
         xfree(newaction);
@@ -1640,7 +1641,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         goto retry;
     }
 
-    if ( action->nr_guests == irq_max_guests )
+    if ( action->nr_guests >= max_nr_guests )
     {
         printk(XENLOG_G_INFO
               "Cannot bind IRQ%d to dom%pd: already at max share %u ",
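A closing note on the option's 1-255 range: the backing variable is an
unsigned char, and per the v3 changelog of patch 1 out-of-range values
are caught by integer_param()'s overflow handling rather than by bespoke
checks. The sketch below illustrates the idea under that assumption;
demo_parse_irq_max_guests() is a stand-in, not Xen's actual parser, and
the zero-means-default fallback mirrors init_irq_data(). In practice,
booting with e.g. irq-max-guests=64 raises the cap.

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for Xen's integer_param() machinery: with a
     * one-byte target variable, any accepted value necessarily fits in
     * 1..255, and 0 falls back to the built-in default. */
    static unsigned char demo_parse_irq_max_guests(const char *val)
    {
        char *end;
        unsigned long v = strtoul(val, &end, 0);

        if ( *end != '\0' || v > UCHAR_MAX )
            return 0;               /* rejected: caller applies default */
        return (unsigned char)v;
    }

    int main(void)
    {
        unsigned char irq_max_guests = demo_parse_irq_max_guests("64");

        if ( !irq_max_guests )
            irq_max_guests = 32;    /* default, as in init_irq_data() */
        printf("irq-max-guests=%u\n", irq_max_guests);
        return 0;
    }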