From patchwork Tue Feb 18 12:21:13 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11388327
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 18 Feb 2020 13:21:13 +0100
Message-Id: <20200218122114.17596-4-jgross@suse.com>
In-Reply-To: <20200218122114.17596-1-jgross@suse.com>
References: <20200218122114.17596-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 3/4] xen: add process_pending_softirqs_norcu() for keyhandlers
Cc: Juergen Gross, Kevin Tian, Stefano Stabellini, Julien Grall,
    Jun Nakajima, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Jan Beulich, Roger Pau Monné

Some keyhandlers call process_pending_softirqs() while holding an
rcu_read_lock(). This is wrong, as process_pending_softirqs() might
trigger RCU callback processing, which must not happen inside an
rcu_read_lock() section.

Add process_pending_softirqs_norcu(), which performs no RCU activity,
and use it in those keyhandlers.
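For illustration, a minimal sketch of the pattern being fixed
(dump_example() is a hypothetical keyhandler; rcu_read_lock(),
for_each_domain() and process_pending_softirqs() are the in-tree names
already used by the code this patch touches):

    /* Hypothetical keyhandler, shown only to illustrate the problem. */
    static void dump_example(unsigned char key)
    {
        struct domain *d;

        rcu_read_lock(&domlist_read_lock);    /* RCU read-side section */

        for_each_domain ( d )
            /*
             * Wrong before this patch: this may end up calling
             * rcu_check_callbacks() while still inside the RCU
             * read-side section entered above.
             */
            process_pending_softirqs();

        rcu_read_unlock(&domlist_read_lock);
    }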
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/mm/p2m-ept.c                   |  2 +-
 xen/arch/x86/numa.c                         |  4 ++--
 xen/common/keyhandler.c                     |  6 +++---
 xen/common/softirq.c                        | 17 +++++++++++++----
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  2 +-
 xen/drivers/passthrough/vtd/iommu.c         |  2 +-
 xen/drivers/vpci/msi.c                      |  4 ++--
 xen/include/xen/softirq.h                   |  2 ++
 8 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index d4defa01c2..af2b012144 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1342,7 +1342,7 @@ static void ept_dump_p2m_table(unsigned char key)
                        c ?: ept_entry->ipat ? '!' : ' ');
 
                 if ( !(record_counter++ % 100) )
-                    process_pending_softirqs();
+                    process_pending_softirqs_norcu();
             }
             unmap_domain_page(table);
         }
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index f1066c59c7..cf6fcc9966 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -418,7 +418,7 @@ static void dump_numa(unsigned char key)
     printk("Memory location of each domain:\n");
     for_each_domain ( d )
     {
-        process_pending_softirqs();
+        process_pending_softirqs_norcu();
 
         printk("Domain %u (total: %u):\n", d->domain_id,
                domain_tot_pages(d));
@@ -462,7 +462,7 @@ static void dump_numa(unsigned char key)
         for ( j = 0; j < d->max_vcpus; j++ )
         {
             if ( !(j & 0x3f) )
-                process_pending_softirqs();
+                process_pending_softirqs_norcu();
 
             if ( vnuma->vcpu_to_vnode[j] == i )
             {
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 87bd145374..0d32bc4e2a 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -263,7 +263,7 @@ static void dump_domains(unsigned char key)
     {
         unsigned int i;
 
-        process_pending_softirqs();
+        process_pending_softirqs_norcu();
 
         printk("General information for domain %u:\n", d->domain_id);
         printk("    refcnt=%d dying=%d pause_count=%d\n",
@@ -307,7 +307,7 @@ static void dump_domains(unsigned char key)
         for_each_sched_unit_vcpu ( unit, v )
         {
             if ( !(v->vcpu_id & 0x3f) )
-                process_pending_softirqs();
+                process_pending_softirqs_norcu();
 
             printk("    VCPU%d: CPU%d [has=%c] poll=%d "
                    "upcall_pend=%02x upcall_mask=%02x ",
@@ -337,7 +337,7 @@ static void dump_domains(unsigned char key)
         for_each_vcpu ( d, v )
        {
             if ( !(v->vcpu_id & 0x3f) )
-                process_pending_softirqs();
+                process_pending_softirqs_norcu();
 
             printk("Notifying guest %d:%d (virq %d, port %d)\n",
                    d->domain_id, v->vcpu_id,
diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index b83ad96d6c..3fe75ca3e8 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -25,7 +25,7 @@ static softirq_handler softirq_handlers[NR_SOFTIRQS];
 static DEFINE_PER_CPU(cpumask_t, batch_mask);
 static DEFINE_PER_CPU(unsigned int, batching);
 
-static void __do_softirq(unsigned long ignore_mask)
+static void __do_softirq(unsigned long ignore_mask, bool rcu_allowed)
 {
     unsigned int i, cpu;
     unsigned long pending;
@@ -38,7 +38,7 @@ static void __do_softirq(unsigned long ignore_mask)
      */
     cpu = smp_processor_id();
 
-    if ( rcu_pending(cpu) )
+    if ( rcu_allowed && rcu_pending(cpu) )
         rcu_check_callbacks(cpu);
 
     if ( ((pending = (softirq_pending(cpu) & ~ignore_mask)) == 0)
@@ -55,13 +55,22 @@ void process_pending_softirqs(void)
 {
     ASSERT(!in_irq() && local_irq_is_enabled());
     /* Do not enter scheduler as it can preempt the calling context. */
-    __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ));
+    __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ),
+                 true);
+}
+
+void process_pending_softirqs_norcu(void)
+{
+    ASSERT(!in_irq() && local_irq_is_enabled());
+    /* Do not enter scheduler as it can preempt the calling context. */
+    __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ),
+                 false);
 }
 
 void do_softirq(void)
 {
     ASSERT_NOT_IN_ATOMIC();
-    __do_softirq(0);
+    __do_softirq(0, true);
 }
 
 void open_softirq(int nr, softirq_handler handler)
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 3112653960..880d64c748 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -587,7 +587,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
         struct amd_iommu_pte *pde = &table_vaddr[index];
 
         if ( !(index % 2) )
-            process_pending_softirqs();
+            process_pending_softirqs_norcu();
 
         if ( !pde->pr )
             continue;
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 3d60976dd5..c7bd8d4ada 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2646,7 +2646,7 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     for ( i = 0; i < PTE_NUM; i++ )
     {
         if ( !(i % 2) )
-            process_pending_softirqs();
+            process_pending_softirqs_norcu();
 
         pte = &pt_vaddr[i];
         if ( !dma_pte_present(*pte) )
diff --git a/xen/drivers/vpci/msi.c b/xen/drivers/vpci/msi.c
index 75010762ed..1d337604cc 100644
--- a/xen/drivers/vpci/msi.c
+++ b/xen/drivers/vpci/msi.c
@@ -321,13 +321,13 @@ void vpci_dump_msi(void)
                      * holding the lock.
                      */
                     printk("unable to print all MSI-X entries: %d\n", rc);
-                    process_pending_softirqs();
+                    process_pending_softirqs_norcu();
                     continue;
                 }
             }
 
             spin_unlock(&pdev->vpci->lock);
-            process_pending_softirqs();
+            process_pending_softirqs_norcu();
         }
     }
     rcu_read_unlock(&domlist_read_lock);
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index b4724f5c8b..b5bf3b83b1 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -37,7 +37,9 @@ void cpu_raise_softirq_batch_finish(void);
  * Process pending softirqs on this CPU. This should be called periodically
 * when performing work that prevents softirqs from running in a timely manner.
 * Use this instead of do_softirq() when you do not want to be preempted.
+ * The norcu variant is to be used while holding an rcu_read_lock().
  */
 void process_pending_softirqs(void);
+void process_pending_softirqs_norcu(void);
 
 #endif /* __XEN_SOFTIRQ_H__ */
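For illustration again, the same hypothetical keyhandler sketched above,
as it would look with this patch applied; only the helper changes:

    static void dump_example(unsigned char key)
    {
        struct domain *d;

        rcu_read_lock(&domlist_read_lock);

        for_each_domain ( d )
            /*
             * Safe inside the RCU read-side section: the norcu variant
             * skips the rcu_pending()/rcu_check_callbacks() step
             * entirely.
             */
            process_pending_softirqs_norcu();

        rcu_read_unlock(&domlist_read_lock);
    }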