From patchwork Tue Feb 18 12:21:11 2020
X-Patchwork-Submitter: Jürgen Groß <jgross@suse.com>
X-Patchwork-Id: 11388321
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Jan Beulich
Date: Tue, 18 Feb 2020 13:21:11 +0100
Message-Id: <20200218122114.17596-2-jgross@suse.com>
In-Reply-To: <20200218122114.17596-1-jgross@suse.com>
References: <20200218122114.17596-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 1/4] xen/rcu: use rcu softirq for forcing quiescent state

As rcu callbacks are processed in __do_softirq(), there is no need to
use the scheduling softirq for forcing a quiescent state: any other
softirq would do the job, and the scheduling one is the most expensive.
So use the already existing rcu softirq for that purpose. To tell apart
why the rcu softirq was raised, add a flag recording the current usage.
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Andrew Cooper
---
 xen/common/rcupdate.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 91d4ad0fd8..079ea9d8a1 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -89,6 +89,8 @@ struct rcu_data {
     /* 3) idle CPUs handling */
     struct timer idle_timer;
     bool idle_timer_active;
+
+    bool process_callbacks;
 };
 
 /*
@@ -194,7 +196,7 @@ static void force_quiescent_state(struct rcu_data *rdp,
                                   struct rcu_ctrlblk *rcp)
 {
     cpumask_t cpumask;
-    raise_softirq(SCHEDULE_SOFTIRQ);
+    raise_softirq(RCU_SOFTIRQ);
     if (unlikely(rdp->qlen - rdp->last_rs_qlen > rsinterval)) {
         rdp->last_rs_qlen = rdp->qlen;
         /*
@@ -202,7 +204,7 @@ static void force_quiescent_state(struct rcu_data *rdp,
          * rdp->cpu is the current cpu.
          */
         cpumask_andnot(&cpumask, &rcp->cpumask, cpumask_of(rdp->cpu));
-        cpumask_raise_softirq(&cpumask, SCHEDULE_SOFTIRQ);
+        cpumask_raise_softirq(&cpumask, RCU_SOFTIRQ);
     }
 }
 
@@ -259,7 +261,10 @@ static void rcu_do_batch(struct rcu_data *rdp)
         if (!rdp->donelist)
             rdp->donetail = &rdp->donelist;
         else
+        {
+            rdp->process_callbacks = true;
             raise_softirq(RCU_SOFTIRQ);
+        }
 }
 
@@ -410,7 +415,13 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp,
 
 static void rcu_process_callbacks(void)
 {
-    __rcu_process_callbacks(&rcu_ctrlblk, &this_cpu(rcu_data));
+    struct rcu_data *rdp = &this_cpu(rcu_data);
+
+    if ( rdp->process_callbacks )
+    {
+        rdp->process_callbacks = false;
+        __rcu_process_callbacks(&rcu_ctrlblk, rdp);
+    }
 }
 
 static int __rcu_pending(struct rcu_ctrlblk *rcp, struct rcu_data *rdp)
@@ -518,6 +529,9 @@ static void rcu_idle_timer_handler(void* data)
 
 void rcu_check_callbacks(int cpu)
 {
+    struct rcu_data *rdp = &this_cpu(rcu_data);
+
+    rdp->process_callbacks = true;
     raise_softirq(RCU_SOFTIRQ);
 }
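
[The mechanism of patch 1 in isolation: the RCU softirq now serves two
purposes, and the new flag tells them apart. Below is a minimal
standalone C model, not Xen code; process_callbacks_now() and the single
global flag, which stands in for the per-cpu rcu_data field, are
invented for the sketch. Raising the softirq with the flag clear merely
runs the handler, which suffices to note a quiescent state; setting the
flag first additionally requests callback processing.]

/* Standalone model of the two-purpose RCU softirq, not Xen code. */
#include <stdbool.h>
#include <stdio.h>

static bool process_callbacks;           /* models this_cpu(rcu_data).process_callbacks */

static void process_callbacks_now(void)  /* stands in for __rcu_process_callbacks() */
{
    puts("processing rcu callbacks");
}

static void rcu_softirq_handler(void)    /* models rcu_process_callbacks() */
{
    /*
     * Running the handler at all lets RCU note a quiescent state;
     * the callback list is only touched when explicitly requested.
     */
    if ( process_callbacks )
    {
        process_callbacks = false;
        process_callbacks_now();
    }
}

int main(void)
{
    rcu_softirq_handler();                /* force_quiescent_state(): flag clear */

    process_callbacks = true;             /* rcu_check_callbacks(): flag set ... */
    rcu_softirq_handler();                /* ... so callbacks are processed too */
    return 0;
}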
From patchwork Tue Feb 18 12:21:12 2020
X-Patchwork-Submitter: Jürgen Groß <jgross@suse.com>
X-Patchwork-Id: 11388323
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Jan Beulich
Date: Tue, 18 Feb 2020 13:21:12 +0100
Message-Id: <20200218122114.17596-3-jgross@suse.com>
In-Reply-To: <20200218122114.17596-1-jgross@suse.com>
References: <20200218122114.17596-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 2/4] xen/rcu: don't use stop_machine_run() for rcu_barrier()

Today rcu_barrier() is calling stop_machine_run() to synchronize all
physical cpus in order to ensure that all pending rcu calls have
finished when it returns. As stop_machine_run() is using tasklets, this
requires the idle vcpus to be scheduled on all cpus; with core
scheduling active, rcu_barrier() may therefore be called on idle cpus
only, as otherwise a scheduling deadlock would occur.

There is no need at all to do the syncing of the cpus in tasklets, as
rcu activity is started in __do_softirq(), which is called whenever
softirq activity is allowed. So rcu_barrier() can easily be modified to
use softirqs for the synchronization of the cpus, no longer requiring
any scheduling activity. As there already is an rcu softirq, reuse it
for the synchronization.

Remove the barrier element from struct rcu_data, as it is no longer
used. Finally, switch rcu_barrier() to return void, as it can no longer
fail.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use get_cpu_maps()
- add recursion detection
---
 xen/common/rcupdate.c      | 72 ++++++++++++++++++++++++++++++++--------------
 xen/include/xen/rcupdate.h |  2 +-
 2 files changed, 51 insertions(+), 23 deletions(-)

diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 079ea9d8a1..e6add0b120 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -83,7 +83,6 @@ struct rcu_data {
     struct rcu_head **donetail;
     long blimit;           /* Upper limit on a processed batch */
     int cpu;
-    struct rcu_head barrier;
     long last_rs_qlen;     /* qlen during the last resched */
 
     /* 3) idle CPUs handling */
@@ -91,6 +90,7 @@ struct rcu_data {
     bool idle_timer_active;
 
     bool process_callbacks;
+    bool barrier_active;
 };
 
 /*
@@ -143,47 +143,68 @@ static int qhimark = 10000;
 static int qlowmark = 100;
 static int rsinterval = 1000;
 
-struct rcu_barrier_data {
-    struct rcu_head head;
-    atomic_t *cpu_count;
-};
+/*
+ * rcu_barrier() handling:
+ * cpu_count holds the number of cpus required to finish barrier handling.
+ * Cpus are synchronized via the softirq mechanism. rcu_barrier() is regarded
+ * as active if cpu_count is not zero. In case rcu_barrier() is called on
+ * multiple cpus it is enough to check for cpu_count being not zero on entry
+ * and to call process_pending_softirqs() in a loop until cpu_count drops to
+ * zero, as syncing has been requested already and we don't need to sync
+ * multiple times.
+ */
+static atomic_t cpu_count = ATOMIC_INIT(0);
 
 static void rcu_barrier_callback(struct rcu_head *head)
 {
-    struct rcu_barrier_data *data = container_of(
-        head, struct rcu_barrier_data, head);
-    atomic_inc(data->cpu_count);
+    atomic_dec(&cpu_count);
 }
 
-static int rcu_barrier_action(void *_cpu_count)
+static void rcu_barrier_action(void)
 {
-    struct rcu_barrier_data data = { .cpu_count = _cpu_count };
-
-    ASSERT(!local_irq_is_enabled());
-    local_irq_enable();
+    struct rcu_head head;
 
     /*
-     * When callback is executed, all previously-queued RCU work on this CPU
-     * is completed. When all CPUs have executed their callback, data.cpu_count
-     * will have been incremented to include every online CPU.
+     * When the callback is executed, all previously-queued RCU work on this
+     * CPU is completed. When all CPUs have executed their callback, cpu_count
+     * will have been decremented to zero.
      */
-    call_rcu(&data.head, rcu_barrier_callback);
+    call_rcu(&head, rcu_barrier_callback);
 
-    while ( atomic_read(data.cpu_count) != num_online_cpus() )
+    while ( atomic_read(&cpu_count) )
     {
         process_pending_softirqs();
         cpu_relax();
     }
-
-    local_irq_disable();
-
-    return 0;
 }
 
-int rcu_barrier(void)
+void rcu_barrier(void)
 {
-    atomic_t cpu_count = ATOMIC_INIT(0);
-    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
+    int initial = atomic_read(&cpu_count);
+
+    while ( !get_cpu_maps() )
+    {
+        process_pending_softirqs();
+        if ( initial && !atomic_read(&cpu_count) )
+            return;
+
+        cpu_relax();
+        initial = atomic_read(&cpu_count);
+    }
+
+    if ( !initial )
+    {
+        atomic_set(&cpu_count, num_online_cpus());
+        cpumask_raise_softirq(&cpu_online_map, RCU_SOFTIRQ);
+    }
+
+    while ( atomic_read(&cpu_count) )
+    {
+        process_pending_softirqs();
+        cpu_relax();
+    }
+
+    put_cpu_maps();
 }
 
 /* Is batch a before batch b ? */
@@ -422,6 +443,13 @@ static void rcu_process_callbacks(void)
         rdp->process_callbacks = false;
         __rcu_process_callbacks(&rcu_ctrlblk, rdp);
     }
+
+    if ( atomic_read(&cpu_count) && !rdp->barrier_active )
+    {
+        rdp->barrier_active = true;
+        rcu_barrier_action();
+        rdp->barrier_active = false;
+    }
 }
 
 static int __rcu_pending(struct rcu_ctrlblk *rcp, struct rcu_data *rdp)
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index 174d058113..87f35b7704 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -143,7 +143,7 @@ void rcu_check_callbacks(int cpu);
 void call_rcu(struct rcu_head *head,
               void (*func)(struct rcu_head *head));
 
-int rcu_barrier(void);
+void rcu_barrier(void);
 
 void rcu_idle_enter(unsigned int cpu);
 void rcu_idle_exit(unsigned int cpu);
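
[The barrier protocol above, modeled in isolation: a standalone C11
sketch in which pthreads stand in for cpus, the call_rcu() indirection
is collapsed into a direct decrement, and the get_cpu_maps()/recursion
handling is omitted; NCPUS and cpu_worker() are names invented for the
sketch. The initiator charges cpu_count with one unit per online cpu
and pokes them all; each cpu pays off one unit once its queued RCU work
is done, and everyone spins until the count drains to zero.]

/* Standalone model of the softirq-based rcu_barrier(), not Xen code.
 * Build with: cc -std=c11 -pthread barrier_model.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS 4

static atomic_int cpu_count;              /* models the global cpu_count */

static void *cpu_worker(void *arg)        /* models one cpu's RCU_SOFTIRQ handling */
{
    (void)arg;

    atomic_fetch_sub(&cpu_count, 1);      /* models rcu_barrier_callback() */

    /* Wait loop of rcu_barrier_action(); Xen additionally calls
     * process_pending_softirqs() and cpu_relax() in here. */
    while ( atomic_load(&cpu_count) )
        ;
    return NULL;
}

int main(void)
{
    pthread_t t[NCPUS];

    atomic_store(&cpu_count, NCPUS);      /* initiator: one unit per online cpu */
    for ( int i = 0; i < NCPUS; i++ )     /* models cpumask_raise_softirq() */
        pthread_create(&t[i], NULL, cpu_worker, NULL);

    for ( int i = 0; i < NCPUS; i++ )
        pthread_join(t[i], NULL);
    printf("barrier complete, cpu_count=%d\n", atomic_load(&cpu_count));
    return 0;
}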
From patchwork Tue Feb 18 12:21:13 2020
X-Patchwork-Submitter: Jürgen Groß <jgross@suse.com>
X-Patchwork-Id: 11388327
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Kevin Tian, Stefano Stabellini, Julien Grall,
    Jun Nakajima, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Jan Beulich, Roger Pau Monné
Date: Tue, 18 Feb 2020 13:21:13 +0100
Message-Id: <20200218122114.17596-4-jgross@suse.com>
In-Reply-To: <20200218122114.17596-1-jgross@suse.com>
References: <20200218122114.17596-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 3/4] xen: add process_pending_softirqs_norcu() for keyhandlers

Some keyhandlers are calling process_pending_softirqs() while holding
an rcu_read_lock(). This is wrong, as process_pending_softirqs() might
trigger rcu processing, which must not happen inside an rcu read-side
critical section.

To address this, add process_pending_softirqs_norcu(), which will not
do any rcu activity, and use it in the keyhandlers.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/mm/p2m-ept.c                   |  2 +-
 xen/arch/x86/numa.c                         |  4 ++--
 xen/common/keyhandler.c                     |  6 +++---
 xen/common/softirq.c                        | 17 +++++++++++++----
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  2 +-
 xen/drivers/passthrough/vtd/iommu.c         |  2 +-
 xen/drivers/vpci/msi.c                      |  4 ++--
 xen/include/xen/softirq.h                   |  2 ++
 8 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index d4defa01c2..af2b012144 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1342,7 +1342,7 @@ static void ept_dump_p2m_table(unsigned char key)
                        c ?: ept_entry->ipat ? '!' : ' ');
 
             if ( !(record_counter++ % 100) )
-                process_pending_softirqs();
+                process_pending_softirqs_norcu();
         }
         unmap_domain_page(table);
     }
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index f1066c59c7..cf6fcc9966 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -418,7 +418,7 @@ static void dump_numa(unsigned char key)
     printk("Memory location of each domain:\n");
     for_each_domain ( d )
     {
-        process_pending_softirqs();
+        process_pending_softirqs_norcu();
 
         printk("Domain %u (total: %u):\n", d->domain_id, domain_tot_pages(d));
 
@@ -462,7 +462,7 @@ static void dump_numa(unsigned char key)
             for ( j = 0; j < d->max_vcpus; j++ )
             {
                 if ( !(j & 0x3f) )
-                    process_pending_softirqs();
+                    process_pending_softirqs_norcu();
 
                 if ( vnuma->vcpu_to_vnode[j] == i )
                 {
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 87bd145374..0d32bc4e2a 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -263,7 +263,7 @@ static void dump_domains(unsigned char key)
     {
         unsigned int i;
 
-        process_pending_softirqs();
+        process_pending_softirqs_norcu();
 
         printk("General information for domain %u:\n", d->domain_id);
         printk("    refcnt=%d dying=%d pause_count=%d\n",
@@ -307,7 +307,7 @@ static void dump_domains(unsigned char key)
         for_each_sched_unit_vcpu ( unit, v )
         {
             if ( !(v->vcpu_id & 0x3f) )
-                process_pending_softirqs();
+                process_pending_softirqs_norcu();
 
             printk("    VCPU%d: CPU%d [has=%c] poll=%d "
                    "upcall_pend=%02x upcall_mask=%02x ",
@@ -337,7 +337,7 @@ static void dump_domains(unsigned char key)
         for_each_vcpu ( d, v )
         {
             if ( !(v->vcpu_id & 0x3f) )
-                process_pending_softirqs();
+                process_pending_softirqs_norcu();
 
             printk("Notifying guest %d:%d (virq %d, port %d)\n",
                    d->domain_id, v->vcpu_id,
diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index b83ad96d6c..3fe75ca3e8 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -25,7 +25,7 @@ static softirq_handler softirq_handlers[NR_SOFTIRQS];
 static DEFINE_PER_CPU(cpumask_t, batch_mask);
 static DEFINE_PER_CPU(unsigned int, batching);
 
-static void __do_softirq(unsigned long ignore_mask)
+static void __do_softirq(unsigned long ignore_mask, bool rcu_allowed)
 {
     unsigned int i, cpu;
     unsigned long pending;
@@ -38,7 +38,7 @@ static void __do_softirq(unsigned long ignore_mask)
      */
     cpu = smp_processor_id();
 
-    if ( rcu_pending(cpu) )
+    if ( rcu_allowed && rcu_pending(cpu) )
         rcu_check_callbacks(cpu);
 
     if ( ((pending = (softirq_pending(cpu) & ~ignore_mask)) == 0)
@@ -55,13 +55,22 @@ void process_pending_softirqs(void)
 {
     ASSERT(!in_irq() && local_irq_is_enabled());
     /* Do not enter scheduler as it can preempt the calling context. */
-    __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ));
+    __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ),
+                 true);
+}
+
+void process_pending_softirqs_norcu(void)
+{
+    ASSERT(!in_irq() && local_irq_is_enabled());
+    /* Do not enter scheduler as it can preempt the calling context. */
+    __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ),
+                 false);
 }
*/ + __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ), + false); } void do_softirq(void) { ASSERT_NOT_IN_ATOMIC(); - __do_softirq(0); + __do_softirq(0, true); } void open_softirq(int nr, softirq_handler handler) diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c index 3112653960..880d64c748 100644 --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c @@ -587,7 +587,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level, struct amd_iommu_pte *pde = &table_vaddr[index]; if ( !(index % 2) ) - process_pending_softirqs(); + process_pending_softirqs_norcu(); if ( !pde->pr ) continue; diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c index 3d60976dd5..c7bd8d4ada 100644 --- a/xen/drivers/passthrough/vtd/iommu.c +++ b/xen/drivers/passthrough/vtd/iommu.c @@ -2646,7 +2646,7 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, for ( i = 0; i < PTE_NUM; i++ ) { if ( !(i % 2) ) - process_pending_softirqs(); + process_pending_softirqs_norcu(); pte = &pt_vaddr[i]; if ( !dma_pte_present(*pte) ) diff --git a/xen/drivers/vpci/msi.c b/xen/drivers/vpci/msi.c index 75010762ed..1d337604cc 100644 --- a/xen/drivers/vpci/msi.c +++ b/xen/drivers/vpci/msi.c @@ -321,13 +321,13 @@ void vpci_dump_msi(void) * holding the lock. */ printk("unable to print all MSI-X entries: %d\n", rc); - process_pending_softirqs(); + process_pending_softirqs_norcu(); continue; } } spin_unlock(&pdev->vpci->lock); - process_pending_softirqs(); + process_pending_softirqs_norcu(); } } rcu_read_unlock(&domlist_read_lock); diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h index b4724f5c8b..b5bf3b83b1 100644 --- a/xen/include/xen/softirq.h +++ b/xen/include/xen/softirq.h @@ -37,7 +37,9 @@ void cpu_raise_softirq_batch_finish(void); * Process pending softirqs on this CPU. This should be called periodically * when performing work that prevents softirqs from running in a timely manner. * Use this instead of do_softirq() when you do not want to be preempted. + * The norcu variant is to be used while holding a read_rcu_lock(). 
From patchwork Tue Feb 18 12:21:14 2020
X-Patchwork-Submitter: Jürgen Groß <jgross@suse.com>
X-Patchwork-Id: 11388329
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Jan Beulich
Date: Tue, 18 Feb 2020 13:21:14 +0100
Message-Id: <20200218122114.17596-5-jgross@suse.com>
In-Reply-To: <20200218122114.17596-1-jgross@suse.com>
References: <20200218122114.17596-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 4/4] xen/rcu: add assertions to debug build

Xen's RCU implementation relies on no softirq handling taking place
while in an RCU critical section. Add ASSERT()s in debug builds in
order to catch any violations.

For that purpose modify rcu_read_[un]lock() to use a dedicated percpu
counter instead of preempt_[en|dis]able(), as this enables testing that
condition in __do_softirq() (ASSERT_NOT_IN_ATOMIC() is not usable there
due to __cpu_up() calling process_pending_softirqs() while holding the
cpu hotplug lock).

Dropping the now no longer needed #include of preempt.h in rcupdate.h
requires adding it in some other source files.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/multicall.c     |  1 +
 xen/common/rcupdate.c      |  4 ++++
 xen/common/softirq.c       |  2 ++
 xen/common/wait.c          |  1 +
 xen/include/xen/rcupdate.h | 21 +++++++++++++++++----
 5 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/xen/common/multicall.c b/xen/common/multicall.c
index 5a199ebf8f..67f1a23485 100644
--- a/xen/common/multicall.c
+++ b/xen/common/multicall.c
@@ -10,6 +10,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <xen/preempt.h>
 #include <...>
 #include <...>
 #include <...>
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index e6add0b120..b03f4b44d9 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -46,6 +46,10 @@
 #include <...>
 #include <...>
 
+#ifndef NDEBUG
+DEFINE_PER_CPU(unsigned int, rcu_lock_cnt);
+#endif
+
 /* Global control variables for rcupdate callback mechanism. */
 static struct rcu_ctrlblk {
     long cur;           /* Current batch number. */
diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index 3fe75ca3e8..18be8db0c6 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -30,6 +30,8 @@ static void __do_softirq(unsigned long ignore_mask, bool rcu_allowed)
     unsigned int i, cpu;
     unsigned long pending;
 
+    ASSERT(!rcu_allowed || rcu_quiesce_allowed());
+
     for ( ; ; )
     {
         /*
diff --git a/xen/common/wait.c b/xen/common/wait.c
index 24716e7676..9cdb174036 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -19,6 +19,7 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/preempt.h>
 #include <...>
 #include <...>
 #include <...>
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index 87f35b7704..a5ee7fec2b 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -34,10 +34,23 @@
 #include <...>
 #include <...>
 #include <...>
-#include <xen/preempt.h>
+#include <xen/percpu.h>
 
 #define __rcu
 
+#ifndef NDEBUG
+DECLARE_PER_CPU(unsigned int, rcu_lock_cnt);
+
+#define rcu_quiesce_disable() (this_cpu(rcu_lock_cnt))++
+#define rcu_quiesce_enable()  (this_cpu(rcu_lock_cnt))--
+#define rcu_quiesce_allowed() (!this_cpu(rcu_lock_cnt))
+
+#else
+#define rcu_quiesce_disable() ((void)0)
+#define rcu_quiesce_enable()  ((void)0)
+#define rcu_quiesce_allowed() true
+#endif
+
 /**
  * struct rcu_head - callback structure for use with RCU
  * @next: next update requests in a list
@@ -90,16 +103,16 @@ typedef struct _rcu_read_lock rcu_read_lock_t;
  * will be deferred until the outermost RCU read-side critical section
  * completes.
  *
- * It is illegal to block while in an RCU read-side critical section.
+ * It is illegal to process softirqs while in an RCU read-side critical section.
  */
-#define rcu_read_lock(x)       ({ ((void)(x)); preempt_disable(); })
+#define rcu_read_lock(x)       ({ ((void)(x)); rcu_quiesce_disable(); })
 
 /**
  * rcu_read_unlock - marks the end of an RCU read-side critical section.
  *
  * See rcu_read_lock() for more information.
  */
-#define rcu_read_unlock(x)     ({ ((void)(x)); preempt_enable(); })
+#define rcu_read_unlock(x)     ({ ((void)(x)); rcu_quiesce_enable(); })
 
 /*
  * So where is rcu_write_lock()? It does not exist, as there is no
  * way for writers to lock out RCU readers.
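
[Finally, the semantics of the new per-cpu counter, shown as a small
standalone C model; a single global counter stands in for
this_cpu(rcu_lock_cnt), so this models one cpu only, and
do_softirq_model() is an invented stand-in for the assertion added to
__do_softirq().]

/* Standalone model of the rcu_lock_cnt nesting counter, not Xen code. */
#include <assert.h>

static unsigned int rcu_lock_cnt;        /* models this_cpu(rcu_lock_cnt) */

#define rcu_quiesce_disable() (rcu_lock_cnt++)
#define rcu_quiesce_enable()  (rcu_lock_cnt--)
#define rcu_quiesce_allowed() (!rcu_lock_cnt)

static void do_softirq_model(void)
{
    /* the check __do_softirq() now performs in debug builds */
    assert(rcu_quiesce_allowed());
}

int main(void)
{
    rcu_quiesce_disable();               /* rcu_read_lock() */
    rcu_quiesce_disable();               /* nested rcu_read_lock() is fine */
    rcu_quiesce_enable();                /* inner rcu_read_unlock() */
    /* calling do_softirq_model() here would trip the assertion */
    rcu_quiesce_enable();                /* outer rcu_read_unlock() */

    do_softirq_model();                  /* counter is zero again: allowed */
    return 0;
}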