From patchwork Sat Sep 14 06:42:17 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall,
    Jan Beulich, Dario Faggioli, Roger Pau Monné
Date: Sat, 14 Sep 2019 08:42:17 +0200
Message-Id: <20190914064217.4877-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2] xen/sched: rework and rename vcpu_force_reschedule()

vcpu_force_reschedule() is only used for modifying the periodic timer
of a vcpu. Forcing a vcpu to give up the physical cpu for that purpose
is rather heavy-handed.

So instead of doing the reschedule dance, just operate on the timer
directly. By protecting periodic timer modifications against concurrent
timer activation via a per-vcpu lock, it is no longer necessary to
bother the target vcpu at all when updating its timer.

Rename the function to vcpu_set_periodic_timer(), as this now reflects
its functionality.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
- Carved out from my core scheduling series
- Reworked to avoid deadlock when 2 vcpus are trying to modify each
  other's periodic timers, addressing all comments by Jan Beulich

V2:
- test periodic_period again in vcpu_periodic_timer_work() when the
  lock is obtained (Jan Beulich)
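For illustration only (not part of the submitted patch): the locking
pattern the diff below introduces, reduced to a minimal self-contained
userspace sketch. The names fake_vcpu, periodic_timer_lock and
timer_armed, and the use of a pthread mutex, are made-up stand-ins;
Xen's real code uses spinlock_t, struct timer and the
stop_timer()/set_timer() calls shown in the hunks.

/*
 * Sketch of the pattern: a per-vcpu lock guards the period and the timer,
 * the timer-side worker does a cheap unlocked check and re-checks once the
 * lock is held (the V2 change), and the setter stops the timer, updates the
 * period and re-arms, all under the lock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

struct fake_vcpu {
    pthread_mutex_t periodic_timer_lock;  /* guards the two fields below */
    int64_t periodic_period;              /* 0 means "periodic timer off" */
    bool timer_armed;                     /* stand-in for Xen's struct timer */
};

/* Timer-side work (runs on every wakeup, hence the unlocked fast path). */
void periodic_timer_work(struct fake_vcpu *v)
{
    if ( v->periodic_period == 0 )        /* racy hint only ... */
        return;

    pthread_mutex_lock(&v->periodic_timer_lock);
    if ( v->periodic_period )             /* ... so re-check under the lock */
        v->timer_armed = true;            /* Xen: migrate_timer()+set_timer() */
    pthread_mutex_unlock(&v->periodic_timer_lock);
}

/* Setter: any CPU may update any vcpu's period; no reschedule is forced. */
void set_periodic_timer(struct fake_vcpu *v, int64_t value)
{
    pthread_mutex_lock(&v->periodic_timer_lock);
    v->timer_armed = false;               /* Xen: stop_timer() */
    v->periodic_period = value;
    if ( value )
        v->timer_armed = true;            /* re-arm with the new period */
    pthread_mutex_unlock(&v->periodic_timer_lock);
}

Because each vcpu has its own lock and the setter never holds more than
one of them, two vcpus updating each other's periodic timers cannot
deadlock, which is the scenario the changelog above refers to.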
---
 xen/arch/x86/pv/shim.c  |  4 +---
 xen/common/domain.c     |  6 ++----
 xen/common/schedule.c   | 53 ++++++++++++++++++++++++++++---------------------
 xen/include/xen/sched.h |  3 ++-
 4 files changed, 35 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 324ca27f93..5edbcd9ac5 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -410,7 +410,7 @@ int pv_shim_shutdown(uint8_t reason)
         unmap_vcpu_info(v);
 
         /* Reset the periodic timer to the default value. */
-        v->periodic_period = MILLISECS(10);
+        vcpu_set_periodic_timer(v, MILLISECS(10));
         /* Stop the singleshot timer. */
         stop_timer(&v->singleshot_timer);
 
@@ -419,8 +419,6 @@ int pv_shim_shutdown(uint8_t reason)
 
         if ( v != current )
             vcpu_unpause_by_systemcontroller(v);
-        else
-            vcpu_force_reschedule(v);
     }
 
     return 0;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 9a48b2504b..0cff749bbe 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1494,15 +1494,13 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( set.period_ns > STIME_DELTA_MAX )
             return -EINVAL;
 
-        v->periodic_period = set.period_ns;
-        vcpu_force_reschedule(v);
+        vcpu_set_periodic_timer(v, set.period_ns);
 
         break;
     }
 
     case VCPUOP_stop_periodic_timer:
-        v->periodic_period = 0;
-        vcpu_force_reschedule(v);
+        vcpu_set_periodic_timer(v, 0);
         break;
 
     case VCPUOP_set_singleshot_timer:
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index fdeec10c3b..13b5ffc7cf 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -312,6 +312,7 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     v->processor = processor;
 
     /* Initialise the per-vcpu timers. */
+    spin_lock_init(&v->periodic_timer_lock);
     init_timer(&v->periodic_timer, vcpu_periodic_timer_fn,
                v, v->processor);
     init_timer(&v->singleshot_timer, vcpu_singleshot_timer_fn,
@@ -724,24 +725,6 @@ static void vcpu_migrate_finish(struct vcpu *v)
     vcpu_wake(v);
 }
 
-/*
- * Force a VCPU through a deschedule/reschedule path.
- * For example, using this when setting the periodic timer period means that
- * most periodic-timer state need only be touched from within the scheduler
- * which can thus be done without need for synchronisation.
- */
-void vcpu_force_reschedule(struct vcpu *v)
-{
-    spinlock_t *lock = vcpu_schedule_lock_irq(v);
-
-    if ( v->is_running )
-        vcpu_migrate_start(v);
-
-    vcpu_schedule_unlock_irq(lock, v);
-
-    vcpu_migrate_finish(v);
-}
-
 void restore_vcpu_affinity(struct domain *d)
 {
     unsigned int cpu = smp_processor_id();
@@ -1458,14 +1441,11 @@ long sched_adjust_global(struct xen_sysctl_scheduler_op *op)
     return rc;
 }
 
-static void vcpu_periodic_timer_work(struct vcpu *v)
+static void vcpu_periodic_timer_work_locked(struct vcpu *v)
 {
     s_time_t now;
     s_time_t periodic_next_event;
 
-    if ( v->periodic_period == 0 )
-        return;
-
     now = NOW();
     periodic_next_event = v->periodic_last_event + v->periodic_period;
 
@@ -1476,10 +1456,37 @@ static void vcpu_periodic_timer_work(struct vcpu *v)
         periodic_next_event = now + v->periodic_period;
     }
 
-    migrate_timer(&v->periodic_timer, smp_processor_id());
+    migrate_timer(&v->periodic_timer, v->processor);
     set_timer(&v->periodic_timer, periodic_next_event);
 }
 
+static void vcpu_periodic_timer_work(struct vcpu *v)
+{
+    if ( v->periodic_period == 0 )
+        return;
+
+    spin_lock(&v->periodic_timer_lock);
+    if ( v->periodic_period )
+        vcpu_periodic_timer_work_locked(v);
+    spin_unlock(&v->periodic_timer_lock);
+}
+
+/*
+ * Set the periodic timer of a vcpu.
+ */
+void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value)
+{
+    spin_lock(&v->periodic_timer_lock);
+
+    stop_timer(&v->periodic_timer);
+
+    v->periodic_period = value;
+    if ( value )
+        vcpu_periodic_timer_work_locked(v);
+
+    spin_unlock(&v->periodic_timer_lock);
+}
+
 /*
  * The main function
  * - deschedule the current domain (scheduler independent).
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index e3601c1935..40097ff334 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -153,6 +153,7 @@ struct vcpu
 
     struct vcpu     *next_in_list;
 
+    spinlock_t       periodic_timer_lock;
     s_time_t         periodic_period;
     s_time_t         periodic_last_event;
     struct timer     periodic_timer;
@@ -864,7 +865,7 @@ struct scheduler *scheduler_get_default(void);
 struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
 void scheduler_free(struct scheduler *sched);
 int schedule_cpu_switch(unsigned int cpu, struct cpupool *c);
-void vcpu_force_reschedule(struct vcpu *v);
+void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
 int cpu_disable_scheduler(unsigned int cpu);
 /* We need it in dom0_setup_vcpu */
 void sched_set_affinity(struct vcpu *v, const cpumask_t *hard,