From patchwork Fri Oct 4 06:40:10 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11173659
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 4 Oct 2019 08:40:10 +0200
Message-Id: <20191004064010.25646-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
Subject: [Xen-devel] [PATCH] xen/sched: fix locking in sched_tick_[suspend|resume]()
Cc: Juergen Gross, George Dunlap, Dario Faggioli

sched_tick_suspend() and sched_tick_resume() should not call the
scheduler specific timer handlers in case the cpu they are running on
is just being moved to or from a cpupool. Use a new percpu lock for
that purpose.

Reported-by: Sergey Dyasli
Signed-off-by: Juergen Gross
---
To be applied on top of my core scheduling series.
---
 xen/common/schedule.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 217fcb09ce..744f8cb5db 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -68,6 +68,9 @@ cpumask_t sched_res_mask;
 /* Common lock for free cpus. */
 static DEFINE_SPINLOCK(sched_free_cpu_lock);
 
+/* Lock for guarding per-scheduler calls against scheduler changes on a cpu. */
+static DEFINE_PER_CPU(spinlock_t, sched_cpu_lock);
+
 /* Various timer handlers. */
 static void s_timer_fn(void *unused);
 static void vcpu_periodic_timer_fn(void *data);
@@ -2472,6 +2475,8 @@ static int cpu_schedule_up(unsigned int cpu)
     if ( sr == NULL )
         return -ENOMEM;
 
+    spin_lock_init(&per_cpu(sched_cpu_lock, cpu));
+
     sr->master_cpu = cpu;
     cpumask_copy(sr->cpus, cpumask_of(cpu));
     set_sched_res(cpu, sr);
@@ -2763,11 +2768,14 @@ int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
     struct scheduler *new_ops = c->sched;
     struct sched_resource *sr;
     spinlock_t *old_lock, *new_lock;
+    spinlock_t *cpu_lock = &per_cpu(sched_cpu_lock, cpu);
     unsigned long flags;
     int ret = 0;
 
     rcu_read_lock(&sched_res_rculock);
 
+    spin_lock(cpu_lock);
+
     sr = get_sched_res(cpu);
 
     ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
@@ -2879,6 +2887,8 @@ int schedule_cpu_add(unsigned int cpu, struct cpupool *c)
     cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
 
  out:
+    spin_unlock(cpu_lock);
+
     rcu_read_unlock(&sched_res_rculock);
 
     return ret;
@@ -2897,12 +2907,15 @@ int schedule_cpu_rm(unsigned int cpu)
     struct sched_unit *unit;
     struct scheduler *old_ops;
     spinlock_t *old_lock;
+    spinlock_t *cpu_lock = &per_cpu(sched_cpu_lock, cpu);
     unsigned long flags;
     int idx, ret = -ENOMEM;
     unsigned int cpu_iter;
 
     rcu_read_lock(&sched_res_rculock);
 
+    spin_lock(cpu_lock);
+
     sr = get_sched_res(cpu);
     old_ops = sr->scheduler;
 
@@ -3004,6 +3017,8 @@ int schedule_cpu_rm(unsigned int cpu)
     sr->cpupool = NULL;
 
  out:
+    spin_unlock(cpu_lock);
+
     rcu_read_unlock(&sched_res_rculock);
 
     xfree(sr_new);
@@ -3084,11 +3099,17 @@ void sched_tick_suspend(void)
 {
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();
+    spinlock_t *lock = &per_cpu(sched_cpu_lock, cpu);
 
     rcu_read_lock(&sched_res_rculock);
 
+    spin_lock(lock);
+
     sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_suspend(sched, cpu);
+
+    spin_unlock(lock);
+
     rcu_idle_enter(cpu);
     rcu_idle_timer_start();
 
@@ -3099,14 +3120,20 @@ void sched_tick_resume(void)
 {
     struct scheduler *sched;
     unsigned int cpu = smp_processor_id();
+    spinlock_t *lock = &per_cpu(sched_cpu_lock, cpu);
 
     rcu_read_lock(&sched_res_rculock);
 
     rcu_idle_timer_stop();
     rcu_idle_exit(cpu);
+
+    spin_lock(lock);
+
     sched = get_sched_res(cpu)->scheduler;
     sched_do_tick_resume(sched, cpu);
 
+    spin_unlock(lock);
+
     rcu_read_unlock(&sched_res_rculock);
 }
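The idea of the patch is that both the tick paths and the cpupool add/remove paths take the same per-CPU lock, so a tick handler can never observe a scheduler that is halfway through being swapped. A minimal user-space sketch of that pattern, with pthread mutexes standing in for Xen spinlocks (all names here are illustrative, not Xen's real API):

```c
#include <assert.h>
#include <pthread.h>

/*
 * Stand-alone model of the locking scheme the patch introduces: every
 * "cpu" owns a lock that serializes scheduler-specific tick callbacks
 * against a concurrent scheduler change on that cpu, mirroring the
 * per-cpu sched_cpu_lock in the diff.
 */

#define NR_CPUS 2

struct sched_ops {
    int (*tick_suspend)(unsigned int cpu);  /* returns a marker for checking */
};

int ops_a_suspend(unsigned int cpu) { (void)cpu; return 1; }
int ops_b_suspend(unsigned int cpu) { (void)cpu; return 2; }

struct sched_ops ops_a = { ops_a_suspend };
struct sched_ops ops_b = { ops_b_suspend };

/* Per-cpu lock and scheduler pointer, the analogue of the percpu data. */
pthread_mutex_t cpu_lock[NR_CPUS] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};
struct sched_ops *cpu_sched[NR_CPUS] = { &ops_a, &ops_a };

/*
 * Analogue of sched_tick_suspend(): look up the current scheduler and
 * call its handler while holding the per-cpu lock, so a concurrent
 * scheduler change cannot swap the ops mid-call.
 */
int tick_suspend(unsigned int cpu)
{
    int ret;

    pthread_mutex_lock(&cpu_lock[cpu]);
    ret = cpu_sched[cpu]->tick_suspend(cpu);
    pthread_mutex_unlock(&cpu_lock[cpu]);

    return ret;
}

/*
 * Analogue of schedule_cpu_add()/schedule_cpu_rm(): change the
 * scheduler of one cpu under the same lock.
 */
void change_scheduler(unsigned int cpu, struct sched_ops *new_ops)
{
    pthread_mutex_lock(&cpu_lock[cpu]);
    cpu_sched[cpu] = new_ops;
    pthread_mutex_unlock(&cpu_lock[cpu]);
}
```

A caller moving cpu 0 to another pool would invoke change_scheduler(0, &ops_b); any tick_suspend(0) running at the same time either completes against the old ops or waits and sees the new ones, never a mix — the same guarantee the real patch gives with spin_lock(cpu_lock) around sched_do_tick_suspend()/sched_do_tick_resume().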