From patchwork Tue Jul 23 09:20:55 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11054171
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Tue, 23 Jul 2019 11:20:55 +0200
Message-Id: <20190723092056.15045-2-jgross@suse.com>
In-Reply-To: <20190723092056.15045-1-jgross@suse.com>
References: <20190723092056.15045-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 1/2] xen/sched: fix locking in restore_vcpu_affinity()

Commit 0763cd2687897b55e7 ("xen/sched: don't disable scheduler on cpus
during suspend") removed a lock from restore_vcpu_affinity() which
needs to stay: cpumask_scratch_cpu() must be protected by the scheduler
lock. restore_vcpu_affinity() is called from thaw_domains(), so with
multiple domains in the system another domain might already be running,
and the scheduler might already be using cpumask_scratch_cpu().
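To illustrate the race outside the hypervisor: the standalone C sketch
below models cpumask_scratch_cpu(cpu) as a shared scratch variable and
the scheduler lock as a pthread mutex. This is only an analogy with
made-up names (thaw_domain, sched_lock), not Xen code; it shows why a
shared scratch area must only be used under the lock that serializes it.

/*
 * Analogy, not Xen code: "scratch" plays the role of
 * cpumask_scratch_cpu(cpu), the mutex plays the scheduler lock taken
 * via vcpu_schedule_lock_irq(), and each thread is a domain being
 * thawed.  Without the lock, the other thread could clobber the
 * scratch area between computing and consuming it.
 * Build with: cc -pthread sketch.c
 */
#include <assert.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;
static uintptr_t scratch;   /* shared scratch area, like the per-cpu mask */

static void *thaw_domain(void *arg)
{
    uintptr_t mine = (uintptr_t)arg;

    for ( unsigned int i = 0; i < 100000; i++ )
    {
        pthread_mutex_lock(&sched_lock);    /* vcpu_schedule_lock_irq(v) */
        scratch = mine;                     /* cpumask_and(scratch, ...) */
        assert(scratch == mine);            /* cpumask_any(scratch) */
        pthread_mutex_unlock(&sched_lock);  /* spin_unlock_irq(lock) */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, thaw_domain, (void *)1);
    pthread_create(&t2, NULL, thaw_domain, (void *)2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    puts("scratch area stayed consistent under the lock");
    return 0;
}

Dropping the mutex around the scratch accesses would let the other
thread overwrite the value between the write and the read, which is
exactly what could happen to cpumask_scratch_cpu() before this patch.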
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/common/schedule.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 25f6ab388d..89bc259ae4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -708,6 +708,8 @@ void restore_vcpu_affinity(struct domain *d)
          * set v->processor of each of their vCPUs to something that will
          * make sense for the scheduler of the cpupool in which they are in.
          */
+        lock = vcpu_schedule_lock_irq(v);
+
         cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
                     cpupool_domain_cpumask(d));
         if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
@@ -731,6 +733,9 @@ void restore_vcpu_affinity(struct domain *d)
 
         v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
 
+        spin_unlock_irq(lock);
+
+        /* v->processor might have changed, so reacquire the lock. */
         lock = vcpu_schedule_lock_irq(v);
         v->processor = sched_pick_cpu(vcpu_scheduler(v), v);
         spin_unlock_irq(lock);
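A note on why the final hunk unlocks and then relocks: the lock
returned by vcpu_schedule_lock_irq() is the one belonging to the CPU
currently recorded in v->processor, so once v->processor is rewritten
the previously held lock no longer covers the vCPU. Below is a
simplified standalone sketch of that lock-follows-processor pattern;
the names (runq_lock, vcpu_lock) and the pthread modeling are
illustrative assumptions, and the real Xen helper differs in detail.

/*
 * Standalone sketch, not the real Xen implementation: the lock
 * guarding a vCPU is the one of the CPU named in v->processor, so a
 * locker must retry if the vCPU migrated between reading v->processor
 * and acquiring the lock.  Requires C11 atomics; build with -pthread.
 */
#include <pthread.h>
#include <stdatomic.h>

#define NR_CPUS 4

static pthread_mutex_t runq_lock[NR_CPUS] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

struct vcpu { _Atomic unsigned int processor; };

/* Take the runqueue lock of the CPU the vCPU currently sits on. */
static pthread_mutex_t *vcpu_lock(struct vcpu *v)
{
    for ( ; ; )
    {
        unsigned int cpu = atomic_load(&v->processor);
        pthread_mutex_t *lock = &runq_lock[cpu];

        pthread_mutex_lock(lock);
        if ( cpu == atomic_load(&v->processor) )
            return lock;                /* still the right lock */
        pthread_mutex_unlock(lock);     /* vCPU moved: retry */
    }
}

int main(void)
{
    struct vcpu v = { .processor = 2 };
    pthread_mutex_t *lock = vcpu_lock(&v);

    atomic_store(&v.processor, 3);      /* reassign the vCPU... */
    pthread_mutex_unlock(lock);         /* ...so the old lock is stale */
    lock = vcpu_lock(&v);               /* reacquire, as the patch does */
    pthread_mutex_unlock(lock);
    return 0;
}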