From patchwork Wed Oct 23 12:12:09 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11206479
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Wed, 23 Oct 2019 14:12:09 +0200
Message-Id: <20191023121209.4814-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
Subject: [Xen-devel] [PATCH] xen/pvhsim: fix cpu onlining
Cc: Juergen Gross, Wei Liu, George Dunlap, Andrew Cooper, Dario Faggioli, Jan Beulich, Roger Pau Monné

Since commit 8d3c326f6756d1 ("xen: let vcpu_create() select processor")
the initial processor for all pv-shim vcpus is 0, as no other cpus are
online when the vcpus are created. Before that commit the vcpus were
assigned processors that were not yet online, which worked only by
chance.

When a pv-shim vcpu becomes active, its hard affinity no longer matches
its initial processor assignment, leading to failing ASSERT()s or other
problems depending on the selected scheduler.

Fix this by setting the affinity after onlining the cpu but before
bringing the vcpu up. For vcpu 0 this still happens in
sched_setup_dom0_vcpus(); for the other vcpus the affinity setting
there can be dropped.
Fixes: 8d3c326f6756d1 ("xen: let vcpu_create() select processor")
Reported-by: Sergey Dyasli
Tested-by: Sergey Dyasli
Signed-off-by: Juergen Gross
Reviewed-by: Roger Pau Monné
Acked-by: Jan Beulich
---
 xen/arch/x86/pv/shim.c |  2 ++
 xen/common/schedule.c  | 11 +++++------
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 5edbcd9ac5..4329eaaefe 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -837,6 +837,8 @@ long pv_shim_cpu_up(void *data)
                     v->vcpu_id, rc);
             return rc;
         }
+
+        vcpu_set_hard_affinity(v, cpumask_of(v->vcpu_id));
     }
 
     wake = test_and_clear_bit(_VPF_down, &v->pause_flags);
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c327c40b92..326f4d3601 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -3102,13 +3102,12 @@ void __init sched_setup_dom0_vcpus(struct domain *d)
     for ( i = 1; i < d->max_vcpus; i++ )
         vcpu_create(d, i);
 
-    for_each_sched_unit ( d, unit )
+    if ( pv_shim )
+        sched_set_affinity(d->vcpu[0]->sched_unit,
+                           cpumask_of(0), cpumask_of(0));
+    else
     {
-        unsigned int id = unit->unit_id;
-
-        if ( pv_shim )
-            sched_set_affinity(unit, cpumask_of(id), cpumask_of(id));
-        else
+        for_each_sched_unit ( d, unit )
         {
             if ( !opt_dom0_vcpus_pin && !dom0_affinity_relaxed )
                 sched_set_affinity(unit, &dom0_cpus, NULL);