From patchwork Mon Mar 20 13:10:33 2017
X-Patchwork-Submitter: Wei Liu <wei.liu2@citrix.com>
X-Patchwork-Id: 9634139
Date: Mon, 20 Mar 2017 13:10:33 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Jan Beulich
Message-ID: <20170320131033.72avwxs6kgb4ogg7@citrix.com>
References: <20170316175458.22261-1-wei.liu2@citrix.com>
 <20170316175458.22261-3-wei.liu2@citrix.com>
 <58CFBFC10200007800144E53@prv-mh.provo.novell.com>
In-Reply-To: <58CFBFC10200007800144E53@prv-mh.provo.novell.com>
Cc: Andrew Cooper, Wei Liu, Xen-devel, Roger Pau Monné
Subject: Re: [Xen-devel] [PATCH v2 2/4] x86: split PV dom0 builder to pv/dom0_builder.c
List-Id: Xen developer discussion

On Mon, Mar 20, 2017 at 04:40:49AM -0600, Jan Beulich wrote:
> >>> On 16.03.17 at 18:54, <wei.liu2@citrix.com> wrote:
> > @@ -154,11 +155,11 @@ static void __init parse_dom0_nodes(const char *s)
> >  }
> >  custom_param("dom0_nodes", parse_dom0_nodes);
> >  
> > -static cpumask_t __initdata dom0_cpus;
> > +cpumask_t __initdata dom0_cpus;
> 
> I'd prefer if this variable remained static, and I think this is doable:
> 
> > -static struct vcpu *__init setup_dom0_vcpu(struct domain *d,
> > -                                           unsigned int vcpu_id,
> > -                                           unsigned int cpu)
> > +struct vcpu *__init dom0_setup_vcpu(struct domain *d,
> > +                                    unsigned int vcpu_id,
> > +                                    unsigned int cpu)
> >  {
> 
> It's needed only by the callers of this function afaics, and the
> cpumask_first() / cpumask_cycle() invocations could easily move
> into the function, with the callers updating their "cpu" variables
> from v->processor (with v assumed to be a variable to store the
> return value of the function, and checked to be non-NULL).

Like this?

From 6b066814a424fdaf9ee0a1d2afc6b0765961e932 Mon Sep 17 00:00:00 2001
From: Wei Liu <wei.liu2@citrix.com>
Date: Mon, 20 Mar 2017 13:05:08 +0000
Subject: [PATCH] x86: modify setup_dom0_vcpu to keep dom0_cpus static

We will later move the dom0 builders to a different directory. To avoid
the need to make dom0_cpus visible outside its original file, modify
setup_dom0_vcpu to cycle through dom0_cpus internally instead of
relying on the callers to do that.

No functional change.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/dom0_build.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 1c723c9ef1..102d3daea1 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -158,8 +158,9 @@ static cpumask_t __initdata dom0_cpus;
 
 static struct vcpu *__init setup_dom0_vcpu(struct domain *d,
                                            unsigned int vcpu_id,
-                                           unsigned int cpu)
+                                           unsigned int prev_cpu)
 {
+    unsigned int cpu = cpumask_cycle(prev_cpu, &dom0_cpus);
     struct vcpu *v = alloc_vcpu(d, vcpu_id, cpu);
 
     if ( v )
@@ -215,7 +216,8 @@ struct vcpu *__init alloc_dom0_vcpu0(struct domain *dom0)
         return NULL;
 
     dom0->max_vcpus = max_vcpus;
-    return setup_dom0_vcpu(dom0, 0, cpumask_first(&dom0_cpus));
+    return setup_dom0_vcpu(dom0, 0,
+                           cpumask_last(&dom0_cpus) /* so it wraps around to first pcpu */);
 }
 
 #ifdef CONFIG_SHADOW_PAGING
@@ -1155,8 +1157,11 @@ static int __init construct_dom0_pv(
     cpu = v->processor;
     for ( i = 1; i < d->max_vcpus; i++ )
     {
-        cpu = cpumask_cycle(cpu, &dom0_cpus);
-        setup_dom0_vcpu(d, i, cpu);
+        struct vcpu *p = setup_dom0_vcpu(d, i, cpu);
+        if ( !p )
+            panic("Cannot allocate vcpu%u for Dom0", i);
+
+        cpu = p->processor;
     }
 
     d->arch.paging.mode = 0;
@@ -1902,8 +1907,11 @@ static int __init pvh_setup_cpus(struct domain *d, paddr_t entry,
     cpu = v->processor;
     for ( i = 1; i < d->max_vcpus; i++ )
     {
-        cpu = cpumask_cycle(cpu, &dom0_cpus);
-        setup_dom0_vcpu(d, i, cpu);
+        struct vcpu *p = setup_dom0_vcpu(d, i, cpu);
+        if ( !p )
+            panic("Cannot allocate vcpu%u for Dom0", i);
+
+        cpu = p->processor;
     }
 
     rc = arch_set_info_hvm_guest(v, &cpu_ctx);
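
(For anyone puzzled by the cpumask_last() call in alloc_dom0_vcpu0()
above: cpumask_cycle() returns the next set bit strictly after its
first argument, wrapping around, so seeding vcpu0 with the last pcpu
in dom0_cpus makes the cycle inside setup_dom0_vcpu() land on the
first pcpu. Below is a minimal, self-contained sketch of that
wrap-around behaviour; mask_last() and mask_cycle() are toy stand-ins
operating on a plain unsigned long, not Xen's real cpumask helpers
from xen/include/xen/cpumask.h.)

#include <assert.h>
#include <stdio.h>

#define NBITS (sizeof(unsigned long) * 8)

/* Toy stand-in for cpumask_last(): highest set bit in the mask. */
static unsigned int mask_last(unsigned long mask)
{
    unsigned int last = 0;

    for ( unsigned int i = 0; i < NBITS; i++ )
        if ( mask & (1UL << i) )
            last = i;

    return last;
}

/*
 * Toy stand-in for cpumask_cycle(): the next set bit strictly after
 * n, wrapping around to the lowest set bit.
 */
static unsigned int mask_cycle(unsigned int n, unsigned long mask)
{
    for ( unsigned int i = 1; i <= NBITS; i++ )
    {
        unsigned int cpu = (n + i) % NBITS;

        if ( mask & (1UL << cpu) )
            return cpu;
    }

    return 0; /* not reached for a non-empty mask */
}

int main(void)
{
    unsigned long dom0_cpus = 0xb4UL; /* pcpus 2, 4, 5 and 7 */
    unsigned int cpu;

    /* Seeding with the last pcpu wraps the cycle to the first one... */
    cpu = mask_cycle(mask_last(dom0_cpus), dom0_cpus);
    assert(cpu == 2); /* ...so vcpu0 lands on pcpu2 */

    /* ...and later vcpus walk the mask in order: 4, 5, 7, 2, 4, ... */
    for ( unsigned int i = 1; i < 6; i++ )
    {
        cpu = mask_cycle(cpu, dom0_cpus);
        printf("vcpu%u -> pcpu%u\n", i, cpu);
    }

    return 0;
}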