From patchwork Fri Jul 26 06:25:51 2019
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 11060363
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: George Dunlap
Date: Fri, 26 Jul 2019 08:25:51 +0200
Message-ID: <156412235104.2385.3911161728130674771.stgit@Palanthas>
In-Reply-To: <156412188377.2385.12588508835559819141.stgit@Palanthas>
References: <156412188377.2385.12588508835559819141.stgit@Palanthas>
User-Agent: StGit/0.17.1-dirty
Subject: [Xen-devel] [PATCH v2 1/4] xen: sched: refactor code around vcpu_deassign() in null scheduler

vcpu_deassign() is called only once, in _vcpu_remove(). Let's consolidate
the two functions into one. No functional change intended.
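
For reference, the consolidated function ends up with roughly the shape
below (an outline condensed from the diff that follows, with the ASSERT
and the tracing code elided): first the pCPU is freed, then, as
_vcpu_remove() used to do, the waitqueue is scanned for a suitable
replacement vcpu.

static void vcpu_deassign(struct null_private *prv, struct vcpu *v)
{
    unsigned int bs;
    unsigned int cpu = v->processor;
    struct null_vcpu *wvc;

    /* What vcpu_deassign() did already: free up v's pCPU. */
    per_cpu(npc, cpu).vcpu = NULL;
    cpumask_set_cpu(cpu, &prv->cpus_free);

    /* ... tracing elided ... */

    /*
     * What _vcpu_remove() used to add on top: pick a waiter that can
     * run on cpu (soft-affinity first) and hand the pCPU over to it.
     */
    spin_lock(&prv->waitq_lock);
    for_each_affinity_balance_step( bs )
    {
        list_for_each_entry( wvc, &prv->waitq, waitq_elem )
        {
            if ( bs == BALANCE_SOFT_AFFINITY && !has_soft_affinity(wvc->vcpu) )
                continue;

            if ( vcpu_check_affinity(wvc->vcpu, cpu, bs) )
            {
                list_del_init(&wvc->waitq_elem);
                vcpu_assign(prv, wvc->vcpu, cpu);
                cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
                spin_unlock(&prv->waitq_lock);
                return;
            }
        }
    }
    spin_unlock(&prv->waitq_lock);
}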
Signed-off-by: Dario Faggioli
Acked-by: George Dunlap
---
 xen/common/sched_null.c | 76 ++++++++++++++++++++++-------------------
 1 file changed, 35 insertions(+), 41 deletions(-)

diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index c02c1b9c1f..c47c1b5aae 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -358,9 +358,14 @@ static void vcpu_assign(struct null_private *prv, struct vcpu *v,
     }
 }
 
-static void vcpu_deassign(struct null_private *prv, struct vcpu *v,
-                          unsigned int cpu)
+static void vcpu_deassign(struct null_private *prv, struct vcpu *v)
 {
+    unsigned int bs;
+    unsigned int cpu = v->processor;
+    struct null_vcpu *wvc;
+
+    ASSERT(list_empty(&null_vcpu(v)->waitq_elem));
+
     per_cpu(npc, cpu).vcpu = NULL;
     cpumask_set_cpu(cpu, &prv->cpus_free);
 
@@ -377,6 +382,32 @@ static void vcpu_deassign(struct null_private *prv, struct vcpu *v,
         d.cpu = cpu;
         __trace_var(TRC_SNULL_VCPU_DEASSIGN, 1, sizeof(d), &d);
     }
+
+    spin_lock(&prv->waitq_lock);
+
+    /*
+     * If v is assigned to a pCPU, let's see if there is someone waiting,
+     * suitable to be assigned to it (prioritizing vcpus that have
+     * soft-affinity with cpu).
+     */
+    for_each_affinity_balance_step( bs )
+    {
+        list_for_each_entry( wvc, &prv->waitq, waitq_elem )
+        {
+            if ( bs == BALANCE_SOFT_AFFINITY && !has_soft_affinity(wvc->vcpu) )
+                continue;
+
+            if ( vcpu_check_affinity(wvc->vcpu, cpu, bs) )
+            {
+                list_del_init(&wvc->waitq_elem);
+                vcpu_assign(prv, wvc->vcpu, cpu);
+                cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
+                spin_unlock(&prv->waitq_lock);
+                return;
+            }
+        }
+    }
+    spin_unlock(&prv->waitq_lock);
 }
 
 /* Change the scheduler of cpu to us (null). */
@@ -459,43 +490,6 @@ static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
     SCHED_STAT_CRANK(vcpu_insert);
 }
 
-static void _vcpu_remove(struct null_private *prv, struct vcpu *v)
-{
-    unsigned int bs;
-    unsigned int cpu = v->processor;
-    struct null_vcpu *wvc;
-
-    ASSERT(list_empty(&null_vcpu(v)->waitq_elem));
-
-    vcpu_deassign(prv, v, cpu);
-
-    spin_lock(&prv->waitq_lock);
-
-    /*
-     * If v is assigned to a pCPU, let's see if there is someone waiting,
-     * suitable to be assigned to it (prioritizing vcpus that have
-     * soft-affinity with cpu).
-     */
-    for_each_affinity_balance_step( bs )
-    {
-        list_for_each_entry( wvc, &prv->waitq, waitq_elem )
-        {
-            if ( bs == BALANCE_SOFT_AFFINITY && !has_soft_affinity(wvc->vcpu) )
-                continue;
-
-            if ( vcpu_check_affinity(wvc->vcpu, cpu, bs) )
-            {
-                list_del_init(&wvc->waitq_elem);
-                vcpu_assign(prv, wvc->vcpu, cpu);
-                cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
-                spin_unlock(&prv->waitq_lock);
-                return;
-            }
-        }
-    }
-    spin_unlock(&prv->waitq_lock);
-}
-
 static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
 {
     struct null_private *prv = null_priv(ops);
@@ -519,7 +513,7 @@ static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
     ASSERT(per_cpu(npc, v->processor).vcpu == v);
     ASSERT(!cpumask_test_cpu(v->processor, &prv->cpus_free));
 
-    _vcpu_remove(prv, v);
+    vcpu_deassign(prv, v);
 
  out:
     vcpu_schedule_unlock_irq(lock, v);
@@ -605,7 +599,7 @@ static void null_vcpu_migrate(const struct scheduler *ops, struct vcpu *v,
      */
     if ( likely(list_empty(&nvc->waitq_elem)) )
     {
-        _vcpu_remove(prv, v);
+        vcpu_deassign(prv, v);
         SCHED_STAT_CRANK(migrate_running);
     }
     else