From patchwork Fri Jul 15 18:02:00 2016
X-Patchwork-Submitter: George Dunlap
X-Patchwork-Id: 9232401
From: George Dunlap
To: xen-devel@lists.xenproject.org
Date: Fri, 15 Jul 2016 19:02:00 +0100
Message-ID: <1468605722-24239-1-git-send-email-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.1.4
Cc: Dario Faggioli, George Dunlap, Anshul Makkar, Meng Xu
Subject: [Xen-devel] [PATCH 1/3] xen: Some code motion to avoid having to do forward-declaration
List-Id: Xen developer discussion

For sched_credit2, move the vcpu insert / remove / free
functions near the domain insert / remove / alloc / free functions
(and after cpu_pick).

For sched_rt, move rt_cpu_pick() further up.

This is pure code motion; no functional change.

Signed-off-by: George Dunlap
Reviewed-by: Meng Xu
Acked-by: Dario Faggioli
---
CC: Dario Faggioli
CC: Anshul Makkar
CC: Meng Xu
---
 xen/common/sched_credit2.c | 118 ++++++++++++++++++++++-----------------------
 xen/common/sched_rt.c      |  46 +++++++++---------
 2 files changed, 82 insertions(+), 82 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 8b95a47..3b9aa27 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -971,65 +971,6 @@ runq_deassign(const struct scheduler *ops, struct vcpu *vc)
 }
 
 static void
-csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
-{
-    struct csched2_vcpu *svc = vc->sched_priv;
-    struct csched2_dom * const sdom = svc->sdom;
-    spinlock_t *lock;
-
-    printk("%s: Inserting %pv\n", __func__, vc);
-
-    BUG_ON(is_idle_vcpu(vc));
-
-    /* Add vcpu to runqueue of initial processor */
-    lock = vcpu_schedule_lock_irq(vc);
-
-    runq_assign(ops, vc);
-
-    vcpu_schedule_unlock_irq(lock, vc);
-
-    sdom->nr_vcpus++;
-
-    SCHED_STAT_CRANK(vcpu_insert);
-
-    CSCHED2_VCPU_CHECK(vc);
-}
-
-static void
-csched2_free_vdata(const struct scheduler *ops, void *priv)
-{
-    struct csched2_vcpu *svc = priv;
-
-    xfree(svc);
-}
-
-static void
-csched2_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
-{
-    struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
-    struct csched2_dom * const sdom = svc->sdom;
-
-    BUG_ON( sdom == NULL );
-    BUG_ON( !list_empty(&svc->runq_elem) );
-
-    if ( ! is_idle_vcpu(vc) )
-    {
-        spinlock_t *lock;
-
-        SCHED_STAT_CRANK(vcpu_remove);
-
-        /* Remove from runqueue */
-        lock = vcpu_schedule_lock_irq(vc);
-
-        runq_deassign(ops, vc);
-
-        vcpu_schedule_unlock_irq(lock, vc);
-
-        svc->sdom->nr_vcpus--;
-    }
-}
-
-static void
 csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
@@ -1668,6 +1609,65 @@ csched2_dom_destroy(const struct scheduler *ops, struct domain *dom)
     csched2_free_domdata(ops, CSCHED2_DOM(dom));
 }
 
+static void
+csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
+{
+    struct csched2_vcpu *svc = vc->sched_priv;
+    struct csched2_dom * const sdom = svc->sdom;
+    spinlock_t *lock;
+
+    printk("%s: Inserting %pv\n", __func__, vc);
+
+    BUG_ON(is_idle_vcpu(vc));
+
+    /* Add vcpu to runqueue of initial processor */
+    lock = vcpu_schedule_lock_irq(vc);
+
+    runq_assign(ops, vc);
+
+    vcpu_schedule_unlock_irq(lock, vc);
+
+    sdom->nr_vcpus++;
+
+    SCHED_STAT_CRANK(vcpu_insert);
+
+    CSCHED2_VCPU_CHECK(vc);
+}
+
+static void
+csched2_free_vdata(const struct scheduler *ops, void *priv)
+{
+    struct csched2_vcpu *svc = priv;
+
+    xfree(svc);
+}
+
+static void
+csched2_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
+{
+    struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
+    struct csched2_dom * const sdom = svc->sdom;
+
+    BUG_ON( sdom == NULL );
+    BUG_ON( !list_empty(&svc->runq_elem) );
+
+    if ( ! is_idle_vcpu(vc) )
+    {
+        spinlock_t *lock;
+
+        SCHED_STAT_CRANK(vcpu_remove);
+
+        /* Remove from runqueue */
+        lock = vcpu_schedule_lock_irq(vc);
+
+        runq_deassign(ops, vc);
+
+        vcpu_schedule_unlock_irq(lock, vc);
+
+        svc->sdom->nr_vcpus--;
+    }
+}
+
 /* How long should we let this vcpu run for? */
 static s_time_t
 csched2_runtime(const struct scheduler *ops, int cpu, struct csched2_vcpu *snext)
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 98524a6..bd3a2a0 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -582,6 +582,29 @@ replq_reinsert(const struct scheduler *ops, struct rt_vcpu *svc)
 }
 
 /*
+ * Pick a valid CPU for the vcpu vc
+ * Valid CPU of a vcpu is intesection of vcpu's affinity
+ * and available cpus
+ */
+static int
+rt_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
+{
+    cpumask_t cpus;
+    cpumask_t *online;
+    int cpu;
+
+    online = cpupool_domain_cpumask(vc->domain);
+    cpumask_and(&cpus, online, vc->cpu_hard_affinity);
+
+    cpu = cpumask_test_cpu(vc->processor, &cpus)
+            ? vc->processor
+            : cpumask_cycle(vc->processor, &cpus);
+    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
+
+    return cpu;
+}
+
+/*
  * Init/Free related code
  */
 static int
@@ -894,29 +917,6 @@ rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
 }
 
 /*
- * Pick a valid CPU for the vcpu vc
- * Valid CPU of a vcpu is intesection of vcpu's affinity
- * and available cpus
- */
-static int
-rt_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
-{
-    cpumask_t cpus;
-    cpumask_t *online;
-    int cpu;
-
-    online = cpupool_domain_cpumask(vc->domain);
-    cpumask_and(&cpus, online, vc->cpu_hard_affinity);
-
-    cpu = cpumask_test_cpu(vc->processor, &cpus)
-            ? vc->processor
-            : cpumask_cycle(vc->processor, &cpus);
-    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
-
-    return cpu;
-}
-
-/*
  * Burn budget in nanosecond granularity
  */
 static void