From patchwork Wed May 11 15:20:16 2016
From: Tianyang Chen <tiche@seas.upenn.edu>
To: xen-devel@lists.xenproject.org
Date: Wed, 11 May 2016 11:20:16 -0400
Message-Id: <1462980016-4854-1-git-send-email-tiche@seas.upenn.edu>
Cc: dario.faggioli@citrix.com, Tianyang Chen <tiche@seas.upenn.edu>,
    george.dunlap@citrix.com, mengxu@cis.upenn.edu
Subject: [Xen-devel] [PATCH] xen: sched: rtds: refactor code

No functional change:
 - Various coding style fixes
 - Added comments for UPDATE_LIMIT_SHIFT

Use non-atomic bit-ops:
 - Vcpu flags are checked and cleared atomically. Performance can be
   improved with the corresponding non-atomic versions, since schedule.c
   already has spin_locks in place.

Suggested-by: Dario Faggioli <dario.faggioli@citrix.com>
Signed-off-by: Tianyang Chen <tiche@seas.upenn.edu>
---
 xen/common/sched_rt.c | 122 ++++++++++++++++++++++++++-----------------------
 1 file changed, 64 insertions(+), 58 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 7f8f411..1a18f6d 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -80,7 +80,7 @@
  * in schedule.c
  *
  * The functions involes RunQ and needs to grab locks are:
- *    vcpu_insert, vcpu_remove, context_saved, __runq_insert
+ *    vcpu_insert, vcpu_remove, context_saved, runq_insert
  */
@@ -107,6 +107,12 @@
  */
 #define RTDS_MIN_BUDGET     (MICROSECS(10))
 
+/*
+ * UPDATE_LIMIT_SHIFT: a constant used in rt_update_deadline(). When finding
+ * the next deadline, performing addition could be faster if the difference
+ * between cur_deadline and now is small. If the difference is bigger than
+ * 1024 * period, use multiplication.
+ */
 #define UPDATE_LIMIT_SHIFT      10
 
 /*
@@ -158,25 +164,25 @@ static void repl_timer_handler(void *data);
 
 /*
- * Systme-wide private data, include global RunQueue/DepletedQ
+ * System-wide private data, include global RunQueue/DepletedQ
  * Global lock is referenced by schedule_data.schedule_lock from all
  * physical cpus. It can be grabbed via vcpu_schedule_lock_irq()
  */
 struct rt_private {
-    spinlock_t lock;            /* the global coarse grand lock */
-    struct list_head sdom;      /* list of availalbe domains, used for dump */
-    struct list_head runq;      /* ordered list of runnable vcpus */
-    struct list_head depletedq; /* unordered list of depleted vcpus */
-    struct list_head replq;     /* ordered list of vcpus that need replenishment */
-    cpumask_t tickled;          /* cpus been tickled */
-    struct timer *repl_timer;   /* replenishment timer */
+    spinlock_t lock;             /* the global coarse grand lock */
+    struct list_head sdom;       /* list of available domains, used for dump */
+    struct list_head runq;       /* ordered list of runnable vcpus */
+    struct list_head depletedq;  /* unordered list of depleted vcpus */
+    struct list_head replq;      /* ordered list of vcpus that need replenishment */
+    cpumask_t tickled;           /* cpus been tickled */
+    struct timer *repl_timer;    /* replenishment timer */
 };
 
 /*
  * Virtual CPU
  */
 struct rt_vcpu {
-    struct list_head q_elem;    /* on the runq/depletedq list */
+    struct list_head q_elem;     /* on the runq/depletedq list */
     struct list_head replq_elem; /* on the replenishment events list */
 
     /* Up-pointers */
@@ -188,19 +194,19 @@ struct rt_vcpu {
     s_time_t budget;
 
     /* VCPU current infomation in nanosecond */
-    s_time_t cur_budget;        /* current budget */
-    s_time_t last_start;        /* last start time */
-    s_time_t cur_deadline;      /* current deadline for EDF */
+    s_time_t cur_budget;         /* current budget */
+    s_time_t last_start;         /* last start time */
+    s_time_t cur_deadline;       /* current deadline for EDF */
 
-    unsigned flags;             /* mark __RTDS_scheduled, etc.. */
+    unsigned flags;              /* mark __RTDS_scheduled, etc.. */
 };
 
 /*
  * Domain
  */
 struct rt_dom {
-    struct list_head sdom_elem; /* link list on rt_priv */
-    struct domain *dom;         /* pointer to upper domain */
+    struct list_head sdom_elem;  /* link list on rt_priv */
+    struct domain *dom;          /* pointer to upper domain */
 };
 
 /*
@@ -241,13 +247,13 @@ static inline struct list_head *rt_replq(const struct scheduler *ops)
  * and the replenishment events queue.
  */
 static int
-__vcpu_on_q(const struct rt_vcpu *svc)
+vcpu_on_q(const struct rt_vcpu *svc)
 {
     return !list_empty(&svc->q_elem);
 }
 
 static struct rt_vcpu *
-__q_elem(struct list_head *elem)
+q_elem(struct list_head *elem)
 {
     return list_entry(elem, struct rt_vcpu, q_elem);
 }
@@ -303,7 +309,7 @@ rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
             svc->cur_budget,
             svc->cur_deadline,
             svc->last_start,
-            __vcpu_on_q(svc),
+            vcpu_on_q(svc),
             vcpu_runnable(svc->vcpu),
             svc->flags,
             keyhandler_scratch);
@@ -339,28 +345,28 @@ rt_dump(const struct scheduler *ops)
     replq = rt_replq(ops);
 
     printk("Global RunQueue info:\n");
-    list_for_each( iter, runq )
+    list_for_each ( iter, runq )
     {
-        svc = __q_elem(iter);
+        svc = q_elem(iter);
         rt_dump_vcpu(ops, svc);
     }
 
     printk("Global DepletedQueue info:\n");
-    list_for_each( iter, depletedq )
+    list_for_each ( iter, depletedq )
     {
-        svc = __q_elem(iter);
+        svc = q_elem(iter);
         rt_dump_vcpu(ops, svc);
     }
 
     printk("Global Replenishment Events info:\n");
-    list_for_each( iter, replq )
+    list_for_each ( iter, replq )
     {
         svc = replq_elem(iter);
         rt_dump_vcpu(ops, svc);
     }
 
     printk("Domain info:\n");
-    list_for_each( iter, &prv->sdom )
+    list_for_each ( iter, &prv->sdom )
     {
         struct vcpu *v;
@@ -380,7 +386,7 @@ rt_dump(const struct scheduler *ops)
 
 /*
  * update deadline and budget when now >= cur_deadline
- * it need to be updated to the deadline of the current period
+ * it needs to be updated to the deadline of the current period
  */
 static void
 rt_update_deadline(s_time_t now, struct rt_vcpu *svc)
@@ -463,14 +469,14 @@ deadline_queue_insert(struct rt_vcpu * (*qelem)(struct list_head *),
     return !pos;
 }
 #define deadline_runq_insert(...) \
-    deadline_queue_insert(&__q_elem, ##__VA_ARGS__)
+    deadline_queue_insert(&q_elem, ##__VA_ARGS__)
 #define deadline_replq_insert(...) \
     deadline_queue_insert(&replq_elem, ##__VA_ARGS__)
 
 static inline void
-__q_remove(struct rt_vcpu *svc)
+q_remove(struct rt_vcpu *svc)
 {
-    ASSERT( __vcpu_on_q(svc) );
+    ASSERT( vcpu_on_q(svc) );
     list_del_init(&svc->q_elem);
 }
@@ -506,13 +512,13 @@ replq_remove(const struct scheduler *ops, struct rt_vcpu *svc)
  * Insert svc without budget in DepletedQ unsorted;
  */
 static void
-__runq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
+runq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
 {
     struct rt_private *prv = rt_priv(ops);
     struct list_head *runq = rt_runq(ops);
 
     ASSERT( spin_is_locked(&prv->lock) );
-    ASSERT( !__vcpu_on_q(svc) );
+    ASSERT( !vcpu_on_q(svc) );
     ASSERT( vcpu_on_replq(svc) );
 
     /* add svc to runq if svc still has budget */
@@ -840,12 +846,12 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
     if ( now >= svc->cur_deadline )
         rt_update_deadline(now, svc);
 
-    if ( !__vcpu_on_q(svc) && vcpu_runnable(vc) )
+    if ( !vcpu_on_q(svc) && vcpu_runnable(vc) )
     {
         replq_insert(ops, svc);
 
         if ( !vc->is_running )
-            __runq_insert(ops, svc);
+            runq_insert(ops, svc);
     }
     vcpu_schedule_unlock_irq(lock, vc);
@@ -867,8 +873,8 @@ rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
     BUG_ON( sdom == NULL );
 
     lock = vcpu_schedule_lock_irq(vc);
-    if ( __vcpu_on_q(svc) )
-        __q_remove(svc);
+    if ( vcpu_on_q(svc) )
+        q_remove(svc);
 
     if ( vcpu_on_replq(svc) )
         replq_remove(ops,svc);
@@ -930,7 +936,7 @@ burn_budget(const struct scheduler *ops, struct rt_vcpu *svc, s_time_t now)
     if ( svc->cur_budget <= 0 )
     {
         svc->cur_budget = 0;
-        set_bit(__RTDS_depleted, &svc->flags);
+        __set_bit(__RTDS_depleted, &svc->flags);
     }
 
     /* TRACE */
@@ -955,7 +961,7 @@ burn_budget(const struct scheduler *ops, struct rt_vcpu *svc, s_time_t now)
  * lock is grabbed before calling this function
  */
 static struct rt_vcpu *
-__runq_pick(const struct scheduler *ops, const cpumask_t *mask)
+runq_pick(const struct scheduler *ops, const cpumask_t *mask)
 {
     struct list_head *runq = rt_runq(ops);
     struct list_head *iter;
@@ -964,9 +970,9 @@ __runq_pick(const struct scheduler *ops, const cpumask_t *mask)
     cpumask_t cpu_common;
     cpumask_t *online;
 
-    list_for_each(iter, runq)
+    list_for_each ( iter, runq )
     {
-        iter_svc = __q_elem(iter);
+        iter_svc = q_elem(iter);
 
         /* mask cpu_hard_affinity & cpupool & mask */
         online = cpupool_domain_cpumask(iter_svc->vcpu->domain);
@@ -1028,7 +1034,7 @@ rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
     }
     else
     {
-        snext = __runq_pick(ops, cpumask_of(cpu));
+        snext = runq_pick(ops, cpumask_of(cpu));
         if ( snext == NULL )
             snext = rt_vcpu(idle_vcpu[cpu]);
@@ -1044,7 +1050,7 @@
     if ( snext != scurr &&
          !is_idle_vcpu(current) &&
          vcpu_runnable(current) )
-        set_bit(__RTDS_delayed_runq_add, &scurr->flags);
+        __set_bit(__RTDS_delayed_runq_add, &scurr->flags);
 
     snext->last_start = now;
     ret.time =  -1; /* if an idle vcpu is picked */
@@ -1052,8 +1058,8 @@
     {
         if ( snext != scurr )
         {
-            __q_remove(snext);
-            set_bit(__RTDS_scheduled, &snext->flags);
+            q_remove(snext);
+            __set_bit(__RTDS_scheduled, &snext->flags);
         }
         if ( snext->vcpu->processor != cpu )
         {
@@ -1081,13 +1087,13 @@ rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
 
     if ( curr_on_cpu(vc->processor) == vc )
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
-    else if ( __vcpu_on_q(svc) )
+    else if ( vcpu_on_q(svc) )
    {
-        __q_remove(svc);
+        q_remove(svc);
         replq_remove(ops, svc);
     }
     else if ( svc->flags & RTDS_delayed_runq_add )
-        clear_bit(__RTDS_delayed_runq_add, &svc->flags);
+        __clear_bit(__RTDS_delayed_runq_add, &svc->flags);
 }
@@ -1201,7 +1207,7 @@ rt_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
     }
 
     /* on RunQ/DepletedQ, just update info is ok */
-    if ( unlikely(__vcpu_on_q(svc)) )
+    if ( unlikely(vcpu_on_q(svc)) )
     {
         SCHED_STAT_CRANK(vcpu_wake_onrunq);
         return;
     }
@@ -1229,7 +1235,7 @@
      */
     if ( unlikely(svc->flags & RTDS_scheduled) )
     {
-        set_bit(__RTDS_delayed_runq_add, &svc->flags);
+        __set_bit(__RTDS_delayed_runq_add, &svc->flags);
         /*
          * The vcpu is waking up already, and we didn't even had the time to
          * remove its next replenishment event from the replenishment queue
@@ -1245,7 +1251,7 @@
     /* Replenishment event got cancelled when we blocked. Add it back. */
     replq_insert(ops, svc);
     /* insert svc to runq/depletedq because svc is not in queue now */
-    __runq_insert(ops, svc);
+    runq_insert(ops, svc);
 
     runq_tickle(ops, svc);
 }
@@ -1260,15 +1266,15 @@ rt_context_saved(const struct scheduler *ops, struct vcpu *vc)
     struct rt_vcpu *svc = rt_vcpu(vc);
     spinlock_t *lock = vcpu_schedule_lock_irq(vc);
 
-    clear_bit(__RTDS_scheduled, &svc->flags);
+    __clear_bit(__RTDS_scheduled, &svc->flags);
     /* not insert idle vcpu to runq */
     if ( is_idle_vcpu(vc) )
         goto out;
 
-    if ( test_and_clear_bit(__RTDS_delayed_runq_add, &svc->flags) &&
+    if ( __test_and_clear_bit(__RTDS_delayed_runq_add, &svc->flags) &&
          likely(vcpu_runnable(vc)) )
     {
-        __runq_insert(ops, svc);
+        runq_insert(ops, svc);
         runq_tickle(ops, svc);
     }
     else
@@ -1414,10 +1420,10 @@ static void repl_timer_handler(void *data){
             rt_update_deadline(now, svc);
             list_add(&svc->replq_elem, &tmp_replq);
 
-            if ( __vcpu_on_q(svc) )
+            if ( vcpu_on_q(svc) )
             {
-                __q_remove(svc);
-                __runq_insert(ops, svc);
+                q_remove(svc);
+                runq_insert(ops, svc);
             }
         }
@@ -1435,13 +1441,13 @@ static void repl_timer_handler(void *data){
         if ( curr_on_cpu(svc->vcpu->processor) == svc->vcpu &&
              !list_empty(runq) )
         {
-            struct rt_vcpu *next_on_runq = __q_elem(runq->next);
+            struct rt_vcpu *next_on_runq = q_elem(runq->next);
 
             if ( svc->cur_deadline > next_on_runq->cur_deadline )
                 runq_tickle(ops, next_on_runq);
         }
-        else if ( __vcpu_on_q(svc) &&
-                  test_and_clear_bit(__RTDS_depleted, &svc->flags) )
+        else if ( vcpu_on_q(svc) &&
+                  __test_and_clear_bit(__RTDS_depleted, &svc->flags) )
             runq_tickle(ops, svc);
 
         list_del(&svc->replq_elem);
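
A note on the UPDATE_LIMIT_SHIFT comment added above: when a vcpu has missed
only a few periods, stepping cur_deadline forward by repeated addition is
cheaper than a 64-bit division, while a gap of more than 1024 periods is
better handled by computing the number of missed periods once. The standalone
sketch below illustrates that fast/slow split; it follows the idea of
rt_update_deadline(), but next_deadline(), the int64_t types, and the main()
driver are illustrative stand-ins, not the actual sched_rt.c code.

    #include <stdint.h>
    #include <stdio.h>

    #define UPDATE_LIMIT_SHIFT 10   /* 2^10 == 1024 periods */

    /* Assumes now >= cur_deadline on entry, as rt_update_deadline() does. */
    static int64_t next_deadline(int64_t cur_deadline, int64_t period,
                                 int64_t now)
    {
        if ( now - cur_deadline < (period << UPDATE_LIMIT_SHIFT) )
        {
            /* Missed fewer than 1024 periods: a few additions are cheap. */
            do
                cur_deadline += period;
            while ( cur_deadline <= now );
        }
        else
        {
            /* Long gap: one division + multiplication jumps past `now`. */
            int64_t missed = (now - cur_deadline) / period + 1;
            cur_deadline += missed * period;
        }
        return cur_deadline;
    }

    int main(void)
    {
        /* 100us period in ns; a deadline that is long overdue. */
        printf("%lld\n", (long long)next_deadline(0, 100000, 987654321));
        /* A deadline missed by only a couple of periods. */
        printf("%lld\n", (long long)next_deadline(0, 100000, 250000));
        return 0;
    }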
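
On the set_bit() -> __set_bit() conversions: the double-underscore variants
are plain, non-atomic read-modify-write operations (no LOCK prefix on x86).
They are safe here because every access to the flags word happens with the
scheduler spinlock held, so the atomic bit-ops only add serialization cost
without buying extra correctness. Below is a minimal sketch of that
reasoning, using a pthread mutex in place of the scheduler lock; vcpu_flags,
RTDS_SCHEDULED_BIT, and test_and_clear_scheduled() are made-up names for
illustration.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define RTDS_SCHEDULED_BIT  (1u << 0)   /* made-up flag bit */

    static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned int vcpu_flags;

    /*
     * A plain (non-atomic) read-modify-write on vcpu_flags is safe here
     * because every access is serialized by sched_lock -- the same
     * argument the patch makes for __set_bit()/__clear_bit() under the
     * RTDS scheduler lock.
     */
    static bool test_and_clear_scheduled(void)
    {
        bool was_set;

        pthread_mutex_lock(&sched_lock);
        was_set = vcpu_flags & RTDS_SCHEDULED_BIT;
        vcpu_flags &= ~RTDS_SCHEDULED_BIT;   /* plain store, no LOCK prefix */
        pthread_mutex_unlock(&sched_lock);

        return was_set;
    }

    int main(void)
    {
        pthread_mutex_lock(&sched_lock);
        vcpu_flags |= RTDS_SCHEDULED_BIT;    /* non-atomic set, lock held */
        pthread_mutex_unlock(&sched_lock);

        printf("was scheduled: %d\n", test_and_clear_scheduled());
        printf("was scheduled: %d\n", test_and_clear_scheduled());
        return 0;
    }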