From patchwork Thu Feb 9 13:58:45 2017
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9564635
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: George Dunlap, Anshul Makkar
Date: Thu, 09 Feb 2017 14:58:45 +0100
Message-ID: <148664872587.595.4960352148914014602.stgit@Solace.fritz.box>
In-Reply-To: <148664844741.595.10506268024432565895.stgit@Solace.fritz.box>
References: <148664844741.595.10506268024432565895.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Subject: [Xen-devel] [PATCH v2 04/10] xen: credit2: make accessor helpers inline functions instead of macros

There isn't any particular reason for the accessor helpers to be macros, so
turn them into 'static inline' functions, which are better (unlike macros,
they are type-checked by the compiler, and easier to read and debug).

Note that it is necessary to move the function definitions below the
structure declarations.

No functional change intended.

Signed-off-by: Dario Faggioli
---
Cc: George Dunlap
Cc: Anshul Makkar
---
 xen/common/sched_credit2.c | 156 +++++++++++++++++++++++++-------------------
 1 file changed, 89 insertions(+), 67 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index b482990..786dcca 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -209,18 +209,6 @@ static unsigned int __read_mostly opt_migrate_resist = 500;
 integer_param("sched_credit2_migrate_resist", opt_migrate_resist);
 
 /*
- * Useful macros
- */
-#define CSCHED2_PRIV(_ops) \
-    ((struct csched2_private *)((_ops)->sched_data))
-#define CSCHED2_VCPU(_vcpu) ((struct csched2_vcpu *) (_vcpu)->sched_priv)
-#define CSCHED2_DOM(_dom) ((struct csched2_dom *) (_dom)->sched_priv)
-/* CPU to runq_id macro */
-#define c2r(_ops, _cpu) (CSCHED2_PRIV(_ops)->runq_map[(_cpu)])
-/* CPU to runqueue struct macro */
-#define RQD(_ops, _cpu) (&CSCHED2_PRIV(_ops)->rqd[c2r(_ops, _cpu)])
-
-/*
  * Load tracking and load balancing
  *
  * Load history of runqueues and vcpus is accounted for by using an
@@ -441,6 +429,40 @@ struct csched2_dom {
 };
 
 /*
+ * Accessor helpers functions.
+ */
+static always_inline
+struct csched2_private *csched2_priv(const struct scheduler *ops)
+{
+    return ops->sched_data;
+}
+
+static always_inline
+struct csched2_vcpu *csched2_vcpu(struct vcpu *v)
+{
+    return v->sched_priv;
+}
+
+static always_inline
+struct csched2_dom *csched2_dom(struct domain *d)
+{
+    return d->sched_priv;
+}
+
+/* CPU to runq_id macro */
+static always_inline int c2r(const struct scheduler *ops, unsigned cpu)
+{
+    return (csched2_priv(ops))->runq_map[(cpu)];
+}
+
+/* CPU to runqueue struct macro */
+static always_inline
+struct csched2_runqueue_data *c2rqd(const struct scheduler *ops, unsigned cpu)
+{
+    return &csched2_priv(ops)->rqd[c2r(ops, cpu)];
+}
+
+/*
  * Hyperthreading (SMT) support.
  *
  * We use a special per-runq mask (smt_idle) and update it according to the
@@ -694,7 +716,7 @@ static void __update_runq_load(const struct scheduler *ops,
                               struct csched2_runqueue_data *rqd,
                               int change, s_time_t now)
 {
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
     s_time_t delta, load = rqd->load;
     unsigned int P, W;
 
@@ -781,7 +803,7 @@ static void __update_svc_load(const struct scheduler *ops,
                              struct csched2_vcpu *svc,
                              int change, s_time_t now)
 {
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
     s_time_t delta, vcpu_load;
     unsigned int P, W;
 
@@ -878,7 +900,7 @@ static void
 runq_insert(const struct scheduler *ops, struct csched2_vcpu *svc)
 {
     unsigned int cpu = svc->vcpu->processor;
-    struct list_head * runq = &RQD(ops, cpu)->runq;
+    struct list_head * runq = &c2rqd(ops, cpu)->runq;
     int pos = 0;
 
     ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
@@ -936,7 +958,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
     int i, ipid = -1;
     s_time_t lowest = (1<<30);
     unsigned int cpu = new->vcpu->processor;
-    struct csched2_runqueue_data *rqd = RQD(ops, cpu);
+    struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     cpumask_t mask;
     struct csched2_vcpu * cur;
 
@@ -1007,7 +1029,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
         cpumask_and(&mask, &mask, cpumask_scratch_cpu(cpu));
         if ( __cpumask_test_and_clear_cpu(cpu, &mask) )
         {
-            cur = CSCHED2_VCPU(curr_on_cpu(cpu));
+            cur = csched2_vcpu(curr_on_cpu(cpu));
             burn_credits(rqd, cur, now);
 
             if ( cur->credit < new->credit )
@@ -1023,7 +1045,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
         /* Already looked at this one above */
         ASSERT(i != cpu);
 
-        cur = CSCHED2_VCPU(curr_on_cpu(i));
+        cur = csched2_vcpu(curr_on_cpu(i));
 
         /*
          * Even if the cpu is not in rqd->idle, it may be running the
@@ -1096,7 +1118,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
 static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
                          struct csched2_vcpu *snext)
 {
-    struct csched2_runqueue_data *rqd = RQD(ops, cpu);
+    struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct list_head *iter;
     int m;
 
@@ -1174,7 +1196,7 @@ void burn_credits(struct csched2_runqueue_data *rqd,
 {
     s_time_t delta;
 
-    ASSERT(svc == CSCHED2_VCPU(curr_on_cpu(svc->vcpu->processor)));
+    ASSERT(svc == csched2_vcpu(curr_on_cpu(svc->vcpu->processor)));
 
     if ( unlikely(is_idle_vcpu(svc->vcpu)) )
     {
@@ -1261,11 +1283,11 @@ static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,
 static /*inline*/ void
 __csched2_vcpu_check(struct vcpu *vc)
 {
-    struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
+    struct csched2_vcpu * const svc = csched2_vcpu(vc);
     struct csched2_dom * const sdom = svc->sdom;
 
     BUG_ON( svc->vcpu != vc );
-    BUG_ON( sdom != CSCHED2_DOM(vc->domain) );
+    BUG_ON( sdom != csched2_dom(vc->domain) );
     if ( sdom )
     {
         BUG_ON( is_idle_vcpu(vc) );
@@ -1305,7 +1327,7 @@ csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
         svc->credit = CSCHED2_CREDIT_INIT;
         svc->weight = svc->sdom->weight;
         /* Starting load of 50% */
-        svc->avgload = 1ULL << (CSCHED2_PRIV(ops)->load_precision_shift - 1);
+        svc->avgload = 1ULL << (csched2_priv(ops)->load_precision_shift - 1);
         svc->load_last_update = NOW() >> LOADAVG_GRANULARITY_SHIFT;
     }
     else
@@ -1357,7 +1379,7 @@ runq_assign(const struct scheduler *ops, struct vcpu *vc)
 
     ASSERT(svc->rqd == NULL);
 
-    __runq_assign(svc, RQD(ops, vc->processor));
+    __runq_assign(svc, c2rqd(ops, vc->processor));
 }
 
 static void
@@ -1382,7 +1404,7 @@ runq_deassign(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched2_vcpu *svc = vc->sched_priv;
 
-    ASSERT(svc->rqd == RQD(ops, vc->processor));
+    ASSERT(svc->rqd == c2rqd(ops, vc->processor));
 
     __runq_deassign(svc);
 }
@@ -1390,7 +1412,7 @@ runq_deassign(const struct scheduler *ops, struct vcpu *vc)
 static void
 csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
+    struct csched2_vcpu * const svc = csched2_vcpu(vc);
 
     ASSERT(!is_idle_vcpu(vc));
     SCHED_STAT_CRANK(vcpu_sleep);
@@ -1399,7 +1421,7 @@ csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
     else if ( __vcpu_on_runq(svc) )
     {
-        ASSERT(svc->rqd == RQD(ops, vc->processor));
+        ASSERT(svc->rqd == c2rqd(ops, vc->processor));
         update_load(ops, svc->rqd, svc, -1, NOW());
         __runq_remove(svc);
     }
@@ -1410,7 +1432,7 @@ csched2_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
 static void
 csched2_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
+    struct csched2_vcpu * const svc = csched2_vcpu(vc);
     unsigned int cpu = vc->processor;
     s_time_t now;
 
@@ -1448,7 +1470,7 @@ csched2_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
     if ( svc->rqd == NULL )
         runq_assign(ops, vc);
     else
-        ASSERT(RQD(ops, vc->processor) == svc->rqd );
+        ASSERT(c2rqd(ops, vc->processor) == svc->rqd );
 
     now = NOW();
 
@@ -1465,7 +1487,7 @@ out:
 static void
 csched2_vcpu_yield(const struct scheduler *ops, struct vcpu *v)
 {
-    struct csched2_vcpu * const svc = CSCHED2_VCPU(v);
+    struct csched2_vcpu * const svc = csched2_vcpu(v);
 
     __set_bit(__CSFLAG_vcpu_yield, &svc->flags);
 }
@@ -1473,12 +1495,12 @@ csched2_vcpu_yield(const struct scheduler *ops, struct vcpu *v)
 static void
 csched2_context_saved(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
+    struct csched2_vcpu * const svc = csched2_vcpu(vc);
     spinlock_t *lock = vcpu_schedule_lock_irq(vc);
     s_time_t now = NOW();
 
-    BUG_ON( !is_idle_vcpu(vc) && svc->rqd != RQD(ops, vc->processor));
-    ASSERT(is_idle_vcpu(vc) || svc->rqd == RQD(ops, vc->processor));
+    BUG_ON( !is_idle_vcpu(vc) && svc->rqd != c2rqd(ops, vc->processor));
+    ASSERT(is_idle_vcpu(vc) || svc->rqd == c2rqd(ops, vc->processor));
 
     /* This vcpu is now eligible to be put on the runqueue again */
     __clear_bit(__CSFLAG_scheduled, &svc->flags);
@@ -1509,9 +1531,9 @@ csched2_context_saved(const struct scheduler *ops, struct vcpu *vc)
 static int
 csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
     int i, min_rqi = -1, new_cpu, cpu = vc->processor;
-    struct csched2_vcpu *svc = CSCHED2_VCPU(vc);
+    struct csched2_vcpu *svc = csched2_vcpu(vc);
     s_time_t min_avgload = MAX_LOAD;
 
     ASSERT(!cpumask_empty(&prv->active_queues));
@@ -1774,7 +1796,7 @@ static bool_t vcpu_is_migrateable(struct csched2_vcpu *svc,
 
 static void balance_load(const struct scheduler *ops, int cpu, s_time_t now)
 {
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
     int i, max_delta_rqi = -1;
     struct list_head *push_iter, *pull_iter;
     bool_t inner_load_updated = 0;
@@ -1789,7 +1811,7 @@ static void balance_load(const struct scheduler *ops, int cpu, s_time_t now)
      */
     ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
 
-    st.lrqd = RQD(ops, cpu);
+    st.lrqd = c2rqd(ops, cpu);
 
     __update_runq_load(ops, st.lrqd, 0, now);
 
@@ -1962,7 +1984,7 @@ csched2_vcpu_migrate(
     const struct scheduler *ops, struct vcpu *vc, unsigned int new_cpu)
 {
     struct domain *d = vc->domain;
-    struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
+    struct csched2_vcpu * const svc = csched2_vcpu(vc);
     struct csched2_runqueue_data *trqd;
     s_time_t now = NOW();
 
@@ -1995,10 +2017,10 @@ csched2_vcpu_migrate(
     }
 
     /* If here, new_cpu must be a valid Credit2 pCPU, and in our affinity. */
-    ASSERT(cpumask_test_cpu(new_cpu, &CSCHED2_PRIV(ops)->initialized));
+    ASSERT(cpumask_test_cpu(new_cpu, &csched2_priv(ops)->initialized));
     ASSERT(cpumask_test_cpu(new_cpu, vc->cpu_hard_affinity));
 
-    trqd = RQD(ops, new_cpu);
+    trqd = c2rqd(ops, new_cpu);
 
     /*
      * Do the actual movement toward new_cpu, and update vc->processor.
@@ -2020,8 +2042,8 @@ csched2_dom_cntl(
     struct domain *d,
     struct xen_domctl_scheduler_op *op)
 {
-    struct csched2_dom * const sdom = CSCHED2_DOM(d);
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_dom * const sdom = csched2_dom(d);
+    struct csched2_private *prv = csched2_priv(ops);
     unsigned long flags;
     int rc = 0;
 
@@ -2054,10 +2076,10 @@ csched2_dom_cntl(
             /* Update weights for vcpus, and max_weight for runqueues on which they reside */
             for_each_vcpu ( d, v )
             {
-                struct csched2_vcpu *svc = CSCHED2_VCPU(v);
+                struct csched2_vcpu *svc = csched2_vcpu(v);
                 spinlock_t *lock = vcpu_schedule_lock(svc->vcpu);
 
-                ASSERT(svc->rqd == RQD(ops, svc->vcpu->processor));
+                ASSERT(svc->rqd == c2rqd(ops, svc->vcpu->processor));
 
                 svc->weight = sdom->weight;
                 update_max_weight(svc->rqd, svc->weight, old_weight);
@@ -2081,7 +2103,7 @@ static int csched2_sys_cntl(const struct scheduler *ops,
                             struct xen_sysctl_scheduler_op *sc)
 {
     xen_sysctl_credit2_schedule_t *params = &sc->u.sched_credit2;
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
     unsigned long flags;
 
     switch (sc->cmd )
@@ -2112,7 +2134,7 @@ static int csched2_sys_cntl(const struct scheduler *ops,
 
 static void *
 csched2_alloc_domdata(const struct scheduler *ops, struct domain *dom)
 {
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
     struct csched2_dom *sdom;
     unsigned long flags;
 
@@ -2128,7 +2150,7 @@ csched2_alloc_domdata(const struct scheduler *ops, struct domain *dom)
 
     write_lock_irqsave(&prv->lock, flags);
 
-    list_add_tail(&sdom->sdom_elem, &CSCHED2_PRIV(ops)->sdom);
+    list_add_tail(&sdom->sdom_elem, &csched2_priv(ops)->sdom);
 
     write_unlock_irqrestore(&prv->lock, flags);
 
@@ -2157,7 +2179,7 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
 {
     unsigned long flags;
     struct csched2_dom *sdom = data;
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
 
     write_lock_irqsave(&prv->lock, flags);
 
@@ -2171,9 +2193,9 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
 static void
 csched2_dom_destroy(const struct scheduler *ops, struct domain *dom)
 {
-    ASSERT(CSCHED2_DOM(dom)->nr_vcpus == 0);
+    ASSERT(csched2_dom(dom)->nr_vcpus == 0);
 
-    csched2_free_domdata(ops, CSCHED2_DOM(dom));
+    csched2_free_domdata(ops, csched2_dom(dom));
 }
 
 static void
@@ -2218,7 +2240,7 @@ csched2_free_vdata(const struct scheduler *ops, void *priv)
 static void
 csched2_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
 {
-    struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
+    struct csched2_vcpu * const svc = csched2_vcpu(vc);
     spinlock_t *lock;
 
     ASSERT(!is_idle_vcpu(vc));
@@ -2243,9 +2265,9 @@ csched2_runtime(const struct scheduler *ops, int cpu,
 {
     s_time_t time, min_time;
     int rt_credit; /* Proposed runtime measured in credits */
-    struct csched2_runqueue_data *rqd = RQD(ops, cpu);
+    struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct list_head *runq = &rqd->runq;
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
 
     /*
      * If we're idle, just stay so. Others (or external events)
@@ -2334,7 +2356,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
 {
     struct list_head *iter;
     struct csched2_vcpu *snext = NULL;
-    struct csched2_private *prv = CSCHED2_PRIV(per_cpu(scheduler, cpu));
+    struct csched2_private *prv = csched2_priv(per_cpu(scheduler, cpu));
     bool yield = __test_and_clear_bit(__CSFLAG_vcpu_yield, &scurr->flags);
 
     *skipped = 0;
@@ -2373,7 +2395,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
     if ( vcpu_runnable(scurr->vcpu) )
         snext = scurr;
     else
-        snext = CSCHED2_VCPU(idle_vcpu[cpu]);
+        snext = csched2_vcpu(idle_vcpu[cpu]);
 
     list_for_each( iter, &rqd->runq )
     {
@@ -2453,7 +2475,7 @@ csched2_schedule(
 {
     const int cpu = smp_processor_id();
     struct csched2_runqueue_data *rqd;
-    struct csched2_vcpu * const scurr = CSCHED2_VCPU(current);
+    struct csched2_vcpu * const scurr = csched2_vcpu(current);
     struct csched2_vcpu *snext = NULL;
     unsigned int skipped_vcpus = 0;
     struct task_slice ret;
@@ -2462,9 +2484,9 @@ csched2_schedule(
     SCHED_STAT_CRANK(schedule);
     CSCHED2_VCPU_CHECK(current);
 
-    BUG_ON(!cpumask_test_cpu(cpu, &CSCHED2_PRIV(ops)->initialized));
+    BUG_ON(!cpumask_test_cpu(cpu, &csched2_priv(ops)->initialized));
 
-    rqd = RQD(ops, cpu);
+    rqd = c2rqd(ops, cpu);
     BUG_ON(!cpumask_test_cpu(cpu, &rqd->active));
 
     ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
@@ -2522,7 +2544,7 @@ csched2_schedule(
     {
         __clear_bit(__CSFLAG_vcpu_yield, &scurr->flags);
         trace_var(TRC_CSCHED2_SCHED_TASKLET, 1, 0, NULL);
-        snext = CSCHED2_VCPU(idle_vcpu[cpu]);
+        snext = csched2_vcpu(idle_vcpu[cpu]);
     }
     else
         snext = runq_candidate(rqd, scurr, cpu, now, &skipped_vcpus);
@@ -2643,7 +2665,7 @@ csched2_dump_vcpu(struct csched2_private *prv, struct csched2_vcpu *svc)
 static inline void
 dump_pcpu(const struct scheduler *ops, int cpu)
 {
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
     struct csched2_vcpu *svc;
 #define cpustr keyhandler_scratch
 
@@ -2653,7 +2675,7 @@ dump_pcpu(const struct scheduler *ops, int cpu)
     printk("core=%s\n", cpustr);
 
     /* current VCPU (nothing to say if that's the idle vcpu) */
-    svc = CSCHED2_VCPU(curr_on_cpu(cpu));
+    svc = csched2_vcpu(curr_on_cpu(cpu));
     if ( svc && !is_idle_vcpu(svc->vcpu) )
     {
         printk("\trun: ");
@@ -2666,7 +2688,7 @@ static void
 csched2_dump(const struct scheduler *ops)
 {
     struct list_head *iter_sdom;
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
     unsigned long flags;
     unsigned int i, j, loop;
 #define cpustr keyhandler_scratch
@@ -2726,7 +2748,7 @@ csched2_dump(const struct scheduler *ops)
 
         for_each_vcpu( sdom->dom, v )
         {
-            struct csched2_vcpu * const svc = CSCHED2_VCPU(v);
+            struct csched2_vcpu * const svc = csched2_vcpu(v);
             spinlock_t *lock;
 
             lock = vcpu_schedule_lock(svc->vcpu);
@@ -2897,7 +2919,7 @@ init_pdata(struct csched2_private *prv, unsigned int cpu)
 static void
 csched2_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
     spinlock_t *old_lock;
     unsigned long flags;
     unsigned rqi;
@@ -2925,7 +2947,7 @@ static void
 csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                      void *pdata, void *vdata)
 {
-    struct csched2_private *prv = CSCHED2_PRIV(new_ops);
+    struct csched2_private *prv = csched2_priv(new_ops);
     struct csched2_vcpu *svc = vdata;
     unsigned rqi;
 
@@ -2972,7 +2994,7 @@ static void
 csched2_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
     unsigned long flags;
-    struct csched2_private *prv = CSCHED2_PRIV(ops);
+    struct csched2_private *prv = csched2_priv(ops);
     struct csched2_runqueue_data *rqd;
     int rqi;
 
@@ -3081,7 +3103,7 @@ csched2_deinit(struct scheduler *ops)
 {
     struct csched2_private *prv;
 
-    prv = CSCHED2_PRIV(ops);
+    prv = csched2_priv(ops);
     ops->sched_data = NULL;
     xfree(prv);
 }