From patchwork Fri Aug 9 14:57:53 2019
X-Patchwork-Submitter: Juergen Gross
X-Patchwork-Id: 11086653
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Meng Xu, Jan Beulich
Date: Fri, 9 Aug 2019 16:57:53 +0200
Message-Id: <20190809145833.1020-9-jgross@suse.com>
In-Reply-To: <20190809145833.1020-1-jgross@suse.com>
References: <20190809145833.1020-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v2 08/48] xen/sched: switch vcpu_schedule_lock to unit_schedule_lock

Rename vcpu_schedule_[un]lock[_irq]() to unit_schedule_[un]lock[_irq]() and
let it take a sched_unit pointer instead of a vcpu pointer as parameter.
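
For call sites that only have a struct vcpu at hand, the conversion is purely
mechanical: the vcpu's sched_unit is passed instead of the vcpu itself. A
minimal sketch of the call-site pattern (illustrative only, using the names
that appear in the patch; not itself part of the patch):

    /* Before: take the scheduler lock via the vcpu. */
    spinlock_t *lock = vcpu_schedule_lock_irq(v);
    /* ... */
    vcpu_schedule_unlock_irq(lock, v);

    /* After: take the scheduler lock via the vcpu's sched_unit. */
    spinlock_t *lock = unit_schedule_lock_irq(v->sched_unit);
    /* ... */
    unit_schedule_unlock_irq(lock, v->sched_unit);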
Signed-off-by: Juergen Gross
---
 xen/common/sched_credit.c  | 17 ++++++++--------
 xen/common/sched_credit2.c | 40 ++++++++++++++++++-------------------
 xen/common/sched_null.c    | 16 +++++++--------
 xen/common/sched_rt.c      | 15 +++++++-------
 xen/common/schedule.c      | 49 +++++++++++++++++++++++-----------------------
 xen/include/xen/sched-if.h | 12 ++++++------
 6 files changed, 75 insertions(+), 74 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 261d2083c7..603793f1d0 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -926,7 +926,8 @@ __csched_vcpu_acct_stop_locked(struct csched_private *prv,
 static void
 csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
 {
-    struct csched_unit * const svc = CSCHED_UNIT(current->sched_unit);
+    struct sched_unit *currunit = current->sched_unit;
+    struct csched_unit * const svc = CSCHED_UNIT(currunit);
     const struct scheduler *ops = per_cpu(scheduler, cpu);
     ASSERT( current->processor == cpu );
@@ -962,7 +963,7 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
     {
         unsigned int new_cpu;
         unsigned long flags;
-        spinlock_t *lock = vcpu_schedule_lock_irqsave(current, &flags);
+        spinlock_t *lock = unit_schedule_lock_irqsave(currunit, &flags);
         /*
          * If it's been active a while, check if we'd be better off
@@ -971,7 +972,7 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
          */
         new_cpu = _csched_cpu_pick(ops, current, 0);
-        vcpu_schedule_unlock_irqrestore(lock, flags, current);
+        unit_schedule_unlock_irqrestore(lock, flags, currunit);
         if ( new_cpu != cpu )
         {
@@ -1023,19 +1024,19 @@ csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     BUG_ON( is_idle_vcpu(vc) );
     /* csched_res_pick() looks in vc->processor's runq, so we need the lock.
      */
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
     unit->res = csched_res_pick(ops, unit);
     vc->processor = unit->res->processor;
     spin_unlock_irq(lock);
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
     if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
         runq_insert(svc);
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
     SCHED_STAT_CRANK(vcpu_insert);
 }
@@ -2133,12 +2134,12 @@ csched_dump(const struct scheduler *ops)
             spinlock_t *lock;
             svc = list_entry(iter_svc, struct csched_unit, active_vcpu_elem);
-            lock = vcpu_schedule_lock(svc->vcpu);
+            lock = unit_schedule_lock(svc->vcpu->sched_unit);
             printk("\t%3d: ", ++loop);
             csched_dump_vcpu(svc);
-            vcpu_schedule_unlock(lock, svc->vcpu);
+            unit_schedule_unlock(lock, svc->vcpu->sched_unit);
         }
     }
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 02e2855d8d..1798fcf8c4 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -171,7 +171,7 @@
  * - runqueue lock
  *  + it is per-runqueue, so:
  *   * cpus in a runqueue take the runqueue lock, when using
- *     pcpu_schedule_lock() / vcpu_schedule_lock() (and friends),
+ *     pcpu_schedule_lock() / unit_schedule_lock() (and friends),
  *   * a cpu may (try to) take a "remote" runqueue lock, e.g., for
  *     load balancing;
  *  + serializes runqueue operations (removing and inserting vcpus);
@@ -1891,7 +1891,7 @@ unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
         unsigned long flags;
         s_time_t now;
-        lock = vcpu_schedule_lock_irqsave(svc->vcpu, &flags);
+        lock = unit_schedule_lock_irqsave(svc->vcpu->sched_unit, &flags);
         __clear_bit(_VPF_parked, &svc->vcpu->pause_flags);
         if ( unlikely(svc->flags & CSFLAG_scheduled) )
@@ -1924,7 +1924,7 @@ unpark_parked_vcpus(const struct scheduler *ops, struct list_head *vcpus)
         }
         list_del_init(&svc->parked_elem);
-        vcpu_schedule_unlock_irqrestore(lock, flags, svc->vcpu);
+        unit_schedule_unlock_irqrestore(lock, flags, svc->vcpu->sched_unit);
     }
 }
@@ -2163,7 +2163,7 @@ csched2_context_saved(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
     struct csched2_unit * const svc = csched2_unit(unit);
-    spinlock_t *lock = vcpu_schedule_lock_irq(vc);
+    spinlock_t *lock = unit_schedule_lock_irq(unit);
     s_time_t now = NOW();
     LIST_HEAD(were_parked);
@@ -2195,7 +2195,7 @@ csched2_context_saved(const struct scheduler *ops, struct sched_unit *unit)
     else if ( !is_idle_vcpu(vc) )
         update_load(ops, svc->rqd, svc, -1, now);
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
     unpark_parked_vcpus(ops, &were_parked);
 }
@@ -2848,14 +2848,14 @@ csched2_dom_cntl(
             for_each_vcpu ( d, v )
             {
                 struct csched2_unit *svc = csched2_unit(v->sched_unit);
-                spinlock_t *lock = vcpu_schedule_lock(svc->vcpu);
+                spinlock_t *lock = unit_schedule_lock(svc->vcpu->sched_unit);
                 ASSERT(svc->rqd == c2rqd(ops, svc->vcpu->processor));
                 svc->weight = sdom->weight;
                 update_max_weight(svc->rqd, svc->weight, old_weight);
-                vcpu_schedule_unlock(lock, svc->vcpu);
+                unit_schedule_unlock(lock, svc->vcpu->sched_unit);
             }
         }
         /* Cap */
@@ -2886,7 +2886,7 @@ csched2_dom_cntl(
             for_each_vcpu ( d, v )
             {
                 svc = csched2_unit(v->sched_unit);
-                lock = vcpu_schedule_lock(svc->vcpu);
+                lock = unit_schedule_lock(svc->vcpu->sched_unit);
                 /*
                  * Too small quotas would in theory cause a lot of overhead,
                  * which then won't happen because, in csched2_runtime(),
@@ -2894,7 +2894,7 @@ csched2_dom_cntl(
                  */
                 svc->budget_quota = max(sdom->tot_budget / sdom->nr_vcpus,
                                         CSCHED2_MIN_TIMER);
-                vcpu_schedule_unlock(lock, svc->vcpu);
+                unit_schedule_unlock(lock, svc->vcpu->sched_unit);
             }
             if ( sdom->cap == 0 )
@@ -2929,7 +2929,7 @@ csched2_dom_cntl(
             for_each_vcpu ( d, v )
             {
                 svc = csched2_unit(v->sched_unit);
-                lock = vcpu_schedule_lock(svc->vcpu);
+                lock = unit_schedule_lock(svc->vcpu->sched_unit);
                 if ( v->is_running )
                 {
                     unsigned int cpu = v->processor;
@@ -2960,7 +2960,7 @@ csched2_dom_cntl(
                     cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
                 }
                 svc->budget = 0;
-                vcpu_schedule_unlock(lock, svc->vcpu);
+                unit_schedule_unlock(lock, svc->vcpu->sched_unit);
             }
         }
@@ -2976,12 +2976,12 @@ csched2_dom_cntl(
             for_each_vcpu ( d, v )
             {
                 struct csched2_unit *svc = csched2_unit(v->sched_unit);
-                spinlock_t *lock = vcpu_schedule_lock(svc->vcpu);
+                spinlock_t *lock = unit_schedule_lock(svc->vcpu->sched_unit);
                 svc->budget = STIME_MAX;
                 svc->budget_quota = 0;
-                vcpu_schedule_unlock(lock, svc->vcpu);
+                unit_schedule_unlock(lock, svc->vcpu->sched_unit);
             }
             sdom->cap = 0;
             /*
@@ -3120,19 +3120,19 @@ csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     ASSERT(list_empty(&svc->runq_elem));
     /* csched2_res_pick() expects the pcpu lock to be held */
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
     unit->res = csched2_res_pick(ops, unit);
     vc->processor = unit->res->processor;
     spin_unlock_irq(lock);
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
     /* Add vcpu to runqueue of initial processor */
     runq_assign(ops, vc);
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
     sdom->nr_vcpus++;
@@ -3162,11 +3162,11 @@ csched2_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
     SCHED_STAT_CRANK(vcpu_remove);
     /* Remove from runqueue */
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
     runq_deassign(ops, vc);
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
     svc->sdom->nr_vcpus--;
 }
@@ -3750,12 +3750,12 @@ csched2_dump(const struct scheduler *ops)
             struct csched2_unit * const svc = csched2_unit(v->sched_unit);
             spinlock_t *lock;
-            lock = vcpu_schedule_lock(svc->vcpu);
+            lock = unit_schedule_lock(svc->vcpu->sched_unit);
             printk("\t%3d: ", ++loop);
             csched2_dump_vcpu(prv, svc);
-            vcpu_schedule_unlock(lock, svc->vcpu);
+            unit_schedule_unlock(lock, svc->vcpu->sched_unit);
         }
     }
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 5f0356c7f8..40ef9f9089 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -309,7 +309,7 @@ pick_res(struct null_private *prv, struct sched_unit *unit)
      * all the pCPUs are busy.
      *
      * In fact, there must always be something sane in v->processor, or
-     * vcpu_schedule_lock() and friends won't work. This is not a problem,
+     * unit_schedule_lock() and friends won't work. This is not a problem,
      * as we will actually assign the vCPU to the pCPU we return from here,
      * only if the pCPU is free.
      */
@@ -450,11 +450,11 @@ static void null_unit_insert(const struct scheduler *ops,
     ASSERT(!is_idle_vcpu(v));
-    lock = vcpu_schedule_lock_irq(v);
+    lock = unit_schedule_lock_irq(unit);
     if ( unlikely(!is_vcpu_online(v)) )
     {
-        vcpu_schedule_unlock_irq(lock, v);
+        unit_schedule_unlock_irq(lock, unit);
         return;
     }
@@ -464,7 +464,7 @@ static void null_unit_insert(const struct scheduler *ops,
     spin_unlock(lock);
-    lock = vcpu_schedule_lock(v);
+    lock = unit_schedule_lock(unit);
     cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
                 cpupool_domain_cpumask(v->domain));
@@ -513,7 +513,7 @@ static void null_unit_remove(const struct scheduler *ops,
     ASSERT(!is_idle_vcpu(v));
-    lock = vcpu_schedule_lock_irq(v);
+    lock = unit_schedule_lock_irq(unit);
     /* If offline, the vcpu shouldn't be assigned, nor in the waitqueue */
     if ( unlikely(!is_vcpu_online(v)) )
@@ -536,7 +536,7 @@ static void null_unit_remove(const struct scheduler *ops,
     vcpu_deassign(prv, v);
 out:
-    vcpu_schedule_unlock_irq(lock, v);
+    unit_schedule_unlock_irq(lock, unit);
     SCHED_STAT_CRANK(vcpu_remove);
 }
@@ -935,13 +935,13 @@ static void null_dump(const struct scheduler *ops)
             struct null_unit * const nvc = null_unit(v->sched_unit);
             spinlock_t *lock;
-            lock = vcpu_schedule_lock(nvc->vcpu);
+            lock = unit_schedule_lock(nvc->vcpu->sched_unit);
             printk("\t%3d: ", ++loop);
             dump_vcpu(prv, nvc);
             printk("\n");
-            vcpu_schedule_unlock(lock, nvc->vcpu);
+            unit_schedule_unlock(lock, nvc->vcpu->sched_unit);
         }
     }
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 3ce85122cc..a279582392 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -177,7 +177,7 @@ static void repl_timer_handler(void *data);
 /*
  * System-wide private data, include global RunQueue/DepletedQ
  * Global lock is referenced by sched_res->schedule_lock from all
- * physical cpus. It can be grabbed via vcpu_schedule_lock_irq()
+ * physical cpus. It can be grabbed via unit_schedule_lock_irq()
  */
 struct rt_private {
     spinlock_t lock;           /* the global coarse-grained lock */
@@ -896,7 +896,7 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     unit->res = rt_res_pick(ops, unit);
     vc->processor = unit->res->processor;
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
     now = NOW();
     if ( now >= svc->cur_deadline )
@@ -909,7 +909,7 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
         if ( !vc->is_running )
             runq_insert(ops, svc);
     }
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
     SCHED_STAT_CRANK(vcpu_insert);
 }
@@ -920,7 +920,6 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 static void
 rt_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
 {
-    struct vcpu *vc = unit->vcpu_list;
     struct rt_unit * const svc = rt_unit(unit);
     struct rt_dom * const sdom = svc->sdom;
     spinlock_t *lock;
@@ -929,14 +928,14 @@ rt_unit_remove(const struct scheduler *ops, struct sched_unit *unit)
     BUG_ON( sdom == NULL );
-    lock = vcpu_schedule_lock_irq(vc);
+    lock = unit_schedule_lock_irq(unit);
     if ( vcpu_on_q(svc) )
         q_remove(svc);
     if ( vcpu_on_replq(svc) )
         replq_remove(ops,svc);
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
 }
@@ -1331,7 +1330,7 @@ rt_context_saved(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct vcpu *vc = unit->vcpu_list;
     struct rt_unit *svc = rt_unit(unit);
-    spinlock_t *lock = vcpu_schedule_lock_irq(vc);
+    spinlock_t *lock = unit_schedule_lock_irq(unit);
     __clear_bit(__RTDS_scheduled, &svc->flags);
     /* not insert idle vcpu to runq */
@@ -1348,7 +1347,7 @@ rt_context_saved(const struct scheduler *ops, struct sched_unit *unit)
         replq_remove(ops, svc);
 out:
-    vcpu_schedule_unlock_irq(lock, vc);
+    unit_schedule_unlock_irq(lock, unit);
 }
 /*
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c167eb23f2..e5ae402a29 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -250,7 +250,8 @@ static inline void vcpu_runstate_change(
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 {
-    spinlock_t *lock = likely(v == current) ? NULL : vcpu_schedule_lock_irq(v);
+    spinlock_t *lock = likely(v == current)
+                       ? NULL : unit_schedule_lock_irq(v->sched_unit);
     s_time_t delta;
     memcpy(runstate, &v->runstate, sizeof(*runstate));
@@ -259,7 +260,7 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
         runstate->time[runstate->state] += delta;
     if ( unlikely(lock != NULL) )
-        vcpu_schedule_unlock_irq(lock, v);
+        unit_schedule_unlock_irq(lock, v->sched_unit);
 }
 uint64_t get_cpu_idle_time(unsigned int cpu)
@@ -473,7 +474,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         migrate_timer(&v->singleshot_timer, new_p);
         migrate_timer(&v->poll_timer, new_p);
-        lock = vcpu_schedule_lock_irq(v);
+        lock = unit_schedule_lock_irq(v->sched_unit);
         sched_set_affinity(v, &cpumask_all, &cpumask_all);
@@ -482,7 +483,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         /*
          * With v->processor modified we must not
          * - make any further changes assuming we hold the scheduler lock,
-         * - use vcpu_schedule_unlock_irq().
+         * - use unit_schedule_unlock_irq().
          */
         spin_unlock_irq(lock);
@@ -581,11 +582,11 @@ void vcpu_sleep_nosync(struct vcpu *v)
     TRACE_2D(TRC_SCHED_SLEEP, v->domain->domain_id, v->vcpu_id);
-    lock = vcpu_schedule_lock_irqsave(v, &flags);
+    lock = unit_schedule_lock_irqsave(v->sched_unit, &flags);
     vcpu_sleep_nosync_locked(v);
-    vcpu_schedule_unlock_irqrestore(lock, flags, v);
+    unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
 }
 void vcpu_sleep_sync(struct vcpu *v)
@@ -605,7 +606,7 @@ void vcpu_wake(struct vcpu *v)
     TRACE_2D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id);
-    lock = vcpu_schedule_lock_irqsave(v, &flags);
+    lock = unit_schedule_lock_irqsave(v->sched_unit, &flags);
     if ( likely(vcpu_runnable(v)) )
     {
@@ -619,7 +620,7 @@ void vcpu_wake(struct vcpu *v)
             vcpu_runstate_change(v, RUNSTATE_offline, NOW());
     }
-    vcpu_schedule_unlock_irqrestore(lock, flags, v);
+    unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
 }
 void vcpu_unblock(struct vcpu *v)
@@ -687,9 +688,9 @@ static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
  * These steps are encapsulated in the following two functions; they
  * should be called like this:
  *
- *     lock = vcpu_schedule_lock_irq(v);
+ *     lock = unit_schedule_lock_irq(unit);
  *     vcpu_migrate_start(v);
- *     vcpu_schedule_unlock_irq(lock, v)
+ *     unit_schedule_unlock_irq(lock, unit)
  *     vcpu_migrate_finish(v);
 *
 * vcpu_migrate_finish() will do the work now if it can, or simply
@@ -794,12 +795,12 @@ static void vcpu_migrate_finish(struct vcpu *v)
  */
 void vcpu_force_reschedule(struct vcpu *v)
 {
-    spinlock_t *lock = vcpu_schedule_lock_irq(v);
+    spinlock_t *lock = unit_schedule_lock_irq(v->sched_unit);
     if ( v->is_running )
         vcpu_migrate_start(v);
-    vcpu_schedule_unlock_irq(lock, v);
+    unit_schedule_unlock_irq(lock, v->sched_unit);
     vcpu_migrate_finish(v);
 }
@@ -826,7 +827,7 @@ void restore_vcpu_affinity(struct domain *d)
          * set v->processor of each of their vCPUs to something that will
          * make sense for the scheduler of the cpupool in which they are in.
          */
-        lock = vcpu_schedule_lock_irq(v);
+        lock = unit_schedule_lock_irq(v->sched_unit);
         cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
                     cpupool_domain_cpumask(d));
@@ -855,7 +856,7 @@ void restore_vcpu_affinity(struct domain *d)
         spin_unlock_irq(lock);
         /* v->processor might have changed, so reacquire the lock. */
-        lock = vcpu_schedule_lock_irq(v);
+        lock = unit_schedule_lock_irq(v->sched_unit);
         v->sched_unit->res = sched_pick_resource(vcpu_scheduler(v),
                                                  v->sched_unit);
         v->processor = v->sched_unit->res->processor;
@@ -890,7 +891,7 @@ int cpu_disable_scheduler(unsigned int cpu)
         for_each_vcpu ( d, v )
         {
             unsigned long flags;
-            spinlock_t *lock = vcpu_schedule_lock_irqsave(v, &flags);
+            spinlock_t *lock = unit_schedule_lock_irqsave(v->sched_unit, &flags);
             cpumask_and(&online_affinity, v->cpu_hard_affinity, c->cpu_valid);
             if ( cpumask_empty(&online_affinity) &&
@@ -899,7 +900,7 @@ int cpu_disable_scheduler(unsigned int cpu)
                 if ( v->affinity_broken )
                 {
                     /* The vcpu is temporarily pinned, can't move it. */
-                    vcpu_schedule_unlock_irqrestore(lock, flags, v);
+                    unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
                     ret = -EADDRINUSE;
                     break;
                 }
@@ -912,7 +913,7 @@ int cpu_disable_scheduler(unsigned int cpu)
             if ( v->processor != cpu )
             {
                 /* The vcpu is not on this cpu, so we can move on. */
-                vcpu_schedule_unlock_irqrestore(lock, flags, v);
+                unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
                 continue;
             }
@@ -925,7 +926,7 @@ int cpu_disable_scheduler(unsigned int cpu)
              * things would have failed before getting in here.
              */
            vcpu_migrate_start(v);
-            vcpu_schedule_unlock_irqrestore(lock, flags, v);
+            unit_schedule_unlock_irqrestore(lock, flags, v->sched_unit);
             vcpu_migrate_finish(v);
@@ -989,7 +990,7 @@ static int vcpu_set_affinity(
     spinlock_t *lock;
     int ret = 0;
-    lock = vcpu_schedule_lock_irq(v);
+    lock = unit_schedule_lock_irq(v->sched_unit);
     if ( v->affinity_broken )
         ret = -EBUSY;
@@ -1011,7 +1012,7 @@ static int vcpu_set_affinity(
             vcpu_migrate_start(v);
     }
-    vcpu_schedule_unlock_irq(lock, v);
+    unit_schedule_unlock_irq(lock, v->sched_unit);
     domain_update_node_affinity(v->domain);
@@ -1143,10 +1144,10 @@ static long do_poll(struct sched_poll *sched_poll)
 long vcpu_yield(void)
 {
     struct vcpu * v=current;
-    spinlock_t *lock = vcpu_schedule_lock_irq(v);
+    spinlock_t *lock = unit_schedule_lock_irq(v->sched_unit);
     sched_yield(vcpu_scheduler(v), v->sched_unit);
-    vcpu_schedule_unlock_irq(lock, v);
+    unit_schedule_unlock_irq(lock, v->sched_unit);
     SCHED_STAT_CRANK(vcpu_yield);
@@ -1243,7 +1244,7 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
     int ret = -EINVAL;
     bool migrate;
-    lock = vcpu_schedule_lock_irq(v);
+    lock = unit_schedule_lock_irq(v->sched_unit);
     if ( cpu == NR_CPUS )
     {
@@ -1276,7 +1277,7 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
     if ( migrate )
         vcpu_migrate_start(v);
-    vcpu_schedule_unlock_irq(lock, v);
+    unit_schedule_unlock_irq(lock, v->sched_unit);
     if ( migrate )
         vcpu_migrate_finish(v);
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 212c612374..ed7b7da3a3 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -101,22 +101,22 @@ static inline void kind##_schedule_unlock##irq(spinlock_t *lock \
 #define EXTRA_TYPE(arg)
 sched_lock(pcpu, unsigned int cpu, cpu, )
-sched_lock(vcpu, const struct vcpu *v, v->processor, )
+sched_lock(unit, const struct sched_unit *i, i->res->processor, )
 sched_lock(pcpu, unsigned int cpu, cpu, _irq)
-sched_lock(vcpu, const struct vcpu *v, v->processor, _irq)
+sched_lock(unit, const struct sched_unit *i, i->res->processor, _irq)
 sched_unlock(pcpu, unsigned int cpu, cpu, )
-sched_unlock(vcpu, const struct vcpu *v, v->processor, )
+sched_unlock(unit, const struct sched_unit *i, i->res->processor, )
 sched_unlock(pcpu, unsigned int cpu, cpu, _irq)
-sched_unlock(vcpu, const struct vcpu *v, v->processor, _irq)
+sched_unlock(unit, const struct sched_unit *i, i->res->processor, _irq)
 #undef EXTRA_TYPE
 #define EXTRA_TYPE(arg) , unsigned long arg
 #define spin_unlock_irqsave spin_unlock_irqrestore
 sched_lock(pcpu, unsigned int cpu, cpu, _irqsave, *flags)
-sched_lock(vcpu, const struct vcpu *v, v->processor, _irqsave, *flags)
+sched_lock(unit, const struct sched_unit *i, i->res->processor, _irqsave, *flags)
 #undef spin_unlock_irqsave
 sched_unlock(pcpu, unsigned int cpu, cpu, _irqrestore, flags)
-sched_unlock(vcpu, const struct vcpu *v, v->processor, _irqrestore, flags)
+sched_unlock(unit, const struct sched_unit *i, i->res->processor, _irqrestore, flags)
 #undef EXTRA_TYPE
 #undef sched_unlock
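
The sched-if.h hunk only swaps the arguments handed to the existing sched_lock() /
sched_unlock() generator macros; the macros themselves are untouched. Based on the
macro parameters visible in that hunk and on the call sites elsewhere in the patch,
the new instantiations should expand to helpers roughly like the following (a sketch
of the expected prototypes, derived from the patch but not contained in it):

    /* Each lock helper takes the schedule lock of the pCPU i->res->processor. */
    static inline spinlock_t *unit_schedule_lock(const struct sched_unit *i);
    static inline spinlock_t *unit_schedule_lock_irq(const struct sched_unit *i);
    static inline spinlock_t *unit_schedule_lock_irqsave(const struct sched_unit *i,
                                                         unsigned long *flags);

    /* Matching unlock helpers, taking the lock returned by the lock side. */
    static inline void unit_schedule_unlock(spinlock_t *lock,
                                            const struct sched_unit *i);
    static inline void unit_schedule_unlock_irq(spinlock_t *lock,
                                                const struct sched_unit *i);
    static inline void unit_schedule_unlock_irqrestore(spinlock_t *lock,
                                                       unsigned long flags,
                                                       const struct sched_unit *i);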