From patchwork Tue May 28 10:32:27 2019
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 10964615
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Jan Beulich
Date: Tue, 28 May 2019 12:32:27 +0200
Message-Id: <20190528103313.1343-15-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190528103313.1343-1-jgross@suse.com>
References: <20190528103313.1343-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 14/60] xen/sched: add scheduler helpers hiding vcpu

Add the following helpers using a sched_unit as input instead of a vcpu:

- is_idle_unit() similar to is_idle_vcpu()
- unit_runnable() like vcpu_runnable()
- sched_set_res() to set the current processor of a unit
- sched_unit_cpu() to get the current processor of a unit
- sched_{set|clear}_pause_flags[_atomic]() to modify pause_flags of the
  associated vcpu(s)
- sched_idle_unit() to get the sched_unit pointer of the idle vcpu of a
  specific physical cpu

Signed-off-by: Juergen Gross <jgross@suse.com>
---
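As a rough sketch of how a scheduler call site is expected to look with
the new helpers (illustration only, not part of the patch:
example_move_unit() is a made-up function; sched_set_res(),
sched_set_pause_flags_atomic(), sched_unit_cpu() and get_sched_res()
are introduced or used below, and _VPF_migrating is the existing
pause-flag bit):

    /* Hypothetical call site: move a unit to a new pcpu. */
    static void example_move_unit(struct sched_unit *unit, unsigned int cpu)
    {
        /* Replaces open-coded unit->vcpu->processor / unit->res updates. */
        sched_set_res(unit, get_sched_res(cpu));

        /* Replaces set_bit(_VPF_migrating, &unit->vcpu->pause_flags). */
        sched_set_pause_flags_atomic(unit, _VPF_migrating);

        ASSERT(sched_unit_cpu(unit) == cpu);
    }

Note that the _atomic variants wrap set_bit()/clear_bit() while the plain
ones wrap the non-atomic __set_bit()/__clear_bit(), so call sites that
modify pause_flags without holding a serializing lock keep their atomic
semantics.
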
 xen/common/sched_credit.c  |  3 +--
 xen/common/schedule.c      | 19 ++++++++--------
 xen/include/xen/sched-if.h | 56 ++++++++++++++++++++++++++++++++++++++++++----
 3 files changed, 62 insertions(+), 16 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index ffac2f4bbb..3f002771da 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1665,8 +1665,7 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
             SCHED_STAT_CRANK(migrate_queued);
             WARN_ON(vc->is_urgent);
             runq_remove(speer);
-            vc->processor = cpu;
-            vc->sched_unit->res = get_sched_res(cpu);
+            sched_set_res(vc->sched_unit, get_sched_res(cpu));
             /*
              * speer will start executing directly on cpu, without having to
              * go through runq_insert(). So we must update the runnable count
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 212c1e637f..78d9108956 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -317,12 +317,11 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     struct domain *d = v->domain;
     struct sched_unit *unit;
 
-    v->processor = processor;
-
     if ( (unit = sched_alloc_unit(v)) == NULL )
         return 1;
 
-    unit->res = get_sched_res(processor);
+    sched_set_res(unit, get_sched_res(processor));
+
     /* Initialise the per-vcpu timers. */
     init_timer(&v->periodic_timer, vcpu_periodic_timer_fn,
                v, v->processor);
@@ -436,8 +435,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         sched_set_affinity(v, &cpumask_all, &cpumask_all);
 
-        v->processor = new_p;
-        v->sched_unit->res = get_sched_res(new_p);
+        sched_set_res(v->sched_unit, get_sched_res(new_p));
 
         /*
          * With v->processor modified we must not
          * - make any further changes assuming we hold the scheduler lock,
@@ -775,8 +773,9 @@ void restore_vcpu_affinity(struct domain *d)
         spinlock_t *lock;
         unsigned int old_cpu = v->processor;
         struct sched_unit *unit = v->sched_unit;
+        struct sched_resource *res;
 
-        ASSERT(!vcpu_runnable(v));
+        ASSERT(!unit_runnable(unit));
 
         /*
          * Re-assign the initial processor as after resume we have no
@@ -807,12 +806,12 @@ void restore_vcpu_affinity(struct domain *d)
             }
         }
 
-        v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
-        unit->res = get_sched_res(v->processor);
+        res = get_sched_res(cpumask_any(cpumask_scratch_cpu(cpu)));
+        sched_set_res(unit, res);
 
         lock = unit_schedule_lock_irq(unit);
-        unit->res = sched_pick_resource(vcpu_scheduler(v), unit);
-        v->processor = unit->res->processor;
+        res = sched_pick_resource(vcpu_scheduler(v), unit);
+        sched_set_res(unit, res);
         spin_unlock_irq(lock);
 
         if ( old_cpu != v->processor )
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 17c01abc25..da9aa04370 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -59,6 +59,57 @@ static inline void set_sched_res(unsigned int cpu, struct sched_resource *res)
     per_cpu(sched_res, cpu) = res;
 }
 
+static inline bool is_idle_unit(const struct sched_unit *unit)
+{
+    return is_idle_vcpu(unit->vcpu);
+}
+
+static inline bool unit_runnable(const struct sched_unit *unit)
+{
+    return vcpu_runnable(unit->vcpu);
+}
+
+static inline void sched_set_res(struct sched_unit *unit,
+                                 struct sched_resource *res)
+{
+    unit->vcpu->processor = res->processor;
+    unit->res = res;
+}
+
+static inline unsigned int sched_unit_cpu(struct sched_unit *unit)
+{
+    return unit->res->processor;
+}
+
+static inline void sched_set_pause_flags(struct sched_unit *unit,
+                                         unsigned int bit)
+{
+    __set_bit(bit, &unit->vcpu->pause_flags);
+}
+
+static inline void sched_clear_pause_flags(struct sched_unit *unit,
+                                           unsigned int bit)
+{
+    __clear_bit(bit, &unit->vcpu->pause_flags);
+}
+
+static inline void sched_set_pause_flags_atomic(struct sched_unit *unit,
+                                                unsigned int bit)
+{
+    set_bit(bit, &unit->vcpu->pause_flags);
+}
+
+static inline void sched_clear_pause_flags_atomic(struct sched_unit *unit,
+                                                  unsigned int bit)
+{
+    clear_bit(bit, &unit->vcpu->pause_flags);
+}
+
+static inline struct sched_unit *sched_idle_unit(unsigned int cpu)
+{
+    return idle_vcpu[cpu]->sched_unit;
+}
+
 /*
  * Scratch space, for avoiding having too many cpumask_t on the stack.
  * Within each scheduler, when using the scratch mask of one pCPU:
@@ -345,10 +396,7 @@ static inline void sched_migrate(const struct scheduler *s,
     if ( s->migrate )
         s->migrate(s, unit, cpu);
     else
-    {
-        unit->vcpu->processor = cpu;
-        unit->res = get_sched_res(cpu);
-    }
+        sched_set_res(unit, get_sched_res(cpu));
 }
 
 static inline struct sched_resource *sched_pick_resource(
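
A similar sketch for the idle-unit helpers (again illustration only:
example_pick_unit() is made up; unit_runnable(), is_idle_unit() and
sched_idle_unit() come from the hunk above, and idle_vcpu[] is the
existing per-cpu idle vcpu array it wraps):

    /* Hypothetical: fall back to the idle unit if nothing is runnable. */
    static struct sched_unit *example_pick_unit(struct sched_unit *candidate,
                                                unsigned int cpu)
    {
        if ( candidate && unit_runnable(candidate) )
            return candidate;

        /* sched_idle_unit(cpu) is idle_vcpu[cpu]->sched_unit. */
        return sched_idle_unit(cpu);
    }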