From patchwork Fri Mar 29 15:09:25 2019
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Tim Deegan, Stefano Stabellini, Wei Liu,
    Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
    Dario Faggioli, Julien Grall, Jan Beulich
Date: Fri, 29 Mar 2019 16:09:25 +0100
Message-Id: <20190329150934.17694-41-jgross@suse.com>
In-Reply-To: <20190329150934.17694-1-jgross@suse.com>
References: <20190329150934.17694-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC 40/49] xen/sched: add support for multiple
 vcpus per sched item where missing

Support for multiple vcpus per sched item is still missing in several
places. Add that missing support (with the exception of initial
allocation) and the helpers needed for it.
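As a reviewer's note (not part of this patch): the per-item vcpu iteration
used throughout relies on the for_each_sched_item_vcpu() helper introduced
earlier in this series. A minimal sketch of what such a macro could look
like, assuming an item's vcpus are chained via the domain's vcpu list and
carry a back pointer to their sched item (both field names are assumptions
here, not taken from this patch):

/*
 * Illustrative sketch only -- the actual macro is defined in an earlier
 * patch of this series.  "sched_item" and "next_in_list" are assumed
 * field names of struct vcpu for the purpose of this example.
 */
#define for_each_sched_item_vcpu(i, v)                                  \
    for ( (v) = (i)->vcpu;                                              \
          (v) != NULL && (v)->sched_item == (i);                        \
          (v) = (v)->next_in_list )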
Signed-off-by: Juergen Gross
---
 xen/common/schedule.c      | 28 +++++++++++++---------
 xen/include/xen/sched-if.h | 60 +++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 69 insertions(+), 19 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index d3474e6565..d33efbcdc5 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -184,8 +184,9 @@ static inline void vcpu_runstate_change(
     s_time_t delta;
     bool old_run, new_run;
 
-    ASSERT(v->runstate.state != new_state);
     ASSERT(spin_is_locked(per_cpu(sched_res, v->processor)->schedule_lock));
+    if ( v->runstate.state == new_state )
+        return;
 
     vcpu_urgent_count_update(v);
 
@@ -221,18 +222,23 @@
     v->runstate.state = new_state;
 }
 
+static inline void vcpu_runstate_helper(struct vcpu *v, int new_state,
+                                        s_time_t new_entry_time)
+{
+    vcpu_runstate_change(v,
+        ((v->pause_flags & VPF_blocked) ? RUNSTATE_blocked :
+         (vcpu_runnable(v) ? new_state : RUNSTATE_offline)),
+        new_entry_time);
+}
+
 static inline void sched_item_runstate_change(struct sched_item *item,
     bool running, s_time_t new_entry_time)
 {
-    struct vcpu *v = item->vcpu;
+    int new_state = running ? RUNSTATE_running : RUNSTATE_runnable;
+    struct vcpu *v;
 
-    if ( running )
-        vcpu_runstate_change(v, RUNSTATE_running, new_entry_time);
-    else
-        vcpu_runstate_change(v,
-            ((v->pause_flags & VPF_blocked) ? RUNSTATE_blocked :
-             (vcpu_runnable(v) ? RUNSTATE_runnable : RUNSTATE_offline)),
-            new_entry_time);
+    for_each_sched_item_vcpu( item, v )
+        vcpu_runstate_helper(v, new_state, new_entry_time);
 }
 
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
@@ -1616,7 +1622,7 @@ static void sched_switch_items(struct sched_resource *sd,
              (next->vcpu->runstate.state == RUNSTATE_runnable) ?
              (now - next->state_entry_time) : 0, prev->next_time);
 
-    ASSERT(prev->vcpu->runstate.state == RUNSTATE_running);
+    ASSERT(item_running(prev));
 
     TRACE_4D(TRC_SCHED_SWITCH, prev->domain->domain_id, prev->item_id,
              next->domain->domain_id, next->item_id);
@@ -1624,7 +1630,7 @@ static void sched_switch_items(struct sched_resource *sd,
     sched_item_runstate_change(prev, false, now);
     prev->last_run_time = now;
 
-    ASSERT(next->vcpu->runstate.state != RUNSTATE_running);
+    ASSERT(!item_running(next));
     sched_item_runstate_change(next, true, now);
 
     /*
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 9688d174e4..49724aafd0 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -107,15 +107,41 @@ static inline bool is_idle_item(const struct sched_item *item)
     return is_idle_vcpu(item->vcpu);
 }
 
+static inline bool item_running(const struct sched_item *item)
+{
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        if ( v->runstate.state == RUNSTATE_running )
+            return true;
+
+    return false;
+}
+
 static inline bool item_runnable(const struct sched_item *item)
 {
-    return vcpu_runnable(item->vcpu);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        if ( vcpu_runnable(v) )
+            return true;
+
+    return false;
 }
 
 static inline void sched_set_res(struct sched_item *item,
                                  struct sched_resource *res)
 {
-    item->vcpu->processor = res->processor;
+    int cpu = cpumask_first(res->cpus);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+    {
+        ASSERT(cpu < nr_cpu_ids);
+        v->processor = cpu;
+        cpu = cpumask_next(cpu, res->cpus);
+    }
+
     item->res = res;
 }
 
@@ -127,25 +153,37 @@ static inline unsigned int sched_item_cpu(struct sched_item *item)
 static inline void sched_set_pause_flags(struct sched_item *item,
                                          unsigned int bit)
 {
-    __set_bit(bit, &item->vcpu->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        __set_bit(bit, &v->pause_flags);
 }
 
 static inline void sched_clear_pause_flags(struct sched_item *item,
                                            unsigned int bit)
 {
-    __clear_bit(bit, &item->vcpu->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        __clear_bit(bit, &v->pause_flags);
 }
 
 static inline void sched_set_pause_flags_atomic(struct sched_item *item,
                                                 unsigned int bit)
 {
-    set_bit(bit, &item->vcpu->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        set_bit(bit, &v->pause_flags);
 }
 
 static inline void sched_clear_pause_flags_atomic(struct sched_item *item,
                                                   unsigned int bit)
 {
-    clear_bit(bit, &item->vcpu->pause_flags);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        clear_bit(bit, &v->pause_flags);
 }
 
 static inline struct sched_item *sched_idle_item(unsigned int cpu)
@@ -327,12 +365,18 @@ static inline void sched_free_domdata(const struct scheduler *s,
 
 static inline void sched_item_pause_nosync(struct sched_item *item)
 {
-    vcpu_pause_nosync(item->vcpu);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        vcpu_pause_nosync(v);
 }
 
 static inline void sched_item_unpause(struct sched_item *item)
 {
-    vcpu_unpause(item->vcpu);
+    struct vcpu *v;
+
+    for_each_sched_item_vcpu( item, v )
+        vcpu_unpause(v);
 }
 
 #define REGISTER_SCHEDULER(x) static const struct scheduler *x##_entry \
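
To illustrate how the now multi-vcpu aware helpers are meant to compose,
here is a small hypothetical caller (not part of this patch; move_item()
is made up purely for illustration):

/* Hypothetical example, not part of this patch: park all vcpus of an item,
 * assign the item to another scheduling resource (which spreads the vcpus
 * over res->cpus), then let them run again. */
static void move_item(struct sched_item *item, struct sched_resource *res)
{
    sched_item_pause_nosync(item);   /* pauses every vcpu of the item */
    sched_set_res(item, res);        /* one cpu of res->cpus per vcpu */
    sched_item_unpause(item);        /* unpauses every vcpu again */
}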