From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Date: Tue, 28 May 2019 12:32:56 +0200
Message-Id: <20190528103313.1343-44-jgross@suse.com>
In-Reply-To: <20190528103313.1343-1-jgross@suse.com>
References: <20190528103313.1343-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 43/60] xen/sched: add a percpu resource index

Add a percpu variable holding the index of the cpu in the current
sched_resource structure. This index is used to get the correct vcpu of a
sched_unit on a specific cpu.

For now this index will be zero for all cpus, but with core scheduling it
will be possible to have higher values, too.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
RFC V2: new patch (carved out from RFC V1 patch 49)
---
 xen/common/schedule.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index b4e87e2a58..58d3de340e 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -68,6 +68,7 @@ static void poll_timer_fn(void *data);
 /* This is global for now so that private implementations can reach it */
 DEFINE_PER_CPU(struct scheduler *, scheduler);
 DEFINE_PER_CPU(struct sched_resource *, sched_res);
+static DEFINE_PER_CPU(unsigned int, sched_res_idx);
 
 /* Scratch space for cpumasks. */
 DEFINE_PER_CPU(cpumask_t, cpumask_scratch);
@@ -78,6 +79,12 @@ extern const struct scheduler *__start_schedulers_array[], *__end_schedulers_arr
 
 static struct scheduler __read_mostly ops;
 
+static inline struct vcpu *sched_unit2vcpu_cpu(struct sched_unit *unit,
+                                               unsigned int cpu)
+{
+    return unit->domain->vcpu[unit->unit_id + per_cpu(sched_res_idx, cpu)];
+}
+
 static inline struct scheduler *dom_scheduler(const struct domain *d)
 {
     if ( likely(d->cpupool != NULL) )
@@ -1863,7 +1870,7 @@ static void sched_slave(void)
 
     pcpu_schedule_unlock_irq(lock, cpu);
 
-    sched_context_switch(vprev, next->vcpu, now);
+    sched_context_switch(vprev, sched_unit2vcpu_cpu(next, cpu), now);
 }
 
 /*
@@ -1922,7 +1929,7 @@ static void schedule(void)
 
     pcpu_schedule_unlock_irq(lock, cpu);
 
-    vnext = next->vcpu;
+    vnext = sched_unit2vcpu_cpu(next, cpu);
     sched_context_switch(vprev, vnext, now);
 }
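
Not part of the patch: below is a minimal, self-contained sketch of the mapping the new helper implements, using simplified stand-ins for Xen's structures and the per_cpu machinery (the struct layouts, NR_CPUS, vcpu counts and example ids are illustrative assumptions, not Xen code). It shows how, once the per-cpu index can become non-zero (e.g. for sibling threads under core scheduling), the same sched_unit resolves to different vcpus on different cpus.

/*
 * Illustrative sketch only -- simplified stand-ins for struct vcpu,
 * struct domain, struct sched_unit and the per-cpu sched_res_idx
 * variable added by this patch.
 */
#include <stdio.h>

#define NR_CPUS  4
#define NR_VCPUS 8

struct vcpu { int vcpu_id; };

struct domain {
    struct vcpu *vcpu[NR_VCPUS];    /* vcpu array indexed by vcpu_id */
};

struct sched_unit {
    struct domain *domain;
    unsigned int unit_id;           /* id of the first vcpu in the unit */
};

/* Stand-in for the per-cpu sched_res_idx variable. */
static unsigned int sched_res_idx[NR_CPUS];

/* Mirrors the helper introduced by the patch. */
static struct vcpu *sched_unit2vcpu_cpu(struct sched_unit *unit,
                                        unsigned int cpu)
{
    return unit->domain->vcpu[unit->unit_id + sched_res_idx[cpu]];
}

int main(void)
{
    static struct vcpu vcpus[NR_VCPUS];
    static struct domain dom;
    struct sched_unit unit = { .domain = &dom, .unit_id = 2 };
    unsigned int cpu;
    int i;

    for ( i = 0; i < NR_VCPUS; i++ )
    {
        vcpus[i].vcpu_id = i;
        dom.vcpu[i] = &vcpus[i];
    }

    /*
     * Today every cpu has index 0, so a unit always resolves to its first
     * vcpu.  With core scheduling the sibling cpus of a core would get
     * indices 0 and 1, so the same unit resolves to vcpu 2 on cpu 0 and
     * vcpu 3 on cpu 1 (values chosen for illustration).
     */
    sched_res_idx[0] = 0;
    sched_res_idx[1] = 1;

    for ( cpu = 0; cpu < 2; cpu++ )
        printf("cpu %u -> vcpu %d\n", cpu,
               sched_unit2vcpu_cpu(&unit, cpu)->vcpu_id);

    return 0;
}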