[RFC,31/49] xen/sched: use sched_resource cpu instead smp_processor_id in schedulers

Message ID 20190329150934.17694-32-jgross@suse.com (mailing list archive)
State Superseded
Series xen: add core scheduling support

Commit Message

Jürgen Groß March 29, 2019, 3:09 p.m. UTC
Especially in the do_schedule() functions of the different schedulers,
using smp_processor_id() for the local cpu number is correct only if
the sched_item is a single vcpu. As soon as larger sched_items are
used, most of those uses should be replaced by the cpu number of the
local sched_resource instead.

Add a helper to get that sched_resource cpu and modify the schedulers
to use it correctly.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched_arinc653.c |  2 +-
 xen/common/sched_credit.c   | 19 ++++++++--------
 xen/common/sched_credit2.c  | 53 +++++++++++++++++++++++----------------------
 xen/common/sched_null.c     | 17 ++++++++-------
 xen/common/sched_rt.c       | 17 ++++++++-------
 xen/common/schedule.c       |  2 +-
 xen/include/xen/sched-if.h  |  5 +++++
 7 files changed, 62 insertions(+), 53 deletions(-)
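
To make the recurring change easier to spot in the hunks below: the patch
adds one helper to xen/include/xen/sched-if.h and then applies the same
two-line idiom in every scheduler's do_schedule() implementation. The
condensed sketch below merely restates what the diff already contains;
example_do_schedule() is a hypothetical stand-in for csched_schedule(),
csched2_schedule(), rt_schedule() and null_schedule().

/* New helper (sched-if.h hunk): map a physical cpu onto the cpu number
 * of the sched_resource it belongs to. */
static inline unsigned int sched_get_resource_cpu(unsigned int cpu)
{
    return per_cpu(sched_res, cpu)->processor;
}

static struct task_slice example_do_schedule(const struct scheduler *ops,
                                             s_time_t now,
                                             bool tasklet_work_scheduled)
{
    /* Raw physical cpu: kept only where it really matters, e.g. for
     * trace records. */
    const unsigned int cpu = smp_processor_id();
    /* Resource cpu: used for run queues, idler masks, lock assertions
     * and all other per-sched_resource bookkeeping. */
    const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
    struct task_slice ret = { .task = sched_idle_item(sched_cpu),
                              .time = -1, .migrated = 0 };

    /* ... scheduler-specific logic, indexed by sched_cpu, not cpu ... */

    return ret;
}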

Comments

Andrew Cooper March 29, 2019, 7:36 p.m. UTC | #1
On 29/03/2019 15:09, Juergen Gross wrote:
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index 9b5527c1eb..0b5e5e566b 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -347,7 +347,7 @@ int sched_init_vcpu(struct vcpu *v)
>      if ( (item = sched_alloc_item(v)) == NULL )
>          return 1;
>  
> -    if ( is_idle_domain(d) || d->is_pinned )
> +    if ( is_idle_domain(d) )
>          processor = v->vcpu_id;
>      else
>          processor = sched_select_initial_cpu(v);

This looks like a spurious change.  I also don't see an obvious other patch
that it might fit into.

As for the field itself, it is also fairly objectionable.  It is only
ever set for the hardware domain, and only if dom0_vcpus_pin is used,
but the actual pinning information is also reflected in dom0's hard
affinity mask.

In practice, all this flag does is permit the use of VCPUOP_get_physid,
disallow the use of vcpu_set_hard_affinity(), and allow dom0 to attempt
to actually write to MSR_AMD64_NB_CFG, MSR_FAM10H_MMIO_CONF_BASE,
MSR_IA32_UCODE_REV, MSR_IA32_THERM_CONTROL and
MSR_IA32_ENERGY_PERF_BIAS, rather than having the write silently discarded.

Dom0's use of those MSRs is dubious at best, and disabled by default,
*and* when active, also cross-checks with the hard affinity mask.  Does
anyone use dom0_vcpus_pin in production?

I think there is quite a lot of value in getting rid of d->is_pinned and
is_pinned_vcpu() entirely, which will remove an extreme
corner-case x86-ism from the common code.
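
For reference, the check in question (paraphrased from memory, details may
differ in the actual tree) already folds in the hard affinity, which is
what makes the flag largely redundant:

/* Approximate shape of the existing macro in xen/include/xen/sched.h. */
#define is_pinned_vcpu(v) ((v)->domain->is_pinned && \
                           cpumask_test_cpu((v)->processor, \
                                            (v)->cpu_hard_affinity))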

~Andrew
Jürgen Groß March 30, 2019, 10:22 a.m. UTC | #2
On 29/03/2019 20:36, Andrew Cooper wrote:
> On 29/03/2019 15:09, Juergen Gross wrote:
>> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
>> index 9b5527c1eb..0b5e5e566b 100644
>> --- a/xen/common/schedule.c
>> +++ b/xen/common/schedule.c
>> @@ -347,7 +347,7 @@ int sched_init_vcpu(struct vcpu *v)
>>      if ( (item = sched_alloc_item(v)) == NULL )
>>          return 1;
>>  
>> -    if ( is_idle_domain(d) || d->is_pinned )
>> +    if ( is_idle_domain(d) )
>>          processor = v->vcpu_id;
>>      else
>>          processor = sched_select_initial_cpu(v);
> 
> This looks like a spurious change.  I also don't see an obvious other patch
> that it might fit into.

That should be in patch 30.

> As for the field itself, it is also fairly objectionable.  It is only
> ever set for the hardware domain, and only if dom0_vcpus_pin is used,
> but the actual pinning information is also reflected in dom0's hard
> affinity mask.

Right.

> In practice, all this flag does is permit the use of VCPUOP_get_physid,
> disallow the use of vcpu_set_hard_affinity(), and allow dom0 to attempt
> to actually write to MSR_AMD64_NB_CFG, MSR_FAM10H_MMIO_CONF_BASE,
> MSR_IA32_UCODE_REV, MSR_IA32_THERM_CONTROL and
> MSR_IA32_ENERGY_PERF_BIAS, rather than having the write silently discarded.
> 
> Dom0's use of those MSRs is dubious at best, and disabled by default,
> *and* when active, also cross-checks with the hard affinity mask.  Does
> anyone use dom0_vcpus_pin in production?

I have seen it on customer systems.

> I think there is quite a lot of value in getting rid of d->is_pinned and
> is_pinned_vcpu() entirely, which will remove an extreme
> corner-case x86-ism from the common code.

Right. I don't see a reason why we need anything other than the hard
affinity for this option.

Allowing vcpus to be re-pinned (or unpinned) should be fine. And use of
said MSRs could be restricted to cases where the correct hard affinity
is active.
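
A minimal sketch of what such a check could look like (hypothetical
helper name, and assuming the hard affinity is still reachable as
v->cpu_hard_affinity at this point of the series):

/* Hypothetical helper, illustrative only: treat a hardware domain vcpu
 * as "pinned enough" for the MSR paths iff its hard affinity is exactly
 * the cpu it is currently running on, without any per-domain flag. */
static inline bool hwdom_vcpu_effectively_pinned(const struct vcpu *v)
{
    return is_hardware_domain(v->domain) &&
           cpumask_weight(v->cpu_hard_affinity) == 1 &&
           cpumask_test_cpu(v->processor, v->cpu_hard_affinity);
}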


Juergen
Jan Beulich April 1, 2019, 8:10 a.m. UTC | #3
>>> On 30.03.19 at 11:22, <jgross@suse.com> wrote:
> On 29/03/2019 20:36, Andrew Cooper wrote:
>> In practice, all this flag does is permit the use of VCPUOP_get_physid,
>> disallow the use of vcpu_set_hard_affinity(), and allow dom0 to attempt
>> to actually write to MSR_AMD64_NB_CFG, MSR_FAM10H_MMIO_CONF_BASE,
>> MSR_IA32_UCODE_REV, MSR_IA32_THERM_CONTROL and
>> MSR_IA32_ENERGY_PERF_BIAS, rather than having the write silently discarded.
>> 
>> Dom0's use of those MSRs is dubious at best, and disabled by default,
>> *and* when active, also cross-checks with the hard affinity mask.  Does
>> anyone use dom0_vcpus_pin in production?
> 
> I have seen it on customer systems.

Same here, but I've never seen it used for a good reason.

>> I think there is quite a lot of value in getting rid of d->is_pinned and
>> is_pinned_vcpu() entirely, which will remove an extreme
>> corner-case x86-ism from the common code.

I think its origin was "cpufreq=dom0-kernel", which should then go away
together with it.

Jan
Jürgen Groß April 1, 2019, 8:46 a.m. UTC | #4
On 01/04/2019 10:10, Jan Beulich wrote:
>>>> On 30.03.19 at 11:22, <jgross@suse.com> wrote:
>> On 29/03/2019 20:36, Andrew Cooper wrote:
>>> In practice, all this flag does is permit the use of VCPUOP_get_physid,
>>> disallow the use of vcpu_set_hard_affinity(), and allow dom0 to attempt
>>> to actually write to MSR_AMD64_NB_CFG, MSR_FAM10H_MMIO_CONF_BASE,
>>> MSR_IA32_UCODE_REV, MSR_IA32_THERM_CONTROL and
>>> MSR_IA32_ENERGY_PERF_BIAS, rather than having the write silently discarded.
>>>
>>> Dom0's use of those MSRs is dubious at best, and disabled by default,
>>> *and* when active, also cross-checks with the hard affinity mask.  Does
>>> anyone use dom0_vcpus_pin in production?
>>
>> I have seen it on customer systems.
> 
> Same here, but I've never seen it used for a good reason.
> 
>>> I think there is quite a lot of value in getting rid of d->is_pinned and
>>> is_pinned_vcpu() entirely, which will remove an extreme
>>> corner-case x86-ism from the common code.
> 
> I think its origin was "cpufreq=dom0-kernel", which I think should go
> away with it then.

Fine with me. Another candidate for a later cleanup, I guess.


Juergen

Patch

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 61f9ea6824..3919c0a3e9 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -513,7 +513,7 @@  a653sched_do_schedule(
     static unsigned int sched_index = 0;
     static s_time_t next_switch_time;
     a653sched_priv_t *sched_priv = SCHED_PRIV(ops);
-    const unsigned int cpu = smp_processor_id();
+    const unsigned int cpu = sched_get_resource_cpu(smp_processor_id());
     unsigned long flags;
 
     spin_lock_irqsave(&sched_priv->lock, flags);
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 9db5c3fc71..4734f52fc7 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1839,8 +1839,9 @@  static struct task_slice
 csched_schedule(
     const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
 {
-    const int cpu = smp_processor_id();
-    struct list_head * const runq = RUNQ(cpu);
+    const unsigned int cpu = smp_processor_id();
+    const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
+    struct list_head * const runq = RUNQ(sched_cpu);
     struct sched_item *item = current->sched_item;
     struct csched_item * const scurr = CSCHED_ITEM(item);
     struct csched_private *prv = CSCHED_PRIV(ops);
@@ -1950,7 +1951,7 @@  csched_schedule(
     {
         BUG_ON( is_idle_item(item) || list_empty(runq) );
         /* Current has blocked. Update the runnable counter for this cpu. */
-        dec_nr_runnable(cpu);
+        dec_nr_runnable(sched_cpu);
     }
 
     snext = __runq_elem(runq->next);
@@ -1960,7 +1961,7 @@  csched_schedule(
     if ( tasklet_work_scheduled )
     {
         TRACE_0D(TRC_CSCHED_SCHED_TASKLET);
-        snext = CSCHED_ITEM(sched_idle_item(cpu));
+        snext = CSCHED_ITEM(sched_idle_item(sched_cpu));
         snext->pri = CSCHED_PRI_TS_BOOST;
     }
 
@@ -1980,7 +1981,7 @@  csched_schedule(
     if ( snext->pri > CSCHED_PRI_TS_OVER )
         __runq_remove(snext);
     else
-        snext = csched_load_balance(prv, cpu, snext, &ret.migrated);
+        snext = csched_load_balance(prv, sched_cpu, snext, &ret.migrated);
 
     /*
      * Update idlers mask if necessary. When we're idling, other CPUs
@@ -1988,12 +1989,12 @@  csched_schedule(
      */
     if ( !tasklet_work_scheduled && snext->pri == CSCHED_PRI_IDLE )
     {
-        if ( !cpumask_test_cpu(cpu, prv->idlers) )
-            cpumask_set_cpu(cpu, prv->idlers);
+        if ( !cpumask_test_cpu(sched_cpu, prv->idlers) )
+            cpumask_set_cpu(sched_cpu, prv->idlers);
     }
-    else if ( cpumask_test_cpu(cpu, prv->idlers) )
+    else if ( cpumask_test_cpu(sched_cpu, prv->idlers) )
     {
-        cpumask_clear_cpu(cpu, prv->idlers);
+        cpumask_clear_cpu(sched_cpu, prv->idlers);
     }
 
     if ( !is_idle_item(snext->item) )
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 7918d46a23..d5cb8c0200 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -3447,7 +3447,8 @@  static struct task_slice
 csched2_schedule(
     const struct scheduler *ops, s_time_t now, bool tasklet_work_scheduled)
 {
-    const int cpu = smp_processor_id();
+    const unsigned int cpu = smp_processor_id();
+    const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
     struct csched2_runqueue_data *rqd;
     struct sched_item *curritem = current->sched_item;
     struct csched2_item * const scurr = csched2_item(curritem);
@@ -3459,22 +3460,22 @@  csched2_schedule(
     SCHED_STAT_CRANK(schedule);
     CSCHED2_ITEM_CHECK(curritem);
 
-    BUG_ON(!cpumask_test_cpu(cpu, &csched2_priv(ops)->initialized));
+    BUG_ON(!cpumask_test_cpu(sched_cpu, &csched2_priv(ops)->initialized));
 
-    rqd = c2rqd(ops, cpu);
-    BUG_ON(!cpumask_test_cpu(cpu, &rqd->active));
+    rqd = c2rqd(ops, sched_cpu);
+    BUG_ON(!cpumask_test_cpu(sched_cpu, &rqd->active));
 
-    ASSERT(spin_is_locked(per_cpu(sched_res, cpu)->schedule_lock));
+    ASSERT(spin_is_locked(per_cpu(sched_res, sched_cpu)->schedule_lock));
 
     BUG_ON(!is_idle_item(curritem) && scurr->rqd != rqd);
 
     /* Clear "tickled" bit now that we've been scheduled */
-    tickled = cpumask_test_cpu(cpu, &rqd->tickled);
+    tickled = cpumask_test_cpu(sched_cpu, &rqd->tickled);
     if ( tickled )
     {
-        __cpumask_clear_cpu(cpu, &rqd->tickled);
+        __cpumask_clear_cpu(sched_cpu, &rqd->tickled);
         cpumask_andnot(cpumask_scratch, &rqd->idle, &rqd->tickled);
-        smt_idle_mask_set(cpu, cpumask_scratch, &rqd->smt_idle);
+        smt_idle_mask_set(sched_cpu, cpumask_scratch, &rqd->smt_idle);
     }
 
     if ( unlikely(tb_init_done) )
@@ -3484,10 +3485,10 @@  csched2_schedule(
             unsigned tasklet:8, idle:8, smt_idle:8, tickled:8;
         } d;
         d.cpu = cpu;
-        d.rq_id = c2r(cpu);
+        d.rq_id = c2r(sched_cpu);
         d.tasklet = tasklet_work_scheduled;
         d.idle = is_idle_item(curritem);
-        d.smt_idle = cpumask_test_cpu(cpu, &rqd->smt_idle);
+        d.smt_idle = cpumask_test_cpu(sched_cpu, &rqd->smt_idle);
         d.tickled = tickled;
         __trace_var(TRC_CSCHED2_SCHEDULE, 1,
                     sizeof(d),
@@ -3527,10 +3528,10 @@  csched2_schedule(
     {
         __clear_bit(__CSFLAG_item_yield, &scurr->flags);
         trace_var(TRC_CSCHED2_SCHED_TASKLET, 1, 0, NULL);
-        snext = csched2_item(sched_idle_item(cpu));
+        snext = csched2_item(sched_idle_item(sched_cpu));
     }
     else
-        snext = runq_candidate(rqd, scurr, cpu, now, &skipped_items);
+        snext = runq_candidate(rqd, scurr, sched_cpu, now, &skipped_items);
 
     /* If switching from a non-idle runnable item, put it
      * back on the runqueue. */
@@ -3555,10 +3556,10 @@  csched2_schedule(
         }
 
         /* Clear the idle mask if necessary */
-        if ( cpumask_test_cpu(cpu, &rqd->idle) )
+        if ( cpumask_test_cpu(sched_cpu, &rqd->idle) )
         {
-            __cpumask_clear_cpu(cpu, &rqd->idle);
-            smt_idle_mask_clear(cpu, &rqd->smt_idle);
+            __cpumask_clear_cpu(sched_cpu, &rqd->idle);
+            smt_idle_mask_clear(sched_cpu, &rqd->smt_idle);
         }
 
         /*
@@ -3577,18 +3578,18 @@  csched2_schedule(
          */
         if ( skipped_items == 0 && snext->credit <= CSCHED2_CREDIT_RESET )
         {
-            reset_credit(ops, cpu, now, snext);
-            balance_load(ops, cpu, now);
+            reset_credit(ops, sched_cpu, now, snext);
+            balance_load(ops, sched_cpu, now);
         }
 
         snext->start_time = now;
         snext->tickled_cpu = -1;
 
         /* Safe because lock for old processor is held */
-        if ( sched_item_cpu(snext->item) != cpu )
+        if ( sched_item_cpu(snext->item) != sched_cpu )
         {
             snext->credit += CSCHED2_MIGRATE_COMPENSATION;
-            sched_set_res(snext->item, per_cpu(sched_res, cpu));
+            sched_set_res(snext->item, per_cpu(sched_res, sched_cpu));
             SCHED_STAT_CRANK(migrated);
             ret.migrated = 1;
         }
@@ -3601,17 +3602,17 @@  csched2_schedule(
          */
         if ( tasklet_work_scheduled )
         {
-            if ( cpumask_test_cpu(cpu, &rqd->idle) )
+            if ( cpumask_test_cpu(sched_cpu, &rqd->idle) )
             {
-                __cpumask_clear_cpu(cpu, &rqd->idle);
-                smt_idle_mask_clear(cpu, &rqd->smt_idle);
+                __cpumask_clear_cpu(sched_cpu, &rqd->idle);
+                smt_idle_mask_clear(sched_cpu, &rqd->smt_idle);
             }
         }
-        else if ( !cpumask_test_cpu(cpu, &rqd->idle) )
+        else if ( !cpumask_test_cpu(sched_cpu, &rqd->idle) )
         {
-            __cpumask_set_cpu(cpu, &rqd->idle);
+            __cpumask_set_cpu(sched_cpu, &rqd->idle);
             cpumask_andnot(cpumask_scratch, &rqd->idle, &rqd->tickled);
-            smt_idle_mask_set(cpu, cpumask_scratch, &rqd->smt_idle);
+            smt_idle_mask_set(sched_cpu, cpumask_scratch, &rqd->smt_idle);
         }
         /* Make sure avgload gets updated periodically even
          * if there's no activity */
@@ -3621,7 +3622,7 @@  csched2_schedule(
     /*
      * Return task to run next...
      */
-    ret.time = csched2_runtime(ops, cpu, snext, now);
+    ret.time = csched2_runtime(ops, sched_cpu, snext, now);
     ret.task = snext->item;
 
     CSCHED2_ITEM_CHECK(ret.task);
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index ceb026c8af..34ce7a05d3 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -709,6 +709,7 @@  static struct task_slice null_schedule(const struct scheduler *ops,
 {
     unsigned int bs;
     const unsigned int cpu = smp_processor_id();
+    const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
     struct null_private *prv = null_priv(ops);
     struct null_item *wvc;
     struct task_slice ret;
@@ -724,14 +725,14 @@  static struct task_slice null_schedule(const struct scheduler *ops,
         } d;
         d.cpu = cpu;
         d.tasklet = tasklet_work_scheduled;
-        if ( per_cpu(npc, cpu).item == NULL )
+        if ( per_cpu(npc, sched_cpu).item == NULL )
         {
             d.item = d.dom = -1;
         }
         else
         {
-            d.item = per_cpu(npc, cpu).item->item_id;
-            d.dom = per_cpu(npc, cpu).item->domain->domain_id;
+            d.item = per_cpu(npc, sched_cpu).item->item_id;
+            d.dom = per_cpu(npc, sched_cpu).item->domain->domain_id;
         }
         __trace_var(TRC_SNULL_SCHEDULE, 1, sizeof(d), &d);
     }
@@ -739,10 +740,10 @@  static struct task_slice null_schedule(const struct scheduler *ops,
     if ( tasklet_work_scheduled )
     {
         trace_var(TRC_SNULL_TASKLET, 1, 0, NULL);
-        ret.task = sched_idle_item(cpu);
+        ret.task = sched_idle_item(sched_cpu);
     }
     else
-        ret.task = per_cpu(npc, cpu).item;
+        ret.task = per_cpu(npc, sched_cpu).item;
     ret.migrated = 0;
     ret.time = -1;
 
@@ -773,9 +774,9 @@  static struct task_slice null_schedule(const struct scheduler *ops,
                      !has_soft_affinity(wvc->item) )
                     continue;
 
-                if ( item_check_affinity(wvc->item, cpu, bs) )
+                if ( item_check_affinity(wvc->item, sched_cpu, bs) )
                 {
-                    item_assign(prv, wvc->item, cpu);
+                    item_assign(prv, wvc->item, sched_cpu);
                     list_del_init(&wvc->waitq_elem);
                     ret.task = wvc->item;
                     goto unlock;
@@ -787,7 +788,7 @@  static struct task_slice null_schedule(const struct scheduler *ops,
     }
 
     if ( unlikely(ret.task == NULL || !item_runnable(ret.task)) )
-        ret.task = sched_idle_item(cpu);
+        ret.task = sched_idle_item(sched_cpu);
 
     NULL_ITEM_CHECK(ret.task);
     return ret;
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 730aa292d4..2366e33beb 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1065,7 +1065,8 @@  runq_pick(const struct scheduler *ops, const cpumask_t *mask)
 static struct task_slice
 rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
 {
-    const int cpu = smp_processor_id();
+    const unsigned int cpu = smp_processor_id();
+    const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
     struct rt_private *prv = rt_priv(ops);
     struct rt_item *const scurr = rt_item(current->sched_item);
     struct rt_item *snext = NULL;
@@ -1079,7 +1080,7 @@  rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
         } d;
         d.cpu = cpu;
         d.tasklet = tasklet_work_scheduled;
-        d.tickled = cpumask_test_cpu(cpu, &prv->tickled);
+        d.tickled = cpumask_test_cpu(sched_cpu, &prv->tickled);
         d.idle = is_idle_item(curritem);
         trace_var(TRC_RTDS_SCHEDULE, 1,
                   sizeof(d),
@@ -1087,7 +1088,7 @@  rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
     }
 
     /* clear ticked bit now that we've been scheduled */
-    cpumask_clear_cpu(cpu, &prv->tickled);
+    cpumask_clear_cpu(sched_cpu, &prv->tickled);
 
     /* burn_budget would return for IDLE ITEM */
     burn_budget(ops, scurr, now);
@@ -1095,13 +1096,13 @@  rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
     if ( tasklet_work_scheduled )
     {
         trace_var(TRC_RTDS_SCHED_TASKLET, 1, 0,  NULL);
-        snext = rt_item(sched_idle_item(cpu));
+        snext = rt_item(sched_idle_item(sched_cpu));
     }
     else
     {
-        snext = runq_pick(ops, cpumask_of(cpu));
+        snext = runq_pick(ops, cpumask_of(sched_cpu));
         if ( snext == NULL )
-            snext = rt_item(sched_idle_item(cpu));
+            snext = rt_item(sched_idle_item(sched_cpu));
 
         /* if scurr has higher priority and budget, still pick scurr */
         if ( !is_idle_item(curritem) &&
@@ -1126,9 +1127,9 @@  rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_sched
             q_remove(snext);
             __set_bit(__RTDS_scheduled, &snext->flags);
         }
-        if ( sched_item_cpu(snext->item) != cpu )
+        if ( sched_item_cpu(snext->item) != sched_cpu )
         {
-            sched_set_res(snext->item, per_cpu(sched_res, cpu));
+            sched_set_res(snext->item, per_cpu(sched_res, sched_cpu));
             ret.migrated = 1;
         }
         ret.time = snext->cur_budget; /* invoke the scheduler next time */
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 9b5527c1eb..0b5e5e566b 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -347,7 +347,7 @@  int sched_init_vcpu(struct vcpu *v)
     if ( (item = sched_alloc_item(v)) == NULL )
         return 1;
 
-    if ( is_idle_domain(d) || d->is_pinned )
+    if ( is_idle_domain(d) )
         processor = v->vcpu_id;
     else
         processor = sched_select_initial_cpu(v);
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index 18134c7972..38b403dfbf 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -141,6 +141,11 @@  static inline bool vcpu_running(struct vcpu *v)
     return v->sched_item->is_running;
 }
 
+static inline unsigned int sched_get_resource_cpu(unsigned int cpu)
+{
+    return per_cpu(sched_res, cpu)->processor;
+}
+
 /*
  * Scratch space, for avoiding having too many cpumask_t on the stack.
  * Within each scheduler, when using the scratch mask of one pCPU: