Message ID | 146894244486.483.13984277930071960202.stgit@Solace.fritz.box (mailing list archive) |
---|---
State | New, archived |
On Tue, Jul 19, 2016 at 4:34 PM, Dario Faggioli <dario.faggioli@citrix.com> wrote:
> In fact, when not finding a suitable runqueue where to
> place a vCPU, and hence using a fallback, we either:
>  - don't issue any trace record (while we should),
>  - risk underrunning when accessing the runqueues
>    array, while preparing the trace record.
>
> Fix both issues and, while there, also a couple of style
> problems found nearby.
>
> Spotted by Coverity.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Anshul Makkar <anshul.makkar@citrix.com>
> ---
> Changes from v1:
>  * cite Coverity in the changelog.
> ---
>  xen/common/sched_credit2.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index a55240f..3009ff9 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -1443,7 +1443,8 @@ csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
>      {
>          /* We may be here because someone requested us to migrate. */
>          __clear_bit(__CSFLAG_runq_migrate_request, &svc->flags);
> -        return get_fallback_cpu(svc);
> +        new_cpu = get_fallback_cpu(svc);
> +        goto out;
>      }
>
>      /* First check to see if we're here because someone else suggested a place
> @@ -1505,7 +1506,7 @@ csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
>          if ( rqd_avgload < min_avgload )
>          {
>              min_avgload = rqd_avgload;
> -            min_rqi=i;
> +            min_rqi = i;
>          }
>      }
>
> @@ -1520,20 +1521,20 @@ csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
>          BUG_ON(new_cpu >= nr_cpu_ids);
>      }
>
> -out_up:
> + out_up:
>      read_unlock(&prv->lock);
> -
> + out:
>      if ( unlikely(tb_init_done) )
>      {
>          struct {
>              uint64_t b_avgload;
>              unsigned vcpu:16, dom:16;
>              unsigned rq_id:16, new_cpu:16;
> -        } d;
> -        d.b_avgload = prv->rqd[min_rqi].b_avgload;
> +        } d;
>          d.dom = vc->domain->domain_id;
>          d.vcpu = vc->vcpu_id;
>          d.rq_id = c2r(ops, new_cpu);
> +        d.b_avgload = prv->rqd[d.rq_id].b_avgload;

Hmm, actually -- is this unlocked access to the prv structure the best
idea?  It looks like at the moment nothing bad should happen (as we don't
re-initialize a pcpu's entry in prv->runq_map[] to -1 when de-initializing
the pcpu), but if we ever *did*, then there'd be a race condition we could
possibly trip over.

Sorry for missing this during review.  What about having a local variable
that we initialize to something sensible (like 0 or -1) and setting it
before the read_unlock()?

 -George