Message ID: 1470967188.6250.48.camel@citrix.com (mailing list archive)
State: New, archived
>>> On 12.08.16 at 03:59, <dario.faggioli@citrix.com> wrote:
> On Fri, 2016-08-05 at 07:24 -0600, Jan Beulich wrote:
>> I'd really like to have those backported, but I have to ask one
>> of you to identify which prereq-s are needed on 4.6 and 4.5
>> (I'll revert them from 4.5 right away, but I'll wait for an osstest
>> flight to confirm the same issue exists on 4.6).
>>
> So, for 4.6, I think the only prerequisite would be this:
>
>   6b53bb4ab3c9bd5eccde88a5175cf72589ba6d52
>   "sched: better handle (not) inserting idle vCPUs in runqueues"
>
> That, however, does not apply cleanly. The important part of it is the
> last hunk:
>
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index 92057eb..c195129 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -240,20 +240,22 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
>      init_timer(&v->poll_timer, poll_timer_fn,
>                 v, v->processor);
>
> -    /* Idle VCPUs are scheduled immediately. */
> +    v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
> +    if ( v->sched_priv == NULL )
> +        return 1;
> +
> +    TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
> +
> +    /* Idle VCPUs are scheduled immediately, so don't put them in runqueue. */
>      if ( is_idle_domain(d) )
>      {
>          per_cpu(schedule_data, v->processor).curr = v;
>          v->is_running = 1;
>      }
> -
> -    TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
> -
> -    v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
> -    if ( v->sched_priv == NULL )
> -        return 1;
> -
> -    SCHED_OP(DOM2OP(d), insert_vcpu, v);
> +    else
> +    {
> +        SCHED_OP(DOM2OP(d), insert_vcpu, v);
> +    }
>
>      return 0;
>  }
>
> With only this applied, things work for me. The hunk is actually the
> core of the patch, the only real functionality change. The other hunks
> are refactoring and cleanups (made possible by it).
>
> So, I'm not sure whether the best route here is:
> - fully backport 6b53bb4ab3c9b;
> - backport only the last hunk of 6b53bb4ab3c9b as its own patch;
> - fold the last hunk of 6b53bb4ab3c9b in the backport of George's
>   patch (I mean, what was 83dff3992a89 in staging-4.6);
>
> Thoughts?

First of all - thanks a lot for helping out here. With the above extra
commit things are indeed back to normal again for me. Since the
adjustments to that commit to make it apply were mostly mechanical,
I think I'd prefer taking the entire backport. Same for 4.5 then,
where the backport adjusted for 4.6 then applied cleanly.

Jan
On Fri, 2016-08-12 at 07:53 -0600, Jan Beulich wrote:
> > > > On 12.08.16 at 03:59, <dario.faggioli@citrix.com> wrote:
> > So, I'm not sure whether the best route here is:
> > - fully backport 6b53bb4ab3c9b;
> > - backport only the last hunk of 6b53bb4ab3c9b as its own patch;
> > - fold the last hunk of 6b53bb4ab3c9b in the backport of George's
> >   patch (I mean, what was 83dff3992a89 in staging-4.6);
> >
> > Thoughts?
> First of all - thanks a lot for helping out here.
>
:-)

> With the above extra
> commit things are indeed back to normal again for me. Since the
> adjustments to that commit to make it apply were mostly
> mechanical, I think I'd prefer taking the entire backport.
>
Fine.

> Same
> for 4.5 then, where the backport adjusted for 4.6 then applied
> cleanly.
>
So, you've done the backports yourself, and you don't want/need me to
do them, right?

I'm asking because that's how I read what you're saying here, but I
don't see that having happened in staging-{4.5,4.6}. If that's me
failing to check, or checking in the wrong place, sorry for the noise.

Regards,
Dario
>>> On 16.08.16 at 12:21, <dario.faggioli@citrix.com> wrote:
> On Fri, 2016-08-12 at 07:53 -0600, Jan Beulich wrote:
>> Same
>> for 4.5 then, where the backport adjusted for 4.6 then applied
>> cleanly.
>>
> So, you've done the backports yourself, and you don't want/need me to
> do them, right?

Indeed.

> I'm asking because that's how I read what you're saying here, but I
> don't see that having happened in staging-{4.5,4.6}. If that's me
> failing to check, or checking in the wrong place, sorry for the noise.

Well, I do things in batches, so these will now simply be part of the
next batch.

Jan
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 92057eb..c195129 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -240,20 +240,22 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
     init_timer(&v->poll_timer, poll_timer_fn,
                v, v->processor);
 
-    /* Idle VCPUs are scheduled immediately. */
+    v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
+    if ( v->sched_priv == NULL )
+        return 1;
+
+    TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
+
+    /* Idle VCPUs are scheduled immediately, so don't put them in runqueue. */
     if ( is_idle_domain(d) )
     {
         per_cpu(schedule_data, v->processor).curr = v;
         v->is_running = 1;
     }
-
-    TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
-
-    v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
-    if ( v->sched_priv == NULL )
-        return 1;
-
-    SCHED_OP(DOM2OP(d), insert_vcpu, v);
+    else
+    {
+        SCHED_OP(DOM2OP(d), insert_vcpu, v);
+    }
 
     return 0;
 }