From patchwork Mon Jan 31 21:44:25 2011
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 521251
Date: Mon, 31 Jan 2011 16:44:25 -0500
From: Rik van Riel
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Avi Kivity, Srivatsa Vaddagiri,
    Peter Zijlstra, Mike Galbraith, Chris Wright, "Nakajima, Jun"
Subject: [PATCH -v8 4/7] sched: Add yield_to(task, preempt) functionality.
Message-ID: <20110131164425.0aaa2c6e@annuminas.surriel.com>
In-Reply-To: <20110131164049.0ad68e72@annuminas.surriel.com>
References: <20110131164049.0ad68e72@annuminas.surriel.com>
Organization: Red Hat, Inc.

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2c79e92..6c43fc4 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1047,6 +1047,7 @@ struct sched_class {
 	void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags);
 	void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags);
 	void (*yield_task) (struct rq *rq);
+	bool (*yield_to_task) (struct rq *rq, struct task_struct *p, bool preempt);
 
 	void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags);
 
@@ -1943,6 +1944,7 @@ static inline int rt_mutex_getprio(struct task_struct *p)
 # define rt_mutex_adjust_pi(p)		do { } while (0)
 #endif
 
+extern bool yield_to(struct task_struct *p, bool preempt);
 extern void set_user_nice(struct task_struct *p, long nice);
 extern int task_prio(const struct task_struct *p);
 extern int task_nice(const struct task_struct *p);
diff --git a/kernel/sched.c b/kernel/sched.c
index 7ff53e2..5e70156 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5270,6 +5270,56 @@ void __sched yield(void)
 }
 EXPORT_SYMBOL(yield);
 
+/**
+ * yield_to - yield the current processor to another thread in
+ * your thread group, or accelerate that thread toward the
+ * processor it's on.
+ *
+ * It's the caller's job to ensure that the target task struct
+ * can't go away on us before we can do any checks.
+ *
+ * Returns true if we indeed boosted the target task.
+ */
+bool __sched yield_to(struct task_struct *p, bool preempt)
+{
+	struct task_struct *curr = current;
+	struct rq *rq, *p_rq;
+	unsigned long flags;
+	bool yielded = false;
+
+	local_irq_save(flags);
+	rq = this_rq();
+
+again:
+	p_rq = task_rq(p);
+	double_rq_lock(rq, p_rq);
+	while (task_rq(p) != p_rq) {
+		double_rq_unlock(rq, p_rq);
+		goto again;
+	}
+
+	if (!curr->sched_class->yield_to_task)
+		goto out;
+
+	if (curr->sched_class != p->sched_class)
+		goto out;
+
+	if (task_running(p_rq, p) || p->state)
+		goto out;
+
+	yielded = curr->sched_class->yield_to_task(rq, p, preempt);
+
+out:
+	double_rq_unlock(rq, p_rq);
+	local_irq_restore(flags);
+
+	if (yielded)
+		yield();
+
+	return yielded;
+}
+EXPORT_SYMBOL_GPL(yield_to);
+
 /*
  * This task is about to go to sleep on IO. Increment rq->nr_iowait so
  * that process accounting knows that this is a task in IO wait state.
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index a56d410..be729d7 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1800,6 +1800,23 @@ static void yield_task_fair(struct rq *rq)
 	set_skip_buddy(se);
 }
 
+static bool yield_to_task_fair(struct rq *rq, struct task_struct *p, bool preempt)
+{
+	struct sched_entity *se = &p->se;
+
+	if (!se->on_rq)
+		return false;
+
+	/* Tell the scheduler that we'd really like se to run next. */
+	set_next_buddy(se);
+
+	/* Make p's CPU reschedule; pick_next_entity() takes care of fairness. */
+	if (preempt)
+		resched_task(rq->curr);
+
+	return true;
+}
+
 #ifdef CONFIG_SMP
 /**************************************************
  * Fair scheduling class load-balancing methods:
@@ -3993,6 +4010,7 @@ static const struct sched_class fair_sched_class = {
 	.enqueue_task		= enqueue_task_fair,
 	.dequeue_task		= dequeue_task_fair,
 	.yield_task		= yield_task_fair,
+	.yield_to_task		= yield_to_task_fair,
 
 	.check_preempt_curr	= check_preempt_wakeup,
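
For context, a minimal sketch of the expected calling convention. This is
illustrative only and not part of the patch; the helper name
boost_probable_lock_holder() is hypothetical. Later patches in this series
call yield_to() from KVM's kvm_vcpu_on_spin(), and any real caller must pin
the target task struct first, as the kernel-doc above requires:

	#include <linux/sched.h>

	/*
	 * Hypothetical caller, for illustration only.  get_task_struct()
	 * pins the target so it cannot go away while yield_to() runs.
	 */
	static bool boost_probable_lock_holder(struct task_struct *target)
	{
		bool yielded;

		get_task_struct(target);

		/*
		 * preempt == true asks the target's CPU to reschedule
		 * right away instead of waiting for its next natural
		 * scheduling point.
		 */
		yielded = yield_to(target, true);

		put_task_struct(target);
		return yielded;
	}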