From patchwork Tue May 3 21:46:59 2016
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9008631
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Date: Tue, 03 May 2016 23:46:59 +0200
Message-ID: <146231201861.25631.15476137738176988146.stgit@Solace.fritz.box>
In-Reply-To: <146231184906.25631.6550047090421454264.stgit@Solace.fritz.box>
References: <146231184906.25631.6550047090421454264.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Cc: Tianyang Chen, Wei Liu, Meng Xu, George Dunlap
Subject: [Xen-devel] [PATCH for 4.7 4/4] xen: adopt .deinit_pdata and improve timer handling

The scheduling hooks API is now used properly, and no initialization
or de-initialization happens in alloc/free_pdata any longer. In fact,
just as with Credit2, there is no real need to implement alloc_pdata
and free_pdata at all.

This also makes it possible to improve the replenishment timer
handling logic, so that the timer is always kept on one of the pCPUs
of the scheduler it is servicing. Before this commit, even if the
pCPU where the timer happened to be initialized at creation time was
moved to another cpupool, the timer stayed there, potentially
interfering with the new scheduler of that pCPU.
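A note on the mechanism, as a toy sketch (not part of the patch):
everything below keys off the timer's status field. Because rt_init()
now allocates the timer with xzalloc(), the zeroed struct starts out
as TIMER_STATUS_invalid, which is how the attach paths can tell
"allocated but never initialized" apart from "killed because the pool
lost its last pCPU". The stand-alone C sketch below mimics that
lifecycle; struct toy_timer and maybe_init_timer() are made-up
stand-ins for Xen's struct timer and init_timer(), and the status
values are assumed to match xen/include/xen/timer.h:

    #include <stdio.h>

    /* Status values assumed to match xen/include/xen/timer.h. */
    enum {
        TIMER_STATUS_invalid  = 0, /* fresh from xzalloc(): never initialized */
        TIMER_STATUS_inactive = 1, /* initialized, not currently pending      */
        TIMER_STATUS_killed   = 2, /* kill_timer()'d: needs re-initialization */
    };

    struct toy_timer { int status; int cpu; };

    /*
     * Mirrors the check rt_switch_sched() does when a pCPU is attached:
     * (re)initialize the timer only if nobody did before (invalid), or
     * if it was killed when the pool ran out of pCPUs (killed).
     * rt_init_pdata() only needs the 'invalid' half of this test.
     */
    static void maybe_init_timer(struct toy_timer *t, int cpu)
    {
        if ( t->status == TIMER_STATUS_invalid ||
             t->status == TIMER_STATUS_killed )
        {
            /* stands in for init_timer(t, repl_timer_handler, ops, cpu) */
            t->status = TIMER_STATUS_inactive;
            t->cpu = cpu;
            printf("timer (re)initialized on cpu %d\n", cpu);
        }
    }

    int main(void)
    {
        struct toy_timer t = { 0 };     /* xzalloc(): status == invalid */

        maybe_init_timer(&t, 3);        /* first pCPU of the pool: init */
        maybe_init_timer(&t, 5);        /* later pCPUs: no-op           */

        t.status = TIMER_STATUS_killed; /* last pCPU gone: kill_timer() */
        maybe_init_timer(&t, 5);        /* pool repopulated: re-init    */
        return 0;
    }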
Signed-off-by: Dario Faggioli
Acked-by: George Dunlap
---
Cc: Meng Xu
Cc: George Dunlap
Cc: Tianyang Chen
Cc: Wei Liu
---
 xen/common/sched_rt.c | 74 ++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 55 insertions(+), 19 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 673fc92..7f8f411 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -590,6 +590,10 @@ rt_init(struct scheduler *ops)
     if ( prv == NULL )
         return -ENOMEM;
 
+    prv->repl_timer = xzalloc(struct timer);
+    if ( prv->repl_timer == NULL )
+        return -ENOMEM;
+
     spin_lock_init(&prv->lock);
     INIT_LIST_HEAD(&prv->sdom);
     INIT_LIST_HEAD(&prv->runq);
@@ -600,12 +604,6 @@ rt_init(struct scheduler *ops)
 
     ops->sched_data = prv;
 
-    /*
-     * The timer initialization will happen later when
-     * the first pcpu is added to this pool in alloc_pdata.
-     */
-    prv->repl_timer = NULL;
-
     return 0;
 }
 
@@ -614,7 +612,8 @@ rt_deinit(struct scheduler *ops)
 {
     struct rt_private *prv = rt_priv(ops);
 
-    kill_timer(prv->repl_timer);
+    ASSERT(prv->repl_timer->status == TIMER_STATUS_invalid ||
+           prv->repl_timer->status == TIMER_STATUS_killed);
     xfree(prv->repl_timer);
 
     ops->sched_data = NULL;
@@ -632,9 +631,19 @@ rt_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
     spinlock_t *old_lock;
     unsigned long flags;
 
-    /* Move the scheduler lock to our global runqueue lock. */
     old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
 
+    /*
+     * TIMER_STATUS_invalid means we are the first cpu that sees the timer
+     * allocated but not initialized, and so it's up to us to initialize it.
+     */
+    if ( prv->repl_timer->status == TIMER_STATUS_invalid )
+    {
+        init_timer(prv->repl_timer, repl_timer_handler, (void*) ops, cpu);
+        dprintk(XENLOG_DEBUG, "RTDS: timer initialized on cpu %u\n", cpu);
+    }
+
+    /* Move the scheduler lock to our global runqueue lock. */
     per_cpu(schedule_data, cpu).schedule_lock = &prv->lock;
 
     /* _Not_ pcpu_schedule_unlock(): per_cpu().schedule_lock changed! */
@@ -659,6 +668,20 @@ rt_switch_sched(struct scheduler *new_ops, unsigned int cpu,
      */
     ASSERT(per_cpu(schedule_data, cpu).schedule_lock != &prv->lock);
 
+    /*
+     * If we are the absolute first cpu being switched toward this
+     * scheduler (in which case we'll see TIMER_STATUS_invalid), or the
+     * first one that is added back to the cpupool that had all its cpus
+     * removed (in which case we'll see TIMER_STATUS_killed), it's our
+     * job to (re)initialize the timer.
+     */
+    if ( prv->repl_timer->status == TIMER_STATUS_invalid ||
+         prv->repl_timer->status == TIMER_STATUS_killed )
+    {
+        init_timer(prv->repl_timer, repl_timer_handler, (void*) new_ops, cpu);
+        dprintk(XENLOG_DEBUG, "RTDS: timer initialized on cpu %u\n", cpu);
+    }
+
     idle_vcpu[cpu]->sched_priv = vdata;
     per_cpu(scheduler, cpu) = new_ops;
     per_cpu(schedule_data, cpu).sched_priv = NULL; /* no pdata */
@@ -672,23 +695,36 @@ rt_switch_sched(struct scheduler *new_ops, unsigned int cpu,
     per_cpu(schedule_data, cpu).schedule_lock = &prv->lock;
 }
 
-static void *
-rt_alloc_pdata(const struct scheduler *ops, int cpu)
+static void
+rt_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
+    unsigned long flags;
     struct rt_private *prv = rt_priv(ops);
 
-    if ( prv->repl_timer == NULL )
-    {
-        /* Allocate the timer on the first cpu of this pool. */
-        prv->repl_timer = xzalloc(struct timer);
+    spin_lock_irqsave(&prv->lock, flags);
 
-        if ( prv->repl_timer == NULL )
-            return ERR_PTR(-ENOMEM);
+    if ( prv->repl_timer->cpu == cpu )
+    {
+        struct cpupool *c = per_cpu(cpupool, cpu);
+        unsigned int new_cpu = cpumask_cycle(cpu, cpupool_online_cpumask(c));
 
-        init_timer(prv->repl_timer, repl_timer_handler, (void *)ops, cpu);
+        /*
+         * Make sure the timer runs on one of the cpus that are still
+         * available to this scheduler. If there aren't any left, it means
+         * it's time to just kill it.
+         */
+        if ( new_cpu >= nr_cpu_ids )
+        {
+            kill_timer(prv->repl_timer);
+            dprintk(XENLOG_DEBUG, "RTDS: timer killed on cpu %d\n", cpu);
+        }
+        else
+        {
+            migrate_timer(prv->repl_timer, new_cpu);
+        }
     }
 
-    return NULL;
+    spin_unlock_irqrestore(&prv->lock, flags);
 }
 
 static void *
@@ -1433,9 +1469,9 @@ static const struct scheduler sched_rtds_def = {
     .dump_settings  = rt_dump,
     .init           = rt_init,
     .deinit         = rt_deinit,
-    .alloc_pdata    = rt_alloc_pdata,
     .init_pdata     = rt_init_pdata,
     .switch_sched   = rt_switch_sched,
+    .deinit_pdata   = rt_deinit_pdata,
     .alloc_domdata  = rt_alloc_domdata,
     .free_domdata   = rt_free_domdata,
     .init_domain    = rt_dom_init,
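As a closing note, the migrate-or-kill decision at the heart of the new
rt_deinit_pdata() can be read in isolation. Below is a minimal,
stand-alone C sketch of it; deinit_pdata_timer_logic() and
toy_cpumask_cycle() are made-up names, and the latter is only assumed
to behave like Xen's cpumask_cycle(), i.e. return the next set bit
after 'cpu', wrapping around, or nr_cpu_ids when the mask is empty:

    #include <stdio.h>

    #define NR_CPUS 8 /* toy stand-in for Xen's nr_cpu_ids */

    /*
     * Assumed to mimic cpumask_cycle(): next set bit in 'mask' after
     * 'cpu', wrapping around; NR_CPUS if the mask is empty.
     */
    static unsigned int toy_cpumask_cycle(unsigned int cpu, unsigned int mask)
    {
        unsigned int i;

        for ( i = 1; i <= NR_CPUS; i++ )
        {
            unsigned int next = (cpu + i) % NR_CPUS;

            if ( mask & (1u << next) )
                return next;
        }
        return NR_CPUS;
    }

    /* The decision rt_deinit_pdata() makes when 'dying_cpu' leaves the pool. */
    static void deinit_pdata_timer_logic(unsigned int timer_cpu,
                                         unsigned int dying_cpu,
                                         unsigned int pool_online_mask)
    {
        unsigned int new_cpu;

        if ( timer_cpu != dying_cpu )
            return; /* the timer lives elsewhere: nothing to do */

        new_cpu = toy_cpumask_cycle(dying_cpu, pool_online_mask);

        if ( new_cpu >= NR_CPUS )
            printf("no cpu left in the pool: kill_timer()\n");
        else
            printf("migrate_timer() to cpu %u\n", new_cpu);
    }

    int main(void)
    {
        /* cpu 2 leaves a pool whose remaining online cpus are {1, 3}. */
        deinit_pdata_timer_logic(2, 2, (1u << 1) | (1u << 3));

        /* cpu 2 leaves a pool with no other online cpus. */
        deinit_pdata_timer_logic(2, 2, 0);

        return 0;
    }

Running this prints "migrate_timer() to cpu 3" for the first call and
"no cpu left in the pool: kill_timer()" for the second, which is exactly
the pair of outcomes the hunk above encodes.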