From patchwork Fri Apr 8 01:23:29 2016
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: George Dunlap, Juergen Gross, Meng Xu
Date: Fri, 08 Apr 2016 03:23:29 +0200
Message-ID: <20160408012328.10762.37642.stgit@Solace.fritz.box>
In-Reply-To: <20160408011204.10762.14241.stgit@Solace.fritz.box>
References: <20160408011204.10762.14241.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Subject: [Xen-devel] [PATCH v3 02/11] xen: sched: implement .init_pdata in Credit, Credit2 and RTDS

In fact, if a scheduler needs per-pCPU information, it needs to be initialized appropriately.
So, we take the code that performs initialization out of (what is currently)
.alloc_pdata, and use it for .init_pdata, leaving only actual allocations in
the former, if any (which is the case in RTDS and Credit1).

In Credit2, on the other hand, we do not really need any per-pCPU data
allocation, so everything that was being done in .alloc_pdata is now done in
.init_pdata. And since .alloc_pdata can now be left undefined, we can just get
rid of it.

Still for Credit2, the fact that .init_pdata is called during CPU_STARTING
(rather than CPU_UP_PREPARE) removes the need for the scheduler to set up a
similar callback itself, simplifying the code. And thanks to that
simplification, it is now also ok to turn some of the logic meant to
double-check that a cpu was (or was not) initialized into ASSERT()s (rather
than an if() and a BUG_ON()).

Signed-off-by: Dario Faggioli
Reviewed-by: Meng Xu
Reviewed-by: George Dunlap
---
Cc: Juergen Gross
---
Changes from v2:
 * make the ASSERT() in Credit more linear, as suggested during review;
 * minor adjustments to the changelog, as suggested during review.
---
 xen/common/sched_credit.c  |   20 +++++++++---
 xen/common/sched_credit2.c |   72 +++-----------------------------------------
 xen/common/sched_rt.c      |   11 ++++++-
 3 files changed, 28 insertions(+), 75 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 63a4a63..f503e73 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -527,8 +527,6 @@ static void *
 csched_alloc_pdata(const struct scheduler *ops, int cpu)
 {
     struct csched_pcpu *spc;
-    struct csched_private *prv = CSCHED_PRIV(ops);
-    unsigned long flags;
 
     /* Allocate per-PCPU info */
     spc = xzalloc(struct csched_pcpu);
@@ -541,6 +539,19 @@ csched_alloc_pdata(const struct scheduler *ops, int cpu)
         return ERR_PTR(-ENOMEM);
     }
 
+    return spc;
+}
+
+static void
+csched_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
+{
+    struct csched_private *prv = CSCHED_PRIV(ops);
+    struct csched_pcpu * const spc = pdata;
+    unsigned long flags;
+
+    /* cpu data needs to be allocated, but STILL uninitialized */
+    ASSERT(spc && spc->runq.next == NULL && spc->runq.prev == NULL);
+
     spin_lock_irqsave(&prv->lock, flags);
 
     /* Initialize/update system-wide config */
@@ -561,16 +572,12 @@ csched_alloc_pdata(const struct scheduler *ops, int cpu)
     INIT_LIST_HEAD(&spc->runq);
     spc->runq_sort_last = prv->runq_sort;
     spc->idle_bias = nr_cpu_ids - 1;
-    if ( per_cpu(schedule_data, cpu).sched_priv == NULL )
-        per_cpu(schedule_data, cpu).sched_priv = spc;
 
     /* Start off idling... */
     BUG_ON(!is_idle_vcpu(curr_on_cpu(cpu)));
     cpumask_set_cpu(cpu, prv->idlers);
 
     spin_unlock_irqrestore(&prv->lock, flags);
-
-    return spc;
 }
 
 #ifndef NDEBUG
@@ -2054,6 +2061,7 @@ static const struct scheduler sched_credit_def = {
     .alloc_vdata    = csched_alloc_vdata,
     .free_vdata     = csched_free_vdata,
     .alloc_pdata    = csched_alloc_pdata,
+    .init_pdata     = csched_init_pdata,
     .free_pdata     = csched_free_pdata,
     .alloc_domdata  = csched_alloc_domdata,
     .free_domdata   = csched_free_domdata,
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index e97d8be..8a56953 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -1971,7 +1971,8 @@ static void deactivate_runqueue(struct csched2_private *prv, int rqi)
     cpumask_clear_cpu(rqi, &prv->active_queues);
 }
 
-static void init_pcpu(const struct scheduler *ops, int cpu)
+static void
+csched2_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
     unsigned rqi;
     unsigned long flags;
@@ -1981,12 +1982,7 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
 
     spin_lock_irqsave(&prv->lock, flags);
 
-    if ( cpumask_test_cpu(cpu, &prv->initialized) )
-    {
-        printk("%s: Strange, cpu %d already initialized!\n", __func__, cpu);
-        spin_unlock_irqrestore(&prv->lock, flags);
-        return;
-    }
+    ASSERT(!cpumask_test_cpu(cpu, &prv->initialized));
 
     /* Figure out which runqueue to put it in */
     rqi = 0;
@@ -2036,20 +2032,6 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
     return;
 }
 
-static void *
-csched2_alloc_pdata(const struct scheduler *ops, int cpu)
-{
-    /* Check to see if the cpu is online yet */
-    /* Note: cpu 0 doesn't get a STARTING callback */
-    if ( cpu == 0 || cpu_to_socket(cpu) != XEN_INVALID_SOCKET_ID )
-        init_pcpu(ops, cpu);
-    else
-        printk("%s: cpu %d not online yet, deferring initializatgion\n",
-               __func__, cpu);
-
-    return NULL;
-}
-
 static void
 csched2_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
@@ -2061,7 +2043,7 @@ csched2_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 
     spin_lock_irqsave(&prv->lock, flags);
 
-    BUG_ON(!cpumask_test_cpu(cpu, &prv->initialized));
+    ASSERT(cpumask_test_cpu(cpu, &prv->initialized));
 
     /* Find the old runqueue and remove this cpu from it */
     rqi = prv->runq_map[cpu];
@@ -2099,49 +2081,6 @@ csched2_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 }
 
 static int
-csched2_cpu_starting(int cpu)
-{
-    struct scheduler *ops;
-
-    /* Hope this is safe from cpupools switching things around. :-) */
-    ops = per_cpu(scheduler, cpu);
-
-    if ( ops->alloc_pdata == csched2_alloc_pdata )
-        init_pcpu(ops, cpu);
-
-    return NOTIFY_DONE;
-}
-
-static int cpu_credit2_callback(
-    struct notifier_block *nfb, unsigned long action, void *hcpu)
-{
-    unsigned int cpu = (unsigned long)hcpu;
-    int rc = 0;
-
-    switch ( action )
-    {
-    case CPU_STARTING:
-        csched2_cpu_starting(cpu);
-        break;
-    default:
-        break;
-    }
-
-    return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
-}
-
-static struct notifier_block cpu_credit2_nfb = {
-    .notifier_call = cpu_credit2_callback
-};
-
-static int
-csched2_global_init(void)
-{
-    register_cpu_notifier(&cpu_credit2_nfb);
-    return 0;
-}
-
-static int
 csched2_init(struct scheduler *ops)
 {
     int i;
@@ -2219,12 +2158,11 @@ static const struct scheduler sched_credit2_def = {
     .dump_cpu_state = csched2_dump_pcpu,
     .dump_settings  = csched2_dump,
 
-    .global_init    = csched2_global_init,
     .init           = csched2_init,
     .deinit         = csched2_deinit,
     .alloc_vdata    = csched2_alloc_vdata,
     .free_vdata     = csched2_free_vdata,
-    .alloc_pdata    = csched2_alloc_pdata,
+    .init_pdata     = csched2_init_pdata,
     .free_pdata     = csched2_free_pdata,
     .alloc_domdata  = csched2_alloc_domdata,
     .free_domdata   = csched2_free_domdata,
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index aece318..b96bd93 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -666,8 +666,8 @@ rt_deinit(struct scheduler *ops)
  * Point per_cpu spinlock to the global system lock;
  * All cpu have same global system lock
  */
-static void *
-rt_alloc_pdata(const struct scheduler *ops, int cpu)
+static void
+rt_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
     spinlock_t *old_lock;
@@ -680,6 +680,12 @@ rt_alloc_pdata(const struct scheduler *ops, int cpu)
 
     /* _Not_ pcpu_schedule_unlock(): per_cpu().schedule_lock changed! */
     spin_unlock_irqrestore(old_lock, flags);
+}
+
+static void *
+rt_alloc_pdata(const struct scheduler *ops, int cpu)
+{
+    struct rt_private *prv = rt_priv(ops);
 
     if ( !alloc_cpumask_var(&_cpumask_scratch[cpu]) )
         return ERR_PTR(-ENOMEM);
@@ -1461,6 +1467,7 @@ static const struct scheduler sched_rtds_def = {
     .deinit         = rt_deinit,
     .alloc_pdata    = rt_alloc_pdata,
     .free_pdata     = rt_free_pdata,
+    .init_pdata     = rt_init_pdata,
     .alloc_domdata  = rt_alloc_domdata,
     .free_domdata   = rt_free_domdata,
     .init_domain    = rt_dom_init,