From patchwork Fri Apr 8 01:23:21 2016
From: Dario Faggioli <raistlin.df@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 08 Apr 2016 03:23:21 +0200
Message-ID: <20160408012320.10762.68823.stgit@Solace.fritz.box>
In-Reply-To: <20160408011204.10762.14241.stgit@Solace.fritz.box>
References: <20160408011204.10762.14241.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Cc: Juergen Gross, George Dunlap, Robert VanVossen, Josh Whitehead, Meng Xu, Jan Beulich
Subject: [Xen-devel] [PATCH v3 01/11] xen: sched: make implementing .alloc_pdata optional

Before this change, the .alloc_pdata scheduler hook had to be implemented
by all schedulers, even those that do not need to allocate anything.

Make it possible to just use SCHED_OP(), as is done for the other hooks,
by using ERR_PTR() and IS_ERR() for error reporting. This:
 - makes NULL a variant of success;
 - allows errors other than ENOMEM to be properly communicated (if that
   should ever be necessary).

This, in turn, means that schedulers which do not need any per-pCPU data
can avoid implementing the hook at all. In fact, the artificial
implementation of .alloc_pdata in the ARINC653 scheduler is removed (and,
while there, .free_pdata is nuked too, as it is equally useless).

Signed-off-by: Dario Faggioli
Reviewed-by: Meng Xu
Reviewed-by: Juergen Gross
Acked-by: George Dunlap
Acked-by: Robert VanVossen
---
Cc: Robert VanVossen
Cc: Josh Whitehead
Cc: Jan Beulich
---
Changes from v1:
 * only update sd->sched_priv if alloc_pdata does not return an error
   pointer (IS_ERR), so that xfree() can always be safely called on
   sd->sched_priv itself, as requested during review;
 * xen/err.h is now included in the .c files that actually need it,
   instead of in sched-if.h.
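For reference, the ERR_PTR()/IS_ERR() convention adopted here encodes a
negative errno value directly in the returned pointer, in the top 4095
bytes of the address space where no valid allocation can live, so one
return value can carry a valid pointer, NULL (now a success value), or an
error code. What follows is a minimal, self-contained sketch of the
pattern: the helper definitions are illustrative stand-ins mirroring the
usual Linux-style semantics that xen/err.h follows, and the two
alloc_pdata_* hooks are hypothetical examples, not code from this patch.

#include <errno.h>
#include <stdio.h>

/* Illustrative stand-ins for the xen/err.h helpers: errno values are
 * small, so -err lands in the last 4095 bytes of the address space,
 * where no valid pointer can point. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)      { return (void *)error; }
static inline long  PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int   IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical hook: nothing to allocate, so NULL is returned, which
 * callers now treat as success. */
static void *alloc_pdata_nop(void) { return NULL; }

/* Hypothetical hook: an allocation failure travels back as ERR_PTR(). */
static void *alloc_pdata_oom(void) { return ERR_PTR(-ENOMEM); }

int main(void)
{
    void *priv = alloc_pdata_nop();
    if ( !IS_ERR(priv) )
        printf("success, priv=%p (NULL is a valid success value)\n", priv);

    priv = alloc_pdata_oom();
    if ( IS_ERR(priv) )
        printf("failure, error=%ld\n", PTR_ERR(priv)); /* e.g. -12 for ENOMEM */

    return 0;
}

This is also why, in the schedule.c changes below, cpu_schedule_up() only
stores the returned value into sd->sched_priv after the IS_ERR() check:
free_pdata / xfree() must never be handed an error-encoded pointer.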
---
 xen/common/sched_arinc653.c |   31 -------------------------------
 xen/common/sched_credit.c   |    5 +++--
 xen/common/sched_credit2.c  |    2 +-
 xen/common/sched_rt.c       |    8 ++++----
 xen/common/schedule.c       |   27 +++++++++++++++++----------
 5 files changed, 25 insertions(+), 48 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 8a11a2f..b79fcdf 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -456,34 +456,6 @@ a653sched_free_vdata(const struct scheduler *ops, void *priv)
 }
 
 /**
- * This function allocates scheduler-specific data for a physical CPU
- *
- * We do not actually make use of any per-CPU data but the hypervisor expects
- * a non-NULL return value
- *
- * @param ops       Pointer to this instance of the scheduler structure
- *
- * @return          Pointer to the allocated data
- */
-static void *
-a653sched_alloc_pdata(const struct scheduler *ops, int cpu)
-{
-    /* return a non-NULL value to keep schedule.c happy */
-    return SCHED_PRIV(ops);
-}
-
-/**
- * This function frees scheduler-specific data for a physical CPU
- *
- * @param ops       Pointer to this instance of the scheduler structure
- */
-static void
-a653sched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
-{
-    /* nop */
-}
-
-/**
  * This function allocates scheduler-specific data for a domain
  *
  * We do not actually make use of any per-domain data but the hypervisor
@@ -737,9 +709,6 @@ static const struct scheduler sched_arinc653_def = {
     .free_vdata     = a653sched_free_vdata,
     .alloc_vdata    = a653sched_alloc_vdata,
 
-    .free_pdata     = a653sched_free_pdata,
-    .alloc_pdata    = a653sched_alloc_pdata,
-
     .free_domdata   = a653sched_free_domdata,
     .alloc_domdata  = a653sched_alloc_domdata,
 
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 4c4927f..63a4a63 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include <xen/err.h>
 
 /*
@@ -532,12 +533,12 @@ csched_alloc_pdata(const struct scheduler *ops, int cpu)
     /* Allocate per-PCPU info */
     spc = xzalloc(struct csched_pcpu);
     if ( spc == NULL )
-        return NULL;
+        return ERR_PTR(-ENOMEM);
 
     if ( !alloc_cpumask_var(&spc->balance_mask) )
     {
         xfree(spc);
-        return NULL;
+        return ERR_PTR(-ENOMEM);
     }
 
     spin_lock_irqsave(&prv->lock, flags);
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index b8c8e40..e97d8be 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -2047,7 +2047,7 @@ csched2_alloc_pdata(const struct scheduler *ops, int cpu)
         printk("%s: cpu %d not online yet, deferring initializatgion\n",
                __func__, cpu);
 
-    return (void *)1;
+    return NULL;
 }
 
 static void
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 321b0a5..aece318 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include <xen/err.h>
 #include
 
 /*
@@ -681,7 +682,7 @@ rt_alloc_pdata(const struct scheduler *ops, int cpu)
     spin_unlock_irqrestore(old_lock, flags);
 
     if ( !alloc_cpumask_var(&_cpumask_scratch[cpu]) )
-        return NULL;
+        return ERR_PTR(-ENOMEM);
 
     if ( prv->repl_timer == NULL )
     {
@@ -689,13 +690,12 @@
         prv->repl_timer = xzalloc(struct timer);
         if ( prv->repl_timer == NULL )
-            return NULL;
+            return ERR_PTR(-ENOMEM);
 
         init_timer(prv->repl_timer, repl_timer_handler, (void *)ops, cpu);
     }
 
-    /* 1 indicates alloc. succeed in schedule.c */
-    return (void *)1;
+    return NULL;
 }
 
 static void
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index b7dee16..1941613 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include <xen/err.h>
 
 /* opt_sched: scheduler - default to configured value */
 static char __initdata opt_sched[10] = CONFIG_SCHED_DEFAULT;
@@ -1462,6 +1463,7 @@ static void poll_timer_fn(void *data)
 static int cpu_schedule_up(unsigned int cpu)
 {
     struct schedule_data *sd = &per_cpu(schedule_data, cpu);
+    void *sched_priv;
 
     per_cpu(scheduler, cpu) = &ops;
     spin_lock_init(&sd->_lock);
@@ -1500,9 +1502,16 @@ static int cpu_schedule_up(unsigned int cpu)
     if ( idle_vcpu[cpu] == NULL )
         return -ENOMEM;
 
-    if ( (ops.alloc_pdata != NULL) &&
-         ((sd->sched_priv = ops.alloc_pdata(&ops, cpu)) == NULL) )
-        return -ENOMEM;
+    /*
+     * We don't want to risk calling xfree() on an sd->sched_priv
+     * (e.g., inside free_pdata, from cpu_schedule_down() called
+     * during CPU_UP_CANCELLED) that contains an IS_ERR value.
+     */
+    sched_priv = SCHED_OP(&ops, alloc_pdata, cpu);
+    if ( IS_ERR(sched_priv) )
+        return PTR_ERR(sched_priv);
+
+    sd->sched_priv = sched_priv;
 
     return 0;
 }
@@ -1512,8 +1521,7 @@ static void cpu_schedule_down(unsigned int cpu)
     struct schedule_data *sd = &per_cpu(schedule_data, cpu);
     struct scheduler *sched = per_cpu(scheduler, cpu);
 
-    if ( sd->sched_priv != NULL )
-        SCHED_OP(sched, free_pdata, sd->sched_priv, cpu);
+    SCHED_OP(sched, free_pdata, sd->sched_priv, cpu);
     SCHED_OP(sched, free_vdata, idle_vcpu[cpu]->sched_priv);
 
     idle_vcpu[cpu]->sched_priv = NULL;
@@ -1608,9 +1616,8 @@ void __init scheduler_init(void)
     idle_domain->max_vcpus = nr_cpu_ids;
     if ( alloc_vcpu(idle_domain, 0, 0) == NULL )
         BUG();
-    if ( ops.alloc_pdata &&
-         !(this_cpu(schedule_data).sched_priv = ops.alloc_pdata(&ops, 0)) )
-        BUG();
+    this_cpu(schedule_data).sched_priv = SCHED_OP(&ops, alloc_pdata, 0);
+    BUG_ON(IS_ERR(this_cpu(schedule_data).sched_priv));
     SCHED_OP(&ops, init_pdata, this_cpu(schedule_data).sched_priv, 0);
 }
 
@@ -1653,8 +1660,8 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     idle = idle_vcpu[cpu];
 
     ppriv = SCHED_OP(new_ops, alloc_pdata, cpu);
-    if ( ppriv == NULL )
-        return -ENOMEM;
+    if ( IS_ERR(ppriv) )
+        return PTR_ERR(ppriv);
     SCHED_OP(new_ops, init_pdata, ppriv, cpu);
     vpriv = SCHED_OP(new_ops, alloc_vdata, idle, idle->domain->sched_priv);
     if ( vpriv == NULL )