From patchwork Wed Aug 17 17:19:25 2016
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9286235
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Date: Wed, 17 Aug 2016 19:19:25 +0200
Message-ID: <147145436577.25877.13915037049197823235.stgit@Solace.fritz.box>
In-Reply-To: <147145358844.25877.7490417583264534196.stgit@Solace.fritz.box>
References: <147145358844.25877.7490417583264534196.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Cc: Anshul Makkar, "Justin T. Weaver", George Dunlap
Subject: [Xen-devel] [PATCH 16/24] xen: sched: factor affinity helpers out of sched_credit.c
List-Id: Xen developer discussion

make it possible to use the various helpers from other schedulers, e.g.,
for implementing soft affinity within them.
Since we are touching the code, also make it start using variables called
v for struct vcpu *, as is preferable.

No functional change intended.

Signed-off-by: Dario Faggioli
Signed-off-by: Justin T. Weaver
Reviewed-by: George Dunlap
---
Cc: George Dunlap
Cc: Anshul Makkar
---
 xen/common/sched_credit.c  | 98 +++++++-------------------------------
 xen/include/xen/sched-if.h | 65 +++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+), 83 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 14b207d..5d5bba9 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -137,27 +137,6 @@
 #define TRC_CSCHED_SCHEDULE      TRC_SCHED_CLASS_EVT(CSCHED, 9)
 #define TRC_CSCHED_RATELIMIT     TRC_SCHED_CLASS_EVT(CSCHED, 10)
 
-
-/*
- * Hard and soft affinity load balancing.
- *
- * Idea is each vcpu has some pcpus that it prefers, some that it does not
- * prefer but is OK with, and some that it cannot run on at all. The first
- * set of pcpus are the ones that are both in the soft affinity *and* in the
- * hard affinity; the second set of pcpus are the ones that are in the hard
- * affinity but *not* in the soft affinity; the third set of pcpus are the
- * ones that are not in the hard affinity.
- *
- * We implement a two step balancing logic. Basically, every time there is
- * the need to decide where to run a vcpu, we first check the soft affinity
- * (well, actually, the && between soft and hard affinity), to see if we can
- * send it where it prefers to (and can) run on. However, if the first step
- * does not find any suitable and free pcpu, we fall back checking the hard
- * affinity.
- */
-#define CSCHED_BALANCE_SOFT_AFFINITY    0
-#define CSCHED_BALANCE_HARD_AFFINITY    1
-
 /*
  * Boot parameters
  */
@@ -287,53 +266,6 @@ __runq_remove(struct csched_vcpu *svc)
     list_del_init(&svc->runq_elem);
 }
 
-
-#define for_each_csched_balance_step(step) \
-    for ( (step) = 0; (step) <= CSCHED_BALANCE_HARD_AFFINITY; (step)++ )
-
-
-/*
- * Hard affinity balancing is always necessary and must never be skipped.
- * But soft affinity need only be considered when it has a functionally
- * different effect than other constraints (such as hard affinity, cpus
- * online, or cpupools).
- *
- * Soft affinity only needs to be considered if:
- *  * The cpus in the cpupool are not a subset of soft affinity
- *  * The hard affinity is not a subset of soft affinity
- *  * There is an overlap between the soft affinity and the mask which is
- *    currently being considered.
- */
-static inline int __vcpu_has_soft_affinity(const struct vcpu *vc,
-                                           const cpumask_t *mask)
-{
-    return !cpumask_subset(cpupool_domain_cpumask(vc->domain),
-                           vc->cpu_soft_affinity) &&
-           !cpumask_subset(vc->cpu_hard_affinity, vc->cpu_soft_affinity) &&
-           cpumask_intersects(vc->cpu_soft_affinity, mask);
-}
-
-/*
- * Each csched-balance step uses its own cpumask. This function determines
- * which one (given the step) and copies it in mask. For the soft affinity
- * balancing step, the pcpus that are not part of vc's hard affinity are
- * filtered out from the result, to avoid running a vcpu where it would
- * like, but is not allowed to!
- */
-static void
-csched_balance_cpumask(const struct vcpu *vc, int step, cpumask_t *mask)
-{
-    if ( step == CSCHED_BALANCE_SOFT_AFFINITY )
-    {
-        cpumask_and(mask, vc->cpu_soft_affinity, vc->cpu_hard_affinity);
-
-        if ( unlikely(cpumask_empty(mask)) )
-            cpumask_copy(mask, vc->cpu_hard_affinity);
-    }
-    else /* step == CSCHED_BALANCE_HARD_AFFINITY */
-        cpumask_copy(mask, vc->cpu_hard_affinity);
-}
-
 static void
 burn_credits(struct csched_vcpu *svc, s_time_t now)
 {
     s_time_t delta;
@@ -398,18 +330,18 @@ static inline void __runq_tickle(struct csched_vcpu *new)
      * Soft and hard affinity balancing loop. For vcpus without
      * a useful soft affinity, consider hard affinity only.
      */
-    for_each_csched_balance_step( balance_step )
+    for_each_affinity_balance_step( balance_step )
     {
         int new_idlers_empty;
 
-        if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-             && !__vcpu_has_soft_affinity(new->vcpu,
-                                          new->vcpu->cpu_hard_affinity) )
+        if ( balance_step == BALANCE_SOFT_AFFINITY
+             && !has_soft_affinity(new->vcpu,
+                                   new->vcpu->cpu_hard_affinity) )
             continue;
 
         /* Are there idlers suitable for new (for this balance step)? */
-        csched_balance_cpumask(new->vcpu, balance_step,
-                               cpumask_scratch_cpu(cpu));
+        affinity_balance_cpumask(new->vcpu, balance_step,
+                                 cpumask_scratch_cpu(cpu));
         cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
                     &idle_mask);
         new_idlers_empty = cpumask_empty(cpumask_scratch_cpu(cpu));
@@ -420,7 +352,7 @@ static inline void __runq_tickle(struct csched_vcpu *new)
          * hard affinity as well, before taking final decisions.
          */
         if ( new_idlers_empty
-             && balance_step == CSCHED_BALANCE_SOFT_AFFINITY )
+             && balance_step == BALANCE_SOFT_AFFINITY )
             continue;
 
         /*
@@ -721,7 +653,7 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
     online = cpupool_domain_cpumask(vc->domain);
     cpumask_and(&cpus, vc->cpu_hard_affinity, online);
 
-    for_each_csched_balance_step( balance_step )
+    for_each_affinity_balance_step( balance_step )
     {
         /*
          * We want to pick up a pcpu among the ones that are online and
@@ -741,12 +673,12 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
          * cpus and, if the result is empty, we just skip the soft affinity
          * balancing step all together.
          */
-        if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-             && !__vcpu_has_soft_affinity(vc, &cpus) )
+        if ( balance_step == BALANCE_SOFT_AFFINITY
+             && !has_soft_affinity(vc, &cpus) )
             continue;
 
         /* Pick an online CPU from the proper affinity mask */
-        csched_balance_cpumask(vc, balance_step, &cpus);
+        affinity_balance_cpumask(vc, balance_step, &cpus);
         cpumask_and(&cpus, &cpus, online);
 
         /* If present, prefer vc's current processor */
@@ -1605,11 +1537,11 @@ csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
              * vCPUs with useful soft affinities in some sort of bitmap
              * or counter.
              */
-            if ( balance_step == CSCHED_BALANCE_SOFT_AFFINITY
-                 && !__vcpu_has_soft_affinity(vc, vc->cpu_hard_affinity) )
+            if ( balance_step == BALANCE_SOFT_AFFINITY
+                 && !has_soft_affinity(vc, vc->cpu_hard_affinity) )
                 continue;
 
-            csched_balance_cpumask(vc, balance_step, cpumask_scratch_cpu(cpu));
+            affinity_balance_cpumask(vc, balance_step, cpumask_scratch_cpu(cpu));
 
             if ( __csched_vcpu_is_migrateable(vc, cpu, cpumask_scratch_cpu(cpu)) )
             {
@@ -1665,7 +1597,7 @@ csched_load_balance(struct csched_private *prv, int cpu,
      * 1. any "soft-affine work" to steal first,
      * 2. if not finding anything, any "hard-affine work" to steal.
      */
-    for_each_csched_balance_step( bstep )
+    for_each_affinity_balance_step( bstep )
     {
         /*
          * We peek at the non-idling CPUs in a node-wise fashion. In fact,
diff --git a/xen/include/xen/sched-if.h b/xen/include/xen/sched-if.h
index bc0e794..496ed80 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/include/xen/sched-if.h
@@ -201,4 +201,69 @@ static inline cpumask_t* cpupool_domain_cpumask(struct domain *d)
     return d->cpupool->cpu_valid;
 }
 
+/*
+ * Hard and soft affinity load balancing.
+ *
+ * Idea is each vcpu has some pcpus that it prefers, some that it does not
+ * prefer but is OK with, and some that it cannot run on at all. The first
+ * set of pcpus are the ones that are both in the soft affinity *and* in the
+ * hard affinity; the second set of pcpus are the ones that are in the hard
+ * affinity but *not* in the soft affinity; the third set of pcpus are the
+ * ones that are not in the hard affinity.
+ *
+ * We implement a two step balancing logic. Basically, every time there is
+ * the need to decide where to run a vcpu, we first check the soft affinity
+ * (well, actually, the && between soft and hard affinity), to see if we can
+ * send it where it prefers to (and can) run on. However, if the first step
+ * does not find any suitable and free pcpu, we fall back checking the hard
+ * affinity.
+ */
+#define BALANCE_SOFT_AFFINITY    0
+#define BALANCE_HARD_AFFINITY    1
+
+#define for_each_affinity_balance_step(step) \
+    for ( (step) = 0; (step) <= BALANCE_HARD_AFFINITY; (step)++ )
+
+/*
+ * Hard affinity balancing is always necessary and must never be skipped.
+ * But soft affinity need only be considered when it has a functionally
+ * different effect than other constraints (such as hard affinity, cpus
+ * online, or cpupools).
+ *
+ * Soft affinity only needs to be considered if:
+ *  * The cpus in the cpupool are not a subset of soft affinity
+ *  * The hard affinity is not a subset of soft affinity
+ *  * There is an overlap between the soft affinity and the mask which is
+ *    currently being considered.
+ */
+static inline int has_soft_affinity(const struct vcpu *v,
+                                    const cpumask_t *mask)
+{
+    return !cpumask_subset(cpupool_domain_cpumask(v->domain),
+                           v->cpu_soft_affinity) &&
+           !cpumask_subset(v->cpu_hard_affinity, v->cpu_soft_affinity) &&
+           cpumask_intersects(v->cpu_soft_affinity, mask);
+}
+
+/*
+ * This function copies in mask the cpumask that should be used for a
+ * particular affinity balancing step. For the soft affinity one, the
+ * pcpus that are not part of v's hard affinity are filtered out from
+ * the result, to avoid running a vcpu where it would like, but is not
+ * allowed to!
+ */
+static inline void
+affinity_balance_cpumask(const struct vcpu *v, int step, cpumask_t *mask)
+{
+    if ( step == BALANCE_SOFT_AFFINITY )
+    {
+        cpumask_and(mask, v->cpu_soft_affinity, v->cpu_hard_affinity);
+
+        if ( unlikely(cpumask_empty(mask)) )
+            cpumask_copy(mask, v->cpu_hard_affinity);
+    }
+    else /* step == BALANCE_HARD_AFFINITY */
+        cpumask_copy(mask, v->cpu_hard_affinity);
+}
+
 #endif /* __XEN_SCHED_IF_H__ */