From patchwork Fri Mar 17 18:43:05 2017
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9631201
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Jonathan Davies, Julien Grall, George Dunlap, Marcus Granado
Date: Fri, 17 Mar 2017 19:43:05 +0100
Message-ID: <148977618592.29510.6991110994080248461.stgit@Palanthas.fritz.box>
In-Reply-To: <148977585611.29510.906390949919041674.stgit@Palanthas.fritz.box>
References: <148977585611.29510.906390949919041674.stgit@Palanthas.fritz.box>
User-Agent: StGit/0.17.1-dirty
Subject: [Xen-devel] [PATCH 2/3] xen: sched_null: support for hard affinity

As a (rudimentary) way of directing and affecting the placement logic
implemented by the scheduler, support vCPU hard affinity.

A vCPU will now be assigned only to a pCPU that is part of its own hard
affinity. If all such pCPUs are busy, the vCPU waits, just as it does
when there are no free pCPUs.
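To make the new pick_cpu() behaviour concrete, here is a minimal
standalone sketch of the selection rule (plain C, with uint64_t bitmasks
standing in for Xen's cpumask_t; pick_cpu_sketch() and its parameters
are illustrative names, not the scheduler's actual API):

/*
 * A pCPU is a candidate for vCPU v only if it is free, AND in the
 * cpupool's mask, AND in v's hard affinity. If no such pCPU exists,
 * v goes (or stays) on the waitqueue.
 */
#include <stdio.h>
#include <stdint.h>

#define NO_CPU  (-1)

static int pick_cpu_sketch(uint64_t cpus_free, uint64_t pool_cpus,
                           uint64_t hard_affinity)
{
    uint64_t candidates = cpus_free & pool_cpus & hard_affinity;

    if ( candidates == 0 )
        return NO_CPU;                   /* no fit: the vCPU must wait */

    return __builtin_ctzll(candidates);  /* first set bit = first valid pCPU */
}

int main(void)
{
    /* pCPUs 2 and 5 are free; pool is 0-7; hard affinity is {1,5}: picks 5. */
    printf("picked pCPU %d\n", pick_cpu_sketch(0x24, 0xff, 0x22));
    return 0;
}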
Signed-off-by: Dario Faggioli
---
Cc: George Dunlap
Cc: Stefano Stabellini
Cc: Julien Grall
Cc: Jonathan Davies
Cc: Marcus Granado
---
 xen/common/sched_null.c |   43 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 32 insertions(+), 11 deletions(-)

diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index 6a13308..ea055f1 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -117,6 +117,14 @@ static inline struct null_dom *null_dom(const struct domain *d)
     return d->sched_priv;
 }
 
+static inline bool check_nvc_affinity(struct null_vcpu *nvc, unsigned int cpu)
+{
+    cpumask_and(cpumask_scratch_cpu(cpu), nvc->vcpu->cpu_hard_affinity,
+                cpupool_domain_cpumask(nvc->vcpu->domain));
+
+    return cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu));
+}
+
 static int null_init(struct scheduler *ops)
 {
     struct null_private *prv;
@@ -284,16 +292,20 @@ static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
 
     ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
 
+    cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity, cpus);
+
     /*
      * If our processor is free, or we are assigned to it, and it is
-     * also still valid, just go for it.
+     * also still valid and part of our affinity, just go for it.
      */
     if ( likely((per_cpu(npc, cpu).vcpu == NULL || per_cpu(npc, cpu).vcpu == v)
-                && cpumask_test_cpu(cpu, cpus)) )
+                && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
         return cpu;
 
-    /* If not, just go for a valid free pCPU, if any */
+    /* If not, just go for a free pCPU, within our affinity, if any */
     cpumask_and(cpumask_scratch_cpu(cpu), &prv->cpus_free, cpus);
+    cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
+                v->cpu_hard_affinity);
     cpu = cpumask_first(cpumask_scratch_cpu(cpu));
 
     /*
@@ -308,7 +320,10 @@ static unsigned int pick_cpu(struct null_private *prv, struct vcpu *v)
      * only if the pCPU is free.
      */
     if ( unlikely(cpu == nr_cpu_ids) )
-        cpu = cpumask_any(cpus);
+    {
+        cpumask_and(cpumask_scratch_cpu(cpu), cpus, v->cpu_hard_affinity);
+        cpu = cpumask_any(cpumask_scratch_cpu(cpu));
+    }
 
     return cpu;
 }
@@ -391,6 +406,9 @@ static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
         lock = pcpu_schedule_lock(cpu);
     }
 
+    cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
+                cpupool_domain_cpumask(v->domain));
+
     /*
      * If the pCPU is free, we assign v to it.
     *
@@ -408,8 +426,7 @@ static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
          */
         vcpu_assign(prv, v, cpu);
     }
-    else if ( cpumask_intersects(&prv->cpus_free,
-                                 cpupool_domain_cpumask(v->domain)) )
+    else if ( cpumask_intersects(&prv->cpus_free, cpumask_scratch_cpu(cpu)) )
     {
         spin_unlock(lock);
         goto retry;
@@ -462,7 +479,7 @@ static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
 
     spin_lock(&prv->waitq_lock);
     wvc = list_first_entry_or_null(&prv->waitq, struct null_vcpu, waitq_elem);
-    if ( wvc )
+    if ( wvc && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu)) )
     {
         vcpu_assign(prv, wvc->vcpu, cpu);
         list_del_init(&wvc->waitq_elem);
@@ -550,7 +567,7 @@ static void null_vcpu_migrate(const struct scheduler *ops, struct vcpu *v,
 
     spin_lock(&prv->waitq_lock);
     wvc = list_first_entry_or_null(&prv->waitq, struct null_vcpu, waitq_elem);
-    if ( wvc && cpumask_test_cpu(cpu, cpupool_domain_cpumask(v->domain)) )
+    if ( wvc && check_nvc_affinity(wvc, cpu) )
     {
         vcpu_assign(prv, wvc->vcpu, cpu);
         list_del_init(&wvc->waitq_elem);
@@ -573,11 +590,15 @@ static void null_vcpu_migrate(const struct scheduler *ops, struct vcpu *v,
      * Let's now consider new_cpu, which is where v is being sent. It can be
      * either free, or have a vCPU already assigned to it.
      *
-     * In the former case, we should assign v to it, and try to get it to run.
+     * In the former case, we should assign v to it, and try to get it to run,
+     * if possible, according to affinity.
      *
      * In latter, all we can do is to park v in the waitqueue.
      */
-    if ( per_cpu(npc, new_cpu).vcpu == NULL )
+    cpumask_and(cpumask_scratch_cpu(cpu), cpupool_domain_cpumask(v->domain),
+                nvc->vcpu->cpu_hard_affinity);
+    if ( per_cpu(npc, new_cpu).vcpu == NULL &&
+         cpumask_test_cpu(new_cpu, cpumask_scratch_cpu(cpu)) )
     {
         /* We don't know whether v was in the waitqueue. If yes, remove it */
         spin_lock(&prv->waitq_lock);
@@ -666,7 +687,7 @@ static struct task_slice null_schedule(const struct scheduler *ops,
     {
         spin_lock(&prv->waitq_lock);
         wvc = list_first_entry_or_null(&prv->waitq, struct null_vcpu, waitq_elem);
-        if ( wvc )
+        if ( wvc && check_nvc_affinity(wvc, cpu) )
         {
             vcpu_assign(prv, wvc->vcpu, cpu);
             list_del_init(&wvc->waitq_elem);
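For the waitqueue side of the change (the checks that now guard
vcpu_assign() in null_vcpu_remove(), null_vcpu_migrate() and
null_schedule()), here is a sketch of what happens when a pCPU becomes
free, under the same simplified-bitmask assumption as above (struct
waiter and pick_waiter() are hypothetical names for illustration):

#include <stddef.h>
#include <stdint.h>

struct waiter {
    uint64_t hard_affinity;     /* bitmask of the vCPU's allowed pCPUs */
    struct waiter *next;
};

/*
 * Return the waiter to assign to the newly freed `cpu', or NULL. As in
 * the patch, only the head of the waitqueue is considered: if its
 * affinity does not include `cpu', it keeps waiting, even if a later
 * waiter's affinity would have matched.
 */
static struct waiter *pick_waiter(struct waiter *waitq_head,
                                  uint64_t pool_cpus, unsigned int cpu)
{
    struct waiter *wvc = waitq_head;

    if ( wvc && (((wvc->hard_affinity & pool_cpus) >> cpu) & 1) )
        return wvc;

    return NULL;
}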