From patchwork Fri Mar 29 15:09:33 2019
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, George Dunlap, Dario Faggioli
Date: Fri, 29 Mar 2019 16:09:33 +0100
Message-Id: <20190329150934.17694-49-jgross@suse.com>
In-Reply-To: <20190329150934.17694-1-jgross@suse.com>
References: <20190329150934.17694-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH RFC 48/49] xen/sched: make vcpu_wake() core scheduling aware

With core scheduling active, a vcpu being woken up via vcpu_wake() might
already be on a physical cpu that is sitting in guest idle. In that case
it only needs to be set to "running" and the cpu pinged via
cpu_raise_softirq().
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/schedule.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 7b30a153df..ba03b588c8 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -705,16 +705,19 @@ void vcpu_wake(struct vcpu *v)
 {
     unsigned long flags;
     spinlock_t *lock;
+    struct sched_item *item = v->sched_item;
 
     TRACE_2D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id);
 
-    lock = item_schedule_lock_irqsave(v->sched_item, &flags);
+    lock = item_schedule_lock_irqsave(item, &flags);
 
     if ( likely(vcpu_runnable(v)) )
     {
         if ( v->runstate.state >= RUNSTATE_blocked )
             vcpu_runstate_change(v, RUNSTATE_runnable, NOW());
-        SCHED_OP(vcpu_scheduler(v), wake, v->sched_item);
+        SCHED_OP(vcpu_scheduler(v), wake, item);
+        if ( item->is_running && v->runstate.state != RUNSTATE_running )
+            cpu_raise_softirq(v->processor, SCHEDULE_SOFTIRQ);
     }
     else if ( !(v->pause_flags & VPF_blocked) )
     {
@@ -722,7 +725,7 @@ void vcpu_wake(struct vcpu *v)
         vcpu_runstate_change(v, RUNSTATE_offline, NOW());
     }
 
-    item_schedule_unlock_irqrestore(lock, flags, v->sched_item);
+    item_schedule_unlock_irqrestore(lock, flags, item);
 }
 
 void vcpu_unblock(struct vcpu *v)
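For illustration, below is a minimal standalone sketch of the decision the
first hunk adds. Everything in it (the *_sketch types, ping_cpu) is a
hypothetical stand-in rather than the real Xen API; only the control flow
mirrors the patch: if the sched item is already running on a pcpu (possibly
in guest idle) while the woken vcpu is not yet in RUNSTATE_running, a
softirq ping suffices instead of a full scheduling pass.

/*
 * Standalone sketch (not part of the patch).  All types and helpers
 * are simplified stand-ins for the real Xen structures.
 */
#include <stdbool.h>
#include <stdio.h>

enum runstate { RUNSTATE_running, RUNSTATE_runnable, RUNSTATE_blocked };

struct sched_item_sketch {
    bool is_running;          /* item already active on some pcpu */
};

struct vcpu_sketch {
    enum runstate state;
    unsigned int processor;   /* pcpu the vcpu is assigned to */
    struct sched_item_sketch *sched_item;
};

/* stand-in for cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ) */
static void ping_cpu(unsigned int cpu)
{
    printf("raise SCHEDULE_SOFTIRQ on cpu %u\n", cpu);
}

static void wake_sketch(struct vcpu_sketch *v)
{
    /*
     * Core scheduling case: the sched item is already running (the
     * pcpu may be sitting in guest idle), but this vcpu is not.  No
     * full scheduling pass is needed; just ping the pcpu so it picks
     * the vcpu up and marks it running.
     */
    if ( v->sched_item->is_running && v->state != RUNSTATE_running )
        ping_cpu(v->processor);
}

int main(void)
{
    struct sched_item_sketch item = { .is_running = true };
    struct vcpu_sketch v = { RUNSTATE_runnable, 3, &item };

    wake_sketch(&v);          /* prints: raise SCHEDULE_SOFTIRQ on cpu 3 */
    return 0;
}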