From patchwork Fri Apr 24 05:48:33 2020
X-Patchwork-Id: 11507145
From: Davidlohr Bueso
To: tglx@linutronix.de, pbonzini@redhat.com
Cc: peterz@infradead.org, maz@kernel.org, bigeasy@linutronix.de,
    rostedt@goodmis.org, torvalds@linux-foundation.org, will@kernel.org,
    joel@joelfernandes.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    dave@stgolabs.net, Davidlohr Bueso
Subject: [PATCH 1/5] rcuwait: Fix stale wake call name in comment
Date: Thu, 23 Apr 2020 22:48:33 -0700
Message-Id: <20200424054837.5138-2-dave@stgolabs.net>
In-Reply-To: <20200424054837.5138-1-dave@stgolabs.net>
References: <20200424054837.5138-1-dave@stgolabs.net>
X-Mailing-List: kvm@vger.kernel.org

The 'trywake' name was renamed to simply 'wake', update the comment.

Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Davidlohr Bueso
---
 kernel/exit.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/exit.c b/kernel/exit.c
index 389a88cb3081..9f9015f3f6b0 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -236,7 +236,7 @@ void rcuwait_wake_up(struct rcuwait *w)
 	/*
 	 * Order condition vs @task, such that everything prior to the load
	 * of @task is visible. This is the condition as to why the user called
-	 * rcuwait_trywake() in the first place. Pairs with set_current_state()
+	 * rcuwait_wake() in the first place. Pairs with set_current_state()
 	 * barrier (A) in rcuwait_wait_event().
 	 *
 	 *        WAIT                WAKE

From patchwork Fri Apr 24 05:48:34 2020
X-Patchwork-Id: 11507147
From: Davidlohr Bueso
To: tglx@linutronix.de, pbonzini@redhat.com
Cc: peterz@infradead.org, maz@kernel.org, bigeasy@linutronix.de,
    rostedt@goodmis.org, torvalds@linux-foundation.org, will@kernel.org,
    joel@joelfernandes.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    dave@stgolabs.net, Davidlohr Bueso
Subject: [PATCH 2/5] rcuwait: Let rcuwait_wake_up() return whether or not a task was awoken
Date: Thu, 23 Apr 2020 22:48:34 -0700
Message-Id: <20200424054837.5138-3-dave@stgolabs.net>
In-Reply-To: <20200424054837.5138-1-dave@stgolabs.net>
References: <20200424054837.5138-1-dave@stgolabs.net>
X-Mailing-List: kvm@vger.kernel.org

Propagating the return value of wake_up_process() back to the caller
can come in handy for future users, such as for statistics or
accounting purposes.
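For illustration only (not part of the patch), a minimal sketch of how a
caller might consume the new return value, e.g. to keep a wakeup
statistic; 'struct my_waiter' and its 'wakeups' counter are made-up
names, not kernel APIs:

#include <linux/rcuwait.h>
#include <linux/atomic.h>

struct my_waiter {
	struct rcuwait wait;
	atomic_t wakeups;	/* how often a sleeping task was actually woken */
};

static void my_waiter_kick(struct my_waiter *mw)
{
	/*
	 * rcuwait_wake_up() now propagates wake_up_process()'s result:
	 * nonzero only if a task was sleeping on the rcuwait and got woken.
	 */
	if (rcuwait_wake_up(&mw->wait))
		atomic_inc(&mw->wakeups);
}

This mirrors what the last patch of the series does when it bumps
vcpu->stat.halt_wakeup only on an actual wakeup.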
Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Davidlohr Bueso
---
 include/linux/rcuwait.h | 2 +-
 kernel/exit.c           | 7 +++++--
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h
index 2ffe1ee6d482..6ebb23258a27 100644
--- a/include/linux/rcuwait.h
+++ b/include/linux/rcuwait.h
@@ -25,7 +25,7 @@ static inline void rcuwait_init(struct rcuwait *w)
 	w->task = NULL;
 }
 
-extern void rcuwait_wake_up(struct rcuwait *w);
+extern int rcuwait_wake_up(struct rcuwait *w);
 
 /*
  * The caller is responsible for locking around rcuwait_wait_event(),
diff --git a/kernel/exit.c b/kernel/exit.c
index 9f9015f3f6b0..f3beb637acf7 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -227,8 +227,9 @@ void release_task(struct task_struct *p)
 		goto repeat;
 }
 
-void rcuwait_wake_up(struct rcuwait *w)
+int rcuwait_wake_up(struct rcuwait *w)
 {
+	int ret = 0;
 	struct task_struct *task;
 
 	rcu_read_lock();
@@ -248,8 +249,10 @@ void rcuwait_wake_up(struct rcuwait *w)
 
 	task = rcu_dereference(w->task);
 	if (task)
-		wake_up_process(task);
+		ret = wake_up_process(task);
 	rcu_read_unlock();
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(rcuwait_wake_up);

From patchwork Fri Apr 24 05:48:35 2020
X-Patchwork-Id: 11507143
From: Davidlohr Bueso
To: tglx@linutronix.de, pbonzini@redhat.com
Cc: peterz@infradead.org, maz@kernel.org, bigeasy@linutronix.de,
    rostedt@goodmis.org, torvalds@linux-foundation.org, will@kernel.org,
    joel@joelfernandes.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    dave@stgolabs.net, Davidlohr Bueso
Subject: [PATCH 3/5] rcuwait: Introduce prepare_to and finish_rcuwait
Date: Thu, 23 Apr 2020 22:48:35 -0700
Message-Id: <20200424054837.5138-4-dave@stgolabs.net>
In-Reply-To: <20200424054837.5138-1-dave@stgolabs.net>
References: <20200424054837.5138-1-dave@stgolabs.net>
X-Mailing-List: kvm@vger.kernel.org

This allows further flexibility for some callers to implement ad-hoc
versions of the generic rcuwait_wait_event(). For example, kvm will
need this to maintain tracing semantics. The naming is of course
similar to what waitqueue apis offer.

Also go ahead and make use of rcu_assign_pointer() for both task
writes as it will make the __rcu sparse people happy - this will be
the special nil case, thus no added serialization.
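For illustration only (not part of the patch), an open-coded wait loop
built on the new helpers, of the kind an ad-hoc user could write; the
'done' flag stands in for whatever condition and locking a real caller
would use:

#include <linux/rcuwait.h>
#include <linux/sched.h>
#include <linux/atomic.h>

static atomic_t done;			/* condition flag, set by the waker */

static void my_wait(struct rcuwait *w)
{
	prepare_to_rcuwait(w);		/* publish current as the waiter */
	for (;;) {
		/* barrier from set_current_state() pairs with rcuwait_wake_up() */
		set_current_state(TASK_INTERRUPTIBLE);
		if (atomic_read(&done))
			break;
		schedule();
	}
	finish_rcuwait(w);		/* clear the waiter, back to TASK_RUNNING */
}

A tracepoint or a polling step could be dropped into the loop body,
which the fixed rcuwait_wait_event() loop does not allow for.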
Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Davidlohr Bueso
---
 include/linux/rcuwait.h | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h
index 6ebb23258a27..45bc6604e9b1 100644
--- a/include/linux/rcuwait.h
+++ b/include/linux/rcuwait.h
@@ -29,12 +29,25 @@ extern int rcuwait_wake_up(struct rcuwait *w);
 
 /*
  * The caller is responsible for locking around rcuwait_wait_event(),
- * such that writes to @task are properly serialized.
+ * and [prepare_to/finish]_rcuwait() such that writes to @task are
+ * properly serialized.
  */
+
+static inline void prepare_to_rcuwait(struct rcuwait *w)
+{
+	rcu_assign_pointer(w->task, current);
+}
+
+static inline void finish_rcuwait(struct rcuwait *w)
+{
+	rcu_assign_pointer(w->task, NULL);
+	__set_current_state(TASK_RUNNING);
+}
+
 #define rcuwait_wait_event(w, condition, state)				\
 ({									\
 	int __ret = 0;							\
-	rcu_assign_pointer((w)->task, current);				\
+	prepare_to_rcuwait(w);						\
 	for (;;) {							\
 		/*							\
 		 * Implicit barrier (A) pairs with (B) in		\
@@ -51,9 +64,7 @@ extern int rcuwait_wake_up(struct rcuwait *w);
 									\
 		schedule();						\
 	}								\
-									\
-	WRITE_ONCE((w)->task, NULL);					\
-	__set_current_state(TASK_RUNNING);				\
+	finish_rcuwait(w);						\
 	__ret;								\
 })

From patchwork Fri Apr 24 05:48:36 2020
X-Patchwork-Id: 11507137
From: Davidlohr Bueso
To: tglx@linutronix.de, pbonzini@redhat.com
Cc: peterz@infradead.org, maz@kernel.org, bigeasy@linutronix.de,
    rostedt@goodmis.org, torvalds@linux-foundation.org, will@kernel.org,
    joel@joelfernandes.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    dave@stgolabs.net, Davidlohr Bueso
Subject: [PATCH 4/5] rcuwait: Introduce rcuwait_active()
Date: Thu, 23 Apr 2020 22:48:36 -0700
Message-Id: <20200424054837.5138-5-dave@stgolabs.net>
In-Reply-To: <20200424054837.5138-1-dave@stgolabs.net>
References: <20200424054837.5138-1-dave@stgolabs.net>
X-Mailing-List: kvm@vger.kernel.org

This call is lockless and thus should not be trusted blindly, ie: for
wakeup purposes, which is already provided correctly by
rcuwait_wake_up().
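For illustration only (not part of the patch), a sketch of the intended
usage; the check is only a racy hint, so the caller below treats it
purely as a heuristic:

#include <linux/types.h>
#include <linux/rcupdate.h>
#include <linux/rcuwait.h>

static bool my_waiter_seems_blocked(struct rcuwait *w)
{
	bool active;

	rcu_read_lock();		/* rcuwait_active() uses rcu_dereference() */
	active = rcuwait_active(w);
	rcu_read_unlock();

	return active;			/* may already be stale when acted upon */
}

For an actual wakeup the caller should still go through
rcuwait_wake_up(), which rechecks w->task with the required ordering.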
Signed-off-by: Davidlohr Bueso
Acked-by: Peter Zijlstra (Intel)
Reported-by: Wanpeng Li
Signed-off-by: Paolo Bonzini
Acked-by: Davidlohr Bueso
Tested-by: Wanpeng Li
---
 include/linux/rcuwait.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h
index 45bc6604e9b1..c1414ce44abc 100644
--- a/include/linux/rcuwait.h
+++ b/include/linux/rcuwait.h
@@ -25,6 +25,15 @@ static inline void rcuwait_init(struct rcuwait *w)
 	w->task = NULL;
 }
 
+/*
+ * Note: this provides no serialization and, just as with waitqueues,
+ * requires care to estimate as to whether or not the wait is active.
+ */
+static inline int rcuwait_active(struct rcuwait *w)
+{
+	return !!rcu_dereference(w->task);
+}
+
 extern int rcuwait_wake_up(struct rcuwait *w);
 
 /*

From patchwork Fri Apr 24 05:48:37 2020
X-Patchwork-Id: 11507139
From: Davidlohr Bueso
To: tglx@linutronix.de, pbonzini@redhat.com
Cc: peterz@infradead.org, maz@kernel.org, bigeasy@linutronix.de,
    rostedt@goodmis.org, torvalds@linux-foundation.org, will@kernel.org,
    joel@joelfernandes.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    dave@stgolabs.net, Paul Mackerras, kvmarm@lists.cs.columbia.edu,
    linux-mips@vger.kernel.org, Davidlohr Bueso
Subject: [PATCH 5/5] kvm: Replace vcpu->swait with rcuwait
Date: Thu, 23 Apr 2020 22:48:37 -0700
Message-Id: <20200424054837.5138-6-dave@stgolabs.net>
In-Reply-To: <20200424054837.5138-1-dave@stgolabs.net>
References: <20200424054837.5138-1-dave@stgolabs.net>
X-Mailing-List: kvm@vger.kernel.org

The use of any sort of waitqueue (simple or regular) for wait/waking
vcpus has always been overkill and semantically wrong. Because this is
per-vcpu (which is blocked) there is only ever a single waiting vcpu,
thus no need for any sort of queue.

As such, make use of the rcuwait primitive, with the following
considerations:

 - rcuwait already provides the proper barriers that serialize
   concurrent waiter and waker.

 - Task wakeup is done in rcu read critical region, with a stable
   task pointer.

 - Because there is no concurrency among waiters, we need not worry
   about rcuwait_wait_event() calls corrupting the wait->task. As a
   consequence, this saves the locking done in swait when modifying
   the queue. This also applies to per-vcore wait for powerpc kvm-hv.
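Condensed to its core, the wait/wake pairing the conversion relies on
looks roughly like the sketch below (illustrative 'struct my_vcpu' and
'has_work' flag, not the real kvm_vcpu or its request machinery):

#include <linux/rcuwait.h>
#include <linux/sched.h>
#include <linux/atomic.h>

struct my_vcpu {
	struct rcuwait wait;		/* at most one task ever sleeps here */
	atomic_t has_work;
};

static void my_vcpu_block(struct my_vcpu *v)
{
	/* sleep until has_work is observed; TASK_INTERRUPTIBLE, so signals also end the wait */
	rcuwait_wait_event(&v->wait, atomic_read(&v->has_work),
			   TASK_INTERRUPTIBLE);
}

static void my_vcpu_kick(struct my_vcpu *v)
{
	atomic_set(&v->has_work, 1);
	/* no queue, no lock: rcuwait_wake_up() orders the condition write vs. the task load */
	rcuwait_wake_up(&v->wait);
}

The blocked side is always the vcpu task itself, so the single-waiter
assumption above holds by construction.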
The x86 tscdeadline_latency test mentioned in 8577370fb0cb ("KVM: Use
simple waitqueue for vcpu->wq") shows that, on avg, latency is reduced
by around 15-20% with this change.

Cc: Paul Mackerras
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-mips@vger.kernel.org
Reviewed-by: Marc Zyngier
Signed-off-by: Davidlohr Bueso
---
 arch/mips/kvm/mips.c                  |  6 ++----
 arch/powerpc/include/asm/kvm_book3s.h |  2 +-
 arch/powerpc/include/asm/kvm_host.h   |  2 +-
 arch/powerpc/kvm/book3s_hv.c          | 22 ++++++++--------------
 arch/powerpc/kvm/powerpc.c            |  2 +-
 arch/x86/kvm/lapic.c                  |  2 +-
 include/linux/kvm_host.h              | 10 +++++-----
 virt/kvm/arm/arch_timer.c             |  3 ++-
 virt/kvm/arm/arm.c                    |  9 +++++----
 virt/kvm/async_pf.c                   |  3 +--
 virt/kvm/kvm_main.c                   | 19 +++++++++----------
 11 files changed, 36 insertions(+), 44 deletions(-)

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 8f05dd0a0f4e..fad6acce46e4 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -284,8 +284,7 @@ static enum hrtimer_restart kvm_mips_comparecount_wakeup(struct hrtimer *timer)
 	kvm_mips_callbacks->queue_timer_int(vcpu);
 
 	vcpu->arch.wait = 0;
-	if (swq_has_sleeper(&vcpu->wq))
-		swake_up_one(&vcpu->wq);
+	rcuwait_wake_up(&vcpu->wait);
 
 	return kvm_mips_count_timeout(vcpu);
 }
@@ -511,8 +510,7 @@ int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu,
 
 	dvcpu->arch.wait = 0;
 
-	if (swq_has_sleeper(&dvcpu->wq))
-		swake_up_one(&dvcpu->wq);
+	rcuwait_wake_up(&dvcpu->wait);
 
 	return 0;
 }
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 506e4df2d730..6e5d85ba588d 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -78,7 +78,7 @@ struct kvmppc_vcore {
 	struct kvm_vcpu *runnable_threads[MAX_SMT_THREADS];
 	struct list_head preempt_list;
 	spinlock_t lock;
-	struct swait_queue_head wq;
+	struct rcuwait wait;
 	spinlock_t stoltb_lock;	/* protects stolen_tb and preempt_tb */
 	u64 stolen_tb;
 	u64 preempt_tb;
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 1dc63101ffe1..337047ba4a56 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -751,7 +751,7 @@ struct kvm_vcpu_arch {
 	u8 irq_pending; /* Used by XIVE to signal pending guest irqs */
 	u32 last_inst;
 
-	struct swait_queue_head *wqp;
+	struct rcuwait *waitp;
 	struct kvmppc_vcore *vcore;
 	int ret;
 	int trap;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 93493f0cbfe8..b8d42f523ca7 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -230,13 +230,11 @@ static bool kvmppc_ipi_thread(int cpu)
 static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu)
 {
 	int cpu;
-	struct swait_queue_head *wqp;
+	struct rcuwait *wait;
 
-	wqp = kvm_arch_vcpu_wq(vcpu);
-	if (swq_has_sleeper(wqp)) {
-		swake_up_one(wqp);
+	wait = kvm_arch_vcpu_get_wait(vcpu);
+	if (rcuwait_wake_up(wait))
 		++vcpu->stat.halt_wakeup;
-	}
 
 	cpu = READ_ONCE(vcpu->arch.thread_cpu);
 	if (cpu >= 0 && kvmppc_ipi_thread(cpu))
@@ -2125,7 +2123,7 @@ static struct kvmppc_vcore *kvmppc_vcore_create(struct kvm *kvm, int id)
 
 	spin_lock_init(&vcore->lock);
 	spin_lock_init(&vcore->stoltb_lock);
-	init_swait_queue_head(&vcore->wq);
+	rcuwait_init(&vcore->wait);
 	vcore->preempt_tb = TB_NIL;
 	vcore->lpcr = kvm->arch.lpcr;
 	vcore->first_vcpuid = id;
@@ -3784,7 +3782,6 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 	ktime_t cur, start_poll, start_wait;
 	int do_sleep = 1;
 	u64 block_ns;
-	DECLARE_SWAITQUEUE(wait);
 
 	/* Poll for pending exceptions and ceded state */
 	cur = start_poll = ktime_get();
@@ -3812,10 +3809,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 		}
 	}
 
-	prepare_to_swait_exclusive(&vc->wq, &wait, TASK_INTERRUPTIBLE);
-
 	if (kvmppc_vcore_check_block(vc)) {
-		finish_swait(&vc->wq, &wait);
 		do_sleep = 0;
 		/* If we polled, count this as a successful poll */
 		if (vc->halt_poll_ns)
@@ -3828,8 +3822,8 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 	vc->vcore_state = VCORE_SLEEPING;
 	trace_kvmppc_vcore_blocked(vc, 0);
 	spin_unlock(&vc->lock);
-	schedule();
-	finish_swait(&vc->wq, &wait);
+	rcuwait_wait_event(&vc->wait,
+			   kvmppc_vcore_check_block(vc), TASK_INTERRUPTIBLE);
 	spin_lock(&vc->lock);
 	vc->vcore_state = VCORE_INACTIVE;
 	trace_kvmppc_vcore_blocked(vc, 1);
@@ -3940,7 +3934,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 			kvmppc_start_thread(vcpu, vc);
 			trace_kvm_guest_enter(vcpu);
 		} else if (vc->vcore_state == VCORE_SLEEPING) {
-			swake_up_one(&vc->wq);
+			rcuwait_wake_up(&vc->wait);
 		}
 
 	}
@@ -4279,7 +4273,7 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu)
 	}
 	user_vrsave = mfspr(SPRN_VRSAVE);
 
-	vcpu->arch.wqp = &vcpu->arch.vcore->wq;
+	vcpu->arch.waitp = &vcpu->arch.vcore->wait;
 	vcpu->arch.pgdir = kvm->mm->pgd;
 	vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index e15166b0a16d..4a074b587520 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -751,7 +751,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	if (err)
 		goto out_vcpu_uninit;
 
-	vcpu->arch.wqp = &vcpu->wq;
+	vcpu->arch.waitp = &vcpu->wait;
 	kvmppc_create_vcpu_debugfs(vcpu, vcpu->vcpu_id);
 	return 0;
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 9af25c97612a..54345dc645ba 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1833,7 +1833,7 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
 	/* If the preempt notifier has already run, it also called apic_timer_expired */
 	if (!apic->lapic_timer.hv_timer_in_use)
 		goto out;
-	WARN_ON(swait_active(&vcpu->wq));
+	WARN_ON(rcuwait_active(&vcpu->wait));
 	cancel_hv_timer(apic);
 	apic_timer_expired(apic);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6d58beb65454..fc34021546bd 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -23,7 +23,7 @@
 #include
 #include
 #include
-#include <linux/swait.h>
+#include <linux/rcuwait.h>
 #include
 #include
 #include
@@ -277,7 +277,7 @@ struct kvm_vcpu {
 	struct mutex mutex;
 	struct kvm_run *run;
 
-	struct swait_queue_head wq;
+	struct rcuwait wait;
 	struct pid __rcu *pid;
 	int sigset_active;
 	sigset_t sigset;
@@ -956,12 +956,12 @@ static inline bool kvm_arch_has_assigned_device(struct kvm *kvm)
 }
 #endif
 
-static inline struct swait_queue_head *kvm_arch_vcpu_wq(struct kvm_vcpu *vcpu)
+static inline struct rcuwait *kvm_arch_vcpu_get_wait(struct kvm_vcpu *vcpu)
 {
 #ifdef __KVM_HAVE_ARCH_WQP
-	return vcpu->arch.wqp;
+	return vcpu->arch.waitp;
 #else
-	return &vcpu->wq;
+	return &vcpu->wait;
 #endif
 }
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 93bd59b46848..d5024416e722 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -571,6 +571,7 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
 {
 	struct arch_timer_cpu *timer = vcpu_timer(vcpu);
 	struct timer_map map;
+	struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
 
 	if (unlikely(!timer->enabled))
 		return;
@@ -593,7 +594,7 @@ void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
 	if (map.emul_ptimer)
 		soft_timer_cancel(&map.emul_ptimer->hrtimer);
 
-	if (swait_active(kvm_arch_vcpu_wq(vcpu)))
+	if (rcuwait_active(wait))
 		kvm_timer_blocking(vcpu);
 
 	/*
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 48d0ec44ad77..479f36d02418 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -579,16 +579,17 @@ void kvm_arm_resume_guest(struct kvm *kvm)
 
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		vcpu->arch.pause = false;
-		swake_up_one(kvm_arch_vcpu_wq(vcpu));
+		rcuwait_wake_up(kvm_arch_vcpu_get_wait(vcpu));
 	}
 }
 
 static void vcpu_req_sleep(struct kvm_vcpu *vcpu)
 {
-	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
+	struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
 
-	swait_event_interruptible_exclusive(*wq, ((!vcpu->arch.power_off) && (!vcpu->arch.pause)));
+	rcuwait_wait_event(wait,
+			   (!vcpu->arch.power_off) &&(!vcpu->arch.pause),
+			   TASK_INTERRUPTIBLE);
 
 	if (vcpu->arch.power_off || vcpu->arch.pause) {
 		/* Awaken to handle a signal, request we sleep again later. */
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 15e5b037f92d..10b533f641a6 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -80,8 +80,7 @@ static void async_pf_execute(struct work_struct *work)
 
 	trace_kvm_async_pf_completed(addr, cr2_or_gpa);
 
-	if (swq_has_sleeper(&vcpu->wq))
-		swake_up_one(&vcpu->wq);
+	rcuwait_wake_up(&vcpu->wait);
 
 	mmput(mm);
 	kvm_put_kvm(vcpu->kvm);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 74bdb7bf3295..f027ae3598e8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -341,7 +341,7 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 	vcpu->kvm = kvm;
 	vcpu->vcpu_id = id;
 	vcpu->pid = NULL;
-	init_swait_queue_head(&vcpu->wq);
+	rcuwait_init(&vcpu->wait);
 	kvm_async_pf_vcpu_init(vcpu);
 
 	vcpu->pre_pcpu = -1;
@@ -2671,7 +2671,6 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
 void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 {
 	ktime_t start, cur;
-	DECLARE_SWAITQUEUE(wait);
 	bool waited = false;
 	u64 block_ns;
 
@@ -2697,8 +2696,9 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 		} while (single_task_running() && ktime_before(cur, stop));
 	}
 
+	prepare_to_rcuwait(&vcpu->wait);
 	for (;;) {
-		prepare_to_swait_exclusive(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
+		set_current_state(TASK_INTERRUPTIBLE);
 
 		if (kvm_vcpu_check_block(vcpu) < 0)
 			break;
@@ -2706,8 +2706,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 		waited = true;
 		schedule();
 	}
-
-	finish_swait(&vcpu->wq, &wait);
+	finish_rcuwait(&vcpu->wait);
 	cur = ktime_get();
 out:
 	kvm_arch_vcpu_unblocking(vcpu);
@@ -2738,11 +2737,10 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_block);
 
 bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
 {
-	struct swait_queue_head *wqp;
+	struct rcuwait *wait;
 
-	wqp = kvm_arch_vcpu_wq(vcpu);
-	if (swq_has_sleeper(wqp)) {
-		swake_up_one(wqp);
+	wait = kvm_arch_vcpu_get_wait(vcpu);
+	if (rcuwait_wake_up(wait)) {
 		WRITE_ONCE(vcpu->ready, true);
 		++vcpu->stat.halt_wakeup;
 		return true;
@@ -2884,7 +2882,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 				continue;
 			if (vcpu == me)
 				continue;
-			if (swait_active(&vcpu->wq) && !vcpu_dy_runnable(vcpu))
+			if (rcuwait_active(&vcpu->wait) &&
+			    !vcpu_dy_runnable(vcpu))
 				continue;
 			if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
 			    !kvm_arch_vcpu_in_kernel(vcpu))