From patchwork Mon Jul 11 07:08:29 2016
X-Patchwork-Submitter: Suraj Jitindar Singh
X-Patchwork-Id: 9222951
From: Suraj Jitindar Singh
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, mpe@ellerman.id.au, paulus@samba.org,
    benh@kernel.crashing.org, kvm@vger.kernel.org, pbonzini@redhat.com,
    agraf@suse.com, rkrcmar@redhat.com, sjitindarsingh@gmail.com
Subject: [PATCH V2 2/5] kvm/ppc/book3s_hv: Change vcore element
    runnable_threads from linked-list to array
Date: Mon, 11 Jul 2016 17:08:29 +1000
Message-Id: <1468220912-22828-2-git-send-email-sjitindarsingh@gmail.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1468220912-22828-1-git-send-email-sjitindarsingh@gmail.com>
References: <1468220912-22828-1-git-send-email-sjitindarsingh@gmail.com>

The kvmppc_vcore struct stores various information about a virtual core
for a KVM guest. Its runnable_threads element provides a list of all of
the currently runnable vcpus on the core (those in the
KVMPPC_VCPU_RUNNABLE state). The previous implementation of this list was
a linked list. The next patch requires that the list be able to be
iterated over without holding the vcore lock; an array indexed by thread
id can be scanned safely with READ_ONCE() while entries are concurrently
cleared, which a linked list does not allow.

Reimplement the runnable_threads list in the kvmppc_vcore struct as an
array. Implement a function to iterate over the valid entries in the
array and update the access sites accordingly.

---
Change Log:

V1 -> V2:
 - Nothing

Signed-off-by: Suraj Jitindar Singh
---
 arch/powerpc/include/asm/kvm_book3s.h |  2 +-
 arch/powerpc/include/asm/kvm_host.h   |  1 -
 arch/powerpc/kvm/book3s_hv.c          | 68 +++++++++++++++++++++--------------
 3 files changed, 43 insertions(+), 28 deletions(-)
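The change is easiest to see in isolation, so here is a stand-alone,
user-space model of the array-based list and its iterator. The demo_*
names are invented for illustration, and a volatile access stands in for
the kernel's READ_ONCE(), so this is a sketch of the pattern rather than
the kernel code itself:

/*
 * User-space model of the array-based runnable_threads list.
 * demo_* names are illustrative only; a volatile load stands in
 * for the kernel's READ_ONCE().
 */
#include <stdio.h>

#define MAX_SMT_THREADS 8

struct demo_vcpu { int id; };

struct demo_vcore {
	struct demo_vcpu *runnable_threads[MAX_SMT_THREADS];
};

/* Mirrors next_runnable_thread(): advance *ip to the next non-NULL slot. */
static struct demo_vcpu *demo_next_runnable(struct demo_vcore *vc, int *ip)
{
	int i = *ip;

	while (++i < MAX_SMT_THREADS) {
		/* One atomic-ish pointer load per slot, as with READ_ONCE. */
		struct demo_vcpu *vcpu =
			*(struct demo_vcpu * volatile *)&vc->runnable_threads[i];
		if (vcpu) {
			*ip = i;
			return vcpu;
		}
	}
	return NULL;
}

/* Mirrors for_each_runnable_thread(): i starts at -1, stops on NULL. */
#define demo_for_each_runnable(i, vcpu, vc) \
	for (i = -1; ((vcpu) = demo_next_runnable((vc), &(i))); )

int main(void)
{
	struct demo_vcore vc = { { NULL } };
	struct demo_vcpu a = { 0 }, b = { 5 };
	struct demo_vcpu *v;
	int i;

	/* Each vcpu occupies the slot of its thread id; gaps stay NULL. */
	vc.runnable_threads[0] = &a;
	vc.runnable_threads[5] = &b;

	demo_for_each_runnable(i, v, &vc)
		printf("slot %d -> vcpu %d\n", i, v->id);
	return 0;
}

An empty slot reads back as NULL and is simply skipped, which is why,
unlike a linked list, the array can be walked while another thread is
concurrently removing entries.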
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index a50c5fe..151f817 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -87,7 +87,7 @@ struct kvmppc_vcore {
 	u8 vcore_state;
 	u8 in_guest;
 	struct kvmppc_vcore *master_vcore;
-	struct list_head runnable_threads;
+	struct kvm_vcpu *runnable_threads[MAX_SMT_THREADS];
 	struct list_head preempt_list;
 	spinlock_t lock;
 	struct swait_queue_head wq;
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 19c6731..02d06e9 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -633,7 +633,6 @@ struct kvm_vcpu_arch {
 	long pgfault_index;
 	unsigned long pgfault_hpte[2];
 
-	struct list_head run_list;
 	struct task_struct *run_task;
 	struct kvm_run *kvm_run;
 
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index e20beae..3bcf9e6 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -57,6 +57,7 @@
 #include
 #include
 #include
+#include
 
 #include "book3s.h"
@@ -96,6 +97,26 @@ MODULE_PARM_DESC(h_ipi_redirect, "Redirect H_IPI wakeup to a free host core");
 static void kvmppc_end_cede(struct kvm_vcpu *vcpu);
 static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu);
 
+static inline struct kvm_vcpu *next_runnable_thread(struct kvmppc_vcore *vc,
+		int *ip)
+{
+	int i = *ip;
+	struct kvm_vcpu *vcpu;
+
+	while (++i < MAX_SMT_THREADS) {
+		vcpu = READ_ONCE(vc->runnable_threads[i]);
+		if (vcpu) {
+			*ip = i;
+			return vcpu;
+		}
+	}
+	return NULL;
+}
+
+/* Used to traverse the list of runnable threads for a given vcore */
+#define for_each_runnable_thread(i, vcpu, vc) \
+	for (i = -1; (vcpu = next_runnable_thread(vc, &i)); )
+
 static bool kvmppc_ipi_thread(int cpu)
 {
 	/* On POWER8 for IPIs to threads in the same core, use msgsnd */
@@ -1492,7 +1513,6 @@ static struct kvmppc_vcore *kvmppc_vcore_create(struct kvm *kvm, int core)
 	if (vcore == NULL)
 		return NULL;
 
-	INIT_LIST_HEAD(&vcore->runnable_threads);
 	spin_lock_init(&vcore->lock);
 	spin_lock_init(&vcore->stoltb_lock);
 	init_swait_queue_head(&vcore->wq);
@@ -1801,7 +1821,7 @@ static void kvmppc_remove_runnable(struct kvmppc_vcore *vc,
 	vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST;
 	spin_unlock_irq(&vcpu->arch.tbacct_lock);
 	--vc->n_runnable;
-	list_del(&vcpu->arch.run_list);
+	WRITE_ONCE(vc->runnable_threads[vcpu->arch.ptid], NULL);
 }
 
 static int kvmppc_grab_hwthread(int cpu)
@@ -2208,10 +2228,10 @@ static bool can_piggyback(struct kvmppc_vcore *pvc, struct core_info *cip,
 
 static void prepare_threads(struct kvmppc_vcore *vc)
 {
-	struct kvm_vcpu *vcpu, *vnext;
+	int i;
+	struct kvm_vcpu *vcpu;
 
-	list_for_each_entry_safe(vcpu, vnext, &vc->runnable_threads,
-				 arch.run_list) {
+	for_each_runnable_thread(i, vcpu, vc) {
 		if (signal_pending(vcpu->arch.run_task))
 			vcpu->arch.ret = -EINTR;
 		else if (vcpu->arch.vpa.update_pending ||
@@ -2258,15 +2278,14 @@ static void collect_piggybacks(struct core_info *cip, int target_threads)
 
 static void post_guest_process(struct kvmppc_vcore *vc, bool is_master)
 {
-	int still_running = 0;
+	int still_running = 0, i;
 	u64 now;
 	long ret;
-	struct kvm_vcpu *vcpu, *vnext;
+	struct kvm_vcpu *vcpu;
 
 	spin_lock(&vc->lock);
 	now = get_tb();
-	list_for_each_entry_safe(vcpu, vnext, &vc->runnable_threads,
-				 arch.run_list) {
+	for_each_runnable_thread(i, vcpu, vc) {
 		/* cancel pending dec exception if dec is positive */
 		if (now < vcpu->arch.dec_expires &&
 		    kvmppc_core_pending_dec(vcpu))
@@ -2306,8 +2325,8 @@ static void post_guest_process(struct kvmppc_vcore *vc, bool is_master)
 	}
 	if (vc->n_runnable > 0 && vc->runner == NULL) {
 		/* make sure there's a candidate runner awake */
-		vcpu = list_first_entry(&vc->runnable_threads,
-					struct kvm_vcpu, arch.run_list);
+		i = -1;
+		vcpu = next_runnable_thread(vc, &i);
 		wake_up(&vcpu->arch.cpu_run);
 	}
 }
@@ -2360,7 +2379,7 @@ static inline void kvmppc_set_host_core(int cpu)
  */
 static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
 {
-	struct kvm_vcpu *vcpu, *vnext;
+	struct kvm_vcpu *vcpu;
 	int i;
 	int srcu_idx;
 	struct core_info core_info;
@@ -2396,8 +2415,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
 	 */
 	if ((threads_per_core > 1) &&
 	    ((vc->num_threads > threads_per_subcore) || !on_primary_thread())) {
-		list_for_each_entry_safe(vcpu, vnext, &vc->runnable_threads,
-					 arch.run_list) {
+		for_each_runnable_thread(i, vcpu, vc) {
 			vcpu->arch.ret = -EBUSY;
 			kvmppc_remove_runnable(vc, vcpu);
 			wake_up(&vcpu->arch.cpu_run);
@@ -2476,8 +2494,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
 		active |= 1 << thr;
 		list_for_each_entry(pvc, &core_info.vcs[sub], preempt_list) {
 			pvc->pcpu = pcpu + thr;
-			list_for_each_entry(vcpu, &pvc->runnable_threads,
-					    arch.run_list) {
+			for_each_runnable_thread(i, vcpu, pvc) {
 				kvmppc_start_thread(vcpu, pvc);
 				kvmppc_create_dtl_entry(vcpu, pvc);
 				trace_kvm_guest_enter(vcpu);
@@ -2610,7 +2627,7 @@ static void kvmppc_wait_for_exec(struct kvmppc_vcore *vc,
 static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 {
 	struct kvm_vcpu *vcpu;
-	int do_sleep = 1;
+	int do_sleep = 1, i;
 	DECLARE_SWAITQUEUE(wait);
 
 	prepare_to_swait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
@@ -2619,7 +2636,7 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
 	/*
	 * Check one last time for pending exceptions and ceded state after
 	 * we put ourselves on the wait queue
 	 */
-	list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list) {
+	for_each_runnable_thread(i, vcpu, vc) {
 		if (vcpu->arch.pending_exceptions || !vcpu->arch.ceded) {
 			do_sleep = 0;
 			break;
@@ -2643,9 +2660,9 @@
 
 static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 {
-	int n_ceded;
+	int n_ceded, i;
 	struct kvmppc_vcore *vc;
-	struct kvm_vcpu *v, *vn;
+	struct kvm_vcpu *v;
 
 	trace_kvmppc_run_vcpu_enter(vcpu);
 
@@ -2665,7 +2682,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	vcpu->arch.stolen_logged = vcore_stolen_time(vc, mftb());
 	vcpu->arch.state = KVMPPC_VCPU_RUNNABLE;
 	vcpu->arch.busy_preempt = TB_NIL;
-	list_add_tail(&vcpu->arch.run_list, &vc->runnable_threads);
+	WRITE_ONCE(vc->runnable_threads[vcpu->arch.ptid], vcpu);
 	++vc->n_runnable;
 
 	/*
@@ -2705,8 +2722,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 			kvmppc_wait_for_exec(vc, vcpu, TASK_INTERRUPTIBLE);
 			continue;
 		}
-		list_for_each_entry_safe(v, vn, &vc->runnable_threads,
-					 arch.run_list) {
+		for_each_runnable_thread(i, v, vc) {
 			kvmppc_core_prepare_to_enter(v);
 			if (signal_pending(v->arch.run_task)) {
 				kvmppc_remove_runnable(vc, v);
@@ -2719,7 +2735,7 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 		if (!vc->n_runnable || vcpu->arch.state != KVMPPC_VCPU_RUNNABLE)
 			break;
 		n_ceded = 0;
-		list_for_each_entry(v, &vc->runnable_threads, arch.run_list) {
+		for_each_runnable_thread(i, v, vc) {
 			if (!v->arch.pending_exceptions)
 				n_ceded += v->arch.ceded;
 			else
@@ -2758,8 +2774,8 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 
 	if (vc->n_runnable && vc->vcore_state == VCORE_INACTIVE) {
 		/* Wake up some vcpu to run the core */
-		v = list_first_entry(&vc->runnable_threads,
-				     struct kvm_vcpu, arch.run_list);
+		i = -1;
+		v = next_runnable_thread(vc, &i);
 		wake_up(&v->arch.cpu_run);
 	}
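For completeness, here is a matching user-space sketch of the two update
sites the patch converts: list_add_tail() becomes a WRITE_ONCE() store of
the vcpu into its own ptid-indexed slot, and list_del() becomes a
WRITE_ONCE() store of NULL. The demo_* names are again illustrative and
volatile stores stand in for the kernel's WRITE_ONCE(); the point is that
removal only clears the departing vcpu's fixed slot, never another
thread's link, so a concurrent for_each_runnable_thread() walker either
sees a valid vcpu pointer or a NULL gap:

/*
 * Companion sketch to the one above the diff: models the add/remove
 * sites. demo_* names are illustrative; volatile stores stand in for
 * the kernel's WRITE_ONCE().
 */
#include <stdio.h>

#define MAX_SMT_THREADS 8

struct demo_vcpu { int ptid; };

struct demo_vcore {
	struct demo_vcpu *runnable_threads[MAX_SMT_THREADS];
	int n_runnable;
};

/* Models the kvmppc_run_vcpu() site: publish the vcpu in its own slot. */
static void demo_add_runnable(struct demo_vcore *vc, struct demo_vcpu *vcpu)
{
	*(struct demo_vcpu * volatile *)&vc->runnable_threads[vcpu->ptid] = vcpu;
	++vc->n_runnable;
}

/* Models kvmppc_remove_runnable(): clear the slot, leaving a NULL gap. */
static void demo_remove_runnable(struct demo_vcore *vc, struct demo_vcpu *vcpu)
{
	--vc->n_runnable;
	*(struct demo_vcpu * volatile *)&vc->runnable_threads[vcpu->ptid] = NULL;
}

int main(void)
{
	struct demo_vcore vc = { { NULL }, 0 };
	struct demo_vcpu t0 = { 0 }, t3 = { 3 };

	demo_add_runnable(&vc, &t0);
	demo_add_runnable(&vc, &t3);
	demo_remove_runnable(&vc, &t0);

	/* Slot 0 now reads back NULL; slot 3 still holds t3. */
	printf("slot 0: %p, slot 3: %p, n_runnable: %d\n",
	       (void *)vc.runnable_threads[0], (void *)vc.runnable_threads[3],
	       vc.n_runnable);
	return 0;
}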