From patchwork Tue Jul 19 08:12:53 2016
X-Patchwork-Submitter: Suraj Jitindar Singh
X-Patchwork-Id: 9236333

From: Suraj Jitindar Singh
To: linuxppc-dev@lists.ozlabs.org
Cc: sjitindarsingh@gmail.com, kvm-ppc@vger.kernel.org, kvm@vger.kernel.org,
    mpe@ellerman.id.au, paulus@samba.org, benh@kernel.crashing.org,
    pbonzini@redhat.com, agraf@suse.com, rkrcmar@redhat.com,
    dmatlack@google.com, borntraeger@de.ibm.com
Subject: [PATCH V4 1/5] kvm/ppc/book3s: Move struct kvmppc_vcore from kvm_host.h to kvm_book3s.h
Date: Tue, 19 Jul 2016 18:12:53 +1000
Message-Id: <1468915977-26929-1-git-send-email-sjitindarsingh@gmail.com>
X-Mailer: git-send-email 2.5.5
X-Mailing-List: kvm@vger.kernel.org

The next commit will introduce a member to the kvmppc_vcore struct
which references MAX_SMT_THREADS, which is defined in kvm_book3s_asm.h;
however, this file isn't included in kvm_host.h
directly. Thus compiling for certain platforms such as pmac32_defconfig
and ppc64e_defconfig with KVM fails due to MAX_SMT_THREADS not being
defined.

Move the struct kvmppc_vcore definition to kvm_book3s.h, which
explicitly includes kvm_book3s_asm.h.

Signed-off-by: Suraj Jitindar Singh
---
Change Log:

V1 -> V2:
- Added patch to series

---
 arch/powerpc/include/asm/kvm_book3s.h | 35 +++++++++++++++++++++++++++++++++++
 arch/powerpc/include/asm/kvm_host.h   | 35 -----------------------------------
 2 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 8f39796..a50c5fe 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -69,6 +69,41 @@ struct hpte_cache {
 	int pagesize;
 };
 
+/*
+ * Struct for a virtual core.
+ * Note: entry_exit_map combines a bitmap of threads that have entered
+ * in the bottom 8 bits and a bitmap of threads that have exited in the
+ * next 8 bits. This is so that we can atomically set the entry bit
+ * iff the exit map is 0 without taking a lock.
+ */
+struct kvmppc_vcore {
+	int n_runnable;
+	int num_threads;
+	int entry_exit_map;
+	int napping_threads;
+	int first_vcpuid;
+	u16 pcpu;
+	u16 last_cpu;
+	u8 vcore_state;
+	u8 in_guest;
+	struct kvmppc_vcore *master_vcore;
+	struct list_head runnable_threads;
+	struct list_head preempt_list;
+	spinlock_t lock;
+	struct swait_queue_head wq;
+	spinlock_t stoltb_lock;	/* protects stolen_tb and preempt_tb */
+	u64 stolen_tb;
+	u64 preempt_tb;
+	struct kvm_vcpu *runner;
+	struct kvm *kvm;
+	u64 tb_offset;		/* guest timebase - host timebase */
+	ulong lpcr;
+	u32 arch_compat;
+	ulong pcr;
+	ulong dpdes;		/* doorbell state (POWER8) */
+	ulong conferring_threads;
+};
+
 struct kvmppc_vcpu_book3s {
 	struct kvmppc_sid_map sid_map[SID_MAP_NUM];
 	struct {
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index ec35af3..19c6731 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -275,41 +275,6 @@ struct kvm_arch {
 #endif
 };
 
-/*
- * Struct for a virtual core.
- * Note: entry_exit_map combines a bitmap of threads that have entered
- * in the bottom 8 bits and a bitmap of threads that have exited in the
- * next 8 bits. This is so that we can atomically set the entry bit
- * iff the exit map is 0 without taking a lock.
- */
-struct kvmppc_vcore {
-	int n_runnable;
-	int num_threads;
-	int entry_exit_map;
-	int napping_threads;
-	int first_vcpuid;
-	u16 pcpu;
-	u16 last_cpu;
-	u8 vcore_state;
-	u8 in_guest;
-	struct kvmppc_vcore *master_vcore;
-	struct list_head runnable_threads;
-	struct list_head preempt_list;
-	spinlock_t lock;
-	struct swait_queue_head wq;
-	spinlock_t stoltb_lock;	/* protects stolen_tb and preempt_tb */
-	u64 stolen_tb;
-	u64 preempt_tb;
-	struct kvm_vcpu *runner;
-	struct kvm *kvm;
-	u64 tb_offset;		/* guest timebase - host timebase */
-	ulong lpcr;
-	u32 arch_compat;
-	ulong pcr;
-	ulong dpdes;		/* doorbell state (POWER8) */
-	ulong conferring_threads;
-};
-
 #define VCORE_ENTRY_MAP(vc)	((vc)->entry_exit_map & 0xff)
 #define VCORE_EXIT_MAP(vc)	((vc)->entry_exit_map >> 8)
 #define VCORE_IS_EXITING(vc)	(VCORE_EXIT_MAP(vc) != 0)