From patchwork Wed Mar 16 05:06:58 2016
X-Patchwork-Submitter: David Gibson
X-Patchwork-Id: 8594951
From: David Gibson
To: peter.maydell@linaro.org
Date: Wed, 16 Mar 2016 16:06:58 +1100
Message-Id: <1458104828-32541-7-git-send-email-david@gibson.dropbear.id.au>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1458104828-32541-1-git-send-email-david@gibson.dropbear.id.au>
References: <1458104828-32541-1-git-send-email-david@gibson.dropbear.id.au>
Cc: qemu-devel@nongnu.org, aik@ozlabs.ru, agraf@suse.de,
    mdroth@linux.vnet.ibm.com, alex.williamson@redhat.com,
    qemu-ppc@nongnu.org, David Gibson
Subject: [Qemu-devel] [PULL 06/16] target-ppc: Split out SREGS get/put functions

Currently the getting and setting of Power MMU registers (sregs) take up
large inline chunks of the kvm_arch_get_registers() and
kvm_arch_put_registers() functions.  Especially since there are two
variants (for Book-E and Book-S CPUs), only one of which will be used in
practice, this is pretty hard to read.

This patch splits these out into helper functions for clarity.  No
functional change is expected.

Signed-off-by: David Gibson
Reviewed-by: Thomas Huth
Reviewed-by: Alexey Kardashevskiy
Reviewed-by: Greg Kurz
---
 target-ppc/kvm.c | 421 ++++++++++++++++++++++++++++++-------------------------
 1 file changed, 228 insertions(+), 193 deletions(-)

diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index d67c169..4161f64 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -867,6 +867,44 @@ static int kvm_put_vpa(CPUState *cs)
 }
 #endif /* TARGET_PPC64 */
 
+static int kvmppc_put_books_sregs(PowerPCCPU *cpu)
+{
+    CPUPPCState *env = &cpu->env;
+    struct kvm_sregs sregs;
+    int i;
+
+    sregs.pvr = env->spr[SPR_PVR];
+
+    sregs.u.s.sdr1 = env->spr[SPR_SDR1];
+
+    /* Sync SLB */
+#ifdef TARGET_PPC64
+    for (i = 0; i < ARRAY_SIZE(env->slb); i++) {
+        sregs.u.s.ppc64.slb[i].slbe = env->slb[i].esid;
+        if (env->slb[i].esid & SLB_ESID_V) {
+            sregs.u.s.ppc64.slb[i].slbe |= i;
+        }
+        sregs.u.s.ppc64.slb[i].slbv = env->slb[i].vsid;
+    }
+#endif
+
+    /* Sync SRs */
+    for (i = 0; i < 16; i++) {
+        sregs.u.s.ppc32.sr[i] = env->sr[i];
+    }
+
+    /* Sync BATs */
+    for (i = 0; i < 8; i++) {
+        /* Beware. We have to swap upper and lower bits here */
+        sregs.u.s.ppc32.dbat[i] = ((uint64_t)env->DBAT[0][i] << 32)
+            | env->DBAT[1][i];
+        sregs.u.s.ppc32.ibat[i] = ((uint64_t)env->IBAT[0][i] << 32)
+            | env->IBAT[1][i];
+    }
+
+    return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_SREGS, &sregs);
+}
+
 int kvm_arch_put_registers(CPUState *cs, int level)
 {
     PowerPCCPU *cpu = POWERPC_CPU(cs);
@@ -920,39 +958,8 @@ int kvm_arch_put_registers(CPUState *cs, int level)
     }
 
     if (cap_segstate && (level >= KVM_PUT_RESET_STATE)) {
-        struct kvm_sregs sregs;
-
-        sregs.pvr = env->spr[SPR_PVR];
-
-        sregs.u.s.sdr1 = env->spr[SPR_SDR1];
-
-        /* Sync SLB */
-#ifdef TARGET_PPC64
-        for (i = 0; i < ARRAY_SIZE(env->slb); i++) {
-            sregs.u.s.ppc64.slb[i].slbe = env->slb[i].esid;
-            if (env->slb[i].esid & SLB_ESID_V) {
-                sregs.u.s.ppc64.slb[i].slbe |= i;
-            }
-            sregs.u.s.ppc64.slb[i].slbv = env->slb[i].vsid;
-        }
-#endif
-
-        /* Sync SRs */
-        for (i = 0; i < 16; i++) {
-            sregs.u.s.ppc32.sr[i] = env->sr[i];
-        }
-
-        /* Sync BATs */
-        for (i = 0; i < 8; i++) {
-            /* Beware. We have to swap upper and lower bits here */
-            sregs.u.s.ppc32.dbat[i] = ((uint64_t)env->DBAT[0][i] << 32)
-                | env->DBAT[1][i];
-            sregs.u.s.ppc32.ibat[i] = ((uint64_t)env->IBAT[0][i] << 32)
-                | env->IBAT[1][i];
-        }
-
-        ret = kvm_vcpu_ioctl(cs, KVM_SET_SREGS, &sregs);
-        if (ret) {
+        ret = kvmppc_put_books_sregs(cpu);
+        if (ret < 0) {
             return ret;
         }
     }
@@ -1014,12 +1021,197 @@ static void kvm_sync_excp(CPUPPCState *env, int vector, int ivor)
     env->excp_vectors[vector] = env->spr[ivor] + env->spr[SPR_BOOKE_IVPR];
 }
 
+static int kvmppc_get_booke_sregs(PowerPCCPU *cpu)
+{
+    CPUPPCState *env = &cpu->env;
+    struct kvm_sregs sregs;
+    int ret;
+
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_SREGS, &sregs);
+    if (ret < 0) {
+        return ret;
+    }
+
+    if (sregs.u.e.features & KVM_SREGS_E_BASE) {
+        env->spr[SPR_BOOKE_CSRR0] = sregs.u.e.csrr0;
+        env->spr[SPR_BOOKE_CSRR1] = sregs.u.e.csrr1;
+        env->spr[SPR_BOOKE_ESR] = sregs.u.e.esr;
+        env->spr[SPR_BOOKE_DEAR] = sregs.u.e.dear;
+        env->spr[SPR_BOOKE_MCSR] = sregs.u.e.mcsr;
+        env->spr[SPR_BOOKE_TSR] = sregs.u.e.tsr;
+        env->spr[SPR_BOOKE_TCR] = sregs.u.e.tcr;
+        env->spr[SPR_DECR] = sregs.u.e.dec;
+        env->spr[SPR_TBL] = sregs.u.e.tb & 0xffffffff;
+        env->spr[SPR_TBU] = sregs.u.e.tb >> 32;
+        env->spr[SPR_VRSAVE] = sregs.u.e.vrsave;
+    }
+
+    if (sregs.u.e.features & KVM_SREGS_E_ARCH206) {
+        env->spr[SPR_BOOKE_PIR] = sregs.u.e.pir;
+        env->spr[SPR_BOOKE_MCSRR0] = sregs.u.e.mcsrr0;
+        env->spr[SPR_BOOKE_MCSRR1] = sregs.u.e.mcsrr1;
+        env->spr[SPR_BOOKE_DECAR] = sregs.u.e.decar;
+        env->spr[SPR_BOOKE_IVPR] = sregs.u.e.ivpr;
+    }
+
+    if (sregs.u.e.features & KVM_SREGS_E_64) {
+        env->spr[SPR_BOOKE_EPCR] = sregs.u.e.epcr;
+    }
+
+    if (sregs.u.e.features & KVM_SREGS_E_SPRG8) {
+        env->spr[SPR_BOOKE_SPRG8] = sregs.u.e.sprg8;
+    }
+
+    if (sregs.u.e.features & KVM_SREGS_E_IVOR) {
+        env->spr[SPR_BOOKE_IVOR0] = sregs.u.e.ivor_low[0];
+        kvm_sync_excp(env, POWERPC_EXCP_CRITICAL, SPR_BOOKE_IVOR0);
+        env->spr[SPR_BOOKE_IVOR1] = sregs.u.e.ivor_low[1];
+        kvm_sync_excp(env, POWERPC_EXCP_MCHECK, SPR_BOOKE_IVOR1);
+        env->spr[SPR_BOOKE_IVOR2] = sregs.u.e.ivor_low[2];
+        kvm_sync_excp(env, POWERPC_EXCP_DSI, SPR_BOOKE_IVOR2);
+        env->spr[SPR_BOOKE_IVOR3] = sregs.u.e.ivor_low[3];
+        kvm_sync_excp(env, POWERPC_EXCP_ISI, SPR_BOOKE_IVOR3);
+        env->spr[SPR_BOOKE_IVOR4] = sregs.u.e.ivor_low[4];
+        kvm_sync_excp(env, POWERPC_EXCP_EXTERNAL, SPR_BOOKE_IVOR4);
+        env->spr[SPR_BOOKE_IVOR5] = sregs.u.e.ivor_low[5];
+        kvm_sync_excp(env, POWERPC_EXCP_ALIGN, SPR_BOOKE_IVOR5);
+        env->spr[SPR_BOOKE_IVOR6] = sregs.u.e.ivor_low[6];
+        kvm_sync_excp(env, POWERPC_EXCP_PROGRAM, SPR_BOOKE_IVOR6);
+        env->spr[SPR_BOOKE_IVOR7] = sregs.u.e.ivor_low[7];
+        kvm_sync_excp(env, POWERPC_EXCP_FPU, SPR_BOOKE_IVOR7);
+        env->spr[SPR_BOOKE_IVOR8] = sregs.u.e.ivor_low[8];
+        kvm_sync_excp(env, POWERPC_EXCP_SYSCALL, SPR_BOOKE_IVOR8);
+        env->spr[SPR_BOOKE_IVOR9] = sregs.u.e.ivor_low[9];
+        kvm_sync_excp(env, POWERPC_EXCP_APU, SPR_BOOKE_IVOR9);
+        env->spr[SPR_BOOKE_IVOR10] = sregs.u.e.ivor_low[10];
+        kvm_sync_excp(env, POWERPC_EXCP_DECR, SPR_BOOKE_IVOR10);
+        env->spr[SPR_BOOKE_IVOR11] = sregs.u.e.ivor_low[11];
+        kvm_sync_excp(env, POWERPC_EXCP_FIT, SPR_BOOKE_IVOR11);
+        env->spr[SPR_BOOKE_IVOR12] = sregs.u.e.ivor_low[12];
+        kvm_sync_excp(env, POWERPC_EXCP_WDT, SPR_BOOKE_IVOR12);
+        env->spr[SPR_BOOKE_IVOR13] = sregs.u.e.ivor_low[13];
+        kvm_sync_excp(env, POWERPC_EXCP_DTLB, SPR_BOOKE_IVOR13);
+        env->spr[SPR_BOOKE_IVOR14] = sregs.u.e.ivor_low[14];
+        kvm_sync_excp(env, POWERPC_EXCP_ITLB, SPR_BOOKE_IVOR14);
+        env->spr[SPR_BOOKE_IVOR15] = sregs.u.e.ivor_low[15];
+        kvm_sync_excp(env, POWERPC_EXCP_DEBUG, SPR_BOOKE_IVOR15);
+
+        if (sregs.u.e.features & KVM_SREGS_E_SPE) {
+            env->spr[SPR_BOOKE_IVOR32] = sregs.u.e.ivor_high[0];
+            kvm_sync_excp(env, POWERPC_EXCP_SPEU, SPR_BOOKE_IVOR32);
+            env->spr[SPR_BOOKE_IVOR33] = sregs.u.e.ivor_high[1];
+            kvm_sync_excp(env, POWERPC_EXCP_EFPDI, SPR_BOOKE_IVOR33);
+            env->spr[SPR_BOOKE_IVOR34] = sregs.u.e.ivor_high[2];
+            kvm_sync_excp(env, POWERPC_EXCP_EFPRI, SPR_BOOKE_IVOR34);
+        }
+
+        if (sregs.u.e.features & KVM_SREGS_E_PM) {
+            env->spr[SPR_BOOKE_IVOR35] = sregs.u.e.ivor_high[3];
+            kvm_sync_excp(env, POWERPC_EXCP_EPERFM, SPR_BOOKE_IVOR35);
+        }
+
+        if (sregs.u.e.features & KVM_SREGS_E_PC) {
+            env->spr[SPR_BOOKE_IVOR36] = sregs.u.e.ivor_high[4];
+            kvm_sync_excp(env, POWERPC_EXCP_DOORI, SPR_BOOKE_IVOR36);
+            env->spr[SPR_BOOKE_IVOR37] = sregs.u.e.ivor_high[5];
+            kvm_sync_excp(env, POWERPC_EXCP_DOORCI, SPR_BOOKE_IVOR37);
+        }
+    }
+
+    if (sregs.u.e.features & KVM_SREGS_E_ARCH206_MMU) {
+        env->spr[SPR_BOOKE_MAS0] = sregs.u.e.mas0;
+        env->spr[SPR_BOOKE_MAS1] = sregs.u.e.mas1;
+        env->spr[SPR_BOOKE_MAS2] = sregs.u.e.mas2;
+        env->spr[SPR_BOOKE_MAS3] = sregs.u.e.mas7_3 & 0xffffffff;
+        env->spr[SPR_BOOKE_MAS4] = sregs.u.e.mas4;
+        env->spr[SPR_BOOKE_MAS6] = sregs.u.e.mas6;
+        env->spr[SPR_BOOKE_MAS7] = sregs.u.e.mas7_3 >> 32;
+        env->spr[SPR_MMUCFG] = sregs.u.e.mmucfg;
+        env->spr[SPR_BOOKE_TLB0CFG] = sregs.u.e.tlbcfg[0];
+        env->spr[SPR_BOOKE_TLB1CFG] = sregs.u.e.tlbcfg[1];
+    }
+
+    if (sregs.u.e.features & KVM_SREGS_EXP) {
+        env->spr[SPR_BOOKE_EPR] = sregs.u.e.epr;
+    }
+
+    if (sregs.u.e.features & KVM_SREGS_E_PD) {
+        env->spr[SPR_BOOKE_EPLC] = sregs.u.e.eplc;
+        env->spr[SPR_BOOKE_EPSC] = sregs.u.e.epsc;
+    }
+
+    if (sregs.u.e.impl_id == KVM_SREGS_E_IMPL_FSL) {
+        env->spr[SPR_E500_SVR] = sregs.u.e.impl.fsl.svr;
+        env->spr[SPR_Exxx_MCAR] = sregs.u.e.impl.fsl.mcar;
+        env->spr[SPR_HID0] = sregs.u.e.impl.fsl.hid0;
+
+        if (sregs.u.e.impl.fsl.features & KVM_SREGS_E_FSL_PIDn) {
+            env->spr[SPR_BOOKE_PID1] = sregs.u.e.impl.fsl.pid1;
+            env->spr[SPR_BOOKE_PID2] = sregs.u.e.impl.fsl.pid2;
+        }
+    }
+
+    return 0;
+}
+
+static int kvmppc_get_books_sregs(PowerPCCPU *cpu)
+{
+    CPUPPCState *env = &cpu->env;
+    struct kvm_sregs sregs;
+    int ret;
+    int i;
+
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_SREGS, &sregs);
+    if (ret < 0) {
+        return ret;
+    }
+
+    if (!env->external_htab) {
+        ppc_store_sdr1(env, sregs.u.s.sdr1);
+    }
+
+    /* Sync SLB */
+#ifdef TARGET_PPC64
+    /*
+     * The packed SLB array we get from KVM_GET_SREGS only contains
+     * information about valid entries. So we flush our internal copy
+     * to get rid of stale ones, then put all valid SLB entries back
+     * in.
+     */
+    memset(env->slb, 0, sizeof(env->slb));
+    for (i = 0; i < ARRAY_SIZE(env->slb); i++) {
+        target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
+        target_ulong rs = sregs.u.s.ppc64.slb[i].slbv;
+        /*
+         * Only restore valid entries
+         */
+        if (rb & SLB_ESID_V) {
+            ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfffULL, rs);
+        }
+    }
+#endif
+
+    /* Sync SRs */
+    for (i = 0; i < 16; i++) {
+        env->sr[i] = sregs.u.s.ppc32.sr[i];
+    }
+
+    /* Sync BATs */
+    for (i = 0; i < 8; i++) {
+        env->DBAT[0][i] = sregs.u.s.ppc32.dbat[i] & 0xffffffff;
+        env->DBAT[1][i] = sregs.u.s.ppc32.dbat[i] >> 32;
+        env->IBAT[0][i] = sregs.u.s.ppc32.ibat[i] & 0xffffffff;
+        env->IBAT[1][i] = sregs.u.s.ppc32.ibat[i] >> 32;
+    }
+
+    return 0;
+}
+
 int kvm_arch_get_registers(CPUState *cs)
 {
     PowerPCCPU *cpu = POWERPC_CPU(cs);
     CPUPPCState *env = &cpu->env;
     struct kvm_regs regs;
-    struct kvm_sregs sregs;
     uint32_t cr;
     int i, ret;
 
@@ -1059,174 +1251,17 @@ int kvm_arch_get_registers(CPUState *cs)
     kvm_get_fp(cs);
 
     if (cap_booke_sregs) {
-        ret = kvm_vcpu_ioctl(cs, KVM_GET_SREGS, &sregs);
+        ret = kvmppc_get_booke_sregs(cpu);
         if (ret < 0) {
             return ret;
         }
-
-        if (sregs.u.e.features & KVM_SREGS_E_BASE) {
-            env->spr[SPR_BOOKE_CSRR0] = sregs.u.e.csrr0;
-            env->spr[SPR_BOOKE_CSRR1] = sregs.u.e.csrr1;
-            env->spr[SPR_BOOKE_ESR] = sregs.u.e.esr;
-            env->spr[SPR_BOOKE_DEAR] = sregs.u.e.dear;
-            env->spr[SPR_BOOKE_MCSR] = sregs.u.e.mcsr;
-            env->spr[SPR_BOOKE_TSR] = sregs.u.e.tsr;
-            env->spr[SPR_BOOKE_TCR] = sregs.u.e.tcr;
-            env->spr[SPR_DECR] = sregs.u.e.dec;
-            env->spr[SPR_TBL] = sregs.u.e.tb & 0xffffffff;
-            env->spr[SPR_TBU] = sregs.u.e.tb >> 32;
-            env->spr[SPR_VRSAVE] = sregs.u.e.vrsave;
-        }
-
-        if (sregs.u.e.features & KVM_SREGS_E_ARCH206) {
-            env->spr[SPR_BOOKE_PIR] = sregs.u.e.pir;
-            env->spr[SPR_BOOKE_MCSRR0] = sregs.u.e.mcsrr0;
-            env->spr[SPR_BOOKE_MCSRR1] = sregs.u.e.mcsrr1;
-            env->spr[SPR_BOOKE_DECAR] = sregs.u.e.decar;
-            env->spr[SPR_BOOKE_IVPR] = sregs.u.e.ivpr;
-        }
-
-        if (sregs.u.e.features & KVM_SREGS_E_64) {
-            env->spr[SPR_BOOKE_EPCR] = sregs.u.e.epcr;
-        }
-
-        if (sregs.u.e.features & KVM_SREGS_E_SPRG8) {
-            env->spr[SPR_BOOKE_SPRG8] = sregs.u.e.sprg8;
-        }
-
-        if (sregs.u.e.features & KVM_SREGS_E_IVOR) {
-            env->spr[SPR_BOOKE_IVOR0] = sregs.u.e.ivor_low[0];
-            kvm_sync_excp(env, POWERPC_EXCP_CRITICAL, SPR_BOOKE_IVOR0);
-            env->spr[SPR_BOOKE_IVOR1] = sregs.u.e.ivor_low[1];
-            kvm_sync_excp(env, POWERPC_EXCP_MCHECK, SPR_BOOKE_IVOR1);
-            env->spr[SPR_BOOKE_IVOR2] = sregs.u.e.ivor_low[2];
-            kvm_sync_excp(env, POWERPC_EXCP_DSI, SPR_BOOKE_IVOR2);
-            env->spr[SPR_BOOKE_IVOR3] = sregs.u.e.ivor_low[3];
-            kvm_sync_excp(env, POWERPC_EXCP_ISI, SPR_BOOKE_IVOR3);
-            env->spr[SPR_BOOKE_IVOR4] = sregs.u.e.ivor_low[4];
-            kvm_sync_excp(env, POWERPC_EXCP_EXTERNAL, SPR_BOOKE_IVOR4);
-            env->spr[SPR_BOOKE_IVOR5] = sregs.u.e.ivor_low[5];
-            kvm_sync_excp(env, POWERPC_EXCP_ALIGN, SPR_BOOKE_IVOR5);
-            env->spr[SPR_BOOKE_IVOR6] = sregs.u.e.ivor_low[6];
-            kvm_sync_excp(env, POWERPC_EXCP_PROGRAM, SPR_BOOKE_IVOR6);
-            env->spr[SPR_BOOKE_IVOR7] = sregs.u.e.ivor_low[7];
-            kvm_sync_excp(env, POWERPC_EXCP_FPU, SPR_BOOKE_IVOR7);
-            env->spr[SPR_BOOKE_IVOR8] = sregs.u.e.ivor_low[8];
-            kvm_sync_excp(env, POWERPC_EXCP_SYSCALL, SPR_BOOKE_IVOR8);
-            env->spr[SPR_BOOKE_IVOR9] = sregs.u.e.ivor_low[9];
-            kvm_sync_excp(env, POWERPC_EXCP_APU, SPR_BOOKE_IVOR9);
-            env->spr[SPR_BOOKE_IVOR10] = sregs.u.e.ivor_low[10];
-            kvm_sync_excp(env, POWERPC_EXCP_DECR, SPR_BOOKE_IVOR10);
-            env->spr[SPR_BOOKE_IVOR11] = sregs.u.e.ivor_low[11];
-            kvm_sync_excp(env, POWERPC_EXCP_FIT, SPR_BOOKE_IVOR11);
-            env->spr[SPR_BOOKE_IVOR12] = sregs.u.e.ivor_low[12];
-            kvm_sync_excp(env, POWERPC_EXCP_WDT, SPR_BOOKE_IVOR12);
-            env->spr[SPR_BOOKE_IVOR13] = sregs.u.e.ivor_low[13];
-            kvm_sync_excp(env, POWERPC_EXCP_DTLB, SPR_BOOKE_IVOR13);
-            env->spr[SPR_BOOKE_IVOR14] = sregs.u.e.ivor_low[14];
-            kvm_sync_excp(env, POWERPC_EXCP_ITLB, SPR_BOOKE_IVOR14);
-            env->spr[SPR_BOOKE_IVOR15] = sregs.u.e.ivor_low[15];
-            kvm_sync_excp(env, POWERPC_EXCP_DEBUG, SPR_BOOKE_IVOR15);
-
-            if (sregs.u.e.features & KVM_SREGS_E_SPE) {
-                env->spr[SPR_BOOKE_IVOR32] = sregs.u.e.ivor_high[0];
-                kvm_sync_excp(env, POWERPC_EXCP_SPEU, SPR_BOOKE_IVOR32);
-                env->spr[SPR_BOOKE_IVOR33] = sregs.u.e.ivor_high[1];
-                kvm_sync_excp(env, POWERPC_EXCP_EFPDI, SPR_BOOKE_IVOR33);
-                env->spr[SPR_BOOKE_IVOR34] = sregs.u.e.ivor_high[2];
-                kvm_sync_excp(env, POWERPC_EXCP_EFPRI, SPR_BOOKE_IVOR34);
-            }
-
-            if (sregs.u.e.features & KVM_SREGS_E_PM) {
-                env->spr[SPR_BOOKE_IVOR35] = sregs.u.e.ivor_high[3];
-                kvm_sync_excp(env, POWERPC_EXCP_EPERFM, SPR_BOOKE_IVOR35);
-            }
-
-            if (sregs.u.e.features & KVM_SREGS_E_PC) {
-                env->spr[SPR_BOOKE_IVOR36] = sregs.u.e.ivor_high[4];
-                kvm_sync_excp(env, POWERPC_EXCP_DOORI, SPR_BOOKE_IVOR36);
-                env->spr[SPR_BOOKE_IVOR37] = sregs.u.e.ivor_high[5];
-                kvm_sync_excp(env, POWERPC_EXCP_DOORCI, SPR_BOOKE_IVOR37);
-            }
-        }
-
-        if (sregs.u.e.features & KVM_SREGS_E_ARCH206_MMU) {
-            env->spr[SPR_BOOKE_MAS0] = sregs.u.e.mas0;
-            env->spr[SPR_BOOKE_MAS1] = sregs.u.e.mas1;
-            env->spr[SPR_BOOKE_MAS2] = sregs.u.e.mas2;
-            env->spr[SPR_BOOKE_MAS3] = sregs.u.e.mas7_3 & 0xffffffff;
-            env->spr[SPR_BOOKE_MAS4] = sregs.u.e.mas4;
-            env->spr[SPR_BOOKE_MAS6] = sregs.u.e.mas6;
-            env->spr[SPR_BOOKE_MAS7] = sregs.u.e.mas7_3 >> 32;
-            env->spr[SPR_MMUCFG] = sregs.u.e.mmucfg;
-            env->spr[SPR_BOOKE_TLB0CFG] = sregs.u.e.tlbcfg[0];
-            env->spr[SPR_BOOKE_TLB1CFG] = sregs.u.e.tlbcfg[1];
-        }
-
-        if (sregs.u.e.features & KVM_SREGS_EXP) {
-            env->spr[SPR_BOOKE_EPR] = sregs.u.e.epr;
-        }
-
-        if (sregs.u.e.features & KVM_SREGS_E_PD) {
-            env->spr[SPR_BOOKE_EPLC] = sregs.u.e.eplc;
-            env->spr[SPR_BOOKE_EPSC] = sregs.u.e.epsc;
-        }
-
-        if (sregs.u.e.impl_id == KVM_SREGS_E_IMPL_FSL) {
-            env->spr[SPR_E500_SVR] = sregs.u.e.impl.fsl.svr;
-            env->spr[SPR_Exxx_MCAR] = sregs.u.e.impl.fsl.mcar;
-            env->spr[SPR_HID0] = sregs.u.e.impl.fsl.hid0;
-
-            if (sregs.u.e.impl.fsl.features & KVM_SREGS_E_FSL_PIDn) {
-                env->spr[SPR_BOOKE_PID1] = sregs.u.e.impl.fsl.pid1;
-                env->spr[SPR_BOOKE_PID2] = sregs.u.e.impl.fsl.pid2;
-            }
-        }
     }
 
     if (cap_segstate) {
-        ret = kvm_vcpu_ioctl(cs, KVM_GET_SREGS, &sregs);
+        ret = kvmppc_get_books_sregs(cpu);
         if (ret < 0) {
             return ret;
         }
-
-        if (!env->external_htab) {
-            ppc_store_sdr1(env, sregs.u.s.sdr1);
-        }
-
-        /* Sync SLB */
-#ifdef TARGET_PPC64
-        /*
-         * The packed SLB array we get from KVM_GET_SREGS only contains
-         * information about valid entries. So we flush our internal
-         * copy to get rid of stale ones, then put all valid SLB entries
-         * back in.
-         */
-        memset(env->slb, 0, sizeof(env->slb));
-        for (i = 0; i < ARRAY_SIZE(env->slb); i++) {
-            target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
-            target_ulong rs = sregs.u.s.ppc64.slb[i].slbv;
-            /*
-             * Only restore valid entries
-             */
-            if (rb & SLB_ESID_V) {
-                ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfffULL, rs);
-            }
-        }
-#endif
-
-        /* Sync SRs */
-        for (i = 0; i < 16; i++) {
-            env->sr[i] = sregs.u.s.ppc32.sr[i];
-        }
-
-        /* Sync BATs */
-        for (i = 0; i < 8; i++) {
-            env->DBAT[0][i] = sregs.u.s.ppc32.dbat[i] & 0xffffffff;
-            env->DBAT[1][i] = sregs.u.s.ppc32.dbat[i] >> 32;
-            env->IBAT[0][i] = sregs.u.s.ppc32.ibat[i] & 0xffffffff;
-            env->IBAT[1][i] = sregs.u.s.ppc32.ibat[i] >> 32;
-        }
     }
 
     if (cap_hior) {