From patchwork Tue Mar 3 03:43:45 2020
X-Patchwork-Submitter: David Gibson
X-Patchwork-Id: 11417141
From: David Gibson
To: qemu-ppc@nongnu.org, clg@kaod.org, qemu-devel@nongnu.org, groug@kaod.org
Subject: [PATCH v7 11/17] target/ppc: Don't store VRMA SLBE persistently
Date: Tue, 3 Mar 2020 14:43:45 +1100
Message-Id: <20200303034351.333043-12-david@gibson.dropbear.id.au>
In-Reply-To: <20200303034351.333043-1-david@gibson.dropbear.id.au>
References: <20200303034351.333043-1-david@gibson.dropbear.id.au>
Cc: lvivier@redhat.com, Thomas Huth, Xiao Guangrong, farosas@linux.ibm.com,
    aik@ozlabs.ru, "Michael S. Tsirkin", Mark Cave-Ayland, Paolo Bonzini,
    paulus@samba.org, "Edgar E. Iglesias", Igor Mammedov, David Gibson
Iglesias" , Igor Mammedov , David Gibson Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" Currently, we construct the SLBE used for VRMA translations when the LPCR is written (which controls some bits in the SLBE), then use it later for translations. This is a bit complex and confusing - simplify it by simply constructing the SLBE directly from the LPCR when we need it. Signed-off-by: David Gibson Reviewed-by: Greg Kurz Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Philippe Mathieu-Daudé --- target/ppc/cpu.h | 3 -- target/ppc/mmu-hash64.c | 92 ++++++++++++++++------------------------- 2 files changed, 35 insertions(+), 60 deletions(-) diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h index f9871b1233..5a55fb02bd 100644 --- a/target/ppc/cpu.h +++ b/target/ppc/cpu.h @@ -1044,9 +1044,6 @@ struct CPUPPCState { uint32_t flags; uint64_t insns_flags; uint64_t insns_flags2; -#if defined(TARGET_PPC64) - ppc_slb_t vrma_slb; -#endif int error_code; uint32_t pending_interrupts; diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c index 4fd7b7ee74..34f6009b1e 100644 --- a/target/ppc/mmu-hash64.c +++ b/target/ppc/mmu-hash64.c @@ -784,11 +784,41 @@ static target_ulong rmls_limit(PowerPCCPU *cpu) return rma_sizes[rmls]; } +static int build_vrma_slbe(PowerPCCPU *cpu, ppc_slb_t *slb) +{ + CPUPPCState *env = &cpu->env; + target_ulong lpcr = env->spr[SPR_LPCR]; + uint32_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT; + target_ulong vsid = SLB_VSID_VRMA | ((vrmasd << 4) & SLB_VSID_LLP_MASK); + int i; + + for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) { + const PPCHash64SegmentPageSizes *sps = &cpu->hash64_opts->sps[i]; + + if (!sps->page_shift) { + break; + } + + if ((vsid & SLB_VSID_LLP_MASK) == sps->slb_enc) { + slb->esid = SLB_ESID_V; + slb->vsid = vsid; + slb->sps = sps; + return 0; + } + } + + error_report("Bad page size encoding in LPCR[VRMASD]; LPCR=0x" + TARGET_FMT_lx"\n", lpcr); + + return -1; +} + int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx, int mmu_idx) { CPUState *cs = CPU(cpu); CPUPPCState *env = &cpu->env; + ppc_slb_t vrma_slbe; ppc_slb_t *slb; unsigned apshift; hwaddr ptex; @@ -827,8 +857,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, } } else if (ppc_hash64_use_vrma(env)) { /* Emulated VRMA mode */ - slb = &env->vrma_slb; - if (!slb->sps) { + slb = &vrma_slbe; + if (build_vrma_slbe(cpu, slb) != 0) { /* Invalid VRMA setup, machine check */ cs->exception_index = POWERPC_EXCP_MCHECK; env->error_code = 0; @@ -976,6 +1006,7 @@ skip_slb_search: hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr) { CPUPPCState *env = &cpu->env; + ppc_slb_t vrma_slbe; ppc_slb_t *slb; hwaddr ptex, raddr; ppc_hash_pte64_t pte; @@ -997,8 +1028,8 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr) return raddr | env->spr[SPR_HRMOR]; } else if (ppc_hash64_use_vrma(env)) { /* Emulated VRMA mode */ - slb = &env->vrma_slb; - if (!slb->sps) { + slb = &vrma_slbe; + if (build_vrma_slbe(cpu, slb) != 0) { return -1; } } else { @@ -1037,65 +1068,12 @@ void ppc_hash64_tlb_flush_hpte(PowerPCCPU *cpu, target_ulong ptex, cpu->env.tlb_need_flush = TLB_NEED_GLOBAL_FLUSH | TLB_NEED_LOCAL_FLUSH; } -static void ppc_hash64_update_vrma(PowerPCCPU *cpu) -{ - CPUPPCState *env = &cpu->env; - const PPCHash64SegmentPageSizes *sps = NULL; - target_ulong esid, vsid, lpcr; - ppc_slb_t *slb = &env->vrma_slb; - uint32_t vrmasd; - int i; - - /* First clear it */ - slb->esid = slb->vsid = 0; - 
-    slb->sps = NULL;
-
-    /* Is VRMA enabled ? */
-    if (!ppc_hash64_use_vrma(env)) {
-        return;
-    }
-
-    /*
-     * Make one up. Mostly ignore the ESID which will not be needed
-     * for translation
-     */
-    lpcr = env->spr[SPR_LPCR];
-    vsid = SLB_VSID_VRMA;
-    vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
-    vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
-    esid = SLB_ESID_V;
-
-    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
-        const PPCHash64SegmentPageSizes *sps1 = &cpu->hash64_opts->sps[i];
-
-        if (!sps1->page_shift) {
-            break;
-        }
-
-        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
-            sps = sps1;
-            break;
-        }
-    }
-
-    if (!sps) {
-        error_report("Bad page size encoding esid 0x"TARGET_FMT_lx
-                     " vsid 0x"TARGET_FMT_lx, esid, vsid);
-        return;
-    }
-
-    slb->vsid = vsid;
-    slb->esid = esid;
-    slb->sps = sps;
-}
-
 void ppc_store_lpcr(PowerPCCPU *cpu, target_ulong val)
 {
     PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
     CPUPPCState *env = &cpu->env;
 
     env->spr[SPR_LPCR] = val & pcc->lpcr_mask;
-    ppc_hash64_update_vrma(cpu);
 }
 
 void helper_store_lpcr(CPUPPCState *env, target_ulong val)
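
As a side note for readers less familiar with this code: the pattern the
patch moves to is "build the VRMA SLBE on the caller's stack from the
current LPCR and only use it if construction succeeds", instead of keeping
a copy in CPUPPCState that has to be refreshed on every LPCR store.  The
sketch below is a stand-alone illustration of that pattern only; the type,
constants and page-size encodings are simplified stand-ins, not the real
QEMU definitions (those live in target/ppc/cpu.h and mmu-hash64.c).

/* Stand-alone sketch of the on-demand VRMA SLBE pattern used above.
 * Everything here is a simplified stand-in for the QEMU types/constants. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t esid;
    uint64_t vsid;
    int page_shift;                 /* page size selected for the segment */
} slb_entry;

/* Stand-in table of supported LLP encodings (slb_enc -> page shift). */
static const struct { uint64_t slb_enc; int page_shift; } page_sizes[] = {
    { 0x000, 12 },                  /* 4 KiB */
    { 0x110, 16 },                  /* 64 KiB */
    { 0x100, 24 },                  /* 16 MiB */
};

#define LLP_MASK 0x130ULL           /* stand-in for SLB_VSID_L | SLB_VSID_LP */

/* Build the VRMA segment from a caller-supplied VRMASD field value; fail
 * if the encoding matches no supported page size.  This mirrors the shape
 * of build_vrma_slbe() above, minus the CPU state plumbing. */
static int build_vrma_slbe_demo(uint64_t vrmasd, slb_entry *slb)
{
    uint64_t vsid = (vrmasd << 4) & LLP_MASK;
    size_t i;

    for (i = 0; i < sizeof(page_sizes) / sizeof(page_sizes[0]); i++) {
        if (vsid == page_sizes[i].slb_enc) {
            slb->esid = 1;          /* "valid" marker, like SLB_ESID_V */
            slb->vsid = vsid;
            slb->page_shift = page_sizes[i].page_shift;
            return 0;
        }
    }
    return -1;                      /* bad VRMASD encoding */
}

int main(void)
{
    slb_entry vrma_slbe;            /* built on the stack, as in the patch */

    /* VRMASD = 0x10 selects the 16 MiB stand-in encoding above. */
    if (build_vrma_slbe_demo(0x10, &vrma_slbe) != 0) {
        fprintf(stderr, "invalid VRMA setup\n");
        return 1;
    }
    printf("selected page shift: %d\n", vrma_slbe.page_shift);
    return 0;
}

In the real code the equivalent failure path raises a machine check in
ppc_hash64_handle_mmu_fault() and returns -1 in the debug lookup path, as
the diff above shows.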