From patchwork Mon Feb 24 23:37:10 2020
X-Patchwork-Submitter: David Gibson
X-Patchwork-Id: 11402163
From: David Gibson
To: groug@kaod.org, qemu-ppc@nongnu.org, qemu-devel@nongnu.org, clg@kaod.org
Cc: lvivier@redhat.com, Thomas Huth, Xiao Guangrong, "Michael S. Tsirkin",
 aik@ozlabs.ru, farosas@linux.ibm.com, Mark Cave-Ayland, Igor Mammedov,
 paulus@samba.org, Paolo Bonzini, "Edgar E. Iglesias", David Gibson
Subject: [PATCH v6 04/18] target/ppc: Correct handling of real mode accesses with vhyp on hash MMU
Date: Tue, 25 Feb 2020 10:37:10 +1100
Message-Id: <20200224233724.46415-5-david@gibson.dropbear.id.au>
In-Reply-To: <20200224233724.46415-1-david@gibson.dropbear.id.au>
References: <20200224233724.46415-1-david@gibson.dropbear.id.au>
Iglesias" , David Gibson Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" On ppc we have the concept of virtual hypervisor ("vhyp") mode, where we only model the non-hypervisor-privileged parts of the cpu. Essentially we model the hypervisor's behaviour from the point of view of a guest OS, but we don't model the hypervisor's execution. In particular, in this mode, qemu's notion of target physical address is a guest physical address from the vcpu's point of view. So accesses in guest real mode don't require translation. If we were modelling the hypervisor mode, we'd need to translate the guest physical address into a host physical address. Currently, we handle this sloppily: we rely on setting up the virtual LPCR and RMOR registers so that GPAs are simply HPAs plus an offset, which we set to zero. This is already conceptually dubious, since the LPCR and RMOR registers don't exist in the non-hypervisor portion of the CPU. It gets worse with POWER9, where RMOR and LPCR[VPM0] no longer exist at all. Clean this up by explicitly handling the vhyp case. While we're there, remove some unnecessary nesting of if statements that made the logic to select the correct real mode behaviour a bit less clear than it could be. Signed-off-by: David Gibson Reviewed-by: Cédric Le Goater Reviewed-by: Greg Kurz --- target/ppc/mmu-hash64.c | 60 ++++++++++++++++++++++++----------------- 1 file changed, 35 insertions(+), 25 deletions(-) diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c index 3e0be4d55f..392f90e0ae 100644 --- a/target/ppc/mmu-hash64.c +++ b/target/ppc/mmu-hash64.c @@ -789,27 +789,30 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, */ raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL; - /* In HV mode, add HRMOR if top EA bit is clear */ - if (msr_hv || !env->has_hv_mode) { + if (cpu->vhyp) { + /* + * In virtual hypervisor mode, there's nothing to do: + * EA == GPA == qemu guest address + */ + } else if (msr_hv || !env->has_hv_mode) { + /* In HV mode, add HRMOR if top EA bit is clear */ if (!(eaddr >> 63)) { raddr |= env->spr[SPR_HRMOR]; } - } else { - /* Otherwise, check VPM for RMA vs VRMA */ - if (env->spr[SPR_LPCR] & LPCR_VPM0) { - slb = &env->vrma_slb; - if (slb->sps) { - goto skip_slb_search; - } - /* Not much else to do here */ + } else if (env->spr[SPR_LPCR] & LPCR_VPM0) { + /* Emulated VRMA mode */ + slb = &env->vrma_slb; + if (!slb->sps) { + /* Invalid VRMA setup, machine check */ cs->exception_index = POWERPC_EXCP_MCHECK; env->error_code = 0; return 1; - } else if (raddr < env->rmls) { - /* RMA. 
 target/ppc/mmu-hash64.c | 60 ++++++++++++++++++++++++-----------------
 1 file changed, 35 insertions(+), 25 deletions(-)

diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 3e0be4d55f..392f90e0ae 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -789,27 +789,30 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
          */
         raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
 
-        /* In HV mode, add HRMOR if top EA bit is clear */
-        if (msr_hv || !env->has_hv_mode) {
+        if (cpu->vhyp) {
+            /*
+             * In virtual hypervisor mode, there's nothing to do:
+             *   EA == GPA == qemu guest address
+             */
+        } else if (msr_hv || !env->has_hv_mode) {
+            /* In HV mode, add HRMOR if top EA bit is clear */
             if (!(eaddr >> 63)) {
                 raddr |= env->spr[SPR_HRMOR];
             }
-        } else {
-            /* Otherwise, check VPM for RMA vs VRMA */
-            if (env->spr[SPR_LPCR] & LPCR_VPM0) {
-                slb = &env->vrma_slb;
-                if (slb->sps) {
-                    goto skip_slb_search;
-                }
-                /* Not much else to do here */
+        } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+            /* Emulated VRMA mode */
+            slb = &env->vrma_slb;
+            if (!slb->sps) {
+                /* Invalid VRMA setup, machine check */
                 cs->exception_index = POWERPC_EXCP_MCHECK;
                 env->error_code = 0;
                 return 1;
-            } else if (raddr < env->rmls) {
-                /* RMA. Check bounds in RMLS */
-                raddr |= env->spr[SPR_RMOR];
-            } else {
-                /* The access failed, generate the approriate interrupt */
+            }
+
+            goto skip_slb_search;
+        } else {
+            /* Emulated old-style RMO mode, bounds check against RMLS */
+            if (raddr >= env->rmls) {
                 if (rwx == 2) {
                     ppc_hash64_set_isi(cs, SRR1_PROTFAULT);
                 } else {
@@ -821,6 +824,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
                 }
                 return 1;
             }
+
+            raddr |= env->spr[SPR_RMOR];
         }
         tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
                      PAGE_READ | PAGE_WRITE | PAGE_EXEC, mmu_idx,
@@ -953,22 +958,27 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
         /* In real mode the top 4 effective address bits are ignored */
         raddr = addr & 0x0FFFFFFFFFFFFFFFULL;
 
-        /* In HV mode, add HRMOR if top EA bit is clear */
-        if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
+        if (cpu->vhyp) {
+            /*
+             * In virtual hypervisor mode, there's nothing to do:
+             *   EA == GPA == qemu guest address
+             */
+            return raddr;
+        } else if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
+            /* In HV mode, add HRMOR if top EA bit is clear */
             return raddr | env->spr[SPR_HRMOR];
-        }
-
-        /* Otherwise, check VPM for RMA vs VRMA */
-        if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+        } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+            /* Emulated VRMA mode */
             slb = &env->vrma_slb;
             if (!slb->sps) {
                 return -1;
             }
-        } else if (raddr < env->rmls) {
-            /* RMA. Check bounds in RMLS */
-            return raddr | env->spr[SPR_RMOR];
         } else {
-            return -1;
+            /* Emulated old-style RMO mode, bounds check against RMLS */
+            if (raddr >= env->rmls) {
+                return -1;
+            }
+            return raddr | env->spr[SPR_RMOR];
         }
     } else {
         slb = slb_lookup(cpu, addr);