From patchwork Mon Jan 25 05:15:47 2016
X-Patchwork-Submitter: David Gibson
X-Patchwork-Id: 8103681
From: David Gibson
To: benh@kernel.crashing.org
Cc: lvivier@redhat.com, thuth@redhat.com, aik@ozlabs.ru, agraf@suse.de, qemu-devel@nongnu.org, qemu-ppc@nongnu.org, David Gibson
Date: Mon, 25 Jan 2016 16:15:47 +1100
Message-Id: <1453698952-32092-6-git-send-email-david@gibson.dropbear.id.au>
In-Reply-To: <1453698952-32092-1-git-send-email-david@gibson.dropbear.id.au>
References: <1453698952-32092-1-git-send-email-david@gibson.dropbear.id.au>
X-Mailer: git-send-email 2.5.0
Subject: [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE

At present the 64-bit hash MMU code uses information from the SLB to
determine the page size of a translation.  We do need that information to
correctly look up the hash table.  However, the MMU also allows a possibly
larger page size to be encoded into the HPTE itself, which is used to
populate the TLB.  At present qemu doesn't check that, and so doesn't
support the MPSS "Multiple Page Size per Segment" feature.

This makes a start on allowing this, by adding an hpte_page_shift()
function which looks up the page size of an HPTE.  We use this to validate
page size encodings on faults, and to populate the qemu TLB with larger
page sizes when appropriate.

Signed-off-by: David Gibson
---
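A minimal standalone sketch of the lookup hpte_page_shift() performs, for
reference while reviewing: check the L bit, then match the low bits of the
RPN field against each actual page size allowed for the segment.  The
struct layouts, the SKETCH_* constants and the 64kiB encoding value below
are simplified, hypothetical stand-ins rather than the definitions from
mmu-hash64.h or the KVM page-size tables.

/* Sketch only: structures, masks and penc values are illustrative
 * stand-ins, not QEMU's or the hardware's definitions. */
#include <stdint.h>
#include <stdio.h>

#define SKETCH_HPTE_V_LARGE   0x4ULL                 /* L bit in pte0 */
#define SKETCH_HPTE_R_RPN     0x0ffffffffffff000ULL  /* RPN field of pte1 */
#define SKETCH_MAX_PAGE_SIZES 8

struct sketch_page_size {
    unsigned page_shift;   /* actual page size, log2 */
    uint64_t pte_enc;      /* expected bits in the low part of the RPN field */
};

struct sketch_seg_page_size {
    unsigned page_shift;                                /* base (SLB) page size, log2 */
    struct sketch_page_size enc[SKETCH_MAX_PAGE_SIZES]; /* allowed actual sizes */
};

/* Returns the actual page shift for (pte0, pte1), or 0 for a bad encoding,
 * mirroring the shape of hpte_page_shift() in the patch below. */
static unsigned sketch_hpte_page_shift(const struct sketch_seg_page_size *sps,
                                       uint64_t pte0, uint64_t pte1)
{
    if (!(pte0 & SKETCH_HPTE_V_LARGE)) {
        return (sps->page_shift == 12) ? 12 : 0;   /* plain 4kiB PTE */
    }
    for (int i = 0; i < SKETCH_MAX_PAGE_SIZES && sps->enc[i].page_shift; i++) {
        const struct sketch_page_size *ps = &sps->enc[i];
        if (ps->page_shift == 12) {
            continue;                              /* L=1 cannot be 4kiB */
        }
        /* Only the bits of the RPN field below the actual page shift can
         * carry the encoding; everything above is real page number. */
        uint64_t mask = ((1ULL << ps->page_shift) - 1) & SKETCH_HPTE_R_RPN;
        if ((pte1 & mask) == ps->pte_enc) {
            return ps->page_shift;
        }
    }
    return 0;
}

int main(void)
{
    /* Hypothetical 64kiB segment allowing 64kiB actual pages, with a
     * made-up encoding of 0x1000 in the low RPN bits. */
    struct sketch_seg_page_size seg64k = {
        .page_shift = 16,
        .enc = { { .page_shift = 16, .pte_enc = 0x1000 } },
    };
    uint64_t pte0 = SKETCH_HPTE_V_LARGE;
    uint64_t pte1 = 0x0000000120001000ULL;  /* RPN plus the 64kiB encoding */

    printf("actual page shift = %u\n",
           sketch_hpte_page_shift(&seg64k, pte0, pte1));  /* prints 16 */
    return 0;
}

Since only the bits between bit 12 and the actual page shift are free to
carry an encoding, a 4kiB page (L clear) has no room for one and is
handled as a special case.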
 target-ppc/mmu-hash64.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 70 insertions(+), 4 deletions(-)

diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
index 28ad361..bcad826 100644
--- a/target-ppc/mmu-hash64.c
+++ b/target-ppc/mmu-hash64.c
@@ -21,6 +21,7 @@
 #include "exec/helper-proto.h"
 #include "qemu/error-report.h"
 #include "sysemu/kvm.h"
+#include "qemu/error-report.h"
 #include "kvm_ppc.h"
 #include "mmu-hash64.h"
 
@@ -474,6 +475,43 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
     return pte_offset;
 }
 
+static unsigned hpte_page_shift(const struct ppc_one_seg_page_size *sps,
+                                uint64_t pte0, uint64_t pte1)
+{
+    int i;
+
+    if (!(pte0 & HPTE64_V_LARGE)) {
+        if (sps->page_shift != 12) {
+            /* 4kiB page in a non 4kiB segment */
+            return 0;
+        }
+        /* Normal 4kiB page */
+        return 12;
+    }
+
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        const struct ppc_one_page_size *ps = &sps->enc[i];
+        uint64_t mask;
+
+        if (!ps->page_shift) {
+            break;
+        }
+
+        if (ps->page_shift == 12) {
+            /* L bit is set so this can't be a 4kiB page */
+            continue;
+        }
+
+        mask = ((1ULL << ps->page_shift) - 1) & HPTE64_R_RPN;
+
+        if ((pte1 & mask) == ps->pte_enc) {
+            return ps->page_shift;
+        }
+    }
+
+    return 0; /* Bad page size encoding */
+}
+
 static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
                                    target_ulong eaddr)
 {
@@ -489,6 +527,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
     ppc_slb_t *slb;
+    unsigned apshift;
     hwaddr pte_offset;
     ppc_hash_pte64_t pte;
     int pp_prot, amr_prot, prot;
@@ -552,6 +591,28 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
     qemu_log_mask(CPU_LOG_MMU,
                 "found PTE at offset %08" HWADDR_PRIx "\n", pte_offset);
 
+    /* Validate page size encoding */
+    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
+    if (!apshift) {
+        error_report("Bad page size encoding in HPTE 0x%"PRIx64" - 0x%"PRIx64
+                     " @ 0x%"HWADDR_PRIx, pte.pte0, pte.pte1, pte_offset);
+        /* Treat it like a hash miss for the guest */
+        if (rwx == 2) {
+            cs->exception_index = POWERPC_EXCP_ISI;
+            env->error_code = 0x40000000;
+        } else {
+            cs->exception_index = POWERPC_EXCP_DSI;
+            env->error_code = 0;
+            env->spr[SPR_DAR] = eaddr;
+            if (rwx == 1) {
+                env->spr[SPR_DSISR] = 0x42000000;
+            } else {
+                env->spr[SPR_DSISR] = 0x40000000;
+            }
+        }
+        return 1;
+    }
+
     /* 5. Check access permissions */
 
     pp_prot = ppc_hash64_pte_prot(cpu, slb, pte);
@@ -604,10 +665,10 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, target_ulong eaddr,
 
     /* 7. Determine the real address from the PTE */
 
-    raddr = ppc_hash64_pte_raddr(slb->sps->page_shift, pte, eaddr);
+    raddr = ppc_hash64_pte_raddr(apshift, pte, eaddr);
 
     tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
-                 prot, mmu_idx, TARGET_PAGE_SIZE);
+                 prot, mmu_idx, 1ULL << apshift);
 
     return 0;
 }
@@ -618,6 +679,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
     ppc_slb_t *slb;
     hwaddr pte_offset;
     ppc_hash_pte64_t pte;
+    unsigned apshift;
 
     if (msr_dr == 0) {
         /* In real mode the top 4 effective address bits are ignored */
@@ -634,8 +696,12 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
         return -1;
     }
 
-    return ppc_hash64_pte_raddr(slb->sps->page_shift, pte, addr)
-        & TARGET_PAGE_MASK;
+    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
+    if (!apshift) {
+        return -1;
+    }
+
+    return ppc_hash64_pte_raddr(apshift, pte, addr) & TARGET_PAGE_MASK;
 }
 
 void ppc_hash64_store_hpte(PowerPCCPU *cpu,
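For completeness, a small self-contained sketch of the "treat it like a
hash miss" fallback used when the encoding is bad: rwx selects an ISI for
instruction fetches and a DSI otherwise, with the same status values the
patch stores (0x40000000 for a missing translation, plus 0x02000000 for a
store).  The enum and struct are illustrative stand-ins for the
CPUState/CPUPPCState fields, not QEMU's definitions.

/* Sketch only: types and fields are stand-ins for QEMU's CPU state. */
#include <stdint.h>
#include <stdio.h>

enum sketch_excp { SKETCH_EXCP_DSI, SKETCH_EXCP_ISI };

struct sketch_fault {
    enum sketch_excp exception;
    uint32_t error_code;   /* status reported for an ISI */
    uint64_t dar;          /* faulting address for a DSI */
    uint32_t dsisr;        /* DSI status */
};

/* rwx: 0 = load, 1 = store, 2 = instruction fetch, as in the fault handler. */
static struct sketch_fault sketch_hash_miss(int rwx, uint64_t eaddr)
{
    struct sketch_fault f = {0};

    if (rwx == 2) {
        f.exception = SKETCH_EXCP_ISI;
        f.error_code = 0x40000000;                      /* no translation */
    } else {
        f.exception = SKETCH_EXCP_DSI;
        f.dar = eaddr;
        f.dsisr = (rwx == 1) ? 0x42000000 : 0x40000000; /* store adds 0x02000000 */
    }
    return f;
}

int main(void)
{
    struct sketch_fault f = sketch_hash_miss(1, 0xdeadb000ULL);
    printf("DSI, DAR=0x%llx, DSISR=0x%08x\n",
           (unsigned long long)f.dar, f.dsisr);
    return 0;
}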