From patchwork Thu May 26 16:35:47 2016
From: Alvise Rigo <a.rigo@virtualopensystems.com>
To: mttcg@listserver.greensocs.com, alex.bennee@linaro.org
Cc: peter.maydell@linaro.org, rth@twiddle.net, claudio.fontana@huawei.com,
    qemu-devel@nongnu.org, cota@braap.org, serge.fdrv@gmail.com,
    pbonzini@redhat.com, jani.kokkonen@huawei.com,
    tech@virtualopensystems.com, fred.konrad@greensocs.com
Date: Thu, 26 May 2016 18:35:47 +0200
Message-Id: <20160526163549.3276-9-a.rigo@virtualopensystems.com>
In-Reply-To: <20160526163549.3276-1-a.rigo@virtualopensystems.com>
References: <20160526163549.3276-1-a.rigo@virtualopensystems.com>
Subject: [Qemu-devel] [RFC 08/10] cputlb: Query tlb_flush_page_by_mmuidx

As in the previous commit, make tlb_flush_page_by_mmuidx query the
flushes when the target is a different VCPU: the MMU indexes to flush
are gathered into a bitmap and the flush is posted to the target VCPU
as an asynchronous work item instead of being done in place.
Signed-off-by: Alvise Rigo <a.rigo@virtualopensystems.com>
---
 cputlb.c                | 90 ++++++++++++++++++++++++++++++++++---------------
 include/exec/exec-all.h |  5 +--
 target-arm/helper.c     | 35 ++++++++++---------
 3 files changed, 85 insertions(+), 45 deletions(-)

diff --git a/cputlb.c b/cputlb.c
index 73624d6..77a1997 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -157,6 +157,8 @@ static inline void tlb_tables_flush_bitmap(CPUState *cpu, unsigned long *bitmap)
 
 struct TLBFlushByMMUIdxParams {
     DECLARE_BITMAP(idx_to_flush, NB_MMU_MODES);
+    /* Used by tlb_flush_page_by_mmuidx */
+    target_ulong addr;
 };
 
 static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, void *opaque)
@@ -255,28 +257,13 @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
     tb_flush_jmp_cache(cpu, addr);
 }
 
-void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...)
+static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu, void *opaque)
 {
     CPUArchState *env = cpu->env_ptr;
-    int i, k;
-    va_list argp;
-
-    va_start(argp, addr);
-
-    tlb_debug("addr "TARGET_FMT_lx"\n", addr);
-
-    /* Check if we need to flush due to large pages.  */
-    if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
-        tlb_debug("forced full flush ("
-                  TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
-                  env->tlb_flush_addr, env->tlb_flush_mask);
+    struct TLBFlushByMMUIdxParams *params = opaque;
+    target_ulong addr = params->addr;
+    int mmu_idx, i;
 
-        /* Temporarily use current_cpu until tlb_flush_page_by_mmuidx
-         * is reworked */
-        tlb_flush_by_mmuidx(current_cpu, cpu, argp);
-        va_end(argp);
-        return;
-    }
     /* must reset current TB so that interrupts cannot modify the
        links while we are modifying them */
     cpu->current_tb = NULL;
@@ -284,6 +271,49 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...)
 
     addr &= TARGET_PAGE_MASK;
     i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
+        if (test_bit(mmu_idx, params->idx_to_flush)) {
+            int k;
+
+            tlb_debug("idx %d\n", mmu_idx);
+            tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
+            /* check whether there are vltb entries that need to be flushed */
+            for (k = 0; k < CPU_VTLB_SIZE; k++) {
+                tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
+            }
+        }
+    }
+
+    tb_flush_jmp_cache(cpu, addr);
+
+    g_free(params);
+}
+
+static void v_tlb_flush_page_by_mmuidx(CPUState *cpu, CPUState *target_cpu,
+                                       target_ulong addr, unsigned long *idxmap)
+{
+    if (!qemu_cpu_is_self(target_cpu)) {
+        struct TLBFlushByMMUIdxParams *params;
+
+        params = g_malloc(sizeof(struct TLBFlushByMMUIdxParams));
+        params->addr = addr;
+        memcpy(params->idx_to_flush, idxmap, MMUIDX_BITMAP_SIZE);
+        async_wait_run_on_cpu(target_cpu, cpu,
+                              tlb_flush_page_by_mmuidx_async_work, params);
+    } else {
+        tlb_tables_flush_bitmap(cpu, idxmap);
+    }
+}
+
+void tlb_flush_page_by_mmuidx(CPUState *cpu, CPUState *target,
+                              target_ulong addr, ...)
+{
+    DECLARE_BITMAP(idxmap, NB_MMU_MODES) = { 0 };
+    CPUArchState *env = target->env_ptr;
+    va_list argp;
+
+    va_start(argp, addr);
+
     for (;;) {
         int mmu_idx = va_arg(argp, int);
@@ -291,18 +321,24 @@ void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...)
             break;
         }
 
-        tlb_debug("idx %d\n", mmu_idx);
+        set_bit(mmu_idx, idxmap);
+    }
 
-        tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
+    va_end(argp);
 
-        /* check whether there are vltb entries that need to be flushed */
-        for (k = 0; k < CPU_VTLB_SIZE; k++) {
-            tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
-        }
+    tlb_debug("addr "TARGET_FMT_lx"\n", addr);
+
+    /* Check if we need to flush due to large pages.  */
+    if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
+        tlb_debug("forced full flush ("
+                  TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
+                  env->tlb_flush_addr, env->tlb_flush_mask);
+
+        v_tlb_flush_by_mmuidx(cpu, target, idxmap);
+        return;
     }
 
-    va_end(argp);
-    tb_flush_jmp_cache(cpu, addr);
+    v_tlb_flush_page_by_mmuidx(cpu, target, addr, idxmap);
 }
 
 static void tlb_flush_page_async_work(CPUState *cpu, void *opaque)
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 066870b..cb891d2 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -143,7 +143,8 @@ void tlb_flush(CPUState *cpu, int flush_global);
  * Flush one page from the TLB of the specified CPU, for the specified
  * MMU indexes.
  */
-void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...);
+void tlb_flush_page_by_mmuidx(CPUState *cpu, CPUState *target,
+                              target_ulong addr, ...);
 /**
  * tlb_flush_by_mmuidx:
  * @cpu: CPU whose TLB should be flushed
@@ -200,7 +201,7 @@ static inline void tlb_flush(CPUState *cpu, int flush_global)
 {
 }
 
-static inline void tlb_flush_page_by_mmuidx(CPUState *cpu,
+static inline void tlb_flush_page_by_mmuidx(CPUState *cpu, CPUState *target,
                                             target_ulong addr, ...)
 {
 }
diff --git a/target-arm/helper.c b/target-arm/helper.c
index 3dcd910..0187c0a 100644
--- a/target-arm/helper.c
+++ b/target-arm/helper.c
@@ -2869,10 +2869,10 @@ static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     if (arm_is_secure_below_el3(env)) {
-        tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S1SE1,
+        tlb_flush_page_by_mmuidx(cs, cs, pageaddr, ARMMMUIdx_S1SE1,
                                  ARMMMUIdx_S1SE0, -1);
     } else {
-        tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S12NSE1,
+        tlb_flush_page_by_mmuidx(cs, cs, pageaddr, ARMMMUIdx_S12NSE1,
                                  ARMMMUIdx_S12NSE0, -1);
     }
 }
@@ -2888,7 +2888,7 @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = CPU(cpu);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
-    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S1E2, -1);
+    tlb_flush_page_by_mmuidx(cs, cs, pageaddr, ARMMMUIdx_S1E2, -1);
 }
 
 static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -2902,23 +2902,23 @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = CPU(cpu);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
-    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S1E3, -1);
+    tlb_flush_page_by_mmuidx(cs, cs, pageaddr, ARMMMUIdx_S1E3, -1);
 }
 
 static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                    uint64_t value)
 {
     bool sec = arm_is_secure_below_el3(env);
-    CPUState *other_cs;
+    CPUState *other_cs, *this_cs = ENV_GET_CPU(env);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     CPU_FOREACH(other_cs) {
         if (sec) {
-            tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S1SE1,
-                                     ARMMMUIdx_S1SE0, -1);
+            tlb_flush_page_by_mmuidx(this_cs, other_cs, pageaddr,
+                                     ARMMMUIdx_S1SE1, ARMMMUIdx_S1SE0, -1);
         } else {
-            tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S12NSE1,
-                                     ARMMMUIdx_S12NSE0, -1);
+            tlb_flush_page_by_mmuidx(this_cs, other_cs, pageaddr,
+                                     ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, -1);
         }
     }
 }
@@ -2926,22 +2926,24 @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                    uint64_t value)
 {
-    CPUState *other_cs;
+    CPUState *other_cs, *this_cs = ENV_GET_CPU(env);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     CPU_FOREACH(other_cs) {
-        tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S1E2, -1);
+        tlb_flush_page_by_mmuidx(this_cs, other_cs, pageaddr,
+                                 ARMMMUIdx_S1E2, -1);
     }
 }
 
 static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                    uint64_t value)
 {
-    CPUState *other_cs;
+    CPUState *other_cs, *this_cs = ENV_GET_CPU(env);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     CPU_FOREACH(other_cs) {
-        tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S1E3, -1);
+        tlb_flush_page_by_mmuidx(this_cs, other_cs, pageaddr,
+                                 ARMMMUIdx_S1E3, -1);
     }
 }
@@ -2964,13 +2966,13 @@ static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     pageaddr = sextract64(value << 12, 0, 48);
 
-    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdx_S2NS, -1);
+    tlb_flush_page_by_mmuidx(cs, cs, pageaddr, ARMMMUIdx_S2NS, -1);
 }
 
 static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                       uint64_t value)
 {
-    CPUState *other_cs;
+    CPUState *other_cs, *this_cs = ENV_GET_CPU(env);
     uint64_t pageaddr;
 
     if (!arm_feature(env, ARM_FEATURE_EL2) || !(env->cp15.scr_el3 & SCR_NS)) {
@@ -2980,7 +2982,8 @@ static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 
     pageaddr = sextract64(value << 12, 0, 48);
 
     CPU_FOREACH(other_cs) {
-        tlb_flush_page_by_mmuidx(other_cs, pageaddr, ARMMMUIdx_S2NS, -1);
+        tlb_flush_page_by_mmuidx(this_cs, other_cs, pageaddr,
+                                 ARMMMUIdx_S2NS, -1);
     }
 }