From patchwork Fri May 10 20:00:57 2019
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 10939515
From: Alex Bennée
To: peter.maydell@linaro.org
Date: Fri, 10 May 2019 21:00:57 +0100
Message-Id: <20190510200101.31096-2-alex.bennee@linaro.org>
In-Reply-To: <20190510200101.31096-1-alex.bennee@linaro.org>
References: <20190510200101.31096-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PULL 1/5] accel/tcg: demacro cputlb
Cc: Mark Cave-Ayland, Paolo Bonzini, Alex Bennée, qemu-devel@nongnu.org, Richard Henderson

Instead of expanding a series of macros to generate the load/store helpers we move stuff into common functions and rely on the compiler to eliminate the dead code for each variant.

Signed-off-by: Alex Bennée
Tested-by: Mark Cave-Ayland

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index f2f618217d6..12f21865ee7 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1168,26 +1168,421 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr, } #ifdef TARGET_WORDS_BIGENDIAN -# define TGT_BE(X) (X) -# define TGT_LE(X) BSWAP(X) +#define NEED_BE_BSWAP 0 +#define NEED_LE_BSWAP 1 #else -# define TGT_BE(X) BSWAP(X) -# define TGT_LE(X) (X) +#define NEED_BE_BSWAP 1 +#define NEED_LE_BSWAP 0 #endif -#define MMUSUFFIX _mmu +/* + * Byte Swap Helper + * + * This should all dead code away depending on the build host and + * access type. + */ -#define DATA_SIZE 1 -#include "softmmu_template.h" +static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian) +{ + if ((big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP)) { + switch (size) { + case 1: return val; + case 2: return bswap16(val); + case 4: return bswap32(val); + case 8: return bswap64(val); + default: + g_assert_not_reached(); + } + } else { + return val; + } +} -#define DATA_SIZE 2 -#include "softmmu_template.h" +/* + * Load Helpers + * + * We support two different access types. SOFTMMU_CODE_ACCESS is + * specifically for reading instructions from system memory. It is + * called by the translation loop and in some helpers where the code + * is disassembled. It shouldn't be called directly by guest code. + */ -#define DATA_SIZE 4 -#include "softmmu_template.h" +static uint64_t load_helper(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr, + size_t size, bool big_endian, + bool code_read) +{ + uintptr_t mmu_idx = get_mmuidx(oi); + uintptr_t index = tlb_index(env, mmu_idx, addr); + CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr); + target_ulong tlb_addr = code_read ?
entry->addr_code : entry->addr_read; + const size_t tlb_off = code_read ? + offsetof(CPUTLBEntry, addr_code) : offsetof(CPUTLBEntry, addr_read); + unsigned a_bits = get_alignment_bits(get_memop(oi)); + void *haddr; + uint64_t res; + + /* Handle CPU specific unaligned behaviour */ + if (addr & ((1 << a_bits) - 1)) { + cpu_unaligned_access(ENV_GET_CPU(env), addr, + code_read ? MMU_INST_FETCH : MMU_DATA_LOAD, + mmu_idx, retaddr); + } -#define DATA_SIZE 8 -#include "softmmu_template.h" + /* If the TLB entry is for a different page, reload and try again. */ + if (!tlb_hit(tlb_addr, addr)) { + if (!victim_tlb_hit(env, mmu_idx, index, tlb_off, + addr & TARGET_PAGE_MASK)) { + tlb_fill(ENV_GET_CPU(env), addr, size, + code_read ? MMU_INST_FETCH : MMU_DATA_LOAD, + mmu_idx, retaddr); + index = tlb_index(env, mmu_idx, addr); + entry = tlb_entry(env, mmu_idx, addr); + } + tlb_addr = code_read ? entry->addr_code : entry->addr_read; + } + + /* Handle an IO access. */ + if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { + CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index]; + uint64_t tmp; + + if ((addr & (size - 1)) != 0) { + goto do_unaligned_access; + } + + tmp = io_readx(env, iotlbentry, mmu_idx, addr, retaddr, + tlb_addr & TLB_RECHECK, + code_read ? MMU_INST_FETCH : MMU_DATA_LOAD, size); + return handle_bswap(tmp, size, big_endian); + } + + /* Handle slow unaligned access (it spans two pages or IO). */ + if (size > 1 + && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1 + >= TARGET_PAGE_SIZE)) { + target_ulong addr1, addr2; + tcg_target_ulong r1, r2; + unsigned shift; + do_unaligned_access: + addr1 = addr & ~(size - 1); + addr2 = addr1 + size; + r1 = load_helper(env, addr1, oi, retaddr, size, big_endian, code_read); + r2 = load_helper(env, addr2, oi, retaddr, size, big_endian, code_read); + shift = (addr & (size - 1)) * 8; + + if (big_endian) { + /* Big-endian combine. */ + res = (r1 << shift) | (r2 >> ((size * 8) - shift)); + } else { + /* Little-endian combine. */ + res = (r1 >> shift) | (r2 << ((size * 8) - shift)); + } + return res & MAKE_64BIT_MASK(0, size * 8); + } + + haddr = (void *)((uintptr_t)addr + entry->addend); + + switch (size) { + case 1: + res = ldub_p(haddr); + break; + case 2: + if (big_endian) { + res = lduw_be_p(haddr); + } else { + res = lduw_le_p(haddr); + } + break; + case 4: + if (big_endian) { + res = (uint32_t)ldl_be_p(haddr); + } else { + res = (uint32_t)ldl_le_p(haddr); + } + break; + case 8: + if (big_endian) { + res = ldq_be_p(haddr); + } else { + res = ldq_le_p(haddr); + } + break; + default: + g_assert_not_reached(); + } + + return res; +} + +/* + * For the benefit of TCG generated code, we want to avoid the + * complication of ABI-specific return type promotion and always + * return a value extended to the register size of the host. This is + * tcg_target_long, except in the case of a 32-bit host and 64-bit + * data, and for that we always have uint64_t. + * + * We don't bother with this widened value for SOFTMMU_CODE_ACCESS. 
+ */ + +tcg_target_ulong __attribute__((flatten)) +helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 1, false, false); +} + +tcg_target_ulong __attribute__((flatten)) +helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 2, false, false); +} + +tcg_target_ulong __attribute__((flatten)) +helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 2, true, false); +} + +tcg_target_ulong __attribute__((flatten)) +helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 4, false, false); +} + +tcg_target_ulong __attribute__((flatten)) +helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 4, true, false); +} + +uint64_t __attribute__((flatten)) +helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 8, false, false); +} + +uint64_t __attribute__((flatten)) +helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 8, true, false); +} + +/* + * Provide signed versions of the load routines as well. We can of course + * avoid this for 64-bit data, or for 32-bit data on 32-bit host. + */ + + +tcg_target_ulong helper_ret_ldsb_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + return (int8_t)helper_ret_ldub_mmu(env, addr, oi, retaddr); +} + +tcg_target_ulong helper_le_ldsw_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + return (int16_t)helper_le_lduw_mmu(env, addr, oi, retaddr); +} + +tcg_target_ulong helper_be_ldsw_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + return (int16_t)helper_be_lduw_mmu(env, addr, oi, retaddr); +} + +tcg_target_ulong helper_le_ldsl_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + return (int32_t)helper_le_ldul_mmu(env, addr, oi, retaddr); +} + +tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + return (int32_t)helper_be_ldul_mmu(env, addr, oi, retaddr); +} + +/* + * Store Helpers + */ + +static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val, + TCGMemOpIdx oi, uintptr_t retaddr, size_t size, + bool big_endian) +{ + uintptr_t mmu_idx = get_mmuidx(oi); + uintptr_t index = tlb_index(env, mmu_idx, addr); + CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr); + target_ulong tlb_addr = tlb_addr_write(entry); + const size_t tlb_off = offsetof(CPUTLBEntry, addr_write); + unsigned a_bits = get_alignment_bits(get_memop(oi)); + void *haddr; + + /* Handle CPU specific unaligned behaviour */ + if (addr & ((1 << a_bits) - 1)) { + cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE, + mmu_idx, retaddr); + } + + /* If the TLB entry is for a different page, reload and try again. 
*/ + if (!tlb_hit(tlb_addr, addr)) { + if (!victim_tlb_hit(env, mmu_idx, index, tlb_off, + addr & TARGET_PAGE_MASK)) { + tlb_fill(ENV_GET_CPU(env), addr, size, MMU_DATA_STORE, + mmu_idx, retaddr); + index = tlb_index(env, mmu_idx, addr); + entry = tlb_entry(env, mmu_idx, addr); + } + tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK; + } + + /* Handle an IO access. */ + if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { + CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index]; + + if ((addr & (size - 1)) != 0) { + goto do_unaligned_access; + } + + io_writex(env, iotlbentry, mmu_idx, + handle_bswap(val, size, big_endian), + addr, retaddr, tlb_addr & TLB_RECHECK, size); + return; + } + + /* Handle slow unaligned access (it spans two pages or IO). */ + if (size > 1 + && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1 + >= TARGET_PAGE_SIZE)) { + int i; + uintptr_t index2; + CPUTLBEntry *entry2; + target_ulong page2, tlb_addr2; + do_unaligned_access: + /* + * Ensure the second page is in the TLB. Note that the first page + * is already guaranteed to be filled, and that the second page + * cannot evict the first. + */ + page2 = (addr + size) & TARGET_PAGE_MASK; + index2 = tlb_index(env, mmu_idx, page2); + entry2 = tlb_entry(env, mmu_idx, page2); + tlb_addr2 = tlb_addr_write(entry2); + if (!tlb_hit_page(tlb_addr2, page2) + && !victim_tlb_hit(env, mmu_idx, index2, tlb_off, + page2 & TARGET_PAGE_MASK)) { + tlb_fill(ENV_GET_CPU(env), page2, size, MMU_DATA_STORE, + mmu_idx, retaddr); + } + + /* + * XXX: not efficient, but simple. + * This loop must go in the forward direction to avoid issues + * with self-modifying code in Windows 64-bit. + */ + for (i = 0; i < size; ++i) { + uint8_t val8; + if (big_endian) { + /* Big-endian extract. */ + val8 = val >> (((size - 1) * 8) - (i * 8)); + } else { + /* Little-endian extract. 
*/ + val8 = val >> (i * 8); + } + store_helper(env, addr + i, val8, oi, retaddr, 1, big_endian); + } + return; + } + + haddr = (void *)((uintptr_t)addr + entry->addend); + + switch (size) { + case 1: + stb_p(haddr, val); + break; + case 2: + if (big_endian) { + stw_be_p(haddr, val); + } else { + stw_le_p(haddr, val); + } + break; + case 4: + if (big_endian) { + stl_be_p(haddr, val); + } else { + stl_le_p(haddr, val); + } + break; + case 8: + if (big_endian) { + stq_be_p(haddr, val); + } else { + stq_le_p(haddr, val); + } + break; + default: + g_assert_not_reached(); + break; + } +} + +void __attribute__((flatten)) +helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + store_helper(env, addr, val, oi, retaddr, 1, false); +} + +void __attribute__((flatten)) +helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + store_helper(env, addr, val, oi, retaddr, 2, false); +} + +void __attribute__((flatten)) +helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + store_helper(env, addr, val, oi, retaddr, 2, true); +} + +void __attribute__((flatten)) +helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + store_helper(env, addr, val, oi, retaddr, 4, false); +} + +void __attribute__((flatten)) +helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + store_helper(env, addr, val, oi, retaddr, 4, true); +} + +void __attribute__((flatten)) +helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + store_helper(env, addr, val, oi, retaddr, 8, false); +} + +void __attribute__((flatten)) +helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + TCGMemOpIdx oi, uintptr_t retaddr) +{ + store_helper(env, addr, val, oi, retaddr, 8, true); +} /* First set of helpers allows passing in of OI and RETADDR. This makes them callable from other helpers. */ @@ -1248,20 +1643,51 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr, /* Code access functions. 
*/ -#undef MMUSUFFIX -#define MMUSUFFIX _cmmu -#undef GETPC -#define GETPC() ((uintptr_t)0) -#define SOFTMMU_CODE_ACCESS +uint8_t __attribute__((flatten)) +helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 1, false, true); +} -#define DATA_SIZE 1 -#include "softmmu_template.h" +uint16_t __attribute__((flatten)) +helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 2, false, true); +} -#define DATA_SIZE 2 -#include "softmmu_template.h" +uint16_t __attribute__((flatten)) +helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 2, true, true); +} -#define DATA_SIZE 4 -#include "softmmu_template.h" +uint32_t __attribute__((flatten)) +helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 4, false, true); +} -#define DATA_SIZE 8 -#include "softmmu_template.h" +uint32_t __attribute__((flatten)) +helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 4, true, true); +} + +uint64_t __attribute__((flatten)) +helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 8, false, true); +} + +uint64_t __attribute__((flatten)) +helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, + uintptr_t retaddr) +{ + return load_helper(env, addr, oi, retaddr, 8, true, true); +} diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h deleted file mode 100644 index e970a8b378e..00000000000 --- a/accel/tcg/softmmu_template.h +++ /dev/null @@ -1,454 +0,0 @@ -/* - * Software MMU support - * - * Generate helpers used by TCG for qemu_ld/st ops and code load - * functions. - * - * Included from target op helpers and exec.c. - * - * Copyright (c) 2003 Fabrice Bellard - * - * This library is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * This library is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with this library; if not, see . - */ -#if DATA_SIZE == 8 -#define SUFFIX q -#define LSUFFIX q -#define SDATA_TYPE int64_t -#define DATA_TYPE uint64_t -#elif DATA_SIZE == 4 -#define SUFFIX l -#define LSUFFIX l -#define SDATA_TYPE int32_t -#define DATA_TYPE uint32_t -#elif DATA_SIZE == 2 -#define SUFFIX w -#define LSUFFIX uw -#define SDATA_TYPE int16_t -#define DATA_TYPE uint16_t -#elif DATA_SIZE == 1 -#define SUFFIX b -#define LSUFFIX ub -#define SDATA_TYPE int8_t -#define DATA_TYPE uint8_t -#else -#error unsupported data size -#endif - - -/* For the benefit of TCG generated code, we want to avoid the complication - of ABI-specific return type promotion and always return a value extended - to the register size of the host. 
This is tcg_target_long, except in the - case of a 32-bit host and 64-bit data, and for that we always have - uint64_t. Don't bother with this widened value for SOFTMMU_CODE_ACCESS. */ -#if defined(SOFTMMU_CODE_ACCESS) || DATA_SIZE == 8 -# define WORD_TYPE DATA_TYPE -# define USUFFIX SUFFIX -#else -# define WORD_TYPE tcg_target_ulong -# define USUFFIX glue(u, SUFFIX) -# define SSUFFIX glue(s, SUFFIX) -#endif - -#ifdef SOFTMMU_CODE_ACCESS -#define READ_ACCESS_TYPE MMU_INST_FETCH -#define ADDR_READ addr_code -#else -#define READ_ACCESS_TYPE MMU_DATA_LOAD -#define ADDR_READ addr_read -#endif - -#if DATA_SIZE == 8 -# define BSWAP(X) bswap64(X) -#elif DATA_SIZE == 4 -# define BSWAP(X) bswap32(X) -#elif DATA_SIZE == 2 -# define BSWAP(X) bswap16(X) -#else -# define BSWAP(X) (X) -#endif - -#if DATA_SIZE == 1 -# define helper_le_ld_name glue(glue(helper_ret_ld, USUFFIX), MMUSUFFIX) -# define helper_be_ld_name helper_le_ld_name -# define helper_le_lds_name glue(glue(helper_ret_ld, SSUFFIX), MMUSUFFIX) -# define helper_be_lds_name helper_le_lds_name -# define helper_le_st_name glue(glue(helper_ret_st, SUFFIX), MMUSUFFIX) -# define helper_be_st_name helper_le_st_name -#else -# define helper_le_ld_name glue(glue(helper_le_ld, USUFFIX), MMUSUFFIX) -# define helper_be_ld_name glue(glue(helper_be_ld, USUFFIX), MMUSUFFIX) -# define helper_le_lds_name glue(glue(helper_le_ld, SSUFFIX), MMUSUFFIX) -# define helper_be_lds_name glue(glue(helper_be_ld, SSUFFIX), MMUSUFFIX) -# define helper_le_st_name glue(glue(helper_le_st, SUFFIX), MMUSUFFIX) -# define helper_be_st_name glue(glue(helper_be_st, SUFFIX), MMUSUFFIX) -#endif - -#ifndef SOFTMMU_CODE_ACCESS -static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env, - size_t mmu_idx, size_t index, - target_ulong addr, - uintptr_t retaddr, - bool recheck, - MMUAccessType access_type) -{ - CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index]; - return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck, - access_type, DATA_SIZE); -} -#endif - -WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr, - TCGMemOpIdx oi, uintptr_t retaddr) -{ - uintptr_t mmu_idx = get_mmuidx(oi); - uintptr_t index = tlb_index(env, mmu_idx, addr); - CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr); - target_ulong tlb_addr = entry->ADDR_READ; - unsigned a_bits = get_alignment_bits(get_memop(oi)); - uintptr_t haddr; - DATA_TYPE res; - - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE, - mmu_idx, retaddr); - } - - /* If the TLB entry is for a different page, reload and try again. */ - if (!tlb_hit(tlb_addr, addr)) { - if (!VICTIM_TLB_HIT(ADDR_READ, addr)) { - tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, READ_ACCESS_TYPE, - mmu_idx, retaddr); - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - } - tlb_addr = entry->ADDR_READ; - } - - /* Handle an IO access. */ - if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - if ((addr & (DATA_SIZE - 1)) != 0) { - goto do_unaligned_access; - } - - /* ??? Note that the io helpers always read data in the target - byte ordering. We should push the LE/BE request down into io. */ - res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr, - tlb_addr & TLB_RECHECK, - READ_ACCESS_TYPE); - res = TGT_LE(res); - return res; - } - - /* Handle slow unaligned access (it spans two pages or IO). 
*/ - if (DATA_SIZE > 1 - && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1 - >= TARGET_PAGE_SIZE)) { - target_ulong addr1, addr2; - DATA_TYPE res1, res2; - unsigned shift; - do_unaligned_access: - addr1 = addr & ~(DATA_SIZE - 1); - addr2 = addr1 + DATA_SIZE; - res1 = helper_le_ld_name(env, addr1, oi, retaddr); - res2 = helper_le_ld_name(env, addr2, oi, retaddr); - shift = (addr & (DATA_SIZE - 1)) * 8; - - /* Little-endian combine. */ - res = (res1 >> shift) | (res2 << ((DATA_SIZE * 8) - shift)); - return res; - } - - haddr = addr + entry->addend; -#if DATA_SIZE == 1 - res = glue(glue(ld, LSUFFIX), _p)((uint8_t *)haddr); -#else - res = glue(glue(ld, LSUFFIX), _le_p)((uint8_t *)haddr); -#endif - return res; -} - -#if DATA_SIZE > 1 -WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr, - TCGMemOpIdx oi, uintptr_t retaddr) -{ - uintptr_t mmu_idx = get_mmuidx(oi); - uintptr_t index = tlb_index(env, mmu_idx, addr); - CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr); - target_ulong tlb_addr = entry->ADDR_READ; - unsigned a_bits = get_alignment_bits(get_memop(oi)); - uintptr_t haddr; - DATA_TYPE res; - - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(ENV_GET_CPU(env), addr, READ_ACCESS_TYPE, - mmu_idx, retaddr); - } - - /* If the TLB entry is for a different page, reload and try again. */ - if (!tlb_hit(tlb_addr, addr)) { - if (!VICTIM_TLB_HIT(ADDR_READ, addr)) { - tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, READ_ACCESS_TYPE, - mmu_idx, retaddr); - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - } - tlb_addr = entry->ADDR_READ; - } - - /* Handle an IO access. */ - if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - if ((addr & (DATA_SIZE - 1)) != 0) { - goto do_unaligned_access; - } - - /* ??? Note that the io helpers always read data in the target - byte ordering. We should push the LE/BE request down into io. */ - res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr, - tlb_addr & TLB_RECHECK, - READ_ACCESS_TYPE); - res = TGT_BE(res); - return res; - } - - /* Handle slow unaligned access (it spans two pages or IO). */ - if (DATA_SIZE > 1 - && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1 - >= TARGET_PAGE_SIZE)) { - target_ulong addr1, addr2; - DATA_TYPE res1, res2; - unsigned shift; - do_unaligned_access: - addr1 = addr & ~(DATA_SIZE - 1); - addr2 = addr1 + DATA_SIZE; - res1 = helper_be_ld_name(env, addr1, oi, retaddr); - res2 = helper_be_ld_name(env, addr2, oi, retaddr); - shift = (addr & (DATA_SIZE - 1)) * 8; - - /* Big-endian combine. */ - res = (res1 << shift) | (res2 >> ((DATA_SIZE * 8) - shift)); - return res; - } - - haddr = addr + entry->addend; - res = glue(glue(ld, LSUFFIX), _be_p)((uint8_t *)haddr); - return res; -} -#endif /* DATA_SIZE > 1 */ - -#ifndef SOFTMMU_CODE_ACCESS - -/* Provide signed versions of the load routines as well. We can of course - avoid this for 64-bit data, or for 32-bit data on 32-bit host. 
*/ -#if DATA_SIZE * 8 < TCG_TARGET_REG_BITS -WORD_TYPE helper_le_lds_name(CPUArchState *env, target_ulong addr, - TCGMemOpIdx oi, uintptr_t retaddr) -{ - return (SDATA_TYPE)helper_le_ld_name(env, addr, oi, retaddr); -} - -# if DATA_SIZE > 1 -WORD_TYPE helper_be_lds_name(CPUArchState *env, target_ulong addr, - TCGMemOpIdx oi, uintptr_t retaddr) -{ - return (SDATA_TYPE)helper_be_ld_name(env, addr, oi, retaddr); -} -# endif -#endif - -static inline void glue(io_write, SUFFIX)(CPUArchState *env, - size_t mmu_idx, size_t index, - DATA_TYPE val, - target_ulong addr, - uintptr_t retaddr, - bool recheck) -{ - CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index]; - return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr, - recheck, DATA_SIZE); -} - -void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val, - TCGMemOpIdx oi, uintptr_t retaddr) -{ - uintptr_t mmu_idx = get_mmuidx(oi); - uintptr_t index = tlb_index(env, mmu_idx, addr); - CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr); - target_ulong tlb_addr = tlb_addr_write(entry); - unsigned a_bits = get_alignment_bits(get_memop(oi)); - uintptr_t haddr; - - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE, - mmu_idx, retaddr); - } - - /* If the TLB entry is for a different page, reload and try again. */ - if (!tlb_hit(tlb_addr, addr)) { - if (!VICTIM_TLB_HIT(addr_write, addr)) { - tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, MMU_DATA_STORE, - mmu_idx, retaddr); - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - } - tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK; - } - - /* Handle an IO access. */ - if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - if ((addr & (DATA_SIZE - 1)) != 0) { - goto do_unaligned_access; - } - - /* ??? Note that the io helpers always read data in the target - byte ordering. We should push the LE/BE request down into io. */ - val = TGT_LE(val); - glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, - retaddr, tlb_addr & TLB_RECHECK); - return; - } - - /* Handle slow unaligned access (it spans two pages or IO). */ - if (DATA_SIZE > 1 - && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1 - >= TARGET_PAGE_SIZE)) { - int i; - target_ulong page2; - CPUTLBEntry *entry2; - do_unaligned_access: - /* Ensure the second page is in the TLB. Note that the first page - is already guaranteed to be filled, and that the second page - cannot evict the first. */ - page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK; - entry2 = tlb_entry(env, mmu_idx, page2); - if (!tlb_hit_page(tlb_addr_write(entry2), page2) - && !VICTIM_TLB_HIT(addr_write, page2)) { - tlb_fill(ENV_GET_CPU(env), page2, DATA_SIZE, MMU_DATA_STORE, - mmu_idx, retaddr); - } - - /* XXX: not efficient, but simple. */ - /* This loop must go in the forward direction to avoid issues - with self-modifying code in Windows 64-bit. */ - for (i = 0; i < DATA_SIZE; ++i) { - /* Little-endian extract. 
*/ - uint8_t val8 = val >> (i * 8); - glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8, - oi, retaddr); - } - return; - } - - haddr = addr + entry->addend; -#if DATA_SIZE == 1 - glue(glue(st, SUFFIX), _p)((uint8_t *)haddr, val); -#else - glue(glue(st, SUFFIX), _le_p)((uint8_t *)haddr, val); -#endif -} - -#if DATA_SIZE > 1 -void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val, - TCGMemOpIdx oi, uintptr_t retaddr) -{ - uintptr_t mmu_idx = get_mmuidx(oi); - uintptr_t index = tlb_index(env, mmu_idx, addr); - CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr); - target_ulong tlb_addr = tlb_addr_write(entry); - unsigned a_bits = get_alignment_bits(get_memop(oi)); - uintptr_t haddr; - - if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE, - mmu_idx, retaddr); - } - - /* If the TLB entry is for a different page, reload and try again. */ - if (!tlb_hit(tlb_addr, addr)) { - if (!VICTIM_TLB_HIT(addr_write, addr)) { - tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, MMU_DATA_STORE, - mmu_idx, retaddr); - index = tlb_index(env, mmu_idx, addr); - entry = tlb_entry(env, mmu_idx, addr); - } - tlb_addr = tlb_addr_write(entry) & ~TLB_INVALID_MASK; - } - - /* Handle an IO access. */ - if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - if ((addr & (DATA_SIZE - 1)) != 0) { - goto do_unaligned_access; - } - - /* ??? Note that the io helpers always read data in the target - byte ordering. We should push the LE/BE request down into io. */ - val = TGT_BE(val); - glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr, - tlb_addr & TLB_RECHECK); - return; - } - - /* Handle slow unaligned access (it spans two pages or IO). */ - if (DATA_SIZE > 1 - && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1 - >= TARGET_PAGE_SIZE)) { - int i; - target_ulong page2; - CPUTLBEntry *entry2; - do_unaligned_access: - /* Ensure the second page is in the TLB. Note that the first page - is already guaranteed to be filled, and that the second page - cannot evict the first. */ - page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK; - entry2 = tlb_entry(env, mmu_idx, page2); - if (!tlb_hit_page(tlb_addr_write(entry2), page2) - && !VICTIM_TLB_HIT(addr_write, page2)) { - tlb_fill(ENV_GET_CPU(env), page2, DATA_SIZE, MMU_DATA_STORE, - mmu_idx, retaddr); - } - - /* XXX: not efficient, but simple */ - /* This loop must go in the forward direction to avoid issues - with self-modifying code. */ - for (i = 0; i < DATA_SIZE; ++i) { - /* Big-endian extract. 
*/ - uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8)); - glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8, - oi, retaddr); - } - return; - } - - haddr = addr + entry->addend; - glue(glue(st, SUFFIX), _be_p)((uint8_t *)haddr, val); -} -#endif /* DATA_SIZE > 1 */ -#endif /* !defined(SOFTMMU_CODE_ACCESS) */ - -#undef READ_ACCESS_TYPE -#undef DATA_TYPE -#undef SUFFIX -#undef LSUFFIX -#undef DATA_SIZE -#undef ADDR_READ -#undef WORD_TYPE -#undef SDATA_TYPE -#undef USUFFIX -#undef SSUFFIX -#undef BSWAP -#undef helper_le_ld_name -#undef helper_be_ld_name -#undef helper_le_lds_name -#undef helper_be_lds_name -#undef helper_le_st_name -#undef helper_be_st_name
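Before the next patch in the series, a brief aside on the mechanism the demacro patch above relies on. The sketch below is a standalone illustration, not QEMU code — the function names, the little-endian byte order, and the main() driver are all invented for the example. The point is the shape: one common routine takes the access size as an ordinary parameter, and thin wrappers call it with compile-time constants, so after inlining the compiler can discard every unreachable case — the same per-size specialisation that softmmu_template.h used to achieve by textual macro expansion.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * One common routine, parameterised by access size. The bytes are
 * assembled explicitly in little-endian order so the code is portable;
 * the switch is the part the compiler folds away when size is constant.
 */
static inline uint64_t load_le_common(const uint8_t *p, int size)
{
    uint64_t val = 0;
    switch (size) {
    case 8:
        val |= (uint64_t)p[7] << 56 | (uint64_t)p[6] << 48 |
               (uint64_t)p[5] << 40 | (uint64_t)p[4] << 32;
        /* fall through */
    case 4:
        val |= (uint64_t)p[3] << 24 | (uint64_t)p[2] << 16;
        /* fall through */
    case 2:
        val |= (uint64_t)p[1] << 8;
        /* fall through */
    case 1:
        val |= p[0];
        break;
    }
    return val;
}

/*
 * Thin public wrappers: the constant size lets each one collapse to a
 * single case after inlining, much like the old macro-generated variants.
 */
static uint64_t load_u8(const uint8_t *p)  { return load_le_common(p, 1); }
static uint64_t load_u16(const uint8_t *p) { return load_le_common(p, 2); }
static uint64_t load_u32(const uint8_t *p) { return load_le_common(p, 4); }
static uint64_t load_u64(const uint8_t *p) { return load_le_common(p, 8); }

int main(void)
{
    uint8_t buf[8] = { 0x78, 0x56, 0x34, 0x12, 0, 0, 0, 0 };
    /* prints 5678 and 12345678 */
    printf("%llx %llx\n", (unsigned long long)load_u16(buf),
           (unsigned long long)load_u32(buf));
    (void)load_u8;
    (void)load_u64;
    return 0;
}
```

Compiled with optimisation, each wrapper should reduce to a single fixed-size load, so nothing is lost relative to the macro-generated helpers while only one copy of the logic has to be maintained.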
From patchwork Fri May 10 20:00:58 2019
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 10939513
From: Alex Bennée
To: peter.maydell@linaro.org
Date: Fri, 10 May 2019 21:00:58 +0100
Message-Id: <20190510200101.31096-3-alex.bennee@linaro.org>
In-Reply-To: <20190510200101.31096-1-alex.bennee@linaro.org>
References: <20190510200101.31096-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PULL 2/5] cputlb: Move TLB_RECHECK handling into load/store_helper
Cc: Richard Henderson, Mark Cave-Ayland, qemu-devel@nongnu.org, Paolo Bonzini, Alex Bennée

From: Richard Henderson

Having this in io_readx/io_writex meant that we forgot to re-compute index after tlb_fill. It also means we can use the normal aligned memory load path. It also fixes a bug where we had cached a use of index across a tlb_fill.

Signed-off-by: Richard Henderson
Signed-off-by: Alex Bennée
Tested-by: Mark Cave-Ayland

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 12f21865ee7..9c04eb1687c 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -856,9 +856,8 @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr) } static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry, - int mmu_idx, - target_ulong addr, uintptr_t retaddr, - bool recheck, MMUAccessType access_type, int size) + int mmu_idx, target_ulong addr, uintptr_t retaddr, + MMUAccessType access_type, int size) { CPUState *cpu = ENV_GET_CPU(env); hwaddr mr_offset; @@ -868,30 +867,6 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry, bool locked = false; MemTxResult r; - if (recheck) { - /* - * This is a TLB_RECHECK access, where the MMU protection - * covers a smaller range than a target page, and we must - * repeat the MMU check here. This tlb_fill() call might - * longjump out if this access should cause a guest exception.
- */ - CPUTLBEntry *entry; - target_ulong tlb_addr; - - tlb_fill(cpu, addr, size, access_type, mmu_idx, retaddr); - - entry = tlb_entry(env, mmu_idx, addr); - tlb_addr = (access_type == MMU_DATA_LOAD ? - entry->addr_read : entry->addr_code); - if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) { - /* RAM access */ - uintptr_t haddr = addr + entry->addend; - - return ldn_p((void *)haddr, size); - } - /* Fall through for handling IO accesses */ - } - section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs); mr = section->mr; mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr; @@ -925,9 +900,8 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry, } static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry, - int mmu_idx, - uint64_t val, target_ulong addr, - uintptr_t retaddr, bool recheck, int size) + int mmu_idx, uint64_t val, target_ulong addr, + uintptr_t retaddr, int size) { CPUState *cpu = ENV_GET_CPU(env); hwaddr mr_offset; @@ -936,30 +910,6 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry, bool locked = false; MemTxResult r; - if (recheck) { - /* - * This is a TLB_RECHECK access, where the MMU protection - * covers a smaller range than a target page, and we must - * repeat the MMU check here. This tlb_fill() call might - * longjump out if this access should cause a guest exception. - */ - CPUTLBEntry *entry; - target_ulong tlb_addr; - - tlb_fill(cpu, addr, size, MMU_DATA_STORE, mmu_idx, retaddr); - - entry = tlb_entry(env, mmu_idx, addr); - tlb_addr = tlb_addr_write(entry); - if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) { - /* RAM access */ - uintptr_t haddr = addr + entry->addend; - - stn_p((void *)haddr, size, val); - return; - } - /* Fall through for handling IO accesses */ - } - section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs); mr = section->mr; mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr; @@ -1218,14 +1168,15 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr, target_ulong tlb_addr = code_read ? entry->addr_code : entry->addr_read; const size_t tlb_off = code_read ? offsetof(CPUTLBEntry, addr_code) : offsetof(CPUTLBEntry, addr_read); + const MMUAccessType access_type = + code_read ? MMU_INST_FETCH : MMU_DATA_LOAD; unsigned a_bits = get_alignment_bits(get_memop(oi)); void *haddr; uint64_t res; /* Handle CPU specific unaligned behaviour */ if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(ENV_GET_CPU(env), addr, - code_read ? MMU_INST_FETCH : MMU_DATA_LOAD, + cpu_unaligned_access(ENV_GET_CPU(env), addr, access_type, mmu_idx, retaddr); } @@ -1234,8 +1185,7 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr, if (!victim_tlb_hit(env, mmu_idx, index, tlb_off, addr & TARGET_PAGE_MASK)) { tlb_fill(ENV_GET_CPU(env), addr, size, - code_read ? MMU_INST_FETCH : MMU_DATA_LOAD, - mmu_idx, retaddr); + access_type, mmu_idx, retaddr); index = tlb_index(env, mmu_idx, addr); entry = tlb_entry(env, mmu_idx, addr); } @@ -1244,17 +1194,33 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr, /* Handle an IO access. */ if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index]; - uint64_t tmp; - if ((addr & (size - 1)) != 0) { goto do_unaligned_access; } - tmp = io_readx(env, iotlbentry, mmu_idx, addr, retaddr, - tlb_addr & TLB_RECHECK, - code_read ? 
MMU_INST_FETCH : MMU_DATA_LOAD, size); - return handle_bswap(tmp, size, big_endian); + if (tlb_addr & TLB_RECHECK) { + /* + * This is a TLB_RECHECK access, where the MMU protection + * covers a smaller range than a target page, and we must + * repeat the MMU check here. This tlb_fill() call might + * longjump out if this access should cause a guest exception. + */ + tlb_fill(ENV_GET_CPU(env), addr, size, + access_type, mmu_idx, retaddr); + index = tlb_index(env, mmu_idx, addr); + entry = tlb_entry(env, mmu_idx, addr); + + tlb_addr = code_read ? entry->addr_code : entry->addr_read; + tlb_addr &= ~TLB_RECHECK; + if (!(tlb_addr & ~TARGET_PAGE_MASK)) { + /* RAM access */ + goto do_aligned_access; + } + } + + res = io_readx(env, &env->iotlb[mmu_idx][index], mmu_idx, addr, + retaddr, access_type, size); + return handle_bswap(res, size, big_endian); } /* Handle slow unaligned access (it spans two pages or IO). */ @@ -1281,8 +1247,8 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr, return res & MAKE_64BIT_MASK(0, size * 8); } + do_aligned_access: haddr = (void *)((uintptr_t)addr + entry->addend); - switch (size) { case 1: res = ldub_p(haddr); @@ -1446,15 +1412,33 @@ static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val, /* Handle an IO access. */ if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) { - CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index]; - if ((addr & (size - 1)) != 0) { goto do_unaligned_access; } - io_writex(env, iotlbentry, mmu_idx, + if (tlb_addr & TLB_RECHECK) { + /* + * This is a TLB_RECHECK access, where the MMU protection + * covers a smaller range than a target page, and we must + * repeat the MMU check here. This tlb_fill() call might + * longjump out if this access should cause a guest exception. 
+ */ + tlb_fill(ENV_GET_CPU(env), addr, size, MMU_DATA_STORE, + mmu_idx, retaddr); + index = tlb_index(env, mmu_idx, addr); + entry = tlb_entry(env, mmu_idx, addr); + + tlb_addr = tlb_addr_write(entry); + tlb_addr &= ~TLB_RECHECK; + if (!(tlb_addr & ~TARGET_PAGE_MASK)) { + /* RAM access */ + goto do_aligned_access; + } + } + + io_writex(env, &env->iotlb[mmu_idx][index], mmu_idx, handle_bswap(val, size, big_endian), - addr, retaddr, tlb_addr & TLB_RECHECK, size); + addr, retaddr, size); return; } @@ -1502,8 +1486,8 @@ static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val, return; } + do_aligned_access: haddr = (void *)((uintptr_t)addr + entry->addend); - switch (size) { case 1: stb_p(haddr, val);
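Between this patch and the next, a short illustration of the bug class the commit message above names ("we had cached a use of index across a tlb_fill"). The sketch is hypothetical C, not QEMU code — the table, slot_for() and refill() are invented stand-ins for the TLB, tlb_index() and tlb_fill() — but it shows why an index and entry pointer computed before a refill must be re-derived afterwards, which is what hoisting the recheck logic into load_helper/store_helper makes straightforward.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define N_ENTRIES 4

typedef struct { uint64_t tag; uint64_t data; bool valid; } Entry;
static Entry table[N_ENTRIES];

static size_t slot_for(uint64_t addr) { return (addr >> 12) % N_ENTRIES; }

/* A refill may rewrite any slot, so prior lookups into the table go stale. */
static void refill(uint64_t addr)
{
    size_t i = slot_for(addr);
    table[i] = (Entry){ .tag = addr >> 12, .data = addr * 2, .valid = true };
}

static uint64_t lookup(uint64_t addr)
{
    size_t index = slot_for(addr);
    Entry *entry = &table[index];

    if (!entry->valid || entry->tag != addr >> 12) {
        refill(addr);
        /*
         * Re-derive both values after the refill; silently reusing the
         * old ones is the "cached use of index across a tlb_fill" mistake.
         */
        index = slot_for(addr);
        entry = &table[index];
    }
    return entry->data;
}

int main(void)
{
    printf("%llu\n", (unsigned long long)lookup(0x4000));
    return 0;
}
```

In this toy version the recomputation happens to land on the same slot; in QEMU a fill can change which entry backs the address, which is why the stale cached values were a real bug.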
From patchwork Fri May 10 20:00:59 2019
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 10939511
From: Alex Bennée
To: peter.maydell@linaro.org
Date: Fri, 10 May 2019 21:00:59 +0100
Message-Id: <20190510200101.31096-4-alex.bennee@linaro.org>
In-Reply-To: <20190510200101.31096-1-alex.bennee@linaro.org>
References: <20190510200101.31096-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PULL 3/5] cputlb: Drop attribute flatten
Cc: Richard Henderson, Mark Cave-Ayland, qemu-devel@nongnu.org, Paolo Bonzini, Alex Bennée

From: Richard Henderson

Going to approach this problem via __attribute__((always_inline)) instead, but full conversion will take several steps.

Signed-off-by: Richard Henderson
Signed-off-by: Alex Bennée
Tested-by: Mark Cave-Ayland

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 9c04eb1687c..ccbb47d8d1c 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1291,51 +1291,44 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr, * We don't bother with this widened value for SOFTMMU_CODE_ACCESS.
*/ -tcg_target_ulong __attribute__((flatten)) -helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 1, false, false); } -tcg_target_ulong __attribute__((flatten)) -helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 2, false, false); } -tcg_target_ulong __attribute__((flatten)) -helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 2, true, false); } -tcg_target_ulong __attribute__((flatten)) -helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 4, false, false); } -tcg_target_ulong __attribute__((flatten)) -helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 4, true, false); } -uint64_t __attribute__((flatten)) -helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 8, false, false); } -uint64_t __attribute__((flatten)) -helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 8, true, false); } @@ -1519,51 +1512,44 @@ static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val, } } -void __attribute__((flatten)) -helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val, - TCGMemOpIdx oi, uintptr_t retaddr) +void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val, + TCGMemOpIdx oi, uintptr_t retaddr) { store_helper(env, addr, val, oi, retaddr, 1, false); } -void __attribute__((flatten)) -helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val, - TCGMemOpIdx oi, uintptr_t retaddr) +void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val, + TCGMemOpIdx oi, uintptr_t retaddr) { store_helper(env, addr, val, oi, retaddr, 2, false); } -void __attribute__((flatten)) -helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val, - TCGMemOpIdx oi, uintptr_t retaddr) +void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val, + TCGMemOpIdx oi, uintptr_t retaddr) { store_helper(env, addr, val, oi, retaddr, 2, true); } -void __attribute__((flatten)) -helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - TCGMemOpIdx oi, uintptr_t retaddr) +void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + TCGMemOpIdx oi, uintptr_t retaddr) { store_helper(env, addr, val, oi, retaddr, 4, false); } -void __attribute__((flatten)) 
-helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, - TCGMemOpIdx oi, uintptr_t retaddr) +void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val, + TCGMemOpIdx oi, uintptr_t retaddr) { store_helper(env, addr, val, oi, retaddr, 4, true); } -void __attribute__((flatten)) -helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - TCGMemOpIdx oi, uintptr_t retaddr) +void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + TCGMemOpIdx oi, uintptr_t retaddr) { store_helper(env, addr, val, oi, retaddr, 8, false); } -void __attribute__((flatten)) -helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, - TCGMemOpIdx oi, uintptr_t retaddr) +void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, + TCGMemOpIdx oi, uintptr_t retaddr) { store_helper(env, addr, val, oi, retaddr, 8, true); } @@ -1627,51 +1613,44 @@ helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val, /* Code access functions. */ -uint8_t __attribute__((flatten)) -helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +uint8_t helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 1, false, true); } -uint16_t __attribute__((flatten)) -helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +uint16_t helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 2, false, true); } -uint16_t __attribute__((flatten)) -helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +uint16_t helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 2, true, true); } -uint32_t __attribute__((flatten)) -helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 4, false, true); } -uint32_t __attribute__((flatten)) -helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 4, true, true); } -uint64_t __attribute__((flatten)) -helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 8, false, true); } -uint64_t __attribute__((flatten)) -helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi, - uintptr_t retaddr) +uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr, + TCGMemOpIdx oi, uintptr_t retaddr) { return load_helper(env, addr, oi, retaddr, 8, true, true); }
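Patch 3 above only strips the attribute; the always_inline approach its commit message points at, and which the next patch ("Do unaligned load recursion to outermost function") starts to implement, looks roughly like the sketch below. This is illustrative C, not QEMU code — load_worker, full_load_u32 and the simplified alignment rule are invented for the example — but it shows the two ingredients: the worker is forced inline into each thin helper so its size parameter stays a compile-time constant, and the slow path recurses through the outer helper (passed in as a function pointer) rather than through the worker itself, so the constants are not lost across the recursion.

```c
#include <stdint.h>
#include <stdio.h>

typedef uint64_t FullLoad(const uint8_t *buf, unsigned addr);

static inline __attribute__((always_inline))
uint64_t load_worker(const uint8_t *buf, unsigned addr, unsigned size,
                     FullLoad *full_load)
{
    if (addr & (size - 1)) {
        /*
         * Slow path: split into two aligned accesses of the same size and
         * recurse via the outer helper, not load_worker itself, so each
         * entry point keeps `size` as a compile-time constant.
         */
        unsigned a1 = addr & ~(size - 1);
        uint64_t r1 = full_load(buf, a1);
        uint64_t r2 = full_load(buf, a1 + size);
        unsigned shift = (addr & (size - 1)) * 8;
        uint64_t res = (r1 >> shift) | (r2 << (size * 8 - shift));
        return size == 8 ? res : res & ((1ULL << (size * 8)) - 1);
    }
    uint64_t val = 0;
    for (unsigned i = 0; i < size; i++) {      /* little-endian assembly */
        val |= (uint64_t)buf[addr + i] << (i * 8);
    }
    return val;
}

static uint64_t full_load_u32(const uint8_t *buf, unsigned addr)
{
    return load_worker(buf, addr, 4, full_load_u32);
}

int main(void)
{
    uint8_t buf[16];
    for (unsigned i = 0; i < 16; i++) {
        buf[i] = (uint8_t)i;
    }
    /* Unaligned 32-bit load at offset 2 exercises the split path: 05040302 */
    printf("%08llx\n", (unsigned long long)full_load_u32(buf, 2));
    return 0;
}
```

gcc and clang both accept __attribute__((always_inline)) in this position; the aligned recursive calls terminate immediately, mirroring how the real helpers split an unaligned access into two accesses that each hit the fast path.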
From patchwork Fri May 10 20:01:00 2019
X-Patchwork-Id: 10939491
From: Alex Bennée
To: peter.maydell@linaro.org
Cc: Richard Henderson, Mark Cave-Ayland, Paolo Bonzini, qemu-devel@nongnu.org
Date: Fri, 10 May 2019 21:01:00 +0100
Message-Id: <20190510200101.31096-5-alex.bennee@linaro.org>
In-Reply-To: <20190510200101.31096-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PULL 4/5] cputlb: Do unaligned load recursion to outermost function

From: Richard Henderson

If we attempt to recurse from load_helper back to load_helper, even via an
intermediary, we do not get all of the constants expanded away as desired.
But if we recurse back to the original helper (or a shim that has a
consistent function signature), the operands are folded away as desired.

Signed-off-by: Richard Henderson
Signed-off-by: Alex Bennée
Tested-by: Mark Cave-Ayland

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index ccbb47d8d1c..e4d0c943011 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1157,10 +1157,13 @@ static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian)
  * is disassembled. It shouldn't be called directly by guest code.
  */

-static uint64_t load_helper(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr,
-                            size_t size, bool big_endian,
-                            bool code_read)
+typedef uint64_t FullLoadHelper(CPUArchState *env, target_ulong addr,
+                                TCGMemOpIdx oi, uintptr_t retaddr);
+
+static inline uint64_t __attribute__((always_inline))
+load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
+            uintptr_t retaddr, size_t size, bool big_endian, bool code_read,
+            FullLoadHelper *full_load)
 {
     uintptr_t mmu_idx = get_mmuidx(oi);
     uintptr_t index = tlb_index(env, mmu_idx, addr);
@@ -1233,8 +1236,8 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr,
     do_unaligned_access:
         addr1 = addr & ~(size - 1);
         addr2 = addr1 + size;
-        r1 = load_helper(env, addr1, oi, retaddr, size, big_endian, code_read);
-        r2 = load_helper(env, addr2, oi, retaddr, size, big_endian, code_read);
+        r1 = full_load(env, addr1, oi, retaddr);
+        r2 = full_load(env, addr2, oi, retaddr);
         shift = (addr & (size - 1)) * 8;

         if (big_endian) {
@@ -1291,46 +1294,83 @@ static uint64_t load_helper(CPUArchState *env, target_ulong addr,
  * We don't bother with this widened value for SOFTMMU_CODE_ACCESS.
  */

+static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr,
+                              TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 1, false, false,
+                       full_ldub_mmu);
+}
+
 tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr,
                                      TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 1, false, false);
+    return full_ldub_mmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr,
+                                 TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, false, false,
+                       full_le_lduw_mmu);
 }

 tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr,
                                     TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 2, false, false);
+    return full_le_lduw_mmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr,
+                                 TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, true, false,
+                       full_be_lduw_mmu);
 }

 tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr,
                                     TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 2, true, false);
+    return full_be_lduw_mmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr,
+                                 TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, false, false,
+                       full_le_ldul_mmu);
 }

 tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr,
                                     TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 4, false, false);
+    return full_le_ldul_mmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr,
+                                 TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, true, false,
+                       full_be_ldul_mmu);
 }

 tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
                                     TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 4, true, false);
+    return full_be_ldul_mmu(env, addr, oi, retaddr);
 }

 uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr,
                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 8, false, false);
+    return load_helper(env, addr, oi, retaddr, 8, false, false,
+                       helper_le_ldq_mmu);
 }

 uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 8, true, false);
+    return load_helper(env, addr, oi, retaddr, 8, true, false,
+                       helper_be_ldq_mmu);
 }

 /*
@@ -1613,44 +1653,81 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,

 /* Code access functions.
  */

+static uint64_t full_ldub_cmmu(CPUArchState *env, target_ulong addr,
+                               TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 1, false, true,
+                       full_ldub_cmmu);
+}
+
 uint8_t helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 1, false, true);
+    return full_ldub_cmmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_le_lduw_cmmu(CPUArchState *env, target_ulong addr,
+                                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, false, true,
+                       full_le_lduw_cmmu);
 }

 uint16_t helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 2, false, true);
+    return full_le_lduw_cmmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_be_lduw_cmmu(CPUArchState *env, target_ulong addr,
+                                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 2, true, true,
+                       full_be_lduw_cmmu);
 }

 uint16_t helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 2, true, true);
+    return full_be_lduw_cmmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_le_ldul_cmmu(CPUArchState *env, target_ulong addr,
+                                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, false, true,
+                       full_le_ldul_cmmu);
 }

 uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 4, false, true);
+    return full_le_ldul_cmmu(env, addr, oi, retaddr);
+}
+
+static uint64_t full_be_ldul_cmmu(CPUArchState *env, target_ulong addr,
+                                  TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    return load_helper(env, addr, oi, retaddr, 4, true, true,
+                       full_be_ldul_cmmu);
 }

 uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 4, true, true);
+    return full_be_ldul_cmmu(env, addr, oi, retaddr);
 }

 uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 8, false, true);
+    return load_helper(env, addr, oi, retaddr, 8, false, true,
+                       helper_le_ldq_cmmu);
 }

 uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, 8, true, true);
+    return load_helper(env, addr, oi, retaddr, 8, true, true,
+                       helper_be_ldq_cmmu);
 }
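The point of the full_*_mmu/full_*_cmmu shims above is that the unaligned
slow path now recurses through a function whose signature never changes, so
the size/endian/code_read constants stay visible at every expansion of
load_helper(). A rough standalone sketch of the idea follows; the toy_*
names and simplified "page crossing" test are invented for this
illustration and are not the QEMU code.

#include <stdint.h>

typedef uint64_t ToyFullLoad(const uint8_t *mem, uintptr_t addr);

/*
 * The generic body is forced inline and, on the crossing slow path,
 * retries through the outer helper passed in as full_load.  That
 * pointer is a compile-time constant at each call site, so the calls
 * stay direct and the size constant stays folded; recursing into the
 * generic body itself would reintroduce variable arguments.
 */
static inline uint64_t __attribute__((always_inline))
toy_load(const uint8_t *mem, uintptr_t addr, unsigned size,
         ToyFullLoad *full_load)
{
    if ((addr & 7) + size > 8) {
        /* Split a crossing access into two aligned ones and recombine. */
        uintptr_t addr1 = addr & ~(uintptr_t)(size - 1);
        uint64_t r1 = full_load(mem, addr1);
        uint64_t r2 = full_load(mem, addr1 + size);
        unsigned shift = (addr & (size - 1)) * 8;
        uint64_t mask = size == 8 ? ~(uint64_t)0
                                  : ((uint64_t)1 << (size * 8)) - 1;
        return ((r1 >> shift) | (r2 << (size * 8 - shift))) & mask;
    }
    uint64_t val = 0;
    for (unsigned i = 0; i < size; i++) {
        val |= (uint64_t)mem[addr + i] << (i * 8); /* little-endian read */
    }
    return val;
}

uint64_t toy_full_ldl(const uint8_t *mem, uintptr_t addr)
{
    return toy_load(mem, addr, 4, toy_full_ldl);
}

uint64_t toy_full_ldq(const uint8_t *mem, uintptr_t addr)
{
    return toy_load(mem, addr, 8, toy_full_ldq);
}

Each outer wrapper passes itself as the retry callback, so the forced-inline
body always calls a fixed, direct target rather than a re-generalised copy
of itself.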
From patchwork Fri May 10 20:01:01 2019
X-Patchwork-Id: 10939487
From: Alex Bennée
To: peter.maydell@linaro.org
Date: Fri, 10 May 2019 21:01:01 +0100
Message-Id: <20190510200101.31096-6-alex.bennee@linaro.org>
In-Reply-To: <20190510200101.31096-1-alex.bennee@linaro.org>
Subject: [Qemu-devel] [PULL 5/5] cputlb: Do unaligned store recursion to outermost function
Cc: Richard Henderson, Mark Cave-Ayland, Paolo Bonzini, qemu-devel@nongnu.org

From: Richard Henderson

This is less tricky than for loads, because we always fall back to
single-byte stores to implement unaligned stores.

Signed-off-by: Richard Henderson
Signed-off-by: Alex Bennée
Tested-by: Mark Cave-Ayland

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e4d0c943011..a0833247684 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1413,9 +1413,9 @@ tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr,
  * Store Helpers
  */

-static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
-                         TCGMemOpIdx oi, uintptr_t retaddr, size_t size,
-                         bool big_endian)
+static inline void __attribute__((always_inline))
+store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
+             TCGMemOpIdx oi, uintptr_t retaddr, size_t size, bool big_endian)
 {
     uintptr_t mmu_idx = get_mmuidx(oi);
     uintptr_t index = tlb_index(env, mmu_idx, addr);
@@ -1514,7 +1514,7 @@ static void store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             /* Little-endian extract.  */
             val8 = val >> (i * 8);
         }
-        store_helper(env, addr + i, val8, oi, retaddr, 1, big_endian);
+        helper_ret_stb_mmu(env, addr + i, val8, oi, retaddr);
     }
     return;
 }
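A corresponding sketch of the store side (again with invented toy_* names,
not the QEMU code): the unaligned path never re-enters the generic body, it
only loops over the real single-byte store helper, so forcing store_helper()
inline is enough to keep the per-caller constants folded.

#include <stdint.h>

void toy_stb(uint8_t *mem, uintptr_t addr, uint8_t val);

/*
 * The body is forced inline; the unaligned path falls back to the real
 * byte-store helper instead of re-entering the generic body, so the
 * size/big_endian constants in every caller stay folded.
 */
static inline void __attribute__((always_inline))
toy_store(uint8_t *mem, uintptr_t addr, uint64_t val, unsigned size,
          int big_endian)
{
    unsigned i;

    if ((addr & 7) + size > 8) {
        /* Unaligned: fall back to single-byte stores. */
        for (i = 0; i < size; i++) {
            uint8_t val8 = big_endian ? val >> (((size - 1) - i) * 8)
                                      : val >> (i * 8);
            toy_stb(mem, addr + i, val8);
        }
        return;
    }
    for (i = 0; i < size; i++) {
        mem[addr + i] = big_endian ? val >> (((size - 1) - i) * 8)
                                   : val >> (i * 8);
    }
}

void toy_stb(uint8_t *mem, uintptr_t addr, uint8_t val)
{
    toy_store(mem, addr, val, 1, 0);
}

void toy_be_stl(uint8_t *mem, uintptr_t addr, uint32_t val)
{
    toy_store(mem, addr, val, 4, 1);
}

Because a one-byte access can never cross the boundary, toy_stb never takes
the slow path, so the fallback loop cannot recurse indefinitely.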