From patchwork Tue Nov 26 13:15:33 2024
X-Patchwork-Submitter: Philippe Mathieu-Daudé
X-Patchwork-Id: 13885928
From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno, Aleksandar Rikalo, Anton Johansson, Philippe Mathieu-Daudé, Huacai Chen, Jiaxun Yang
Subject: [PATCH 01/13] target/mips: Rename gen_load_gpr() -> gen_load_gpr_tl()
Date: Tue, 26 Nov 2024 14:15:33 +0100
Message-ID: <20241126131546.66145-2-philmd@linaro.org>
In-Reply-To: <20241126131546.66145-1-philmd@linaro.org>
References: <20241126131546.66145-1-philmd@linaro.org>

MIPS gen_load_gpr() takes a target-specific TCGv argument. Rename it as
gen_load_gpr_tl() to clarify, like other TCG core helpers.

Mechanical change doing:

  $ sed -i -e 's/gen_load_gpr/gen_load_gpr_tl/' \
        $(git grep -l gen_load_gpr)

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Richard Henderson
---
 target/mips/tcg/translate.h               |   2 +-
 target/mips/tcg/lcsr_translate.c          |   8 +-
 target/mips/tcg/loong_translate.c         |  20 +--
 target/mips/tcg/msa_translate.c           |   2 +-
 target/mips/tcg/mxu_translate.c           |  48 ++---
 target/mips/tcg/octeon_translate.c        |  22 +--
 target/mips/tcg/translate.c               | 206 +++++++++++-----------
 target/mips/tcg/translate_addr_const.c    |   8 +-
 target/mips/tcg/tx79_translate.c          |  26 +--
 target/mips/tcg/vr54xx_translate.c        |   4 +-
 target/mips/tcg/micromips_translate.c.inc |  14 +-
 target/mips/tcg/mips16e_translate.c.inc   |  12 +-
 target/mips/tcg/nanomips_translate.c.inc  | 120 ++++++-------
 13 files changed, 246 insertions(+), 246 deletions(-)

diff --git a/target/mips/tcg/translate.h b/target/mips/tcg/translate.h index 1bf153d1838..f1aa706a357 100644 --- a/target/mips/tcg/translate.h +++ b/target/mips/tcg/translate.h @@ -155,7 +155,7 @@ void check_cop1x(DisasContext *ctx); void gen_base_offset_addr(DisasContext *ctx, TCGv addr, int base, int offset); void gen_move_low32(TCGv ret, TCGv_i64 arg); void gen_move_high32(TCGv ret, TCGv_i64 arg); -void gen_load_gpr(TCGv t, int reg); +void gen_load_gpr_tl(TCGv t, int reg); void gen_store_gpr(TCGv t, int reg); #if defined(TARGET_MIPS64) void gen_load_gpr_hi(TCGv_i64 t, int reg); diff --git a/target/mips/tcg/lcsr_translate.c b/target/mips/tcg/lcsr_translate.c index 352b0f43282..193b211049c 100644 --- a/target/mips/tcg/lcsr_translate.c +++ b/target/mips/tcg/lcsr_translate.c @@ -21,7 +21,7 @@ static bool trans_CPUCFG(DisasContext *ctx, arg_CPUCFG *a) TCGv dest = tcg_temp_new(); TCGv src1 = tcg_temp_new(); - gen_load_gpr(src1, a->rs); + gen_load_gpr_tl(src1, a->rs); gen_helper_lcsr_cpucfg(dest, tcg_env, src1); gen_store_gpr(dest, a->rd); @@ -36,7 +36,7 @@ static bool gen_rdcsr(DisasContext *ctx, arg_r *a, TCGv src1 = tcg_temp_new(); check_cp0_enabled(ctx); -
gen_load_gpr(src1, a->rs); + gen_load_gpr_tl(src1, a->rs); func(dest, tcg_env, src1); gen_store_gpr(dest, a->rd); @@ -50,8 +50,8 @@ static bool gen_wrcsr(DisasContext *ctx, arg_r *a, TCGv addr = tcg_temp_new(); check_cp0_enabled(ctx); - gen_load_gpr(addr, a->rs); - gen_load_gpr(val, a->rd); + gen_load_gpr_tl(addr, a->rs); + gen_load_gpr_tl(val, a->rd); func(tcg_env, addr, val); return true; diff --git a/target/mips/tcg/loong_translate.c b/target/mips/tcg/loong_translate.c index 7d74cc34f8a..f527069cfb7 100644 --- a/target/mips/tcg/loong_translate.c +++ b/target/mips/tcg/loong_translate.c @@ -42,8 +42,8 @@ static bool gen_lext_DIV_G(DisasContext *s, int rd, int rs, int rt, l2 = gen_new_label(); l3 = gen_new_label(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); if (!is_double) { tcg_gen_ext32s_tl(t0, t0); @@ -95,8 +95,8 @@ static bool gen_lext_DIVU_G(DisasContext *s, int rd, int rs, int rt, l1 = gen_new_label(); l2 = gen_new_label(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); if (!is_double) { tcg_gen_ext32u_tl(t0, t0); @@ -143,8 +143,8 @@ static bool gen_lext_MOD_G(DisasContext *s, int rd, int rs, int rt, l2 = gen_new_label(); l3 = gen_new_label(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); if (!is_double) { tcg_gen_ext32u_tl(t0, t0); @@ -192,8 +192,8 @@ static bool gen_lext_MODU_G(DisasContext *s, int rd, int rs, int rt, l1 = gen_new_label(); l2 = gen_new_label(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); if (!is_double) { tcg_gen_ext32u_tl(t0, t0); @@ -235,8 +235,8 @@ static bool gen_lext_MULT_G(DisasContext *s, int rd, int rs, int rt, t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); tcg_gen_mul_tl(cpu_gpr[rd], t0, t1); if (!is_double) { diff --git a/target/mips/tcg/msa_translate.c b/target/mips/tcg/msa_translate.c index 75cf80a20ed..6f6eaab93aa 100644 --- a/target/mips/tcg/msa_translate.c +++ b/target/mips/tcg/msa_translate.c @@ -536,7 +536,7 @@ static bool trans_CTCMSA(DisasContext *ctx, arg_msa_elm *a) telm = tcg_temp_new(); - gen_load_gpr(telm, a->ws); + gen_load_gpr_tl(telm, a->ws); gen_helper_msa_ctcmsa(tcg_env, telm, tcg_constant_i32(a->wd)); return true; diff --git a/target/mips/tcg/mxu_translate.c b/target/mips/tcg/mxu_translate.c index 35ebb0397da..002447a10d7 100644 --- a/target/mips/tcg/mxu_translate.c +++ b/target/mips/tcg/mxu_translate.c @@ -679,7 +679,7 @@ static void gen_mxu_s32i2m(DisasContext *ctx) XRa = extract32(ctx->opcode, 6, 5); Rb = extract32(ctx->opcode, 16, 5); - gen_load_gpr(t0, Rb); + gen_load_gpr_tl(t0, Rb); if (XRa <= 15) { gen_store_mxu_gpr(t0, XRa); } else if (XRa == 16) { @@ -728,7 +728,7 @@ static void gen_mxu_s8ldd(DisasContext *ctx, bool postmodify) optn3 = extract32(ctx->opcode, 18, 3); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr(t0, Rb); + gen_load_gpr_tl(t0, Rb); tcg_gen_addi_tl(t0, t0, (int8_t)s8); if (postmodify) { gen_store_gpr(t0, Rb); @@ -813,7 +813,7 @@ static void gen_mxu_s8std(DisasContext *ctx, bool postmodify) return; } - gen_load_gpr(t0, Rb); + gen_load_gpr_tl(t0, Rb); tcg_gen_addi_tl(t0, t0, (int8_t)s8); if (postmodify) { gen_store_gpr(t0, Rb); @@ -862,7 +862,7 @@ static void gen_mxu_s16ldd(DisasContext *ctx, bool postmodify) optn2 = extract32(ctx->opcode, 19, 2); Rb = extract32(ctx->opcode, 21, 5); - 
gen_load_gpr(t0, Rb); + gen_load_gpr_tl(t0, Rb); tcg_gen_addi_tl(t0, t0, s10); if (postmodify) { gen_store_gpr(t0, Rb); @@ -921,7 +921,7 @@ static void gen_mxu_s16std(DisasContext *ctx, bool postmodify) return; } - gen_load_gpr(t0, Rb); + gen_load_gpr_tl(t0, Rb); tcg_gen_addi_tl(t0, t0, s10); if (postmodify) { gen_store_gpr(t0, Rb); @@ -968,8 +968,8 @@ static void gen_mxu_s32mul(DisasContext *ctx, bool mulu) tcg_gen_movi_tl(t0, 0); tcg_gen_movi_tl(t1, 0); } else { - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); if (mulu) { tcg_gen_mulu2_tl(t0, t1, t0, t1); @@ -1528,7 +1528,7 @@ static void gen_mxu_s32ldxx(DisasContext *ctx, bool reversed, bool postinc) s12 = sextract32(ctx->opcode, 10, 10); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr(t0, Rb); + gen_load_gpr_tl(t0, Rb); tcg_gen_movi_tl(t1, s12 * 4); tcg_gen_add_tl(t0, t0, t1); @@ -1563,7 +1563,7 @@ static void gen_mxu_s32stxx(DisasContext *ctx, bool reversed, bool postinc) s12 = sextract32(ctx->opcode, 10, 10); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr(t0, Rb); + gen_load_gpr_tl(t0, Rb); tcg_gen_movi_tl(t1, s12 * 4); tcg_gen_add_tl(t0, t0, t1); @@ -1599,8 +1599,8 @@ static void gen_mxu_s32ldxvx(DisasContext *ctx, bool reversed, Rc = extract32(ctx->opcode, 16, 5); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr(t0, Rb); - gen_load_gpr(t1, Rc); + gen_load_gpr_tl(t0, Rb); + gen_load_gpr_tl(t1, Rc); tcg_gen_shli_tl(t1, t1, strd2); tcg_gen_add_tl(t0, t0, t1); @@ -1637,8 +1637,8 @@ static void gen_mxu_lxx(DisasContext *ctx, uint32_t strd2, MemOp mop) Rc = extract32(ctx->opcode, 16, 5); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr(t0, Rb); - gen_load_gpr(t1, Rc); + gen_load_gpr_tl(t0, Rb); + gen_load_gpr_tl(t1, Rc); tcg_gen_shli_tl(t1, t1, strd2); tcg_gen_add_tl(t0, t0, t1); @@ -1668,8 +1668,8 @@ static void gen_mxu_s32stxvx(DisasContext *ctx, bool reversed, Rc = extract32(ctx->opcode, 16, 5); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr(t0, Rb); - gen_load_gpr(t1, Rc); + gen_load_gpr_tl(t0, Rb); + gen_load_gpr_tl(t1, Rc); tcg_gen_shli_tl(t1, t1, strd2); tcg_gen_add_tl(t0, t0, t1); @@ -1906,7 +1906,7 @@ static void gen_mxu_d32sxxv(DisasContext *ctx, bool right, bool arithmetic) gen_load_mxu_gpr(t0, XRa); gen_load_mxu_gpr(t1, XRd); - gen_load_gpr(t2, rs); + gen_load_gpr_tl(t2, rs); tcg_gen_andi_tl(t2, t2, 0x0f); if (right) { @@ -1954,7 +1954,7 @@ static void gen_mxu_d32sarl(DisasContext *ctx, bool sarw) /* Make SFT4 from rb field */ tcg_gen_movi_tl(t2, rb >> 1); } else { - gen_load_gpr(t2, rb); + gen_load_gpr_tl(t2, rb); tcg_gen_andi_tl(t2, t2, 0x0f); } gen_load_mxu_gpr(t0, XRb); @@ -2060,7 +2060,7 @@ static void gen_mxu_q16sxxv(DisasContext *ctx, bool right, bool arithmetic) gen_load_mxu_gpr(t0, XRa); gen_load_mxu_gpr(t2, XRd); - gen_load_gpr(t5, rs); + gen_load_gpr_tl(t5, rs); tcg_gen_andi_tl(t5, t5, 0x0f); @@ -3659,7 +3659,7 @@ static void gen_mxu_s32extr(DisasContext *ctx) gen_load_mxu_gpr(t0, XRd); gen_load_mxu_gpr(t1, XRa); - gen_load_gpr(t2, rs); + gen_load_gpr_tl(t2, rs); tcg_gen_andi_tl(t2, t2, 0x1f); tcg_gen_subfi_tl(t2, 32, t2); tcg_gen_brcondi_tl(TCG_COND_GE, t2, bits5, l_xra_only); @@ -3709,8 +3709,8 @@ static void gen_mxu_s32extrv(DisasContext *ctx) /* {tmp} = {XRa:XRd} >> (64 - rs - rt) */ gen_load_mxu_gpr(t0, XRd); gen_load_mxu_gpr(t1, XRa); - gen_load_gpr(t2, rs); - gen_load_gpr(t4, rt); + gen_load_gpr_tl(t2, rs); + gen_load_gpr_tl(t4, rt); tcg_gen_brcondi_tl(TCG_COND_EQ, t4, 0, l_zero); tcg_gen_andi_tl(t2, t2, 0x1f); tcg_gen_subfi_tl(t2, 32, t2); 
@@ -4303,7 +4303,7 @@ static void gen_mxu_S32ALN(DisasContext *ctx) gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t1, XRc); - gen_load_gpr(t2, rs); + gen_load_gpr_tl(t2, rs); tcg_gen_andi_tl(t2, t2, 0x07); /* do nothing for undefined cases */ @@ -4364,8 +4364,8 @@ static void gen_mxu_s32madd_sub(DisasContext *ctx, bool sub, bool uns) TCGv_i64 t2 = tcg_temp_new_i64(); TCGv_i64 t3 = tcg_temp_new_i64(); - gen_load_gpr(t0, Rb); - gen_load_gpr(t1, Rc); + gen_load_gpr_tl(t0, Rb); + gen_load_gpr_tl(t1, Rc); if (uns) { tcg_gen_extu_tl_i64(t2, t0); diff --git a/target/mips/tcg/octeon_translate.c b/target/mips/tcg/octeon_translate.c index e25c4cbaa06..6b0dbf946d8 100644 --- a/target/mips/tcg/octeon_translate.c +++ b/target/mips/tcg/octeon_translate.c @@ -26,7 +26,7 @@ static bool trans_BBIT(DisasContext *ctx, arg_BBIT *a) /* Load needed operands */ TCGv t0 = tcg_temp_new(); - gen_load_gpr(t0, a->rs); + gen_load_gpr_tl(t0, a->rs); p = tcg_constant_tl(1ULL << a->p); if (a->set) { @@ -52,8 +52,8 @@ static bool trans_BADDU(DisasContext *ctx, arg_BADDU *a) t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, a->rs); - gen_load_gpr(t1, a->rt); + gen_load_gpr_tl(t0, a->rs); + gen_load_gpr_tl(t1, a->rt); tcg_gen_add_tl(t0, t0, t1); tcg_gen_andi_i64(cpu_gpr[a->rd], t0, 0xff); @@ -71,8 +71,8 @@ static bool trans_DMUL(DisasContext *ctx, arg_DMUL *a) t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, a->rs); - gen_load_gpr(t1, a->rt); + gen_load_gpr_tl(t0, a->rs); + gen_load_gpr_tl(t1, a->rt); tcg_gen_mul_i64(cpu_gpr[a->rd], t0, t1); return true; @@ -88,7 +88,7 @@ static bool trans_EXTS(DisasContext *ctx, arg_EXTS *a) } t0 = tcg_temp_new(); - gen_load_gpr(t0, a->rs); + gen_load_gpr_tl(t0, a->rs); tcg_gen_sextract_tl(t0, t0, a->p, a->lenm1 + 1); gen_store_gpr(t0, a->rt); return true; @@ -104,7 +104,7 @@ static bool trans_CINS(DisasContext *ctx, arg_CINS *a) } t0 = tcg_temp_new(); - gen_load_gpr(t0, a->rs); + gen_load_gpr_tl(t0, a->rs); tcg_gen_deposit_z_tl(t0, t0, a->p, a->lenm1 + 1); gen_store_gpr(t0, a->rt); return true; @@ -120,7 +120,7 @@ static bool trans_POP(DisasContext *ctx, arg_POP *a) } t0 = tcg_temp_new(); - gen_load_gpr(t0, a->rs); + gen_load_gpr_tl(t0, a->rs); if (!a->dw) { tcg_gen_andi_i64(t0, t0, 0xffffffff); } @@ -141,8 +141,8 @@ static bool trans_SEQNE(DisasContext *ctx, arg_SEQNE *a) t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, a->rs); - gen_load_gpr(t1, a->rt); + gen_load_gpr_tl(t0, a->rs); + gen_load_gpr_tl(t1, a->rt); if (a->ne) { tcg_gen_setcond_tl(TCG_COND_NE, cpu_gpr[a->rd], t1, t0); @@ -163,7 +163,7 @@ static bool trans_SEQNEI(DisasContext *ctx, arg_SEQNEI *a) t0 = tcg_temp_new(); - gen_load_gpr(t0, a->rs); + gen_load_gpr_tl(t0, a->rs); /* Sign-extend to 64 bit value */ target_ulong imm = a->imm; diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c index de7045874dd..13fbe5d378f 100644 --- a/target/mips/tcg/translate.c +++ b/target/mips/tcg/translate.c @@ -1188,7 +1188,7 @@ static const char regnames_LO[][4] = { }; /* General purpose registers moves. 
*/ -void gen_load_gpr(TCGv t, int reg) +void gen_load_gpr_tl(TCGv t, int reg) { assert(reg >= 0 && reg <= ARRAY_SIZE(cpu_gpr)); if (reg == 0) { @@ -1256,7 +1256,7 @@ static inline void gen_store_srsgpr(int from, int to) TCGv_i32 t2 = tcg_temp_new_i32(); TCGv_ptr addr = tcg_temp_new_ptr(); - gen_load_gpr(t0, from); + gen_load_gpr_tl(t0, from); tcg_gen_ld_i32(t2, tcg_env, offsetof(CPUMIPSState, CP0_SRSCtl)); tcg_gen_shri_i32(t2, t2, CP0SRSCtl_PSS); tcg_gen_andi_i32(t2, t2, 0xf); @@ -1949,7 +1949,7 @@ void gen_base_offset_addr(DisasContext *ctx, TCGv addr, int base, int offset) if (base == 0) { tcg_gen_movi_tl(addr, offset); } else if (offset == 0) { - gen_load_gpr(addr, base); + gen_load_gpr_tl(addr, base); } else { tcg_gen_movi_tl(addr, offset); gen_op_addr_add(ctx, addr, cpu_gpr[base], addr); @@ -2063,13 +2063,13 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, break; case OPC_LDL: t1 = tcg_temp_new(); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); gen_lxl(ctx, t1, t0, mem_idx, mo_endian(ctx) | MO_UQ); gen_store_gpr(t1, rt); break; case OPC_LDR: t1 = tcg_temp_new(); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); gen_lxr(ctx, t1, t0, mem_idx, mo_endian(ctx) | MO_UQ); gen_store_gpr(t1, rt); break; @@ -2129,7 +2129,7 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, /* fall through */ case OPC_LWL: t1 = tcg_temp_new(); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); gen_lxl(ctx, t1, t0, mem_idx, mo_endian(ctx) | MO_UL); tcg_gen_ext32s_tl(t1, t1); gen_store_gpr(t1, rt); @@ -2139,7 +2139,7 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, /* fall through */ case OPC_LWR: t1 = tcg_temp_new(); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); gen_lxr(ctx, t1, t0, mem_idx, mo_endian(ctx) | MO_UL); tcg_gen_ext32s_tl(t1, t1); gen_store_gpr(t1, rt); @@ -2164,7 +2164,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt, int mem_idx = ctx->mem_idx; gen_base_offset_addr(ctx, t0, base, offset); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); switch (opc) { #if defined(TARGET_MIPS64) case OPC_SD: @@ -2233,7 +2233,7 @@ static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset, gen_set_label(l1); /* generate cmpxchg */ val = tcg_temp_new(); - gen_load_gpr(val, rt); + gen_load_gpr_tl(val, rt); tcg_gen_atomic_cmpxchg_tl(t0, cpu_lladdr, cpu_llval, val, eva ? 
MIPS_HFLAG_UM : ctx->mem_idx, tcg_mo); tcg_gen_setcond_tl(TCG_COND_EQ, t0, t0, cpu_llval); @@ -2332,7 +2332,7 @@ static void gen_arith_imm(DisasContext *ctx, uint32_t opc, TCGv t2 = tcg_temp_new(); TCGLabel *l1 = gen_new_label(); - gen_load_gpr(t1, rs); + gen_load_gpr_tl(t1, rs); tcg_gen_addi_tl(t0, t1, uimm); tcg_gen_ext32s_tl(t0, t0); @@ -2363,7 +2363,7 @@ static void gen_arith_imm(DisasContext *ctx, uint32_t opc, TCGv t2 = tcg_temp_new(); TCGLabel *l1 = gen_new_label(); - gen_load_gpr(t1, rs); + gen_load_gpr_tl(t1, rs); tcg_gen_addi_tl(t0, t1, uimm); tcg_gen_xori_tl(t1, t1, ~uimm); @@ -2447,7 +2447,7 @@ static void gen_slt_imm(DisasContext *ctx, uint32_t opc, return; } t0 = tcg_temp_new(); - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); switch (opc) { case OPC_SLTI: tcg_gen_setcondi_tl(TCG_COND_LT, cpu_gpr[rt], t0, uimm); @@ -2471,7 +2471,7 @@ static void gen_shift_imm(DisasContext *ctx, uint32_t opc, } t0 = tcg_temp_new(); - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); switch (opc) { case OPC_SLL: tcg_gen_shli_tl(t0, t0, uimm); @@ -2553,8 +2553,8 @@ static void gen_arith(DisasContext *ctx, uint32_t opc, TCGv t2 = tcg_temp_new(); TCGLabel *l1 = gen_new_label(); - gen_load_gpr(t1, rs); - gen_load_gpr(t2, rt); + gen_load_gpr_tl(t1, rs); + gen_load_gpr_tl(t2, rt); tcg_gen_add_tl(t0, t1, t2); tcg_gen_ext32s_tl(t0, t0); tcg_gen_xor_tl(t1, t1, t2); @@ -2586,8 +2586,8 @@ static void gen_arith(DisasContext *ctx, uint32_t opc, TCGv t2 = tcg_temp_new(); TCGLabel *l1 = gen_new_label(); - gen_load_gpr(t1, rs); - gen_load_gpr(t2, rt); + gen_load_gpr_tl(t1, rs); + gen_load_gpr_tl(t2, rt); tcg_gen_sub_tl(t0, t1, t2); tcg_gen_ext32s_tl(t0, t0); tcg_gen_xor_tl(t2, t1, t2); @@ -2624,8 +2624,8 @@ static void gen_arith(DisasContext *ctx, uint32_t opc, TCGv t2 = tcg_temp_new(); TCGLabel *l1 = gen_new_label(); - gen_load_gpr(t1, rs); - gen_load_gpr(t2, rt); + gen_load_gpr_tl(t1, rs); + gen_load_gpr_tl(t2, rt); tcg_gen_add_tl(t0, t1, t2); tcg_gen_xor_tl(t1, t1, t2); tcg_gen_xor_tl(t2, t0, t2); @@ -2655,8 +2655,8 @@ static void gen_arith(DisasContext *ctx, uint32_t opc, TCGv t2 = tcg_temp_new(); TCGLabel *l1 = gen_new_label(); - gen_load_gpr(t1, rs); - gen_load_gpr(t2, rt); + gen_load_gpr_tl(t1, rs); + gen_load_gpr_tl(t2, rt); tcg_gen_sub_tl(t0, t1, t2); tcg_gen_xor_tl(t2, t1, t2); tcg_gen_xor_tl(t1, t0, t1); @@ -2706,10 +2706,10 @@ static void gen_cond_move(DisasContext *ctx, uint32_t opc, } t0 = tcg_temp_new(); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); t1 = tcg_constant_tl(0); t2 = tcg_temp_new(); - gen_load_gpr(t2, rs); + gen_load_gpr_tl(t2, rs); switch (opc) { case OPC_MOVN: tcg_gen_movcond_tl(TCG_COND_NE, cpu_gpr[rd], t0, t1, t2, cpu_gpr[rd]); @@ -2792,8 +2792,8 @@ static void gen_slt(DisasContext *ctx, uint32_t opc, t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); switch (opc) { case OPC_SLT: tcg_gen_setcond_tl(TCG_COND_LT, cpu_gpr[rd], t0, t1); @@ -2820,8 +2820,8 @@ static void gen_shift(DisasContext *ctx, uint32_t opc, t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); switch (opc) { case OPC_SLLV: tcg_gen_andi_tl(t0, t0, 0x1f); @@ -3018,8 +3018,8 @@ static void gen_r6_muldiv(DisasContext *ctx, int opc, int rd, int rs, int rt) t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); switch (opc) { case R6_OPC_DIV: @@ 
-3189,8 +3189,8 @@ static void gen_div1_tx79(DisasContext *ctx, uint32_t opc, int rs, int rt) t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); switch (opc) { case MMI_OPC_DIV1: @@ -3240,8 +3240,8 @@ static void gen_muldiv(DisasContext *ctx, uint32_t opc, t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); if (acc != 0) { check_dsp(ctx); @@ -3431,8 +3431,8 @@ static void gen_mul_txx9(DisasContext *ctx, uint32_t opc, TCGv t1 = tcg_temp_new(); int acc = 0; - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); switch (opc) { case MMI_OPC_MULT1: @@ -3528,7 +3528,7 @@ static void gen_cl(DisasContext *ctx, uint32_t opc, return; } t0 = cpu_gpr[rd]; - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); switch (opc) { case OPC_CLO: @@ -3966,11 +3966,11 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, case OPC_GSSQ: t1 = tcg_temp_new(); gen_base_offset_addr(ctx, t0, rs, lsq_offset); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); gen_base_offset_addr(ctx, t0, rs, lsq_offset + 8); - gen_load_gpr(t1, lsq_rt1); + gen_load_gpr_tl(t1, lsq_rt1); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); break; @@ -4190,25 +4190,25 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, #endif case OPC_GSSBX: t1 = tcg_temp_new(); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_SB); break; case OPC_GSSHX: t1 = tcg_temp_new(); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UW | ctx->default_tcg_memop_mask); break; case OPC_GSSWX: t1 = tcg_temp_new(); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); break; #if defined(TARGET_MIPS64) case OPC_GSSDX: t1 = tcg_temp_new(); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); break; @@ -4251,8 +4251,8 @@ static void gen_trap(DisasContext *ctx, uint32_t opc, case OPC_TNE: /* Compare two registers */ if (rs != rt) { - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); cond = 1; } break; @@ -4264,7 +4264,7 @@ static void gen_trap(DisasContext *ctx, uint32_t opc, case OPC_TNEI: /* Compare register to immediate */ if (rs != 0 || imm != 0) { - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); tcg_gen_movi_tl(t1, (int32_t)imm); cond = 1; } @@ -4382,8 +4382,8 @@ static void gen_compute_branch(DisasContext *ctx, uint32_t opc, case OPC_BNEL: /* Compare two registers */ if (rs != rt) { - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); bcond_compute = 1; } btgt = ctx->base.pc_next + insn_bytes + offset; @@ -4402,7 +4402,7 @@ static void gen_compute_branch(DisasContext *ctx, uint32_t opc, case OPC_BLTZL: /* Compare to zero */ if (rs != 0) { - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); bcond_compute = 1; } btgt = ctx->base.pc_next + insn_bytes + offset; @@ -4444,7 +4444,7 @@ static void gen_compute_branch(DisasContext *ctx, uint32_t opc, gen_reserved_instruction(ctx); goto out; } - 
gen_load_gpr(btarget, rs); + gen_load_gpr_tl(btarget, rs); break; default: MIPS_INVAL("branch/jump"); @@ -4622,7 +4622,7 @@ static void gen_bitops(DisasContext *ctx, uint32_t opc, int rt, TCGv t0 = tcg_temp_new(); TCGv t1 = tcg_temp_new(); - gen_load_gpr(t1, rs); + gen_load_gpr_tl(t1, rs); switch (opc) { case OPC_EXT: if (lsb + msb > 31) { @@ -4657,7 +4657,7 @@ static void gen_bitops(DisasContext *ctx, uint32_t opc, int rt, if (lsb > msb) { goto fail; } - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); tcg_gen_deposit_tl(t0, t0, t1, lsb, msb - lsb + 1); tcg_gen_ext32s_tl(t0, t0); break; @@ -4672,7 +4672,7 @@ static void gen_bitops(DisasContext *ctx, uint32_t opc, int rt, if (lsb > msb) { goto fail; } - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); tcg_gen_deposit_tl(t0, t0, t1, lsb, msb - lsb + 1); break; #endif @@ -4695,7 +4695,7 @@ static void gen_bshfl(DisasContext *ctx, uint32_t op2, int rt, int rd) } t0 = tcg_temp_new(); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); switch (op2) { case OPC_WSBH: { @@ -4763,9 +4763,9 @@ static void gen_align_bits(DisasContext *ctx, int wordsz, int rd, int rs, t0 = tcg_temp_new(); if (bits == 0 || bits == wordsz) { if (bits == 0) { - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); } else { - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); } switch (wordsz) { case 32: @@ -4779,8 +4779,8 @@ static void gen_align_bits(DisasContext *ctx, int wordsz, int rd, int rs, } } else { TCGv t1 = tcg_temp_new(); - gen_load_gpr(t0, rt); - gen_load_gpr(t1, rs); + gen_load_gpr_tl(t0, rt); + gen_load_gpr_tl(t1, rs); switch (wordsz) { case 32: { @@ -4814,7 +4814,7 @@ static void gen_bitswap(DisasContext *ctx, int opc, int rd, int rt) return; } t0 = tcg_temp_new(); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); switch (opc) { case OPC_BITSWAP: gen_helper_bitswap(cpu_gpr[rd], t0); @@ -8290,7 +8290,7 @@ static void gen_mttr(CPUMIPSState *env, DisasContext *ctx, int rd, int rt, int other_tc = env->CP0_VPEControl & (0xff << CP0VPECo_TargTC); TCGv t0 = tcg_temp_new(); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); if ((env->CP0_VPEConf0 & (1 << CP0VPEC0_MVP)) == 0 && ((env->tcs[other_tc].CP0_TCBind & (0xf << CP0TCBd_CurVPE)) != (env->active_tc.CP0_TCBind & (0xf << CP0TCBd_CurVPE)))) { @@ -8504,7 +8504,7 @@ static void gen_cp0(CPUMIPSState *env, DisasContext *ctx, uint32_t opc, { TCGv t0 = tcg_temp_new(); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); gen_mtc0(ctx, t0, rd, ctx->opcode & 0x7); } opn = "mtc0"; @@ -8524,7 +8524,7 @@ static void gen_cp0(CPUMIPSState *env, DisasContext *ctx, uint32_t opc, { TCGv t0 = tcg_temp_new(); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); gen_dmtc0(ctx, t0, rd, ctx->opcode & 0x7); } opn = "dmtc0"; @@ -8543,7 +8543,7 @@ static void gen_cp0(CPUMIPSState *env, DisasContext *ctx, uint32_t opc, check_mvh(ctx); { TCGv t0 = tcg_temp_new(); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); gen_mthc0(ctx, t0, rd, ctx->opcode & 0x7); } opn = "mthc0"; @@ -9051,7 +9051,7 @@ static void gen_cp1(DisasContext *ctx, uint32_t opc, int rt, int fs) gen_store_gpr(t0, rt); break; case OPC_MTC1: - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); { TCGv_i32 fp0 = tcg_temp_new_i32(); @@ -9064,7 +9064,7 @@ static void gen_cp1(DisasContext *ctx, uint32_t opc, int rt, int fs) gen_store_gpr(t0, rt); break; case OPC_CTC1: - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); save_cpu_state(ctx, 0); gen_helper_0e2i(ctc1, t0, tcg_constant_i32(fs), rt); /* Stop translation as we may have changed hflags */ @@ -9076,7 +9076,7 @@ static void 
gen_cp1(DisasContext *ctx, uint32_t opc, int rt, int fs) gen_store_gpr(t0, rt); break; case OPC_DMTC1: - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); gen_store_fpr64(ctx, t0, fs); break; #endif @@ -9090,7 +9090,7 @@ static void gen_cp1(DisasContext *ctx, uint32_t opc, int rt, int fs) gen_store_gpr(t0, rt); break; case OPC_MTHC1: - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); { TCGv_i32 fp0 = tcg_temp_new_i32(); @@ -9126,7 +9126,7 @@ static void gen_movci(DisasContext *ctx, int rd, int rs, int cc, int tf) t0 = tcg_temp_new_i32(); tcg_gen_andi_i32(t0, fpu_fcr31, 1 << get_fp_bit(cc)); tcg_gen_brcondi_i32(cond, t0, 0, l1); - gen_load_gpr(cpu_gpr[rd], rs); + gen_load_gpr_tl(cpu_gpr[rd], rs); gen_set_label(l1); } @@ -10546,9 +10546,9 @@ static void gen_flt3_ldst(DisasContext *ctx, uint32_t opc, TCGv t0 = tcg_temp_new(); if (base == 0) { - gen_load_gpr(t0, index); + gen_load_gpr_tl(t0, index); } else if (index == 0) { - gen_load_gpr(t0, base); + gen_load_gpr_tl(t0, base); } else { gen_op_addr_add(ctx, t0, cpu_gpr[base], cpu_gpr[index]); } @@ -10628,7 +10628,7 @@ static void gen_flt3_arith(DisasContext *ctx, uint32_t opc, TCGLabel *l1 = gen_new_label(); TCGLabel *l2 = gen_new_label(); - gen_load_gpr(t0, fr); + gen_load_gpr_tl(t0, fr); tcg_gen_andi_tl(t0, t0, 0x7); tcg_gen_brcondi_tl(TCG_COND_NE, t0, 0, l1); @@ -11006,8 +11006,8 @@ static void gen_compute_compact_branch(DisasContext *ctx, uint32_t opc, /* compact branch */ case OPC_BOVC: /* OPC_BEQZALC, OPC_BEQC */ case OPC_BNVC: /* OPC_BNEZALC, OPC_BNEC */ - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); bcond_compute = 1; ctx->btarget = addr_add(ctx, ctx->base.pc_next + 4, offset); if (rs <= rt && rs == 0) { @@ -11017,8 +11017,8 @@ static void gen_compute_compact_branch(DisasContext *ctx, uint32_t opc, break; case OPC_BLEZC: /* OPC_BGEZC, OPC_BGEC */ case OPC_BGTZC: /* OPC_BLTZC, OPC_BLTC */ - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); bcond_compute = 1; ctx->btarget = addr_add(ctx, ctx->base.pc_next + 4, offset); break; @@ -11029,8 +11029,8 @@ static void gen_compute_compact_branch(DisasContext *ctx, uint32_t opc, /* OPC_BGTZALC, OPC_BLTZALC */ tcg_gen_movi_tl(cpu_gpr[31], ctx->base.pc_next + 4 + m16_lowbit); } - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); bcond_compute = 1; ctx->btarget = addr_add(ctx, ctx->base.pc_next + 4, offset); break; @@ -11042,14 +11042,14 @@ static void gen_compute_compact_branch(DisasContext *ctx, uint32_t opc, case OPC_BNEZC: if (rs != 0) { /* OPC_BEQZC, OPC_BNEZC */ - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); bcond_compute = 1; ctx->btarget = addr_add(ctx, ctx->base.pc_next + 4, offset); } else { /* OPC_JIC, OPC_JIALC */ TCGv tbase = tcg_temp_new(); - gen_load_gpr(tbase, rt); + gen_load_gpr_tl(tbase, rt); gen_op_addr_addi(ctx, btarget, tbase, offset); } break; @@ -11145,8 +11145,8 @@ static void gen_compute_compact_branch(DisasContext *ctx, uint32_t opc, TCGv t4 = tcg_temp_new(); TCGv input_overflow = tcg_temp_new(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); tcg_gen_ext32s_tl(t2, t0); tcg_gen_setcond_tl(TCG_COND_NE, input_overflow, t2, t0); tcg_gen_ext32s_tl(t3, t1); @@ -11248,10 +11248,10 @@ void gen_ldxs(DisasContext *ctx, int base, int index, int rd) TCGv t0 = tcg_temp_new(); TCGv t1 = tcg_temp_new(); - gen_load_gpr(t0, base); + gen_load_gpr_tl(t0, base); if (index != 0) { - 
gen_load_gpr(t1, index); + gen_load_gpr_tl(t1, index); tcg_gen_shli_tl(t1, t1, 2); gen_op_addr_add(ctx, t0, t1, t0); } @@ -11334,9 +11334,9 @@ static void gen_mips_lx(DisasContext *ctx, uint32_t opc, t0 = tcg_temp_new(); if (base == 0) { - gen_load_gpr(t0, offset); + gen_load_gpr_tl(t0, offset); } else if (offset == 0) { - gen_load_gpr(t0, base); + gen_load_gpr_tl(t0, base); } else { gen_op_addr_add(ctx, t0, cpu_gpr[base], cpu_gpr[offset]); } @@ -11377,8 +11377,8 @@ static void gen_mipsdsp_arith(DisasContext *ctx, uint32_t op1, uint32_t op2, v1_t = tcg_temp_new(); v2_t = tcg_temp_new(); - gen_load_gpr(v1_t, v1); - gen_load_gpr(v2_t, v2); + gen_load_gpr_tl(v1_t, v1); + gen_load_gpr_tl(v2_t, v2); switch (op1) { case OPC_ADDUH_QB_DSP: @@ -11822,8 +11822,8 @@ static void gen_mipsdsp_shift(DisasContext *ctx, uint32_t opc, v2_t = tcg_temp_new(); tcg_gen_movi_tl(t0, v1); - gen_load_gpr(v1_t, v1); - gen_load_gpr(v2_t, v2); + gen_load_gpr_tl(v1_t, v1); + gen_load_gpr_tl(v2_t, v2); switch (opc) { case OPC_SHLL_QB_DSP: @@ -12060,8 +12060,8 @@ static void gen_mipsdsp_multiply(DisasContext *ctx, uint32_t op1, uint32_t op2, v2_t = tcg_temp_new(); tcg_gen_movi_i32(t0, ret); - gen_load_gpr(v1_t, v1); - gen_load_gpr(v2_t, v2); + gen_load_gpr_tl(v1_t, v1); + gen_load_gpr_tl(v2_t, v2); switch (op1) { case OPC_MUL_PH_DSP: @@ -12359,7 +12359,7 @@ static void gen_mipsdsp_bitinsn(DisasContext *ctx, uint32_t op1, uint32_t op2, t0 = tcg_temp_new(); val_t = tcg_temp_new(); - gen_load_gpr(val_t, val); + gen_load_gpr_tl(val_t, val); switch (op1) { case OPC_ABSQ_S_PH_DSP: @@ -12498,8 +12498,8 @@ static void gen_mipsdsp_add_cmp_pick(DisasContext *ctx, v1_t = tcg_temp_new(); v2_t = tcg_temp_new(); - gen_load_gpr(v1_t, v1); - gen_load_gpr(v2_t, v2); + gen_load_gpr_tl(v1_t, v1); + gen_load_gpr_tl(v2_t, v2); switch (op1) { case OPC_CMPU_EQ_QB_DSP: @@ -12676,7 +12676,7 @@ static void gen_mipsdsp_append(CPUMIPSState *env, DisasContext *ctx, } t0 = tcg_temp_new(); - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); switch (op1) { case OPC_APPEND_DSP: @@ -12768,7 +12768,7 @@ static void gen_mipsdsp_accinsn(DisasContext *ctx, uint32_t op1, uint32_t op2, t1 = tcg_temp_new(); v1_t = tcg_temp_new(); - gen_load_gpr(v1_t, v1); + gen_load_gpr_tl(v1_t, v1); switch (op1) { case OPC_EXTR_W_DSP: @@ -13785,8 +13785,8 @@ static void decode_opc_special3_legacy(CPUMIPSState *env, DisasContext *ctx) t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rt); - gen_load_gpr(t1, rs); + gen_load_gpr_tl(t0, rt); + gen_load_gpr_tl(t1, rs); gen_helper_insv(cpu_gpr[rt], tcg_env, t1, t0); break; @@ -14045,8 +14045,8 @@ static void decode_opc_special3_legacy(CPUMIPSState *env, DisasContext *ctx) t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rt); - gen_load_gpr(t1, rs); + gen_load_gpr_tl(t0, rt); + gen_load_gpr_tl(t1, rs); gen_helper_dinsv(cpu_gpr[rt], tcg_env, t1, t0); break; @@ -14272,8 +14272,8 @@ static void decode_opc_special3(CPUMIPSState *env, DisasContext *ctx) TCGv t0 = tcg_temp_new(); TCGv t1 = tcg_temp_new(); - gen_load_gpr(t0, rt); - gen_load_gpr(t1, rs); + gen_load_gpr_tl(t0, rt); + gen_load_gpr_tl(t1, rs); gen_helper_fork(t0, t1); } break; @@ -14282,7 +14282,7 @@ static void decode_opc_special3(CPUMIPSState *env, DisasContext *ctx) { TCGv t0 = tcg_temp_new(); - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); gen_helper_yield(t0, tcg_env, t0); gen_store_gpr(t0, rd); } @@ -14973,7 +14973,7 @@ static bool decode_opc_legacy(CPUMIPSState *env, DisasContext *ctx) generate_exception(ctx, EXCP_RI); } else if (rt != 0) { TCGv 
t0 = tcg_temp_new(); - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); tcg_gen_addi_tl(cpu_gpr[rt], t0, imm << 16); } #else diff --git a/target/mips/tcg/translate_addr_const.c b/target/mips/tcg/translate_addr_const.c index 6f4b39f715b..e66361c97dd 100644 --- a/target/mips/tcg/translate_addr_const.c +++ b/target/mips/tcg/translate_addr_const.c @@ -24,8 +24,8 @@ bool gen_lsa(DisasContext *ctx, int rd, int rt, int rs, int sa) } t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); tcg_gen_shli_tl(t0, t0, sa + 1); tcg_gen_add_tl(cpu_gpr[rd], t0, t1); tcg_gen_ext32s_tl(cpu_gpr[rd], cpu_gpr[rd]); @@ -45,8 +45,8 @@ bool gen_dlsa(DisasContext *ctx, int rd, int rt, int rs, int sa) } t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); tcg_gen_shli_tl(t0, t0, sa + 1); tcg_gen_add_tl(cpu_gpr[rd], t0, t1); return true; diff --git a/target/mips/tcg/tx79_translate.c b/target/mips/tcg/tx79_translate.c index ae3f5e19c43..9a204a2d884 100644 --- a/target/mips/tcg/tx79_translate.c +++ b/target/mips/tcg/tx79_translate.c @@ -78,14 +78,14 @@ static bool trans_MFLO1(DisasContext *ctx, arg_r *a) static bool trans_MTHI1(DisasContext *ctx, arg_r *a) { - gen_load_gpr(cpu_HI[1], a->rs); + gen_load_gpr_tl(cpu_HI[1], a->rs); return true; } static bool trans_MTLO1(DisasContext *ctx, arg_r *a) { - gen_load_gpr(cpu_LO[1], a->rs); + gen_load_gpr_tl(cpu_LO[1], a->rs); return true; } @@ -128,8 +128,8 @@ static bool trans_parallel_arith(DisasContext *ctx, arg_r *a, bx = tcg_temp_new_i64(); /* Lower half */ - gen_load_gpr(ax, a->rs); - gen_load_gpr(bx, a->rt); + gen_load_gpr_tl(ax, a->rs); + gen_load_gpr_tl(bx, a->rt); gen_logic_i64(cpu_gpr[a->rd], ax, bx); /* Upper half */ @@ -250,8 +250,8 @@ static bool trans_parallel_compare(DisasContext *ctx, arg_r *a, t2 = tcg_temp_new_i64(); /* Lower half */ - gen_load_gpr(ax, a->rs); - gen_load_gpr(bx, a->rt); + gen_load_gpr_tl(ax, a->rs); + gen_load_gpr_tl(bx, a->rt); for (int i = 0; i < (64 / wlen); i++) { tcg_gen_sextract_i64(t0, ax, wlen * i, wlen); tcg_gen_sextract_i64(t1, bx, wlen * i, wlen); @@ -363,7 +363,7 @@ static bool trans_SQ(DisasContext *ctx, arg_i *a) tcg_gen_andi_tl(addr, addr, ~0xf); /* Lower half */ - gen_load_gpr(t0, a->rt); + gen_load_gpr_tl(t0, a->rt); tcg_gen_qemu_st_i64(t0, addr, ctx->mem_idx, mo_endian(ctx) | MO_UQ); /* Upper half */ @@ -427,8 +427,8 @@ static bool trans_PPACW(DisasContext *ctx, arg_r *a) b0 = tcg_temp_new_i64(); t0 = tcg_temp_new_i64(); - gen_load_gpr(a0, a->rs); - gen_load_gpr(b0, a->rt); + gen_load_gpr_tl(a0, a->rs); + gen_load_gpr_tl(b0, a->rt); gen_load_gpr_hi(t0, a->rt); /* b1 */ tcg_gen_deposit_i64(cpu_gpr[a->rd], b0, t0, 32, 32); @@ -457,8 +457,8 @@ static bool trans_PEXTLx(DisasContext *ctx, arg_r *a, unsigned wlen) ax = tcg_temp_new_i64(); bx = tcg_temp_new_i64(); - gen_load_gpr(ax, a->rs); - gen_load_gpr(bx, a->rt); + gen_load_gpr_tl(ax, a->rs); + gen_load_gpr_tl(bx, a->rt); /* Lower half */ for (int i = 0; i < 64 / (2 * wlen); i++) { @@ -506,8 +506,8 @@ static bool trans_PEXTLW(DisasContext *ctx, arg_r *a) ax = tcg_temp_new_i64(); bx = tcg_temp_new_i64(); - gen_load_gpr(ax, a->rs); - gen_load_gpr(bx, a->rt); + gen_load_gpr_tl(ax, a->rs); + gen_load_gpr_tl(bx, a->rt); gen_pextw(cpu_gpr[a->rd], cpu_gpr_hi[a->rd], ax, bx); return true; } diff --git a/target/mips/tcg/vr54xx_translate.c b/target/mips/tcg/vr54xx_translate.c index c877ede76e9..d1e9f0e51cd 100644 
--- a/target/mips/tcg/vr54xx_translate.c +++ b/target/mips/tcg/vr54xx_translate.c @@ -40,8 +40,8 @@ static bool trans_mult_acc(DisasContext *ctx, arg_r *a, TCGv t0 = tcg_temp_new(); TCGv t1 = tcg_temp_new(); - gen_load_gpr(t0, a->rs); - gen_load_gpr(t1, a->rt); + gen_load_gpr_tl(t0, a->rs); + gen_load_gpr_tl(t1, a->rt); gen_helper_mult_acc(t0, tcg_env, t0, t1); diff --git a/target/mips/tcg/micromips_translate.c.inc b/target/mips/tcg/micromips_translate.c.inc index c479bec1081..fd85977bb8b 100644 --- a/target/mips/tcg/micromips_translate.c.inc +++ b/target/mips/tcg/micromips_translate.c.inc @@ -861,8 +861,8 @@ static inline void gen_movep(DisasContext *ctx, int enc_dest, int enc_rt, rd = rd_enc[enc_dest]; re = re_enc[enc_dest]; - gen_load_gpr(cpu_gpr[rd], rs_rt_enc[enc_rs]); - gen_load_gpr(cpu_gpr[re], rs_rt_enc[enc_rt]); + gen_load_gpr_tl(cpu_gpr[rd], rs_rt_enc[enc_rs]); + gen_load_gpr_tl(cpu_gpr[re], rs_rt_enc[enc_rt]); } static void gen_pool16c_r6_insn(DisasContext *ctx) @@ -986,11 +986,11 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd, gen_store_gpr(t1, rd + 1); break; case SWP: - gen_load_gpr(t1, rd); + gen_load_gpr_tl(t1, rd); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); gen_op_addr_addi(ctx, t0, t0, 4); - gen_load_gpr(t1, rd + 1); + gen_load_gpr_tl(t1, rd + 1); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); break; @@ -1009,11 +1009,11 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd, gen_store_gpr(t1, rd + 1); break; case SDP: - gen_load_gpr(t1, rd); + gen_load_gpr_tl(t1, rd); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); gen_op_addr_addi(ctx, t0, t0, 8); - gen_load_gpr(t1, rd + 1); + gen_load_gpr_tl(t1, rd + 1); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); break; @@ -1064,7 +1064,7 @@ static void gen_pool32axf(CPUMIPSState *env, DisasContext *ctx, int rt, int rs) { TCGv t0 = tcg_temp_new(); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); gen_mtc0(ctx, t0, rs, (ctx->opcode >> 11) & 0x7); } break; diff --git a/target/mips/tcg/mips16e_translate.c.inc b/target/mips/tcg/mips16e_translate.c.inc index a9af8f1e74a..52a34b3c4b9 100644 --- a/target/mips/tcg/mips16e_translate.c.inc +++ b/target/mips/tcg/mips16e_translate.c.inc @@ -132,7 +132,7 @@ static void decr_and_store(DisasContext *ctx, unsigned regidx, TCGv t0) TCGv t1 = tcg_temp_new(); gen_op_addr_addi(ctx, t0, t0, -4); - gen_load_gpr(t1, regidx); + gen_load_gpr_tl(t1, regidx); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); } @@ -180,30 +180,30 @@ static void gen_mips16_save(DisasContext *ctx, switch (args) { case 4: gen_base_offset_addr(ctx, t0, 29, 12); - gen_load_gpr(t1, 7); + gen_load_gpr_tl(t1, 7); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); /* Fall through */ case 3: gen_base_offset_addr(ctx, t0, 29, 8); - gen_load_gpr(t1, 6); + gen_load_gpr_tl(t1, 6); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); /* Fall through */ case 2: gen_base_offset_addr(ctx, t0, 29, 4); - gen_load_gpr(t1, 5); + gen_load_gpr_tl(t1, 5); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); /* Fall through */ case 1: gen_base_offset_addr(ctx, t0, 29, 0); - gen_load_gpr(t1, 4); + gen_load_gpr_tl(t1, 4); tcg_gen_qemu_st_tl(t1, t0, 
ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); } - gen_load_gpr(t0, 29); + gen_load_gpr_tl(t0, 29); if (do_ra) { decr_and_store(ctx, 31, t0); diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc index 1e274143bbd..99ce1f96564 100644 --- a/target/mips/tcg/nanomips_translate.c.inc +++ b/target/mips/tcg/nanomips_translate.c.inc @@ -1029,8 +1029,8 @@ static void gen_scwp(DisasContext *ctx, uint32_t base, int16_t offset, tcg_gen_ld_tl(lladdr, tcg_env, offsetof(CPUMIPSState, lladdr)); tcg_gen_brcond_tl(TCG_COND_NE, taddr, lladdr, lab_fail); - gen_load_gpr(tmp1, reg1); - gen_load_gpr(tmp2, reg2); + gen_load_gpr_tl(tmp1, reg1); + gen_load_gpr_tl(tmp2, reg2); if (disas_is_bigendian(ctx)) { tcg_gen_concat_tl_i64(tval, tmp2, tmp1); @@ -1073,7 +1073,7 @@ static void gen_save(DisasContext *ctx, uint8_t rt, uint8_t count, int this_rt = use_gp ? 28 : (rt & 0x10) | ((rt + counter) & 0x1f); int this_offset = -((counter + 1) << 2); gen_base_offset_addr(ctx, va, 29, this_offset); - gen_load_gpr(t0, this_rt); + gen_load_gpr_tl(t0, this_rt); tcg_gen_qemu_st_tl(t0, va, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); counter++; @@ -1121,8 +1121,8 @@ static void gen_compute_branch_nm(DisasContext *ctx, uint32_t opc, case OPC_BNE: /* Compare two registers */ if (rs != rt) { - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); bcond_compute = 1; } btgt = ctx->base.pc_next + insn_bytes + offset; @@ -1130,7 +1130,7 @@ static void gen_compute_branch_nm(DisasContext *ctx, uint32_t opc, case OPC_BGEZAL: /* Compare to zero */ if (rs != 0) { - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); bcond_compute = 1; } btgt = ctx->base.pc_next + insn_bytes + offset; @@ -1152,7 +1152,7 @@ static void gen_compute_branch_nm(DisasContext *ctx, uint32_t opc, gen_reserved_instruction(ctx); goto out; } - gen_load_gpr(btarget, rs); + gen_load_gpr_tl(btarget, rs); break; default: MIPS_INVAL("branch/jump"); @@ -1358,8 +1358,8 @@ static void gen_pool32a0_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) TCGv t1 = tcg_temp_new(); TCGv t2 = tcg_temp_new(); - gen_load_gpr(t1, rs); - gen_load_gpr(t2, rt); + gen_load_gpr_tl(t1, rs); + gen_load_gpr_tl(t2, rt); tcg_gen_add_tl(t0, t1, t2); tcg_gen_ext32s_tl(t0, t0); tcg_gen_xor_tl(t1, t1, t2); @@ -1409,7 +1409,7 @@ static void gen_pool32a0_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) { TCGv t0 = tcg_temp_new(); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); gen_mtc0(ctx, t0, rs, extract32(ctx->opcode, 11, 3)); } break; @@ -1458,8 +1458,8 @@ static void gen_pool32a0_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) TCGv t0 = tcg_temp_new(); TCGv t1 = tcg_temp_new(); - gen_load_gpr(t0, rt); - gen_load_gpr(t1, rs); + gen_load_gpr_tl(t0, rt); + gen_load_gpr_tl(t1, rs); gen_helper_fork(t0, t1); } break; @@ -1484,7 +1484,7 @@ static void gen_pool32a0_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) { TCGv t0 = tcg_temp_new(); - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); gen_helper_yield(t0, tcg_env, t0); gen_store_gpr(t0, rt); } @@ -1511,8 +1511,8 @@ static void gen_pool32axf_1_5_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_movi_i32(t0, v2 >> 3); - gen_load_gpr(v0_t, ret); - gen_load_gpr(v1_t, v1); + gen_load_gpr_tl(v0_t, ret); + gen_load_gpr_tl(v1_t, v1); switch (opc) { case NM_MAQ_S_W_PHR: @@ -1545,7 +1545,7 @@ static void gen_pool32axf_1_nanomips_insn(DisasContext *ctx, uint32_t opc, TCGv t0 = tcg_temp_new(); TCGv v0_t = 
tcg_temp_new(); - gen_load_gpr(v0_t, v1); + gen_load_gpr_tl(v0_t, v1); switch (opc) { case NM_POOL32AXF_1_0: @@ -1588,7 +1588,7 @@ static void gen_pool32axf_1_nanomips_insn(DisasContext *ctx, uint32_t opc, gen_store_gpr(t0, ret); break; case NM_WRDSP: - gen_load_gpr(t0, ret); + gen_load_gpr_tl(t0, ret); gen_helper_wrdsp(t0, tcg_constant_tl(imm), tcg_env); break; case NM_EXTP: @@ -1776,8 +1776,8 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, TCGv v0_t = tcg_temp_new(); TCGv v1_t = tcg_temp_new(); - gen_load_gpr(v0_t, rt); - gen_load_gpr(v1_t, rs); + gen_load_gpr_tl(v0_t, rt); + gen_load_gpr_tl(v1_t, rs); switch (opc) { case NM_POOL32AXF_2_0_7: @@ -1791,7 +1791,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, case NM_BALIGN: check_dsp_r2(ctx); if (rt != 0) { - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); rd &= 3; if (rd != 0 && rd != 2) { tcg_gen_shli_tl(cpu_gpr[ret], cpu_gpr[ret], 8 * rd); @@ -1809,8 +1809,8 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, TCGv_i64 t2 = tcg_temp_new_i64(); TCGv_i64 t3 = tcg_temp_new_i64(); - gen_load_gpr(t0, rt); - gen_load_gpr(t1, rs); + gen_load_gpr_tl(t0, rt); + gen_load_gpr_tl(t1, rs); tcg_gen_ext_tl_i64(t2, t0); tcg_gen_ext_tl_i64(t3, t1); tcg_gen_mul_i64(t2, t2, t3); @@ -1830,8 +1830,8 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, if (acc || ctx->insn_flags & ISA_MIPS_R6) { check_dsp_r2(ctx); } - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); tcg_gen_trunc_tl_i32(t2, t0); tcg_gen_trunc_tl_i32(t3, t1); tcg_gen_muls2_i32(t2, t3, t2, t3); @@ -1841,7 +1841,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, break; case NM_EXTRV_W: check_dsp(ctx); - gen_load_gpr(v1_t, rs); + gen_load_gpr_tl(v1_t, rs); gen_helper_extr_w(t0, tcg_constant_tl(rd >> 3), v1_t, tcg_env); gen_store_gpr(t0, ret); break; @@ -1862,8 +1862,8 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, TCGv_i64 t2 = tcg_temp_new_i64(); TCGv_i64 t3 = tcg_temp_new_i64(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); tcg_gen_ext32u_tl(t0, t0); tcg_gen_ext32u_tl(t1, t1); tcg_gen_extu_tl_i64(t2, t0); @@ -1885,8 +1885,8 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, if (acc || ctx->insn_flags & ISA_MIPS_R6) { check_dsp_r2(ctx); } - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); tcg_gen_trunc_tl_i32(t2, t0); tcg_gen_trunc_tl_i32(t3, t1); tcg_gen_mulu2_i32(t2, t3, t2, t3); @@ -1925,8 +1925,8 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, TCGv_i64 t2 = tcg_temp_new_i64(); TCGv_i64 t3 = tcg_temp_new_i64(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); tcg_gen_ext_tl_i64(t2, t0); tcg_gen_ext_tl_i64(t3, t1); tcg_gen_mul_i64(t2, t2, t3); @@ -1964,8 +1964,8 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, TCGv_i64 t2 = tcg_temp_new_i64(); TCGv_i64 t3 = tcg_temp_new_i64(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); tcg_gen_ext32u_tl(t0, t0); tcg_gen_ext32u_tl(t1, t1); tcg_gen_extu_tl_i64(t2, t0); @@ -1997,7 +1997,7 @@ static void gen_pool32axf_4_nanomips_insn(DisasContext *ctx, uint32_t opc, TCGv t0 = tcg_temp_new(); TCGv v0_t = tcg_temp_new(); - gen_load_gpr(v0_t, rs); 
+ gen_load_gpr_tl(v0_t, rs); switch (opc) { case NM_ABSQ_S_QB: @@ -2096,7 +2096,7 @@ static void gen_pool32axf_4_nanomips_insn(DisasContext *ctx, uint32_t opc, { TCGv tv0 = tcg_temp_new(); - gen_load_gpr(tv0, rt); + gen_load_gpr_tl(tv0, rt); gen_helper_insv(v0_t, tcg_env, v0_t, tv0); gen_store_gpr(v0_t, ret); } @@ -2132,7 +2132,7 @@ static void gen_pool32axf_7_nanomips_insn(DisasContext *ctx, uint32_t opc, TCGv t0 = tcg_temp_new(); TCGv rs_t = tcg_temp_new(); - gen_load_gpr(rs_t, rs); + gen_load_gpr_tl(rs_t, rs); switch (opc) { case NM_SHRA_R_QB: @@ -2289,7 +2289,7 @@ static void gen_compute_imm_branch(DisasContext *ctx, uint32_t opc, TCGv t0 = tcg_temp_new(); TCGv timm = tcg_constant_tl(imm); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); ctx->btarget = addr_add(ctx, ctx->base.pc_next + 4, offset); /* Load needed operands and calculate btarget */ @@ -2389,7 +2389,7 @@ static void gen_compute_nanomips_pbalrsc_branch(DisasContext *ctx, int rs, TCGv t0 = tcg_temp_new(); /* load rs */ - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); /* link */ if (rt != 0) { @@ -2422,8 +2422,8 @@ static void gen_compute_compact_branch_nm(DisasContext *ctx, uint32_t opc, /* compact branch */ case OPC_BGEC: case OPC_BLTC: - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); bcond_compute = 1; ctx->btarget = addr_add(ctx, ctx->base.pc_next + 4, offset); break; @@ -2434,8 +2434,8 @@ static void gen_compute_compact_branch_nm(DisasContext *ctx, uint32_t opc, /* OPC_BGTZALC, OPC_BLTZALC */ tcg_gen_movi_tl(cpu_gpr[31], ctx->base.pc_next + 4); } - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); bcond_compute = 1; ctx->btarget = addr_add(ctx, ctx->base.pc_next + 4, offset); break; @@ -2445,14 +2445,14 @@ static void gen_compute_compact_branch_nm(DisasContext *ctx, uint32_t opc, case OPC_BEQZC: if (rs != 0) { /* OPC_BEQZC, OPC_BNEZC */ - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); bcond_compute = 1; ctx->btarget = addr_add(ctx, ctx->base.pc_next + 4, offset); } else { /* OPC_JIC, OPC_JIALC */ TCGv tbase = tcg_temp_new(); - gen_load_gpr(tbase, rt); + gen_load_gpr_tl(tbase, rt); gen_op_addr_addi(ctx, btarget, tbase, offset); } break; @@ -2587,8 +2587,8 @@ static void gen_p_lsx(DisasContext *ctx, int rd, int rs, int rt) t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); if ((extract32(ctx->opcode, 6, 1)) == 1) { /* PP.LSXS instructions require shifting */ @@ -2648,20 +2648,20 @@ static void gen_p_lsx(DisasContext *ctx, int rd, int rs, int rt) break; case NM_SBX: check_nms(ctx); - gen_load_gpr(t1, rd); + gen_load_gpr_tl(t1, rd); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_8); break; case NM_SHX: /*case NM_SHXS:*/ check_nms(ctx); - gen_load_gpr(t1, rd); + gen_load_gpr_tl(t1, rd); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UW | ctx->default_tcg_memop_mask); break; case NM_SWX: /*case NM_SWXS:*/ check_nms(ctx); - gen_load_gpr(t1, rd); + gen_load_gpr_tl(t1, rd); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); break; @@ -3010,8 +3010,8 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, TCGv v1_t = tcg_temp_new(); TCGv v2_t = tcg_temp_new(); - gen_load_gpr(v1_t, rs); - gen_load_gpr(v2_t, rt); + gen_load_gpr_tl(v1_t, rs); + gen_load_gpr_tl(v2_t, rt); switch (opc) { case NM_CMP_EQ_PH: @@ -3386,7 +3386,7 @@ static void 
gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, break; case NM_APPEND: check_dsp_r2(ctx); - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); if (rd != 0) { tcg_gen_deposit_tl(cpu_gpr[rt], t0, cpu_gpr[rt], rd, 32 - rd); } @@ -3722,7 +3722,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) target_long addr = addr_add(ctx, ctx->base.pc_next + 6, addr_off); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); tcg_gen_qemu_st_tl(t1, tcg_constant_tl(addr), ctx->mem_idx, mo_endian(ctx) | MO_UL @@ -3785,7 +3785,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) TCGv t0 = tcg_temp_new(); imm = extract32(ctx->opcode, 0, 12); - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); tcg_gen_setcondi_tl(TCG_COND_EQ, t0, t0, imm); gen_store_gpr(t0, rt); } @@ -3843,7 +3843,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) TCGv_i32 stripe = tcg_constant_i32(extract32(ctx->opcode, 6, 1)); - gen_load_gpr(t0, rs); + gen_load_gpr_tl(t0, rs); gen_helper_rotx(cpu_gpr[rt], t0, shift, shiftx, stripe); } break; @@ -4109,7 +4109,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) gen_store_gpr(t0, rt); break; case NM_UASH: - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t1, rt); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UW | MO_UNALN); break; @@ -4300,7 +4300,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) break; case NM_SWM: this_rt = (rt == 0) ? 0 : this_rt; - gen_load_gpr(t1, this_rt); + gen_load_gpr_tl(t1, this_rt); tcg_gen_qemu_st_tl(t1, va, ctx->mem_idx, memop | mo_endian(ctx) | MO_UL); break; @@ -4324,7 +4324,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) rd = (extract32(ctx->opcode, 24, 1)) == 0 ? 
4 : 5; rt = decode_gpr_gpr4_zero(extract32(ctx->opcode, 25, 1) << 3 | extract32(ctx->opcode, 21, 3)); - gen_load_gpr(t0, rt); + gen_load_gpr_tl(t0, rt); tcg_gen_mov_tl(cpu_gpr[rd], t0); gen_compute_branch_nm(ctx, OPC_BGEZAL, 4, 0, 0, s); } @@ -4808,8 +4808,8 @@ static int decode_isa_nanomips(CPUMIPSState *env, DisasContext *ctx) rs = r1; rt = r2; } - gen_load_gpr(t0, rs); - gen_load_gpr(t1, rt); + gen_load_gpr_tl(t0, rs); + gen_load_gpr_tl(t1, rt); tcg_gen_mov_tl(cpu_gpr[rd], t0); tcg_gen_mov_tl(cpu_gpr[re], t1); }
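For context, a minimal sketch of how a typical translation handler uses these accessors once this patch and the next one (which renames gen_store_gpr() to gen_store_gpr_tl()) are applied. It is illustrative only and not part of the series: trans_EXAMPLE is a placeholder handler name, while DisasContext, arg_r and the TCG calls are the ones visible in the diffs. TCGv holds a target_long-sized value, hence the "_tl" suffix shared with the tcg_gen_*_tl() operations.

/*
 * Sketch only: load two GPRs into target_long-sized temporaries,
 * combine them with a *_tl TCG op, and write the result back.
 */
static bool trans_EXAMPLE(DisasContext *ctx, arg_r *a)
{
    TCGv t0 = tcg_temp_new();
    TCGv t1 = tcg_temp_new();

    gen_load_gpr_tl(t0, a->rs);    /* GPR[rs] -> target_long-wide temp */
    gen_load_gpr_tl(t1, a->rt);    /* GPR[rt] -> temp */
    tcg_gen_add_tl(t0, t0, t1);    /* operate with the *_tl ops */
    gen_store_gpr_tl(t0, a->rd);   /* write the result back to GPR[rd] */
    return true;
}

Keeping the _tl suffix on the GPR accessors makes it obvious at the call site that they traffic in target_long values, as opposed to the fixed-width variants such as gen_load_gpr_hi(TCGv_i64, int) seen in translate.h.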
4j+acySrLmbJToynIl2j4oR9xgqf1N3a4Sa+tQ/uOwe1tGvqC4gBFGODW8n/yioxna8uL8tMEBJ I3MIWSIu/2ls2mZRKUIy1xbjZhomO4osfqYJqQJWN+zcd2RkbF+MnCTeC2qFkmiQX/6Fok2bjfa KztyPCB1I4Mg98t3kEC4pFYF3rZ2MrX0J8HIS0IruXMJtDm9JWOJGE4VRELmIQACfDoZQat X-Google-Smtp-Source: AGHT+IGF0YKThHkrnYNvhw1uMZivM40ptauYqYMSUzxW1x7zfgQHvs7yk3QIX6u1wrz4kCsrai8GCA== X-Received: by 2002:a05:600c:5122:b0:433:c76d:d57e with SMTP id 5b1f17b1804b1-433ce420ad8mr142152885e9.5.1732626962288; Tue, 26 Nov 2024 05:16:02 -0800 (PST) Received: from localhost.localdomain ([176.176.143.205]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-4349eeb375fsm78750925e9.3.2024.11.26.05.16.00 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Tue, 26 Nov 2024 05:16:01 -0800 (PST) From: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= To: qemu-devel@nongnu.org Cc: Aurelien Jarno , Aleksandar Rikalo , Anton Johansson , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Huacai Chen , Jiaxun Yang Subject: [PATCH 02/13] target/mips: Rename gen_store_gpr() -> gen_store_gpr_tl() Date: Tue, 26 Nov 2024 14:15:34 +0100 Message-ID: <20241126131546.66145-3-philmd@linaro.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241126131546.66145-1-philmd@linaro.org> References: <20241126131546.66145-1-philmd@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::32c; envelope-from=philmd@linaro.org; helo=mail-wm1-x32c.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org MIPS gen_store_gpr() takes a target-specific TCGv argument. Rename it as gen_store_gpr_tl() to clarify, like other TCG core helpers. 
Mechanical change doing: $ sed -i -e 's/gen_store_gpr/gen_store_gpr_tl/' \ $(git grep -l gen_store_gpr) Signed-off-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson --- target/mips/tcg/translate.h | 2 +- target/mips/tcg/lcsr_translate.c | 4 +- target/mips/tcg/msa_translate.c | 2 +- target/mips/tcg/mxu_translate.c | 20 +- target/mips/tcg/octeon_translate.c | 6 +- target/mips/tcg/translate.c | 120 +++++----- target/mips/tcg/tx79_translate.c | 6 +- target/mips/tcg/vr54xx_translate.c | 2 +- target/mips/tcg/micromips_translate.c.inc | 12 +- target/mips/tcg/mips16e_translate.c.inc | 2 +- target/mips/tcg/nanomips_translate.c.inc | 260 +++++++++++----------- 11 files changed, 218 insertions(+), 218 deletions(-) diff --git a/target/mips/tcg/translate.h b/target/mips/tcg/translate.h index f1aa706a357..49f174d3617 100644 --- a/target/mips/tcg/translate.h +++ b/target/mips/tcg/translate.h @@ -156,7 +156,7 @@ void gen_base_offset_addr(DisasContext *ctx, TCGv addr, int base, int offset); void gen_move_low32(TCGv ret, TCGv_i64 arg); void gen_move_high32(TCGv ret, TCGv_i64 arg); void gen_load_gpr_tl(TCGv t, int reg); -void gen_store_gpr(TCGv t, int reg); +void gen_store_gpr_tl(TCGv t, int reg); #if defined(TARGET_MIPS64) void gen_load_gpr_hi(TCGv_i64 t, int reg); void gen_store_gpr_hi(TCGv_i64 t, int reg); diff --git a/target/mips/tcg/lcsr_translate.c b/target/mips/tcg/lcsr_translate.c index 193b211049c..2ca5562b480 100644 --- a/target/mips/tcg/lcsr_translate.c +++ b/target/mips/tcg/lcsr_translate.c @@ -23,7 +23,7 @@ static bool trans_CPUCFG(DisasContext *ctx, arg_CPUCFG *a) gen_load_gpr_tl(src1, a->rs); gen_helper_lcsr_cpucfg(dest, tcg_env, src1); - gen_store_gpr(dest, a->rd); + gen_store_gpr_tl(dest, a->rd); return true; } @@ -38,7 +38,7 @@ static bool gen_rdcsr(DisasContext *ctx, arg_r *a, check_cp0_enabled(ctx); gen_load_gpr_tl(src1, a->rs); func(dest, tcg_env, src1); - gen_store_gpr(dest, a->rd); + gen_store_gpr_tl(dest, a->rd); return true; } diff --git a/target/mips/tcg/msa_translate.c b/target/mips/tcg/msa_translate.c index 6f6eaab93aa..25939da4b3e 100644 --- a/target/mips/tcg/msa_translate.c +++ b/target/mips/tcg/msa_translate.c @@ -553,7 +553,7 @@ static bool trans_CFCMSA(DisasContext *ctx, arg_msa_elm *a) telm = tcg_temp_new(); gen_helper_msa_cfcmsa(telm, tcg_env, tcg_constant_i32(a->ws)); - gen_store_gpr(telm, a->wd); + gen_store_gpr_tl(telm, a->wd); return true; } diff --git a/target/mips/tcg/mxu_translate.c b/target/mips/tcg/mxu_translate.c index 002447a10d7..9525aebc053 100644 --- a/target/mips/tcg/mxu_translate.c +++ b/target/mips/tcg/mxu_translate.c @@ -706,7 +706,7 @@ static void gen_mxu_s32m2i(DisasContext *ctx) gen_load_mxu_cr(t0); } - gen_store_gpr(t0, Rb); + gen_store_gpr_tl(t0, Rb); } /* @@ -731,7 +731,7 @@ static void gen_mxu_s8ldd(DisasContext *ctx, bool postmodify) gen_load_gpr_tl(t0, Rb); tcg_gen_addi_tl(t0, t0, (int8_t)s8); if (postmodify) { - gen_store_gpr(t0, Rb); + gen_store_gpr_tl(t0, Rb); } switch (optn3) { @@ -816,7 +816,7 @@ static void gen_mxu_s8std(DisasContext *ctx, bool postmodify) gen_load_gpr_tl(t0, Rb); tcg_gen_addi_tl(t0, t0, (int8_t)s8); if (postmodify) { - gen_store_gpr(t0, Rb); + gen_store_gpr_tl(t0, Rb); } gen_load_mxu_gpr(t1, XRa); @@ -865,7 +865,7 @@ static void gen_mxu_s16ldd(DisasContext *ctx, bool postmodify) gen_load_gpr_tl(t0, Rb); tcg_gen_addi_tl(t0, t0, s10); if (postmodify) { - gen_store_gpr(t0, Rb); + gen_store_gpr_tl(t0, Rb); } switch (optn2) { @@ -924,7 +924,7 @@ static void gen_mxu_s16std(DisasContext *ctx, bool postmodify) 
gen_load_gpr_tl(t0, Rb); tcg_gen_addi_tl(t0, t0, s10); if (postmodify) { - gen_store_gpr(t0, Rb); + gen_store_gpr_tl(t0, Rb); } gen_load_mxu_gpr(t1, XRa); @@ -1538,7 +1538,7 @@ static void gen_mxu_s32ldxx(DisasContext *ctx, bool reversed, bool postinc) gen_store_mxu_gpr(t1, XRa); if (postinc) { - gen_store_gpr(t0, Rb); + gen_store_gpr_tl(t0, Rb); } } @@ -1573,7 +1573,7 @@ static void gen_mxu_s32stxx(DisasContext *ctx, bool reversed, bool postinc) ctx->default_tcg_memop_mask); if (postinc) { - gen_store_gpr(t0, Rb); + gen_store_gpr_tl(t0, Rb); } } @@ -1610,7 +1610,7 @@ static void gen_mxu_s32ldxvx(DisasContext *ctx, bool reversed, gen_store_mxu_gpr(t1, XRa); if (postinc) { - gen_store_gpr(t0, Rb); + gen_store_gpr_tl(t0, Rb); } } @@ -1643,7 +1643,7 @@ static void gen_mxu_lxx(DisasContext *ctx, uint32_t strd2, MemOp mop) tcg_gen_add_tl(t0, t0, t1); tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mop | ctx->default_tcg_memop_mask); - gen_store_gpr(t1, Ra); + gen_store_gpr_tl(t1, Ra); } /* @@ -1679,7 +1679,7 @@ static void gen_mxu_s32stxvx(DisasContext *ctx, bool reversed, ctx->default_tcg_memop_mask); if (postinc) { - gen_store_gpr(t0, Rb); + gen_store_gpr_tl(t0, Rb); } } diff --git a/target/mips/tcg/octeon_translate.c b/target/mips/tcg/octeon_translate.c index 6b0dbf946d8..587f4f8f692 100644 --- a/target/mips/tcg/octeon_translate.c +++ b/target/mips/tcg/octeon_translate.c @@ -90,7 +90,7 @@ static bool trans_EXTS(DisasContext *ctx, arg_EXTS *a) t0 = tcg_temp_new(); gen_load_gpr_tl(t0, a->rs); tcg_gen_sextract_tl(t0, t0, a->p, a->lenm1 + 1); - gen_store_gpr(t0, a->rt); + gen_store_gpr_tl(t0, a->rt); return true; } @@ -106,7 +106,7 @@ static bool trans_CINS(DisasContext *ctx, arg_CINS *a) t0 = tcg_temp_new(); gen_load_gpr_tl(t0, a->rs); tcg_gen_deposit_z_tl(t0, t0, a->p, a->lenm1 + 1); - gen_store_gpr(t0, a->rt); + gen_store_gpr_tl(t0, a->rt); return true; } @@ -125,7 +125,7 @@ static bool trans_POP(DisasContext *ctx, arg_POP *a) tcg_gen_andi_i64(t0, t0, 0xffffffff); } tcg_gen_ctpop_tl(t0, t0); - gen_store_gpr(t0, a->rd); + gen_store_gpr_tl(t0, a->rd); return true; } diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c index 13fbe5d378f..629846a596d 100644 --- a/target/mips/tcg/translate.c +++ b/target/mips/tcg/translate.c @@ -1198,7 +1198,7 @@ void gen_load_gpr_tl(TCGv t, int reg) } } -void gen_store_gpr(TCGv t, int reg) +void gen_store_gpr_tl(TCGv t, int reg) { assert(reg >= 0 && reg <= ARRAY_SIZE(cpu_gpr)); if (reg != 0) { @@ -1246,7 +1246,7 @@ static inline void gen_load_srsgpr(int from, int to) tcg_gen_ld_tl(t0, addr, sizeof(target_ulong) * from); } - gen_store_gpr(t0, to); + gen_store_gpr_tl(t0, to); } static inline void gen_store_srsgpr(int from, int to) @@ -2049,42 +2049,42 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, case OPC_LWU: tcg_gen_qemu_ld_tl(t0, t0, mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_LD: tcg_gen_qemu_ld_tl(t0, t0, mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_LLD: case R6_OPC_LLD: op_ld_lld(t0, t0, mem_idx, ctx); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_LDL: t1 = tcg_temp_new(); gen_load_gpr_tl(t1, rt); gen_lxl(ctx, t1, t0, mem_idx, mo_endian(ctx) | MO_UQ); - gen_store_gpr(t1, rt); + gen_store_gpr_tl(t1, rt); break; case OPC_LDR: t1 = tcg_temp_new(); gen_load_gpr_tl(t1, rt); gen_lxr(ctx, t1, t0, mem_idx, mo_endian(ctx) | MO_UQ); - gen_store_gpr(t1, 
rt); + gen_store_gpr_tl(t1, rt); break; case OPC_LDPC: t1 = tcg_constant_tl(pc_relative_pc(ctx)); gen_op_addr_add(ctx, t0, t0, t1); tcg_gen_qemu_ld_tl(t0, t0, mem_idx, mo_endian(ctx) | MO_UQ); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; #endif case OPC_LWPC: t1 = tcg_constant_tl(pc_relative_pc(ctx)); gen_op_addr_add(ctx, t0, t0, t1); tcg_gen_qemu_ld_tl(t0, t0, mem_idx, mo_endian(ctx) | MO_SL); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_LWE: mem_idx = MIPS_HFLAG_UM; @@ -2092,7 +2092,7 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, case OPC_LW: tcg_gen_qemu_ld_tl(t0, t0, mem_idx, mo_endian(ctx) | MO_SL | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_LHE: mem_idx = MIPS_HFLAG_UM; @@ -2100,7 +2100,7 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, case OPC_LH: tcg_gen_qemu_ld_tl(t0, t0, mem_idx, mo_endian(ctx) | MO_SW | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_LHUE: mem_idx = MIPS_HFLAG_UM; @@ -2108,21 +2108,21 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, case OPC_LHU: tcg_gen_qemu_ld_tl(t0, t0, mem_idx, mo_endian(ctx) | MO_UW | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_LBE: mem_idx = MIPS_HFLAG_UM; /* fall through */ case OPC_LB: tcg_gen_qemu_ld_tl(t0, t0, mem_idx, MO_SB); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_LBUE: mem_idx = MIPS_HFLAG_UM; /* fall through */ case OPC_LBU: tcg_gen_qemu_ld_tl(t0, t0, mem_idx, MO_UB); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_LWLE: mem_idx = MIPS_HFLAG_UM; @@ -2132,7 +2132,7 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, gen_load_gpr_tl(t1, rt); gen_lxl(ctx, t1, t0, mem_idx, mo_endian(ctx) | MO_UL); tcg_gen_ext32s_tl(t1, t1); - gen_store_gpr(t1, rt); + gen_store_gpr_tl(t1, rt); break; case OPC_LWRE: mem_idx = MIPS_HFLAG_UM; @@ -2142,7 +2142,7 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, gen_load_gpr_tl(t1, rt); gen_lxr(ctx, t1, t0, mem_idx, mo_endian(ctx) | MO_UL); tcg_gen_ext32s_tl(t1, t1); - gen_store_gpr(t1, rt); + gen_store_gpr_tl(t1, rt); break; case OPC_LLE: mem_idx = MIPS_HFLAG_UM; @@ -2150,7 +2150,7 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, case OPC_LL: case R6_OPC_LL: op_ld_ll(t0, t0, mem_idx, ctx); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; } } @@ -2227,7 +2227,7 @@ static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset, /* compare the address against that of the preceding LL */ gen_base_offset_addr(ctx, addr, base, offset); tcg_gen_brcond_tl(TCG_COND_EQ, addr, cpu_lladdr, l1); - gen_store_gpr(tcg_constant_tl(0), rt); + gen_store_gpr_tl(tcg_constant_tl(0), rt); tcg_gen_br(done); gen_set_label(l1); @@ -2237,7 +2237,7 @@ static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset, tcg_gen_atomic_cmpxchg_tl(t0, cpu_lladdr, cpu_llval, val, eva ? 
MIPS_HFLAG_UM : ctx->mem_idx, tcg_mo); tcg_gen_setcond_tl(TCG_COND_EQ, t0, t0, cpu_llval); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); gen_set_label(done); } @@ -2344,7 +2344,7 @@ static void gen_arith_imm(DisasContext *ctx, uint32_t opc, generate_exception(ctx, EXCP_OVERFLOW); gen_set_label(l1); tcg_gen_ext32s_tl(t0, t0); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } break; case OPC_ADDIU: @@ -2373,7 +2373,7 @@ static void gen_arith_imm(DisasContext *ctx, uint32_t opc, /* operands of same sign, result different sign */ generate_exception(ctx, EXCP_OVERFLOW); gen_set_label(l1); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } break; case OPC_DADDIU: @@ -2564,7 +2564,7 @@ static void gen_arith(DisasContext *ctx, uint32_t opc, /* operands of same sign, result different sign */ generate_exception(ctx, EXCP_OVERFLOW); gen_set_label(l1); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); } break; case OPC_ADDU: @@ -2600,7 +2600,7 @@ static void gen_arith(DisasContext *ctx, uint32_t opc, */ generate_exception(ctx, EXCP_OVERFLOW); gen_set_label(l1); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); } break; case OPC_SUBU: @@ -2634,7 +2634,7 @@ static void gen_arith(DisasContext *ctx, uint32_t opc, /* operands of same sign, result different sign */ generate_exception(ctx, EXCP_OVERFLOW); gen_set_label(l1); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); } break; case OPC_DADDU: @@ -2668,7 +2668,7 @@ static void gen_arith(DisasContext *ctx, uint32_t opc, */ generate_exception(ctx, EXCP_OVERFLOW); gen_set_label(l1); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); } break; case OPC_DSUBU: @@ -2940,7 +2940,7 @@ static inline void gen_r6_ld(target_long addr, int reg, int memidx, { TCGv t0 = tcg_temp_new(); tcg_gen_qemu_ld_tl(t0, tcg_constant_tl(addr), memidx, memop); - gen_store_gpr(t0, reg); + gen_store_gpr_tl(t0, reg); } static inline void gen_pcrel(DisasContext *ctx, int opc, target_ulong pc, @@ -3948,8 +3948,8 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, gen_base_offset_addr(ctx, t0, rs, lsq_offset + 8); tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); - gen_store_gpr(t1, rt); - gen_store_gpr(t0, lsq_rt1); + gen_store_gpr_tl(t1, rt); + gen_store_gpr_tl(t0, lsq_rt1); break; case OPC_GSLQC1: check_cp1_enabled(ctx); @@ -4140,12 +4140,12 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, switch (opc) { case OPC_GSLBX: tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_SB); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_GSLHX: tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_SW | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_GSLWX: gen_base_offset_addr(ctx, t0, rs, offset); @@ -4154,7 +4154,7 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, } tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; #if defined(TARGET_MIPS64) case OPC_GSLDX: @@ -4164,7 +4164,7 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, } tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; #endif case OPC_GSLWXC1: @@ -4682,7 +4682,7 @@ fail: gen_reserved_instruction(ctx); return; } - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } static void gen_bshfl(DisasContext *ctx, uint32_t op2, int rt, int rd) @@ -8276,7 
+8276,7 @@ static void gen_mftr(CPUMIPSState *env, DisasContext *ctx, int rt, int rd, } } trace_mips_translate_tr("mftr", rt, u, sel, h); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); return; die: @@ -9048,7 +9048,7 @@ static void gen_cp1(DisasContext *ctx, uint32_t opc, int rt, int fs) gen_load_fpr32(ctx, fp0, fs); tcg_gen_ext_i32_tl(t0, fp0); } - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_MTC1: gen_load_gpr_tl(t0, rt); @@ -9061,7 +9061,7 @@ static void gen_cp1(DisasContext *ctx, uint32_t opc, int rt, int fs) break; case OPC_CFC1: gen_helper_1e0i(cfc1, t0, fs); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_CTC1: gen_load_gpr_tl(t0, rt); @@ -9073,7 +9073,7 @@ static void gen_cp1(DisasContext *ctx, uint32_t opc, int rt, int fs) #if defined(TARGET_MIPS64) case OPC_DMFC1: gen_load_fpr64(ctx, t0, fs); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_DMTC1: gen_load_gpr_tl(t0, rt); @@ -9087,7 +9087,7 @@ static void gen_cp1(DisasContext *ctx, uint32_t opc, int rt, int fs) gen_load_fpr32h(ctx, fp0, fs); tcg_gen_ext_i32_tl(t0, fp0); } - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_MTHC1: gen_load_gpr_tl(t0, rt); @@ -10848,16 +10848,16 @@ void gen_rdhwr(DisasContext *ctx, int rt, int rd, int sel) switch (rd) { case 0: gen_helper_rdhwr_cpunum(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case 1: gen_helper_rdhwr_synci_step(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case 2: translator_io_start(&ctx->base); gen_helper_rdhwr_cc(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); /* * Break the TB to be able to take timer interrupts immediately * after reading count. DISAS_STOP isn't sufficient, we need to ensure @@ -10868,7 +10868,7 @@ void gen_rdhwr(DisasContext *ctx, int rt, int rd, int sel) break; case 3: gen_helper_rdhwr_ccres(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case 4: check_insn(ctx, ISA_MIPS_R6); @@ -10880,25 +10880,25 @@ void gen_rdhwr(DisasContext *ctx, int rt, int rd, int sel) generate_exception(ctx, EXCP_RI); } gen_helper_rdhwr_performance(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case 5: check_insn(ctx, ISA_MIPS_R6); gen_helper_rdhwr_xnp(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case 29: #if defined(CONFIG_USER_ONLY) tcg_gen_ld_tl(t0, tcg_env, offsetof(CPUMIPSState, active_tc.CP0_UserLocal)); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; #else if ((ctx->hflags & MIPS_HFLAG_CP0) || (ctx->hflags & MIPS_HFLAG_HWRENA_ULR)) { tcg_gen_ld_tl(t0, tcg_env, offsetof(CPUMIPSState, active_tc.CP0_UserLocal)); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } else { gen_reserved_instruction(ctx); } @@ -11257,7 +11257,7 @@ void gen_ldxs(DisasContext *ctx, int base, int index, int rd) } tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL); - gen_store_gpr(t1, rd); + gen_store_gpr_tl(t1, rd); } static void gen_sync(int stype) @@ -11344,20 +11344,20 @@ static void gen_mips_lx(DisasContext *ctx, uint32_t opc, switch (opc) { case OPC_LBUX: tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_UB); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); break; case OPC_LHX: tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_SW); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); break; case OPC_LWX: tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, 
rd); break; #if defined(TARGET_MIPS64) case OPC_LDX: tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); break; #endif } @@ -14284,7 +14284,7 @@ static void decode_opc_special3(CPUMIPSState *env, DisasContext *ctx) gen_load_gpr_tl(t0, rs); gen_helper_yield(t0, tcg_env, t0); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); } break; default: @@ -14465,42 +14465,42 @@ static bool decode_opc_legacy(CPUMIPSState *env, DisasContext *ctx) case OPC_DMT: check_cp0_mt(ctx); gen_helper_dmt(t0); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_EMT: check_cp0_mt(ctx); gen_helper_emt(t0); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_DVPE: check_cp0_mt(ctx); gen_helper_dvpe(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_EVPE: check_cp0_mt(ctx); gen_helper_evpe(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case OPC_DVP: check_insn(ctx, ISA_MIPS_R6); if (ctx->vp) { gen_helper_dvp(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } break; case OPC_EVP: check_insn(ctx, ISA_MIPS_R6); if (ctx->vp) { gen_helper_evp(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } break; case OPC_DI: check_insn(ctx, ISA_MIPS_R2); save_cpu_state(ctx, 1); gen_helper_di(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); /* * Stop translation as we may have switched * the execution mode. @@ -14511,7 +14511,7 @@ static bool decode_opc_legacy(CPUMIPSState *env, DisasContext *ctx) check_insn(ctx, ISA_MIPS_R2); save_cpu_state(ctx, 1); gen_helper_ei(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); /* * DISAS_STOP isn't sufficient, we need to ensure we break * out of translated code to check for pending interrupts. 
diff --git a/target/mips/tcg/tx79_translate.c b/target/mips/tcg/tx79_translate.c index 9a204a2d884..90d63e5dfc4 100644 --- a/target/mips/tcg/tx79_translate.c +++ b/target/mips/tcg/tx79_translate.c @@ -64,14 +64,14 @@ bool decode_ext_tx79(DisasContext *ctx, uint32_t insn) static bool trans_MFHI1(DisasContext *ctx, arg_r *a) { - gen_store_gpr(cpu_HI[1], a->rd); + gen_store_gpr_tl(cpu_HI[1], a->rd); return true; } static bool trans_MFLO1(DisasContext *ctx, arg_r *a) { - gen_store_gpr(cpu_LO[1], a->rd); + gen_store_gpr_tl(cpu_LO[1], a->rd); return true; } @@ -341,7 +341,7 @@ static bool trans_LQ(DisasContext *ctx, arg_i *a) /* Lower half */ tcg_gen_qemu_ld_i64(t0, addr, ctx->mem_idx, mo_endian(ctx) | MO_UQ); - gen_store_gpr(t0, a->rt); + gen_store_gpr_tl(t0, a->rt); /* Upper half */ tcg_gen_addi_i64(addr, addr, 8); diff --git a/target/mips/tcg/vr54xx_translate.c b/target/mips/tcg/vr54xx_translate.c index d1e9f0e51cd..99ce81b7159 100644 --- a/target/mips/tcg/vr54xx_translate.c +++ b/target/mips/tcg/vr54xx_translate.c @@ -45,7 +45,7 @@ static bool trans_mult_acc(DisasContext *ctx, arg_r *a, gen_helper_mult_acc(t0, tcg_env, t0, t1); - gen_store_gpr(t0, a->rd); + gen_store_gpr_tl(t0, a->rd); return true; } diff --git a/target/mips/tcg/micromips_translate.c.inc b/target/mips/tcg/micromips_translate.c.inc index fd85977bb8b..cb3dbd264a0 100644 --- a/target/mips/tcg/micromips_translate.c.inc +++ b/target/mips/tcg/micromips_translate.c.inc @@ -979,11 +979,11 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd, } tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL | ctx->default_tcg_memop_mask); - gen_store_gpr(t1, rd); + gen_store_gpr_tl(t1, rd); gen_op_addr_addi(ctx, t0, t0, 4); tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL | ctx->default_tcg_memop_mask); - gen_store_gpr(t1, rd + 1); + gen_store_gpr_tl(t1, rd + 1); break; case SWP: gen_load_gpr_tl(t1, rd); @@ -1002,11 +1002,11 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd, } tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); - gen_store_gpr(t1, rd); + gen_store_gpr_tl(t1, rd); gen_op_addr_addi(ctx, t0, t0, 8); tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); - gen_store_gpr(t1, rd + 1); + gen_store_gpr_tl(t1, rd + 1); break; case SDP: gen_load_gpr_tl(t1, rd); @@ -1268,7 +1268,7 @@ static void gen_pool32axf(CPUMIPSState *env, DisasContext *ctx, int rt, int rs) save_cpu_state(ctx, 1); gen_helper_di(t0, tcg_env); - gen_store_gpr(t0, rs); + gen_store_gpr_tl(t0, rs); /* * Stop translation as we may have switched the execution * mode. @@ -1283,7 +1283,7 @@ static void gen_pool32axf(CPUMIPSState *env, DisasContext *ctx, int rt, int rs) save_cpu_state(ctx, 1); gen_helper_ei(t0, tcg_env); - gen_store_gpr(t0, rs); + gen_store_gpr_tl(t0, rs); /* * DISAS_STOP isn't sufficient, we need to ensure we break out * of translated code to check for pending interrupts. 
diff --git a/target/mips/tcg/mips16e_translate.c.inc b/target/mips/tcg/mips16e_translate.c.inc index 52a34b3c4b9..ceb41be0c26 100644 --- a/target/mips/tcg/mips16e_translate.c.inc +++ b/target/mips/tcg/mips16e_translate.c.inc @@ -295,7 +295,7 @@ static void decr_and_load(DisasContext *ctx, unsigned regidx, TCGv t0) gen_op_addr_add(ctx, t0, t0, t2); tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TE | MO_SL | ctx->default_tcg_memop_mask); - gen_store_gpr(t1, regidx); + gen_store_gpr_tl(t1, regidx); } static void gen_mips16_restore(DisasContext *ctx, diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc index 99ce1f96564..31a31c00979 100644 --- a/target/mips/tcg/nanomips_translate.c.inc +++ b/target/mips/tcg/nanomips_translate.c.inc @@ -1005,8 +1005,8 @@ static void gen_llwp(DisasContext *ctx, uint32_t base, int16_t offset, } else { tcg_gen_extr_i64_tl(tmp1, tmp2, tval); } - gen_store_gpr(tmp1, reg1); - gen_store_gpr(tmp2, reg2); + gen_store_gpr_tl(tmp1, reg1); + gen_store_gpr_tl(tmp2, reg2); tcg_gen_st_i64(tval, tcg_env, offsetof(CPUMIPSState, llval_wp)); tcg_gen_st_tl(taddr, tcg_env, offsetof(CPUMIPSState, lladdr)); } @@ -1098,7 +1098,7 @@ static void gen_restore(DisasContext *ctx, uint8_t rt, uint8_t count, tcg_gen_qemu_ld_tl(t0, va, ctx->mem_idx, mo_endian(ctx) | MO_SL | ctx->default_tcg_memop_mask); tcg_gen_ext32s_tl(t0, t0); - gen_store_gpr(t0, this_rt); + gen_store_gpr_tl(t0, this_rt); counter++; } @@ -1336,14 +1336,14 @@ static void gen_pool32a0_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) if (ctx->vp) { check_cp0_enabled(ctx); gen_helper_dvp(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } break; case NM_EVP: if (ctx->vp) { check_cp0_enabled(ctx); gen_helper_evp(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } break; } @@ -1368,7 +1368,7 @@ static void gen_pool32a0_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) /* operands of same sign, result different sign */ tcg_gen_setcondi_tl(TCG_COND_LT, t0, t1, 0); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); } break; case NM_MUL: @@ -1424,12 +1424,12 @@ static void gen_pool32a0_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) /* DMT */ check_cp0_mt(ctx); gen_helper_dmt(t0); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } else if (rs == 0) { /* DVPE */ check_cp0_mt(ctx); gen_helper_dvpe(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } else { gen_reserved_instruction(ctx); } @@ -1439,12 +1439,12 @@ static void gen_pool32a0_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) /* EMT */ check_cp0_mt(ctx); gen_helper_emt(t0); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } else if (rs == 0) { /* EVPE */ check_cp0_mt(ctx); gen_helper_evpe(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } else { gen_reserved_instruction(ctx); } @@ -1486,7 +1486,7 @@ static void gen_pool32a0_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) gen_load_gpr_tl(t0, rs); gen_helper_yield(t0, tcg_env, t0); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } break; #endif @@ -1585,7 +1585,7 @@ static void gen_pool32axf_1_nanomips_insn(DisasContext *ctx, uint32_t opc, switch (extract32(ctx->opcode, 12, 2)) { case NM_RDDSP: gen_helper_rddsp(t0, tcg_constant_tl(imm), tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; case NM_WRDSP: gen_load_gpr_tl(t0, ret); @@ -1594,12 +1594,12 @@ static void gen_pool32axf_1_nanomips_insn(DisasContext *ctx, uint32_t opc, case NM_EXTP: gen_helper_extp(t0, 
tcg_constant_tl(v2 >> 3), tcg_constant_tl(v1), tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; case NM_EXTPDP: gen_helper_extpdp(t0, tcg_constant_tl(v2 >> 3), tcg_constant_tl(v1), tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; } break; @@ -1608,11 +1608,11 @@ static void gen_pool32axf_1_nanomips_insn(DisasContext *ctx, uint32_t opc, switch (extract32(ctx->opcode, 12, 1)) { case NM_SHLL_QB: gen_helper_shll_qb(t0, tcg_constant_tl(v2 >> 2), v0_t, tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; case NM_SHRL_QB: gen_helper_shrl_qb(t0, tcg_constant_tl(v2 >> 2), v0_t); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; } break; @@ -1626,22 +1626,22 @@ static void gen_pool32axf_1_nanomips_insn(DisasContext *ctx, uint32_t opc, case NM_EXTR_W: gen_helper_extr_w(t0, tcg_constant_tl(v2 >> 3), tcg_constant_tl(v1), tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; case NM_EXTR_R_W: gen_helper_extr_r_w(t0, tcg_constant_tl(v2 >> 3), tcg_constant_tl(v1), tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; case NM_EXTR_RS_W: gen_helper_extr_rs_w(t0, tcg_constant_tl(v2 >> 3), tcg_constant_tl(v1), tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; case NM_EXTR_S_H: gen_helper_extr_s_h(t0, tcg_constant_tl(v2 >> 3), tcg_constant_tl(v1), tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; } break; @@ -1843,7 +1843,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, check_dsp(ctx); gen_load_gpr_tl(v1_t, rs); gen_helper_extr_w(t0, tcg_constant_tl(rd >> 3), v1_t, tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; } break; @@ -1897,7 +1897,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, case NM_EXTRV_R_W: check_dsp(ctx); gen_helper_extr_r_w(t0, tcg_constant_tl(rd >> 3), v1_t, tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; default: gen_reserved_instruction(ctx); @@ -1916,7 +1916,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, case NM_EXTPV: check_dsp(ctx); gen_helper_extp(t0, tcg_constant_tl(rd >> 3), v1_t, tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; case NM_MSUB: check_dsp(ctx); @@ -1939,7 +1939,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, case NM_EXTRV_RS_W: check_dsp(ctx); gen_helper_extr_rs_w(t0, tcg_constant_tl(rd >> 3), v1_t, tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; } break; @@ -1955,7 +1955,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, case NM_EXTPDPV: check_dsp(ctx); gen_helper_extpdp(t0, tcg_constant_tl(rd >> 3), v1_t, tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; case NM_MSUBU: check_dsp(ctx); @@ -1980,7 +1980,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, case NM_EXTRV_S_H: check_dsp(ctx); gen_helper_extr_s_h(t0, tcg_constant_tl(rd >> 3), v1_t, tcg_env); - gen_store_gpr(t0, ret); + gen_store_gpr_tl(t0, ret); break; } break; @@ -2003,70 +2003,70 @@ static void gen_pool32axf_4_nanomips_insn(DisasContext *ctx, uint32_t opc, case NM_ABSQ_S_QB: check_dsp_r2(ctx); gen_helper_absq_s_qb(v0_t, v0_t, tcg_env); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_ABSQ_S_PH: check_dsp(ctx); gen_helper_absq_s_ph(v0_t, v0_t, tcg_env); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_ABSQ_S_W: 
check_dsp(ctx); gen_helper_absq_s_w(v0_t, v0_t, tcg_env); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_PRECEQ_W_PHL: check_dsp(ctx); tcg_gen_andi_tl(v0_t, v0_t, 0xFFFF0000); tcg_gen_ext32s_tl(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_PRECEQ_W_PHR: check_dsp(ctx); tcg_gen_andi_tl(v0_t, v0_t, 0x0000FFFF); tcg_gen_shli_tl(v0_t, v0_t, 16); tcg_gen_ext32s_tl(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_PRECEQU_PH_QBL: check_dsp(ctx); gen_helper_precequ_ph_qbl(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_PRECEQU_PH_QBR: check_dsp(ctx); gen_helper_precequ_ph_qbr(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_PRECEQU_PH_QBLA: check_dsp(ctx); gen_helper_precequ_ph_qbla(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_PRECEQU_PH_QBRA: check_dsp(ctx); gen_helper_precequ_ph_qbra(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_PRECEU_PH_QBL: check_dsp(ctx); gen_helper_preceu_ph_qbl(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_PRECEU_PH_QBR: check_dsp(ctx); gen_helper_preceu_ph_qbr(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_PRECEU_PH_QBLA: check_dsp(ctx); gen_helper_preceu_ph_qbla(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_PRECEU_PH_QBRA: check_dsp(ctx); gen_helper_preceu_ph_qbra(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_REPLV_PH: check_dsp(ctx); @@ -2074,7 +2074,7 @@ static void gen_pool32axf_4_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_shli_tl(t0, v0_t, 16); tcg_gen_or_tl(v0_t, v0_t, t0); tcg_gen_ext32s_tl(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_REPLV_QB: check_dsp(ctx); @@ -2084,12 +2084,12 @@ static void gen_pool32axf_4_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_shli_tl(t0, v0_t, 16); tcg_gen_or_tl(v0_t, v0_t, t0); tcg_gen_ext32s_tl(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_BITREV: check_dsp(ctx); gen_helper_bitrev(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_INSV: check_dsp(ctx); @@ -2098,13 +2098,13 @@ static void gen_pool32axf_4_nanomips_insn(DisasContext *ctx, uint32_t opc, gen_load_gpr_tl(tv0, rt); gen_helper_insv(v0_t, tcg_env, v0_t, tv0); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); } break; case NM_RADDU_W_QB: check_dsp(ctx); gen_helper_raddu_w_qb(v0_t, v0_t); - gen_store_gpr(v0_t, ret); + gen_store_gpr_tl(v0_t, ret); break; case NM_BITSWAP: gen_bitswap(ctx, OPC_BITSWAP, ret, rs); @@ -2141,19 +2141,19 @@ static void gen_pool32axf_7_nanomips_insn(DisasContext *ctx, uint32_t opc, case 0: /* NM_SHRA_QB */ gen_helper_shra_qb(t0, tcg_constant_tl(rd >> 2), rs_t); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case 1: /* NM_SHRA_R_QB */ gen_helper_shra_r_qb(t0, tcg_constant_tl(rd >> 2), rs_t); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; } break; case NM_SHRL_PH: check_dsp_r2(ctx); gen_helper_shrl_ph(t0, tcg_constant_tl(rd >> 1), rs_t); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case NM_REPL_QB: check_dsp(ctx); @@ -2166,7 +2166,7 @@ static void gen_pool32axf_7_nanomips_insn(DisasContext *ctx, uint32_t opc, (uint32_t)imm << 8 | (uint32_t)imm; 
result = (int32_t)result; - gen_store_gpr(tcg_constant_tl(result), rt); + gen_store_gpr_tl(tcg_constant_tl(result), rt); } break; default: @@ -2229,7 +2229,7 @@ static void gen_pool32axf_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) save_cpu_state(ctx, 1); gen_helper_di(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); /* Stop translation as we may have switched the execution mode */ ctx->base.is_jmp = DISAS_STOP; } @@ -2241,7 +2241,7 @@ static void gen_pool32axf_nanomips_insn(CPUMIPSState *env, DisasContext *ctx) save_cpu_state(ctx, 1); gen_helper_ei(t0, tcg_env); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); /* Stop translation as we may have switched the execution mode */ ctx->base.is_jmp = DISAS_STOP; } @@ -2622,29 +2622,29 @@ static void gen_p_lsx(DisasContext *ctx, int rd, int rs, int rt) switch (extract32(ctx->opcode, 7, 4)) { case NM_LBX: tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_SB); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); break; case NM_LHX: /*case NM_LHXS:*/ tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_SW | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); break; case NM_LWX: /*case NM_LWXS:*/ tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); break; case NM_LBUX: tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_UB); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); break; case NM_LHUX: /*case NM_LHUXS:*/ tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_UW | ctx->default_tcg_memop_mask); - gen_store_gpr(t0, rd); + gen_store_gpr_tl(t0, rd); break; case NM_SBX: check_nms(ctx); @@ -3041,70 +3041,70 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case NM_CMPGU_EQ_QB: check_dsp(ctx); gen_helper_cmpgu_eq_qb(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_CMPGU_LT_QB: check_dsp(ctx); gen_helper_cmpgu_lt_qb(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_CMPGU_LE_QB: check_dsp(ctx); gen_helper_cmpgu_le_qb(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_CMPGDU_EQ_QB: check_dsp_r2(ctx); gen_helper_cmpgu_eq_qb(v1_t, v1_t, v2_t); tcg_gen_deposit_tl(cpu_dspctrl, cpu_dspctrl, v1_t, 24, 4); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_CMPGDU_LT_QB: check_dsp_r2(ctx); gen_helper_cmpgu_lt_qb(v1_t, v1_t, v2_t); tcg_gen_deposit_tl(cpu_dspctrl, cpu_dspctrl, v1_t, 24, 4); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_CMPGDU_LE_QB: check_dsp_r2(ctx); gen_helper_cmpgu_le_qb(v1_t, v1_t, v2_t); tcg_gen_deposit_tl(cpu_dspctrl, cpu_dspctrl, v1_t, 24, 4); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_PACKRL_PH: check_dsp(ctx); gen_helper_packrl_ph(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_PICK_QB: check_dsp(ctx); gen_helper_pick_qb(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_PICK_PH: check_dsp(ctx); gen_helper_pick_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_ADDQ_S_W: check_dsp(ctx); gen_helper_addq_s_w(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_SUBQ_S_W: check_dsp(ctx); gen_helper_subq_s_w(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); 
+ gen_store_gpr_tl(v1_t, ret); break; case NM_ADDSC: check_dsp(ctx); gen_helper_addsc(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_ADDWC: check_dsp(ctx); gen_helper_addwc(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_ADDQ_S_PH: check_dsp(ctx); @@ -3112,12 +3112,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* ADDQ_PH */ gen_helper_addq_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* ADDQ_S_PH */ gen_helper_addq_s_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3127,12 +3127,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* ADDQH_PH */ gen_helper_addqh_ph(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* ADDQH_R_PH */ gen_helper_addqh_r_ph(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3142,12 +3142,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* ADDQH_W */ gen_helper_addqh_w(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* ADDQH_R_W */ gen_helper_addqh_r_w(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3157,12 +3157,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* ADDU_QB */ gen_helper_addu_qb(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* ADDU_S_QB */ gen_helper_addu_s_qb(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3172,12 +3172,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* ADDU_PH */ gen_helper_addu_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* ADDU_S_PH */ gen_helper_addu_s_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3187,12 +3187,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* ADDUH_QB */ gen_helper_adduh_qb(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* ADDUH_R_QB */ gen_helper_adduh_r_qb(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3202,12 +3202,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SHRAV_PH */ gen_helper_shra_ph(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* SHRAV_R_PH */ gen_helper_shra_r_ph(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3217,12 +3217,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SHRAV_QB */ gen_helper_shra_qb(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* SHRAV_R_QB */ gen_helper_shra_r_qb(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3232,12 +3232,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SUBQ_PH */ gen_helper_subq_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* SUBQ_S_PH */ gen_helper_subq_s_ph(v1_t, v1_t, 
v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3247,12 +3247,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SUBQH_PH */ gen_helper_subqh_ph(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* SUBQH_R_PH */ gen_helper_subqh_r_ph(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3262,12 +3262,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SUBQH_W */ gen_helper_subqh_w(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* SUBQH_R_W */ gen_helper_subqh_r_w(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3277,12 +3277,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SUBU_QB */ gen_helper_subu_qb(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* SUBU_S_QB */ gen_helper_subu_s_qb(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3292,12 +3292,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SUBU_PH */ gen_helper_subu_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* SUBU_S_PH */ gen_helper_subu_s_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3307,12 +3307,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SUBUH_QB */ gen_helper_subuh_qb(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* SUBUH_R_QB */ gen_helper_subuh_r_qb(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3322,12 +3322,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SHLLV_PH */ gen_helper_shll_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* SHLLV_S_PH */ gen_helper_shll_s_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; @@ -3340,7 +3340,7 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, TCGv_i32 sa_t = tcg_constant_i32(rd); gen_helper_precr_sra_ph_w(v1_t, sa_t, v1_t, cpu_gpr[rt]); - gen_store_gpr(v1_t, rt); + gen_store_gpr_tl(v1_t, rt); } break; case 1: @@ -3349,7 +3349,7 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, TCGv_i32 sa_t = tcg_constant_i32(rd); gen_helper_precr_sra_r_ph_w(v1_t, sa_t, v1_t, cpu_gpr[rt]); - gen_store_gpr(v1_t, rt); + gen_store_gpr_tl(v1_t, rt); } break; } @@ -3357,32 +3357,32 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case NM_MULEU_S_PH_QBL: check_dsp(ctx); gen_helper_muleu_s_ph_qbl(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_MULEU_S_PH_QBR: check_dsp(ctx); gen_helper_muleu_s_ph_qbr(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_MULQ_RS_PH: check_dsp(ctx); gen_helper_mulq_rs_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_MULQ_S_PH: check_dsp_r2(ctx); gen_helper_mulq_s_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_MULQ_RS_W: 
check_dsp_r2(ctx); gen_helper_mulq_rs_w(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_MULQ_S_W: check_dsp_r2(ctx); gen_helper_mulq_s_w(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_APPEND: check_dsp_r2(ctx); @@ -3395,32 +3395,32 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case NM_MODSUB: check_dsp(ctx); gen_helper_modsub(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_SHRAV_R_W: check_dsp(ctx); gen_helper_shra_r_w(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_SHRLV_PH: check_dsp_r2(ctx); gen_helper_shrl_ph(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_SHRLV_QB: check_dsp(ctx); gen_helper_shrl_qb(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_SHLLV_QB: check_dsp(ctx); gen_helper_shll_qb(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_SHLLV_S_W: check_dsp(ctx); gen_helper_shll_s_w(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_SHILO: check_dsp(ctx); @@ -3434,12 +3434,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case NM_MULEQ_S_W_PHL: check_dsp(ctx); gen_helper_muleq_s_w_phl(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_MULEQ_S_W_PHR: check_dsp(ctx); gen_helper_muleq_s_w_phr(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_MUL_S_PH: check_dsp_r2(ctx); @@ -3447,44 +3447,44 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* MUL_PH */ gen_helper_mul_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case 1: /* MUL_S_PH */ gen_helper_mul_s_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; } break; case NM_PRECR_QB_PH: check_dsp_r2(ctx); gen_helper_precr_qb_ph(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_PRECRQ_QB_PH: check_dsp(ctx); gen_helper_precrq_qb_ph(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_PRECRQ_PH_W: check_dsp(ctx); gen_helper_precrq_ph_w(v1_t, v1_t, v2_t); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_PRECRQ_RS_PH_W: check_dsp(ctx); gen_helper_precrq_rs_ph_w(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_PRECRQU_S_QB_PH: check_dsp(ctx); gen_helper_precrqu_s_qb_ph(v1_t, v1_t, v2_t, tcg_env); - gen_store_gpr(v1_t, ret); + gen_store_gpr_tl(v1_t, ret); break; case NM_SHRA_R_W: check_dsp(ctx); gen_helper_shra_r_w(v1_t, tcg_constant_tl(rd), v1_t); - gen_store_gpr(v1_t, rt); + gen_store_gpr_tl(v1_t, rt); break; case NM_SHRA_R_PH: check_dsp(ctx); @@ -3493,12 +3493,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SHRA_PH */ gen_helper_shra_ph(v1_t, t0, v1_t); - gen_store_gpr(v1_t, rt); + gen_store_gpr_tl(v1_t, rt); break; case 1: /* SHRA_R_PH */ gen_helper_shra_r_ph(v1_t, t0, v1_t); - gen_store_gpr(v1_t, rt); + gen_store_gpr_tl(v1_t, rt); break; } break; @@ -3509,12 +3509,12 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case 0: /* SHLL_PH */ gen_helper_shll_ph(v1_t, t0, v1_t, 
tcg_env); - gen_store_gpr(v1_t, rt); + gen_store_gpr_tl(v1_t, rt); break; case 2: /* SHLL_S_PH */ gen_helper_shll_s_ph(v1_t, t0, v1_t, tcg_env); - gen_store_gpr(v1_t, rt); + gen_store_gpr_tl(v1_t, rt); break; default: gen_reserved_instruction(ctx); @@ -3524,7 +3524,7 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc, case NM_SHLL_S_W: check_dsp(ctx); gen_helper_shll_s_w(v1_t, tcg_constant_tl(rd), v1_t, tcg_env); - gen_store_gpr(v1_t, rt); + gen_store_gpr_tl(v1_t, rt); break; case NM_REPL_PH: check_dsp(ctx); @@ -3787,7 +3787,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) imm = extract32(ctx->opcode, 0, 12); gen_load_gpr_tl(t0, rs); tcg_gen_setcondi_tl(TCG_COND_EQ, t0, t0, imm); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); } break; case NM_ADDIUNEG: @@ -4106,7 +4106,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) case NM_UALH: tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_SW | MO_UNALN); - gen_store_gpr(t0, rt); + gen_store_gpr_tl(t0, rt); break; case NM_UASH: gen_load_gpr_tl(t1, rt); @@ -4292,7 +4292,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) case NM_LWM: tcg_gen_qemu_ld_tl(t1, va, ctx->mem_idx, memop | mo_endian(ctx) | MO_SL); - gen_store_gpr(t1, this_rt); + gen_store_gpr_tl(t1, this_rt); if ((this_rt == rs) && (counter != (count - 1))) { /* UNPREDICTABLE */
From patchwork Tue Nov 26 13:15:35 2024
X-Patchwork-Id: 13885929
From: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?=
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno , Aleksandar Rikalo , Anton Johansson , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Huacai Chen , Jiaxun Yang
Subject: [PATCH 03/13] target/mips: Rename gen_move_low32() -> gen_move_low32_tl()
Date: Tue, 26 Nov 2024 14:15:35 +0100
Message-ID: <20241126131546.66145-4-philmd@linaro.org>
In-Reply-To: <20241126131546.66145-1-philmd@linaro.org>
References: <20241126131546.66145-1-philmd@linaro.org>

MIPS gen_move_low32() takes a target-specific TCGv argument. Rename it as gen_move_low32_tl() to clarify, like other TCG core helpers.
Mechanical change doing: $ sed -i -e 's/gen_move_low32/gen_move_low32_tl/' \ $(git grep -l gen_move_low32) Signed-off-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson --- target/mips/tcg/translate.h | 2 +- target/mips/tcg/mxu_translate.c | 2 +- target/mips/tcg/translate.c | 30 ++++++++++++------------ target/mips/tcg/nanomips_translate.c.inc | 8 +++---- 4 files changed, 21 insertions(+), 21 deletions(-) diff --git a/target/mips/tcg/translate.h b/target/mips/tcg/translate.h index 49f174d3617..6437180d891 100644 --- a/target/mips/tcg/translate.h +++ b/target/mips/tcg/translate.h @@ -153,7 +153,7 @@ void check_cp1_registers(DisasContext *ctx, int regs); void check_cop1x(DisasContext *ctx); void gen_base_offset_addr(DisasContext *ctx, TCGv addr, int base, int offset); -void gen_move_low32(TCGv ret, TCGv_i64 arg); +void gen_move_low32_tl(TCGv ret, TCGv_i64 arg); void gen_move_high32(TCGv ret, TCGv_i64 arg); void gen_load_gpr_tl(TCGv t, int reg); void gen_store_gpr_tl(TCGv t, int reg); diff --git a/target/mips/tcg/mxu_translate.c b/target/mips/tcg/mxu_translate.c index 9525aebc053..94aa137cb25 100644 --- a/target/mips/tcg/mxu_translate.c +++ b/target/mips/tcg/mxu_translate.c @@ -4385,7 +4385,7 @@ static void gen_mxu_s32madd_sub(DisasContext *ctx, bool sub, bool uns) } else { tcg_gen_add_i64(t3, t3, t2); } - gen_move_low32(t1, t3); + gen_move_low32_tl(t1, t3); gen_move_high32(t0, t3); tcg_gen_mov_tl(cpu_HI[0], t0); diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c index 629846a596d..5e776136f09 100644 --- a/target/mips/tcg/translate.c +++ b/target/mips/tcg/translate.c @@ -1452,7 +1452,7 @@ static target_long addr_add(DisasContext *ctx, target_long base, } /* Sign-extract the low 32-bits to a target_long. */ -void gen_move_low32(TCGv ret, TCGv_i64 arg) +void gen_move_low32_tl(TCGv ret, TCGv_i64 arg) { #if defined(TARGET_MIPS64) tcg_gen_ext32s_i64(ret, arg); @@ -3341,7 +3341,7 @@ static void gen_muldiv(DisasContext *ctx, uint32_t opc, tcg_gen_mul_i64(t2, t2, t3); tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); - gen_move_low32(cpu_LO[acc], t2); + gen_move_low32_tl(cpu_LO[acc], t2); gen_move_high32(cpu_HI[acc], t2); } break; @@ -3357,7 +3357,7 @@ static void gen_muldiv(DisasContext *ctx, uint32_t opc, tcg_gen_mul_i64(t2, t2, t3); tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); - gen_move_low32(cpu_LO[acc], t2); + gen_move_low32_tl(cpu_LO[acc], t2); gen_move_high32(cpu_HI[acc], t2); } break; @@ -3371,7 +3371,7 @@ static void gen_muldiv(DisasContext *ctx, uint32_t opc, tcg_gen_mul_i64(t2, t2, t3); tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_sub_i64(t2, t3, t2); - gen_move_low32(cpu_LO[acc], t2); + gen_move_low32_tl(cpu_LO[acc], t2); gen_move_high32(cpu_HI[acc], t2); } break; @@ -3387,7 +3387,7 @@ static void gen_muldiv(DisasContext *ctx, uint32_t opc, tcg_gen_mul_i64(t2, t2, t3); tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_sub_i64(t2, t3, t2); - gen_move_low32(cpu_LO[acc], t2); + gen_move_low32_tl(cpu_LO[acc], t2); gen_move_high32(cpu_HI[acc], t2); } break; @@ -3482,10 +3482,10 @@ static void gen_mul_txx9(DisasContext *ctx, uint32_t opc, tcg_gen_mul_i64(t2, t2, t3); tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); - gen_move_low32(cpu_LO[acc], t2); + gen_move_low32_tl(cpu_LO[acc], t2); gen_move_high32(cpu_HI[acc], t2); if (rd) { - gen_move_low32(cpu_gpr[rd], t2); + gen_move_low32_tl(cpu_gpr[rd], t2); } } break; @@ -3504,10 +3504,10 @@ 
static void gen_mul_txx9(DisasContext *ctx, uint32_t opc, tcg_gen_mul_i64(t2, t2, t3); tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); - gen_move_low32(cpu_LO[acc], t2); + gen_move_low32_tl(cpu_LO[acc], t2); gen_move_high32(cpu_HI[acc], t2); if (rd) { - gen_move_low32(cpu_gpr[rd], t2); + gen_move_low32_tl(cpu_gpr[rd], t2); } } break; @@ -4787,7 +4787,7 @@ static void gen_align_bits(DisasContext *ctx, int wordsz, int rd, int rs, TCGv_i64 t2 = tcg_temp_new_i64(); tcg_gen_concat_tl_i64(t2, t1, t0); tcg_gen_shri_i64(t2, t2, 32 - bits); - gen_move_low32(cpu_gpr[rd], t2); + gen_move_low32_tl(cpu_gpr[rd], t2); } break; #if defined(TARGET_MIPS64) @@ -4865,7 +4865,7 @@ static inline void gen_mfhc0_entrylo(TCGv arg, target_ulong off) #else tcg_gen_shri_i64(t0, t0, 32); #endif - gen_move_low32(arg, t0); + gen_move_low32_tl(arg, t0); } static inline void gen_mfhc0_load64(TCGv arg, target_ulong off, int shift) @@ -4874,7 +4874,7 @@ static inline void gen_mfhc0_load64(TCGv arg, target_ulong off, int shift) tcg_gen_ld_i64(t0, tcg_env, off); tcg_gen_shri_i64(t0, t0, 32 + shift); - gen_move_low32(arg, t0); + gen_move_low32_tl(arg, t0); } static inline void gen_mfc0_load32(TCGv arg, target_ulong off) @@ -5195,7 +5195,7 @@ static void gen_mfc0(DisasContext *ctx, TCGv arg, int reg, int sel) tcg_gen_deposit_tl(tmp, tmp, arg, 30, 2); } #endif - gen_move_low32(arg, tmp); + gen_move_low32_tl(arg, tmp); } register_name = "EntryLo0"; break; @@ -5252,7 +5252,7 @@ static void gen_mfc0(DisasContext *ctx, TCGv arg, int reg, int sel) tcg_gen_deposit_tl(tmp, tmp, arg, 30, 2); } #endif - gen_move_low32(arg, tmp); + gen_move_low32_tl(arg, tmp); } register_name = "EntryLo1"; break; @@ -5769,7 +5769,7 @@ static void gen_mfc0(DisasContext *ctx, TCGv arg, int reg, int sel) { TCGv_i64 tmp = tcg_temp_new_i64(); tcg_gen_ld_i64(tmp, tcg_env, offsetof(CPUMIPSState, CP0_TagLo)); - gen_move_low32(arg, tmp); + gen_move_low32_tl(arg, tmp); } register_name = "TagLo"; break; diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc index 31a31c00979..5a4a64f3609 100644 --- a/target/mips/tcg/nanomips_translate.c.inc +++ b/target/mips/tcg/nanomips_translate.c.inc @@ -1816,7 +1816,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_mul_i64(t2, t2, t3); tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); - gen_move_low32(cpu_LO[acc], t2); + gen_move_low32_tl(cpu_LO[acc], t2); gen_move_high32(cpu_HI[acc], t2); } break; @@ -1871,7 +1871,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_mul_i64(t2, t2, t3); tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); - gen_move_low32(cpu_LO[acc], t2); + gen_move_low32_tl(cpu_LO[acc], t2); gen_move_high32(cpu_HI[acc], t2); } break; @@ -1932,7 +1932,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_mul_i64(t2, t2, t3); tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_sub_i64(t2, t3, t2); - gen_move_low32(cpu_LO[acc], t2); + gen_move_low32_tl(cpu_LO[acc], t2); gen_move_high32(cpu_HI[acc], t2); } break; @@ -1973,7 +1973,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_mul_i64(t2, t2, t3); tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_sub_i64(t2, t3, t2); - gen_move_low32(cpu_LO[acc], t2); + gen_move_low32_tl(cpu_LO[acc], t2); gen_move_high32(cpu_HI[acc], t2); } break; From patchwork Tue Nov 26 13:15:36 
2024 Content-Type: text/plain; charset="utf-8"
ffacd0b85a97d-3825fad61b0sm13546159f8f.7.2024.11.26.05.16.12 (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Tue, 26 Nov 2024 05:16:13 -0800 (PST) From: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= To: qemu-devel@nongnu.org Cc: Aurelien Jarno , Aleksandar Rikalo , Anton Johansson , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Huacai Chen , Jiaxun Yang Subject: [PATCH 04/13] target/mips: Rename gen_move_high32() -> gen_move_high32_tl() Date: Tue, 26 Nov 2024 14:15:36 +0100 Message-ID: <20241126131546.66145-5-philmd@linaro.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241126131546.66145-1-philmd@linaro.org> References: <20241126131546.66145-1-philmd@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42a; envelope-from=philmd@linaro.org; helo=mail-wr1-x42a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org MIPS gen_move_high32() takes a target-specific TCGv argument. Rename it as gen_move_high32_tl() to clarify, like other TCG core helpers. Mechanical change doing: $ sed -i -e 's/gen_move_high32/gen_move_high32_tl/' \ $(git grep -l gen_move_high32) Signed-off-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson --- target/mips/tcg/translate.h | 2 +- target/mips/tcg/mxu_translate.c | 2 +- target/mips/tcg/translate.c | 14 +++++++------- target/mips/tcg/nanomips_translate.c.inc | 8 ++++---- 4 files changed, 13 insertions(+), 13 deletions(-) diff --git a/target/mips/tcg/translate.h b/target/mips/tcg/translate.h index 6437180d891..90b8ac0e5b1 100644 --- a/target/mips/tcg/translate.h +++ b/target/mips/tcg/translate.h @@ -154,7 +154,7 @@ void check_cop1x(DisasContext *ctx); void gen_base_offset_addr(DisasContext *ctx, TCGv addr, int base, int offset); void gen_move_low32_tl(TCGv ret, TCGv_i64 arg); -void gen_move_high32(TCGv ret, TCGv_i64 arg); +void gen_move_high32_tl(TCGv ret, TCGv_i64 arg); void gen_load_gpr_tl(TCGv t, int reg); void gen_store_gpr_tl(TCGv t, int reg); #if defined(TARGET_MIPS64) diff --git a/target/mips/tcg/mxu_translate.c b/target/mips/tcg/mxu_translate.c index 94aa137cb25..20b8314b478 100644 --- a/target/mips/tcg/mxu_translate.c +++ b/target/mips/tcg/mxu_translate.c @@ -4386,7 +4386,7 @@ static void gen_mxu_s32madd_sub(DisasContext *ctx, bool sub, bool uns) tcg_gen_add_i64(t3, t3, t2); } gen_move_low32_tl(t1, t3); - gen_move_high32(t0, t3); + gen_move_high32_tl(t0, t3); tcg_gen_mov_tl(cpu_HI[0], t0); tcg_gen_mov_tl(cpu_LO[0], t1); diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c index 5e776136f09..31907c75a62 100644 --- a/target/mips/tcg/translate.c +++ b/target/mips/tcg/translate.c @@ -1462,7 +1462,7 @@ void gen_move_low32_tl(TCGv ret, TCGv_i64 arg) } /* Sign-extract the high 32-bits to a target_long. 
*/ -void gen_move_high32(TCGv ret, TCGv_i64 arg) +void gen_move_high32_tl(TCGv ret, TCGv_i64 arg) { #if defined(TARGET_MIPS64) tcg_gen_sari_i64(ret, arg, 32); @@ -3342,7 +3342,7 @@ static void gen_muldiv(DisasContext *ctx, uint32_t opc, tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); gen_move_low32_tl(cpu_LO[acc], t2); - gen_move_high32(cpu_HI[acc], t2); + gen_move_high32_tl(cpu_HI[acc], t2); } break; case OPC_MADDU: @@ -3358,7 +3358,7 @@ static void gen_muldiv(DisasContext *ctx, uint32_t opc, tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); gen_move_low32_tl(cpu_LO[acc], t2); - gen_move_high32(cpu_HI[acc], t2); + gen_move_high32_tl(cpu_HI[acc], t2); } break; case OPC_MSUB: @@ -3372,7 +3372,7 @@ static void gen_muldiv(DisasContext *ctx, uint32_t opc, tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_sub_i64(t2, t3, t2); gen_move_low32_tl(cpu_LO[acc], t2); - gen_move_high32(cpu_HI[acc], t2); + gen_move_high32_tl(cpu_HI[acc], t2); } break; case OPC_MSUBU: @@ -3388,7 +3388,7 @@ static void gen_muldiv(DisasContext *ctx, uint32_t opc, tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_sub_i64(t2, t3, t2); gen_move_low32_tl(cpu_LO[acc], t2); - gen_move_high32(cpu_HI[acc], t2); + gen_move_high32_tl(cpu_HI[acc], t2); } break; default: @@ -3483,7 +3483,7 @@ static void gen_mul_txx9(DisasContext *ctx, uint32_t opc, tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); gen_move_low32_tl(cpu_LO[acc], t2); - gen_move_high32(cpu_HI[acc], t2); + gen_move_high32_tl(cpu_HI[acc], t2); if (rd) { gen_move_low32_tl(cpu_gpr[rd], t2); } @@ -3505,7 +3505,7 @@ static void gen_mul_txx9(DisasContext *ctx, uint32_t opc, tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); gen_move_low32_tl(cpu_LO[acc], t2); - gen_move_high32(cpu_HI[acc], t2); + gen_move_high32_tl(cpu_HI[acc], t2); if (rd) { gen_move_low32_tl(cpu_gpr[rd], t2); } diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc index 5a4a64f3609..b78381bcf54 100644 --- a/target/mips/tcg/nanomips_translate.c.inc +++ b/target/mips/tcg/nanomips_translate.c.inc @@ -1817,7 +1817,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); gen_move_low32_tl(cpu_LO[acc], t2); - gen_move_high32(cpu_HI[acc], t2); + gen_move_high32_tl(cpu_HI[acc], t2); } break; case NM_MULT: @@ -1872,7 +1872,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_add_i64(t2, t2, t3); gen_move_low32_tl(cpu_LO[acc], t2); - gen_move_high32(cpu_HI[acc], t2); + gen_move_high32_tl(cpu_HI[acc], t2); } break; case NM_MULTU: @@ -1933,7 +1933,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_sub_i64(t2, t3, t2); gen_move_low32_tl(cpu_LO[acc], t2); - gen_move_high32(cpu_HI[acc], t2); + gen_move_high32_tl(cpu_HI[acc], t2); } break; case NM_EXTRV_RS_W: @@ -1974,7 +1974,7 @@ static void gen_pool32axf_2_nanomips_insn(DisasContext *ctx, uint32_t opc, tcg_gen_concat_tl_i64(t3, cpu_LO[acc], cpu_HI[acc]); tcg_gen_sub_i64(t2, t3, t2); gen_move_low32_tl(cpu_LO[acc], t2); - gen_move_high32(cpu_HI[acc], t2); + gen_move_high32_tl(cpu_HI[acc], t2); } break; case NM_EXTRV_S_H: From patchwork Tue Nov 26 13:15:37 2024 Content-Type: text/plain; charset="utf-8" 
cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256); Tue, 26 Nov 2024 05:16:19 -0800 (PST) From: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= To: qemu-devel@nongnu.org Cc: Aurelien Jarno , Aleksandar Rikalo , Anton Johansson , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Huacai Chen , Jiaxun Yang Subject: [PATCH 05/13] target/mips: Rename gen_base_offset_addr() -> gen_base_offset_addr_tl() Date: Tue, 26 Nov 2024 14:15:37 +0100 Message-ID: <20241126131546.66145-6-philmd@linaro.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241126131546.66145-1-philmd@linaro.org> References: <20241126131546.66145-1-philmd@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::335; envelope-from=philmd@linaro.org; helo=mail-wm1-x335.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org MIPS gen_base_offset_addr() takes a target-specific TCGv argument. Rename it as gen_base_offset_addr_tl() to clarify, like other TCG core helpers. Mechanical change doing: $ sed -i -e 's/gen_base_offset_addr/gen_base_offset_addr_tl/' \ $(git grep -l gen_base_offset_addr) Signed-off-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson --- target/mips/tcg/translate.h | 2 +- target/mips/tcg/msa_translate.c | 2 +- target/mips/tcg/translate.c | 54 +++++++++++------------ target/mips/tcg/tx79_translate.c | 4 +- target/mips/tcg/micromips_translate.c.inc | 4 +- target/mips/tcg/mips16e_translate.c.inc | 8 ++-- target/mips/tcg/nanomips_translate.c.inc | 12 ++--- 7 files changed, 43 insertions(+), 43 deletions(-) diff --git a/target/mips/tcg/translate.h b/target/mips/tcg/translate.h index 90b8ac0e5b1..94dd30216f5 100644 --- a/target/mips/tcg/translate.h +++ b/target/mips/tcg/translate.h @@ -152,7 +152,7 @@ void check_cp1_64bitmode(DisasContext *ctx); void check_cp1_registers(DisasContext *ctx, int regs); void check_cop1x(DisasContext *ctx); -void gen_base_offset_addr(DisasContext *ctx, TCGv addr, int base, int offset); +void gen_base_offset_addr_tl(DisasContext *ctx, TCGv addr, int base, int offset); void gen_move_low32_tl(TCGv ret, TCGv_i64 arg); void gen_move_high32_tl(TCGv ret, TCGv_i64 arg); void gen_load_gpr_tl(TCGv t, int reg); diff --git a/target/mips/tcg/msa_translate.c b/target/mips/tcg/msa_translate.c index 25939da4b3e..b515ee52f53 100644 --- a/target/mips/tcg/msa_translate.c +++ b/target/mips/tcg/msa_translate.c @@ -769,7 +769,7 @@ static bool trans_msa_ldst(DisasContext *ctx, arg_msa_i *a, taddr = tcg_temp_new(); - gen_base_offset_addr(ctx, taddr, a->ws, a->sa << a->df); + gen_base_offset_addr_tl(ctx, taddr, a->ws, a->sa << a->df); gen_msa_ldst(tcg_env, tcg_constant_i32(a->wd), taddr); return true; diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c index 31907c75a62..667b20bccb8 100644 --- a/target/mips/tcg/translate.c +++ b/target/mips/tcg/translate.c @@ -1944,7 +1944,7 @@ OP_LD_ATOMIC(lld, mo_endian(ctx) | MO_UQ); #endif #undef OP_LD_ATOMIC -void gen_base_offset_addr(DisasContext *ctx, TCGv addr, int base, int offset) +void 
gen_base_offset_addr_tl(DisasContext *ctx, TCGv addr, int base, int offset) { if (base == 0) { tcg_gen_movi_tl(addr, offset); @@ -2042,7 +2042,7 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, } t0 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, base, offset); + gen_base_offset_addr_tl(ctx, t0, base, offset); switch (opc) { #if defined(TARGET_MIPS64) @@ -2163,7 +2163,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt, TCGv t1 = tcg_temp_new(); int mem_idx = ctx->mem_idx; - gen_base_offset_addr(ctx, t0, base, offset); + gen_base_offset_addr_tl(ctx, t0, base, offset); gen_load_gpr_tl(t1, rt); switch (opc) { #if defined(TARGET_MIPS64) @@ -2225,7 +2225,7 @@ static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset, t0 = tcg_temp_new(); addr = tcg_temp_new(); /* compare the address against that of the preceding LL */ - gen_base_offset_addr(ctx, addr, base, offset); + gen_base_offset_addr_tl(ctx, addr, base, offset); tcg_gen_brcond_tl(TCG_COND_EQ, addr, cpu_lladdr, l1); gen_store_gpr_tl(tcg_constant_tl(0), rt); tcg_gen_br(done); @@ -2303,7 +2303,7 @@ static void gen_cop1_ldst(DisasContext *ctx, uint32_t op, int rt, check_insn(ctx, ISA_MIPS2); /* Fallthrough */ default: - gen_base_offset_addr(ctx, t0, rs, imm); + gen_base_offset_addr_tl(ctx, t0, rs, imm); gen_flt_ldst(ctx, op, rt, t0); } } else { @@ -3942,10 +3942,10 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, #if defined(TARGET_MIPS64) case OPC_GSLQ: t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, rs, lsq_offset); + gen_base_offset_addr_tl(ctx, t0, rs, lsq_offset); tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); - gen_base_offset_addr(ctx, t0, rs, lsq_offset + 8); + gen_base_offset_addr_tl(ctx, t0, rs, lsq_offset + 8); tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); gen_store_gpr_tl(t1, rt); @@ -3954,10 +3954,10 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, case OPC_GSLQC1: check_cp1_enabled(ctx); t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, rs, lsq_offset); + gen_base_offset_addr_tl(ctx, t0, rs, lsq_offset); tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); - gen_base_offset_addr(ctx, t0, rs, lsq_offset + 8); + gen_base_offset_addr_tl(ctx, t0, rs, lsq_offset + 8); tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); gen_store_fpr64(ctx, t1, rt); @@ -3965,11 +3965,11 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, break; case OPC_GSSQ: t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, rs, lsq_offset); + gen_base_offset_addr_tl(ctx, t0, rs, lsq_offset); gen_load_gpr_tl(t1, rt); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); - gen_base_offset_addr(ctx, t0, rs, lsq_offset + 8); + gen_base_offset_addr_tl(ctx, t0, rs, lsq_offset + 8); gen_load_gpr_tl(t1, lsq_rt1); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); @@ -3977,11 +3977,11 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, case OPC_GSSQC1: check_cp1_enabled(ctx); t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, rs, lsq_offset); + gen_base_offset_addr_tl(ctx, t0, rs, lsq_offset); gen_load_fpr64(ctx, t1, rt); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); - gen_base_offset_addr(ctx, t0, rs, lsq_offset + 8); + gen_base_offset_addr_tl(ctx, t0, rs, lsq_offset + 
8); gen_load_fpr64(ctx, t1, lsq_rt1); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); @@ -3991,7 +3991,7 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, switch (MASK_LOONGSON_GSSHFLS(ctx->opcode)) { case OPC_GSLWLC1: check_cp1_enabled(ctx); - gen_base_offset_addr(ctx, t0, rs, shf_offset); + gen_base_offset_addr_tl(ctx, t0, rs, shf_offset); fp0 = tcg_temp_new_i32(); gen_load_fpr32(ctx, fp0, rt); t1 = tcg_temp_new(); @@ -4002,7 +4002,7 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, break; case OPC_GSLWRC1: check_cp1_enabled(ctx); - gen_base_offset_addr(ctx, t0, rs, shf_offset); + gen_base_offset_addr_tl(ctx, t0, rs, shf_offset); fp0 = tcg_temp_new_i32(); gen_load_fpr32(ctx, fp0, rt); t1 = tcg_temp_new(); @@ -4014,7 +4014,7 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, #if defined(TARGET_MIPS64) case OPC_GSLDLC1: check_cp1_enabled(ctx); - gen_base_offset_addr(ctx, t0, rs, shf_offset); + gen_base_offset_addr_tl(ctx, t0, rs, shf_offset); t1 = tcg_temp_new(); gen_load_fpr64(ctx, t1, rt); gen_lxl(ctx, t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ); @@ -4022,7 +4022,7 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, break; case OPC_GSLDRC1: check_cp1_enabled(ctx); - gen_base_offset_addr(ctx, t0, rs, shf_offset); + gen_base_offset_addr_tl(ctx, t0, rs, shf_offset); t1 = tcg_temp_new(); gen_load_fpr64(ctx, t1, rt); gen_lxr(ctx, t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ); @@ -4040,7 +4040,7 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, case OPC_GSSWLC1: check_cp1_enabled(ctx); t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, rs, shf_offset); + gen_base_offset_addr_tl(ctx, t0, rs, shf_offset); fp0 = tcg_temp_new_i32(); gen_load_fpr32(ctx, fp0, rt); tcg_gen_ext_i32_tl(t1, fp0); @@ -4049,7 +4049,7 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, case OPC_GSSWRC1: check_cp1_enabled(ctx); t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, rs, shf_offset); + gen_base_offset_addr_tl(ctx, t0, rs, shf_offset); fp0 = tcg_temp_new_i32(); gen_load_fpr32(ctx, fp0, rt); tcg_gen_ext_i32_tl(t1, fp0); @@ -4059,14 +4059,14 @@ static void gen_loongson_lswc2(DisasContext *ctx, int rt, case OPC_GSSDLC1: check_cp1_enabled(ctx); t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, rs, shf_offset); + gen_base_offset_addr_tl(ctx, t0, rs, shf_offset); gen_load_fpr64(ctx, t1, rt); gen_helper_0e2i(sdl, t1, t0, ctx->mem_idx); break; case OPC_GSSDRC1: check_cp1_enabled(ctx); t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, rs, shf_offset); + gen_base_offset_addr_tl(ctx, t0, rs, shf_offset); gen_load_fpr64(ctx, t1, rt); gen_helper_0e2i(sdr, t1, t0, ctx->mem_idx); break; @@ -4134,7 +4134,7 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, t0 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, rs, offset); + gen_base_offset_addr_tl(ctx, t0, rs, offset); gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0); switch (opc) { @@ -4148,7 +4148,7 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, gen_store_gpr_tl(t0, rt); break; case OPC_GSLWX: - gen_base_offset_addr(ctx, t0, rs, offset); + gen_base_offset_addr_tl(ctx, t0, rs, offset); if (rd) { gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0); } @@ -4158,7 +4158,7 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, break; #if defined(TARGET_MIPS64) case OPC_GSLDX: - gen_base_offset_addr(ctx, t0, rs, offset); + gen_base_offset_addr_tl(ctx, t0, rs, offset); if (rd) { gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0); } @@ -4168,7 +4168,7 @@ static 
void gen_loongson_lsdc2(DisasContext *ctx, int rt, break; #endif case OPC_GSLWXC1: - gen_base_offset_addr(ctx, t0, rs, offset); + gen_base_offset_addr_tl(ctx, t0, rs, offset); if (rd) { gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0); } @@ -4179,7 +4179,7 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, break; #if defined(TARGET_MIPS64) case OPC_GSLDXC1: - gen_base_offset_addr(ctx, t0, rs, offset); + gen_base_offset_addr_tl(ctx, t0, rs, offset); if (rd) { gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0); } @@ -11229,7 +11229,7 @@ static void gen_cache_operation(DisasContext *ctx, uint32_t op, int base, { TCGv_i32 t0 = tcg_constant_i32(op); TCGv t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t1, base, offset); + gen_base_offset_addr_tl(ctx, t1, base, offset); gen_helper_cache(tcg_env, t1, t0); } diff --git a/target/mips/tcg/tx79_translate.c b/target/mips/tcg/tx79_translate.c index 90d63e5dfc4..2694f41a318 100644 --- a/target/mips/tcg/tx79_translate.c +++ b/target/mips/tcg/tx79_translate.c @@ -332,7 +332,7 @@ static bool trans_LQ(DisasContext *ctx, arg_i *a) t0 = tcg_temp_new_i64(); addr = tcg_temp_new(); - gen_base_offset_addr(ctx, addr, a->base, a->offset); + gen_base_offset_addr_tl(ctx, addr, a->base, a->offset); /* * Clear least-significant four bits of the effective * address, effectively creating an aligned address. @@ -355,7 +355,7 @@ static bool trans_SQ(DisasContext *ctx, arg_i *a) TCGv_i64 t0 = tcg_temp_new_i64(); TCGv addr = tcg_temp_new(); - gen_base_offset_addr(ctx, addr, a->base, a->offset); + gen_base_offset_addr_tl(ctx, addr, a->base, a->offset); /* * Clear least-significant four bits of the effective * address, effectively creating an aligned address. diff --git a/target/mips/tcg/micromips_translate.c.inc b/target/mips/tcg/micromips_translate.c.inc index cb3dbd264a0..69289bc13bb 100644 --- a/target/mips/tcg/micromips_translate.c.inc +++ b/target/mips/tcg/micromips_translate.c.inc @@ -702,7 +702,7 @@ static void gen_ldst_multiple(DisasContext *ctx, uint32_t opc, int reglist, t0 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, base, offset); + gen_base_offset_addr_tl(ctx, t0, base, offset); t1 = tcg_constant_tl(reglist); t2 = tcg_constant_i32(ctx->mem_idx); @@ -969,7 +969,7 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd, t0 = tcg_temp_new(); t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, base, offset); + gen_base_offset_addr_tl(ctx, t0, base, offset); switch (opc) { case LWP: diff --git a/target/mips/tcg/mips16e_translate.c.inc b/target/mips/tcg/mips16e_translate.c.inc index ceb41be0c26..754a5f7be4c 100644 --- a/target/mips/tcg/mips16e_translate.c.inc +++ b/target/mips/tcg/mips16e_translate.c.inc @@ -179,25 +179,25 @@ static void gen_mips16_save(DisasContext *ctx, switch (args) { case 4: - gen_base_offset_addr(ctx, t0, 29, 12); + gen_base_offset_addr_tl(ctx, t0, 29, 12); gen_load_gpr_tl(t1, 7); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); /* Fall through */ case 3: - gen_base_offset_addr(ctx, t0, 29, 8); + gen_base_offset_addr_tl(ctx, t0, 29, 8); gen_load_gpr_tl(t1, 6); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); /* Fall through */ case 2: - gen_base_offset_addr(ctx, t0, 29, 4); + gen_base_offset_addr_tl(ctx, t0, 29, 4); gen_load_gpr_tl(t1, 5); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); /* Fall through */ case 1: - gen_base_offset_addr(ctx, t0, 29, 0); + gen_base_offset_addr_tl(ctx, t0, 29, 0); 
gen_load_gpr_tl(t1, 4); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc index b78381bcf54..950a4c23e70 100644 --- a/target/mips/tcg/nanomips_translate.c.inc +++ b/target/mips/tcg/nanomips_translate.c.inc @@ -997,7 +997,7 @@ static void gen_llwp(DisasContext *ctx, uint32_t base, int16_t offset, TCGv tmp1 = tcg_temp_new(); TCGv tmp2 = tcg_temp_new(); - gen_base_offset_addr(ctx, taddr, base, offset); + gen_base_offset_addr_tl(ctx, taddr, base, offset); tcg_gen_qemu_ld_i64(tval, taddr, ctx->mem_idx, mo_endian(ctx) | MO_UQ | MO_ALIGN); if (disas_is_bigendian(ctx)) { @@ -1024,7 +1024,7 @@ static void gen_scwp(DisasContext *ctx, uint32_t base, int16_t offset, TCGLabel *lab_fail = gen_new_label(); TCGLabel *lab_done = gen_new_label(); - gen_base_offset_addr(ctx, taddr, base, offset); + gen_base_offset_addr_tl(ctx, taddr, base, offset); tcg_gen_ld_tl(lladdr, tcg_env, offsetof(CPUMIPSState, lladdr)); tcg_gen_brcond_tl(TCG_COND_NE, taddr, lladdr, lab_fail); @@ -1072,7 +1072,7 @@ static void gen_save(DisasContext *ctx, uint8_t rt, uint8_t count, bool use_gp = gp && (counter == count - 1); int this_rt = use_gp ? 28 : (rt & 0x10) | ((rt + counter) & 0x1f); int this_offset = -((counter + 1) << 2); - gen_base_offset_addr(ctx, va, 29, this_offset); + gen_base_offset_addr_tl(ctx, va, 29, this_offset); gen_load_gpr_tl(t0, this_rt); tcg_gen_qemu_st_tl(t0, va, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); @@ -1094,7 +1094,7 @@ static void gen_restore(DisasContext *ctx, uint8_t rt, uint8_t count, bool use_gp = gp && (counter == count - 1); int this_rt = use_gp ? 28 : (rt & 0x10) | ((rt + counter) & 0x1f); int this_offset = u - ((counter + 1) << 2); - gen_base_offset_addr(ctx, va, 29, this_offset); + gen_base_offset_addr_tl(ctx, va, 29, this_offset); tcg_gen_qemu_ld_tl(t0, va, ctx->mem_idx, mo_endian(ctx) | MO_SL | ctx->default_tcg_memop_mask); tcg_gen_ext32s_tl(t0, t0); @@ -4100,7 +4100,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) TCGv t0 = tcg_temp_new(); TCGv t1 = tcg_temp_new(); - gen_base_offset_addr(ctx, t0, rs, s); + gen_base_offset_addr_tl(ctx, t0, rs, s); switch (extract32(ctx->opcode, 11, 4)) { case NM_UALH: @@ -4286,7 +4286,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) int this_rt = ((rt + counter) & 0x1f) | (rt & 0x10); int this_offset = offset + (counter << 2); - gen_base_offset_addr(ctx, va, rs, this_offset); + gen_base_offset_addr_tl(ctx, va, rs, this_offset); switch (extract32(ctx->opcode, 11, 1)) { case NM_LWM: From patchwork Tue Nov 26 13:15:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= X-Patchwork-Id: 13885931 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id ACA91D3B98E for ; Tue, 26 Nov 2024 13:17:55 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tFvRu-0000hY-G1; Tue, 26 Nov 2024 08:17:51 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps 
From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno, Aleksandar Rikalo, Anton Johansson, Philippe Mathieu-Daudé, Huacai Chen, Jiaxun Yang
Subject: [PATCH 06/13] target/mips: Rename gen_op_addr_add?() -> gen_op_addr_add?_tl()
Date: Tue, 26 Nov 2024 14:15:38 +0100
Message-ID: <20241126131546.66145-7-philmd@linaro.org>
In-Reply-To: <20241126131546.66145-1-philmd@linaro.org>
References: <20241126131546.66145-1-philmd@linaro.org>
-2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org MIPS gen_op_addr_add() and gen_op_addr_addi() take a target-specific TCGv argument. Rename them respectively as gen_op_addr_add_tl() and gen_op_addr_addi_tl() like other TCG core helpers. Mechanical change done using sed tool. Signed-off-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson --- target/mips/tcg/translate.h | 4 ++-- target/mips/tcg/translate.c | 28 +++++++++++------------ target/mips/tcg/micromips_translate.c.inc | 8 +++---- target/mips/tcg/mips16e_translate.c.inc | 10 ++++---- target/mips/tcg/nanomips_translate.c.inc | 14 ++++++------ 5 files changed, 32 insertions(+), 32 deletions(-) diff --git a/target/mips/tcg/translate.h b/target/mips/tcg/translate.h index 94dd30216f5..9517e18eef9 100644 --- a/target/mips/tcg/translate.h +++ b/target/mips/tcg/translate.h @@ -175,8 +175,8 @@ void gen_addiupc(DisasContext *ctx, int rx, int imm, /* * Address Computation and Large Constant Instructions */ -void gen_op_addr_add(DisasContext *ctx, TCGv ret, TCGv arg0, TCGv arg1); -void gen_op_addr_addi(DisasContext *ctx, TCGv ret, TCGv base, target_long ofs); +void gen_op_addr_add_tl(DisasContext *ctx, TCGv ret, TCGv arg0, TCGv arg1); +void gen_op_addr_addi_tl(DisasContext *ctx, TCGv ret, TCGv base, target_long ofs); bool gen_lsa(DisasContext *ctx, int rd, int rt, int rs, int sa); bool gen_dlsa(DisasContext *ctx, int rd, int rt, int rs, int sa); diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c index 667b20bccb8..ad688b9b23d 100644 --- a/target/mips/tcg/translate.c +++ b/target/mips/tcg/translate.c @@ -1415,7 +1415,7 @@ int get_fp_bit(int cc) } /* Addresses computation */ -void gen_op_addr_add(DisasContext *ctx, TCGv ret, TCGv arg0, TCGv arg1) +void gen_op_addr_add_tl(DisasContext *ctx, TCGv ret, TCGv arg0, TCGv arg1) { tcg_gen_add_tl(ret, arg0, arg1); @@ -1426,7 +1426,7 @@ void gen_op_addr_add(DisasContext *ctx, TCGv ret, TCGv arg0, TCGv arg1) #endif } -void gen_op_addr_addi(DisasContext *ctx, TCGv ret, TCGv base, target_long ofs) +void gen_op_addr_addi_tl(DisasContext *ctx, TCGv ret, TCGv base, target_long ofs) { tcg_gen_addi_tl(ret, base, ofs); @@ -1952,7 +1952,7 @@ void gen_base_offset_addr_tl(DisasContext *ctx, TCGv addr, int base, int offset) gen_load_gpr_tl(addr, base); } else { tcg_gen_movi_tl(addr, offset); - gen_op_addr_add(ctx, addr, cpu_gpr[base], addr); + gen_op_addr_add_tl(ctx, addr, cpu_gpr[base], addr); } } @@ -2075,14 +2075,14 @@ static void gen_ld(DisasContext *ctx, uint32_t opc, break; case OPC_LDPC: t1 = tcg_constant_tl(pc_relative_pc(ctx)); - gen_op_addr_add(ctx, t0, t0, t1); + gen_op_addr_add_tl(ctx, t0, t0, t1); tcg_gen_qemu_ld_tl(t0, t0, mem_idx, mo_endian(ctx) | MO_UQ); gen_store_gpr_tl(t0, rt); break; #endif case OPC_LWPC: t1 = tcg_constant_tl(pc_relative_pc(ctx)); - gen_op_addr_add(ctx, t0, t0, t1); + gen_op_addr_add_tl(ctx, t0, t0, t1); tcg_gen_qemu_ld_tl(t0, t0, mem_idx, mo_endian(ctx) | MO_SL); gen_store_gpr_tl(t0, rt); break; @@ -4135,7 +4135,7 @@ static void 
gen_loongson_lsdc2(DisasContext *ctx, int rt, t0 = tcg_temp_new(); gen_base_offset_addr_tl(ctx, t0, rs, offset); - gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0); + gen_op_addr_add_tl(ctx, t0, cpu_gpr[rd], t0); switch (opc) { case OPC_GSLBX: @@ -4150,7 +4150,7 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, case OPC_GSLWX: gen_base_offset_addr_tl(ctx, t0, rs, offset); if (rd) { - gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0); + gen_op_addr_add_tl(ctx, t0, cpu_gpr[rd], t0); } tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL | ctx->default_tcg_memop_mask); @@ -4160,7 +4160,7 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, case OPC_GSLDX: gen_base_offset_addr_tl(ctx, t0, rs, offset); if (rd) { - gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0); + gen_op_addr_add_tl(ctx, t0, cpu_gpr[rd], t0); } tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); @@ -4170,7 +4170,7 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, case OPC_GSLWXC1: gen_base_offset_addr_tl(ctx, t0, rs, offset); if (rd) { - gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0); + gen_op_addr_add_tl(ctx, t0, cpu_gpr[rd], t0); } fp0 = tcg_temp_new_i32(); tcg_gen_qemu_ld_i32(fp0, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL | @@ -4181,7 +4181,7 @@ static void gen_loongson_lsdc2(DisasContext *ctx, int rt, case OPC_GSLDXC1: gen_base_offset_addr_tl(ctx, t0, rs, offset); if (rd) { - gen_op_addr_add(ctx, t0, cpu_gpr[rd], t0); + gen_op_addr_add_tl(ctx, t0, cpu_gpr[rd], t0); } tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); @@ -10550,7 +10550,7 @@ static void gen_flt3_ldst(DisasContext *ctx, uint32_t opc, } else if (index == 0) { gen_load_gpr_tl(t0, base); } else { - gen_op_addr_add(ctx, t0, cpu_gpr[base], cpu_gpr[index]); + gen_op_addr_add_tl(ctx, t0, cpu_gpr[base], cpu_gpr[index]); } /* * Don't do NOP if destination is zero: we must perform the actual @@ -11050,7 +11050,7 @@ static void gen_compute_compact_branch(DisasContext *ctx, uint32_t opc, TCGv tbase = tcg_temp_new(); gen_load_gpr_tl(tbase, rt); - gen_op_addr_addi(ctx, btarget, tbase, offset); + gen_op_addr_addi_tl(ctx, btarget, tbase, offset); } break; default: @@ -11253,7 +11253,7 @@ void gen_ldxs(DisasContext *ctx, int base, int index, int rd) if (index != 0) { gen_load_gpr_tl(t1, index); tcg_gen_shli_tl(t1, t1, 2); - gen_op_addr_add(ctx, t0, t1, t0); + gen_op_addr_add_tl(ctx, t0, t1, t0); } tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL); @@ -11338,7 +11338,7 @@ static void gen_mips_lx(DisasContext *ctx, uint32_t opc, } else if (offset == 0) { gen_load_gpr_tl(t0, base); } else { - gen_op_addr_add(ctx, t0, cpu_gpr[base], cpu_gpr[offset]); + gen_op_addr_add_tl(ctx, t0, cpu_gpr[base], cpu_gpr[offset]); } switch (opc) { diff --git a/target/mips/tcg/micromips_translate.c.inc b/target/mips/tcg/micromips_translate.c.inc index 69289bc13bb..fb7e6c8ddd9 100644 --- a/target/mips/tcg/micromips_translate.c.inc +++ b/target/mips/tcg/micromips_translate.c.inc @@ -980,7 +980,7 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd, tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL | ctx->default_tcg_memop_mask); gen_store_gpr_tl(t1, rd); - gen_op_addr_addi(ctx, t0, t0, 4); + gen_op_addr_addi_tl(ctx, t0, t0, 4); tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_SL | ctx->default_tcg_memop_mask); gen_store_gpr_tl(t1, rd + 1); @@ -989,7 +989,7 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd, gen_load_gpr_tl(t1, 
rd); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); - gen_op_addr_addi(ctx, t0, t0, 4); + gen_op_addr_addi_tl(ctx, t0, t0, 4); gen_load_gpr_tl(t1, rd + 1); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); @@ -1003,7 +1003,7 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd, tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); gen_store_gpr_tl(t1, rd); - gen_op_addr_addi(ctx, t0, t0, 8); + gen_op_addr_addi_tl(ctx, t0, t0, 8); tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); gen_store_gpr_tl(t1, rd + 1); @@ -1012,7 +1012,7 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd, gen_load_gpr_tl(t1, rd); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); - gen_op_addr_addi(ctx, t0, t0, 8); + gen_op_addr_addi_tl(ctx, t0, t0, 8); gen_load_gpr_tl(t1, rd + 1); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UQ | ctx->default_tcg_memop_mask); diff --git a/target/mips/tcg/mips16e_translate.c.inc b/target/mips/tcg/mips16e_translate.c.inc index 754a5f7be4c..7af83e77280 100644 --- a/target/mips/tcg/mips16e_translate.c.inc +++ b/target/mips/tcg/mips16e_translate.c.inc @@ -131,7 +131,7 @@ static void decr_and_store(DisasContext *ctx, unsigned regidx, TCGv t0) { TCGv t1 = tcg_temp_new(); - gen_op_addr_addi(ctx, t0, t0, -4); + gen_op_addr_addi_tl(ctx, t0, t0, -4); gen_load_gpr_tl(t1, regidx); tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, mo_endian(ctx) | MO_UL | ctx->default_tcg_memop_mask); @@ -283,7 +283,7 @@ static void gen_mips16_save(DisasContext *ctx, } } - gen_op_addr_addi(ctx, cpu_gpr[29], cpu_gpr[29], -framesize); + gen_op_addr_addi_tl(ctx, cpu_gpr[29], cpu_gpr[29], -framesize); } static void decr_and_load(DisasContext *ctx, unsigned regidx, TCGv t0) @@ -292,7 +292,7 @@ static void decr_and_load(DisasContext *ctx, unsigned regidx, TCGv t0) TCGv t2 = tcg_temp_new(); tcg_gen_movi_tl(t2, -4); - gen_op_addr_add(ctx, t0, t0, t2); + gen_op_addr_add_tl(ctx, t0, t0, t2); tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TE | MO_SL | ctx->default_tcg_memop_mask); gen_store_gpr_tl(t1, regidx); @@ -306,7 +306,7 @@ static void gen_mips16_restore(DisasContext *ctx, int astatic; TCGv t0 = tcg_temp_new(); - gen_op_addr_addi(ctx, t0, cpu_gpr[29], -framesize); + gen_op_addr_addi_tl(ctx, t0, cpu_gpr[29], -framesize); if (do_ra) { decr_and_load(ctx, 31, t0); @@ -386,7 +386,7 @@ static void gen_mips16_restore(DisasContext *ctx, } } - gen_op_addr_addi(ctx, cpu_gpr[29], cpu_gpr[29], -framesize); + gen_op_addr_addi_tl(ctx, cpu_gpr[29], cpu_gpr[29], -framesize); } #if defined(TARGET_MIPS64) diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc index 950a4c23e70..2ad936c66d4 100644 --- a/target/mips/tcg/nanomips_translate.c.inc +++ b/target/mips/tcg/nanomips_translate.c.inc @@ -1058,7 +1058,7 @@ static void gen_scwp(DisasContext *ctx, uint32_t base, int16_t offset, static void gen_adjust_sp(DisasContext *ctx, int u) { - gen_op_addr_addi(ctx, cpu_gpr[29], cpu_gpr[29], u); + gen_op_addr_addi_tl(ctx, cpu_gpr[29], cpu_gpr[29], u); } static void gen_save(DisasContext *ctx, uint8_t rt, uint8_t count, @@ -2398,7 +2398,7 @@ static void gen_compute_nanomips_pbalrsc_branch(DisasContext *ctx, int rs, /* calculate btarget */ tcg_gen_shli_tl(t0, t0, 1); - gen_op_addr_add(ctx, btarget, tcg_constant_tl(ctx->base.pc_next + 4), t0); + 
gen_op_addr_add_tl(ctx, btarget, tcg_constant_tl(ctx->base.pc_next + 4), t0); /* branch completion */ clear_branch_hflags(ctx); @@ -2453,7 +2453,7 @@ static void gen_compute_compact_branch_nm(DisasContext *ctx, uint32_t opc, TCGv tbase = tcg_temp_new(); gen_load_gpr_tl(tbase, rt); - gen_op_addr_addi(ctx, btarget, tbase, offset); + gen_op_addr_addi_tl(ctx, btarget, tbase, offset); } break; default: @@ -2617,7 +2617,7 @@ static void gen_p_lsx(DisasContext *ctx, int rd, int rs, int rt) return; } } - gen_op_addr_add(ctx, t0, t0, t1); + gen_op_addr_add_tl(ctx, t0, t0, t1); switch (extract32(ctx->opcode, 7, 4)) { case NM_LBX: @@ -3654,7 +3654,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) case NM_ADDIUGP_W: if (rt != 0) { offset = extract32(ctx->opcode, 0, 21); - gen_op_addr_addi(ctx, cpu_gpr[rt], cpu_gpr[28], offset); + gen_op_addr_addi_tl(ctx, cpu_gpr[rt], cpu_gpr[28], offset); } break; case NM_LWGP: @@ -3689,7 +3689,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) case NM_ADDIUGP48: check_nms(ctx); if (rt != 0) { - gen_op_addr_addi(ctx, cpu_gpr[rt], cpu_gpr[28], addr_off); + gen_op_addr_addi_tl(ctx, cpu_gpr[rt], cpu_gpr[28], addr_off); } break; case NM_ADDIUPC48: @@ -3921,7 +3921,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx) break; case NM_ADDIUGP_B: if (rt != 0) { - gen_op_addr_addi(ctx, cpu_gpr[rt], cpu_gpr[28], u); + gen_op_addr_addi_tl(ctx, cpu_gpr[rt], cpu_gpr[28], u); } break; case NM_P_GP_LH: From patchwork Tue Nov 26 13:15:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= X-Patchwork-Id: 13885930 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4D9F2D3B98B for ; Tue, 26 Nov 2024 13:17:55 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tFvRc-0000O7-Of; Tue, 26 Nov 2024 08:17:40 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tFvQk-0008Rk-VD for qemu-devel@nongnu.org; Tue, 26 Nov 2024 08:16:42 -0500 Received: from mail-wm1-x32a.google.com ([2a00:1450:4864:20::32a]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1tFvQf-0003fK-3D for qemu-devel@nongnu.org; Tue, 26 Nov 2024 08:16:35 -0500 Received: by mail-wm1-x32a.google.com with SMTP id 5b1f17b1804b1-4349cc45219so23565125e9.3 for ; Tue, 26 Nov 2024 05:16:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1732626991; x=1733231791; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=decA0AqUsGYrPYvQ/Z3re+icmP1BYXUuyKugc2m0v2U=; b=M7VAlJR7QLDJvDTQz+cSIhQgma6JSC22UXkXZ3gzbXz0EsM9H3AY6jO2tqUufacfhm Z+VMYHxY7ZtFLC8nInQpYUexI7JTGdlcYjP8z5ZEoqW3Pmo6VJRAp3C/LRR2a7fN7wt3 fhWWQNrkp6pnjiDgnu6nQthR5eThVpd69vhrtyBAl+fNC3ET9Yl3G6PEmV9o6GRK79++ 1dzguDYl2LU+8LD3GIAZegqO12Ru3uuPxk9yDscxhSTfAeozicyLk4Ec3Lsb5k/MHvPb 
From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno, Aleksandar Rikalo, Anton Johansson, Philippe Mathieu-Daudé, Huacai Chen, Jiaxun Yang
Subject: [PATCH 07/13] target/mips: Introduce gen_load_gpr_i32()
Date: Tue, 26 Nov 2024 14:15:39 +0100
Message-ID: <20241126131546.66145-8-philmd@linaro.org>
In-Reply-To: <20241126131546.66145-1-philmd@linaro.org>
References: <20241126131546.66145-1-philmd@linaro.org>

Similarly to the gen_load_gpr_tl() helper which loads a target-wide TCG register from the CPU generic purpose registers, add a helper to load 32-bit TCG register.
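As a rough sketch of the intended call-site effect (the wrapper function and temporaries below are hypothetical, not taken from this series; only the gen_load_gpr_tl(), gen_load_gpr_i32() and tcg_gen_trunc_tl_i32() calls come from the patches), the helper collapses the usual load-then-truncate sequence:

/*
 * Hypothetical sketch (not part of the series): obtain the 32-bit view
 * of general-purpose register rs, before and after the new helper.
 */
static void gen_example_use_i32(int rs)
{
    TCGv_i32 t32 = tcg_temp_new_i32();
    TCGv tmp = tcg_temp_new();

    /* Open-coded: load into a target-wide temporary, then truncate. */
    gen_load_gpr_tl(tmp, rs);
    tcg_gen_trunc_tl_i32(t32, tmp);

    /* With the new helper: one call, the $zero case handled internally. */
    gen_load_gpr_i32(t32, rs);
}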
Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Richard Henderson
---
 target/mips/tcg/translate.h |  1 +
 target/mips/tcg/translate.c | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/target/mips/tcg/translate.h b/target/mips/tcg/translate.h
index 9517e18eef9..e15d631ad2a 100644
--- a/target/mips/tcg/translate.h
+++ b/target/mips/tcg/translate.h
@@ -156,6 +156,7 @@ void gen_base_offset_addr_tl(DisasContext *ctx, TCGv addr, int base, int offset)
 void gen_move_low32_tl(TCGv ret, TCGv_i64 arg);
 void gen_move_high32_tl(TCGv ret, TCGv_i64 arg);
 void gen_load_gpr_tl(TCGv t, int reg);
+void gen_load_gpr_i32(TCGv_i32 t, int reg);
 void gen_store_gpr_tl(TCGv t, int reg);
 #if defined(TARGET_MIPS64)
 void gen_load_gpr_hi(TCGv_i64 t, int reg);
diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c
index ad688b9b23d..d7c83c863d5 100644
--- a/target/mips/tcg/translate.c
+++ b/target/mips/tcg/translate.c
@@ -1198,6 +1198,16 @@ void gen_load_gpr_tl(TCGv t, int reg)
     }
 }
 
+void gen_load_gpr_i32(TCGv_i32 t, int reg)
+{
+    assert(reg >= 0 && reg <= ARRAY_SIZE(cpu_gpr));
+    if (reg == 0) {
+        tcg_gen_movi_i32(t, 0);
+    } else {
+        tcg_gen_trunc_tl_i32(t, cpu_gpr[reg]);
+    }
+}
+
 void gen_store_gpr_tl(TCGv t, int reg)
 {
     assert(reg >= 0 && reg <= ARRAY_SIZE(cpu_gpr));

From patchwork Tue Nov 26 13:15:40 2024
From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno, Aleksandar Rikalo, Anton Johansson, Philippe Mathieu-Daudé, Huacai Chen, Jiaxun Yang
Subject: [PATCH 08/13] target/mips: Introduce gen_store_gpr_i32()
Date: Tue, 26 Nov 2024 14:15:40 +0100
Message-ID: <20241126131546.66145-9-philmd@linaro.org>
In-Reply-To: <20241126131546.66145-1-philmd@linaro.org>
References: <20241126131546.66145-1-philmd@linaro.org>

Similarly to the gen_store_gpr_tl() helper, which stores a target-wide TCG register to the CPU general-purpose registers, add a helper to store a 32-bit TCG register.
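As an illustration of how the load/store pair is meant to be used (the function name and operands below are invented, not taken from the series), a purely 32-bit operation can now stay in TCGv_i32 temporaries end to end:

    /* Hypothetical sketch: add two GPRs as 32-bit values. */
    static void gen_example_addu32(int rd, int rs, int rt)
    {
        TCGv_i32 t0 = tcg_temp_new_i32();
        TCGv_i32 t1 = tcg_temp_new_i32();

        gen_load_gpr_i32(t0, rs);
        gen_load_gpr_i32(t1, rt);
        tcg_gen_add_i32(t0, t0, t1);
        /* A write to register 0 is dropped; otherwise the 32-bit value is
         * extended back into the target-wide cpu_gpr[rd]. */
        gen_store_gpr_i32(t0, rd);
    }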
Signed-off-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson --- target/mips/tcg/translate.h | 1 + target/mips/tcg/translate.c | 8 ++++++++ 2 files changed, 9 insertions(+) diff --git a/target/mips/tcg/translate.h b/target/mips/tcg/translate.h index e15d631ad2a..d9faa82ff70 100644 --- a/target/mips/tcg/translate.h +++ b/target/mips/tcg/translate.h @@ -158,6 +158,7 @@ void gen_move_high32_tl(TCGv ret, TCGv_i64 arg); void gen_load_gpr_tl(TCGv t, int reg); void gen_load_gpr_i32(TCGv_i32 t, int reg); void gen_store_gpr_tl(TCGv t, int reg); +void gen_store_gpr_i32(TCGv_i32 t, int reg); #if defined(TARGET_MIPS64) void gen_load_gpr_hi(TCGv_i64 t, int reg); void gen_store_gpr_hi(TCGv_i64 t, int reg); diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c index d7c83c863d5..6ac0734d1b2 100644 --- a/target/mips/tcg/translate.c +++ b/target/mips/tcg/translate.c @@ -1216,6 +1216,14 @@ void gen_store_gpr_tl(TCGv t, int reg) } } +void gen_store_gpr_i32(TCGv_i32 t, int reg) +{ + assert(reg >= 0 && reg <= ARRAY_SIZE(cpu_gpr)); + if (reg != 0) { + tcg_gen_ext_i32_tl(cpu_gpr[reg], t); + } +} + #if defined(TARGET_MIPS64) void gen_load_gpr_hi(TCGv_i64 t, int reg) { From patchwork Tue Nov 26 13:15:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= X-Patchwork-Id: 13885934 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5D376D3B98D for ; Tue, 26 Nov 2024 13:18:38 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tFvS1-00015z-Lm; Tue, 26 Nov 2024 08:17:58 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tFvQt-00005t-2R for qemu-devel@nongnu.org; Tue, 26 Nov 2024 08:16:48 -0500 Received: from mail-wm1-x32e.google.com ([2a00:1450:4864:20::32e]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1tFvQq-0003iT-CL for qemu-devel@nongnu.org; Tue, 26 Nov 2024 08:16:46 -0500 Received: by mail-wm1-x32e.google.com with SMTP id 5b1f17b1804b1-434a752140eso4727455e9.3 for ; Tue, 26 Nov 2024 05:16:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1732627002; x=1733231802; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=KHGO/TiW0qdojQl+y3dtK5J0HIxG+MvCfdjbdAdzhXA=; b=zJW7zyMLymHpLARWAtYGlS3MAPfCwkktzBbp1t7ciIQBFi613gqHREVbOyHd/1xiul eYsCFS4J0OtCB9iOVootQ4EtEibS6uTy2LuxMYH5GeTtBd7qY+z6rwkHP13WSDeZXVZk moqxH3/TC0n1m2AaZES8iV3EIksTFQ6SSRpvF41gHMkzjiKDbN6c9m8WpGeqjcKOW8nV 0MF+69yu9zZwb/NhqHANn32YYA2oaRyYbEpaleW6sVGZzaAZOW8GE8z6PnzD0Nelen2w ey1bCOlSNdEks303bGfGxKRDduay0Pcdgy3+OZZZbutu6IK8o866yUYELTbaU1vtSKmU uWKg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1732627002; x=1733231802; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno, Aleksandar Rikalo, Anton Johansson, Philippe Mathieu-Daudé, Huacai Chen, Jiaxun Yang
Subject: [PATCH 09/13] target/mips: Introduce gen_move_low32_i32()
Date: Tue, 26 Nov 2024 14:15:41 +0100
Message-ID: <20241126131546.66145-10-philmd@linaro.org>
In-Reply-To: <20241126131546.66145-1-philmd@linaro.org>
References: <20241126131546.66145-1-philmd@linaro.org>

Similarly to the gen_move_low32_tl() helper, which sign-extracts the lower 32 bits of a 64-bit TCG register into a target-wide one, add a helper extracting them into a 32-bit TCG register.
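For illustration only, a sketch of pulling the low half of a 64-bit intermediate result into a GPR; 'prod' (a TCGv_i64 computed earlier) and 'rd' are hypothetical names:

    TCGv_i32 lo = tcg_temp_new_i32();
    gen_move_low32_i32(lo, prod);   /* bits 31:0 of the 64-bit value */
    gen_store_gpr_i32(lo, rd);      /* helper from the previous patch */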
Signed-off-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson --- target/mips/tcg/translate.h | 1 + target/mips/tcg/translate.c | 5 +++++ 2 files changed, 6 insertions(+) diff --git a/target/mips/tcg/translate.h b/target/mips/tcg/translate.h index d9faa82ff70..d5d74faad92 100644 --- a/target/mips/tcg/translate.h +++ b/target/mips/tcg/translate.h @@ -154,6 +154,7 @@ void check_cop1x(DisasContext *ctx); void gen_base_offset_addr_tl(DisasContext *ctx, TCGv addr, int base, int offset); void gen_move_low32_tl(TCGv ret, TCGv_i64 arg); +void gen_move_low32_i32(TCGv_i32 ret, TCGv_i64 arg); void gen_move_high32_tl(TCGv ret, TCGv_i64 arg); void gen_load_gpr_tl(TCGv t, int reg); void gen_load_gpr_i32(TCGv_i32 t, int reg); diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c index 6ac0734d1b2..80e2a8e5256 100644 --- a/target/mips/tcg/translate.c +++ b/target/mips/tcg/translate.c @@ -1479,6 +1479,11 @@ void gen_move_low32_tl(TCGv ret, TCGv_i64 arg) #endif } +void gen_move_low32_i32(TCGv_i32 ret, TCGv_i64 arg) +{ + tcg_gen_extrl_i64_i32(ret, arg); +} + /* Sign-extract the high 32-bits to a target_long. */ void gen_move_high32_tl(TCGv ret, TCGv_i64 arg) { From patchwork Tue Nov 26 13:15:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= X-Patchwork-Id: 13885933 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 10F41D3B98B for ; Tue, 26 Nov 2024 13:18:38 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tFvRz-00013s-1b; Tue, 26 Nov 2024 08:17:56 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tFvQz-00007q-Dt for qemu-devel@nongnu.org; Tue, 26 Nov 2024 08:16:54 -0500 Received: from mail-wr1-x436.google.com ([2a00:1450:4864:20::436]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1tFvQv-0003jH-Sj for qemu-devel@nongnu.org; Tue, 26 Nov 2024 08:16:52 -0500 Received: by mail-wr1-x436.google.com with SMTP id ffacd0b85a97d-382376fcc4fso3310627f8f.2 for ; Tue, 26 Nov 2024 05:16:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1732627008; x=1733231808; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Z1ZuRlLkL1yhuLbXRwUvfotEPQVAqZnSZGvAIWOZu34=; b=lcnjZ5xRLFoHH7T6rfLlL+gYkEIYyxDTyliyva8wRpZVWoxFwR21yrOv3D/OCayKO0 SOO9ScM09lwclOfxG2p+T+P4iLj1AT8tcGFF2p2+Kr44UPuIYcuOfyuubOpsz8bL9Pap k5jYzYkOPLBupBxzlGGti4gx7DtuYltDTjQGwD1o0srlA6VtwkQpmETrG5gCnDECEHtv cx1C/BEC9h2PfZdabronuHS7GjTfux/28fQqeoTWEkrRXTm2DV0LFsH80fZGgV+BPzAn pY2U6aT3vkDeum1fhgCHvWeAReDltTNms0p2TER38lh9vk9jLsUs8SFUICcg7moLylKw A6qQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1732627008; x=1733231808; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno, Aleksandar Rikalo, Anton Johansson, Philippe Mathieu-Daudé, Huacai Chen, Jiaxun Yang
Subject: [PATCH 10/13] target/mips: Introduce gen_move_high32_i32()
Date: Tue, 26 Nov 2024 14:15:42 +0100
Message-ID: <20241126131546.66145-11-philmd@linaro.org>
In-Reply-To: <20241126131546.66145-1-philmd@linaro.org>
References: <20241126131546.66145-1-philmd@linaro.org>

Similarly to the gen_move_high32_tl() helper, which sign-extracts the upper 32 bits of a 64-bit TCG register into a target-wide one, add a helper extracting them into a 32-bit TCG register.
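Together with gen_move_low32_i32(), this allows splitting a 64-bit value into two 32-bit halves without going through target-wide temporaries. A hedged sketch, where 'prod', 'rd_lo' and 'rd_hi' are invented names:

    TCGv_i32 lo = tcg_temp_new_i32();
    TCGv_i32 hi = tcg_temp_new_i32();
    gen_move_low32_i32(lo, prod);    /* bits 31:0  */
    gen_move_high32_i32(hi, prod);   /* bits 63:32 */
    gen_store_gpr_i32(lo, rd_lo);
    gen_store_gpr_i32(hi, rd_hi);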
Signed-off-by: Philippe Mathieu-Daudé Reviewed-by: Richard Henderson --- target/mips/tcg/translate.h | 1 + target/mips/tcg/translate.c | 5 +++++ 2 files changed, 6 insertions(+) diff --git a/target/mips/tcg/translate.h b/target/mips/tcg/translate.h index d5d74faad92..f974cf29297 100644 --- a/target/mips/tcg/translate.h +++ b/target/mips/tcg/translate.h @@ -156,6 +156,7 @@ void gen_base_offset_addr_tl(DisasContext *ctx, TCGv addr, int base, int offset) void gen_move_low32_tl(TCGv ret, TCGv_i64 arg); void gen_move_low32_i32(TCGv_i32 ret, TCGv_i64 arg); void gen_move_high32_tl(TCGv ret, TCGv_i64 arg); +void gen_move_high32_i32(TCGv_i32 ret, TCGv_i64 arg); void gen_load_gpr_tl(TCGv t, int reg); void gen_load_gpr_i32(TCGv_i32 t, int reg); void gen_store_gpr_tl(TCGv t, int reg); diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c index 80e2a8e5256..d6be37d56d3 100644 --- a/target/mips/tcg/translate.c +++ b/target/mips/tcg/translate.c @@ -1494,6 +1494,11 @@ void gen_move_high32_tl(TCGv ret, TCGv_i64 arg) #endif } +void gen_move_high32_i32(TCGv_i32 ret, TCGv_i64 arg) +{ + tcg_gen_extrh_i64_i32(ret, arg); +} + bool check_cp0_enabled(DisasContext *ctx) { if (unlikely(!(ctx->hflags & MIPS_HFLAG_CP0))) { From patchwork Tue Nov 26 13:15:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= X-Patchwork-Id: 13885944 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B421ED3B98B for ; Tue, 26 Nov 2024 13:19:00 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tFvS5-0001ei-U9; Tue, 26 Nov 2024 08:18:01 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tFvR9-0000Hr-GM for qemu-devel@nongnu.org; Tue, 26 Nov 2024 08:17:13 -0500 Received: from mail-wr1-x430.google.com ([2a00:1450:4864:20::430]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1tFvR2-0003kN-S2 for qemu-devel@nongnu.org; Tue, 26 Nov 2024 08:16:59 -0500 Received: by mail-wr1-x430.google.com with SMTP id ffacd0b85a97d-38248b810ffso4258656f8f.0 for ; Tue, 26 Nov 2024 05:16:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1732627014; x=1733231814; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=KHnU5YARdhLWfnC/0qXuXZ7xJ6VXbn/8tQex+fQr1RU=; b=FA/cOxfj+YfkntvepQ8IcexJjA5L/cA2+J1/cQVrK9y+T4jUSgyzWQEWOCUNzvZK16 C2khvgTvsXJfkDU+gkuY5VNV3GsD5e87GpcagdPMRa5pYpw5AC6UmU7KZRKcvVl3JFWA IByWxfoX35/2M1wSPexX1+7wuB5XODyBqX5k7v3QFKS6XmKog9A7WaMBm2EIlfFbI3s+ tE5H1MsXVFcBdMtOWQRlm0MYylowbY759xu89l6ICLBO+jxmOXVDPU3cdHAUyw2wqs+T P1E21fnXnLAxU5+ouiaclN6YgYg+Hn55K/85vuQP6dLskSPWRVcWXf8G/RwM+1kGr6mX EAYA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1732627014; x=1733231814; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno, Aleksandar Rikalo, Anton Johansson, Philippe Mathieu-Daudé, Huacai Chen, Jiaxun Yang
Subject: [PATCH 11/13] target/mips: Declare MXU registers as 32-bit
Date: Tue, 26 Nov 2024 14:15:43 +0100
Message-ID: <20241126131546.66145-12-philmd@linaro.org>
In-Reply-To: <20241126131546.66145-1-philmd@linaro.org>
References: <20241126131546.66145-1-philmd@linaro.org>

The MXU extension is only built for 32-bit targets, so the MXU registers can be declared directly as 32-bit.
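A sketch of the pattern this enables (the wrapper function and variable below are hypothetical; the CPU field is the one touched by the patch): a uint32_t state field is exposed to TCG through the _i32 constructor rather than the target-wide one.

    /* Hypothetical sketch, not part of the patch. */
    static TCGv_i32 example_cr;

    static void example_init_mxu_cr(void)
    {
        example_cr = tcg_global_mem_new_i32(tcg_env,
                                            offsetof(CPUMIPSState, active_tc.mxu_cr),
                                            "MXU_CR");
    }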
Signed-off-by: Philippe Mathieu-Daudé --- target/mips/cpu.h | 4 ++-- target/mips/sysemu/machine.c | 4 ++-- target/mips/tcg/mxu_translate.c | 8 ++++---- 3 files changed, 8 insertions(+), 8 deletions(-) diff --git a/target/mips/cpu.h b/target/mips/cpu.h index f6877ece8b4..f80b05885b1 100644 --- a/target/mips/cpu.h +++ b/target/mips/cpu.h @@ -514,8 +514,8 @@ struct TCState { float_status msa_fp_status; #define NUMBER_OF_MXU_REGISTERS 16 - target_ulong mxu_gpr[NUMBER_OF_MXU_REGISTERS - 1]; - target_ulong mxu_cr; + uint32_t mxu_gpr[NUMBER_OF_MXU_REGISTERS - 1]; + uint32_t mxu_cr; #define MXU_CR_LC 31 #define MXU_CR_RC 30 #define MXU_CR_BIAS 2 diff --git a/target/mips/sysemu/machine.c b/target/mips/sysemu/machine.c index 8af11fd896b..823a49e2ca1 100644 --- a/target/mips/sysemu/machine.c +++ b/target/mips/sysemu/machine.c @@ -98,8 +98,8 @@ static const VMStateField vmstate_tc_fields[] = { VMSTATE_INT32(CP0_Debug_tcstatus, TCState), VMSTATE_UINTTL(CP0_UserLocal, TCState), VMSTATE_INT32(msacsr, TCState), - VMSTATE_UINTTL_ARRAY(mxu_gpr, TCState, NUMBER_OF_MXU_REGISTERS - 1), - VMSTATE_UINTTL(mxu_cr, TCState), + VMSTATE_UINT32_ARRAY(mxu_gpr, TCState, NUMBER_OF_MXU_REGISTERS - 1), + VMSTATE_UINT32(mxu_cr, TCState), VMSTATE_END_OF_LIST() }; diff --git a/target/mips/tcg/mxu_translate.c b/target/mips/tcg/mxu_translate.c index 20b8314b478..ee70ae96c32 100644 --- a/target/mips/tcg/mxu_translate.c +++ b/target/mips/tcg/mxu_translate.c @@ -606,8 +606,8 @@ enum { #define MXU_OPTN3_PTN7 7 /* MXU registers */ -static TCGv mxu_gpr[NUMBER_OF_MXU_REGISTERS - 1]; -static TCGv mxu_CR; +static TCGv_i32 mxu_gpr[NUMBER_OF_MXU_REGISTERS - 1]; +static TCGv_i32 mxu_CR; static const char mxuregnames[NUMBER_OF_MXU_REGISTERS][4] = { "XR1", "XR2", "XR3", "XR4", "XR5", "XR6", "XR7", "XR8", @@ -617,12 +617,12 @@ static const char mxuregnames[NUMBER_OF_MXU_REGISTERS][4] = { void mxu_translate_init(void) { for (unsigned i = 0; i < NUMBER_OF_MXU_REGISTERS - 1; i++) { - mxu_gpr[i] = tcg_global_mem_new(tcg_env, + mxu_gpr[i] = tcg_global_mem_new_i32(tcg_env, offsetof(CPUMIPSState, active_tc.mxu_gpr[i]), mxuregnames[i]); } - mxu_CR = tcg_global_mem_new(tcg_env, + mxu_CR = tcg_global_mem_new_i32(tcg_env, offsetof(CPUMIPSState, active_tc.mxu_cr), mxuregnames[NUMBER_OF_MXU_REGISTERS - 1]); } From patchwork Tue Nov 26 13:15:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= X-Patchwork-Id: 13885943 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 158B0D3B98E for ; Tue, 26 Nov 2024 13:18:47 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tFvSc-0002eM-PO; Tue, 26 Nov 2024 08:18:35 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tFvRH-0000MD-Jw for qemu-devel@nongnu.org; Tue, 26 Nov 2024 08:17:17 -0500 Received: from mail-wm1-x330.google.com ([2a00:1450:4864:20::330]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1tFvR9-0003lY-L9 for qemu-devel@nongnu.org; Tue, 26 Nov 2024 08:17:08 -0500 Received: by 
From: Philippe Mathieu-Daudé <philmd@linaro.org>
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno, Aleksandar Rikalo, Anton Johansson, Philippe Mathieu-Daudé, Huacai Chen, Jiaxun Yang
Subject: [PATCH 12/13] target/mips: Access MXU registers using TCGv_i32 API
Date: Tue, 26 Nov 2024 14:15:44 +0100
Message-ID: <20241126131546.66145-13-philmd@linaro.org>
In-Reply-To: <20241126131546.66145-1-philmd@linaro.org>
References: <20241126131546.66145-1-philmd@linaro.org>
, Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org MXU extension is only built for 32-bit targets, its registers are 32-bit only: no need to call the 'target-wide' TCG API, we can simply use the 32-bit one. Mechanical change doing: $ sed -e -i 's/_tl/_i32/g' target/mips/tcg/mxu_translate.c Signed-off-by: Philippe Mathieu-Daudé --- target/mips/tcg/mxu_translate.c | 1538 +++++++++++++++---------------- 1 file changed, 769 insertions(+), 769 deletions(-) diff --git a/target/mips/tcg/mxu_translate.c b/target/mips/tcg/mxu_translate.c index ee70ae96c32..69b6b352024 100644 --- a/target/mips/tcg/mxu_translate.c +++ b/target/mips/tcg/mxu_translate.c @@ -631,16 +631,16 @@ void mxu_translate_init(void) static inline void gen_load_mxu_gpr(TCGv t, unsigned int reg) { if (reg == 0) { - tcg_gen_movi_tl(t, 0); + tcg_gen_movi_i32(t, 0); } else if (reg <= 15) { - tcg_gen_mov_tl(t, mxu_gpr[reg - 1]); + tcg_gen_mov_i32(t, mxu_gpr[reg - 1]); } } static inline void gen_store_mxu_gpr(TCGv t, unsigned int reg) { if (reg > 0 && reg <= 15) { - tcg_gen_mov_tl(mxu_gpr[reg - 1], t); + tcg_gen_mov_i32(mxu_gpr[reg - 1], t); } } @@ -648,22 +648,22 @@ static inline void gen_extract_mxu_gpr(TCGv t, unsigned int reg, unsigned int ofs, unsigned int len) { if (reg == 0) { - tcg_gen_movi_tl(t, 0); + tcg_gen_movi_i32(t, 0); } else if (reg <= 15) { - tcg_gen_extract_tl(t, mxu_gpr[reg - 1], ofs, len); + tcg_gen_extract_i32(t, mxu_gpr[reg - 1], ofs, len); } } /* MXU control register moves. */ static inline void gen_load_mxu_cr(TCGv t) { - tcg_gen_mov_tl(t, mxu_CR); + tcg_gen_mov_i32(t, mxu_CR); } static inline void gen_store_mxu_cr(TCGv t) { /* TODO: Add handling of RW rules for MXU_CR. */ - tcg_gen_mov_tl(mxu_CR, t); + tcg_gen_mov_i32(mxu_CR, t); } /* @@ -679,7 +679,7 @@ static void gen_mxu_s32i2m(DisasContext *ctx) XRa = extract32(ctx->opcode, 6, 5); Rb = extract32(ctx->opcode, 16, 5); - gen_load_gpr_tl(t0, Rb); + gen_load_gpr_i32(t0, Rb); if (XRa <= 15) { gen_store_mxu_gpr(t0, XRa); } else if (XRa == 16) { @@ -706,7 +706,7 @@ static void gen_mxu_s32m2i(DisasContext *ctx) gen_load_mxu_cr(t0); } - gen_store_gpr_tl(t0, Rb); + gen_store_gpr_i32(t0, Rb); } /* @@ -728,61 +728,61 @@ static void gen_mxu_s8ldd(DisasContext *ctx, bool postmodify) optn3 = extract32(ctx->opcode, 18, 3); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr_tl(t0, Rb); - tcg_gen_addi_tl(t0, t0, (int8_t)s8); + gen_load_gpr_i32(t0, Rb); + tcg_gen_addi_i32(t0, t0, (int8_t)s8); if (postmodify) { - gen_store_gpr_tl(t0, Rb); + gen_store_gpr_i32(t0, Rb); } switch (optn3) { /* XRa[7:0] = tmp8 */ case MXU_OPTN3_PTN0: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB); gen_load_mxu_gpr(t0, XRa); - tcg_gen_deposit_tl(t0, t0, t1, 0, 8); + tcg_gen_deposit_i32(t0, t0, t1, 0, 8); break; /* XRa[15:8] = tmp8 */ case MXU_OPTN3_PTN1: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB); gen_load_mxu_gpr(t0, XRa); - tcg_gen_deposit_tl(t0, t0, t1, 8, 8); + tcg_gen_deposit_i32(t0, t0, t1, 8, 8); break; /* XRa[23:16] = tmp8 */ case MXU_OPTN3_PTN2: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB); gen_load_mxu_gpr(t0, XRa); - tcg_gen_deposit_tl(t0, t0, t1, 16, 8); + tcg_gen_deposit_i32(t0, t0, t1, 16, 8); break; /* XRa[31:24] = tmp8 */ case MXU_OPTN3_PTN3: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB); + tcg_gen_qemu_ld_i32(t1, t0, 
ctx->mem_idx, MO_UB); gen_load_mxu_gpr(t0, XRa); - tcg_gen_deposit_tl(t0, t0, t1, 24, 8); + tcg_gen_deposit_i32(t0, t0, t1, 24, 8); break; /* XRa = {8'b0, tmp8, 8'b0, tmp8} */ case MXU_OPTN3_PTN4: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB); - tcg_gen_deposit_tl(t0, t1, t1, 16, 16); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB); + tcg_gen_deposit_i32(t0, t1, t1, 16, 16); break; /* XRa = {tmp8, 8'b0, tmp8, 8'b0} */ case MXU_OPTN3_PTN5: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB); - tcg_gen_shli_tl(t1, t1, 8); - tcg_gen_deposit_tl(t0, t1, t1, 16, 16); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB); + tcg_gen_shli_i32(t1, t1, 8); + tcg_gen_deposit_i32(t0, t1, t1, 16, 16); break; /* XRa = {{8{sign of tmp8}}, tmp8, {8{sign of tmp8}}, tmp8} */ case MXU_OPTN3_PTN6: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_SB); - tcg_gen_mov_tl(t0, t1); - tcg_gen_andi_tl(t0, t0, 0xFF00FFFF); - tcg_gen_shli_tl(t1, t1, 16); - tcg_gen_or_tl(t0, t0, t1); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_SB); + tcg_gen_mov_i32(t0, t1); + tcg_gen_andi_i32(t0, t0, 0xFF00FFFF); + tcg_gen_shli_i32(t1, t1, 16); + tcg_gen_or_i32(t0, t0, t1); break; /* XRa = {tmp8, tmp8, tmp8, tmp8} */ case MXU_OPTN3_PTN7: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UB); - tcg_gen_deposit_tl(t1, t1, t1, 8, 8); - tcg_gen_deposit_tl(t0, t1, t1, 16, 16); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UB); + tcg_gen_deposit_i32(t1, t1, t1, 8, 8); + tcg_gen_deposit_i32(t0, t1, t1, 16, 16); break; } @@ -813,33 +813,33 @@ static void gen_mxu_s8std(DisasContext *ctx, bool postmodify) return; } - gen_load_gpr_tl(t0, Rb); - tcg_gen_addi_tl(t0, t0, (int8_t)s8); + gen_load_gpr_i32(t0, Rb); + tcg_gen_addi_i32(t0, t0, (int8_t)s8); if (postmodify) { - gen_store_gpr_tl(t0, Rb); + gen_store_gpr_i32(t0, Rb); } gen_load_mxu_gpr(t1, XRa); switch (optn3) { /* XRa[7:0] => tmp8 */ case MXU_OPTN3_PTN0: - tcg_gen_extract_tl(t1, t1, 0, 8); + tcg_gen_extract_i32(t1, t1, 0, 8); break; /* XRa[15:8] => tmp8 */ case MXU_OPTN3_PTN1: - tcg_gen_extract_tl(t1, t1, 8, 8); + tcg_gen_extract_i32(t1, t1, 8, 8); break; /* XRa[23:16] => tmp8 */ case MXU_OPTN3_PTN2: - tcg_gen_extract_tl(t1, t1, 16, 8); + tcg_gen_extract_i32(t1, t1, 16, 8); break; /* XRa[31:24] => tmp8 */ case MXU_OPTN3_PTN3: - tcg_gen_extract_tl(t1, t1, 24, 8); + tcg_gen_extract_i32(t1, t1, 24, 8); break; } - tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_UB); + tcg_gen_qemu_st_i32(t1, t0, ctx->mem_idx, MO_UB); } /* @@ -862,34 +862,34 @@ static void gen_mxu_s16ldd(DisasContext *ctx, bool postmodify) optn2 = extract32(ctx->opcode, 19, 2); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr_tl(t0, Rb); - tcg_gen_addi_tl(t0, t0, s10); + gen_load_gpr_i32(t0, Rb); + tcg_gen_addi_i32(t0, t0, s10); if (postmodify) { - gen_store_gpr_tl(t0, Rb); + gen_store_gpr_i32(t0, Rb); } switch (optn2) { /* XRa[15:0] = tmp16 */ case MXU_OPTN2_PTN0: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UW); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UW); gen_load_mxu_gpr(t0, XRa); - tcg_gen_deposit_tl(t0, t0, t1, 0, 16); + tcg_gen_deposit_i32(t0, t0, t1, 0, 16); break; /* XRa[31:16] = tmp16 */ case MXU_OPTN2_PTN1: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UW); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UW); gen_load_mxu_gpr(t0, XRa); - tcg_gen_deposit_tl(t0, t0, t1, 16, 16); + tcg_gen_deposit_i32(t0, t0, t1, 16, 16); break; /* XRa = sign_extend(tmp16) */ case MXU_OPTN2_PTN2: - tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_SW); + tcg_gen_qemu_ld_i32(t0, t0, ctx->mem_idx, MO_SW); break; /* XRa = {tmp16, tmp16} */ case 
MXU_OPTN2_PTN3: - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_UW); - tcg_gen_deposit_tl(t0, t1, t1, 0, 16); - tcg_gen_deposit_tl(t0, t1, t1, 16, 16); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_UW); + tcg_gen_deposit_i32(t0, t1, t1, 0, 16); + tcg_gen_deposit_i32(t0, t1, t1, 16, 16); break; } @@ -921,25 +921,25 @@ static void gen_mxu_s16std(DisasContext *ctx, bool postmodify) return; } - gen_load_gpr_tl(t0, Rb); - tcg_gen_addi_tl(t0, t0, s10); + gen_load_gpr_i32(t0, Rb); + tcg_gen_addi_i32(t0, t0, s10); if (postmodify) { - gen_store_gpr_tl(t0, Rb); + gen_store_gpr_i32(t0, Rb); } gen_load_mxu_gpr(t1, XRa); switch (optn2) { /* XRa[15:0] => tmp16 */ case MXU_OPTN2_PTN0: - tcg_gen_extract_tl(t1, t1, 0, 16); + tcg_gen_extract_i32(t1, t1, 0, 16); break; /* XRa[31:16] => tmp16 */ case MXU_OPTN2_PTN1: - tcg_gen_extract_tl(t1, t1, 16, 16); + tcg_gen_extract_i32(t1, t1, 16, 16); break; } - tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_UW); + tcg_gen_qemu_st_i32(t1, t0, ctx->mem_idx, MO_UW); } /* @@ -965,20 +965,20 @@ static void gen_mxu_s32mul(DisasContext *ctx, bool mulu) rt = extract32(ctx->opcode, 21, 5); if (unlikely(rs == 0 || rt == 0)) { - tcg_gen_movi_tl(t0, 0); - tcg_gen_movi_tl(t1, 0); + tcg_gen_movi_i32(t0, 0); + tcg_gen_movi_i32(t1, 0); } else { - gen_load_gpr_tl(t0, rs); - gen_load_gpr_tl(t1, rt); + gen_load_gpr_i32(t0, rs); + gen_load_gpr_i32(t1, rt); if (mulu) { - tcg_gen_mulu2_tl(t0, t1, t0, t1); + tcg_gen_mulu2_i32(t0, t1, t0, t1); } else { - tcg_gen_muls2_tl(t0, t1, t0, t1); + tcg_gen_muls2_i32(t0, t1, t0, t1); } } - tcg_gen_mov_tl(cpu_HI[0], t1); - tcg_gen_mov_tl(cpu_LO[0], t0); + tcg_gen_mov_i32(cpu_HI[0], t1); + tcg_gen_mov_i32(cpu_LO[0], t0); gen_store_mxu_gpr(t1, XRa); gen_store_mxu_gpr(t0, XRd); } @@ -1014,38 +1014,38 @@ static void gen_mxu_d16mul(DisasContext *ctx, bool fractional, */ gen_load_mxu_gpr(t1, XRb); - tcg_gen_sextract_tl(t0, t1, 0, 16); - tcg_gen_sextract_tl(t1, t1, 16, 16); + tcg_gen_sextract_i32(t0, t1, 0, 16); + tcg_gen_sextract_i32(t1, t1, 16, 16); gen_load_mxu_gpr(t3, XRc); - tcg_gen_sextract_tl(t2, t3, 0, 16); - tcg_gen_sextract_tl(t3, t3, 16, 16); + tcg_gen_sextract_i32(t2, t3, 0, 16); + tcg_gen_sextract_i32(t3, t3, 16, 16); switch (optn2) { case MXU_OPTN2_WW: /* XRB.H*XRC.H == lop, XRB.L*XRC.L == rop */ - tcg_gen_mul_tl(t3, t1, t3); - tcg_gen_mul_tl(t2, t0, t2); + tcg_gen_mul_i32(t3, t1, t3); + tcg_gen_mul_i32(t2, t0, t2); break; case MXU_OPTN2_LW: /* XRB.L*XRC.H == lop, XRB.L*XRC.L == rop */ - tcg_gen_mul_tl(t3, t0, t3); - tcg_gen_mul_tl(t2, t0, t2); + tcg_gen_mul_i32(t3, t0, t3); + tcg_gen_mul_i32(t2, t0, t2); break; case MXU_OPTN2_HW: /* XRB.H*XRC.H == lop, XRB.H*XRC.L == rop */ - tcg_gen_mul_tl(t3, t1, t3); - tcg_gen_mul_tl(t2, t1, t2); + tcg_gen_mul_i32(t3, t1, t3); + tcg_gen_mul_i32(t2, t1, t2); break; case MXU_OPTN2_XW: /* XRB.L*XRC.H == lop, XRB.H*XRC.L == rop */ - tcg_gen_mul_tl(t3, t0, t3); - tcg_gen_mul_tl(t2, t1, t2); + tcg_gen_mul_i32(t3, t0, t3); + tcg_gen_mul_i32(t2, t1, t2); break; } if (fractional) { TCGLabel *l_done = gen_new_label(); TCGv rounding = tcg_temp_new(); - tcg_gen_shli_tl(t3, t3, 1); - tcg_gen_shli_tl(t2, t2, 1); - tcg_gen_andi_tl(rounding, mxu_CR, 0x2); - tcg_gen_brcondi_tl(TCG_COND_EQ, rounding, 0, l_done); + tcg_gen_shli_i32(t3, t3, 1); + tcg_gen_shli_i32(t2, t2, 1); + tcg_gen_andi_i32(rounding, mxu_CR, 0x2); + tcg_gen_brcondi_i32(TCG_COND_EQ, rounding, 0, l_done); if (packed_result) { TCGLabel *l_apply_bias_l = gen_new_label(); TCGLabel *l_apply_bias_r = gen_new_label(); @@ -1056,22 +1056,22 @@ static void 
gen_mxu_d16mul(DisasContext *ctx, bool fractional, * D16MULF supports unbiased rounding aka "bankers rounding", * "round to even", "convergent rounding" */ - tcg_gen_andi_tl(bias, mxu_CR, 0x4); - tcg_gen_brcondi_tl(TCG_COND_NE, bias, 0, l_apply_bias_l); - tcg_gen_andi_tl(t0, t3, 0x1ffff); - tcg_gen_brcondi_tl(TCG_COND_EQ, t0, 0x8000, l_half_done); + tcg_gen_andi_i32(bias, mxu_CR, 0x4); + tcg_gen_brcondi_i32(TCG_COND_NE, bias, 0, l_apply_bias_l); + tcg_gen_andi_i32(t0, t3, 0x1ffff); + tcg_gen_brcondi_i32(TCG_COND_EQ, t0, 0x8000, l_half_done); gen_set_label(l_apply_bias_l); - tcg_gen_addi_tl(t3, t3, 0x8000); + tcg_gen_addi_i32(t3, t3, 0x8000); gen_set_label(l_half_done); - tcg_gen_brcondi_tl(TCG_COND_NE, bias, 0, l_apply_bias_r); - tcg_gen_andi_tl(t0, t2, 0x1ffff); - tcg_gen_brcondi_tl(TCG_COND_EQ, t0, 0x8000, l_done); + tcg_gen_brcondi_i32(TCG_COND_NE, bias, 0, l_apply_bias_r); + tcg_gen_andi_i32(t0, t2, 0x1ffff); + tcg_gen_brcondi_i32(TCG_COND_EQ, t0, 0x8000, l_done); gen_set_label(l_apply_bias_r); - tcg_gen_addi_tl(t2, t2, 0x8000); + tcg_gen_addi_i32(t2, t2, 0x8000); } else { /* D16MULE doesn't support unbiased rounding */ - tcg_gen_addi_tl(t3, t3, 0x8000); - tcg_gen_addi_tl(t2, t2, 0x8000); + tcg_gen_addi_i32(t3, t3, 0x8000); + tcg_gen_addi_i32(t2, t2, 0x8000); } gen_set_label(l_done); } @@ -1079,9 +1079,9 @@ static void gen_mxu_d16mul(DisasContext *ctx, bool fractional, gen_store_mxu_gpr(t3, XRa); gen_store_mxu_gpr(t2, XRd); } else { - tcg_gen_andi_tl(t3, t3, 0xffff0000); - tcg_gen_shri_tl(t2, t2, 16); - tcg_gen_or_tl(t3, t3, t2); + tcg_gen_andi_i32(t3, t3, 0xffff0000); + tcg_gen_shri_i32(t2, t2, 16); + tcg_gen_or_i32(t3, t3, t2); gen_store_mxu_gpr(t3, XRa); } } @@ -1113,55 +1113,55 @@ static void gen_mxu_d16mac(DisasContext *ctx, bool fractional, aptn2 = extract32(ctx->opcode, 24, 2); gen_load_mxu_gpr(t1, XRb); - tcg_gen_sextract_tl(t0, t1, 0, 16); - tcg_gen_sextract_tl(t1, t1, 16, 16); + tcg_gen_sextract_i32(t0, t1, 0, 16); + tcg_gen_sextract_i32(t1, t1, 16, 16); gen_load_mxu_gpr(t3, XRc); - tcg_gen_sextract_tl(t2, t3, 0, 16); - tcg_gen_sextract_tl(t3, t3, 16, 16); + tcg_gen_sextract_i32(t2, t3, 0, 16); + tcg_gen_sextract_i32(t3, t3, 16, 16); switch (optn2) { case MXU_OPTN2_WW: /* XRB.H*XRC.H == lop, XRB.L*XRC.L == rop */ - tcg_gen_mul_tl(t3, t1, t3); - tcg_gen_mul_tl(t2, t0, t2); + tcg_gen_mul_i32(t3, t1, t3); + tcg_gen_mul_i32(t2, t0, t2); break; case MXU_OPTN2_LW: /* XRB.L*XRC.H == lop, XRB.L*XRC.L == rop */ - tcg_gen_mul_tl(t3, t0, t3); - tcg_gen_mul_tl(t2, t0, t2); + tcg_gen_mul_i32(t3, t0, t3); + tcg_gen_mul_i32(t2, t0, t2); break; case MXU_OPTN2_HW: /* XRB.H*XRC.H == lop, XRB.H*XRC.L == rop */ - tcg_gen_mul_tl(t3, t1, t3); - tcg_gen_mul_tl(t2, t1, t2); + tcg_gen_mul_i32(t3, t1, t3); + tcg_gen_mul_i32(t2, t1, t2); break; case MXU_OPTN2_XW: /* XRB.L*XRC.H == lop, XRB.H*XRC.L == rop */ - tcg_gen_mul_tl(t3, t0, t3); - tcg_gen_mul_tl(t2, t1, t2); + tcg_gen_mul_i32(t3, t0, t3); + tcg_gen_mul_i32(t2, t1, t2); break; } if (fractional) { - tcg_gen_shli_tl(t3, t3, 1); - tcg_gen_shli_tl(t2, t2, 1); + tcg_gen_shli_i32(t3, t3, 1); + tcg_gen_shli_i32(t2, t2, 1); } gen_load_mxu_gpr(t0, XRa); gen_load_mxu_gpr(t1, XRd); switch (aptn2) { case MXU_APTN2_AA: - tcg_gen_add_tl(t3, t0, t3); - tcg_gen_add_tl(t2, t1, t2); + tcg_gen_add_i32(t3, t0, t3); + tcg_gen_add_i32(t2, t1, t2); break; case MXU_APTN2_AS: - tcg_gen_add_tl(t3, t0, t3); - tcg_gen_sub_tl(t2, t1, t2); + tcg_gen_add_i32(t3, t0, t3); + tcg_gen_sub_i32(t2, t1, t2); break; case MXU_APTN2_SA: - tcg_gen_sub_tl(t3, t0, t3); - 
tcg_gen_add_tl(t2, t1, t2); + tcg_gen_sub_i32(t3, t0, t3); + tcg_gen_add_i32(t2, t1, t2); break; case MXU_APTN2_SS: - tcg_gen_sub_tl(t3, t0, t3); - tcg_gen_sub_tl(t2, t1, t2); + tcg_gen_sub_i32(t3, t0, t3); + tcg_gen_sub_i32(t2, t1, t2); break; } @@ -1169,8 +1169,8 @@ static void gen_mxu_d16mac(DisasContext *ctx, bool fractional, TCGLabel *l_done = gen_new_label(); TCGv rounding = tcg_temp_new(); - tcg_gen_andi_tl(rounding, mxu_CR, 0x2); - tcg_gen_brcondi_tl(TCG_COND_EQ, rounding, 0, l_done); + tcg_gen_andi_i32(rounding, mxu_CR, 0x2); + tcg_gen_brcondi_i32(TCG_COND_EQ, rounding, 0, l_done); if (packed_result) { TCGLabel *l_apply_bias_l = gen_new_label(); TCGLabel *l_apply_bias_r = gen_new_label(); @@ -1181,22 +1181,22 @@ static void gen_mxu_d16mac(DisasContext *ctx, bool fractional, * D16MACF supports unbiased rounding aka "bankers rounding", * "round to even", "convergent rounding" */ - tcg_gen_andi_tl(bias, mxu_CR, 0x4); - tcg_gen_brcondi_tl(TCG_COND_NE, bias, 0, l_apply_bias_l); - tcg_gen_andi_tl(t0, t3, 0x1ffff); - tcg_gen_brcondi_tl(TCG_COND_EQ, t0, 0x8000, l_half_done); + tcg_gen_andi_i32(bias, mxu_CR, 0x4); + tcg_gen_brcondi_i32(TCG_COND_NE, bias, 0, l_apply_bias_l); + tcg_gen_andi_i32(t0, t3, 0x1ffff); + tcg_gen_brcondi_i32(TCG_COND_EQ, t0, 0x8000, l_half_done); gen_set_label(l_apply_bias_l); - tcg_gen_addi_tl(t3, t3, 0x8000); + tcg_gen_addi_i32(t3, t3, 0x8000); gen_set_label(l_half_done); - tcg_gen_brcondi_tl(TCG_COND_NE, bias, 0, l_apply_bias_r); - tcg_gen_andi_tl(t0, t2, 0x1ffff); - tcg_gen_brcondi_tl(TCG_COND_EQ, t0, 0x8000, l_done); + tcg_gen_brcondi_i32(TCG_COND_NE, bias, 0, l_apply_bias_r); + tcg_gen_andi_i32(t0, t2, 0x1ffff); + tcg_gen_brcondi_i32(TCG_COND_EQ, t0, 0x8000, l_done); gen_set_label(l_apply_bias_r); - tcg_gen_addi_tl(t2, t2, 0x8000); + tcg_gen_addi_i32(t2, t2, 0x8000); } else { /* D16MACE doesn't support unbiased rounding */ - tcg_gen_addi_tl(t3, t3, 0x8000); - tcg_gen_addi_tl(t2, t2, 0x8000); + tcg_gen_addi_i32(t3, t3, 0x8000); + tcg_gen_addi_i32(t2, t2, 0x8000); } gen_set_label(l_done); } @@ -1205,9 +1205,9 @@ static void gen_mxu_d16mac(DisasContext *ctx, bool fractional, gen_store_mxu_gpr(t3, XRa); gen_store_mxu_gpr(t2, XRd); } else { - tcg_gen_andi_tl(t3, t3, 0xffff0000); - tcg_gen_shri_tl(t2, t2, 16); - tcg_gen_or_tl(t3, t3, t2); + tcg_gen_andi_i32(t3, t3, 0xffff0000); + tcg_gen_shri_i32(t2, t2, 16); + tcg_gen_or_i32(t3, t3, t2); gen_store_mxu_gpr(t3, XRa); } } @@ -1234,60 +1234,60 @@ static void gen_mxu_d16madl(DisasContext *ctx) aptn2 = extract32(ctx->opcode, 24, 2); gen_load_mxu_gpr(t1, XRb); - tcg_gen_sextract_tl(t0, t1, 0, 16); - tcg_gen_sextract_tl(t1, t1, 16, 16); + tcg_gen_sextract_i32(t0, t1, 0, 16); + tcg_gen_sextract_i32(t1, t1, 16, 16); gen_load_mxu_gpr(t3, XRc); - tcg_gen_sextract_tl(t2, t3, 0, 16); - tcg_gen_sextract_tl(t3, t3, 16, 16); + tcg_gen_sextract_i32(t2, t3, 0, 16); + tcg_gen_sextract_i32(t3, t3, 16, 16); switch (optn2) { case MXU_OPTN2_WW: /* XRB.H*XRC.H == lop, XRB.L*XRC.L == rop */ - tcg_gen_mul_tl(t3, t1, t3); - tcg_gen_mul_tl(t2, t0, t2); + tcg_gen_mul_i32(t3, t1, t3); + tcg_gen_mul_i32(t2, t0, t2); break; case MXU_OPTN2_LW: /* XRB.L*XRC.H == lop, XRB.L*XRC.L == rop */ - tcg_gen_mul_tl(t3, t0, t3); - tcg_gen_mul_tl(t2, t0, t2); + tcg_gen_mul_i32(t3, t0, t3); + tcg_gen_mul_i32(t2, t0, t2); break; case MXU_OPTN2_HW: /* XRB.H*XRC.H == lop, XRB.H*XRC.L == rop */ - tcg_gen_mul_tl(t3, t1, t3); - tcg_gen_mul_tl(t2, t1, t2); + tcg_gen_mul_i32(t3, t1, t3); + tcg_gen_mul_i32(t2, t1, t2); break; case MXU_OPTN2_XW: /* XRB.L*XRC.H == lop, 
XRB.H*XRC.L == rop */ - tcg_gen_mul_tl(t3, t0, t3); - tcg_gen_mul_tl(t2, t1, t2); + tcg_gen_mul_i32(t3, t0, t3); + tcg_gen_mul_i32(t2, t1, t2); break; } - tcg_gen_extract_tl(t2, t2, 0, 16); - tcg_gen_extract_tl(t3, t3, 0, 16); + tcg_gen_extract_i32(t2, t2, 0, 16); + tcg_gen_extract_i32(t3, t3, 0, 16); gen_load_mxu_gpr(t1, XRa); - tcg_gen_extract_tl(t0, t1, 0, 16); - tcg_gen_extract_tl(t1, t1, 16, 16); + tcg_gen_extract_i32(t0, t1, 0, 16); + tcg_gen_extract_i32(t1, t1, 16, 16); switch (aptn2) { case MXU_APTN2_AA: - tcg_gen_add_tl(t3, t1, t3); - tcg_gen_add_tl(t2, t0, t2); + tcg_gen_add_i32(t3, t1, t3); + tcg_gen_add_i32(t2, t0, t2); break; case MXU_APTN2_AS: - tcg_gen_add_tl(t3, t1, t3); - tcg_gen_sub_tl(t2, t0, t2); + tcg_gen_add_i32(t3, t1, t3); + tcg_gen_sub_i32(t2, t0, t2); break; case MXU_APTN2_SA: - tcg_gen_sub_tl(t3, t1, t3); - tcg_gen_add_tl(t2, t0, t2); + tcg_gen_sub_i32(t3, t1, t3); + tcg_gen_add_i32(t2, t0, t2); break; case MXU_APTN2_SS: - tcg_gen_sub_tl(t3, t1, t3); - tcg_gen_sub_tl(t2, t0, t2); + tcg_gen_sub_i32(t3, t1, t3); + tcg_gen_sub_i32(t2, t0, t2); break; } - tcg_gen_andi_tl(t2, t2, 0xffff); - tcg_gen_shli_tl(t3, t3, 16); - tcg_gen_or_tl(mxu_gpr[XRd - 1], t3, t2); + tcg_gen_andi_i32(t2, t2, 0xffff); + tcg_gen_shli_i32(t3, t3, 16); + tcg_gen_or_i32(mxu_gpr[XRd - 1], t3, t2); } /* @@ -1319,32 +1319,32 @@ static void gen_mxu_s16mad(DisasContext *ctx) switch (optn2) { case MXU_OPTN2_WW: /* XRB.H*XRC.H */ - tcg_gen_sextract_tl(t0, t0, 16, 16); - tcg_gen_sextract_tl(t1, t1, 16, 16); + tcg_gen_sextract_i32(t0, t0, 16, 16); + tcg_gen_sextract_i32(t1, t1, 16, 16); break; case MXU_OPTN2_LW: /* XRB.L*XRC.L */ - tcg_gen_sextract_tl(t0, t0, 0, 16); - tcg_gen_sextract_tl(t1, t1, 0, 16); + tcg_gen_sextract_i32(t0, t0, 0, 16); + tcg_gen_sextract_i32(t1, t1, 0, 16); break; case MXU_OPTN2_HW: /* XRB.H*XRC.L */ - tcg_gen_sextract_tl(t0, t0, 16, 16); - tcg_gen_sextract_tl(t1, t1, 0, 16); + tcg_gen_sextract_i32(t0, t0, 16, 16); + tcg_gen_sextract_i32(t1, t1, 0, 16); break; case MXU_OPTN2_XW: /* XRB.L*XRC.H */ - tcg_gen_sextract_tl(t0, t0, 0, 16); - tcg_gen_sextract_tl(t1, t1, 16, 16); + tcg_gen_sextract_i32(t0, t0, 0, 16); + tcg_gen_sextract_i32(t1, t1, 16, 16); break; } - tcg_gen_mul_tl(t0, t0, t1); + tcg_gen_mul_i32(t0, t0, t1); gen_load_mxu_gpr(t1, XRa); switch (aptn1) { case MXU_APTN1_A: - tcg_gen_add_tl(t1, t1, t0); + tcg_gen_add_i32(t1, t1, t0); break; case MXU_APTN1_S: - tcg_gen_sub_tl(t1, t1, t0); + tcg_gen_sub_i32(t1, t1, t0); break; } @@ -1384,53 +1384,53 @@ static void gen_mxu_q8mul_mac(DisasContext *ctx, bool su, bool mac) if (su) { /* Q8MULSU / Q8MACSU */ - tcg_gen_sextract_tl(t0, t3, 0, 8); - tcg_gen_sextract_tl(t1, t3, 8, 8); - tcg_gen_sextract_tl(t2, t3, 16, 8); - tcg_gen_sextract_tl(t3, t3, 24, 8); + tcg_gen_sextract_i32(t0, t3, 0, 8); + tcg_gen_sextract_i32(t1, t3, 8, 8); + tcg_gen_sextract_i32(t2, t3, 16, 8); + tcg_gen_sextract_i32(t3, t3, 24, 8); } else { /* Q8MUL / Q8MAC */ - tcg_gen_extract_tl(t0, t3, 0, 8); - tcg_gen_extract_tl(t1, t3, 8, 8); - tcg_gen_extract_tl(t2, t3, 16, 8); - tcg_gen_extract_tl(t3, t3, 24, 8); + tcg_gen_extract_i32(t0, t3, 0, 8); + tcg_gen_extract_i32(t1, t3, 8, 8); + tcg_gen_extract_i32(t2, t3, 16, 8); + tcg_gen_extract_i32(t3, t3, 24, 8); } - tcg_gen_extract_tl(t4, t7, 0, 8); - tcg_gen_extract_tl(t5, t7, 8, 8); - tcg_gen_extract_tl(t6, t7, 16, 8); - tcg_gen_extract_tl(t7, t7, 24, 8); + tcg_gen_extract_i32(t4, t7, 0, 8); + tcg_gen_extract_i32(t5, t7, 8, 8); + tcg_gen_extract_i32(t6, t7, 16, 8); + tcg_gen_extract_i32(t7, t7, 24, 8); - 
tcg_gen_mul_tl(t0, t0, t4); - tcg_gen_mul_tl(t1, t1, t5); - tcg_gen_mul_tl(t2, t2, t6); - tcg_gen_mul_tl(t3, t3, t7); + tcg_gen_mul_i32(t0, t0, t4); + tcg_gen_mul_i32(t1, t1, t5); + tcg_gen_mul_i32(t2, t2, t6); + tcg_gen_mul_i32(t3, t3, t7); if (mac) { gen_load_mxu_gpr(t4, XRd); gen_load_mxu_gpr(t5, XRa); - tcg_gen_extract_tl(t6, t4, 0, 16); - tcg_gen_extract_tl(t7, t4, 16, 16); + tcg_gen_extract_i32(t6, t4, 0, 16); + tcg_gen_extract_i32(t7, t4, 16, 16); if (aptn2 & 1) { - tcg_gen_sub_tl(t0, t6, t0); - tcg_gen_sub_tl(t1, t7, t1); + tcg_gen_sub_i32(t0, t6, t0); + tcg_gen_sub_i32(t1, t7, t1); } else { - tcg_gen_add_tl(t0, t6, t0); - tcg_gen_add_tl(t1, t7, t1); + tcg_gen_add_i32(t0, t6, t0); + tcg_gen_add_i32(t1, t7, t1); } - tcg_gen_extract_tl(t6, t5, 0, 16); - tcg_gen_extract_tl(t7, t5, 16, 16); + tcg_gen_extract_i32(t6, t5, 0, 16); + tcg_gen_extract_i32(t7, t5, 16, 16); if (aptn2 & 2) { - tcg_gen_sub_tl(t2, t6, t2); - tcg_gen_sub_tl(t3, t7, t3); + tcg_gen_sub_i32(t2, t6, t2); + tcg_gen_sub_i32(t3, t7, t3); } else { - tcg_gen_add_tl(t2, t6, t2); - tcg_gen_add_tl(t3, t7, t3); + tcg_gen_add_i32(t2, t6, t2); + tcg_gen_add_i32(t3, t7, t3); } } - tcg_gen_deposit_tl(t0, t0, t1, 16, 16); - tcg_gen_deposit_tl(t1, t2, t3, 16, 16); + tcg_gen_deposit_i32(t0, t0, t1, 16, 16); + tcg_gen_deposit_i32(t1, t2, t3, 16, 16); gen_store_mxu_gpr(t0, XRd); gen_store_mxu_gpr(t1, XRa); @@ -1464,45 +1464,45 @@ static void gen_mxu_q8madl(DisasContext *ctx) gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t7, XRc); - tcg_gen_extract_tl(t0, t3, 0, 8); - tcg_gen_extract_tl(t1, t3, 8, 8); - tcg_gen_extract_tl(t2, t3, 16, 8); - tcg_gen_extract_tl(t3, t3, 24, 8); + tcg_gen_extract_i32(t0, t3, 0, 8); + tcg_gen_extract_i32(t1, t3, 8, 8); + tcg_gen_extract_i32(t2, t3, 16, 8); + tcg_gen_extract_i32(t3, t3, 24, 8); - tcg_gen_extract_tl(t4, t7, 0, 8); - tcg_gen_extract_tl(t5, t7, 8, 8); - tcg_gen_extract_tl(t6, t7, 16, 8); - tcg_gen_extract_tl(t7, t7, 24, 8); + tcg_gen_extract_i32(t4, t7, 0, 8); + tcg_gen_extract_i32(t5, t7, 8, 8); + tcg_gen_extract_i32(t6, t7, 16, 8); + tcg_gen_extract_i32(t7, t7, 24, 8); - tcg_gen_mul_tl(t0, t0, t4); - tcg_gen_mul_tl(t1, t1, t5); - tcg_gen_mul_tl(t2, t2, t6); - tcg_gen_mul_tl(t3, t3, t7); + tcg_gen_mul_i32(t0, t0, t4); + tcg_gen_mul_i32(t1, t1, t5); + tcg_gen_mul_i32(t2, t2, t6); + tcg_gen_mul_i32(t3, t3, t7); gen_load_mxu_gpr(t4, XRa); - tcg_gen_extract_tl(t6, t4, 0, 8); - tcg_gen_extract_tl(t7, t4, 8, 8); + tcg_gen_extract_i32(t6, t4, 0, 8); + tcg_gen_extract_i32(t7, t4, 8, 8); if (aptn2 & 1) { - tcg_gen_sub_tl(t0, t6, t0); - tcg_gen_sub_tl(t1, t7, t1); + tcg_gen_sub_i32(t0, t6, t0); + tcg_gen_sub_i32(t1, t7, t1); } else { - tcg_gen_add_tl(t0, t6, t0); - tcg_gen_add_tl(t1, t7, t1); + tcg_gen_add_i32(t0, t6, t0); + tcg_gen_add_i32(t1, t7, t1); } - tcg_gen_extract_tl(t6, t4, 16, 8); - tcg_gen_extract_tl(t7, t4, 24, 8); + tcg_gen_extract_i32(t6, t4, 16, 8); + tcg_gen_extract_i32(t7, t4, 24, 8); if (aptn2 & 2) { - tcg_gen_sub_tl(t2, t6, t2); - tcg_gen_sub_tl(t3, t7, t3); + tcg_gen_sub_i32(t2, t6, t2); + tcg_gen_sub_i32(t3, t7, t3); } else { - tcg_gen_add_tl(t2, t6, t2); - tcg_gen_add_tl(t3, t7, t3); + tcg_gen_add_i32(t2, t6, t2); + tcg_gen_add_i32(t3, t7, t3); } - tcg_gen_andi_tl(t5, t0, 0xff); - tcg_gen_deposit_tl(t5, t5, t1, 8, 8); - tcg_gen_deposit_tl(t5, t5, t2, 16, 8); - tcg_gen_deposit_tl(t5, t5, t3, 24, 8); + tcg_gen_andi_i32(t5, t0, 0xff); + tcg_gen_deposit_i32(t5, t5, t1, 8, 8); + tcg_gen_deposit_i32(t5, t5, t2, 16, 8); + tcg_gen_deposit_i32(t5, t5, t3, 24, 8); gen_store_mxu_gpr(t5, XRd); } 
@@ -1528,17 +1528,17 @@ static void gen_mxu_s32ldxx(DisasContext *ctx, bool reversed, bool postinc) s12 = sextract32(ctx->opcode, 10, 10); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr_tl(t0, Rb); - tcg_gen_movi_tl(t1, s12 * 4); - tcg_gen_add_tl(t0, t0, t1); + gen_load_gpr_i32(t0, Rb); + tcg_gen_movi_i32(t1, s12 * 4); + tcg_gen_add_i32(t0, t0, t1); - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_SL | mo_endian_rev(ctx, reversed) | ctx->default_tcg_memop_mask); gen_store_mxu_gpr(t1, XRa); if (postinc) { - gen_store_gpr_tl(t0, Rb); + gen_store_gpr_i32(t0, Rb); } } @@ -1563,17 +1563,17 @@ static void gen_mxu_s32stxx(DisasContext *ctx, bool reversed, bool postinc) s12 = sextract32(ctx->opcode, 10, 10); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr_tl(t0, Rb); - tcg_gen_movi_tl(t1, s12 * 4); - tcg_gen_add_tl(t0, t0, t1); + gen_load_gpr_i32(t0, Rb); + tcg_gen_movi_i32(t1, s12 * 4); + tcg_gen_add_i32(t0, t0, t1); gen_load_mxu_gpr(t1, XRa); - tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, + tcg_gen_qemu_st_i32(t1, t0, ctx->mem_idx, MO_SL | mo_endian_rev(ctx, reversed) | ctx->default_tcg_memop_mask); if (postinc) { - gen_store_gpr_tl(t0, Rb); + gen_store_gpr_i32(t0, Rb); } } @@ -1599,18 +1599,18 @@ static void gen_mxu_s32ldxvx(DisasContext *ctx, bool reversed, Rc = extract32(ctx->opcode, 16, 5); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr_tl(t0, Rb); - gen_load_gpr_tl(t1, Rc); - tcg_gen_shli_tl(t1, t1, strd2); - tcg_gen_add_tl(t0, t0, t1); + gen_load_gpr_i32(t0, Rb); + gen_load_gpr_i32(t1, Rc); + tcg_gen_shli_i32(t1, t1, strd2); + tcg_gen_add_i32(t0, t0, t1); - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, MO_SL | mo_endian_rev(ctx, reversed) | ctx->default_tcg_memop_mask); gen_store_mxu_gpr(t1, XRa); if (postinc) { - gen_store_gpr_tl(t0, Rb); + gen_store_gpr_i32(t0, Rb); } } @@ -1637,13 +1637,13 @@ static void gen_mxu_lxx(DisasContext *ctx, uint32_t strd2, MemOp mop) Rc = extract32(ctx->opcode, 16, 5); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr_tl(t0, Rb); - gen_load_gpr_tl(t1, Rc); - tcg_gen_shli_tl(t1, t1, strd2); - tcg_gen_add_tl(t0, t0, t1); + gen_load_gpr_i32(t0, Rb); + gen_load_gpr_i32(t1, Rc); + tcg_gen_shli_i32(t1, t1, strd2); + tcg_gen_add_i32(t0, t0, t1); - tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, mop | ctx->default_tcg_memop_mask); - gen_store_gpr_tl(t1, Ra); + tcg_gen_qemu_ld_i32(t1, t0, ctx->mem_idx, mop | ctx->default_tcg_memop_mask); + gen_store_gpr_i32(t1, Ra); } /* @@ -1668,18 +1668,18 @@ static void gen_mxu_s32stxvx(DisasContext *ctx, bool reversed, Rc = extract32(ctx->opcode, 16, 5); Rb = extract32(ctx->opcode, 21, 5); - gen_load_gpr_tl(t0, Rb); - gen_load_gpr_tl(t1, Rc); - tcg_gen_shli_tl(t1, t1, strd2); - tcg_gen_add_tl(t0, t0, t1); + gen_load_gpr_i32(t0, Rb); + gen_load_gpr_i32(t1, Rc); + tcg_gen_shli_i32(t1, t1, strd2); + tcg_gen_add_i32(t0, t0, t1); gen_load_mxu_gpr(t1, XRa); - tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, + tcg_gen_qemu_st_i32(t1, t0, ctx->mem_idx, MO_SL | mo_endian_rev(ctx, reversed) | ctx->default_tcg_memop_mask); if (postinc) { - gen_store_gpr_tl(t0, Rb); + gen_store_gpr_i32(t0, Rb); } } @@ -1867,15 +1867,15 @@ static void gen_mxu_d32sxx(DisasContext *ctx, bool right, bool arithmetic) if (right) { if (arithmetic) { - tcg_gen_sari_tl(t0, t0, sft4); - tcg_gen_sari_tl(t1, t1, sft4); + tcg_gen_sari_i32(t0, t0, sft4); + tcg_gen_sari_i32(t1, t1, sft4); } else { - tcg_gen_shri_tl(t0, t0, sft4); - tcg_gen_shri_tl(t1, t1, sft4); + tcg_gen_shri_i32(t0, t0, sft4); + 
tcg_gen_shri_i32(t1, t1, sft4); } } else { - tcg_gen_shli_tl(t0, t0, sft4); - tcg_gen_shli_tl(t1, t1, sft4); + tcg_gen_shli_i32(t0, t0, sft4); + tcg_gen_shli_i32(t1, t1, sft4); } gen_store_mxu_gpr(t0, XRa); gen_store_mxu_gpr(t1, XRd); @@ -1906,20 +1906,20 @@ static void gen_mxu_d32sxxv(DisasContext *ctx, bool right, bool arithmetic) gen_load_mxu_gpr(t0, XRa); gen_load_mxu_gpr(t1, XRd); - gen_load_gpr_tl(t2, rs); - tcg_gen_andi_tl(t2, t2, 0x0f); + gen_load_gpr_i32(t2, rs); + tcg_gen_andi_i32(t2, t2, 0x0f); if (right) { if (arithmetic) { - tcg_gen_sar_tl(t0, t0, t2); - tcg_gen_sar_tl(t1, t1, t2); + tcg_gen_sar_i32(t0, t0, t2); + tcg_gen_sar_i32(t1, t1, t2); } else { - tcg_gen_shr_tl(t0, t0, t2); - tcg_gen_shr_tl(t1, t1, t2); + tcg_gen_shr_i32(t0, t0, t2); + tcg_gen_shr_i32(t1, t1, t2); } } else { - tcg_gen_shl_tl(t0, t0, t2); - tcg_gen_shl_tl(t1, t1, t2); + tcg_gen_shl_i32(t0, t0, t2); + tcg_gen_shl_i32(t1, t1, t2); } gen_store_mxu_gpr(t0, XRa); gen_store_mxu_gpr(t1, XRd); @@ -1952,17 +1952,17 @@ static void gen_mxu_d32sarl(DisasContext *ctx, bool sarw) if (!sarw) { /* Make SFT4 from rb field */ - tcg_gen_movi_tl(t2, rb >> 1); + tcg_gen_movi_i32(t2, rb >> 1); } else { - gen_load_gpr_tl(t2, rb); - tcg_gen_andi_tl(t2, t2, 0x0f); + gen_load_gpr_i32(t2, rb); + tcg_gen_andi_i32(t2, t2, 0x0f); } gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t1, XRc); - tcg_gen_sar_tl(t0, t0, t2); - tcg_gen_sar_tl(t1, t1, t2); - tcg_gen_extract_tl(t2, t1, 0, 16); - tcg_gen_deposit_tl(t2, t2, t0, 16, 16); + tcg_gen_sar_i32(t0, t0, t2); + tcg_gen_sar_i32(t1, t1, t2); + tcg_gen_extract_i32(t2, t1, 0, 16); + tcg_gen_deposit_i32(t2, t2, t0, 16, 16); gen_store_mxu_gpr(t2, XRa); } } @@ -1997,37 +1997,37 @@ static void gen_mxu_q16sxx(DisasContext *ctx, bool right, bool arithmetic) gen_load_mxu_gpr(t2, XRc); if (arithmetic) { - tcg_gen_sextract_tl(t1, t0, 16, 16); - tcg_gen_sextract_tl(t0, t0, 0, 16); - tcg_gen_sextract_tl(t3, t2, 16, 16); - tcg_gen_sextract_tl(t2, t2, 0, 16); + tcg_gen_sextract_i32(t1, t0, 16, 16); + tcg_gen_sextract_i32(t0, t0, 0, 16); + tcg_gen_sextract_i32(t3, t2, 16, 16); + tcg_gen_sextract_i32(t2, t2, 0, 16); } else { - tcg_gen_extract_tl(t1, t0, 16, 16); - tcg_gen_extract_tl(t0, t0, 0, 16); - tcg_gen_extract_tl(t3, t2, 16, 16); - tcg_gen_extract_tl(t2, t2, 0, 16); + tcg_gen_extract_i32(t1, t0, 16, 16); + tcg_gen_extract_i32(t0, t0, 0, 16); + tcg_gen_extract_i32(t3, t2, 16, 16); + tcg_gen_extract_i32(t2, t2, 0, 16); } if (right) { if (arithmetic) { - tcg_gen_sari_tl(t0, t0, sft4); - tcg_gen_sari_tl(t1, t1, sft4); - tcg_gen_sari_tl(t2, t2, sft4); - tcg_gen_sari_tl(t3, t3, sft4); + tcg_gen_sari_i32(t0, t0, sft4); + tcg_gen_sari_i32(t1, t1, sft4); + tcg_gen_sari_i32(t2, t2, sft4); + tcg_gen_sari_i32(t3, t3, sft4); } else { - tcg_gen_shri_tl(t0, t0, sft4); - tcg_gen_shri_tl(t1, t1, sft4); - tcg_gen_shri_tl(t2, t2, sft4); - tcg_gen_shri_tl(t3, t3, sft4); + tcg_gen_shri_i32(t0, t0, sft4); + tcg_gen_shri_i32(t1, t1, sft4); + tcg_gen_shri_i32(t2, t2, sft4); + tcg_gen_shri_i32(t3, t3, sft4); } } else { - tcg_gen_shli_tl(t0, t0, sft4); - tcg_gen_shli_tl(t1, t1, sft4); - tcg_gen_shli_tl(t2, t2, sft4); - tcg_gen_shli_tl(t3, t3, sft4); + tcg_gen_shli_i32(t0, t0, sft4); + tcg_gen_shli_i32(t1, t1, sft4); + tcg_gen_shli_i32(t2, t2, sft4); + tcg_gen_shli_i32(t3, t3, sft4); } - tcg_gen_deposit_tl(t0, t0, t1, 16, 16); - tcg_gen_deposit_tl(t2, t2, t3, 16, 16); + tcg_gen_deposit_i32(t0, t0, t1, 16, 16); + tcg_gen_deposit_i32(t2, t2, t3, 16, 16); gen_store_mxu_gpr(t0, XRa); gen_store_mxu_gpr(t2, XRd); @@ -2060,42 +2060,42 @@ 
static void gen_mxu_q16sxxv(DisasContext *ctx, bool right, bool arithmetic) gen_load_mxu_gpr(t0, XRa); gen_load_mxu_gpr(t2, XRd); - gen_load_gpr_tl(t5, rs); - tcg_gen_andi_tl(t5, t5, 0x0f); + gen_load_gpr_i32(t5, rs); + tcg_gen_andi_i32(t5, t5, 0x0f); if (arithmetic) { - tcg_gen_sextract_tl(t1, t0, 16, 16); - tcg_gen_sextract_tl(t0, t0, 0, 16); - tcg_gen_sextract_tl(t3, t2, 16, 16); - tcg_gen_sextract_tl(t2, t2, 0, 16); + tcg_gen_sextract_i32(t1, t0, 16, 16); + tcg_gen_sextract_i32(t0, t0, 0, 16); + tcg_gen_sextract_i32(t3, t2, 16, 16); + tcg_gen_sextract_i32(t2, t2, 0, 16); } else { - tcg_gen_extract_tl(t1, t0, 16, 16); - tcg_gen_extract_tl(t0, t0, 0, 16); - tcg_gen_extract_tl(t3, t2, 16, 16); - tcg_gen_extract_tl(t2, t2, 0, 16); + tcg_gen_extract_i32(t1, t0, 16, 16); + tcg_gen_extract_i32(t0, t0, 0, 16); + tcg_gen_extract_i32(t3, t2, 16, 16); + tcg_gen_extract_i32(t2, t2, 0, 16); } if (right) { if (arithmetic) { - tcg_gen_sar_tl(t0, t0, t5); - tcg_gen_sar_tl(t1, t1, t5); - tcg_gen_sar_tl(t2, t2, t5); - tcg_gen_sar_tl(t3, t3, t5); + tcg_gen_sar_i32(t0, t0, t5); + tcg_gen_sar_i32(t1, t1, t5); + tcg_gen_sar_i32(t2, t2, t5); + tcg_gen_sar_i32(t3, t3, t5); } else { - tcg_gen_shr_tl(t0, t0, t5); - tcg_gen_shr_tl(t1, t1, t5); - tcg_gen_shr_tl(t2, t2, t5); - tcg_gen_shr_tl(t3, t3, t5); + tcg_gen_shr_i32(t0, t0, t5); + tcg_gen_shr_i32(t1, t1, t5); + tcg_gen_shr_i32(t2, t2, t5); + tcg_gen_shr_i32(t3, t3, t5); } } else { - tcg_gen_shl_tl(t0, t0, t5); - tcg_gen_shl_tl(t1, t1, t5); - tcg_gen_shl_tl(t2, t2, t5); - tcg_gen_shl_tl(t3, t3, t5); + tcg_gen_shl_i32(t0, t0, t5); + tcg_gen_shl_i32(t1, t1, t5); + tcg_gen_shl_i32(t2, t2, t5); + tcg_gen_shl_i32(t3, t3, t5); } - tcg_gen_deposit_tl(t0, t0, t1, 16, 16); - tcg_gen_deposit_tl(t2, t2, t3, 16, 16); + tcg_gen_deposit_i32(t0, t0, t1, 16, 16); + tcg_gen_deposit_i32(t2, t2, t3, 16, 16); gen_store_mxu_gpr(t0, XRa); gen_store_mxu_gpr(t2, XRd); @@ -2142,7 +2142,7 @@ static void gen_mxu_S32MAX_S32MIN(DisasContext *ctx) /* both operands zero registers -> just set destination to zero */ tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely((XRb == 0) || (XRc == 0))) { /* exactly one operand is zero register - find which one is not...*/ uint32_t XRx = XRb ? XRb : XRc; /* ...and do max/min operation with one operand 0 */ if (opc == OPC_MXU_S32MAX) { @@ -2192,7 +2192,7 @@ static void gen_mxu_D16MAX_D16MIN(DisasContext *ctx) /* both operands zero registers -> just set destination to zero */ tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely((XRb == 0) || (XRc == 0))) { /* exactly one operand is zero register - find which one is not...*/ uint32_t XRx = XRb ? XRb : XRc; /* ...and do half-word-wise max/min with one operand 0 */ TCGv_i32 t0 = tcg_temp_new(); @@ -2285,7 +2285,7 @@ static void gen_mxu_Q8MAX_Q8MIN(DisasContext *ctx) /* both operands zero registers -> just set destination to zero */ tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely((XRb == 0) || (XRc == 0))) { /* exactly one operand is zero register - make it be the first...*/ uint32_t XRx = XRb ?
XRb : XRc; /* ...and do byte-wise max/min with one operand 0 */ TCGv_i32 t0 = tcg_temp_new(); @@ -2387,10 +2387,10 @@ static void gen_mxu_q8slt(DisasContext *ctx, bool sltu) /* destination is zero register -> do nothing */ } else if (unlikely((XRb == 0) && (XRc == 0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRb == XRc)) { /* both operands same registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else { /* the most general case */ TCGv t0 = tcg_temp_new(); @@ -2401,18 +2401,18 @@ static void gen_mxu_q8slt(DisasContext *ctx, bool sltu) gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t4, XRc); - tcg_gen_movi_tl(t2, 0); + tcg_gen_movi_i32(t2, 0); for (int i = 0; i < 4; i++) { if (sltu) { - tcg_gen_extract_tl(t0, t3, 8 * i, 8); - tcg_gen_extract_tl(t1, t4, 8 * i, 8); + tcg_gen_extract_i32(t0, t3, 8 * i, 8); + tcg_gen_extract_i32(t1, t4, 8 * i, 8); } else { - tcg_gen_sextract_tl(t0, t3, 8 * i, 8); - tcg_gen_sextract_tl(t1, t4, 8 * i, 8); + tcg_gen_sextract_i32(t0, t3, 8 * i, 8); + tcg_gen_sextract_i32(t1, t4, 8 * i, 8); } - tcg_gen_setcond_tl(TCG_COND_LT, t0, t0, t1); - tcg_gen_deposit_tl(t2, t2, t0, 8 * i, 8); + tcg_gen_setcond_i32(TCG_COND_LT, t0, t0, t1); + tcg_gen_deposit_i32(t2, t2, t0, 8 * i, 8); } gen_store_mxu_gpr(t2, XRa); } @@ -2438,10 +2438,10 @@ static void gen_mxu_S32SLT(DisasContext *ctx) /* destination is zero register -> do nothing */ } else if (unlikely((XRb == 0) && (XRc == 0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRb == XRc)) { /* both operands same registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else { /* the most general case */ TCGv t0 = tcg_temp_new(); @@ -2449,7 +2449,7 @@ static void gen_mxu_S32SLT(DisasContext *ctx) gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t1, XRc); - tcg_gen_setcond_tl(TCG_COND_LT, mxu_gpr[XRa - 1], t0, t1); + tcg_gen_setcond_i32(TCG_COND_LT, mxu_gpr[XRa - 1], t0, t1); } } @@ -2474,10 +2474,10 @@ static void gen_mxu_D16SLT(DisasContext *ctx) /* destination is zero register -> do nothing */ } else if (unlikely((XRb == 0) && (XRc == 0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRb == XRc)) { /* both operands same registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else { /* the most general case */ TCGv t0 = tcg_temp_new(); @@ -2488,14 +2488,14 @@ static void gen_mxu_D16SLT(DisasContext *ctx) gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t4, XRc); - tcg_gen_sextract_tl(t0, t3, 16, 16); - tcg_gen_sextract_tl(t1, t4, 16, 16); - tcg_gen_setcond_tl(TCG_COND_LT, t0, t0, t1); - tcg_gen_shli_tl(t2, t0, 16); - tcg_gen_sextract_tl(t0, t3, 0, 16); - tcg_gen_sextract_tl(t1, t4, 0, 16); - tcg_gen_setcond_tl(TCG_COND_LT, t0, t0, t1); - tcg_gen_or_tl(mxu_gpr[XRa - 1], t2, t0); + tcg_gen_sextract_i32(t0, t3, 16, 16); + tcg_gen_sextract_i32(t1, t4, 16, 16); + tcg_gen_setcond_i32(TCG_COND_LT, t0, t0, t1); + tcg_gen_shli_i32(t2, t0, 16); + tcg_gen_sextract_i32(t0, t3, 0, 16); + tcg_gen_sextract_i32(t1, t4, 0, 16); + tcg_gen_setcond_i32(TCG_COND_LT, t0, t0, t1); 
+ tcg_gen_or_i32(mxu_gpr[XRa - 1], t2, t0); } } @@ -2525,10 +2525,10 @@ static void gen_mxu_d16avg(DisasContext *ctx, bool round45) /* destination is zero register -> do nothing */ } else if (unlikely((XRb == 0) && (XRc == 0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRb == XRc)) { /* both operands same registers -> just set destination to same */ - tcg_gen_mov_tl(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); + tcg_gen_mov_i32(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); } else { /* the most general case */ TCGv t0 = tcg_temp_new(); @@ -2539,22 +2539,22 @@ static void gen_mxu_d16avg(DisasContext *ctx, bool round45) gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t4, XRc); - tcg_gen_sextract_tl(t0, t3, 16, 16); - tcg_gen_sextract_tl(t1, t4, 16, 16); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_sextract_i32(t0, t3, 16, 16); + tcg_gen_sextract_i32(t1, t4, 16, 16); + tcg_gen_add_i32(t0, t0, t1); if (round45) { - tcg_gen_addi_tl(t0, t0, 1); + tcg_gen_addi_i32(t0, t0, 1); } - tcg_gen_shli_tl(t2, t0, 15); - tcg_gen_andi_tl(t2, t2, 0xffff0000); - tcg_gen_sextract_tl(t0, t3, 0, 16); - tcg_gen_sextract_tl(t1, t4, 0, 16); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_shli_i32(t2, t0, 15); + tcg_gen_andi_i32(t2, t2, 0xffff0000); + tcg_gen_sextract_i32(t0, t3, 0, 16); + tcg_gen_sextract_i32(t1, t4, 0, 16); + tcg_gen_add_i32(t0, t0, t1); if (round45) { - tcg_gen_addi_tl(t0, t0, 1); + tcg_gen_addi_i32(t0, t0, 1); } - tcg_gen_shri_tl(t0, t0, 1); - tcg_gen_deposit_tl(t2, t2, t0, 0, 16); + tcg_gen_shri_i32(t0, t0, 1); + tcg_gen_deposit_i32(t2, t2, t0, 0, 16); gen_store_mxu_gpr(t2, XRa); } } @@ -2585,10 +2585,10 @@ static void gen_mxu_q8avg(DisasContext *ctx, bool round45) /* destination is zero register -> do nothing */ } else if (unlikely((XRb == 0) && (XRc == 0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRb == XRc)) { /* both operands same registers -> just set destination to same */ - tcg_gen_mov_tl(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); + tcg_gen_mov_i32(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); } else { /* the most general case */ TCGv t0 = tcg_temp_new(); @@ -2599,17 +2599,17 @@ static void gen_mxu_q8avg(DisasContext *ctx, bool round45) gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t4, XRc); - tcg_gen_movi_tl(t2, 0); + tcg_gen_movi_i32(t2, 0); for (int i = 0; i < 4; i++) { - tcg_gen_extract_tl(t0, t3, 8 * i, 8); - tcg_gen_extract_tl(t1, t4, 8 * i, 8); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_extract_i32(t0, t3, 8 * i, 8); + tcg_gen_extract_i32(t1, t4, 8 * i, 8); + tcg_gen_add_i32(t0, t0, t1); if (round45) { - tcg_gen_addi_tl(t0, t0, 1); + tcg_gen_addi_i32(t0, t0, 1); } - tcg_gen_shri_tl(t0, t0, 1); - tcg_gen_deposit_tl(t2, t2, t0, 8 * i, 8); + tcg_gen_shri_i32(t0, t0, 1); + tcg_gen_deposit_i32(t2, t2, t0, 8 * i, 8); } gen_store_mxu_gpr(t2, XRa); } @@ -2649,28 +2649,28 @@ static void gen_mxu_q8movzn(DisasContext *ctx, TCGCond cond) gen_load_mxu_gpr(t1, XRb); gen_load_mxu_gpr(t2, XRa); - tcg_gen_extract_tl(t3, t1, 24, 8); - tcg_gen_brcondi_tl(cond, t3, 0, l_quarterdone); - tcg_gen_extract_tl(t3, t0, 24, 8); - tcg_gen_deposit_tl(t2, t2, t3, 24, 8); + tcg_gen_extract_i32(t3, t1, 24, 8); + tcg_gen_brcondi_i32(cond, t3, 0, l_quarterdone); + tcg_gen_extract_i32(t3, t0, 24, 8); + tcg_gen_deposit_i32(t2, t2, t3, 24, 8); gen_set_label(l_quarterdone); - tcg_gen_extract_tl(t3, t1, 16, 8); - 
tcg_gen_brcondi_tl(cond, t3, 0, l_halfdone); - tcg_gen_extract_tl(t3, t0, 16, 8); - tcg_gen_deposit_tl(t2, t2, t3, 16, 8); + tcg_gen_extract_i32(t3, t1, 16, 8); + tcg_gen_brcondi_i32(cond, t3, 0, l_halfdone); + tcg_gen_extract_i32(t3, t0, 16, 8); + tcg_gen_deposit_i32(t2, t2, t3, 16, 8); gen_set_label(l_halfdone); - tcg_gen_extract_tl(t3, t1, 8, 8); - tcg_gen_brcondi_tl(cond, t3, 0, l_quarterrest); - tcg_gen_extract_tl(t3, t0, 8, 8); - tcg_gen_deposit_tl(t2, t2, t3, 8, 8); + tcg_gen_extract_i32(t3, t1, 8, 8); + tcg_gen_brcondi_i32(cond, t3, 0, l_quarterrest); + tcg_gen_extract_i32(t3, t0, 8, 8); + tcg_gen_deposit_i32(t2, t2, t3, 8, 8); gen_set_label(l_quarterrest); - tcg_gen_extract_tl(t3, t1, 0, 8); - tcg_gen_brcondi_tl(cond, t3, 0, l_done); - tcg_gen_extract_tl(t3, t0, 0, 8); - tcg_gen_deposit_tl(t2, t2, t3, 0, 8); + tcg_gen_extract_i32(t3, t1, 0, 8); + tcg_gen_brcondi_i32(cond, t3, 0, l_done); + tcg_gen_extract_i32(t3, t0, 0, 8); + tcg_gen_deposit_i32(t2, t2, t3, 0, 8); gen_set_label(l_done); gen_store_mxu_gpr(t2, XRa); @@ -2708,16 +2708,16 @@ static void gen_mxu_d16movzn(DisasContext *ctx, TCGCond cond) gen_load_mxu_gpr(t1, XRb); gen_load_mxu_gpr(t2, XRa); - tcg_gen_extract_tl(t3, t1, 16, 16); - tcg_gen_brcondi_tl(cond, t3, 0, l_halfdone); - tcg_gen_extract_tl(t3, t0, 16, 16); - tcg_gen_deposit_tl(t2, t2, t3, 16, 16); + tcg_gen_extract_i32(t3, t1, 16, 16); + tcg_gen_brcondi_i32(cond, t3, 0, l_halfdone); + tcg_gen_extract_i32(t3, t0, 16, 16); + tcg_gen_deposit_i32(t2, t2, t3, 16, 16); gen_set_label(l_halfdone); - tcg_gen_extract_tl(t3, t1, 0, 16); - tcg_gen_brcondi_tl(cond, t3, 0, l_done); - tcg_gen_extract_tl(t3, t0, 0, 16); - tcg_gen_deposit_tl(t2, t2, t3, 0, 16); + tcg_gen_extract_i32(t3, t1, 0, 16); + tcg_gen_brcondi_i32(cond, t3, 0, l_done); + tcg_gen_extract_i32(t3, t0, 0, 16); + tcg_gen_deposit_i32(t2, t2, t3, 0, 16); gen_set_label(l_done); gen_store_mxu_gpr(t2, XRa); @@ -2751,7 +2751,7 @@ static void gen_mxu_s32movzn(DisasContext *ctx, TCGCond cond) gen_load_mxu_gpr(t0, XRc); gen_load_mxu_gpr(t1, XRb); - tcg_gen_brcondi_tl(cond, t1, 0, l_done); + tcg_gen_brcondi_i32(cond, t1, 0, l_done); gen_store_mxu_gpr(t0, XRa); gen_set_label(l_done); } @@ -2784,18 +2784,18 @@ static void gen_mxu_S32CPS(DisasContext *ctx) /* destination is zero register -> do nothing */ } else if (unlikely(XRb == 0)) { /* XRc make no sense 0 - 0 = 0 -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRc == 0)) { /* condition always false -> just move XRb to XRa */ - tcg_gen_mov_tl(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); + tcg_gen_mov_i32(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); } else { /* the most general case */ TCGv t0 = tcg_temp_new(); TCGLabel *l_not_less = gen_new_label(); TCGLabel *l_done = gen_new_label(); - tcg_gen_brcondi_tl(TCG_COND_GE, mxu_gpr[XRc - 1], 0, l_not_less); - tcg_gen_neg_tl(t0, mxu_gpr[XRb - 1]); + tcg_gen_brcondi_i32(TCG_COND_GE, mxu_gpr[XRc - 1], 0, l_not_less); + tcg_gen_neg_i32(t0, mxu_gpr[XRb - 1]); tcg_gen_br(l_done); gen_set_label(l_not_less); gen_load_mxu_gpr(t0, XRb); @@ -2824,10 +2824,10 @@ static void gen_mxu_D16CPS(DisasContext *ctx) /* destination is zero register -> do nothing */ } else if (unlikely(XRb == 0)) { /* XRc make no sense 0 - 0 = 0 -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else if (unlikely(XRc == 0)) { /* condition always false -> just move XRb to XRa */ - tcg_gen_mov_tl(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); + 
tcg_gen_mov_i32(mxu_gpr[XRa - 1], mxu_gpr[XRb - 1]); } else { /* the most general case */ TCGv t0 = tcg_temp_new(); @@ -2836,25 +2836,25 @@ static void gen_mxu_D16CPS(DisasContext *ctx) TCGLabel *l_not_less_lo = gen_new_label(); TCGLabel *l_done_lo = gen_new_label(); - tcg_gen_sextract_tl(t0, mxu_gpr[XRc - 1], 16, 16); - tcg_gen_sextract_tl(t1, mxu_gpr[XRb - 1], 16, 16); - tcg_gen_brcondi_tl(TCG_COND_GE, t0, 0, l_done_hi); - tcg_gen_subfi_tl(t1, 0, t1); + tcg_gen_sextract_i32(t0, mxu_gpr[XRc - 1], 16, 16); + tcg_gen_sextract_i32(t1, mxu_gpr[XRb - 1], 16, 16); + tcg_gen_brcondi_i32(TCG_COND_GE, t0, 0, l_done_hi); + tcg_gen_subfi_i32(t1, 0, t1); gen_set_label(l_done_hi); tcg_gen_shli_i32(t1, t1, 16); - tcg_gen_sextract_tl(t0, mxu_gpr[XRc - 1], 0, 16); - tcg_gen_brcondi_tl(TCG_COND_GE, t0, 0, l_not_less_lo); - tcg_gen_sextract_tl(t0, mxu_gpr[XRb - 1], 0, 16); - tcg_gen_subfi_tl(t0, 0, t0); + tcg_gen_sextract_i32(t0, mxu_gpr[XRc - 1], 0, 16); + tcg_gen_brcondi_i32(TCG_COND_GE, t0, 0, l_not_less_lo); + tcg_gen_sextract_i32(t0, mxu_gpr[XRb - 1], 0, 16); + tcg_gen_subfi_i32(t0, 0, t0); tcg_gen_br(l_done_lo); gen_set_label(l_not_less_lo); - tcg_gen_extract_tl(t0, mxu_gpr[XRb - 1], 0, 16); + tcg_gen_extract_i32(t0, mxu_gpr[XRb - 1], 0, 16); gen_set_label(l_done_lo); - tcg_gen_deposit_tl(mxu_gpr[XRa - 1], t1, t0, 0, 16); + tcg_gen_deposit_i32(mxu_gpr[XRa - 1], t1, t0, 0, 16); } } @@ -2880,7 +2880,7 @@ static void gen_mxu_Q8ABD(DisasContext *ctx) /* destination is zero register -> do nothing */ } else if (unlikely((XRb == 0) && (XRc == 0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else { /* the most general case */ TCGv t0 = tcg_temp_new(); @@ -2891,16 +2891,16 @@ static void gen_mxu_Q8ABD(DisasContext *ctx) gen_load_mxu_gpr(t3, XRb); gen_load_mxu_gpr(t4, XRc); - tcg_gen_movi_tl(t2, 0); + tcg_gen_movi_i32(t2, 0); for (int i = 0; i < 4; i++) { - tcg_gen_extract_tl(t0, t3, 8 * i, 8); - tcg_gen_extract_tl(t1, t4, 8 * i, 8); + tcg_gen_extract_i32(t0, t3, 8 * i, 8); + tcg_gen_extract_i32(t1, t4, 8 * i, 8); - tcg_gen_sub_tl(t0, t0, t1); - tcg_gen_abs_tl(t0, t0); + tcg_gen_sub_i32(t0, t0, t1); + tcg_gen_abs_i32(t0, t0); - tcg_gen_deposit_tl(t2, t2, t0, 8 * i, 8); + tcg_gen_deposit_i32(t2, t2, t0, 8 * i, 8); } gen_store_mxu_gpr(t2, XRa); } @@ -2940,31 +2940,31 @@ static void gen_mxu_Q8ADD(DisasContext *ctx) gen_load_mxu_gpr(t4, XRc); for (int i = 0; i < 4; i++) { - tcg_gen_andi_tl(t0, t3, 0xff); - tcg_gen_andi_tl(t1, t4, 0xff); + tcg_gen_andi_i32(t0, t3, 0xff); + tcg_gen_andi_i32(t1, t4, 0xff); if (i < 2) { if (aptn2 & 0x01) { - tcg_gen_sub_tl(t0, t0, t1); + tcg_gen_sub_i32(t0, t0, t1); } else { - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_add_i32(t0, t0, t1); } } else { if (aptn2 & 0x02) { - tcg_gen_sub_tl(t0, t0, t1); + tcg_gen_sub_i32(t0, t0, t1); } else { - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_add_i32(t0, t0, t1); } } if (i < 3) { - tcg_gen_shri_tl(t3, t3, 8); - tcg_gen_shri_tl(t4, t4, 8); + tcg_gen_shri_i32(t3, t3, 8); + tcg_gen_shri_i32(t4, t4, 8); } if (i > 0) { - tcg_gen_deposit_tl(t2, t2, t0, 8 * i, 8); + tcg_gen_deposit_i32(t2, t2, t0, 8 * i, 8); } else { - tcg_gen_andi_tl(t0, t0, 0xff); - tcg_gen_mov_tl(t2, t0); + tcg_gen_andi_i32(t0, t0, 0xff); + tcg_gen_mov_i32(t2, t0); } } gen_store_mxu_gpr(t2, XRa); @@ -2999,10 +2999,10 @@ static void gen_mxu_q8adde(DisasContext *ctx, bool accumulate) if (unlikely((XRb == 0) && (XRc == 0))) { /* both operands zero registers -> just set destination to zero 
*/ if (XRa != 0) { - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } if (XRd != 0) { - tcg_gen_movi_tl(mxu_gpr[XRd - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRd - 1], 0); } } else { /* the most general case */ @@ -3019,22 +3019,22 @@ static void gen_mxu_q8adde(DisasContext *ctx, bool accumulate) gen_extract_mxu_gpr(t2, XRb, 24, 8); gen_extract_mxu_gpr(t3, XRc, 24, 8); if (aptn2 & 2) { - tcg_gen_sub_tl(t0, t0, t1); - tcg_gen_sub_tl(t2, t2, t3); + tcg_gen_sub_i32(t0, t0, t1); + tcg_gen_sub_i32(t2, t2, t3); } else { - tcg_gen_add_tl(t0, t0, t1); - tcg_gen_add_tl(t2, t2, t3); + tcg_gen_add_i32(t0, t0, t1); + tcg_gen_add_i32(t2, t2, t3); } if (accumulate) { gen_load_mxu_gpr(t5, XRa); - tcg_gen_extract_tl(t1, t5, 0, 16); - tcg_gen_extract_tl(t3, t5, 16, 16); - tcg_gen_add_tl(t0, t0, t1); - tcg_gen_add_tl(t2, t2, t3); + tcg_gen_extract_i32(t1, t5, 0, 16); + tcg_gen_extract_i32(t3, t5, 16, 16); + tcg_gen_add_i32(t0, t0, t1); + tcg_gen_add_i32(t2, t2, t3); } - tcg_gen_shli_tl(t2, t2, 16); - tcg_gen_extract_tl(t0, t0, 0, 16); - tcg_gen_or_tl(t4, t2, t0); + tcg_gen_shli_i32(t2, t2, 16); + tcg_gen_extract_i32(t0, t0, 0, 16); + tcg_gen_or_i32(t4, t2, t0); } if (XRd != 0) { gen_extract_mxu_gpr(t0, XRb, 0, 8); @@ -3042,22 +3042,22 @@ static void gen_mxu_q8adde(DisasContext *ctx, bool accumulate) gen_extract_mxu_gpr(t2, XRb, 8, 8); gen_extract_mxu_gpr(t3, XRc, 8, 8); if (aptn2 & 1) { - tcg_gen_sub_tl(t0, t0, t1); - tcg_gen_sub_tl(t2, t2, t3); + tcg_gen_sub_i32(t0, t0, t1); + tcg_gen_sub_i32(t2, t2, t3); } else { - tcg_gen_add_tl(t0, t0, t1); - tcg_gen_add_tl(t2, t2, t3); + tcg_gen_add_i32(t0, t0, t1); + tcg_gen_add_i32(t2, t2, t3); } if (accumulate) { gen_load_mxu_gpr(t5, XRd); - tcg_gen_extract_tl(t1, t5, 0, 16); - tcg_gen_extract_tl(t3, t5, 16, 16); - tcg_gen_add_tl(t0, t0, t1); - tcg_gen_add_tl(t2, t2, t3); + tcg_gen_extract_i32(t1, t5, 0, 16); + tcg_gen_extract_i32(t3, t5, 16, 16); + tcg_gen_add_i32(t0, t0, t1); + tcg_gen_add_i32(t2, t2, t3); } - tcg_gen_shli_tl(t2, t2, 16); - tcg_gen_extract_tl(t0, t0, 0, 16); - tcg_gen_or_tl(t5, t2, t0); + tcg_gen_shli_i32(t2, t2, 16); + tcg_gen_extract_i32(t0, t0, 0, 16); + tcg_gen_or_i32(t5, t2, t0); } gen_store_mxu_gpr(t4, XRa); @@ -3090,7 +3090,7 @@ static void gen_mxu_d8sum(DisasContext *ctx, bool sumc) /* destination is zero register -> do nothing */ } else if (unlikely((XRb == 0) && (XRc == 0))) { /* both operands zero registers -> just set destination to zero */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else { /* the most general case */ TCGv t0 = tcg_temp_new(); @@ -3101,35 +3101,35 @@ static void gen_mxu_d8sum(DisasContext *ctx, bool sumc) TCGv t5 = tcg_temp_new(); if (XRb != 0) { - tcg_gen_extract_tl(t0, mxu_gpr[XRb - 1], 0, 8); - tcg_gen_extract_tl(t1, mxu_gpr[XRb - 1], 8, 8); - tcg_gen_extract_tl(t2, mxu_gpr[XRb - 1], 16, 8); - tcg_gen_extract_tl(t3, mxu_gpr[XRb - 1], 24, 8); - tcg_gen_add_tl(t4, t0, t1); - tcg_gen_add_tl(t4, t4, t2); - tcg_gen_add_tl(t4, t4, t3); + tcg_gen_extract_i32(t0, mxu_gpr[XRb - 1], 0, 8); + tcg_gen_extract_i32(t1, mxu_gpr[XRb - 1], 8, 8); + tcg_gen_extract_i32(t2, mxu_gpr[XRb - 1], 16, 8); + tcg_gen_extract_i32(t3, mxu_gpr[XRb - 1], 24, 8); + tcg_gen_add_i32(t4, t0, t1); + tcg_gen_add_i32(t4, t4, t2); + tcg_gen_add_i32(t4, t4, t3); } else { - tcg_gen_mov_tl(t4, 0); + tcg_gen_mov_i32(t4, 0); } if (XRc != 0) { - tcg_gen_extract_tl(t0, mxu_gpr[XRc - 1], 0, 8); - tcg_gen_extract_tl(t1, mxu_gpr[XRc - 1], 8, 8); - tcg_gen_extract_tl(t2, mxu_gpr[XRc - 1], 16, 8); - 
tcg_gen_extract_tl(t3, mxu_gpr[XRc - 1], 24, 8); - tcg_gen_add_tl(t5, t0, t1); - tcg_gen_add_tl(t5, t5, t2); - tcg_gen_add_tl(t5, t5, t3); + tcg_gen_extract_i32(t0, mxu_gpr[XRc - 1], 0, 8); + tcg_gen_extract_i32(t1, mxu_gpr[XRc - 1], 8, 8); + tcg_gen_extract_i32(t2, mxu_gpr[XRc - 1], 16, 8); + tcg_gen_extract_i32(t3, mxu_gpr[XRc - 1], 24, 8); + tcg_gen_add_i32(t5, t0, t1); + tcg_gen_add_i32(t5, t5, t2); + tcg_gen_add_i32(t5, t5, t3); } else { - tcg_gen_mov_tl(t5, 0); + tcg_gen_mov_i32(t5, 0); } if (sumc) { - tcg_gen_addi_tl(t4, t4, 2); - tcg_gen_addi_tl(t5, t5, 2); + tcg_gen_addi_i32(t4, t4, 2); + tcg_gen_addi_i32(t5, t5, 2); } - tcg_gen_shli_tl(t4, t4, 16); + tcg_gen_shli_i32(t4, t4, 16); - tcg_gen_or_tl(mxu_gpr[XRa - 1], t4, t5); + tcg_gen_or_i32(mxu_gpr[XRa - 1], t4, t5); } } @@ -3156,66 +3156,66 @@ static void gen_mxu_q16add(DisasContext *ctx) TCGv t5 = tcg_temp_new(); gen_load_mxu_gpr(t1, XRb); - tcg_gen_extract_tl(t0, t1, 0, 16); - tcg_gen_extract_tl(t1, t1, 16, 16); + tcg_gen_extract_i32(t0, t1, 0, 16); + tcg_gen_extract_i32(t1, t1, 16, 16); gen_load_mxu_gpr(t3, XRc); - tcg_gen_extract_tl(t2, t3, 0, 16); - tcg_gen_extract_tl(t3, t3, 16, 16); + tcg_gen_extract_i32(t2, t3, 0, 16); + tcg_gen_extract_i32(t3, t3, 16, 16); switch (optn2) { case MXU_OPTN2_WW: /* XRB.H+XRC.H == lop, XRB.L+XRC.L == rop */ - tcg_gen_mov_tl(t4, t1); - tcg_gen_mov_tl(t5, t0); + tcg_gen_mov_i32(t4, t1); + tcg_gen_mov_i32(t5, t0); break; case MXU_OPTN2_LW: /* XRB.L+XRC.H == lop, XRB.L+XRC.L == rop */ - tcg_gen_mov_tl(t4, t0); - tcg_gen_mov_tl(t5, t0); + tcg_gen_mov_i32(t4, t0); + tcg_gen_mov_i32(t5, t0); break; case MXU_OPTN2_HW: /* XRB.H+XRC.H == lop, XRB.H+XRC.L == rop */ - tcg_gen_mov_tl(t4, t1); - tcg_gen_mov_tl(t5, t1); + tcg_gen_mov_i32(t4, t1); + tcg_gen_mov_i32(t5, t1); break; case MXU_OPTN2_XW: /* XRB.L+XRC.H == lop, XRB.H+XRC.L == rop */ - tcg_gen_mov_tl(t4, t0); - tcg_gen_mov_tl(t5, t1); + tcg_gen_mov_i32(t4, t0); + tcg_gen_mov_i32(t5, t1); break; } switch (aptn2) { case MXU_APTN2_AA: /* lop +, rop + */ - tcg_gen_add_tl(t0, t4, t3); - tcg_gen_add_tl(t1, t5, t2); - tcg_gen_add_tl(t4, t4, t3); - tcg_gen_add_tl(t5, t5, t2); + tcg_gen_add_i32(t0, t4, t3); + tcg_gen_add_i32(t1, t5, t2); + tcg_gen_add_i32(t4, t4, t3); + tcg_gen_add_i32(t5, t5, t2); break; case MXU_APTN2_AS: /* lop +, rop + */ - tcg_gen_sub_tl(t0, t4, t3); - tcg_gen_sub_tl(t1, t5, t2); - tcg_gen_add_tl(t4, t4, t3); - tcg_gen_add_tl(t5, t5, t2); + tcg_gen_sub_i32(t0, t4, t3); + tcg_gen_sub_i32(t1, t5, t2); + tcg_gen_add_i32(t4, t4, t3); + tcg_gen_add_i32(t5, t5, t2); break; case MXU_APTN2_SA: /* lop +, rop + */ - tcg_gen_add_tl(t0, t4, t3); - tcg_gen_add_tl(t1, t5, t2); - tcg_gen_sub_tl(t4, t4, t3); - tcg_gen_sub_tl(t5, t5, t2); + tcg_gen_add_i32(t0, t4, t3); + tcg_gen_add_i32(t1, t5, t2); + tcg_gen_sub_i32(t4, t4, t3); + tcg_gen_sub_i32(t5, t5, t2); break; case MXU_APTN2_SS: /* lop +, rop + */ - tcg_gen_sub_tl(t0, t4, t3); - tcg_gen_sub_tl(t1, t5, t2); - tcg_gen_sub_tl(t4, t4, t3); - tcg_gen_sub_tl(t5, t5, t2); + tcg_gen_sub_i32(t0, t4, t3); + tcg_gen_sub_i32(t1, t5, t2); + tcg_gen_sub_i32(t4, t4, t3); + tcg_gen_sub_i32(t5, t5, t2); break; } - tcg_gen_shli_tl(t0, t0, 16); - tcg_gen_extract_tl(t1, t1, 0, 16); - tcg_gen_shli_tl(t4, t4, 16); - tcg_gen_extract_tl(t5, t5, 0, 16); + tcg_gen_shli_i32(t0, t0, 16); + tcg_gen_extract_i32(t1, t1, 0, 16); + tcg_gen_shli_i32(t4, t4, 16); + tcg_gen_extract_i32(t5, t5, 0, 16); - tcg_gen_or_tl(mxu_gpr[XRa - 1], t4, t5); - tcg_gen_or_tl(mxu_gpr[XRd - 1], t0, t1); + tcg_gen_or_i32(mxu_gpr[XRa - 1], t4, t5); + 
tcg_gen_or_i32(mxu_gpr[XRd - 1], t0, t1); } /* @@ -3242,56 +3242,56 @@ static void gen_mxu_q16acc(DisasContext *ctx) TCGv s0 = tcg_temp_new(); gen_load_mxu_gpr(t1, XRb); - tcg_gen_extract_tl(t0, t1, 0, 16); - tcg_gen_extract_tl(t1, t1, 16, 16); + tcg_gen_extract_i32(t0, t1, 0, 16); + tcg_gen_extract_i32(t1, t1, 16, 16); gen_load_mxu_gpr(t3, XRc); - tcg_gen_extract_tl(t2, t3, 0, 16); - tcg_gen_extract_tl(t3, t3, 16, 16); + tcg_gen_extract_i32(t2, t3, 0, 16); + tcg_gen_extract_i32(t3, t3, 16, 16); switch (aptn2) { case MXU_APTN2_AA: /* lop +, rop + */ - tcg_gen_add_tl(s3, t1, t3); - tcg_gen_add_tl(s2, t0, t2); - tcg_gen_add_tl(s1, t1, t3); - tcg_gen_add_tl(s0, t0, t2); + tcg_gen_add_i32(s3, t1, t3); + tcg_gen_add_i32(s2, t0, t2); + tcg_gen_add_i32(s1, t1, t3); + tcg_gen_add_i32(s0, t0, t2); break; case MXU_APTN2_AS: /* lop +, rop - */ - tcg_gen_sub_tl(s3, t1, t3); - tcg_gen_sub_tl(s2, t0, t2); - tcg_gen_add_tl(s1, t1, t3); - tcg_gen_add_tl(s0, t0, t2); + tcg_gen_sub_i32(s3, t1, t3); + tcg_gen_sub_i32(s2, t0, t2); + tcg_gen_add_i32(s1, t1, t3); + tcg_gen_add_i32(s0, t0, t2); break; case MXU_APTN2_SA: /* lop -, rop + */ - tcg_gen_add_tl(s3, t1, t3); - tcg_gen_add_tl(s2, t0, t2); - tcg_gen_sub_tl(s1, t1, t3); - tcg_gen_sub_tl(s0, t0, t2); + tcg_gen_add_i32(s3, t1, t3); + tcg_gen_add_i32(s2, t0, t2); + tcg_gen_sub_i32(s1, t1, t3); + tcg_gen_sub_i32(s0, t0, t2); break; case MXU_APTN2_SS: /* lop -, rop - */ - tcg_gen_sub_tl(s3, t1, t3); - tcg_gen_sub_tl(s2, t0, t2); - tcg_gen_sub_tl(s1, t1, t3); - tcg_gen_sub_tl(s0, t0, t2); + tcg_gen_sub_i32(s3, t1, t3); + tcg_gen_sub_i32(s2, t0, t2); + tcg_gen_sub_i32(s1, t1, t3); + tcg_gen_sub_i32(s0, t0, t2); break; } if (XRa != 0) { - tcg_gen_add_tl(t0, mxu_gpr[XRa - 1], s0); - tcg_gen_extract_tl(t0, t0, 0, 16); - tcg_gen_extract_tl(t1, mxu_gpr[XRa - 1], 16, 16); - tcg_gen_add_tl(t1, t1, s1); - tcg_gen_shli_tl(t1, t1, 16); - tcg_gen_or_tl(mxu_gpr[XRa - 1], t1, t0); + tcg_gen_add_i32(t0, mxu_gpr[XRa - 1], s0); + tcg_gen_extract_i32(t0, t0, 0, 16); + tcg_gen_extract_i32(t1, mxu_gpr[XRa - 1], 16, 16); + tcg_gen_add_i32(t1, t1, s1); + tcg_gen_shli_i32(t1, t1, 16); + tcg_gen_or_i32(mxu_gpr[XRa - 1], t1, t0); } if (XRd != 0) { - tcg_gen_add_tl(t0, mxu_gpr[XRd - 1], s2); - tcg_gen_extract_tl(t0, t0, 0, 16); - tcg_gen_extract_tl(t1, mxu_gpr[XRd - 1], 16, 16); - tcg_gen_add_tl(t1, t1, s3); - tcg_gen_shli_tl(t1, t1, 16); - tcg_gen_or_tl(mxu_gpr[XRd - 1], t1, t0); + tcg_gen_add_i32(t0, mxu_gpr[XRd - 1], s2); + tcg_gen_extract_i32(t0, t0, 0, 16); + tcg_gen_extract_i32(t1, mxu_gpr[XRd - 1], 16, 16); + tcg_gen_add_i32(t1, t1, s3); + tcg_gen_shli_i32(t1, t1, 16); + tcg_gen_or_i32(mxu_gpr[XRd - 1], t1, t0); } } @@ -3321,46 +3321,46 @@ static void gen_mxu_q16accm(DisasContext *ctx) TCGv a0 = tcg_temp_new(); TCGv a1 = tcg_temp_new(); - tcg_gen_extract_tl(t0, t2, 0, 16); - tcg_gen_extract_tl(t1, t2, 16, 16); + tcg_gen_extract_i32(t0, t2, 0, 16); + tcg_gen_extract_i32(t1, t2, 16, 16); gen_load_mxu_gpr(a1, XRa); - tcg_gen_extract_tl(a0, a1, 0, 16); - tcg_gen_extract_tl(a1, a1, 16, 16); + tcg_gen_extract_i32(a0, a1, 0, 16); + tcg_gen_extract_i32(a1, a1, 16, 16); if (aptn2 & 2) { - tcg_gen_sub_tl(a0, a0, t0); - tcg_gen_sub_tl(a1, a1, t1); + tcg_gen_sub_i32(a0, a0, t0); + tcg_gen_sub_i32(a1, a1, t1); } else { - tcg_gen_add_tl(a0, a0, t0); - tcg_gen_add_tl(a1, a1, t1); + tcg_gen_add_i32(a0, a0, t0); + tcg_gen_add_i32(a1, a1, t1); } - tcg_gen_extract_tl(a0, a0, 0, 16); - tcg_gen_shli_tl(a1, a1, 16); - tcg_gen_or_tl(mxu_gpr[XRa - 1], a1, a0); + tcg_gen_extract_i32(a0, a0, 0, 16); + 
tcg_gen_shli_i32(a1, a1, 16); + tcg_gen_or_i32(mxu_gpr[XRa - 1], a1, a0); } if (XRd != 0) { TCGv a0 = tcg_temp_new(); TCGv a1 = tcg_temp_new(); - tcg_gen_extract_tl(t0, t3, 0, 16); - tcg_gen_extract_tl(t1, t3, 16, 16); + tcg_gen_extract_i32(t0, t3, 0, 16); + tcg_gen_extract_i32(t1, t3, 16, 16); gen_load_mxu_gpr(a1, XRd); - tcg_gen_extract_tl(a0, a1, 0, 16); - tcg_gen_extract_tl(a1, a1, 16, 16); + tcg_gen_extract_i32(a0, a1, 0, 16); + tcg_gen_extract_i32(a1, a1, 16, 16); if (aptn2 & 1) { - tcg_gen_sub_tl(a0, a0, t0); - tcg_gen_sub_tl(a1, a1, t1); + tcg_gen_sub_i32(a0, a0, t0); + tcg_gen_sub_i32(a1, a1, t1); } else { - tcg_gen_add_tl(a0, a0, t0); - tcg_gen_add_tl(a1, a1, t1); + tcg_gen_add_i32(a0, a0, t0); + tcg_gen_add_i32(a1, a1, t1); } - tcg_gen_extract_tl(a0, a0, 0, 16); - tcg_gen_shli_tl(a1, a1, 16); - tcg_gen_or_tl(mxu_gpr[XRd - 1], a1, a0); + tcg_gen_extract_i32(a0, a0, 0, 16); + tcg_gen_shli_i32(a1, a1, 16); + tcg_gen_or_i32(mxu_gpr[XRd - 1], a1, a0); } } @@ -3388,24 +3388,24 @@ static void gen_mxu_d16asum(DisasContext *ctx) gen_load_mxu_gpr(t3, XRc); if (XRa != 0) { - tcg_gen_sextract_tl(t0, t2, 0, 16); - tcg_gen_sextract_tl(t1, t2, 16, 16); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_sextract_i32(t0, t2, 0, 16); + tcg_gen_sextract_i32(t1, t2, 16, 16); + tcg_gen_add_i32(t0, t0, t1); if (aptn2 & 2) { - tcg_gen_sub_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0); + tcg_gen_sub_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0); } else { - tcg_gen_add_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0); + tcg_gen_add_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0); } } if (XRd != 0) { - tcg_gen_sextract_tl(t0, t3, 0, 16); - tcg_gen_sextract_tl(t1, t3, 16, 16); - tcg_gen_add_tl(t0, t0, t1); + tcg_gen_sextract_i32(t0, t3, 0, 16); + tcg_gen_sextract_i32(t1, t3, 16, 16); + tcg_gen_add_i32(t0, t0, t1); if (aptn2 & 1) { - tcg_gen_sub_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t0); + tcg_gen_sub_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t0); } else { - tcg_gen_add_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t0); + tcg_gen_add_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t0); } } } @@ -3445,14 +3445,14 @@ static void gen_mxu_d32add(DisasContext *ctx) gen_load_mxu_gpr(t1, XRc); gen_load_mxu_cr(cr); if (XRa != 0) { - tcg_gen_extract_tl(t2, cr, 31, 1); - tcg_gen_add_tl(t0, t0, t2); - tcg_gen_add_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0); + tcg_gen_extract_i32(t2, cr, 31, 1); + tcg_gen_add_i32(t0, t0, t2); + tcg_gen_add_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0); } if (XRd != 0) { - tcg_gen_extract_tl(t2, cr, 30, 1); - tcg_gen_add_tl(t1, t1, t2); - tcg_gen_add_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1); + tcg_gen_extract_i32(t2, cr, 30, 1); + tcg_gen_add_i32(t1, t1, t2); + tcg_gen_add_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1); } } } else if (unlikely(XRa == 0 && XRd == 0)) { @@ -3468,27 +3468,27 @@ static void gen_mxu_d32add(DisasContext *ctx) if (XRa != 0) { if (aptn2 & 2) { tcg_gen_sub_i32(t2, t0, t1); - tcg_gen_setcond_tl(TCG_COND_GTU, carry, t0, t1); + tcg_gen_setcond_i32(TCG_COND_GTU, carry, t0, t1); } else { tcg_gen_add_i32(t2, t0, t1); - tcg_gen_setcond_tl(TCG_COND_GTU, carry, t0, t2); + tcg_gen_setcond_i32(TCG_COND_GTU, carry, t0, t2); } - tcg_gen_andi_tl(cr, cr, 0x7fffffff); - tcg_gen_shli_tl(carry, carry, 31); - tcg_gen_or_tl(cr, cr, carry); + tcg_gen_andi_i32(cr, cr, 0x7fffffff); + tcg_gen_shli_i32(carry, carry, 31); + tcg_gen_or_i32(cr, cr, carry); gen_store_mxu_gpr(t2, XRa); } if (XRd != 0) { if (aptn2 & 1) { tcg_gen_sub_i32(t2, t0, t1); - tcg_gen_setcond_tl(TCG_COND_GTU, carry, t0, t1); + tcg_gen_setcond_i32(TCG_COND_GTU, 
carry, t0, t1); } else { tcg_gen_add_i32(t2, t0, t1); - tcg_gen_setcond_tl(TCG_COND_GTU, carry, t0, t2); + tcg_gen_setcond_i32(TCG_COND_GTU, carry, t0, t2); } - tcg_gen_andi_tl(cr, cr, 0xbfffffff); - tcg_gen_shli_tl(carry, carry, 30); - tcg_gen_or_tl(cr, cr, carry); + tcg_gen_andi_i32(cr, cr, 0xbfffffff); + tcg_gen_shli_i32(carry, carry, 30); + tcg_gen_or_i32(cr, cr, carry); gen_store_mxu_gpr(t2, XRd); } gen_store_mxu_cr(cr); @@ -3521,19 +3521,19 @@ static void gen_mxu_d32acc(DisasContext *ctx) gen_load_mxu_gpr(t1, XRc); if (XRa != 0) { if (aptn2 & 2) { - tcg_gen_sub_tl(t2, t0, t1); + tcg_gen_sub_i32(t2, t0, t1); } else { - tcg_gen_add_tl(t2, t0, t1); + tcg_gen_add_i32(t2, t0, t1); } - tcg_gen_add_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2); + tcg_gen_add_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2); } if (XRd != 0) { if (aptn2 & 1) { - tcg_gen_sub_tl(t2, t0, t1); + tcg_gen_sub_i32(t2, t0, t1); } else { - tcg_gen_add_tl(t2, t0, t1); + tcg_gen_add_i32(t2, t0, t1); } - tcg_gen_add_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2); + tcg_gen_add_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2); } } } @@ -3563,19 +3563,19 @@ static void gen_mxu_d32accm(DisasContext *ctx) gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t1, XRc); if (XRa != 0) { - tcg_gen_add_tl(t2, t0, t1); + tcg_gen_add_i32(t2, t0, t1); if (aptn2 & 2) { - tcg_gen_sub_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2); + tcg_gen_sub_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2); } else { - tcg_gen_add_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2); + tcg_gen_add_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t2); } } if (XRd != 0) { - tcg_gen_sub_tl(t2, t0, t1); + tcg_gen_sub_i32(t2, t0, t1); if (aptn2 & 1) { - tcg_gen_sub_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2); + tcg_gen_sub_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2); } else { - tcg_gen_add_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2); + tcg_gen_add_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t2); } } } @@ -3606,16 +3606,16 @@ static void gen_mxu_d32asum(DisasContext *ctx) gen_load_mxu_gpr(t1, XRc); if (XRa != 0) { if (aptn2 & 2) { - tcg_gen_sub_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0); + tcg_gen_sub_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0); } else { - tcg_gen_add_tl(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0); + tcg_gen_add_i32(mxu_gpr[XRa - 1], mxu_gpr[XRa - 1], t0); } } if (XRd != 0) { if (aptn2 & 1) { - tcg_gen_sub_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1); + tcg_gen_sub_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1); } else { - tcg_gen_add_tl(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1); + tcg_gen_add_i32(mxu_gpr[XRd - 1], mxu_gpr[XRd - 1], t1); } } } @@ -3659,24 +3659,24 @@ static void gen_mxu_s32extr(DisasContext *ctx) gen_load_mxu_gpr(t0, XRd); gen_load_mxu_gpr(t1, XRa); - gen_load_gpr_tl(t2, rs); - tcg_gen_andi_tl(t2, t2, 0x1f); - tcg_gen_subfi_tl(t2, 32, t2); - tcg_gen_brcondi_tl(TCG_COND_GE, t2, bits5, l_xra_only); - tcg_gen_subfi_tl(t2, bits5, t2); - tcg_gen_subfi_tl(t3, 32, t2); - tcg_gen_shr_tl(t0, t0, t3); - tcg_gen_shl_tl(t1, t1, t2); - tcg_gen_or_tl(t0, t0, t1); + gen_load_gpr_i32(t2, rs); + tcg_gen_andi_i32(t2, t2, 0x1f); + tcg_gen_subfi_i32(t2, 32, t2); + tcg_gen_brcondi_i32(TCG_COND_GE, t2, bits5, l_xra_only); + tcg_gen_subfi_i32(t2, bits5, t2); + tcg_gen_subfi_i32(t3, 32, t2); + tcg_gen_shr_i32(t0, t0, t3); + tcg_gen_shl_i32(t1, t1, t2); + tcg_gen_or_i32(t0, t0, t1); tcg_gen_br(l_done); gen_set_label(l_xra_only); - tcg_gen_subi_tl(t2, t2, bits5); - tcg_gen_shr_tl(t0, t1, t2); + tcg_gen_subi_i32(t2, t2, bits5); + tcg_gen_shr_i32(t0, t1, t2); gen_set_label(l_done); - tcg_gen_extract_tl(t0, t0, 0, 
bits5); + tcg_gen_extract_i32(t0, t0, 0, bits5); } else { /* unspecified behavior but matches tests on real hardware*/ - tcg_gen_movi_tl(t0, 0); + tcg_gen_movi_i32(t0, 0); } gen_store_mxu_gpr(t0, XRa); } @@ -3709,34 +3709,34 @@ static void gen_mxu_s32extrv(DisasContext *ctx) /* {tmp} = {XRa:XRd} >> (64 - rs - rt) */ gen_load_mxu_gpr(t0, XRd); gen_load_mxu_gpr(t1, XRa); - gen_load_gpr_tl(t2, rs); - gen_load_gpr_tl(t4, rt); - tcg_gen_brcondi_tl(TCG_COND_EQ, t4, 0, l_zero); - tcg_gen_andi_tl(t2, t2, 0x1f); - tcg_gen_subfi_tl(t2, 32, t2); - tcg_gen_brcond_tl(TCG_COND_GE, t2, t4, l_xra_only); - tcg_gen_sub_tl(t2, t4, t2); - tcg_gen_subfi_tl(t3, 32, t2); - tcg_gen_shr_tl(t0, t0, t3); - tcg_gen_shl_tl(t1, t1, t2); - tcg_gen_or_tl(t0, t0, t1); + gen_load_gpr_i32(t2, rs); + gen_load_gpr_i32(t4, rt); + tcg_gen_brcondi_i32(TCG_COND_EQ, t4, 0, l_zero); + tcg_gen_andi_i32(t2, t2, 0x1f); + tcg_gen_subfi_i32(t2, 32, t2); + tcg_gen_brcond_i32(TCG_COND_GE, t2, t4, l_xra_only); + tcg_gen_sub_i32(t2, t4, t2); + tcg_gen_subfi_i32(t3, 32, t2); + tcg_gen_shr_i32(t0, t0, t3); + tcg_gen_shl_i32(t1, t1, t2); + tcg_gen_or_i32(t0, t0, t1); tcg_gen_br(l_extract); gen_set_label(l_xra_only); - tcg_gen_sub_tl(t2, t2, t4); - tcg_gen_shr_tl(t0, t1, t2); + tcg_gen_sub_i32(t2, t2, t4); + tcg_gen_shr_i32(t0, t1, t2); tcg_gen_br(l_extract); /* unspecified behavior but matches tests on real hardware*/ gen_set_label(l_zero); - tcg_gen_movi_tl(t0, 0); + tcg_gen_movi_i32(t0, 0); tcg_gen_br(l_done); /* {XRa} = extract({tmp}, 0, rt) */ gen_set_label(l_extract); - tcg_gen_subfi_tl(t4, 32, t4); - tcg_gen_shl_tl(t0, t0, t4); - tcg_gen_shr_tl(t0, t0, t4); + tcg_gen_subfi_i32(t4, 32, t4); + tcg_gen_shl_i32(t0, t0, t4); + tcg_gen_shr_i32(t0, t0, t4); gen_set_label(l_done); gen_store_mxu_gpr(t0, XRa); @@ -3766,29 +3766,29 @@ static void gen_mxu_s32lui(DisasContext *ctx) switch (optn3) { case 0: - tcg_gen_movi_tl(t0, s8); + tcg_gen_movi_i32(t0, s8); break; case 1: - tcg_gen_movi_tl(t0, s8 << 8); + tcg_gen_movi_i32(t0, s8 << 8); break; case 2: - tcg_gen_movi_tl(t0, s8 << 16); + tcg_gen_movi_i32(t0, s8 << 16); break; case 3: - tcg_gen_movi_tl(t0, s8 << 24); + tcg_gen_movi_i32(t0, s8 << 24); break; case 4: - tcg_gen_movi_tl(t0, (s8 << 16) | s8); + tcg_gen_movi_i32(t0, (s8 << 16) | s8); break; case 5: - tcg_gen_movi_tl(t0, (s8 << 24) | (s8 << 8)); + tcg_gen_movi_i32(t0, (s8 << 24) | (s8 << 8)); break; case 6: s16 = (uint16_t)(int16_t)(int8_t)s8; - tcg_gen_movi_tl(t0, (s16 << 16) | s16); + tcg_gen_movi_i32(t0, (s16 << 16) | s16); break; case 7: - tcg_gen_movi_tl(t0, (s8 << 24) | (s8 << 16) | (s8 << 8) | s8); + tcg_gen_movi_i32(t0, (s8 << 24) | (s8 << 16) | (s8 << 8) | s8); break; } gen_store_mxu_gpr(t0, XRa); @@ -3820,7 +3820,7 @@ static void gen_mxu_Q16SAT(DisasContext *ctx) TCGv t1 = tcg_temp_new(); TCGv t2 = tcg_temp_new(); - tcg_gen_movi_tl(t2, 0); + tcg_gen_movi_i32(t2, 0); if (XRb != 0) { TCGLabel *l_less_hi = gen_new_label(); TCGLabel *l_less_lo = gen_new_label(); @@ -3829,32 +3829,32 @@ static void gen_mxu_Q16SAT(DisasContext *ctx) TCGLabel *l_greater_lo = gen_new_label(); TCGLabel *l_done = gen_new_label(); - tcg_gen_sari_tl(t0, mxu_gpr[XRb - 1], 16); - tcg_gen_brcondi_tl(TCG_COND_LT, t0, 0, l_less_hi); - tcg_gen_brcondi_tl(TCG_COND_GT, t0, 255, l_greater_hi); + tcg_gen_sari_i32(t0, mxu_gpr[XRb - 1], 16); + tcg_gen_brcondi_i32(TCG_COND_LT, t0, 0, l_less_hi); + tcg_gen_brcondi_i32(TCG_COND_GT, t0, 255, l_greater_hi); tcg_gen_br(l_lo); gen_set_label(l_less_hi); - tcg_gen_movi_tl(t0, 0); + tcg_gen_movi_i32(t0, 0); tcg_gen_br(l_lo); 
gen_set_label(l_greater_hi); - tcg_gen_movi_tl(t0, 255); + tcg_gen_movi_i32(t0, 255); gen_set_label(l_lo); - tcg_gen_shli_tl(t1, mxu_gpr[XRb - 1], 16); - tcg_gen_sari_tl(t1, t1, 16); - tcg_gen_brcondi_tl(TCG_COND_LT, t1, 0, l_less_lo); - tcg_gen_brcondi_tl(TCG_COND_GT, t1, 255, l_greater_lo); + tcg_gen_shli_i32(t1, mxu_gpr[XRb - 1], 16); + tcg_gen_sari_i32(t1, t1, 16); + tcg_gen_brcondi_i32(TCG_COND_LT, t1, 0, l_less_lo); + tcg_gen_brcondi_i32(TCG_COND_GT, t1, 255, l_greater_lo); tcg_gen_br(l_done); gen_set_label(l_less_lo); - tcg_gen_movi_tl(t1, 0); + tcg_gen_movi_i32(t1, 0); tcg_gen_br(l_done); gen_set_label(l_greater_lo); - tcg_gen_movi_tl(t1, 255); + tcg_gen_movi_i32(t1, 255); gen_set_label(l_done); - tcg_gen_shli_tl(t2, t0, 24); - tcg_gen_shli_tl(t1, t1, 16); - tcg_gen_or_tl(t2, t2, t1); + tcg_gen_shli_i32(t2, t0, 24); + tcg_gen_shli_i32(t1, t1, 16); + tcg_gen_or_i32(t2, t2, t1); } if (XRc != 0) { @@ -3865,32 +3865,32 @@ static void gen_mxu_Q16SAT(DisasContext *ctx) TCGLabel *l_greater_lo = gen_new_label(); TCGLabel *l_done = gen_new_label(); - tcg_gen_sari_tl(t0, mxu_gpr[XRc - 1], 16); - tcg_gen_brcondi_tl(TCG_COND_LT, t0, 0, l_less_hi); - tcg_gen_brcondi_tl(TCG_COND_GT, t0, 255, l_greater_hi); + tcg_gen_sari_i32(t0, mxu_gpr[XRc - 1], 16); + tcg_gen_brcondi_i32(TCG_COND_LT, t0, 0, l_less_hi); + tcg_gen_brcondi_i32(TCG_COND_GT, t0, 255, l_greater_hi); tcg_gen_br(l_lo); gen_set_label(l_less_hi); - tcg_gen_movi_tl(t0, 0); + tcg_gen_movi_i32(t0, 0); tcg_gen_br(l_lo); gen_set_label(l_greater_hi); - tcg_gen_movi_tl(t0, 255); + tcg_gen_movi_i32(t0, 255); gen_set_label(l_lo); - tcg_gen_shli_tl(t1, mxu_gpr[XRc - 1], 16); - tcg_gen_sari_tl(t1, t1, 16); - tcg_gen_brcondi_tl(TCG_COND_LT, t1, 0, l_less_lo); - tcg_gen_brcondi_tl(TCG_COND_GT, t1, 255, l_greater_lo); + tcg_gen_shli_i32(t1, mxu_gpr[XRc - 1], 16); + tcg_gen_sari_i32(t1, t1, 16); + tcg_gen_brcondi_i32(TCG_COND_LT, t1, 0, l_less_lo); + tcg_gen_brcondi_i32(TCG_COND_GT, t1, 255, l_greater_lo); tcg_gen_br(l_done); gen_set_label(l_less_lo); - tcg_gen_movi_tl(t1, 0); + tcg_gen_movi_i32(t1, 0); tcg_gen_br(l_done); gen_set_label(l_greater_lo); - tcg_gen_movi_tl(t1, 255); + tcg_gen_movi_i32(t1, 255); gen_set_label(l_done); - tcg_gen_shli_tl(t0, t0, 8); - tcg_gen_or_tl(t2, t2, t0); - tcg_gen_or_tl(t2, t2, t1); + tcg_gen_shli_i32(t0, t0, 8); + tcg_gen_or_i32(t2, t2, t0); + tcg_gen_or_i32(t2, t2, t1); } gen_store_mxu_gpr(t2, XRa); } @@ -3930,47 +3930,47 @@ static void gen_mxu_q16scop(DisasContext *ctx) gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t1, XRc); - tcg_gen_sextract_tl(t2, t0, 16, 16); - tcg_gen_brcondi_tl(TCG_COND_LT, t2, 0, l_b_hi_lt); - tcg_gen_brcondi_tl(TCG_COND_GT, t2, 0, l_b_hi_gt); - tcg_gen_movi_tl(t3, 0); + tcg_gen_sextract_i32(t2, t0, 16, 16); + tcg_gen_brcondi_i32(TCG_COND_LT, t2, 0, l_b_hi_lt); + tcg_gen_brcondi_i32(TCG_COND_GT, t2, 0, l_b_hi_gt); + tcg_gen_movi_i32(t3, 0); tcg_gen_br(l_b_lo); gen_set_label(l_b_hi_lt); - tcg_gen_movi_tl(t3, 0xffff0000); + tcg_gen_movi_i32(t3, 0xffff0000); tcg_gen_br(l_b_lo); gen_set_label(l_b_hi_gt); - tcg_gen_movi_tl(t3, 0x00010000); + tcg_gen_movi_i32(t3, 0x00010000); gen_set_label(l_b_lo); - tcg_gen_sextract_tl(t2, t0, 0, 16); - tcg_gen_brcondi_tl(TCG_COND_EQ, t2, 0, l_c_hi); - tcg_gen_brcondi_tl(TCG_COND_LT, t2, 0, l_b_lo_lt); - tcg_gen_ori_tl(t3, t3, 0x00000001); + tcg_gen_sextract_i32(t2, t0, 0, 16); + tcg_gen_brcondi_i32(TCG_COND_EQ, t2, 0, l_c_hi); + tcg_gen_brcondi_i32(TCG_COND_LT, t2, 0, l_b_lo_lt); + tcg_gen_ori_i32(t3, t3, 0x00000001); tcg_gen_br(l_c_hi); gen_set_label(l_b_lo_lt); 
- tcg_gen_ori_tl(t3, t3, 0x0000ffff); + tcg_gen_ori_i32(t3, t3, 0x0000ffff); tcg_gen_br(l_c_hi); gen_set_label(l_c_hi); - tcg_gen_sextract_tl(t2, t1, 16, 16); - tcg_gen_brcondi_tl(TCG_COND_LT, t2, 0, l_c_hi_lt); - tcg_gen_brcondi_tl(TCG_COND_GT, t2, 0, l_c_hi_gt); - tcg_gen_movi_tl(t4, 0); + tcg_gen_sextract_i32(t2, t1, 16, 16); + tcg_gen_brcondi_i32(TCG_COND_LT, t2, 0, l_c_hi_lt); + tcg_gen_brcondi_i32(TCG_COND_GT, t2, 0, l_c_hi_gt); + tcg_gen_movi_i32(t4, 0); tcg_gen_br(l_c_lo); gen_set_label(l_c_hi_lt); - tcg_gen_movi_tl(t4, 0xffff0000); + tcg_gen_movi_i32(t4, 0xffff0000); tcg_gen_br(l_c_lo); gen_set_label(l_c_hi_gt); - tcg_gen_movi_tl(t4, 0x00010000); + tcg_gen_movi_i32(t4, 0x00010000); gen_set_label(l_c_lo); - tcg_gen_sextract_tl(t2, t1, 0, 16); - tcg_gen_brcondi_tl(TCG_COND_EQ, t2, 0, l_done); - tcg_gen_brcondi_tl(TCG_COND_LT, t2, 0, l_c_lo_lt); - tcg_gen_ori_tl(t4, t4, 0x00000001); + tcg_gen_sextract_i32(t2, t1, 0, 16); + tcg_gen_brcondi_i32(TCG_COND_EQ, t2, 0, l_done); + tcg_gen_brcondi_i32(TCG_COND_LT, t2, 0, l_c_lo_lt); + tcg_gen_ori_i32(t4, t4, 0x00000001); tcg_gen_br(l_done); gen_set_label(l_c_lo_lt); - tcg_gen_ori_tl(t4, t4, 0x0000ffff); + tcg_gen_ori_i32(t4, t4, 0x0000ffff); gen_set_label(l_done); gen_store_mxu_gpr(t3, XRa); @@ -4001,52 +4001,52 @@ static void gen_mxu_s32sfl(DisasContext *ctx) switch (ptn2) { case 0: - tcg_gen_andi_tl(t2, t0, 0xff000000); - tcg_gen_andi_tl(t3, t1, 0x000000ff); - tcg_gen_deposit_tl(t3, t3, t0, 8, 8); - tcg_gen_shri_tl(t0, t0, 8); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t3, t3, t0, 24, 8); - tcg_gen_deposit_tl(t3, t3, t1, 16, 8); - tcg_gen_shri_tl(t0, t0, 8); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t2, t2, t0, 8, 8); - tcg_gen_deposit_tl(t2, t2, t1, 0, 8); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t2, t2, t1, 16, 8); + tcg_gen_andi_i32(t2, t0, 0xff000000); + tcg_gen_andi_i32(t3, t1, 0x000000ff); + tcg_gen_deposit_i32(t3, t3, t0, 8, 8); + tcg_gen_shri_i32(t0, t0, 8); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t3, t3, t0, 24, 8); + tcg_gen_deposit_i32(t3, t3, t1, 16, 8); + tcg_gen_shri_i32(t0, t0, 8); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t2, t2, t0, 8, 8); + tcg_gen_deposit_i32(t2, t2, t1, 0, 8); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t2, t2, t1, 16, 8); break; case 1: - tcg_gen_andi_tl(t2, t0, 0xff000000); - tcg_gen_andi_tl(t3, t1, 0x000000ff); - tcg_gen_deposit_tl(t3, t3, t0, 16, 8); - tcg_gen_shri_tl(t0, t0, 8); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t2, t2, t0, 16, 8); - tcg_gen_deposit_tl(t2, t2, t1, 0, 8); - tcg_gen_shri_tl(t0, t0, 8); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t3, t3, t0, 24, 8); - tcg_gen_deposit_tl(t3, t3, t1, 8, 8); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t2, t2, t1, 8, 8); + tcg_gen_andi_i32(t2, t0, 0xff000000); + tcg_gen_andi_i32(t3, t1, 0x000000ff); + tcg_gen_deposit_i32(t3, t3, t0, 16, 8); + tcg_gen_shri_i32(t0, t0, 8); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t2, t2, t0, 16, 8); + tcg_gen_deposit_i32(t2, t2, t1, 0, 8); + tcg_gen_shri_i32(t0, t0, 8); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t3, t3, t0, 24, 8); + tcg_gen_deposit_i32(t3, t3, t1, 8, 8); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t2, t2, t1, 8, 8); break; case 2: - tcg_gen_andi_tl(t2, t0, 0xff00ff00); - tcg_gen_andi_tl(t3, t1, 0x00ff00ff); - tcg_gen_deposit_tl(t3, t3, t0, 8, 8); - tcg_gen_shri_tl(t0, t0, 16); - tcg_gen_shri_tl(t1, t1, 8); - tcg_gen_deposit_tl(t2, t2, t1, 0, 8); - tcg_gen_deposit_tl(t3, t3, t0, 24, 8); - 
tcg_gen_shri_tl(t1, t1, 16); - tcg_gen_deposit_tl(t2, t2, t1, 16, 8); + tcg_gen_andi_i32(t2, t0, 0xff00ff00); + tcg_gen_andi_i32(t3, t1, 0x00ff00ff); + tcg_gen_deposit_i32(t3, t3, t0, 8, 8); + tcg_gen_shri_i32(t0, t0, 16); + tcg_gen_shri_i32(t1, t1, 8); + tcg_gen_deposit_i32(t2, t2, t1, 0, 8); + tcg_gen_deposit_i32(t3, t3, t0, 24, 8); + tcg_gen_shri_i32(t1, t1, 16); + tcg_gen_deposit_i32(t2, t2, t1, 16, 8); break; case 3: - tcg_gen_andi_tl(t2, t0, 0xffff0000); - tcg_gen_andi_tl(t3, t1, 0x0000ffff); - tcg_gen_shri_tl(t1, t1, 16); - tcg_gen_deposit_tl(t2, t2, t1, 0, 16); - tcg_gen_deposit_tl(t3, t3, t0, 16, 16); + tcg_gen_andi_i32(t2, t0, 0xffff0000); + tcg_gen_andi_i32(t3, t1, 0x0000ffff); + tcg_gen_shri_i32(t1, t1, 16); + tcg_gen_deposit_i32(t2, t2, t1, 0, 16); + tcg_gen_deposit_i32(t3, t3, t0, 16, 16); break; } @@ -4077,20 +4077,20 @@ static void gen_mxu_q8sad(DisasContext *ctx) gen_load_mxu_gpr(t2, XRb); gen_load_mxu_gpr(t3, XRc); gen_load_mxu_gpr(t5, XRd); - tcg_gen_movi_tl(t4, 0); + tcg_gen_movi_i32(t4, 0); for (int i = 0; i < 4; i++) { - tcg_gen_andi_tl(t0, t2, 0xff); - tcg_gen_andi_tl(t1, t3, 0xff); - tcg_gen_sub_tl(t0, t0, t1); - tcg_gen_abs_tl(t0, t0); - tcg_gen_add_tl(t4, t4, t0); + tcg_gen_andi_i32(t0, t2, 0xff); + tcg_gen_andi_i32(t1, t3, 0xff); + tcg_gen_sub_i32(t0, t0, t1); + tcg_gen_abs_i32(t0, t0); + tcg_gen_add_i32(t4, t4, t0); if (i < 3) { - tcg_gen_shri_tl(t2, t2, 8); - tcg_gen_shri_tl(t3, t3, 8); + tcg_gen_shri_i32(t2, t2, 8); + tcg_gen_shri_i32(t3, t3, 8); } } - tcg_gen_add_tl(t5, t5, t4); + tcg_gen_add_i32(t5, t5, t4); gen_store_mxu_gpr(t4, XRa); gen_store_mxu_gpr(t5, XRd); } @@ -4290,7 +4290,7 @@ static void gen_mxu_S32ALN(DisasContext *ctx) /* destination is zero register -> do nothing */ } else if (unlikely((XRb == 0) && (XRc == 0))) { /* both operands zero registers -> just set destination to all 0s */ - tcg_gen_movi_tl(mxu_gpr[XRa - 1], 0); + tcg_gen_movi_i32(mxu_gpr[XRa - 1], 0); } else { /* the most general case */ TCGv t0 = tcg_temp_new(); @@ -4303,21 +4303,21 @@ static void gen_mxu_S32ALN(DisasContext *ctx) gen_load_mxu_gpr(t0, XRb); gen_load_mxu_gpr(t1, XRc); - gen_load_gpr_tl(t2, rs); - tcg_gen_andi_tl(t2, t2, 0x07); + gen_load_gpr_i32(t2, rs); + tcg_gen_andi_i32(t2, t2, 0x07); /* do nothing for undefined cases */ - tcg_gen_brcondi_tl(TCG_COND_GE, t2, 5, l_exit); + tcg_gen_brcondi_i32(TCG_COND_GE, t2, 5, l_exit); - tcg_gen_brcondi_tl(TCG_COND_EQ, t2, 0, l_b_only); - tcg_gen_brcondi_tl(TCG_COND_EQ, t2, 4, l_c_only); + tcg_gen_brcondi_i32(TCG_COND_EQ, t2, 0, l_b_only); + tcg_gen_brcondi_i32(TCG_COND_EQ, t2, 4, l_c_only); - tcg_gen_shli_tl(t2, t2, 3); - tcg_gen_subfi_tl(t3, 32, t2); + tcg_gen_shli_i32(t2, t2, 3); + tcg_gen_subfi_i32(t3, 32, t2); - tcg_gen_shl_tl(t0, t0, t2); - tcg_gen_shr_tl(t1, t1, t3); - tcg_gen_or_tl(mxu_gpr[XRa - 1], t0, t1); + tcg_gen_shl_i32(t0, t0, t2); + tcg_gen_shr_i32(t1, t1, t3); + tcg_gen_or_i32(mxu_gpr[XRa - 1], t0, t1); tcg_gen_br(l_exit); gen_set_label(l_b_only); @@ -4364,32 +4364,32 @@ static void gen_mxu_s32madd_sub(DisasContext *ctx, bool sub, bool uns) TCGv_i64 t2 = tcg_temp_new_i64(); TCGv_i64 t3 = tcg_temp_new_i64(); - gen_load_gpr_tl(t0, Rb); - gen_load_gpr_tl(t1, Rc); + gen_load_gpr_i32(t0, Rb); + gen_load_gpr_i32(t1, Rc); if (uns) { - tcg_gen_extu_tl_i64(t2, t0); - tcg_gen_extu_tl_i64(t3, t1); + tcg_gen_extu_i32_i64(t2, t0); + tcg_gen_extu_i32_i64(t3, t1); } else { - tcg_gen_ext_tl_i64(t2, t0); - tcg_gen_ext_tl_i64(t3, t1); + tcg_gen_ext_i32_i64(t2, t0); + tcg_gen_ext_i32_i64(t3, t1); } tcg_gen_mul_i64(t2, t2, t3); 
gen_load_mxu_gpr(t0, XRa); gen_load_mxu_gpr(t1, XRd); - tcg_gen_concat_tl_i64(t3, t1, t0); + tcg_gen_concat_i32_i64(t3, t1, t0); if (sub) { tcg_gen_sub_i64(t3, t3, t2); } else { tcg_gen_add_i64(t3, t3, t2); } - gen_move_low32_tl(t1, t3); - gen_move_high32_tl(t0, t3); + gen_move_low32_i32(t1, t3); + gen_move_high32_i32(t0, t3); - tcg_gen_mov_tl(cpu_HI[0], t0); - tcg_gen_mov_tl(cpu_LO[0], t1); + tcg_gen_mov_i32(cpu_HI[0], t0); + tcg_gen_mov_i32(cpu_LO[0], t1); gen_store_mxu_gpr(t1, XRd); gen_store_mxu_gpr(t0, XRa); @@ -4940,8 +4940,8 @@ bool decode_ase_mxu(DisasContext *ctx, uint32_t insn) TCGLabel *l_exit = gen_new_label(); gen_load_mxu_cr(t_mxu_cr); - tcg_gen_andi_tl(t_mxu_cr, t_mxu_cr, MXU_CR_MXU_EN); - tcg_gen_brcondi_tl(TCG_COND_NE, t_mxu_cr, MXU_CR_MXU_EN, l_exit); + tcg_gen_andi_i32(t_mxu_cr, t_mxu_cr, MXU_CR_MXU_EN); + tcg_gen_brcondi_i32(TCG_COND_NE, t_mxu_cr, MXU_CR_MXU_EN, l_exit); switch (opcode) { case OPC_MXU_S32MADD:

From patchwork Tue Nov 26 13:15:45 2024
From: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?=
To: qemu-devel@nongnu.org
Cc: Aurelien Jarno , Aleksandar Rikalo , Anton Johansson , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Huacai Chen , Jiaxun Yang
Subject: [PATCH 13/13] target/mips: Make DSPControl register 32-bit wide
Date: Tue, 26 Nov 2024 14:15:45 +0100
Message-ID: <20241126131546.66145-14-philmd@linaro.org>
In-Reply-To: <20241126131546.66145-1-philmd@linaro.org>
References: <20241126131546.66145-1-philmd@linaro.org>

Per 'MIPS® DSP Module for MIPS64™ Architecture, Revision 3.02', * 3.10 Additional Register State for the DSP Module ~Figure 3.5 MIPS® DSP Module Control Register (DSPControl) Format~ the DSPControl register is 32-bit wide. Convert it from 'target_ulong' to 'uint32_t'. Update TCG calls to truncate/extend from i32 to target_ulong.
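[Editorial note, not part of the patch] On the last point of the commit message: once DSPControl is a uint32_t field, translate-time code has to go through an i32 temporary and widen or narrow explicitly wherever the value meets a target-sized TCGv such as a GPR. A minimal sketch of that pattern follows; it assumes the headers already included by target/mips/tcg/translate.c, and gen_rddsp_sketch()/gen_wrdsp_sketch() are illustrative names, not the helpers this patch actually modifies.

/* Read the (now 32-bit) DSPControl into a target-width register. */
static void gen_rddsp_sketch(TCGv dest)
{
    TCGv_i32 t = tcg_temp_new_i32();

    tcg_gen_ld_i32(t, tcg_env, offsetof(CPUMIPSState, active_tc.DSPControl));
    tcg_gen_extu_i32_tl(dest, t);       /* zero-extend i32 -> target_ulong */
}

/* Write a target-width value back, truncating it to 32 bits. */
static void gen_wrdsp_sketch(TCGv src)
{
    TCGv_i32 t = tcg_temp_new_i32();

    tcg_gen_trunc_tl_i32(t, src);       /* target_ulong -> i32 */
    tcg_gen_st_i32(t, tcg_env, offsetof(CPUMIPSState, active_tc.DSPControl));
}

On a 32-bit target both conversions degenerate to plain 32-bit moves, so the extra step only costs anything on TARGET_MIPS64.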
 target/mips/cpu.h                        |  2 +-
 target/mips/tcg/sysemu_helper.h.inc      |  4 +--
 target/mips/sysemu/machine.c             |  5 ++-
 target/mips/tcg/dsp_helper.c             | 10 +++---
 target/mips/tcg/sysemu/cp0_helper.c      |  4 +--
 target/mips/tcg/translate.c              | 40 +++++++++++++++++-------
 target/mips/tcg/nanomips_translate.c.inc | 16 +++++++---
 7 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/target/mips/cpu.h b/target/mips/cpu.h
index f80b05885b1..bc636510132 100644
--- a/target/mips/cpu.h
+++ b/target/mips/cpu.h
@@ -472,7 +472,7 @@ struct TCState {
     target_ulong HI[MIPS_DSP_ACC];
     target_ulong LO[MIPS_DSP_ACC];
     target_ulong ACX[MIPS_DSP_ACC];
-    target_ulong DSPControl;
+    uint32_t DSPControl;
     int32_t CP0_TCStatus;
 #define CP0TCSt_TCU3 31
 #define CP0TCSt_TCU2 30
diff --git a/target/mips/tcg/sysemu_helper.h.inc b/target/mips/tcg/sysemu_helper.h.inc
index 1861d538de1..36ce21f863b 100644
--- a/target/mips/tcg/sysemu_helper.h.inc
+++ b/target/mips/tcg/sysemu_helper.h.inc
@@ -144,12 +144,12 @@ DEF_HELPER_2(mftgpr, tl, env, i32)
 DEF_HELPER_2(mftlo, tl, env, i32)
 DEF_HELPER_2(mfthi, tl, env, i32)
 DEF_HELPER_2(mftacx, tl, env, i32)
-DEF_HELPER_1(mftdsp, tl, env)
+DEF_HELPER_1(mftdsp, i32, env)
 DEF_HELPER_3(mttgpr, void, env, tl, i32)
 DEF_HELPER_3(mttlo, void, env, tl, i32)
 DEF_HELPER_3(mtthi, void, env, tl, i32)
 DEF_HELPER_3(mttacx, void, env, tl, i32)
-DEF_HELPER_2(mttdsp, void, env, tl)
+DEF_HELPER_2(mttdsp, void, env, i32)
 DEF_HELPER_0(dmt, tl)
 DEF_HELPER_0(emt, tl)
 DEF_HELPER_1(dvpe, tl, env)
diff --git a/target/mips/sysemu/machine.c b/target/mips/sysemu/machine.c
index 823a49e2ca1..c1fb72864f6 100644
--- a/target/mips/sysemu/machine.c
+++ b/target/mips/sysemu/machine.c
@@ -88,7 +88,10 @@ static const VMStateField vmstate_tc_fields[] = {
     VMSTATE_UINTTL_ARRAY(HI, TCState, MIPS_DSP_ACC),
     VMSTATE_UINTTL_ARRAY(LO, TCState, MIPS_DSP_ACC),
     VMSTATE_UINTTL_ARRAY(ACX, TCState, MIPS_DSP_ACC),
-    VMSTATE_UINTTL(DSPControl, TCState),
+    VMSTATE_UINT32(DSPControl, TCState),
+#if defined(TARGET_MIPS64)
+    VMSTATE_UNUSED(4),
+#endif /* TARGET_MIPS64 */
     VMSTATE_INT32(CP0_TCStatus, TCState),
     VMSTATE_INT32(CP0_TCBind, TCState),
     VMSTATE_UINTTL(CP0_TCHalt, TCState),
diff --git a/target/mips/tcg/dsp_helper.c b/target/mips/tcg/dsp_helper.c
index 7a4362c8ef4..e58d6b9ef84 100644
--- a/target/mips/tcg/dsp_helper.c
+++ b/target/mips/tcg/dsp_helper.c
@@ -54,7 +54,7 @@ typedef union {
 static inline void set_DSPControl_overflow_flag(uint32_t flag, int position,
                                                 CPUMIPSState *env)
 {
-    env->active_tc.DSPControl |= (target_ulong)flag << position;
+    env->active_tc.DSPControl |= flag << position;
 }
 
 static inline void set_DSPControl_carryflag(bool flag, CPUMIPSState *env)
@@ -76,7 +76,7 @@ static inline void set_DSPControl_24(uint32_t flag, int len, CPUMIPSState *env)
     filter = ~filter;
 
     env->active_tc.DSPControl &= filter;
-    env->active_tc.DSPControl |= (target_ulong)flag << 24;
+    env->active_tc.DSPControl |= flag << 24;
 }
 
 static inline void set_DSPControl_pos(uint32_t pos, CPUMIPSState *env)
@@ -113,7 +113,7 @@ static inline uint32_t get_DSPControl_pos(CPUMIPSState *env)
 static inline void set_DSPControl_efi(uint32_t flag, CPUMIPSState *env)
 {
     env->active_tc.DSPControl &= 0xFFFFBFFF;
-    env->active_tc.DSPControl |= (target_ulong)flag << 14;
+    env->active_tc.DSPControl |= flag << 14;
 }
 
 #define DO_MIPS_SAT_ABS(size)                                    \
@@ -2923,7 +2923,7 @@ target_ulong helper_##name(CPUMIPSState *env, target_ulong rs,    \
     uint32_t pos, size, msb, lsb;                                \
     uint32_t const sizefilter = 0x3F;                            \
     target_ulong temp;                                           \
-    target_ulong dspc;                                           \
+    uint32_t dspc;                                               \
                                                                  \
     dspc = env->active_tc.DSPControl;                            \
                                                                  \
@@ -3063,7 +3063,7 @@ target_ulong helper_##name(target_ulong rs, target_ulong rt, \
 {                                                                \
     uint32_t rs_t, rt_t;                                         \
     uint32_t cc;                                                 \
-    target_ulong dsp;                                            \
+    uint32_t dsp;                                                \
     int i;                                                       \
     target_ulong result = 0;                                     \
                                                                  \
diff --git a/target/mips/tcg/sysemu/cp0_helper.c b/target/mips/tcg/sysemu/cp0_helper.c
index 79a5c833cee..61b7644f3a4 100644
--- a/target/mips/tcg/sysemu/cp0_helper.c
+++ b/target/mips/tcg/sysemu/cp0_helper.c
@@ -1483,7 +1483,7 @@ target_ulong helper_mftacx(CPUMIPSState *env, uint32_t sel)
     }
 }
 
-target_ulong helper_mftdsp(CPUMIPSState *env)
+uint32_t helper_mftdsp(CPUMIPSState *env)
 {
     int other_tc = env->CP0_VPEControl & (0xff << CP0VPECo_TargTC);
     CPUMIPSState *other = mips_cpu_map_tc(env, &other_tc);
@@ -1543,7 +1543,7 @@ void helper_mttacx(CPUMIPSState *env, target_ulong arg1, uint32_t sel)
     }
 }
 
-void helper_mttdsp(CPUMIPSState *env, target_ulong arg1)
+void helper_mttdsp(CPUMIPSState *env, uint32_t arg1)
 {
     int other_tc = env->CP0_VPEControl & (0xff << CP0VPECo_TargTC);
     CPUMIPSState *other = mips_cpu_map_tc(env, &other_tc);
diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c
index d6be37d56d3..6f2eacbba97 100644
--- a/target/mips/tcg/translate.c
+++ b/target/mips/tcg/translate.c
@@ -1172,7 +1172,8 @@ TCGv cpu_gpr[32], cpu_PC;
  */
 TCGv_i64 cpu_gpr_hi[32];
 TCGv cpu_HI[MIPS_DSP_ACC], cpu_LO[MIPS_DSP_ACC];
-static TCGv cpu_dspctrl, btarget;
+static TCGv_i32 cpu_dspctrl;
+static TCGv btarget;
 TCGv bcond;
 static TCGv cpu_lladdr, cpu_llval;
 static TCGv_i32 hflags;
@@ -4438,9 +4439,11 @@ static void gen_compute_branch(DisasContext *ctx, uint32_t opc,
     case OPC_BPOSGE32:
 #if defined(TARGET_MIPS64)
     case OPC_BPOSGE64:
-        tcg_gen_andi_tl(t0, cpu_dspctrl, 0x7F);
+        tcg_gen_extu_i32_tl(t1, cpu_dspctrl);
+        tcg_gen_andi_tl(t0, t1, 0x7F);
 #else
-        tcg_gen_andi_tl(t0, cpu_dspctrl, 0x3F);
+        tcg_gen_extu_i32_tl(t1, cpu_dspctrl);
+        tcg_gen_andi_tl(t0, t1, 0x3F);
 #endif
         bcond_compute = 1;
         btgt = ctx->base.pc_next + insn_bytes + offset;
@@ -8225,6 +8228,7 @@ static void gen_mftr(CPUMIPSState *env, DisasContext *ctx, int rt, int rd,
             gen_mfc0(ctx, t0, rt, sel);
         }
     } else {
+        TCGv_i32 t32;
         switch (sel) {
         /* GPR registers. */
         case 0:
@@ -8270,7 +8274,9 @@
             gen_helper_1e0i(mftacx, t0, 3);
             break;
         case 16:
-            gen_helper_mftdsp(t0, tcg_env);
+            t32 = tcg_temp_new_i32();
+            gen_helper_mftdsp(t32, tcg_env);
+            tcg_gen_extu_i32_tl(t0, t32);
             break;
         default:
             goto die;
@@ -8425,6 +8431,7 @@ static void gen_mttr(CPUMIPSState *env, DisasContext *ctx, int rd, int rt,
             gen_mtc0(ctx, t0, rd, sel);
         }
     } else {
+        TCGv_i32 t32;
         switch (sel) {
         /* GPR registers. */
         case 0:
@@ -8470,7 +8477,9 @@
             gen_helper_0e1i(mttacx, t0, 3);
             break;
         case 16:
-            gen_helper_mttdsp(tcg_env, t0);
+            t32 = tcg_temp_new_i32();
+            gen_load_gpr_i32(t32, rt);
+            gen_helper_mttdsp(tcg_env, t32);
             break;
         default:
             goto die;
@@ -12516,6 +12525,7 @@ static void gen_mipsdsp_add_cmp_pick(DisasContext *ctx,
     TCGv t1;
     TCGv v1_t;
     TCGv v2_t;
+    TCGv_i32 t32;
 
     if ((ret == 0) && (check_ret == 1)) {
         /* Treat as NOP. */
@@ -12560,25 +12570,31 @@
         check_dsp_r2(ctx);
         gen_helper_cmpgu_eq_qb(t1, v1_t, v2_t);
         tcg_gen_mov_tl(cpu_gpr[ret], t1);
-        tcg_gen_andi_tl(cpu_dspctrl, cpu_dspctrl, 0xF0FFFFFF);
+        tcg_gen_andi_i32(cpu_dspctrl, cpu_dspctrl, 0xF0FFFFFF);
         tcg_gen_shli_tl(t1, t1, 24);
-        tcg_gen_or_tl(cpu_dspctrl, cpu_dspctrl, t1);
+        t32 = tcg_temp_new_i32();
+        tcg_gen_trunc_tl_i32(t32, t1);
+        tcg_gen_or_i32(cpu_dspctrl, cpu_dspctrl, t32);
         break;
     case OPC_CMPGDU_LT_QB:
         check_dsp_r2(ctx);
         gen_helper_cmpgu_lt_qb(t1, v1_t, v2_t);
         tcg_gen_mov_tl(cpu_gpr[ret], t1);
-        tcg_gen_andi_tl(cpu_dspctrl, cpu_dspctrl, 0xF0FFFFFF);
+        tcg_gen_andi_i32(cpu_dspctrl, cpu_dspctrl, 0xF0FFFFFF);
         tcg_gen_shli_tl(t1, t1, 24);
-        tcg_gen_or_tl(cpu_dspctrl, cpu_dspctrl, t1);
+        t32 = tcg_temp_new_i32();
+        tcg_gen_trunc_tl_i32(t32, t1);
+        tcg_gen_or_i32(cpu_dspctrl, cpu_dspctrl, t32);
         break;
     case OPC_CMPGDU_LE_QB:
         check_dsp_r2(ctx);
         gen_helper_cmpgu_le_qb(t1, v1_t, v2_t);
         tcg_gen_mov_tl(cpu_gpr[ret], t1);
-        tcg_gen_andi_tl(cpu_dspctrl, cpu_dspctrl, 0xF0FFFFFF);
+        tcg_gen_andi_i32(cpu_dspctrl, cpu_dspctrl, 0xF0FFFFFF);
         tcg_gen_shli_tl(t1, t1, 24);
-        tcg_gen_or_tl(cpu_dspctrl, cpu_dspctrl, t1);
+        t32 = tcg_temp_new_i32();
+        tcg_gen_trunc_tl_i32(t32, t1);
+        tcg_gen_or_i32(cpu_dspctrl, cpu_dspctrl, t32);
         break;
     case OPC_CMP_EQ_PH:
         check_dsp(ctx);
@@ -15303,7 +15319,7 @@ void mips_tcg_init(void)
                                        offsetof(CPUMIPSState, active_tc.LO[i]),
                                        regnames_LO[i]);
     }
-    cpu_dspctrl = tcg_global_mem_new(tcg_env,
+    cpu_dspctrl = tcg_global_mem_new_i32(tcg_env,
                                      offsetof(CPUMIPSState,
                                               active_tc.DSPControl),
                                      "DSPControl");
diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc
index 2ad936c66d4..1d6b70083b0 100644
--- a/target/mips/tcg/nanomips_translate.c.inc
+++ b/target/mips/tcg/nanomips_translate.c.inc
@@ -1136,7 +1136,8 @@ static void gen_compute_branch_nm(DisasContext *ctx, uint32_t opc,
         btgt = ctx->base.pc_next + insn_bytes + offset;
         break;
     case OPC_BPOSGE32:
-        tcg_gen_andi_tl(t0, cpu_dspctrl, 0x3F);
+        tcg_gen_extu_i32_tl(t1, cpu_dspctrl);
+        tcg_gen_andi_tl(t0, t1, 0x3F);
         bcond_compute = 1;
         btgt = ctx->base.pc_next + insn_bytes + offset;
         break;
@@ -3009,6 +3010,7 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc,
     TCGv t0 = tcg_temp_new();
     TCGv v1_t = tcg_temp_new();
     TCGv v2_t = tcg_temp_new();
+    TCGv_i32 v1_t32;
 
     gen_load_gpr_tl(v1_t, rs);
     gen_load_gpr_tl(v2_t, rt);
@@ -3056,19 +3058,25 @@
     case NM_CMPGDU_EQ_QB:
         check_dsp_r2(ctx);
         gen_helper_cmpgu_eq_qb(v1_t, v1_t, v2_t);
-        tcg_gen_deposit_tl(cpu_dspctrl, cpu_dspctrl, v1_t, 24, 4);
+        v1_t32 = tcg_temp_new_i32();
+        tcg_gen_trunc_tl_i32(v1_t32, v1_t);
+        tcg_gen_deposit_i32(cpu_dspctrl, cpu_dspctrl, v1_t32, 24, 4);
         gen_store_gpr_tl(v1_t, ret);
         break;
     case NM_CMPGDU_LT_QB:
         check_dsp_r2(ctx);
         gen_helper_cmpgu_lt_qb(v1_t, v1_t, v2_t);
-        tcg_gen_deposit_tl(cpu_dspctrl, cpu_dspctrl, v1_t, 24, 4);
+        v1_t32 = tcg_temp_new_i32();
+        tcg_gen_trunc_tl_i32(v1_t32, v1_t);
+        tcg_gen_deposit_i32(cpu_dspctrl, cpu_dspctrl, v1_t32, 24, 4);
        gen_store_gpr_tl(v1_t, ret);
         break;
     case NM_CMPGDU_LE_QB:
         check_dsp_r2(ctx);
         gen_helper_cmpgu_le_qb(v1_t, v1_t, v2_t);
-        tcg_gen_deposit_tl(cpu_dspctrl, cpu_dspctrl, v1_t, 24, 4);
+        v1_t32 = tcg_temp_new_i32();
+        tcg_gen_trunc_tl_i32(v1_t32, v1_t);
+        tcg_gen_deposit_i32(cpu_dspctrl, cpu_dspctrl, v1_t32, 24, 4);
         gen_store_gpr_tl(v1_t, ret);
         break;
     case NM_PACKRL_PH:
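The nanoMIPS hunks above use a deposit to write the 4-bit comparison result into
DSPControl[27:24] while leaving the other bits untouched. A standalone plain-C
sketch of the run-time semantics of tcg_gen_deposit_i32(dst, dst, val, 24, 4)
(the helper name and values below are illustrative only, not QEMU code):

#include <assert.h>
#include <stdint.h>

/* Replace the 'len'-bit field of 'arg' at bit offset 'ofs' with the
 * low 'len' bits of 'val'; all other bits of 'arg' are preserved. */
static uint32_t deposit_field32(uint32_t arg, uint32_t val,
                                unsigned ofs, unsigned len)
{
    uint32_t mask = ((1u << len) - 1) << ofs;

    return (arg & ~mask) | ((val << ofs) & mask);
}

int main(void)
{
    uint32_t dspcontrol = 0xABCDEF12;

    /* Write the 4-bit comparison result 0x5 into DSPControl[27:24]. */
    dspcontrol = deposit_field32(dspcontrol, 0x5, 24, 4);
    assert(dspcontrol == 0xA5CDEF12);
    return 0;
}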