From patchwork Wed Jan 13 17:24:54 2021
From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Cc: Thomas Huth, Huacai Chen, qemu-riscv@nongnu.org, Stefan Weil,
    Cornelia Huck, Richard Henderson, Aleksandar Rikalo,
    Philippe Mathieu-Daudé, qemu-s390x@nongnu.org, qemu-arm@nongnu.org,
    Alistair Francis, Palmer Dabbelt, Miroslav Rezanina, Aurelien Jarno
Subject: [PATCH v2 1/6] tcg/arm: Hoist common argument loads in tcg_out_op()
Date: Wed, 13 Jan 2021 18:24:54 +0100
Message-Id: <20210113172459.2481060-2-f4bug@amsat.org>
In-Reply-To: <20210113172459.2481060-1-f4bug@amsat.org>
References: <20210113172459.2481060-1-f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé
---
 tcg/arm/tcg-target.c.inc | 192 +++++++++++++++++++--------------------
 1 file changed, 92 insertions(+), 100 deletions(-)

diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index 0fd11264544..59bd196994f 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -1747,15 +1747,23 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 static void tcg_out_epilogue(TCGContext *s);
-static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
-                              const TCGArg *args, const int *const_args)
+static void tcg_out_op(TCGContext *s, TCGOpcode opc,
+                       const TCGArg args[TCG_MAX_OP_ARGS],
+                       const int const_args[TCG_MAX_OP_ARGS])
 {
     TCGArg a0, a1, a2, a3, a4, a5;
-    int c;
+    int c, c2;
+
+    /* Hoist the loads of the most common arguments.
*/ + a0 = args[0]; + a1 = args[1]; + a2 = args[2]; + a3 = args[3]; + c2 = const_args[2]; switch (opc) { case INDEX_op_exit_tb: - tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, args[0]); + tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, a0); tcg_out_epilogue(s); break; case INDEX_op_goto_tb: @@ -1765,7 +1773,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGReg base = TCG_REG_PC; tcg_debug_assert(s->tb_jmp_insn_offset == 0); - ptr = (intptr_t)tcg_splitwx_to_rx(s->tb_jmp_target_addr + args[0]); + ptr = (intptr_t)tcg_splitwx_to_rx(s->tb_jmp_target_addr + a0); dif = tcg_pcrel_diff(s, (void *)ptr) - 8; dil = sextract32(dif, 0, 12); if (dif != dil) { @@ -1778,74 +1786,68 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, tcg_out_movi32(s, COND_AL, base, ptr - dil); } tcg_out_ld32_12(s, COND_AL, TCG_REG_PC, base, dil); - set_jmp_reset_offset(s, args[0]); + set_jmp_reset_offset(s, a0); } break; case INDEX_op_goto_ptr: - tcg_out_bx(s, COND_AL, args[0]); + tcg_out_bx(s, COND_AL, a0); break; case INDEX_op_br: - tcg_out_goto_label(s, COND_AL, arg_label(args[0])); + tcg_out_goto_label(s, COND_AL, arg_label(a0)); break; case INDEX_op_ld8u_i32: - tcg_out_ld8u(s, COND_AL, args[0], args[1], args[2]); + tcg_out_ld8u(s, COND_AL, a0, a1, a2); break; case INDEX_op_ld8s_i32: - tcg_out_ld8s(s, COND_AL, args[0], args[1], args[2]); + tcg_out_ld8s(s, COND_AL, a0, a1, a2); break; case INDEX_op_ld16u_i32: - tcg_out_ld16u(s, COND_AL, args[0], args[1], args[2]); + tcg_out_ld16u(s, COND_AL, a0, a1, a2); break; case INDEX_op_ld16s_i32: - tcg_out_ld16s(s, COND_AL, args[0], args[1], args[2]); + tcg_out_ld16s(s, COND_AL, a0, a1, a2); break; case INDEX_op_ld_i32: - tcg_out_ld32u(s, COND_AL, args[0], args[1], args[2]); + tcg_out_ld32u(s, COND_AL, a0, a1, a2); break; case INDEX_op_st8_i32: - tcg_out_st8(s, COND_AL, args[0], args[1], args[2]); + tcg_out_st8(s, COND_AL, a0, a1, a2); break; case INDEX_op_st16_i32: - tcg_out_st16(s, COND_AL, args[0], args[1], args[2]); + tcg_out_st16(s, COND_AL, a0, a1, a2); break; case INDEX_op_st_i32: - tcg_out_st32(s, COND_AL, args[0], args[1], args[2]); + tcg_out_st32(s, COND_AL, a0, a1, a2); break; case INDEX_op_movcond_i32: /* Constraints mean that v2 is always in the same register as dest, * so we only need to do "if condition passed, move v1 to dest". 
*/ - tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0, - args[1], args[2], const_args[2]); + tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0, a1, a2, c2); tcg_out_dat_rIK(s, tcg_cond_to_arm_cond[args[5]], ARITH_MOV, - ARITH_MVN, args[0], 0, args[3], const_args[3]); + ARITH_MVN, a0, 0, a3, const_args[3]); break; case INDEX_op_add_i32: - tcg_out_dat_rIN(s, COND_AL, ARITH_ADD, ARITH_SUB, - args[0], args[1], args[2], const_args[2]); + tcg_out_dat_rIN(s, COND_AL, ARITH_ADD, ARITH_SUB, a0, a1, a2, c2); break; case INDEX_op_sub_i32: if (const_args[1]) { - if (const_args[2]) { - tcg_out_movi32(s, COND_AL, args[0], args[1] - args[2]); + if (c2) { + tcg_out_movi32(s, COND_AL, a0, a1 - a2); } else { - tcg_out_dat_rI(s, COND_AL, ARITH_RSB, - args[0], args[2], args[1], 1); + tcg_out_dat_rI(s, COND_AL, ARITH_RSB, a0, a2, a1, 1); } } else { - tcg_out_dat_rIN(s, COND_AL, ARITH_SUB, ARITH_ADD, - args[0], args[1], args[2], const_args[2]); + tcg_out_dat_rIN(s, COND_AL, ARITH_SUB, ARITH_ADD, a0, a1, a2, c2); } break; case INDEX_op_and_i32: - tcg_out_dat_rIK(s, COND_AL, ARITH_AND, ARITH_BIC, - args[0], args[1], args[2], const_args[2]); + tcg_out_dat_rIK(s, COND_AL, ARITH_AND, ARITH_BIC, a0, a1, a2, c2); break; case INDEX_op_andc_i32: - tcg_out_dat_rIK(s, COND_AL, ARITH_BIC, ARITH_AND, - args[0], args[1], args[2], const_args[2]); + tcg_out_dat_rIK(s, COND_AL, ARITH_BIC, ARITH_AND, a0, a1, a2, c2); break; case INDEX_op_or_i32: c = ARITH_ORR; @@ -1854,11 +1856,10 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, c = ARITH_EOR; /* Fall through. */ gen_arith: - tcg_out_dat_rI(s, COND_AL, c, args[0], args[1], args[2], const_args[2]); + tcg_out_dat_rI(s, COND_AL, c, a0, a1, a2, c2); break; case INDEX_op_add2_i32: - a0 = args[0], a1 = args[1], a2 = args[2]; - a3 = args[3], a4 = args[4], a5 = args[5]; + a4 = args[4], a5 = args[5]; if (a0 == a3 || (a0 == a5 && !const_args[5])) { a0 = TCG_REG_TMP; } @@ -1866,15 +1867,14 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, a0, a2, a4, const_args[4]); tcg_out_dat_rIK(s, COND_AL, ARITH_ADC, ARITH_SBC, a1, a3, a5, const_args[5]); - tcg_out_mov_reg(s, COND_AL, args[0], a0); + tcg_out_mov_reg(s, COND_AL, a0, a0); break; case INDEX_op_sub2_i32: - a0 = args[0], a1 = args[1], a2 = args[2]; - a3 = args[3], a4 = args[4], a5 = args[5]; + a4 = args[4], a5 = args[5]; if ((a0 == a3 && !const_args[3]) || (a0 == a5 && !const_args[5])) { a0 = TCG_REG_TMP; } - if (const_args[2]) { + if (c2) { if (const_args[4]) { tcg_out_movi32(s, COND_AL, a0, a4); a4 = a0; @@ -1884,7 +1884,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, tcg_out_dat_rIN(s, COND_AL, ARITH_SUB | TO_CPSR, ARITH_ADD | TO_CPSR, a0, a2, a4, const_args[4]); } - if (const_args[3]) { + if (const_a3) { if (const_args[5]) { tcg_out_movi32(s, COND_AL, a1, a5); a5 = a1; @@ -1894,69 +1894,64 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, tcg_out_dat_rIK(s, COND_AL, ARITH_SBC, ARITH_ADC, a1, a3, a5, const_args[5]); } - tcg_out_mov_reg(s, COND_AL, args[0], a0); + tcg_out_mov_reg(s, COND_AL, a0, a0); break; case INDEX_op_neg_i32: - tcg_out_dat_imm(s, COND_AL, ARITH_RSB, args[0], args[1], 0); + tcg_out_dat_imm(s, COND_AL, ARITH_RSB, a0, a1, 0); break; case INDEX_op_not_i32: - tcg_out_dat_reg(s, COND_AL, - ARITH_MVN, args[0], 0, args[1], SHIFT_IMM_LSL(0)); + tcg_out_dat_reg(s, COND_AL, ARITH_MVN, a0, 0, a1, SHIFT_IMM_LSL(0)); break; case INDEX_op_mul_i32: - tcg_out_mul32(s, COND_AL, args[0], args[1], args[2]); + tcg_out_mul32(s, COND_AL, a0, a1, a2); break; case 
INDEX_op_mulu2_i32: - tcg_out_umull32(s, COND_AL, args[0], args[1], args[2], args[3]); + tcg_out_umull32(s, COND_AL, a0, a1, a2, a3); break; case INDEX_op_muls2_i32: - tcg_out_smull32(s, COND_AL, args[0], args[1], args[2], args[3]); + tcg_out_smull32(s, COND_AL, a0, a1, a2, a3); break; - /* XXX: Perhaps args[2] & 0x1f is wrong */ + /* XXX: Perhaps a2 & 0x1f is wrong */ case INDEX_op_shl_i32: - c = const_args[2] ? - SHIFT_IMM_LSL(args[2] & 0x1f) : SHIFT_REG_LSL(args[2]); + c = c2 ? SHIFT_IMM_LSL(a2 & 0x1f) : SHIFT_REG_LSL(a2); goto gen_shift32; case INDEX_op_shr_i32: - c = const_args[2] ? (args[2] & 0x1f) ? SHIFT_IMM_LSR(args[2] & 0x1f) : - SHIFT_IMM_LSL(0) : SHIFT_REG_LSR(args[2]); + c = c2 ? (a2 & 0x1f) ? SHIFT_IMM_LSR(a2 & 0x1f) : + SHIFT_IMM_LSL(0) : SHIFT_REG_LSR(a2); goto gen_shift32; case INDEX_op_sar_i32: - c = const_args[2] ? (args[2] & 0x1f) ? SHIFT_IMM_ASR(args[2] & 0x1f) : - SHIFT_IMM_LSL(0) : SHIFT_REG_ASR(args[2]); + c = c2 ? (a2 & 0x1f) ? SHIFT_IMM_ASR(a2 & 0x1f) : + SHIFT_IMM_LSL(0) : SHIFT_REG_ASR(a2); goto gen_shift32; case INDEX_op_rotr_i32: - c = const_args[2] ? (args[2] & 0x1f) ? SHIFT_IMM_ROR(args[2] & 0x1f) : - SHIFT_IMM_LSL(0) : SHIFT_REG_ROR(args[2]); + c = c2 ? (a2 & 0x1f) ? SHIFT_IMM_ROR(a2 & 0x1f) : + SHIFT_IMM_LSL(0) : SHIFT_REG_ROR(a2); /* Fall through. */ gen_shift32: - tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, args[1], c); + tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1, c); break; case INDEX_op_rotl_i32: - if (const_args[2]) { - tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, args[1], - ((0x20 - args[2]) & 0x1f) ? - SHIFT_IMM_ROR((0x20 - args[2]) & 0x1f) : + if (c2) { + tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1, + ((0x20 - a2) & 0x1f) ? + SHIFT_IMM_ROR((0x20 - a2) & 0x1f) : SHIFT_IMM_LSL(0)); } else { - tcg_out_dat_imm(s, COND_AL, ARITH_RSB, TCG_REG_TMP, args[2], 0x20); - tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, args[1], + tcg_out_dat_imm(s, COND_AL, ARITH_RSB, TCG_REG_TMP, a2, 0x20); + tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1, SHIFT_REG_ROR(TCG_REG_TMP)); } break; case INDEX_op_ctz_i32: - tcg_out_dat_reg(s, COND_AL, INSN_RBIT, TCG_REG_TMP, 0, args[1], 0); + tcg_out_dat_reg(s, COND_AL, INSN_RBIT, TCG_REG_TMP, 0, a1, 0); a1 = TCG_REG_TMP; goto do_clz; case INDEX_op_clz_i32: - a1 = args[1]; do_clz: - a0 = args[0]; - a2 = args[2]; - c = const_args[2]; + c = c2; if (c && a2 == 32) { tcg_out_dat_reg(s, COND_AL, INSN_CLZ, a0, 0, a1, 0); break; @@ -1970,17 +1965,15 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_brcond_i32: tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0, - args[0], args[1], const_args[1]); - tcg_out_goto_label(s, tcg_cond_to_arm_cond[args[2]], - arg_label(args[3])); + a0, a1, const_args[1]); + tcg_out_goto_label(s, tcg_cond_to_arm_cond[a2], arg_label(a3)); break; case INDEX_op_setcond_i32: - tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0, - args[1], args[2], const_args[2]); - tcg_out_dat_imm(s, tcg_cond_to_arm_cond[args[3]], - ARITH_MOV, args[0], 0, 1); - tcg_out_dat_imm(s, tcg_cond_to_arm_cond[tcg_invert_cond(args[3])], - ARITH_MOV, args[0], 0, 0); + tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0, a1, a2, c2); + tcg_out_dat_imm(s, tcg_cond_to_arm_cond[a3], + ARITH_MOV, a0, 0, 1); + tcg_out_dat_imm(s, tcg_cond_to_arm_cond[tcg_invert_cond(a3)], + ARITH_MOV, a0, 0, 0); break; case INDEX_op_brcond2_i32: @@ -1989,9 +1982,9 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_setcond2_i32: c = tcg_out_cmp2(s, args + 1, const_args + 1); - 
tcg_out_dat_imm(s, tcg_cond_to_arm_cond[c], ARITH_MOV, args[0], 0, 1); + tcg_out_dat_imm(s, tcg_cond_to_arm_cond[c], ARITH_MOV, a0, 0, 1); tcg_out_dat_imm(s, tcg_cond_to_arm_cond[tcg_invert_cond(c)], - ARITH_MOV, args[0], 0, 0); + ARITH_MOV, a0, 0, 0); break; case INDEX_op_qemu_ld_i32: @@ -2008,63 +2001,62 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_bswap16_i32: - tcg_out_bswap16(s, COND_AL, args[0], args[1]); + tcg_out_bswap16(s, COND_AL, a0, a1); break; case INDEX_op_bswap32_i32: - tcg_out_bswap32(s, COND_AL, args[0], args[1]); + tcg_out_bswap32(s, COND_AL, a0, a1); break; case INDEX_op_ext8s_i32: - tcg_out_ext8s(s, COND_AL, args[0], args[1]); + tcg_out_ext8s(s, COND_AL, a0, a1); break; case INDEX_op_ext16s_i32: - tcg_out_ext16s(s, COND_AL, args[0], args[1]); + tcg_out_ext16s(s, COND_AL, a0, a1); break; case INDEX_op_ext16u_i32: - tcg_out_ext16u(s, COND_AL, args[0], args[1]); + tcg_out_ext16u(s, COND_AL, a0, a1); break; case INDEX_op_deposit_i32: - tcg_out_deposit(s, COND_AL, args[0], args[2], - args[3], args[4], const_args[2]); + tcg_out_deposit(s, COND_AL, a0, a2, a3, args[4], c2); break; case INDEX_op_extract_i32: - tcg_out_extract(s, COND_AL, args[0], args[1], args[2], args[3]); + tcg_out_extract(s, COND_AL, a0, a1, a2, a3); break; case INDEX_op_sextract_i32: - tcg_out_sextract(s, COND_AL, args[0], args[1], args[2], args[3]); + tcg_out_sextract(s, COND_AL, a0, a1, a2, a3); break; case INDEX_op_extract2_i32: /* ??? These optimization vs zero should be generic. */ /* ??? But we can't substitute 2 for 1 in the opcode stream yet. */ if (const_args[1]) { - if (const_args[2]) { - tcg_out_movi(s, TCG_TYPE_REG, args[0], 0); + if (c2) { + tcg_out_movi(s, TCG_TYPE_REG, a0, 0); } else { - tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, - args[2], SHIFT_IMM_LSL(32 - args[3])); + tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, + a2, SHIFT_IMM_LSL(32 - a3)); } - } else if (const_args[2]) { - tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, - args[1], SHIFT_IMM_LSR(args[3])); + } else if (c2) { + tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, + a1, SHIFT_IMM_LSR(a3)); } else { /* We can do extract2 in 2 insns, vs the 3 required otherwise. */ tcg_out_dat_reg(s, COND_AL, ARITH_MOV, TCG_REG_TMP, 0, - args[2], SHIFT_IMM_LSL(32 - args[3])); - tcg_out_dat_reg(s, COND_AL, ARITH_ORR, args[0], TCG_REG_TMP, - args[1], SHIFT_IMM_LSR(args[3])); + a2, SHIFT_IMM_LSL(32 - a3)); + tcg_out_dat_reg(s, COND_AL, ARITH_ORR, a0, TCG_REG_TMP, + a1, SHIFT_IMM_LSR(a3)); } break; case INDEX_op_div_i32: - tcg_out_sdiv(s, COND_AL, args[0], args[1], args[2]); + tcg_out_sdiv(s, COND_AL, a0, a1, a2); break; case INDEX_op_divu_i32: - tcg_out_udiv(s, COND_AL, args[0], args[1], args[2]); + tcg_out_udiv(s, COND_AL, a0, a1, a2); break; case INDEX_op_mb: - tcg_out_mb(s, args[0]); + tcg_out_mb(s, a0); break; case INDEX_op_mov_i32: /* Always emitted via tcg_out_mov. 
 */
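The patch above carries no commit description, so for context: the change simply reads the frequently used args[]/const_args[] entries into locals (a0..a3, c2) once, before the switch, instead of indexing the arrays again inside every case. The stand-alone sketch below shows only that shape; it is not QEMU code -- the opcode names, the emit_*() helpers and the four-element argument array are invented for illustration.

/*
 * Minimal sketch of the "hoist common argument loads" pattern,
 * assuming an invented two-opcode emitter.  Not QEMU code.
 */
#include <stdio.h>

typedef unsigned Arg;
enum { OP_ADD, OP_NEG };

static void emit_add(Arg d, Arg x, Arg y) { printf("add r%u, r%u, r%u\n", d, x, y); }
static void emit_neg(Arg d, Arg x)        { printf("neg r%u, r%u\n", d, x); }

/* After the change: the hot entries are loaded once, up front. */
static void out_op(int opc, const Arg args[4])
{
    Arg a0 = args[0], a1 = args[1], a2 = args[2];   /* hoisted loads */

    switch (opc) {
    case OP_ADD:
        emit_add(a0, a1, a2);   /* was: args[0], args[1], args[2] */
        break;
    case OP_NEG:
        emit_neg(a0, a1);       /* was: args[0], args[1] */
        break;
    }
}

int main(void)
{
    Arg v[4] = { 0, 1, 2, 0 };
    out_op(OP_ADD, v);
    out_op(OP_NEG, v);
    return 0;
}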
From patchwork Wed Jan 13 17:24:55 2021
From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Subject: [PATCH v2 2/6] tcg/arm: Replace goto statement by fall through comment
Date: Wed, 13 Jan 2021 18:24:55 +0100
Message-Id: <20210113172459.2481060-3-f4bug@amsat.org>
In-Reply-To: <20210113172459.2481060-1-f4bug@amsat.org>
References: <20210113172459.2481060-1-f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé
---
 tcg/arm/tcg-target.c.inc | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index 59bd196994f..0ffb2b13d14 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -1947,10 +1947,8 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
     case INDEX_op_ctz_i32:
         tcg_out_dat_reg(s, COND_AL, INSN_RBIT, TCG_REG_TMP, 0, a1, 0);
         a1 = TCG_REG_TMP;
-        goto do_clz;
-
+        /* Fall through. */
     case INDEX_op_clz_i32:
-    do_clz:
         c = c2;
         if (c && a2 == 32) {
             tcg_out_dat_reg(s, COND_AL, INSN_CLZ, a0, 0, a1, 0);
             break;
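Again there is no commit message body. The change drops the `goto do_clz;` (and the `do_clz:` label) and instead lets INDEX_op_ctz_i32 fall straight into the INDEX_op_clz_i32 case, marked with a comment; compilers that warn about implicit fall-through, such as GCC with -Wimplicit-fallthrough, recognize a comment of this form. A stand-alone illustration of the same idiom follows -- the reverse32()/clz32() helpers are invented for the example and merely mimic the RBIT-then-CLZ trick used in the ARM backend.

/* Replacing a goto-to-shared-tail with a marked fall through.  Not QEMU code. */
#include <stdio.h>

enum { OP_CTZ, OP_CLZ };

/* Portable reference count of leading zeros in a 32-bit value. */
static int clz32(unsigned v)
{
    int n = 32;
    while (v) {
        v >>= 1;
        n--;
    }
    return n;
}

/* Bit-reverse, so that ctz(x) == clz(reverse(x)). */
static unsigned reverse32(unsigned v)
{
    unsigned r = 0;
    for (int i = 0; i < 32; i++) {
        r = (r << 1) | ((v >> i) & 1);
    }
    return r;
}

static int count_zeros(int op, unsigned x)
{
    switch (op) {
    case OP_CTZ:
        x = reverse32(x);      /* was: goto do_clz; */
        /* Fall through. */
    case OP_CLZ:               /* was: do_clz: label */
        return clz32(x);
    }
    return -1;
}

int main(void)
{
    printf("ctz(0x80) = %d, clz(0x80) = %d\n",
           count_zeros(OP_CTZ, 0x80), count_zeros(OP_CLZ, 0x80));
    return 0;
}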
From patchwork Wed Jan 13 17:24:56 2021
From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Subject: [PATCH v2 3/6] tcg/ppc: Hoist common argument loads in tcg_out_op()
Date: Wed, 13 Jan 2021 18:24:56 +0100
Message-Id: <20210113172459.2481060-4-f4bug@amsat.org>
In-Reply-To: <20210113172459.2481060-1-f4bug@amsat.org>
References: <20210113172459.2481060-1-f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé
---
 tcg/ppc/tcg-target.c.inc | 188 ++++++++++++++++++---------------------
 1 file changed, 85 insertions(+), 103 deletions(-)

diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index 19a4a12f155..70b747a8a30 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.inc
@@ -2357,15 +2357,22 @@ static void tcg_target_qemu_prologue(TCGContext *s)
     tcg_out32(s, BCLR | BO_ALWAYS);
 }
-static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
-                       const int *const_args)
+static void tcg_out_op(TCGContext *s, TCGOpcode opc,
+                       const TCGArg args[TCG_MAX_OP_ARGS],
+                       const int const_args[TCG_MAX_OP_ARGS])
 {
     TCGArg a0, a1, a2;
-    int c;
+    int c, c2;
+
+    /* Hoist the loads of the most common arguments. */
+    a0 = args[0];
+    a1 = args[1];
+    a2 = args[2];
+    c2 = const_args[2];
     switch (opc) {
     case INDEX_op_exit_tb:
-        tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R3, args[0]);
+        tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R3, a0);
         tcg_out_b(s, 0, tcg_code_gen_epilogue);
         break;
     case INDEX_op_goto_tb:
@@ -2389,11 +2396,11 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
             /* Indirect jump. */
             tcg_debug_assert(s->tb_jmp_insn_offset == NULL);
             tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TB, 0,
-                       (intptr_t)(s->tb_jmp_insn_offset + args[0]));
+                       (intptr_t)(s->tb_jmp_insn_offset + a0));
         }
         tcg_out32(s, MTSPR | RS(TCG_REG_TB) | CTR);
         tcg_out32(s, BCCTR | BO_ALWAYS);
-        set_jmp_reset_offset(s, args[0]);
+        set_jmp_reset_offset(s, a0);
         if (USE_REG_TB) {
             /* For the unlinked case, need to reset TCG_REG_TB.
*/ tcg_out_mem_long(s, ADDI, ADD, TCG_REG_TB, TCG_REG_TB, @@ -2403,7 +2410,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, case INDEX_op_goto_ptr: tcg_out32(s, MTSPR | RS(args[0]) | CTR); if (USE_REG_TB) { - tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_TB, args[0]); + tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_TB, a0); } tcg_out32(s, ADDI | TAI(TCG_REG_R3, 0, 0)); tcg_out32(s, BCCTR | BO_ALWAYS); @@ -2424,49 +2431,48 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, break; case INDEX_op_ld8u_i32: case INDEX_op_ld8u_i64: - tcg_out_mem_long(s, LBZ, LBZX, args[0], args[1], args[2]); + tcg_out_mem_long(s, LBZ, LBZX, a0, a1, a2); break; case INDEX_op_ld8s_i32: case INDEX_op_ld8s_i64: - tcg_out_mem_long(s, LBZ, LBZX, args[0], args[1], args[2]); + tcg_out_mem_long(s, LBZ, LBZX, a0, a1, a2); tcg_out32(s, EXTSB | RS(args[0]) | RA(args[0])); break; case INDEX_op_ld16u_i32: case INDEX_op_ld16u_i64: - tcg_out_mem_long(s, LHZ, LHZX, args[0], args[1], args[2]); + tcg_out_mem_long(s, LHZ, LHZX, a0, a1, a2); break; case INDEX_op_ld16s_i32: case INDEX_op_ld16s_i64: - tcg_out_mem_long(s, LHA, LHAX, args[0], args[1], args[2]); + tcg_out_mem_long(s, LHA, LHAX, a0, a1, a2); break; case INDEX_op_ld_i32: case INDEX_op_ld32u_i64: - tcg_out_mem_long(s, LWZ, LWZX, args[0], args[1], args[2]); + tcg_out_mem_long(s, LWZ, LWZX, a0, a1, a2); break; case INDEX_op_ld32s_i64: - tcg_out_mem_long(s, LWA, LWAX, args[0], args[1], args[2]); + tcg_out_mem_long(s, LWA, LWAX, a0, a1, a2); break; case INDEX_op_ld_i64: - tcg_out_mem_long(s, LD, LDX, args[0], args[1], args[2]); + tcg_out_mem_long(s, LD, LDX, a0, a1, a2); break; case INDEX_op_st8_i32: case INDEX_op_st8_i64: - tcg_out_mem_long(s, STB, STBX, args[0], args[1], args[2]); + tcg_out_mem_long(s, STB, STBX, a0, a1, a2); break; case INDEX_op_st16_i32: case INDEX_op_st16_i64: - tcg_out_mem_long(s, STH, STHX, args[0], args[1], args[2]); + tcg_out_mem_long(s, STH, STHX, a0, a1, a2); break; case INDEX_op_st_i32: case INDEX_op_st32_i64: - tcg_out_mem_long(s, STW, STWX, args[0], args[1], args[2]); + tcg_out_mem_long(s, STW, STWX, a0, a1, a2); break; case INDEX_op_st_i64: - tcg_out_mem_long(s, STD, STDX, args[0], args[1], args[2]); + tcg_out_mem_long(s, STD, STDX, a0, a1, a2); break; case INDEX_op_add_i32: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { do_addi_32: tcg_out_mem_long(s, ADDI, ADD, a0, a1, (int32_t)a2); @@ -2475,7 +2481,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, } break; case INDEX_op_sub_i32: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[1]) { if (const_args[2]) { tcg_out_movi(s, TCG_TYPE_I32, a0, a1 - a2); @@ -2491,7 +2496,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, break; case INDEX_op_and_i32: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out_andi32(s, a0, a1, a2); } else { @@ -2499,7 +2503,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, } break; case INDEX_op_and_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out_andi64(s, a0, a1, a2); } else { @@ -2508,7 +2511,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, break; case INDEX_op_or_i64: case INDEX_op_or_i32: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out_ori32(s, a0, a1, a2); } else { @@ -2517,7 +2519,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, break; case INDEX_op_xor_i64: case INDEX_op_xor_i32: - a0 = 
args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out_xori32(s, a0, a1, a2); } else { @@ -2525,7 +2526,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, } break; case INDEX_op_andc_i32: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out_andi32(s, a0, a1, ~a2); } else { @@ -2533,7 +2533,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, } break; case INDEX_op_andc_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out_andi64(s, a0, a1, ~a2); } else { @@ -2542,57 +2541,52 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, break; case INDEX_op_orc_i32: if (const_args[2]) { - tcg_out_ori32(s, args[0], args[1], ~args[2]); + tcg_out_ori32(s, a0, a1, ~args[2]); break; } /* FALLTHRU */ case INDEX_op_orc_i64: - tcg_out32(s, ORC | SAB(args[1], args[0], args[2])); + tcg_out32(s, ORC | SAB(args[1], a0, a2)); break; case INDEX_op_eqv_i32: if (const_args[2]) { - tcg_out_xori32(s, args[0], args[1], ~args[2]); + tcg_out_xori32(s, a0, a1, ~args[2]); break; } /* FALLTHRU */ case INDEX_op_eqv_i64: - tcg_out32(s, EQV | SAB(args[1], args[0], args[2])); + tcg_out32(s, EQV | SAB(args[1], a0, a2)); break; case INDEX_op_nand_i32: case INDEX_op_nand_i64: - tcg_out32(s, NAND | SAB(args[1], args[0], args[2])); + tcg_out32(s, NAND | SAB(args[1], a0, a2)); break; case INDEX_op_nor_i32: case INDEX_op_nor_i64: - tcg_out32(s, NOR | SAB(args[1], args[0], args[2])); + tcg_out32(s, NOR | SAB(args[1], a0, a2)); break; case INDEX_op_clz_i32: - tcg_out_cntxz(s, TCG_TYPE_I32, CNTLZW, args[0], args[1], - args[2], const_args[2]); + tcg_out_cntxz(s, TCG_TYPE_I32, CNTLZW, a0, a1, a2, const_args[2]); break; case INDEX_op_ctz_i32: - tcg_out_cntxz(s, TCG_TYPE_I32, CNTTZW, args[0], args[1], - args[2], const_args[2]); + tcg_out_cntxz(s, TCG_TYPE_I32, CNTTZW, a0, a1, a2, const_args[2]); break; case INDEX_op_ctpop_i32: - tcg_out32(s, CNTPOPW | SAB(args[1], args[0], 0)); + tcg_out32(s, CNTPOPW | SAB(args[1], a0, 0)); break; case INDEX_op_clz_i64: - tcg_out_cntxz(s, TCG_TYPE_I64, CNTLZD, args[0], args[1], - args[2], const_args[2]); + tcg_out_cntxz(s, TCG_TYPE_I64, CNTLZD, a0, a1, a2, const_args[2]); break; case INDEX_op_ctz_i64: - tcg_out_cntxz(s, TCG_TYPE_I64, CNTTZD, args[0], args[1], - args[2], const_args[2]); + tcg_out_cntxz(s, TCG_TYPE_I64, CNTTZD, a0, a1, a2, const_args[2]); break; case INDEX_op_ctpop_i64: - tcg_out32(s, CNTPOPD | SAB(args[1], args[0], 0)); + tcg_out32(s, CNTPOPD | SAB(args[1], a0, 0)); break; case INDEX_op_mul_i32: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out32(s, MULLI | TAI(a0, a1, a2)); } else { @@ -2601,27 +2595,27 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, break; case INDEX_op_div_i32: - tcg_out32(s, DIVW | TAB(args[0], args[1], args[2])); + tcg_out32(s, DIVW | TAB(args[0], a1, a2)); break; case INDEX_op_divu_i32: - tcg_out32(s, DIVWU | TAB(args[0], args[1], args[2])); + tcg_out32(s, DIVWU | TAB(args[0], a1, a2)); break; case INDEX_op_shl_i32: if (const_args[2]) { /* Limit immediate shift count lest we create an illegal insn. */ - tcg_out_shli32(s, args[0], args[1], args[2] & 31); + tcg_out_shli32(s, a0, a1, a2 & 31); } else { - tcg_out32(s, SLW | SAB(args[1], args[0], args[2])); + tcg_out32(s, SLW | SAB(args[1], a0, a2)); } break; case INDEX_op_shr_i32: if (const_args[2]) { /* Limit immediate shift count lest we create an illegal insn. 
*/ - tcg_out_shri32(s, args[0], args[1], args[2] & 31); + tcg_out_shri32(s, a0, a1, a2 & 31); } else { - tcg_out32(s, SRW | SAB(args[1], args[0], args[2])); + tcg_out32(s, SRW | SAB(args[1], a0, a2)); } break; case INDEX_op_sar_i32: @@ -2629,33 +2623,32 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, /* Limit immediate shift count lest we create an illegal insn. */ tcg_out32(s, SRAWI | RS(args[1]) | RA(args[0]) | SH(args[2] & 31)); } else { - tcg_out32(s, SRAW | SAB(args[1], args[0], args[2])); + tcg_out32(s, SRAW | SAB(args[1], a0, a2)); } break; case INDEX_op_rotl_i32: if (const_args[2]) { - tcg_out_rlw(s, RLWINM, args[0], args[1], args[2], 0, 31); + tcg_out_rlw(s, RLWINM, a0, a1, a2, 0, 31); } else { - tcg_out32(s, RLWNM | SAB(args[1], args[0], args[2]) + tcg_out32(s, RLWNM | SAB(args[1], a0, a2) | MB(0) | ME(31)); } break; case INDEX_op_rotr_i32: if (const_args[2]) { - tcg_out_rlw(s, RLWINM, args[0], args[1], 32 - args[2], 0, 31); + tcg_out_rlw(s, RLWINM, a0, a1, 32 - a2, 0, 31); } else { - tcg_out32(s, SUBFIC | TAI(TCG_REG_R0, args[2], 32)); - tcg_out32(s, RLWNM | SAB(args[1], args[0], TCG_REG_R0) - | MB(0) | ME(31)); + tcg_out32(s, SUBFIC | TAI(TCG_REG_R0, a2, 32)); + tcg_out32(s, RLWNM | SAB(args[1], a0, TCG_REG_R0) | MB(0) | ME(31)); } break; case INDEX_op_brcond_i32: - tcg_out_brcond(s, args[2], args[0], args[1], const_args[1], + tcg_out_brcond(s, a2, a0, a1, const_args[1], arg_label(args[3]), TCG_TYPE_I32); break; case INDEX_op_brcond_i64: - tcg_out_brcond(s, args[2], args[0], args[1], const_args[1], + tcg_out_brcond(s, a2, a0, a1, const_args[1], arg_label(args[3]), TCG_TYPE_I64); break; case INDEX_op_brcond2_i32: @@ -2669,11 +2662,10 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, case INDEX_op_not_i32: case INDEX_op_not_i64: - tcg_out32(s, NOR | SAB(args[1], args[0], args[1])); + tcg_out32(s, NOR | SAB(args[1], a0, a1)); break; case INDEX_op_add_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { do_addi_64: tcg_out_mem_long(s, ADDI, ADD, a0, a1, a2); @@ -2682,7 +2674,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, } break; case INDEX_op_sub_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[1]) { if (const_args[2]) { tcg_out_movi(s, TCG_TYPE_I64, a0, a1 - a2); @@ -2700,17 +2691,17 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, case INDEX_op_shl_i64: if (const_args[2]) { /* Limit immediate shift count lest we create an illegal insn. */ - tcg_out_shli64(s, args[0], args[1], args[2] & 63); + tcg_out_shli64(s, a0, a1, a2 & 63); } else { - tcg_out32(s, SLD | SAB(args[1], args[0], args[2])); + tcg_out32(s, SLD | SAB(args[1], a0, a2)); } break; case INDEX_op_shr_i64: if (const_args[2]) { /* Limit immediate shift count lest we create an illegal insn. 
*/ - tcg_out_shri64(s, args[0], args[1], args[2] & 63); + tcg_out_shri64(s, a0, a1, a2 & 63); } else { - tcg_out32(s, SRD | SAB(args[1], args[0], args[2])); + tcg_out32(s, SRD | SAB(args[1], a0, a2)); } break; case INDEX_op_sar_i64: @@ -2718,27 +2709,26 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, int sh = SH(args[2] & 0x1f) | (((args[2] >> 5) & 1) << 1); tcg_out32(s, SRADI | RA(args[0]) | RS(args[1]) | sh); } else { - tcg_out32(s, SRAD | SAB(args[1], args[0], args[2])); + tcg_out32(s, SRAD | SAB(args[1], a0, a2)); } break; case INDEX_op_rotl_i64: if (const_args[2]) { - tcg_out_rld(s, RLDICL, args[0], args[1], args[2], 0); + tcg_out_rld(s, RLDICL, a0, a1, a2, 0); } else { - tcg_out32(s, RLDCL | SAB(args[1], args[0], args[2]) | MB64(0)); + tcg_out32(s, RLDCL | SAB(args[1], a0, a2) | MB64(0)); } break; case INDEX_op_rotr_i64: if (const_args[2]) { - tcg_out_rld(s, RLDICL, args[0], args[1], 64 - args[2], 0); + tcg_out_rld(s, RLDICL, a0, a1, 64 - a2, 0); } else { - tcg_out32(s, SUBFIC | TAI(TCG_REG_R0, args[2], 64)); - tcg_out32(s, RLDCL | SAB(args[1], args[0], TCG_REG_R0) | MB64(0)); + tcg_out32(s, SUBFIC | TAI(TCG_REG_R0, a2, 64)); + tcg_out32(s, RLDCL | SAB(args[1], a0, TCG_REG_R0) | MB64(0)); } break; case INDEX_op_mul_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out32(s, MULLI | TAI(a0, a1, a2)); } else { @@ -2746,10 +2736,10 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, } break; case INDEX_op_div_i64: - tcg_out32(s, DIVD | TAB(args[0], args[1], args[2])); + tcg_out32(s, DIVD | TAB(args[0], a1, a2)); break; case INDEX_op_divu_i64: - tcg_out32(s, DIVDU | TAB(args[0], args[1], args[2])); + tcg_out32(s, DIVDU | TAB(args[0], a1, a2)); break; case INDEX_op_qemu_ld_i32: @@ -2781,16 +2771,14 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, tcg_out32(s, c | RS(args[1]) | RA(args[0])); break; case INDEX_op_extu_i32_i64: - tcg_out_ext32u(s, args[0], args[1]); + tcg_out_ext32u(s, a0, a1); break; case INDEX_op_setcond_i32: - tcg_out_setcond(s, TCG_TYPE_I32, args[3], args[0], args[1], args[2], - const_args[2]); + tcg_out_setcond(s, TCG_TYPE_I32, args[3], a0, a1, a2, const_args[2]); break; case INDEX_op_setcond_i64: - tcg_out_setcond(s, TCG_TYPE_I64, args[3], args[0], args[1], args[2], - const_args[2]); + tcg_out_setcond(s, TCG_TYPE_I64, args[3], a0, a1, a2, const_args[2]); break; case INDEX_op_setcond2_i32: tcg_out_setcond2(s, args, const_args); @@ -2798,7 +2786,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, case INDEX_op_bswap16_i32: case INDEX_op_bswap16_i64: - a0 = args[0], a1 = args[1]; /* a1 = abcd */ if (a0 != a1) { /* a0 = (a1 r<< 24) & 0xff # 000c */ @@ -2818,7 +2805,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, case INDEX_op_bswap32_i32: case INDEX_op_bswap32_i64: /* Stolen from gcc's builtin_bswap32 */ - a1 = args[1]; a0 = args[0] == a1 ? 
TCG_REG_R0 : args[0]; /* a1 = args[1] # abcd */ @@ -2835,7 +2821,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, break; case INDEX_op_bswap64_i64: - a0 = args[0], a1 = args[1], a2 = TCG_REG_R0; + a2 = TCG_REG_R0; if (a0 == a1) { a0 = TCG_REG_R0; a2 = a1; @@ -2869,36 +2855,34 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, case INDEX_op_deposit_i32: if (const_args[2]) { uint32_t mask = ((2u << (args[4] - 1)) - 1) << args[3]; - tcg_out_andi32(s, args[0], args[0], ~mask); + tcg_out_andi32(s, a0, a0, ~mask); } else { - tcg_out_rlw(s, RLWIMI, args[0], args[2], args[3], + tcg_out_rlw(s, RLWIMI, a0, a2, args[3], 32 - args[3] - args[4], 31 - args[3]); } break; case INDEX_op_deposit_i64: if (const_args[2]) { uint64_t mask = ((2ull << (args[4] - 1)) - 1) << args[3]; - tcg_out_andi64(s, args[0], args[0], ~mask); + tcg_out_andi64(s, a0, a0, ~mask); } else { - tcg_out_rld(s, RLDIMI, args[0], args[2], args[3], - 64 - args[3] - args[4]); + tcg_out_rld(s, RLDIMI, a0, a2, args[3], 64 - args[3] - args[4]); } break; case INDEX_op_extract_i32: - tcg_out_rlw(s, RLWINM, args[0], args[1], - 32 - args[2], 32 - args[3], 31); + tcg_out_rlw(s, RLWINM, a0, a1, 32 - a2, 32 - args[3], 31); break; case INDEX_op_extract_i64: - tcg_out_rld(s, RLDICL, args[0], args[1], 64 - args[2], 64 - args[3]); + tcg_out_rld(s, RLDICL, a0, a1, 64 - a2, 64 - args[3]); break; case INDEX_op_movcond_i32: - tcg_out_movcond(s, TCG_TYPE_I32, args[5], args[0], args[1], args[2], + tcg_out_movcond(s, TCG_TYPE_I32, args[5], a0, a1, a2, args[3], args[4], const_args[2]); break; case INDEX_op_movcond_i64: - tcg_out_movcond(s, TCG_TYPE_I64, args[5], args[0], args[1], args[2], + tcg_out_movcond(s, TCG_TYPE_I64, args[5], a0, a1, a2, args[3], args[4], const_args[2]); break; @@ -2910,14 +2894,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, /* Note that the CA bit is defined based on the word size of the environment. So in 64-bit mode it's always carry-out of bit 63. The fallback code using deposit works just as well for 32-bit. */ - a0 = args[0], a1 = args[1]; if (a0 == args[3] || (!const_args[5] && a0 == args[5])) { a0 = TCG_REG_R0; } if (const_args[4]) { - tcg_out32(s, ADDIC | TAI(a0, args[2], args[4])); + tcg_out32(s, ADDIC | TAI(a0, a2, args[4])); } else { - tcg_out32(s, ADDC | TAB(a0, args[2], args[4])); + tcg_out32(s, ADDC | TAB(a0, a2, args[4])); } if (const_args[5]) { tcg_out32(s, (args[5] ? ADDME : ADDZE) | RT(a1) | RA(args[3])); @@ -2934,14 +2917,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, #else case INDEX_op_sub2_i32: #endif - a0 = args[0], a1 = args[1]; if (a0 == args[5] || (!const_args[3] && a0 == args[3])) { a0 = TCG_REG_R0; } if (const_args[2]) { - tcg_out32(s, SUBFIC | TAI(a0, args[4], args[2])); + tcg_out32(s, SUBFIC | TAI(a0, args[4], a2)); } else { - tcg_out32(s, SUBFC | TAB(a0, args[4], args[2])); + tcg_out32(s, SUBFC | TAB(a0, args[4], a2)); } if (const_args[3]) { tcg_out32(s, (args[3] ? 
SUBFME : SUBFZE) | RT(a1) | RA(args[5]));
@@ -2954,20 +2936,20 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         break;
     case INDEX_op_muluh_i32:
-        tcg_out32(s, MULHWU | TAB(args[0], args[1], args[2]));
+        tcg_out32(s, MULHWU | TAB(args[0], a1, a2));
         break;
     case INDEX_op_mulsh_i32:
-        tcg_out32(s, MULHW | TAB(args[0], args[1], args[2]));
+        tcg_out32(s, MULHW | TAB(args[0], a1, a2));
         break;
     case INDEX_op_muluh_i64:
-        tcg_out32(s, MULHDU | TAB(args[0], args[1], args[2]));
+        tcg_out32(s, MULHDU | TAB(args[0], a1, a2));
         break;
     case INDEX_op_mulsh_i64:
-        tcg_out32(s, MULHD | TAB(args[0], args[1], args[2]));
+        tcg_out32(s, MULHD | TAB(args[0], a1, a2));
         break;
     case INDEX_op_mb:
-        tcg_out_mb(s, args[0]);
+        tcg_out_mb(s, a0);
         break;
     case INDEX_op_mov_i32: /* Always emitted via tcg_out_mov. */
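One detail shared by the three hoisting patches (arm, ppc, and the s390 one that follows) is the prototype change from `const TCGArg *args` to `const TCGArg args[TCG_MAX_OP_ARGS]`. In C an array-typed parameter still adjusts to a pointer, so the two declarations denote the same function type; the sized form mainly documents the expected array length for readers and for compiler diagnostics about mismatched declarations. A tiny self-contained demonstration of that general language rule (invented names, not QEMU code):

/* Array-typed parameters decay to pointers; the bound is documentation. */
#include <stdio.h>

#define MAX_ARGS 4

static int sum_ptr(const int *a)          { return a[0] + a[1]; }
static int sum_arr(const int a[MAX_ARGS]) { return a[0] + a[1]; }

int main(void)
{
    int v[MAX_ARGS] = { 1, 2, 3, 4 };
    printf("%d %d\n", sum_ptr(v), sum_arr(v));   /* both print 3 */
    return 0;
}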
From patchwork Wed Jan 13 17:24:57 2021
From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Subject: [PATCH v2 4/6] tcg/s390: Hoist common argument loads in tcg_out_op()
Date: Wed, 13 Jan 2021 18:24:57 +0100
Message-Id: <20210113172459.2481060-5-f4bug@amsat.org>
In-Reply-To: <20210113172459.2481060-1-f4bug@amsat.org>
References: <20210113172459.2481060-1-f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé
---
 tcg/s390/tcg-target.c.inc | 222 ++++++++++++++++++--------------------
 1 file changed, 107 insertions(+), 115 deletions(-)

diff --git a/tcg/s390/tcg-target.c.inc b/tcg/s390/tcg-target.c.inc
index d7ef0790556..ec202e79cfc 100644
--- a/tcg/s390/tcg-target.c.inc
+++ b/tcg/s390/tcg-target.c.inc
@@ -1732,15 +1732,22 @@ static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
     case glue(glue(INDEX_op_,x),_i64)
 static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
-                              const TCGArg *args, const int *const_args)
+                              const TCGArg args[TCG_MAX_OP_ARGS],
+                              const int const_args[TCG_MAX_OP_ARGS])
 {
     S390Opcode op, op2;
-    TCGArg a0, a1, a2;
+    TCGArg a0, a1, a2, a4;
+    int c2;
+
+    a0 = args[0];
+    a1 = args[1];
+    a2 = args[2];
+    a4 = args[4];
+    c2 = const_args[2];
     switch (opc) {
     case INDEX_op_exit_tb:
         /* Reuse the zeroing that exists for goto_ptr.
*/ - a0 = args[0]; if (a0 == 0) { tgen_gotoi(s, S390_CC_ALWAYS, tcg_code_gen_epilogue); } else { @@ -1750,7 +1757,6 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_goto_tb: - a0 = args[0]; if (s->tb_jmp_insn_offset) { /* * branch displacement must be aligned for atomic patching; @@ -1784,7 +1790,6 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_goto_ptr: - a0 = args[0]; if (USE_REG_TB) { tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_TB, a0); } @@ -1794,44 +1799,42 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, OP_32_64(ld8u): /* ??? LLC (RXY format) is only present with the extended-immediate facility, whereas LLGC is always present. */ - tcg_out_mem(s, 0, RXY_LLGC, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LLGC, a0, a1, TCG_REG_NONE, a2); break; OP_32_64(ld8s): /* ??? LB is no smaller than LGB, so no point to using it. */ - tcg_out_mem(s, 0, RXY_LGB, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LGB, a0, a1, TCG_REG_NONE, a2); break; OP_32_64(ld16u): /* ??? LLH (RXY format) is only present with the extended-immediate facility, whereas LLGH is always present. */ - tcg_out_mem(s, 0, RXY_LLGH, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LLGH, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_ld16s_i32: - tcg_out_mem(s, RX_LH, RXY_LHY, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, RX_LH, RXY_LHY, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_ld_i32: - tcg_out_ld(s, TCG_TYPE_I32, args[0], args[1], args[2]); + tcg_out_ld(s, TCG_TYPE_I32, a0, a1, a2); break; OP_32_64(st8): - tcg_out_mem(s, RX_STC, RXY_STCY, args[0], args[1], - TCG_REG_NONE, args[2]); + tcg_out_mem(s, RX_STC, RXY_STCY, a0, a1, TCG_REG_NONE, a2); break; OP_32_64(st16): - tcg_out_mem(s, RX_STH, RXY_STHY, args[0], args[1], - TCG_REG_NONE, args[2]); + tcg_out_mem(s, RX_STH, RXY_STHY, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_st_i32: - tcg_out_st(s, TCG_TYPE_I32, args[0], args[1], args[2]); + tcg_out_st(s, TCG_TYPE_I32, a0, a1, a2); break; case INDEX_op_add_i32: - a0 = args[0], a1 = args[1], a2 = (int32_t)args[2]; + a2 = (int32_t)args[2]; if (const_args[2]) { do_addi_32: if (a0 == a1) { @@ -1852,9 +1855,9 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } break; case INDEX_op_sub_i32: - a0 = args[0], a1 = args[1], a2 = (int32_t)args[2]; + a2 = (int32_t)args[2]; if (const_args[2]) { - a2 = -a2; + a2 = -args[2]; goto do_addi_32; } else if (a0 == a1) { tcg_out_insn(s, RR, SR, a0, a2); @@ -1864,7 +1867,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_and_i32: - a0 = args[0], a1 = args[1], a2 = (uint32_t)args[2]; + a2 = (uint32_t)args[2]; if (const_args[2]) { tcg_out_mov(s, TCG_TYPE_I32, a0, a1); tgen_andi(s, TCG_TYPE_I32, a0, a2); @@ -1875,7 +1878,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } break; case INDEX_op_or_i32: - a0 = args[0], a1 = args[1], a2 = (uint32_t)args[2]; + a2 = (uint32_t)args[2]; if (const_args[2]) { tcg_out_mov(s, TCG_TYPE_I32, a0, a1); tgen_ori(s, TCG_TYPE_I32, a0, a2); @@ -1886,45 +1889,45 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } break; case INDEX_op_xor_i32: - a0 = args[0], a1 = args[1], a2 = (uint32_t)args[2]; + a2 = (uint32_t)args[2]; if (const_args[2]) { tcg_out_mov(s, TCG_TYPE_I32, a0, a1); tgen_xori(s, TCG_TYPE_I32, a0, a2); } else if (a0 == a1) { - tcg_out_insn(s, RR, XR, args[0], args[2]); + tcg_out_insn(s, RR, XR, a0, a2); } else { tcg_out_insn(s, RRF, 
XRK, a0, a1, a2); } break; case INDEX_op_neg_i32: - tcg_out_insn(s, RR, LCR, args[0], args[1]); + tcg_out_insn(s, RR, LCR, a0, a1); break; case INDEX_op_mul_i32: if (const_args[2]) { if ((int32_t)args[2] == (int16_t)args[2]) { - tcg_out_insn(s, RI, MHI, args[0], args[2]); + tcg_out_insn(s, RI, MHI, a0, a2); } else { - tcg_out_insn(s, RIL, MSFI, args[0], args[2]); + tcg_out_insn(s, RIL, MSFI, a0, a2); } } else { - tcg_out_insn(s, RRE, MSR, args[0], args[2]); + tcg_out_insn(s, RRE, MSR, a0, a2); } break; case INDEX_op_div2_i32: - tcg_out_insn(s, RR, DR, TCG_REG_R2, args[4]); + tcg_out_insn(s, RR, DR, TCG_REG_R2, a4); break; case INDEX_op_divu2_i32: - tcg_out_insn(s, RRE, DLR, TCG_REG_R2, args[4]); + tcg_out_insn(s, RRE, DLR, TCG_REG_R2, a4); break; case INDEX_op_shl_i32: op = RS_SLL; op2 = RSY_SLLK; do_shift32: - a0 = args[0], a1 = args[1], a2 = (int32_t)args[2]; + a2 = (int32_t)args[2]; if (a0 == a1) { if (const_args[2]) { tcg_out_sh32(s, op, a0, TCG_REG_NONE, a2); @@ -1952,110 +1955,107 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_rotl_i32: /* ??? Using tcg_out_sh64 here for the format; it is a 32-bit rol. */ if (const_args[2]) { - tcg_out_sh64(s, RSY_RLL, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_sh64(s, RSY_RLL, a0, a1, TCG_REG_NONE, a2); } else { - tcg_out_sh64(s, RSY_RLL, args[0], args[1], args[2], 0); + tcg_out_sh64(s, RSY_RLL, a0, a1, a2, 0); } break; case INDEX_op_rotr_i32: if (const_args[2]) { - tcg_out_sh64(s, RSY_RLL, args[0], args[1], - TCG_REG_NONE, (32 - args[2]) & 31); + tcg_out_sh64(s, RSY_RLL, a0, a1, TCG_REG_NONE, (32 - a2) & 31); } else { - tcg_out_insn(s, RR, LCR, TCG_TMP0, args[2]); - tcg_out_sh64(s, RSY_RLL, args[0], args[1], TCG_TMP0, 0); + tcg_out_insn(s, RR, LCR, TCG_TMP0, a2); + tcg_out_sh64(s, RSY_RLL, a0, a1, TCG_TMP0, 0); } break; case INDEX_op_ext8s_i32: - tgen_ext8s(s, TCG_TYPE_I32, args[0], args[1]); + tgen_ext8s(s, TCG_TYPE_I32, a0, a1); break; case INDEX_op_ext16s_i32: - tgen_ext16s(s, TCG_TYPE_I32, args[0], args[1]); + tgen_ext16s(s, TCG_TYPE_I32, a0, a1); break; case INDEX_op_ext8u_i32: - tgen_ext8u(s, TCG_TYPE_I32, args[0], args[1]); + tgen_ext8u(s, TCG_TYPE_I32, a0, a1); break; case INDEX_op_ext16u_i32: - tgen_ext16u(s, TCG_TYPE_I32, args[0], args[1]); + tgen_ext16u(s, TCG_TYPE_I32, a0, a1); break; OP_32_64(bswap16): /* The TCG bswap definition requires bits 0-47 already be zero. Thus we don't need the G-type insns to implement bswap16_i64. 
*/ - tcg_out_insn(s, RRE, LRVR, args[0], args[1]); - tcg_out_sh32(s, RS_SRL, args[0], TCG_REG_NONE, 16); + tcg_out_insn(s, RRE, LRVR, a0, a1); + tcg_out_sh32(s, RS_SRL, a0, TCG_REG_NONE, 16); break; OP_32_64(bswap32): - tcg_out_insn(s, RRE, LRVR, args[0], args[1]); + tcg_out_insn(s, RRE, LRVR, a0, a1); break; case INDEX_op_add2_i32: if (const_args[4]) { - tcg_out_insn(s, RIL, ALFI, args[0], args[4]); + tcg_out_insn(s, RIL, ALFI, a0, a4); } else { - tcg_out_insn(s, RR, ALR, args[0], args[4]); + tcg_out_insn(s, RR, ALR, a0, a4); } - tcg_out_insn(s, RRE, ALCR, args[1], args[5]); + tcg_out_insn(s, RRE, ALCR, a1, args[5]); break; case INDEX_op_sub2_i32: if (const_args[4]) { - tcg_out_insn(s, RIL, SLFI, args[0], args[4]); + tcg_out_insn(s, RIL, SLFI, a0, a4); } else { - tcg_out_insn(s, RR, SLR, args[0], args[4]); + tcg_out_insn(s, RR, SLR, a0, a4); } - tcg_out_insn(s, RRE, SLBR, args[1], args[5]); + tcg_out_insn(s, RRE, SLBR, a1, args[5]); break; case INDEX_op_br: - tgen_branch(s, S390_CC_ALWAYS, arg_label(args[0])); + tgen_branch(s, S390_CC_ALWAYS, arg_label(a0)); break; case INDEX_op_brcond_i32: - tgen_brcond(s, TCG_TYPE_I32, args[2], args[0], - args[1], const_args[1], arg_label(args[3])); + tgen_brcond(s, TCG_TYPE_I32, a2, a0, + a1, const_args[1], arg_label(args[3])); break; case INDEX_op_setcond_i32: - tgen_setcond(s, TCG_TYPE_I32, args[3], args[0], args[1], - args[2], const_args[2]); + tgen_setcond(s, TCG_TYPE_I32, args[3], a0, a1, a2, const_args[2]); break; case INDEX_op_movcond_i32: - tgen_movcond(s, TCG_TYPE_I32, args[5], args[0], args[1], - args[2], const_args[2], args[3], const_args[3]); + tgen_movcond(s, TCG_TYPE_I32, args[5], a0, a1, + a2, const_args[2], args[3], const_args[3]); break; case INDEX_op_qemu_ld_i32: /* ??? Technically we can use a non-extending instruction. 
*/ case INDEX_op_qemu_ld_i64: - tcg_out_qemu_ld(s, args[0], args[1], args[2]); + tcg_out_qemu_ld(s, a0, a1, a2); break; case INDEX_op_qemu_st_i32: case INDEX_op_qemu_st_i64: - tcg_out_qemu_st(s, args[0], args[1], args[2]); + tcg_out_qemu_st(s, a0, a1, a2); break; case INDEX_op_ld16s_i64: - tcg_out_mem(s, 0, RXY_LGH, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LGH, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_ld32u_i64: - tcg_out_mem(s, 0, RXY_LLGF, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LLGF, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_ld32s_i64: - tcg_out_mem(s, 0, RXY_LGF, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LGF, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_ld_i64: - tcg_out_ld(s, TCG_TYPE_I64, args[0], args[1], args[2]); + tcg_out_ld(s, TCG_TYPE_I64, a0, a1, a2); break; case INDEX_op_st32_i64: - tcg_out_st(s, TCG_TYPE_I32, args[0], args[1], args[2]); + tcg_out_st(s, TCG_TYPE_I32, a0, a1, a2); break; case INDEX_op_st_i64: - tcg_out_st(s, TCG_TYPE_I64, args[0], args[1], args[2]); + tcg_out_st(s, TCG_TYPE_I64, a0, a1, a2); break; case INDEX_op_add_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { do_addi_64: if (a0 == a1) { @@ -2084,7 +2084,6 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } break; case INDEX_op_sub_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { a2 = -a2; goto do_addi_64; @@ -2096,18 +2095,16 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_and_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out_mov(s, TCG_TYPE_I64, a0, a1); - tgen_andi(s, TCG_TYPE_I64, args[0], args[2]); + tgen_andi(s, TCG_TYPE_I64, a0, a2); } else if (a0 == a1) { - tcg_out_insn(s, RRE, NGR, args[0], args[2]); + tcg_out_insn(s, RRE, NGR, a0, a2); } else { tcg_out_insn(s, RRF, NGRK, a0, a1, a2); } break; case INDEX_op_or_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out_mov(s, TCG_TYPE_I64, a0, a1); tgen_ori(s, TCG_TYPE_I64, a0, a2); @@ -2118,7 +2115,6 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } break; case INDEX_op_xor_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[2]) { tcg_out_mov(s, TCG_TYPE_I64, a0, a1); tgen_xori(s, TCG_TYPE_I64, a0, a2); @@ -2130,21 +2126,21 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_neg_i64: - tcg_out_insn(s, RRE, LCGR, args[0], args[1]); + tcg_out_insn(s, RRE, LCGR, a0, a1); break; case INDEX_op_bswap64_i64: - tcg_out_insn(s, RRE, LRVGR, args[0], args[1]); + tcg_out_insn(s, RRE, LRVGR, a0, a1); break; case INDEX_op_mul_i64: if (const_args[2]) { - if (args[2] == (int16_t)args[2]) { - tcg_out_insn(s, RI, MGHI, args[0], args[2]); + if (a2 == (int16_t)args[2]) { + tcg_out_insn(s, RI, MGHI, a0, a2); } else { - tcg_out_insn(s, RIL, MSGFI, args[0], args[2]); + tcg_out_insn(s, RIL, MSGFI, a0, a2); } } else { - tcg_out_insn(s, RRE, MSGR, args[0], args[2]); + tcg_out_insn(s, RRE, MSGR, a0, a2); } break; @@ -2153,10 +2149,10 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, into R3 with this definition, but as we do in fact always produce both quotient and remainder using INDEX_op_div_i64 instead requires jumping through even more hoops. 
*/ - tcg_out_insn(s, RRE, DSGR, TCG_REG_R2, args[4]); + tcg_out_insn(s, RRE, DSGR, TCG_REG_R2, a4); break; case INDEX_op_divu2_i64: - tcg_out_insn(s, RRE, DLGR, TCG_REG_R2, args[4]); + tcg_out_insn(s, RRE, DLGR, TCG_REG_R2, a4); break; case INDEX_op_mulu2_i64: tcg_out_insn(s, RRE, MLGR, TCG_REG_R2, args[3]); @@ -2166,9 +2162,9 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, op = RSY_SLLG; do_shift64: if (const_args[2]) { - tcg_out_sh64(s, op, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_sh64(s, op, a0, a1, TCG_REG_NONE, a2); } else { - tcg_out_sh64(s, op, args[0], args[1], args[2], 0); + tcg_out_sh64(s, op, a0, a1, a2, 0); } break; case INDEX_op_shr_i64: @@ -2180,87 +2176,83 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_rotl_i64: if (const_args[2]) { - tcg_out_sh64(s, RSY_RLLG, args[0], args[1], - TCG_REG_NONE, args[2]); + tcg_out_sh64(s, RSY_RLLG, a0, a1, TCG_REG_NONE, a2); } else { - tcg_out_sh64(s, RSY_RLLG, args[0], args[1], args[2], 0); + tcg_out_sh64(s, RSY_RLLG, a0, a1, a2, 0); } break; case INDEX_op_rotr_i64: if (const_args[2]) { - tcg_out_sh64(s, RSY_RLLG, args[0], args[1], - TCG_REG_NONE, (64 - args[2]) & 63); + tcg_out_sh64(s, RSY_RLLG, a0, a1, TCG_REG_NONE, (64 - a2) & 63); } else { /* We can use the smaller 32-bit negate because only the low 6 bits are examined for the rotate. */ - tcg_out_insn(s, RR, LCR, TCG_TMP0, args[2]); - tcg_out_sh64(s, RSY_RLLG, args[0], args[1], TCG_TMP0, 0); + tcg_out_insn(s, RR, LCR, TCG_TMP0, a2); + tcg_out_sh64(s, RSY_RLLG, a0, a1, TCG_TMP0, 0); } break; case INDEX_op_ext8s_i64: - tgen_ext8s(s, TCG_TYPE_I64, args[0], args[1]); + tgen_ext8s(s, TCG_TYPE_I64, a0, a1); break; case INDEX_op_ext16s_i64: - tgen_ext16s(s, TCG_TYPE_I64, args[0], args[1]); + tgen_ext16s(s, TCG_TYPE_I64, a0, a1); break; case INDEX_op_ext_i32_i64: case INDEX_op_ext32s_i64: - tgen_ext32s(s, args[0], args[1]); + tgen_ext32s(s, a0, a1); break; case INDEX_op_ext8u_i64: - tgen_ext8u(s, TCG_TYPE_I64, args[0], args[1]); + tgen_ext8u(s, TCG_TYPE_I64, a0, a1); break; case INDEX_op_ext16u_i64: - tgen_ext16u(s, TCG_TYPE_I64, args[0], args[1]); + tgen_ext16u(s, TCG_TYPE_I64, a0, a1); break; case INDEX_op_extu_i32_i64: case INDEX_op_ext32u_i64: - tgen_ext32u(s, args[0], args[1]); + tgen_ext32u(s, a0, a1); break; case INDEX_op_add2_i64: if (const_args[4]) { - if ((int64_t)args[4] >= 0) { - tcg_out_insn(s, RIL, ALGFI, args[0], args[4]); + if ((int64_t)a4 >= 0) { + tcg_out_insn(s, RIL, ALGFI, a0, a4); } else { - tcg_out_insn(s, RIL, SLGFI, args[0], -args[4]); + tcg_out_insn(s, RIL, SLGFI, a0, -a4); } } else { - tcg_out_insn(s, RRE, ALGR, args[0], args[4]); + tcg_out_insn(s, RRE, ALGR, a0, a4); } - tcg_out_insn(s, RRE, ALCGR, args[1], args[5]); + tcg_out_insn(s, RRE, ALCGR, a1, args[5]); break; case INDEX_op_sub2_i64: if (const_args[4]) { - if ((int64_t)args[4] >= 0) { - tcg_out_insn(s, RIL, SLGFI, args[0], args[4]); + if ((int64_t)a4 >= 0) { + tcg_out_insn(s, RIL, SLGFI, a0, a4); } else { - tcg_out_insn(s, RIL, ALGFI, args[0], -args[4]); + tcg_out_insn(s, RIL, ALGFI, a0, -a4); } } else { - tcg_out_insn(s, RRE, SLGR, args[0], args[4]); + tcg_out_insn(s, RRE, SLGR, a0, a4); } - tcg_out_insn(s, RRE, SLBGR, args[1], args[5]); + tcg_out_insn(s, RRE, SLBGR, a1, args[5]); break; case INDEX_op_brcond_i64: - tgen_brcond(s, TCG_TYPE_I64, args[2], args[0], - args[1], const_args[1], arg_label(args[3])); + tgen_brcond(s, TCG_TYPE_I64, a2, a0, + a1, const_args[1], arg_label(args[3])); break; case INDEX_op_setcond_i64: - tgen_setcond(s, TCG_TYPE_I64, 
args[3], args[0], args[1], - args[2], const_args[2]); + tgen_setcond(s, TCG_TYPE_I64, args[3], a0, a1, a2, const_args[2]); break; case INDEX_op_movcond_i64: - tgen_movcond(s, TCG_TYPE_I64, args[5], args[0], args[1], - args[2], const_args[2], args[3], const_args[3]); + tgen_movcond(s, TCG_TYPE_I64, args[5], a0, a1, + a2, const_args[2], args[3], const_args[3]); break; OP_32_64(deposit): - a0 = args[0], a1 = args[1], a2 = args[2]; if (const_args[1]) { - tgen_deposit(s, a0, a2, args[3], args[4], 1); + tgen_deposit(s, a0, a2, args[3], a4, 1); } else { /* Since we can't support "0Z" as a constraint, we allow a1 in any register. Fix things up as if a matching constraint. */ @@ -2272,22 +2264,22 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } tcg_out_mov(s, type, a0, a1); } - tgen_deposit(s, a0, a2, args[3], args[4], 0); + tgen_deposit(s, a0, a2, args[3], a4, 0); } break; OP_32_64(extract): - tgen_extract(s, args[0], args[1], args[2], args[3]); + tgen_extract(s, a0, a1, a2, args[3]); break; case INDEX_op_clz_i64: - tgen_clz(s, args[0], args[1], args[2], const_args[2]); + tgen_clz(s, a0, a1, a2, const_args[2]); break; case INDEX_op_mb: /* The host memory model is quite strong, we simply need to serialize the instruction stream. */ - if (args[0] & TCG_MO_ST_LD) { + if (a0 & TCG_MO_ST_LD) { tcg_out_insn(s, RR, BCR, s390_facilities & FACILITY_FAST_BCR_SER ? 14 : 15, 0); }
From patchwork Wed Jan 13 17:24:58 2021 X-Patchwork-Submitter: Philippe Mathieu-Daudé X-Patchwork-Id: 12017451 From: Philippe Mathieu-Daudé To: qemu-devel@nongnu.org Cc: Thomas Huth , Huacai Chen , qemu-riscv@nongnu.org, Stefan Weil , Cornelia Huck , Richard Henderson , Aleksandar Rikalo , Philippe Mathieu-Daudé , qemu-s390x@nongnu.org, qemu-arm@nongnu.org, Alistair Francis , Palmer Dabbelt , Miroslav Rezanina , Aurelien Jarno Subject: [PATCH v2 5/6] tcg: Restrict tcg_out_op() to arrays of TCG_MAX_OP_ARGS elements Date: Wed, 13 Jan 2021 18:24:58 +0100 Message-Id: <20210113172459.2481060-6-f4bug@amsat.org> In-Reply-To: <20210113172459.2481060-1-f4bug@amsat.org> References: <20210113172459.2481060-1-f4bug@amsat.org>
tcg_reg_alloc_op() allocates arrays of TCG_MAX_OP_ARGS elements. The Aarch64 target already does this since commit 8d8db193f25 ("tcg-aarch64: Hoist common argument loads in tcg_out_op"), SPARC since commit b357f902bff ("tcg-sparc: Hoist common argument loads in tcg_out_op").
RISCV missed it upon introduction in commit bdf503819ee ("tcg/riscv: Add the out op decoder"), MIPS since commit 22ee3a987d5 ("tcg-mips: Hoist args loads") and i386 since commit 42d5b514928 ("tcg/i386: Hoist common arguments in tcg_out_op"). Provide this information as a hint to the compiler in the function prototype, and update the function definitions. This fixes the following warnings (using GCC 11): tcg/aarch64/tcg-target.c.inc:1855:37: error: argument 3 of type 'const TCGArg[16]' {aka 'const long unsigned int[16]'} with mismatched bound [-Werror=array-parameter=] tcg/aarch64/tcg-target.c.inc:1856:34: error: argument 4 of type 'const int[16]' with mismatched bound [-Werror=array-parameter=] Reported-by: Miroslav Rezanina Reviewed-by: Miroslav Rezanina Reviewed-by: Richard Henderson Signed-off-by: Philippe Mathieu-Daudé --- tcg/tcg.c | 5 +++-- tcg/i386/tcg-target.c.inc | 3 ++- tcg/mips/tcg-target.c.inc | 3 ++- tcg/riscv/tcg-target.c.inc | 3 ++- tcg/tci/tcg-target.c.inc | 5 +++-- 5 files changed, 12 insertions(+), 7 deletions(-) diff --git a/tcg/tcg.c b/tcg/tcg.c index 472bf1755bf..97d074d8fab 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -110,8 +110,9 @@ static void tcg_out_ld(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg1, static bool tcg_out_mov(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg); static void tcg_out_movi(TCGContext *s, TCGType type, TCGReg ret, tcg_target_long arg); -static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, - const int *const_args); +static void tcg_out_op(TCGContext *s, TCGOpcode opc, + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]); #if TCG_TARGET_MAYBE_vec static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece, TCGReg dst, TCGReg src); diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index 46e856f4421..d121dca8789 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -2215,7 +2215,8 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64) } static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, - const TCGArg *args, const int *const_args) + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) { TCGArg a0, a1, a2; int c, const_a2, vexop, rexw = 0; diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index add157f6c32..b9bb54f0ecc 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1691,7 +1691,8 @@ static void tcg_out_clz(TCGContext *s, MIPSInsn opcv2, MIPSInsn opcv6, } static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, - const TCGArg *args, const int *const_args) + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) { MIPSInsn i1, i2; TCGArg a0, a1, a2; diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index c60b91ba58f..5bf0d069532 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -1238,7 +1238,8 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64) static const tcg_insn_unit *tb_ret_addr; static void tcg_out_op(TCGContext *s, TCGOpcode opc, - const TCGArg *args, const int *const_args) + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) { TCGArg a0 = args[0]; TCGArg a1 = args[1]; diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc index d5a4d9d37cf..60464524f3d 100644 --- a/tcg/tci/tcg-target.c.inc +++ b/tcg/tci/tcg-target.c.inc @@ -553,8 +553,9 @@ static inline void tcg_out_call(TCGContext *s, const tcg_insn_unit *arg)
old_code_ptr[1] = s->code_ptr - old_code_ptr; } -static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, - const int *const_args) +static void tcg_out_op(TCGContext *s, TCGOpcode opc, + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) { uint8_t *old_code_ptr = s->code_ptr;
From patchwork Wed Jan 13 17:24:59 2021 X-Patchwork-Submitter: Philippe Mathieu-Daudé X-Patchwork-Id: 12017459 From: Philippe Mathieu-Daudé To: qemu-devel@nongnu.org Cc: Thomas Huth , Huacai Chen , qemu-riscv@nongnu.org, Stefan Weil , Cornelia Huck , Richard Henderson , Aleksandar Rikalo , Philippe Mathieu-Daudé , qemu-s390x@nongnu.org, qemu-arm@nongnu.org, Alistair Francis , Palmer Dabbelt , Miroslav Rezanina , Aurelien Jarno Subject: [PATCH v2 6/6] tcg: Restrict tcg_out_vec_op() to arrays of TCG_MAX_OP_ARGS elements Date: Wed, 13 Jan 2021 18:24:59 +0100 Message-Id: <20210113172459.2481060-7-f4bug@amsat.org> In-Reply-To: <20210113172459.2481060-1-f4bug@amsat.org> References: <20210113172459.2481060-1-f4bug@amsat.org>
tcg_reg_alloc_op() allocates arrays of TCG_MAX_OP_ARGS elements.
Reviewed-by: Richard Henderson Signed-off-by: Philippe Mathieu-Daudé --- tcg/tcg.c | 14 ++++++++------ tcg/aarch64/tcg-target.c.inc | 3 ++- tcg/i386/tcg-target.c.inc | 3 ++- tcg/ppc/tcg-target.c.inc | 3 ++- 4 files changed, 14 insertions(+), 9 deletions(-) diff --git a/tcg/tcg.c b/tcg/tcg.c index 97d074d8fab..3a20327f9cb 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -120,9 +120,10 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece, TCGReg dst, TCGReg base, intptr_t offset); static void tcg_out_dupi_vec(TCGContext *s, TCGType type, TCGReg dst, tcg_target_long arg); -static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, unsigned vecl, - unsigned vece, const TCGArg *args, - const int *const_args); +static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, + unsigned vecl, unsigned vece, + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]); #else static inline bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece, TCGReg dst, TCGReg src) @@ -139,9 +140,10 @@ static inline void tcg_out_dupi_vec(TCGContext *s, TCGType type, { g_assert_not_reached(); } -static inline void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, unsigned vecl, - unsigned vece, const TCGArg *args, - const int *const_args) +static inline void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, + unsigned vecl, unsigned vece, + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) { g_assert_not_reached(); } diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index ab199b143f3..32811976e78 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -2276,7 +2276,8 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, unsigned vecl, unsigned vece, - const TCGArg *args, const int *const_args) + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) { static const AArch64Insn cmp_insn[16] = { [TCG_COND_EQ] = I3616_CMEQ, diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index d121dca8789..87bf75735a1 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -2654,7 +2654,8 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, unsigned vecl, unsigned vece, - const TCGArg *args, const int *const_args) + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) { static int const add_insn[4] = { OPC_PADDB, OPC_PADDW, OPC_PADDD, OPC_PADDQ diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 70b747a8a30..b8f5f8a53e1 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -3137,7 +3137,8 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece, static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, unsigned vecl, unsigned vece, - const TCGArg *args, const int *const_args) + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) { static const uint32_t add_op[4] = { VADDUBM, VADDUHM, VADDUWM, VADDUDM },
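[Editor's note] The per-target hunks earlier in this series replace repeated args[N] reads with the a0..a5 locals hoisted once at the top of tcg_out_op(). A minimal standalone sketch of that pattern follows; it is not QEMU code, and every name in it (emit_op, Arg, MAX_OP_ARGS, the opcodes) is an illustrative assumption.

#include <stdio.h>

#define MAX_OP_ARGS 16
typedef unsigned long Arg;

enum { OP_ADD, OP_NEG };

/* Load the most frequently used operands once, before the opcode switch,
 * instead of re-reading args[0]/args[1]/args[2] inside every case label. */
static void emit_op(int opc, const Arg args[MAX_OP_ARGS])
{
    Arg a0 = args[0];
    Arg a1 = args[1];
    Arg a2 = args[2];

    switch (opc) {
    case OP_ADD:
        printf("add r%lu, r%lu, r%lu\n", a0, a1, a2);
        break;
    case OP_NEG:
        printf("neg r%lu, r%lu\n", a0, a1);
        break;
    default:
        break;
    }
}

int main(void)
{
    Arg args[MAX_OP_ARGS] = { 3, 4, 5 };
    emit_op(OP_ADD, args);
    emit_op(OP_NEG, args);
    return 0;
}

The hoist is a source-level cleanup only: a case that needs an operand the common prologue did not load (args[5], for instance) can still index the array directly, as several hunks above do.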
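[Editor's note] Patches 5/6 and 6/6 rely on spelling the array bound in the parameter declaration so that GCC 11's -Warray-parameter can cross-check prototype and definition. Below is a self-contained sketch of that mechanism under assumed names (emit_op, Arg, MAX_OP_ARGS are not QEMU identifiers).

#include <stdio.h>

#define MAX_OP_ARGS 16
typedef unsigned long Arg;

/* The prototype advertises the bound the caller actually allocates,
 * mirroring "const TCGArg args[TCG_MAX_OP_ARGS]" in the patches above. */
static void emit_op(int opc, const Arg args[MAX_OP_ARGS],
                    const int const_args[MAX_OP_ARGS]);

/* The definition repeats the same bound.  Had it been written with bare
 * pointers or a different bound, GCC 11 would flag the mismatch under
 * -Warray-parameter (promoted to an error by -Werror, as in the build
 * log quoted in patch 5/6). */
static void emit_op(int opc, const Arg args[MAX_OP_ARGS],
                    const int const_args[MAX_OP_ARGS])
{
    printf("op %d: a0=%lu (const? %d)\n", opc, args[0], const_args[0]);
}

int main(void)
{
    Arg args[MAX_OP_ARGS] = { 42 };
    int const_args[MAX_OP_ARGS] = { 1 };
    emit_op(7, args, const_args);
    return 0;
}

Note that the bound remains a hint: C still adjusts array parameters to pointers, so nothing changes at run time; the benefit is the compile-time check that declaration and definition agree.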