From patchwork Mon Jan 11 15:01:10 2021
X-Patchwork-Submitter: Philippe Mathieu-Daudé
X-Patchwork-Id: 12010925
From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Cc: Aleksandar Rikalo, Cornelia Huck, qemu-riscv@nongnu.org, Stefan Weil,
    Huacai Chen, Richard Henderson, Thomas Huth, Philippe Mathieu-Daudé,
    qemu-s390x@nongnu.org, qemu-arm@nongnu.org, Alistair Francis,
    Palmer Dabbelt, Aurelien Jarno
Subject: [PATCH 1/5] tcg/arm: Hoist common argument loads in tcg_out_op()
Date: Mon, 11 Jan 2021 16:01:10 +0100
Message-Id: <20210111150114.1415930-2-f4bug@amsat.org>
In-Reply-To: <20210111150114.1415930-1-f4bug@amsat.org>
References: <20210111150114.1415930-1-f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé
---
 tcg/arm/tcg-target.c.inc | 173 +++++++++++++++++++--------------------
 1 file changed, 86 insertions(+), 87 deletions(-)

diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index 0fd11264544..94cc12a0fc6 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -1747,15 +1747,24 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 
 static void tcg_out_epilogue(TCGContext *s);
 
-static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
-                const TCGArg *args, const int *const_args)
+static void tcg_out_op(TCGContext *s, TCGOpcode opc,
+                       const TCGArg args[TCG_MAX_OP_ARGS],
+                       const int const_args[TCG_MAX_OP_ARGS])
 {
     TCGArg a0, a1, a2, a3, a4, a5;
     int c;
 
+    /* Hoist the loads of the most common arguments. */
+    a0 = args[0];
+    a1 = args[1];
+    a2 = args[2];
+    a3 = args[3];
+    a4 = args[4];
+    a5 = args[5];
+
     switch (opc) {
     case INDEX_op_exit_tb:
-        tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, args[0]);
+        tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R0, a0);
         tcg_out_epilogue(s);
         break;
     case INDEX_op_goto_tb:
@@ -1765,7 +1774,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
             TCGReg base = TCG_REG_PC;
 
             tcg_debug_assert(s->tb_jmp_insn_offset == 0);
-            ptr = (intptr_t)tcg_splitwx_to_rx(s->tb_jmp_target_addr + args[0]);
+            ptr = (intptr_t)tcg_splitwx_to_rx(s->tb_jmp_target_addr + a0);
             dif = tcg_pcrel_diff(s, (void *)ptr) - 8;
             dil = sextract32(dif, 0, 12);
             if (dif != dil) {
@@ -1778,39 +1787,39 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
                 tcg_out_movi32(s, COND_AL, base, ptr - dil);
             }
             tcg_out_ld32_12(s, COND_AL, TCG_REG_PC, base, dil);
-            set_jmp_reset_offset(s, args[0]);
+            set_jmp_reset_offset(s, a0);
         }
         break;
     case INDEX_op_goto_ptr:
-        tcg_out_bx(s, COND_AL, args[0]);
+        tcg_out_bx(s, COND_AL, a0);
        break;
     case INDEX_op_br:
-        tcg_out_goto_label(s, COND_AL, arg_label(args[0]));
+        tcg_out_goto_label(s, COND_AL, arg_label(a0));
         break;
 
     case INDEX_op_ld8u_i32:
-        tcg_out_ld8u(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_ld8u(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_ld8s_i32:
-        tcg_out_ld8s(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_ld8s(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_ld16u_i32:
-        tcg_out_ld16u(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_ld16u(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_ld16s_i32:
-        tcg_out_ld16s(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_ld16s(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_ld_i32:
-        tcg_out_ld32u(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_ld32u(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_st8_i32:
-        tcg_out_st8(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_st8(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_st16_i32:
-        tcg_out_st16(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_st16(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_st_i32:
-        tcg_out_st32(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_st32(s, COND_AL, a0, a1, a2);
         break;
 
     case INDEX_op_movcond_i32:
@@ -1818,34 +1827,33 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
          * so we only need to do "if condition passed, move v1 to dest".
          */
         tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0,
-                        args[1], args[2], const_args[2]);
-        tcg_out_dat_rIK(s, tcg_cond_to_arm_cond[args[5]], ARITH_MOV,
-                        ARITH_MVN, args[0], 0, args[3], const_args[3]);
+                        a1, a2, const_args[2]);
+        tcg_out_dat_rIK(s, tcg_cond_to_arm_cond[a5], ARITH_MOV,
+                        ARITH_MVN, a0, 0, a3, const_args[3]);
         break;
     case INDEX_op_add_i32:
         tcg_out_dat_rIN(s, COND_AL, ARITH_ADD, ARITH_SUB,
-                        args[0], args[1], args[2], const_args[2]);
+                        a0, a1, a2, const_args[2]);
         break;
     case INDEX_op_sub_i32:
         if (const_args[1]) {
             if (const_args[2]) {
-                tcg_out_movi32(s, COND_AL, args[0], args[1] - args[2]);
+                tcg_out_movi32(s, COND_AL, a0, a1 - a2);
             } else {
-                tcg_out_dat_rI(s, COND_AL, ARITH_RSB,
-                               args[0], args[2], args[1], 1);
+                tcg_out_dat_rI(s, COND_AL, ARITH_RSB, a0, a2, a1, 1);
             }
         } else {
             tcg_out_dat_rIN(s, COND_AL, ARITH_SUB, ARITH_ADD,
-                            args[0], args[1], args[2], const_args[2]);
+                            a0, a1, a2, const_args[2]);
         }
         break;
     case INDEX_op_and_i32:
         tcg_out_dat_rIK(s, COND_AL, ARITH_AND, ARITH_BIC,
-                        args[0], args[1], args[2], const_args[2]);
+                        a0, a1, a2, const_args[2]);
         break;
     case INDEX_op_andc_i32:
         tcg_out_dat_rIK(s, COND_AL, ARITH_BIC, ARITH_AND,
-                        args[0], args[1], args[2], const_args[2]);
+                        a0, a1, a2, const_args[2]);
         break;
     case INDEX_op_or_i32:
         c = ARITH_ORR;
@@ -1854,11 +1862,9 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
         c = ARITH_EOR;
         /* Fall through.  */
     gen_arith:
-        tcg_out_dat_rI(s, COND_AL, c, args[0], args[1], args[2], const_args[2]);
+        tcg_out_dat_rI(s, COND_AL, c, a0, a1, a2, const_args[2]);
         break;
     case INDEX_op_add2_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        a3 = args[3], a4 = args[4], a5 = args[5];
         if (a0 == a3 || (a0 == a5 && !const_args[5])) {
             a0 = TCG_REG_TMP;
         }
@@ -1866,11 +1872,9 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
                         a0, a2, a4, const_args[4]);
         tcg_out_dat_rIK(s, COND_AL, ARITH_ADC, ARITH_SBC,
                         a1, a3, a5, const_args[5]);
-        tcg_out_mov_reg(s, COND_AL, args[0], a0);
+        tcg_out_mov_reg(s, COND_AL, a0, a0);
         break;
     case INDEX_op_sub2_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        a3 = args[3], a4 = args[4], a5 = args[5];
         if ((a0 == a3 && !const_args[3]) || (a0 == a5 && !const_args[5])) {
             a0 = TCG_REG_TMP;
         }
@@ -1894,68 +1898,64 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
             tcg_out_dat_rIK(s, COND_AL, ARITH_SBC, ARITH_ADC,
                             a1, a3, a5, const_args[5]);
         }
-        tcg_out_mov_reg(s, COND_AL, args[0], a0);
+        tcg_out_mov_reg(s, COND_AL, a0, a0);
         break;
     case INDEX_op_neg_i32:
-        tcg_out_dat_imm(s, COND_AL, ARITH_RSB, args[0], args[1], 0);
+        tcg_out_dat_imm(s, COND_AL, ARITH_RSB, a0, a1, 0);
         break;
     case INDEX_op_not_i32:
-        tcg_out_dat_reg(s, COND_AL,
-                        ARITH_MVN, args[0], 0, args[1], SHIFT_IMM_LSL(0));
+        tcg_out_dat_reg(s, COND_AL, ARITH_MVN, a0, 0, a1, SHIFT_IMM_LSL(0));
         break;
     case INDEX_op_mul_i32:
-        tcg_out_mul32(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_mul32(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_mulu2_i32:
-        tcg_out_umull32(s, COND_AL, args[0], args[1], args[2], args[3]);
+        tcg_out_umull32(s, COND_AL, a0, a1, a2, a3);
         break;
     case INDEX_op_muls2_i32:
-        tcg_out_smull32(s, COND_AL, args[0], args[1], args[2], args[3]);
+        tcg_out_smull32(s, COND_AL, a0, a1, a2, a3);
         break;
-    /* XXX: Perhaps args[2] & 0x1f is wrong */
+    /* XXX: Perhaps a2 & 0x1f is wrong */
     case INDEX_op_shl_i32:
         c = const_args[2] ?
-                SHIFT_IMM_LSL(args[2] & 0x1f) : SHIFT_REG_LSL(args[2]);
+                SHIFT_IMM_LSL(a2 & 0x1f) : SHIFT_REG_LSL(a2);
         goto gen_shift32;
     case INDEX_op_shr_i32:
-        c = const_args[2] ? (args[2] & 0x1f) ? SHIFT_IMM_LSR(args[2] & 0x1f) :
-                SHIFT_IMM_LSL(0) : SHIFT_REG_LSR(args[2]);
+        c = const_args[2] ? (a2 & 0x1f) ? SHIFT_IMM_LSR(a2 & 0x1f) :
+                SHIFT_IMM_LSL(0) : SHIFT_REG_LSR(a2);
         goto gen_shift32;
     case INDEX_op_sar_i32:
-        c = const_args[2] ? (args[2] & 0x1f) ? SHIFT_IMM_ASR(args[2] & 0x1f) :
-                SHIFT_IMM_LSL(0) : SHIFT_REG_ASR(args[2]);
+        c = const_args[2] ? (a2 & 0x1f) ? SHIFT_IMM_ASR(a2 & 0x1f) :
+                SHIFT_IMM_LSL(0) : SHIFT_REG_ASR(a2);
         goto gen_shift32;
     case INDEX_op_rotr_i32:
-        c = const_args[2] ? (args[2] & 0x1f) ? SHIFT_IMM_ROR(args[2] & 0x1f) :
-                SHIFT_IMM_LSL(0) : SHIFT_REG_ROR(args[2]);
+        c = const_args[2] ? (a2 & 0x1f) ? SHIFT_IMM_ROR(a2 & 0x1f) :
+                SHIFT_IMM_LSL(0) : SHIFT_REG_ROR(a2);
         /* Fall through.  */
     gen_shift32:
-        tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, args[1], c);
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1, c);
         break;
     case INDEX_op_rotl_i32:
         if (const_args[2]) {
-            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, args[1],
-                            ((0x20 - args[2]) & 0x1f) ?
-                            SHIFT_IMM_ROR((0x20 - args[2]) & 0x1f) :
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1,
+                            ((0x20 - a2) & 0x1f) ?
+                            SHIFT_IMM_ROR((0x20 - a2) & 0x1f) :
                             SHIFT_IMM_LSL(0));
         } else {
-            tcg_out_dat_imm(s, COND_AL, ARITH_RSB, TCG_REG_TMP, args[2], 0x20);
-            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0, args[1],
+            tcg_out_dat_imm(s, COND_AL, ARITH_RSB, TCG_REG_TMP, a2, 0x20);
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0, a1,
                             SHIFT_REG_ROR(TCG_REG_TMP));
         }
         break;
     case INDEX_op_ctz_i32:
-        tcg_out_dat_reg(s, COND_AL, INSN_RBIT, TCG_REG_TMP, 0, args[1], 0);
+        tcg_out_dat_reg(s, COND_AL, INSN_RBIT, TCG_REG_TMP, 0, a1, 0);
         a1 = TCG_REG_TMP;
         goto do_clz;
     case INDEX_op_clz_i32:
-        a1 = args[1];
     do_clz:
-        a0 = args[0];
-        a2 = args[2];
         c = const_args[2];
         if (c && a2 == 32) {
             tcg_out_dat_reg(s, COND_AL, INSN_CLZ, a0, 0, a1, 0);
@@ -1970,28 +1970,28 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
     case INDEX_op_brcond_i32:
         tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0,
-                        args[0], args[1], const_args[1]);
-        tcg_out_goto_label(s, tcg_cond_to_arm_cond[args[2]],
-                           arg_label(args[3]));
+                        a0, a1, const_args[1]);
+        tcg_out_goto_label(s, tcg_cond_to_arm_cond[a2],
+                           arg_label(a3));
         break;
     case INDEX_op_setcond_i32:
         tcg_out_dat_rIN(s, COND_AL, ARITH_CMP, ARITH_CMN, 0,
-                        args[1], args[2], const_args[2]);
-        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[args[3]],
-                        ARITH_MOV, args[0], 0, 1);
-        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[tcg_invert_cond(args[3])],
-                        ARITH_MOV, args[0], 0, 0);
+                        a1, a2, const_args[2]);
+        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[a3],
+                        ARITH_MOV, a0, 0, 1);
+        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[tcg_invert_cond(a3)],
+                        ARITH_MOV, a0, 0, 0);
         break;
 
     case INDEX_op_brcond2_i32:
         c = tcg_out_cmp2(s, args, const_args);
-        tcg_out_goto_label(s, tcg_cond_to_arm_cond[c], arg_label(args[5]));
+        tcg_out_goto_label(s, tcg_cond_to_arm_cond[c], arg_label(a5));
         break;
     case INDEX_op_setcond2_i32:
         c = tcg_out_cmp2(s, args + 1, const_args + 1);
-        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[c], ARITH_MOV, args[0], 0, 1);
+        tcg_out_dat_imm(s, tcg_cond_to_arm_cond[c], ARITH_MOV, a0, 0, 1);
         tcg_out_dat_imm(s, tcg_cond_to_arm_cond[tcg_invert_cond(c)],
-                        ARITH_MOV, args[0], 0, 0);
+                        ARITH_MOV, a0, 0, 0);
         break;
 
     case INDEX_op_qemu_ld_i32:
@@ -2008,63 +2008,62 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
 
     case INDEX_op_bswap16_i32:
-        tcg_out_bswap16(s, COND_AL, args[0], args[1]);
+        tcg_out_bswap16(s, COND_AL, a0, a1);
         break;
     case INDEX_op_bswap32_i32:
-        tcg_out_bswap32(s, COND_AL, args[0], args[1]);
+        tcg_out_bswap32(s, COND_AL, a0, a1);
         break;
 
     case INDEX_op_ext8s_i32:
-        tcg_out_ext8s(s, COND_AL, args[0], args[1]);
+        tcg_out_ext8s(s, COND_AL, a0, a1);
         break;
     case INDEX_op_ext16s_i32:
-        tcg_out_ext16s(s, COND_AL, args[0], args[1]);
+        tcg_out_ext16s(s, COND_AL, a0, a1);
         break;
     case INDEX_op_ext16u_i32:
-        tcg_out_ext16u(s, COND_AL, args[0], args[1]);
+        tcg_out_ext16u(s, COND_AL, a0, a1);
         break;
 
     case INDEX_op_deposit_i32:
-        tcg_out_deposit(s, COND_AL, args[0], args[2],
-                        args[3], args[4], const_args[2]);
+        tcg_out_deposit(s, COND_AL, a0, a2, a3, a4, const_args[2]);
         break;
     case INDEX_op_extract_i32:
-        tcg_out_extract(s, COND_AL, args[0], args[1], args[2], args[3]);
+        tcg_out_extract(s, COND_AL, a0, a1, a2, a3);
         break;
     case INDEX_op_sextract_i32:
-        tcg_out_sextract(s, COND_AL, args[0], args[1], args[2], args[3]);
+        tcg_out_sextract(s, COND_AL, a0, a1, a2, a3);
         break;
     case INDEX_op_extract2_i32:
        /* ??? These optimization vs zero should be generic.  */
        /* ??? But we can't substitute 2 for 1 in the opcode stream yet.  */
        if (const_args[1]) {
            if (const_args[2]) {
-                tcg_out_movi(s, TCG_TYPE_REG, args[0], 0);
+                tcg_out_movi(s, TCG_TYPE_REG, a0, 0);
            } else {
-                tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0,
-                                args[2], SHIFT_IMM_LSL(32 - args[3]));
+                tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0,
+                                a2, SHIFT_IMM_LSL(32 - a3));
            }
        } else if (const_args[2]) {
-            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, args[0], 0,
-                            args[1], SHIFT_IMM_LSR(args[3]));
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, a0, 0,
+                            a1, SHIFT_IMM_LSR(a3));
        } else {
            /* We can do extract2 in 2 insns, vs the 3 required otherwise.  */
            tcg_out_dat_reg(s, COND_AL, ARITH_MOV, TCG_REG_TMP, 0,
-                            args[2], SHIFT_IMM_LSL(32 - args[3]));
-            tcg_out_dat_reg(s, COND_AL, ARITH_ORR, args[0], TCG_REG_TMP,
-                            args[1], SHIFT_IMM_LSR(args[3]));
+                            a2, SHIFT_IMM_LSL(32 - a3));
+            tcg_out_dat_reg(s, COND_AL, ARITH_ORR, a0, TCG_REG_TMP,
+                            a1, SHIFT_IMM_LSR(a3));
        }
        break;
 
     case INDEX_op_div_i32:
-        tcg_out_sdiv(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_sdiv(s, COND_AL, a0, a1, a2);
         break;
     case INDEX_op_divu_i32:
-        tcg_out_udiv(s, COND_AL, args[0], args[1], args[2]);
+        tcg_out_udiv(s, COND_AL, a0, a1, a2);
         break;
 
     case INDEX_op_mb:
-        tcg_out_mb(s, args[0]);
+        tcg_out_mb(s, a0);
         break;
 
     case INDEX_op_mov_i32:  /* Always emitted via tcg_out_mov.
 */

From patchwork Mon Jan 11 15:01:11 2021
X-Patchwork-Submitter: Philippe Mathieu-Daudé
X-Patchwork-Id: 12010919
From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Cc: Aleksandar Rikalo, Cornelia Huck, qemu-riscv@nongnu.org, Stefan Weil,
    Huacai Chen, Richard Henderson, Thomas Huth, Philippe Mathieu-Daudé,
    qemu-s390x@nongnu.org, qemu-arm@nongnu.org, Alistair Francis,
    Palmer Dabbelt, Aurelien Jarno
Subject: [PATCH 2/5] tcg/ppc: Hoist common argument loads in tcg_out_op()
Date: Mon, 11 Jan 2021 16:01:11 +0100
Message-Id: <20210111150114.1415930-3-f4bug@amsat.org>
In-Reply-To: <20210111150114.1415930-1-f4bug@amsat.org>
References: <20210111150114.1415930-1-f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé
---
 tcg/ppc/tcg-target.c.inc | 294 ++++++++++++++++++---------------------
 1 file changed, 138 insertions(+), 156 deletions(-)

diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index 19a4a12f155..d37b519d693 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.inc
@@ -2357,15 +2357,23 @@ static void tcg_target_qemu_prologue(TCGContext *s)
     tcg_out32(s, BCLR | BO_ALWAYS);
 }
 
-static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
-                       const int *const_args)
+static void tcg_out_op(TCGContext *s, TCGOpcode opc,
+                       const TCGArg args[TCG_MAX_OP_ARGS],
+                       const int const_args[TCG_MAX_OP_ARGS])
 {
     TCGArg a0, a1, a2;
-    int c;
+    int c, c1, c2;
+
+    /* Hoist the loads of the most common arguments. */
+    a0 = args[0];
+    a1 = args[1];
+    a2 = args[2];
+    c1 = const_args[1];
+    c2 = const_args[2];
 
     switch (opc) {
     case INDEX_op_exit_tb:
-        tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R3, args[0]);
+        tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R3, a0);
         tcg_out_b(s, 0, tcg_code_gen_epilogue);
         break;
     case INDEX_op_goto_tb:
@@ -2376,24 +2384,24 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
             if ((uintptr_t)s->code_ptr & 7) {
                 tcg_out32(s, NOP);
             }
-            s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
+            s->tb_jmp_insn_offset[a0] = tcg_current_code_size(s);
             tcg_out32(s, ADDIS | TAI(TCG_REG_TB, TCG_REG_TB, 0));
             tcg_out32(s, ADDI | TAI(TCG_REG_TB, TCG_REG_TB, 0));
         } else {
-            s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
+            s->tb_jmp_insn_offset[a0] = tcg_current_code_size(s);
             tcg_out32(s, B);
-            s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
+            s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
             break;
         }
     } else {
         /* Indirect jump. */
         tcg_debug_assert(s->tb_jmp_insn_offset == NULL);
         tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TB, 0,
-                   (intptr_t)(s->tb_jmp_insn_offset + args[0]));
+                   (intptr_t)(s->tb_jmp_insn_offset + a0));
     }
     tcg_out32(s, MTSPR | RS(TCG_REG_TB) | CTR);
     tcg_out32(s, BCCTR | BO_ALWAYS);
-    set_jmp_reset_offset(s, args[0]);
+    set_jmp_reset_offset(s, a0);
     if (USE_REG_TB) {
         /* For the unlinked case, need to reset TCG_REG_TB.  */
         tcg_out_mem_long(s, ADDI, ADD, TCG_REG_TB, TCG_REG_TB,
@@ -2401,16 +2409,16 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         }
         break;
     case INDEX_op_goto_ptr:
-        tcg_out32(s, MTSPR | RS(args[0]) | CTR);
+        tcg_out32(s, MTSPR | RS(a0) | CTR);
         if (USE_REG_TB) {
-            tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_TB, args[0]);
+            tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_TB, a0);
         }
         tcg_out32(s, ADDI | TAI(TCG_REG_R3, 0, 0));
         tcg_out32(s, BCCTR | BO_ALWAYS);
         break;
     case INDEX_op_br:
         {
-            TCGLabel *l = arg_label(args[0]);
+            TCGLabel *l = arg_label(a0);
             uint32_t insn = B;
 
             if (l->has_value) {
@@ -2424,50 +2432,49 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         break;
     case INDEX_op_ld8u_i32:
     case INDEX_op_ld8u_i64:
-        tcg_out_mem_long(s, LBZ, LBZX, args[0], args[1], args[2]);
+        tcg_out_mem_long(s, LBZ, LBZX, a0, a1, a2);
         break;
     case INDEX_op_ld8s_i32:
     case INDEX_op_ld8s_i64:
-        tcg_out_mem_long(s, LBZ, LBZX, args[0], args[1], args[2]);
-        tcg_out32(s, EXTSB | RS(args[0]) | RA(args[0]));
+        tcg_out_mem_long(s, LBZ, LBZX, a0, a1, a2);
+        tcg_out32(s, EXTSB | RS(a0) | RA(a0));
         break;
     case INDEX_op_ld16u_i32:
     case INDEX_op_ld16u_i64:
-        tcg_out_mem_long(s, LHZ, LHZX, args[0], args[1], args[2]);
+        tcg_out_mem_long(s, LHZ, LHZX, a0, a1, a2);
        break;
     case INDEX_op_ld16s_i32:
     case INDEX_op_ld16s_i64:
-        tcg_out_mem_long(s, LHA, LHAX, args[0], args[1], args[2]);
+        tcg_out_mem_long(s, LHA, LHAX, a0, a1, a2);
         break;
     case INDEX_op_ld_i32:
     case INDEX_op_ld32u_i64:
-        tcg_out_mem_long(s, LWZ, LWZX, args[0], args[1], args[2]);
+        tcg_out_mem_long(s, LWZ, LWZX, a0, a1, a2);
         break;
     case INDEX_op_ld32s_i64:
-        tcg_out_mem_long(s, LWA, LWAX, args[0], args[1], args[2]);
+        tcg_out_mem_long(s, LWA, LWAX, a0, a1, a2);
         break;
     case INDEX_op_ld_i64:
-        tcg_out_mem_long(s, LD, LDX, args[0], args[1], args[2]);
+        tcg_out_mem_long(s, LD, LDX, a0, a1, a2);
         break;
     case INDEX_op_st8_i32:
     case INDEX_op_st8_i64:
-        tcg_out_mem_long(s, STB, STBX, args[0], args[1], args[2]);
+        tcg_out_mem_long(s, STB, STBX, a0, a1, a2);
         break;
     case INDEX_op_st16_i32:
     case INDEX_op_st16_i64:
-        tcg_out_mem_long(s, STH, STHX, args[0], args[1], args[2]);
+        tcg_out_mem_long(s, STH, STHX, a0, a1, a2);
         break;
     case INDEX_op_st_i32:
     case INDEX_op_st32_i64:
-        tcg_out_mem_long(s, STW, STWX, args[0], args[1], args[2]);
+        tcg_out_mem_long(s, STW, STWX, a0, a1, a2);
         break;
     case INDEX_op_st_i64:
-        tcg_out_mem_long(s, STD, STDX, args[0], args[1], args[2]);
+        tcg_out_mem_long(s, STD, STDX, a0, a1, a2);
         break;
 
     case INDEX_op_add_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[2]) {
+        if (c2) {
         do_addi_32:
             tcg_out_mem_long(s, ADDI, ADD, a0, a1, (int32_t)a2);
         } else {
@@ -2475,14 +2482,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         }
         break;
     case INDEX_op_sub_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[1]) {
-            if (const_args[2]) {
+        if (c1) {
+            if (c2) {
                 tcg_out_movi(s, TCG_TYPE_I32, a0, a1 - a2);
             } else {
                 tcg_out32(s, SUBFIC | TAI(a0, a2, a1));
             }
-        } else if (const_args[2]) {
+        } else if (c2) {
             a2 = -a2;
             goto do_addi_32;
         } else {
@@ -2491,16 +2497,14 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         break;
 
     case INDEX_op_and_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[2]) {
+        if (c2) {
             tcg_out_andi32(s, a0, a1, a2);
         } else {
             tcg_out32(s, AND | SAB(a1, a0, a2));
         }
         break;
     case INDEX_op_and_i64:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[2]) {
+        if (c2) {
             tcg_out_andi64(s, a0, a1, a2);
         } else {
             tcg_out32(s, AND | SAB(a1, a0, a2));
@@ -2508,8 +2512,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         break;
     case INDEX_op_or_i64:
     case INDEX_op_or_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[2]) {
+        if (c2) {
             tcg_out_ori32(s, a0, a1, a2);
         } else {
             tcg_out32(s, OR | SAB(a1, a0, a2));
@@ -2517,83 +2520,75 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         break;
     case INDEX_op_xor_i64:
     case INDEX_op_xor_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[2]) {
+        if (c2) {
             tcg_out_xori32(s, a0, a1, a2);
         } else {
             tcg_out32(s, XOR | SAB(a1, a0, a2));
         }
         break;
     case INDEX_op_andc_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[2]) {
+        if (c2) {
             tcg_out_andi32(s, a0, a1, ~a2);
         } else {
             tcg_out32(s, ANDC | SAB(a1, a0, a2));
         }
         break;
     case INDEX_op_andc_i64:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[2]) {
+        if (c2) {
             tcg_out_andi64(s, a0, a1, ~a2);
         } else {
             tcg_out32(s, ANDC | SAB(a1, a0, a2));
         }
         break;
     case INDEX_op_orc_i32:
-        if (const_args[2]) {
-            tcg_out_ori32(s, args[0], args[1], ~args[2]);
+        if (c2) {
+            tcg_out_ori32(s, a0, a1, ~a2);
             break;
         }
         /* FALLTHRU */
     case INDEX_op_orc_i64:
-        tcg_out32(s, ORC | SAB(args[1], args[0], args[2]));
+        tcg_out32(s, ORC | SAB(a1, a0, a2));
         break;
     case INDEX_op_eqv_i32:
-        if (const_args[2]) {
-            tcg_out_xori32(s, args[0], args[1], ~args[2]);
+        if (c2) {
+            tcg_out_xori32(s, a0, a1, ~a2);
            break;
        }
        /* FALLTHRU */
     case INDEX_op_eqv_i64:
-        tcg_out32(s, EQV | SAB(args[1], args[0], args[2]));
+        tcg_out32(s, EQV | SAB(a1, a0, a2));
         break;
     case INDEX_op_nand_i32:
     case INDEX_op_nand_i64:
-        tcg_out32(s, NAND | SAB(args[1], args[0], args[2]));
+        tcg_out32(s, NAND | SAB(a1, a0, a2));
         break;
     case INDEX_op_nor_i32:
     case INDEX_op_nor_i64:
-        tcg_out32(s, NOR | SAB(args[1], args[0], args[2]));
+        tcg_out32(s, NOR | SAB(a1, a0, a2));
         break;
     case INDEX_op_clz_i32:
-        tcg_out_cntxz(s, TCG_TYPE_I32, CNTLZW, args[0], args[1],
-                      args[2], const_args[2]);
+        tcg_out_cntxz(s, TCG_TYPE_I32, CNTLZW, a0, a1, a2, c2);
         break;
     case INDEX_op_ctz_i32:
-        tcg_out_cntxz(s, TCG_TYPE_I32, CNTTZW, args[0], args[1],
-                      args[2], const_args[2]);
+        tcg_out_cntxz(s, TCG_TYPE_I32, CNTTZW, a0, a1, a2, c2);
         break;
     case INDEX_op_ctpop_i32:
-        tcg_out32(s, CNTPOPW | SAB(args[1], args[0], 0));
+        tcg_out32(s, CNTPOPW | SAB(a1, a0, 0));
         break;
     case INDEX_op_clz_i64:
-        tcg_out_cntxz(s, TCG_TYPE_I64, CNTLZD, args[0], args[1],
-                      args[2], const_args[2]);
+        tcg_out_cntxz(s, TCG_TYPE_I64, CNTLZD, a0, a1, a2, c2);
         break;
     case INDEX_op_ctz_i64:
-        tcg_out_cntxz(s, TCG_TYPE_I64, CNTTZD, args[0], args[1],
-                      args[2], const_args[2]);
+        tcg_out_cntxz(s, TCG_TYPE_I64, CNTTZD, a0, a1, a2, c2);
         break;
     case INDEX_op_ctpop_i64:
-        tcg_out32(s, CNTPOPD | SAB(args[1], args[0], 0));
+        tcg_out32(s, CNTPOPD | SAB(a1, a0, 0));
         break;
 
     case INDEX_op_mul_i32:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[2]) {
+        if (c2) {
             tcg_out32(s, MULLI | TAI(a0, a1, a2));
         } else {
             tcg_out32(s, MULLW | TAB(a0, a1, a2));
@@ -2601,62 +2596,58 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         break;
 
     case INDEX_op_div_i32:
-        tcg_out32(s, DIVW | TAB(args[0], args[1], args[2]));
+        tcg_out32(s, DIVW | TAB(a0, a1, a2));
         break;
     case INDEX_op_divu_i32:
-        tcg_out32(s, DIVWU | TAB(args[0], args[1], args[2]));
+        tcg_out32(s, DIVWU | TAB(a0, a1, a2));
         break;
     case INDEX_op_shl_i32:
-        if (const_args[2]) {
+        if (c2) {
             /* Limit immediate shift count lest we create an illegal insn.  */
-            tcg_out_shli32(s, args[0], args[1], args[2] & 31);
+            tcg_out_shli32(s, a0, a1, a2 & 31);
         } else {
-            tcg_out32(s, SLW | SAB(args[1], args[0], args[2]));
+            tcg_out32(s, SLW | SAB(a1, a0, a2));
         }
         break;
     case INDEX_op_shr_i32:
-        if (const_args[2]) {
+        if (c2) {
             /* Limit immediate shift count lest we create an illegal insn.  */
-            tcg_out_shri32(s, args[0], args[1], args[2] & 31);
+            tcg_out_shri32(s, a0, a1, a2 & 31);
         } else {
-            tcg_out32(s, SRW | SAB(args[1], args[0], args[2]));
+            tcg_out32(s, SRW | SAB(a1, a0, a2));
         }
         break;
     case INDEX_op_sar_i32:
-        if (const_args[2]) {
+        if (c2) {
             /* Limit immediate shift count lest we create an illegal insn.  */
-            tcg_out32(s, SRAWI | RS(args[1]) | RA(args[0]) | SH(args[2] & 31));
+            tcg_out32(s, SRAWI | RS(a1) | RA(a0) | SH(a2 & 31));
         } else {
-            tcg_out32(s, SRAW | SAB(args[1], args[0], args[2]));
+            tcg_out32(s, SRAW | SAB(a1, a0, a2));
         }
         break;
     case INDEX_op_rotl_i32:
-        if (const_args[2]) {
-            tcg_out_rlw(s, RLWINM, args[0], args[1], args[2], 0, 31);
+        if (c2) {
+            tcg_out_rlw(s, RLWINM, a0, a1, a2, 0, 31);
         } else {
-            tcg_out32(s, RLWNM | SAB(args[1], args[0], args[2])
-                         | MB(0) | ME(31));
+            tcg_out32(s, RLWNM | SAB(a1, a0, a2) | MB(0) | ME(31));
         }
         break;
     case INDEX_op_rotr_i32:
-        if (const_args[2]) {
-            tcg_out_rlw(s, RLWINM, args[0], args[1], 32 - args[2], 0, 31);
+        if (c2) {
+            tcg_out_rlw(s, RLWINM, a0, a1, 32 - a2, 0, 31);
         } else {
-            tcg_out32(s, SUBFIC | TAI(TCG_REG_R0, args[2], 32));
-            tcg_out32(s, RLWNM | SAB(args[1], args[0], TCG_REG_R0)
-                         | MB(0) | ME(31));
+            tcg_out32(s, SUBFIC | TAI(TCG_REG_R0, a2, 32));
+            tcg_out32(s, RLWNM | SAB(a1, a0, TCG_REG_R0) | MB(0) | ME(31));
         }
         break;
 
     case INDEX_op_brcond_i32:
-        tcg_out_brcond(s, args[2], args[0], args[1], const_args[1],
-                       arg_label(args[3]), TCG_TYPE_I32);
+        tcg_out_brcond(s, a2, a0, a1, c1, arg_label(args[3]), TCG_TYPE_I32);
         break;
     case INDEX_op_brcond_i64:
-        tcg_out_brcond(s, args[2], args[0], args[1], const_args[1],
-                       arg_label(args[3]), TCG_TYPE_I64);
+        tcg_out_brcond(s, a2, a0, a1, c1, arg_label(args[3]), TCG_TYPE_I64);
         break;
     case INDEX_op_brcond2_i32:
         tcg_out_brcond2(s, args, const_args);
@@ -2664,17 +2655,16 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
 
     case INDEX_op_neg_i32:
     case INDEX_op_neg_i64:
-        tcg_out32(s, NEG | RT(args[0]) | RA(args[1]));
+        tcg_out32(s, NEG | RT(a0) | RA(a1));
         break;
     case INDEX_op_not_i32:
     case INDEX_op_not_i64:
-        tcg_out32(s, NOR | SAB(args[1], args[0], args[1]));
+        tcg_out32(s, NOR | SAB(a1, a0, a1));
         break;
     case INDEX_op_add_i64:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[2]) {
+        if (c2) {
         do_addi_64:
             tcg_out_mem_long(s, ADDI, ADD, a0, a1, a2);
         } else {
@@ -2682,14 +2672,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         }
         break;
     case INDEX_op_sub_i64:
-        a0 = args[0], a1 = args[1], a2 = args[2];
-        if (const_args[1]) {
-            if (const_args[2]) {
+        if (c1) {
+            if (c2) {
                 tcg_out_movi(s, TCG_TYPE_I64, a0, a1 - a2);
             } else {
                 tcg_out32(s, SUBFIC | TAI(a0, a2, a1));
             }
-        } else if (const_args[2]) {
+        } else if (c2) {
             a2 = -a2;
             goto do_addi_64;
         } else {
@@ -2698,58 +2687,57 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         break;
 
     case INDEX_op_shl_i64:
-        if (const_args[2]) {
+        if (c2) {
             /* Limit immediate shift count lest we create an illegal insn.  */
-            tcg_out_shli64(s, args[0], args[1], args[2] & 63);
+            tcg_out_shli64(s, a0, a1, a2 & 63);
         } else {
-            tcg_out32(s, SLD | SAB(args[1], args[0], args[2]));
+            tcg_out32(s, SLD | SAB(a1, a0, a2));
         }
         break;
     case INDEX_op_shr_i64:
-        if (const_args[2]) {
+        if (c2) {
             /* Limit immediate shift count lest we create an illegal insn.  */
-            tcg_out_shri64(s, args[0], args[1], args[2] & 63);
+            tcg_out_shri64(s, a0, a1, a2 & 63);
         } else {
-            tcg_out32(s, SRD | SAB(args[1], args[0], args[2]));
+            tcg_out32(s, SRD | SAB(a1, a0, a2));
         }
         break;
     case INDEX_op_sar_i64:
-        if (const_args[2]) {
-            int sh = SH(args[2] & 0x1f) | (((args[2] >> 5) & 1) << 1);
-            tcg_out32(s, SRADI | RA(args[0]) | RS(args[1]) | sh);
+        if (c2) {
+            int sh = SH(a2 & 0x1f) | (((a2 >> 5) & 1) << 1);
+            tcg_out32(s, SRADI | RA(a0) | RS(a1) | sh);
         } else {
-            tcg_out32(s, SRAD | SAB(args[1], args[0], args[2]));
+            tcg_out32(s, SRAD | SAB(a1, a0, a2));
         }
         break;
     case INDEX_op_rotl_i64:
-        if (const_args[2]) {
-            tcg_out_rld(s, RLDICL, args[0], args[1], args[2], 0);
+        if (c2) {
+            tcg_out_rld(s, RLDICL, a0, a1, a2, 0);
         } else {
-            tcg_out32(s, RLDCL | SAB(args[1], args[0], args[2]) | MB64(0));
+            tcg_out32(s, RLDCL | SAB(a1, a0, a2) | MB64(0));
         }
         break;
     case INDEX_op_rotr_i64:
-        if (const_args[2]) {
-            tcg_out_rld(s, RLDICL, args[0], args[1], 64 - args[2], 0);
+        if (c2) {
+
tcg_out_rld(s, RLDICL, a0, a1, 64 - a2, 0); } else { - tcg_out32(s, SUBFIC | TAI(TCG_REG_R0, args[2], 64)); - tcg_out32(s, RLDCL | SAB(args[1], args[0], TCG_REG_R0) | MB64(0)); + tcg_out32(s, SUBFIC | TAI(TCG_REG_R0, a2, 64)); + tcg_out32(s, RLDCL | SAB(a1, a0, TCG_REG_R0) | MB64(0)); } break; case INDEX_op_mul_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; - if (const_args[2]) { + if (c2) { tcg_out32(s, MULLI | TAI(a0, a1, a2)); } else { tcg_out32(s, MULLD | TAB(a0, a1, a2)); } break; case INDEX_op_div_i64: - tcg_out32(s, DIVD | TAB(args[0], args[1], args[2])); + tcg_out32(s, DIVD | TAB(a0, a1, a2)); break; case INDEX_op_divu_i64: - tcg_out32(s, DIVDU | TAB(args[0], args[1], args[2])); + tcg_out32(s, DIVDU | TAB(a0, a1, a2)); break; case INDEX_op_qemu_ld_i32: @@ -2778,19 +2766,19 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, c = EXTSW; goto gen_ext; gen_ext: - tcg_out32(s, c | RS(args[1]) | RA(args[0])); + tcg_out32(s, c | RS(a1) | RA(a0)); break; case INDEX_op_extu_i32_i64: - tcg_out_ext32u(s, args[0], args[1]); + tcg_out_ext32u(s, a0, a1); break; case INDEX_op_setcond_i32: - tcg_out_setcond(s, TCG_TYPE_I32, args[3], args[0], args[1], args[2], - const_args[2]); + tcg_out_setcond(s, TCG_TYPE_I32, args[3], a0, a1, a2, + c2); break; case INDEX_op_setcond_i64: - tcg_out_setcond(s, TCG_TYPE_I64, args[3], args[0], args[1], args[2], - const_args[2]); + tcg_out_setcond(s, TCG_TYPE_I64, args[3], a0, a1, a2, + c2); break; case INDEX_op_setcond2_i32: tcg_out_setcond2(s, args, const_args); @@ -2798,7 +2786,6 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, case INDEX_op_bswap16_i32: case INDEX_op_bswap16_i64: - a0 = args[0], a1 = args[1]; /* a1 = abcd */ if (a0 != a1) { /* a0 = (a1 r<< 24) & 0xff # 000c */ @@ -2818,10 +2805,9 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, case INDEX_op_bswap32_i32: case INDEX_op_bswap32_i64: /* Stolen from gcc's builtin_bswap32 */ - a1 = args[1]; - a0 = 
args[0] == a1 ? TCG_REG_R0 : args[0]; + a0 = a0 == a1 ? TCG_REG_R0 : a0; - /* a1 = args[1] # abcd */ + /* a1 = a1 # abcd */ /* a0 = rotate_left (a1, 8) # bcda */ tcg_out_rlw(s, RLWINM, a0, a1, 8, 0, 31); /* a0 = (a0 & ~0xff000000) | ((a1 r<< 24) & 0xff000000) # dcda */ @@ -2830,12 +2816,12 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, tcg_out_rlw(s, RLWIMI, a0, a1, 24, 16, 23); if (a0 == TCG_REG_R0) { - tcg_out_mov(s, TCG_TYPE_REG, args[0], a0); + tcg_out_mov(s, TCG_TYPE_REG, a0, a0); } break; case INDEX_op_bswap64_i64: - a0 = args[0], a1 = args[1], a2 = TCG_REG_R0; + a2 = TCG_REG_R0; if (a0 == a1) { a0 = TCG_REG_R0; a2 = a1; @@ -2862,44 +2848,42 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, tcg_out_rlw(s, RLWIMI, a0, a2, 24, 16, 23); if (a0 == 0) { - tcg_out_mov(s, TCG_TYPE_REG, args[0], a0); + tcg_out_mov(s, TCG_TYPE_REG, a0, a0); } break; case INDEX_op_deposit_i32: - if (const_args[2]) { + if (c2) { uint32_t mask = ((2u << (args[4] - 1)) - 1) << args[3]; - tcg_out_andi32(s, args[0], args[0], ~mask); + tcg_out_andi32(s, a0, a0, ~mask); } else { - tcg_out_rlw(s, RLWIMI, args[0], args[2], args[3], + tcg_out_rlw(s, RLWIMI, a0, a2, args[3], 32 - args[3] - args[4], 31 - args[3]); } break; case INDEX_op_deposit_i64: - if (const_args[2]) { + if (c2) { uint64_t mask = ((2ull << (args[4] - 1)) - 1) << args[3]; - tcg_out_andi64(s, args[0], args[0], ~mask); + tcg_out_andi64(s, a0, a0, ~mask); } else { - tcg_out_rld(s, RLDIMI, args[0], args[2], args[3], - 64 - args[3] - args[4]); + tcg_out_rld(s, RLDIMI, a0, a2, args[3], 64 - args[3] - args[4]); } break; case INDEX_op_extract_i32: - tcg_out_rlw(s, RLWINM, args[0], args[1], - 32 - args[2], 32 - args[3], 31); + tcg_out_rlw(s, RLWINM, a0, a1, 32 - a2, 32 - args[3], 31); break; case INDEX_op_extract_i64: - tcg_out_rld(s, RLDICL, args[0], args[1], 64 - args[2], 64 - args[3]); + tcg_out_rld(s, RLDICL, a0, a1, 64 - a2, 64 - args[3]); break; case INDEX_op_movcond_i32: - 
tcg_out_movcond(s, TCG_TYPE_I32, args[5], args[0], args[1], args[2], - args[3], args[4], const_args[2]); + tcg_out_movcond(s, TCG_TYPE_I32, args[5], a0, a1, a2, + args[3], args[4], c2); break; case INDEX_op_movcond_i64: - tcg_out_movcond(s, TCG_TYPE_I64, args[5], args[0], args[1], args[2], - args[3], args[4], const_args[2]); + tcg_out_movcond(s, TCG_TYPE_I64, args[5], a0, a1, a2, + args[3], args[4], c2); break; #if TCG_TARGET_REG_BITS == 64 @@ -2910,14 +2894,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, /* Note that the CA bit is defined based on the word size of the environment. So in 64-bit mode it's always carry-out of bit 63. The fallback code using deposit works just as well for 32-bit. */ - a0 = args[0], a1 = args[1]; if (a0 == args[3] || (!const_args[5] && a0 == args[5])) { a0 = TCG_REG_R0; } if (const_args[4]) { - tcg_out32(s, ADDIC | TAI(a0, args[2], args[4])); + tcg_out32(s, ADDIC | TAI(a0, a2, args[4])); } else { - tcg_out32(s, ADDC | TAB(a0, args[2], args[4])); + tcg_out32(s, ADDC | TAB(a0, a2, args[4])); } if (const_args[5]) { tcg_out32(s, (args[5] ? ADDME : ADDZE) | RT(a1) | RA(args[3])); @@ -2925,7 +2908,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, tcg_out32(s, ADDE | TAB(a1, args[3], args[5])); } if (a0 != args[0]) { - tcg_out_mov(s, TCG_TYPE_REG, args[0], a0); + tcg_out_mov(s, TCG_TYPE_REG, a0, a0); } break; @@ -2934,14 +2917,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, #else case INDEX_op_sub2_i32: #endif - a0 = args[0], a1 = args[1]; if (a0 == args[5] || (!const_args[3] && a0 == args[3])) { a0 = TCG_REG_R0; } - if (const_args[2]) { - tcg_out32(s, SUBFIC | TAI(a0, args[4], args[2])); + if (c2) { + tcg_out32(s, SUBFIC | TAI(a0, args[4], a2)); } else { - tcg_out32(s, SUBFC | TAB(a0, args[4], args[2])); + tcg_out32(s, SUBFC | TAB(a0, args[4], a2)); } if (const_args[3]) { tcg_out32(s, (args[3] ? 
SUBFME : SUBFZE) | RT(a1) | RA(args[5])); @@ -2949,25 +2931,25 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args, tcg_out32(s, SUBFE | TAB(a1, args[5], args[3])); } if (a0 != args[0]) { - tcg_out_mov(s, TCG_TYPE_REG, args[0], a0); + tcg_out_mov(s, TCG_TYPE_REG, a0, a0); } break; case INDEX_op_muluh_i32: - tcg_out32(s, MULHWU | TAB(args[0], args[1], args[2])); + tcg_out32(s, MULHWU | TAB(a0, a1, a2)); break; case INDEX_op_mulsh_i32: - tcg_out32(s, MULHW | TAB(args[0], args[1], args[2])); + tcg_out32(s, MULHW | TAB(a0, a1, a2)); break; case INDEX_op_muluh_i64: - tcg_out32(s, MULHDU | TAB(args[0], args[1], args[2])); + tcg_out32(s, MULHDU | TAB(a0, a1, a2)); break; case INDEX_op_mulsh_i64: - tcg_out32(s, MULHD | TAB(args[0], args[1], args[2])); + tcg_out32(s, MULHD | TAB(a0, a1, a2)); break; case INDEX_op_mb: - tcg_out_mb(s, args[0]); + tcg_out_mb(s, a0); break; case INDEX_op_mov_i32: /* Always emitted via tcg_out_mov. */
From patchwork Mon Jan 11 15:01:12 2021
X-Patchwork-Submitter: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?=
X-Patchwork-Id: 12010923
From: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?=
To: qemu-devel@nongnu.org
Subject: [PATCH 3/5] tcg/s390: Hoist common argument loads in tcg_out_op()
Date: Mon, 11 Jan 2021 16:01:12 +0100
Message-Id: <20210111150114.1415930-4-f4bug@amsat.org>
In-Reply-To: <20210111150114.1415930-1-f4bug@amsat.org>
References: <20210111150114.1415930-1-f4bug@amsat.org>
Cc: Aleksandar Rikalo , Cornelia Huck ,
qemu-riscv@nongnu.org, Stefan Weil , Huacai Chen , Richard Henderson , Thomas Huth , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , qemu-s390x@nongnu.org, qemu-arm@nongnu.org, Alistair Francis , Palmer Dabbelt , Aurelien Jarno Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" Signed-off-by: Philippe Mathieu-Daudé --- tcg/s390/tcg-target.c.inc | 252 ++++++++++++++++++-------------------- 1 file changed, 122 insertions(+), 130 deletions(-) diff --git a/tcg/s390/tcg-target.c.inc b/tcg/s390/tcg-target.c.inc index d7ef0790556..74b2314c78a 100644 --- a/tcg/s390/tcg-target.c.inc +++ b/tcg/s390/tcg-target.c.inc @@ -1732,15 +1732,23 @@ static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg, case glue(glue(INDEX_op_,x),_i64) static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, - const TCGArg *args, const int *const_args) + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) { S390Opcode op, op2; TCGArg a0, a1, a2; + int c2, c3, c4; + + a0 = args[0]; + a1 = args[1]; + a2 = args[2]; + c2 = const_args[2]; + c3 = const_args[3]; + c4 = const_args[4]; switch (opc) { case INDEX_op_exit_tb: /* Reuse the zeroing that exists for goto_ptr. */ - a0 = args[0]; if (a0 == 0) { tgen_gotoi(s, S390_CC_ALWAYS, tcg_code_gen_epilogue); } else { @@ -1750,7 +1758,6 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_goto_tb: - a0 = args[0]; if (s->tb_jmp_insn_offset) { /* * branch displacement must be aligned for atomic patching; @@ -1784,7 +1791,6 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_goto_ptr: - a0 = args[0]; if (USE_REG_TB) { tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_TB, a0); } @@ -1794,45 +1800,43 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, OP_32_64(ld8u): /* ??? LLC (RXY format) is only present with the extended-immediate facility, whereas LLGC is always present. 
*/ - tcg_out_mem(s, 0, RXY_LLGC, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LLGC, a0, a1, TCG_REG_NONE, a2); break; OP_32_64(ld8s): /* ??? LB is no smaller than LGB, so no point to using it. */ - tcg_out_mem(s, 0, RXY_LGB, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LGB, a0, a1, TCG_REG_NONE, a2); break; OP_32_64(ld16u): /* ??? LLH (RXY format) is only present with the extended-immediate facility, whereas LLGH is always present. */ - tcg_out_mem(s, 0, RXY_LLGH, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LLGH, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_ld16s_i32: - tcg_out_mem(s, RX_LH, RXY_LHY, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, RX_LH, RXY_LHY, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_ld_i32: - tcg_out_ld(s, TCG_TYPE_I32, args[0], args[1], args[2]); + tcg_out_ld(s, TCG_TYPE_I32, a0, a1, a2); break; OP_32_64(st8): - tcg_out_mem(s, RX_STC, RXY_STCY, args[0], args[1], - TCG_REG_NONE, args[2]); + tcg_out_mem(s, RX_STC, RXY_STCY, a0, a1, TCG_REG_NONE, a2); break; OP_32_64(st16): - tcg_out_mem(s, RX_STH, RXY_STHY, args[0], args[1], - TCG_REG_NONE, args[2]); + tcg_out_mem(s, RX_STH, RXY_STHY, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_st_i32: - tcg_out_st(s, TCG_TYPE_I32, args[0], args[1], args[2]); + tcg_out_st(s, TCG_TYPE_I32, a0, a1, a2); break; case INDEX_op_add_i32: - a0 = args[0], a1 = args[1], a2 = (int32_t)args[2]; - if (const_args[2]) { + a2 = (int32_t)a2; + if (c2) { do_addi_32: if (a0 == a1) { if (a2 == (int16_t)a2) { @@ -1852,8 +1856,8 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } break; case INDEX_op_sub_i32: - a0 = args[0], a1 = args[1], a2 = (int32_t)args[2]; - if (const_args[2]) { + a2 = (int32_t)a2; + if (c2) { a2 = -a2; goto do_addi_32; } else if (a0 == a1) { @@ -1864,8 +1868,8 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_and_i32: - a0 = args[0], a1 = args[1], a2 = (uint32_t)args[2]; - if 
(const_args[2]) { + a2 = (uint32_t)a2; + if (c2) { tcg_out_mov(s, TCG_TYPE_I32, a0, a1); tgen_andi(s, TCG_TYPE_I32, a0, a2); } else if (a0 == a1) { @@ -1875,8 +1879,8 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } break; case INDEX_op_or_i32: - a0 = args[0], a1 = args[1], a2 = (uint32_t)args[2]; - if (const_args[2]) { + a2 = (uint32_t)a2; + if (c2) { tcg_out_mov(s, TCG_TYPE_I32, a0, a1); tgen_ori(s, TCG_TYPE_I32, a0, a2); } else if (a0 == a1) { @@ -1886,30 +1890,30 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } break; case INDEX_op_xor_i32: - a0 = args[0], a1 = args[1], a2 = (uint32_t)args[2]; - if (const_args[2]) { + a2 = (uint32_t)a2; + if (c2) { tcg_out_mov(s, TCG_TYPE_I32, a0, a1); tgen_xori(s, TCG_TYPE_I32, a0, a2); } else if (a0 == a1) { - tcg_out_insn(s, RR, XR, args[0], args[2]); + tcg_out_insn(s, RR, XR, a0, a2); } else { tcg_out_insn(s, RRF, XRK, a0, a1, a2); } break; case INDEX_op_neg_i32: - tcg_out_insn(s, RR, LCR, args[0], args[1]); + tcg_out_insn(s, RR, LCR, a0, a1); break; case INDEX_op_mul_i32: - if (const_args[2]) { - if ((int32_t)args[2] == (int16_t)args[2]) { - tcg_out_insn(s, RI, MHI, args[0], args[2]); + if (c2) { + if ((int32_t)a2 == (int16_t)a2) { + tcg_out_insn(s, RI, MHI, a0, a2); } else { - tcg_out_insn(s, RIL, MSFI, args[0], args[2]); + tcg_out_insn(s, RIL, MSFI, a0, a2); } } else { - tcg_out_insn(s, RRE, MSR, args[0], args[2]); + tcg_out_insn(s, RRE, MSR, a0, a2); } break; @@ -1924,16 +1928,16 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, op = RS_SLL; op2 = RSY_SLLK; do_shift32: - a0 = args[0], a1 = args[1], a2 = (int32_t)args[2]; + a2 = (int32_t)a2; if (a0 == a1) { - if (const_args[2]) { + if (c2) { tcg_out_sh32(s, op, a0, TCG_REG_NONE, a2); } else { tcg_out_sh32(s, op, a0, a2, 0); } } else { /* Using tcg_out_sh64 here for the format; it is a 32-bit shift. 
*/ - if (const_args[2]) { + if (c2) { tcg_out_sh64(s, op2, a0, a1, TCG_REG_NONE, a2); } else { tcg_out_sh64(s, op2, a0, a1, a2, 0); @@ -1951,112 +1955,108 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_rotl_i32: /* ??? Using tcg_out_sh64 here for the format; it is a 32-bit rol. */ - if (const_args[2]) { - tcg_out_sh64(s, RSY_RLL, args[0], args[1], TCG_REG_NONE, args[2]); + if (c2) { + tcg_out_sh64(s, RSY_RLL, a0, a1, TCG_REG_NONE, a2); } else { - tcg_out_sh64(s, RSY_RLL, args[0], args[1], args[2], 0); + tcg_out_sh64(s, RSY_RLL, a0, a1, a2, 0); } break; case INDEX_op_rotr_i32: - if (const_args[2]) { - tcg_out_sh64(s, RSY_RLL, args[0], args[1], - TCG_REG_NONE, (32 - args[2]) & 31); + if (c2) { + tcg_out_sh64(s, RSY_RLL, a0, a1, + TCG_REG_NONE, (32 - a2) & 31); } else { - tcg_out_insn(s, RR, LCR, TCG_TMP0, args[2]); - tcg_out_sh64(s, RSY_RLL, args[0], args[1], TCG_TMP0, 0); + tcg_out_insn(s, RR, LCR, TCG_TMP0, a2); + tcg_out_sh64(s, RSY_RLL, a0, a1, TCG_TMP0, 0); } break; case INDEX_op_ext8s_i32: - tgen_ext8s(s, TCG_TYPE_I32, args[0], args[1]); + tgen_ext8s(s, TCG_TYPE_I32, a0, a1); break; case INDEX_op_ext16s_i32: - tgen_ext16s(s, TCG_TYPE_I32, args[0], args[1]); + tgen_ext16s(s, TCG_TYPE_I32, a0, a1); break; case INDEX_op_ext8u_i32: - tgen_ext8u(s, TCG_TYPE_I32, args[0], args[1]); + tgen_ext8u(s, TCG_TYPE_I32, a0, a1); break; case INDEX_op_ext16u_i32: - tgen_ext16u(s, TCG_TYPE_I32, args[0], args[1]); + tgen_ext16u(s, TCG_TYPE_I32, a0, a1); break; OP_32_64(bswap16): /* The TCG bswap definition requires bits 0-47 already be zero. Thus we don't need the G-type insns to implement bswap16_i64. 
*/ - tcg_out_insn(s, RRE, LRVR, args[0], args[1]); - tcg_out_sh32(s, RS_SRL, args[0], TCG_REG_NONE, 16); + tcg_out_insn(s, RRE, LRVR, a0, a1); + tcg_out_sh32(s, RS_SRL, a0, TCG_REG_NONE, 16); break; OP_32_64(bswap32): - tcg_out_insn(s, RRE, LRVR, args[0], args[1]); + tcg_out_insn(s, RRE, LRVR, a0, a1); break; case INDEX_op_add2_i32: - if (const_args[4]) { - tcg_out_insn(s, RIL, ALFI, args[0], args[4]); + if (c4) { + tcg_out_insn(s, RIL, ALFI, a0, args[4]); } else { - tcg_out_insn(s, RR, ALR, args[0], args[4]); + tcg_out_insn(s, RR, ALR, a0, args[4]); } - tcg_out_insn(s, RRE, ALCR, args[1], args[5]); + tcg_out_insn(s, RRE, ALCR, a1, args[5]); break; case INDEX_op_sub2_i32: - if (const_args[4]) { - tcg_out_insn(s, RIL, SLFI, args[0], args[4]); + if (c4) { + tcg_out_insn(s, RIL, SLFI, a0, args[4]); } else { - tcg_out_insn(s, RR, SLR, args[0], args[4]); + tcg_out_insn(s, RR, SLR, a0, args[4]); } - tcg_out_insn(s, RRE, SLBR, args[1], args[5]); + tcg_out_insn(s, RRE, SLBR, a1, args[5]); break; case INDEX_op_br: - tgen_branch(s, S390_CC_ALWAYS, arg_label(args[0])); + tgen_branch(s, S390_CC_ALWAYS, arg_label(a0)); break; case INDEX_op_brcond_i32: - tgen_brcond(s, TCG_TYPE_I32, args[2], args[0], - args[1], const_args[1], arg_label(args[3])); + tgen_brcond(s, TCG_TYPE_I32, a2, a0, a1, c1, arg_label(args[3])); break; case INDEX_op_setcond_i32: - tgen_setcond(s, TCG_TYPE_I32, args[3], args[0], args[1], - args[2], const_args[2]); + tgen_setcond(s, TCG_TYPE_I32, args[3], a0, a1, a2, c2); break; case INDEX_op_movcond_i32: - tgen_movcond(s, TCG_TYPE_I32, args[5], args[0], args[1], - args[2], const_args[2], args[3], const_args[3]); + tgen_movcond(s, TCG_TYPE_I32, args[5], a0, a1, a2, c2, args[3], c3); break; case INDEX_op_qemu_ld_i32: /* ??? Technically we can use a non-extending instruction. 
*/ case INDEX_op_qemu_ld_i64: - tcg_out_qemu_ld(s, args[0], args[1], args[2]); + tcg_out_qemu_ld(s, a0, a1, a2); break; case INDEX_op_qemu_st_i32: case INDEX_op_qemu_st_i64: - tcg_out_qemu_st(s, args[0], args[1], args[2]); + tcg_out_qemu_st(s, a0, a1, a2); break; case INDEX_op_ld16s_i64: - tcg_out_mem(s, 0, RXY_LGH, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LGH, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_ld32u_i64: - tcg_out_mem(s, 0, RXY_LLGF, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LLGF, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_ld32s_i64: - tcg_out_mem(s, 0, RXY_LGF, args[0], args[1], TCG_REG_NONE, args[2]); + tcg_out_mem(s, 0, RXY_LGF, a0, a1, TCG_REG_NONE, a2); break; case INDEX_op_ld_i64: - tcg_out_ld(s, TCG_TYPE_I64, args[0], args[1], args[2]); + tcg_out_ld(s, TCG_TYPE_I64, a0, a1, a2); break; case INDEX_op_st32_i64: - tcg_out_st(s, TCG_TYPE_I32, args[0], args[1], args[2]); + tcg_out_st(s, TCG_TYPE_I32, a0, a1, a2); break; case INDEX_op_st_i64: - tcg_out_st(s, TCG_TYPE_I64, args[0], args[1], args[2]); + tcg_out_st(s, TCG_TYPE_I64, a0, a1, a2); break; case INDEX_op_add_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; - if (const_args[2]) { + if (c2) { do_addi_64: if (a0 == a1) { if (a2 == (int16_t)a2) { @@ -2084,8 +2084,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } break; case INDEX_op_sub_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; - if (const_args[2]) { + if (c2) { a2 = -a2; goto do_addi_64; } else if (a0 == a1) { @@ -2096,19 +2095,17 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_and_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; - if (const_args[2]) { + if (c2) { tcg_out_mov(s, TCG_TYPE_I64, a0, a1); - tgen_andi(s, TCG_TYPE_I64, args[0], args[2]); + tgen_andi(s, TCG_TYPE_I64, a0, a2); } else if (a0 == a1) { - tcg_out_insn(s, RRE, NGR, args[0], args[2]); + tcg_out_insn(s, RRE, NGR, a0, a2); } else { tcg_out_insn(s, RRF, NGRK, a0, 
a1, a2); } break; case INDEX_op_or_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; - if (const_args[2]) { + if (c2) { tcg_out_mov(s, TCG_TYPE_I64, a0, a1); tgen_ori(s, TCG_TYPE_I64, a0, a2); } else if (a0 == a1) { @@ -2118,8 +2115,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, } break; case INDEX_op_xor_i64: - a0 = args[0], a1 = args[1], a2 = args[2]; - if (const_args[2]) { + if (c2) { tcg_out_mov(s, TCG_TYPE_I64, a0, a1); tgen_xori(s, TCG_TYPE_I64, a0, a2); } else if (a0 == a1) { @@ -2130,21 +2126,21 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; case INDEX_op_neg_i64: - tcg_out_insn(s, RRE, LCGR, args[0], args[1]); + tcg_out_insn(s, RRE, LCGR, a0, a1); break; case INDEX_op_bswap64_i64: - tcg_out_insn(s, RRE, LRVGR, args[0], args[1]); + tcg_out_insn(s, RRE, LRVGR, a0, a1); break; case INDEX_op_mul_i64: - if (const_args[2]) { - if (args[2] == (int16_t)args[2]) { - tcg_out_insn(s, RI, MGHI, args[0], args[2]); + if (c2) { + if (a2 == (int16_t)a2) { + tcg_out_insn(s, RI, MGHI, a0, a2); } else { - tcg_out_insn(s, RIL, MSGFI, args[0], args[2]); + tcg_out_insn(s, RIL, MSGFI, a0, a2); } } else { - tcg_out_insn(s, RRE, MSGR, args[0], args[2]); + tcg_out_insn(s, RRE, MSGR, a0, a2); } break; @@ -2165,10 +2161,10 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, case INDEX_op_shl_i64: op = RSY_SLLG; do_shift64: - if (const_args[2]) { - tcg_out_sh64(s, op, args[0], args[1], TCG_REG_NONE, args[2]); + if (c2) { + tcg_out_sh64(s, op, a0, a1, TCG_REG_NONE, a2); } else { - tcg_out_sh64(s, op, args[0], args[1], args[2], 0); + tcg_out_sh64(s, op, a0, a1, a2, 0); } break; case INDEX_op_shr_i64: @@ -2179,87 +2175,83 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, goto do_shift64; case INDEX_op_rotl_i64: - if (const_args[2]) { - tcg_out_sh64(s, RSY_RLLG, args[0], args[1], - TCG_REG_NONE, args[2]); + if (c2) { + tcg_out_sh64(s, RSY_RLLG, a0, a1, + TCG_REG_NONE, a2); } else { - tcg_out_sh64(s, RSY_RLLG, args[0], 
args[1], args[2], 0); + tcg_out_sh64(s, RSY_RLLG, a0, a1, a2, 0); } break; case INDEX_op_rotr_i64: - if (const_args[2]) { - tcg_out_sh64(s, RSY_RLLG, args[0], args[1], - TCG_REG_NONE, (64 - args[2]) & 63); + if (c2) { + tcg_out_sh64(s, RSY_RLLG, a0, a1, + TCG_REG_NONE, (64 - a2) & 63); } else { /* We can use the smaller 32-bit negate because only the low 6 bits are examined for the rotate. */ - tcg_out_insn(s, RR, LCR, TCG_TMP0, args[2]); - tcg_out_sh64(s, RSY_RLLG, args[0], args[1], TCG_TMP0, 0); + tcg_out_insn(s, RR, LCR, TCG_TMP0, a2); + tcg_out_sh64(s, RSY_RLLG, a0, a1, TCG_TMP0, 0); } break; case INDEX_op_ext8s_i64: - tgen_ext8s(s, TCG_TYPE_I64, args[0], args[1]); + tgen_ext8s(s, TCG_TYPE_I64, a0, a1); break; case INDEX_op_ext16s_i64: - tgen_ext16s(s, TCG_TYPE_I64, args[0], args[1]); + tgen_ext16s(s, TCG_TYPE_I64, a0, a1); break; case INDEX_op_ext_i32_i64: case INDEX_op_ext32s_i64: - tgen_ext32s(s, args[0], args[1]); + tgen_ext32s(s, a0, a1); break; case INDEX_op_ext8u_i64: - tgen_ext8u(s, TCG_TYPE_I64, args[0], args[1]); + tgen_ext8u(s, TCG_TYPE_I64, a0, a1); break; case INDEX_op_ext16u_i64: - tgen_ext16u(s, TCG_TYPE_I64, args[0], args[1]); + tgen_ext16u(s, TCG_TYPE_I64, a0, a1); break; case INDEX_op_extu_i32_i64: case INDEX_op_ext32u_i64: - tgen_ext32u(s, args[0], args[1]); + tgen_ext32u(s, a0, a1); break; case INDEX_op_add2_i64: - if (const_args[4]) { + if (c4) { if ((int64_t)args[4] >= 0) { - tcg_out_insn(s, RIL, ALGFI, args[0], args[4]); + tcg_out_insn(s, RIL, ALGFI, a0, args[4]); } else { - tcg_out_insn(s, RIL, SLGFI, args[0], -args[4]); + tcg_out_insn(s, RIL, SLGFI, a0, -args[4]); } } else { - tcg_out_insn(s, RRE, ALGR, args[0], args[4]); + tcg_out_insn(s, RRE, ALGR, a0, args[4]); } - tcg_out_insn(s, RRE, ALCGR, args[1], args[5]); + tcg_out_insn(s, RRE, ALCGR, a1, args[5]); break; case INDEX_op_sub2_i64: - if (const_args[4]) { + if (c4) { if ((int64_t)args[4] >= 0) { - tcg_out_insn(s, RIL, SLGFI, args[0], args[4]); + tcg_out_insn(s, RIL, SLGFI, a0, 
args[4]); } else { - tcg_out_insn(s, RIL, ALGFI, args[0], -args[4]); + tcg_out_insn(s, RIL, ALGFI, a0, -args[4]); } } else { - tcg_out_insn(s, RRE, SLGR, args[0], args[4]); + tcg_out_insn(s, RRE, SLGR, a0, args[4]); } - tcg_out_insn(s, RRE, SLBGR, args[1], args[5]); + tcg_out_insn(s, RRE, SLBGR, a1, args[5]); break; case INDEX_op_brcond_i64: - tgen_brcond(s, TCG_TYPE_I64, args[2], args[0], - args[1], const_args[1], arg_label(args[3])); + tgen_brcond(s, TCG_TYPE_I64, a2, a0, a1, c1, arg_label(args[3])); break; case INDEX_op_setcond_i64: - tgen_setcond(s, TCG_TYPE_I64, args[3], args[0], args[1], - args[2], const_args[2]); + tgen_setcond(s, TCG_TYPE_I64, args[3], a0, a1, a2, c2); break; case INDEX_op_movcond_i64: - tgen_movcond(s, TCG_TYPE_I64, args[5], args[0], args[1], - args[2], const_args[2], args[3], const_args[3]); + tgen_movcond(s, TCG_TYPE_I64, args[5], a0, a1, a2, c2, args[3], c3); break; OP_32_64(deposit): - a0 = args[0], a1 = args[1], a2 = args[2]; - if (const_args[1]) { + if (c1) { tgen_deposit(s, a0, a2, args[3], args[4], 1); } else { /* Since we can't support "0Z" as a constraint, we allow a1 in @@ -2277,17 +2269,17 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc, break; OP_32_64(extract): - tgen_extract(s, args[0], args[1], args[2], args[3]); + tgen_extract(s, a0, a1, a2, args[3]); break; case INDEX_op_clz_i64: - tgen_clz(s, args[0], args[1], args[2], const_args[2]); + tgen_clz(s, a0, a1, a2, c2); break; case INDEX_op_mb: /* The host memory model is quite strong, we simply need to serialize the instruction stream. */ - if (args[0] & TCG_MO_ST_LD) { + if (a0 & TCG_MO_ST_LD) { tcg_out_insn(s, RR, BCR, s390_facilities & FACILITY_FAST_BCR_SER ? 
                         14 : 15, 0);
        }

From patchwork Mon Jan 11 15:01:13 2021
X-Patchwork-Submitter: Philippe Mathieu-Daudé
X-Patchwork-Id: 12010927
From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Subject: [RFC PATCH 4/5] tcg: Restrict tcg_out_op() to arrays of TCG_MAX_OP_ARGS elements
Date: Mon, 11 Jan 2021 16:01:13 +0100
Message-Id: <20210111150114.1415930-5-f4bug@amsat.org>
In-Reply-To: <20210111150114.1415930-1-f4bug@amsat.org>
References: <20210111150114.1415930-1-f4bug@amsat.org>
Cc: Aleksandar Rikalo, Cornelia Huck, qemu-riscv@nongnu.org, Stefan Weil,
    Huacai Chen, Richard Henderson, Thomas Huth, Philippe Mathieu-Daudé,
    qemu-s390x@nongnu.org, qemu-arm@nongnu.org, Alistair Francis,
    Palmer Dabbelt, Miroslav Rezanina, Aurelien Jarno

tcg_reg_alloc_op() allocates arrays of TCG_MAX_OP_ARGS elements. The
AArch64 target already does this since commit 8d8db193f25 ("tcg-aarch64:
Hoist common argument loads in tcg_out_op"), and SPARC since commit
b357f902bff ("tcg-sparc: Hoist common argument loads in tcg_out_op").
RISC-V missed it upon introduction in commit bdf503819ee ("tcg/riscv:
Add the out op decoder"), MIPS since commit 22ee3a987d5 ("tcg-mips:
Hoist args loads"), and i386 since commit 42d5b514928 ("tcg/i386:
Hoist common arguments in tcg_out_op").

Provide this information as a hint to the compiler in the function
prototype, and update the function definitions.

This fixes the following warnings (using GCC 11):

  tcg/aarch64/tcg-target.c.inc:1855:37: error: argument 3 of type 'const TCGArg[16]' {aka 'const long unsigned int[16]'} with mismatched bound [-Werror=array-parameter=]
  tcg/aarch64/tcg-target.c.inc:1856:34: error: argument 4 of type 'const int[16]' with mismatched bound [-Werror=array-parameter=]

Reported-by: Miroslav Rezanina
Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Miroslav Rezanina
---
RFC because such a compiler hint is somewhat "new" to me. I also expect
this to be superseded by Richard's 'tcg constant' branch mentioned here:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg771401.html
---
 tcg/tcg.c                  | 5 +++--
 tcg/i386/tcg-target.c.inc  | 3 ++-
 tcg/mips/tcg-target.c.inc  | 3 ++-
 tcg/riscv/tcg-target.c.inc | 3 ++-
 tcg/tci/tcg-target.c.inc   | 5 +++--
 5 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/tcg/tcg.c b/tcg/tcg.c
index 472bf1755bf..97d074d8fab 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -110,8 +110,9 @@ static void tcg_out_ld(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg1,
 static bool tcg_out_mov(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg);
 static void tcg_out_movi(TCGContext *s, TCGType type,
                          TCGReg ret, tcg_target_long arg);
-static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
-                       const int *const_args);
+static void tcg_out_op(TCGContext *s, TCGOpcode opc,
+                       const TCGArg args[TCG_MAX_OP_ARGS],
+                       const int const_args[TCG_MAX_OP_ARGS]);
 #if TCG_TARGET_MAYBE_vec
 static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
                             TCGReg dst, TCGReg src);
diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc
index 46e856f4421..d121dca8789 100644
--- a/tcg/i386/tcg-target.c.inc
+++ b/tcg/i386/tcg-target.c.inc
@@ -2215,7 +2215,8 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 }
 
 static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
-                              const TCGArg *args, const int *const_args)
+                              const TCGArg args[TCG_MAX_OP_ARGS],
+                              const int const_args[TCG_MAX_OP_ARGS])
 {
     TCGArg a0, a1, a2;
     int c, const_a2, vexop, rexw = 0;
diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index add157f6c32..b9bb54f0ecc 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -1691,7 +1691,8 @@ static void tcg_out_clz(TCGContext *s, MIPSInsn opcv2, MIPSInsn opcv6,
 }
 
 static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
-                              const TCGArg *args, const int *const_args)
+                              const TCGArg args[TCG_MAX_OP_ARGS],
+                              const int const_args[TCG_MAX_OP_ARGS])
 {
     MIPSInsn i1, i2;
     TCGArg a0, a1, a2;
diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc
index c60b91ba58f..5bf0d069532 100644
--- a/tcg/riscv/tcg-target.c.inc
+++ b/tcg/riscv/tcg-target.c.inc
@@ -1238,7 +1238,8 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
 
 static const tcg_insn_unit *tb_ret_addr;
 
 static void tcg_out_op(TCGContext *s, TCGOpcode opc,
-                       const TCGArg *args, const int *const_args)
+                       const TCGArg args[TCG_MAX_OP_ARGS],
+                       const int const_args[TCG_MAX_OP_ARGS])
 {
     TCGArg a0 = args[0];
     TCGArg a1 = args[1];
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
index d5a4d9d37cf..60464524f3d 100644
--- a/tcg/tci/tcg-target.c.inc
+++ b/tcg/tci/tcg-target.c.inc
@@ -553,8 +553,9 @@ static inline void tcg_out_call(TCGContext *s, const tcg_insn_unit *arg)
     old_code_ptr[1] = s->code_ptr - old_code_ptr;
 }
 
-static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
-                       const int *const_args)
+static void tcg_out_op(TCGContext *s, TCGOpcode opc,
+                       const TCGArg args[TCG_MAX_OP_ARGS],
+                       const int const_args[TCG_MAX_OP_ARGS])
 {
     uint8_t *old_code_ptr = s->code_ptr;

From patchwork Mon Jan 11 15:01:14 2021
X-Patchwork-Submitter: Philippe Mathieu-Daudé
X-Patchwork-Id: 12010929
From: Philippe Mathieu-Daudé
To: qemu-devel@nongnu.org
Subject: [RFC PATCH 5/5] tcg: Restrict tcg_out_vec_op() to arrays of TCG_MAX_OP_ARGS elements
Date: Mon, 11 Jan 2021 16:01:14 +0100
Message-Id: <20210111150114.1415930-6-f4bug@amsat.org>
In-Reply-To: <20210111150114.1415930-1-f4bug@amsat.org>
References: <20210111150114.1415930-1-f4bug@amsat.org>
Cc: Aleksandar Rikalo, Cornelia Huck, qemu-riscv@nongnu.org, Stefan Weil,
    Huacai Chen, Richard Henderson, Thomas Huth, Philippe Mathieu-Daudé,
    qemu-s390x@nongnu.org, qemu-arm@nongnu.org, Alistair Francis,
    Palmer Dabbelt, Aurelien Jarno

tcg_reg_alloc_op() allocates arrays of TCG_MAX_OP_ARGS elements.
Signed-off-by: Philippe Mathieu-Daudé
---
 tcg/tcg.c                    | 14 ++++++++------
 tcg/aarch64/tcg-target.c.inc |  3 ++-
 tcg/i386/tcg-target.c.inc    |  3 ++-
 tcg/ppc/tcg-target.c.inc     |  3 ++-
 4 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/tcg/tcg.c b/tcg/tcg.c
index 97d074d8fab..3a20327f9cb 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -120,9 +120,10 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
                              TCGReg dst, TCGReg base, intptr_t offset);
 static void tcg_out_dupi_vec(TCGContext *s, TCGType type,
                              TCGReg dst, tcg_target_long arg);
-static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, unsigned vecl,
-                           unsigned vece, const TCGArg *args,
-                           const int *const_args);
+static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
+                           unsigned vecl, unsigned vece,
+                           const TCGArg args[TCG_MAX_OP_ARGS],
+                           const int const_args[TCG_MAX_OP_ARGS]);
 #else
 static inline bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
                                    TCGReg dst, TCGReg src)
@@ -139,9 +140,10 @@ static inline void tcg_out_dupi_vec(TCGContext *s, TCGType type,
 {
     g_assert_not_reached();
 }
-static inline void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, unsigned vecl,
-                                  unsigned vece, const TCGArg *args,
-                                  const int *const_args)
+static inline void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
+                                  unsigned vecl, unsigned vece,
+                                  const TCGArg args[TCG_MAX_OP_ARGS],
+                                  const int const_args[TCG_MAX_OP_ARGS])
 {
     g_assert_not_reached();
 }
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index ab199b143f3..32811976e78 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -2276,7 +2276,8 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
 
 static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
                            unsigned vecl, unsigned vece,
-                           const TCGArg *args, const int *const_args)
+                           const TCGArg args[TCG_MAX_OP_ARGS],
+                           const int const_args[TCG_MAX_OP_ARGS])
 {
     static const AArch64Insn cmp_insn[16] = {
         [TCG_COND_EQ] = I3616_CMEQ,
diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc
index d121dca8789..87bf75735a1 100644
--- a/tcg/i386/tcg-target.c.inc
+++ b/tcg/i386/tcg-target.c.inc
@@ -2654,7 +2654,8 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
 
 static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
                            unsigned vecl, unsigned vece,
-                           const TCGArg *args, const int *const_args)
+                           const TCGArg args[TCG_MAX_OP_ARGS],
+                           const int const_args[TCG_MAX_OP_ARGS])
 {
     static int const add_insn[4] = {
         OPC_PADDB, OPC_PADDW, OPC_PADDD, OPC_PADDQ
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index d37b519d693..279ec4b743c 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.inc
@@ -3137,7 +3137,8 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
 
 static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
                            unsigned vecl, unsigned vece,
-                           const TCGArg *args, const int *const_args)
+                           const TCGArg args[TCG_MAX_OP_ARGS],
+                           const int const_args[TCG_MAX_OP_ARGS])
 {
     static const uint32_t
         add_op[4] = { VADDUBM, VADDUHM, VADDUWM, VADDUDM },