From patchwork Fri Sep 1 09:30:54 2023
X-Patchwork-Submitter: Jiajie Chen
X-Patchwork-Id: 13372317
From: Jiajie Chen <c@jia.je>
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen <c@jia.je>
Subject: [PATCH v2 01/14] tcg/loongarch64: Import LSX instructions
Date: Fri, 1 Sep 2023 17:30:54 +0800
Message-ID: <20230901093258.942357-2-c@jia.je>
In-Reply-To: <20230901093258.942357-1-c@jia.je>
References: <20230901093258.942357-1-c@jia.je>

Add opcodes and encoder functions for LSX. Generated from
https://github.com/jiegec/loongarch-opcodes/tree/qemu-lsx.
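As a reference for reviewers, below is a minimal usage sketch (not part of the
patch itself) of how a backend could call the generated emitters.  The vector
register numbering 0x20..0x3f follows the range asserts in the encode_vd*
helpers added by this patch; the register constants and the demo function name
are purely illustrative.

/*
 * Illustrative sketch only -- not part of this patch.
 * Assumes vector registers are numbered 0x20..0x3f, as required by the
 * asserts in the generated encode_vd* helpers; v0..v2 below are hypothetical.
 */
static void demo_emit_lsx(TCGContext *s, TCGReg base)
{
    const TCGReg v0 = 0x20, v1 = 0x21, v2 = 0x22;   /* assumed V0..V2 */

    tcg_out_opc_vld(s, v1, base, 0);                /* vld v1, base, 0 */
    tcg_out_opc_vld(s, v2, base, 16);               /* vld v2, base, 16 */
    tcg_out_opc_vbitsel_v(s, v0, v2, v1, v2);       /* vbitsel.v v0, v2, v1, v2 */
    tcg_out_opc_vst(s, v0, base, 32);               /* vst v0, base, 32 */
}

Note that the encode helpers mask the vector register operands with `& 0x1f`
before placing them in the 5-bit instruction slots, so vector registers can
share the TCGReg number space above the 32 general-purpose registers.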
Signed-off-by: Jiajie Chen Acked-by: Richard Henderson --- tcg/loongarch64/tcg-insn-defs.c.inc | 6251 ++++++++++++++++++++++++++- 1 file changed, 6134 insertions(+), 117 deletions(-) diff --git a/tcg/loongarch64/tcg-insn-defs.c.inc b/tcg/loongarch64/tcg-insn-defs.c.inc index b5bb0c5e73..ee3b483b02 100644 --- a/tcg/loongarch64/tcg-insn-defs.c.inc +++ b/tcg/loongarch64/tcg-insn-defs.c.inc @@ -4,7 +4,7 @@ * * This file is auto-generated by genqemutcgdefs from * https://github.com/loongson-community/loongarch-opcodes, - * from commit 25ca7effe9d88101c1cf96c4005423643386d81f. + * from commit 8027da9a8157a8b47fc48ff1def292e09c5668bd. * DO NOT EDIT. */ @@ -74,6 +74,60 @@ typedef enum { OPC_ANDI = 0x03400000, OPC_ORI = 0x03800000, OPC_XORI = 0x03c00000, + OPC_VFMADD_S = 0x09100000, + OPC_VFMADD_D = 0x09200000, + OPC_VFMSUB_S = 0x09500000, + OPC_VFMSUB_D = 0x09600000, + OPC_VFNMADD_S = 0x09900000, + OPC_VFNMADD_D = 0x09a00000, + OPC_VFNMSUB_S = 0x09d00000, + OPC_VFNMSUB_D = 0x09e00000, + OPC_VFCMP_CAF_S = 0x0c500000, + OPC_VFCMP_SAF_S = 0x0c508000, + OPC_VFCMP_CLT_S = 0x0c510000, + OPC_VFCMP_SLT_S = 0x0c518000, + OPC_VFCMP_CEQ_S = 0x0c520000, + OPC_VFCMP_SEQ_S = 0x0c528000, + OPC_VFCMP_CLE_S = 0x0c530000, + OPC_VFCMP_SLE_S = 0x0c538000, + OPC_VFCMP_CUN_S = 0x0c540000, + OPC_VFCMP_SUN_S = 0x0c548000, + OPC_VFCMP_CULT_S = 0x0c550000, + OPC_VFCMP_SULT_S = 0x0c558000, + OPC_VFCMP_CUEQ_S = 0x0c560000, + OPC_VFCMP_SUEQ_S = 0x0c568000, + OPC_VFCMP_CULE_S = 0x0c570000, + OPC_VFCMP_SULE_S = 0x0c578000, + OPC_VFCMP_CNE_S = 0x0c580000, + OPC_VFCMP_SNE_S = 0x0c588000, + OPC_VFCMP_COR_S = 0x0c5a0000, + OPC_VFCMP_SOR_S = 0x0c5a8000, + OPC_VFCMP_CUNE_S = 0x0c5c0000, + OPC_VFCMP_SUNE_S = 0x0c5c8000, + OPC_VFCMP_CAF_D = 0x0c600000, + OPC_VFCMP_SAF_D = 0x0c608000, + OPC_VFCMP_CLT_D = 0x0c610000, + OPC_VFCMP_SLT_D = 0x0c618000, + OPC_VFCMP_CEQ_D = 0x0c620000, + OPC_VFCMP_SEQ_D = 0x0c628000, + OPC_VFCMP_CLE_D = 0x0c630000, + OPC_VFCMP_SLE_D = 0x0c638000, + OPC_VFCMP_CUN_D = 0x0c640000, + OPC_VFCMP_SUN_D = 0x0c648000, + OPC_VFCMP_CULT_D = 0x0c650000, + OPC_VFCMP_SULT_D = 0x0c658000, + OPC_VFCMP_CUEQ_D = 0x0c660000, + OPC_VFCMP_SUEQ_D = 0x0c668000, + OPC_VFCMP_CULE_D = 0x0c670000, + OPC_VFCMP_SULE_D = 0x0c678000, + OPC_VFCMP_CNE_D = 0x0c680000, + OPC_VFCMP_SNE_D = 0x0c688000, + OPC_VFCMP_COR_D = 0x0c6a0000, + OPC_VFCMP_SOR_D = 0x0c6a8000, + OPC_VFCMP_CUNE_D = 0x0c6c0000, + OPC_VFCMP_SUNE_D = 0x0c6c8000, + OPC_VBITSEL_V = 0x0d100000, + OPC_VSHUF_B = 0x0d500000, OPC_ADDU16I_D = 0x10000000, OPC_LU12I_W = 0x14000000, OPC_CU32I_D = 0x16000000, @@ -92,6 +146,16 @@ typedef enum { OPC_LD_BU = 0x2a000000, OPC_LD_HU = 0x2a400000, OPC_LD_WU = 0x2a800000, + OPC_VLD = 0x2c000000, + OPC_VST = 0x2c400000, + OPC_VLDREPL_D = 0x30100000, + OPC_VLDREPL_W = 0x30200000, + OPC_VLDREPL_H = 0x30400000, + OPC_VLDREPL_B = 0x30800000, + OPC_VSTELM_D = 0x31100000, + OPC_VSTELM_W = 0x31200000, + OPC_VSTELM_H = 0x31400000, + OPC_VSTELM_B = 0x31800000, OPC_LDX_B = 0x38000000, OPC_LDX_H = 0x38040000, OPC_LDX_W = 0x38080000, @@ -103,6 +167,8 @@ typedef enum { OPC_LDX_BU = 0x38200000, OPC_LDX_HU = 0x38240000, OPC_LDX_WU = 0x38280000, + OPC_VLDX = 0x38400000, + OPC_VSTX = 0x38440000, OPC_DBAR = 0x38720000, OPC_JIRL = 0x4c000000, OPC_B = 0x50000000, @@ -113,6 +179,652 @@ typedef enum { OPC_BLE = 0x64000000, OPC_BGTU = 0x68000000, OPC_BLEU = 0x6c000000, + OPC_VSEQ_B = 0x70000000, + OPC_VSEQ_H = 0x70008000, + OPC_VSEQ_W = 0x70010000, + OPC_VSEQ_D = 0x70018000, + OPC_VSLE_B = 0x70020000, + OPC_VSLE_H = 0x70028000, + OPC_VSLE_W = 0x70030000, + OPC_VSLE_D = 
0x70038000, + OPC_VSLE_BU = 0x70040000, + OPC_VSLE_HU = 0x70048000, + OPC_VSLE_WU = 0x70050000, + OPC_VSLE_DU = 0x70058000, + OPC_VSLT_B = 0x70060000, + OPC_VSLT_H = 0x70068000, + OPC_VSLT_W = 0x70070000, + OPC_VSLT_D = 0x70078000, + OPC_VSLT_BU = 0x70080000, + OPC_VSLT_HU = 0x70088000, + OPC_VSLT_WU = 0x70090000, + OPC_VSLT_DU = 0x70098000, + OPC_VADD_B = 0x700a0000, + OPC_VADD_H = 0x700a8000, + OPC_VADD_W = 0x700b0000, + OPC_VADD_D = 0x700b8000, + OPC_VSUB_B = 0x700c0000, + OPC_VSUB_H = 0x700c8000, + OPC_VSUB_W = 0x700d0000, + OPC_VSUB_D = 0x700d8000, + OPC_VADDWEV_H_B = 0x701e0000, + OPC_VADDWEV_W_H = 0x701e8000, + OPC_VADDWEV_D_W = 0x701f0000, + OPC_VADDWEV_Q_D = 0x701f8000, + OPC_VSUBWEV_H_B = 0x70200000, + OPC_VSUBWEV_W_H = 0x70208000, + OPC_VSUBWEV_D_W = 0x70210000, + OPC_VSUBWEV_Q_D = 0x70218000, + OPC_VADDWOD_H_B = 0x70220000, + OPC_VADDWOD_W_H = 0x70228000, + OPC_VADDWOD_D_W = 0x70230000, + OPC_VADDWOD_Q_D = 0x70238000, + OPC_VSUBWOD_H_B = 0x70240000, + OPC_VSUBWOD_W_H = 0x70248000, + OPC_VSUBWOD_D_W = 0x70250000, + OPC_VSUBWOD_Q_D = 0x70258000, + OPC_VADDWEV_H_BU = 0x702e0000, + OPC_VADDWEV_W_HU = 0x702e8000, + OPC_VADDWEV_D_WU = 0x702f0000, + OPC_VADDWEV_Q_DU = 0x702f8000, + OPC_VSUBWEV_H_BU = 0x70300000, + OPC_VSUBWEV_W_HU = 0x70308000, + OPC_VSUBWEV_D_WU = 0x70310000, + OPC_VSUBWEV_Q_DU = 0x70318000, + OPC_VADDWOD_H_BU = 0x70320000, + OPC_VADDWOD_W_HU = 0x70328000, + OPC_VADDWOD_D_WU = 0x70330000, + OPC_VADDWOD_Q_DU = 0x70338000, + OPC_VSUBWOD_H_BU = 0x70340000, + OPC_VSUBWOD_W_HU = 0x70348000, + OPC_VSUBWOD_D_WU = 0x70350000, + OPC_VSUBWOD_Q_DU = 0x70358000, + OPC_VADDWEV_H_BU_B = 0x703e0000, + OPC_VADDWEV_W_HU_H = 0x703e8000, + OPC_VADDWEV_D_WU_W = 0x703f0000, + OPC_VADDWEV_Q_DU_D = 0x703f8000, + OPC_VADDWOD_H_BU_B = 0x70400000, + OPC_VADDWOD_W_HU_H = 0x70408000, + OPC_VADDWOD_D_WU_W = 0x70410000, + OPC_VADDWOD_Q_DU_D = 0x70418000, + OPC_VSADD_B = 0x70460000, + OPC_VSADD_H = 0x70468000, + OPC_VSADD_W = 0x70470000, + OPC_VSADD_D = 0x70478000, + OPC_VSSUB_B = 0x70480000, + OPC_VSSUB_H = 0x70488000, + OPC_VSSUB_W = 0x70490000, + OPC_VSSUB_D = 0x70498000, + OPC_VSADD_BU = 0x704a0000, + OPC_VSADD_HU = 0x704a8000, + OPC_VSADD_WU = 0x704b0000, + OPC_VSADD_DU = 0x704b8000, + OPC_VSSUB_BU = 0x704c0000, + OPC_VSSUB_HU = 0x704c8000, + OPC_VSSUB_WU = 0x704d0000, + OPC_VSSUB_DU = 0x704d8000, + OPC_VHADDW_H_B = 0x70540000, + OPC_VHADDW_W_H = 0x70548000, + OPC_VHADDW_D_W = 0x70550000, + OPC_VHADDW_Q_D = 0x70558000, + OPC_VHSUBW_H_B = 0x70560000, + OPC_VHSUBW_W_H = 0x70568000, + OPC_VHSUBW_D_W = 0x70570000, + OPC_VHSUBW_Q_D = 0x70578000, + OPC_VHADDW_HU_BU = 0x70580000, + OPC_VHADDW_WU_HU = 0x70588000, + OPC_VHADDW_DU_WU = 0x70590000, + OPC_VHADDW_QU_DU = 0x70598000, + OPC_VHSUBW_HU_BU = 0x705a0000, + OPC_VHSUBW_WU_HU = 0x705a8000, + OPC_VHSUBW_DU_WU = 0x705b0000, + OPC_VHSUBW_QU_DU = 0x705b8000, + OPC_VADDA_B = 0x705c0000, + OPC_VADDA_H = 0x705c8000, + OPC_VADDA_W = 0x705d0000, + OPC_VADDA_D = 0x705d8000, + OPC_VABSD_B = 0x70600000, + OPC_VABSD_H = 0x70608000, + OPC_VABSD_W = 0x70610000, + OPC_VABSD_D = 0x70618000, + OPC_VABSD_BU = 0x70620000, + OPC_VABSD_HU = 0x70628000, + OPC_VABSD_WU = 0x70630000, + OPC_VABSD_DU = 0x70638000, + OPC_VAVG_B = 0x70640000, + OPC_VAVG_H = 0x70648000, + OPC_VAVG_W = 0x70650000, + OPC_VAVG_D = 0x70658000, + OPC_VAVG_BU = 0x70660000, + OPC_VAVG_HU = 0x70668000, + OPC_VAVG_WU = 0x70670000, + OPC_VAVG_DU = 0x70678000, + OPC_VAVGR_B = 0x70680000, + OPC_VAVGR_H = 0x70688000, + OPC_VAVGR_W = 0x70690000, + OPC_VAVGR_D = 0x70698000, + OPC_VAVGR_BU = 0x706a0000, + 
OPC_VAVGR_HU = 0x706a8000, + OPC_VAVGR_WU = 0x706b0000, + OPC_VAVGR_DU = 0x706b8000, + OPC_VMAX_B = 0x70700000, + OPC_VMAX_H = 0x70708000, + OPC_VMAX_W = 0x70710000, + OPC_VMAX_D = 0x70718000, + OPC_VMIN_B = 0x70720000, + OPC_VMIN_H = 0x70728000, + OPC_VMIN_W = 0x70730000, + OPC_VMIN_D = 0x70738000, + OPC_VMAX_BU = 0x70740000, + OPC_VMAX_HU = 0x70748000, + OPC_VMAX_WU = 0x70750000, + OPC_VMAX_DU = 0x70758000, + OPC_VMIN_BU = 0x70760000, + OPC_VMIN_HU = 0x70768000, + OPC_VMIN_WU = 0x70770000, + OPC_VMIN_DU = 0x70778000, + OPC_VMUL_B = 0x70840000, + OPC_VMUL_H = 0x70848000, + OPC_VMUL_W = 0x70850000, + OPC_VMUL_D = 0x70858000, + OPC_VMUH_B = 0x70860000, + OPC_VMUH_H = 0x70868000, + OPC_VMUH_W = 0x70870000, + OPC_VMUH_D = 0x70878000, + OPC_VMUH_BU = 0x70880000, + OPC_VMUH_HU = 0x70888000, + OPC_VMUH_WU = 0x70890000, + OPC_VMUH_DU = 0x70898000, + OPC_VMULWEV_H_B = 0x70900000, + OPC_VMULWEV_W_H = 0x70908000, + OPC_VMULWEV_D_W = 0x70910000, + OPC_VMULWEV_Q_D = 0x70918000, + OPC_VMULWOD_H_B = 0x70920000, + OPC_VMULWOD_W_H = 0x70928000, + OPC_VMULWOD_D_W = 0x70930000, + OPC_VMULWOD_Q_D = 0x70938000, + OPC_VMULWEV_H_BU = 0x70980000, + OPC_VMULWEV_W_HU = 0x70988000, + OPC_VMULWEV_D_WU = 0x70990000, + OPC_VMULWEV_Q_DU = 0x70998000, + OPC_VMULWOD_H_BU = 0x709a0000, + OPC_VMULWOD_W_HU = 0x709a8000, + OPC_VMULWOD_D_WU = 0x709b0000, + OPC_VMULWOD_Q_DU = 0x709b8000, + OPC_VMULWEV_H_BU_B = 0x70a00000, + OPC_VMULWEV_W_HU_H = 0x70a08000, + OPC_VMULWEV_D_WU_W = 0x70a10000, + OPC_VMULWEV_Q_DU_D = 0x70a18000, + OPC_VMULWOD_H_BU_B = 0x70a20000, + OPC_VMULWOD_W_HU_H = 0x70a28000, + OPC_VMULWOD_D_WU_W = 0x70a30000, + OPC_VMULWOD_Q_DU_D = 0x70a38000, + OPC_VMADD_B = 0x70a80000, + OPC_VMADD_H = 0x70a88000, + OPC_VMADD_W = 0x70a90000, + OPC_VMADD_D = 0x70a98000, + OPC_VMSUB_B = 0x70aa0000, + OPC_VMSUB_H = 0x70aa8000, + OPC_VMSUB_W = 0x70ab0000, + OPC_VMSUB_D = 0x70ab8000, + OPC_VMADDWEV_H_B = 0x70ac0000, + OPC_VMADDWEV_W_H = 0x70ac8000, + OPC_VMADDWEV_D_W = 0x70ad0000, + OPC_VMADDWEV_Q_D = 0x70ad8000, + OPC_VMADDWOD_H_B = 0x70ae0000, + OPC_VMADDWOD_W_H = 0x70ae8000, + OPC_VMADDWOD_D_W = 0x70af0000, + OPC_VMADDWOD_Q_D = 0x70af8000, + OPC_VMADDWEV_H_BU = 0x70b40000, + OPC_VMADDWEV_W_HU = 0x70b48000, + OPC_VMADDWEV_D_WU = 0x70b50000, + OPC_VMADDWEV_Q_DU = 0x70b58000, + OPC_VMADDWOD_H_BU = 0x70b60000, + OPC_VMADDWOD_W_HU = 0x70b68000, + OPC_VMADDWOD_D_WU = 0x70b70000, + OPC_VMADDWOD_Q_DU = 0x70b78000, + OPC_VMADDWEV_H_BU_B = 0x70bc0000, + OPC_VMADDWEV_W_HU_H = 0x70bc8000, + OPC_VMADDWEV_D_WU_W = 0x70bd0000, + OPC_VMADDWEV_Q_DU_D = 0x70bd8000, + OPC_VMADDWOD_H_BU_B = 0x70be0000, + OPC_VMADDWOD_W_HU_H = 0x70be8000, + OPC_VMADDWOD_D_WU_W = 0x70bf0000, + OPC_VMADDWOD_Q_DU_D = 0x70bf8000, + OPC_VDIV_B = 0x70e00000, + OPC_VDIV_H = 0x70e08000, + OPC_VDIV_W = 0x70e10000, + OPC_VDIV_D = 0x70e18000, + OPC_VMOD_B = 0x70e20000, + OPC_VMOD_H = 0x70e28000, + OPC_VMOD_W = 0x70e30000, + OPC_VMOD_D = 0x70e38000, + OPC_VDIV_BU = 0x70e40000, + OPC_VDIV_HU = 0x70e48000, + OPC_VDIV_WU = 0x70e50000, + OPC_VDIV_DU = 0x70e58000, + OPC_VMOD_BU = 0x70e60000, + OPC_VMOD_HU = 0x70e68000, + OPC_VMOD_WU = 0x70e70000, + OPC_VMOD_DU = 0x70e78000, + OPC_VSLL_B = 0x70e80000, + OPC_VSLL_H = 0x70e88000, + OPC_VSLL_W = 0x70e90000, + OPC_VSLL_D = 0x70e98000, + OPC_VSRL_B = 0x70ea0000, + OPC_VSRL_H = 0x70ea8000, + OPC_VSRL_W = 0x70eb0000, + OPC_VSRL_D = 0x70eb8000, + OPC_VSRA_B = 0x70ec0000, + OPC_VSRA_H = 0x70ec8000, + OPC_VSRA_W = 0x70ed0000, + OPC_VSRA_D = 0x70ed8000, + OPC_VROTR_B = 0x70ee0000, + OPC_VROTR_H = 0x70ee8000, + OPC_VROTR_W = 0x70ef0000, + 
OPC_VROTR_D = 0x70ef8000, + OPC_VSRLR_B = 0x70f00000, + OPC_VSRLR_H = 0x70f08000, + OPC_VSRLR_W = 0x70f10000, + OPC_VSRLR_D = 0x70f18000, + OPC_VSRAR_B = 0x70f20000, + OPC_VSRAR_H = 0x70f28000, + OPC_VSRAR_W = 0x70f30000, + OPC_VSRAR_D = 0x70f38000, + OPC_VSRLN_B_H = 0x70f48000, + OPC_VSRLN_H_W = 0x70f50000, + OPC_VSRLN_W_D = 0x70f58000, + OPC_VSRAN_B_H = 0x70f68000, + OPC_VSRAN_H_W = 0x70f70000, + OPC_VSRAN_W_D = 0x70f78000, + OPC_VSRLRN_B_H = 0x70f88000, + OPC_VSRLRN_H_W = 0x70f90000, + OPC_VSRLRN_W_D = 0x70f98000, + OPC_VSRARN_B_H = 0x70fa8000, + OPC_VSRARN_H_W = 0x70fb0000, + OPC_VSRARN_W_D = 0x70fb8000, + OPC_VSSRLN_B_H = 0x70fc8000, + OPC_VSSRLN_H_W = 0x70fd0000, + OPC_VSSRLN_W_D = 0x70fd8000, + OPC_VSSRAN_B_H = 0x70fe8000, + OPC_VSSRAN_H_W = 0x70ff0000, + OPC_VSSRAN_W_D = 0x70ff8000, + OPC_VSSRLRN_B_H = 0x71008000, + OPC_VSSRLRN_H_W = 0x71010000, + OPC_VSSRLRN_W_D = 0x71018000, + OPC_VSSRARN_B_H = 0x71028000, + OPC_VSSRARN_H_W = 0x71030000, + OPC_VSSRARN_W_D = 0x71038000, + OPC_VSSRLN_BU_H = 0x71048000, + OPC_VSSRLN_HU_W = 0x71050000, + OPC_VSSRLN_WU_D = 0x71058000, + OPC_VSSRAN_BU_H = 0x71068000, + OPC_VSSRAN_HU_W = 0x71070000, + OPC_VSSRAN_WU_D = 0x71078000, + OPC_VSSRLRN_BU_H = 0x71088000, + OPC_VSSRLRN_HU_W = 0x71090000, + OPC_VSSRLRN_WU_D = 0x71098000, + OPC_VSSRARN_BU_H = 0x710a8000, + OPC_VSSRARN_HU_W = 0x710b0000, + OPC_VSSRARN_WU_D = 0x710b8000, + OPC_VBITCLR_B = 0x710c0000, + OPC_VBITCLR_H = 0x710c8000, + OPC_VBITCLR_W = 0x710d0000, + OPC_VBITCLR_D = 0x710d8000, + OPC_VBITSET_B = 0x710e0000, + OPC_VBITSET_H = 0x710e8000, + OPC_VBITSET_W = 0x710f0000, + OPC_VBITSET_D = 0x710f8000, + OPC_VBITREV_B = 0x71100000, + OPC_VBITREV_H = 0x71108000, + OPC_VBITREV_W = 0x71110000, + OPC_VBITREV_D = 0x71118000, + OPC_VPACKEV_B = 0x71160000, + OPC_VPACKEV_H = 0x71168000, + OPC_VPACKEV_W = 0x71170000, + OPC_VPACKEV_D = 0x71178000, + OPC_VPACKOD_B = 0x71180000, + OPC_VPACKOD_H = 0x71188000, + OPC_VPACKOD_W = 0x71190000, + OPC_VPACKOD_D = 0x71198000, + OPC_VILVL_B = 0x711a0000, + OPC_VILVL_H = 0x711a8000, + OPC_VILVL_W = 0x711b0000, + OPC_VILVL_D = 0x711b8000, + OPC_VILVH_B = 0x711c0000, + OPC_VILVH_H = 0x711c8000, + OPC_VILVH_W = 0x711d0000, + OPC_VILVH_D = 0x711d8000, + OPC_VPICKEV_B = 0x711e0000, + OPC_VPICKEV_H = 0x711e8000, + OPC_VPICKEV_W = 0x711f0000, + OPC_VPICKEV_D = 0x711f8000, + OPC_VPICKOD_B = 0x71200000, + OPC_VPICKOD_H = 0x71208000, + OPC_VPICKOD_W = 0x71210000, + OPC_VPICKOD_D = 0x71218000, + OPC_VREPLVE_B = 0x71220000, + OPC_VREPLVE_H = 0x71228000, + OPC_VREPLVE_W = 0x71230000, + OPC_VREPLVE_D = 0x71238000, + OPC_VAND_V = 0x71260000, + OPC_VOR_V = 0x71268000, + OPC_VXOR_V = 0x71270000, + OPC_VNOR_V = 0x71278000, + OPC_VANDN_V = 0x71280000, + OPC_VORN_V = 0x71288000, + OPC_VFRSTP_B = 0x712b0000, + OPC_VFRSTP_H = 0x712b8000, + OPC_VADD_Q = 0x712d0000, + OPC_VSUB_Q = 0x712d8000, + OPC_VSIGNCOV_B = 0x712e0000, + OPC_VSIGNCOV_H = 0x712e8000, + OPC_VSIGNCOV_W = 0x712f0000, + OPC_VSIGNCOV_D = 0x712f8000, + OPC_VFADD_S = 0x71308000, + OPC_VFADD_D = 0x71310000, + OPC_VFSUB_S = 0x71328000, + OPC_VFSUB_D = 0x71330000, + OPC_VFMUL_S = 0x71388000, + OPC_VFMUL_D = 0x71390000, + OPC_VFDIV_S = 0x713a8000, + OPC_VFDIV_D = 0x713b0000, + OPC_VFMAX_S = 0x713c8000, + OPC_VFMAX_D = 0x713d0000, + OPC_VFMIN_S = 0x713e8000, + OPC_VFMIN_D = 0x713f0000, + OPC_VFMAXA_S = 0x71408000, + OPC_VFMAXA_D = 0x71410000, + OPC_VFMINA_S = 0x71428000, + OPC_VFMINA_D = 0x71430000, + OPC_VFCVT_H_S = 0x71460000, + OPC_VFCVT_S_D = 0x71468000, + OPC_VFFINT_S_L = 0x71480000, + OPC_VFTINT_W_D = 0x71498000, + 
OPC_VFTINTRM_W_D = 0x714a0000, + OPC_VFTINTRP_W_D = 0x714a8000, + OPC_VFTINTRZ_W_D = 0x714b0000, + OPC_VFTINTRNE_W_D = 0x714b8000, + OPC_VSHUF_H = 0x717a8000, + OPC_VSHUF_W = 0x717b0000, + OPC_VSHUF_D = 0x717b8000, + OPC_VSEQI_B = 0x72800000, + OPC_VSEQI_H = 0x72808000, + OPC_VSEQI_W = 0x72810000, + OPC_VSEQI_D = 0x72818000, + OPC_VSLEI_B = 0x72820000, + OPC_VSLEI_H = 0x72828000, + OPC_VSLEI_W = 0x72830000, + OPC_VSLEI_D = 0x72838000, + OPC_VSLEI_BU = 0x72840000, + OPC_VSLEI_HU = 0x72848000, + OPC_VSLEI_WU = 0x72850000, + OPC_VSLEI_DU = 0x72858000, + OPC_VSLTI_B = 0x72860000, + OPC_VSLTI_H = 0x72868000, + OPC_VSLTI_W = 0x72870000, + OPC_VSLTI_D = 0x72878000, + OPC_VSLTI_BU = 0x72880000, + OPC_VSLTI_HU = 0x72888000, + OPC_VSLTI_WU = 0x72890000, + OPC_VSLTI_DU = 0x72898000, + OPC_VADDI_BU = 0x728a0000, + OPC_VADDI_HU = 0x728a8000, + OPC_VADDI_WU = 0x728b0000, + OPC_VADDI_DU = 0x728b8000, + OPC_VSUBI_BU = 0x728c0000, + OPC_VSUBI_HU = 0x728c8000, + OPC_VSUBI_WU = 0x728d0000, + OPC_VSUBI_DU = 0x728d8000, + OPC_VBSLL_V = 0x728e0000, + OPC_VBSRL_V = 0x728e8000, + OPC_VMAXI_B = 0x72900000, + OPC_VMAXI_H = 0x72908000, + OPC_VMAXI_W = 0x72910000, + OPC_VMAXI_D = 0x72918000, + OPC_VMINI_B = 0x72920000, + OPC_VMINI_H = 0x72928000, + OPC_VMINI_W = 0x72930000, + OPC_VMINI_D = 0x72938000, + OPC_VMAXI_BU = 0x72940000, + OPC_VMAXI_HU = 0x72948000, + OPC_VMAXI_WU = 0x72950000, + OPC_VMAXI_DU = 0x72958000, + OPC_VMINI_BU = 0x72960000, + OPC_VMINI_HU = 0x72968000, + OPC_VMINI_WU = 0x72970000, + OPC_VMINI_DU = 0x72978000, + OPC_VFRSTPI_B = 0x729a0000, + OPC_VFRSTPI_H = 0x729a8000, + OPC_VCLO_B = 0x729c0000, + OPC_VCLO_H = 0x729c0400, + OPC_VCLO_W = 0x729c0800, + OPC_VCLO_D = 0x729c0c00, + OPC_VCLZ_B = 0x729c1000, + OPC_VCLZ_H = 0x729c1400, + OPC_VCLZ_W = 0x729c1800, + OPC_VCLZ_D = 0x729c1c00, + OPC_VPCNT_B = 0x729c2000, + OPC_VPCNT_H = 0x729c2400, + OPC_VPCNT_W = 0x729c2800, + OPC_VPCNT_D = 0x729c2c00, + OPC_VNEG_B = 0x729c3000, + OPC_VNEG_H = 0x729c3400, + OPC_VNEG_W = 0x729c3800, + OPC_VNEG_D = 0x729c3c00, + OPC_VMSKLTZ_B = 0x729c4000, + OPC_VMSKLTZ_H = 0x729c4400, + OPC_VMSKLTZ_W = 0x729c4800, + OPC_VMSKLTZ_D = 0x729c4c00, + OPC_VMSKGEZ_B = 0x729c5000, + OPC_VMSKNZ_B = 0x729c6000, + OPC_VSETEQZ_V = 0x729c9800, + OPC_VSETNEZ_V = 0x729c9c00, + OPC_VSETANYEQZ_B = 0x729ca000, + OPC_VSETANYEQZ_H = 0x729ca400, + OPC_VSETANYEQZ_W = 0x729ca800, + OPC_VSETANYEQZ_D = 0x729cac00, + OPC_VSETALLNEZ_B = 0x729cb000, + OPC_VSETALLNEZ_H = 0x729cb400, + OPC_VSETALLNEZ_W = 0x729cb800, + OPC_VSETALLNEZ_D = 0x729cbc00, + OPC_VFLOGB_S = 0x729cc400, + OPC_VFLOGB_D = 0x729cc800, + OPC_VFCLASS_S = 0x729cd400, + OPC_VFCLASS_D = 0x729cd800, + OPC_VFSQRT_S = 0x729ce400, + OPC_VFSQRT_D = 0x729ce800, + OPC_VFRECIP_S = 0x729cf400, + OPC_VFRECIP_D = 0x729cf800, + OPC_VFRSQRT_S = 0x729d0400, + OPC_VFRSQRT_D = 0x729d0800, + OPC_VFRINT_S = 0x729d3400, + OPC_VFRINT_D = 0x729d3800, + OPC_VFRINTRM_S = 0x729d4400, + OPC_VFRINTRM_D = 0x729d4800, + OPC_VFRINTRP_S = 0x729d5400, + OPC_VFRINTRP_D = 0x729d5800, + OPC_VFRINTRZ_S = 0x729d6400, + OPC_VFRINTRZ_D = 0x729d6800, + OPC_VFRINTRNE_S = 0x729d7400, + OPC_VFRINTRNE_D = 0x729d7800, + OPC_VFCVTL_S_H = 0x729de800, + OPC_VFCVTH_S_H = 0x729dec00, + OPC_VFCVTL_D_S = 0x729df000, + OPC_VFCVTH_D_S = 0x729df400, + OPC_VFFINT_S_W = 0x729e0000, + OPC_VFFINT_S_WU = 0x729e0400, + OPC_VFFINT_D_L = 0x729e0800, + OPC_VFFINT_D_LU = 0x729e0c00, + OPC_VFFINTL_D_W = 0x729e1000, + OPC_VFFINTH_D_W = 0x729e1400, + OPC_VFTINT_W_S = 0x729e3000, + OPC_VFTINT_L_D = 0x729e3400, + OPC_VFTINTRM_W_S = 0x729e3800, + 
OPC_VFTINTRM_L_D = 0x729e3c00, + OPC_VFTINTRP_W_S = 0x729e4000, + OPC_VFTINTRP_L_D = 0x729e4400, + OPC_VFTINTRZ_W_S = 0x729e4800, + OPC_VFTINTRZ_L_D = 0x729e4c00, + OPC_VFTINTRNE_W_S = 0x729e5000, + OPC_VFTINTRNE_L_D = 0x729e5400, + OPC_VFTINT_WU_S = 0x729e5800, + OPC_VFTINT_LU_D = 0x729e5c00, + OPC_VFTINTRZ_WU_S = 0x729e7000, + OPC_VFTINTRZ_LU_D = 0x729e7400, + OPC_VFTINTL_L_S = 0x729e8000, + OPC_VFTINTH_L_S = 0x729e8400, + OPC_VFTINTRML_L_S = 0x729e8800, + OPC_VFTINTRMH_L_S = 0x729e8c00, + OPC_VFTINTRPL_L_S = 0x729e9000, + OPC_VFTINTRPH_L_S = 0x729e9400, + OPC_VFTINTRZL_L_S = 0x729e9800, + OPC_VFTINTRZH_L_S = 0x729e9c00, + OPC_VFTINTRNEL_L_S = 0x729ea000, + OPC_VFTINTRNEH_L_S = 0x729ea400, + OPC_VEXTH_H_B = 0x729ee000, + OPC_VEXTH_W_H = 0x729ee400, + OPC_VEXTH_D_W = 0x729ee800, + OPC_VEXTH_Q_D = 0x729eec00, + OPC_VEXTH_HU_BU = 0x729ef000, + OPC_VEXTH_WU_HU = 0x729ef400, + OPC_VEXTH_DU_WU = 0x729ef800, + OPC_VEXTH_QU_DU = 0x729efc00, + OPC_VREPLGR2VR_B = 0x729f0000, + OPC_VREPLGR2VR_H = 0x729f0400, + OPC_VREPLGR2VR_W = 0x729f0800, + OPC_VREPLGR2VR_D = 0x729f0c00, + OPC_VROTRI_B = 0x72a02000, + OPC_VROTRI_H = 0x72a04000, + OPC_VROTRI_W = 0x72a08000, + OPC_VROTRI_D = 0x72a10000, + OPC_VSRLRI_B = 0x72a42000, + OPC_VSRLRI_H = 0x72a44000, + OPC_VSRLRI_W = 0x72a48000, + OPC_VSRLRI_D = 0x72a50000, + OPC_VSRARI_B = 0x72a82000, + OPC_VSRARI_H = 0x72a84000, + OPC_VSRARI_W = 0x72a88000, + OPC_VSRARI_D = 0x72a90000, + OPC_VINSGR2VR_B = 0x72eb8000, + OPC_VINSGR2VR_H = 0x72ebc000, + OPC_VINSGR2VR_W = 0x72ebe000, + OPC_VINSGR2VR_D = 0x72ebf000, + OPC_VPICKVE2GR_B = 0x72ef8000, + OPC_VPICKVE2GR_H = 0x72efc000, + OPC_VPICKVE2GR_W = 0x72efe000, + OPC_VPICKVE2GR_D = 0x72eff000, + OPC_VPICKVE2GR_BU = 0x72f38000, + OPC_VPICKVE2GR_HU = 0x72f3c000, + OPC_VPICKVE2GR_WU = 0x72f3e000, + OPC_VPICKVE2GR_DU = 0x72f3f000, + OPC_VREPLVEI_B = 0x72f78000, + OPC_VREPLVEI_H = 0x72f7c000, + OPC_VREPLVEI_W = 0x72f7e000, + OPC_VREPLVEI_D = 0x72f7f000, + OPC_VSLLWIL_H_B = 0x73082000, + OPC_VSLLWIL_W_H = 0x73084000, + OPC_VSLLWIL_D_W = 0x73088000, + OPC_VEXTL_Q_D = 0x73090000, + OPC_VSLLWIL_HU_BU = 0x730c2000, + OPC_VSLLWIL_WU_HU = 0x730c4000, + OPC_VSLLWIL_DU_WU = 0x730c8000, + OPC_VEXTL_QU_DU = 0x730d0000, + OPC_VBITCLRI_B = 0x73102000, + OPC_VBITCLRI_H = 0x73104000, + OPC_VBITCLRI_W = 0x73108000, + OPC_VBITCLRI_D = 0x73110000, + OPC_VBITSETI_B = 0x73142000, + OPC_VBITSETI_H = 0x73144000, + OPC_VBITSETI_W = 0x73148000, + OPC_VBITSETI_D = 0x73150000, + OPC_VBITREVI_B = 0x73182000, + OPC_VBITREVI_H = 0x73184000, + OPC_VBITREVI_W = 0x73188000, + OPC_VBITREVI_D = 0x73190000, + OPC_VSAT_B = 0x73242000, + OPC_VSAT_H = 0x73244000, + OPC_VSAT_W = 0x73248000, + OPC_VSAT_D = 0x73250000, + OPC_VSAT_BU = 0x73282000, + OPC_VSAT_HU = 0x73284000, + OPC_VSAT_WU = 0x73288000, + OPC_VSAT_DU = 0x73290000, + OPC_VSLLI_B = 0x732c2000, + OPC_VSLLI_H = 0x732c4000, + OPC_VSLLI_W = 0x732c8000, + OPC_VSLLI_D = 0x732d0000, + OPC_VSRLI_B = 0x73302000, + OPC_VSRLI_H = 0x73304000, + OPC_VSRLI_W = 0x73308000, + OPC_VSRLI_D = 0x73310000, + OPC_VSRAI_B = 0x73342000, + OPC_VSRAI_H = 0x73344000, + OPC_VSRAI_W = 0x73348000, + OPC_VSRAI_D = 0x73350000, + OPC_VSRLNI_B_H = 0x73404000, + OPC_VSRLNI_H_W = 0x73408000, + OPC_VSRLNI_W_D = 0x73410000, + OPC_VSRLNI_D_Q = 0x73420000, + OPC_VSRLRNI_B_H = 0x73444000, + OPC_VSRLRNI_H_W = 0x73448000, + OPC_VSRLRNI_W_D = 0x73450000, + OPC_VSRLRNI_D_Q = 0x73460000, + OPC_VSSRLNI_B_H = 0x73484000, + OPC_VSSRLNI_H_W = 0x73488000, + OPC_VSSRLNI_W_D = 0x73490000, + OPC_VSSRLNI_D_Q = 0x734a0000, + OPC_VSSRLNI_BU_H = 0x734c4000, + 
OPC_VSSRLNI_HU_W = 0x734c8000, + OPC_VSSRLNI_WU_D = 0x734d0000, + OPC_VSSRLNI_DU_Q = 0x734e0000, + OPC_VSSRLRNI_B_H = 0x73504000, + OPC_VSSRLRNI_H_W = 0x73508000, + OPC_VSSRLRNI_W_D = 0x73510000, + OPC_VSSRLRNI_D_Q = 0x73520000, + OPC_VSSRLRNI_BU_H = 0x73544000, + OPC_VSSRLRNI_HU_W = 0x73548000, + OPC_VSSRLRNI_WU_D = 0x73550000, + OPC_VSSRLRNI_DU_Q = 0x73560000, + OPC_VSRANI_B_H = 0x73584000, + OPC_VSRANI_H_W = 0x73588000, + OPC_VSRANI_W_D = 0x73590000, + OPC_VSRANI_D_Q = 0x735a0000, + OPC_VSRARNI_B_H = 0x735c4000, + OPC_VSRARNI_H_W = 0x735c8000, + OPC_VSRARNI_W_D = 0x735d0000, + OPC_VSRARNI_D_Q = 0x735e0000, + OPC_VSSRANI_B_H = 0x73604000, + OPC_VSSRANI_H_W = 0x73608000, + OPC_VSSRANI_W_D = 0x73610000, + OPC_VSSRANI_D_Q = 0x73620000, + OPC_VSSRANI_BU_H = 0x73644000, + OPC_VSSRANI_HU_W = 0x73648000, + OPC_VSSRANI_WU_D = 0x73650000, + OPC_VSSRANI_DU_Q = 0x73660000, + OPC_VSSRARNI_B_H = 0x73684000, + OPC_VSSRARNI_H_W = 0x73688000, + OPC_VSSRARNI_W_D = 0x73690000, + OPC_VSSRARNI_D_Q = 0x736a0000, + OPC_VSSRARNI_BU_H = 0x736c4000, + OPC_VSSRARNI_HU_W = 0x736c8000, + OPC_VSSRARNI_WU_D = 0x736d0000, + OPC_VSSRARNI_DU_Q = 0x736e0000, + OPC_VEXTRINS_D = 0x73800000, + OPC_VEXTRINS_W = 0x73840000, + OPC_VEXTRINS_H = 0x73880000, + OPC_VEXTRINS_B = 0x738c0000, + OPC_VSHUF4I_B = 0x73900000, + OPC_VSHUF4I_H = 0x73940000, + OPC_VSHUF4I_W = 0x73980000, + OPC_VSHUF4I_D = 0x739c0000, + OPC_VBITSELI_B = 0x73c40000, + OPC_VANDI_B = 0x73d00000, + OPC_VORI_B = 0x73d40000, + OPC_VXORI_B = 0x73d80000, + OPC_VNORI_B = 0x73dc0000, + OPC_VLDI = 0x73e00000, + OPC_VPERMI_W = 0x73e40000, } LoongArchInsn; static int32_t __attribute__((unused)) @@ -133,6 +845,13 @@ encode_djk_slots(LoongArchInsn opc, uint32_t d, uint32_t j, uint32_t k) return opc | d | j << 5 | k << 10; } +static int32_t __attribute__((unused)) +encode_djka_slots(LoongArchInsn opc, uint32_t d, uint32_t j, uint32_t k, + uint32_t a) +{ + return opc | d | j << 5 | k << 10 | a << 15; +} + static int32_t __attribute__((unused)) encode_djkm_slots(LoongArchInsn opc, uint32_t d, uint32_t j, uint32_t k, uint32_t m) @@ -140,12 +859,27 @@ encode_djkm_slots(LoongArchInsn opc, uint32_t d, uint32_t j, uint32_t k, return opc | d | j << 5 | k << 10 | m << 16; } +static int32_t __attribute__((unused)) +encode_djkn_slots(LoongArchInsn opc, uint32_t d, uint32_t j, uint32_t k, + uint32_t n) +{ + return opc | d | j << 5 | k << 10 | n << 18; +} + static int32_t __attribute__((unused)) encode_dk_slots(LoongArchInsn opc, uint32_t d, uint32_t k) { return opc | d | k << 10; } +static int32_t __attribute__((unused)) +encode_cdvj_insn(LoongArchInsn opc, TCGReg cd, TCGReg vj) +{ + tcg_debug_assert(cd >= 0 && cd <= 0x7); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + return encode_dj_slots(opc, cd, vj & 0x1f); +} + static int32_t __attribute__((unused)) encode_dj_insn(LoongArchInsn opc, TCGReg d, TCGReg j) { @@ -238,6 +972,42 @@ encode_dsj20_insn(LoongArchInsn opc, TCGReg d, int32_t sj20) return encode_dj_slots(opc, d, sj20 & 0xfffff); } +static int32_t __attribute__((unused)) +encode_dvjuk1_insn(LoongArchInsn opc, TCGReg d, TCGReg vj, uint32_t uk1) +{ + tcg_debug_assert(d >= 0 && d <= 0x1f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk1 <= 0x1); + return encode_djk_slots(opc, d, vj & 0x1f, uk1); +} + +static int32_t __attribute__((unused)) +encode_dvjuk2_insn(LoongArchInsn opc, TCGReg d, TCGReg vj, uint32_t uk2) +{ + tcg_debug_assert(d >= 0 && d <= 0x1f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk2 <= 0x3); + return 
encode_djk_slots(opc, d, vj & 0x1f, uk2); +} + +static int32_t __attribute__((unused)) +encode_dvjuk3_insn(LoongArchInsn opc, TCGReg d, TCGReg vj, uint32_t uk3) +{ + tcg_debug_assert(d >= 0 && d <= 0x1f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk3 <= 0x7); + return encode_djk_slots(opc, d, vj & 0x1f, uk3); +} + +static int32_t __attribute__((unused)) +encode_dvjuk4_insn(LoongArchInsn opc, TCGReg d, TCGReg vj, uint32_t uk4) +{ + tcg_debug_assert(d >= 0 && d <= 0x1f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk4 <= 0xf); + return encode_djk_slots(opc, d, vj & 0x1f, uk4); +} + static int32_t __attribute__((unused)) encode_sd10k16_insn(LoongArchInsn opc, int32_t sd10k16) { @@ -252,6 +1022,265 @@ encode_ud15_insn(LoongArchInsn opc, uint32_t ud15) return encode_d_slot(opc, ud15); } +static int32_t __attribute__((unused)) +encode_vdj_insn(LoongArchInsn opc, TCGReg vd, TCGReg j) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + return encode_dj_slots(opc, vd & 0x1f, j); +} + +static int32_t __attribute__((unused)) +encode_vdjk_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, TCGReg k) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(k >= 0 && k <= 0x1f); + return encode_djk_slots(opc, vd & 0x1f, j, k); +} + +static int32_t __attribute__((unused)) +encode_vdjsk10_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, int32_t sk10) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(sk10 >= -0x200 && sk10 <= 0x1ff); + return encode_djk_slots(opc, vd & 0x1f, j, sk10 & 0x3ff); +} + +static int32_t __attribute__((unused)) +encode_vdjsk11_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, int32_t sk11) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(sk11 >= -0x400 && sk11 <= 0x3ff); + return encode_djk_slots(opc, vd & 0x1f, j, sk11 & 0x7ff); +} + +static int32_t __attribute__((unused)) +encode_vdjsk12_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, int32_t sk12) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(sk12 >= -0x800 && sk12 <= 0x7ff); + return encode_djk_slots(opc, vd & 0x1f, j, sk12 & 0xfff); +} + +static int32_t __attribute__((unused)) +encode_vdjsk8un1_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, int32_t sk8, + uint32_t un1) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(sk8 >= -0x80 && sk8 <= 0x7f); + tcg_debug_assert(un1 <= 0x1); + return encode_djkn_slots(opc, vd & 0x1f, j, sk8 & 0xff, un1); +} + +static int32_t __attribute__((unused)) +encode_vdjsk8un2_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, int32_t sk8, + uint32_t un2) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(sk8 >= -0x80 && sk8 <= 0x7f); + tcg_debug_assert(un2 <= 0x3); + return encode_djkn_slots(opc, vd & 0x1f, j, sk8 & 0xff, un2); +} + +static int32_t __attribute__((unused)) +encode_vdjsk8un3_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, int32_t sk8, + uint32_t un3) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(sk8 >= -0x80 && sk8 <= 0x7f); + tcg_debug_assert(un3 <= 0x7); + return encode_djkn_slots(opc, vd & 0x1f, j, sk8 & 0xff, un3); +} + +static int32_t __attribute__((unused)) +encode_vdjsk8un4_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, 
int32_t sk8, + uint32_t un4) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(sk8 >= -0x80 && sk8 <= 0x7f); + tcg_debug_assert(un4 <= 0xf); + return encode_djkn_slots(opc, vd & 0x1f, j, sk8 & 0xff, un4); +} + +static int32_t __attribute__((unused)) +encode_vdjsk9_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, int32_t sk9) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(sk9 >= -0x100 && sk9 <= 0xff); + return encode_djk_slots(opc, vd & 0x1f, j, sk9 & 0x1ff); +} + +static int32_t __attribute__((unused)) +encode_vdjuk1_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, uint32_t uk1) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(uk1 <= 0x1); + return encode_djk_slots(opc, vd & 0x1f, j, uk1); +} + +static int32_t __attribute__((unused)) +encode_vdjuk2_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, uint32_t uk2) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(uk2 <= 0x3); + return encode_djk_slots(opc, vd & 0x1f, j, uk2); +} + +static int32_t __attribute__((unused)) +encode_vdjuk3_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, uint32_t uk3) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(uk3 <= 0x7); + return encode_djk_slots(opc, vd & 0x1f, j, uk3); +} + +static int32_t __attribute__((unused)) +encode_vdjuk4_insn(LoongArchInsn opc, TCGReg vd, TCGReg j, uint32_t uk4) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(j >= 0 && j <= 0x1f); + tcg_debug_assert(uk4 <= 0xf); + return encode_djk_slots(opc, vd & 0x1f, j, uk4); +} + +static int32_t __attribute__((unused)) +encode_vdsj13_insn(LoongArchInsn opc, TCGReg vd, int32_t sj13) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(sj13 >= -0x1000 && sj13 <= 0xfff); + return encode_dj_slots(opc, vd & 0x1f, sj13 & 0x1fff); +} + +static int32_t __attribute__((unused)) +encode_vdvj_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + return encode_dj_slots(opc, vd & 0x1f, vj & 0x1f); +} + +static int32_t __attribute__((unused)) +encode_vdvjk_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, TCGReg k) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(k >= 0 && k <= 0x1f); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, k); +} + +static int32_t __attribute__((unused)) +encode_vdvjsk5_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(sk5 >= -0x10 && sk5 <= 0xf); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, sk5 & 0x1f); +} + +static int32_t __attribute__((unused)) +encode_vdvjuk1_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, uint32_t uk1) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk1 <= 0x1); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, uk1); +} + +static int32_t __attribute__((unused)) +encode_vdvjuk2_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, uint32_t uk2) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk2 <= 0x3); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, uk2); +} + +static int32_t 
__attribute__((unused)) +encode_vdvjuk3_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk3 <= 0x7); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, uk3); +} + +static int32_t __attribute__((unused)) +encode_vdvjuk4_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk4 <= 0xf); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, uk4); +} + +static int32_t __attribute__((unused)) +encode_vdvjuk5_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk5 <= 0x1f); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, uk5); +} + +static int32_t __attribute__((unused)) +encode_vdvjuk6_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk6 <= 0x3f); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, uk6); +} + +static int32_t __attribute__((unused)) +encode_vdvjuk7_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk7 <= 0x7f); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, uk7); +} + +static int32_t __attribute__((unused)) +encode_vdvjuk8_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(uk8 <= 0xff); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, uk8); +} + +static int32_t __attribute__((unused)) +encode_vdvjvk_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(vk >= 0x20 && vk <= 0x3f); + return encode_djk_slots(opc, vd & 0x1f, vj & 0x1f, vk & 0x1f); +} + +static int32_t __attribute__((unused)) +encode_vdvjvkva_insn(LoongArchInsn opc, TCGReg vd, TCGReg vj, TCGReg vk, + TCGReg va) +{ + tcg_debug_assert(vd >= 0x20 && vd <= 0x3f); + tcg_debug_assert(vj >= 0x20 && vj <= 0x3f); + tcg_debug_assert(vk >= 0x20 && vk <= 0x3f); + tcg_debug_assert(va >= 0x20 && va <= 0x3f); + return encode_djka_slots(opc, vd & 0x1f, vj & 0x1f, vk & 0x1f, va & 0x1f); +} + /* Emits the `clz.w d, j` instruction. */ static void __attribute__((unused)) tcg_out_opc_clz_w(TCGContext *s, TCGReg d, TCGReg j) @@ -711,277 +1740,5265 @@ tcg_out_opc_xori(TCGContext *s, TCGReg d, TCGReg j, uint32_t uk12) tcg_out32(s, encode_djuk12_insn(OPC_XORI, d, j, uk12)); } -/* Emits the `addu16i.d d, j, sk16` instruction. */ +/* Emits the `vfmadd.s vd, vj, vk, va` instruction. */ static void __attribute__((unused)) -tcg_out_opc_addu16i_d(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +tcg_out_opc_vfmadd_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk, TCGReg va) { - tcg_out32(s, encode_djsk16_insn(OPC_ADDU16I_D, d, j, sk16)); + tcg_out32(s, encode_vdvjvkva_insn(OPC_VFMADD_S, vd, vj, vk, va)); } -/* Emits the `lu12i.w d, sj20` instruction. */ +/* Emits the `vfmadd.d vd, vj, vk, va` instruction. 
*/ static void __attribute__((unused)) -tcg_out_opc_lu12i_w(TCGContext *s, TCGReg d, int32_t sj20) +tcg_out_opc_vfmadd_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk, TCGReg va) { - tcg_out32(s, encode_dsj20_insn(OPC_LU12I_W, d, sj20)); + tcg_out32(s, encode_vdvjvkva_insn(OPC_VFMADD_D, vd, vj, vk, va)); } -/* Emits the `cu32i.d d, sj20` instruction. */ +/* Emits the `vfmsub.s vd, vj, vk, va` instruction. */ static void __attribute__((unused)) -tcg_out_opc_cu32i_d(TCGContext *s, TCGReg d, int32_t sj20) +tcg_out_opc_vfmsub_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk, TCGReg va) { - tcg_out32(s, encode_dsj20_insn(OPC_CU32I_D, d, sj20)); + tcg_out32(s, encode_vdvjvkva_insn(OPC_VFMSUB_S, vd, vj, vk, va)); } -/* Emits the `pcaddu2i d, sj20` instruction. */ +/* Emits the `vfmsub.d vd, vj, vk, va` instruction. */ static void __attribute__((unused)) -tcg_out_opc_pcaddu2i(TCGContext *s, TCGReg d, int32_t sj20) +tcg_out_opc_vfmsub_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk, TCGReg va) { - tcg_out32(s, encode_dsj20_insn(OPC_PCADDU2I, d, sj20)); + tcg_out32(s, encode_vdvjvkva_insn(OPC_VFMSUB_D, vd, vj, vk, va)); } -/* Emits the `pcalau12i d, sj20` instruction. */ +/* Emits the `vfnmadd.s vd, vj, vk, va` instruction. */ static void __attribute__((unused)) -tcg_out_opc_pcalau12i(TCGContext *s, TCGReg d, int32_t sj20) +tcg_out_opc_vfnmadd_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk, TCGReg va) { - tcg_out32(s, encode_dsj20_insn(OPC_PCALAU12I, d, sj20)); + tcg_out32(s, encode_vdvjvkva_insn(OPC_VFNMADD_S, vd, vj, vk, va)); } -/* Emits the `pcaddu12i d, sj20` instruction. */ +/* Emits the `vfnmadd.d vd, vj, vk, va` instruction. */ static void __attribute__((unused)) -tcg_out_opc_pcaddu12i(TCGContext *s, TCGReg d, int32_t sj20) +tcg_out_opc_vfnmadd_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk, TCGReg va) { - tcg_out32(s, encode_dsj20_insn(OPC_PCADDU12I, d, sj20)); + tcg_out32(s, encode_vdvjvkva_insn(OPC_VFNMADD_D, vd, vj, vk, va)); } -/* Emits the `pcaddu18i d, sj20` instruction. */ +/* Emits the `vfnmsub.s vd, vj, vk, va` instruction. */ static void __attribute__((unused)) -tcg_out_opc_pcaddu18i(TCGContext *s, TCGReg d, int32_t sj20) +tcg_out_opc_vfnmsub_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk, TCGReg va) { - tcg_out32(s, encode_dsj20_insn(OPC_PCADDU18I, d, sj20)); + tcg_out32(s, encode_vdvjvkva_insn(OPC_VFNMSUB_S, vd, vj, vk, va)); } -/* Emits the `ld.b d, j, sk12` instruction. */ +/* Emits the `vfnmsub.d vd, vj, vk, va` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ld_b(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfnmsub_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk, TCGReg va) { - tcg_out32(s, encode_djsk12_insn(OPC_LD_B, d, j, sk12)); + tcg_out32(s, encode_vdvjvkva_insn(OPC_VFNMSUB_D, vd, vj, vk, va)); } -/* Emits the `ld.h d, j, sk12` instruction. */ +/* Emits the `vfcmp.caf.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ld_h(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfcmp_caf_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk12_insn(OPC_LD_H, d, j, sk12)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CAF_S, vd, vj, vk)); } -/* Emits the `ld.w d, j, sk12` instruction. */ +/* Emits the `vfcmp.saf.s vd, vj, vk` instruction. 
*/ static void __attribute__((unused)) -tcg_out_opc_ld_w(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfcmp_saf_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk12_insn(OPC_LD_W, d, j, sk12)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SAF_S, vd, vj, vk)); } -/* Emits the `ld.d d, j, sk12` instruction. */ +/* Emits the `vfcmp.clt.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ld_d(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfcmp_clt_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk12_insn(OPC_LD_D, d, j, sk12)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CLT_S, vd, vj, vk)); } -/* Emits the `st.b d, j, sk12` instruction. */ +/* Emits the `vfcmp.slt.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_st_b(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfcmp_slt_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk12_insn(OPC_ST_B, d, j, sk12)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SLT_S, vd, vj, vk)); } -/* Emits the `st.h d, j, sk12` instruction. */ +/* Emits the `vfcmp.ceq.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_st_h(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfcmp_ceq_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk12_insn(OPC_ST_H, d, j, sk12)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CEQ_S, vd, vj, vk)); } -/* Emits the `st.w d, j, sk12` instruction. */ +/* Emits the `vfcmp.seq.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_st_w(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfcmp_seq_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk12_insn(OPC_ST_W, d, j, sk12)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SEQ_S, vd, vj, vk)); } -/* Emits the `st.d d, j, sk12` instruction. */ +/* Emits the `vfcmp.cle.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_st_d(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfcmp_cle_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk12_insn(OPC_ST_D, d, j, sk12)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CLE_S, vd, vj, vk)); } -/* Emits the `ld.bu d, j, sk12` instruction. */ +/* Emits the `vfcmp.sle.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ld_bu(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfcmp_sle_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk12_insn(OPC_LD_BU, d, j, sk12)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SLE_S, vd, vj, vk)); } -/* Emits the `ld.hu d, j, sk12` instruction. */ +/* Emits the `vfcmp.cun.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ld_hu(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfcmp_cun_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk12_insn(OPC_LD_HU, d, j, sk12)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CUN_S, vd, vj, vk)); } -/* Emits the `ld.wu d, j, sk12` instruction. */ +/* Emits the `vfcmp.sun.s vd, vj, vk` instruction. 
*/ static void __attribute__((unused)) -tcg_out_opc_ld_wu(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12) +tcg_out_opc_vfcmp_sun_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk12_insn(OPC_LD_WU, d, j, sk12)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SUN_S, vd, vj, vk)); } -/* Emits the `ldx.b d, j, k` instruction. */ +/* Emits the `vfcmp.cult.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ldx_b(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_cult_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_LDX_B, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CULT_S, vd, vj, vk)); } -/* Emits the `ldx.h d, j, k` instruction. */ +/* Emits the `vfcmp.sult.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ldx_h(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_sult_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_LDX_H, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SULT_S, vd, vj, vk)); } -/* Emits the `ldx.w d, j, k` instruction. */ +/* Emits the `vfcmp.cueq.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ldx_w(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_cueq_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_LDX_W, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CUEQ_S, vd, vj, vk)); } -/* Emits the `ldx.d d, j, k` instruction. */ +/* Emits the `vfcmp.sueq.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ldx_d(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_sueq_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_LDX_D, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SUEQ_S, vd, vj, vk)); } -/* Emits the `stx.b d, j, k` instruction. */ +/* Emits the `vfcmp.cule.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_stx_b(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_cule_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_STX_B, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CULE_S, vd, vj, vk)); } -/* Emits the `stx.h d, j, k` instruction. */ +/* Emits the `vfcmp.sule.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_stx_h(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_sule_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_STX_H, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SULE_S, vd, vj, vk)); } -/* Emits the `stx.w d, j, k` instruction. */ +/* Emits the `vfcmp.cne.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_stx_w(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_cne_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_STX_W, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CNE_S, vd, vj, vk)); } -/* Emits the `stx.d d, j, k` instruction. */ +/* Emits the `vfcmp.sne.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_stx_d(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_sne_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_STX_D, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SNE_S, vd, vj, vk)); } -/* Emits the `ldx.bu d, j, k` instruction. 
*/ +/* Emits the `vfcmp.cor.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ldx_bu(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_cor_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_LDX_BU, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_COR_S, vd, vj, vk)); } -/* Emits the `ldx.hu d, j, k` instruction. */ +/* Emits the `vfcmp.sor.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ldx_hu(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_sor_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_LDX_HU, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SOR_S, vd, vj, vk)); } -/* Emits the `ldx.wu d, j, k` instruction. */ +/* Emits the `vfcmp.cune.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ldx_wu(TCGContext *s, TCGReg d, TCGReg j, TCGReg k) +tcg_out_opc_vfcmp_cune_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djk_insn(OPC_LDX_WU, d, j, k)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CUNE_S, vd, vj, vk)); } -/* Emits the `dbar ud15` instruction. */ +/* Emits the `vfcmp.sune.s vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_dbar(TCGContext *s, uint32_t ud15) +tcg_out_opc_vfcmp_sune_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_ud15_insn(OPC_DBAR, ud15)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SUNE_S, vd, vj, vk)); } -/* Emits the `jirl d, j, sk16` instruction. */ +/* Emits the `vfcmp.caf.d vd, vj, vk` instruction. */ static void __attribute__((unused)) -tcg_out_opc_jirl(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +tcg_out_opc_vfcmp_caf_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) { - tcg_out32(s, encode_djsk16_insn(OPC_JIRL, d, j, sk16)); + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CAF_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.saf.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_saf_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SAF_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.clt.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_clt_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CLT_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.slt.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_slt_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SLT_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.ceq.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_ceq_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CEQ_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.seq.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_seq_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SEQ_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.cle.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_cle_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CLE_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.sle.d vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_sle_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SLE_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.cun.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_cun_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CUN_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.sun.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_sun_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SUN_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.cult.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_cult_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CULT_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.sult.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_sult_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SULT_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.cueq.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_cueq_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CUEQ_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.sueq.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_sueq_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SUEQ_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.cule.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_cule_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CULE_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.sule.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_sule_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SULE_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.cne.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_cne_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CNE_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.sne.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_sne_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SNE_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.cor.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_cor_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_COR_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.sor.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_sor_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SOR_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.cune.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcmp_cune_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_CUNE_D, vd, vj, vk)); +} + +/* Emits the `vfcmp.sune.d vd, vj, vk` instruction. 
*/
+static void __attribute__((unused))
+tcg_out_opc_vfcmp_sune_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk)
+{
+    tcg_out32(s, encode_vdvjvk_insn(OPC_VFCMP_SUNE_D, vd, vj, vk));
+}
+
+/* Emits the `vbitsel.v vd, vj, vk, va` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vbitsel_v(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk, TCGReg va)
+{
+    tcg_out32(s, encode_vdvjvkva_insn(OPC_VBITSEL_V, vd, vj, vk, va));
+}
+
+/* Emits the `vshuf.b vd, vj, vk, va` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vshuf_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk, TCGReg va)
+{
+    tcg_out32(s, encode_vdvjvkva_insn(OPC_VSHUF_B, vd, vj, vk, va));
+}
+
+/* Emits the `addu16i.d d, j, sk16` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_addu16i_d(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16)
+{
+    tcg_out32(s, encode_djsk16_insn(OPC_ADDU16I_D, d, j, sk16));
+}
+
+/* Emits the `lu12i.w d, sj20` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_lu12i_w(TCGContext *s, TCGReg d, int32_t sj20)
+{
+    tcg_out32(s, encode_dsj20_insn(OPC_LU12I_W, d, sj20));
+}
+
+/* Emits the `cu32i.d d, sj20` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_cu32i_d(TCGContext *s, TCGReg d, int32_t sj20)
+{
+    tcg_out32(s, encode_dsj20_insn(OPC_CU32I_D, d, sj20));
+}
+
+/* Emits the `pcaddu2i d, sj20` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_pcaddu2i(TCGContext *s, TCGReg d, int32_t sj20)
+{
+    tcg_out32(s, encode_dsj20_insn(OPC_PCADDU2I, d, sj20));
+}
+
+/* Emits the `pcalau12i d, sj20` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_pcalau12i(TCGContext *s, TCGReg d, int32_t sj20)
+{
+    tcg_out32(s, encode_dsj20_insn(OPC_PCALAU12I, d, sj20));
+}
+
+/* Emits the `pcaddu12i d, sj20` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_pcaddu12i(TCGContext *s, TCGReg d, int32_t sj20)
+{
+    tcg_out32(s, encode_dsj20_insn(OPC_PCADDU12I, d, sj20));
+}
+
+/* Emits the `pcaddu18i d, sj20` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_pcaddu18i(TCGContext *s, TCGReg d, int32_t sj20)
+{
+    tcg_out32(s, encode_dsj20_insn(OPC_PCADDU18I, d, sj20));
+}
+
+/* Emits the `ld.b d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ld_b(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_LD_B, d, j, sk12));
+}
+
+/* Emits the `ld.h d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ld_h(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_LD_H, d, j, sk12));
+}
+
+/* Emits the `ld.w d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ld_w(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_LD_W, d, j, sk12));
+}
+
+/* Emits the `ld.d d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ld_d(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_LD_D, d, j, sk12));
+}
+
+/* Emits the `st.b d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_st_b(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_ST_B, d, j, sk12));
+}
+
+/* Emits the `st.h d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_st_h(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_ST_H, d, j, sk12));
+}
+
+/* Emits the `st.w d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_st_w(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_ST_W, d, j, sk12));
+}
+
+/* Emits the `st.d d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_st_d(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_ST_D, d, j, sk12));
+}
+
+/* Emits the `ld.bu d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ld_bu(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_LD_BU, d, j, sk12));
+}
+
+/* Emits the `ld.hu d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ld_hu(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_LD_HU, d, j, sk12));
+}
+
+/* Emits the `ld.wu d, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ld_wu(TCGContext *s, TCGReg d, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_djsk12_insn(OPC_LD_WU, d, j, sk12));
+}
+
+/* Emits the `vld vd, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vld(TCGContext *s, TCGReg vd, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_vdjsk12_insn(OPC_VLD, vd, j, sk12));
+}
+
+/* Emits the `vst vd, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vst(TCGContext *s, TCGReg vd, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_vdjsk12_insn(OPC_VST, vd, j, sk12));
+}
+
+/* Emits the `vldrepl.d vd, j, sk9` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vldrepl_d(TCGContext *s, TCGReg vd, TCGReg j, int32_t sk9)
+{
+    tcg_out32(s, encode_vdjsk9_insn(OPC_VLDREPL_D, vd, j, sk9));
+}
+
+/* Emits the `vldrepl.w vd, j, sk10` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vldrepl_w(TCGContext *s, TCGReg vd, TCGReg j, int32_t sk10)
+{
+    tcg_out32(s, encode_vdjsk10_insn(OPC_VLDREPL_W, vd, j, sk10));
+}
+
+/* Emits the `vldrepl.h vd, j, sk11` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vldrepl_h(TCGContext *s, TCGReg vd, TCGReg j, int32_t sk11)
+{
+    tcg_out32(s, encode_vdjsk11_insn(OPC_VLDREPL_H, vd, j, sk11));
+}
+
+/* Emits the `vldrepl.b vd, j, sk12` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vldrepl_b(TCGContext *s, TCGReg vd, TCGReg j, int32_t sk12)
+{
+    tcg_out32(s, encode_vdjsk12_insn(OPC_VLDREPL_B, vd, j, sk12));
+}
+
+/* Emits the `vstelm.d vd, j, sk8, un1` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vstelm_d(TCGContext *s, TCGReg vd, TCGReg j, int32_t sk8,
+                     uint32_t un1)
+{
+    tcg_out32(s, encode_vdjsk8un1_insn(OPC_VSTELM_D, vd, j, sk8, un1));
+}
+
+/* Emits the `vstelm.w vd, j, sk8, un2` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vstelm_w(TCGContext *s, TCGReg vd, TCGReg j, int32_t sk8,
+                     uint32_t un2)
+{
+    tcg_out32(s, encode_vdjsk8un2_insn(OPC_VSTELM_W, vd, j, sk8, un2));
+}
+
+/* Emits the `vstelm.h vd, j, sk8, un3` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vstelm_h(TCGContext *s, TCGReg vd, TCGReg j, int32_t sk8,
+                     uint32_t un3)
+{
+    tcg_out32(s, encode_vdjsk8un3_insn(OPC_VSTELM_H, vd, j, sk8, un3));
+}
+
+/* Emits the `vstelm.b vd, j, sk8, un4` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vstelm_b(TCGContext *s, TCGReg vd, TCGReg j, int32_t sk8,
+                     uint32_t un4)
+{
+    tcg_out32(s, encode_vdjsk8un4_insn(OPC_VSTELM_B, vd, j, sk8, un4));
+}
+
+/* Emits the `ldx.b d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ldx_b(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_LDX_B, d, j, k));
+}
+
+/* Emits the `ldx.h d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ldx_h(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_LDX_H, d, j, k));
+}
+
+/* Emits the `ldx.w d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ldx_w(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_LDX_W, d, j, k));
+}
+
+/* Emits the `ldx.d d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ldx_d(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_LDX_D, d, j, k));
+}
+
+/* Emits the `stx.b d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_stx_b(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_STX_B, d, j, k));
+}
+
+/* Emits the `stx.h d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_stx_h(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_STX_H, d, j, k));
+}
+
+/* Emits the `stx.w d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_stx_w(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_STX_W, d, j, k));
+}
+
+/* Emits the `stx.d d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_stx_d(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_STX_D, d, j, k));
+}
+
+/* Emits the `ldx.bu d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ldx_bu(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_LDX_BU, d, j, k));
+}
+
+/* Emits the `ldx.hu d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ldx_hu(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_LDX_HU, d, j, k));
+}
+
+/* Emits the `ldx.wu d, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_ldx_wu(TCGContext *s, TCGReg d, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_djk_insn(OPC_LDX_WU, d, j, k));
+}
+
+/* Emits the `vldx vd, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vldx(TCGContext *s, TCGReg vd, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_vdjk_insn(OPC_VLDX, vd, j, k));
+}
+
+/* Emits the `vstx vd, j, k` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_vstx(TCGContext *s, TCGReg vd, TCGReg j, TCGReg k)
+{
+    tcg_out32(s, encode_vdjk_insn(OPC_VSTX, vd, j, k));
+}
+
+/* Emits the `dbar ud15` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_dbar(TCGContext *s, uint32_t ud15)
+{
+    tcg_out32(s, encode_ud15_insn(OPC_DBAR, ud15));
+}
+
+/* Emits the `jirl d, j, sk16` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_jirl(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16)
+{
+    tcg_out32(s, encode_djsk16_insn(OPC_JIRL, d, j, sk16));
 }
 
 /* Emits the `b sd10k16` instruction. */
 static void __attribute__((unused))
-tcg_out_opc_b(TCGContext *s, int32_t sd10k16)
+tcg_out_opc_b(TCGContext *s, int32_t sd10k16)
+{
+    tcg_out32(s, encode_sd10k16_insn(OPC_B, sd10k16));
+}
+
+/* Emits the `bl sd10k16` instruction. */
+static void __attribute__((unused))
+tcg_out_opc_bl(TCGContext *s, int32_t sd10k16)
+{
+    tcg_out32(s, encode_sd10k16_insn(OPC_BL, sd10k16));
+}
+
+/* Emits the `beq d, j, sk16` instruction.
*/ +static void __attribute__((unused)) +tcg_out_opc_beq(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +{ + tcg_out32(s, encode_djsk16_insn(OPC_BEQ, d, j, sk16)); +} + +/* Emits the `bne d, j, sk16` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_bne(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +{ + tcg_out32(s, encode_djsk16_insn(OPC_BNE, d, j, sk16)); +} + +/* Emits the `bgt d, j, sk16` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_bgt(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +{ + tcg_out32(s, encode_djsk16_insn(OPC_BGT, d, j, sk16)); +} + +/* Emits the `ble d, j, sk16` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_ble(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +{ + tcg_out32(s, encode_djsk16_insn(OPC_BLE, d, j, sk16)); +} + +/* Emits the `bgtu d, j, sk16` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_bgtu(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +{ + tcg_out32(s, encode_djsk16_insn(OPC_BGTU, d, j, sk16)); +} + +/* Emits the `bleu d, j, sk16` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_bleu(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +{ + tcg_out32(s, encode_djsk16_insn(OPC_BLEU, d, j, sk16)); +} + +/* Emits the `vseq.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vseq_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSEQ_B, vd, vj, vk)); +} + +/* Emits the `vseq.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vseq_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSEQ_H, vd, vj, vk)); +} + +/* Emits the `vseq.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vseq_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSEQ_W, vd, vj, vk)); +} + +/* Emits the `vseq.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vseq_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSEQ_D, vd, vj, vk)); +} + +/* Emits the `vsle.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsle_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLE_B, vd, vj, vk)); +} + +/* Emits the `vsle.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsle_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLE_H, vd, vj, vk)); +} + +/* Emits the `vsle.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsle_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLE_W, vd, vj, vk)); +} + +/* Emits the `vsle.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsle_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLE_D, vd, vj, vk)); +} + +/* Emits the `vsle.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsle_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLE_BU, vd, vj, vk)); +} + +/* Emits the `vsle.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsle_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLE_HU, vd, vj, vk)); +} + +/* Emits the `vsle.wu vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vsle_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLE_WU, vd, vj, vk)); +} + +/* Emits the `vsle.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsle_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLE_DU, vd, vj, vk)); +} + +/* Emits the `vslt.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslt_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLT_B, vd, vj, vk)); +} + +/* Emits the `vslt.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslt_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLT_H, vd, vj, vk)); +} + +/* Emits the `vslt.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslt_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLT_W, vd, vj, vk)); +} + +/* Emits the `vslt.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslt_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLT_D, vd, vj, vk)); +} + +/* Emits the `vslt.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslt_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLT_BU, vd, vj, vk)); +} + +/* Emits the `vslt.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslt_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLT_HU, vd, vj, vk)); +} + +/* Emits the `vslt.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslt_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLT_WU, vd, vj, vk)); +} + +/* Emits the `vslt.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslt_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLT_DU, vd, vj, vk)); +} + +/* Emits the `vadd.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vadd_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADD_B, vd, vj, vk)); +} + +/* Emits the `vadd.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vadd_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADD_H, vd, vj, vk)); +} + +/* Emits the `vadd.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vadd_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADD_W, vd, vj, vk)); +} + +/* Emits the `vadd.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vadd_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADD_D, vd, vj, vk)); +} + +/* Emits the `vsub.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsub_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUB_B, vd, vj, vk)); +} + +/* Emits the `vsub.h vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vsub_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUB_H, vd, vj, vk)); +} + +/* Emits the `vsub.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsub_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUB_W, vd, vj, vk)); +} + +/* Emits the `vsub.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsub_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUB_D, vd, vj, vk)); +} + +/* Emits the `vaddwev.h.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_h_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_H_B, vd, vj, vk)); +} + +/* Emits the `vaddwev.w.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_w_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_W_H, vd, vj, vk)); +} + +/* Emits the `vaddwev.d.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_d_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_D_W, vd, vj, vk)); +} + +/* Emits the `vaddwev.q.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_q_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_Q_D, vd, vj, vk)); +} + +/* Emits the `vsubwev.h.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwev_h_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWEV_H_B, vd, vj, vk)); +} + +/* Emits the `vsubwev.w.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwev_w_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWEV_W_H, vd, vj, vk)); +} + +/* Emits the `vsubwev.d.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwev_d_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWEV_D_W, vd, vj, vk)); +} + +/* Emits the `vsubwev.q.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwev_q_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWEV_Q_D, vd, vj, vk)); +} + +/* Emits the `vaddwod.h.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_h_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_H_B, vd, vj, vk)); +} + +/* Emits the `vaddwod.w.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_w_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_W_H, vd, vj, vk)); +} + +/* Emits the `vaddwod.d.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_d_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_D_W, vd, vj, vk)); +} + +/* Emits the `vaddwod.q.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_q_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_Q_D, vd, vj, vk)); +} + +/* Emits the `vsubwod.h.b vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vsubwod_h_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWOD_H_B, vd, vj, vk)); +} + +/* Emits the `vsubwod.w.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwod_w_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWOD_W_H, vd, vj, vk)); +} + +/* Emits the `vsubwod.d.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwod_d_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWOD_D_W, vd, vj, vk)); +} + +/* Emits the `vsubwod.q.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwod_q_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWOD_Q_D, vd, vj, vk)); +} + +/* Emits the `vaddwev.h.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_h_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_H_BU, vd, vj, vk)); +} + +/* Emits the `vaddwev.w.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_w_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_W_HU, vd, vj, vk)); +} + +/* Emits the `vaddwev.d.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_d_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_D_WU, vd, vj, vk)); +} + +/* Emits the `vaddwev.q.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_q_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_Q_DU, vd, vj, vk)); +} + +/* Emits the `vsubwev.h.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwev_h_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWEV_H_BU, vd, vj, vk)); +} + +/* Emits the `vsubwev.w.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwev_w_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWEV_W_HU, vd, vj, vk)); +} + +/* Emits the `vsubwev.d.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwev_d_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWEV_D_WU, vd, vj, vk)); +} + +/* Emits the `vsubwev.q.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwev_q_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWEV_Q_DU, vd, vj, vk)); +} + +/* Emits the `vaddwod.h.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_h_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_H_BU, vd, vj, vk)); +} + +/* Emits the `vaddwod.w.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_w_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_W_HU, vd, vj, vk)); +} + +/* Emits the `vaddwod.d.wu vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_d_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_D_WU, vd, vj, vk)); +} + +/* Emits the `vaddwod.q.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_q_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_Q_DU, vd, vj, vk)); +} + +/* Emits the `vsubwod.h.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwod_h_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWOD_H_BU, vd, vj, vk)); +} + +/* Emits the `vsubwod.w.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwod_w_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWOD_W_HU, vd, vj, vk)); +} + +/* Emits the `vsubwod.d.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwod_d_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWOD_D_WU, vd, vj, vk)); +} + +/* Emits the `vsubwod.q.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubwod_q_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUBWOD_Q_DU, vd, vj, vk)); +} + +/* Emits the `vaddwev.h.bu.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_h_bu_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_H_BU_B, vd, vj, vk)); +} + +/* Emits the `vaddwev.w.hu.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_w_hu_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_W_HU_H, vd, vj, vk)); +} + +/* Emits the `vaddwev.d.wu.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_d_wu_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_D_WU_W, vd, vj, vk)); +} + +/* Emits the `vaddwev.q.du.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwev_q_du_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWEV_Q_DU_D, vd, vj, vk)); +} + +/* Emits the `vaddwod.h.bu.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_h_bu_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_H_BU_B, vd, vj, vk)); +} + +/* Emits the `vaddwod.w.hu.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_w_hu_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_W_HU_H, vd, vj, vk)); +} + +/* Emits the `vaddwod.d.wu.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_d_wu_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_D_WU_W, vd, vj, vk)); +} + +/* Emits the `vaddwod.q.du.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddwod_q_du_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDWOD_Q_DU_D, vd, vj, vk)); +} + +/* Emits the `vsadd.b vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vsadd_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSADD_B, vd, vj, vk)); +} + +/* Emits the `vsadd.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsadd_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSADD_H, vd, vj, vk)); +} + +/* Emits the `vsadd.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsadd_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSADD_W, vd, vj, vk)); +} + +/* Emits the `vsadd.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsadd_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSADD_D, vd, vj, vk)); +} + +/* Emits the `vssub.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssub_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSUB_B, vd, vj, vk)); +} + +/* Emits the `vssub.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssub_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSUB_H, vd, vj, vk)); +} + +/* Emits the `vssub.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssub_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSUB_W, vd, vj, vk)); +} + +/* Emits the `vssub.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssub_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSUB_D, vd, vj, vk)); +} + +/* Emits the `vsadd.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsadd_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSADD_BU, vd, vj, vk)); +} + +/* Emits the `vsadd.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsadd_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSADD_HU, vd, vj, vk)); +} + +/* Emits the `vsadd.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsadd_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSADD_WU, vd, vj, vk)); +} + +/* Emits the `vsadd.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsadd_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSADD_DU, vd, vj, vk)); +} + +/* Emits the `vssub.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssub_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSUB_BU, vd, vj, vk)); +} + +/* Emits the `vssub.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssub_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSUB_HU, vd, vj, vk)); +} + +/* Emits the `vssub.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssub_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSUB_WU, vd, vj, vk)); +} + +/* Emits the `vssub.du vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vssub_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSUB_DU, vd, vj, vk)); +} + +/* Emits the `vhaddw.h.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhaddw_h_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHADDW_H_B, vd, vj, vk)); +} + +/* Emits the `vhaddw.w.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhaddw_w_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHADDW_W_H, vd, vj, vk)); +} + +/* Emits the `vhaddw.d.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhaddw_d_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHADDW_D_W, vd, vj, vk)); +} + +/* Emits the `vhaddw.q.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhaddw_q_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHADDW_Q_D, vd, vj, vk)); +} + +/* Emits the `vhsubw.h.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhsubw_h_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHSUBW_H_B, vd, vj, vk)); +} + +/* Emits the `vhsubw.w.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhsubw_w_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHSUBW_W_H, vd, vj, vk)); +} + +/* Emits the `vhsubw.d.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhsubw_d_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHSUBW_D_W, vd, vj, vk)); +} + +/* Emits the `vhsubw.q.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhsubw_q_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHSUBW_Q_D, vd, vj, vk)); +} + +/* Emits the `vhaddw.hu.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhaddw_hu_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHADDW_HU_BU, vd, vj, vk)); +} + +/* Emits the `vhaddw.wu.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhaddw_wu_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHADDW_WU_HU, vd, vj, vk)); +} + +/* Emits the `vhaddw.du.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhaddw_du_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHADDW_DU_WU, vd, vj, vk)); +} + +/* Emits the `vhaddw.qu.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhaddw_qu_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHADDW_QU_DU, vd, vj, vk)); +} + +/* Emits the `vhsubw.hu.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhsubw_hu_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHSUBW_HU_BU, vd, vj, vk)); +} + +/* Emits the `vhsubw.wu.hu vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vhsubw_wu_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHSUBW_WU_HU, vd, vj, vk)); +} + +/* Emits the `vhsubw.du.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhsubw_du_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHSUBW_DU_WU, vd, vj, vk)); +} + +/* Emits the `vhsubw.qu.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vhsubw_qu_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VHSUBW_QU_DU, vd, vj, vk)); +} + +/* Emits the `vadda.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vadda_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDA_B, vd, vj, vk)); +} + +/* Emits the `vadda.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vadda_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDA_H, vd, vj, vk)); +} + +/* Emits the `vadda.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vadda_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDA_W, vd, vj, vk)); +} + +/* Emits the `vadda.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vadda_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADDA_D, vd, vj, vk)); +} + +/* Emits the `vabsd.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vabsd_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VABSD_B, vd, vj, vk)); +} + +/* Emits the `vabsd.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vabsd_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VABSD_H, vd, vj, vk)); +} + +/* Emits the `vabsd.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vabsd_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VABSD_W, vd, vj, vk)); +} + +/* Emits the `vabsd.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vabsd_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VABSD_D, vd, vj, vk)); +} + +/* Emits the `vabsd.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vabsd_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VABSD_BU, vd, vj, vk)); +} + +/* Emits the `vabsd.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vabsd_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VABSD_HU, vd, vj, vk)); +} + +/* Emits the `vabsd.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vabsd_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VABSD_WU, vd, vj, vk)); +} + +/* Emits the `vabsd.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vabsd_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VABSD_DU, vd, vj, vk)); +} + +/* Emits the `vavg.b vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vavg_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVG_B, vd, vj, vk)); +} + +/* Emits the `vavg.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavg_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVG_H, vd, vj, vk)); +} + +/* Emits the `vavg.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavg_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVG_W, vd, vj, vk)); +} + +/* Emits the `vavg.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavg_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVG_D, vd, vj, vk)); +} + +/* Emits the `vavg.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavg_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVG_BU, vd, vj, vk)); +} + +/* Emits the `vavg.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavg_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVG_HU, vd, vj, vk)); +} + +/* Emits the `vavg.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavg_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVG_WU, vd, vj, vk)); +} + +/* Emits the `vavg.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavg_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVG_DU, vd, vj, vk)); +} + +/* Emits the `vavgr.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavgr_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVGR_B, vd, vj, vk)); +} + +/* Emits the `vavgr.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavgr_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVGR_H, vd, vj, vk)); +} + +/* Emits the `vavgr.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavgr_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVGR_W, vd, vj, vk)); +} + +/* Emits the `vavgr.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavgr_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVGR_D, vd, vj, vk)); +} + +/* Emits the `vavgr.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavgr_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVGR_BU, vd, vj, vk)); +} + +/* Emits the `vavgr.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavgr_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVGR_HU, vd, vj, vk)); +} + +/* Emits the `vavgr.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vavgr_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVGR_WU, vd, vj, vk)); +} + +/* Emits the `vavgr.du vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vavgr_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAVGR_DU, vd, vj, vk)); +} + +/* Emits the `vmax.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmax_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMAX_B, vd, vj, vk)); +} + +/* Emits the `vmax.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmax_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMAX_H, vd, vj, vk)); +} + +/* Emits the `vmax.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmax_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMAX_W, vd, vj, vk)); +} + +/* Emits the `vmax.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmax_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMAX_D, vd, vj, vk)); +} + +/* Emits the `vmin.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmin_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMIN_B, vd, vj, vk)); +} + +/* Emits the `vmin.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmin_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMIN_H, vd, vj, vk)); +} + +/* Emits the `vmin.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmin_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMIN_W, vd, vj, vk)); +} + +/* Emits the `vmin.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmin_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMIN_D, vd, vj, vk)); +} + +/* Emits the `vmax.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmax_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMAX_BU, vd, vj, vk)); +} + +/* Emits the `vmax.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmax_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMAX_HU, vd, vj, vk)); +} + +/* Emits the `vmax.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmax_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMAX_WU, vd, vj, vk)); +} + +/* Emits the `vmax.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmax_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMAX_DU, vd, vj, vk)); +} + +/* Emits the `vmin.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmin_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMIN_BU, vd, vj, vk)); +} + +/* Emits the `vmin.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmin_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMIN_HU, vd, vj, vk)); +} + +/* Emits the `vmin.wu vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vmin_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMIN_WU, vd, vj, vk)); +} + +/* Emits the `vmin.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmin_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMIN_DU, vd, vj, vk)); +} + +/* Emits the `vmul.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmul_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUL_B, vd, vj, vk)); +} + +/* Emits the `vmul.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmul_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUL_H, vd, vj, vk)); +} + +/* Emits the `vmul.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmul_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUL_W, vd, vj, vk)); +} + +/* Emits the `vmul.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmul_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUL_D, vd, vj, vk)); +} + +/* Emits the `vmuh.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmuh_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUH_B, vd, vj, vk)); +} + +/* Emits the `vmuh.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmuh_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUH_H, vd, vj, vk)); +} + +/* Emits the `vmuh.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmuh_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUH_W, vd, vj, vk)); +} + +/* Emits the `vmuh.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmuh_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUH_D, vd, vj, vk)); +} + +/* Emits the `vmuh.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmuh_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUH_BU, vd, vj, vk)); +} + +/* Emits the `vmuh.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmuh_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUH_HU, vd, vj, vk)); +} + +/* Emits the `vmuh.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmuh_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUH_WU, vd, vj, vk)); +} + +/* Emits the `vmuh.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmuh_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMUH_DU, vd, vj, vk)); +} + +/* Emits the `vmulwev.h.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_h_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_H_B, vd, vj, vk)); +} + +/* Emits the `vmulwev.w.h vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_w_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_W_H, vd, vj, vk)); +} + +/* Emits the `vmulwev.d.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_d_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_D_W, vd, vj, vk)); +} + +/* Emits the `vmulwev.q.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_q_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_Q_D, vd, vj, vk)); +} + +/* Emits the `vmulwod.h.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_h_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_H_B, vd, vj, vk)); +} + +/* Emits the `vmulwod.w.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_w_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_W_H, vd, vj, vk)); +} + +/* Emits the `vmulwod.d.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_d_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_D_W, vd, vj, vk)); +} + +/* Emits the `vmulwod.q.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_q_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_Q_D, vd, vj, vk)); +} + +/* Emits the `vmulwev.h.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_h_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_H_BU, vd, vj, vk)); +} + +/* Emits the `vmulwev.w.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_w_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_W_HU, vd, vj, vk)); +} + +/* Emits the `vmulwev.d.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_d_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_D_WU, vd, vj, vk)); +} + +/* Emits the `vmulwev.q.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_q_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_Q_DU, vd, vj, vk)); +} + +/* Emits the `vmulwod.h.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_h_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_H_BU, vd, vj, vk)); +} + +/* Emits the `vmulwod.w.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_w_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_W_HU, vd, vj, vk)); +} + +/* Emits the `vmulwod.d.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_d_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_D_WU, vd, vj, vk)); +} + +/* Emits the `vmulwod.q.du vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_q_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_Q_DU, vd, vj, vk)); +} + +/* Emits the `vmulwev.h.bu.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_h_bu_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_H_BU_B, vd, vj, vk)); +} + +/* Emits the `vmulwev.w.hu.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_w_hu_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_W_HU_H, vd, vj, vk)); +} + +/* Emits the `vmulwev.d.wu.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_d_wu_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_D_WU_W, vd, vj, vk)); +} + +/* Emits the `vmulwev.q.du.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwev_q_du_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWEV_Q_DU_D, vd, vj, vk)); +} + +/* Emits the `vmulwod.h.bu.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_h_bu_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_H_BU_B, vd, vj, vk)); +} + +/* Emits the `vmulwod.w.hu.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_w_hu_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_W_HU_H, vd, vj, vk)); +} + +/* Emits the `vmulwod.d.wu.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_d_wu_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_D_WU_W, vd, vj, vk)); +} + +/* Emits the `vmulwod.q.du.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmulwod_q_du_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMULWOD_Q_DU_D, vd, vj, vk)); +} + +/* Emits the `vmadd.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmadd_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADD_B, vd, vj, vk)); +} + +/* Emits the `vmadd.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmadd_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADD_H, vd, vj, vk)); +} + +/* Emits the `vmadd.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmadd_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADD_W, vd, vj, vk)); +} + +/* Emits the `vmadd.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmadd_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADD_D, vd, vj, vk)); +} + +/* Emits the `vmsub.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmsub_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMSUB_B, vd, vj, vk)); +} + +/* Emits the `vmsub.h vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vmsub_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMSUB_H, vd, vj, vk)); +} + +/* Emits the `vmsub.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmsub_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMSUB_W, vd, vj, vk)); +} + +/* Emits the `vmsub.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmsub_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMSUB_D, vd, vj, vk)); +} + +/* Emits the `vmaddwev.h.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_h_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_H_B, vd, vj, vk)); +} + +/* Emits the `vmaddwev.w.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_w_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_W_H, vd, vj, vk)); +} + +/* Emits the `vmaddwev.d.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_d_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_D_W, vd, vj, vk)); +} + +/* Emits the `vmaddwev.q.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_q_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_Q_D, vd, vj, vk)); +} + +/* Emits the `vmaddwod.h.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_h_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_H_B, vd, vj, vk)); +} + +/* Emits the `vmaddwod.w.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_w_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_W_H, vd, vj, vk)); +} + +/* Emits the `vmaddwod.d.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_d_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_D_W, vd, vj, vk)); +} + +/* Emits the `vmaddwod.q.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_q_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_Q_D, vd, vj, vk)); +} + +/* Emits the `vmaddwev.h.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_h_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_H_BU, vd, vj, vk)); +} + +/* Emits the `vmaddwev.w.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_w_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_W_HU, vd, vj, vk)); +} + +/* Emits the `vmaddwev.d.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_d_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_D_WU, vd, vj, vk)); +} + +/* Emits the `vmaddwev.q.du vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_q_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_Q_DU, vd, vj, vk)); +} + +/* Emits the `vmaddwod.h.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_h_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_H_BU, vd, vj, vk)); +} + +/* Emits the `vmaddwod.w.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_w_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_W_HU, vd, vj, vk)); +} + +/* Emits the `vmaddwod.d.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_d_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_D_WU, vd, vj, vk)); +} + +/* Emits the `vmaddwod.q.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_q_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_Q_DU, vd, vj, vk)); +} + +/* Emits the `vmaddwev.h.bu.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_h_bu_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_H_BU_B, vd, vj, vk)); +} + +/* Emits the `vmaddwev.w.hu.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_w_hu_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_W_HU_H, vd, vj, vk)); +} + +/* Emits the `vmaddwev.d.wu.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_d_wu_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_D_WU_W, vd, vj, vk)); +} + +/* Emits the `vmaddwev.q.du.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwev_q_du_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWEV_Q_DU_D, vd, vj, vk)); +} + +/* Emits the `vmaddwod.h.bu.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_h_bu_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_H_BU_B, vd, vj, vk)); +} + +/* Emits the `vmaddwod.w.hu.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_w_hu_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_W_HU_H, vd, vj, vk)); +} + +/* Emits the `vmaddwod.d.wu.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_d_wu_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_D_WU_W, vd, vj, vk)); +} + +/* Emits the `vmaddwod.q.du.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaddwod_q_du_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMADDWOD_Q_DU_D, vd, vj, vk)); +} + +/* Emits the `vdiv.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vdiv_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VDIV_B, vd, vj, vk)); +} + +/* Emits the `vdiv.h vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vdiv_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VDIV_H, vd, vj, vk)); +} + +/* Emits the `vdiv.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vdiv_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VDIV_W, vd, vj, vk)); +} + +/* Emits the `vdiv.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vdiv_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VDIV_D, vd, vj, vk)); +} + +/* Emits the `vmod.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmod_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMOD_B, vd, vj, vk)); +} + +/* Emits the `vmod.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmod_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMOD_H, vd, vj, vk)); +} + +/* Emits the `vmod.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmod_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMOD_W, vd, vj, vk)); +} + +/* Emits the `vmod.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmod_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMOD_D, vd, vj, vk)); +} + +/* Emits the `vdiv.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vdiv_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VDIV_BU, vd, vj, vk)); +} + +/* Emits the `vdiv.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vdiv_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VDIV_HU, vd, vj, vk)); +} + +/* Emits the `vdiv.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vdiv_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VDIV_WU, vd, vj, vk)); +} + +/* Emits the `vdiv.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vdiv_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VDIV_DU, vd, vj, vk)); +} + +/* Emits the `vmod.bu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmod_bu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMOD_BU, vd, vj, vk)); +} + +/* Emits the `vmod.hu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmod_hu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMOD_HU, vd, vj, vk)); +} + +/* Emits the `vmod.wu vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmod_wu(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMOD_WU, vd, vj, vk)); +} + +/* Emits the `vmod.du vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmod_du(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VMOD_DU, vd, vj, vk)); +} + +/* Emits the `vsll.b vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vsll_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLL_B, vd, vj, vk)); +} + +/* Emits the `vsll.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsll_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLL_H, vd, vj, vk)); +} + +/* Emits the `vsll.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsll_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLL_W, vd, vj, vk)); +} + +/* Emits the `vsll.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsll_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSLL_D, vd, vj, vk)); +} + +/* Emits the `vsrl.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrl_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRL_B, vd, vj, vk)); +} + +/* Emits the `vsrl.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrl_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRL_H, vd, vj, vk)); +} + +/* Emits the `vsrl.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrl_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRL_W, vd, vj, vk)); +} + +/* Emits the `vsrl.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrl_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRL_D, vd, vj, vk)); +} + +/* Emits the `vsra.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsra_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRA_B, vd, vj, vk)); +} + +/* Emits the `vsra.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsra_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRA_H, vd, vj, vk)); +} + +/* Emits the `vsra.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsra_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRA_W, vd, vj, vk)); +} + +/* Emits the `vsra.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsra_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRA_D, vd, vj, vk)); +} + +/* Emits the `vrotr.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vrotr_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VROTR_B, vd, vj, vk)); +} + +/* Emits the `vrotr.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vrotr_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VROTR_H, vd, vj, vk)); +} + +/* Emits the `vrotr.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vrotr_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VROTR_W, vd, vj, vk)); +} + +/* Emits the `vrotr.d vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vrotr_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VROTR_D, vd, vj, vk)); +} + +/* Emits the `vsrlr.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlr_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRLR_B, vd, vj, vk)); +} + +/* Emits the `vsrlr.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlr_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRLR_H, vd, vj, vk)); +} + +/* Emits the `vsrlr.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlr_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRLR_W, vd, vj, vk)); +} + +/* Emits the `vsrlr.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlr_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRLR_D, vd, vj, vk)); +} + +/* Emits the `vsrar.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrar_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRAR_B, vd, vj, vk)); +} + +/* Emits the `vsrar.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrar_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRAR_H, vd, vj, vk)); +} + +/* Emits the `vsrar.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrar_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRAR_W, vd, vj, vk)); +} + +/* Emits the `vsrar.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrar_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRAR_D, vd, vj, vk)); +} + +/* Emits the `vsrln.b.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrln_b_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRLN_B_H, vd, vj, vk)); +} + +/* Emits the `vsrln.h.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrln_h_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRLN_H_W, vd, vj, vk)); +} + +/* Emits the `vsrln.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrln_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRLN_W_D, vd, vj, vk)); +} + +/* Emits the `vsran.b.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsran_b_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRAN_B_H, vd, vj, vk)); +} + +/* Emits the `vsran.h.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsran_h_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRAN_H_W, vd, vj, vk)); +} + +/* Emits the `vsran.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsran_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRAN_W_D, vd, vj, vk)); +} + +/* Emits the `vsrlrn.b.h vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vsrlrn_b_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRLRN_B_H, vd, vj, vk)); +} + +/* Emits the `vsrlrn.h.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlrn_h_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRLRN_H_W, vd, vj, vk)); +} + +/* Emits the `vsrlrn.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlrn_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRLRN_W_D, vd, vj, vk)); +} + +/* Emits the `vsrarn.b.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrarn_b_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRARN_B_H, vd, vj, vk)); +} + +/* Emits the `vsrarn.h.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrarn_h_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRARN_H_W, vd, vj, vk)); +} + +/* Emits the `vsrarn.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrarn_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSRARN_W_D, vd, vj, vk)); +} + +/* Emits the `vssrln.b.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrln_b_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLN_B_H, vd, vj, vk)); +} + +/* Emits the `vssrln.h.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrln_h_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLN_H_W, vd, vj, vk)); +} + +/* Emits the `vssrln.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrln_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLN_W_D, vd, vj, vk)); +} + +/* Emits the `vssran.b.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssran_b_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRAN_B_H, vd, vj, vk)); +} + +/* Emits the `vssran.h.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssran_h_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRAN_H_W, vd, vj, vk)); +} + +/* Emits the `vssran.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssran_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRAN_W_D, vd, vj, vk)); +} + +/* Emits the `vssrlrn.b.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrn_b_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLRN_B_H, vd, vj, vk)); +} + +/* Emits the `vssrlrn.h.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrn_h_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLRN_H_W, vd, vj, vk)); +} + +/* Emits the `vssrlrn.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrn_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLRN_W_D, vd, vj, vk)); +} + +/* Emits the `vssrarn.b.h vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vssrarn_b_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRARN_B_H, vd, vj, vk)); +} + +/* Emits the `vssrarn.h.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarn_h_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRARN_H_W, vd, vj, vk)); +} + +/* Emits the `vssrarn.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarn_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRARN_W_D, vd, vj, vk)); +} + +/* Emits the `vssrln.bu.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrln_bu_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLN_BU_H, vd, vj, vk)); +} + +/* Emits the `vssrln.hu.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrln_hu_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLN_HU_W, vd, vj, vk)); +} + +/* Emits the `vssrln.wu.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrln_wu_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLN_WU_D, vd, vj, vk)); +} + +/* Emits the `vssran.bu.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssran_bu_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRAN_BU_H, vd, vj, vk)); +} + +/* Emits the `vssran.hu.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssran_hu_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRAN_HU_W, vd, vj, vk)); +} + +/* Emits the `vssran.wu.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssran_wu_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRAN_WU_D, vd, vj, vk)); +} + +/* Emits the `vssrlrn.bu.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrn_bu_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLRN_BU_H, vd, vj, vk)); +} + +/* Emits the `vssrlrn.hu.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrn_hu_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLRN_HU_W, vd, vj, vk)); +} + +/* Emits the `vssrlrn.wu.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrn_wu_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRLRN_WU_D, vd, vj, vk)); +} + +/* Emits the `vssrarn.bu.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarn_bu_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRARN_BU_H, vd, vj, vk)); +} + +/* Emits the `vssrarn.hu.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarn_hu_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRARN_HU_W, vd, vj, vk)); +} + +/* Emits the `vssrarn.wu.d vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vssrarn_wu_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSSRARN_WU_D, vd, vj, vk)); +} + +/* Emits the `vbitclr.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitclr_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITCLR_B, vd, vj, vk)); +} + +/* Emits the `vbitclr.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitclr_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITCLR_H, vd, vj, vk)); +} + +/* Emits the `vbitclr.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitclr_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITCLR_W, vd, vj, vk)); +} + +/* Emits the `vbitclr.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitclr_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITCLR_D, vd, vj, vk)); +} + +/* Emits the `vbitset.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitset_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITSET_B, vd, vj, vk)); +} + +/* Emits the `vbitset.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitset_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITSET_H, vd, vj, vk)); +} + +/* Emits the `vbitset.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitset_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITSET_W, vd, vj, vk)); +} + +/* Emits the `vbitset.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitset_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITSET_D, vd, vj, vk)); +} + +/* Emits the `vbitrev.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitrev_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITREV_B, vd, vj, vk)); +} + +/* Emits the `vbitrev.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitrev_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITREV_H, vd, vj, vk)); +} + +/* Emits the `vbitrev.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitrev_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITREV_W, vd, vj, vk)); +} + +/* Emits the `vbitrev.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitrev_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VBITREV_D, vd, vj, vk)); +} + +/* Emits the `vpackev.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpackev_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPACKEV_B, vd, vj, vk)); +} + +/* Emits the `vpackev.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpackev_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPACKEV_H, vd, vj, vk)); +} + +/* Emits the `vpackev.w vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vpackev_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPACKEV_W, vd, vj, vk)); +} + +/* Emits the `vpackev.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpackev_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPACKEV_D, vd, vj, vk)); +} + +/* Emits the `vpackod.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpackod_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPACKOD_B, vd, vj, vk)); +} + +/* Emits the `vpackod.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpackod_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPACKOD_H, vd, vj, vk)); +} + +/* Emits the `vpackod.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpackod_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPACKOD_W, vd, vj, vk)); +} + +/* Emits the `vpackod.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpackod_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPACKOD_D, vd, vj, vk)); +} + +/* Emits the `vilvl.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vilvl_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VILVL_B, vd, vj, vk)); +} + +/* Emits the `vilvl.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vilvl_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VILVL_H, vd, vj, vk)); +} + +/* Emits the `vilvl.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vilvl_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VILVL_W, vd, vj, vk)); +} + +/* Emits the `vilvl.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vilvl_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VILVL_D, vd, vj, vk)); +} + +/* Emits the `vilvh.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vilvh_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VILVH_B, vd, vj, vk)); +} + +/* Emits the `vilvh.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vilvh_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VILVH_H, vd, vj, vk)); +} + +/* Emits the `vilvh.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vilvh_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VILVH_W, vd, vj, vk)); +} + +/* Emits the `vilvh.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vilvh_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VILVH_D, vd, vj, vk)); +} + +/* Emits the `vpickev.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickev_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPICKEV_B, vd, vj, vk)); +} + +/* Emits the `vpickev.h vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vpickev_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPICKEV_H, vd, vj, vk)); +} + +/* Emits the `vpickev.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickev_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPICKEV_W, vd, vj, vk)); +} + +/* Emits the `vpickev.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickev_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPICKEV_D, vd, vj, vk)); +} + +/* Emits the `vpickod.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickod_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPICKOD_B, vd, vj, vk)); +} + +/* Emits the `vpickod.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickod_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPICKOD_H, vd, vj, vk)); +} + +/* Emits the `vpickod.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickod_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPICKOD_W, vd, vj, vk)); +} + +/* Emits the `vpickod.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickod_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VPICKOD_D, vd, vj, vk)); +} + +/* Emits the `vreplve.b vd, vj, k` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplve_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg k) +{ + tcg_out32(s, encode_vdvjk_insn(OPC_VREPLVE_B, vd, vj, k)); +} + +/* Emits the `vreplve.h vd, vj, k` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplve_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg k) +{ + tcg_out32(s, encode_vdvjk_insn(OPC_VREPLVE_H, vd, vj, k)); +} + +/* Emits the `vreplve.w vd, vj, k` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplve_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg k) +{ + tcg_out32(s, encode_vdvjk_insn(OPC_VREPLVE_W, vd, vj, k)); +} + +/* Emits the `vreplve.d vd, vj, k` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplve_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg k) +{ + tcg_out32(s, encode_vdvjk_insn(OPC_VREPLVE_D, vd, vj, k)); +} + +/* Emits the `vand.v vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vand_v(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VAND_V, vd, vj, vk)); +} + +/* Emits the `vor.v vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vor_v(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VOR_V, vd, vj, vk)); +} + +/* Emits the `vxor.v vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vxor_v(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VXOR_V, vd, vj, vk)); +} + +/* Emits the `vnor.v vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vnor_v(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VNOR_V, vd, vj, vk)); +} + +/* Emits the `vandn.v vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vandn_v(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VANDN_V, vd, vj, vk)); +} + +/* Emits the `vorn.v vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vorn_v(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VORN_V, vd, vj, vk)); +} + +/* Emits the `vfrstp.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrstp_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFRSTP_B, vd, vj, vk)); +} + +/* Emits the `vfrstp.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrstp_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFRSTP_H, vd, vj, vk)); +} + +/* Emits the `vadd.q vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vadd_q(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VADD_Q, vd, vj, vk)); +} + +/* Emits the `vsub.q vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsub_q(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSUB_Q, vd, vj, vk)); +} + +/* Emits the `vsigncov.b vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsigncov_b(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSIGNCOV_B, vd, vj, vk)); +} + +/* Emits the `vsigncov.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsigncov_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSIGNCOV_H, vd, vj, vk)); +} + +/* Emits the `vsigncov.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsigncov_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSIGNCOV_W, vd, vj, vk)); +} + +/* Emits the `vsigncov.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsigncov_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSIGNCOV_D, vd, vj, vk)); +} + +/* Emits the `vfadd.s vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfadd_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFADD_S, vd, vj, vk)); +} + +/* Emits the `vfadd.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfadd_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFADD_D, vd, vj, vk)); +} + +/* Emits the `vfsub.s vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfsub_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFSUB_S, vd, vj, vk)); +} + +/* Emits the `vfsub.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfsub_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFSUB_D, vd, vj, vk)); +} + +/* Emits the `vfmul.s vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfmul_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFMUL_S, vd, vj, vk)); +} + +/* Emits the `vfmul.d vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vfmul_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFMUL_D, vd, vj, vk)); +} + +/* Emits the `vfdiv.s vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfdiv_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFDIV_S, vd, vj, vk)); +} + +/* Emits the `vfdiv.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfdiv_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFDIV_D, vd, vj, vk)); +} + +/* Emits the `vfmax.s vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfmax_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFMAX_S, vd, vj, vk)); +} + +/* Emits the `vfmax.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfmax_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFMAX_D, vd, vj, vk)); +} + +/* Emits the `vfmin.s vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfmin_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFMIN_S, vd, vj, vk)); +} + +/* Emits the `vfmin.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfmin_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFMIN_D, vd, vj, vk)); +} + +/* Emits the `vfmaxa.s vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfmaxa_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFMAXA_S, vd, vj, vk)); +} + +/* Emits the `vfmaxa.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfmaxa_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFMAXA_D, vd, vj, vk)); +} + +/* Emits the `vfmina.s vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfmina_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFMINA_S, vd, vj, vk)); +} + +/* Emits the `vfmina.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfmina_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFMINA_D, vd, vj, vk)); +} + +/* Emits the `vfcvt.h.s vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcvt_h_s(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCVT_H_S, vd, vj, vk)); +} + +/* Emits the `vfcvt.s.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcvt_s_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFCVT_S_D, vd, vj, vk)); +} + +/* Emits the `vffint.s.l vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vffint_s_l(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFFINT_S_L, vd, vj, vk)); +} + +/* Emits the `vftint.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftint_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFTINT_W_D, vd, vj, vk)); +} + +/* Emits the `vftintrm.w.d vd, vj, vk` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vftintrm_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFTINTRM_W_D, vd, vj, vk)); +} + +/* Emits the `vftintrp.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrp_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFTINTRP_W_D, vd, vj, vk)); +} + +/* Emits the `vftintrz.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrz_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFTINTRZ_W_D, vd, vj, vk)); +} + +/* Emits the `vftintrne.w.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrne_w_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VFTINTRNE_W_D, vd, vj, vk)); +} + +/* Emits the `vshuf.h vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vshuf_h(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSHUF_H, vd, vj, vk)); +} + +/* Emits the `vshuf.w vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vshuf_w(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSHUF_W, vd, vj, vk)); +} + +/* Emits the `vshuf.d vd, vj, vk` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vshuf_d(TCGContext *s, TCGReg vd, TCGReg vj, TCGReg vk) +{ + tcg_out32(s, encode_vdvjvk_insn(OPC_VSHUF_D, vd, vj, vk)); +} + +/* Emits the `vseqi.b vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vseqi_b(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSEQI_B, vd, vj, sk5)); +} + +/* Emits the `vseqi.h vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vseqi_h(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSEQI_H, vd, vj, sk5)); +} + +/* Emits the `vseqi.w vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vseqi_w(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSEQI_W, vd, vj, sk5)); +} + +/* Emits the `vseqi.d vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vseqi_d(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSEQI_D, vd, vj, sk5)); +} + +/* Emits the `vslei.b vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslei_b(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSLEI_B, vd, vj, sk5)); +} + +/* Emits the `vslei.h vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslei_h(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSLEI_H, vd, vj, sk5)); +} + +/* Emits the `vslei.w vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslei_w(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSLEI_W, vd, vj, sk5)); +} + +/* Emits the `vslei.d vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslei_d(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSLEI_D, vd, vj, sk5)); +} + +/* Emits the `vslei.bu vd, vj, uk5` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vslei_bu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLEI_BU, vd, vj, uk5)); +} + +/* Emits the `vslei.hu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslei_hu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLEI_HU, vd, vj, uk5)); +} + +/* Emits the `vslei.wu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslei_wu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLEI_WU, vd, vj, uk5)); +} + +/* Emits the `vslei.du vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslei_du(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLEI_DU, vd, vj, uk5)); +} + +/* Emits the `vslti.b vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslti_b(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSLTI_B, vd, vj, sk5)); +} + +/* Emits the `vslti.h vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslti_h(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSLTI_H, vd, vj, sk5)); +} + +/* Emits the `vslti.w vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslti_w(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSLTI_W, vd, vj, sk5)); +} + +/* Emits the `vslti.d vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslti_d(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VSLTI_D, vd, vj, sk5)); +} + +/* Emits the `vslti.bu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslti_bu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLTI_BU, vd, vj, uk5)); +} + +/* Emits the `vslti.hu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslti_hu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLTI_HU, vd, vj, uk5)); +} + +/* Emits the `vslti.wu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslti_wu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLTI_WU, vd, vj, uk5)); +} + +/* Emits the `vslti.du vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslti_du(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLTI_DU, vd, vj, uk5)); +} + +/* Emits the `vaddi.bu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddi_bu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VADDI_BU, vd, vj, uk5)); +} + +/* Emits the `vaddi.hu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddi_hu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VADDI_HU, vd, vj, uk5)); +} + +/* Emits the `vaddi.wu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vaddi_wu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VADDI_WU, vd, vj, uk5)); +} + +/* Emits the `vaddi.du vd, vj, uk5` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vaddi_du(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VADDI_DU, vd, vj, uk5)); +} + +/* Emits the `vsubi.bu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubi_bu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSUBI_BU, vd, vj, uk5)); +} + +/* Emits the `vsubi.hu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubi_hu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSUBI_HU, vd, vj, uk5)); +} + +/* Emits the `vsubi.wu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubi_wu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSUBI_WU, vd, vj, uk5)); +} + +/* Emits the `vsubi.du vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsubi_du(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSUBI_DU, vd, vj, uk5)); +} + +/* Emits the `vbsll.v vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbsll_v(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VBSLL_V, vd, vj, uk5)); +} + +/* Emits the `vbsrl.v vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbsrl_v(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VBSRL_V, vd, vj, uk5)); +} + +/* Emits the `vmaxi.b vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaxi_b(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VMAXI_B, vd, vj, sk5)); +} + +/* Emits the `vmaxi.h vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaxi_h(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VMAXI_H, vd, vj, sk5)); +} + +/* Emits the `vmaxi.w vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaxi_w(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VMAXI_W, vd, vj, sk5)); +} + +/* Emits the `vmaxi.d vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaxi_d(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VMAXI_D, vd, vj, sk5)); +} + +/* Emits the `vmini.b vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmini_b(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VMINI_B, vd, vj, sk5)); +} + +/* Emits the `vmini.h vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmini_h(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VMINI_H, vd, vj, sk5)); +} + +/* Emits the `vmini.w vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmini_w(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VMINI_W, vd, vj, sk5)); +} + +/* Emits the `vmini.d vd, vj, sk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmini_d(TCGContext *s, TCGReg vd, TCGReg vj, int32_t sk5) +{ + tcg_out32(s, encode_vdvjsk5_insn(OPC_VMINI_D, vd, vj, sk5)); +} + +/* Emits the `vmaxi.bu vd, vj, uk5` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vmaxi_bu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VMAXI_BU, vd, vj, uk5)); +} + +/* Emits the `vmaxi.hu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaxi_hu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VMAXI_HU, vd, vj, uk5)); +} + +/* Emits the `vmaxi.wu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaxi_wu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VMAXI_WU, vd, vj, uk5)); +} + +/* Emits the `vmaxi.du vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmaxi_du(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VMAXI_DU, vd, vj, uk5)); +} + +/* Emits the `vmini.bu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmini_bu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VMINI_BU, vd, vj, uk5)); +} + +/* Emits the `vmini.hu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmini_hu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VMINI_HU, vd, vj, uk5)); +} + +/* Emits the `vmini.wu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmini_wu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VMINI_WU, vd, vj, uk5)); +} + +/* Emits the `vmini.du vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmini_du(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VMINI_DU, vd, vj, uk5)); +} + +/* Emits the `vfrstpi.b vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrstpi_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VFRSTPI_B, vd, vj, uk5)); +} + +/* Emits the `vfrstpi.h vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrstpi_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VFRSTPI_H, vd, vj, uk5)); +} + +/* Emits the `vclo.b vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vclo_b(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VCLO_B, vd, vj)); +} + +/* Emits the `vclo.h vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vclo_h(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VCLO_H, vd, vj)); +} + +/* Emits the `vclo.w vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vclo_w(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VCLO_W, vd, vj)); +} + +/* Emits the `vclo.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vclo_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VCLO_D, vd, vj)); +} + +/* Emits the `vclz.b vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vclz_b(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VCLZ_B, vd, vj)); +} + +/* Emits the `vclz.h vd, vj` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vclz_h(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VCLZ_H, vd, vj)); +} + +/* Emits the `vclz.w vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vclz_w(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VCLZ_W, vd, vj)); +} + +/* Emits the `vclz.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vclz_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VCLZ_D, vd, vj)); +} + +/* Emits the `vpcnt.b vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpcnt_b(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VPCNT_B, vd, vj)); +} + +/* Emits the `vpcnt.h vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpcnt_h(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VPCNT_H, vd, vj)); +} + +/* Emits the `vpcnt.w vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpcnt_w(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VPCNT_W, vd, vj)); +} + +/* Emits the `vpcnt.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpcnt_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VPCNT_D, vd, vj)); +} + +/* Emits the `vneg.b vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vneg_b(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VNEG_B, vd, vj)); +} + +/* Emits the `vneg.h vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vneg_h(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VNEG_H, vd, vj)); +} + +/* Emits the `vneg.w vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vneg_w(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VNEG_W, vd, vj)); +} + +/* Emits the `vneg.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vneg_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VNEG_D, vd, vj)); +} + +/* Emits the `vmskltz.b vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmskltz_b(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VMSKLTZ_B, vd, vj)); +} + +/* Emits the `vmskltz.h vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmskltz_h(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VMSKLTZ_H, vd, vj)); +} + +/* Emits the `vmskltz.w vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmskltz_w(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VMSKLTZ_W, vd, vj)); +} + +/* Emits the `vmskltz.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmskltz_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VMSKLTZ_D, vd, vj)); +} + +/* Emits the `vmskgez.b vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmskgez_b(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VMSKGEZ_B, vd, vj)); +} + +/* Emits the `vmsknz.b vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vmsknz_b(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VMSKNZ_B, vd, vj)); +} + +/* Emits the `vseteqz.v cd, vj` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vseteqz_v(TCGContext *s, TCGReg cd, TCGReg vj) +{ + tcg_out32(s, encode_cdvj_insn(OPC_VSETEQZ_V, cd, vj)); +} + +/* Emits the `vsetnez.v cd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsetnez_v(TCGContext *s, TCGReg cd, TCGReg vj) +{ + tcg_out32(s, encode_cdvj_insn(OPC_VSETNEZ_V, cd, vj)); +} + +/* Emits the `vsetanyeqz.b cd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsetanyeqz_b(TCGContext *s, TCGReg cd, TCGReg vj) +{ + tcg_out32(s, encode_cdvj_insn(OPC_VSETANYEQZ_B, cd, vj)); +} + +/* Emits the `vsetanyeqz.h cd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsetanyeqz_h(TCGContext *s, TCGReg cd, TCGReg vj) +{ + tcg_out32(s, encode_cdvj_insn(OPC_VSETANYEQZ_H, cd, vj)); +} + +/* Emits the `vsetanyeqz.w cd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsetanyeqz_w(TCGContext *s, TCGReg cd, TCGReg vj) +{ + tcg_out32(s, encode_cdvj_insn(OPC_VSETANYEQZ_W, cd, vj)); +} + +/* Emits the `vsetanyeqz.d cd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsetanyeqz_d(TCGContext *s, TCGReg cd, TCGReg vj) +{ + tcg_out32(s, encode_cdvj_insn(OPC_VSETANYEQZ_D, cd, vj)); +} + +/* Emits the `vsetallnez.b cd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsetallnez_b(TCGContext *s, TCGReg cd, TCGReg vj) +{ + tcg_out32(s, encode_cdvj_insn(OPC_VSETALLNEZ_B, cd, vj)); +} + +/* Emits the `vsetallnez.h cd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsetallnez_h(TCGContext *s, TCGReg cd, TCGReg vj) +{ + tcg_out32(s, encode_cdvj_insn(OPC_VSETALLNEZ_H, cd, vj)); +} + +/* Emits the `vsetallnez.w cd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsetallnez_w(TCGContext *s, TCGReg cd, TCGReg vj) +{ + tcg_out32(s, encode_cdvj_insn(OPC_VSETALLNEZ_W, cd, vj)); +} + +/* Emits the `vsetallnez.d cd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsetallnez_d(TCGContext *s, TCGReg cd, TCGReg vj) +{ + tcg_out32(s, encode_cdvj_insn(OPC_VSETALLNEZ_D, cd, vj)); +} + +/* Emits the `vflogb.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vflogb_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFLOGB_S, vd, vj)); +} + +/* Emits the `vflogb.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vflogb_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFLOGB_D, vd, vj)); +} + +/* Emits the `vfclass.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfclass_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFCLASS_S, vd, vj)); +} + +/* Emits the `vfclass.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfclass_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFCLASS_D, vd, vj)); +} + +/* Emits the `vfsqrt.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfsqrt_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFSQRT_S, vd, vj)); +} + +/* Emits the `vfsqrt.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfsqrt_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFSQRT_D, vd, vj)); +} + +/* Emits the `vfrecip.s vd, vj` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vfrecip_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRECIP_S, vd, vj)); +} + +/* Emits the `vfrecip.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrecip_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRECIP_D, vd, vj)); +} + +/* Emits the `vfrsqrt.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrsqrt_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRSQRT_S, vd, vj)); +} + +/* Emits the `vfrsqrt.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrsqrt_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRSQRT_D, vd, vj)); +} + +/* Emits the `vfrint.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrint_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRINT_S, vd, vj)); +} + +/* Emits the `vfrint.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrint_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRINT_D, vd, vj)); +} + +/* Emits the `vfrintrm.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrintrm_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRINTRM_S, vd, vj)); +} + +/* Emits the `vfrintrm.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrintrm_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRINTRM_D, vd, vj)); +} + +/* Emits the `vfrintrp.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrintrp_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRINTRP_S, vd, vj)); +} + +/* Emits the `vfrintrp.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrintrp_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRINTRP_D, vd, vj)); +} + +/* Emits the `vfrintrz.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrintrz_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRINTRZ_S, vd, vj)); +} + +/* Emits the `vfrintrz.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrintrz_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRINTRZ_D, vd, vj)); +} + +/* Emits the `vfrintrne.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrintrne_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRINTRNE_S, vd, vj)); +} + +/* Emits the `vfrintrne.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfrintrne_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFRINTRNE_D, vd, vj)); +} + +/* Emits the `vfcvtl.s.h vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcvtl_s_h(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFCVTL_S_H, vd, vj)); +} + +/* Emits the `vfcvth.s.h vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcvth_s_h(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFCVTH_S_H, vd, vj)); +} + +/* Emits the `vfcvtl.d.s vd, vj` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vfcvtl_d_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFCVTL_D_S, vd, vj)); +} + +/* Emits the `vfcvth.d.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vfcvth_d_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFCVTH_D_S, vd, vj)); +} + +/* Emits the `vffint.s.w vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vffint_s_w(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFFINT_S_W, vd, vj)); +} + +/* Emits the `vffint.s.wu vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vffint_s_wu(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFFINT_S_WU, vd, vj)); +} + +/* Emits the `vffint.d.l vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vffint_d_l(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFFINT_D_L, vd, vj)); +} + +/* Emits the `vffint.d.lu vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vffint_d_lu(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFFINT_D_LU, vd, vj)); +} + +/* Emits the `vffintl.d.w vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vffintl_d_w(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFFINTL_D_W, vd, vj)); +} + +/* Emits the `vffinth.d.w vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vffinth_d_w(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFFINTH_D_W, vd, vj)); +} + +/* Emits the `vftint.w.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftint_w_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINT_W_S, vd, vj)); +} + +/* Emits the `vftint.l.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftint_l_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINT_L_D, vd, vj)); +} + +/* Emits the `vftintrm.w.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrm_w_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRM_W_S, vd, vj)); +} + +/* Emits the `vftintrm.l.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrm_l_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRM_L_D, vd, vj)); +} + +/* Emits the `vftintrp.w.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrp_w_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRP_W_S, vd, vj)); +} + +/* Emits the `vftintrp.l.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrp_l_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRP_L_D, vd, vj)); +} + +/* Emits the `vftintrz.w.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrz_w_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRZ_W_S, vd, vj)); +} + +/* Emits the `vftintrz.l.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrz_l_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRZ_L_D, vd, vj)); +} + +/* Emits the `vftintrne.w.s vd, vj` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vftintrne_w_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRNE_W_S, vd, vj)); +} + +/* Emits the `vftintrne.l.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrne_l_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRNE_L_D, vd, vj)); +} + +/* Emits the `vftint.wu.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftint_wu_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINT_WU_S, vd, vj)); +} + +/* Emits the `vftint.lu.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftint_lu_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINT_LU_D, vd, vj)); +} + +/* Emits the `vftintrz.wu.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrz_wu_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRZ_WU_S, vd, vj)); +} + +/* Emits the `vftintrz.lu.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrz_lu_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRZ_LU_D, vd, vj)); +} + +/* Emits the `vftintl.l.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintl_l_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTL_L_S, vd, vj)); +} + +/* Emits the `vftinth.l.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftinth_l_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTH_L_S, vd, vj)); +} + +/* Emits the `vftintrml.l.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrml_l_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRML_L_S, vd, vj)); +} + +/* Emits the `vftintrmh.l.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrmh_l_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRMH_L_S, vd, vj)); +} + +/* Emits the `vftintrpl.l.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrpl_l_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRPL_L_S, vd, vj)); +} + +/* Emits the `vftintrph.l.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrph_l_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRPH_L_S, vd, vj)); +} + +/* Emits the `vftintrzl.l.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrzl_l_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRZL_L_S, vd, vj)); +} + +/* Emits the `vftintrzh.l.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrzh_l_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRZH_L_S, vd, vj)); +} + +/* Emits the `vftintrnel.l.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrnel_l_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRNEL_L_S, vd, vj)); +} + +/* Emits the `vftintrneh.l.s vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vftintrneh_l_s(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VFTINTRNEH_L_S, vd, vj)); +} + +/* Emits the `vexth.h.b vd, vj` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vexth_h_b(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VEXTH_H_B, vd, vj)); +} + +/* Emits the `vexth.w.h vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vexth_w_h(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VEXTH_W_H, vd, vj)); +} + +/* Emits the `vexth.d.w vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vexth_d_w(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VEXTH_D_W, vd, vj)); +} + +/* Emits the `vexth.q.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vexth_q_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VEXTH_Q_D, vd, vj)); +} + +/* Emits the `vexth.hu.bu vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vexth_hu_bu(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VEXTH_HU_BU, vd, vj)); +} + +/* Emits the `vexth.wu.hu vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vexth_wu_hu(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VEXTH_WU_HU, vd, vj)); +} + +/* Emits the `vexth.du.wu vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vexth_du_wu(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VEXTH_DU_WU, vd, vj)); +} + +/* Emits the `vexth.qu.du vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vexth_qu_du(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VEXTH_QU_DU, vd, vj)); +} + +/* Emits the `vreplgr2vr.b vd, j` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplgr2vr_b(TCGContext *s, TCGReg vd, TCGReg j) +{ + tcg_out32(s, encode_vdj_insn(OPC_VREPLGR2VR_B, vd, j)); +} + +/* Emits the `vreplgr2vr.h vd, j` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplgr2vr_h(TCGContext *s, TCGReg vd, TCGReg j) +{ + tcg_out32(s, encode_vdj_insn(OPC_VREPLGR2VR_H, vd, j)); +} + +/* Emits the `vreplgr2vr.w vd, j` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplgr2vr_w(TCGContext *s, TCGReg vd, TCGReg j) +{ + tcg_out32(s, encode_vdj_insn(OPC_VREPLGR2VR_W, vd, j)); +} + +/* Emits the `vreplgr2vr.d vd, j` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplgr2vr_d(TCGContext *s, TCGReg vd, TCGReg j) +{ + tcg_out32(s, encode_vdj_insn(OPC_VREPLGR2VR_D, vd, j)); +} + +/* Emits the `vrotri.b vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vrotri_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VROTRI_B, vd, vj, uk3)); +} + +/* Emits the `vrotri.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vrotri_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VROTRI_H, vd, vj, uk4)); +} + +/* Emits the `vrotri.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vrotri_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VROTRI_W, vd, vj, uk5)); +} + +/* Emits the `vrotri.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vrotri_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VROTRI_D, vd, vj, uk6)); +} + +/* Emits the `vsrlri.b vd, vj, uk3` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vsrlri_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VSRLRI_B, vd, vj, uk3)); +} + +/* Emits the `vsrlri.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlri_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSRLRI_H, vd, vj, uk4)); +} + +/* Emits the `vsrlri.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlri_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSRLRI_W, vd, vj, uk5)); +} + +/* Emits the `vsrlri.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlri_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSRLRI_D, vd, vj, uk6)); +} + +/* Emits the `vsrari.b vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrari_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VSRARI_B, vd, vj, uk3)); +} + +/* Emits the `vsrari.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrari_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSRARI_H, vd, vj, uk4)); +} + +/* Emits the `vsrari.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrari_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSRARI_W, vd, vj, uk5)); +} + +/* Emits the `vsrari.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrari_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSRARI_D, vd, vj, uk6)); +} + +/* Emits the `vinsgr2vr.b vd, j, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vinsgr2vr_b(TCGContext *s, TCGReg vd, TCGReg j, uint32_t uk4) +{ + tcg_out32(s, encode_vdjuk4_insn(OPC_VINSGR2VR_B, vd, j, uk4)); +} + +/* Emits the `vinsgr2vr.h vd, j, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vinsgr2vr_h(TCGContext *s, TCGReg vd, TCGReg j, uint32_t uk3) +{ + tcg_out32(s, encode_vdjuk3_insn(OPC_VINSGR2VR_H, vd, j, uk3)); +} + +/* Emits the `vinsgr2vr.w vd, j, uk2` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vinsgr2vr_w(TCGContext *s, TCGReg vd, TCGReg j, uint32_t uk2) +{ + tcg_out32(s, encode_vdjuk2_insn(OPC_VINSGR2VR_W, vd, j, uk2)); +} + +/* Emits the `vinsgr2vr.d vd, j, uk1` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vinsgr2vr_d(TCGContext *s, TCGReg vd, TCGReg j, uint32_t uk1) +{ + tcg_out32(s, encode_vdjuk1_insn(OPC_VINSGR2VR_D, vd, j, uk1)); +} + +/* Emits the `vpickve2gr.b d, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickve2gr_b(TCGContext *s, TCGReg d, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_dvjuk4_insn(OPC_VPICKVE2GR_B, d, vj, uk4)); +} + +/* Emits the `vpickve2gr.h d, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickve2gr_h(TCGContext *s, TCGReg d, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_dvjuk3_insn(OPC_VPICKVE2GR_H, d, vj, uk3)); +} + +/* Emits the `vpickve2gr.w d, vj, uk2` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vpickve2gr_w(TCGContext *s, TCGReg d, TCGReg vj, uint32_t uk2) +{ + tcg_out32(s, encode_dvjuk2_insn(OPC_VPICKVE2GR_W, d, vj, uk2)); +} + +/* Emits the `vpickve2gr.d d, vj, uk1` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickve2gr_d(TCGContext *s, TCGReg d, TCGReg vj, uint32_t uk1) +{ + tcg_out32(s, encode_dvjuk1_insn(OPC_VPICKVE2GR_D, d, vj, uk1)); +} + +/* Emits the `vpickve2gr.bu d, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickve2gr_bu(TCGContext *s, TCGReg d, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_dvjuk4_insn(OPC_VPICKVE2GR_BU, d, vj, uk4)); +} + +/* Emits the `vpickve2gr.hu d, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickve2gr_hu(TCGContext *s, TCGReg d, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_dvjuk3_insn(OPC_VPICKVE2GR_HU, d, vj, uk3)); +} + +/* Emits the `vpickve2gr.wu d, vj, uk2` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickve2gr_wu(TCGContext *s, TCGReg d, TCGReg vj, uint32_t uk2) +{ + tcg_out32(s, encode_dvjuk2_insn(OPC_VPICKVE2GR_WU, d, vj, uk2)); +} + +/* Emits the `vpickve2gr.du d, vj, uk1` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpickve2gr_du(TCGContext *s, TCGReg d, TCGReg vj, uint32_t uk1) +{ + tcg_out32(s, encode_dvjuk1_insn(OPC_VPICKVE2GR_DU, d, vj, uk1)); +} + +/* Emits the `vreplvei.b vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplvei_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VREPLVEI_B, vd, vj, uk4)); +} + +/* Emits the `vreplvei.h vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplvei_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VREPLVEI_H, vd, vj, uk3)); +} + +/* Emits the `vreplvei.w vd, vj, uk2` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplvei_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk2) +{ + tcg_out32(s, encode_vdvjuk2_insn(OPC_VREPLVEI_W, vd, vj, uk2)); +} + +/* Emits the `vreplvei.d vd, vj, uk1` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vreplvei_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk1) +{ + tcg_out32(s, encode_vdvjuk1_insn(OPC_VREPLVEI_D, vd, vj, uk1)); +} + +/* Emits the `vsllwil.h.b vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsllwil_h_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VSLLWIL_H_B, vd, vj, uk3)); +} + +/* Emits the `vsllwil.w.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsllwil_w_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSLLWIL_W_H, vd, vj, uk4)); +} + +/* Emits the `vsllwil.d.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsllwil_d_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLLWIL_D_W, vd, vj, uk5)); +} + +/* Emits the `vextl.q.d vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vextl_q_d(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VEXTL_Q_D, vd, vj)); +} + +/* Emits the `vsllwil.hu.bu vd, vj, uk3` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vsllwil_hu_bu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VSLLWIL_HU_BU, vd, vj, uk3)); +} + +/* Emits the `vsllwil.wu.hu vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsllwil_wu_hu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSLLWIL_WU_HU, vd, vj, uk4)); +} + +/* Emits the `vsllwil.du.wu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsllwil_du_wu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLLWIL_DU_WU, vd, vj, uk5)); +} + +/* Emits the `vextl.qu.du vd, vj` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vextl_qu_du(TCGContext *s, TCGReg vd, TCGReg vj) +{ + tcg_out32(s, encode_vdvj_insn(OPC_VEXTL_QU_DU, vd, vj)); +} + +/* Emits the `vbitclri.b vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitclri_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VBITCLRI_B, vd, vj, uk3)); +} + +/* Emits the `vbitclri.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitclri_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VBITCLRI_H, vd, vj, uk4)); +} + +/* Emits the `vbitclri.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitclri_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VBITCLRI_W, vd, vj, uk5)); +} + +/* Emits the `vbitclri.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitclri_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VBITCLRI_D, vd, vj, uk6)); +} + +/* Emits the `vbitseti.b vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitseti_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VBITSETI_B, vd, vj, uk3)); +} + +/* Emits the `vbitseti.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitseti_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VBITSETI_H, vd, vj, uk4)); +} + +/* Emits the `vbitseti.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitseti_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VBITSETI_W, vd, vj, uk5)); +} + +/* Emits the `vbitseti.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitseti_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VBITSETI_D, vd, vj, uk6)); +} + +/* Emits the `vbitrevi.b vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitrevi_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VBITREVI_B, vd, vj, uk3)); +} + +/* Emits the `vbitrevi.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitrevi_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VBITREVI_H, vd, vj, uk4)); +} + +/* Emits the `vbitrevi.w vd, vj, uk5` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vbitrevi_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VBITREVI_W, vd, vj, uk5)); +} + +/* Emits the `vbitrevi.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitrevi_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VBITREVI_D, vd, vj, uk6)); +} + +/* Emits the `vsat.b vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsat_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VSAT_B, vd, vj, uk3)); +} + +/* Emits the `vsat.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsat_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSAT_H, vd, vj, uk4)); +} + +/* Emits the `vsat.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsat_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSAT_W, vd, vj, uk5)); +} + +/* Emits the `vsat.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsat_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSAT_D, vd, vj, uk6)); +} + +/* Emits the `vsat.bu vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsat_bu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VSAT_BU, vd, vj, uk3)); +} + +/* Emits the `vsat.hu vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsat_hu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSAT_HU, vd, vj, uk4)); +} + +/* Emits the `vsat.wu vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsat_wu(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSAT_WU, vd, vj, uk5)); +} + +/* Emits the `vsat.du vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsat_du(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSAT_DU, vd, vj, uk6)); +} + +/* Emits the `vslli.b vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslli_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VSLLI_B, vd, vj, uk3)); +} + +/* Emits the `vslli.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslli_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSLLI_H, vd, vj, uk4)); +} + +/* Emits the `vslli.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslli_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSLLI_W, vd, vj, uk5)); +} + +/* Emits the `vslli.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vslli_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSLLI_D, vd, vj, uk6)); +} + +/* Emits the `vsrli.b vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrli_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VSRLI_B, vd, vj, uk3)); +} + +/* Emits the `vsrli.h vd, vj, uk4` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vsrli_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSRLI_H, vd, vj, uk4)); +} + +/* Emits the `vsrli.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrli_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSRLI_W, vd, vj, uk5)); +} + +/* Emits the `vsrli.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrli_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSRLI_D, vd, vj, uk6)); +} + +/* Emits the `vsrai.b vd, vj, uk3` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrai_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk3) +{ + tcg_out32(s, encode_vdvjuk3_insn(OPC_VSRAI_B, vd, vj, uk3)); +} + +/* Emits the `vsrai.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrai_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) { - tcg_out32(s, encode_sd10k16_insn(OPC_B, sd10k16)); + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSRAI_H, vd, vj, uk4)); } -/* Emits the `bl sd10k16` instruction. */ +/* Emits the `vsrai.w vd, vj, uk5` instruction. */ static void __attribute__((unused)) -tcg_out_opc_bl(TCGContext *s, int32_t sd10k16) +tcg_out_opc_vsrai_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) { - tcg_out32(s, encode_sd10k16_insn(OPC_BL, sd10k16)); + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSRAI_W, vd, vj, uk5)); } -/* Emits the `beq d, j, sk16` instruction. */ +/* Emits the `vsrai.d vd, vj, uk6` instruction. */ static void __attribute__((unused)) -tcg_out_opc_beq(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +tcg_out_opc_vsrai_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) { - tcg_out32(s, encode_djsk16_insn(OPC_BEQ, d, j, sk16)); + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSRAI_D, vd, vj, uk6)); } -/* Emits the `bne d, j, sk16` instruction. */ +/* Emits the `vsrlni.b.h vd, vj, uk4` instruction. */ static void __attribute__((unused)) -tcg_out_opc_bne(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +tcg_out_opc_vsrlni_b_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) { - tcg_out32(s, encode_djsk16_insn(OPC_BNE, d, j, sk16)); + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSRLNI_B_H, vd, vj, uk4)); } -/* Emits the `bgt d, j, sk16` instruction. */ +/* Emits the `vsrlni.h.w vd, vj, uk5` instruction. */ static void __attribute__((unused)) -tcg_out_opc_bgt(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +tcg_out_opc_vsrlni_h_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) { - tcg_out32(s, encode_djsk16_insn(OPC_BGT, d, j, sk16)); + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSRLNI_H_W, vd, vj, uk5)); } -/* Emits the `ble d, j, sk16` instruction. */ +/* Emits the `vsrlni.w.d vd, vj, uk6` instruction. */ static void __attribute__((unused)) -tcg_out_opc_ble(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +tcg_out_opc_vsrlni_w_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) { - tcg_out32(s, encode_djsk16_insn(OPC_BLE, d, j, sk16)); + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSRLNI_W_D, vd, vj, uk6)); } -/* Emits the `bgtu d, j, sk16` instruction. */ +/* Emits the `vsrlni.d.q vd, vj, uk7` instruction. 
*/ static void __attribute__((unused)) -tcg_out_opc_bgtu(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +tcg_out_opc_vsrlni_d_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) { - tcg_out32(s, encode_djsk16_insn(OPC_BGTU, d, j, sk16)); + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSRLNI_D_Q, vd, vj, uk7)); } -/* Emits the `bleu d, j, sk16` instruction. */ +/* Emits the `vsrlrni.b.h vd, vj, uk4` instruction. */ static void __attribute__((unused)) -tcg_out_opc_bleu(TCGContext *s, TCGReg d, TCGReg j, int32_t sk16) +tcg_out_opc_vsrlrni_b_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) { - tcg_out32(s, encode_djsk16_insn(OPC_BLEU, d, j, sk16)); + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSRLRNI_B_H, vd, vj, uk4)); +} + +/* Emits the `vsrlrni.h.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlrni_h_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSRLRNI_H_W, vd, vj, uk5)); +} + +/* Emits the `vsrlrni.w.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlrni_w_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSRLRNI_W_D, vd, vj, uk6)); +} + +/* Emits the `vsrlrni.d.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrlrni_d_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSRLRNI_D_Q, vd, vj, uk7)); +} + +/* Emits the `vssrlni.b.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlni_b_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSSRLNI_B_H, vd, vj, uk4)); +} + +/* Emits the `vssrlni.h.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlni_h_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSSRLNI_H_W, vd, vj, uk5)); +} + +/* Emits the `vssrlni.w.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlni_w_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSSRLNI_W_D, vd, vj, uk6)); +} + +/* Emits the `vssrlni.d.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlni_d_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSSRLNI_D_Q, vd, vj, uk7)); +} + +/* Emits the `vssrlni.bu.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlni_bu_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSSRLNI_BU_H, vd, vj, uk4)); +} + +/* Emits the `vssrlni.hu.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlni_hu_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSSRLNI_HU_W, vd, vj, uk5)); +} + +/* Emits the `vssrlni.wu.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlni_wu_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSSRLNI_WU_D, vd, vj, uk6)); +} + +/* Emits the `vssrlni.du.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlni_du_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSSRLNI_DU_Q, vd, vj, uk7)); +} + +/* Emits the `vssrlrni.b.h vd, vj, uk4` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vssrlrni_b_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSSRLRNI_B_H, vd, vj, uk4)); +} + +/* Emits the `vssrlrni.h.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrni_h_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSSRLRNI_H_W, vd, vj, uk5)); +} + +/* Emits the `vssrlrni.w.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrni_w_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSSRLRNI_W_D, vd, vj, uk6)); +} + +/* Emits the `vssrlrni.d.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrni_d_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSSRLRNI_D_Q, vd, vj, uk7)); +} + +/* Emits the `vssrlrni.bu.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrni_bu_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSSRLRNI_BU_H, vd, vj, uk4)); +} + +/* Emits the `vssrlrni.hu.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrni_hu_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSSRLRNI_HU_W, vd, vj, uk5)); +} + +/* Emits the `vssrlrni.wu.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrni_wu_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSSRLRNI_WU_D, vd, vj, uk6)); +} + +/* Emits the `vssrlrni.du.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrlrni_du_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSSRLRNI_DU_Q, vd, vj, uk7)); +} + +/* Emits the `vsrani.b.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrani_b_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSRANI_B_H, vd, vj, uk4)); +} + +/* Emits the `vsrani.h.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrani_h_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSRANI_H_W, vd, vj, uk5)); +} + +/* Emits the `vsrani.w.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrani_w_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSRANI_W_D, vd, vj, uk6)); +} + +/* Emits the `vsrani.d.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrani_d_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSRANI_D_Q, vd, vj, uk7)); +} + +/* Emits the `vsrarni.b.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrarni_b_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSRARNI_B_H, vd, vj, uk4)); +} + +/* Emits the `vsrarni.h.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrarni_h_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSRARNI_H_W, vd, vj, uk5)); +} + +/* Emits the `vsrarni.w.d vd, vj, uk6` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vsrarni_w_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSRARNI_W_D, vd, vj, uk6)); +} + +/* Emits the `vsrarni.d.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vsrarni_d_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSRARNI_D_Q, vd, vj, uk7)); +} + +/* Emits the `vssrani.b.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrani_b_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSSRANI_B_H, vd, vj, uk4)); +} + +/* Emits the `vssrani.h.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrani_h_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSSRANI_H_W, vd, vj, uk5)); +} + +/* Emits the `vssrani.w.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrani_w_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSSRANI_W_D, vd, vj, uk6)); +} + +/* Emits the `vssrani.d.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrani_d_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSSRANI_D_Q, vd, vj, uk7)); +} + +/* Emits the `vssrani.bu.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrani_bu_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSSRANI_BU_H, vd, vj, uk4)); +} + +/* Emits the `vssrani.hu.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrani_hu_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSSRANI_HU_W, vd, vj, uk5)); +} + +/* Emits the `vssrani.wu.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrani_wu_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSSRANI_WU_D, vd, vj, uk6)); +} + +/* Emits the `vssrani.du.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrani_du_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSSRANI_DU_Q, vd, vj, uk7)); +} + +/* Emits the `vssrarni.b.h vd, vj, uk4` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarni_b_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSSRARNI_B_H, vd, vj, uk4)); +} + +/* Emits the `vssrarni.h.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarni_h_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSSRARNI_H_W, vd, vj, uk5)); +} + +/* Emits the `vssrarni.w.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarni_w_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSSRARNI_W_D, vd, vj, uk6)); +} + +/* Emits the `vssrarni.d.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarni_d_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSSRARNI_D_Q, vd, vj, uk7)); +} + +/* Emits the `vssrarni.bu.h vd, vj, uk4` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vssrarni_bu_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk4) +{ + tcg_out32(s, encode_vdvjuk4_insn(OPC_VSSRARNI_BU_H, vd, vj, uk4)); +} + +/* Emits the `vssrarni.hu.w vd, vj, uk5` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarni_hu_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk5) +{ + tcg_out32(s, encode_vdvjuk5_insn(OPC_VSSRARNI_HU_W, vd, vj, uk5)); +} + +/* Emits the `vssrarni.wu.d vd, vj, uk6` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarni_wu_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk6) +{ + tcg_out32(s, encode_vdvjuk6_insn(OPC_VSSRARNI_WU_D, vd, vj, uk6)); +} + +/* Emits the `vssrarni.du.q vd, vj, uk7` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vssrarni_du_q(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk7) +{ + tcg_out32(s, encode_vdvjuk7_insn(OPC_VSSRARNI_DU_Q, vd, vj, uk7)); +} + +/* Emits the `vextrins.d vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vextrins_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VEXTRINS_D, vd, vj, uk8)); +} + +/* Emits the `vextrins.w vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vextrins_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VEXTRINS_W, vd, vj, uk8)); +} + +/* Emits the `vextrins.h vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vextrins_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VEXTRINS_H, vd, vj, uk8)); +} + +/* Emits the `vextrins.b vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vextrins_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VEXTRINS_B, vd, vj, uk8)); +} + +/* Emits the `vshuf4i.b vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vshuf4i_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VSHUF4I_B, vd, vj, uk8)); +} + +/* Emits the `vshuf4i.h vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vshuf4i_h(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VSHUF4I_H, vd, vj, uk8)); +} + +/* Emits the `vshuf4i.w vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vshuf4i_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VSHUF4I_W, vd, vj, uk8)); +} + +/* Emits the `vshuf4i.d vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vshuf4i_d(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VSHUF4I_D, vd, vj, uk8)); +} + +/* Emits the `vbitseli.b vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vbitseli_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VBITSELI_B, vd, vj, uk8)); +} + +/* Emits the `vandi.b vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vandi_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VANDI_B, vd, vj, uk8)); +} + +/* Emits the `vori.b vd, vj, uk8` instruction. 
*/ +static void __attribute__((unused)) +tcg_out_opc_vori_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VORI_B, vd, vj, uk8)); +} + +/* Emits the `vxori.b vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vxori_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VXORI_B, vd, vj, uk8)); +} + +/* Emits the `vnori.b vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vnori_b(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VNORI_B, vd, vj, uk8)); +} + +/* Emits the `vldi vd, sj13` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vldi(TCGContext *s, TCGReg vd, int32_t sj13) +{ + tcg_out32(s, encode_vdsj13_insn(OPC_VLDI, vd, sj13)); +} + +/* Emits the `vpermi.w vd, vj, uk8` instruction. */ +static void __attribute__((unused)) +tcg_out_opc_vpermi_w(TCGContext *s, TCGReg vd, TCGReg vj, uint32_t uk8) +{ + tcg_out32(s, encode_vdvjuk8_insn(OPC_VPERMI_W, vd, vj, uk8)); } /* End of generated code. */ From patchwork Fri Sep 1 09:30:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372314 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 77840CA0FE6 for ; Fri, 1 Sep 2023 09:36:16 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0X6-0004tX-2I; Fri, 01 Sep 2023 05:33:40 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0Wz-0004sh-QO for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:33 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0Ww-0000U8-Q3 for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:33 -0400 Received: from ls3a6000.. 
(unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id 3692344470; Fri, 1 Sep 2023 09:33:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560803; bh=POPbG8iynnddn4aO4cvMy+YJAe5Dh14UGlkpQXEGvnE=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=bfTwt5gGP4KTpvAXVbgtrA6uqBkPTM6pkQCC9gsp2Asoc705UkYon9T/gLxVnlGVe IolWfDpblhD/VsuZ/ysQfGZe65fsXHq7vjwD+ExdvMb65a9fRkis85Bq8jM1K7d3Z/ zBr7DhN4ii3zOrtu8RrMrYGNbwi9A0HA2WYVWc+E= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 02/14] tcg/loongarch64: Lower basic tcg vec ops to LSX Date: Fri, 1 Sep 2023 17:30:55 +0800 Message-ID: <20230901093258.942357-3-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org LSX support on host cpu is detected via hwcap. 
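For reference, that probe can be reproduced as a stand-alone program roughly as sketched below. This is only an illustrative sketch, not code from the series: the main() wrapper and messages are invented for the example, and the HWCAP_LOONGARCH_LSX fallback value assumes the bit assignment in current Linux UAPI headers. The patch itself performs the equivalent check on the hwcap value already available in tcg_target_init().

#include <stdio.h>
#include <sys/auxv.h>                 /* getauxval(), AT_HWCAP */

#ifndef HWCAP_LOONGARCH_LSX
#define HWCAP_LOONGARCH_LSX (1 << 4)  /* from Linux asm/hwcap.h; fallback for older headers */
#endif

int main(void)
{
    unsigned long hwcap = getauxval(AT_HWCAP);

    /* Same predicate the patch adds to tcg_target_init(). */
    if (hwcap & HWCAP_LOONGARCH_LSX) {
        puts("LSX reported by the kernel; the backend would set use_lsx_instructions = 1");
    } else {
        puts("LSX not reported; vector ops remain disabled");
    }
    return 0;
}

On hosts where the bit is absent, use_lsx_instructions stays 0, so TCG_TARGET_HAS_v128 (defined below as use_lsx_instructions) evaluates to false and the generic code never emits V128 ops for this backend.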
Lower the following ops to LSX: - dup_vec - dupi_vec - dupm_vec - ld_vec - st_vec Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target-con-set.h | 2 + tcg/loongarch64/tcg-target-con-str.h | 1 + tcg/loongarch64/tcg-target.c.inc | 219 ++++++++++++++++++++++++++- tcg/loongarch64/tcg-target.h | 38 ++++- tcg/loongarch64/tcg-target.opc.h | 12 ++ 5 files changed, 270 insertions(+), 2 deletions(-) create mode 100644 tcg/loongarch64/tcg-target.opc.h diff --git a/tcg/loongarch64/tcg-target-con-set.h b/tcg/loongarch64/tcg-target-con-set.h index c2bde44613..37b3f80bf9 100644 --- a/tcg/loongarch64/tcg-target-con-set.h +++ b/tcg/loongarch64/tcg-target-con-set.h @@ -17,7 +17,9 @@ C_O0_I1(r) C_O0_I2(rZ, r) C_O0_I2(rZ, rZ) +C_O0_I2(w, r) C_O1_I1(r, r) +C_O1_I1(w, r) C_O1_I2(r, r, rC) C_O1_I2(r, r, ri) C_O1_I2(r, r, rI) diff --git a/tcg/loongarch64/tcg-target-con-str.h b/tcg/loongarch64/tcg-target-con-str.h index 6e9ccca3ad..81b8d40278 100644 --- a/tcg/loongarch64/tcg-target-con-str.h +++ b/tcg/loongarch64/tcg-target-con-str.h @@ -14,6 +14,7 @@ * REGS(letter, register_mask) */ REGS('r', ALL_GENERAL_REGS) +REGS('w', ALL_VECTOR_REGS) /* * Define constraint letters for constants: diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index baf5fc3819..150278e112 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -32,6 +32,8 @@ #include "../tcg-ldst.c.inc" #include +bool use_lsx_instructions; + #ifdef CONFIG_DEBUG_TCG static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = { "zero", @@ -65,7 +67,39 @@ static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = { "s5", "s6", "s7", - "s8" + "s8", + "vr0", + "vr1", + "vr2", + "vr3", + "vr4", + "vr5", + "vr6", + "vr7", + "vr8", + "vr9", + "vr10", + "vr11", + "vr12", + "vr13", + "vr14", + "vr15", + "vr16", + "vr17", + "vr18", + "vr19", + "vr20", + "vr21", + "vr22", + "vr23", + "vr24", + "vr25", + "vr26", + "vr27", + "vr28", + "vr29", + "vr30", + "vr31", }; #endif @@ -102,6 +136,15 @@ static const int tcg_target_reg_alloc_order[] = { TCG_REG_A2, TCG_REG_A1, TCG_REG_A0, + + /* Vector registers */ + TCG_REG_V0, TCG_REG_V1, TCG_REG_V2, TCG_REG_V3, + TCG_REG_V4, TCG_REG_V5, TCG_REG_V6, TCG_REG_V7, + TCG_REG_V8, TCG_REG_V9, TCG_REG_V10, TCG_REG_V11, + TCG_REG_V12, TCG_REG_V13, TCG_REG_V14, TCG_REG_V15, + TCG_REG_V16, TCG_REG_V17, TCG_REG_V18, TCG_REG_V19, + TCG_REG_V20, TCG_REG_V21, TCG_REG_V22, TCG_REG_V23, + /* V24 - V31 are caller-saved, and skipped. 
*/ }; static const int tcg_target_call_iarg_regs[] = { @@ -135,6 +178,7 @@ static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot) #define TCG_CT_CONST_WSZ 0x2000 #define ALL_GENERAL_REGS MAKE_64BIT_MASK(0, 32) +#define ALL_VECTOR_REGS MAKE_64BIT_MASK(32, 32) static inline tcg_target_long sextreg(tcg_target_long val, int pos, int len) { @@ -1486,6 +1530,154 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, } } +static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece, + TCGReg rd, TCGReg rs) +{ + switch (vece) { + case MO_8: + tcg_out_opc_vreplgr2vr_b(s, rd, rs); + break; + case MO_16: + tcg_out_opc_vreplgr2vr_h(s, rd, rs); + break; + case MO_32: + tcg_out_opc_vreplgr2vr_w(s, rd, rs); + break; + case MO_64: + tcg_out_opc_vreplgr2vr_d(s, rd, rs); + break; + default: + g_assert_not_reached(); + } + return true; +} + +static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece, + TCGReg r, TCGReg base, intptr_t offset) +{ + /* Handle imm overflow and division (vldrepl.d imm is divided by 8) */ + if (offset < -0x800 || offset > 0x7ff || \ + (offset & ((1 << vece) - 1)) != 0) { + tcg_out_addi(s, TCG_TYPE_I64, TCG_REG_TMP0, base, offset); + base = TCG_REG_TMP0; + offset = 0; + } + offset >>= vece; + + switch (vece) { + case MO_8: + tcg_out_opc_vldrepl_b(s, r, base, offset); + break; + case MO_16: + tcg_out_opc_vldrepl_h(s, r, base, offset); + break; + case MO_32: + tcg_out_opc_vldrepl_w(s, r, base, offset); + break; + case MO_64: + tcg_out_opc_vldrepl_d(s, r, base, offset); + break; + default: + g_assert_not_reached(); + } + return true; +} + +static void tcg_out_dupi_vec(TCGContext *s, TCGType type, unsigned vece, + TCGReg rd, int64_t v64) +{ + /* Try vldi if imm can fit */ + int64_t value = sextract64(v64, 0, 8 << vece); + if (-0x200 <= value && value <= 0x1FF) { + uint32_t imm = (vece << 10) | ((uint32_t)v64 & 0x3FF); + tcg_out_opc_vldi(s, rd, imm); + return; + } + + /* TODO: vldi patterns when imm 12 is set */ + + /* Fallback to vreplgr2vr */ + tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP0, value); + switch (vece) { + case MO_8: + tcg_out_opc_vreplgr2vr_b(s, rd, TCG_REG_TMP0); + break; + case MO_16: + tcg_out_opc_vreplgr2vr_h(s, rd, TCG_REG_TMP0); + break; + case MO_32: + tcg_out_opc_vreplgr2vr_w(s, rd, TCG_REG_TMP0); + break; + case MO_64: + tcg_out_opc_vreplgr2vr_d(s, rd, TCG_REG_TMP0); + break; + default: + g_assert_not_reached(); + } +} + +static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, + unsigned vecl, unsigned vece, + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) +{ + TCGType type = vecl + TCG_TYPE_V64; + TCGArg a0, a1, a2; + TCGReg temp = TCG_REG_TMP0; + + a0 = args[0]; + a1 = args[1]; + a2 = args[2]; + + /* Currently only supports V128 */ + tcg_debug_assert(type == TCG_TYPE_V128); + + switch (opc) { + case INDEX_op_st_vec: + /* Try to fit vst imm */ + if (-0x800 <= a2 && a2 <= 0x7ff) { + tcg_out_opc_vst(s, a0, a1, a2); + } else { + tcg_out_movi(s, TCG_TYPE_I64, temp, a2); + tcg_out_opc_vstx(s, a0, a1, temp); + } + break; + case INDEX_op_ld_vec: + /* Try to fit vld imm */ + if (-0x800 <= a2 && a2 <= 0x7ff) { + tcg_out_opc_vld(s, a0, a1, a2); + } else { + tcg_out_movi(s, TCG_TYPE_I64, temp, a2); + tcg_out_opc_vldx(s, a0, a1, temp); + } + break; + case INDEX_op_dupm_vec: + tcg_out_dupm_vec(s, type, vece, a0, a1, a2); + break; + default: + g_assert_not_reached(); + } +} + +int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece) +{ + switch (opc) { + case INDEX_op_ld_vec: + case 
INDEX_op_st_vec: + case INDEX_op_dup_vec: + case INDEX_op_dupm_vec: + return 1; + default: + return 0; + } +} + +void tcg_expand_vec_op(TCGOpcode opc, TCGType type, unsigned vece, + TCGArg a0, ...) +{ + g_assert_not_reached(); +} + static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) { switch (op) { @@ -1627,6 +1819,14 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_movcond_i64: return C_O1_I4(r, rZ, rJ, rZ, rZ); + case INDEX_op_ld_vec: + case INDEX_op_dupm_vec: + case INDEX_op_dup_vec: + return C_O1_I1(w, r); + + case INDEX_op_st_vec: + return C_O0_I2(w, r); + default: g_assert_not_reached(); } @@ -1708,6 +1908,10 @@ static void tcg_target_init(TCGContext *s) exit(EXIT_FAILURE); } + if (hwcap & HWCAP_LOONGARCH_LSX) { + use_lsx_instructions = 1; + } + tcg_target_available_regs[TCG_TYPE_I32] = ALL_GENERAL_REGS; tcg_target_available_regs[TCG_TYPE_I64] = ALL_GENERAL_REGS; @@ -1723,6 +1927,18 @@ static void tcg_target_init(TCGContext *s) tcg_regset_reset_reg(tcg_target_call_clobber_regs, TCG_REG_S8); tcg_regset_reset_reg(tcg_target_call_clobber_regs, TCG_REG_S9); + if (use_lsx_instructions) { + tcg_target_available_regs[TCG_TYPE_V128] = ALL_VECTOR_REGS; + tcg_regset_reset_reg(tcg_target_call_clobber_regs, TCG_REG_V24); + tcg_regset_reset_reg(tcg_target_call_clobber_regs, TCG_REG_V25); + tcg_regset_reset_reg(tcg_target_call_clobber_regs, TCG_REG_V26); + tcg_regset_reset_reg(tcg_target_call_clobber_regs, TCG_REG_V27); + tcg_regset_reset_reg(tcg_target_call_clobber_regs, TCG_REG_V28); + tcg_regset_reset_reg(tcg_target_call_clobber_regs, TCG_REG_V29); + tcg_regset_reset_reg(tcg_target_call_clobber_regs, TCG_REG_V30); + tcg_regset_reset_reg(tcg_target_call_clobber_regs, TCG_REG_V31); + } + s->reserved_regs = 0; tcg_regset_set_reg(s->reserved_regs, TCG_REG_ZERO); tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP0); @@ -1731,6 +1947,7 @@ static void tcg_target_init(TCGContext *s) tcg_regset_set_reg(s->reserved_regs, TCG_REG_SP); tcg_regset_set_reg(s->reserved_regs, TCG_REG_TP); tcg_regset_set_reg(s->reserved_regs, TCG_REG_RESERVED); + tcg_regset_set_reg(s->reserved_regs, TCG_VEC_TMP0); } typedef struct { diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index 26f1aab780..2f27d05e0c 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -30,7 +30,7 @@ #define LOONGARCH_TCG_TARGET_H #define TCG_TARGET_INSN_UNIT_SIZE 4 -#define TCG_TARGET_NB_REGS 32 +#define TCG_TARGET_NB_REGS 64 #define MAX_CODE_GEN_BUFFER_SIZE ((size_t)-1) @@ -68,13 +68,25 @@ typedef enum { TCG_REG_S7, TCG_REG_S8, + TCG_REG_V0 = 32, TCG_REG_V1, TCG_REG_V2, TCG_REG_V3, + TCG_REG_V4, TCG_REG_V5, TCG_REG_V6, TCG_REG_V7, + TCG_REG_V8, TCG_REG_V9, TCG_REG_V10, TCG_REG_V11, + TCG_REG_V12, TCG_REG_V13, TCG_REG_V14, TCG_REG_V15, + TCG_REG_V16, TCG_REG_V17, TCG_REG_V18, TCG_REG_V19, + TCG_REG_V20, TCG_REG_V21, TCG_REG_V22, TCG_REG_V23, + TCG_REG_V24, TCG_REG_V25, TCG_REG_V26, TCG_REG_V27, + TCG_REG_V28, TCG_REG_V29, TCG_REG_V30, TCG_REG_V31, + /* aliases */ TCG_AREG0 = TCG_REG_S0, TCG_REG_TMP0 = TCG_REG_T8, TCG_REG_TMP1 = TCG_REG_T7, TCG_REG_TMP2 = TCG_REG_T6, + TCG_VEC_TMP0 = TCG_REG_V23, } TCGReg; +extern bool use_lsx_instructions; + /* used for function call generation */ #define TCG_REG_CALL_STACK TCG_REG_SP #define TCG_TARGET_STACK_ALIGN 16 @@ -159,6 +171,30 @@ typedef enum { #define TCG_TARGET_HAS_mulsh_i64 1 #define TCG_TARGET_HAS_qemu_ldst_i128 0 +#define TCG_TARGET_HAS_v64 0 +#define TCG_TARGET_HAS_v128 use_lsx_instructions +#define 
TCG_TARGET_HAS_v256 0 + +#define TCG_TARGET_HAS_not_vec 0 +#define TCG_TARGET_HAS_neg_vec 0 +#define TCG_TARGET_HAS_abs_vec 0 +#define TCG_TARGET_HAS_andc_vec 0 +#define TCG_TARGET_HAS_orc_vec 0 +#define TCG_TARGET_HAS_nand_vec 0 +#define TCG_TARGET_HAS_nor_vec 0 +#define TCG_TARGET_HAS_eqv_vec 0 +#define TCG_TARGET_HAS_mul_vec 0 +#define TCG_TARGET_HAS_shi_vec 0 +#define TCG_TARGET_HAS_shs_vec 0 +#define TCG_TARGET_HAS_shv_vec 0 +#define TCG_TARGET_HAS_roti_vec 0 +#define TCG_TARGET_HAS_rots_vec 0 +#define TCG_TARGET_HAS_rotv_vec 0 +#define TCG_TARGET_HAS_sat_vec 0 +#define TCG_TARGET_HAS_minmax_vec 0 +#define TCG_TARGET_HAS_bitsel_vec 0 +#define TCG_TARGET_HAS_cmpsel_vec 0 + #define TCG_TARGET_DEFAULT_MO (0) #define TCG_TARGET_NEED_LDST_LABELS diff --git a/tcg/loongarch64/tcg-target.opc.h b/tcg/loongarch64/tcg-target.opc.h new file mode 100644 index 0000000000..fd1a40b7fd --- /dev/null +++ b/tcg/loongarch64/tcg-target.opc.h @@ -0,0 +1,12 @@ +/* + * Copyright (c) 2023 Jiajie Chen + * + * This work is licensed under the terms of the GNU GPL, version 2 or + * (at your option) any later version. + * + * See the COPYING file in the top-level directory for details. + * + * Target-specific opcodes for host vector expansion. These will be + * emitted by tcg_expand_vec_op. For those familiar with GCC internals, + * consider these to be UNSPEC with names. + */ From patchwork Fri Sep 1 09:30:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372310 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EA035CA0FE6 for ; Fri, 1 Sep 2023 09:35:53 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0Wy-0004rs-3A; Fri, 01 Sep 2023 05:33:32 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0Ww-0004rX-JG for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:30 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0Wu-0000UJ-6j for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:30 -0400 Received: from ls3a6000.. 
(unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id 7F7484446A; Fri, 1 Sep 2023 09:33:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560806; bh=yQEuyRABN8HNZwOSJokZLdDtsaQ4YuvR5LSzArBXPJU=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=KYjnpJstlXp3SYvNr4CsJF7vedE1SRXcQKSJBDDVtiAP4LMhex9KZSJbqeUy2nSvc W6+L0EbYpRdCDRmKGOGnqiL8swT+ktDOim2Z7X/oExf/3VZpaFQqD3iwZx15A3vwyS CqFyceVftqsyc1SYOYOMCDZrqzCrdWtzoEymx2C4= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 03/14] tcg/loongarch64: Lower cmp_vec to vseq/vsle/vslt Date: Fri, 1 Sep 2023 17:30:56 +0800 Message-ID: <20230901093258.942357-4-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target-con-set.h | 1 + tcg/loongarch64/tcg-target.c.inc | 60 ++++++++++++++++++++++++++++ 2 files changed, 61 insertions(+) diff --git a/tcg/loongarch64/tcg-target-con-set.h b/tcg/loongarch64/tcg-target-con-set.h index 37b3f80bf9..d04916db25 100644 --- a/tcg/loongarch64/tcg-target-con-set.h +++ b/tcg/loongarch64/tcg-target-con-set.h @@ -31,4 +31,5 @@ C_O1_I2(r, 0, rZ) C_O1_I2(r, rZ, ri) C_O1_I2(r, rZ, rJ) C_O1_I2(r, rZ, rZ) +C_O1_I2(w, w, wJ) C_O1_I4(r, rZ, rJ, rZ, rZ) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 150278e112..18fe5fc148 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1624,6 +1624,23 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, TCGType type = vecl + TCG_TYPE_V64; TCGArg a0, a1, a2; TCGReg temp = TCG_REG_TMP0; + TCGReg temp_vec = TCG_VEC_TMP0; + + static const LoongArchInsn cmp_vec_insn[16][4] = { + [TCG_COND_EQ] = {OPC_VSEQ_B, OPC_VSEQ_H, OPC_VSEQ_W, OPC_VSEQ_D}, + [TCG_COND_LE] = {OPC_VSLE_B, OPC_VSLE_H, OPC_VSLE_W, OPC_VSLE_D}, + [TCG_COND_LEU] = {OPC_VSLE_BU, OPC_VSLE_HU, OPC_VSLE_WU, OPC_VSLE_DU}, + [TCG_COND_LT] = {OPC_VSLT_B, OPC_VSLT_H, OPC_VSLT_W, OPC_VSLT_D}, + [TCG_COND_LTU] = {OPC_VSLT_BU, OPC_VSLT_HU, OPC_VSLT_WU, OPC_VSLT_DU}, + }; + static const LoongArchInsn cmp_vec_imm_insn[16][4] = { + [TCG_COND_EQ] = {OPC_VSEQI_B, OPC_VSEQI_H, OPC_VSEQI_W, OPC_VSEQI_D}, + [TCG_COND_LE] = {OPC_VSLEI_B, OPC_VSLEI_H, OPC_VSLEI_W, OPC_VSLEI_D}, + [TCG_COND_LEU] = {OPC_VSLEI_BU, OPC_VSLEI_HU, OPC_VSLEI_WU, OPC_VSLEI_DU}, + [TCG_COND_LT] = {OPC_VSLTI_B, OPC_VSLTI_H, OPC_VSLTI_W, OPC_VSLTI_D}, + [TCG_COND_LTU] = {OPC_VSLTI_BU, OPC_VSLTI_HU, OPC_VSLTI_WU, OPC_VSLTI_DU}, + }; + LoongArchInsn insn; a0 = 
args[0]; a1 = args[1]; @@ -1651,6 +1668,45 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, tcg_out_opc_vldx(s, a0, a1, temp); } break; + case INDEX_op_cmp_vec: + TCGCond cond = args[3]; + if (const_args[2]) { + /* + * cmp_vec dest, src, value + * Try vseqi/vslei/vslti + */ + int64_t value = sextract64(a2, 0, 8 << vece); + if ((cond == TCG_COND_EQ || cond == TCG_COND_LE || \ + cond == TCG_COND_LT) && (-0x10 <= value && value <= 0x0f)) { + tcg_out32(s, encode_vdvjsk5_insn(cmp_vec_imm_insn[cond][vece], \ + a0, a1, value)); + break; + } else if ((cond == TCG_COND_LEU || cond == TCG_COND_LTU) && + (0x00 <= value && value <= 0x1f)) { + tcg_out32(s, encode_vdvjuk5_insn(cmp_vec_imm_insn[cond][vece], \ + a0, a1, value)); + break; + } + + /* + * Fallback to: + * dupi_vec temp, a2 + * cmp_vec a0, a1, temp, cond + */ + tcg_out_dupi_vec(s, type, vece, temp_vec, a2); + a2 = temp_vec; + } + + insn = cmp_vec_insn[cond][vece]; + if (insn == 0) { + TCGArg t; + t = a1, a1 = a2, a2 = t; + cond = tcg_swap_cond(cond); + insn = cmp_vec_insn[cond][vece]; + tcg_debug_assert(insn != 0); + } + tcg_out32(s, encode_vdvjvk_insn(insn, a0, a1, a2)); + break; case INDEX_op_dupm_vec: tcg_out_dupm_vec(s, type, vece, a0, a1, a2); break; @@ -1666,6 +1722,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece) case INDEX_op_st_vec: case INDEX_op_dup_vec: case INDEX_op_dupm_vec: + case INDEX_op_cmp_vec: return 1; default: return 0; @@ -1827,6 +1884,9 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_st_vec: return C_O0_I2(w, r); + case INDEX_op_cmp_vec: + return C_O1_I2(w, w, wJ); + default: g_assert_not_reached(); } From patchwork Fri Sep 1 09:30:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372318 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A3EBACA0FE1 for ; Fri, 1 Sep 2023 09:36:48 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0X1-0004ss-Ct; Fri, 01 Sep 2023 05:33:35 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0Wy-0004sA-EP for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:32 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0Wv-0000UZ-E9 for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:32 -0400 Received: from ls3a6000.. 
(unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id BD68B44475; Fri, 1 Sep 2023 09:33:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560808; bh=EsD/tXi2CySNOzA2DLYuIZViuY9cZa1gQv+dQ/qSojQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=ssnRkUmE0gQsqWyY3ATOmWOvO6a1h8urER4lK9cWv2JwXdWdTGFPb8lIBbNoz1tQT Fsh8lJckNDR2PxDcJTmioRw6qIV0uASEYGMwEQNmiZjEd6sAzbRAbLRP3g0cIkNRnK ybuAydWyLmDDGHD1Yqwsn8PXmjo+NAwUsTJkOwH0= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 04/14] tcg/loongarch64: Lower add/sub_vec to vadd/vsub Date: Fri, 1 Sep 2023 17:30:57 +0800 Message-ID: <20230901093258.942357-5-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Lower the following ops: - add_vec - sub_vec Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target-con-set.h | 1 + tcg/loongarch64/tcg-target.c.inc | 58 ++++++++++++++++++++++++++++ 2 files changed, 59 insertions(+) diff --git a/tcg/loongarch64/tcg-target-con-set.h b/tcg/loongarch64/tcg-target-con-set.h index d04916db25..eaa015e813 100644 --- a/tcg/loongarch64/tcg-target-con-set.h +++ b/tcg/loongarch64/tcg-target-con-set.h @@ -31,5 +31,6 @@ C_O1_I2(r, 0, rZ) C_O1_I2(r, rZ, ri) C_O1_I2(r, rZ, rJ) C_O1_I2(r, rZ, rZ) +C_O1_I2(w, w, wi) C_O1_I2(w, w, wJ) C_O1_I4(r, rZ, rJ, rZ, rZ) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 18fe5fc148..555080f2b0 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1641,6 +1641,18 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, [TCG_COND_LTU] = {OPC_VSLTI_BU, OPC_VSLTI_HU, OPC_VSLTI_WU, OPC_VSLTI_DU}, }; LoongArchInsn insn; + static const LoongArchInsn add_vec_insn[4] = { + OPC_VADD_B, OPC_VADD_H, OPC_VADD_W, OPC_VADD_D + }; + static const LoongArchInsn add_vec_imm_insn[4] = { + OPC_VADDI_BU, OPC_VADDI_HU, OPC_VADDI_WU, OPC_VADDI_DU + }; + static const LoongArchInsn sub_vec_insn[4] = { + OPC_VSUB_B, OPC_VSUB_H, OPC_VSUB_W, OPC_VSUB_D + }; + static const LoongArchInsn sub_vec_imm_insn[4] = { + OPC_VSUBI_BU, OPC_VSUBI_HU, OPC_VSUBI_WU, OPC_VSUBI_DU + }; a0 = args[0]; a1 = args[1]; @@ -1707,6 +1719,46 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, } tcg_out32(s, encode_vdvjvk_insn(insn, a0, a1, a2)); break; + case INDEX_op_add_vec: + if (const_args[2]) { + int64_t value = sextract64(a2, 0, 8 << vece); + /* Try vaddi/vsubi */ + if (0 <= value && value <= 0x1f) { + tcg_out32(s, encode_vdvjuk5_insn(add_vec_imm_insn[vece], a0, \ + 
a1, value)); + break; + } else if (-0x1f <= value && value < 0) { + tcg_out32(s, encode_vdvjuk5_insn(sub_vec_imm_insn[vece], a0, \ + a1, -value)); + break; + } + + /* Fallback to dupi + vadd */ + tcg_out_dupi_vec(s, type, vece, temp_vec, a2); + a2 = temp_vec; + } + tcg_out32(s, encode_vdvjvk_insn(add_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_sub_vec: + if (const_args[2]) { + int64_t value = sextract64(a2, 0, 8 << vece); + /* Try vaddi/vsubi */ + if (0 <= value && value <= 0x1f) { + tcg_out32(s, encode_vdvjuk5_insn(sub_vec_imm_insn[vece], a0, \ + a1, value)); + break; + } else if (-0x1f <= value && value < 0) { + tcg_out32(s, encode_vdvjuk5_insn(add_vec_imm_insn[vece], a0, \ + a1, -value)); + break; + } + + /* Fallback to dupi + vsub */ + tcg_out_dupi_vec(s, type, vece, temp_vec, a2); + a2 = temp_vec; + } + tcg_out32(s, encode_vdvjvk_insn(sub_vec_insn[vece], a0, a1, a2)); + break; case INDEX_op_dupm_vec: tcg_out_dupm_vec(s, type, vece, a0, a1, a2); break; @@ -1723,6 +1775,8 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece) case INDEX_op_dup_vec: case INDEX_op_dupm_vec: case INDEX_op_cmp_vec: + case INDEX_op_add_vec: + case INDEX_op_sub_vec: return 1; default: return 0; @@ -1887,6 +1941,10 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_cmp_vec: return C_O1_I2(w, w, wJ); + case INDEX_op_add_vec: + case INDEX_op_sub_vec: + return C_O1_I2(w, w, wi); + default: g_assert_not_reached(); } From patchwork Fri Sep 1 09:30:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372316 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 90F3FCA0FEA for ; Fri, 1 Sep 2023 09:36:16 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0X6-0004tc-1h; Fri, 01 Sep 2023 05:33:40 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0X0-0004sr-8c for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:35 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0Wx-0000V4-PZ for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:34 -0400 Received: from ls3a6000.. 
(unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id 2188844476; Fri, 1 Sep 2023 09:33:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560810; bh=fTIXSr+w9I29oIGkFt7DyS7LhclO9hmnGid6wqFpG1I=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=KjMLh0bcZGG+awpKEOuQyoe8Hq1/+NAGr6HL7btUZujJfcH8qEoLJNXcV3apW3n78 fNK7zix/zrfgx4CrTMSz8ep5M7dgMS0WhZYCSOz07I4WiK9OA97qIN1EtgBnPy8eUQ t+buUaRZc4p/6ctWFMGpuynI6X5KTnITeJBqcmqA= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 05/14] tcg/loongarch64: Lower vector bitwise operations Date: Fri, 1 Sep 2023 17:30:58 +0800 Message-ID: <20230901093258.942357-6-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Lower the following ops: - and_vec - andc_vec - or_vec - orc_vec - xor_vec - nor_vec - not_vec Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target-con-set.h | 2 ++ tcg/loongarch64/tcg-target.c.inc | 44 ++++++++++++++++++++++++++++ tcg/loongarch64/tcg-target.h | 8 ++--- 3 files changed, 50 insertions(+), 4 deletions(-) diff --git a/tcg/loongarch64/tcg-target-con-set.h b/tcg/loongarch64/tcg-target-con-set.h index eaa015e813..13a7f3b5e2 100644 --- a/tcg/loongarch64/tcg-target-con-set.h +++ b/tcg/loongarch64/tcg-target-con-set.h @@ -20,6 +20,7 @@ C_O0_I2(rZ, rZ) C_O0_I2(w, r) C_O1_I1(r, r) C_O1_I1(w, r) +C_O1_I1(w, w) C_O1_I2(r, r, rC) C_O1_I2(r, r, ri) C_O1_I2(r, r, rI) @@ -31,6 +32,7 @@ C_O1_I2(r, 0, rZ) C_O1_I2(r, rZ, ri) C_O1_I2(r, rZ, rJ) C_O1_I2(r, rZ, rZ) +C_O1_I2(w, w, w) C_O1_I2(w, w, wi) C_O1_I2(w, w, wJ) C_O1_I4(r, rZ, rJ, rZ, rZ) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 555080f2b0..20e25dc490 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1680,6 +1680,32 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, tcg_out_opc_vldx(s, a0, a1, temp); } break; + case INDEX_op_and_vec: + tcg_out_opc_vand_v(s, a0, a1, a2); + break; + case INDEX_op_andc_vec: + /* + * vandn vd, vj, vk: vd = vk & ~vj + * andc_vec vd, vj, vk: vd = vj & ~vk + * vk and vk are swapped + */ + tcg_out_opc_vandn_v(s, a0, a2, a1); + break; + case INDEX_op_or_vec: + tcg_out_opc_vor_v(s, a0, a1, a2); + break; + case INDEX_op_orc_vec: + tcg_out_opc_vorn_v(s, a0, a1, a2); + break; + case INDEX_op_xor_vec: + tcg_out_opc_vxor_v(s, a0, a1, a2); + break; + case INDEX_op_nor_vec: + tcg_out_opc_vnor_v(s, a0, a1, a2); + break; + case INDEX_op_not_vec: + tcg_out_opc_vnor_v(s, a0, a1, a1); + break; case 
INDEX_op_cmp_vec: TCGCond cond = args[3]; if (const_args[2]) { @@ -1777,6 +1803,13 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece) case INDEX_op_cmp_vec: case INDEX_op_add_vec: case INDEX_op_sub_vec: + case INDEX_op_and_vec: + case INDEX_op_andc_vec: + case INDEX_op_or_vec: + case INDEX_op_orc_vec: + case INDEX_op_xor_vec: + case INDEX_op_nor_vec: + case INDEX_op_not_vec: return 1; default: return 0; @@ -1945,6 +1978,17 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_sub_vec: return C_O1_I2(w, w, wi); + case INDEX_op_and_vec: + case INDEX_op_andc_vec: + case INDEX_op_or_vec: + case INDEX_op_orc_vec: + case INDEX_op_xor_vec: + case INDEX_op_nor_vec: + return C_O1_I2(w, w, w); + + case INDEX_op_not_vec: + return C_O1_I1(w, w); + default: g_assert_not_reached(); } diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index 2f27d05e0c..bf72b26ca2 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -175,13 +175,13 @@ extern bool use_lsx_instructions; #define TCG_TARGET_HAS_v128 use_lsx_instructions #define TCG_TARGET_HAS_v256 0 -#define TCG_TARGET_HAS_not_vec 0 +#define TCG_TARGET_HAS_not_vec 1 #define TCG_TARGET_HAS_neg_vec 0 #define TCG_TARGET_HAS_abs_vec 0 -#define TCG_TARGET_HAS_andc_vec 0 -#define TCG_TARGET_HAS_orc_vec 0 +#define TCG_TARGET_HAS_andc_vec 1 +#define TCG_TARGET_HAS_orc_vec 1 #define TCG_TARGET_HAS_nand_vec 0 -#define TCG_TARGET_HAS_nor_vec 0 +#define TCG_TARGET_HAS_nor_vec 1 #define TCG_TARGET_HAS_eqv_vec 0 #define TCG_TARGET_HAS_mul_vec 0 #define TCG_TARGET_HAS_shi_vec 0 From patchwork Fri Sep 1 09:30:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372315 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 50584CA0FE8 for ; Fri, 1 Sep 2023 09:36:18 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0X7-0004ub-8k; Fri, 01 Sep 2023 05:33:41 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0X2-0004tD-7P for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:36 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0X0-0000VZ-2D for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:35 -0400 Received: from ls3a6000.. 
(unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id 687AF4446A; Fri, 1 Sep 2023 09:33:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560813; bh=vmMbf+2apOyNrTCOWDb9surYvgwbg9ezVXMTnWPVfUA=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=Qq7KJbBCpM+M9M5pmRIzEJcrXIfbGEcDsOsICNYuJE/rz+96cL4PEg6N9eEqr89SY 2hBV+LDajTvpxNXjsO3Oo0xwEVzO4bc+hon2/CAv2j4BOoJj2Ah0vidahnH5OwH9H/ Y3w0xCBsh8hvu5Dp0nRHRdotq7sjUaJV13R9j96g= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 06/14] tcg/loongarch64: Lower neg_vec to vneg Date: Fri, 1 Sep 2023 17:30:59 +0800 Message-ID: <20230901093258.942357-7-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target.c.inc | 8 ++++++++ tcg/loongarch64/tcg-target.h | 2 +- 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 20e25dc490..16bcc2cf1b 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1653,6 +1653,9 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, static const LoongArchInsn sub_vec_imm_insn[4] = { OPC_VSUBI_BU, OPC_VSUBI_HU, OPC_VSUBI_WU, OPC_VSUBI_DU }; + static const LoongArchInsn neg_vec_insn[4] = { + OPC_VNEG_B, OPC_VNEG_H, OPC_VNEG_W, OPC_VNEG_D + }; a0 = args[0]; a1 = args[1]; @@ -1785,6 +1788,9 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, } tcg_out32(s, encode_vdvjvk_insn(sub_vec_insn[vece], a0, a1, a2)); break; + case INDEX_op_neg_vec: + tcg_out32(s, encode_vdvj_insn(neg_vec_insn[vece], a0, a1)); + break; case INDEX_op_dupm_vec: tcg_out_dupm_vec(s, type, vece, a0, a1, a2); break; @@ -1810,6 +1816,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece) case INDEX_op_xor_vec: case INDEX_op_nor_vec: case INDEX_op_not_vec: + case INDEX_op_neg_vec: return 1; default: return 0; @@ -1987,6 +1994,7 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) return C_O1_I2(w, w, w); case INDEX_op_not_vec: + case INDEX_op_neg_vec: return C_O1_I1(w, w); default: diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index bf72b26ca2..c992c4b297 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -176,7 +176,7 @@ extern bool use_lsx_instructions; #define TCG_TARGET_HAS_v256 0 #define TCG_TARGET_HAS_not_vec 1 -#define TCG_TARGET_HAS_neg_vec 0 +#define TCG_TARGET_HAS_neg_vec 1 #define TCG_TARGET_HAS_abs_vec 0 
#define TCG_TARGET_HAS_andc_vec 1 #define TCG_TARGET_HAS_orc_vec 1 From patchwork Fri Sep 1 09:31:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372320 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 65730CA0FE6 for ; Fri, 1 Sep 2023 09:37:02 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0XA-0004v1-6p; Fri, 01 Sep 2023 05:33:44 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0X4-0004tT-If for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:38 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0X2-0000Vw-CF for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:38 -0400 Received: from ls3a6000.. (unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id ABAAA4446C; Fri, 1 Sep 2023 09:33:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560815; bh=a73GfEeswoFW49d7u3KhdzVhsFQPnC+rr4/0EEEgT+0=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=M1kOXRd1IfjNzGV1Ff6pYrNKeGjq1GSPpPXv7YtWFEOvlqAwtgFAqz/SE196CRxBi 8JaZtD3iC1fzXKanloOn020FQ6AzuNFE7HkGA/BoP08PoXUD14O6jqHoIAQgX+Y3p7 wNPTD3FxVkjXqM9gC86nk2PumyWLbXippbVRiWrc= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 07/14] tcg/loongarch64: Lower mul_vec to vmul Date: Fri, 1 Sep 2023 17:31:00 +0800 Message-ID: <20230901093258.942357-8-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target.c.inc | 8 ++++++++ tcg/loongarch64/tcg-target.h | 2 +- 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 16bcc2cf1b..07c030b262 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1656,6 +1656,9 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, static const LoongArchInsn neg_vec_insn[4] = { OPC_VNEG_B, 
OPC_VNEG_H, OPC_VNEG_W, OPC_VNEG_D }; + static const LoongArchInsn mul_vec_insn[4] = { + OPC_VMUL_B, OPC_VMUL_H, OPC_VMUL_W, OPC_VMUL_D + }; a0 = args[0]; a1 = args[1]; @@ -1791,6 +1794,9 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, case INDEX_op_neg_vec: tcg_out32(s, encode_vdvj_insn(neg_vec_insn[vece], a0, a1)); break; + case INDEX_op_mul_vec: + tcg_out32(s, encode_vdvjvk_insn(mul_vec_insn[vece], a0, a1, a2)); + break; case INDEX_op_dupm_vec: tcg_out_dupm_vec(s, type, vece, a0, a1, a2); break; @@ -1817,6 +1823,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece) case INDEX_op_nor_vec: case INDEX_op_not_vec: case INDEX_op_neg_vec: + case INDEX_op_mul_vec: return 1; default: return 0; @@ -1991,6 +1998,7 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_orc_vec: case INDEX_op_xor_vec: case INDEX_op_nor_vec: + case INDEX_op_mul_vec: return C_O1_I2(w, w, w); case INDEX_op_not_vec: diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index c992c4b297..0880a2903d 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -183,7 +183,7 @@ extern bool use_lsx_instructions; #define TCG_TARGET_HAS_nand_vec 0 #define TCG_TARGET_HAS_nor_vec 1 #define TCG_TARGET_HAS_eqv_vec 0 -#define TCG_TARGET_HAS_mul_vec 0 +#define TCG_TARGET_HAS_mul_vec 1 #define TCG_TARGET_HAS_shi_vec 0 #define TCG_TARGET_HAS_shs_vec 0 #define TCG_TARGET_HAS_shv_vec 0 From patchwork Fri Sep 1 09:31:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372308 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E6675CA0FE8 for ; Fri, 1 Sep 2023 09:35:24 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0XA-0004vH-GQ; Fri, 01 Sep 2023 05:33:44 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0X6-0004uc-Vv for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:41 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0X4-0000WT-LF for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:40 -0400 Received: from ls3a6000.. 
(unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id EE73F44470; Fri, 1 Sep 2023 09:33:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560817; bh=TgETM48fZdO4+4P5iEegpJn36wekwM8numXthIESNGg=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=OV8GcFcB0UzPwkAoQX8yq2z19+WC7F9MrwnzJdNb5nLzXHGvNoC+SBAHnsL2JlwYa BZ5fKhhMN9OEAHN19u0uly4WYDtm3ufKtDudmfauKALNQXhOmmbzY00csFUyYFNMz2 UkfYSASXYxX0MTn3EBsGwh4PCrvPySIZ2G0mBn5o= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 08/14] tcg/loongarch64: Lower vector min max ops Date: Fri, 1 Sep 2023 17:31:01 +0800 Message-ID: <20230901093258.942357-9-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Lower the following ops: - smin_vec - smax_vec - umin_vec - umax_vec Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target.c.inc | 32 ++++++++++++++++++++++++++++++++ tcg/loongarch64/tcg-target.h | 2 +- 2 files changed, 33 insertions(+), 1 deletion(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 07c030b262..ad1fbf0339 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1659,6 +1659,18 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, static const LoongArchInsn mul_vec_insn[4] = { OPC_VMUL_B, OPC_VMUL_H, OPC_VMUL_W, OPC_VMUL_D }; + static const LoongArchInsn smin_vec_insn[4] = { + OPC_VMIN_B, OPC_VMIN_H, OPC_VMIN_W, OPC_VMIN_D + }; + static const LoongArchInsn umin_vec_insn[4] = { + OPC_VMIN_BU, OPC_VMIN_HU, OPC_VMIN_WU, OPC_VMIN_DU + }; + static const LoongArchInsn smax_vec_insn[4] = { + OPC_VMAX_B, OPC_VMAX_H, OPC_VMAX_W, OPC_VMAX_D + }; + static const LoongArchInsn umax_vec_insn[4] = { + OPC_VMAX_BU, OPC_VMAX_HU, OPC_VMAX_WU, OPC_VMAX_DU + }; a0 = args[0]; a1 = args[1]; @@ -1797,6 +1809,18 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, case INDEX_op_mul_vec: tcg_out32(s, encode_vdvjvk_insn(mul_vec_insn[vece], a0, a1, a2)); break; + case INDEX_op_smin_vec: + tcg_out32(s, encode_vdvjvk_insn(smin_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_smax_vec: + tcg_out32(s, encode_vdvjvk_insn(smax_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_umin_vec: + tcg_out32(s, encode_vdvjvk_insn(umin_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_umax_vec: + tcg_out32(s, encode_vdvjvk_insn(umax_vec_insn[vece], a0, a1, a2)); + break; case INDEX_op_dupm_vec: tcg_out_dupm_vec(s, type, vece, a0, a1, a2); break; @@ -1824,6 +1848,10 @@ int 
tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece) case INDEX_op_not_vec: case INDEX_op_neg_vec: case INDEX_op_mul_vec: + case INDEX_op_smin_vec: + case INDEX_op_smax_vec: + case INDEX_op_umin_vec: + case INDEX_op_umax_vec: return 1; default: return 0; @@ -1999,6 +2027,10 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_xor_vec: case INDEX_op_nor_vec: case INDEX_op_mul_vec: + case INDEX_op_smin_vec: + case INDEX_op_smax_vec: + case INDEX_op_umin_vec: + case INDEX_op_umax_vec: return C_O1_I2(w, w, w); case INDEX_op_not_vec: diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index 0880a2903d..2b81a06c89 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -191,7 +191,7 @@ extern bool use_lsx_instructions; #define TCG_TARGET_HAS_rots_vec 0 #define TCG_TARGET_HAS_rotv_vec 0 #define TCG_TARGET_HAS_sat_vec 0 -#define TCG_TARGET_HAS_minmax_vec 0 +#define TCG_TARGET_HAS_minmax_vec 1 #define TCG_TARGET_HAS_bitsel_vec 0 #define TCG_TARGET_HAS_cmpsel_vec 0 From patchwork Fri Sep 1 09:31:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372312 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 75AFECA0FE8 for ; Fri, 1 Sep 2023 09:36:12 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0XG-0004xX-IA; Fri, 01 Sep 2023 05:33:50 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0X9-0004v8-6Y for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:44 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0X6-0000Ws-T5 for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:42 -0400 Received: from ls3a6000.. 
(unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id 3B5BD44466; Fri, 1 Sep 2023 09:33:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560819; bh=GPLyXEUSJvfMF3V9B1zohWysXfCynWqFwU7aEyxm5gI=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=N1OsjuHvjZSK0zb/rw1cxbjFdtDsa97/brwcyouBN4MXbMeZ2m0WT5KaGHDtxAuxz wusDawzUEuFO55t5KexwUnwQ4WxnH7Ew/eLj75Emrxdn+HUP///cCXWpNWt7jK7s/c 5904MR7SUYFypC+Rys8REFq/awaBM5mopw0niUHY= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 09/14] tcg/loongarch64: Lower vector saturated ops Date: Fri, 1 Sep 2023 17:31:02 +0800 Message-ID: <20230901093258.942357-10-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Lower the following ops: - ssadd_vec - usadd_vec - sssub_vec - ussub_vec Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target.c.inc | 32 ++++++++++++++++++++++++++++++++ tcg/loongarch64/tcg-target.h | 2 +- 2 files changed, 33 insertions(+), 1 deletion(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index ad1fbf0339..1e587a82b1 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1671,6 +1671,18 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, static const LoongArchInsn umax_vec_insn[4] = { OPC_VMAX_BU, OPC_VMAX_HU, OPC_VMAX_WU, OPC_VMAX_DU }; + static const LoongArchInsn ssadd_vec_insn[4] = { + OPC_VSADD_B, OPC_VSADD_H, OPC_VSADD_W, OPC_VSADD_D + }; + static const LoongArchInsn usadd_vec_insn[4] = { + OPC_VSADD_BU, OPC_VSADD_HU, OPC_VSADD_WU, OPC_VSADD_DU + }; + static const LoongArchInsn sssub_vec_insn[4] = { + OPC_VSSUB_B, OPC_VSSUB_H, OPC_VSSUB_W, OPC_VSSUB_D + }; + static const LoongArchInsn ussub_vec_insn[4] = { + OPC_VSSUB_BU, OPC_VSSUB_HU, OPC_VSSUB_WU, OPC_VSSUB_DU + }; a0 = args[0]; a1 = args[1]; @@ -1821,6 +1833,18 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, case INDEX_op_umax_vec: tcg_out32(s, encode_vdvjvk_insn(umax_vec_insn[vece], a0, a1, a2)); break; + case INDEX_op_ssadd_vec: + tcg_out32(s, encode_vdvjvk_insn(ssadd_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_usadd_vec: + tcg_out32(s, encode_vdvjvk_insn(usadd_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_sssub_vec: + tcg_out32(s, encode_vdvjvk_insn(sssub_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_ussub_vec: + tcg_out32(s, encode_vdvjvk_insn(ussub_vec_insn[vece], a0, a1, a2)); + break; case INDEX_op_dupm_vec: tcg_out_dupm_vec(s, type, vece, a0, a1, a2); break; @@ 
-1852,6 +1876,10 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece) case INDEX_op_smax_vec: case INDEX_op_umin_vec: case INDEX_op_umax_vec: + case INDEX_op_ssadd_vec: + case INDEX_op_usadd_vec: + case INDEX_op_sssub_vec: + case INDEX_op_ussub_vec: return 1; default: return 0; @@ -2031,6 +2059,10 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_smax_vec: case INDEX_op_umin_vec: case INDEX_op_umax_vec: + case INDEX_op_ssadd_vec: + case INDEX_op_usadd_vec: + case INDEX_op_sssub_vec: + case INDEX_op_ussub_vec: return C_O1_I2(w, w, w); case INDEX_op_not_vec: diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index 2b81a06c89..72bfd0d440 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -190,7 +190,7 @@ extern bool use_lsx_instructions; #define TCG_TARGET_HAS_roti_vec 0 #define TCG_TARGET_HAS_rots_vec 0 #define TCG_TARGET_HAS_rotv_vec 0 -#define TCG_TARGET_HAS_sat_vec 0 +#define TCG_TARGET_HAS_sat_vec 1 #define TCG_TARGET_HAS_minmax_vec 1 #define TCG_TARGET_HAS_bitsel_vec 0 #define TCG_TARGET_HAS_cmpsel_vec 0 From patchwork Fri Sep 1 09:31:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372311 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C559BCA0FE1 for ; Fri, 1 Sep 2023 09:35:58 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0XG-0004xW-Hd; Fri, 01 Sep 2023 05:33:50 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0XC-0004vZ-Hy for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:48 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0X9-0000XD-5d for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:45 -0400 Received: from ls3a6000.. 
(unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id 7E6564446A; Fri, 1 Sep 2023 09:33:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560822; bh=Y18772w/3beRRXmWjwYq4KrgkYlQjM6Mt6d8VsKmiOc=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=SOONXfBPvdCAyh/LzH1zM2TxsO9Mj+Jh+1irQVE5SoXZdEq+mzdz4dBhsGSRjjxSY afAkI9Tf565smnu3c2V5hm5oDnhGvb2vBKWP0vxfV3AK5w0n+4+T023vuu8o7qWzyc a7GCODw7uiAJRGfW5usaaPF60pMB1HYYXlJtivTM= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 10/14] tcg/loongarch64: Lower vector shift vector ops Date: Fri, 1 Sep 2023 17:31:03 +0800 Message-ID: <20230901093258.942357-11-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Lower the following ops: - shlv_vec - shrv_vec - sarv_vec Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target.c.inc | 24 ++++++++++++++++++++++++ tcg/loongarch64/tcg-target.h | 2 +- 2 files changed, 25 insertions(+), 1 deletion(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 1e587a82b1..9f02805c4b 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1683,6 +1683,15 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, static const LoongArchInsn ussub_vec_insn[4] = { OPC_VSSUB_BU, OPC_VSSUB_HU, OPC_VSSUB_WU, OPC_VSSUB_DU }; + static const LoongArchInsn shlv_vec_insn[4] = { + OPC_VSLL_B, OPC_VSLL_H, OPC_VSLL_W, OPC_VSLL_D + }; + static const LoongArchInsn shrv_vec_insn[4] = { + OPC_VSRL_B, OPC_VSRL_H, OPC_VSRL_W, OPC_VSRL_D + }; + static const LoongArchInsn sarv_vec_insn[4] = { + OPC_VSRA_B, OPC_VSRA_H, OPC_VSRA_W, OPC_VSRA_D + }; a0 = args[0]; a1 = args[1]; @@ -1845,6 +1854,15 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, case INDEX_op_ussub_vec: tcg_out32(s, encode_vdvjvk_insn(ussub_vec_insn[vece], a0, a1, a2)); break; + case INDEX_op_shlv_vec: + tcg_out32(s, encode_vdvjvk_insn(shlv_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_shrv_vec: + tcg_out32(s, encode_vdvjvk_insn(shrv_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_sarv_vec: + tcg_out32(s, encode_vdvjvk_insn(sarv_vec_insn[vece], a0, a1, a2)); + break; case INDEX_op_dupm_vec: tcg_out_dupm_vec(s, type, vece, a0, a1, a2); break; @@ -1880,6 +1898,9 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece) case INDEX_op_usadd_vec: case INDEX_op_sssub_vec: case INDEX_op_ussub_vec: + case INDEX_op_shlv_vec: + case INDEX_op_shrv_vec: + case INDEX_op_sarv_vec: return 1; default: 
return 0; @@ -2063,6 +2084,9 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_usadd_vec: case INDEX_op_sssub_vec: case INDEX_op_ussub_vec: + case INDEX_op_shlv_vec: + case INDEX_op_shrv_vec: + case INDEX_op_sarv_vec: return C_O1_I2(w, w, w); case INDEX_op_not_vec: diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index 72bfd0d440..d27f3737ad 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -186,7 +186,7 @@ extern bool use_lsx_instructions; #define TCG_TARGET_HAS_mul_vec 1 #define TCG_TARGET_HAS_shi_vec 0 #define TCG_TARGET_HAS_shs_vec 0 -#define TCG_TARGET_HAS_shv_vec 0 +#define TCG_TARGET_HAS_shv_vec 1 #define TCG_TARGET_HAS_roti_vec 0 #define TCG_TARGET_HAS_rots_vec 0 #define TCG_TARGET_HAS_rotv_vec 0 From patchwork Fri Sep 1 09:31:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372319 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AE668CA0FE1 for ; Fri, 1 Sep 2023 09:36:51 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0XI-0004yv-Te; Fri, 01 Sep 2023 05:33:52 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0XE-0004vo-Ot for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:48 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0XC-0000XZ-D7 for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:48 -0400 Received: from ls3a6000.. 
(unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id C7D2A44470; Fri, 1 Sep 2023 09:33:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560824; bh=3FibXykoQy2JZjeKzeJIn2VJHHdqDNCY1WVZYfYrTRQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=S1H6+59YTh00w7j5fG4GFtU4JDAsZhFpaEwObU+Wz26T3nrubMbNgoobmxhZygki5 guuX+5XB+q4rOc5f/2G7QCB5xE50O7srpLFmvN9mBPyves59tOfwcA3brZDfib5B5Z 83VFaVZdIjyioqvLk1+nLUHJCgC4tV9Wvh/9QSM0= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 11/14] tcg/loongarch64: Lower bitsel_vec to vbitsel Date: Fri, 1 Sep 2023 17:31:04 +0800 Message-ID: <20230901093258.942357-12-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target-con-set.h | 1 + tcg/loongarch64/tcg-target.c.inc | 11 ++++++++++- tcg/loongarch64/tcg-target.h | 2 +- 3 files changed, 12 insertions(+), 2 deletions(-) diff --git a/tcg/loongarch64/tcg-target-con-set.h b/tcg/loongarch64/tcg-target-con-set.h index 13a7f3b5e2..fd2bd785e5 100644 --- a/tcg/loongarch64/tcg-target-con-set.h +++ b/tcg/loongarch64/tcg-target-con-set.h @@ -35,4 +35,5 @@ C_O1_I2(r, rZ, rZ) C_O1_I2(w, w, w) C_O1_I2(w, w, wi) C_O1_I2(w, w, wJ) +C_O1_I3(w, w, w, w) C_O1_I4(r, rZ, rJ, rZ, rZ) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 9f02805c4b..8de4c36396 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1622,7 +1622,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, const int const_args[TCG_MAX_OP_ARGS]) { TCGType type = vecl + TCG_TYPE_V64; - TCGArg a0, a1, a2; + TCGArg a0, a1, a2, a3; TCGReg temp = TCG_REG_TMP0; TCGReg temp_vec = TCG_VEC_TMP0; @@ -1696,6 +1696,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, a0 = args[0]; a1 = args[1]; a2 = args[2]; + a3 = args[3]; /* Currently only supports V128 */ tcg_debug_assert(type == TCG_TYPE_V128); @@ -1863,6 +1864,10 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, case INDEX_op_sarv_vec: tcg_out32(s, encode_vdvjvk_insn(sarv_vec_insn[vece], a0, a1, a2)); break; + case INDEX_op_bitsel_vec: + /* vbitsel vd, vj, vk, va = bitsel_vec vd, va, vk, vj */ + tcg_out_opc_vbitsel_v(s, a0, a3, a2, a1); + break; case INDEX_op_dupm_vec: tcg_out_dupm_vec(s, type, vece, a0, a1, a2); break; @@ -1901,6 +1906,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece) case INDEX_op_shlv_vec: case INDEX_op_shrv_vec: case INDEX_op_sarv_vec: + case 
INDEX_op_bitsel_vec: return 1; default: return 0; @@ -2093,6 +2099,9 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_neg_vec: return C_O1_I1(w, w); + case INDEX_op_bitsel_vec: + return C_O1_I3(w, w, w, w); + default: g_assert_not_reached(); } diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index d27f3737ad..c77672d92c 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -192,7 +192,7 @@ extern bool use_lsx_instructions; #define TCG_TARGET_HAS_rotv_vec 0 #define TCG_TARGET_HAS_sat_vec 1 #define TCG_TARGET_HAS_minmax_vec 1 -#define TCG_TARGET_HAS_bitsel_vec 0 +#define TCG_TARGET_HAS_bitsel_vec 1 #define TCG_TARGET_HAS_cmpsel_vec 0 #define TCG_TARGET_DEFAULT_MO (0) From patchwork Fri Sep 1 09:31:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372307 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8DCA1CA0FE1 for ; Fri, 1 Sep 2023 09:35:20 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0Xi-0005AR-Fx; Fri, 01 Sep 2023 05:34:18 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0XQ-0004zB-1r for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:34:04 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0XN-0000Xy-Rz for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:33:59 -0400 Received: from ls3a6000.. 
(unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id 364FA44466; Fri, 1 Sep 2023 09:33:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560826; bh=ABO0pxJTZv0/onz0hQ4gMaRHKh213gnyBeUdieYgf0c=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=Fv8EK3Ccj7C/3WTwidEQPnJADaaHP1NQ+9tRiXwTLzPDAsqGkFtmgUb7xf9fmR0Mz dZrvzatBkAl7C9tac9u5A2ab7cma0GCIXDmV1hOFww1aKkigqy+2GTSDxrsFH0ChCe IMegcLhNwjmHPSrxJ7sU+DT/x7+n5xeI9nGz0IiQ= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 12/14] tcg/loongarch64: Lower vector shift integer ops Date: Fri, 1 Sep 2023 17:31:05 +0800 Message-ID: <20230901093258.942357-13-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Lower the following ops: - shli_vec - shrv_vec - sarv_vec Signed-off-by: Jiajie Chen Reviewed-by: Richard Henderson --- tcg/loongarch64/tcg-target.c.inc | 21 +++++++++++++++++++++ tcg/loongarch64/tcg-target.h | 2 +- 2 files changed, 22 insertions(+), 1 deletion(-) diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 8de4c36396..ccb362205e 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1692,6 +1692,15 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, static const LoongArchInsn sarv_vec_insn[4] = { OPC_VSRA_B, OPC_VSRA_H, OPC_VSRA_W, OPC_VSRA_D }; + static const LoongArchInsn shli_vec_insn[4] = { + OPC_VSLLI_B, OPC_VSLLI_H, OPC_VSLLI_W, OPC_VSLLI_D + }; + static const LoongArchInsn shri_vec_insn[4] = { + OPC_VSRLI_B, OPC_VSRLI_H, OPC_VSRLI_W, OPC_VSRLI_D + }; + static const LoongArchInsn sari_vec_insn[4] = { + OPC_VSRAI_B, OPC_VSRAI_H, OPC_VSRAI_W, OPC_VSRAI_D + }; a0 = args[0]; a1 = args[1]; @@ -1864,6 +1873,15 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc, case INDEX_op_sarv_vec: tcg_out32(s, encode_vdvjvk_insn(sarv_vec_insn[vece], a0, a1, a2)); break; + case INDEX_op_shli_vec: + tcg_out32(s, encode_vdvjuk3_insn(shli_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_shri_vec: + tcg_out32(s, encode_vdvjuk3_insn(shri_vec_insn[vece], a0, a1, a2)); + break; + case INDEX_op_sari_vec: + tcg_out32(s, encode_vdvjuk3_insn(sari_vec_insn[vece], a0, a1, a2)); + break; case INDEX_op_bitsel_vec: /* vbitsel vd, vj, vk, va = bitsel_vec vd, va, vk, vj */ tcg_out_opc_vbitsel_v(s, a0, a3, a2, a1); @@ -2097,6 +2115,9 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op) case INDEX_op_not_vec: case INDEX_op_neg_vec: + case INDEX_op_shli_vec: + case INDEX_op_shri_vec: + case INDEX_op_sari_vec: return 
C_O1_I1(w, w); case INDEX_op_bitsel_vec: diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h index c77672d92c..b4dab03469 100644 --- a/tcg/loongarch64/tcg-target.h +++ b/tcg/loongarch64/tcg-target.h @@ -184,7 +184,7 @@ extern bool use_lsx_instructions; #define TCG_TARGET_HAS_nor_vec 1 #define TCG_TARGET_HAS_eqv_vec 0 #define TCG_TARGET_HAS_mul_vec 1 -#define TCG_TARGET_HAS_shi_vec 0 +#define TCG_TARGET_HAS_shi_vec 1 #define TCG_TARGET_HAS_shs_vec 0 #define TCG_TARGET_HAS_shv_vec 1 #define TCG_TARGET_HAS_roti_vec 0 From patchwork Fri Sep 1 09:31:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiajie Chen X-Patchwork-Id: 13372306 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 15532CA0FE1 for ; Fri, 1 Sep 2023 09:35:15 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qc0Xa-00056n-9J; Fri, 01 Sep 2023 05:34:12 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0XS-0004zL-EV for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:34:04 -0400 Received: from hognose1.porkbun.com ([35.82.102.206]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qc0XQ-0000Z7-8f for qemu-devel@nongnu.org; Fri, 01 Sep 2023 05:34:02 -0400 Received: from ls3a6000.. (unknown [223.72.40.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) (Authenticated sender: c@jia.je) by hognose1.porkbun.com (Postfix) with ESMTPSA id 7A1D84446A; Fri, 1 Sep 2023 09:33:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=jia.je; s=default; t=1693560829; bh=q+WZ4Rw7U9YsN2EF3ToDBguD5GPQCHXmpvcA14nXFvc=; h=From:To:Cc:Subject:Date:In-Reply-To:References; b=ChdOUmWzVrfqbW3+VJFqAAcVjzyiiWyIgPrXxdF/yTYBoVEnGImL04I8/4N3uETcT nmWD2ZXDZ0IvbLgJdqxvkwOyPERl4UBWEot2JZl1qx9zi9CfFg8LNckp2eVS1F5wKL Z5Bq9Ze4XGlvF2JrxmdcS2PsFpqLdVtVwSfN6d18= From: Jiajie Chen To: qemu-devel@nongnu.org Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name, Jiajie Chen Subject: [PATCH v2 13/14] tcg/loongarch64: Lower rotv_vec ops to LSX Date: Fri, 1 Sep 2023 17:31:06 +0800 Message-ID: <20230901093258.942357-14-c@jia.je> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20230901093258.942357-1-c@jia.je> References: <20230901093258.942357-1-c@jia.je> MIME-Version: 1.0 Received-SPF: pass client-ip=35.82.102.206; envelope-from=c@jia.je; helo=hognose1.porkbun.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Lower the following ops: - rotrv_vec - rotlv_vec 
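LSX has no rotate-left instruction, so rotlv_vec is emitted as a per-lane negate of the shift counts (vneg.{b,h,w,d}) into the temporary vector register, followed by vrotr.{b,h,w,d}. The snippet below is a standalone illustration of the identity this relies on, rotl(x, n) == rotr(x, (width - n) mod width), written in plain C for 32-bit lanes; rotl32/rotr32 are local helper names for the sketch only, it is not part of the patch and uses no QEMU helpers.

/* Minimal sketch: rotate-left expressed through rotate-right, 32-bit lanes. */
#include <assert.h>
#include <stdint.h>

static uint32_t rotr32(uint32_t x, unsigned n)
{
    n &= 31;
    return n ? (x >> n) | (x << (32 - n)) : x;
}

static uint32_t rotl32(uint32_t x, unsigned n)
{
    n &= 31;
    return n ? (x << n) | (x >> (32 - n)) : x;
}

int main(void)
{
    uint32_t x = 0x12345678u;
    for (unsigned n = 0; n < 32; n++) {
        /* (32 - n) & 31 is what the emitted vneg.w feeds into vrotr.w */
        assert(rotl32(x, n) == rotr32(x, (32u - n) & 31));
    }
    return 0;
}

The extra vneg costs one instruction per rotlv_vec, but it avoids needing a dedicated rotate-left lowering path.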
Signed-off-by: Jiajie Chen
Reviewed-by: Richard Henderson
---
 tcg/loongarch64/tcg-target.c.inc | 14 ++++++++++++++
 tcg/loongarch64/tcg-target.h     |  2 +-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index ccb362205e..6fe319a77e 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -1701,6 +1701,9 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
     static const LoongArchInsn sari_vec_insn[4] = {
         OPC_VSRAI_B, OPC_VSRAI_H, OPC_VSRAI_W, OPC_VSRAI_D
     };
+    static const LoongArchInsn rotrv_vec_insn[4] = {
+        OPC_VROTR_B, OPC_VROTR_H, OPC_VROTR_W, OPC_VROTR_D
+    };

     a0 = args[0];
     a1 = args[1];
@@ -1882,6 +1885,15 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
     case INDEX_op_sari_vec:
         tcg_out32(s, encode_vdvjuk3_insn(sari_vec_insn[vece], a0, a1, a2));
         break;
+    case INDEX_op_rotrv_vec:
+        tcg_out32(s, encode_vdvjvk_insn(rotrv_vec_insn[vece], a0, a1, a2));
+        break;
+    case INDEX_op_rotlv_vec:
+        /* rotlv_vec a1, a2 = rotrv_vec a1, -a2 */
+        tcg_out32(s, encode_vdvj_insn(neg_vec_insn[vece], temp_vec, a2));
+        tcg_out32(s, encode_vdvjvk_insn(rotrv_vec_insn[vece], a0, a1,
+                                        temp_vec));
+        break;
     case INDEX_op_bitsel_vec:
         /* vbitsel vd, vj, vk, va = bitsel_vec vd, va, vk, vj */
         tcg_out_opc_vbitsel_v(s, a0, a3, a2, a1);
@@ -2111,6 +2123,8 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
     case INDEX_op_shlv_vec:
     case INDEX_op_shrv_vec:
     case INDEX_op_sarv_vec:
+    case INDEX_op_rotrv_vec:
+    case INDEX_op_rotlv_vec:
         return C_O1_I2(w, w, w);

     case INDEX_op_not_vec:
diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h
index b4dab03469..f6eb3cf7a6 100644
--- a/tcg/loongarch64/tcg-target.h
+++ b/tcg/loongarch64/tcg-target.h
@@ -189,7 +189,7 @@ extern bool use_lsx_instructions;
 #define TCG_TARGET_HAS_shv_vec 1
 #define TCG_TARGET_HAS_roti_vec 0
 #define TCG_TARGET_HAS_rots_vec 0
-#define TCG_TARGET_HAS_rotv_vec 0
+#define TCG_TARGET_HAS_rotv_vec 1
 #define TCG_TARGET_HAS_sat_vec 1
 #define TCG_TARGET_HAS_minmax_vec 1
 #define TCG_TARGET_HAS_bitsel_vec 1

From patchwork Fri Sep 1 09:31:07 2023
X-Patchwork-Submitter: Jiajie Chen
X-Patchwork-Id: 13372309
From: Jiajie Chen
To: qemu-devel@nongnu.org
Cc: richard.henderson@linaro.org, gaosong@loongson.cn, git@xen0n.name,
 Jiajie Chen
Subject: [PATCH v2 14/14] tcg/loongarch64: Lower rotli_vec to vrotri
Date: Fri, 1 Sep 2023 17:31:07 +0800
Message-ID: <20230901093258.942357-15-c@jia.je>
In-Reply-To: <20230901093258.942357-1-c@jia.je>
References: <20230901093258.942357-1-c@jia.je>

Signed-off-by: Jiajie Chen
Reviewed-by: Richard Henderson
---
 tcg/loongarch64/tcg-target.c.inc | 21 +++++++++++++++++++++
 tcg/loongarch64/tcg-target.h     |  2 +-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index 6fe319a77e..c4e9e0309e 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -1894,6 +1894,26 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         tcg_out32(s, encode_vdvjvk_insn(rotrv_vec_insn[vece], a0, a1,
                                         temp_vec));
         break;
+    case INDEX_op_rotli_vec:
+        /* rotli_vec a1, a2 = rotri_vec a1, -a2 */
+        a2 = extract32(-a2, 0, 3 + vece);
+        switch (vece) {
+        case MO_8:
+            tcg_out_opc_vrotri_b(s, a0, a1, a2);
+            break;
+        case MO_16:
+            tcg_out_opc_vrotri_h(s, a0, a1, a2);
+            break;
+        case MO_32:
+            tcg_out_opc_vrotri_w(s, a0, a1, a2);
+            break;
+        case MO_64:
+            tcg_out_opc_vrotri_d(s, a0, a1, a2);
+            break;
+        default:
+            g_assert_not_reached();
+        }
+        break;
     case INDEX_op_bitsel_vec:
         /* vbitsel vd, vj, vk, va = bitsel_vec vd, va, vk, vj */
         tcg_out_opc_vbitsel_v(s, a0, a3, a2, a1);
@@ -2132,6 +2152,7 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
     case INDEX_op_shli_vec:
     case INDEX_op_shri_vec:
    case INDEX_op_sari_vec:
+    case INDEX_op_rotli_vec:
         return C_O1_I1(w, w);

     case INDEX_op_bitsel_vec:
diff --git a/tcg/loongarch64/tcg-target.h b/tcg/loongarch64/tcg-target.h
index f6eb3cf7a6..3dc2dbf800 100644
--- a/tcg/loongarch64/tcg-target.h
+++ b/tcg/loongarch64/tcg-target.h
@@ -187,7 +187,7 @@ extern bool use_lsx_instructions;
 #define TCG_TARGET_HAS_shi_vec 1
 #define TCG_TARGET_HAS_shs_vec 0
 #define TCG_TARGET_HAS_shv_vec 1
-#define TCG_TARGET_HAS_roti_vec 0
+#define TCG_TARGET_HAS_roti_vec 1
 #define TCG_TARGET_HAS_rots_vec 0
 #define TCG_TARGET_HAS_rotv_vec 1
 #define TCG_TARGET_HAS_sat_vec 1
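(Editorial aside, not part of the patch: the immediate form above uses the same negation identity as the variable form. The vrotri immediate field is 3 + vece bits wide, so extract32(-a2, 0, 3 + vece) reduces the negated left-rotate count modulo the element width. A small stand-alone C sketch of that reduction for MO_32; the helper is a local stand-in for the extract32 semantics used in the patch, not the QEMU function itself.)

#include <assert.h>
#include <stdint.h>

/* Local stand-in: keep `length` bits of `value` starting at bit `start`. */
static uint32_t extract32_sketch(uint32_t value, int start, int length)
{
    return (value >> start) & (~0u >> (32 - length));
}

int main(void)
{
    const int vece = 2;        /* MO_32: rotate field is 3 + vece = 5 bits */
    const unsigned width = 32; /* element width in bits */
    unsigned n = 4;            /* left-rotate count from rotli_vec */

    /* Negate and keep the low 5 bits: (-4) mod 32 == 28. */
    unsigned rotr_amount = extract32_sketch(0u - n, 0, 3 + vece);
    assert(rotr_amount == (width - n) % width);
    return 0;
}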