From patchwork Mon Mar 9 12:14:02 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 05/60] target/riscv: add vector stride load and store instructions
Date: Mon, 9 Mar 2020 20:14:02 +0800
Message-Id: <20200309121457.13719-6-zhiwei_liu@c-sky.com>
In-Reply-To: <20200309121457.13719-1-zhiwei_liu@c-sky.com>
References: <20200309121457.13719-1-zhiwei_liu@c-sky.com>

Vector strided operations access the first memory element at the base
address, and then access subsequent elements at address increments given
by the byte offset contained in the x register specified by rs2.

Vector unit-stride operations access elements stored contiguously in
memory, starting from the base effective address. Unit-stride can be
seen as a special case of strided operations.
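As an aside for readers, here is a minimal standalone sketch of the
effective-address rule described above. It is an illustration only and not
part of the patch; the function name stride_elem_addr is invented here,
while i, k, nf and msz mirror the helper parameters used later in
vector_helper.c.

#include <assert.h>
#include <stdint.h>

static uint64_t stride_elem_addr(uint64_t base, uint64_t stride,
                                 uint32_t i, uint32_t k, uint32_t msz)
{
    /* element i, segment field k, memory element size msz in bytes */
    return base + i * stride + k * msz;
}

int main(void)
{
    /* unit-stride is the strided case with stride = nf * msz */
    uint32_t nf = 1, msz = 4;
    assert(stride_elem_addr(0x1000, nf * msz, 3, 0, msz) == 0x100c);
    /* rs2 holds a byte stride of 16: elements land 16 bytes apart */
    assert(stride_elem_addr(0x1000, 16, 3, 0, msz) == 0x1030);
    return 0;
}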
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
 target/riscv/cpu.h                      |   6 +
 target/riscv/helper.h                   | 105 ++++++
 target/riscv/insn32.decode              |  32 ++
 target/riscv/insn_trans/trans_rvv.inc.c | 340 ++++++++++++++++++++
 target/riscv/translate.c                |   7 +
 target/riscv/vector_helper.c            | 406 ++++++++++++++++++++++++
 6 files changed, 896 insertions(+)

diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 11fc573168..a6761f3838 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -369,6 +369,12 @@ typedef CPURISCVState CPUArchState;
 typedef RISCVCPU ArchCPU;
 #include "exec/cpu-all.h"
 
+/* share data between vector helpers and decode code */
+FIELD(VDATA, MLEN, 0, 8)
+FIELD(VDATA, VM, 8, 1)
+FIELD(VDATA, LMUL, 9, 2)
+FIELD(VDATA, NF, 11, 4)
+
 FIELD(TB_FLAGS, VL_EQ_VLMAX, 2, 1)
 FIELD(TB_FLAGS, LMUL, 3, 2)
 FIELD(TB_FLAGS, SEW, 5, 3)
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 3c28c7e407..87dfa90609 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -78,3 +78,108 @@ DEF_HELPER_1(tlb_flush, void, env)
 #endif
 /* Vector functions */
 DEF_HELPER_3(vsetvl, tl, env, tl, tl)
+DEF_HELPER_5(vlb_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_b_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlw_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlw_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlw_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlw_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_b_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_b_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwu_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwu_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwu_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwu_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_b_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsw_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsw_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsw_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsw_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_b_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_6(vlsb_v_b, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsb_v_h, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsb_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsb_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsh_v_h, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsh_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsh_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsw_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsw_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlse_v_b, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlse_v_h, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlse_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlse_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsbu_v_b, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsbu_v_h, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsbu_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlsbu_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlshu_v_h, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlshu_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlshu_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlswu_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlswu_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vssb_v_b, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vssb_v_h, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vssb_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vssb_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vssh_v_h, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vssh_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vssh_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vssw_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vssw_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vsse_v_b, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vsse_v_h, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vsse_v_w, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vsse_v_d, void, ptr, ptr, tl, tl, env, i32)
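For orientation only (not part of the patch): QEMU's helper machinery in
include/exec/helper-proto.h expands each DEF_HELPER_5/DEF_HELPER_6 entry
above into a C prototype. Approximately, the first entry becomes:

void helper_vlb_v_b(void *vd, void *v0, target_ulong base,
                    CPURISCVState *env, uint32_t desc);

so "ptr" maps to void *, "tl" to target_ulong, "env" to the CPU state
pointer, and "i32" to the packed 32-bit descriptor built at translation
time.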
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 53340bdbc4..ef521152c5 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -25,6 +25,7 @@
 %sh10 20:10
 %csr 20:12
 %rm 12:3
+%nf 29:3 !function=ex_plus_1
 
 # immediates:
 %imm_i 20:s12
@@ -43,6 +44,8 @@
 &u imm rd
 &shift shamt rs1 rd
 &atomic aq rl rs2 rs1 rd
+&r2nfvm vm rd rs1 nf
+&rnfvm vm rd rs1 rs2 nf
 
 # Formats 32:
 @r ....... ..... ..... ... ..... ....... &r %rs2 %rs1 %rd
@@ -62,6 +65,8 @@
 @r_rm ....... ..... ..... ... ..... ....... %rs2 %rs1 %rm %rd
 @r2_rm ....... ..... ..... ... ..... ....... %rs1 %rm %rd
 @r2 ....... ..... ..... ... ..... ....... %rs1 %rd
+@r2_nfvm ... ... vm:1 ..... ..... ... ..... ....... &r2nfvm %nf %rs1 %rd
+@r_nfvm ... ... vm:1 ..... ..... ... ..... ....... &rnfvm %nf %rs2 %rs1 %rd
 @r2_zimm . zimm:11 ..... ... ..... ....... %rs1 %rd
 @hfence_gvma ....... ..... ..... ... ..... ....... %rs2 %rs1
@@ -210,5 +215,32 @@ fcvt_d_w 1101001 00000 ..... ... ..... 1010011 @r2_rm
 fcvt_d_wu 1101001 00001 ..... ... ..... 1010011 @r2_rm
 
 # *** RV32V Extension ***
+
+# *** Vector loads and stores are encoded within LOADFP/STORE-FP ***
+vlb_v ... 100 . 00000 ..... 000 ..... 0000111 @r2_nfvm
+vlh_v ... 100 . 00000 ..... 101 ..... 0000111 @r2_nfvm
+vlw_v ... 100 . 00000 ..... 110 ..... 0000111 @r2_nfvm
+vle_v ... 000 . 00000 ..... 111 ..... 0000111 @r2_nfvm
+vlbu_v ... 000 . 00000 ..... 000 ..... 0000111 @r2_nfvm
+vlhu_v ... 000 . 00000 ..... 101 ..... 0000111 @r2_nfvm
+vlwu_v ... 000 . 00000 ..... 110 ..... 0000111 @r2_nfvm
+vsb_v ... 000 . 00000 ..... 000 ..... 0100111 @r2_nfvm
+vsh_v ... 000 . 00000 ..... 101 ..... 0100111 @r2_nfvm
+vsw_v ... 000 . 00000 ..... 110 ..... 0100111 @r2_nfvm
+vse_v ... 000 . 00000 ..... 111 ..... 0100111 @r2_nfvm
+
+vlsb_v ... 110 . ..... ..... 000 ..... 0000111 @r_nfvm
+vlsh_v ... 110 . ..... ..... 101 ..... 0000111 @r_nfvm
+vlsw_v ... 110 . ..... ..... 110 ..... 0000111 @r_nfvm
+vlse_v ... 010 . ..... ..... 111 ..... 0000111 @r_nfvm
+vlsbu_v ... 010 . ..... ..... 000 ..... 0000111 @r_nfvm
+vlshu_v ... 010 . ..... ..... 101 ..... 0000111 @r_nfvm
+vlswu_v ... 010 . ..... ..... 110 ..... 0000111 @r_nfvm
+vssb_v ... 010 . ..... ..... 000 ..... 0100111 @r_nfvm
+vssh_v ... 010 . ..... ..... 101 ..... 0100111 @r_nfvm
+vssw_v ... 010 . ..... ..... 110 ..... 0100111 @r_nfvm
+vsse_v ... 010 . ..... ..... 111 ..... 0100111 @r_nfvm
+
+# *** new major opcode OP-V ***
 vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl 1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index da82c72bbf..d85f2aec68 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -15,6 +15,8 @@
  * You should have received a copy of the GNU General Public License along with
  * this program. If not, see <http://www.gnu.org/licenses/>.
  */
+#include "tcg/tcg-op-gvec.h"
+#include "tcg/tcg-gvec-desc.h"
 
 static bool trans_vsetvl(DisasContext *ctx, arg_vsetvl * a)
 {
@@ -67,3 +69,341 @@ static bool trans_vsetvli(DisasContext *ctx, arg_vsetvli * a)
     tcg_temp_free(dst);
     return true;
 }
+
+/* vector register offset from env */
+static uint32_t vreg_ofs(DisasContext *s, int reg)
+{
+    return offsetof(CPURISCVState, vreg) + reg * s->vlen / 8;
+}
+
+/* check functions */
+static bool vext_check_isa_ill(DisasContext *s, target_ulong isa)
+{
+    return !s->vill && ((s->misa & isa) == isa);
+}
+
+/*
+ * There are two rules checked here.
+ *
+ * 1. Vector register numbers are multiples of LMUL. (Section 3.2)
+ *
+ * 2. For all widening instructions, the destination LMUL value must also be
+ *    a supported LMUL value. (Section 11.2)
+ */
+static bool vext_check_reg(DisasContext *s, uint32_t reg, bool widen)
+{
+    /*
+     * The destination vector register group results are arranged as if both
+     * SEW and LMUL were at twice their current settings. (Section 11.2).
+     */
+    int legal = widen ? 2 << s->lmul : 1 << s->lmul;
+
+    return !((s->lmul == 0x3 && widen) || (reg % legal));
+}
+
+/*
+ * There are two rules checked here.
+ *
+ * 1. The destination vector register group for a masked vector instruction
+ *    can only overlap the source mask register (v0) when LMUL=1.
+ *    (Section 5.3)
+ *
+ * 2. In widening instructions and some other instructions, such as
+ *    vslideup.vx, there is no need to check whether LMUL=1.
+ */
+static bool vext_check_overlap_mask(DisasContext *s, uint32_t vd, bool vm,
+                                    bool force)
+{
+    return (vm != 0 || vd != 0) || (!force && (s->lmul == 0));
+}
+
+/* The LMUL setting must be such that LMUL * NFIELDS <= 8. (Section 7.8) */
+static bool vext_check_nf(DisasContext *s, uint32_t nf)
+{
+    return (1 << s->lmul) * nf <= 8;
+}
+
+/* common translation macro */
+#define GEN_VEXT_TRANS(NAME, SEQ, ARGTYPE, OP, CHECK)      \
+static bool trans_##NAME(DisasContext *s, arg_##ARGTYPE *a)\
+{                                                          \
+    if (CHECK(s, a)) {                                     \
+        return OP(s, a, SEQ);                              \
+    }                                                      \
+    return false;                                          \
+}
+
+/*
+ *** unit stride load and store
+ */
+typedef void gen_helper_ldst_us(TCGv_ptr, TCGv_ptr, TCGv,
+                                TCGv_env, TCGv_i32);
+
+static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data,
+                          gen_helper_ldst_us *fn, DisasContext *s)
+{
+    TCGv_ptr dest, mask;
+    TCGv base;
+    TCGv_i32 desc;
+
+    dest = tcg_temp_new_ptr();
+    mask = tcg_temp_new_ptr();
+    base = tcg_temp_new();
+
+    /*
+     * As simd_desc supports at most 256 bytes, while in this implementation
+     * the max vector group length is 2048 bytes, split the value into two
+     * parts.
+     *
+     * The first part is vlen in bytes, encoded in maxsz of simd_desc.
+     * The second part is lmul, encoded in data of simd_desc.
+     */
+    desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data));
+
+    gen_get_gpr(base, rs1);
+    tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
+    tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
+
+    fn(dest, mask, base, cpu_env, desc);
+
+    tcg_temp_free_ptr(dest);
+    tcg_temp_free_ptr(mask);
+    tcg_temp_free(base);
+    tcg_temp_free_i32(desc);
+    return true;
+}
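A short aside on the two-part descriptor split above. The sketch below is
an illustration only (not part of the patch and not QEMU's real simd_desc
bit layout); it just demonstrates the arithmetic by which the helper side
recombines the two parts into the vector group length.

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* hypothetical values: vlen = 128 bits, LMUL = 8 */
    uint32_t vlenb = 16;   /* part 1: vlen in bytes, carried in maxsz */
    uint32_t lmul = 3;     /* part 2: log2(LMUL), carried in data     */

    /* the helper recombines both parts into the group length in bytes */
    assert((vlenb << lmul) == 128);

    /* the largest group still fits the 256-byte maxsz limit: 256 << 3 */
    assert((256u << 3) == 2048);
    return 0;
}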
+
+static bool ld_us_op(DisasContext *s, arg_r2nfvm *a, uint8_t seq)
+{
+    uint32_t data = 0;
+    gen_helper_ldst_us *fn;
+    static gen_helper_ldst_us * const fns[2][7][4] = {
+        /* masked unit stride load */
+        { { gen_helper_vlb_v_b_mask,  gen_helper_vlb_v_h_mask,
+            gen_helper_vlb_v_w_mask,  gen_helper_vlb_v_d_mask },
+          { NULL,                     gen_helper_vlh_v_h_mask,
+            gen_helper_vlh_v_w_mask,  gen_helper_vlh_v_d_mask },
+          { NULL,                     NULL,
+            gen_helper_vlw_v_w_mask,  gen_helper_vlw_v_d_mask },
+          { gen_helper_vle_v_b_mask,  gen_helper_vle_v_h_mask,
+            gen_helper_vle_v_w_mask,  gen_helper_vle_v_d_mask },
+          { gen_helper_vlbu_v_b_mask, gen_helper_vlbu_v_h_mask,
+            gen_helper_vlbu_v_w_mask, gen_helper_vlbu_v_d_mask },
+          { NULL,                     gen_helper_vlhu_v_h_mask,
+            gen_helper_vlhu_v_w_mask, gen_helper_vlhu_v_d_mask },
+          { NULL,                     NULL,
+            gen_helper_vlwu_v_w_mask, gen_helper_vlwu_v_d_mask } },
+        /* unmasked unit stride load */
+        { { gen_helper_vlb_v_b,  gen_helper_vlb_v_h,
+            gen_helper_vlb_v_w,  gen_helper_vlb_v_d },
+          { NULL,                gen_helper_vlh_v_h,
+            gen_helper_vlh_v_w,  gen_helper_vlh_v_d },
+          { NULL,                NULL,
+            gen_helper_vlw_v_w,  gen_helper_vlw_v_d },
+          { gen_helper_vle_v_b,  gen_helper_vle_v_h,
+            gen_helper_vle_v_w,  gen_helper_vle_v_d },
+          { gen_helper_vlbu_v_b, gen_helper_vlbu_v_h,
+            gen_helper_vlbu_v_w, gen_helper_vlbu_v_d },
+          { NULL,                gen_helper_vlhu_v_h,
+            gen_helper_vlhu_v_w, gen_helper_vlhu_v_d },
+          { NULL,                NULL,
+            gen_helper_vlwu_v_w, gen_helper_vlwu_v_d } }
+    };
+
+    fn = fns[a->vm][seq][s->sew];
+    if (fn == NULL) {
+        return false;
+    }
+
+    data = FIELD_DP32(data, VDATA, MLEN, s->mlen);
+    data = FIELD_DP32(data, VDATA, VM, a->vm);
+    data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+    data = FIELD_DP32(data, VDATA, NF, a->nf);
+    return ldst_us_trans(a->rd, a->rs1, data, fn, s);
+}
+
+static bool ld_us_check(DisasContext *s, arg_r2nfvm* a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, false) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_nf(s, a->nf));
+}
+
+GEN_VEXT_TRANS(vlb_v, 0, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vlh_v, 1, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vlw_v, 2, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vle_v, 3, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vlbu_v, 4, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vlhu_v, 5, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vlwu_v, 6, r2nfvm, ld_us_op, ld_us_check)
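An illustration of the dispatch table shape used above (not part of the
patch): seq selects the instruction row (0=vlb ... 6=vlwu) and s->sew the
element-width column (0=8-bit ... 3=64-bit); a NULL slot marks a forbidden
memory-type/element-width pairing.

#include <assert.h>
#include <stddef.h>

typedef int dummy_fn(void);
static int f(void) { return 0; }

int main(void)
{
    dummy_fn *const fns[7][4] = {
        { f, f, f, f },       /* vlb  */
        { NULL, f, f, f },    /* vlh: no narrowing to SEW=8 */
        { NULL, NULL, f, f }, /* vlw  */
        { f, f, f, f },       /* vle  */
        { f, f, f, f },       /* vlbu */
        { NULL, f, f, f },    /* vlhu */
        { NULL, NULL, f, f }, /* vlwu */
    };
    /* vlh.v with SEW=8 is rejected: translation returns false */
    assert(fns[1][0] == NULL);
    return 0;
}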
+
+static bool st_us_op(DisasContext *s, arg_r2nfvm *a, uint8_t seq)
+{
+    uint32_t data = 0;
+    gen_helper_ldst_us *fn;
+    static gen_helper_ldst_us * const fns[2][4][4] = {
+        /* masked unit stride store */
+        { { gen_helper_vsb_v_b_mask, gen_helper_vsb_v_h_mask,
+            gen_helper_vsb_v_w_mask, gen_helper_vsb_v_d_mask },
+          { NULL,                    gen_helper_vsh_v_h_mask,
+            gen_helper_vsh_v_w_mask, gen_helper_vsh_v_d_mask },
+          { NULL,                    NULL,
+            gen_helper_vsw_v_w_mask, gen_helper_vsw_v_d_mask },
+          { gen_helper_vse_v_b_mask, gen_helper_vse_v_h_mask,
+            gen_helper_vse_v_w_mask, gen_helper_vse_v_d_mask } },
+        /* unmasked unit stride store */
+        { { gen_helper_vsb_v_b, gen_helper_vsb_v_h,
+            gen_helper_vsb_v_w, gen_helper_vsb_v_d },
+          { NULL,               gen_helper_vsh_v_h,
+            gen_helper_vsh_v_w, gen_helper_vsh_v_d },
+          { NULL,               NULL,
+            gen_helper_vsw_v_w, gen_helper_vsw_v_d },
+          { gen_helper_vse_v_b, gen_helper_vse_v_h,
+            gen_helper_vse_v_w, gen_helper_vse_v_d } }
+    };
+
+    fn = fns[a->vm][seq][s->sew];
+    if (fn == NULL) {
+        return false;
+    }
+
+    data = FIELD_DP32(data, VDATA, MLEN, s->mlen);
+    data = FIELD_DP32(data, VDATA, VM, a->vm);
+    data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+    data = FIELD_DP32(data, VDATA, NF, a->nf);
+    return ldst_us_trans(a->rd, a->rs1, data, fn, s);
+}
+
+static bool st_us_check(DisasContext *s, arg_r2nfvm* a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_nf(s, a->nf));
+}
+
+GEN_VEXT_TRANS(vsb_v, 0, r2nfvm, st_us_op, st_us_check)
+GEN_VEXT_TRANS(vsh_v, 1, r2nfvm, st_us_op, st_us_check)
+GEN_VEXT_TRANS(vsw_v, 2, r2nfvm, st_us_op, st_us_check)
+GEN_VEXT_TRANS(vse_v, 3, r2nfvm, st_us_op, st_us_check)
+
+/*
+ *** stride load and store
+ */
+typedef void gen_helper_ldst_stride(TCGv_ptr, TCGv_ptr, TCGv,
+                                    TCGv, TCGv_env, TCGv_i32);
+
+static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2,
+                              uint32_t data, gen_helper_ldst_stride *fn,
+                              DisasContext *s)
+{
+    TCGv_ptr dest, mask;
+    TCGv base, stride;
+    TCGv_i32 desc;
+
+    dest = tcg_temp_new_ptr();
+    mask = tcg_temp_new_ptr();
+    base = tcg_temp_new();
+    stride = tcg_temp_new();
+    desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data));
+
+    gen_get_gpr(base, rs1);
+    gen_get_gpr(stride, rs2);
+    tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
+    tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
+
+    fn(dest, mask, base, stride, cpu_env, desc);
+
+    tcg_temp_free_ptr(dest);
+    tcg_temp_free_ptr(mask);
+    tcg_temp_free(base);
+    tcg_temp_free(stride);
+    tcg_temp_free_i32(desc);
+    return true;
+}
+
+static bool ld_stride_op(DisasContext *s, arg_rnfvm *a, uint8_t seq)
+{
+    uint32_t data = 0;
+    gen_helper_ldst_stride *fn;
+    static gen_helper_ldst_stride * const fns[7][4] = {
+        { gen_helper_vlsb_v_b,  gen_helper_vlsb_v_h,
+          gen_helper_vlsb_v_w,  gen_helper_vlsb_v_d },
+        { NULL,                 gen_helper_vlsh_v_h,
+          gen_helper_vlsh_v_w,  gen_helper_vlsh_v_d },
+        { NULL,                 NULL,
+          gen_helper_vlsw_v_w,  gen_helper_vlsw_v_d },
+        { gen_helper_vlse_v_b,  gen_helper_vlse_v_h,
+          gen_helper_vlse_v_w,  gen_helper_vlse_v_d },
+        { gen_helper_vlsbu_v_b, gen_helper_vlsbu_v_h,
+          gen_helper_vlsbu_v_w, gen_helper_vlsbu_v_d },
+        { NULL,                 gen_helper_vlshu_v_h,
+          gen_helper_vlshu_v_w, gen_helper_vlshu_v_d },
+        { NULL,                 NULL,
+          gen_helper_vlswu_v_w, gen_helper_vlswu_v_d },
+    };
+
+    fn = fns[seq][s->sew];
+    if (fn == NULL) {
+        return false;
+    }
+
+    data = FIELD_DP32(data, VDATA, MLEN, s->mlen);
+    data = FIELD_DP32(data, VDATA, VM, a->vm);
+    data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+    data = FIELD_DP32(data, VDATA, NF, a->nf);
+    return ldst_stride_trans(a->rd, a->rs1, a->rs2, data, fn, s);
+}
+
+static bool ld_stride_check(DisasContext *s, arg_rnfvm* a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, false) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_nf(s, a->nf));
+}
+
+GEN_VEXT_TRANS(vlsb_v, 0, rnfvm, ld_stride_op, ld_stride_check)
+GEN_VEXT_TRANS(vlsh_v, 1, rnfvm, ld_stride_op, ld_stride_check)
+GEN_VEXT_TRANS(vlsw_v, 2, rnfvm, ld_stride_op, ld_stride_check)
+GEN_VEXT_TRANS(vlse_v, 3, rnfvm, ld_stride_op, ld_stride_check)
+GEN_VEXT_TRANS(vlsbu_v, 4, rnfvm, ld_stride_op, ld_stride_check)
+GEN_VEXT_TRANS(vlshu_v, 5, rnfvm, ld_stride_op, ld_stride_check)
+GEN_VEXT_TRANS(vlswu_v, 6, rnfvm, ld_stride_op, ld_stride_check)
+
+static bool st_stride_op(DisasContext *s, arg_rnfvm *a, uint8_t seq)
+{
+    uint32_t data = 0;
+    gen_helper_ldst_stride *fn;
+    static gen_helper_ldst_stride * const fns[4][4] = {
+        /* masked stride store */
+        { gen_helper_vssb_v_b, gen_helper_vssb_v_h,
+          gen_helper_vssb_v_w, gen_helper_vssb_v_d },
+        { NULL,                gen_helper_vssh_v_h,
+          gen_helper_vssh_v_w, gen_helper_vssh_v_d },
+        { NULL,                NULL,
+          gen_helper_vssw_v_w, gen_helper_vssw_v_d },
+        { gen_helper_vsse_v_b, gen_helper_vsse_v_h,
+          gen_helper_vsse_v_w, gen_helper_vsse_v_d }
+    };
+
+    data = FIELD_DP32(data, VDATA, MLEN, s->mlen);
+    data = FIELD_DP32(data, VDATA, VM, a->vm);
+    data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+    data = FIELD_DP32(data, VDATA, NF, a->nf);
+    fn = fns[seq][s->sew];
+    if (fn == NULL) {
+        return false;
+    }
+
+    return ldst_stride_trans(a->rd, a->rs1, a->rs2, data, fn, s);
+}
+
+static bool st_stride_check(DisasContext *s, arg_rnfvm* a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_nf(s, a->nf));
+}
+
+GEN_VEXT_TRANS(vssb_v, 0, rnfvm, st_stride_op, st_stride_check)
+GEN_VEXT_TRANS(vssh_v, 1, rnfvm, st_stride_op, st_stride_check)
+GEN_VEXT_TRANS(vssw_v, 2, rnfvm, st_stride_op, st_stride_check)
+GEN_VEXT_TRANS(vsse_v, 3, rnfvm, st_stride_op, st_stride_check)
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index af07ac4160..852545b77e 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -61,6 +61,7 @@ typedef struct DisasContext {
     uint8_t lmul;
     uint8_t sew;
     uint16_t vlen;
+    uint16_t mlen;
     bool vl_eq_vlmax;
 } DisasContext;
@@ -548,6 +549,11 @@ static void decode_RV32_64C(DisasContext *ctx, uint16_t opcode)
     }
 }
 
+static int ex_plus_1(DisasContext *ctx, int nf)
+{
+    return nf + 1;
+}
+
 #define EX_SH(amount) \
     static int ex_shift_##amount(DisasContext *ctx, int imm) \
     {                                                        \
@@ -784,6 +790,7 @@ static void riscv_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
     ctx->vill = FIELD_EX32(tb_flags, TB_FLAGS, VILL);
     ctx->sew = FIELD_EX32(tb_flags, TB_FLAGS, SEW);
     ctx->lmul = FIELD_EX32(tb_flags, TB_FLAGS, LMUL);
+    ctx->mlen = 1 << (ctx->sew + 3 - ctx->lmul);
     ctx->vl_eq_vlmax = FIELD_EX32(tb_flags, TB_FLAGS, VL_EQ_VLMAX);
 }
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 2afe716f2a..ebfabd2946 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -18,8 +18,10 @@
 #include "qemu/osdep.h"
 #include "cpu.h"
+#include "exec/memop.h"
 #include "exec/exec-all.h"
 #include "exec/helper-proto.h"
+#include "tcg/tcg-gvec-desc.h"
 #include 
 
 target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
@@ -51,3 +53,407 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
     env->vstart = 0;
     return vl;
 }
+
+/*
+ * Note that vector data is stored in host-endian 64-bit chunks,
+ * so addressing units smaller than that need a host-endian fixup.
+ */
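A standalone illustration of the host-endian fixup just described (not part
of the patch): on a big-endian host, byte element 0 of a 64-bit chunk lives
at byte offset 7, which is exactly what the H macros below encode.

#include <assert.h>

#define H1(x) ((x) ^ 7)  /* big-endian variants, as defined below */
#define H2(x) ((x) ^ 3)

int main(void)
{
    /* byte element 0 of a 64-bit chunk sits at byte offset 7 */
    assert(H1(0) == 7 && H1(7) == 0);
    /* 16-bit element 0 sits at halfword offset 3 */
    assert(H2(0) == 3 && H2(3) == 0);
    return 0;
}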
+#ifdef HOST_WORDS_BIGENDIAN
+#define H1(x)   ((x) ^ 7)
+#define H1_2(x) ((x) ^ 6)
+#define H1_4(x) ((x) ^ 4)
+#define H2(x)   ((x) ^ 3)
+#define H4(x)   ((x) ^ 1)
+#define H8(x)   ((x))
+#else
+#define H1(x)   (x)
+#define H1_2(x) (x)
+#define H1_4(x) (x)
+#define H2(x)   (x)
+#define H4(x)   (x)
+#define H8(x)   (x)
+#endif
+
+static inline uint32_t vext_nf(uint32_t desc)
+{
+    return FIELD_EX32(simd_data(desc), VDATA, NF);
+}
+
+static inline uint32_t vext_mlen(uint32_t desc)
+{
+    return FIELD_EX32(simd_data(desc), VDATA, MLEN);
+}
+
+static inline uint32_t vext_vm(uint32_t desc)
+{
+    return FIELD_EX32(simd_data(desc), VDATA, VM);
+}
+
+static inline uint32_t vext_lmul(uint32_t desc)
+{
+    return FIELD_EX32(simd_data(desc), VDATA, LMUL);
+}
+
+/*
+ * Get vector group length in bytes. Its range is [64, 2048].
+ *
+ * As simd_desc supports at most 256 bytes, and the max vlen is 512 bits,
+ * vlen in bytes is encoded in maxsz.
+ */
+static inline uint32_t vext_maxsz(uint32_t desc)
+{
+    return simd_maxsz(desc) << vext_lmul(desc);
+}
+
+/*
+ * This function checks watchpoints before the real load operation.
+ *
+ * In softmmu mode, the TLB API probe_access is enough for the watchpoint
+ * check. In user mode, there is no watchpoint support now.
+ *
+ * It will trigger an exception if there is no mapping in the TLB
+ * and the page table walk can't fill the TLB entry. Then the guest
+ * software can return here after processing the exception, or never
+ * return.
+ */
+static void probe_pages(CPURISCVState *env, target_ulong addr,
+                        target_ulong len, uintptr_t ra,
+                        MMUAccessType access_type)
+{
+    target_ulong pagelen = -(addr | TARGET_PAGE_MASK);
+    target_ulong curlen = MIN(pagelen, len);
+
+    probe_access(env, addr, curlen, access_type,
+                 cpu_mmu_index(env, false), ra);
+    if (len > curlen) {
+        addr += curlen;
+        curlen = len - curlen;
+        probe_access(env, addr, curlen, access_type,
+                     cpu_mmu_index(env, false), ra);
+    }
+}
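A worked example for the page-split arithmetic in probe_pages (illustration
only, assuming hypothetical 4 KiB pages): probing 16 bytes that start
8 bytes before a page boundary issues two probes of 8 bytes each.

#include <assert.h>
#include <stdint.h>

#define TARGET_PAGE_MASK ((uint64_t)~0xfffULL) /* hypothetical 4 KiB */

int main(void)
{
    uint64_t addr = 0x1ff8, len = 16;
    uint64_t pagelen = -(addr | TARGET_PAGE_MASK);
    assert(pagelen == 8);       /* bytes left on the first page  */
    assert(len - pagelen == 8); /* remainder probed on next page */
    return 0;
}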
+
+#ifdef HOST_WORDS_BIGENDIAN
+static void vext_clear(void *tail, uint32_t cnt, uint32_t tot)
+{
+    /*
+     * Split the remaining range into two parts.
+     * The first part is in the last uint64_t unit.
+     * The second part starts from the next uint64_t unit.
+     */
+    int part1 = 0, part2 = tot - cnt;
+    if (cnt % 8) {
+        part1 = 8 - (cnt % 8);
+        part2 = tot - cnt - part1;
+        memset((void *)((uintptr_t)tail & ~7ULL), 0, part1);
+        memset((void *)(((uintptr_t)tail + 8) & ~7ULL), 0, part2);
+    } else {
+        memset(tail, 0, part2);
+    }
+}
+#else
+static void vext_clear(void *tail, uint32_t cnt, uint32_t tot)
+{
+    memset(tail, 0, tot - cnt);
+}
+#endif
+
+static void clearb(void *vd, uint32_t idx, uint32_t cnt, uint32_t tot)
+{
+    int8_t *cur = ((int8_t *)vd + H1(idx));
+    vext_clear(cur, cnt, tot);
+}
+
+static void clearh(void *vd, uint32_t idx, uint32_t cnt, uint32_t tot)
+{
+    int16_t *cur = ((int16_t *)vd + H2(idx));
+    vext_clear(cur, cnt, tot);
+}
+
+static void clearl(void *vd, uint32_t idx, uint32_t cnt, uint32_t tot)
+{
+    int32_t *cur = ((int32_t *)vd + H4(idx));
+    vext_clear(cur, cnt, tot);
+}
+
+static void clearq(void *vd, uint32_t idx, uint32_t cnt, uint32_t tot)
+{
+    int64_t *cur = (int64_t *)vd + idx;
+    vext_clear(cur, cnt, tot);
+}
+
+static inline int vext_elem_mask(void *v0, int mlen, int index)
+{
+    int idx = (index * mlen) / 64;
+    int pos = (index * mlen) % 64;
+    return (((uint64_t *)v0)[idx] >> pos) & 1;
+}
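A worked example for the mask-bit addressing in vext_elem_mask (standalone
illustration, not part of the patch): with mlen = 8, element i's mask bit
sits at bit i * 8 of v0, possibly crossing into a later 64-bit word.

#include <assert.h>
#include <stdint.h>

static int vext_elem_mask(void *v0, int mlen, int index)
{
    int idx = (index * mlen) / 64;
    int pos = (index * mlen) % 64;
    return (((uint64_t *)v0)[idx] >> pos) & 1;
}

int main(void)
{
    /* mlen = 8 (e.g. SEW=8, LMUL=1): element 5 maps to bit 40 */
    uint64_t v0[2] = { 1ULL << 40, 0 };
    assert(vext_elem_mask(v0, 8, 5) == 1);
    assert(vext_elem_mask(v0, 8, 4) == 0);
    /* element 9 crosses into the second 64-bit word: bit 72 */
    v0[1] = 1ULL << 8;
    assert(vext_elem_mask(v0, 8, 9) == 1);
    return 0;
}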
+
+/* element operations for load and store */
+typedef void (*vext_ldst_elem_fn)(CPURISCVState *env, target_ulong addr,
+                                  uint32_t idx, void *vd, uintptr_t retaddr);
+typedef void (*vext_ld_clear_elem)(void *vd, uint32_t idx,
+                                   uint32_t cnt, uint32_t tot);
+
+#define GEN_VEXT_LD_ELEM(NAME, MTYPE, ETYPE, H, LDSUF)     \
+static void NAME(CPURISCVState *env, abi_ptr addr,         \
+                 uint32_t idx, void *vd, uintptr_t retaddr)\
+{                                                          \
+    MTYPE data;                                            \
+    ETYPE *cur = ((ETYPE *)vd + H(idx));                   \
+    data = cpu_##LDSUF##_data_ra(env, addr, retaddr);      \
+    *cur = data;                                           \
+}
+
+GEN_VEXT_LD_ELEM(ldb_b, int8_t, int8_t, H1, ldsb)
+GEN_VEXT_LD_ELEM(ldb_h, int8_t, int16_t, H2, ldsb)
+GEN_VEXT_LD_ELEM(ldb_w, int8_t, int32_t, H4, ldsb)
+GEN_VEXT_LD_ELEM(ldb_d, int8_t, int64_t, H8, ldsb)
+GEN_VEXT_LD_ELEM(ldh_h, int16_t, int16_t, H2, ldsw)
+GEN_VEXT_LD_ELEM(ldh_w, int16_t, int32_t, H4, ldsw)
+GEN_VEXT_LD_ELEM(ldh_d, int16_t, int64_t, H8, ldsw)
+GEN_VEXT_LD_ELEM(ldw_w, int32_t, int32_t, H4, ldl)
+GEN_VEXT_LD_ELEM(ldw_d, int32_t, int64_t, H8, ldl)
+GEN_VEXT_LD_ELEM(lde_b, int8_t, int8_t, H1, ldsb)
+GEN_VEXT_LD_ELEM(lde_h, int16_t, int16_t, H2, ldsw)
+GEN_VEXT_LD_ELEM(lde_w, int32_t, int32_t, H4, ldl)
+GEN_VEXT_LD_ELEM(lde_d, int64_t, int64_t, H8, ldq)
+GEN_VEXT_LD_ELEM(ldbu_b, uint8_t, uint8_t, H1, ldub)
+GEN_VEXT_LD_ELEM(ldbu_h, uint8_t, uint16_t, H2, ldub)
+GEN_VEXT_LD_ELEM(ldbu_w, uint8_t, uint32_t, H4, ldub)
+GEN_VEXT_LD_ELEM(ldbu_d, uint8_t, uint64_t, H8, ldub)
+GEN_VEXT_LD_ELEM(ldhu_h, uint16_t, uint16_t, H2, lduw)
+GEN_VEXT_LD_ELEM(ldhu_w, uint16_t, uint32_t, H4, lduw)
+GEN_VEXT_LD_ELEM(ldhu_d, uint16_t, uint64_t, H8, lduw)
+GEN_VEXT_LD_ELEM(ldwu_w, uint32_t, uint32_t, H4, ldl)
+GEN_VEXT_LD_ELEM(ldwu_d, uint32_t, uint64_t, H8, ldl)
+
+#define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF)            \
+static void NAME(CPURISCVState *env, abi_ptr addr,         \
+                 uint32_t idx, void *vd, uintptr_t retaddr)\
+{                                                          \
+    ETYPE data = *((ETYPE *)vd + H(idx));                  \
+    cpu_##STSUF##_data_ra(env, addr, data, retaddr);       \
+}
+
+GEN_VEXT_ST_ELEM(stb_b, int8_t, H1, stb)
+GEN_VEXT_ST_ELEM(stb_h, int16_t, H2, stb)
+GEN_VEXT_ST_ELEM(stb_w, int32_t, H4, stb)
+GEN_VEXT_ST_ELEM(stb_d, int64_t, H8, stb)
+GEN_VEXT_ST_ELEM(sth_h, int16_t, H2, stw)
+GEN_VEXT_ST_ELEM(sth_w, int32_t, H4, stw)
+GEN_VEXT_ST_ELEM(sth_d, int64_t, H8, stw)
+GEN_VEXT_ST_ELEM(stw_w, int32_t, H4, stl)
+GEN_VEXT_ST_ELEM(stw_d, int64_t, H8, stl)
+GEN_VEXT_ST_ELEM(ste_b, int8_t, H1, stb)
+GEN_VEXT_ST_ELEM(ste_h, int16_t, H2, stw)
+GEN_VEXT_ST_ELEM(ste_w, int32_t, H4, stl)
+GEN_VEXT_ST_ELEM(ste_d, int64_t, H8, stq)
+
+/*
+ *** stride: access vector element from strided memory
+ */
+static void vext_ldst_stride(void *vd, void *v0, target_ulong base,
+                             target_ulong stride, CPURISCVState *env,
+                             uint32_t desc, uint32_t vm,
+                             vext_ldst_elem_fn ldst_elem,
+                             vext_ld_clear_elem clear_elem,
+                             uint32_t esz, uint32_t msz, uintptr_t ra,
+                             MMUAccessType access_type)
+{
+    uint32_t i, k;
+    uint32_t nf = vext_nf(desc);
+    uint32_t mlen = vext_mlen(desc);
+    uint32_t vlmax = vext_maxsz(desc) / esz;
+
+    if (env->vl == 0) {
+        return;
+    }
+    /* probe every access */
+    for (i = 0; i < env->vl; i++) {
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        probe_pages(env, base + stride * i, nf * msz, ra, access_type);
+    }
+    /* do real access */
+    for (i = 0; i < env->vl; i++) {
+        k = 0;
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        while (k < nf) {
+            target_ulong addr = base + stride * i + k * msz;
+            ldst_elem(env, addr, i + k * vlmax, vd, ra);
+            k++;
+        }
+    }
+    /* clear tail elements */
+    if (clear_elem) {
+        for (k = 0; k < nf; k++) {
+            clear_elem(vd, env->vl + k * vlmax, env->vl * esz, vlmax * esz);
+        }
+    }
+}
+
+#define GEN_VEXT_LD_STRIDE(NAME, MTYPE, ETYPE, LOAD_FN, CLEAR_FN)       \
+void HELPER(NAME)(void *vd, void *v0, target_ulong base,                \
+                  target_ulong stride, CPURISCVState *env,              \
+                  uint32_t desc)                                        \
+{                                                                       \
+    uint32_t vm = vext_vm(desc);                                        \
+    vext_ldst_stride(vd, v0, base, stride, env, desc, vm, LOAD_FN,      \
+                     CLEAR_FN, sizeof(ETYPE), sizeof(MTYPE), GETPC(),   \
+                     MMU_DATA_LOAD);                                    \
+}
+
+GEN_VEXT_LD_STRIDE(vlsb_v_b, int8_t, int8_t, ldb_b, clearb)
+GEN_VEXT_LD_STRIDE(vlsb_v_h, int8_t, int16_t, ldb_h, clearh)
+GEN_VEXT_LD_STRIDE(vlsb_v_w, int8_t, int32_t, ldb_w, clearl)
+GEN_VEXT_LD_STRIDE(vlsb_v_d, int8_t, int64_t, ldb_d, clearq)
+GEN_VEXT_LD_STRIDE(vlsh_v_h, int16_t, int16_t, ldh_h, clearh)
+GEN_VEXT_LD_STRIDE(vlsh_v_w, int16_t, int32_t, ldh_w, clearl)
+GEN_VEXT_LD_STRIDE(vlsh_v_d, int16_t, int64_t, ldh_d, clearq)
+GEN_VEXT_LD_STRIDE(vlsw_v_w, int32_t, int32_t, ldw_w, clearl)
+GEN_VEXT_LD_STRIDE(vlsw_v_d, int32_t, int64_t, ldw_d, clearq)
+GEN_VEXT_LD_STRIDE(vlse_v_b, int8_t, int8_t, lde_b, clearb)
+GEN_VEXT_LD_STRIDE(vlse_v_h, int16_t, int16_t, lde_h, clearh)
+GEN_VEXT_LD_STRIDE(vlse_v_w, int32_t, int32_t, lde_w, clearl)
+GEN_VEXT_LD_STRIDE(vlse_v_d, int64_t, int64_t, lde_d, clearq)
+GEN_VEXT_LD_STRIDE(vlsbu_v_b, uint8_t, uint8_t, ldbu_b, clearb)
+GEN_VEXT_LD_STRIDE(vlsbu_v_h, uint8_t, uint16_t, ldbu_h, clearh)
+GEN_VEXT_LD_STRIDE(vlsbu_v_w, uint8_t, uint32_t, ldbu_w, clearl)
+GEN_VEXT_LD_STRIDE(vlsbu_v_d, uint8_t, uint64_t, ldbu_d, clearq)
+GEN_VEXT_LD_STRIDE(vlshu_v_h, uint16_t, uint16_t, ldhu_h, clearh)
+GEN_VEXT_LD_STRIDE(vlshu_v_w, uint16_t, uint32_t, ldhu_w, clearl)
+GEN_VEXT_LD_STRIDE(vlshu_v_d, uint16_t, uint64_t, ldhu_d, clearq)
+GEN_VEXT_LD_STRIDE(vlswu_v_w, uint32_t, uint32_t, ldwu_w, clearl)
+GEN_VEXT_LD_STRIDE(vlswu_v_d, uint32_t, uint64_t, ldwu_d, clearq)
+
+#define GEN_VEXT_ST_STRIDE(NAME, MTYPE, ETYPE, STORE_FN)                \
+void HELPER(NAME)(void *vd, void *v0, target_ulong base,                \
+                  target_ulong stride, CPURISCVState *env,              \
+                  uint32_t desc)                                        \
+{                                                                       \
+    uint32_t vm = vext_vm(desc);                                        \
+    vext_ldst_stride(vd, v0, base, stride, env, desc, vm, STORE_FN,     \
+                     NULL, sizeof(ETYPE), sizeof(MTYPE), GETPC(),       \
+                     MMU_DATA_STORE);                                   \
+}
+
+GEN_VEXT_ST_STRIDE(vssb_v_b, int8_t, int8_t, stb_b)
+GEN_VEXT_ST_STRIDE(vssb_v_h, int8_t, int16_t, stb_h)
+GEN_VEXT_ST_STRIDE(vssb_v_w, int8_t, int32_t, stb_w)
+GEN_VEXT_ST_STRIDE(vssb_v_d, int8_t, int64_t, stb_d)
+GEN_VEXT_ST_STRIDE(vssh_v_h, int16_t, int16_t, sth_h)
+GEN_VEXT_ST_STRIDE(vssh_v_w, int16_t, int32_t, sth_w)
+GEN_VEXT_ST_STRIDE(vssh_v_d, int16_t, int64_t, sth_d)
+GEN_VEXT_ST_STRIDE(vssw_v_w, int32_t, int32_t, stw_w)
+GEN_VEXT_ST_STRIDE(vssw_v_d, int32_t, int64_t, stw_d)
+GEN_VEXT_ST_STRIDE(vsse_v_b, int8_t, int8_t, ste_b)
+GEN_VEXT_ST_STRIDE(vsse_v_h, int16_t, int16_t, ste_h)
+GEN_VEXT_ST_STRIDE(vsse_v_w, int32_t, int32_t, ste_w)
+GEN_VEXT_ST_STRIDE(vsse_v_d, int64_t, int64_t, ste_d)
+
+/*
+ *** unit-stride: access elements stored contiguously in memory
+ */
+
+/* unmasked unit-stride load and store operation */
+static inline void vext_ldst_us(void *vd, target_ulong base,
+                                CPURISCVState *env, uint32_t desc,
+                                vext_ldst_elem_fn ldst_elem,
+                                vext_ld_clear_elem clear_elem,
+                                uint32_t esz, uint32_t msz, uintptr_t ra,
+                                MMUAccessType access_type)
+{
+    uint32_t i, k;
+    uint32_t nf = vext_nf(desc);
+    uint32_t vlmax = vext_maxsz(desc) / esz;
+
+    if (env->vl == 0) {
+        return;
+    }
+    /* probe every access */
+    probe_pages(env, base, env->vl * nf * msz, ra, access_type);
+    /* load bytes from guest memory */
+    for (i = 0; i < env->vl; i++) {
+        k = 0;
+        while (k < nf) {
+            target_ulong addr = base + (i * nf + k) * msz;
+            ldst_elem(env, addr, i + k * vlmax, vd, ra);
+            k++;
+        }
+    }
+    /* clear tail elements */
+    if (clear_elem) {
+        for (k = 0; k < nf; k++) {
+            clear_elem(vd, env->vl + k * vlmax, env->vl * esz, vlmax * esz);
+        }
+    }
+}
+
+/*
+ * Masked unit-stride load and store operations are a special case of
+ * the strided operations, with stride = NF * sizeof(MTYPE).
+ */
+#define GEN_VEXT_LD_US(NAME, MTYPE, ETYPE, LOAD_FN, CLEAR_FN)           \
+void HELPER(NAME##_mask)(void *vd, void *v0, target_ulong base,         \
+                         CPURISCVState *env, uint32_t desc)             \
+{                                                                       \
+    uint32_t stride = vext_nf(desc) * sizeof(MTYPE);                    \
+    vext_ldst_stride(vd, v0, base, stride, env, desc, false, LOAD_FN,   \
+                     CLEAR_FN, sizeof(ETYPE), sizeof(MTYPE), GETPC(),   \
+                     MMU_DATA_LOAD);                                    \
+}                                                                       \
+                                                                        \
+void HELPER(NAME)(void *vd, void *v0, target_ulong base,                \
+                  CPURISCVState *env, uint32_t desc)                    \
+{                                                                       \
+    vext_ldst_us(vd, base, env, desc, LOAD_FN, CLEAR_FN,                \
+                 sizeof(ETYPE), sizeof(MTYPE), GETPC(), MMU_DATA_LOAD); \
+}
+
+GEN_VEXT_LD_US(vlb_v_b, int8_t, int8_t, ldb_b, clearb)
+GEN_VEXT_LD_US(vlb_v_h, int8_t, int16_t, ldb_h, clearh)
+GEN_VEXT_LD_US(vlb_v_w, int8_t, int32_t, ldb_w, clearl)
+GEN_VEXT_LD_US(vlb_v_d, int8_t, int64_t, ldb_d, clearq)
+GEN_VEXT_LD_US(vlh_v_h, int16_t, int16_t, ldh_h, clearh)
+GEN_VEXT_LD_US(vlh_v_w, int16_t, int32_t, ldh_w, clearl)
+GEN_VEXT_LD_US(vlh_v_d, int16_t, int64_t, ldh_d, clearq)
+GEN_VEXT_LD_US(vlw_v_w, int32_t, int32_t, ldw_w, clearl)
+GEN_VEXT_LD_US(vlw_v_d, int32_t, int64_t, ldw_d, clearq)
+GEN_VEXT_LD_US(vle_v_b, int8_t, int8_t, lde_b, clearb)
+GEN_VEXT_LD_US(vle_v_h, int16_t, int16_t, lde_h, clearh)
+GEN_VEXT_LD_US(vle_v_w, int32_t, int32_t, lde_w, clearl)
+GEN_VEXT_LD_US(vle_v_d, int64_t, int64_t, lde_d, clearq)
+GEN_VEXT_LD_US(vlbu_v_b, uint8_t, uint8_t, ldbu_b, clearb)
+GEN_VEXT_LD_US(vlbu_v_h, uint8_t, uint16_t, ldbu_h, clearh)
+GEN_VEXT_LD_US(vlbu_v_w, uint8_t, uint32_t, ldbu_w, clearl)
+GEN_VEXT_LD_US(vlbu_v_d, uint8_t, uint64_t, ldbu_d, clearq)
+GEN_VEXT_LD_US(vlhu_v_h, uint16_t, uint16_t, ldhu_h, clearh)
+GEN_VEXT_LD_US(vlhu_v_w, uint16_t, uint32_t, ldhu_w, clearl)
+GEN_VEXT_LD_US(vlhu_v_d, uint16_t, uint64_t, ldhu_d, clearq)
+GEN_VEXT_LD_US(vlwu_v_w, uint32_t, uint32_t, ldwu_w, clearl)
+GEN_VEXT_LD_US(vlwu_v_d, uint32_t, uint64_t, ldwu_d, clearq)
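A worked example for the segment (NF > 1) layout used by the unit-stride
helpers above (illustration only, not part of the patch): element i, field
k is at guest address base + (i * nf + k) * msz, while in the register
group the fields are separated by vlmax elements.

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint32_t nf = 2, msz = 1, vlmax = 16;
    uint64_t base = 0x1000;
    /* memory is interleaved field by field */
    assert(base + (0 * nf + 0) * msz == 0x1000); /* i=0, k=0 */
    assert(base + (0 * nf + 1) * msz == 0x1001); /* i=0, k=1 */
    assert(base + (1 * nf + 0) * msz == 0x1002); /* i=1, k=0 */
    /* register element index: i + k * vlmax */
    assert(0 + 1 * vlmax == 16); /* field 1 of element 0 */
    return 0;
}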
+
+#define GEN_VEXT_ST_US(NAME, MTYPE, ETYPE, STORE_FN)                    \
+void HELPER(NAME##_mask)(void *vd, void *v0, target_ulong base,         \
+                         CPURISCVState *env, uint32_t desc)             \
+{                                                                       \
+    uint32_t stride = vext_nf(desc) * sizeof(MTYPE);                    \
+    vext_ldst_stride(vd, v0, base, stride, env, desc, false, STORE_FN,  \
+                     NULL, sizeof(ETYPE), sizeof(MTYPE), GETPC(),       \
+                     MMU_DATA_STORE);                                   \
+}                                                                       \
+                                                                        \
+void HELPER(NAME)(void *vd, void *v0, target_ulong base,                \
+                  CPURISCVState *env, uint32_t desc)                    \
+{                                                                       \
+    vext_ldst_us(vd, base, env, desc, STORE_FN, NULL,                   \
+                 sizeof(ETYPE), sizeof(MTYPE), GETPC(), MMU_DATA_STORE);\
+}
+
+GEN_VEXT_ST_US(vsb_v_b, int8_t, int8_t, stb_b)
+GEN_VEXT_ST_US(vsb_v_h, int8_t, int16_t, stb_h)
+GEN_VEXT_ST_US(vsb_v_w, int8_t, int32_t, stb_w)
+GEN_VEXT_ST_US(vsb_v_d, int8_t, int64_t, stb_d)
+GEN_VEXT_ST_US(vsh_v_h, int16_t, int16_t, sth_h)
+GEN_VEXT_ST_US(vsh_v_w, int16_t, int32_t, sth_w)
+GEN_VEXT_ST_US(vsh_v_d, int16_t, int64_t, sth_d)
+GEN_VEXT_ST_US(vsw_v_w, int32_t, int32_t, stw_w)
+GEN_VEXT_ST_US(vsw_v_d, int32_t, int64_t, stw_d)
+GEN_VEXT_ST_US(vse_v_b, int8_t, int8_t, ste_b)
+GEN_VEXT_ST_US(vse_v_h, int16_t, int16_t, ste_h)
+GEN_VEXT_ST_US(vse_v_w, int32_t, int32_t, ste_w)
+GEN_VEXT_ST_US(vse_v_d, int64_t, int64_t, ste_d)

From patchwork Mon Mar 9 12:14:03 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 06/60] target/riscv: add vector index load and store instructions
Date: Mon, 9 Mar 2020 20:14:03 +0800
Message-Id: <20200309121457.13719-7-zhiwei_liu@c-sky.com>
In-Reply-To: <20200309121457.13719-1-zhiwei_liu@c-sky.com>
References: <20200309121457.13719-1-zhiwei_liu@c-sky.com>

Vector indexed operations add the contents of each element of the
vector offset operand specified by vs2 to the base effective address
to give the effective address of each element.
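An illustrative sketch of the indexed (gather/scatter) address rule just
described; not part of the patch, and the name indexed_elem_addr is
invented here. Offsets are sign-extended from their element type.

#include <assert.h>
#include <stdint.h>

static uint64_t indexed_elem_addr(uint64_t base, const int32_t *vs2,
                                  uint32_t i, uint32_t k, uint32_t msz)
{
    /* field k of a segment access lies msz bytes beyond the element */
    return base + (int64_t)vs2[i] + (uint64_t)k * msz;
}

int main(void)
{
    int32_t offsets[3] = { 0, 64, -8 };
    assert(indexed_elem_addr(0x1000, offsets, 1, 0, 4) == 0x1040);
    assert(indexed_elem_addr(0x1000, offsets, 2, 0, 4) == 0x0ff8);
    return 0;
}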
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
 target/riscv/helper.h                   |  35 +++++
 target/riscv/insn32.decode              |  13 +++
 target/riscv/insn_trans/trans_rvv.inc.c | 124 ++++++++++++++++++++
 target/riscv/vector_helper.c            | 117 ++++++++++++++++++
 4 files changed, 289 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 87dfa90609..f9b3da60ca 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -183,3 +183,38 @@ DEF_HELPER_6(vsse_v_b, void, ptr, ptr, tl, tl, env, i32)
 DEF_HELPER_6(vsse_v_h, void, ptr, ptr, tl, tl, env, i32)
 DEF_HELPER_6(vsse_v_w, void, ptr, ptr, tl, tl, env, i32)
 DEF_HELPER_6(vsse_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlxb_v_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxb_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxb_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxb_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxh_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxh_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxh_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxe_v_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxe_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxe_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxe_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxbu_v_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxbu_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxbu_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxbu_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxhu_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxhu_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxhu_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxwu_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxwu_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxb_v_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxb_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxb_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxb_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxh_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxh_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxh_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxe_v_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxe_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxe_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxe_v_d, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index ef521152c5..bc36df33b5 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -241,6 +241,19 @@ vssh_v ... 010 . ..... ..... 101 ..... 0100111 @r_nfvm
 vssw_v ... 010 . ..... ..... 110 ..... 0100111 @r_nfvm
 vsse_v ... 010 . ..... ..... 111 ..... 0100111 @r_nfvm
 
+vlxb_v ... 111 . ..... ..... 000 ..... 0000111 @r_nfvm
+vlxh_v ... 111 . ..... ..... 101 ..... 0000111 @r_nfvm
+vlxw_v ... 111 . ..... ..... 110 ..... 0000111 @r_nfvm
+vlxe_v ... 011 . ..... ..... 111 ..... 0000111 @r_nfvm
+vlxbu_v ... 011 . ..... ..... 000 ..... 0000111 @r_nfvm
+vlxhu_v ... 011 . ..... ..... 101 ..... 0000111 @r_nfvm
+vlxwu_v ... 011 . ..... ..... 110 ..... 0000111 @r_nfvm
+# Vector ordered-indexed and unordered-indexed store insns.
+vsxb_v ... -11 . ..... ..... 000 ..... 0100111 @r_nfvm
+vsxh_v ... -11 . ..... ..... 101 ..... 0100111 @r_nfvm
+vsxw_v ... -11 . ..... ..... 110 ..... 0100111 @r_nfvm
+vsxe_v ... -11 . ..... ..... 111 ..... 0100111 @r_nfvm
+
 # *** new major opcode OP-V ***
 vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl 1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index d85f2aec68..5d1eeef323 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -407,3 +407,127 @@ GEN_VEXT_TRANS(vssb_v, 0, rnfvm, st_stride_op, st_stride_check)
 GEN_VEXT_TRANS(vssh_v, 1, rnfvm, st_stride_op, st_stride_check)
 GEN_VEXT_TRANS(vssw_v, 2, rnfvm, st_stride_op, st_stride_check)
 GEN_VEXT_TRANS(vsse_v, 3, rnfvm, st_stride_op, st_stride_check)
+
+/*
+ *** index load and store
+ */
+typedef void gen_helper_ldst_index(TCGv_ptr, TCGv_ptr, TCGv,
+                                   TCGv_ptr, TCGv_env, TCGv_i32);
+
+static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
+                             uint32_t data, gen_helper_ldst_index *fn,
+                             DisasContext *s)
+{
+    TCGv_ptr dest, mask, index;
+    TCGv base;
+    TCGv_i32 desc;
+
+    dest = tcg_temp_new_ptr();
+    mask = tcg_temp_new_ptr();
+    index = tcg_temp_new_ptr();
+    base = tcg_temp_new();
+    desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data));
+
+    gen_get_gpr(base, rs1);
+    tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
+    tcg_gen_addi_ptr(index, cpu_env, vreg_ofs(s, vs2));
+    tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
+
+    fn(dest, mask, base, index, cpu_env, desc);
+
+    tcg_temp_free_ptr(dest);
+    tcg_temp_free_ptr(mask);
+    tcg_temp_free_ptr(index);
+    tcg_temp_free(base);
+    tcg_temp_free_i32(desc);
+    return true;
+}
+
+static bool ld_index_op(DisasContext *s, arg_rnfvm *a, uint8_t seq)
+{
+    uint32_t data = 0;
+    gen_helper_ldst_index *fn;
+    static gen_helper_ldst_index * const fns[7][4] = {
+        { gen_helper_vlxb_v_b,  gen_helper_vlxb_v_h,
+          gen_helper_vlxb_v_w,  gen_helper_vlxb_v_d },
+        { NULL,                 gen_helper_vlxh_v_h,
+          gen_helper_vlxh_v_w,  gen_helper_vlxh_v_d },
+        { NULL,                 NULL,
+          gen_helper_vlxw_v_w,  gen_helper_vlxw_v_d },
+        { gen_helper_vlxe_v_b,  gen_helper_vlxe_v_h,
+          gen_helper_vlxe_v_w,  gen_helper_vlxe_v_d },
+        { gen_helper_vlxbu_v_b, gen_helper_vlxbu_v_h,
+          gen_helper_vlxbu_v_w, gen_helper_vlxbu_v_d },
+        { NULL,                 gen_helper_vlxhu_v_h,
+          gen_helper_vlxhu_v_w, gen_helper_vlxhu_v_d },
+        { NULL,                 NULL,
+          gen_helper_vlxwu_v_w, gen_helper_vlxwu_v_d },
+    };
+
+    fn = fns[seq][s->sew];
+    if (fn == NULL) {
+        return false;
+    }
+
+    data = FIELD_DP32(data, VDATA, MLEN, s->mlen);
+    data = FIELD_DP32(data, VDATA, VM, a->vm);
+    data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+    data = FIELD_DP32(data, VDATA, NF, a->nf);
+    return ldst_index_trans(a->rd, a->rs1, a->rs2, data, fn, s);
+}
+
+static bool ld_index_check(DisasContext *s, arg_rnfvm* a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, false) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_reg(s, a->rs2, false) &&
+            vext_check_nf(s, a->nf));
+}
+
+GEN_VEXT_TRANS(vlxb_v, 0, rnfvm, ld_index_op, ld_index_check)
+GEN_VEXT_TRANS(vlxh_v, 1, rnfvm, ld_index_op, ld_index_check)
+GEN_VEXT_TRANS(vlxw_v, 2, rnfvm, ld_index_op, ld_index_check)
+GEN_VEXT_TRANS(vlxe_v, 3, rnfvm, ld_index_op, ld_index_check)
+GEN_VEXT_TRANS(vlxbu_v, 4, rnfvm, ld_index_op, ld_index_check)
+GEN_VEXT_TRANS(vlxhu_v, 5, rnfvm, ld_index_op, ld_index_check)
+GEN_VEXT_TRANS(vlxwu_v, 6, rnfvm, ld_index_op, ld_index_check)
+
+static bool st_index_op(DisasContext *s, arg_rnfvm *a, uint8_t seq)
+{
+    uint32_t data = 0;
+    gen_helper_ldst_index *fn;
+    static gen_helper_ldst_index * const fns[4][4] = {
+        { gen_helper_vsxb_v_b, gen_helper_vsxb_v_h,
+          gen_helper_vsxb_v_w, gen_helper_vsxb_v_d },
+        { NULL,                gen_helper_vsxh_v_h,
+          gen_helper_vsxh_v_w, gen_helper_vsxh_v_d },
+        { NULL,                NULL,
+          gen_helper_vsxw_v_w, gen_helper_vsxw_v_d },
+        { gen_helper_vsxe_v_b, gen_helper_vsxe_v_h,
+          gen_helper_vsxe_v_w, gen_helper_vsxe_v_d }
+    };
+
+    fn = fns[seq][s->sew];
+    if (fn == NULL) {
+        return false;
+    }
+
+    data = FIELD_DP32(data, VDATA, MLEN, s->mlen);
+    data = FIELD_DP32(data, VDATA, VM, a->vm);
+    data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
+    data = FIELD_DP32(data, VDATA, NF, a->nf);
+    return ldst_index_trans(a->rd, a->rs1, a->rs2, data, fn, s);
+}
+
+static bool st_index_check(DisasContext *s, arg_rnfvm* a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_reg(s, a->rs2, false) &&
+            vext_check_nf(s, a->nf));
+}
+
+GEN_VEXT_TRANS(vsxb_v, 0, rnfvm, st_index_op, st_index_check)
+GEN_VEXT_TRANS(vsxh_v, 1, rnfvm, st_index_op, st_index_check)
+GEN_VEXT_TRANS(vsxw_v, 2, rnfvm, st_index_op, st_index_check)
+GEN_VEXT_TRANS(vsxe_v, 3, rnfvm, st_index_op, st_index_check)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index ebfabd2946..35cb9f09b4 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -457,3 +457,120 @@ GEN_VEXT_ST_US(vse_v_b, int8_t, int8_t, ste_b)
 GEN_VEXT_ST_US(vse_v_h, int16_t, int16_t, ste_h)
 GEN_VEXT_ST_US(vse_v_w, int32_t, int32_t, ste_w)
 GEN_VEXT_ST_US(vse_v_d, int64_t, int64_t, ste_d)
+
+/*
+ *** index: access vector element from indexed memory
+ */
+typedef target_ulong (*vext_get_index_addr)(target_ulong base,
+                                            uint32_t idx, void *vs2);
+
+#define GEN_VEXT_GET_INDEX_ADDR(NAME, ETYPE, H)   \
+static target_ulong NAME(target_ulong base,       \
+                         uint32_t idx, void *vs2) \
+{                                                 \
+    return (base + *((ETYPE *)vs2 + H(idx)));     \
+}
+
+GEN_VEXT_GET_INDEX_ADDR(idx_b, int8_t, H1)
+GEN_VEXT_GET_INDEX_ADDR(idx_h, int16_t, H2)
+GEN_VEXT_GET_INDEX_ADDR(idx_w, int32_t, H4)
+GEN_VEXT_GET_INDEX_ADDR(idx_d, int64_t, H8)
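A worked example for GEN_VEXT_GET_INDEX_ADDR above (standalone illustration,
not part of the patch): offsets are signed, so a byte offset stored as 0xff
moves the address one byte below the base.

#include <assert.h>
#include <stdint.h>

static uint64_t idx_b(uint64_t base, uint32_t i, void *vs2)
{
    return base + *((int8_t *)vs2 + i); /* H1 is the identity on LE hosts */
}

int main(void)
{
    int8_t offs[2] = { 16, -1 };        /* -1 is stored as 0xff */
    assert(idx_b(0x1000, 0, offs) == 0x1010);
    assert(idx_b(0x1000, 1, offs) == 0x0fff);
    return 0;
}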
+
+static inline void vext_ldst_index(void *vd, void *v0, target_ulong base,
+                                   void *vs2, CPURISCVState *env,
+                                   uint32_t desc,
+                                   vext_get_index_addr get_index_addr,
+                                   vext_ldst_elem_fn ldst_elem,
+                                   vext_ld_clear_elem clear_elem,
+                                   uint32_t esz, uint32_t msz, uintptr_t ra,
+                                   MMUAccessType access_type)
+{
+    uint32_t i, k;
+    uint32_t nf = vext_nf(desc);
+    uint32_t vm = vext_vm(desc);
+    uint32_t mlen = vext_mlen(desc);
+    uint32_t vlmax = vext_maxsz(desc) / esz;
+
+    if (env->vl == 0) {
+        return;
+    }
+    /* probe every access */
+    for (i = 0; i < env->vl; i++) {
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        probe_pages(env, get_index_addr(base, i, vs2), nf * msz, ra,
+                    access_type);
+    }
+    /* load bytes from guest memory */
+    for (i = 0; i < env->vl; i++) {
+        k = 0;
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        while (k < nf) {
+            abi_ptr addr = get_index_addr(base, i, vs2) + k * msz;
+            ldst_elem(env, addr, i + k * vlmax, vd, ra);
+            k++;
+        }
+    }
+    /* clear tail elements */
+    if (clear_elem) {
+        for (k = 0; k < nf; k++) {
+            clear_elem(vd, env->vl + k * vlmax, env->vl * esz, vlmax * esz);
+        }
+    }
+}
+
+#define GEN_VEXT_LD_INDEX(NAME, MTYPE, ETYPE, INDEX_FN, LOAD_FN, CLEAR_FN) \
+void HELPER(NAME)(void *vd, void *v0, target_ulong base,                   \
+                  void *vs2, CPURISCVState *env, uint32_t desc)            \
+{                                                                          \
+    vext_ldst_index(vd, v0, base, vs2, env, desc, INDEX_FN,                \
+                    LOAD_FN, CLEAR_FN, sizeof(ETYPE), sizeof(MTYPE),       \
+                    GETPC(), MMU_DATA_LOAD);                               \
+}
+
+GEN_VEXT_LD_INDEX(vlxb_v_b, int8_t, int8_t, idx_b, ldb_b, clearb)
+GEN_VEXT_LD_INDEX(vlxb_v_h, int8_t, int16_t, idx_h, ldb_h, clearh)
+GEN_VEXT_LD_INDEX(vlxb_v_w, int8_t, int32_t, idx_w, ldb_w, clearl)
+GEN_VEXT_LD_INDEX(vlxb_v_d, int8_t, int64_t, idx_d, ldb_d, clearq)
+GEN_VEXT_LD_INDEX(vlxh_v_h, int16_t, int16_t, idx_h, ldh_h, clearh)
+GEN_VEXT_LD_INDEX(vlxh_v_w, int16_t, int32_t, idx_w, ldh_w, clearl)
+GEN_VEXT_LD_INDEX(vlxh_v_d, int16_t, int64_t, idx_d, ldh_d, clearq)
+GEN_VEXT_LD_INDEX(vlxw_v_w, int32_t, int32_t, idx_w, ldw_w, clearl)
+GEN_VEXT_LD_INDEX(vlxw_v_d, int32_t, int64_t, idx_d, ldw_d, clearq)
+GEN_VEXT_LD_INDEX(vlxe_v_b, int8_t, int8_t, idx_b, lde_b, clearb)
+GEN_VEXT_LD_INDEX(vlxe_v_h, int16_t, int16_t, idx_h, lde_h, clearh)
+GEN_VEXT_LD_INDEX(vlxe_v_w, int32_t, int32_t, idx_w, lde_w, clearl)
+GEN_VEXT_LD_INDEX(vlxe_v_d, int64_t, int64_t, idx_d, lde_d, clearq)
+GEN_VEXT_LD_INDEX(vlxbu_v_b, uint8_t, uint8_t, idx_b, ldbu_b, clearb)
+GEN_VEXT_LD_INDEX(vlxbu_v_h, uint8_t, uint16_t, idx_h, ldbu_h, clearh)
+GEN_VEXT_LD_INDEX(vlxbu_v_w, uint8_t, uint32_t, idx_w, ldbu_w, clearl)
+GEN_VEXT_LD_INDEX(vlxbu_v_d, uint8_t, uint64_t, idx_d, ldbu_d, clearq)
+GEN_VEXT_LD_INDEX(vlxhu_v_h, uint16_t, uint16_t, idx_h, ldhu_h, clearh)
+GEN_VEXT_LD_INDEX(vlxhu_v_w, uint16_t, uint32_t, idx_w, ldhu_w, clearl)
+GEN_VEXT_LD_INDEX(vlxhu_v_d, uint16_t, uint64_t, idx_d, ldhu_d, clearq)
+GEN_VEXT_LD_INDEX(vlxwu_v_w, uint32_t, uint32_t, idx_w, ldwu_w, clearl)
+GEN_VEXT_LD_INDEX(vlxwu_v_d, uint32_t, uint64_t, idx_d, ldwu_d, clearq)
+
+#define GEN_VEXT_ST_INDEX(NAME, MTYPE, ETYPE, INDEX_FN, STORE_FN)\
+void HELPER(NAME)(void *vd, void *v0, target_ulong base,         \
+                  void *vs2, CPURISCVState *env, uint32_t desc)  \
+{                                                                \
+    vext_ldst_index(vd, v0, base, vs2, env, desc, INDEX_FN,      \
+                    STORE_FN, NULL, sizeof(ETYPE), sizeof(MTYPE),\
+                    GETPC(), MMU_DATA_STORE);                    \
+}
+
+GEN_VEXT_ST_INDEX(vsxb_v_b, int8_t, int8_t, idx_b, stb_b)
+GEN_VEXT_ST_INDEX(vsxb_v_h, int8_t, int16_t, idx_h, stb_h)
+GEN_VEXT_ST_INDEX(vsxb_v_w, int8_t, int32_t, idx_w, stb_w)
+GEN_VEXT_ST_INDEX(vsxb_v_d, int8_t, int64_t, idx_d, stb_d)
+GEN_VEXT_ST_INDEX(vsxh_v_h, int16_t, int16_t, idx_h, sth_h)
+GEN_VEXT_ST_INDEX(vsxh_v_w, int16_t, int32_t, idx_w, sth_w)
+GEN_VEXT_ST_INDEX(vsxh_v_d, int16_t, int64_t, idx_d, sth_d)
+GEN_VEXT_ST_INDEX(vsxw_v_w, int32_t, int32_t, idx_w, stw_w)
+GEN_VEXT_ST_INDEX(vsxw_v_d, int32_t, int64_t, idx_d, stw_d)
+GEN_VEXT_ST_INDEX(vsxe_v_b, int8_t, int8_t, idx_b, ste_b)
+GEN_VEXT_ST_INDEX(vsxe_v_h, int16_t, int16_t, idx_h, ste_h)
+GEN_VEXT_ST_INDEX(vsxe_v_w, int32_t, int32_t, idx_w, ste_w)
+GEN_VEXT_ST_INDEX(vsxe_v_d, int64_t, int64_t, idx_d, ste_d)

From patchwork Mon Mar 9 12:14:05 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 08/60] target/riscv: add vector amo operations
Date: Mon, 9 Mar 2020 20:14:05 +0800
Message-Id: <20200309121457.13719-9-zhiwei_liu@c-sky.com>
In-Reply-To: <20200309121457.13719-1-zhiwei_liu@c-sky.com>
References: <20200309121457.13719-1-zhiwei_liu@c-sky.com>

Vector AMOs operate as if aq and rl bits were zero on each element,
with regard to ordering relative to other instructions in the same
hart. Vector AMOs provide no ordering guarantee between element
operations in the same vector AMO instruction.
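A sketch of the per-element vector AMO semantics described above
(illustration only; the names are invented here and the real helpers
perform the access atomically): the old memory value is combined with the
vector operand, and the wd bit selects whether the old value is written
back to vd.

#include <assert.h>
#include <stdint.h>

static void vamo_addw_elem(int32_t *mem_cell, int32_t *vs3_vd,
                           uint32_t i, int wd)
{
    int32_t old = *mem_cell;     /* atomic in a real implementation   */
    *mem_cell = old + vs3_vd[i]; /* 32-bit AMO add                    */
    if (wd) {
        vs3_vd[i] = old;         /* wd=1: original value lands in vd  */
    }
}

int main(void)
{
    int32_t mem = 5, v[1] = { 3 };
    vamo_addw_elem(&mem, v, 0, 1);
    assert(mem == 8 && v[0] == 5);
    return 0;
}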
Signed-off-by: LIU Zhiwei --- target/riscv/cpu.h | 1 + target/riscv/helper.h | 29 +++++ target/riscv/insn32-64.decode | 11 ++ target/riscv/insn32.decode | 13 +++ target/riscv/insn_trans/trans_rvv.inc.c | 130 +++++++++++++++ target/riscv/vector_helper.c | 143 ++++++++++++++++++++++++ 6 files changed, 327 insertions(+) diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h index a6761f3838..6fcaa5bc89 100644 --- a/target/riscv/cpu.h +++ b/target/riscv/cpu.h @@ -374,6 +374,7 @@ FIELD(VDATA, MLEN, 0, 8) FIELD(VDATA, VM, 8, 1) FIELD(VDATA, LMUL, 9, 2) FIELD(VDATA, NF, 11, 4) +FIELD(VDATA, WD, 11, 1) FIELD(TB_FLAGS, VL_EQ_VLMAX, 2, 1) FIELD(TB_FLAGS, LMUL, 3, 2) diff --git a/target/riscv/helper.h b/target/riscv/helper.h index 72ba4d9bdb..70a4b05f75 100644 --- a/target/riscv/helper.h +++ b/target/riscv/helper.h @@ -240,3 +240,32 @@ DEF_HELPER_5(vlhuff_v_w, void, ptr, ptr, tl, env, i32) DEF_HELPER_5(vlhuff_v_d, void, ptr, ptr, tl, env, i32) DEF_HELPER_5(vlwuff_v_w, void, ptr, ptr, tl, env, i32) DEF_HELPER_5(vlwuff_v_d, void, ptr, ptr, tl, env, i32) +#ifdef TARGET_RISCV64 +DEF_HELPER_6(vamoswapw_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoswapd_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoaddw_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoaddd_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoxorw_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoxord_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoandw_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoandd_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoorw_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoord_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamominw_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamomind_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamomaxw_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamomaxd_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamominuw_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamominud_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamomaxuw_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamomaxud_v_d, void, ptr, ptr, tl, ptr, env, i32) +#endif +DEF_HELPER_6(vamoswapw_v_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoaddw_v_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoxorw_v_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoandw_v_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamoorw_v_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamominw_v_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamomaxw_v_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamominuw_v_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vamomaxuw_v_w, void, ptr, ptr, tl, ptr, env, i32) diff --git a/target/riscv/insn32-64.decode b/target/riscv/insn32-64.decode index 380bf791bc..86153d93fa 100644 --- a/target/riscv/insn32-64.decode +++ b/target/riscv/insn32-64.decode @@ -57,6 +57,17 @@ amomax_d 10100 . . ..... ..... 011 ..... 0101111 @atom_st amominu_d 11000 . . ..... ..... 011 ..... 0101111 @atom_st amomaxu_d 11100 . . ..... ..... 011 ..... 0101111 @atom_st +#*** Vector AMO operations (in addition to Zvamo) *** +vamoswapd_v 00001 . . ..... ..... 111 ..... 0101111 @r_wdvm +vamoaddd_v 00000 . . ..... ..... 111 ..... 0101111 @r_wdvm +vamoxord_v 00100 . . ..... ..... 111 .....
0101111 @r_wdvm +vamoandd_v 01100 . . ..... ..... 111 ..... 0101111 @r_wdvm +vamoord_v 01000 . . ..... ..... 111 ..... 0101111 @r_wdvm +vamomind_v 10000 . . ..... ..... 111 ..... 0101111 @r_wdvm +vamomaxd_v 10100 . . ..... ..... 111 ..... 0101111 @r_wdvm +vamominud_v 11000 . . ..... ..... 111 ..... 0101111 @r_wdvm +vamomaxud_v 11100 . . ..... ..... 111 ..... 0101111 @r_wdvm + # *** RV64F Standard Extension (in addition to RV32F) *** fcvt_l_s 1100000 00010 ..... ... ..... 1010011 @r2_rm fcvt_lu_s 1100000 00011 ..... ... ..... 1010011 @r2_rm diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index b76c09c8c0..1330703720 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -44,6 +44,7 @@ &u imm rd &shift shamt rs1 rd &atomic aq rl rs2 rs1 rd +&rwdvm vm wd rd rs1 rs2 &r2nfvm vm rd rs1 nf &rnfvm vm rd rs1 rs2 nf @@ -67,6 +68,7 @@ @r2 ....... ..... ..... ... ..... ....... %rs1 %rd @r2_nfvm ... ... vm:1 ..... ..... ... ..... ....... &r2nfvm %nf %rs1 %rd @r_nfvm ... ... vm:1 ..... ..... ... ..... ....... &rnfvm %nf %rs2 %rs1 %rd +@r_wdvm ..... wd:1 vm:1 ..... ..... ... ..... ....... &rwdvm %rs2 %rs1 %rd @r2_zimm . zimm:11 ..... ... ..... ....... %rs1 %rd @hfence_gvma ....... ..... ..... ... ..... ....... %rs2 %rs1 @@ -261,6 +263,17 @@ vsxh_v ... -11 . ..... ..... 101 ..... 0100111 @r_nfvm vsxw_v ... -11 . ..... ..... 110 ..... 0100111 @r_nfvm vsxe_v ... -11 . ..... ..... 111 ..... 0100111 @r_nfvm +#*** Vector AMO operations are encoded under the standard AMO major opcode *** +vamoswapw_v 00001 . . ..... ..... 110 ..... 0101111 @r_wdvm +vamoaddw_v 00000 . . ..... ..... 110 ..... 0101111 @r_wdvm +vamoxorw_v 00100 . . ..... ..... 110 ..... 0101111 @r_wdvm +vamoandw_v 01100 . . ..... ..... 110 ..... 0101111 @r_wdvm +vamoorw_v 01000 . . ..... ..... 110 ..... 0101111 @r_wdvm +vamominw_v 10000 . . ..... ..... 110 ..... 0101111 @r_wdvm +vamomaxw_v 10100 . . ..... ..... 110 ..... 0101111 @r_wdvm +vamominuw_v 11000 . . ..... ..... 110 ..... 0101111 @r_wdvm +vamomaxuw_v 11100 . . ..... ..... 110 ..... 0101111 @r_wdvm + # *** new major opcode OP-V *** vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 
1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 9d9fc886d6..3c677160c5 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -600,3 +600,133 @@ GEN_VEXT_TRANS(vleff_v, 3, r2nfvm, ldff_op, ld_us_check) GEN_VEXT_TRANS(vlbuff_v, 4, r2nfvm, ldff_op, ld_us_check) GEN_VEXT_TRANS(vlhuff_v, 5, r2nfvm, ldff_op, ld_us_check) GEN_VEXT_TRANS(vlwuff_v, 6, r2nfvm, ldff_op, ld_us_check) + +/* + *** vector atomic operation + */ +typedef void gen_helper_amo(TCGv_ptr, TCGv_ptr, TCGv, TCGv_ptr, + TCGv_env, TCGv_i32); + +static bool amo_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, + uint32_t data, gen_helper_amo *fn, DisasContext *s) +{ + TCGv_ptr dest, mask, index; + TCGv base; + TCGv_i32 desc; + + dest = tcg_temp_new_ptr(); + mask = tcg_temp_new_ptr(); + index = tcg_temp_new_ptr(); + base = tcg_temp_new(); + desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data)); + + gen_get_gpr(base, rs1); + tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd)); + tcg_gen_addi_ptr(index, cpu_env, vreg_ofs(s, vs2)); + tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0)); + + fn(dest, mask, base, index, cpu_env, desc); + + tcg_temp_free_ptr(dest); + tcg_temp_free_ptr(mask); + tcg_temp_free_ptr(index); + tcg_temp_free(base); + tcg_temp_free_i32(desc); + return true; +} + +static bool amo_op(DisasContext *s, arg_rwdvm *a, uint8_t seq) +{ + uint32_t data = 0; + gen_helper_amo *fn; + static gen_helper_amo *const fnsw[9] = { + /* no atomic operation */ + gen_helper_vamoswapw_v_w, + gen_helper_vamoaddw_v_w, + gen_helper_vamoxorw_v_w, + gen_helper_vamoandw_v_w, + gen_helper_vamoorw_v_w, + gen_helper_vamominw_v_w, + gen_helper_vamomaxw_v_w, + gen_helper_vamominuw_v_w, + gen_helper_vamomaxuw_v_w + }; +#ifdef TARGET_RISCV64 + static gen_helper_amo *const fnsd[18] = { + gen_helper_vamoswapw_v_d, + gen_helper_vamoaddw_v_d, + gen_helper_vamoxorw_v_d, + gen_helper_vamoandw_v_d, + gen_helper_vamoorw_v_d, + gen_helper_vamominw_v_d, + gen_helper_vamomaxw_v_d, + gen_helper_vamominuw_v_d, + gen_helper_vamomaxuw_v_d, + gen_helper_vamoswapd_v_d, + gen_helper_vamoaddd_v_d, + gen_helper_vamoxord_v_d, + gen_helper_vamoandd_v_d, + gen_helper_vamoord_v_d, + gen_helper_vamomind_v_d, + gen_helper_vamomaxd_v_d, + gen_helper_vamominud_v_d, + gen_helper_vamomaxud_v_d + }; +#endif + + if (tb_cflags(s->base.tb) & CF_PARALLEL) { + gen_helper_exit_atomic(cpu_env); + s->base.is_jmp = DISAS_NORETURN; + return true; + } else { + fn = fnsw[seq]; +#ifdef TARGET_RISCV64 + if (s->sew == 3) { + fn = fnsd[seq]; + } +#endif + } + + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); + data = FIELD_DP32(data, VDATA, VM, a->vm); + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); + data = FIELD_DP32(data, VDATA, WD, a->wd); + return amo_trans(a->rd, a->rs1, a->rs2, data, fn, s); +} + +/* + * There are two rules checked here. + * + * 1. SEW must be at least as wide as the AMO memory element size. + * + * 2. If SEW is greater than XLEN, an illegal instruction exception is raised. + */ +static bool amo_check(DisasContext *s, arg_rwdvm* a) +{ + return (vext_check_isa_ill(s, RVV | RVA) && + (!a->wd || vext_check_overlap_mask(s, a->rd, a->vm, false)) && + vext_check_reg(s, a->rd, false) && + vext_check_reg(s, a->rs2, false) && + ((1 << s->sew) <= sizeof(target_ulong)) && + ((1 << s->sew) >= 4)); +}
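To make the two rules concrete: s->sew encodes SEW as log2 of the element width in bytes, so 1 << s->sew is the width the predicate tests. A standalone restatement of the same check (a sketch; xlen_bytes stands in for sizeof(target_ulong)):

#include <stdbool.h>

/* Rule 1: SEW must be at least 32 bits, the narrowest vector AMO width.
 * Rule 2: SEW must not exceed XLEN. */
static bool amo_sew_ok(unsigned sew, unsigned xlen_bytes)
{
    unsigned ew_bytes = 1u << sew;
    return ew_bytes >= 4 && ew_bytes <= xlen_bytes;
}
/* e.g. on RV32 (xlen_bytes == 4) only SEW=32 passes;
 * on RV64 (xlen_bytes == 8) SEW=32 and SEW=64 pass. */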
+ +GEN_VEXT_TRANS(vamoswapw_v, 0, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamoaddw_v, 1, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamoxorw_v, 2, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamoandw_v, 3, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamoorw_v, 4, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamominw_v, 5, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamomaxw_v, 6, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamominuw_v, 7, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamomaxuw_v, 8, rwdvm, amo_op, amo_check) +#ifdef TARGET_RISCV64 +GEN_VEXT_TRANS(vamoswapd_v, 9, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamoaddd_v, 10, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamoxord_v, 11, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamoandd_v, 12, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamoord_v, 13, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamomind_v, 14, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamomaxd_v, 15, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamominud_v, 16, rwdvm, amo_op, amo_check) +GEN_VEXT_TRANS(vamomaxud_v, 17, rwdvm, amo_op, amo_check) +#endif diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 3841301b74..f9b409b169 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -94,6 +94,11 @@ static inline uint32_t vext_lmul(uint32_t desc) return FIELD_EX32(simd_data(desc), VDATA, LMUL); } +static uint32_t vext_wd(uint32_t desc) +{ + return (simd_data(desc) >> 11) & 0x1; +} + /* * Get vector group length in bytes. Its range is [64, 2048]. * @@ -685,3 +690,141 @@ GEN_VEXT_LDFF(vlhuff_v_w, uint16_t, uint32_t, MO_LEUW, ldhu_w, clearl) GEN_VEXT_LDFF(vlhuff_v_d, uint16_t, uint64_t, MO_LEUW, ldhu_d, clearq) GEN_VEXT_LDFF(vlwuff_v_w, uint32_t, uint32_t, MO_LEUL, ldwu_w, clearl) GEN_VEXT_LDFF(vlwuff_v_d, uint32_t, uint64_t, MO_LEUL, ldwu_d, clearq) + +/* + *** Vector AMO Operations (Zvamo) + */ +typedef void (*vext_amo_noatomic_fn)(void *vs3, target_ulong addr, + uint32_t wd, uint32_t idx, CPURISCVState *env, uintptr_t retaddr); + +/* no atomic operation for vector atomic instructions */ +#define DO_SWAP(N, M) (M) +#define DO_AND(N, M) (N & M) +#define DO_XOR(N, M) (N ^ M) +#define DO_OR(N, M) (N | M) +#define DO_ADD(N, M) (N + M) + +#define GEN_VEXT_AMO_NOATOMIC_OP(NAME, ESZ, MSZ, H, DO_OP, SUF) \ +static void vext_##NAME##_noatomic_op(void *vs3, \ + target_ulong addr, uint32_t wd, uint32_t idx, \ + CPURISCVState *env, uintptr_t retaddr) \ +{ \ + typedef int##ESZ##_t ETYPE; \ + typedef int##MSZ##_t MTYPE; \ + typedef uint##MSZ##_t UMTYPE __attribute__((unused)); \ + ETYPE *pe3 = (ETYPE *)vs3 + H(idx); \ + /* a is the old memory value, b the register operand */ \ + MTYPE a = cpu_ld##SUF##_data(env, addr), b = *pe3; \ + \ + cpu_st##SUF##_data(env, addr, DO_OP(a, b)); \ + if (wd) { \ + *pe3 = a; /* vd receives the old memory value */ \ + } \ +} + +/* Signed min/max */ +#define DO_MAX(N, M) ((N) >= (M) ? (N) : (M)) +#define DO_MIN(N, M) ((N) >= (M) ?
(M) : (N)) + +/* Unsigned min/max */ +#define DO_MAXU(N, M) DO_MAX((UMTYPE)N, (UMTYPE)M) +#define DO_MINU(N, M) DO_MIN((UMTYPE)N, (UMTYPE)M) + +GEN_VEXT_AMO_NOATOMIC_OP(vamoswapw_v_w, 32, 32, H4, DO_SWAP, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamoaddw_v_w, 32, 32, H4, DO_ADD, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamoxorw_v_w, 32, 32, H4, DO_XOR, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamoandw_v_w, 32, 32, H4, DO_AND, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamoorw_v_w, 32, 32, H4, DO_OR, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamominw_v_w, 32, 32, H4, DO_MIN, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamomaxw_v_w, 32, 32, H4, DO_MAX, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamominuw_v_w, 32, 32, H4, DO_MINU, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamomaxuw_v_w, 32, 32, H4, DO_MAXU, l) +#ifdef TARGET_RISCV64 +GEN_VEXT_AMO_NOATOMIC_OP(vamoswapw_v_d, 64, 32, H8, DO_SWAP, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamoswapd_v_d, 64, 64, H8, DO_SWAP, q) +GEN_VEXT_AMO_NOATOMIC_OP(vamoaddw_v_d, 64, 32, H8, DO_ADD, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamoaddd_v_d, 64, 64, H8, DO_ADD, q) +GEN_VEXT_AMO_NOATOMIC_OP(vamoxorw_v_d, 64, 32, H8, DO_XOR, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamoxord_v_d, 64, 64, H8, DO_XOR, q) +GEN_VEXT_AMO_NOATOMIC_OP(vamoandw_v_d, 64, 32, H8, DO_AND, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamoandd_v_d, 64, 64, H8, DO_AND, q) +GEN_VEXT_AMO_NOATOMIC_OP(vamoorw_v_d, 64, 32, H8, DO_OR, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamoord_v_d, 64, 64, H8, DO_OR, q) +GEN_VEXT_AMO_NOATOMIC_OP(vamominw_v_d, 64, 32, H8, DO_MIN, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamomind_v_d, 64, 64, H8, DO_MIN, q) +GEN_VEXT_AMO_NOATOMIC_OP(vamomaxw_v_d, 64, 32, H8, DO_MAX, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamomaxd_v_d, 64, 64, H8, DO_MAX, q) +GEN_VEXT_AMO_NOATOMIC_OP(vamominuw_v_d, 64, 32, H8, DO_MINU, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamominud_v_d, 64, 64, H8, DO_MINU, q) +GEN_VEXT_AMO_NOATOMIC_OP(vamomaxuw_v_d, 64, 32, H8, DO_MAXU, l) +GEN_VEXT_AMO_NOATOMIC_OP(vamomaxud_v_d, 64, 64, H8, DO_MAXU, q) +#endif + +static inline void vext_amo_noatomic(void *vs3, void *v0, target_ulong base, + void *vs2, CPURISCVState *env, uint32_t desc, + vext_get_index_addr get_index_addr, + vext_amo_noatomic_fn noatomic_op, + vext_ld_clear_elem clear_elem, + uint32_t esz, uint32_t msz, uintptr_t ra) +{ + uint32_t i; + target_long addr; + uint32_t wd = vext_wd(desc); + uint32_t vm = vext_vm(desc); + uint32_t mlen = vext_mlen(desc); + uint32_t vlmax = vext_maxsz(desc) / esz; + + for (i = 0; i < env->vl; i++) { + if (!vm && !vext_elem_mask(v0, mlen, i)) { + continue; + } + probe_pages(env, get_index_addr(base, i, vs2), msz, ra, MMU_DATA_LOAD); + probe_pages(env, get_index_addr(base, i, vs2), msz, ra, MMU_DATA_STORE); + } + for (i = 0; i < env->vl; i++) { + if (!vm && !vext_elem_mask(v0, mlen, i)) { + continue; + } + addr = get_index_addr(base, i, vs2); + noatomic_op(vs3, addr, wd, i, env, ra); + } + clear_elem(vs3, env->vl, env->vl * esz, vlmax * esz); +} + +#define GEN_VEXT_AMO(NAME, MTYPE, ETYPE, INDEX_FN, CLEAR_FN) \ +void HELPER(NAME)(void *vs3, void *v0, target_ulong base, \ + void *vs2, CPURISCVState *env, uint32_t desc) \ +{ \ + vext_amo_noatomic(vs3, v0, base, vs2, env, desc, \ + INDEX_FN, vext_##NAME##_noatomic_op, CLEAR_FN, \ + sizeof(ETYPE), sizeof(MTYPE), GETPC()); \ +} + +#ifdef TARGET_RISCV64 +GEN_VEXT_AMO(vamoswapw_v_d, int32_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamoswapd_v_d, int64_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamoaddw_v_d, int32_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamoaddd_v_d, int64_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamoxorw_v_d, int32_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamoxord_v_d, int64_t, int64_t, idx_d, 
clearq) +GEN_VEXT_AMO(vamoandw_v_d, int32_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamoandd_v_d, int64_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamoorw_v_d, int32_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamoord_v_d, int64_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamominw_v_d, int32_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamomind_v_d, int64_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamomaxw_v_d, int32_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamomaxd_v_d, int64_t, int64_t, idx_d, clearq) +GEN_VEXT_AMO(vamominuw_v_d, uint32_t, uint64_t, idx_d, clearq) +GEN_VEXT_AMO(vamominud_v_d, uint64_t, uint64_t, idx_d, clearq) +GEN_VEXT_AMO(vamomaxuw_v_d, uint32_t, uint64_t, idx_d, clearq) +GEN_VEXT_AMO(vamomaxud_v_d, uint64_t, uint64_t, idx_d, clearq) +#endif +GEN_VEXT_AMO(vamoswapw_v_w, int32_t, int32_t, idx_w, clearl) +GEN_VEXT_AMO(vamoaddw_v_w, int32_t, int32_t, idx_w, clearl) +GEN_VEXT_AMO(vamoxorw_v_w, int32_t, int32_t, idx_w, clearl) +GEN_VEXT_AMO(vamoandw_v_w, int32_t, int32_t, idx_w, clearl) +GEN_VEXT_AMO(vamoorw_v_w, int32_t, int32_t, idx_w, clearl) +GEN_VEXT_AMO(vamominw_v_w, int32_t, int32_t, idx_w, clearl) +GEN_VEXT_AMO(vamomaxw_v_w, int32_t, int32_t, idx_w, clearl) +GEN_VEXT_AMO(vamominuw_v_w, uint32_t, uint32_t, idx_w, clearl) +GEN_VEXT_AMO(vamomaxuw_v_w, uint32_t, uint32_t, idx_w, clearl)
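Note how vext_amo_noatomic above walks the active elements twice: the first loop only probes the pages for both read and write access, so any fault is raised before memory is modified; the second loop then performs the updates. A stripped-down sketch of that probe-then-commit idiom (probe and apply are hypothetical stand-ins for probe_pages() and the per-element op):

#include <stddef.h>

static void probe_then_commit(size_t n,
                              void (*probe)(size_t i),
                              void (*apply)(size_t i))
{
    for (size_t i = 0; i < n; i++) {
        probe(i);   /* may trap; nothing has been written yet */
    }
    for (size_t i = 0; i < n; i++) {
        apply(i);   /* should not fault on the pages just probed */
    }
}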
From patchwork Mon Mar 9 12:14:06 2020 X-Patchwork-Submitter: LIU Zhiwei X-Patchwork-Id: 11427025 From: LIU Zhiwei To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com Subject: [PATCH v3 09/60] target/riscv: vector single-width integer add and subtract Date: Mon, 9 Mar 2020 20:14:06 +0800 Message-Id: <20200309121457.13719-10-zhiwei_liu@c-sky.com> Signed-off-by: LIU Zhiwei --- target/riscv/helper.h | 21 +++ target/riscv/insn32.decode | 10 ++ target/riscv/insn_trans/trans_rvv.inc.c | 220 ++++++++++++++++++++++ target/riscv/vector_helper.c | 122 +++++++++++++ 4 files changed, 373 insertions(+) diff --git a/target/riscv/helper.h b/target/riscv/helper.h index 70a4b05f75..e73701d4bb 100644 --- a/target/riscv/helper.h +++ b/target/riscv/helper.h @@ -269,3 +269,24 @@ DEF_HELPER_6(vamominw_v_w, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vamomaxw_v_w, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vamominuw_v_w, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vamomaxuw_v_w, void, ptr, ptr, tl, ptr, env, i32) + +DEF_HELPER_6(vadd_vv_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vadd_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vadd_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vadd_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsub_vv_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsub_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsub_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsub_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vadd_vx_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vadd_vx_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vadd_vx_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vadd_vx_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsub_vx_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsub_vx_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsub_vx_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsub_vx_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vrsub_vx_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vrsub_vx_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vrsub_vx_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vrsub_vx_d, void, ptr, ptr, tl, ptr, env, i32) diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index 1330703720..d1034a0e61 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -44,6 +44,7 @@ &u imm rd &shift shamt rs1 rd &atomic aq rl rs2 rs1 rd +&rmrr vm rd rs1 rs2 &rwdvm vm wd rd rs1 rs2 &r2nfvm vm rd rs1 nf &rnfvm vm rd rs1 rs2 nf @@ -68,6 +69,7 @@ @r2 ....... ..... ..... ... ..... ....... %rs1 %rd @r2_nfvm ... ... vm:1 ..... ..... ... ..... ....... &r2nfvm %nf %rs1 %rd @r_nfvm ... ... vm:1 ..... ..... ... ..... ....... &rnfvm %nf %rs2 %rs1 %rd +@r_vm ...... vm:1 ..... ..... ... ..... ....... &rmrr %rs2 %rs1 %rd @r_wdvm ..... wd:1 vm:1 ..... ..... ... .....
....... &rwdvm %rs2 %rs1 %rd @r2_zimm . zimm:11 ..... ... ..... ....... %rs1 %rd @@ -275,5 +277,13 @@ vamominuw_v 11000 . . ..... ..... 110 ..... 0101111 @r_wdvm vamomaxuw_v 11100 . . ..... ..... 110 ..... 0101111 @r_wdvm # *** new major opcode OP-V *** +vadd_vv 000000 . ..... ..... 000 ..... 1010111 @r_vm +vadd_vx 000000 . ..... ..... 100 ..... 1010111 @r_vm +vadd_vi 000000 . ..... ..... 011 ..... 1010111 @r_vm +vsub_vv 000010 . ..... ..... 000 ..... 1010111 @r_vm +vsub_vx 000010 . ..... ..... 100 ..... 1010111 @r_vm +vrsub_vx 000011 . ..... ..... 100 ..... 1010111 @r_vm +vrsub_vi 000011 . ..... ..... 011 ..... 1010111 @r_vm + vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 3c677160c5..00c7ec976f 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -730,3 +730,223 @@ GEN_VEXT_TRANS(vamomaxd_v, 15, rwdvm, amo_op, amo_check) GEN_VEXT_TRANS(vamominud_v, 16, rwdvm, amo_op, amo_check) GEN_VEXT_TRANS(vamomaxud_v, 17, rwdvm, amo_op, amo_check) #endif + +/* + *** Vector Integer Arithmetic Instructions + */ +#define MAXSZ(s) (s->vlen >> (3 - s->lmul)) + +static bool opivv_check(DisasContext *s, arg_rmrr *a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_overlap_mask(s, a->rd, a->vm, false) && + vext_check_reg(s, a->rd, false) && + vext_check_reg(s, a->rs2, false) && + vext_check_reg(s, a->rs1, false)); +} + +/* OPIVV with GVEC IR */ +#define GEN_OPIVV_GVEC_TRANS(NAME, GVSUF) \ +static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \ +{ \ + if (!opivv_check(s, a)) { \ + return false; \ + } \ + \ + if (a->vm && s->vl_eq_vlmax) { \ + tcg_gen_gvec_##GVSUF(8 << s->sew, vreg_ofs(s, a->rd), \ + vreg_ofs(s, a->rs2), vreg_ofs(s, a->rs1), \ + MAXSZ(s), MAXSZ(s)); \ + } else { \ + uint32_t data = 0; \ + static gen_helper_gvec_4_ptr * const fns[4] = { \ + gen_helper_##NAME##_b, gen_helper_##NAME##_h, \ + gen_helper_##NAME##_w, gen_helper_##NAME##_d, \ + }; \ + \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, VM, a->vm); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \ + vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2), \ + cpu_env, 0, s->vlen / 8, data, fns[s->sew]); \ + } \ + return true; \ +} +GEN_OPIVV_GVEC_TRANS(vadd_vv, add) +GEN_OPIVV_GVEC_TRANS(vsub_vv, sub) + +typedef void (*gen_helper_opivx)(TCGv_ptr, TCGv_ptr, TCGv, TCGv_ptr, + TCGv_env, TCGv_i32); + +static bool opivx_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, + uint32_t data, gen_helper_opivx fn, DisasContext *s) +{ + TCGv_ptr dest, src2, mask; + TCGv src1; + TCGv_i32 desc; + + dest = tcg_temp_new_ptr(); + mask = tcg_temp_new_ptr(); + src2 = tcg_temp_new_ptr(); + src1 = tcg_temp_new(); + gen_get_gpr(src1, rs1); + desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data)); + + tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd)); + tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2)); + tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0)); + + fn(dest, mask, src1, src2, cpu_env, desc); + + tcg_temp_free_ptr(dest); + tcg_temp_free_ptr(mask); + tcg_temp_free_ptr(src2); + tcg_temp_free(src1); + tcg_temp_free_i32(desc); + return true; +} + +static bool opivx_check(DisasContext *s, arg_rmrr *a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_overlap_mask(s, a->rd, a->vm, false) && + vext_check_reg(s, a->rd, false) && + vext_check_reg(s, a->rs2, false)); +} 
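For orientation, GEN_OPIVV_GVEC_TRANS(vadd_vv, add) above expands to roughly the following (a hand-expanded sketch, not preprocessor output):

static bool trans_vadd_vv(DisasContext *s, arg_rmrr *a)
{
    if (!opivv_check(s, a)) {
        return false;
    }
    if (a->vm && s->vl_eq_vlmax) {
        /* unmasked and vl == vlmax: inline gvec expansion */
        tcg_gen_gvec_add(8 << s->sew, vreg_ofs(s, a->rd),
                         vreg_ofs(s, a->rs2), vreg_ofs(s, a->rs1),
                         MAXSZ(s), MAXSZ(s));
    } else {
        /* general case: out-of-line helper honouring vl and the mask in v0 */
        static gen_helper_gvec_4_ptr * const fns[4] = {
            gen_helper_vadd_vv_b, gen_helper_vadd_vv_h,
            gen_helper_vadd_vv_w, gen_helper_vadd_vv_d,
        };
        uint32_t data = 0;
        data = FIELD_DP32(data, VDATA, MLEN, s->mlen);
        data = FIELD_DP32(data, VDATA, VM, a->vm);
        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);
        tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),
                           vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2),
                           cpu_env, 0, s->vlen / 8, data, fns[s->sew]);
    }
    return true;
}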
+/* OPIVX with GVEC IR */ +#define GEN_OPIVX_GVEC_TRANS(NAME, GVSUF) \ +static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \ +{ \ + if (!opivx_check(s, a)) { \ + return false; \ + } \ + \ + if (a->vm && s->vl_eq_vlmax) { \ + TCGv_i64 src1 = tcg_temp_new_i64(); \ + TCGv tmp = tcg_temp_new(); \ + gen_get_gpr(tmp, a->rs1); \ + tcg_gen_ext_tl_i64(src1, tmp); \ + tcg_gen_gvec_##GVSUF(8 << s->sew, vreg_ofs(s, a->rd), \ + vreg_ofs(s, a->rs2), src1, MAXSZ(s), MAXSZ(s)); \ + tcg_temp_free_i64(src1); \ + tcg_temp_free(tmp); \ + return true; \ + } else { \ + uint32_t data = 0; \ + static gen_helper_opivx const fns[4] = { \ + gen_helper_##NAME##_b, gen_helper_##NAME##_h, \ + gen_helper_##NAME##_w, gen_helper_##NAME##_d, \ + }; \ + \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, VM, a->vm); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + return opivx_trans(a->rd, a->rs1, a->rs2, data, fns[s->sew], s); \ + } \ + return true; \ +} +GEN_OPIVX_GVEC_TRANS(vadd_vx, adds) +GEN_OPIVX_GVEC_TRANS(vsub_vx, subs) + +/* OPIVX without GVEC IR */ +#define GEN_OPIVX_TRANS(NAME, CHECK) \ +static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \ +{ \ + if (CHECK(s, a)) { \ + uint32_t data = 0; \ + static gen_helper_opivx const fns[4] = { \ + gen_helper_##NAME##_b, gen_helper_##NAME##_h, \ + gen_helper_##NAME##_w, gen_helper_##NAME##_d, \ + }; \ + \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, VM, a->vm); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + return opivx_trans(a->rd, a->rs1, a->rs2, data, fns[s->sew], s); \ + } \ + return false; \ +} + +GEN_OPIVX_TRANS(vrsub_vx, opivx_check) + +static bool opivi_trans(uint32_t vd, uint32_t imm, uint32_t vs2, + uint32_t data, gen_helper_opivx fn, DisasContext *s, int zx) +{ + TCGv_ptr dest, src2, mask; + TCGv src1; + TCGv_i32 desc; + + dest = tcg_temp_new_ptr(); + mask = tcg_temp_new_ptr(); + src2 = tcg_temp_new_ptr(); + if (zx) { + src1 = tcg_const_tl(imm); + } else { + src1 = tcg_const_tl(sextract64(imm, 0, 5)); + } + desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data)); + + tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd)); + tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2)); + tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0)); + + fn(dest, mask, src1, src2, cpu_env, desc); + + tcg_temp_free_ptr(dest); + tcg_temp_free_ptr(mask); + tcg_temp_free_ptr(src2); + tcg_temp_free(src1); + tcg_temp_free_i32(desc); + return true; +} + +/* OPIVI with GVEC IR */ +#define GEN_OPIVI_GVEC_TRANS(NAME, ZX, OPIVX, GVSUF) \ +static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \ +{ \ + if (!opivx_check(s, a)) { \ + return false; \ + } \ + \ + if (a->vm && s->vl_eq_vlmax) { \ + tcg_gen_gvec_##GVSUF(8 << s->sew, vreg_ofs(s, a->rd), \ + vreg_ofs(s, a->rs2), sextract64(a->rs1, 0, 5), \ + MAXSZ(s), MAXSZ(s)); \ + return true; \ + } else { \ + uint32_t data = 0; \ + static gen_helper_opivx const fns[4] = { \ + gen_helper_##OPIVX##_b, gen_helper_##OPIVX##_h, \ + gen_helper_##OPIVX##_w, gen_helper_##OPIVX##_d, \ + }; \ + \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, VM, a->vm); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + return opivi_trans(a->rd, a->rs1, a->rs2, data, \ + fns[s->sew], s, ZX); \ + } \ + return true; \ +} +GEN_OPIVI_GVEC_TRANS(vadd_vi, 0, vadd_vx, addi) + +/* OPIVI without GVEC IR */ +#define GEN_OPIVI_TRANS(NAME, ZX, OPIVX, CHECK) \ +static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \ +{ \ + if (CHECK(s, a)) { \ + uint32_t data = 0; \ + 
static gen_helper_opivx const fns[4] = { \ + gen_helper_##OPIVX##_b, gen_helper_##OPIVX##_h, \ + gen_helper_##OPIVX##_w, gen_helper_##OPIVX##_d, \ + }; \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, VM, a->vm); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + return opivi_trans(a->rd, a->rs1, a->rs2, data, \ + fns[s->sew], s, ZX); \ + } \ + return false; \ +} +GEN_OPIVI_TRANS(vrsub_vi, 0, vrsub_vx, opivx_check) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index f9b409b169..abdf3b82a8 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -828,3 +828,125 @@ GEN_VEXT_AMO(vamominw_v_w, int32_t, int32_t, idx_w, clearl) GEN_VEXT_AMO(vamomaxw_v_w, int32_t, int32_t, idx_w, clearl) GEN_VEXT_AMO(vamominuw_v_w, uint32_t, uint32_t, idx_w, clearl) GEN_VEXT_AMO(vamomaxuw_v_w, uint32_t, uint32_t, idx_w, clearl) + +/* + *** Vector Integer Arithmetic Instructions + */ + +/* expand macro args before macro */ +#define RVVCALL(macro, ...) macro(__VA_ARGS__) + +/* (TD, T1, T2, TX1, TX2) */ +#define OP_SSS_B int8_t, int8_t, int8_t, int8_t, int8_t +#define OP_SSS_H int16_t, int16_t, int16_t, int16_t, int16_t +#define OP_SSS_W int32_t, int32_t, int32_t, int32_t, int32_t +#define OP_SSS_D int64_t, int64_t, int64_t, int64_t, int64_t + +/* operation of two vector elements */ +#define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \ +static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \ +{ \ + TX1 s1 = *((T1 *)vs1 + HS1(i)); \ + TX2 s2 = *((T2 *)vs2 + HS2(i)); \ + *((TD *)vd + HD(i)) = OP(s2, s1); \ +} +#define DO_SUB(N, M) (N - M) +#define DO_RSUB(N, M) (M - N) + +RVVCALL(OPIVV2, vadd_vv_b, OP_SSS_B, H1, H1, H1, DO_ADD) +RVVCALL(OPIVV2, vadd_vv_h, OP_SSS_H, H2, H2, H2, DO_ADD) +RVVCALL(OPIVV2, vadd_vv_w, OP_SSS_W, H4, H4, H4, DO_ADD) +RVVCALL(OPIVV2, vadd_vv_d, OP_SSS_D, H8, H8, H8, DO_ADD) +RVVCALL(OPIVV2, vsub_vv_b, OP_SSS_B, H1, H1, H1, DO_SUB) +RVVCALL(OPIVV2, vsub_vv_h, OP_SSS_H, H2, H2, H2, DO_SUB) +RVVCALL(OPIVV2, vsub_vv_w, OP_SSS_W, H4, H4, H4, DO_SUB) +RVVCALL(OPIVV2, vsub_vv_d, OP_SSS_D, H8, H8, H8, DO_SUB) + +/* generate the helpers for OPIVV */ +#define GEN_VEXT_VV(NAME, ESZ, DSZ, CLEAR_FN) \ +void HELPER(NAME)(void *vd, void *v0, void *vs1, \ + void *vs2, CPURISCVState *env, uint32_t desc) \ +{ \ + uint32_t vlmax = vext_maxsz(desc) / ESZ; \ + uint32_t mlen = vext_mlen(desc); \ + uint32_t vm = vext_vm(desc); \ + uint32_t vl = env->vl; \ + uint32_t i; \ + for (i = 0; i < vl; i++) { \ + if (!vm && !vext_elem_mask(v0, mlen, i)) { \ + continue; \ + } \ + do_##NAME(vd, vs1, vs2, i); \ + } \ + if (i != 0) { \ + CLEAR_FN(vd, vl, vl * DSZ, vlmax * DSZ); \ + } \ +} + +GEN_VEXT_VV(vadd_vv_b, 1, 1, clearb) +GEN_VEXT_VV(vadd_vv_h, 2, 2, clearh) +GEN_VEXT_VV(vadd_vv_w, 4, 4, clearl) +GEN_VEXT_VV(vadd_vv_d, 8, 8, clearq) +GEN_VEXT_VV(vsub_vv_b, 1, 1, clearb) +GEN_VEXT_VV(vsub_vv_h, 2, 2, clearh) +GEN_VEXT_VV(vsub_vv_w, 4, 4, clearl) +GEN_VEXT_VV(vsub_vv_d, 8, 8, clearq) + +/* + * If XLEN < SEW, the value from the x register is sign-extended to SEW bits. + * So (target_long)s1 is needed. (T1)(target_long)s1 gives the real operand type, + * and (TX1)(T1)(target_long)s1 expands the operand type for widen or narrow + * operations. + */ +#define OPIVX2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \ +static void do_##NAME(void *vd, target_ulong s1, void *vs2, int i) \ +{ \ + TX2 s2 = *((T2 *)vs2 + HS2(i)); \ + *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)(target_long)s1); \ +} +RVVCALL(OPIVX2, vadd_vx_b, OP_SSS_B, H1, H1, DO_ADD) +RVVCALL(OPIVX2, vadd_vx_h, OP_SSS_H, H2, H2, DO_ADD) +RVVCALL(OPIVX2, vadd_vx_w, OP_SSS_W, H4, H4, DO_ADD) +RVVCALL(OPIVX2, vadd_vx_d, OP_SSS_D, H8, H8, DO_ADD) +RVVCALL(OPIVX2, vsub_vx_b, OP_SSS_B, H1, H1, DO_SUB) +RVVCALL(OPIVX2, vsub_vx_h, OP_SSS_H, H2, H2, DO_SUB) +RVVCALL(OPIVX2, vsub_vx_w, OP_SSS_W, H4, H4, DO_SUB) +RVVCALL(OPIVX2, vsub_vx_d, OP_SSS_D, H8, H8, DO_SUB) +RVVCALL(OPIVX2, vrsub_vx_b, OP_SSS_B, H1, H1, DO_RSUB) +RVVCALL(OPIVX2, vrsub_vx_h, OP_SSS_H, H2, H2, DO_RSUB) +RVVCALL(OPIVX2, vrsub_vx_w, OP_SSS_W, H4, H4, DO_RSUB) +RVVCALL(OPIVX2, vrsub_vx_d, OP_SSS_D, H8, H8, DO_RSUB)
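A concrete trace of that cast chain as a standalone sketch (hypothetical values; RV64 with SEW=8, so T1 is int8_t):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t s1 = 0xffULL;            /* stands in for the target_ulong x-register value */
    int8_t t1 = (int8_t)(int64_t)s1;  /* (T1)(target_long)s1: keep the low byte, signed */
    printf("%d\n", t1);               /* prints -1: the byte is then widened as signed */
    return 0;
}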
+ +/* generate the helpers for instructions with one vector and one scalar */ +#define GEN_VEXT_VX(NAME, ESZ, DSZ, CLEAR_FN) \ +void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \ + void *vs2, CPURISCVState *env, uint32_t desc) \ +{ \ + uint32_t vlmax = vext_maxsz(desc) / ESZ; \ + uint32_t mlen = vext_mlen(desc); \ + uint32_t vm = vext_vm(desc); \ + uint32_t vl = env->vl; \ + uint32_t i; \ + \ + for (i = 0; i < vl; i++) { \ + if (!vm && !vext_elem_mask(v0, mlen, i)) { \ + continue; \ + } \ + do_##NAME(vd, s1, vs2, i); \ + } \ + if (i != 0) { \ + CLEAR_FN(vd, vl, vl * DSZ, vlmax * DSZ); \ + } \ +} +GEN_VEXT_VX(vadd_vx_b, 1, 1, clearb) +GEN_VEXT_VX(vadd_vx_h, 2, 2, clearh) +GEN_VEXT_VX(vadd_vx_w, 4, 4, clearl) +GEN_VEXT_VX(vadd_vx_d, 8, 8, clearq) +GEN_VEXT_VX(vsub_vx_b, 1, 1, clearb) +GEN_VEXT_VX(vsub_vx_h, 2, 2, clearh) +GEN_VEXT_VX(vsub_vx_w, 4, 4, clearl) +GEN_VEXT_VX(vsub_vx_d, 8, 8, clearq) +GEN_VEXT_VX(vrsub_vx_b, 1, 1, clearb) +GEN_VEXT_VX(vrsub_vx_h, 2, 2, clearh) +GEN_VEXT_VX(vrsub_vx_w, 4, 4, clearl) +GEN_VEXT_VX(vrsub_vx_d, 8, 8, clearq)
From patchwork Mon Mar 9 12:14:08 2020 X-Patchwork-Submitter: LIU Zhiwei X-Patchwork-Id: 11427055 From: LIU Zhiwei To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com Subject: [PATCH v3 11/60] target/riscv: vector integer add-with-carry / subtract-with-borrow instructions Date: Mon, 9 Mar 2020 20:14:08 +0800 Message-Id: <20200309121457.13719-12-zhiwei_liu@c-sky.com> Signed-off-by: LIU Zhiwei --- target/riscv/helper.h | 33 ++++++ target/riscv/insn32.decode | 10 ++ target/riscv/insn_trans/trans_rvv.inc.c | 108 ++++++++++++++++++ target/riscv/vector_helper.c | 140 ++++++++++++++++++++++++ 4 files changed, 291 insertions(+) diff --git a/target/riscv/helper.h b/target/riscv/helper.h index 1256defb6c..72c733bf49 100644 --- a/target/riscv/helper.h +++ b/target/riscv/helper.h @@ -339,3 +339,36 @@ DEF_HELPER_6(vwadd_wx_w, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vwsub_wx_b, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vwsub_wx_h, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vwsub_wx_w, void, ptr, ptr, tl, ptr, env, i32) + +DEF_HELPER_6(vadc_vvm_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vadc_vvm_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vadc_vvm_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vadc_vvm_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsbc_vvm_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsbc_vvm_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsbc_vvm_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsbc_vvm_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vmadc_vvm_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vmadc_vvm_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vmadc_vvm_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vmadc_vvm_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vmsbc_vvm_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vmsbc_vvm_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vmsbc_vvm_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vmsbc_vvm_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vadc_vxm_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vadc_vxm_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vadc_vxm_w,
void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vadc_vxm_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsbc_vxm_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsbc_vxm_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsbc_vxm_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsbc_vxm_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vmadc_vxm_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vmadc_vxm_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vmadc_vxm_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vmadc_vxm_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vmsbc_vxm_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vmsbc_vxm_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vmsbc_vxm_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vmsbc_vxm_d, void, ptr, ptr, tl, ptr, env, i32) diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index 4bdbfd16fa..e8ddf95d3d 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -300,6 +300,16 @@ vwsubu_wv 110110 . ..... ..... 010 ..... 1010111 @r_vm vwsubu_wx 110110 . ..... ..... 110 ..... 1010111 @r_vm vwsub_wv 110111 . ..... ..... 010 ..... 1010111 @r_vm vwsub_wx 110111 . ..... ..... 110 ..... 1010111 @r_vm +vadc_vvm 010000 1 ..... ..... 000 ..... 1010111 @r +vadc_vxm 010000 1 ..... ..... 100 ..... 1010111 @r +vadc_vim 010000 1 ..... ..... 011 ..... 1010111 @r +vmadc_vvm 010001 1 ..... ..... 000 ..... 1010111 @r +vmadc_vxm 010001 1 ..... ..... 100 ..... 1010111 @r +vmadc_vim 010001 1 ..... ..... 011 ..... 1010111 @r +vsbc_vvm 010010 1 ..... ..... 000 ..... 1010111 @r +vsbc_vxm 010010 1 ..... ..... 100 ..... 1010111 @r +vmsbc_vvm 010011 1 ..... ..... 000 ..... 1010111 @r +vmsbc_vxm 010011 1 ..... ..... 100 ..... 1010111 @r vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 7f6fe82fb3..a1f2e84eb8 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1104,3 +1104,111 @@ GEN_OPIWX_WIDEN_TRANS(vwaddu_wx) GEN_OPIWX_WIDEN_TRANS(vwadd_wx) GEN_OPIWX_WIDEN_TRANS(vwsubu_wx) GEN_OPIWX_WIDEN_TRANS(vwsub_wx) + +/* OPIVV with UNMASKED */ +#define GEN_OPIVV_R_TRANS(NAME, CHECK) \ +static bool trans_##NAME(DisasContext *s, arg_r *a) \ +{ \ + if (CHECK(s, a)) { \ + uint32_t data = 0; \ + static gen_helper_gvec_4_ptr * const fns[4] = { \ + gen_helper_##NAME##_b, gen_helper_##NAME##_h, \ + gen_helper_##NAME##_w, gen_helper_##NAME##_d, \ + }; \ + \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \ + vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2), \ + cpu_env, 0, s->vlen / 8, data, fns[s->sew]); \ + return true; \ + } \ + return false; \ +} +/* + * For vadc and vsbc, an illegal instruction exception is raised if the + * destination vector register is v0 and LMUL > 1. (Section 12.3) + */ +static bool opivv_vadc_check(DisasContext *s, arg_r *a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_reg(s, a->rd, false) && + vext_check_reg(s, a->rs2, false) && + vext_check_reg(s, a->rs1, false) && + ((a->rd != 0) || (s->lmul == 0))); +} +GEN_OPIVV_R_TRANS(vadc_vvm, opivv_vadc_check) +GEN_OPIVV_R_TRANS(vsbc_vvm, opivv_vadc_check) + +/* + * For vmadc and vmsbc, an illegal instruction exception is raised if the + * destination vector register overlaps a source vector register group. 
+ */ +static bool opivv_vmadc_check(DisasContext *s, arg_r *a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_reg(s, a->rs2, false) && + vext_check_reg(s, a->rs1, false) && + vext_check_overlap_group(a->rd, 1, a->rs1, 1 << s->lmul) && + vext_check_overlap_group(a->rd, 1, a->rs2, 1 << s->lmul)); +} +GEN_OPIVV_R_TRANS(vmadc_vvm, opivv_vmadc_check) +GEN_OPIVV_R_TRANS(vmsbc_vvm, opivv_vmadc_check) + +/* OPIVX with UNMASKED */ +#define GEN_OPIVX_R_TRANS(NAME, CHECK) \ +static bool trans_##NAME(DisasContext *s, arg_r *a) \ +{ \ + if (CHECK(s, a)) { \ + uint32_t data = 0; \ + static gen_helper_opivx const fns[4] = { \ + gen_helper_##NAME##_b, gen_helper_##NAME##_h, \ + gen_helper_##NAME##_w, gen_helper_##NAME##_d, \ + }; \ + \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + return opivx_trans(a->rd, a->rs1, a->rs2, data, fns[s->sew], s); \ + } \ + return false; \ +} + +static bool opivx_vadc_check(DisasContext *s, arg_r *a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_reg(s, a->rd, false) && + vext_check_reg(s, a->rs2, false) && + ((a->rd != 0) || (s->lmul == 0))); +} +GEN_OPIVX_R_TRANS(vadc_vxm, opivx_vadc_check) +GEN_OPIVX_R_TRANS(vsbc_vxm, opivx_vadc_check) + +static bool opivx_vmadc_check(DisasContext *s, arg_r *a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_reg(s, a->rs2, false) && + vext_check_overlap_group(a->rd, 1, a->rs2, 1 << s->lmul)); +} +GEN_OPIVX_R_TRANS(vmadc_vxm, opivx_vmadc_check) +GEN_OPIVX_R_TRANS(vmsbc_vxm, opivx_vmadc_check) + +/* OPIVI without GVEC IR */ +#define GEN_OPIVI_R_TRANS(NAME, ZX, OPIVX, CHECK) \ +static bool trans_##NAME(DisasContext *s, arg_r *a) \ +{ \ + if (CHECK(s, a)) { \ + uint32_t data = 0; \ + static gen_helper_opivx const fns[4] = { \ + gen_helper_##OPIVX##_b, gen_helper_##OPIVX##_h, \ + gen_helper_##OPIVX##_w, gen_helper_##OPIVX##_d, \ + }; \ + \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + return opivi_trans(a->rd, a->rs1, a->rs2, data, \ + fns[s->sew], s, ZX); \ + } \ + return false; \ +} +GEN_OPIVI_R_TRANS(vadc_vim, 0, vadc_vxm, opivx_vadc_check) +GEN_OPIVI_R_TRANS(vmadc_vim, 0, vmadc_vxm, opivx_vmadc_check) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 00eaebee9f..dd85b94fe7 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -185,6 +185,14 @@ static void clearq(void *vd, uint32_t idx, uint32_t cnt, uint32_t tot) vext_clear(cur, cnt, tot); } +static inline void vext_set_elem_mask(void *v0, int mlen, int index, + uint8_t value) +{ + int idx = (index * mlen) / 64; + int pos = (index * mlen) % 64; + uint64_t old = ((uint64_t *)v0)[idx]; + ((uint64_t *)v0)[idx] = deposit64(old, pos, mlen, value); +} static inline int vext_elem_mask(void *v0, int mlen, int index) { @@ -1062,3 +1070,135 @@ GEN_VEXT_VX(vwsub_wx_b, 1, 2, clearh) GEN_VEXT_VX(vwsub_wx_h, 2, 4, clearl) GEN_VEXT_VX(vwsub_wx_w, 4, 8, clearq) +#define DO_VADC(N, M, C) (N + M + C) +#define DO_VSBC(N, M, C) (N - M - C) + +#define GEN_VEXT_VADC_VVM(NAME, ETYPE, H, DO_OP, CLEAR_FN) \ +void HELPER(NAME)(void *vd, void *v0, void *vs1, void *vs2, \ + CPURISCVState *env, uint32_t desc) \ +{ \ + uint32_t mlen = vext_mlen(desc); \ + uint32_t vl = env->vl; \ + uint32_t esz = sizeof(ETYPE); \ + uint32_t vlmax = vext_maxsz(desc) / esz; \ + uint32_t i; \ + \ + for (i = 0; i < vl; i++) { \ + ETYPE s1 = *((ETYPE *)vs1 + H(i)); \ + ETYPE s2 = *((ETYPE *)vs2 + H(i)); \ + uint8_t carry = vext_elem_mask(v0, mlen, 
i); \ + \ + *((ETYPE *)vd + H(i)) = DO_OP(s2, s1, carry); \ + } \ + if (i != 0) { \ + CLEAR_FN(vd, vl, vl * esz, vlmax * esz); \ + } \ +} +GEN_VEXT_VADC_VVM(vadc_vvm_b, uint8_t, H1, DO_VADC, clearb) +GEN_VEXT_VADC_VVM(vadc_vvm_h, uint16_t, H2, DO_VADC, clearh) +GEN_VEXT_VADC_VVM(vadc_vvm_w, uint32_t, H4, DO_VADC, clearl) +GEN_VEXT_VADC_VVM(vadc_vvm_d, uint64_t, H8, DO_VADC, clearq) + +GEN_VEXT_VADC_VVM(vsbc_vvm_b, uint8_t, H1, DO_VSBC, clearb) +GEN_VEXT_VADC_VVM(vsbc_vvm_h, uint16_t, H2, DO_VSBC, clearh) +GEN_VEXT_VADC_VVM(vsbc_vvm_w, uint32_t, H4, DO_VSBC, clearl) +GEN_VEXT_VADC_VVM(vsbc_vvm_d, uint64_t, H8, DO_VSBC, clearq) + +#define GEN_VEXT_VADC_VXM(NAME, ETYPE, H, DO_OP, CLEAR_FN) \ +void HELPER(NAME)(void *vd, void *v0, target_ulong s1, void *vs2, \ + CPURISCVState *env, uint32_t desc) \ +{ \ + uint32_t mlen = vext_mlen(desc); \ + uint32_t vl = env->vl; \ + uint32_t esz = sizeof(ETYPE); \ + uint32_t vlmax = vext_maxsz(desc) / esz; \ + uint32_t i; \ + \ + for (i = 0; i < vl; i++) { \ + ETYPE s2 = *((ETYPE *)vs2 + H(i)); \ + uint8_t carry = vext_elem_mask(v0, mlen, i); \ + \ + *((ETYPE *)vd + H(i)) = DO_OP(s2, (ETYPE)(target_long)s1, carry);\ + } \ + if (i != 0) { \ + CLEAR_FN(vd, vl, vl * esz, vlmax * esz); \ + } \ +} +GEN_VEXT_VADC_VXM(vadc_vxm_b, uint8_t, H1, DO_VADC, clearb) +GEN_VEXT_VADC_VXM(vadc_vxm_h, uint16_t, H2, DO_VADC, clearh) +GEN_VEXT_VADC_VXM(vadc_vxm_w, uint32_t, H4, DO_VADC, clearl) +GEN_VEXT_VADC_VXM(vadc_vxm_d, uint64_t, H8, DO_VADC, clearq) + +GEN_VEXT_VADC_VXM(vsbc_vxm_b, uint8_t, H1, DO_VSBC, clearb) +GEN_VEXT_VADC_VXM(vsbc_vxm_h, uint16_t, H2, DO_VSBC, clearh) +GEN_VEXT_VADC_VXM(vsbc_vxm_w, uint32_t, H4, DO_VSBC, clearl) +GEN_VEXT_VADC_VXM(vsbc_vxm_d, uint64_t, H8, DO_VSBC, clearq) + +/* carry-out / borrow-out, with the carry-in folded into the comparison */ +#define DO_MADC(N, M, C) (C ? (__typeof(N))(N + M + 1) <= (N) : (__typeof(N))(N + M) < (N)) +#define DO_MSBC(N, M, C) (C ? (__typeof(N))(N - M - 1) >= (N) : (__typeof(N))(N - M) > (N))
+#define GEN_VEXT_VMADC_VVM(NAME, ETYPE, H, DO_OP) \ +void HELPER(NAME)(void *vd, void *v0, void *vs1, void *vs2, \ + CPURISCVState *env, uint32_t desc) \ +{ \ + uint32_t mlen = vext_mlen(desc); \ + uint32_t vl = env->vl; \ + uint32_t vlmax = vext_maxsz(desc) / sizeof(ETYPE); \ + uint32_t i; \ + \ + for (i = 0; i < vl; i++) { \ + ETYPE s1 = *((ETYPE *)vs1 + H(i)); \ + ETYPE s2 = *((ETYPE *)vs2 + H(i)); \ + uint8_t carry = vext_elem_mask(v0, mlen, i); \ + \ + vext_set_elem_mask(vd, mlen, i, DO_OP(s2, s1, carry));\ + } \ + if (i == 0) { \ + return; \ + } \ + for (; i < vlmax; i++) { \ + vext_set_elem_mask(vd, mlen, i, 0); \ + } \ +} +GEN_VEXT_VMADC_VVM(vmadc_vvm_b, uint8_t, H1, DO_MADC) +GEN_VEXT_VMADC_VVM(vmadc_vvm_h, uint16_t, H2, DO_MADC) +GEN_VEXT_VMADC_VVM(vmadc_vvm_w, uint32_t, H4, DO_MADC) +GEN_VEXT_VMADC_VVM(vmadc_vvm_d, uint64_t, H8, DO_MADC) + +GEN_VEXT_VMADC_VVM(vmsbc_vvm_b, uint8_t, H1, DO_MSBC) +GEN_VEXT_VMADC_VVM(vmsbc_vvm_h, uint16_t, H2, DO_MSBC) +GEN_VEXT_VMADC_VVM(vmsbc_vvm_w, uint32_t, H4, DO_MSBC) +GEN_VEXT_VMADC_VVM(vmsbc_vvm_d, uint64_t, H8, DO_MSBC) + +#define GEN_VEXT_VMADC_VXM(NAME, ETYPE, H, DO_OP) \ +void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \ + void *vs2, CPURISCVState *env, uint32_t desc) \ +{ \ + uint32_t mlen = vext_mlen(desc); \ + uint32_t vl = env->vl; \ + uint32_t vlmax = vext_maxsz(desc) / sizeof(ETYPE); \ + uint32_t i; \ + \ + for (i = 0; i < vl; i++) { \ + ETYPE s2 = *((ETYPE *)vs2 + H(i)); \ + uint8_t carry = vext_elem_mask(v0, mlen, i); \ + \ + vext_set_elem_mask(vd, mlen, i, \ + DO_OP(s2, (ETYPE)(target_long)s1, carry)); \ + } \ + if (i == 0) { \ + return; \ + } \ + for (; i < vlmax; i++) { \ + vext_set_elem_mask(vd, mlen, i, 0); \ + } \ +} +GEN_VEXT_VMADC_VXM(vmadc_vxm_b, uint8_t, H1, DO_MADC) +GEN_VEXT_VMADC_VXM(vmadc_vxm_h, uint16_t, H2, DO_MADC) +GEN_VEXT_VMADC_VXM(vmadc_vxm_w, uint32_t, H4, DO_MADC) +GEN_VEXT_VMADC_VXM(vmadc_vxm_d, uint64_t, H8, DO_MADC) + +GEN_VEXT_VMADC_VXM(vmsbc_vxm_b, uint8_t, H1, DO_MSBC) +GEN_VEXT_VMADC_VXM(vmsbc_vxm_h, uint16_t, H2, DO_MSBC) +GEN_VEXT_VMADC_VXM(vmsbc_vxm_w, uint32_t, H4, DO_MSBC) +GEN_VEXT_VMADC_VXM(vmsbc_vxm_d, uint64_t, H8, DO_MSBC)
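The DO_MADC/DO_MSBC definitions above recover the carry/borrow bit from truncated arithmetic alone. A standalone sketch checking that trick against explicit wide arithmetic for an 8-bit add-with-carry (illustrative only):

#include <stdint.h>
#include <stdio.h>

static int carry_wide(uint8_t n, uint8_t m, uint8_t c)
{
    return ((unsigned)n + m + c) >> 8;           /* exact carry-out */
}
static int carry_trick(uint8_t n, uint8_t m, uint8_t c)
{
    /* mirrors DO_MADC: fold the carry-in into the comparison */
    return c ? (uint8_t)(n + m + 1) <= n : (uint8_t)(n + m) < n;
}
int main(void)
{
    printf("%d %d\n", carry_wide(200, 100, 0), carry_trick(200, 100, 0)); /* 1 1 */
    printf("%d %d\n", carry_wide(1, 2, 1), carry_trick(1, 2, 1));         /* 0 0 */
    return 0;
}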
From patchwork Mon Mar 9 12:14:09 2020 X-Patchwork-Submitter: LIU Zhiwei X-Patchwork-Id: 11427013 From: LIU Zhiwei To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com Subject: [PATCH v3 12/60] target/riscv: vector bitwise logical instructions Date: Mon, 9 Mar 2020 20:14:09 +0800 Message-Id: <20200309121457.13719-13-zhiwei_liu@c-sky.com> Signed-off-by: LIU Zhiwei --- target/riscv/helper.h | 25 ++++++ target/riscv/insn32.decode | 9 +++++ target/riscv/insn_trans/trans_rvv.inc.c | 11 ++++++ target/riscv/vector_helper.c | 51 +++++++++++++++++++++ 4 files changed, 96 insertions(+) diff --git a/target/riscv/helper.h b/target/riscv/helper.h index 72c733bf49..4373e9e8c2 100644 --- a/target/riscv/helper.h +++ b/target/riscv/helper.h @@ -372,3 +372,28 @@ DEF_HELPER_6(vmsbc_vxm_b, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vmsbc_vxm_h, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vmsbc_vxm_w, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vmsbc_vxm_d, void, ptr, ptr, tl, ptr, env, i32) + +DEF_HELPER_6(vand_vv_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vand_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vand_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vand_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vor_vv_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vor_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vor_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vor_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vxor_vv_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vxor_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vxor_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vxor_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vand_vx_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vand_vx_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vand_vx_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vand_vx_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vor_vx_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vor_vx_h, void, ptr, ptr, tl,
ptr, env, i32) +DEF_HELPER_6(vor_vx_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vor_vx_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vxor_vx_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vxor_vx_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vxor_vx_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vxor_vx_d, void, ptr, ptr, tl, ptr, env, i32) diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index e8ddf95d3d..29a505cede 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -310,6 +310,15 @@ vsbc_vvm 010010 1 ..... ..... 000 ..... 1010111 @r vsbc_vxm 010010 1 ..... ..... 100 ..... 1010111 @r vmsbc_vvm 010011 1 ..... ..... 000 ..... 1010111 @r vmsbc_vxm 010011 1 ..... ..... 100 ..... 1010111 @r +vand_vv 001001 . ..... ..... 000 ..... 1010111 @r_vm +vand_vx 001001 . ..... ..... 100 ..... 1010111 @r_vm +vand_vi 001001 . ..... ..... 011 ..... 1010111 @r_vm +vor_vv 001010 . ..... ..... 000 ..... 1010111 @r_vm +vor_vx 001010 . ..... ..... 100 ..... 1010111 @r_vm +vor_vi 001010 . ..... ..... 011 ..... 1010111 @r_vm +vxor_vv 001011 . ..... ..... 000 ..... 1010111 @r_vm +vxor_vx 001011 . ..... ..... 100 ..... 1010111 @r_vm +vxor_vi 001011 . ..... ..... 011 ..... 1010111 @r_vm vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index a1f2e84eb8..3a4696dbcd 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1212,3 +1212,14 @@ static bool trans_##NAME(DisasContext *s, arg_r *a) \ } GEN_OPIVI_R_TRANS(vadc_vim, 0, vadc_vxm, opivx_vadc_check) GEN_OPIVI_R_TRANS(vmadc_vim, 0, vmadc_vxm, opivx_vmadc_check) + +/* Vector Bitwise Logical Instructions */ +GEN_OPIVV_GVEC_TRANS(vand_vv, and) +GEN_OPIVV_GVEC_TRANS(vor_vv, or) +GEN_OPIVV_GVEC_TRANS(vxor_vv, xor) +GEN_OPIVX_GVEC_TRANS(vand_vx, ands) +GEN_OPIVX_GVEC_TRANS(vor_vx, ors) +GEN_OPIVX_GVEC_TRANS(vxor_vx, xors) +GEN_OPIVI_GVEC_TRANS(vand_vi, 0, vand_vx, andi) +GEN_OPIVI_GVEC_TRANS(vor_vi, 0, vor_vx, ori) +GEN_OPIVI_GVEC_TRANS(vxor_vi, 0, vxor_vx, xori) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index dd85b94fe7..532b373f99 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -1202,3 +1202,54 @@ GEN_VEXT_VMADC_VXM(vmsbc_vxm_b, uint8_t, H1, DO_MSBC) GEN_VEXT_VMADC_VXM(vmsbc_vxm_h, uint16_t, H2, DO_MSBC) GEN_VEXT_VMADC_VXM(vmsbc_vxm_w, uint32_t, H4, DO_MSBC) GEN_VEXT_VMADC_VXM(vmsbc_vxm_d, uint64_t, H8, DO_MSBC) + +/* Vector Bitwise Logical Instructions */ +RVVCALL(OPIVV2, vand_vv_b, OP_SSS_B, H1, H1, H1, DO_AND) +RVVCALL(OPIVV2, vand_vv_h, OP_SSS_H, H2, H2, H2, DO_AND) +RVVCALL(OPIVV2, vand_vv_w, OP_SSS_W, H4, H4, H4, DO_AND) +RVVCALL(OPIVV2, vand_vv_d, OP_SSS_D, H8, H8, H8, DO_AND) +RVVCALL(OPIVV2, vor_vv_b, OP_SSS_B, H1, H1, H1, DO_OR) +RVVCALL(OPIVV2, vor_vv_h, OP_SSS_H, H2, H2, H2, DO_OR) +RVVCALL(OPIVV2, vor_vv_w, OP_SSS_W, H4, H4, H4, DO_OR) +RVVCALL(OPIVV2, vor_vv_d, OP_SSS_D, H8, H8, H8, DO_OR) +RVVCALL(OPIVV2, vxor_vv_b, OP_SSS_B, H1, H1, H1, DO_XOR) +RVVCALL(OPIVV2, vxor_vv_h, OP_SSS_H, H2, H2, H2, DO_XOR) +RVVCALL(OPIVV2, vxor_vv_w, OP_SSS_W, H4, H4, H4, DO_XOR) +RVVCALL(OPIVV2, vxor_vv_d, OP_SSS_D, H8, H8, H8, DO_XOR) +GEN_VEXT_VV(vand_vv_b, 1, 1, clearb) +GEN_VEXT_VV(vand_vv_h, 2, 2, clearh) +GEN_VEXT_VV(vand_vv_w, 4, 4, clearl) +GEN_VEXT_VV(vand_vv_d, 8, 8, clearq) +GEN_VEXT_VV(vor_vv_b, 1, 1, clearb) +GEN_VEXT_VV(vor_vv_h, 2, 
2, clearh)
+GEN_VEXT_VV(vor_vv_w, 4, 4, clearl)
+GEN_VEXT_VV(vor_vv_d, 8, 8, clearq)
+GEN_VEXT_VV(vxor_vv_b, 1, 1, clearb)
+GEN_VEXT_VV(vxor_vv_h, 2, 2, clearh)
+GEN_VEXT_VV(vxor_vv_w, 4, 4, clearl)
+GEN_VEXT_VV(vxor_vv_d, 8, 8, clearq)
+
+RVVCALL(OPIVX2, vand_vx_b, OP_SSS_B, H1, H1, DO_AND)
+RVVCALL(OPIVX2, vand_vx_h, OP_SSS_H, H2, H2, DO_AND)
+RVVCALL(OPIVX2, vand_vx_w, OP_SSS_W, H4, H4, DO_AND)
+RVVCALL(OPIVX2, vand_vx_d, OP_SSS_D, H8, H8, DO_AND)
+RVVCALL(OPIVX2, vor_vx_b, OP_SSS_B, H1, H1, DO_OR)
+RVVCALL(OPIVX2, vor_vx_h, OP_SSS_H, H2, H2, DO_OR)
+RVVCALL(OPIVX2, vor_vx_w, OP_SSS_W, H4, H4, DO_OR)
+RVVCALL(OPIVX2, vor_vx_d, OP_SSS_D, H8, H8, DO_OR)
+RVVCALL(OPIVX2, vxor_vx_b, OP_SSS_B, H1, H1, DO_XOR)
+RVVCALL(OPIVX2, vxor_vx_h, OP_SSS_H, H2, H2, DO_XOR)
+RVVCALL(OPIVX2, vxor_vx_w, OP_SSS_W, H4, H4, DO_XOR)
+RVVCALL(OPIVX2, vxor_vx_d, OP_SSS_D, H8, H8, DO_XOR)
+GEN_VEXT_VX(vand_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vand_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vand_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vand_vx_d, 8, 8, clearq)
+GEN_VEXT_VX(vor_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vor_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vor_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vor_vx_d, 8, 8, clearq)
+GEN_VEXT_VX(vxor_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vxor_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vxor_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vxor_vx_d, 8, 8, clearq)
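To see what these stamped-out lines amount to, here is a sketch of the per-element function that RVVCALL(OPIVV2, vand_vv_b, OP_SSS_B, H1, H1, H1, DO_AND) generates. The OPIVV2 body is not quoted in this patch, so this assumes it mirrors the OPIVV3 macro shown later in the series, and that DO_AND is the obvious (N & M):

    static void do_vand_vv_b(void *vd, void *vs1, void *vs2, int i)
    {
        int8_t s1 = *((int8_t *)vs1 + H1(i));  /* OP_SSS_B: all-signed int8_t */
        int8_t s2 = *((int8_t *)vs2 + H1(i));
        *((int8_t *)vd + H1(i)) = s2 & s1;     /* DO_AND(N, M), assumed (N & M) */
    }

GEN_VEXT_VV(vand_vv_b, 1, 1, clearb) then wraps this in the common loop that honours vl and the v0 mask and clears the tail elements with clearb.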
From patchwork Mon Mar 9 12:14:11 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427029
From: LIU Zhiwei
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 14/60] target/riscv: vector narrowing integer right shift instructions
Date: Mon, 9 Mar 2020 20:14:11 +0800
Message-Id: <20200309121457.13719-15-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 13 ++++
 target/riscv/insn32.decode | 6 ++
 target/riscv/insn_trans/trans_rvv.inc.c | 91 +++++++++++++++++++++++++
 target/riscv/vector_helper.c | 14 ++++
 4 files changed, 124 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 47284c7476..0f36a8ce43 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -422,3 +422,16 @@ DEF_HELPER_6(vsra_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vsra_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vsra_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vsra_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vnsrl_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnsrl_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnsrl_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnsra_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnsra_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnsra_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnsrl_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnsrl_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnsrl_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnsra_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnsra_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnsra_vx_w, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index dbbfa34b97..e21b3d6b5e 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -328,6 +328,12 @@ vsrl_vi 101000 . ..... ..... 011 ..... 1010111 @r_vm
 vsra_vv 101001 . ..... ..... 000 ..... 1010111 @r_vm
 vsra_vx 101001 . ..... ..... 100 ..... 1010111 @r_vm
 vsra_vi 101001 . ..... ..... 011 ..... 1010111 @r_vm
+vnsrl_vv 101100 . ..... ..... 000 ..... 1010111 @r_vm
+vnsrl_vx 101100 . ..... ..... 100 ..... 1010111 @r_vm
+vnsrl_vi 101100 . ..... ..... 011 ..... 1010111 @r_vm
+vnsra_vv 101101 . ..... ..... 000 ..... 1010111 @r_vm
+vnsra_vx 101101 . ..... ..... 100 ..... 1010111 @r_vm
+vnsra_vi 101101 . ..... ..... 011 ..... 1010111 @r_vm
 vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl 1000000 ..... ..... 111 .....
1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index a60518e1df..7033eeaa4d 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1267,3 +1267,94 @@ GEN_OPIVX_GVEC_SHIFT_TRANS(vsra_vx, sars) GEN_OPIVI_GVEC_TRANS(vsll_vi, 1, vsll_vx, shli) GEN_OPIVI_GVEC_TRANS(vsrl_vi, 1, vsrl_vx, shri) GEN_OPIVI_GVEC_TRANS(vsra_vi, 1, vsra_vx, sari) + +/* Vector Narrowing Integer Right Shift Instructions */ +static bool opivv_narrow_check(DisasContext *s, arg_rmrr *a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_overlap_mask(s, a->rd, a->vm, false) && + vext_check_reg(s, a->rd, false) && + vext_check_reg(s, a->rs2, true) && + vext_check_reg(s, a->rs1, false) && + vext_check_overlap_group(a->rd, 1 << s->lmul, a->rs2, + 2 << s->lmul) && + (s->lmul < 0x3) && (s->sew < 0x3)); +} + +/* OPIVV with NARROW */ +#define GEN_OPIVV_NARROW_TRANS(NAME) \ +static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \ +{ \ + if (opivv_narrow_check(s, a)) { \ + uint32_t data = 0; \ + static gen_helper_gvec_4_ptr * const fns[3] = { \ + gen_helper_##NAME##_b, \ + gen_helper_##NAME##_h, \ + gen_helper_##NAME##_w, \ + }; \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, VM, a->vm); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \ + vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2), \ + cpu_env, 0, s->vlen / 8, data, fns[s->sew]); \ + return true; \ + } \ + return false; \ +} +GEN_OPIVV_NARROW_TRANS(vnsra_vv) +GEN_OPIVV_NARROW_TRANS(vnsrl_vv) + +static bool opivx_narrow_check(DisasContext *s, arg_rmrr *a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_overlap_mask(s, a->rd, a->vm, false) && + vext_check_reg(s, a->rd, false) && + vext_check_reg(s, a->rs2, true) && + vext_check_overlap_group(a->rd, 1 << s->lmul, a->rs2, + 2 << s->lmul) && + (s->lmul < 0x3) && (s->sew < 0x3)); +} + +/* OPIVX with NARROW */ +#define GEN_OPIVX_NARROW_TRANS(NAME) \ +static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \ +{ \ + if (opivx_narrow_check(s, a)) { \ + uint32_t data = 0; \ + static gen_helper_opivx const fns[3] = { \ + gen_helper_##NAME##_b, \ + gen_helper_##NAME##_h, \ + gen_helper_##NAME##_w, \ + }; \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, VM, a->vm); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + return opivx_trans(a->rd, a->rs1, a->rs2, data, fns[s->sew], s); \ + } \ + return false; \ +} +GEN_OPIVX_NARROW_TRANS(vnsra_vx) +GEN_OPIVX_NARROW_TRANS(vnsrl_vx) + +/* OPIVI with NARROW */ +#define GEN_OPIVI_NARROW_TRANS(NAME, ZX, OPIVX) \ +static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \ +{ \ + if (opivx_narrow_check(s, a)) { \ + uint32_t data = 0; \ + static gen_helper_opivx const fns[3] = { \ + gen_helper_##OPIVX##_b, \ + gen_helper_##OPIVX##_h, \ + gen_helper_##OPIVX##_w, \ + }; \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, VM, a->vm); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + return opivi_trans(a->rd, a->rs1, a->rs2, data, \ + fns[s->sew], s, ZX); \ + } \ + return false; \ +} +GEN_OPIVI_NARROW_TRANS(vnsra_vi, 1, vnsra_vx) +GEN_OPIVI_NARROW_TRANS(vnsrl_vi, 1, vnsrl_vx) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 3772b059b1..895155576c 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -1335,3 +1335,17 @@ GEN_VEXT_SHIFT_VX(vsra_vx_b, int8_t, int8_t, H1, H1, 
DO_SRL, 0x7, clearb)
 GEN_VEXT_SHIFT_VX(vsra_vx_h, int16_t, int16_t, H2, H2, DO_SRL, 0xf, clearh)
 GEN_VEXT_SHIFT_VX(vsra_vx_w, int32_t, int32_t, H4, H4, DO_SRL, 0x1f, clearl)
 GEN_VEXT_SHIFT_VX(vsra_vx_d, int64_t, int64_t, H8, H8, DO_SRL, 0x3f, clearq)
+
+/* Vector Narrowing Integer Right Shift Instructions */
+GEN_VEXT_SHIFT_VV(vnsrl_vv_b, uint8_t, uint16_t, H1, H2, DO_SRL, 0xf, clearb)
+GEN_VEXT_SHIFT_VV(vnsrl_vv_h, uint16_t, uint32_t, H2, H4, DO_SRL, 0x1f, clearh)
+GEN_VEXT_SHIFT_VV(vnsrl_vv_w, uint32_t, uint64_t, H4, H8, DO_SRL, 0x3f, clearl)
+GEN_VEXT_SHIFT_VV(vnsra_vv_b, uint8_t, int16_t, H1, H2, DO_SRL, 0xf, clearb)
+GEN_VEXT_SHIFT_VV(vnsra_vv_h, uint16_t, int32_t, H2, H4, DO_SRL, 0x1f, clearh)
+GEN_VEXT_SHIFT_VV(vnsra_vv_w, uint32_t, int64_t, H4, H8, DO_SRL, 0x3f, clearl)
+GEN_VEXT_SHIFT_VX(vnsrl_vx_b, uint8_t, uint16_t, H1, H2, DO_SRL, 0xf, clearb)
+GEN_VEXT_SHIFT_VX(vnsrl_vx_h, uint16_t, uint32_t, H2, H4, DO_SRL, 0x1f, clearh)
+GEN_VEXT_SHIFT_VX(vnsrl_vx_w, uint32_t, uint64_t, H4, H8, DO_SRL, 0x3f, clearl)
+GEN_VEXT_SHIFT_VX(vnsra_vx_b, int8_t, int16_t, H1, H2, DO_SRL, 0xf, clearb)
+GEN_VEXT_SHIFT_VX(vnsra_vx_h, int16_t, int32_t, H2, H4, DO_SRL, 0x1f, clearh)
+GEN_VEXT_SHIFT_VX(vnsra_vx_w, int32_t, int64_t, H4, H8, DO_SRL, 0x3f, clearl)
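The narrowing forms reuse GEN_VEXT_SHIFT_VV/VX with a source type twice the destination width. A standalone sketch of one vnsra.vv element at SEW=8 (hypothetical helper name; DO_SRL is assumed to be (N >> M), which is an arithmetic shift for signed N on the compilers QEMU supports):

    static uint8_t nsra_elem_b(int16_t s2, uint8_t s1)
    {
        /* shift amount masked to the 2*SEW source width (0xf),
         * arithmetic shift, then truncate back down to SEW */
        return (uint8_t)(s2 >> (s1 & 0xf));
    }
    /* e.g. nsra_elem_b((int16_t)0xff00, 4) yields 0xf0 */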
From patchwork Mon Mar 9 12:14:14 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427053
From: LIU Zhiwei
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 17/60] target/riscv: vector single-width integer multiply instructions
Date: Mon, 9 Mar 2020 20:14:14 +0800
Message-Id: <20200309121457.13719-18-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 33 ++++++
 target/riscv/insn32.decode | 8 ++
 target/riscv/insn_trans/trans_rvv.inc.c | 10 ++
 target/riscv/vector_helper.c | 147 ++++++++++++++++++++++++
 4 files changed, 198 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index c7d4ff185a..f42a12eef3 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -525,3 +525,36 @@ DEF_HELPER_6(vmax_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vmax_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vmax_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vmax_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vmul_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmul_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmul_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmul_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulh_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulh_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulh_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulh_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulhu_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulhu_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulhu_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulhu_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulhsu_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulhsu_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulhsu_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmulhsu_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmul_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmul_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmul_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmul_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulh_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulh_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulh_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulh_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulhu_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulhu_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulhu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulhu_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulhsu_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulhsu_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulhsu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmulhsu_vx_d, void, ptr, ptr, tl, ptr, env, i32)
diff
--git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index a7619f4e3d..a8ac4e9e9d 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -362,6 +362,14 @@ vmaxu_vv 000110 . ..... ..... 000 ..... 1010111 @r_vm vmaxu_vx 000110 . ..... ..... 100 ..... 1010111 @r_vm vmax_vv 000111 . ..... ..... 000 ..... 1010111 @r_vm vmax_vx 000111 . ..... ..... 100 ..... 1010111 @r_vm +vmul_vv 100101 . ..... ..... 010 ..... 1010111 @r_vm +vmul_vx 100101 . ..... ..... 110 ..... 1010111 @r_vm +vmulh_vv 100111 . ..... ..... 010 ..... 1010111 @r_vm +vmulh_vx 100111 . ..... ..... 110 ..... 1010111 @r_vm +vmulhu_vv 100100 . ..... ..... 010 ..... 1010111 @r_vm +vmulhu_vx 100100 . ..... ..... 110 ..... 1010111 @r_vm +vmulhsu_vv 100110 . ..... ..... 010 ..... 1010111 @r_vm +vmulhsu_vx 100110 . ..... ..... 110 ..... 1010111 @r_vm vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 4437a77878..a1ecc9f52d 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1434,3 +1434,13 @@ GEN_OPIVX_TRANS(vminu_vx, opivx_check) GEN_OPIVX_TRANS(vmin_vx, opivx_check) GEN_OPIVX_TRANS(vmaxu_vx, opivx_check) GEN_OPIVX_TRANS(vmax_vx, opivx_check) + +/* Vector Single-Width Integer Multiply Instructions */ +GEN_OPIVV_GVEC_TRANS(vmul_vv, mul) +GEN_OPIVV_TRANS(vmulh_vv, opivv_check) +GEN_OPIVV_TRANS(vmulhu_vv, opivv_check) +GEN_OPIVV_TRANS(vmulhsu_vv, opivv_check) +GEN_OPIVX_GVEC_TRANS(vmul_vx, muls) +GEN_OPIVX_TRANS(vmulh_vx, opivx_check) +GEN_OPIVX_TRANS(vmulhu_vx, opivx_check) +GEN_OPIVX_TRANS(vmulhsu_vx, opivx_check) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 03e001262f..93daafd5bd 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -853,6 +853,10 @@ GEN_VEXT_AMO(vamomaxuw_v_w, uint32_t, uint32_t, idx_w, clearl) #define OP_UUU_H uint16_t, uint16_t, uint16_t, uint16_t, uint16_t #define OP_UUU_W uint32_t, uint32_t, uint32_t, uint32_t, uint32_t #define OP_UUU_D uint64_t, uint64_t, uint64_t, uint64_t, uint64_t +#define OP_SUS_B int8_t, uint8_t, int8_t, uint8_t, int8_t +#define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t +#define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t +#define OP_SUS_D int64_t, uint64_t, int64_t, uint64_t, int64_t /* operation of two vector elements */ #define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \ @@ -1550,3 +1554,146 @@ GEN_VEXT_VX(vmax_vx_b, 1, 1, clearb) GEN_VEXT_VX(vmax_vx_h, 2, 2, clearh) GEN_VEXT_VX(vmax_vx_w, 4, 4, clearl) GEN_VEXT_VX(vmax_vx_d, 8, 8, clearq) + +/* Vector Single-Width Integer Multiply Instructions */ +#define DO_MUL(N, M) (N * M) +RVVCALL(OPIVV2, vmul_vv_b, OP_SSS_B, H1, H1, H1, DO_MUL) +RVVCALL(OPIVV2, vmul_vv_h, OP_SSS_H, H2, H2, H2, DO_MUL) +RVVCALL(OPIVV2, vmul_vv_w, OP_SSS_W, H4, H4, H4, DO_MUL) +RVVCALL(OPIVV2, vmul_vv_d, OP_SSS_D, H8, H8, H8, DO_MUL) +GEN_VEXT_VV(vmul_vv_b, 1, 1, clearb) +GEN_VEXT_VV(vmul_vv_h, 2, 2, clearh) +GEN_VEXT_VV(vmul_vv_w, 4, 4, clearl) +GEN_VEXT_VV(vmul_vv_d, 8, 8, clearq) + +static int8_t do_mulh_b(int8_t s2, int8_t s1) +{ + return (int16_t)s2 * (int16_t)s1 >> 8; +} +static int16_t do_mulh_h(int16_t s2, int16_t s1) +{ + return (int32_t)s2 * (int32_t)s1 >> 16; +} +static int32_t do_mulh_w(int32_t s2, int32_t s1) +{ + return (int64_t)s2 * (int64_t)s1 >> 32; +} +static int64_t do_mulh_d(int64_t s2, int64_t s1) +{ + uint64_t hi_64, lo_64; 
+
+    muls64(&lo_64, &hi_64, s1, s2);
+    return hi_64;
+}
+
+static uint8_t do_mulhu_b(uint8_t s2, uint8_t s1)
+{
+    return (uint16_t)s2 * (uint16_t)s1 >> 8;
+}
+static uint16_t do_mulhu_h(uint16_t s2, uint16_t s1)
+{
+    return (uint32_t)s2 * (uint32_t)s1 >> 16;
+}
+static uint32_t do_mulhu_w(uint32_t s2, uint32_t s1)
+{
+    return (uint64_t)s2 * (uint64_t)s1 >> 32;
+}
+static uint64_t do_mulhu_d(uint64_t s2, uint64_t s1)
+{
+    uint64_t hi_64, lo_64;
+
+    mulu64(&lo_64, &hi_64, s2, s1);
+    return hi_64;
+}
+
+static int8_t do_mulhsu_b(int8_t s2, uint8_t s1)
+{
+    return (int16_t)s2 * (uint16_t)s1 >> 8;
+}
+static int16_t do_mulhsu_h(int16_t s2, uint16_t s1)
+{
+    return (int32_t)s2 * (uint32_t)s1 >> 16;
+}
+static int32_t do_mulhsu_w(int32_t s2, uint32_t s1)
+{
+    return (int64_t)s2 * (uint64_t)s1 >> 32;
+}
+
+/*
+ * When s2 < 0, its unsigned reinterpretation is s2 + 2**64, so
+ * mulu64 computes P = s2 * s1 + 2**64 * s1; the true high half of
+ * the signed-by-unsigned product is therefore hi(P) - s1.  Note
+ * that the sign of the result depends only on s2, since s1 is
+ * unsigned.
+ */
+static int64_t do_mulhsu_d(int64_t s2, uint64_t s1)
+{
+    uint64_t hi_64, lo_64;
+
+    mulu64(&lo_64, &hi_64, s2, s1);
+    return hi_64 - (s2 < 0 ? s1 : 0);
+}
+
+RVVCALL(OPIVV2, vmulh_vv_b, OP_SSS_B, H1, H1, H1, do_mulh_b)
+RVVCALL(OPIVV2, vmulh_vv_h, OP_SSS_H, H2, H2, H2, do_mulh_h)
+RVVCALL(OPIVV2, vmulh_vv_w, OP_SSS_W, H4, H4, H4, do_mulh_w)
+RVVCALL(OPIVV2, vmulh_vv_d, OP_SSS_D, H8, H8, H8, do_mulh_d)
+RVVCALL(OPIVV2, vmulhu_vv_b, OP_UUU_B, H1, H1, H1, do_mulhu_b)
+RVVCALL(OPIVV2, vmulhu_vv_h, OP_UUU_H, H2, H2, H2, do_mulhu_h)
+RVVCALL(OPIVV2, vmulhu_vv_w, OP_UUU_W, H4, H4, H4, do_mulhu_w)
+RVVCALL(OPIVV2, vmulhu_vv_d, OP_UUU_D, H8, H8, H8, do_mulhu_d)
+RVVCALL(OPIVV2, vmulhsu_vv_b, OP_SUS_B, H1, H1, H1, do_mulhsu_b)
+RVVCALL(OPIVV2, vmulhsu_vv_h, OP_SUS_H, H2, H2, H2, do_mulhsu_h)
+RVVCALL(OPIVV2, vmulhsu_vv_w, OP_SUS_W, H4, H4, H4, do_mulhsu_w)
+RVVCALL(OPIVV2, vmulhsu_vv_d, OP_SUS_D, H8, H8, H8, do_mulhsu_d)
+GEN_VEXT_VV(vmulh_vv_b, 1, 1, clearb)
+GEN_VEXT_VV(vmulh_vv_h, 2, 2, clearh)
+GEN_VEXT_VV(vmulh_vv_w, 4, 4, clearl)
+GEN_VEXT_VV(vmulh_vv_d, 8, 8, clearq)
+GEN_VEXT_VV(vmulhu_vv_b, 1, 1, clearb)
+GEN_VEXT_VV(vmulhu_vv_h, 2, 2, clearh)
+GEN_VEXT_VV(vmulhu_vv_w, 4, 4, clearl)
+GEN_VEXT_VV(vmulhu_vv_d, 8, 8, clearq)
+GEN_VEXT_VV(vmulhsu_vv_b, 1, 1, clearb)
+GEN_VEXT_VV(vmulhsu_vv_h, 2, 2, clearh)
+GEN_VEXT_VV(vmulhsu_vv_w, 4, 4, clearl)
+GEN_VEXT_VV(vmulhsu_vv_d, 8, 8, clearq)
+
+RVVCALL(OPIVX2, vmul_vx_b, OP_SSS_B, H1, H1, DO_MUL)
+RVVCALL(OPIVX2, vmul_vx_h, OP_SSS_H, H2, H2, DO_MUL)
+RVVCALL(OPIVX2, vmul_vx_w, OP_SSS_W, H4, H4, DO_MUL)
+RVVCALL(OPIVX2, vmul_vx_d, OP_SSS_D, H8, H8, DO_MUL)
+RVVCALL(OPIVX2, vmulh_vx_b, OP_SSS_B, H1, H1, do_mulh_b)
+RVVCALL(OPIVX2, vmulh_vx_h, OP_SSS_H, H2, H2, do_mulh_h)
+RVVCALL(OPIVX2, vmulh_vx_w, OP_SSS_W, H4, H4, do_mulh_w)
+RVVCALL(OPIVX2, vmulh_vx_d, OP_SSS_D, H8, H8, do_mulh_d)
+RVVCALL(OPIVX2, vmulhu_vx_b, OP_UUU_B, H1, H1, do_mulhu_b)
+RVVCALL(OPIVX2, vmulhu_vx_h, OP_UUU_H, H2, H2, do_mulhu_h)
+RVVCALL(OPIVX2, vmulhu_vx_w, OP_UUU_W, H4, H4, do_mulhu_w)
+RVVCALL(OPIVX2, vmulhu_vx_d, OP_UUU_D, H8, H8, do_mulhu_d)
+RVVCALL(OPIVX2, vmulhsu_vx_b, OP_SUS_B, H1, H1, do_mulhsu_b)
+RVVCALL(OPIVX2, vmulhsu_vx_h, OP_SUS_H, H2, H2, do_mulhsu_h)
+RVVCALL(OPIVX2, vmulhsu_vx_w, OP_SUS_W, H4, H4, do_mulhsu_w)
+RVVCALL(OPIVX2, vmulhsu_vx_d, OP_SUS_D, H8, H8, do_mulhsu_d)
+GEN_VEXT_VX(vmul_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vmul_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vmul_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vmul_vx_d, 8, 8, clearq)
+GEN_VEXT_VX(vmulh_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vmulh_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vmulh_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vmulh_vx_d, 8, 8, clearq)
+GEN_VEXT_VX(vmulhu_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vmulhu_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vmulhu_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vmulhu_vx_d, 8, 8, clearq)
+GEN_VEXT_VX(vmulhsu_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vmulhsu_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vmulhsu_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vmulhsu_vx_d, 8, 8, clearq)
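As a quick sanity check on the high-half arithmetic, the byte-width helpers can be exercised stand-alone (a hypothetical test harness that re-implements them under slightly different names; it is not part of the patch):

    #include <stdint.h>
    #include <assert.h>

    static int8_t  mulh_b(int8_t s2, int8_t s1)    { return (int16_t)s2 * (int16_t)s1 >> 8; }
    static uint8_t mulhu_b(uint8_t s2, uint8_t s1) { return (uint16_t)s2 * (uint16_t)s1 >> 8; }
    static int8_t  mulhsu_b(int8_t s2, uint8_t s1) { return (int16_t)s2 * (uint16_t)s1 >> 8; }

    int main(void)
    {
        assert(mulh_b(-128, -128) == 64);    /* 16384 >> 8 */
        assert(mulhu_b(255, 255) == 254);    /* 65025 >> 8 */
        assert(mulhsu_b(-1, 255) == -1);     /* -255 >> 8, arithmetic shift */
        return 0;
    }

The same identity drives do_mulhsu_d above: for s2 = -1 and s1 = UINT64_MAX, mulu64 yields hi(P) = UINT64_MAX - 1, and subtracting s1 gives -1, the correct high half of -(2**64 - 1).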
From patchwork Mon Mar 9 12:14:16 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427049
From: LIU Zhiwei
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 19/60] target/riscv: vector widening integer multiply instructions
Date: Mon, 9 Mar 2020 20:14:16 +0800
Message-Id: <20200309121457.13719-20-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 19 +++++++++
 target/riscv/insn32.decode | 6 +++
 target/riscv/insn_trans/trans_rvv.inc.c | 8 ++++
 target/riscv/vector_helper.c | 51 +++++++++++++++++++++++++
 4 files changed, 84 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 357f149198..1704b8c512 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -591,3 +591,22 @@ DEF_HELPER_6(vrem_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vrem_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vrem_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vrem_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vwmul_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmul_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmul_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmulu_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmulu_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmulu_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmulsu_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmulsu_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmulsu_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmul_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmul_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmul_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmulu_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmulu_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmulu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmulsu_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmulsu_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmulsu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 2afe24dd34..ceddfe4b6c 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -378,6 +378,12 @@ vremu_vv 100010 . ..... ..... 010 ..... 1010111 @r_vm
 vremu_vx 100010 . ..... ..... 110 ..... 1010111 @r_vm
 vrem_vv 100011 . ..... ..... 010 ..... 1010111 @r_vm
 vrem_vx 100011 . ..... ..... 110 ..... 1010111 @r_vm
+vwmulu_vv 111000 . ..... ..... 010 ..... 1010111 @r_vm
+vwmulu_vx 111000 . ..... ..... 110 ..... 1010111 @r_vm
+vwmulsu_vv 111010 . ..... ..... 010 ..... 1010111 @r_vm
+vwmulsu_vx 111010 . ..... ..... 110 ..... 1010111 @r_vm
+vwmul_vv 111011 . ..... ..... 010 ..... 1010111 @r_vm
+vwmul_vx 111011 . ..... ..... 110 ..... 1010111 @r_vm
 vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl 1000000 ..... ..... 111 .....
1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 9f0645a92b..990433f866 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1454,3 +1454,11 @@ GEN_OPIVX_TRANS(vdivu_vx, opivx_check) GEN_OPIVX_TRANS(vdiv_vx, opivx_check) GEN_OPIVX_TRANS(vremu_vx, opivx_check) GEN_OPIVX_TRANS(vrem_vx, opivx_check) + +/* Vector Widening Integer Multiply Instructions */ +GEN_OPIVV_WIDEN_TRANS(vwmul_vv, opivv_widen_check) +GEN_OPIVV_WIDEN_TRANS(vwmulu_vv, opivv_widen_check) +GEN_OPIVV_WIDEN_TRANS(vwmulsu_vv, opivv_widen_check) +GEN_OPIVX_WIDEN_TRANS(vwmul_vx) +GEN_OPIVX_WIDEN_TRANS(vwmulu_vx) +GEN_OPIVX_WIDEN_TRANS(vwmulsu_vx) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 6330f5882f..beb84f9674 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -857,6 +857,18 @@ GEN_VEXT_AMO(vamomaxuw_v_w, uint32_t, uint32_t, idx_w, clearl) #define OP_SUS_H int16_t, uint16_t, int16_t, uint16_t, int16_t #define OP_SUS_W int32_t, uint32_t, int32_t, uint32_t, int32_t #define OP_SUS_D int64_t, uint64_t, int64_t, uint64_t, int64_t +#define WOP_UUU_B uint16_t, uint8_t, uint8_t, uint16_t, uint16_t +#define WOP_UUU_H uint32_t, uint16_t, uint16_t, uint32_t, uint32_t +#define WOP_UUU_W uint64_t, uint32_t, uint32_t, uint64_t, uint64_t +#define WOP_SSS_B int16_t, int8_t, int8_t, int16_t, int16_t +#define WOP_SSS_H int32_t, int16_t, int16_t, int32_t, int32_t +#define WOP_SSS_W int64_t, int32_t, int32_t, int64_t, int64_t +#define WOP_SUS_B int16_t, uint8_t, int8_t, uint16_t, int16_t +#define WOP_SUS_H int32_t, uint16_t, int16_t, uint32_t, int32_t +#define WOP_SUS_W int64_t, uint32_t, int32_t, uint64_t, int64_t +#define WOP_SSU_B int16_t, int8_t, uint8_t, int16_t, uint16_t +#define WOP_SSU_H int32_t, int16_t, uint16_t, int32_t, uint32_t +#define WOP_SSU_W int64_t, int32_t, uint32_t, int64_t, uint64_t /* operation of two vector elements */ #define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \ @@ -1771,3 +1783,42 @@ GEN_VEXT_VX(vrem_vx_b, 1, 1, clearb) GEN_VEXT_VX(vrem_vx_h, 2, 2, clearh) GEN_VEXT_VX(vrem_vx_w, 4, 4, clearl) GEN_VEXT_VX(vrem_vx_d, 8, 8, clearq) + +/* Vector Widening Integer Multiply Instructions */ +RVVCALL(OPIVV2, vwmul_vv_b, WOP_SSS_B, H2, H1, H1, DO_MUL) +RVVCALL(OPIVV2, vwmul_vv_h, WOP_SSS_H, H4, H2, H2, DO_MUL) +RVVCALL(OPIVV2, vwmul_vv_w, WOP_SSS_W, H8, H4, H4, DO_MUL) +RVVCALL(OPIVV2, vwmulu_vv_b, WOP_UUU_B, H2, H1, H1, DO_MUL) +RVVCALL(OPIVV2, vwmulu_vv_h, WOP_UUU_H, H4, H2, H2, DO_MUL) +RVVCALL(OPIVV2, vwmulu_vv_w, WOP_UUU_W, H8, H4, H4, DO_MUL) +RVVCALL(OPIVV2, vwmulsu_vv_b, WOP_SUS_B, H2, H1, H1, DO_MUL) +RVVCALL(OPIVV2, vwmulsu_vv_h, WOP_SUS_H, H4, H2, H2, DO_MUL) +RVVCALL(OPIVV2, vwmulsu_vv_w, WOP_SUS_W, H8, H4, H4, DO_MUL) +GEN_VEXT_VV(vwmul_vv_b, 1, 2, clearh) +GEN_VEXT_VV(vwmul_vv_h, 2, 4, clearl) +GEN_VEXT_VV(vwmul_vv_w, 4, 8, clearq) +GEN_VEXT_VV(vwmulu_vv_b, 1, 2, clearh) +GEN_VEXT_VV(vwmulu_vv_h, 2, 4, clearl) +GEN_VEXT_VV(vwmulu_vv_w, 4, 8, clearq) +GEN_VEXT_VV(vwmulsu_vv_b, 1, 2, clearh) +GEN_VEXT_VV(vwmulsu_vv_h, 2, 4, clearl) +GEN_VEXT_VV(vwmulsu_vv_w, 4, 8, clearq) + +RVVCALL(OPIVX2, vwmul_vx_b, WOP_SSS_B, H2, H1, DO_MUL) +RVVCALL(OPIVX2, vwmul_vx_h, WOP_SSS_H, H4, H2, DO_MUL) +RVVCALL(OPIVX2, vwmul_vx_w, WOP_SSS_W, H8, H4, DO_MUL) +RVVCALL(OPIVX2, vwmulu_vx_b, WOP_UUU_B, H2, H1, DO_MUL) +RVVCALL(OPIVX2, vwmulu_vx_h, WOP_UUU_H, H4, H2, DO_MUL) +RVVCALL(OPIVX2, vwmulu_vx_w, WOP_UUU_W, H8, H4, DO_MUL) +RVVCALL(OPIVX2, 
vwmulsu_vx_b, WOP_SUS_B, H2, H1, DO_MUL)
+RVVCALL(OPIVX2, vwmulsu_vx_h, WOP_SUS_H, H4, H2, DO_MUL)
+RVVCALL(OPIVX2, vwmulsu_vx_w, WOP_SUS_W, H8, H4, DO_MUL)
+GEN_VEXT_VX(vwmul_vx_b, 1, 2, clearh)
+GEN_VEXT_VX(vwmul_vx_h, 2, 4, clearl)
+GEN_VEXT_VX(vwmul_vx_w, 4, 8, clearq)
+GEN_VEXT_VX(vwmulu_vx_b, 1, 2, clearh)
+GEN_VEXT_VX(vwmulu_vx_h, 2, 4, clearl)
+GEN_VEXT_VX(vwmulu_vx_w, 4, 8, clearq)
+GEN_VEXT_VX(vwmulsu_vx_b, 1, 2, clearh)
+GEN_VEXT_VX(vwmulsu_vx_h, 2, 4, clearl)
+GEN_VEXT_VX(vwmulsu_vx_w, 4, 8, clearq)
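Because the WOP_* type lists widen the destination to 2*SEW before the multiply, no product can overflow. A sketch of one vwmul.vv element at SEW=8 (hypothetical standalone function mirroring WOP_SSS_B with DO_MUL):

    static int16_t wmul_elem_b(int8_t s2, int8_t s1)
    {
        /* widen first: even -128 * -128 = 16384 fits in int16_t */
        return (int16_t)s2 * (int16_t)s1;
    }

vwmulsu differs only in the type list: per WOP_SUS_B the vs2 element is sign-extended and the vs1 operand zero-extended before the same multiply.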
From patchwork Mon Mar 9 12:14:17 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427037
From: LIU Zhiwei
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 20/60] target/riscv: vector single-width integer multiply-add instructions
Date: Mon, 9 Mar 2020 20:14:17 +0800
Message-Id: <20200309121457.13719-21-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 33 ++++++++++
 target/riscv/insn32.decode | 8 +++
 target/riscv/insn_trans/trans_rvv.inc.c | 10 +++
 target/riscv/vector_helper.c | 88 +++++++++++++++++++++++++
 4 files changed, 139 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 1704b8c512..098288df76 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -610,3 +610,36 @@ DEF_HELPER_6(vwmulu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vwmulsu_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vwmulsu_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vwmulsu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vmacc_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmacc_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmacc_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmacc_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnmsac_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnmsac_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnmsac_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnmsac_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmadd_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmadd_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmadd_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmadd_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnmsub_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnmsub_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnmsub_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnmsub_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmacc_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmacc_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmacc_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmacc_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnmsac_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnmsac_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnmsac_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnmsac_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmadd_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmadd_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmadd_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmadd_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnmsub_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnmsub_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnmsub_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnmsub_vx_d, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index ceddfe4b6c..58de888afa 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -384,6 +384,14 @@ vwmulsu_vv 111010 . ..... ..... 010 ..... 1010111 @r_vm
 vwmulsu_vx 111010 . ..... ..... 110 ..... 1010111 @r_vm
 vwmul_vv 111011 . ..... ..... 010 ..... 1010111 @r_vm
 vwmul_vx 111011 . ..... ..... 110 ..... 1010111 @r_vm
+vmacc_vv 101101 . ..... ..... 010 ..... 1010111 @r_vm
+vmacc_vx 101101 . ..... ..... 110 ..... 1010111 @r_vm
+vnmsac_vv 101111 . ..... ..... 010 ..... 1010111 @r_vm
+vnmsac_vx 101111 . ..... ..... 110 ..... 1010111 @r_vm
+vmadd_vv 101001 . ..... ..... 010 ..... 1010111 @r_vm
+vmadd_vx 101001 .
..... ..... 110 ..... 1010111 @r_vm +vnmsub_vv 101011 . ..... ..... 010 ..... 1010111 @r_vm +vnmsub_vx 101011 . ..... ..... 110 ..... 1010111 @r_vm vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 990433f866..05f7ae0bc4 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1462,3 +1462,13 @@ GEN_OPIVV_WIDEN_TRANS(vwmulsu_vv, opivv_widen_check) GEN_OPIVX_WIDEN_TRANS(vwmul_vx) GEN_OPIVX_WIDEN_TRANS(vwmulu_vx) GEN_OPIVX_WIDEN_TRANS(vwmulsu_vx) + +/* Vector Single-Width Integer Multiply-Add Instructions */ +GEN_OPIVV_TRANS(vmacc_vv, opivv_check) +GEN_OPIVV_TRANS(vnmsac_vv, opivv_check) +GEN_OPIVV_TRANS(vmadd_vv, opivv_check) +GEN_OPIVV_TRANS(vnmsub_vv, opivv_check) +GEN_OPIVX_TRANS(vmacc_vx, opivx_check) +GEN_OPIVX_TRANS(vnmsac_vx, opivx_check) +GEN_OPIVX_TRANS(vmadd_vx, opivx_check) +GEN_OPIVX_TRANS(vnmsub_vx, opivx_check) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index beb84f9674..e5082c8adc 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -1822,3 +1822,91 @@ GEN_VEXT_VX(vwmulu_vx_w, 4, 8, clearq) GEN_VEXT_VX(vwmulsu_vx_b, 1, 2, clearh) GEN_VEXT_VX(vwmulsu_vx_h, 2, 4, clearl) GEN_VEXT_VX(vwmulsu_vx_w, 4, 8, clearq) + +/* Vector Single-Width Integer Multiply-Add Instructions */ +#define OPIVV3(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \ +static void do_##NAME(void *vd, void *vs1, void *vs2, int i) \ +{ \ + TX1 s1 = *((T1 *)vs1 + HS1(i)); \ + TX2 s2 = *((T2 *)vs2 + HS2(i)); \ + TD d = *((TD *)vd + HD(i)); \ + *((TD *)vd + HD(i)) = OP(s2, s1, d); \ +} + +#define DO_MACC(N, M, D) (M * N + D) +#define DO_NMSAC(N, M, D) (-(M * N) + D) +#define DO_MADD(N, M, D) (M * D + N) +#define DO_NMSUB(N, M, D) (-(M * D) + N) +RVVCALL(OPIVV3, vmacc_vv_b, OP_SSS_B, H1, H1, H1, DO_MACC) +RVVCALL(OPIVV3, vmacc_vv_h, OP_SSS_H, H2, H2, H2, DO_MACC) +RVVCALL(OPIVV3, vmacc_vv_w, OP_SSS_W, H4, H4, H4, DO_MACC) +RVVCALL(OPIVV3, vmacc_vv_d, OP_SSS_D, H8, H8, H8, DO_MACC) +RVVCALL(OPIVV3, vnmsac_vv_b, OP_SSS_B, H1, H1, H1, DO_NMSAC) +RVVCALL(OPIVV3, vnmsac_vv_h, OP_SSS_H, H2, H2, H2, DO_NMSAC) +RVVCALL(OPIVV3, vnmsac_vv_w, OP_SSS_W, H4, H4, H4, DO_NMSAC) +RVVCALL(OPIVV3, vnmsac_vv_d, OP_SSS_D, H8, H8, H8, DO_NMSAC) +RVVCALL(OPIVV3, vmadd_vv_b, OP_SSS_B, H1, H1, H1, DO_MADD) +RVVCALL(OPIVV3, vmadd_vv_h, OP_SSS_H, H2, H2, H2, DO_MADD) +RVVCALL(OPIVV3, vmadd_vv_w, OP_SSS_W, H4, H4, H4, DO_MADD) +RVVCALL(OPIVV3, vmadd_vv_d, OP_SSS_D, H8, H8, H8, DO_MADD) +RVVCALL(OPIVV3, vnmsub_vv_b, OP_SSS_B, H1, H1, H1, DO_NMSUB) +RVVCALL(OPIVV3, vnmsub_vv_h, OP_SSS_H, H2, H2, H2, DO_NMSUB) +RVVCALL(OPIVV3, vnmsub_vv_w, OP_SSS_W, H4, H4, H4, DO_NMSUB) +RVVCALL(OPIVV3, vnmsub_vv_d, OP_SSS_D, H8, H8, H8, DO_NMSUB) +GEN_VEXT_VV(vmacc_vv_b, 1, 1, clearb) +GEN_VEXT_VV(vmacc_vv_h, 2, 2, clearh) +GEN_VEXT_VV(vmacc_vv_w, 4, 4, clearl) +GEN_VEXT_VV(vmacc_vv_d, 8, 8, clearq) +GEN_VEXT_VV(vnmsac_vv_b, 1, 1, clearb) +GEN_VEXT_VV(vnmsac_vv_h, 2, 2, clearh) +GEN_VEXT_VV(vnmsac_vv_w, 4, 4, clearl) +GEN_VEXT_VV(vnmsac_vv_d, 8, 8, clearq) +GEN_VEXT_VV(vmadd_vv_b, 1, 1, clearb) +GEN_VEXT_VV(vmadd_vv_h, 2, 2, clearh) +GEN_VEXT_VV(vmadd_vv_w, 4, 4, clearl) +GEN_VEXT_VV(vmadd_vv_d, 8, 8, clearq) +GEN_VEXT_VV(vnmsub_vv_b, 1, 1, clearb) +GEN_VEXT_VV(vnmsub_vv_h, 2, 2, clearh) +GEN_VEXT_VV(vnmsub_vv_w, 4, 4, clearl) +GEN_VEXT_VV(vnmsub_vv_d, 8, 8, clearq) + +#define OPIVX3(NAME, TD, T1, T2, TX1, 
TX2, HD, HS2, OP) \
+static void do_##NAME(void *vd, target_ulong s1, void *vs2, int i) \
+{ \
+    TX2 s2 = *((T2 *)vs2 + HS2(i)); \
+    TD d = *((TD *)vd + HD(i)); \
+    *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)(target_long)s1, d); \
+}
+
+RVVCALL(OPIVX3, vmacc_vx_b, OP_SSS_B, H1, H1, DO_MACC)
+RVVCALL(OPIVX3, vmacc_vx_h, OP_SSS_H, H2, H2, DO_MACC)
+RVVCALL(OPIVX3, vmacc_vx_w, OP_SSS_W, H4, H4, DO_MACC)
+RVVCALL(OPIVX3, vmacc_vx_d, OP_SSS_D, H8, H8, DO_MACC)
+RVVCALL(OPIVX3, vnmsac_vx_b, OP_SSS_B, H1, H1, DO_NMSAC)
+RVVCALL(OPIVX3, vnmsac_vx_h, OP_SSS_H, H2, H2, DO_NMSAC)
+RVVCALL(OPIVX3, vnmsac_vx_w, OP_SSS_W, H4, H4, DO_NMSAC)
+RVVCALL(OPIVX3, vnmsac_vx_d, OP_SSS_D, H8, H8, DO_NMSAC)
+RVVCALL(OPIVX3, vmadd_vx_b, OP_SSS_B, H1, H1, DO_MADD)
+RVVCALL(OPIVX3, vmadd_vx_h, OP_SSS_H, H2, H2, DO_MADD)
+RVVCALL(OPIVX3, vmadd_vx_w, OP_SSS_W, H4, H4, DO_MADD)
+RVVCALL(OPIVX3, vmadd_vx_d, OP_SSS_D, H8, H8, DO_MADD)
+RVVCALL(OPIVX3, vnmsub_vx_b, OP_SSS_B, H1, H1, DO_NMSUB)
+RVVCALL(OPIVX3, vnmsub_vx_h, OP_SSS_H, H2, H2, DO_NMSUB)
+RVVCALL(OPIVX3, vnmsub_vx_w, OP_SSS_W, H4, H4, DO_NMSUB)
+RVVCALL(OPIVX3, vnmsub_vx_d, OP_SSS_D, H8, H8, DO_NMSUB)
+GEN_VEXT_VX(vmacc_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vmacc_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vmacc_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vmacc_vx_d, 8, 8, clearq)
+GEN_VEXT_VX(vnmsac_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vnmsac_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vnmsac_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vnmsac_vx_d, 8, 8, clearq)
+GEN_VEXT_VX(vmadd_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vmadd_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vmadd_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vmadd_vx_d, 8, 8, clearq)
+GEN_VEXT_VX(vnmsub_vx_b, 1, 1, clearb)
+GEN_VEXT_VX(vnmsub_vx_h, 2, 2, clearh)
+GEN_VEXT_VX(vnmsub_vx_w, 4, 4, clearl)
+GEN_VEXT_VX(vnmsub_vx_d, 8, 8, clearq)
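Restated from the DO_* macros above, with OP(s2, s1, d) as the calling convention (N = s2, M = s1, D = d), the four accumulate forms are:

    /*
     * vmacc:   vd[i] =  (s1 * vs2[i]) + vd[i]     DO_MACC(N, M, D)  = M * N + D
     * vnmsac:  vd[i] = -(s1 * vs2[i]) + vd[i]     DO_NMSAC(N, M, D) = -(M * N) + D
     * vmadd:   vd[i] =  (s1 * vd[i]) + vs2[i]     DO_MADD(N, M, D)  = M * D + N
     * vnmsub:  vd[i] = -(s1 * vd[i]) + vs2[i]     DO_NMSUB(N, M, D) = -(M * D) + N
     */

That is, the vmadd/vnmsub pair multiplies into the destination register rather than accumulating into it.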
From patchwork Mon Mar 9 12:14:18 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427021
From: LIU Zhiwei
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 21/60] target/riscv: vector widening integer multiply-add instructions
Date: Mon, 9 Mar 2020 20:14:18 +0800
Message-Id: <20200309121457.13719-22-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 22 ++++++++++++
 target/riscv/insn32.decode | 7 ++++
 target/riscv/insn_trans/trans_rvv.inc.c | 9 +++++
 target/riscv/vector_helper.c | 45 +++++++++++++++++++++++++
 4 files changed, 83 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 098288df76..1f0d3d60e3 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -643,3 +643,25 @@ DEF_HELPER_6(vnmsub_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vnmsub_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vnmsub_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vnmsub_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vwmaccu_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmaccu_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmaccu_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmacc_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmacc_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmacc_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmaccsu_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmaccsu_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmaccsu_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwmaccu_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmaccu_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmaccu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmacc_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmacc_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmacc_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmaccsu_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmaccsu_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmaccsu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmaccus_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmaccus_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwmaccus_vx_w, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 58de888afa..2a5b945139 100644
---
a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -392,6 +392,13 @@ vmadd_vv 101001 . ..... ..... 010 ..... 1010111 @r_vm vmadd_vx 101001 . ..... ..... 110 ..... 1010111 @r_vm vnmsub_vv 101011 . ..... ..... 010 ..... 1010111 @r_vm vnmsub_vx 101011 . ..... ..... 110 ..... 1010111 @r_vm +vwmaccu_vv 111100 . ..... ..... 010 ..... 1010111 @r_vm +vwmaccu_vx 111100 . ..... ..... 110 ..... 1010111 @r_vm +vwmacc_vv 111101 . ..... ..... 010 ..... 1010111 @r_vm +vwmacc_vx 111101 . ..... ..... 110 ..... 1010111 @r_vm +vwmaccsu_vv 111110 . ..... ..... 010 ..... 1010111 @r_vm +vwmaccsu_vx 111110 . ..... ..... 110 ..... 1010111 @r_vm +vwmaccus_vx 111111 . ..... ..... 110 ..... 1010111 @r_vm vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 05f7ae0bc4..958737d097 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1472,3 +1472,12 @@ GEN_OPIVX_TRANS(vmacc_vx, opivx_check) GEN_OPIVX_TRANS(vnmsac_vx, opivx_check) GEN_OPIVX_TRANS(vmadd_vx, opivx_check) GEN_OPIVX_TRANS(vnmsub_vx, opivx_check) + +/* Vector Widening Integer Multiply-Add Instructions */ +GEN_OPIVV_WIDEN_TRANS(vwmaccu_vv, opivv_widen_check) +GEN_OPIVV_WIDEN_TRANS(vwmacc_vv, opivv_widen_check) +GEN_OPIVV_WIDEN_TRANS(vwmaccsu_vv, opivv_widen_check) +GEN_OPIVX_WIDEN_TRANS(vwmaccu_vx) +GEN_OPIVX_WIDEN_TRANS(vwmacc_vx) +GEN_OPIVX_WIDEN_TRANS(vwmaccsu_vx) +GEN_OPIVX_WIDEN_TRANS(vwmaccus_vx) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index e5082c8adc..5109654f9f 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -1910,3 +1910,48 @@ GEN_VEXT_VX(vnmsub_vx_b, 1, 1, clearb) GEN_VEXT_VX(vnmsub_vx_h, 2, 2, clearh) GEN_VEXT_VX(vnmsub_vx_w, 4, 4, clearl) GEN_VEXT_VX(vnmsub_vx_d, 8, 8, clearq) + +/* Vector Widening Integer Multiply-Add Instructions */ +RVVCALL(OPIVV3, vwmaccu_vv_b, WOP_UUU_B, H2, H1, H1, DO_MACC) +RVVCALL(OPIVV3, vwmaccu_vv_h, WOP_UUU_H, H4, H2, H2, DO_MACC) +RVVCALL(OPIVV3, vwmaccu_vv_w, WOP_UUU_W, H8, H4, H4, DO_MACC) +RVVCALL(OPIVV3, vwmacc_vv_b, WOP_SSS_B, H2, H1, H1, DO_MACC) +RVVCALL(OPIVV3, vwmacc_vv_h, WOP_SSS_H, H4, H2, H2, DO_MACC) +RVVCALL(OPIVV3, vwmacc_vv_w, WOP_SSS_W, H8, H4, H4, DO_MACC) +RVVCALL(OPIVV3, vwmaccsu_vv_b, WOP_SSU_B, H2, H1, H1, DO_MACC) +RVVCALL(OPIVV3, vwmaccsu_vv_h, WOP_SSU_H, H4, H2, H2, DO_MACC) +RVVCALL(OPIVV3, vwmaccsu_vv_w, WOP_SSU_W, H8, H4, H4, DO_MACC) +GEN_VEXT_VV(vwmaccu_vv_b, 1, 2, clearh) +GEN_VEXT_VV(vwmaccu_vv_h, 2, 4, clearl) +GEN_VEXT_VV(vwmaccu_vv_w, 4, 8, clearq) +GEN_VEXT_VV(vwmacc_vv_b, 1, 2, clearh) +GEN_VEXT_VV(vwmacc_vv_h, 2, 4, clearl) +GEN_VEXT_VV(vwmacc_vv_w, 4, 8, clearq) +GEN_VEXT_VV(vwmaccsu_vv_b, 1, 2, clearh) +GEN_VEXT_VV(vwmaccsu_vv_h, 2, 4, clearl) +GEN_VEXT_VV(vwmaccsu_vv_w, 4, 8, clearq) + +RVVCALL(OPIVX3, vwmaccu_vx_b, WOP_UUU_B, H2, H1, DO_MACC) +RVVCALL(OPIVX3, vwmaccu_vx_h, WOP_UUU_H, H4, H2, DO_MACC) +RVVCALL(OPIVX3, vwmaccu_vx_w, WOP_UUU_W, H8, H4, DO_MACC) +RVVCALL(OPIVX3, vwmacc_vx_b, WOP_SSS_B, H2, H1, DO_MACC) +RVVCALL(OPIVX3, vwmacc_vx_h, WOP_SSS_H, H4, H2, DO_MACC) +RVVCALL(OPIVX3, vwmacc_vx_w, WOP_SSS_W, H8, H4, DO_MACC) +RVVCALL(OPIVX3, vwmaccsu_vx_b, WOP_SSU_B, H2, H1, DO_MACC) +RVVCALL(OPIVX3, vwmaccsu_vx_h, WOP_SSU_H, H4, H2, DO_MACC) +RVVCALL(OPIVX3, vwmaccsu_vx_w, WOP_SSU_W, H8, H4, DO_MACC) +RVVCALL(OPIVX3, vwmaccus_vx_b, WOP_SUS_B, H2, H1, DO_MACC) +RVVCALL(OPIVX3, vwmaccus_vx_h, 
WOP_SUS_H, H4, H2, DO_MACC) +RVVCALL(OPIVX3, vwmaccus_vx_w, WOP_SUS_W, H8, H4, DO_MACC) +GEN_VEXT_VX(vwmaccu_vx_b, 1, 2, clearh) +GEN_VEXT_VX(vwmaccu_vx_h, 2, 4, clearl) +GEN_VEXT_VX(vwmaccu_vx_w, 4, 8, clearq) +GEN_VEXT_VX(vwmacc_vx_b, 1, 2, clearh) +GEN_VEXT_VX(vwmacc_vx_h, 2, 4, clearl) +GEN_VEXT_VX(vwmacc_vx_w, 4, 8, clearq) +GEN_VEXT_VX(vwmaccsu_vx_b, 1, 2, clearh) +GEN_VEXT_VX(vwmaccsu_vx_h, 2, 4, clearl) +GEN_VEXT_VX(vwmaccsu_vx_w, 4, 8, clearq) +GEN_VEXT_VX(vwmaccus_vx_b, 1, 2, clearh) +GEN_VEXT_VX(vwmaccus_vx_h, 2, 4, clearl) +GEN_VEXT_VX(vwmaccus_vx_w, 4, 8, clearq) From patchwork Mon Mar 9 12:14:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: LIU Zhiwei X-Patchwork-Id: 11427043 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4BE1A92A for ; Mon, 9 Mar 2020 12:32:18 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id DFF242072A for ; Mon, 9 Mar 2020 12:32:17 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DFF242072A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=c-sky.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:42302 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jBHZw-0000zi-Se for patchwork-qemu-devel@patchwork.kernel.org; Mon, 09 Mar 2020 08:32:16 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:56583) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1jBHJy-0002Ki-1O for qemu-devel@nongnu.org; Mon, 09 Mar 2020 08:15:48 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1jBHJu-00033r-CJ for qemu-devel@nongnu.org; Mon, 09 Mar 2020 08:15:45 -0400 Received: from smtp2200-217.mail.aliyun.com ([121.197.200.217]:48397) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1jBHJt-0002bn-Mi; Mon, 09 Mar 2020 08:15:42 -0400 X-Alimail-AntiSpam: AC=CONTINUE; BC=0.08249524|-1; BR=01201311R931S41ruler364_34042_170522; CH=green; DM=||false|; DS=SPAM|spam_ad|0.858603-0.00040499-0.140992; FP=774595684858660470|1|1|7|0|-1|-1|-1; HT=e02c03293; MF=zhiwei_liu@c-sky.com; NM=1; PH=DS; RN=10; RT=10; SR=0; TI=SMTPD_---.GyaeXy8_1583756101; Received: from L-PF1D6DP4-1208.hz.ali.com(mailfrom:zhiwei_liu@c-sky.com fp:SMTPD_---.GyaeXy8_1583756101) by smtp.aliyun-inc.com(10.147.41.199); Mon, 09 Mar 2020 20:15:13 +0800 From: LIU Zhiwei To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com Subject: [PATCH v3 22/60] target/riscv: vector integer merge and move instructions Date: Mon, 9 Mar 2020 20:14:19 +0800 Message-Id: <20200309121457.13719-23-zhiwei_liu@c-sky.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20200309121457.13719-1-zhiwei_liu@c-sky.com> References: <20200309121457.13719-1-zhiwei_liu@c-sky.com> MIME-Version: 1.0 X-detected-operating-system: by eggs.gnu.org: GNU/Linux 3.x [generic] [fuzzy] X-Received-From: 121.197.200.217 X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: 
From patchwork Mon Mar 9 12:14:19 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427043
From: LIU Zhiwei
Subject: [PATCH v3 22/60] target/riscv: vector integer merge and move instructions
Date: Mon, 9 Mar 2020 20:14:19 +0800
Message-Id: <20200309121457.13719-23-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 9 ++++
 target/riscv/insn32.decode | 3 ++
 target/riscv/insn_trans/trans_rvv.inc.c | 24 ++++++++++
 target/riscv/vector_helper.c | 58 +++++++++++++++++++++++
 4 files changed, 94 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 1f0d3d60e3..121e9e57e7 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -665,3 +665,12 @@ DEF_HELPER_6(vwmaccsu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vwmaccus_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vwmaccus_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vwmaccus_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vmerge_vvm_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmerge_vvm_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmerge_vvm_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmerge_vvm_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vmerge_vxm_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmerge_vxm_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmerge_vxm_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vmerge_vxm_d, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 2a5b945139..bcb8273bcc 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -399,6 +399,9 @@ vwmacc_vx 111101 . ..... ..... 110 ..... 1010111 @r_vm
 vwmaccsu_vv 111110 . ..... ..... 010 ..... 1010111 @r_vm
 vwmaccsu_vx 111110 . ..... ..... 110 ..... 1010111 @r_vm
 vwmaccus_vx 111111 . ..... ..... 110 ..... 1010111 @r_vm
+vmerge_vvm 010111 . ..... ..... 000 ..... 1010111 @r_vm
+vmerge_vxm 010111 . ..... ..... 100 ..... 1010111 @r_vm
+vmerge_vim 010111 . ..... ..... 011 ..... 1010111 @r_vm
 vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl 1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index 958737d097..aff5ca8663 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -1481,3 +1481,27 @@ GEN_OPIVX_WIDEN_TRANS(vwmaccu_vx)
 GEN_OPIVX_WIDEN_TRANS(vwmacc_vx)
 GEN_OPIVX_WIDEN_TRANS(vwmaccsu_vx)
 GEN_OPIVX_WIDEN_TRANS(vwmaccus_vx)
+
+/* Vector Integer Merge and Move Instructions */
+static bool opivv_vmerge_check(DisasContext *s, arg_rmrr *a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, false) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_reg(s, a->rs2, false) &&
+            vext_check_reg(s, a->rs1, false) &&
+            ((a->vm == 0) || (a->rs2 == 0)));
+}
+GEN_OPIVV_TRANS(vmerge_vvm, opivv_vmerge_check)
+
+static bool opivx_vmerge_check(DisasContext *s, arg_rmrr *a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, false) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_reg(s, a->rs2, false) &&
+            ((a->vm == 0) || (a->rs2 == 0)));
+}
+GEN_OPIVX_TRANS(vmerge_vxm, opivx_vmerge_check)
+
+GEN_OPIVI_TRANS(vmerge_vim, 0, vmerge_vxm, opivx_vmerge_check)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 5109654f9f..273b705847 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -1955,3 +1955,61 @@ GEN_VEXT_VX(vwmaccsu_vx_w, 4, 8, clearq)
 GEN_VEXT_VX(vwmaccus_vx_b, 1, 2, clearh)
 GEN_VEXT_VX(vwmaccus_vx_h, 2, 4, clearl)
 GEN_VEXT_VX(vwmaccus_vx_w, 4, 8, clearq)
+
+/* Vector Integer Merge and Move Instructions */
+#define GEN_VEXT_VMERGE_VV(NAME, ETYPE, H, CLEAR_FN)          \
+void HELPER(NAME)(void *vd, void *v0, void *vs1, void *vs2,   \
+                  CPURISCVState *env, uint32_t desc)          \
+{                                                             \
+    uint32_t mlen = vext_mlen(desc);                          \
+    uint32_t vm = vext_vm(desc);                              \
+    uint32_t vl = env->vl;                                    \
+    uint32_t esz = sizeof(ETYPE);                             \
+    uint32_t vlmax = vext_maxsz(desc) / esz;                  \
+    uint32_t i;                                               \
+                                                              \
+    for (i = 0; i < vl; i++) {                                \
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {            \
+            ETYPE s2 = *((ETYPE *)vs2 + H(i));                \
+            *((ETYPE *)vd + H(i)) = s2;                       \
+        } else {                                              \
+            ETYPE s1 = *((ETYPE *)vs1 + H(i));                \
+            *((ETYPE *)vd + H(i)) = s1;                       \
+        }                                                     \
+    }                                                         \
+    if (i != 0) {                                             \
+        CLEAR_FN(vd, vl, vl * esz, vlmax * esz);              \
+    }                                                         \
+}
+GEN_VEXT_VMERGE_VV(vmerge_vvm_b, int8_t, H1, clearb)
+GEN_VEXT_VMERGE_VV(vmerge_vvm_h, int16_t, H2, clearh)
+GEN_VEXT_VMERGE_VV(vmerge_vvm_w, int32_t, H4, clearl)
+GEN_VEXT_VMERGE_VV(vmerge_vvm_d, int64_t, H8, clearq)
+
+#define GEN_VEXT_VMERGE_VX(NAME, ETYPE, H, CLEAR_FN)          \
+void HELPER(NAME)(void *vd, void *v0, target_ulong s1,        \
+                  void *vs2, CPURISCVState *env, uint32_t desc) \
+{                                                             \
+    uint32_t mlen = vext_mlen(desc);                          \
+    uint32_t vm = vext_vm(desc);                              \
+    uint32_t vl = env->vl;                                    \
+    uint32_t esz = sizeof(ETYPE);                             \
+    uint32_t vlmax = vext_maxsz(desc) / esz;                  \
+    uint32_t i;                                               \
+                                                              \
+    for (i = 0; i < vl; i++) {                                \
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {            \
+            ETYPE s2 = *((ETYPE *)vs2 + H(i));                \
+            *((ETYPE *)vd + H(i)) = s2;                       \
+        } else {                                              \
+            *((ETYPE *)vd + H(i)) = (ETYPE)(target_long)s1;   \
+        }                                                     \
+    }                                                         \
+    if (i != 0) {                                             \
+        CLEAR_FN(vd, vl, vl * esz, vlmax * esz);              \
+    }                                                         \
+}
+GEN_VEXT_VMERGE_VX(vmerge_vxm_b, int8_t, H1, clearb)
+GEN_VEXT_VMERGE_VX(vmerge_vxm_h, int16_t, H2, clearh)
+GEN_VEXT_VMERGE_VX(vmerge_vxm_w, int32_t, H4, clearl)
+GEN_VEXT_VMERGE_VX(vmerge_vxm_d, int64_t, H8, clearq)
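Stripped of the masking and tail-clearing bookkeeping, the merge helpers above reduce to a per-element select: a masked-off element (vm == 0 and mask bit clear) is copied from vs2, while an active element takes vs1 or the sign-extended scalar. A hedged scalar sketch (merge_elem is an invented name for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    /* One element of vmerge.vvm; with vm == 1 (the vmv.v.v encoding,
     * which the trans checks pin to rs2 == 0) it is a plain move. */
    static int32_t merge_elem(bool vm, bool mask_bit, int32_t s1, int32_t s2)
    {
        return (vm || mask_bit) ? s1 : s2;
    }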
From patchwork Mon Mar 9 12:14:20 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427051
From: LIU Zhiwei
Subject: [PATCH v3 23/60] target/riscv: vector single-width saturating add and subtract
Date: Mon, 9 Mar 2020 20:14:20 +0800
Message-Id: <20200309121457.13719-24-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 33 +++
 target/riscv/insn32.decode | 10 +
 target/riscv/insn_trans/trans_rvv.inc.c | 16 ++
 target/riscv/vector_helper.c | 278 ++++++++++++++++++++++++
 4 files changed, 337 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 121e9e57e7..95da00d365 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -674,3 +674,36 @@ DEF_HELPER_6(vmerge_vxm_b, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vmerge_vxm_h, void, ptr,
ptr, tl, ptr, env, i32) DEF_HELPER_6(vmerge_vxm_w, void, ptr, ptr, tl, ptr, env, i32) DEF_HELPER_6(vmerge_vxm_d, void, ptr, ptr, tl, ptr, env, i32) + +DEF_HELPER_6(vsaddu_vv_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsaddu_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsaddu_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsaddu_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsadd_vv_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsadd_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsadd_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsadd_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vssubu_vv_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vssubu_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vssubu_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vssubu_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vssub_vv_b, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vssub_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vssub_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vssub_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vsaddu_vx_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsaddu_vx_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsaddu_vx_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsaddu_vx_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsadd_vx_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsadd_vx_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsadd_vx_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsadd_vx_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vssubu_vx_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vssubu_vx_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vssubu_vx_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vssubu_vx_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vssub_vx_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vssub_vx_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vssub_vx_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vssub_vx_d, void, ptr, ptr, tl, ptr, env, i32) diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index bcb8273bcc..44baadf582 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -402,6 +402,16 @@ vwmaccus_vx 111111 . ..... ..... 110 ..... 1010111 @r_vm vmerge_vvm 010111 . ..... ..... 000 ..... 1010111 @r_vm vmerge_vxm 010111 . ..... ..... 100 ..... 1010111 @r_vm vmerge_vim 010111 . ..... ..... 011 ..... 1010111 @r_vm +vsaddu_vv 100000 . ..... ..... 000 ..... 1010111 @r_vm +vsaddu_vx 100000 . ..... ..... 100 ..... 1010111 @r_vm +vsaddu_vi 100000 . ..... ..... 011 ..... 1010111 @r_vm +vsadd_vv 100001 . ..... ..... 000 ..... 1010111 @r_vm +vsadd_vx 100001 . ..... ..... 100 ..... 1010111 @r_vm +vsadd_vi 100001 . ..... ..... 011 ..... 1010111 @r_vm +vssubu_vv 100010 . ..... ..... 000 ..... 1010111 @r_vm +vssubu_vx 100010 . ..... ..... 100 ..... 1010111 @r_vm +vssub_vv 100011 . ..... ..... 000 ..... 1010111 @r_vm +vssub_vx 100011 . ..... ..... 100 ..... 1010111 @r_vm vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 
1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index aff5ca8663..ad55766b98 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1505,3 +1505,19 @@ static bool opivx_vmerge_check(DisasContext *s, arg_rmrr *a) GEN_OPIVX_TRANS(vmerge_vxm, opivx_vmerge_check) GEN_OPIVI_TRANS(vmerge_vim, 0, vmerge_vxm, opivx_vmerge_check) + +/* + *** Vector Fixed-Point Arithmetic Instructions + */ + +/* Vector Single-Width Saturating Add and Subtract */ +GEN_OPIVV_GVEC_TRANS(vsaddu_vv, usadd) +GEN_OPIVV_GVEC_TRANS(vsadd_vv, ssadd) +GEN_OPIVV_GVEC_TRANS(vssubu_vv, ussub) +GEN_OPIVV_GVEC_TRANS(vssub_vv, sssub) +GEN_OPIVX_TRANS(vsaddu_vx, opivx_check) +GEN_OPIVX_TRANS(vsadd_vx, opivx_check) +GEN_OPIVX_TRANS(vssubu_vx, opivx_check) +GEN_OPIVX_TRANS(vssub_vx, opivx_check) +GEN_OPIVI_TRANS(vsaddu_vi, 1, vsaddu_vx, opivx_check) +GEN_OPIVI_TRANS(vsadd_vi, 0, vsadd_vx, opivx_check) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 273b705847..c7b8c1bff4 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -2013,3 +2013,281 @@ GEN_VEXT_VMERGE_VX(vmerge_vxm_b, int8_t, H1, clearb) GEN_VEXT_VMERGE_VX(vmerge_vxm_h, int16_t, H2, clearh) GEN_VEXT_VMERGE_VX(vmerge_vxm_w, int32_t, H4, clearl) GEN_VEXT_VMERGE_VX(vmerge_vxm_d, int64_t, H8, clearq) + +/* + *** Vector Fixed-Point Arithmetic Instructions + */ + +/* Vector Single-Width Saturating Add and Subtract */ +#define OPIVV2_ENV(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \ +static void do_##NAME(void *vd, void *vs1, void *vs2, int i, \ + CPURISCVState *env) \ +{ \ + TX1 s1 = *((T1 *)vs1 + HS1(i)); \ + TX2 s2 = *((T2 *)vs2 + HS2(i)); \ + *((TD *)vd + HD(i)) = OP(env, s2, s1); \ +} + +#define GEN_VEXT_VV_ENV(NAME, ESZ, DSZ, CLEAR_FN) \ +void HELPER(NAME)(void *vd, void *v0, void *vs1, \ + void *vs2, CPURISCVState *env, uint32_t desc) \ +{ \ + uint32_t vlmax = vext_maxsz(desc) / ESZ; \ + uint32_t mlen = vext_mlen(desc); \ + uint32_t vm = vext_vm(desc); \ + uint32_t vl = env->vl; \ + uint32_t i; \ + for (i = 0; i < vl; i++) { \ + if (!vm && !vext_elem_mask(v0, mlen, i)) { \ + continue; \ + } \ + do_##NAME(vd, vs1, vs2, i, env); \ + } \ + if (i != 0) { \ + CLEAR_FN(vd, vl, vl * DSZ, vlmax * DSZ); \ + } \ +} + +static inline uint8_t saddu8(CPURISCVState *env, uint8_t a, uint8_t b) +{ + uint8_t res = a + b; + if (res < a) { + res = UINT8_MAX; + env->vxsat = 0x1; + } + return res; +} +static inline uint16_t saddu16(CPURISCVState *env, uint16_t a, uint16_t b) +{ + uint16_t res = a + b; + if (res < a) { + res = UINT16_MAX; + env->vxsat = 0x1; + } + return res; +} +static inline uint32_t saddu32(CPURISCVState *env, uint32_t a, uint32_t b) +{ + uint32_t res = a + b; + if (res < a) { + res = UINT32_MAX; + env->vxsat = 0x1; + } + return res; +} +static inline uint64_t saddu64(CPURISCVState *env, uint64_t a, uint64_t b) +{ + uint64_t res = a + b; + if (res < a) { + res = UINT64_MAX; + env->vxsat = 0x1; + } + return res; +} +RVVCALL(OPIVV2_ENV, vsaddu_vv_b, OP_UUU_B, H1, H1, H1, saddu8) +RVVCALL(OPIVV2_ENV, vsaddu_vv_h, OP_UUU_H, H2, H2, H2, saddu16) +RVVCALL(OPIVV2_ENV, vsaddu_vv_w, OP_UUU_W, H4, H4, H4, saddu32) +RVVCALL(OPIVV2_ENV, vsaddu_vv_d, OP_UUU_D, H8, H8, H8, saddu64) +GEN_VEXT_VV_ENV(vsaddu_vv_b, 1, 1, clearb) +GEN_VEXT_VV_ENV(vsaddu_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vsaddu_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vsaddu_vv_d, 8, 8, clearq) + +#define OPIVX2_ENV(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \ +static void 
do_##NAME(void *vd, target_ulong s1, void *vs2, int i, \ + CPURISCVState *env) \ +{ \ + TX2 s2 = *((T2 *)vs2 + HS2(i)); \ + *((TD *)vd + HD(i)) = OP(env, s2, (TX1)(T1)(target_long)s1); \ +} + +#define GEN_VEXT_VX_ENV(NAME, ESZ, DSZ, CLEAR_FN) \ +void HELPER(NAME)(void *vd, void *v0, target_ulong s1, \ + void *vs2, CPURISCVState *env, uint32_t desc) \ +{ \ + uint32_t vlmax = vext_maxsz(desc) / ESZ; \ + uint32_t mlen = vext_mlen(desc); \ + uint32_t vm = vext_vm(desc); \ + uint32_t vl = env->vl; \ + uint32_t i; \ + \ + for (i = 0; i < vl; i++) { \ + if (!vm && !vext_elem_mask(v0, mlen, i)) { \ + continue; \ + } \ + do_##NAME(vd, s1, vs2, i, env); \ + } \ + if (i != 0) { \ + CLEAR_FN(vd, vl, vl * DSZ, vlmax * DSZ); \ + } \ +} +RVVCALL(OPIVX2_ENV, vsaddu_vx_b, OP_UUU_B, H1, H1, saddu8) +RVVCALL(OPIVX2_ENV, vsaddu_vx_h, OP_UUU_H, H2, H2, saddu16) +RVVCALL(OPIVX2_ENV, vsaddu_vx_w, OP_UUU_W, H4, H4, saddu32) +RVVCALL(OPIVX2_ENV, vsaddu_vx_d, OP_UUU_D, H8, H8, saddu64) +GEN_VEXT_VX_ENV(vsaddu_vx_b, 1, 1, clearb) +GEN_VEXT_VX_ENV(vsaddu_vx_h, 2, 2, clearh) +GEN_VEXT_VX_ENV(vsaddu_vx_w, 4, 4, clearl) +GEN_VEXT_VX_ENV(vsaddu_vx_d, 8, 8, clearq) + +static inline int8_t sadd8(CPURISCVState *env, int8_t a, int8_t b) +{ + int8_t res = a + b; + if (((res ^ a) & (res ^ b)) >> 7 == -1LL) { + res = a > 0 ? INT8_MAX : INT8_MIN; + env->vxsat = 0x1; + } + return res; +} +static inline int16_t sadd16(CPURISCVState *env, int16_t a, int16_t b) +{ + int16_t res = a + b; + if (((res ^ a) & (res ^ b)) >> 15 == -1LL) { + res = a > 0 ? INT16_MAX : INT16_MIN; + env->vxsat = 0x1; + } + return res; +} +static inline int32_t sadd32(CPURISCVState *env, int32_t a, int32_t b) +{ + int32_t res = a + b; + if (((res ^ a) & (res ^ b)) >> 31 == -1LL) { + res = a > 0 ? INT32_MAX : INT32_MIN; + env->vxsat = 0x1; + } + return res; +} +static inline int64_t sadd64(CPURISCVState *env, int64_t a, int64_t b) +{ + int64_t res = a + b; + if (((res ^ a) & (res ^ b)) >> 63 == -1LL) { + res = a > 0 ? 
INT64_MAX : INT64_MIN; + env->vxsat = 0x1; + } + return res; +} +RVVCALL(OPIVV2_ENV, vsadd_vv_b, OP_SSS_B, H1, H1, H1, sadd8) +RVVCALL(OPIVV2_ENV, vsadd_vv_h, OP_SSS_H, H2, H2, H2, sadd16) +RVVCALL(OPIVV2_ENV, vsadd_vv_w, OP_SSS_W, H4, H4, H4, sadd32) +RVVCALL(OPIVV2_ENV, vsadd_vv_d, OP_SSS_D, H8, H8, H8, sadd64) +GEN_VEXT_VV_ENV(vsadd_vv_b, 1, 1, clearb) +GEN_VEXT_VV_ENV(vsadd_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vsadd_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vsadd_vv_d, 8, 8, clearq) + +RVVCALL(OPIVX2_ENV, vsadd_vx_b, OP_SSS_B, H1, H1, sadd8) +RVVCALL(OPIVX2_ENV, vsadd_vx_h, OP_SSS_H, H2, H2, sadd16) +RVVCALL(OPIVX2_ENV, vsadd_vx_w, OP_SSS_W, H4, H4, sadd32) +RVVCALL(OPIVX2_ENV, vsadd_vx_d, OP_SSS_D, H8, H8, sadd64) +GEN_VEXT_VX_ENV(vsadd_vx_b, 1, 1, clearb) +GEN_VEXT_VX_ENV(vsadd_vx_h, 2, 2, clearh) +GEN_VEXT_VX_ENV(vsadd_vx_w, 4, 4, clearl) +GEN_VEXT_VX_ENV(vsadd_vx_d, 8, 8, clearq) + +static inline uint8_t ssubu8(CPURISCVState *env, uint8_t a, uint8_t b) +{ + uint8_t res = a - b; + if (res > a) { + res = 0; + env->vxsat = 0x1; + } + return res; +} +static inline uint16_t ssubu16(CPURISCVState *env, uint16_t a, uint16_t b) +{ + uint16_t res = a - b; + if (res > a) { + res = 0; + env->vxsat = 0x1; + } + return res; +} +static inline uint32_t ssubu32(CPURISCVState *env, uint32_t a, uint32_t b) +{ + uint32_t res = a - b; + if (res > a) { + res = 0; + env->vxsat = 0x1; + } + return res; +} +static inline uint64_t ssubu64(CPURISCVState *env, uint64_t a, uint64_t b) +{ + uint64_t res = a - b; + if (res > a) { + res = 0; + env->vxsat = 0x1; + } + return res; +} +RVVCALL(OPIVV2_ENV, vssubu_vv_b, OP_UUU_B, H1, H1, H1, ssubu8) +RVVCALL(OPIVV2_ENV, vssubu_vv_h, OP_UUU_H, H2, H2, H2, ssubu16) +RVVCALL(OPIVV2_ENV, vssubu_vv_w, OP_UUU_W, H4, H4, H4, ssubu32) +RVVCALL(OPIVV2_ENV, vssubu_vv_d, OP_UUU_D, H8, H8, H8, ssubu64) +GEN_VEXT_VV_ENV(vssubu_vv_b, 1, 1, clearb) +GEN_VEXT_VV_ENV(vssubu_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vssubu_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vssubu_vv_d, 8, 8, clearq) + +RVVCALL(OPIVX2_ENV, vssubu_vx_b, OP_UUU_B, H1, H1, ssubu8) +RVVCALL(OPIVX2_ENV, vssubu_vx_h, OP_UUU_H, H2, H2, ssubu16) +RVVCALL(OPIVX2_ENV, vssubu_vx_w, OP_UUU_W, H4, H4, ssubu32) +RVVCALL(OPIVX2_ENV, vssubu_vx_d, OP_UUU_D, H8, H8, ssubu64) +GEN_VEXT_VX_ENV(vssubu_vx_b, 1, 1, clearb) +GEN_VEXT_VX_ENV(vssubu_vx_h, 2, 2, clearh) +GEN_VEXT_VX_ENV(vssubu_vx_w, 4, 4, clearl) +GEN_VEXT_VX_ENV(vssubu_vx_d, 8, 8, clearq) + +static inline int8_t ssub8(CPURISCVState *env, int8_t a, int8_t b) +{ + int8_t res = a - b; + if (((res ^ a) & (a ^ b)) >> 7 == -1LL) { + res = a > 0 ? INT8_MAX : INT8_MIN; + env->vxsat = 0x1; + } + return res; +} +static inline int16_t ssub16(CPURISCVState *env, int16_t a, int16_t b) +{ + int16_t res = a - b; + if (((res ^ a) & (a ^ b)) >> 15 == -1LL) { + res = a > 0 ? INT16_MAX : INT16_MIN; + env->vxsat = 0x1; + } + return res; +} +static inline int32_t ssub32(CPURISCVState *env, int32_t a, int32_t b) +{ + int32_t res = a - b; + if (((res ^ a) & (a ^ b)) >> 31 == -1LL) { + res = a > 0 ? INT32_MAX : INT32_MIN; + env->vxsat = 0x1; + } + return res; +} +static inline int64_t ssub64(CPURISCVState *env, int64_t a, int64_t b) +{ + int64_t res = a - b; + if (((res ^ a) & (a ^ b)) >> 63 == -1LL) { + res = a > 0 ? 
INT64_MAX : INT64_MIN;
+        env->vxsat = 0x1;
+    }
+    return res;
+}
+RVVCALL(OPIVV2_ENV, vssub_vv_b, OP_SSS_B, H1, H1, H1, ssub8)
+RVVCALL(OPIVV2_ENV, vssub_vv_h, OP_SSS_H, H2, H2, H2, ssub16)
+RVVCALL(OPIVV2_ENV, vssub_vv_w, OP_SSS_W, H4, H4, H4, ssub32)
+RVVCALL(OPIVV2_ENV, vssub_vv_d, OP_SSS_D, H8, H8, H8, ssub64)
+GEN_VEXT_VV_ENV(vssub_vv_b, 1, 1, clearb)
+GEN_VEXT_VV_ENV(vssub_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vssub_vv_w, 4, 4, clearl)
+GEN_VEXT_VV_ENV(vssub_vv_d, 8, 8, clearq)
+
+RVVCALL(OPIVX2_ENV, vssub_vx_b, OP_SSS_B, H1, H1, ssub8)
+RVVCALL(OPIVX2_ENV, vssub_vx_h, OP_SSS_H, H2, H2, ssub16)
+RVVCALL(OPIVX2_ENV, vssub_vx_w, OP_SSS_W, H4, H4, ssub32)
+RVVCALL(OPIVX2_ENV, vssub_vx_d, OP_SSS_D, H8, H8, ssub64)
+GEN_VEXT_VX_ENV(vssub_vx_b, 1, 1, clearb)
+GEN_VEXT_VX_ENV(vssub_vx_h, 2, 2, clearh)
+GEN_VEXT_VX_ENV(vssub_vx_w, 4, 4, clearl)
+GEN_VEXT_VX_ENV(vssub_vx_d, 8, 8, clearq)
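The signed saturating helpers above all rely on the same two's-complement identity: an addition overflows exactly when both operands have the same sign and the result's sign differs, which is what (res ^ a) & (res ^ b) tests (and (res ^ a) & (a ^ b) for subtraction). An illustrative restatement of sadd8, with the vxsat side effect replaced by an out-parameter (sadd8_sketch is an invented name):

    #include <stdint.h>

    static int8_t sadd8_sketch(int8_t a, int8_t b, int *sat)
    {
        int8_t res = a + b;                    /* wraps modulo 2^8 */
        if (((res ^ a) & (res ^ b)) < 0) {     /* sign bit set => overflow */
            res = a > 0 ? INT8_MAX : INT8_MIN; /* clamp toward a's sign */
            *sat = 1;                          /* mirrors env->vxsat = 0x1 */
        }
        return res;
    }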
From patchwork Mon Mar 9 12:14:21 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427017
From: LIU Zhiwei
Subject: [PATCH v3 24/60] target/riscv: vector single-width averaging add and subtract
Date: Mon, 9 Mar 2020 20:14:21 +0800
Message-Id: <20200309121457.13719-25-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 17 ++++
 target/riscv/insn32.decode | 5 +
 target/riscv/insn_trans/trans_rvv.inc.c | 7 ++
 target/riscv/vector_helper.c | 129 ++++++++++++++++++++++++
 4 files changed, 158 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 95da00d365..d3837d2ca4 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -707,3 +707,20 @@ DEF_HELPER_6(vssub_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vssub_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vssub_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vssub_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vaadd_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vaadd_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vaadd_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vaadd_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vasub_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vasub_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vasub_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vasub_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vaadd_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vaadd_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vaadd_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vaadd_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vasub_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vasub_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vasub_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vasub_vx_d, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 44baadf582..0227a16b16 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -412,6 +412,11 @@ vssubu_vv 100010 . ..... ..... 000 ..... 1010111 @r_vm
 vssubu_vx 100010 . ..... ..... 100 ..... 1010111 @r_vm
 vssub_vv 100011 . ..... ..... 000 ..... 1010111 @r_vm
 vssub_vx 100011 . ..... ..... 100 ..... 1010111 @r_vm
+vaadd_vv 100100 . ..... ..... 000 ..... 1010111 @r_vm
+vaadd_vx 100100 . ..... ..... 100 ..... 1010111 @r_vm
+vaadd_vi 100100 . ..... ..... 011 ..... 1010111 @r_vm
+vasub_vv 100110 . ..... ..... 000 ..... 1010111 @r_vm
+vasub_vx 100110 . ..... ..... 100 ..... 1010111 @r_vm
 vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl 1000000 ..... ..... 111 .....
1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index ad55766b98..9988fad2fe 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1521,3 +1521,10 @@ GEN_OPIVX_TRANS(vssubu_vx, opivx_check) GEN_OPIVX_TRANS(vssub_vx, opivx_check) GEN_OPIVI_TRANS(vsaddu_vi, 1, vsaddu_vx, opivx_check) GEN_OPIVI_TRANS(vsadd_vi, 0, vsadd_vx, opivx_check) + +/* Vector Single-Width Averaging Add and Subtract */ +GEN_OPIVV_TRANS(vaadd_vv, opivv_check) +GEN_OPIVV_TRANS(vasub_vv, opivv_check) +GEN_OPIVX_TRANS(vaadd_vx, opivx_check) +GEN_OPIVX_TRANS(vasub_vx, opivx_check) +GEN_OPIVI_TRANS(vaadd_vi, 0, vaadd_vx, opivx_check) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index c7b8c1bff4..b0a7a3b6e4 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -2291,3 +2291,132 @@ GEN_VEXT_VX_ENV(vssub_vx_b, 1, 1, clearb) GEN_VEXT_VX_ENV(vssub_vx_h, 2, 2, clearh) GEN_VEXT_VX_ENV(vssub_vx_w, 4, 4, clearl) GEN_VEXT_VX_ENV(vssub_vx_d, 8, 8, clearq) + +/* Vector Single-Width Averaging Add and Subtract */ +static inline uint8_t get_round(CPURISCVState *env, uint64_t v, uint8_t shift) +{ + uint8_t d = extract64(v, shift, 1); + uint8_t d1; + uint64_t D1, D2; + int mod = env->vxrm; + + if (shift == 0 || shift > 64) { + return 0; + } + + d1 = extract64(v, shift - 1, 1); + D1 = extract64(v, 0, shift); + if (mod == 0) { /* round-to-nearest-up (add +0.5 LSB) */ + return d1; + } else if (mod == 1) { /* round-to-nearest-even */ + if (shift > 1) { + D2 = extract64(v, 0, shift - 1); + return d1 & ((D2 != 0) | d); + } else { + return d1 & d; + } + } else if (mod == 3) { /* round-to-odd (OR bits into LSB, aka "jam") */ + return !d & (D1 != 0); + } + return 0; /* round-down (truncate) */ +} + +static inline int8_t aadd8(CPURISCVState *env, int8_t a, int8_t b) +{ + int16_t res = (int16_t)a + (int16_t)b; + uint8_t round = get_round(env, res, 1); + res = (res >> 1) + round; + return res; +} +static inline int16_t aadd16(CPURISCVState *env, int16_t a, int16_t b) +{ + int32_t res = (int32_t)a + (int32_t)b; + uint8_t round = get_round(env, res, 1); + res = (res >> 1) + round; + return res; +} +static inline int32_t aadd32(CPURISCVState *env, int32_t a, int32_t b) +{ + int64_t res = (int64_t)a + (int64_t)b; + uint8_t round = get_round(env, res, 1); + res = (res >> 1) + round; + return res; +} +static inline int64_t aadd64(CPURISCVState *env, int64_t a, int64_t b) +{ + int64_t res = (int64_t)a + (int64_t)b; + uint8_t round = get_round(env, res, 1); /* get_round only need v[d : 0] */ + if (((res ^ a) & (res ^ b)) >> 63 == -1LL) { /* overflow */ + res = ((res >> 1) ^ INT64_MIN) + round; + } else { + res = (res >> 1) + round; + } + return res; +} +RVVCALL(OPIVV2_ENV, vaadd_vv_b, OP_SSS_B, H1, H1, H1, aadd8) +RVVCALL(OPIVV2_ENV, vaadd_vv_h, OP_SSS_H, H2, H2, H2, aadd16) +RVVCALL(OPIVV2_ENV, vaadd_vv_w, OP_SSS_W, H4, H4, H4, aadd32) +RVVCALL(OPIVV2_ENV, vaadd_vv_d, OP_SSS_D, H8, H8, H8, aadd64) +GEN_VEXT_VV_ENV(vaadd_vv_b, 1, 1, clearb) +GEN_VEXT_VV_ENV(vaadd_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vaadd_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vaadd_vv_d, 8, 8, clearq) + +RVVCALL(OPIVX2_ENV, vaadd_vx_b, OP_SSS_B, H1, H1, aadd8) +RVVCALL(OPIVX2_ENV, vaadd_vx_h, OP_SSS_H, H2, H2, aadd16) +RVVCALL(OPIVX2_ENV, vaadd_vx_w, OP_SSS_W, H4, H4, aadd32) +RVVCALL(OPIVX2_ENV, vaadd_vx_d, OP_SSS_D, H8, H8, aadd64) +GEN_VEXT_VX_ENV(vaadd_vx_b, 1, 1, clearb) +GEN_VEXT_VX_ENV(vaadd_vx_h, 2, 2, clearh) 
+GEN_VEXT_VX_ENV(vaadd_vx_w, 4, 4, clearl)
+GEN_VEXT_VX_ENV(vaadd_vx_d, 8, 8, clearq)
+
+static inline int8_t asub8(CPURISCVState *env, int8_t a, int8_t b)
+{
+    int16_t res = (int16_t)a - (int16_t)b;
+    uint8_t round = get_round(env, res, 1);
+    res = (res >> 1) + round;
+    return res;
+}
+static inline int16_t asub16(CPURISCVState *env, int16_t a, int16_t b)
+{
+    int32_t res = (int32_t)a - (int32_t)b;
+    uint8_t round = get_round(env, res, 1);
+    res = (res >> 1) + round;
+    return res;
+}
+static inline int32_t asub32(CPURISCVState *env, int32_t a, int32_t b)
+{
+    int64_t res = (int64_t)a - (int64_t)b;
+    uint8_t round = get_round(env, res, 1);
+    res = (res >> 1) + round;
+    return res;
+}
+static inline int64_t asub64(CPURISCVState *env, int64_t a, int64_t b)
+{
+    int64_t res = (int64_t)a - (int64_t)b;
+    uint8_t round = get_round(env, res, 1); /* get_round only needs v[d:0] */
+    if (((res ^ a) & (a ^ b)) >> 63 == -1LL) { /* overflow */
+        res = ((res >> 1) ^ INT64_MIN) + round;
+    } else {
+        res = (res >> 1) + round;
+    }
+    return res;
+}
+RVVCALL(OPIVV2_ENV, vasub_vv_b, OP_SSS_B, H1, H1, H1, asub8)
+RVVCALL(OPIVV2_ENV, vasub_vv_h, OP_SSS_H, H2, H2, H2, asub16)
+RVVCALL(OPIVV2_ENV, vasub_vv_w, OP_SSS_W, H4, H4, H4, asub32)
+RVVCALL(OPIVV2_ENV, vasub_vv_d, OP_SSS_D, H8, H8, H8, asub64)
+GEN_VEXT_VV_ENV(vasub_vv_b, 1, 1, clearb)
+GEN_VEXT_VV_ENV(vasub_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vasub_vv_w, 4, 4, clearl)
+GEN_VEXT_VV_ENV(vasub_vv_d, 8, 8, clearq)
+
+RVVCALL(OPIVX2_ENV, vasub_vx_b, OP_SSS_B, H1, H1, asub8)
+RVVCALL(OPIVX2_ENV, vasub_vx_h, OP_SSS_H, H2, H2, asub16)
+RVVCALL(OPIVX2_ENV, vasub_vx_w, OP_SSS_W, H4, H4, asub32)
+RVVCALL(OPIVX2_ENV, vasub_vx_d, OP_SSS_D, H8, H8, asub64)
+GEN_VEXT_VX_ENV(vasub_vx_b, 1, 1, clearb)
+GEN_VEXT_VX_ENV(vasub_vx_h, 2, 2, clearh)
+GEN_VEXT_VX_ENV(vasub_vx_w, 4, 4, clearl)
+GEN_VEXT_VX_ENV(vasub_vx_d, 8, 8, clearq)
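get_round() above is the single place where the four vxrm fixed-point rounding modes are implemented; every averaging (and later scaling) helper shifts right and adds the returned rounding increment. A standalone restatement under the same bit conventions (roundoff_u64 is an invented name; d is the lowest surviving bit, d1 the first bit shifted out):

    #include <stdint.h>

    static uint64_t roundoff_u64(int vxrm, uint64_t v, unsigned shift)
    {
        uint64_t r = 0;
        if (shift >= 1 && shift <= 63) {
            uint64_t d    = (v >> shift) & 1;        /* lowest kept bit */
            uint64_t d1   = (v >> (shift - 1)) & 1;  /* first dropped bit */
            uint64_t rest = v & ((1ULL << (shift - 1)) - 1);
            switch (vxrm) {
            case 0: r = d1; break;                      /* rnu */
            case 1: r = d1 & ((rest != 0) | d); break;  /* rne */
            case 3: r = !d & ((v & ((1ULL << shift) - 1)) != 0); break; /* rod */
            default: break;                             /* rdn: truncate */
            }
        }
        return (v >> shift) + r;
    }

For example, with vxrm = 0 (rnu), v = 5 (0b101) and shift = 1, the result is (5 >> 1) + 1 = 3, which is how aadd8 obtains the rounded average (a + b + 1) >> 1.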
From patchwork Mon Mar 9 12:14:22 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427035
From: LIU Zhiwei
Subject: [PATCH v3 25/60] target/riscv: vector single-width fractional multiply with rounding and saturation
Date: Mon, 9 Mar 2020 20:14:22 +0800
Message-Id: <20200309121457.13719-26-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 9 +++
 target/riscv/insn32.decode | 2 +
 target/riscv/insn_trans/trans_rvv.inc.c | 4 +
 target/riscv/vector_helper.c | 103 ++++++++++++++++++++++++
 4 files changed, 118 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index d3837d2ca4..333eccca57 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -724,3 +724,12 @@ DEF_HELPER_6(vasub_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vasub_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vasub_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vasub_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vsmul_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vsmul_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vsmul_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vsmul_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vsmul_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsmul_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsmul_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsmul_vx_d, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 0227a16b16..99f70924d6 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -417,6 +417,8 @@ vaadd_vx 100100 . ..... ..... 100 ..... 1010111 @r_vm
 vaadd_vi 100100 . ..... ..... 011 ..... 1010111 @r_vm
 vasub_vv 100110 . ..... ..... 000 ..... 1010111 @r_vm
 vasub_vx 100110 . ..... ..... 100 ..... 1010111 @r_vm
+vsmul_vv 100111 . ..... ..... 000 ..... 1010111 @r_vm
+vsmul_vx 100111 . ..... ..... 100 ..... 1010111 @r_vm
 vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl 1000000 ..... ..... 111 .....
1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 9988fad2fe..60e1e63b7b 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1528,3 +1528,7 @@ GEN_OPIVV_TRANS(vasub_vv, opivv_check) GEN_OPIVX_TRANS(vaadd_vx, opivx_check) GEN_OPIVX_TRANS(vasub_vx, opivx_check) GEN_OPIVI_TRANS(vaadd_vi, 0, vaadd_vx, opivx_check) + +/* Vector Single-Width Fractional Multiply with Rounding and Saturation */ +GEN_OPIVV_TRANS(vsmul_vv, opivv_check) +GEN_OPIVX_TRANS(vsmul_vx, opivx_check) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index b0a7a3b6e4..74ad07743c 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -2420,3 +2420,106 @@ GEN_VEXT_VX_ENV(vasub_vx_b, 1, 1, clearb) GEN_VEXT_VX_ENV(vasub_vx_h, 2, 2, clearh) GEN_VEXT_VX_ENV(vasub_vx_w, 4, 4, clearl) GEN_VEXT_VX_ENV(vasub_vx_d, 8, 8, clearq) + +/* Vector Single-Width Fractional Multiply with Rounding and Saturation */ +static inline int8_t vsmul8(CPURISCVState *env, int8_t a, int8_t b) +{ + uint8_t round; + int16_t res; + + res = (int16_t)a * (int16_t)b; + round = get_round(env, res, 7); + res = (res >> 7) + round; + + if (res > INT8_MAX) { + env->vxsat = 0x1; + return INT8_MAX; + } else if (res < INT8_MIN) { + env->vxsat = 0x1; + return INT8_MIN; + } else { + return res; + } +} +static int16_t vsmul16(CPURISCVState *env, int16_t a, int16_t b) +{ + uint8_t round; + int32_t res; + + res = (int32_t)a * (int32_t)b; + round = get_round(env, res, 15); + res = (res >> 15) + round; + + if (res > INT16_MAX) { + env->vxsat = 0x1; + return INT16_MAX; + } else if (res < INT16_MIN) { + env->vxsat = 0x1; + return INT16_MIN; + } else { + return res; + } +} +static int32_t vsmul32(CPURISCVState *env, int32_t a, int32_t b) +{ + uint8_t round; + int64_t res; + + res = (int64_t)a * (int64_t)b; + round = get_round(env, res, 31); + res = (res >> 31) + round; + + if (res > INT32_MAX) { + env->vxsat = 0x1; + return INT32_MAX; + } else if (res < INT32_MIN) { + env->vxsat = 0x1; + return INT32_MIN; + } else { + return res; + } +} +static int64_t vsmul64(CPURISCVState *env, int64_t a, int64_t b) +{ + uint8_t round; + uint64_t hi_64, lo_64, Hi62; + uint8_t hi62, hi63, lo63; + + muls64(&lo_64, &hi_64, a, b); + hi62 = extract64(hi_64, 62, 1); + lo63 = extract64(lo_64, 63, 1); + hi63 = extract64(hi_64, 63, 1); + Hi62 = extract64(hi_64, 0, 62); + if (hi62 != hi63) { + env->vxsat = 0x1; + return INT64_MAX; + } + round = get_round(env, lo_64, 63); + if (round && (Hi62 == 0x3fffffff) && lo63) { + env->vxsat = 0x1; + return hi62 ? 
INT64_MIN : INT64_MAX;
+    } else {
+        if (lo63 && round) {
+            return (hi_64 + 1) << 1;
+        } else {
+            return (hi_64 << 1) | lo63 | round;
+        }
+    }
+}
+RVVCALL(OPIVV2_ENV, vsmul_vv_b, OP_SSS_B, H1, H1, H1, vsmul8)
+RVVCALL(OPIVV2_ENV, vsmul_vv_h, OP_SSS_H, H2, H2, H2, vsmul16)
+RVVCALL(OPIVV2_ENV, vsmul_vv_w, OP_SSS_W, H4, H4, H4, vsmul32)
+RVVCALL(OPIVV2_ENV, vsmul_vv_d, OP_SSS_D, H8, H8, H8, vsmul64)
+GEN_VEXT_VV_ENV(vsmul_vv_b, 1, 1, clearb)
+GEN_VEXT_VV_ENV(vsmul_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vsmul_vv_w, 4, 4, clearl)
+GEN_VEXT_VV_ENV(vsmul_vv_d, 8, 8, clearq)
+
+RVVCALL(OPIVX2_ENV, vsmul_vx_b, OP_SSS_B, H1, H1, vsmul8)
+RVVCALL(OPIVX2_ENV, vsmul_vx_h, OP_SSS_H, H2, H2, vsmul16)
+RVVCALL(OPIVX2_ENV, vsmul_vx_w, OP_SSS_W, H4, H4, vsmul32)
+RVVCALL(OPIVX2_ENV, vsmul_vx_d, OP_SSS_D, H8, H8, vsmul64)
+GEN_VEXT_VX_ENV(vsmul_vx_b, 1, 1, clearb)
+GEN_VEXT_VX_ENV(vsmul_vx_h, 2, 2, clearh)
+GEN_VEXT_VX_ENV(vsmul_vx_w, 4, 4, clearl)
+GEN_VEXT_VX_ENV(vsmul_vx_d, 8, 8, clearq)
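vsmul above is the fixed-point fractional multiply: conceptually (a * b) >> (SEW - 1) with vxrm rounding, where the lone overflow case is -1.0 * -1.0 (both operands the minimum value), which must saturate to the maximum. For the 64-bit case the product no longer fits a native type, hence the muls64() high/low split and the bit surgery on hi_64/lo_64. An illustrative 8-bit restatement, with round-to-nearest-up hard-coded instead of consulting env->vxrm (vsmul8_sketch is an invented name):

    #include <stdint.h>

    static int8_t vsmul8_sketch(int8_t a, int8_t b, int *sat)
    {
        int16_t prod = (int16_t)a * (int16_t)b;         /* always fits */
        int16_t res  = (prod >> 7) + ((prod >> 6) & 1); /* >> (SEW-1), rnu */
        if (res > INT8_MAX) {   /* only a == b == INT8_MIN reaches this */
            *sat = 1;           /* mirrors env->vxsat = 0x1 */
            return INT8_MAX;
        }
        return (int8_t)res;
    }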
From patchwork Mon Mar 9 12:14:23 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427023
From: LIU Zhiwei
Subject: [PATCH v3 26/60] target/riscv: vector widening saturating scaled multiply-add
Date: Mon, 9 Mar 2020 20:14:23 +0800
Message-Id: <20200309121457.13719-27-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 22 +++
 target/riscv/insn32.decode | 7 +
 target/riscv/insn_trans/trans_rvv.inc.c | 9 ++
 target/riscv/vector_helper.c | 180 ++++++++++++++++++++++++
 4 files changed, 218 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 333eccca57..74c1c695e0 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -733,3 +733,25 @@ DEF_HELPER_6(vsmul_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vsmul_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vsmul_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vsmul_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vwsmaccu_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsmaccu_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsmaccu_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsmacc_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsmacc_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsmacc_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsmaccsu_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsmaccsu_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsmaccsu_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vwsmaccu_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmaccu_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmaccu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmacc_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmacc_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmacc_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmaccsu_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmaccsu_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmaccsu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmaccus_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmaccus_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vwsmaccus_vx_w, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 99f70924d6..8798919d3e 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -419,6 +419,13 @@ vasub_vv 100110 . ..... ..... 000 ..... 1010111 @r_vm
 vasub_vx 100110 . ..... ..... 100 ..... 1010111 @r_vm
 vsmul_vv 100111 . ..... ..... 000 ..... 1010111 @r_vm
 vsmul_vx 100111 . ..... ..... 100 ..... 1010111 @r_vm
+vwsmaccu_vv 111100 . ..... ..... 000 ..... 1010111 @r_vm
+vwsmaccu_vx 111100 . ..... ..... 100 ..... 1010111 @r_vm
+vwsmacc_vv 111101 . ..... ..... 000 ..... 1010111 @r_vm
+vwsmacc_vx 111101 . ..... ..... 100 ..... 1010111 @r_vm
+vwsmaccsu_vv 111110 . ..... ..... 000 ..... 1010111 @r_vm
+vwsmaccsu_vx 111110 . ..... ..... 100 ..... 1010111 @r_vm
+vwsmaccus_vx 111111 . ..... ..... 100 ..... 1010111 @r_vm
 vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl 1000000 ..... ..... 111 .....
1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 60e1e63b7b..68bebd3c37 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1532,3 +1532,12 @@ GEN_OPIVI_TRANS(vaadd_vi, 0, vaadd_vx, opivx_check) /* Vector Single-Width Fractional Multiply with Rounding and Saturation */ GEN_OPIVV_TRANS(vsmul_vv, opivv_check) GEN_OPIVX_TRANS(vsmul_vx, opivx_check) + +/* Vector Widening Saturating Scaled Multiply-Add */ +GEN_OPIVV_WIDEN_TRANS(vwsmaccu_vv, opivv_widen_check) +GEN_OPIVV_WIDEN_TRANS(vwsmacc_vv, opivv_widen_check) +GEN_OPIVV_WIDEN_TRANS(vwsmaccsu_vv, opivv_widen_check) +GEN_OPIVX_WIDEN_TRANS(vwsmaccu_vx) +GEN_OPIVX_WIDEN_TRANS(vwsmacc_vx) +GEN_OPIVX_WIDEN_TRANS(vwsmaccsu_vx) +GEN_OPIVX_WIDEN_TRANS(vwsmaccus_vx) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 74ad07743c..90c19577fa 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -2523,3 +2523,183 @@ GEN_VEXT_VX_ENV(vsmul_vx_b, 1, 1, clearb) GEN_VEXT_VX_ENV(vsmul_vx_h, 2, 2, clearh) GEN_VEXT_VX_ENV(vsmul_vx_w, 4, 4, clearl) GEN_VEXT_VX_ENV(vsmul_vx_d, 8, 8, clearq) + +/* Vector Widening Saturating Scaled Multiply-Add */ +static uint16_t vwsmaccu8(CPURISCVState *env, uint8_t a, uint8_t b, + uint16_t c) +{ + uint8_t round; + uint16_t res = (uint16_t)a * (uint16_t)b; + + round = get_round(env, res, 4); + res = (res >> 4) + round; + return saddu16(env, c, res); +} +static uint32_t vwsmaccu16(CPURISCVState *env, uint16_t a, uint16_t b, + uint32_t c) +{ + uint8_t round; + uint32_t res = (uint32_t)a * (uint32_t)b; + + round = get_round(env, res, 8); + res = (res >> 8) + round; + return saddu32(env, c, res); +} +static uint64_t vwsmaccu32(CPURISCVState *env, uint32_t a, uint32_t b, + uint64_t c) +{ + uint8_t round; + uint64_t res = (uint64_t)a * (uint64_t)b; + + round = get_round(env, res, 16); + res = (res >> 16) + round; + return saddu64(env, c, res); +} + +#define OPIVV3_ENV(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \ +static void do_##NAME(void *vd, void *vs1, void *vs2, int i, \ + CPURISCVState *env) \ +{ \ + TX1 s1 = *((T1 *)vs1 + HS1(i)); \ + TX2 s2 = *((T2 *)vs2 + HS2(i)); \ + TD d = *((TD *)vd + HD(i)); \ + *((TD *)vd + HD(i)) = OP(env, s2, s1, d); \ +} +RVVCALL(OPIVV3_ENV, vwsmaccu_vv_b, WOP_UUU_B, H2, H1, H1, vwsmaccu8) +RVVCALL(OPIVV3_ENV, vwsmaccu_vv_h, WOP_UUU_H, H4, H2, H2, vwsmaccu16) +RVVCALL(OPIVV3_ENV, vwsmaccu_vv_w, WOP_UUU_W, H8, H4, H4, vwsmaccu32) +GEN_VEXT_VV_ENV(vwsmaccu_vv_b, 1, 2, clearh) +GEN_VEXT_VV_ENV(vwsmaccu_vv_h, 2, 4, clearl) +GEN_VEXT_VV_ENV(vwsmaccu_vv_w, 4, 8, clearq) + +#define OPIVX3_ENV(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \ +static void do_##NAME(void *vd, target_ulong s1, void *vs2, int i, \ + CPURISCVState *env) \ +{ \ + TX2 s2 = *((T2 *)vs2 + HS2(i)); \ + TD d = *((TD *)vd + HD(i)); \ + *((TD *)vd + HD(i)) = OP(env, s2, (TX1)(T1)(target_long)s1, d); \ +} +RVVCALL(OPIVX3_ENV, vwsmaccu_vx_b, WOP_UUU_B, H2, H1, vwsmaccu8) +RVVCALL(OPIVX3_ENV, vwsmaccu_vx_h, WOP_UUU_H, H4, H2, vwsmaccu16) +RVVCALL(OPIVX3_ENV, vwsmaccu_vx_w, WOP_UUU_W, H8, H4, vwsmaccu32) +GEN_VEXT_VX_ENV(vwsmaccu_vx_b, 1, 2, clearh) +GEN_VEXT_VX_ENV(vwsmaccu_vx_h, 2, 4, clearl) +GEN_VEXT_VX_ENV(vwsmaccu_vx_w, 4, 8, clearq) + +static int16_t vwsmacc8(CPURISCVState *env, int8_t a, int8_t b, int16_t c) +{ + uint8_t round; + int16_t res = (int16_t)a * (int16_t)b; + + round = get_round(env, res, 4); + res = (res >> 4) + round; + return sadd16(env, c, res); +} +static 
int32_t vwsmacc16(CPURISCVState *env, int16_t a, int16_t b, int32_t c) +{ + uint8_t round; + int32_t res = (int32_t)a * (int32_t)b; + + round = get_round(env, res, 8); + res = (res >> 8) + round; + return sadd32(env, c, res); + +} +static int64_t vwsmacc32(CPURISCVState *env, int32_t a, int32_t b, int64_t c) +{ + uint8_t round; + int64_t res = (int64_t)a * (int64_t)b; + + round = get_round(env, res, 16); + res = (res >> 16) + round; + return sadd64(env, c, res); +} +RVVCALL(OPIVV3_ENV, vwsmacc_vv_b, WOP_SSS_B, H2, H1, H1, vwsmacc8) +RVVCALL(OPIVV3_ENV, vwsmacc_vv_h, WOP_SSS_H, H4, H2, H2, vwsmacc16) +RVVCALL(OPIVV3_ENV, vwsmacc_vv_w, WOP_SSS_W, H8, H4, H4, vwsmacc32) +GEN_VEXT_VV_ENV(vwsmacc_vv_b, 1, 2, clearh) +GEN_VEXT_VV_ENV(vwsmacc_vv_h, 2, 4, clearl) +GEN_VEXT_VV_ENV(vwsmacc_vv_w, 4, 8, clearq) +RVVCALL(OPIVX3_ENV, vwsmacc_vx_b, WOP_SSS_B, H2, H1, vwsmacc8) +RVVCALL(OPIVX3_ENV, vwsmacc_vx_h, WOP_SSS_H, H4, H2, vwsmacc16) +RVVCALL(OPIVX3_ENV, vwsmacc_vx_w, WOP_SSS_W, H8, H4, vwsmacc32) +GEN_VEXT_VX_ENV(vwsmacc_vx_b, 1, 2, clearh) +GEN_VEXT_VX_ENV(vwsmacc_vx_h, 2, 4, clearl) +GEN_VEXT_VX_ENV(vwsmacc_vx_w, 4, 8, clearq) + +static int16_t vwsmaccsu8(CPURISCVState *env, uint8_t a, int8_t b, int16_t c) +{ + uint8_t round; + int16_t res = (uint16_t)a * (int16_t)b; + + round = get_round(env, res, 4); + res = (res >> 4) + round; + return ssub16(env, c, res); +} +static int32_t vwsmaccsu16(CPURISCVState *env, uint16_t a, int16_t b, + uint32_t c) +{ + uint8_t round; + int32_t res = (uint32_t)a * (int32_t)b; + + round = get_round(env, res, 8); + res = (res >> 8) + round; + return ssub32(env, c, res); +} +static int64_t vwsmaccsu32(CPURISCVState *env, uint32_t a, int32_t b, + int64_t c) +{ + uint8_t round; + int64_t res = (uint64_t)a * (int64_t)b; + + round = get_round(env, res, 16); + res = (res >> 16) + round; + return ssub64(env, c, res); +} +RVVCALL(OPIVV3_ENV, vwsmaccsu_vv_b, WOP_SSU_B, H2, H1, H1, vwsmaccsu8) +RVVCALL(OPIVV3_ENV, vwsmaccsu_vv_h, WOP_SSU_H, H4, H2, H2, vwsmaccsu16) +RVVCALL(OPIVV3_ENV, vwsmaccsu_vv_w, WOP_SSU_W, H8, H4, H4, vwsmaccsu32) +GEN_VEXT_VV_ENV(vwsmaccsu_vv_b, 1, 2, clearh) +GEN_VEXT_VV_ENV(vwsmaccsu_vv_h, 2, 4, clearl) +GEN_VEXT_VV_ENV(vwsmaccsu_vv_w, 4, 8, clearq) +RVVCALL(OPIVX3_ENV, vwsmaccsu_vx_b, WOP_SSU_B, H2, H1, vwsmaccsu8) +RVVCALL(OPIVX3_ENV, vwsmaccsu_vx_h, WOP_SSU_H, H4, H2, vwsmaccsu16) +RVVCALL(OPIVX3_ENV, vwsmaccsu_vx_w, WOP_SSU_W, H8, H4, vwsmaccsu32) +GEN_VEXT_VX_ENV(vwsmaccsu_vx_b, 1, 2, clearh) +GEN_VEXT_VX_ENV(vwsmaccsu_vx_h, 2, 4, clearl) +GEN_VEXT_VX_ENV(vwsmaccsu_vx_w, 4, 8, clearq) + +static int16_t vwsmaccus8(CPURISCVState *env, int8_t a, uint8_t b, int16_t c) +{ + uint8_t round; + int16_t res = (int16_t)a * (uint16_t)b; + + round = get_round(env, res, 4); + res = (res >> 4) + round; + return ssub16(env, c, res); +} +static int32_t vwsmaccus16(CPURISCVState *env, int16_t a, uint16_t b, int32_t c) +{ + uint8_t round; + int32_t res = (int32_t)a * (uint32_t)b; + + round = get_round(env, res, 8); + res = (res >> 8) + round; + return ssub32(env, c, res); +} +static int64_t vwsmaccus32(CPURISCVState *env, int32_t a, uint32_t b, int64_t c) +{ + uint8_t round; + int64_t res = (int64_t)a * (uint64_t)b; + + round = get_round(env, res, 16); + res = (res >> 16) + round; + return ssub64(env, c, res); +} +RVVCALL(OPIVX3_ENV, vwsmaccus_vx_b, WOP_SUS_B, H2, H1, vwsmaccus8) +RVVCALL(OPIVX3_ENV, vwsmaccus_vx_h, WOP_SUS_H, H4, H2, vwsmaccus16) +RVVCALL(OPIVX3_ENV, vwsmaccus_vx_w, WOP_SUS_W, H8, H4, vwsmaccus32) +GEN_VEXT_VX_ENV(vwsmaccus_vx_b, 1, 2, clearh) 
+GEN_VEXT_VX_ENV(vwsmaccus_vx_h, 2, 4, clearl)
+GEN_VEXT_VX_ENV(vwsmaccus_vx_w, 4, 8, clearq)

From patchwork Mon Mar 9 12:14:24 2020
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 11427041
From: LIU Zhiwei
Subject: [PATCH v3 27/60] target/riscv: vector single-width scaling shift instructions
Date: Mon, 9 Mar 2020 20:14:24 +0800
Message-Id: <20200309121457.13719-28-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h | 17 ++++
 target/riscv/insn32.decode | 6 ++
 target/riscv/insn_trans/trans_rvv.inc.c | 8 ++
 target/riscv/vector_helper.c | 109 ++++++++++++++++++++++++
 4 files changed, 140 insertions(+)

diff --git
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 74c1c695e0..efc84fbd79 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -755,3 +755,20 @@ DEF_HELPER_6(vwsmaccsu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vwsmaccus_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vwsmaccus_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vwsmaccus_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vssrl_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vssrl_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vssrl_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vssrl_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vssra_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vssra_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vssra_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vssra_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vssrl_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vssrl_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vssrl_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vssrl_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vssra_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vssra_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vssra_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vssra_vx_d, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 8798919d3e..d6d111e04a 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -426,6 +426,12 @@ vwsmacc_vx      111101 . ..... ..... 100 ..... 1010111 @r_vm
 vwsmaccsu_vv    111110 . ..... ..... 000 ..... 1010111 @r_vm
 vwsmaccsu_vx    111110 . ..... ..... 100 ..... 1010111 @r_vm
 vwsmaccus_vx    111111 . ..... ..... 100 ..... 1010111 @r_vm
+vssrl_vv        101010 . ..... ..... 000 ..... 1010111 @r_vm
+vssrl_vx        101010 . ..... ..... 100 ..... 1010111 @r_vm
+vssrl_vi        101010 . ..... ..... 011 ..... 1010111 @r_vm
+vssra_vv        101011 . ..... ..... 000 ..... 1010111 @r_vm
+vssra_vx        101011 . ..... ..... 100 ..... 1010111 @r_vm
+vssra_vi        101011 . ..... ..... 011 ..... 1010111 @r_vm
 vsetvli         0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl          1000000 ..... ..... 111 ..... 1010111 @r
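For readers new to decodetree: each pattern above packs funct6, vm, vs2, vs1,
funct3, vd and the OP-V major opcode, most significant bits first. A hedged
sketch of the field extraction (extract_field is an illustrative helper; the
bit positions follow the standard RVV encoding):

#include <stdint.h>

static inline uint32_t extract_field(uint32_t insn, int lo, int len)
{
    return (insn >> lo) & ((1u << len) - 1);
}

/*
 * For vssrl.vv: extract_field(insn, 26, 6) == 0x2a (binary 101010),
 * vm  = extract_field(insn, 25, 1), vs2 = extract_field(insn, 20, 5),
 * vs1 = extract_field(insn, 15, 5), funct3 = extract_field(insn, 12, 3)
 * (000 selects the OPIVV form), vd = extract_field(insn, 7, 5), and
 * bits 6:0 hold the 1010111 OP-V opcode.
 */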
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index 68bebd3c37..21f896ea26 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -1541,3 +1541,11 @@ GEN_OPIVX_WIDEN_TRANS(vwsmaccu_vx)
 GEN_OPIVX_WIDEN_TRANS(vwsmacc_vx)
 GEN_OPIVX_WIDEN_TRANS(vwsmaccsu_vx)
 GEN_OPIVX_WIDEN_TRANS(vwsmaccus_vx)
+
+/* Vector Single-Width Scaling Shift Instructions */
+GEN_OPIVV_TRANS(vssrl_vv, opivv_check)
+GEN_OPIVV_TRANS(vssra_vv, opivv_check)
+GEN_OPIVX_TRANS(vssrl_vx, opivx_check)
+GEN_OPIVX_TRANS(vssra_vx, opivx_check)
+GEN_OPIVI_TRANS(vssrl_vi, 1, vssrl_vx, opivx_check)
+GEN_OPIVI_TRANS(vssra_vi, 0, vssra_vx, opivx_check)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 90c19577fa..ec0f822fcf 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -2703,3 +2703,112 @@ RVVCALL(OPIVX3_ENV, vwsmaccus_vx_w, WOP_SUS_W, H8, H4, vwsmaccus32)
 GEN_VEXT_VX_ENV(vwsmaccus_vx_b, 1, 2, clearh)
 GEN_VEXT_VX_ENV(vwsmaccus_vx_h, 2, 4, clearl)
 GEN_VEXT_VX_ENV(vwsmaccus_vx_w, 4, 8, clearq)
+
+/* Vector Single-Width Scaling Shift Instructions */
+static uint8_t vssrl8(CPURISCVState *env, uint8_t a, uint8_t b)
+{
+    uint8_t round, shift = b & 0x7;
+    uint8_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    return res;
+}
+static uint16_t vssrl16(CPURISCVState *env, uint16_t a, uint16_t b)
+{
+    uint8_t round, shift = b & 0xf;
+    uint16_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    return res;
+}
+static uint32_t vssrl32(CPURISCVState *env, uint32_t a, uint32_t b)
+{
+    uint8_t round, shift = b & 0x1f;
+    uint32_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    return res;
+}
+static uint64_t vssrl64(CPURISCVState *env, uint64_t a, uint64_t b)
+{
+    uint8_t round, shift = b & 0x3f;
+    uint64_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    return res;
+}
+RVVCALL(OPIVV2_ENV, vssrl_vv_b, OP_UUU_B, H1, H1, H1, vssrl8)
+RVVCALL(OPIVV2_ENV, vssrl_vv_h, OP_UUU_H, H2, H2, H2, vssrl16)
+RVVCALL(OPIVV2_ENV, vssrl_vv_w, OP_UUU_W, H4, H4, H4, vssrl32)
+RVVCALL(OPIVV2_ENV, vssrl_vv_d, OP_UUU_D, H8, H8, H8, vssrl64)
+GEN_VEXT_VV_ENV(vssrl_vv_b, 1, 1, clearb)
+GEN_VEXT_VV_ENV(vssrl_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vssrl_vv_w, 4, 4, clearl)
+GEN_VEXT_VV_ENV(vssrl_vv_d, 8, 8, clearq)
+
+RVVCALL(OPIVX2_ENV, vssrl_vx_b, OP_UUU_B, H1, H1, vssrl8)
+RVVCALL(OPIVX2_ENV, vssrl_vx_h, OP_UUU_H, H2, H2, vssrl16)
+RVVCALL(OPIVX2_ENV, vssrl_vx_w, OP_UUU_W, H4, H4, vssrl32)
+RVVCALL(OPIVX2_ENV, vssrl_vx_d, OP_UUU_D, H8, H8, vssrl64)
+GEN_VEXT_VX_ENV(vssrl_vx_b, 1, 1, clearb)
+GEN_VEXT_VX_ENV(vssrl_vx_h, 2, 2, clearh)
+GEN_VEXT_VX_ENV(vssrl_vx_w, 4, 4, clearl)
+GEN_VEXT_VX_ENV(vssrl_vx_d, 8, 8, clearq)
+
+static int8_t vssra8(CPURISCVState *env, int8_t a, int8_t b)
+{
+    uint8_t round, shift = b & 0x7;
+    int8_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    return res;
+}
+static int16_t vssra16(CPURISCVState *env, int16_t a, int16_t b)
+{
+    uint8_t round, shift = b & 0xf;
+    int16_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    return res;
+}
+static int32_t vssra32(CPURISCVState *env, int32_t a, int32_t b)
+{
+    uint8_t round, shift = b & 0x1f;
+    int32_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    return res;
+}
+static int64_t vssra64(CPURISCVState *env, int64_t a, int64_t b)
+{
+    uint8_t round, shift = b & 0x3f;
+    int64_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    return res;
+}
+RVVCALL(OPIVV2_ENV, vssra_vv_b, OP_SSS_B, H1, H1, H1, vssra8)
+RVVCALL(OPIVV2_ENV, vssra_vv_h, OP_SSS_H, H2, H2, H2, vssra16)
+RVVCALL(OPIVV2_ENV, vssra_vv_w, OP_SSS_W, H4, H4, H4, vssra32)
+RVVCALL(OPIVV2_ENV, vssra_vv_d, OP_SSS_D, H8, H8, H8, vssra64)
+GEN_VEXT_VV_ENV(vssra_vv_b, 1, 1, clearb)
+GEN_VEXT_VV_ENV(vssra_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vssra_vv_w, 4, 4, clearl)
+GEN_VEXT_VV_ENV(vssra_vv_d, 8, 8, clearq)
+
+RVVCALL(OPIVX2_ENV, vssra_vx_b, OP_SSS_B, H1, H1, vssra8)
+RVVCALL(OPIVX2_ENV, vssra_vx_h, OP_SSS_H, H2, H2, vssra16)
+RVVCALL(OPIVX2_ENV, vssra_vx_w, OP_SSS_W, H4, H4, vssra32)
+RVVCALL(OPIVX2_ENV, vssra_vx_d, OP_SSS_D, H8, H8, vssra64)
+GEN_VEXT_VX_ENV(vssra_vx_b, 1, 1, clearb)
+GEN_VEXT_VX_ENV(vssra_vx_h, 2, 2, clearh)
+GEN_VEXT_VX_ENV(vssra_vx_w, 4, 4, clearl)
+GEN_VEXT_VX_ENV(vssra_vx_d, 8, 8, clearq)

From patchwork Mon Mar 9 12:14:25 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
Subject: [PATCH v3 28/60] target/riscv: vector narrowing fixed-point clip instructions
Date: Mon, 9 Mar 2020 20:14:25 +0800
Message-Id: <20200309121457.13719-29-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
 target/riscv/helper.h                   |  13 +++
 target/riscv/insn32.decode              |   6 ++
 target/riscv/insn_trans/trans_rvv.inc.c |   8 ++
 target/riscv/vector_helper.c            | 128 ++++++++++++++++++++++++
 4 files changed, 155 insertions(+)
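The clip helpers in this patch shift a double-width source right, round, and
then saturate into the narrower destination, recording any clipping in vxsat.
A standalone sketch of the 16-to-8-bit case (round comes from the get_round()
logic sketched earlier; nclip8_sketch is illustrative, not the patch's code):

#include <stdbool.h>
#include <stdint.h>

static int8_t nclip8_sketch(int16_t a, uint8_t shift, uint8_t round, bool *sat)
{
    int16_t res = (int16_t)(a >> shift) + round;

    if (res > INT8_MAX) {
        *sat = true;               /* mirrors env->vxsat = 0x1 */
        return INT8_MAX;
    } else if (res < INT8_MIN) {
        *sat = true;
        return INT8_MIN;
    }
    return (int8_t)res;
}

/* Example: a = 0x7fff, shift = 4, rnu rounding: (0x7fff >> 4) + 1 = 0x800,
 * which exceeds 127, so the result saturates to 127 and vxsat is set. */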
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index efc84fbd79..4cad8679ec 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -772,3 +772,16 @@ DEF_HELPER_6(vssra_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vssra_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vssra_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vssra_vx_d, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vnclip_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnclip_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnclip_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnclipu_vv_b, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnclipu_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnclipu_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vnclipu_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnclipu_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnclipu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnclip_vx_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnclip_vx_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vnclip_vx_w, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index d6d111e04a..c7d589566f 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -432,6 +432,12 @@ vssrl_vi        101010 . ..... ..... 011 ..... 1010111 @r_vm
 vssra_vv        101011 . ..... ..... 000 ..... 1010111 @r_vm
 vssra_vx        101011 . ..... ..... 100 ..... 1010111 @r_vm
 vssra_vi        101011 . ..... ..... 011 ..... 1010111 @r_vm
+vnclipu_vv      101110 . ..... ..... 000 ..... 1010111 @r_vm
+vnclipu_vx      101110 . ..... ..... 100 ..... 1010111 @r_vm
+vnclipu_vi      101110 . ..... ..... 011 ..... 1010111 @r_vm
+vnclip_vv       101111 . ..... ..... 000 ..... 1010111 @r_vm
+vnclip_vx       101111 . ..... ..... 100 ..... 1010111 @r_vm
+vnclip_vi       101111 . ..... ..... 011 ..... 1010111 @r_vm
 vsetvli         0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl          1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index 21f896ea26..11b4887275 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -1549,3 +1549,11 @@ GEN_OPIVX_TRANS(vssrl_vx, opivx_check)
 GEN_OPIVX_TRANS(vssra_vx, opivx_check)
 GEN_OPIVI_TRANS(vssrl_vi, 1, vssrl_vx, opivx_check)
 GEN_OPIVI_TRANS(vssra_vi, 0, vssra_vx, opivx_check)
+
+/* Vector Narrowing Fixed-Point Clip Instructions */
+GEN_OPIVV_NARROW_TRANS(vnclipu_vv)
+GEN_OPIVV_NARROW_TRANS(vnclip_vv)
+GEN_OPIVX_NARROW_TRANS(vnclipu_vx)
+GEN_OPIVX_NARROW_TRANS(vnclip_vx)
+GEN_OPIVI_NARROW_TRANS(vnclipu_vi, 1, vnclipu_vx)
+GEN_OPIVI_NARROW_TRANS(vnclip_vi, 1, vnclip_vx)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index ec0f822fcf..7f61d4c0c4 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -869,6 +869,12 @@ GEN_VEXT_AMO(vamomaxuw_v_w, uint32_t, uint32_t, idx_w, clearl)
 #define WOP_SSU_B int16_t, int8_t, uint8_t, int16_t, uint16_t
 #define WOP_SSU_H int32_t, int16_t, uint16_t, int32_t, uint32_t
 #define WOP_SSU_W int64_t, int32_t, uint32_t, int64_t, uint64_t
+#define NOP_SSS_B int8_t, int8_t, int16_t, int8_t, int16_t
+#define NOP_SSS_H int16_t, int16_t, int32_t, int16_t, int32_t
+#define NOP_SSS_W int32_t, int32_t, int64_t, int32_t, int64_t
+#define NOP_UUU_B uint8_t, uint8_t, uint16_t, uint8_t, uint16_t
+#define NOP_UUU_H uint16_t, uint16_t, uint32_t, uint16_t, uint32_t
+#define NOP_UUU_W uint32_t, uint32_t, uint64_t, uint32_t, uint64_t
 
 /* operation of two vector elements */
 #define OPIVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \
@@ -2812,3 +2818,125 @@ GEN_VEXT_VX_ENV(vssra_vx_b, 1, 1, clearb)
 GEN_VEXT_VX_ENV(vssra_vx_h, 2, 2, clearh)
 GEN_VEXT_VX_ENV(vssra_vx_w, 4, 4, clearl)
 GEN_VEXT_VX_ENV(vssra_vx_d, 8, 8, clearq)
+
+/* Vector Narrowing Fixed-Point Clip Instructions */
+static int8_t vnclip8(CPURISCVState *env, int16_t a, int8_t b)
+{
+    uint8_t round, shift = b & 0xf;
+    int16_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    if (res > INT8_MAX) {
+        env->vxsat = 0x1;
+        return INT8_MAX;
+    } else if (res < INT8_MIN) {
+        env->vxsat = 0x1;
+        return INT8_MIN;
+    } else {
+        return res;
+    }
+}
+static int16_t vnclip16(CPURISCVState *env, int32_t a, int16_t b)
+{
+    uint8_t round, shift = b & 0x1f;
+    int32_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    if (res > INT16_MAX) {
+        env->vxsat = 0x1;
+        return INT16_MAX;
+    } else if (res < INT16_MIN) {
+        env->vxsat = 0x1;
+        return INT16_MIN;
+    } else {
+        return res;
+    }
+}
+static int32_t vnclip32(CPURISCVState *env, int64_t a, int32_t b)
+{
+    uint8_t round, shift = b & 0x3f;
+    int64_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    if (res > INT32_MAX) {
+        env->vxsat = 0x1;
+        return INT32_MAX;
+    } else if (res < INT32_MIN) {
+        env->vxsat = 0x1;
+        return INT32_MIN;
+    } else {
+        return res;
+    }
+}
+RVVCALL(OPIVV2_ENV, vnclip_vv_b, NOP_SSS_B, H1, H2, H1, vnclip8)
+RVVCALL(OPIVV2_ENV, vnclip_vv_h, NOP_SSS_H, H2, H4, H2, vnclip16)
+RVVCALL(OPIVV2_ENV, vnclip_vv_w, NOP_SSS_W, H4, H8, H4, vnclip32)
+GEN_VEXT_VV_ENV(vnclip_vv_b, 1, 1, clearb)
+GEN_VEXT_VV_ENV(vnclip_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vnclip_vv_w, 4, 4, clearl)
+
+RVVCALL(OPIVX2_ENV, vnclip_vx_b, NOP_SSS_B, H1, H2, vnclip8)
+RVVCALL(OPIVX2_ENV, vnclip_vx_h, NOP_SSS_H, H2, H4, vnclip16)
+RVVCALL(OPIVX2_ENV, vnclip_vx_w, NOP_SSS_W, H4, H8, vnclip32)
+GEN_VEXT_VX_ENV(vnclip_vx_b, 1, 1, clearb)
+GEN_VEXT_VX_ENV(vnclip_vx_h, 2, 2, clearh)
+GEN_VEXT_VX_ENV(vnclip_vx_w, 4, 4, clearl)
+
+static uint8_t vnclipu8(CPURISCVState *env, uint16_t a, uint8_t b)
+{
+    uint8_t round, shift = b & 0xf;
+    uint16_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    if (res > UINT8_MAX) {
+        env->vxsat = 0x1;
+        return UINT8_MAX;
+    } else {
+        return res;
+    }
+}
+static uint16_t vnclipu16(CPURISCVState *env, uint32_t a, uint16_t b)
+{
+    uint8_t round, shift = b & 0x1f;
+    uint32_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    if (res > UINT16_MAX) {
+        env->vxsat = 0x1;
+        return UINT16_MAX;
+    } else {
+        return res;
+    }
+}
+static uint32_t vnclipu32(CPURISCVState *env, uint64_t a, uint32_t b)
+{
+    uint8_t round, shift = b & 0x3f;
+    uint64_t res;
+
+    round = get_round(env, a, shift);
+    res = (a >> shift) + round;
+    if (res > UINT32_MAX) {
+        env->vxsat = 0x1;
+        return UINT32_MAX;
+    } else {
+        return res;
+    }
+}
+RVVCALL(OPIVV2_ENV, vnclipu_vv_b, NOP_UUU_B, H1, H2, H1, vnclipu8)
+RVVCALL(OPIVV2_ENV, vnclipu_vv_h, NOP_UUU_H, H2, H4, H2, vnclipu16)
+RVVCALL(OPIVV2_ENV, vnclipu_vv_w, NOP_UUU_W, H4, H8, H4, vnclipu32)
+GEN_VEXT_VV_ENV(vnclipu_vv_b, 1, 1, clearb)
+GEN_VEXT_VV_ENV(vnclipu_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vnclipu_vv_w, 4, 4, clearl)
+
+RVVCALL(OPIVX2_ENV, vnclipu_vx_b, NOP_UUU_B, H1, H2, vnclipu8)
+RVVCALL(OPIVX2_ENV, vnclipu_vx_h, NOP_UUU_H, H2, H4, vnclipu16)
+RVVCALL(OPIVX2_ENV, vnclipu_vx_w, NOP_UUU_W, H4, H8, vnclipu32)
+GEN_VEXT_VX_ENV(vnclipu_vx_b, 1, 1, clearb)
+GEN_VEXT_VX_ENV(vnclipu_vx_h, 2, 2, clearh)
+GEN_VEXT_VX_ENV(vnclipu_vx_w, 4, 4, clearl)
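One subtlety worth spelling out in the unsigned clips above: the rounded
intermediate must stay in an unsigned 64-bit type. A hedged sketch of the
hazard (nclipu32_sketch is illustrative, not the helper itself):

#include <stdint.h>

static uint32_t nclipu32_sketch(uint64_t a, uint8_t shift, uint8_t round)
{
    uint64_t res = (a >> shift) + round;   /* keep it unsigned end to end */

    return res > UINT32_MAX ? UINT32_MAX : (uint32_t)res;
}

/* With a signed 64-bit intermediate, a source above INT64_MAX shifted by 0
 * would come out negative and slip past the "res > UINT32_MAX" saturation
 * test; an unsigned intermediate keeps the comparison correct for the full
 * input range. */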
From patchwork Mon Mar 9 12:14:26 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
Subject: [PATCH v3 29/60] target/riscv: vector single-width floating-point add/subtract instructions
Date: Mon, 9 Mar 2020 20:14:26 +0800
Message-Id: <20200309121457.13719-30-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
 target/riscv/helper.h                   |  16 ++++
 target/riscv/insn32.decode              |   5 ++
 target/riscv/insn_trans/trans_rvv.inc.c | 107 ++++++++++++++++++++++++
 target/riscv/vector_helper.c            |  89 ++++++++++++++++++++
 4 files changed, 217 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 4cad8679ec..6b46677eeb 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -785,3 +785,19 @@ DEF_HELPER_6(vnclipu_vx_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vnclip_vx_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vnclip_vx_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vnclip_vx_w, void, ptr, ptr, tl, ptr, env, i32)
+
+DEF_HELPER_6(vfadd_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfadd_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfadd_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfsub_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfsub_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfsub_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfadd_vf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfadd_vf_w, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfadd_vf_d, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfsub_vf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfsub_vf_w, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfsub_vf_d, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfrsub_vf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfrsub_vf_w, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfrsub_vf_d, void, ptr, ptr, i64, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index c7d589566f..32918c4d11 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -438,6 +438,11 @@ vnclipu_vi      101110 . ..... ..... 011 ..... 1010111 @r_vm
 vnclip_vv       101111 . ..... ..... 000 ..... 1010111 @r_vm
 vnclip_vx       101111 . ..... ..... 100 ..... 1010111 @r_vm
 vnclip_vi       101111 . ..... ..... 011 ..... 1010111 @r_vm
+vfadd_vv        000000 . ..... ..... 001 ..... 1010111 @r_vm
+vfadd_vf        000000 . ..... ..... 101 ..... 1010111 @r_vm
+vfsub_vv        000010 . ..... ..... 001 ..... 1010111 @r_vm
+vfsub_vf        000010 . ..... ..... 101 ..... 1010111 @r_vm
+vfrsub_vf       100111 . ..... ..... 101 ..... 1010111 @r_vm
 vsetvli         0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl          1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index 11b4887275..af4dcb96c6 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -1557,3 +1557,110 @@ GEN_OPIVX_NARROW_TRANS(vnclipu_vx)
 GEN_OPIVX_NARROW_TRANS(vnclip_vx)
 GEN_OPIVI_NARROW_TRANS(vnclipu_vi, 1, vnclipu_vx)
 GEN_OPIVI_NARROW_TRANS(vnclip_vi, 1, vnclip_vx)
+
+/*
+ *** Vector Floating-Point Arithmetic Instructions
+ */
+/* Vector Single-Width Floating-Point Add/Subtract Instructions */
+
+/*
+ * If the current SEW does not correspond to a supported IEEE floating-point
+ * type, an illegal instruction exception is raised.
+ */
+static bool opfvv_check(DisasContext *s, arg_rmrr *a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, false) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_reg(s, a->rs2, false) &&
+            vext_check_reg(s, a->rs1, false) &&
+            (s->sew != 0));
+}
+
+/* OPFVV without GVEC IR */
+#define GEN_OPFVV_TRANS(NAME, CHECK)                               \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a)             \
+{                                                                  \
+    if (CHECK(s, a)) {                                             \
+        uint32_t data = 0;                                         \
+        static gen_helper_gvec_4_ptr * const fns[3] = {            \
+            gen_helper_##NAME##_h,                                 \
+            gen_helper_##NAME##_w,                                 \
+            gen_helper_##NAME##_d,                                 \
+        };                                                         \
+        data = FIELD_DP32(data, VDATA, MLEN, s->mlen);             \
+        data = FIELD_DP32(data, VDATA, VM, a->vm);                 \
+        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);             \
+        tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),     \
+                           vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2), \
+                           cpu_env, 0, s->vlen / 8, data, fns[s->sew - 1]); \
+        return true;                                               \
+    }                                                              \
+    return false;                                                  \
+}
+GEN_OPFVV_TRANS(vfadd_vv, opfvv_check)
+GEN_OPFVV_TRANS(vfsub_vv, opfvv_check)
+
+typedef void (*gen_helper_opfvf)(TCGv_ptr, TCGv_ptr, TCGv_i64, TCGv_ptr,
+                                 TCGv_env, TCGv_i32);
+
+static bool opfvf_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
+                        uint32_t data, gen_helper_opfvf fn, DisasContext *s)
+{
+    TCGv_ptr dest, src2, mask;
+    TCGv_i32 desc;
+
+    dest = tcg_temp_new_ptr();
+    mask = tcg_temp_new_ptr();
+    src2 = tcg_temp_new_ptr();
+    desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data));
+
+    tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
+    tcg_gen_addi_ptr(src2, cpu_env, vreg_ofs(s, vs2));
+    tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
+
+    fn(dest, mask, cpu_fpr[rs1], src2, cpu_env, desc);
+
+    tcg_temp_free_ptr(dest);
+    tcg_temp_free_ptr(mask);
+    tcg_temp_free_ptr(src2);
+    tcg_temp_free_i32(desc);
+    return true;
+}
+
+static bool opfvf_check(DisasContext *s, arg_rmrr *a)
+{
+/*
+ * If the current SEW does not correspond to a supported IEEE floating-point
+ * type, an illegal instruction exception is raised.
+ */
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, false) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_reg(s, a->rs2, false) &&
+            (s->sew != 0));
+}
+
+/* OPFVF without GVEC IR */
+#define GEN_OPFVF_TRANS(NAME, CHECK)                              \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a)            \
+{                                                                 \
+    if (CHECK(s, a)) {                                            \
+        uint32_t data = 0;                                        \
+        static gen_helper_opfvf const fns[3] = {                  \
+            gen_helper_##NAME##_h,                                \
+            gen_helper_##NAME##_w,                                \
+            gen_helper_##NAME##_d,                                \
+        };                                                        \
+        data = FIELD_DP32(data, VDATA, MLEN, s->mlen);            \
+        data = FIELD_DP32(data, VDATA, VM, a->vm);                \
+        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);            \
+        return opfvf_trans(a->rd, a->rs1, a->rs2, data,           \
+                           fns[s->sew - 1], s);                   \
+    }                                                             \
+    return false;                                                 \
+}
+
+GEN_OPFVF_TRANS(vfadd_vf, opfvf_check)
+GEN_OPFVF_TRANS(vfsub_vf, opfvf_check)
+GEN_OPFVF_TRANS(vfrsub_vf, opfvf_check)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 7f61d4c0c4..d49a7194f7 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -21,6 +21,7 @@
 #include "exec/memop.h"
 #include "exec/exec-all.h"
 #include "exec/helper-proto.h"
+#include "fpu/softfloat.h"
 #include "tcg/tcg-gvec-desc.h"
 #include
@@ -2940,3 +2941,91 @@ RVVCALL(OPIVX2_ENV, vnclipu_vx_w, NOP_UUU_W, H4, H8, vnclipu32)
 GEN_VEXT_VX_ENV(vnclipu_vx_b, 1, 1, clearb)
 GEN_VEXT_VX_ENV(vnclipu_vx_h, 2, 2, clearh)
 GEN_VEXT_VX_ENV(vnclipu_vx_w, 4, 4, clearl)
+
+/*
+ *** Vector Floating-Point Arithmetic Instructions
+ */
+/* Vector Single-Width Floating-Point Add/Subtract Instructions */
+#define OPFVV2(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP)   \
+static void do_##NAME(void *vd, void *vs1, void *vs2, int i,   \
+                      CPURISCVState *env)                      \
+{                                                              \
+    TX1 s1 = *((T1 *)vs1 + HS1(i));                            \
+    TX2 s2 = *((T2 *)vs2 + HS2(i));                            \
+    *((TD *)vd + HD(i)) = OP(s2, s1, &env->fp_status);         \
+}
+RVVCALL(OPFVV2, vfadd_vv_h, OP_UUU_H, H2, H2, H2, float16_add)
+RVVCALL(OPFVV2, vfadd_vv_w, OP_UUU_W, H4, H4, H4, float32_add)
+RVVCALL(OPFVV2, vfadd_vv_d, OP_UUU_D, H8, H8, H8, float64_add)
+GEN_VEXT_VV_ENV(vfadd_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vfadd_vv_w, 4, 4, clearl)
+GEN_VEXT_VV_ENV(vfadd_vv_d, 8, 8, clearq)
+
+#define OPFVF2(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP)        \
+static void do_##NAME(void *vd, uint64_t s1, void *vs2, int i, \
+                      CPURISCVState *env)                      \
+{                                                              \
+    TX2 s2 = *((T2 *)vs2 + HS2(i));                            \
+    *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1, &env->fp_status);\
+}
+
+#define GEN_VEXT_VF(NAME, ESZ, DSZ, CLEAR_FN)                  \
+void HELPER(NAME)(void *vd, void *v0, uint64_t s1,             \
+                  void *vs2, CPURISCVState *env, uint32_t desc)\
+{                                                              \
+    uint32_t vlmax = vext_maxsz(desc) / ESZ;                   \
+    uint32_t mlen = vext_mlen(desc);                           \
+    uint32_t vm = vext_vm(desc);                               \
+    uint32_t vl = env->vl;                                     \
+    uint32_t i;                                                \
+                                                               \
+    for (i = 0; i < vl; i++) {                                 \
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {             \
+            continue;                                          \
+        }                                                      \
+        do_##NAME(vd, s1, vs2, i, env);                        \
+    }                                                          \
+    if (i != 0) {                                              \
+        CLEAR_FN(vd, vl, vl * DSZ, vlmax * DSZ);               \
+    }                                                          \
+}
+RVVCALL(OPFVF2, vfadd_vf_h, OP_UUU_H, H2, H2, float16_add)
+RVVCALL(OPFVF2, vfadd_vf_w, OP_UUU_W, H4, H4, float32_add)
+RVVCALL(OPFVF2, vfadd_vf_d, OP_UUU_D, H8, H8, float64_add)
+GEN_VEXT_VF(vfadd_vf_h, 2, 2, clearh)
+GEN_VEXT_VF(vfadd_vf_w, 4, 4, clearl)
+GEN_VEXT_VF(vfadd_vf_d, 8, 8, clearq)
+
+RVVCALL(OPFVV2, vfsub_vv_h, OP_UUU_H, H2, H2, H2, float16_sub)
+RVVCALL(OPFVV2, vfsub_vv_w, OP_UUU_W, H4, H4, H4, float32_sub)
+RVVCALL(OPFVV2, vfsub_vv_d, OP_UUU_D, H8, H8, H8, float64_sub)
+GEN_VEXT_VV_ENV(vfsub_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vfsub_vv_w, 4, 4, clearl)
+GEN_VEXT_VV_ENV(vfsub_vv_d, 8, 8, clearq)
+RVVCALL(OPFVF2, vfsub_vf_h, OP_UUU_H, H2, H2, float16_sub)
+RVVCALL(OPFVF2, vfsub_vf_w, OP_UUU_W, H4, H4, float32_sub)
+RVVCALL(OPFVF2, vfsub_vf_d, OP_UUU_D, H8, H8, float64_sub)
+GEN_VEXT_VF(vfsub_vf_h, 2, 2, clearh)
+GEN_VEXT_VF(vfsub_vf_w, 4, 4, clearl)
+GEN_VEXT_VF(vfsub_vf_d, 8, 8, clearq)
+
+static uint16_t float16_rsub(uint16_t a, uint16_t b, float_status *s)
+{
+    return float16_sub(b, a, s);
+}
+
+static uint32_t float32_rsub(uint32_t a, uint32_t b, float_status *s)
+{
+    return float32_sub(b, a, s);
+}
+
+static uint64_t float64_rsub(uint64_t a, uint64_t b, float_status *s)
+{
+    return float64_sub(b, a, s);
+}
+RVVCALL(OPFVF2, vfrsub_vf_h, OP_UUU_H, H2, H2, float16_rsub)
+RVVCALL(OPFVF2, vfrsub_vf_w, OP_UUU_W, H4, H4, float32_rsub)
+RVVCALL(OPFVF2, vfrsub_vf_d, OP_UUU_D, H8, H8, float64_rsub)
+GEN_VEXT_VF(vfrsub_vf_h, 2, 2, clearh)
+GEN_VEXT_VF(vfrsub_vf_w, 4, 4, clearl)
+GEN_VEXT_VF(vfrsub_vf_d, 8, 8, clearq)

From patchwork Mon Mar 9 12:14:27 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
Subject: [PATCH v3 30/60] target/riscv: vector widening floating-point add/subtract instructions
Date: Mon, 9 Mar 2020 20:14:27 +0800
Message-Id: <20200309121457.13719-31-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
 target/riscv/helper.h                   |  17 +++
 target/riscv/insn32.decode              |   8 ++
 target/riscv/insn_trans/trans_rvv.inc.c | 131 ++++++++++++++++++++++++
 target/riscv/vector_helper.c            |  77 ++++++++++++++
 4 files changed, 233 insertions(+)
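The helpers in this patch all follow one pattern: widen each narrow input
first, then perform a single operation in the wider format. Converting
float16 to float32, or float32 to float64, is exact, so each result is
rounded only once. A hedged sketch of the two shapes used below (raw-bit
uint types stand in for softfloat's float types, matching the series'
style; the sketch names are illustrative):

#include "fpu/softfloat.h"

/* vfwadd.vv style: both sources are narrow */
static uint64_t fwadd_vv_sketch(uint32_t a, uint32_t b, float_status *s)
{
    return float64_add(float32_to_float64(a, s),
                       float32_to_float64(b, s), s);
}

/* vfwadd.wv style: the first source is already wide */
static uint64_t fwadd_wv_sketch(uint64_t a, uint32_t b, float_status *s)
{
    return float64_add(a, float32_to_float64(b, s), s);
}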
diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 6b46677eeb..f242fa4e4b 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -801,3 +801,20 @@ DEF_HELPER_6(vfsub_vf_d, void, ptr, ptr, i64, ptr, env, i32)
 DEF_HELPER_6(vfrsub_vf_h, void, ptr, ptr, i64, ptr, env, i32)
 DEF_HELPER_6(vfrsub_vf_w, void, ptr, ptr, i64, ptr, env, i32)
 DEF_HELPER_6(vfrsub_vf_d, void, ptr, ptr, i64, ptr, env, i32)
+
+DEF_HELPER_6(vfwadd_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfwadd_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfwsub_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfwsub_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfwadd_wv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfwadd_wv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfwsub_wv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfwsub_wv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfwadd_vf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfwadd_vf_w, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfwsub_vf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfwsub_vf_w, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfwadd_wf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfwadd_wf_w, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfwsub_wf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfwsub_wf_w, void, ptr, ptr, i64, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 32918c4d11..5ec95541c6 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -443,6 +443,14 @@ vfadd_vf        000000 . ..... ..... 101 ..... 1010111 @r_vm
 vfsub_vv        000010 . ..... ..... 001 ..... 1010111 @r_vm
 vfsub_vf        000010 . ..... ..... 101 ..... 1010111 @r_vm
 vfrsub_vf       100111 . ..... ..... 101 ..... 1010111 @r_vm
+vfwadd_vv       110000 . ..... ..... 001 ..... 1010111 @r_vm
+vfwadd_vf       110000 . ..... ..... 101 ..... 1010111 @r_vm
+vfwadd_wv       110100 . ..... ..... 001 ..... 1010111 @r_vm
+vfwadd_wf       110100 . ..... ..... 101 ..... 1010111 @r_vm
+vfwsub_vv       110010 . ..... ..... 001 ..... 1010111 @r_vm
+vfwsub_vf       110010 . ..... ..... 101 ..... 1010111 @r_vm
+vfwsub_wv       110110 . ..... ..... 001 ..... 1010111 @r_vm
+vfwsub_wf       110110 . ..... ..... 101 ..... 1010111 @r_vm
 vsetvli         0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl          1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index af4dcb96c6..ab04f469af 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -1664,3 +1664,134 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
 GEN_OPFVF_TRANS(vfadd_vf, opfvf_check)
 GEN_OPFVF_TRANS(vfsub_vf, opfvf_check)
 GEN_OPFVF_TRANS(vfrsub_vf, opfvf_check)
+
+/* Vector Widening Floating-Point Add/Subtract Instructions */
+static bool opfvv_widen_check(DisasContext *s, arg_rmrr *a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, true) &&
+            vext_check_reg(s, a->rd, true) &&
+            vext_check_reg(s, a->rs2, false) &&
+            vext_check_reg(s, a->rs1, false) &&
+            vext_check_overlap_group(a->rd, 2 << s->lmul, a->rs2,
+                                     1 << s->lmul) &&
+            vext_check_overlap_group(a->rd, 2 << s->lmul, a->rs1,
+                                     1 << s->lmul) &&
+            (s->lmul < 0x3) && (s->sew < 0x3) && (s->sew != 0));
+}
+
+/* OPFVV with WIDEN */
+#define GEN_OPFVV_WIDEN_TRANS(NAME, CHECK)                       \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a)           \
+{                                                                \
+    if (CHECK(s, a)) {                                           \
+        uint32_t data = 0;                                       \
+        static gen_helper_gvec_4_ptr * const fns[2] = {          \
+            gen_helper_##NAME##_h, gen_helper_##NAME##_w,        \
+        };                                                       \
+        data = FIELD_DP32(data, VDATA, MLEN, s->mlen);           \
+        data = FIELD_DP32(data, VDATA, VM, a->vm);               \
+        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);           \
+        tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),   \
+                           vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2), \
+                           cpu_env, 0, s->vlen / 8, data, fns[s->sew - 1]); \
+        return true;                                             \
+    }                                                            \
+    return false;                                                \
+}
+
+GEN_OPFVV_WIDEN_TRANS(vfwadd_vv, opfvv_widen_check)
+GEN_OPFVV_WIDEN_TRANS(vfwsub_vv, opfvv_widen_check)
+
+static bool opfvf_widen_check(DisasContext *s, arg_rmrr *a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, true) &&
+            vext_check_reg(s, a->rd, true) &&
+            vext_check_reg(s, a->rs2, false) &&
+            vext_check_overlap_group(a->rd, 2 << s->lmul, a->rs2,
+                                     1 << s->lmul) &&
+            (s->lmul < 0x3) && (s->sew < 0x3) && (s->sew != 0));
+}
+
+/* OPFVF with WIDEN */
+#define GEN_OPFVF_WIDEN_TRANS(NAME)                              \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a)           \
+{                                                                \
+    if (opfvf_widen_check(s, a)) {                               \
+        uint32_t data = 0;                                       \
+        static gen_helper_opfvf const fns[2] = {                 \
+            gen_helper_##NAME##_h, gen_helper_##NAME##_w,        \
+        };                                                       \
+        data = FIELD_DP32(data, VDATA, MLEN, s->mlen);           \
+        data = FIELD_DP32(data, VDATA, VM, a->vm);               \
+        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);           \
+        return opfvf_trans(a->rd, a->rs1, a->rs2, data,          \
+                           fns[s->sew - 1], s);                  \
+    }                                                            \
+    return false;                                                \
+}
+GEN_OPFVF_WIDEN_TRANS(vfwadd_vf)
+GEN_OPFVF_WIDEN_TRANS(vfwsub_vf)
+
+static bool opfwv_widen_check(DisasContext *s, arg_rmrr *a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, true) &&
+            vext_check_reg(s, a->rd, true) &&
+            vext_check_reg(s, a->rs2, true) &&
+            vext_check_reg(s, a->rs1, false) &&
+            vext_check_overlap_group(a->rd, 2 << s->lmul, a->rs1,
+                                     1 << s->lmul) &&
+            (s->lmul < 0x3) && (s->sew < 0x3) && (s->sew != 0));
+}
+
+/* WIDEN OPFVV with WIDEN */
+#define GEN_OPFWV_WIDEN_TRANS(NAME)                              \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a)           \
+{                                                                \
+    if (opfwv_widen_check(s, a)) {                               \
+        uint32_t data = 0;                                       \
+        static gen_helper_gvec_4_ptr * const fns[2] = {          \
+            gen_helper_##NAME##_h, gen_helper_##NAME##_w,        \
+        };                                                       \
+        data = FIELD_DP32(data, VDATA, MLEN, s->mlen);           \
+        data = FIELD_DP32(data, VDATA, VM, a->vm);               \
+        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);           \
+        tcg_gen_gvec_4_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0),   \
+                           vreg_ofs(s, a->rs1), vreg_ofs(s, a->rs2), \
+                           cpu_env, 0, s->vlen / 8, data, fns[s->sew - 1]); \
+        return true;                                             \
+    }                                                            \
+    return false;                                                \
+}
+GEN_OPFWV_WIDEN_TRANS(vfwadd_wv)
+GEN_OPFWV_WIDEN_TRANS(vfwsub_wv)
+
+static bool opfwf_widen_check(DisasContext *s, arg_rmrr *a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm, true) &&
+            vext_check_reg(s, a->rd, true) &&
+            vext_check_reg(s, a->rs2, true) &&
+            (s->lmul < 0x3) && (s->sew < 0x3) && (s->sew != 0));
+}
+
+/* WIDEN OPFVF with WIDEN */
+#define GEN_OPFWF_WIDEN_TRANS(NAME)                              \
+static bool trans_##NAME(DisasContext *s, arg_rmrr *a)           \
+{                                                                \
+    if (opfwf_widen_check(s, a)) {                               \
+        uint32_t data = 0;                                       \
+        static gen_helper_opfvf const fns[2] = {                 \
+            gen_helper_##NAME##_h, gen_helper_##NAME##_w,        \
+        };                                                       \
+        data = FIELD_DP32(data, VDATA, MLEN, s->mlen);           \
+        data = FIELD_DP32(data, VDATA, VM, a->vm);               \
+        data = FIELD_DP32(data, VDATA, LMUL, s->lmul);           \
+        return opfvf_trans(a->rd, a->rs1, a->rs2, data,          \
+                           fns[s->sew - 1], s);                  \
+    }                                                            \
+    return false;                                                \
+}
+GEN_OPFWF_WIDEN_TRANS(vfwadd_wf)
+GEN_OPFWF_WIDEN_TRANS(vfwsub_wf)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index d49a7194f7..0840c5d662 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -3029,3 +3029,80 @@ RVVCALL(OPFVF2, vfrsub_vf_d, OP_UUU_D, H8, H8, float64_rsub)
 GEN_VEXT_VF(vfrsub_vf_h, 2, 2, clearh)
 GEN_VEXT_VF(vfrsub_vf_w, 4, 4, clearl)
 GEN_VEXT_VF(vfrsub_vf_d, 8, 8, clearq)
+
+/* Vector Widening Floating-Point Add/Subtract Instructions */
+static uint32_t vfwadd16(uint16_t a, uint16_t b, float_status *s)
+{
+    return float32_add(float16_to_float32(a, true, s),
+                       float16_to_float32(b, true, s), s);
+}
+static uint64_t vfwadd32(uint32_t a, uint32_t b, float_status *s)
+{
+    return float64_add(float32_to_float64(a, s),
+                       float32_to_float64(b, s), s);
+}
+RVVCALL(OPFVV2, vfwadd_vv_h, WOP_UUU_H, H4, H2, H2, vfwadd16)
+RVVCALL(OPFVV2, vfwadd_vv_w, WOP_UUU_W, H8, H4, H4, vfwadd32)
+GEN_VEXT_VV_ENV(vfwadd_vv_h, 2, 4, clearl)
+GEN_VEXT_VV_ENV(vfwadd_vv_w, 4, 8, clearq)
+RVVCALL(OPFVF2, vfwadd_vf_h, WOP_UUU_H, H4, H2, vfwadd16)
+RVVCALL(OPFVF2, vfwadd_vf_w, WOP_UUU_W, H8, H4, vfwadd32)
+GEN_VEXT_VF(vfwadd_vf_h, 2, 4, clearl)
+GEN_VEXT_VF(vfwadd_vf_w, 4, 8, clearq)
+
+static uint32_t vfwsub16(uint16_t a, uint16_t b, float_status *s)
+{
+    return float32_sub(float16_to_float32(a, true, s),
+                       float16_to_float32(b, true, s), s);
+}
+
+static uint64_t vfwsub32(uint32_t a, uint32_t b, float_status *s)
+{
+    return float64_sub(float32_to_float64(a, s),
+                       float32_to_float64(b, s), s);
+}
+RVVCALL(OPFVV2, vfwsub_vv_h, WOP_UUU_H, H4, H2, H2, vfwsub16)
+RVVCALL(OPFVV2, vfwsub_vv_w, WOP_UUU_W, H8, H4, H4, vfwsub32)
+GEN_VEXT_VV_ENV(vfwsub_vv_h, 2, 4, clearl)
+GEN_VEXT_VV_ENV(vfwsub_vv_w, 4, 8, clearq)
+RVVCALL(OPFVF2, vfwsub_vf_h, WOP_UUU_H, H4, H2, vfwsub16)
+RVVCALL(OPFVF2, vfwsub_vf_w, WOP_UUU_W, H8, H4, vfwsub32)
+GEN_VEXT_VF(vfwsub_vf_h, 2, 4, clearl)
+GEN_VEXT_VF(vfwsub_vf_w, 4, 8, clearq)
+
+static uint32_t vfwaddw16(uint32_t a, uint16_t b, float_status *s)
+{
+    return float32_add(a, float16_to_float32(b, true, s), s);
+}
+static uint64_t vfwaddw32(uint64_t a, uint32_t b, float_status *s)
+{
+    return float64_add(a, float32_to_float64(b, s), s);
+}
+RVVCALL(OPFVV2, vfwadd_wv_h, WOP_WUUU_H, H4, H2, H2, vfwaddw16)
+RVVCALL(OPFVV2, vfwadd_wv_w, WOP_WUUU_W, H8, H4, H4, vfwaddw32)
+GEN_VEXT_VV_ENV(vfwadd_wv_h, 2, 4, clearl)
+GEN_VEXT_VV_ENV(vfwadd_wv_w, 4, 8, clearq)
+RVVCALL(OPFVF2, vfwadd_wf_h, WOP_WUUU_H, H4, H2, vfwaddw16)
+RVVCALL(OPFVF2, vfwadd_wf_w, WOP_WUUU_W, H8, H4, vfwaddw32)
+GEN_VEXT_VF(vfwadd_wf_h, 2, 4, clearl)
+GEN_VEXT_VF(vfwadd_wf_w, 4, 8, clearq)
+
+static uint32_t vfwsubw16(uint32_t a, uint16_t b, float_status *s)
+{
+    return float32_sub(a, float16_to_float32(b, true, s), s);
+}
+
+static uint64_t vfwsubw32(uint64_t a, uint32_t b, float_status *s)
+{
+    return float64_sub(a, float32_to_float64(b, s), s);
+}
+RVVCALL(OPFVV2, vfwsub_wv_h, WOP_WUUU_H, H4, H2, H2, vfwsubw16)
+RVVCALL(OPFVV2, vfwsub_wv_w, WOP_WUUU_W, H8, H4, H4, vfwsubw32)
+GEN_VEXT_VV_ENV(vfwsub_wv_h, 2, 4, clearl)
+GEN_VEXT_VV_ENV(vfwsub_wv_w, 4, 8, clearq)
+RVVCALL(OPFVF2, vfwsub_wf_h, WOP_WUUU_H, H4, H2, vfwsubw16)
+RVVCALL(OPFVF2, vfwsub_wf_w, WOP_WUUU_W, H8, H4, vfwsubw32)
+GEN_VEXT_VF(vfwsub_wf_h, 2, 4, clearl)
+GEN_VEXT_VF(vfwsub_wf_w, 4, 8, clearq)

From patchwork Mon Mar 9 12:14:28 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
Subject: [PATCH v3 31/60] target/riscv: vector single-width floating-point multiply/divide instructions
Date: Mon, 9 Mar 2020 20:14:28 +0800
Message-Id: <20200309121457.13719-32-zhiwei_liu@c-sky.com>
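A note on the reversed forms this patch adds (vfrdiv.vf, like vfrsub.vf
before it): the OPFVF2 plumbing always invokes OP(vs2_element, scalar), so
a reversed instruction is implemented as a tiny wrapper that swaps the
arguments rather than as new plumbing. A hedged sketch of the float32 case
(rdiv_sketch is illustrative; the real wrappers mirror it per width):

#include "fpu/softfloat.h"

static uint32_t rdiv_sketch(uint32_t vs2_elem, uint32_t scalar,
                            float_status *s)
{
    /* vfrdiv.vf computes vd[i] = f[rs1] / vs2[i], scalar over element */
    return float32_div(scalar, vs2_elem, s);
}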
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
 target/riscv/helper.h                   | 16 +++++++++
 target/riscv/insn32.decode              |  5 +++
 target/riscv/insn_trans/trans_rvv.inc.c |  7 ++++
 target/riscv/vector_helper.c            | 48 +++++++++++++++++++++++++
 4 files changed, 76 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index f242fa4e4b..a2d7ed19a8 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -818,3 +818,19 @@ DEF_HELPER_6(vfwadd_wf_h, void, ptr, ptr, i64, ptr, env, i32)
 DEF_HELPER_6(vfwadd_wf_w, void, ptr, ptr, i64, ptr, env, i32)
 DEF_HELPER_6(vfwsub_wf_h, void, ptr, ptr, i64, ptr, env, i32)
 DEF_HELPER_6(vfwsub_wf_w, void, ptr, ptr, i64, ptr, env, i32)
+
+DEF_HELPER_6(vfmul_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfmul_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfmul_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfdiv_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfdiv_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfdiv_vv_d, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfmul_vf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfmul_vf_w, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfmul_vf_d, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfdiv_vf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfdiv_vf_w, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfdiv_vf_d, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfrdiv_vf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfrdiv_vf_w, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfrdiv_vf_d, void, ptr, ptr, i64, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 5ec95541c6..050b2fd467 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -451,6 +451,11 @@ vfwsub_vv       110010 . ..... ..... 001 ..... 1010111 @r_vm
 vfwsub_vf       110010 . ..... ..... 101 ..... 1010111 @r_vm
 vfwsub_wv       110110 . ..... ..... 001 ..... 1010111 @r_vm
 vfwsub_wf       110110 . ..... ..... 101 ..... 1010111 @r_vm
+vfmul_vv        100100 . ..... ..... 001 ..... 1010111 @r_vm
+vfmul_vf        100100 . ..... ..... 101 ..... 1010111 @r_vm
+vfdiv_vv        100000 . ..... ..... 001 ..... 1010111 @r_vm
+vfdiv_vf        100000 . ..... ..... 101 ..... 1010111 @r_vm
+vfrdiv_vf       100001 . ..... ..... 101 ..... 1010111 @r_vm
 vsetvli         0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl          1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index ab04f469af..8dcbff6c64 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -1795,3 +1795,10 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
 }
 GEN_OPFWF_WIDEN_TRANS(vfwadd_wf)
 GEN_OPFWF_WIDEN_TRANS(vfwsub_wf)
+
+/* Vector Single-Width Floating-Point Multiply/Divide Instructions */
+GEN_OPFVV_TRANS(vfmul_vv, opfvv_check)
+GEN_OPFVV_TRANS(vfdiv_vv, opfvv_check)
+GEN_OPFVF_TRANS(vfmul_vf, opfvf_check)
+GEN_OPFVF_TRANS(vfdiv_vf, opfvf_check)
+GEN_OPFVF_TRANS(vfrdiv_vf, opfvf_check)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 0840c5d662..bd7ee4de18 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -3106,3 +3106,51 @@ RVVCALL(OPFVF2, vfwsub_wf_h, WOP_WUUU_H, H4, H2, vfwsubw16)
 RVVCALL(OPFVF2, vfwsub_wf_w, WOP_WUUU_W, H8, H4, vfwsubw32)
 GEN_VEXT_VF(vfwsub_wf_h, 2, 4, clearl)
 GEN_VEXT_VF(vfwsub_wf_w, 4, 8, clearq)
+
+/* Vector Single-Width Floating-Point Multiply/Divide Instructions */
+RVVCALL(OPFVV2, vfmul_vv_h, OP_UUU_H, H2, H2, H2, float16_mul)
+RVVCALL(OPFVV2, vfmul_vv_w, OP_UUU_W, H4, H4, H4, float32_mul)
+RVVCALL(OPFVV2, vfmul_vv_d, OP_UUU_D, H8, H8, H8, float64_mul)
+GEN_VEXT_VV_ENV(vfmul_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vfmul_vv_w, 4, 4, clearl)
+GEN_VEXT_VV_ENV(vfmul_vv_d, 8, 8, clearq)
+RVVCALL(OPFVF2, vfmul_vf_h, OP_UUU_H, H2, H2, float16_mul)
+RVVCALL(OPFVF2, vfmul_vf_w, OP_UUU_W, H4, H4, float32_mul)
+RVVCALL(OPFVF2, vfmul_vf_d, OP_UUU_D, H8, H8, float64_mul)
+GEN_VEXT_VF(vfmul_vf_h, 2, 2, clearh)
+GEN_VEXT_VF(vfmul_vf_w, 4, 4, clearl)
+GEN_VEXT_VF(vfmul_vf_d, 8, 8, clearq)
+
+RVVCALL(OPFVV2, vfdiv_vv_h, OP_UUU_H, H2, H2, H2, float16_div)
+RVVCALL(OPFVV2, vfdiv_vv_w, OP_UUU_W, H4, H4, H4, float32_div)
+RVVCALL(OPFVV2, vfdiv_vv_d, OP_UUU_D, H8, H8, H8, float64_div)
+GEN_VEXT_VV_ENV(vfdiv_vv_h, 2, 2, clearh)
+GEN_VEXT_VV_ENV(vfdiv_vv_w, 4, 4, clearl)
+GEN_VEXT_VV_ENV(vfdiv_vv_d, 8, 8, clearq)
+RVVCALL(OPFVF2, vfdiv_vf_h, OP_UUU_H, H2, H2, float16_div)
+RVVCALL(OPFVF2, vfdiv_vf_w, OP_UUU_W, H4, H4, float32_div)
+RVVCALL(OPFVF2, vfdiv_vf_d, OP_UUU_D, H8, H8, float64_div)
+GEN_VEXT_VF(vfdiv_vf_h, 2, 2, clearh)
+GEN_VEXT_VF(vfdiv_vf_w, 4, 4, clearl)
+GEN_VEXT_VF(vfdiv_vf_d, 8, 8, clearq)
+
+static uint16_t float16_rdiv(uint16_t a, uint16_t b, float_status *s)
+{
+    return float16_div(b, a, s);
+}
+
+static uint32_t float32_rdiv(uint32_t a, uint32_t b, float_status *s)
+{
+    return float32_div(b, a, s);
+}
+
+static uint64_t float64_rdiv(uint64_t a, uint64_t b, float_status *s)
+{
+    return float64_div(b, a, s);
+}
+RVVCALL(OPFVF2, vfrdiv_vf_h, OP_UUU_H, H2, H2, float16_rdiv)
+RVVCALL(OPFVF2, vfrdiv_vf_w, OP_UUU_W, H4, H4, float32_rdiv)
+RVVCALL(OPFVF2, vfrdiv_vf_d, OP_UUU_D, H8, H8, float64_rdiv)
+GEN_VEXT_VF(vfrdiv_vf_h, 2, 2, clearh)
+GEN_VEXT_VF(vfrdiv_vf_w, 4, 4, clearl)
+GEN_VEXT_VF(vfrdiv_vf_d, 8, 8, clearq)
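All of the floating-point helpers in these patches funnel through the same
GEN_VEXT_VV_ENV/GEN_VEXT_VF loop shape: inactive elements are skipped, and
tail elements past vl are cleared afterwards. A self-contained sketch of
that shape under simplifying assumptions (mlen = 1 so each mask bit is one
bit of v0; fixed 32-bit elements; vext_vf_sketch is illustrative):

#include <stdbool.h>
#include <stdint.h>

static void vext_vf_sketch(uint32_t *vd, const uint8_t *v0, uint32_t s1,
                           const uint32_t *vs2, uint32_t vl, uint32_t vlmax,
                           bool vm, uint32_t (*op)(uint32_t, uint32_t))
{
    uint32_t i;

    for (i = 0; i < vl; i++) {
        if (!vm && !((v0[i / 8] >> (i % 8)) & 1)) {
            continue;                 /* masked-off element stays untouched */
        }
        vd[i] = op(vs2[i], s1);       /* note the (vs2, scalar) order */
    }
    for (i = vl; i < vlmax; i++) {
        vd[i] = 0;                    /* zero the tail, 0.7.1 behavior */
    }
}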
From patchwork Mon Mar 9 12:14:29 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
Subject: [PATCH v3 32/60] target/riscv: vector widening floating-point multiply
Date: Mon, 9 Mar 2020 20:14:29 +0800
Message-Id: <20200309121457.13719-33-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
 target/riscv/helper.h                   |  5 +++++
 target/riscv/insn32.decode              |  2 ++
 target/riscv/insn_trans/trans_rvv.inc.c |  4 ++++
 target/riscv/vector_helper.c            | 22 ++++++++++++++++++++++
 4 files changed, 33 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index a2d7ed19a8..3ec2dcadd4 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -834,3 +834,8 @@ DEF_HELPER_6(vfdiv_vf_d, void, ptr, ptr, i64, ptr, env, i32)
 DEF_HELPER_6(vfrdiv_vf_h, void, ptr, ptr, i64, ptr, env, i32)
 DEF_HELPER_6(vfrdiv_vf_w, void, ptr, ptr, i64, ptr, env, i32)
 DEF_HELPER_6(vfrdiv_vf_d, void, ptr, ptr, i64, ptr, env, i32)
+
+DEF_HELPER_6(vfwmul_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfwmul_vv_w, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfwmul_vf_h, void, ptr, ptr, i64, ptr, env, i32)
+DEF_HELPER_6(vfwmul_vf_w, void, ptr, ptr, i64, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 050b2fd467..e0ee8f5a7c 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -456,6 +456,8 @@ vfmul_vf        100100 . ..... ..... 101 ..... 1010111 @r_vm
 vfdiv_vv        100000 . ..... ..... 001 ..... 1010111 @r_vm
 vfdiv_vf        100000 . ..... ..... 101 ..... 1010111 @r_vm
 vfrdiv_vf       100001 . ..... ..... 101 ..... 1010111 @r_vm
+vfwmul_vv       111000 . ..... ..... 001 ..... 1010111 @r_vm
+vfwmul_vf       111000 . ..... ..... 101 ..... 1010111 @r_vm
 vsetvli         0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl          1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index 8dcbff6c64..b4d3797685 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -1802,3 +1802,7 @@ GEN_OPFVV_TRANS(vfdiv_vv, opfvv_check)
 GEN_OPFVF_TRANS(vfmul_vf, opfvf_check)
 GEN_OPFVF_TRANS(vfdiv_vf, opfvf_check)
 GEN_OPFVF_TRANS(vfrdiv_vf, opfvf_check)
+
+/* Vector Widening Floating-Point Multiply */
+GEN_OPFVV_WIDEN_TRANS(vfwmul_vv, opfvv_widen_check)
+GEN_OPFVF_WIDEN_TRANS(vfwmul_vf)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index bd7ee4de18..8bb6ac158f 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -3154,3 +3154,25 @@ RVVCALL(OPFVF2, vfrdiv_vf_d, OP_UUU_D, H8, H8, float64_rdiv)
 GEN_VEXT_VF(vfrdiv_vf_h, 2, 2, clearh)
 GEN_VEXT_VF(vfrdiv_vf_w, 4, 4, clearl)
 GEN_VEXT_VF(vfrdiv_vf_d, 8, 8, clearq)
+
+/* Vector Widening Floating-Point Multiply */
+static uint32_t vfwmul16(uint16_t a, uint16_t b, float_status *s)
+{
+    return float32_mul(float16_to_float32(a, true, s),
+                       float16_to_float32(b, true, s), s);
+}
+
+static uint64_t vfwmul32(uint32_t a, uint32_t b, float_status *s)
+{
+    return float64_mul(float32_to_float64(a, s),
+                       float32_to_float64(b, s), s);
+}
+RVVCALL(OPFVV2, vfwmul_vv_h, WOP_UUU_H, H4, H2, H2, vfwmul16)
+RVVCALL(OPFVV2, vfwmul_vv_w, WOP_UUU_W, H8, H4, H4, vfwmul32)
+GEN_VEXT_VV_ENV(vfwmul_vv_h, 2, 4, clearl)
+GEN_VEXT_VV_ENV(vfwmul_vv_w, 4, 8, clearq)
+RVVCALL(OPFVF2, vfwmul_vf_h, WOP_UUU_H, H4, H2, vfwmul16)
+RVVCALL(OPFVF2, vfwmul_vf_w, WOP_UUU_W, H8, H4, vfwmul32)
+GEN_VEXT_VF(vfwmul_vf_h, 2, 4, clearl)
+GEN_VEXT_VF(vfwmul_vf_w, 4, 8, clearq)
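The patch that follows wires the fused multiply-add family onto softfloat's
muladd, so the sign variants reduce to flag choices. A hedged sketch of two
of the vv mappings (float32 case; the flag spellings are softfloat's, while
the mapping itself is inferred from the helper names, so treat it as an
assumption): vfmacc computes vd = vs1 * vs2 + vd, and vfnmacc computes
vd = -(vs1 * vs2) - vd.

#include "fpu/softfloat.h"

static uint32_t fmacc32_sketch(uint32_t s2, uint32_t s1, uint32_t d,
                               float_status *s)
{
    return float32_muladd(s1, s2, d, 0, s);              /* vfmacc  */
}

static uint32_t fnmacc32_sketch(uint32_t s2, uint32_t s1, uint32_t d,
                                float_status *s)
{
    return float32_muladd(s1, s2, d,                     /* vfnmacc */
                          float_muladd_negate_c |
                          float_muladd_negate_product, s);
}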
From patchwork Mon Mar 9 12:14:30 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 33/60] target/riscv: vector single-width floating-point fused multiply-add instructions
Date: Mon, 9 Mar 2020 20:14:30 +0800
Message-Id: <20200309121457.13719-34-zhiwei_liu@c-sky.com>
In-Reply-To: <20200309121457.13719-1-zhiwei_liu@c-sky.com>
References: <20200309121457.13719-1-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
target/riscv/helper.h | 49 +++++ target/riscv/insn32.decode | 16 ++ target/riscv/insn_trans/trans_rvv.inc.c | 18 ++ target/riscv/vector_helper.c | 228 ++++++++++++++++++++++++ 4 files changed, 311 insertions(+)
diff --git a/target/riscv/helper.h b/target/riscv/helper.h index 3ec2dcadd4..3b6dd96918 100644 --- a/target/riscv/helper.h +++ b/target/riscv/helper.h @@ -839,3 +839,52 @@ DEF_HELPER_6(vfwmul_vv_h, void, ptr, ptr, ptr, ptr, env, i32) DEF_HELPER_6(vfwmul_vv_w, void, ptr, ptr, ptr, ptr, env, i32) DEF_HELPER_6(vfwmul_vf_h, void, ptr, ptr, i64, ptr, env, i32) DEF_HELPER_6(vfwmul_vf_w, void, ptr, ptr, i64, ptr, env, i32) + +DEF_HELPER_6(vfmacc_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmacc_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmacc_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmacc_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmacc_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmacc_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmsac_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmsac_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmsac_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmsac_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmsac_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmsac_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmadd_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmadd_vv_w, void, ptr, ptr, ptr, ptr,
env, i32) +DEF_HELPER_6(vfmadd_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmadd_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmadd_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmadd_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmsub_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmsub_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmsub_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmsub_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmsub_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfnmsub_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfmacc_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmacc_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmacc_vf_d, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmacc_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmacc_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmacc_vf_d, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmsac_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmsac_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmsac_vf_d, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmsac_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmsac_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmsac_vf_d, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmadd_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmadd_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmadd_vf_d, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmadd_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmadd_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmadd_vf_d, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmsub_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmsub_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfmsub_vf_d, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmsub_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmsub_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfnmsub_vf_d, void, ptr, ptr, i64, ptr, env, i32) diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index e0ee8f5a7c..9834091a86 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -458,6 +458,22 @@ vfdiv_vf 100000 . ..... ..... 101 ..... 1010111 @r_vm vfrdiv_vf 100001 . ..... ..... 101 ..... 1010111 @r_vm vfwmul_vv 111000 . ..... ..... 001 ..... 1010111 @r_vm vfwmul_vf 111000 . ..... ..... 101 ..... 1010111 @r_vm +vfmacc_vv 101100 . ..... ..... 001 ..... 1010111 @r_vm +vfnmacc_vv 101101 . ..... ..... 001 ..... 1010111 @r_vm +vfnmacc_vf 101101 . ..... ..... 101 ..... 1010111 @r_vm +vfmacc_vf 101100 . ..... ..... 101 ..... 1010111 @r_vm +vfmsac_vv 101110 . ..... ..... 001 ..... 1010111 @r_vm +vfmsac_vf 101110 . ..... ..... 101 ..... 1010111 @r_vm +vfnmsac_vv 101111 . ..... ..... 001 ..... 1010111 @r_vm +vfnmsac_vf 101111 . ..... ..... 101 ..... 1010111 @r_vm +vfmadd_vv 101000 . ..... ..... 001 ..... 1010111 @r_vm +vfmadd_vf 101000 . ..... ..... 101 ..... 1010111 @r_vm +vfnmadd_vv 101001 . ..... ..... 001 ..... 1010111 @r_vm +vfnmadd_vf 101001 . ..... ..... 101 ..... 1010111 @r_vm +vfmsub_vv 101010 . ..... ..... 001 ..... 1010111 @r_vm +vfmsub_vf 101010 . ..... ..... 101 ..... 1010111 @r_vm +vfnmsub_vv 101011 . ..... ..... 001 ..... 1010111 @r_vm +vfnmsub_vf 101011 . ..... ..... 101 ..... 1010111 @r_vm vsetvli 0 ........... ..... 111 ..... 
1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index b4d3797685..172de867ea 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1806,3 +1806,21 @@ GEN_OPFVF_TRANS(vfrdiv_vf, opfvf_check) /* Vector Widening Floating-Point Multiply */ GEN_OPFVV_WIDEN_TRANS(vfwmul_vv, opfvv_widen_check) GEN_OPFVF_WIDEN_TRANS(vfwmul_vf) + +/* Vector Single-Width Floating-Point Fused Multiply-Add Instructions */ +GEN_OPFVV_TRANS(vfmacc_vv, opfvv_check) +GEN_OPFVV_TRANS(vfnmacc_vv, opfvv_check) +GEN_OPFVV_TRANS(vfmsac_vv, opfvv_check) +GEN_OPFVV_TRANS(vfnmsac_vv, opfvv_check) +GEN_OPFVV_TRANS(vfmadd_vv, opfvv_check) +GEN_OPFVV_TRANS(vfnmadd_vv, opfvv_check) +GEN_OPFVV_TRANS(vfmsub_vv, opfvv_check) +GEN_OPFVV_TRANS(vfnmsub_vv, opfvv_check) +GEN_OPFVF_TRANS(vfmacc_vf, opfvf_check) +GEN_OPFVF_TRANS(vfnmacc_vf, opfvf_check) +GEN_OPFVF_TRANS(vfmsac_vf, opfvf_check) +GEN_OPFVF_TRANS(vfnmsac_vf, opfvf_check) +GEN_OPFVF_TRANS(vfmadd_vf, opfvf_check) +GEN_OPFVF_TRANS(vfnmadd_vf, opfvf_check) +GEN_OPFVF_TRANS(vfmsub_vf, opfvf_check) +GEN_OPFVF_TRANS(vfnmsub_vf, opfvf_check) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 8bb6ac158f..2e0341adb0 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -3176,3 +3176,231 @@ RVVCALL(OPFVF2, vfwmul_vf_h, WOP_UUU_H, H4, H2, vfwmul16) RVVCALL(OPFVF2, vfwmul_vf_w, WOP_UUU_W, H8, H4, vfwmul32) GEN_VEXT_VF(vfwmul_vf_h, 2, 4, clearl) GEN_VEXT_VF(vfwmul_vf_w, 4, 8, clearq) + +/* Vector Single-Width Floating-Point Fused Multiply-Add Instructions */ +#define OPFVV3(NAME, TD, T1, T2, TX1, TX2, HD, HS1, HS2, OP) \ +static void do_##NAME(void *vd, void *vs1, void *vs2, int i, \ + CPURISCVState *env) \ +{ \ + TX1 s1 = *((T1 *)vs1 + HS1(i)); \ + TX2 s2 = *((T2 *)vs2 + HS2(i)); \ + TD d = *((TD *)vd + HD(i)); \ + *((TD *)vd + HD(i)) = OP(s2, s1, d, &env->fp_status); \ +} +static uint16_t fmacc16(uint16_t a, uint16_t b, uint16_t d, float_status *s) +{ + return float16_muladd(a, b, d, 0, s); +} + +static uint32_t fmacc32(uint32_t a, uint32_t b, uint32_t d, float_status *s) +{ + return float32_muladd(a, b, d, 0, s); +} + +static uint64_t fmacc64(uint64_t a, uint64_t b, uint64_t d, float_status *s) +{ + return float64_muladd(a, b, d, 0, s); +} +RVVCALL(OPFVV3, vfmacc_vv_h, OP_UUU_H, H2, H2, H2, fmacc16) +RVVCALL(OPFVV3, vfmacc_vv_w, OP_UUU_W, H4, H4, H4, fmacc32) +RVVCALL(OPFVV3, vfmacc_vv_d, OP_UUU_D, H8, H8, H8, fmacc64) +GEN_VEXT_VV_ENV(vfmacc_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfmacc_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfmacc_vv_d, 8, 8, clearq) + +#define OPFVF3(NAME, TD, T1, T2, TX1, TX2, HD, HS2, OP) \ +static void do_##NAME(void *vd, uint64_t s1, void *vs2, int i, \ + CPURISCVState *env) \ +{ \ + TX2 s2 = *((T2 *)vs2 + HS2(i)); \ + TD d = *((TD *)vd + HD(i)); \ + *((TD *)vd + HD(i)) = OP(s2, (TX1)(T1)s1, d, &env->fp_status);\ +} +RVVCALL(OPFVF3, vfmacc_vf_h, OP_UUU_H, H2, H2, fmacc16) +RVVCALL(OPFVF3, vfmacc_vf_w, OP_UUU_W, H4, H4, fmacc32) +RVVCALL(OPFVF3, vfmacc_vf_d, OP_UUU_D, H8, H8, fmacc64) +GEN_VEXT_VF(vfmacc_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfmacc_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfmacc_vf_d, 8, 8, clearq) + +static uint16_t fnmacc16(uint16_t a, uint16_t b, uint16_t d, float_status *s) +{ + return float16_muladd(a, b, d, + float_muladd_negate_c | float_muladd_negate_product, s); +} +static uint32_t fnmacc32(uint32_t a, uint32_t b, uint32_t d, float_status *s) +{ + 
return float32_muladd(a, b, d, + float_muladd_negate_c | float_muladd_negate_product, s); +} +static uint64_t fnmacc64(uint64_t a, uint64_t b, uint64_t d, float_status *s) +{ + return float64_muladd(a, b, d, + float_muladd_negate_c | float_muladd_negate_product, s); +} +RVVCALL(OPFVV3, vfnmacc_vv_h, OP_UUU_H, H2, H2, H2, fnmacc16) +RVVCALL(OPFVV3, vfnmacc_vv_w, OP_UUU_W, H4, H4, H4, fnmacc32) +RVVCALL(OPFVV3, vfnmacc_vv_d, OP_UUU_D, H8, H8, H8, fnmacc64) +GEN_VEXT_VV_ENV(vfnmacc_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfnmacc_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfnmacc_vv_d, 8, 8, clearq) +RVVCALL(OPFVF3, vfnmacc_vf_h, OP_UUU_H, H2, H2, fnmacc16) +RVVCALL(OPFVF3, vfnmacc_vf_w, OP_UUU_W, H4, H4, fnmacc32) +RVVCALL(OPFVF3, vfnmacc_vf_d, OP_UUU_D, H8, H8, fnmacc64) +GEN_VEXT_VF(vfnmacc_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfnmacc_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfnmacc_vf_d, 8, 8, clearq) + +static uint16_t fmsac16(uint16_t a, uint16_t b, uint16_t d, float_status *s) +{ + return float16_muladd(a, b, d, float_muladd_negate_c, s); +} +static uint32_t fmsac32(uint32_t a, uint32_t b, uint32_t d, float_status *s) +{ + return float32_muladd(a, b, d, float_muladd_negate_c, s); +} +static uint64_t fmsac64(uint64_t a, uint64_t b, uint64_t d, float_status *s) +{ + return float64_muladd(a, b, d, float_muladd_negate_c, s); +} +RVVCALL(OPFVV3, vfmsac_vv_h, OP_UUU_H, H2, H2, H2, fmsac16) +RVVCALL(OPFVV3, vfmsac_vv_w, OP_UUU_W, H4, H4, H4, fmsac32) +RVVCALL(OPFVV3, vfmsac_vv_d, OP_UUU_D, H8, H8, H8, fmsac64) +GEN_VEXT_VV_ENV(vfmsac_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfmsac_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfmsac_vv_d, 8, 8, clearq) +RVVCALL(OPFVF3, vfmsac_vf_h, OP_UUU_H, H2, H2, fmsac16) +RVVCALL(OPFVF3, vfmsac_vf_w, OP_UUU_W, H4, H4, fmsac32) +RVVCALL(OPFVF3, vfmsac_vf_d, OP_UUU_D, H8, H8, fmsac64) +GEN_VEXT_VF(vfmsac_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfmsac_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfmsac_vf_d, 8, 8, clearq) + +static uint16_t fnmsac16(uint16_t a, uint16_t b, uint16_t d, float_status *s) +{ + return float16_muladd(a, b, d, float_muladd_negate_product, s); +} +static uint32_t fnmsac32(uint32_t a, uint32_t b, uint32_t d, float_status *s) +{ + return float32_muladd(a, b, d, float_muladd_negate_product, s); +} +static uint64_t fnmsac64(uint64_t a, uint64_t b, uint64_t d, float_status *s) +{ + return float64_muladd(a, b, d, float_muladd_negate_product, s); +} +RVVCALL(OPFVV3, vfnmsac_vv_h, OP_UUU_H, H2, H2, H2, fnmsac16) +RVVCALL(OPFVV3, vfnmsac_vv_w, OP_UUU_W, H4, H4, H4, fnmsac32) +RVVCALL(OPFVV3, vfnmsac_vv_d, OP_UUU_D, H8, H8, H8, fnmsac64) +GEN_VEXT_VV_ENV(vfnmsac_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfnmsac_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfnmsac_vv_d, 8, 8, clearq) +RVVCALL(OPFVF3, vfnmsac_vf_h, OP_UUU_H, H2, H2, fnmsac16) +RVVCALL(OPFVF3, vfnmsac_vf_w, OP_UUU_W, H4, H4, fnmsac32) +RVVCALL(OPFVF3, vfnmsac_vf_d, OP_UUU_D, H8, H8, fnmsac64) +GEN_VEXT_VF(vfnmsac_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfnmsac_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfnmsac_vf_d, 8, 8, clearq) + +static uint16_t fmadd16(uint16_t a, uint16_t b, uint16_t d, float_status *s) +{ + return float16_muladd(d, b, a, 0, s); +} +static uint32_t fmadd32(uint32_t a, uint32_t b, uint32_t d, float_status *s) +{ + return float32_muladd(d, b, a, 0, s); +} +static uint64_t fmadd64(uint64_t a, uint64_t b, uint64_t d, float_status *s) +{ + return float64_muladd(d, b, a, 0, s); +} +RVVCALL(OPFVV3, vfmadd_vv_h, OP_UUU_H, H2, H2, H2, fmadd16) +RVVCALL(OPFVV3, vfmadd_vv_w, OP_UUU_W, H4, H4, H4, fmadd32) +RVVCALL(OPFVV3, vfmadd_vv_d, OP_UUU_D, H8, 
H8, H8, fmadd64) +GEN_VEXT_VV_ENV(vfmadd_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfmadd_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfmadd_vv_d, 8, 8, clearq) +RVVCALL(OPFVF3, vfmadd_vf_h, OP_UUU_H, H2, H2, fmadd16) +RVVCALL(OPFVF3, vfmadd_vf_w, OP_UUU_W, H4, H4, fmadd32) +RVVCALL(OPFVF3, vfmadd_vf_d, OP_UUU_D, H8, H8, fmadd64) +GEN_VEXT_VF(vfmadd_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfmadd_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfmadd_vf_d, 8, 8, clearq) + +static uint16_t fnmadd16(uint16_t a, uint16_t b, uint16_t d, float_status *s) +{ + return float16_muladd(d, b, a, + float_muladd_negate_c | float_muladd_negate_product, s); +} +static uint32_t fnmadd32(uint32_t a, uint32_t b, uint32_t d, float_status *s) +{ + return float32_muladd(d, b, a, + float_muladd_negate_c | float_muladd_negate_product, s); +} +static uint64_t fnmadd64(uint64_t a, uint64_t b, uint64_t d, float_status *s) +{ + return float64_muladd(d, b, a, + float_muladd_negate_c | float_muladd_negate_product, s); +} + +RVVCALL(OPFVV3, vfnmadd_vv_h, OP_UUU_H, H2, H2, H2, fnmadd16) +RVVCALL(OPFVV3, vfnmadd_vv_w, OP_UUU_W, H4, H4, H4, fnmadd32) +RVVCALL(OPFVV3, vfnmadd_vv_d, OP_UUU_D, H8, H8, H8, fnmadd64) +GEN_VEXT_VV_ENV(vfnmadd_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfnmadd_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfnmadd_vv_d, 8, 8, clearq) +RVVCALL(OPFVF3, vfnmadd_vf_h, OP_UUU_H, H2, H2, fnmadd16) +RVVCALL(OPFVF3, vfnmadd_vf_w, OP_UUU_W, H4, H4, fnmadd32) +RVVCALL(OPFVF3, vfnmadd_vf_d, OP_UUU_D, H8, H8, fnmadd64) +GEN_VEXT_VF(vfnmadd_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfnmadd_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfnmadd_vf_d, 8, 8, clearq) + +static uint16_t fmsub16(uint16_t a, uint16_t b, uint16_t d, float_status *s) +{ + return float16_muladd(d, b, a, float_muladd_negate_c, s); +} +static uint32_t fmsub32(uint32_t a, uint32_t b, uint32_t d, float_status *s) +{ + return float32_muladd(d, b, a, float_muladd_negate_c, s); +} +static uint64_t fmsub64(uint64_t a, uint64_t b, uint64_t d, float_status *s) +{ + return float64_muladd(d, b, a, float_muladd_negate_c, s); +} +RVVCALL(OPFVV3, vfmsub_vv_h, OP_UUU_H, H2, H2, H2, fmsub16) +RVVCALL(OPFVV3, vfmsub_vv_w, OP_UUU_W, H4, H4, H4, fmsub32) +RVVCALL(OPFVV3, vfmsub_vv_d, OP_UUU_D, H8, H8, H8, fmsub64) +GEN_VEXT_VV_ENV(vfmsub_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfmsub_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfmsub_vv_d, 8, 8, clearq) +RVVCALL(OPFVF3, vfmsub_vf_h, OP_UUU_H, H2, H2, fmsub16) +RVVCALL(OPFVF3, vfmsub_vf_w, OP_UUU_W, H4, H4, fmsub32) +RVVCALL(OPFVF3, vfmsub_vf_d, OP_UUU_D, H8, H8, fmsub64) +GEN_VEXT_VF(vfmsub_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfmsub_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfmsub_vf_d, 8, 8, clearq) + +static uint16_t fnmsub16(uint16_t a, uint16_t b, uint16_t d, float_status *s) +{ + return float16_muladd(d, b, a, float_muladd_negate_product, s); +} +static uint32_t fnmsub32(uint32_t a, uint32_t b, uint32_t d, float_status *s) +{ + return float32_muladd(d, b, a, float_muladd_negate_product, s); +} +static uint64_t fnmsub64(uint64_t a, uint64_t b, uint64_t d, float_status *s) +{ + return float64_muladd(d, b, a, float_muladd_negate_product, s); +} +RVVCALL(OPFVV3, vfnmsub_vv_h, OP_UUU_H, H2, H2, H2, fnmsub16) +RVVCALL(OPFVV3, vfnmsub_vv_w, OP_UUU_W, H4, H4, H4, fnmsub32) +RVVCALL(OPFVV3, vfnmsub_vv_d, OP_UUU_D, H8, H8, H8, fnmsub64) +GEN_VEXT_VV_ENV(vfnmsub_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfnmsub_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfnmsub_vv_d, 8, 8, clearq) +RVVCALL(OPFVF3, vfnmsub_vf_h, OP_UUU_H, H2, H2, fnmsub16) +RVVCALL(OPFVF3, vfnmsub_vf_w, OP_UUU_W, H4, H4, fnmsub32) +RVVCALL(OPFVF3, 
vfnmsub_vf_d, OP_UUU_D, H8, H8, fnmsub64) +GEN_VEXT_VF(vfnmsub_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfnmsub_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfnmsub_vf_d, 8, 8, clearq)
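The eight single-width FMA forms above differ only in operand order and in which float_muladd_negate_* flags are passed to the softfloat muladd. As a quick per-element reference, sketched with C99 fma() from <math.h> standing in for float16/32/64_muladd (vs1, vs2, vd are element values; the names are ad hoc, not patch code):

    #include <math.h>

    /* vd = +/-(vs1 * vs2) +/- vd: the *acc/*sac forms multiply the sources */
    double vfmacc (double vs1, double vs2, double vd) { return fma(vs1, vs2, vd); }
    double vfnmacc(double vs1, double vs2, double vd) { return fma(-vs1, vs2, -vd); }
    double vfmsac (double vs1, double vs2, double vd) { return fma(vs1, vs2, -vd); }
    double vfnmsac(double vs1, double vs2, double vd) { return fma(-vs1, vs2, vd); }

    /* vd = +/-(vs1 * vd) +/- vs2: the *add/*sub forms multiply by vd */
    double vfmadd (double vs1, double vs2, double vd) { return fma(vs1, vd, vs2); }
    double vfnmadd(double vs1, double vs2, double vd) { return fma(-vs1, vd, -vs2); }
    double vfmsub (double vs1, double vs2, double vd) { return fma(vs1, vd, -vs2); }
    double vfnmsub(double vs1, double vs2, double vd) { return fma(-vs1, vd, vs2); }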
From patchwork Mon Mar 9 12:14:32 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 35/60] target/riscv: vector floating-point square-root instruction
Date: Mon, 9 Mar 2020 20:14:32 +0800
Message-Id: <20200309121457.13719-36-zhiwei_liu@c-sky.com>
In-Reply-To: <20200309121457.13719-1-zhiwei_liu@c-sky.com>
References: <20200309121457.13719-1-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
target/riscv/helper.h | 4 +++ target/riscv/insn32.decode | 3 ++ target/riscv/insn_trans/trans_rvv.inc.c | 37 +++++++++++++++++++++++ target/riscv/vector_helper.c | 40 +++++++++++++++++++++++++ 4 files changed, 84 insertions(+)
diff --git a/target/riscv/helper.h b/target/riscv/helper.h index 57e0fee929..c2f9871490 100644 --- a/target/riscv/helper.h +++ b/target/riscv/helper.h @@ -905,3 +905,7 @@ DEF_HELPER_6(vfwmsac_vf_h, void, ptr, ptr, i64, ptr, env, i32) DEF_HELPER_6(vfwmsac_vf_w, void, ptr, ptr, i64, ptr, env, i32) DEF_HELPER_6(vfwnmsac_vf_h, void, ptr, ptr, i64, ptr, env, i32) DEF_HELPER_6(vfwnmsac_vf_w, void, ptr, ptr, i64, ptr, env, i32) + +DEF_HELPER_5(vfsqrt_v_h, void, ptr, ptr, ptr, env, i32) +DEF_HELPER_5(vfsqrt_v_w, void, ptr, ptr, ptr, env, i32) +DEF_HELPER_5(vfsqrt_v_d, void, ptr, ptr, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index b7cb116cf4..fc9aebc6d6 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -45,6 +45,7 @@ &shift shamt rs1 rd &atomic aq rl rs2 rs1 rd &rmrr vm rd rs1 rs2 +&rmr vm rd rs2 &rwdvm vm wd rd rs1 rs2 &r2nfvm vm rd rs1 nf &rnfvm vm rd rs1 rs2 nf @@ -68,6 +69,7 @@ @r2_rm ....... ..... ..... ... ..... ....... %rs1 %rm %rd @r2 ....... ..... ..... ... ..... ....... %rs1 %rd @r2_nfvm ... ... vm:1 ..... ..... ... ..... ....... &r2nfvm %nf %rs1 %rd +@r2_vm ...... vm:1 ..... ..... ... ..... ....... &rmr %rs2 %rd @r_nfvm ... ... vm:1 ..... ..... ... ..... ....... &rnfvm %nf %rs2 %rs1 %rd @r_vm ...... vm:1 ..... ..... ... ..... ....... &rmrr %rs2 %rs1 %rd @r_wdvm ..... wd:1 vm:1 ..... ..... ... ..... ....... &rwdvm %rs2 %rs1 %rd @@ -482,6 +484,7 @@ vfwmsac_vv 111110 . ..... ..... 001 ..... 1010111 @r_vm vfwmsac_vf 111110 . ..... ..... 101 ..... 1010111 @r_vm vfwnmsac_vv 111111 . ..... ..... 001 ..... 1010111 @r_vm vfwnmsac_vf 111111 . ..... ..... 101 ..... 1010111 @r_vm +vfsqrt_v 100011 . ..... 00000 001 ..... 1010111 @r2_vm vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 .....
1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 06d6e2625b..3e4f7de240 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1834,3 +1834,40 @@ GEN_OPFVF_WIDEN_TRANS(vfwmacc_vf) GEN_OPFVF_WIDEN_TRANS(vfwnmacc_vf) GEN_OPFVF_WIDEN_TRANS(vfwmsac_vf) GEN_OPFVF_WIDEN_TRANS(vfwnmsac_vf) + +/* Vector Floating-Point Square-Root Instruction */ + +/* + * If the current SEW does not correspond to a supported IEEE floating-point + * type, an illegal instruction exception is raised. + */ +static bool opfv_check(DisasContext *s, arg_rmr *a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_overlap_mask(s, a->rd, a->vm, false) && + vext_check_reg(s, a->rd, false) && + vext_check_reg(s, a->rs2, false) && + (s->sew != 0)); +} + +#define GEN_OPFV_TRANS(NAME, CHECK) \ +static bool trans_##NAME(DisasContext *s, arg_rmr *a) \ +{ \ + if (CHECK(s, a)) { \ + uint32_t data = 0; \ + static gen_helper_gvec_3_ptr * const fns[3] = { \ + gen_helper_##NAME##_h, \ + gen_helper_##NAME##_w, \ + gen_helper_##NAME##_d, \ + }; \ + data = FIELD_DP32(data, VDATA, MLEN, s->mlen); \ + data = FIELD_DP32(data, VDATA, VM, a->vm); \ + data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \ + tcg_gen_gvec_3_ptr(vreg_ofs(s, a->rd), vreg_ofs(s, 0), \ + vreg_ofs(s, a->rs2), cpu_env, 0, \ + s->vlen / 8, data, fns[s->sew - 1]); \ + return true; \ + } \ + return false; \ +} +GEN_OPFV_TRANS(vfsqrt_v, opfv_check)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 9bff516a15..088bb51af0 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -3488,3 +3488,43 @@ RVVCALL(OPFVF3, vfwnmsac_vf_h, WOP_UUU_H, H4, H2, fwnmsac16) RVVCALL(OPFVF3, vfwnmsac_vf_w, WOP_UUU_W, H8, H4, fwnmsac32) GEN_VEXT_VF(vfwnmsac_vf_h, 2, 4, clearl) GEN_VEXT_VF(vfwnmsac_vf_w, 4, 8, clearq) + +/* Vector Floating-Point Square-Root Instruction */ +/* (TD, T2, TX2) */ +#define OP_UU_H uint16_t, uint16_t, uint16_t +#define OP_UU_W uint32_t, uint32_t, uint32_t +#define OP_UU_D uint64_t, uint64_t, uint64_t + +#define OPFVV1(NAME, TD, T2, TX2, HD, HS2, OP) \ +static void do_##NAME(void *vd, void *vs2, int i, \ + CPURISCVState *env) \ +{ \ + TX2 s2 = *((T2 *)vs2 + HS2(i)); \ + *((TD *)vd + HD(i)) = OP(s2, &env->fp_status); \ +} + +#define GEN_VEXT_V_ENV(NAME, ESZ, DSZ, CLEAR_FN) \ +void HELPER(NAME)(void *vd, void *v0, void *vs2, \ + CPURISCVState *env, uint32_t desc) \ +{ \ + uint32_t vlmax = vext_maxsz(desc) / ESZ; \ + uint32_t mlen = vext_mlen(desc); \ + uint32_t vm = vext_vm(desc); \ + uint32_t vl = env->vl; \ + uint32_t i; \ + for (i = 0; i < vl; i++) { \ + if (!vm && !vext_elem_mask(v0, mlen, i)) { \ + continue; \ + } \ + do_##NAME(vd, vs2, i, env); \ + } \ + if (i != 0) { \ + CLEAR_FN(vd, vl, vl * DSZ, vlmax * DSZ); \ + } \ +} +RVVCALL(OPFVV1, vfsqrt_v_h, OP_UU_H, H2, H2, float16_sqrt) +RVVCALL(OPFVV1, vfsqrt_v_w, OP_UU_W, H4, H4, float32_sqrt) +RVVCALL(OPFVV1, vfsqrt_v_d, OP_UU_D, H8, H8, float64_sqrt) +GEN_VEXT_V_ENV(vfsqrt_v_h, 2, 2, clearh) +GEN_VEXT_V_ENV(vfsqrt_v_w, 4, 4, clearl) +GEN_VEXT_V_ENV(vfsqrt_v_d, 8, 8, clearq)
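GEN_VEXT_V_ENV above is the generic masked unary-op loop: elements whose mask bit is clear are skipped (left undisturbed), and the tail beyond vl is zeroed by CLEAR_FN. A stripped-down sketch of that control flow, with plain doubles and a bool array standing in for softfloat values and the v0 mask register (names illustrative):

    #include <math.h>
    #include <stdbool.h>
    #include <stddef.h>

    void vfsqrt_sketch(double *vd, const double *vs2, const bool *mask,
                       bool vm, size_t vl, size_t vlmax)
    {
        for (size_t i = 0; i < vl; i++) {
            if (!vm && !mask[i]) {
                continue;           /* inactive element: vd[i] undisturbed */
            }
            vd[i] = sqrt(vs2[i]);   /* per-element op, as do_vfsqrt_v_* */
        }
        for (size_t i = vl; i < vlmax; i++) {
            vd[i] = 0;              /* CLEAR_FN zeroes the tail */
        }
    }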
From patchwork Mon Mar 9 12:14:34 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: guoren@linux.alibaba.com, qemu-riscv@nongnu.org, qemu-devel@nongnu.org, wxy194768@alibaba-inc.com, wenmeng_zhang@c-sky.com, LIU Zhiwei
Subject: [PATCH v3 37/60] target/riscv: vector floating-point sign-injection instructions
Date: Mon, 9 Mar 2020 20:14:34 +0800
Message-Id: <20200309121457.13719-38-zhiwei_liu@c-sky.com>
In-Reply-To: <20200309121457.13719-1-zhiwei_liu@c-sky.com>
References: <20200309121457.13719-1-zhiwei_liu@c-sky.com>

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
target/riscv/helper.h | 19 +++++++ target/riscv/insn32.decode | 6 ++ target/riscv/insn_trans/trans_rvv.inc.c | 8 +++ target/riscv/vector_helper.c | 76 +++++++++++++++++++++++++ 4 files changed, 109 insertions(+)
diff --git a/target/riscv/helper.h b/target/riscv/helper.h index 9d1f443d02..efbd5d306d 100644 --- a/target/riscv/helper.h +++ b/target/riscv/helper.h @@ -922,3 +922,22 @@ DEF_HELPER_6(vfmin_vf_d, void, ptr, ptr, i64, ptr, env, i32) DEF_HELPER_6(vfmax_vf_h, void, ptr, ptr, i64, ptr, env, i32) DEF_HELPER_6(vfmax_vf_w, void, ptr, ptr, i64, ptr, env, i32) DEF_HELPER_6(vfmax_vf_d, void, ptr, ptr, i64, ptr, env, i32) + +DEF_HELPER_6(vfsgnj_vv_h, void, ptr, ptr, ptr, ptr, env, i32)
+DEF_HELPER_6(vfsgnj_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfsgnj_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfsgnjn_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfsgnjn_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfsgnjn_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfsgnjx_vv_h, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfsgnjx_vv_w, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfsgnjx_vv_d, void, ptr, ptr, ptr, ptr, env, i32) +DEF_HELPER_6(vfsgnj_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfsgnj_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfsgnj_vf_d, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfsgnjn_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfsgnjn_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfsgnjn_vf_d, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfsgnjx_vf_h, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfsgnjx_vf_w, void, ptr, ptr, i64, ptr, env, i32) +DEF_HELPER_6(vfsgnjx_vf_d, void, ptr, ptr, i64, ptr, env, i32) diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index e42cbcebc0..d149eccd5c 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -489,6 +489,12 @@ vfmin_vv 000100 . ..... ..... 001 ..... 1010111 @r_vm vfmin_vf 000100 . ..... ..... 101 ..... 1010111 @r_vm vfmax_vv 000110 . ..... ..... 001 ..... 1010111 @r_vm vfmax_vf 000110 . ..... ..... 101 ..... 1010111 @r_vm +vfsgnj_vv 001000 . ..... ..... 001 ..... 1010111 @r_vm +vfsgnj_vf 001000 . ..... ..... 101 ..... 1010111 @r_vm +vfsgnjn_vv 001001 . ..... ..... 001 ..... 1010111 @r_vm +vfsgnjn_vf 001001 . ..... ..... 101 ..... 1010111 @r_vm +vfsgnjx_vv 001010 . ..... ..... 001 ..... 1010111 @r_vm +vfsgnjx_vf 001010 . ..... ..... 101 ..... 1010111 @r_vm vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 
1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 7e7e59c0d6..ff4ebd3f8c 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -1877,3 +1877,11 @@ GEN_OPFVV_TRANS(vfmin_vv, opfvv_check) GEN_OPFVV_TRANS(vfmax_vv, opfvv_check) GEN_OPFVF_TRANS(vfmin_vf, opfvf_check) GEN_OPFVF_TRANS(vfmax_vf, opfvf_check) + +/* Vector Floating-Point Sign-Injection Instructions */ +GEN_OPFVV_TRANS(vfsgnj_vv, opfvv_check) +GEN_OPFVV_TRANS(vfsgnjn_vv, opfvv_check) +GEN_OPFVV_TRANS(vfsgnjx_vv, opfvv_check) +GEN_OPFVF_TRANS(vfsgnj_vf, opfvf_check) +GEN_OPFVF_TRANS(vfsgnjn_vf, opfvf_check) +GEN_OPFVF_TRANS(vfsgnjx_vf, opfvf_check) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 3bcfce10aa..79e261ff1a 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -3555,3 +3555,79 @@ RVVCALL(OPFVF2, vfmax_vf_d, OP_UUU_D, H8, H8, float64_max) GEN_VEXT_VF(vfmax_vf_h, 2, 2, clearh) GEN_VEXT_VF(vfmax_vf_w, 4, 4, clearl) GEN_VEXT_VF(vfmax_vf_d, 8, 8, clearq) + +/* Vector Floating-Point Sign-Injection Instructions */ +static uint16_t fsgnj16(uint16_t a, uint16_t b, float_status *s) +{ + return deposit64(b, 0, 15, a); +} +static uint32_t fsgnj32(uint32_t a, uint32_t b, float_status *s) +{ + return deposit64(b, 0, 31, a); +} +static uint64_t fsgnj64(uint64_t a, uint64_t b, float_status *s) +{ + return deposit64(b, 0, 63, a); +} +RVVCALL(OPFVV2, vfsgnj_vv_h, OP_UUU_H, H2, H2, H2, fsgnj16) +RVVCALL(OPFVV2, vfsgnj_vv_w, OP_UUU_W, H4, H4, H4, fsgnj32) +RVVCALL(OPFVV2, vfsgnj_vv_d, OP_UUU_D, H8, H8, H8, fsgnj64) +GEN_VEXT_VV_ENV(vfsgnj_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfsgnj_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfsgnj_vv_d, 8, 8, clearq) +RVVCALL(OPFVF2, vfsgnj_vf_h, OP_UUU_H, H2, H2, fsgnj16) +RVVCALL(OPFVF2, vfsgnj_vf_w, OP_UUU_W, H4, H4, fsgnj32) +RVVCALL(OPFVF2, vfsgnj_vf_d, OP_UUU_D, H8, H8, fsgnj64) +GEN_VEXT_VF(vfsgnj_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfsgnj_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfsgnj_vf_d, 8, 8, clearq) + +static uint16_t fsgnjn16(uint16_t a, uint16_t b, float_status *s) +{ + return deposit64(~b, 0, 15, a); +} +static uint32_t fsgnjn32(uint32_t a, uint32_t b, float_status *s) +{ + return deposit64(~b, 0, 31, a); +} +static uint64_t fsgnjn64(uint64_t a, uint64_t b, float_status *s) +{ + return deposit64(~b, 0, 63, a); +} +RVVCALL(OPFVV2, vfsgnjn_vv_h, OP_UUU_H, H2, H2, H2, fsgnjn16) +RVVCALL(OPFVV2, vfsgnjn_vv_w, OP_UUU_W, H4, H4, H4, fsgnjn32) +RVVCALL(OPFVV2, vfsgnjn_vv_d, OP_UUU_D, H8, H8, H8, fsgnjn64) +GEN_VEXT_VV_ENV(vfsgnjn_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfsgnjn_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfsgnjn_vv_d, 8, 8, clearq) +RVVCALL(OPFVF2, vfsgnjn_vf_h, OP_UUU_H, H2, H2, fsgnjn16) +RVVCALL(OPFVF2, vfsgnjn_vf_w, OP_UUU_W, H4, H4, fsgnjn32) +RVVCALL(OPFVF2, vfsgnjn_vf_d, OP_UUU_D, H8, H8, fsgnjn64) +GEN_VEXT_VF(vfsgnjn_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfsgnjn_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfsgnjn_vf_d, 8, 8, clearq) + +static uint16_t fsgnjx16(uint16_t a, uint16_t b, float_status *s) +{ + return deposit64(b ^ a, 0, 15, a); +} +static uint32_t fsgnjx32(uint32_t a, uint32_t b, float_status *s) +{ + return deposit64(b ^ a, 0, 31, a); +} +static uint64_t fsgnjx64(uint64_t a, uint64_t b, float_status *s) +{ + return deposit64(b ^ a, 0, 63, a); +} +RVVCALL(OPFVV2, vfsgnjx_vv_h, OP_UUU_H, H2, H2, H2, fsgnjx16) +RVVCALL(OPFVV2, vfsgnjx_vv_w, OP_UUU_W, H4, H4, H4, fsgnjx32) +RVVCALL(OPFVV2, vfsgnjx_vv_d, OP_UUU_D, H8, H8, H8, fsgnjx64) 
+GEN_VEXT_VV_ENV(vfsgnjx_vv_h, 2, 2, clearh) +GEN_VEXT_VV_ENV(vfsgnjx_vv_w, 4, 4, clearl) +GEN_VEXT_VV_ENV(vfsgnjx_vv_d, 8, 8, clearq) +RVVCALL(OPFVF2, vfsgnjx_vf_h, OP_UUU_H, H2, H2, fsgnjx16) +RVVCALL(OPFVF2, vfsgnjx_vf_w, OP_UUU_W, H4, H4, fsgnjx32) +RVVCALL(OPFVF2, vfsgnjx_vf_d, OP_UUU_D, H8, H8, fsgnjx64) +GEN_VEXT_VF(vfsgnjx_vf_h, 2, 2, clearh) +GEN_VEXT_VF(vfsgnjx_vf_w, 4, 4, clearl) +GEN_VEXT_VF(vfsgnjx_vf_d, 8, 8, clearq)
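The sign-injection helpers above are pure bit manipulation on the IEEE encodings: deposit64(b, 0, N-1, a) keeps the low N-1 magnitude bits from a and takes the sign bit from b (vfsgnj), from ~b (vfsgnjn), or from a ^ b (vfsgnjx). Written out with explicit masks for the 64-bit case (a sketch equivalent to fsgnj*64 above, not the patch's code):

    #include <stdint.h>

    #define SIGN64 (1ULL << 63)

    /* magnitude from a; sign from b, from ~b, or from sign(a) XOR sign(b) */
    static uint64_t sgnj64 (uint64_t a, uint64_t b) { return (a & ~SIGN64) | (b & SIGN64); }
    static uint64_t sgnjn64(uint64_t a, uint64_t b) { return (a & ~SIGN64) | (~b & SIGN64); }
    static uint64_t sgnjx64(uint64_t a, uint64_t b) { return a ^ (b & SIGN64); }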