
target/riscv/vector_helper.c: Avoid shifting negative in fractional LMUL checking

Message ID: 20240306161036.938931-1-max.chou@sifive.com

Commit Message

Max Chou March 6, 2024, 4:10 p.m. UTC
When vlmul is larger than 5, the original fractional LMUL check may
produce an unexpected result.

Signed-off-by: Max Chou <max.chou@sifive.com>
---
 target/riscv/vector_helper.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
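
For illustration, here is a minimal standalone C sketch (not QEMU code; the
example vlenb value is an assumption) of the problem the patch title points
at: for the fractional LMUL encodings vlmul = 6 and 7, the old shift count
8 - 3 - vlmul is negative, and shifting by a negative count is undefined
behavior in C, while the rewritten expression keeps the count in range.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Assumed example value: vlenb = 16 bytes, i.e. VLEN = 128 bits. */
    uint32_t vlenb = 16;

    /* Fractional LMUL encodings: vlmul = 5 (1/8), 6 (1/4), 7 (1/2). */
    for (int vlmul = 5; vlmul <= 7; vlmul++) {
        /* Old form: 8 - 3 - vlmul goes negative for vlmul > 5; shifting
         * by a negative count is undefined behavior, so only print it. */
        printf("vlmul=%d: old shift count = %d\n", vlmul, 8 - 3 - vlmul);

        /* New form: 8 - vlmul stays in 1..3, so the shift is defined. */
        printf("vlmul=%d: (vlenb << 3) >> (8 - vlmul) = %u\n",
               vlmul, (vlenb << 3) >> (8 - vlmul));
    }
    return 0;
}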

Comments

Daniel Henrique Barboza March 6, 2024, 5:17 p.m. UTC | #1
On 3/6/24 13:10, Max Chou wrote:
> When vlmul is larger than 5, the original fractional LMUL check may
> produce an unexpected result.
> 
> Signed-off-by: Max Chou <max.chou@sifive.com>
> ---

There's already a fix for it in the ML:

"[PATCH v3] target/riscv: Fix shift count overflow"

https://lore.kernel.org/qemu-riscv/20240225174114.5298-1-demin.han@starfivetech.com/


Hopefully it'll be queued for the next PR. Thanks,


Daniel


Max Chou March 7, 2024, 2:54 p.m. UTC | #2
Looks like I missed this one.

Thank you Daniel

Max.

On 2024/3/7 1:17 AM, Daniel Henrique Barboza wrote:
>
>
> On 3/6/24 13:10, Max Chou wrote:
>> When vlmul is larger than 5, the original fractional LMUL check may
>> produce an unexpected result.
>>
>> Signed-off-by: Max Chou <max.chou@sifive.com>
>> ---
>
> There's already a fix for it in the ML:
>
> "[PATCH v3] target/riscv: Fix shift count overflow"
>
> https://lore.kernel.org/qemu-riscv/20240225174114.5298-1-demin.han@starfivetech.com/
>
>
> Hopefully it'll be queued for the next PR. Thanks,
>
>
> Daniel

Patch

diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 84cec73eb20..adceec378fd 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -53,10 +53,9 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
          * VLEN * LMUL >= SEW
          * VLEN >> (8 - lmul) >= sew
          * (vlenb << 3) >> (8 - lmul) >= sew
-         * vlenb >> (8 - 3 - lmul) >= sew
          */
         if (vlmul == 4 ||
-            cpu->cfg.vlenb >> (8 - 3 - vlmul) < sew) {
+            ((cpu->cfg.vlenb << 3) >> (8 - vlmul)) < sew) {
             vill = true;
         }
     }
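
For reference, a standalone sketch of the patched condition (the helper name
and the vlenb value are assumptions for illustration, not QEMU code), with a
worked example: at VLEN = 128 bits and LMUL = 1/8 (vlmul = 5), VLEN * LMUL is
16 bits, so SEW = 16 is accepted while SEW = 32 must set vill.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical standalone version of the patched check. */
static bool frac_lmul_vill(uint32_t vlenb, int vlmul, uint32_t sew)
{
    /* vlmul == 4 is a reserved encoding; otherwise require
     * VLEN * LMUL >= SEW, i.e. (vlenb << 3) >> (8 - vlmul) >= sew. */
    return vlmul == 4 || ((vlenb << 3) >> (8 - vlmul)) < sew;
}

int main(void)
{
    uint32_t vlenb = 16; /* assumed: VLEN = 128 bits */

    printf("sew=16: vill=%d\n", frac_lmul_vill(vlenb, 5, 16)); /* prints 0 */
    printf("sew=32: vill=%d\n", frac_lmul_vill(vlenb, 5, 32)); /* prints 1 */
    return 0;
}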