
[bpf-next,03/12] libbpf: Fix an error in 64bit relocation value computation

Message ID 20220501190017.2577688-1-yhs@fb.com (mailing list archive)
State Superseded
Delegated to: BPF
Series bpf: Add 64bit enum value support

Checks

Context Check Description
bpf/vmtest-bpf-next-PR fail PR summary
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 2 this patch: 2
netdev/cc_maintainers warning 5 maintainers not CCed: songliubraving@fb.com netdev@vger.kernel.org kafai@fb.com john.fastabend@gmail.com kpsingh@kernel.org
netdev/build_clang success Errors and warnings before: 9 this patch: 9
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 2 this patch: 2
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 8 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-1 fail Logs for Kernel LATEST on ubuntu-latest + selftests
bpf/vmtest-bpf-next-VM_Test-2 fail Logs for Kernel LATEST on z15 + selftests

Commit Message

Yonghong Song May 1, 2022, 7 p.m. UTC
Currently, the 64bit relocation value in the instruction
is computed as follows:
  __u64 imm = insn[0].imm + ((__u64)insn[1].imm << 32)

Suppose insn[0].imm = -1 (0xffffffff) and insn[1].imm = 1.
With the above computation, insn[0].imm is first sign-extended
to 64bit -1 (0xFFFFFFFFFFFFFFFF), and adding ((__u64)insn[1].imm << 32),
i.e. 0x100000000, then wraps around to the incorrect value
0xFFFFFFFF. The correct value should be 0x1FFFFFFFF.

Changing insn[0].imm to __u32 first will prevent 64bit sign
extension and fix the issue.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/lib/bpf/relo_core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
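
For reference, the arithmetic above can be reproduced with a minimal standalone
C sketch (an editorial illustration, not part of the patch), using plain stdint
types in place of the insn[0].imm / insn[1].imm fields:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          /* stand-ins for the signed 32-bit imm fields of a ld_imm64 pair */
          int32_t imm_lo = -1;    /* insn[0].imm, 0xffffffff */
          int32_t imm_hi = 1;     /* insn[1].imm */

          /* buggy: imm_lo is sign-extended to 0xffffffffffffffff before
           * the add, so the sum wraps around to 0xffffffff */
          uint64_t buggy = imm_lo + ((uint64_t)imm_hi << 32);

          /* fixed: cast to u32 first so only the low 32 bits of imm_lo
           * contribute, giving the expected 0x1ffffffff */
          uint64_t fixed = (uint32_t)imm_lo + ((uint64_t)imm_hi << 32);

          printf("buggy = 0x%llx\n", (unsigned long long)buggy);
          printf("fixed = 0x%llx\n", (unsigned long long)fixed);
          return 0;
  }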

Comments

Dave Marchevsky May 9, 2022, 12:55 a.m. UTC | #1
On 5/1/22 3:00 PM, Yonghong Song wrote:   
> Currently, the 64bit relocation value in the instruction
> is computed as follows:
>   __u64 imm = insn[0].imm + ((__u64)insn[1].imm << 32)
> 
> Suppose insn[0].imm = -1 (0xffffffff) and insn[1].imm = 1.
> With the above computation, insn[0].imm is first sign-extended
> to 64bit -1 (0xFFFFFFFFFFFFFFFF), and adding ((__u64)insn[1].imm << 32),
> i.e. 0x100000000, then wraps around to the incorrect value
> 0xFFFFFFFF. The correct value should be 0x1FFFFFFFF.
> 
> Changing insn[0].imm to __u32 first will prevent 64bit sign
> extension and fix the issue.
> 
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  tools/lib/bpf/relo_core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Dave Marchevsky May 9, 2022, 12:56 a.m. UTC | #2
On 5/8/22 8:55 PM, Dave Marchevsky wrote:   
> On 5/1/22 3:00 PM, Yonghong Song wrote:   
>> Currently, the 64bit relocation value in the instruction
>> is computed as follows:
>>   __u64 imm = insn[0].imm + ((__u64)insn[1].imm << 32)
>>
>> Suppose insn[0].imm = -1 (0xffffffff) and insn[1].imm = 1.
>> With the above computation, insn[0].imm is first sign-extended
>> to 64bit -1 (0xFFFFFFFFFFFFFFFF), and adding ((__u64)insn[1].imm << 32),
>> i.e. 0x100000000, then wraps around to the incorrect value
>> 0xFFFFFFFF. The correct value should be 0x1FFFFFFFF.
>>
>> Changing insn[0].imm to __u32 first will prevent 64bit sign
>> extension and fix the issue.
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
>>  tools/lib/bpf/relo_core.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
> 
> Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
> 

Whoops, meant:

Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Andrii Nakryiko May 9, 2022, 10:37 p.m. UTC | #3
On Sun, May 1, 2022 at 12:00 PM Yonghong Song <yhs@fb.com> wrote:
>
> Currently, the 64bit relocation value in the instruction
> is computed as follows:
>   __u64 imm = insn[0].imm + ((__u64)insn[1].imm << 32)
>
> Suppose insn[0].imm = -1 (0xffffffff) and insn[1].imm = 1.
> With the above computation, insn[0].imm is first sign-extended
> to 64bit -1 (0xFFFFFFFFFFFFFFFF), and adding ((__u64)insn[1].imm << 32),
> i.e. 0x100000000, then wraps around to the incorrect value
> 0xFFFFFFFF. The correct value should be 0x1FFFFFFFF.
>
> Changing insn[0].imm to __u32 first will prevent 64bit sign
> extension and fix the issue.
>
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  tools/lib/bpf/relo_core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
> index 2ed94daabbe5..f25ffd03c3b1 100644
> --- a/tools/lib/bpf/relo_core.c
> +++ b/tools/lib/bpf/relo_core.c
> @@ -1024,7 +1024,7 @@ int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
>                         return -EINVAL;
>                 }
>
> -               imm = insn[0].imm + ((__u64)insn[1].imm << 32);
> +               imm = (__u32)insn[0].imm + ((__u64)insn[1].imm << 32);

great catch, it should also probably be written as | instead of + operation?

>                 if (res->validate && imm != orig_val) {
>                         pr_warn("prog '%s': relo #%d: unexpected insn #%d (LDIMM64) value: got %llu, exp %llu -> %llu\n",
>                                 prog_name, relo_idx,
> --
> 2.30.2
>
Yonghong Song May 10, 2022, 10:11 p.m. UTC | #4
On 5/9/22 3:37 PM, Andrii Nakryiko wrote:
> On Sun, May 1, 2022 at 12:00 PM Yonghong Song <yhs@fb.com> wrote:
>>
>> Currently, the 64bit relocation value in the instruction
>> is computed as follows:
>>    __u64 imm = insn[0].imm + ((__u64)insn[1].imm << 32)
>>
>> Suppose insn[0].imm = -1 (0xffffffff) and insn[1].imm = 1.
>> With the above computation, insn[0].imm is first sign-extended
>> to 64bit -1 (0xFFFFFFFFFFFFFFFF), and adding ((__u64)insn[1].imm << 32),
>> i.e. 0x100000000, then wraps around to the incorrect value
>> 0xFFFFFFFF. The correct value should be 0x1FFFFFFFF.
>>
>> Changing insn[0].imm to __u32 first will prevent 64bit sign
>> extension and fix the issue.
>>
>> Signed-off-by: Yonghong Song <yhs@fb.com>
>> ---
>>   tools/lib/bpf/relo_core.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
>> index 2ed94daabbe5..f25ffd03c3b1 100644
>> --- a/tools/lib/bpf/relo_core.c
>> +++ b/tools/lib/bpf/relo_core.c
>> @@ -1024,7 +1024,7 @@ int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
>>                          return -EINVAL;
>>                  }
>>
>> -               imm = insn[0].imm + ((__u64)insn[1].imm << 32);
>> +               imm = (__u32)insn[0].imm + ((__u64)insn[1].imm << 32);
> 
> great catch, it should also probably be written as | instead of + operation?

The '|' also works. I used '|' in other places, so will change to use 
'|' as well.
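
For illustration, the adjusted line would then read roughly as follows (a
sketch of the intended follow-up, not the committed code):

  imm = (__u32)insn[0].imm | ((__u64)insn[1].imm << 32);

Since the low and high 32-bit halves occupy disjoint bit positions, '|' and
'+' produce the same result once the sign extension is avoided; '|' simply
states the intent of combining two halves more directly.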

> 
>>                  if (res->validate && imm != orig_val) {
>>                          pr_warn("prog '%s': relo #%d: unexpected insn #%d (LDIMM64) value: got %llu, exp %llu -> %llu\n",
>>                                  prog_name, relo_idx,
>> --
>> 2.30.2
>>

Patch

diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
index 2ed94daabbe5..f25ffd03c3b1 100644
--- a/tools/lib/bpf/relo_core.c
+++ b/tools/lib/bpf/relo_core.c
@@ -1024,7 +1024,7 @@  int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
 			return -EINVAL;
 		}
 
-		imm = insn[0].imm + ((__u64)insn[1].imm << 32);
+		imm = (__u32)insn[0].imm + ((__u64)insn[1].imm << 32);
 		if (res->validate && imm != orig_val) {
 			pr_warn("prog '%s': relo #%d: unexpected insn #%d (LDIMM64) value: got %llu, exp %llu -> %llu\n",
 				prog_name, relo_idx,