diff mbox series

powerpc/bpf: populate extable entries only during the last pass

Message ID 20230406073519.75059-1-hbathini@linux.ibm.com (mailing list archive)
State Not Applicable
Delegated to: BPF
Series powerpc/bpf: populate extable entries only during the last pass

Checks

Context Check Description
netdev/tree_selection success Not a local patch
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-7 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-20 fail Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-34 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-35 success Logs for test_verifier on x86_64 with llvm-16

Commit Message

Hari Bathini April 6, 2023, 7:35 a.m. UTC
Since commit 85e031154c7c ("powerpc/bpf: Perform complete extra passes
to update addresses"), two additional passes are performed to avoid
space and CPU time wastage on powerpc. But these extra passes led to
WARN_ON_ONCE() hits in bpf_add_extable_entry(). Fix it by not adding
extable entries during the extra pass.

Fixes: 85e031154c7c ("powerpc/bpf: Perform complete extra passes to update addresses")
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp32.c | 2 +-
 arch/powerpc/net/bpf_jit_comp64.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

Comments

Christophe Leroy April 7, 2023, 6:01 a.m. UTC | #1
Le 06/04/2023 à 09:35, Hari Bathini a écrit :
> Since commit 85e031154c7c ("powerpc/bpf: Perform complete extra passes
> to update addresses"), two additional passes are performed to avoid
> space and CPU time wastage on powerpc. But these extra passes led to
> WARN_ON_ONCE() hits in bpf_add_extable_entry(). Fix it by not adding
> extable entries during the extra pass.

Are you sure this change is correct?
During the extra pass the code can get shrunk or expanded (within the
limits of the size of the preliminary pass). Shouldn't extable entries
be populated during the last pass?

Christophe

> 
> Fixes: 85e031154c7c ("powerpc/bpf: Perform complete extra passes to update addresses")
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> ---
>   arch/powerpc/net/bpf_jit_comp32.c | 2 +-
>   arch/powerpc/net/bpf_jit_comp64.c | 2 +-
>   2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index 7f91ea064c08..e788b1fbeee6 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -977,7 +977,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>   			if (size != BPF_DW && !fp->aux->verifier_zext)
>   				EMIT(PPC_RAW_LI(dst_reg_h, 0));
>   
> -			if (BPF_MODE(code) == BPF_PROBE_MEM) {
> +			if (BPF_MODE(code) == BPF_PROBE_MEM && !extra_pass) {
>   				int insn_idx = ctx->idx - 1;
>   				int jmp_off = 4;
>   
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 8dd3cabaa83a..1cc2777ec846 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -921,7 +921,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>   			if (size != BPF_DW && insn_is_zext(&insn[i + 1]))
>   				addrs[++i] = ctx->idx * 4;
>   
> -			if (BPF_MODE(code) == BPF_PROBE_MEM) {
> +			if (BPF_MODE(code) == BPF_PROBE_MEM && !extra_pass) {
>   				ret = bpf_add_extable_entry(fp, image, pass, ctx, ctx->idx - 1,
>   							    4, dst_reg);
>   				if (ret)
Hari Bathini April 13, 2023, 3:08 a.m. UTC | #2
Hello Christophe,

Thanks for the review.

On 07/04/23 11:31 am, Christophe Leroy wrote:
> 
> 
> Le 06/04/2023 à 09:35, Hari Bathini a écrit :
>> Since commit 85e031154c7c ("powerpc/bpf: Perform complete extra passes
>> to update addresses"), two additional passes are performed to avoid
>> space and CPU time wastage on powerpc. But these extra passes led to
>> WARN_ON_ONCE() hits in bpf_add_extable_entry(). Fix it by not adding
>> extable entries during the extra pass.
> 
> Are you sure this change is correct?

Actually, I was in two minds about that owing to commit 04c04205bc35
("bpf powerpc: Remove extra_pass from bpf_jit_build_body()").

> During the extra pass the code can get shrunk or expanded (within the
> limits of the size of the preliminary pass). Shouldn't extable entries
> be populated during the last pass?

Unlikely, but the intention there was to guard against a regression in
case some subsequent change ends up making extra_pass always 'false'.

- Hari

>>
>> Fixes: 85e031154c7c ("powerpc/bpf: Perform complete extra passes to update addresses")
>> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
>> ---
>>    arch/powerpc/net/bpf_jit_comp32.c | 2 +-
>>    arch/powerpc/net/bpf_jit_comp64.c | 2 +-
>>    2 files changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
>> index 7f91ea064c08..e788b1fbeee6 100644
>> --- a/arch/powerpc/net/bpf_jit_comp32.c
>> +++ b/arch/powerpc/net/bpf_jit_comp32.c
>> @@ -977,7 +977,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>>    			if (size != BPF_DW && !fp->aux->verifier_zext)
>>    				EMIT(PPC_RAW_LI(dst_reg_h, 0));
>>    
>> -			if (BPF_MODE(code) == BPF_PROBE_MEM) {
>> +			if (BPF_MODE(code) == BPF_PROBE_MEM && !extra_pass) {
>>    				int insn_idx = ctx->idx - 1;
>>    				int jmp_off = 4;
>>    
>> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
>> index 8dd3cabaa83a..1cc2777ec846 100644
>> --- a/arch/powerpc/net/bpf_jit_comp64.c
>> +++ b/arch/powerpc/net/bpf_jit_comp64.c
>> @@ -921,7 +921,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>>    			if (size != BPF_DW && insn_is_zext(&insn[i + 1]))
>>    				addrs[++i] = ctx->idx * 4;
>>    
>> -			if (BPF_MODE(code) == BPF_PROBE_MEM) {
>> +			if (BPF_MODE(code) == BPF_PROBE_MEM && !extra_pass) {
>>    				ret = bpf_add_extable_entry(fp, image, pass, ctx, ctx->idx - 1,
>>    							    4, dst_reg);
>>    				if (ret)
Naveen N. Rao April 24, 2023, 11:55 a.m. UTC | #3
Hari Bathini wrote:
> Hello Christophe,
> 
> Thanks for the review.
> 
> On 07/04/23 11:31 am, Christophe Leroy wrote:
>> 
>> 
>> Le 06/04/2023 à 09:35, Hari Bathini a écrit :
>>> Since commit 85e031154c7c ("powerpc/bpf: Perform complete extra passes
>>> to update addresses"), two additional passes are performed to avoid
>>> space and CPU time wastage on powerpc. But these extra passes led to
>>> WARN_ON_ONCE() hits in bpf_add_extable_entry(). Fix it by not adding
>>> extable entries during the extra pass.
>> 
>> Are you sure this change is correct?
> 
> Actually, I was in two minds about that owing to commit 04c04205bc35
> ("bpf powerpc: Remove extra_pass from bpf_jit_build_body()").

Right, but Christophe's series adding complete passes during the
extra_pass phase added the 'extra_pass' parameter back to
bpf_jit_build_body().

> 
>> During the extra pass the code can get shrunk or expanded (within the
>> limits of the size of the preliminary pass). Shouldn't extable entries
>> be populated during the last pass?
> 
> Unlikely, but the intention there was to guard against a regression in
> case some subsequent change ends up making extra_pass always 'false'.

But the current approach risks generating incorrect offsets in the
extable. The main motivation for the extra pass is to generate more
compact code, so there is a good chance that offsets are going to change
(especially with bpf subprogs).

> 
> - Hari
> 
>>>
>>> Fixes: 85e031154c7c ("powerpc/bpf: Perform complete extra passes to update addresses")
>>> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
>>> ---
>>>    arch/powerpc/net/bpf_jit_comp32.c | 2 +-
>>>    arch/powerpc/net/bpf_jit_comp64.c | 2 +-
>>>    2 files changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
>>> index 7f91ea064c08..e788b1fbeee6 100644
>>> --- a/arch/powerpc/net/bpf_jit_comp32.c
>>> +++ b/arch/powerpc/net/bpf_jit_comp32.c
>>> @@ -977,7 +977,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>>>    			if (size != BPF_DW && !fp->aux->verifier_zext)
>>>    				EMIT(PPC_RAW_LI(dst_reg_h, 0));
>>>    
>>> -			if (BPF_MODE(code) == BPF_PROBE_MEM) {
>>> +			if (BPF_MODE(code) == BPF_PROBE_MEM && !extra_pass) {

It is probably better to pass 'extra_pass' into bpf_add_extable_entry() 
to keep all those checks together.


- Naveen
Hari Bathini April 25, 2023, 6:59 a.m. UTC | #4
Hi Naveen,

On 24/04/23 5:25 pm, Naveen N. Rao wrote:
> Hari Bathini wrote:
>> Hello Christophe,
>>
>> Thanks for the review.
>>
>> On 07/04/23 11:31 am, Christophe Leroy wrote:
>>>
>>>
>>> Le 06/04/2023 à 09:35, Hari Bathini a écrit :
>>>> Since commit 85e031154c7c ("powerpc/bpf: Perform complete extra passes
>>>> to update addresses"), two additional passes are performed to avoid
>>>> space and CPU time wastage on powerpc. But these extra passes led to
>>>> WARN_ON_ONCE() hits in bpf_add_extable_entry(). Fix it by not adding
>>>> extable entries during the extra pass.
>>>
>>> Are you sure this change is correct?
>>
>> Actually, I was in two minds about that owing to commit 04c04205bc35
>> ("bpf powerpc: Remove extra_pass from bpf_jit_build_body()").
> 
> Right, but Christophe's series adding complete passes during the
> extra_pass phase added the 'extra_pass' parameter back to bpf_jit_build_body().
> 
>>
>>> During the extra pass the code can get shrunk or expanded (within the
>>> limits of the size of the preliminary pass). Shouldn't extable entries
>>> be populated during the last pass?
>>
>> Unlikely, but the intention there was to guard against a regression in
>> case some subsequent change ends up making extra_pass always 'false'.
> 
> But, the current approach risks generating incorrect offsets in the 
> extable. The main motivation for the extra pass is to generate more 
> compact code, so there is a good chance that offsets are going to change 
> (especially with bpf subprogs).
> 
>>
>> - Hari
>>
>>>>
>>>> Fixes: 85e031154c7c ("powerpc/bpf: Perform complete extra passes to 
>>>> update addresses")
>>>> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
>>>> ---
>>>>    arch/powerpc/net/bpf_jit_comp32.c | 2 +-
>>>>    arch/powerpc/net/bpf_jit_comp64.c | 2 +-
>>>>    2 files changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/powerpc/net/bpf_jit_comp32.c 
>>>> b/arch/powerpc/net/bpf_jit_comp32.c
>>>> index 7f91ea064c08..e788b1fbeee6 100644
>>>> --- a/arch/powerpc/net/bpf_jit_comp32.c
>>>> +++ b/arch/powerpc/net/bpf_jit_comp32.c
>>>> @@ -977,7 +977,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 
>>>> *image, struct codegen_context *
>>>>                if (size != BPF_DW && !fp->aux->verifier_zext)
>>>>                    EMIT(PPC_RAW_LI(dst_reg_h, 0));
>>>> -            if (BPF_MODE(code) == BPF_PROBE_MEM) {
>>>> +            if (BPF_MODE(code) == BPF_PROBE_MEM && !extra_pass) {
> 
> It is probably better to pass 'extra_pass' into bpf_add_extable_entry() 
> to keep all those checks together.
> 

Thanks for the review, and also for the suggestion (offline) to reset the
index during the extra pass, which addresses my concern about a possible
regression. Posted v2.

- Hari

Patch

diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index 7f91ea064c08..e788b1fbeee6 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -977,7 +977,7 @@  int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			if (size != BPF_DW && !fp->aux->verifier_zext)
 				EMIT(PPC_RAW_LI(dst_reg_h, 0));
 
-			if (BPF_MODE(code) == BPF_PROBE_MEM) {
+			if (BPF_MODE(code) == BPF_PROBE_MEM && !extra_pass) {
 				int insn_idx = ctx->idx - 1;
 				int jmp_off = 4;
 
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 8dd3cabaa83a..1cc2777ec846 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -921,7 +921,7 @@  int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			if (size != BPF_DW && insn_is_zext(&insn[i + 1]))
 				addrs[++i] = ctx->idx * 4;
 
-			if (BPF_MODE(code) == BPF_PROBE_MEM) {
+			if (BPF_MODE(code) == BPF_PROBE_MEM && !extra_pass) {
 				ret = bpf_add_extable_entry(fp, image, pass, ctx, ctx->idx - 1,
 							    4, dst_reg);
 				if (ret)