[bpf-next] bpf/xdp: optimize bpf_xdp_pointer to avoid reading sinfo

Message ID 168554475365.3262482.9868965521545045945.stgit@firesoul (mailing list archive)
State Superseded
Delegated to: BPF
Headers show

Checks

Context Check Description
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-6 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for s390x with gcc
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 34 this patch: 34
netdev/cc_maintainers warning 15 maintainers not CCed: kuba@kernel.org hawk@kernel.org daniel@iogearbox.net yhs@fb.com kpsingh@kernel.org martin.lau@linux.dev john.fastabend@gmail.com sdf@google.com song@kernel.org andrii@kernel.org jolsa@kernel.org davem@davemloft.net pabeni@redhat.com haoluo@google.com edumazet@google.com
netdev/build_clang success Errors and warnings before: 8 this patch: 8
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 34 this patch: 34
netdev/checkpatch warning CHECK: Unnecessary parentheses around 'offset < size'
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-7 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for veristat
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on s390x with gcc

Commit Message

Jesper Dangaard Brouer May 31, 2023, 2:52 p.m. UTC
We have observed a significant performance degradation in samples/bpf
xdp1 and xdp2, due to the XDP multi-buffer "xdp.frags" handling added
in commit 772251742262 ("samples/bpf: fixup some tools to be able
to support xdp multibuffer").

This patch reduces the overhead by avoiding the read/load of the
shared_info (sinfo) memory area when the XDP packet doesn't have any
frags. This improves performance because sinfo is located in another
cacheline.

Function bpf_xdp_pointer() is used by the BPF helpers
bpf_xdp_load_bytes() and bpf_xdp_store_bytes(). As a note to reviewers:
xdp_get_buff_len() can potentially access sinfo.

A perf report shows the bpf_xdp_pointer() utilization being reduced
from 4.19% to 3.37% (on a CPU E5-1650 @3.60GHz).

The BPF kfunc bpf_dynptr_slice() also uses bpf_xdp_pointer(), so it
should benefit as well.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 net/core/filter.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
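[Editorial note] The reordered control flow in the patch below can be sketched as a minimal userspace model. The struct and function names (`xdp_model`, `xdp_pointer`) and simplified types are illustrative stand-ins, not the kernel code; the point is only to show that the common single-buffer case returns without ever touching the sinfo fields:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures (illustration only). */
struct frag { uint32_t size; char *addr; };
struct xdp_model {
	char *data, *data_end;	/* linear area */
	int has_frags;		/* models xdp_buff_has_frags() */
	uint32_t nr_frags;	/* models sinfo->nr_frags (sinfo cacheline) */
	struct frag frags[2];
	uint32_t buff_len;	/* models xdp_get_buff_len() */
};

/* Mirrors the patched bpf_xdp_pointer() control flow: requests inside
 * the linear area take the fast path and never read the frag fields. */
static void *xdp_pointer(struct xdp_model *xdp, uint32_t offset, uint32_t len)
{
	uint32_t size = (uint32_t)(xdp->data_end - xdp->data);
	char *addr = xdp->data;
	uint32_t i;

	if (offset > 0xffff || len > 0xffff)
		return NULL;	/* kernel: ERR_PTR(-EFAULT) */

	if (offset < size)	/* linear area: fast path, no sinfo access */
		goto out;

	if (!xdp->has_frags)
		goto out;

	if (offset + len > xdp->buff_len)
		return NULL;	/* kernel: ERR_PTR(-EINVAL) */

	offset -= size;
	for (i = 0; i < xdp->nr_frags; i++) {	/* paged area */
		uint32_t frag_size = xdp->frags[i].size;

		if (offset < frag_size) {
			addr = xdp->frags[i].addr;
			size = frag_size;
			break;
		}
		offset -= frag_size;
	}
out:
	/* NULL here means "not contiguous": callers fall back to a copy. */
	return offset + len <= size ? addr + offset : NULL;
}
```

In this model a non-frags packet resolves entirely from the first cacheline, which is the optimization the commit message describes.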

Comments

Lorenzo Bianconi May 31, 2023, 3:43 p.m. UTC | #1
> Currently we observed a significant performance degradation in
> samples/bpf xdp1 and xdp2, due XDP multibuffer "xdp.frags" handling,
> added in commit 772251742262 ("samples/bpf: fixup some tools to be able
> to support xdp multibuffer").
> 
> This patch reduce the overhead by avoiding to read/load shared_info
> (sinfo) memory area, when XDP packet don't have any frags. This improves
> performance because sinfo is located in another cacheline.
> 
> Function bpf_xdp_pointer() is used by BPF helpers bpf_xdp_load_bytes()
> and bpf_xdp_store_bytes(). As a help to reviewers, xdp_get_buff_len() can
> potentially access sinfo.
> 
> Perf report show bpf_xdp_pointer() percentage utilization being reduced
> from 4,19% to 3,37% (on CPU E5-1650 @3.60GHz).
> 
> The BPF kfunc bpf_dynptr_slice() also use bpf_xdp_pointer(). Thus, it
> should also take effect for that.
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
>  net/core/filter.c |   12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 968139f4a1ac..a635f537d499 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -3948,20 +3948,24 @@ void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
>  
>  void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
>  {
> -	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
>  	u32 size = xdp->data_end - xdp->data;
> +	struct skb_shared_info *sinfo;
>  	void *addr = xdp->data;
>  	int i;
>  
>  	if (unlikely(offset > 0xffff || len > 0xffff))
>  		return ERR_PTR(-EFAULT);
>  
> -	if (offset + len > xdp_get_buff_len(xdp))
> -		return ERR_PTR(-EINVAL);
> +	if (likely((offset < size))) /* linear area */
> +		goto out;

Hi Jesper,

please correct me if I am wrong, but looking at the code, this way
bpf_xdp_pointer() will return NULL (and not ERR_PTR(-EINVAL)) if:
- offset < size
- offset + len > xdp_get_buff_len()

In that case I would say bpf_xdp_copy_buf() will copy the full packet
starting from offset, leaving some part of the auxiliary buffer possibly
uninitialized. Do you think it is an issue?

Regards,
Lorenzo

>  
> -	if (offset < size) /* linear area */
> +	if (likely(!xdp_buff_has_frags(xdp)))
>  		goto out;
>  
> +	if (offset + len > xdp_get_buff_len(xdp))
> +		return ERR_PTR(-EINVAL);
> +
> +	sinfo = xdp_get_shared_info_from_buff(xdp);
>  	offset -= size;
>  	for (i = 0; i < sinfo->nr_frags; i++) { /* paged area */
>  		u32 frag_size = skb_frag_size(&sinfo->frags[i]);
> 
>
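[Editorial note] The behavior change Lorenzo points out can be reproduced with a small model comparing the two check orderings. The function names (`old_order`, `new_order`) and the result enum are hypothetical; they only encode the control flow of the diff, where NULL means "fall back to bpf_xdp_copy_buf()":

```c
#include <assert.h>
#include <stdint.h>

enum res { PTR_DIRECT, NULL_COPY, ERR_EINVAL };

/* Original order: the total-length bounds check runs before the
 * linear-area test, so an over-long request is always rejected. */
static enum res old_order(uint32_t offset, uint32_t len,
			  uint32_t size, uint32_t buff_len)
{
	if (offset + len > buff_len)
		return ERR_EINVAL;
	if (offset < size)
		return offset + len <= size ? PTR_DIRECT : NULL_COPY;
	/* frag walk elided */
	return NULL_COPY;
}

/* Patched order: the linear-area fast path is taken before the bounds
 * check, so a request starting inside the linear area but extending
 * past the buffer returns NULL (copy fallback) instead of -EINVAL. */
static enum res new_order(uint32_t offset, uint32_t len,
			  uint32_t size, uint32_t buff_len, int has_frags)
{
	if (offset < size)
		return offset + len <= size ? PTR_DIRECT : NULL_COPY;
	if (!has_frags)
		return offset + len <= size ? PTR_DIRECT : NULL_COPY;
	if (offset + len > buff_len)
		return ERR_EINVAL;
	/* frag walk elided */
	return NULL_COPY;
}
```

With size == buff_len == 64 and a request of offset 10, len 100, the old order returns ERR_EINVAL while the new order returns NULL_COPY, which is exactly the uninitialized-copy scenario described above.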
Toke Høiland-Jørgensen May 31, 2023, 4:24 p.m. UTC | #2
Lorenzo Bianconi <lorenzo@kernel.org> writes:

>> Currently we observed a significant performance degradation in
>> samples/bpf xdp1 and xdp2, due XDP multibuffer "xdp.frags" handling,
>> added in commit 772251742262 ("samples/bpf: fixup some tools to be able
>> to support xdp multibuffer").
>> 
>> This patch reduce the overhead by avoiding to read/load shared_info
>> (sinfo) memory area, when XDP packet don't have any frags. This improves
>> performance because sinfo is located in another cacheline.
>> 
>> Function bpf_xdp_pointer() is used by BPF helpers bpf_xdp_load_bytes()
>> and bpf_xdp_store_bytes(). As a help to reviewers, xdp_get_buff_len() can
>> potentially access sinfo.
>> 
>> Perf report show bpf_xdp_pointer() percentage utilization being reduced
>> from 4,19% to 3,37% (on CPU E5-1650 @3.60GHz).
>> 
>> The BPF kfunc bpf_dynptr_slice() also use bpf_xdp_pointer(). Thus, it
>> should also take effect for that.
>> 
>> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
>> ---
>>  net/core/filter.c |   12 ++++++++----
>>  1 file changed, 8 insertions(+), 4 deletions(-)
>> 
>> diff --git a/net/core/filter.c b/net/core/filter.c
>> index 968139f4a1ac..a635f537d499 100644
>> --- a/net/core/filter.c
>> +++ b/net/core/filter.c
>> @@ -3948,20 +3948,24 @@ void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
>>  
>>  void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
>>  {
>> -	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
>>  	u32 size = xdp->data_end - xdp->data;
>> +	struct skb_shared_info *sinfo;
>>  	void *addr = xdp->data;
>>  	int i;
>>  
>>  	if (unlikely(offset > 0xffff || len > 0xffff))
>>  		return ERR_PTR(-EFAULT);
>>  
>> -	if (offset + len > xdp_get_buff_len(xdp))
>> -		return ERR_PTR(-EINVAL);
>> +	if (likely((offset < size))) /* linear area */
>> +		goto out;
>
> Hi Jesper,
>
> please correct me if I am wrong but looking at the code, in this way
> bpf_xdp_pointer() will return NULL (and not ERR_PTR(-EINVAL)) if:
> - offset < size
> - offset + len > xdp_get_buff_len()
>
> doing so I would say bpf_xdp_copy_buf() will copy the full packet starting from
> offset leaving some part of the auxiliary buffer possible uninitialized.
> Do you think it is an issue?

Yeah, you're right, bpf_xdp_load_bytes() should fail if trying to read
beyond the frame, and in this case it won't for non-frags; that's a
change in behaviour we probably shouldn't be making...

-Toke
Jesper Dangaard Brouer May 31, 2023, 5:54 p.m. UTC | #3
On 31/05/2023 18.24, Toke Høiland-Jørgensen wrote:
> Lorenzo Bianconi <lorenzo@kernel.org> writes:
> 
>>> Currently we observed a significant performance degradation in
>>> samples/bpf xdp1 and xdp2, due XDP multibuffer "xdp.frags" handling,
>>> added in commit 772251742262 ("samples/bpf: fixup some tools to be able
>>> to support xdp multibuffer").
>>>
>>> This patch reduce the overhead by avoiding to read/load shared_info
>>> (sinfo) memory area, when XDP packet don't have any frags. This improves
>>> performance because sinfo is located in another cacheline.
>>>
>>> Function bpf_xdp_pointer() is used by BPF helpers bpf_xdp_load_bytes()
>>> and bpf_xdp_store_bytes(). As a help to reviewers, xdp_get_buff_len() can
>>> potentially access sinfo.
>>>
>>> Perf report show bpf_xdp_pointer() percentage utilization being reduced
>>> from 4,19% to 3,37% (on CPU E5-1650 @3.60GHz).
>>>
>>> The BPF kfunc bpf_dynptr_slice() also use bpf_xdp_pointer(). Thus, it
>>> should also take effect for that.
>>>
>>> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
>>> ---
>>>   net/core/filter.c |   12 ++++++++----
>>>   1 file changed, 8 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/net/core/filter.c b/net/core/filter.c
>>> index 968139f4a1ac..a635f537d499 100644
>>> --- a/net/core/filter.c
>>> +++ b/net/core/filter.c
>>> @@ -3948,20 +3948,24 @@ void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
>>>   
>>>   void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
>>>   {
>>> -	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
>>>   	u32 size = xdp->data_end - xdp->data;
>>> +	struct skb_shared_info *sinfo;
>>>   	void *addr = xdp->data;
>>>   	int i;
>>>   
>>>   	if (unlikely(offset > 0xffff || len > 0xffff))
>>>   		return ERR_PTR(-EFAULT);
>>>   
>>> -	if (offset + len > xdp_get_buff_len(xdp))
>>> -		return ERR_PTR(-EINVAL);
>>> +	if (likely((offset < size))) /* linear area */
>>> +		goto out;
>>
>> Hi Jesper,
>>
>> please correct me if I am wrong but looking at the code, in this way
>> bpf_xdp_pointer() will return NULL (and not ERR_PTR(-EINVAL)) if:
>> - offset < size
>> - offset + len > xdp_get_buff_len()
>>
>> doing so I would say bpf_xdp_copy_buf() will copy the full packet starting from
>> offset leaving some part of the auxiliary buffer possible uninitialized.
>> Do you think it is an issue?
> 
> Yeah, you're right, bpf_xdp_load_bytes() should fail if trying to read
> beyond the frame, and in this case it won't for non-frags; that's a
> change in behaviour we probably shouldn't be making...
> 

Thanks for spotting this!
I will work on a V2 tomorrow.

--Jesper
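[Editorial note] One possible shape for the V2, preserving the -EINVAL semantics while still keeping non-frags packets off the sinfo cacheline, is sketched below. This is speculation by the editor, not the actual V2 patch; `fixed_order` and the result enum are hypothetical names. The key observation is that for a non-frags packet the total buffer length equals the linear size, so the bounds check can use `size` directly:

```c
#include <assert.h>
#include <stdint.h>

enum res { PTR_DIRECT, NULL_COPY, ERR_EINVAL };

/* Hypothetical reordering: the non-frags path validates against the
 * linear size (== buff_len for such packets) and never reads sinfo;
 * only frags packets pay for the xdp_get_buff_len() sinfo access. */
static enum res fixed_order(uint32_t offset, uint32_t len, uint32_t size,
			    uint32_t buff_len, int has_frags)
{
	if (!has_frags) {
		if (offset + len > size)	/* buff_len == size here */
			return ERR_EINVAL;
		return PTR_DIRECT;	/* offset < size for len > 0 */
	}
	if (offset + len > buff_len)	/* touches sinfo, frags only */
		return ERR_EINVAL;
	if (offset < size)
		return offset + len <= size ? PTR_DIRECT : NULL_COPY;
	/* frag walk elided */
	return NULL_COPY;
}
```

Under this ordering the over-long non-frags request from Lorenzo's example is rejected with -EINVAL again, matching the pre-patch behavior.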
Patch

diff --git a/net/core/filter.c b/net/core/filter.c
index 968139f4a1ac..a635f537d499 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3948,20 +3948,24 @@  void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off,
 
 void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len)
 {
-	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
 	u32 size = xdp->data_end - xdp->data;
+	struct skb_shared_info *sinfo;
 	void *addr = xdp->data;
 	int i;
 
 	if (unlikely(offset > 0xffff || len > 0xffff))
 		return ERR_PTR(-EFAULT);
 
-	if (offset + len > xdp_get_buff_len(xdp))
-		return ERR_PTR(-EINVAL);
+	if (likely((offset < size))) /* linear area */
+		goto out;
 
-	if (offset < size) /* linear area */
+	if (likely(!xdp_buff_has_frags(xdp)))
 		goto out;
 
+	if (offset + len > xdp_get_buff_len(xdp))
+		return ERR_PTR(-EINVAL);
+
+	sinfo = xdp_get_shared_info_from_buff(xdp);
 	offset -= size;
 	for (i = 0; i < sinfo->nr_frags; i++) { /* paged area */
 		u32 frag_size = skb_frag_size(&sinfo->frags[i]);