
veth: Fix use after free in XDP_REDIRECT

Message ID 20230314153351.2201328-1-sbohrer@cloudflare.com (mailing list archive)
State Accepted
Commit 7c10131803e45269ddc6c817f19ed649110f3cae
Delegated to: Netdev Maintainers
Series veth: Fix use after free in XDP_REDIRECT

Checks

Context Check Description
netdev/series_format warning Single patches do not need cover letters; Target tree name not specified in the subject
netdev/tree_selection success Guessed tree name to be net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 18 this patch: 18
netdev/cc_maintainers fail 3 blamed authors not CCed: toke@redhat.com daniel@iogearbox.net john.fastabend@gmail.com; 9 maintainers not CCed: pabeni@redhat.com kuba@kernel.org edumazet@google.com toke@redhat.com daniel@iogearbox.net john.fastabend@gmail.com ast@kernel.org hawk@kernel.org davem@davemloft.net
netdev/build_clang success Errors and warnings before: 18 this patch: 18
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 18 this patch: 18
netdev/checkpatch warning WARNING: line length of 83 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Shawn Bohrer March 14, 2023, 3:33 p.m. UTC
Commit 718a18a0c8a6 ("veth: Rework veth_xdp_rcv_skb in order to accept
non-linear skb") introduced a bug where it tried to use
pskb_expand_head() if the headroom was less than XDP_PACKET_HEADROOM.
However, pskb_expand_head() uses kmalloc to expand the head, which
later allows consume_skb() to free the skb while it is still in use
by AF_XDP.

Previously, if the headroom was less than XDP_PACKET_HEADROOM, we
continued on to allocate a new skb from pages; this patch restores that
behavior.
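
For readers tracing the report below, the failing sequence is roughly
the following.  This is a simplified, non-compilable sketch of the
pre-fix XDP_REDIRECT path in veth; the surrounding code and arguments
are abbreviated for illustration:

	/* 1. Headroom is too small but the skb is otherwise usable, so the
	 *    old code grew the head in place.  pskb_expand_head() moves the
	 *    head into a kmalloc()ed slab object.
	 */
	if (skb_headroom(skb) < XDP_PACKET_HEADROOM &&
	    pskb_expand_head(skb, VETH_XDP_HEADROOM, 0, GFP_ATOMIC))
		goto drop;

	/* 2. The xdp_buff handed to the XDP program is built directly on
	 *    top of that kmalloc()ed head.
	 *
	 * 3. On XDP_REDIRECT, the driver takes page references on the data
	 *    and then releases the skb before the redirect is performed.
	 *    A page reference does not keep a kmalloc()ed object alive, so
	 *    consume_skb() -> skb_release_data() frees the head that the
	 *    xdp_buff still points to.
	 */
	consume_skb(skb);

	/* 4. xdp_do_redirect() -> __xsk_map_redirect() -> __xsk_rcv() then
	 *    copies the packet out of the freed head, which KASAN reports
	 *    as the use-after-free below.
	 */
	xdp_do_redirect(rq->dev, xdp, xdp_prog);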

BUG: KASAN: use-after-free in __xsk_rcv+0x18d/0x2c0
Read of size 78 at addr ffff888976250154 by task napi/iconduit-g/148640

CPU: 5 PID: 148640 Comm: napi/iconduit-g Kdump: loaded Tainted: G           O       6.1.4-cloudflare-kasan-2023.1.2 #1
Hardware name: Quanta Computer Inc. QuantaPlex T41S-2U/S2S-MB, BIOS S2S_3B10.03 06/21/2018
Call Trace:
  <TASK>
  dump_stack_lvl+0x34/0x48
  print_report+0x170/0x473
  ? __xsk_rcv+0x18d/0x2c0
  kasan_report+0xad/0x130
  ? __xsk_rcv+0x18d/0x2c0
  kasan_check_range+0x149/0x1a0
  memcpy+0x20/0x60
  __xsk_rcv+0x18d/0x2c0
  __xsk_map_redirect+0x1f3/0x490
  ? veth_xdp_rcv_skb+0x89c/0x1ba0 [veth]
  xdp_do_redirect+0x5ca/0xd60
  veth_xdp_rcv_skb+0x935/0x1ba0 [veth]
  ? __netif_receive_skb_list_core+0x671/0x920
  ? veth_xdp+0x670/0x670 [veth]
  veth_xdp_rcv+0x304/0xa20 [veth]
  ? do_xdp_generic+0x150/0x150
  ? veth_xdp_rcv_one+0xde0/0xde0 [veth]
  ? _raw_spin_lock_bh+0xe0/0xe0
  ? newidle_balance+0x887/0xe30
  ? __perf_event_task_sched_in+0xdb/0x800
  veth_poll+0x139/0x571 [veth]
  ? veth_xdp_rcv+0xa20/0xa20 [veth]
  ? _raw_spin_unlock+0x39/0x70
  ? finish_task_switch.isra.0+0x17e/0x7d0
  ? __switch_to+0x5cf/0x1070
  ? __schedule+0x95b/0x2640
  ? io_schedule_timeout+0x160/0x160
  __napi_poll+0xa1/0x440
  napi_threaded_poll+0x3d1/0x460
  ? __napi_poll+0x440/0x440
  ? __kthread_parkme+0xc6/0x1f0
  ? __napi_poll+0x440/0x440
  kthread+0x2a2/0x340
  ? kthread_complete_and_exit+0x20/0x20
  ret_from_fork+0x22/0x30
  </TASK>

Freed by task 148640:
  kasan_save_stack+0x23/0x50
  kasan_set_track+0x21/0x30
  kasan_save_free_info+0x2a/0x40
  ____kasan_slab_free+0x169/0x1d0
  slab_free_freelist_hook+0xd2/0x190
  __kmem_cache_free+0x1a1/0x2f0
  skb_release_data+0x449/0x600
  consume_skb+0x9f/0x1c0
  veth_xdp_rcv_skb+0x89c/0x1ba0 [veth]
  veth_xdp_rcv+0x304/0xa20 [veth]
  veth_poll+0x139/0x571 [veth]
  __napi_poll+0xa1/0x440
  napi_threaded_poll+0x3d1/0x460
  kthread+0x2a2/0x340
  ret_from_fork+0x22/0x30

The buggy address belongs to the object at ffff888976250000
  which belongs to the cache kmalloc-2k of size 2048
The buggy address is located 340 bytes inside of
  2048-byte region [ffff888976250000, ffff888976250800)

The buggy address belongs to the physical page:
page:00000000ae18262a refcount:2 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x976250
head:00000000ae18262a order:3 compound_mapcount:0 compound_pincount:0
flags: 0x2ffff800010200(slab|head|node=0|zone=2|lastcpupid=0x1ffff)
raw: 002ffff800010200 0000000000000000 dead000000000122 ffff88810004cf00
raw: 0000000000000000 0000000080080008 00000002ffffffff 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
  ffff888976250000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
  ffff888976250080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ffff888976250100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                  ^
  ffff888976250180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
  ffff888976250200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb

Fixes: 718a18a0c8a6 ("veth: Rework veth_xdp_rcv_skb in order to accept non-linear skb")
Signed-off-by: Shawn Bohrer <sbohrer@cloudflare.com>
---
 drivers/net/veth.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

Comments

Lorenzo Bianconi March 14, 2023, 5:13 p.m. UTC | #1
> Commit 718a18a0c8a6 ("veth: Rework veth_xdp_rcv_skb in order to accept
> non-linear skb") introduced a bug where it tried to use
> pskb_expand_head() if the headroom was less than XDP_PACKET_HEADROOM.
>
> [...]
>
> Fixes: 718a18a0c8a6 ("veth: Rework veth_xdp_rcv_skb in order to accept non-linear skb")
> Signed-off-by: Shawn Bohrer <sbohrer@cloudflare.com>

Thx for fixing it.

Acked-by: Lorenzo Bianconi <lorenzo@kernel.org>

Toshiaki Makita March 15, 2023, 6:19 a.m. UTC | #2
On 2023/03/15 0:33, Shawn Bohrer wrote:
> Commit 718a18a0c8a6 ("veth: Rework veth_xdp_rcv_skb in order to accept
> non-linear skb") introduced a bug where it tried to use
> pskb_expand_head() if the headroom was less than XDP_PACKET_HEADROOM.
>
> [...]
>
> Fixes: 718a18a0c8a6 ("veth: Rework veth_xdp_rcv_skb in order to accept non-linear skb")
> Signed-off-by: Shawn Bohrer <sbohrer@cloudflare.com>

Acked-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Toke Høiland-Jørgensen March 15, 2023, 6:08 p.m. UTC | #3
Shawn Bohrer <sbohrer@cloudflare.com> writes:

> Commit 718a18a0c8a6 ("veth: Rework veth_xdp_rcv_skb in order to accept
> non-linear skb") introduced a bug where it tried to use
> pskb_expand_head() if the headroom was less than XDP_PACKET_HEADROOM.
>
> [...]
>
> Fixes: 718a18a0c8a6 ("veth: Rework veth_xdp_rcv_skb in order to accept non-linear skb")
> Signed-off-by: Shawn Bohrer <sbohrer@cloudflare.com>

Acked-by: Toke Høiland-Jørgensen <toke@kernel.org>
patchwork-bot+netdevbpf@kernel.org March 16, 2023, 4:20 a.m. UTC | #4
Hello:

This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Tue, 14 Mar 2023 10:33:51 -0500 you wrote:
> Commit 718a18a0c8a6 ("veth: Rework veth_xdp_rcv_skb in order to accept
> non-linear skb") introduced a bug where it tried to use
> pskb_expand_head() if the headroom was less than XDP_PACKET_HEADROOM.
> However, pskb_expand_head() uses kmalloc to expand the head, which
> later allows consume_skb() to free the skb while it is still in use
> by AF_XDP.
> 
> [...]

Here is the summary with links:
  - veth: Fix use after free in XDP_REDIRECT
    https://git.kernel.org/netdev/net/c/7c10131803e4

You are awesome, thank you!

Patch

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 1bb54de7124d..6b10aa3883c5 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -708,7 +708,7 @@  static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 	u32 frame_sz;
 
 	if (skb_shared(skb) || skb_head_is_locked(skb) ||
-	    skb_shinfo(skb)->nr_frags) {
+	    skb_shinfo(skb)->nr_frags || skb_headroom(skb) < XDP_PACKET_HEADROOM) {
 		u32 size, len, max_head_size, off;
 		struct sk_buff *nskb;
 		struct page *page;
@@ -773,9 +773,6 @@  static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
 
 		consume_skb(skb);
 		skb = nskb;
-	} else if (skb_headroom(skb) < XDP_PACKET_HEADROOM &&
-		   pskb_expand_head(skb, VETH_XDP_HEADROOM, 0, GFP_ATOMIC)) {
-		goto drop;
 	}
 
 	/* SKB "head" area always have tailroom for skb_shared_info */