
[net-next,v1] net/xdp: fix zero-size allocation warning in xskq_create()

Message ID 20230928204440.543-1-andrew.kanner@gmail.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series [net-next,v1] net/xdp: fix zero-size allocation warning in xskq_create()

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1342 this patch: 1342
netdev/cc_maintainers warning 4 maintainers not CCed: ast@kernel.org hawk@kernel.org john.fastabend@gmail.com daniel@iogearbox.net
netdev/build_clang success Errors and warnings before: 1364 this patch: 1364
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 1365 this patch: 1365
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 9 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Andrew Kanner Sept. 28, 2023, 8:44 p.m. UTC
Syzkaller reported the following issue:
 ------------[ cut here ]------------
 WARNING: CPU: 0 PID: 2807 at mm/vmalloc.c:3247 __vmalloc_node_range (mm/vmalloc.c:3361)
 Modules linked in:
 CPU: 0 PID: 2807 Comm: repro Not tainted 6.6.0-rc2+ #12
 Hardware name: Generic DT based system
 unwind_backtrace from show_stack (arch/arm/kernel/traps.c:258)
 show_stack from dump_stack_lvl (lib/dump_stack.c:107 (discriminator 1))
 dump_stack_lvl from __warn (kernel/panic.c:633 kernel/panic.c:680)
 __warn from warn_slowpath_fmt (./include/linux/context_tracking.h:153 kernel/panic.c:700)
 warn_slowpath_fmt from __vmalloc_node_range (mm/vmalloc.c:3361 (discriminator 3))
 __vmalloc_node_range from vmalloc_user (mm/vmalloc.c:3478)
 vmalloc_user from xskq_create (net/xdp/xsk_queue.c:40)
 xskq_create from xsk_setsockopt (net/xdp/xsk.c:953 net/xdp/xsk.c:1286)
 xsk_setsockopt from __sys_setsockopt (net/socket.c:2308)
 __sys_setsockopt from ret_fast_syscall (arch/arm/kernel/entry-common.S:68)

xskq_get_ring_size() uses the struct_size() macro to safely calculate
the size of struct xsk_queue plus q->nentries desc members. But the
syzkaller repro was able to set q->nentries, whose value is initially
taken from copy_from_sockptr(), high enough for struct_size() to
return SIZE_MAX. The subsequent PAGE_ALIGN(size) in that case
overflows the size_t value and sets it to 0, which triggers the
WARN_ON_ONCE in vmalloc_user() -> __vmalloc_node_range().

The issue is reproducible on a 32-bit arm kernel.
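
For illustration, here is a minimal userspace sketch of the wraparound;
PAGE_SIZE and PAGE_ALIGN below are local stand-ins for the kernel
macros, assuming 4 KiB pages:

 #include <stdio.h>
 #include <stdint.h>

 /* Local stand-ins for the kernel macros, assuming 4 KiB pages. */
 #define PAGE_SIZE	4096UL
 #define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

 int main(void)
 {
 	/* struct_size() saturates to SIZE_MAX when the multiplication
 	 * overflows, which for this repro only happens on 32-bit.
 	 */
 	size_t size = SIZE_MAX;

 	/* SIZE_MAX + PAGE_SIZE - 1 wraps around, so the "aligned"
 	 * size becomes 0 and vmalloc_user(0) trips the WARN_ON_ONCE
 	 * in __vmalloc_node_range().
 	 */
 	printf("PAGE_ALIGN(SIZE_MAX) = %zu\n", (size_t)PAGE_ALIGN(size));
 	return 0;
 }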

Reported-and-tested-by: syzbot+fae676d3cf469331fc89@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/000000000000c84b4705fb31741e@google.com/T/
Link: https://syzkaller.appspot.com/bug?extid=fae676d3cf469331fc89
Fixes: 9f78bf330a66 ("xsk: support use vaddr as ring")
Signed-off-by: Andrew Kanner <andrew.kanner@gmail.com>
---
RFC notes:

It was found that net/xdp/xsk.c:xsk_setsockopt() uses
copy_from_sockptr() to get the number of entries (int) for the
XDP_RX_RING/XDP_TX_RING and XDP_UMEM_FILL_RING/XDP_UMEM_COMPLETION_RING
cases.

Next, in xsk_init_queue() there are two sanity checks, (entries == 0)
and (!is_power_of_2(entries)), for which -EINVAL is returned.

After that, net/xdp/xsk_queue.c:xskq_create() calculates the size by
multiplying the number of entries (int) by at least the size of a u64;
a rough sketch of the helper follows.
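
For reference, the helper looks roughly like this (as introduced by
commit 9f78bf330a66); struct_size() saturates to SIZE_MAX on overflow:

 static size_t xskq_get_ring_size(struct xsk_queue *q, bool umem_queue)
 {
 	struct xdp_umem_ring *umem_ring;
 	struct xdp_rxtx_ring *rxtx_ring;

 	/* desc is a u64 for umem rings and a 16-byte struct xdp_desc
 	 * for rx/tx rings; struct_size() returns SIZE_MAX if
 	 * sizeof(*ring) + nentries * sizeof(desc) overflows.
 	 */
 	if (umem_queue)
 		return struct_size(umem_ring, desc, q->nentries);
 	return struct_size(rxtx_ring, desc, q->nentries);
 }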

I wonder if there should be an upper bound (e.g. a third sanity check
inside xsk_init_queue()). It seems that without an upper limit it is
quite easy to overflow the allocated size (SIZE_MAX), especially on
32-bit architectures, for example the arm nodes used by syzkaller.
A sketch of such a check follows this paragraph.
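
As a sketch only, such a check could look like the diff below; the
context line is from my reading of xsk_init_queue(), and
XSK_MAX_ENTRIES is a placeholder name and value, not an existing
kernel limit:

 --- a/net/xdp/xsk.c
 +++ b/net/xdp/xsk.c
 @@ static int xsk_init_queue(u32 entries, struct xsk_queue **queue,
  	if (entries == 0 || *queue || !is_power_of_2(entries))
  		return -EINVAL;
  
 +	/* Hypothetical upper bound, chosen so that the ring size
 +	 * cannot overflow size_t even on 32-bit.
 +	 */
 +	if (entries > XSK_MAX_ENTRIES)
 +		return -EINVAL;
 +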

In this patch I added a naive check for SIZE_MAX, which helps to skip
the zero-size allocation after the overflow, but maybe it's not quite
right. Please suggest if you have any thoughts about an appropriate
limit for the size of these xdp rings.

PS: the initial number of entries is 0x20000000 in the syzkaller repro:
syscall(__NR_setsockopt, (intptr_t)r[0], 0x11b, 3, 0x20000040, 0x20);

Link: https://syzkaller.appspot.com/text?tag=ReproC&x=10910f18280000
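
Decoding that call (my reading, assuming the standard uapi values):
level 0x11b is SOL_XDP (283) and optname 3 is XDP_TX_RING, so the
repro requests a TX ring with 0x20000000 entries. A sketch of what
the raw syscall amounts to, with xsk_fd assumed to be an AF_XDP
socket created elsewhere:

 #include <sys/socket.h>
 #include <linux/if_xdp.h>	/* XDP_TX_RING */

 static int trigger(int xsk_fd)
 {
 	int entries = 0x20000000;	/* 2^29: nonzero power of 2,
 					 * so both sanity checks pass */

 	/* 2^29 descriptors * 16 bytes (struct xdp_desc) = 8 GiB,
 	 * which exceeds SIZE_MAX on 32-bit, so struct_size()
 	 * saturates to SIZE_MAX.
 	 */
 	return setsockopt(xsk_fd, SOL_XDP, XDP_TX_RING,
 			  &entries, sizeof(entries));
 }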

 net/xdp/xsk_queue.c | 3 +++
 1 file changed, 3 insertions(+)

Comments

Alexander Lobakin Oct. 2, 2023, 1:52 p.m. UTC | #1
From: Andrew Kanner <andrew.kanner@gmail.com>
Date: Thu, 28 Sep 2023 23:44:40 +0300

> Syzkaller reported the following issue:

[...]

> PS: the initial number of entries is 0x20000000 in syzkaller repro:
> syscall(__NR_setsockopt, (intptr_t)r[0], 0x11b, 3, 0x20000040, 0x20);
> 
> Link: https://syzkaller.appspot.com/text?tag=ReproC&x=10910f18280000
> 
>  net/xdp/xsk_queue.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/net/xdp/xsk_queue.c b/net/xdp/xsk_queue.c
> index f8905400ee07..1bc7fb1f14ae 100644
> --- a/net/xdp/xsk_queue.c
> +++ b/net/xdp/xsk_queue.c
> @@ -34,6 +34,9 @@ struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
>  	q->ring_mask = nentries - 1;
>  
>  	size = xskq_get_ring_size(q, umem_queue);
> +	if (size == SIZE_MAX)

unlikely().

> +		return NULL;
> +
>  	size = PAGE_ALIGN(size);
>  
>  	q->ring = vmalloc_user(size);

Thanks,
Olek
Andrew Kanner Oct. 2, 2023, 10:03 p.m. UTC | #2
On Mon, Oct 02, 2023 at 03:52:44PM +0200, Alexander Lobakin wrote:
> From: Andrew Kanner <andrew.kanner@gmail.com>
> Date: Thu, 28 Sep 2023 23:44:40 +0300
> 
> > Syzkaller reported the following issue:
> 
> [...]
> 
> > [...]
> >
> > diff --git a/net/xdp/xsk_queue.c b/net/xdp/xsk_queue.c
> > index f8905400ee07..1bc7fb1f14ae 100644
> > --- a/net/xdp/xsk_queue.c
> > +++ b/net/xdp/xsk_queue.c
> > @@ -34,6 +34,9 @@ struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
> >  	q->ring_mask = nentries - 1;
> >  
> >  	size = xskq_get_ring_size(q, umem_queue);
> > +	if (size == SIZE_MAX)
> 
> unlikely().
> 
> > +		return NULL;
> > +
> >  	size = PAGE_ALIGN(size);
> >  
> >  	q->ring = vmalloc_user(size);
> 
> Thanks,
> Olek

Thanks, Olek.
That is a reasonable optimization; I'll add it in v2.

--
pw-bot: cr

Andrew Kanner

Patch

diff --git a/net/xdp/xsk_queue.c b/net/xdp/xsk_queue.c
index f8905400ee07..1bc7fb1f14ae 100644
--- a/net/xdp/xsk_queue.c
+++ b/net/xdp/xsk_queue.c
@@ -34,6 +34,9 @@ struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
 	q->ring_mask = nentries - 1;
 
 	size = xskq_get_ring_size(q, umem_queue);
+	if (size == SIZE_MAX)
+		return NULL;
+
 	size = PAGE_ALIGN(size);
 
 	q->ring = vmalloc_user(size);
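
Given the review feedback above, the v2 hunk would presumably wrap
the check in unlikely(), along the lines of:

 	size = xskq_get_ring_size(q, umem_queue);
+	if (unlikely(size == SIZE_MAX))
+		return NULL;
+
 	size = PAGE_ALIGN(size);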