| Message ID | 20231206041332.GA5721@ubuntu (mailing list archive) |
|---|---|
| State | Superseded |
| Delegated to: | Netdev Maintainers |
| Series | [v2] net/rose: Fix Use-After-Free in rose_ioctl |
On Wed, Dec 6, 2023 at 5:13 AM Hyunwoo Kim <v4bel@theori.io> wrote:
>
> Because rose_ioctl() accesses sk->sk_receive_queue
> without holding a sk->sk_receive_queue.lock, it can
> cause a race with rose_accept().
> A use-after-free for skb occurs with the following flow.
> ```
> rose_ioctl() -> skb_peek()
> rose_accept() -> skb_dequeue() -> kfree_skb()
> ```
> Add sk->sk_receive_queue.lock to rose_ioctl() to fix this issue.
>

Please add a Fixes: tag

> Signed-off-by: Hyunwoo Kim <v4bel@theori.io>
> ---
> v1 -> v2: Use sk->sk_receive_queue.lock instead of lock_sock.
> ---
>  net/rose/af_rose.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
> index 0cc5a4e19900..841c238de222 100644
> --- a/net/rose/af_rose.c
> +++ b/net/rose/af_rose.c
> @@ -1316,8 +1316,10 @@ static int rose_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
>                 struct sk_buff *skb;
>                 long amount = 0L;
>                 /* These two are safe on a single CPU system as only user tasks fiddle here */
> +               spin_lock(&sk->sk_receive_queue.lock);

You need interrupt safety here.

sk_receive_queue can be fed from interrupt, that would potentially deadlock.

>                 if ((skb = skb_peek(&sk->sk_receive_queue)) != NULL)
>                         amount = skb->len;
> +               spin_unlock(&sk->sk_receive_queue.lock);
>                 return put_user(amount, (unsigned int __user *) argp);
>         }
>
> --
> 2.25.1
>
Dear,

On Wed, Dec 06, 2023 at 11:33:15AM +0100, Eric Dumazet wrote:
> On Wed, Dec 6, 2023 at 5:13 AM Hyunwoo Kim <v4bel@theori.io> wrote:
> >
> > Because rose_ioctl() accesses sk->sk_receive_queue
> > without holding a sk->sk_receive_queue.lock, it can
> > cause a race with rose_accept().
> > A use-after-free for skb occurs with the following flow.
> > ```
> > rose_ioctl() -> skb_peek()
> > rose_accept() -> skb_dequeue() -> kfree_skb()
> > ```
> > Add sk->sk_receive_queue.lock to rose_ioctl() to fix this issue.
> >
>
> Please add a Fixes: tag
>
> > Signed-off-by: Hyunwoo Kim <v4bel@theori.io>
> > ---
> > v1 -> v2: Use sk->sk_receive_queue.lock instead of lock_sock.
> > ---
> >  net/rose/af_rose.c | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
> > index 0cc5a4e19900..841c238de222 100644
> > --- a/net/rose/af_rose.c
> > +++ b/net/rose/af_rose.c
> > @@ -1316,8 +1316,10 @@ static int rose_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
> >                 struct sk_buff *skb;
> >                 long amount = 0L;
> >                 /* These two are safe on a single CPU system as only user tasks fiddle here */
> > +               spin_lock(&sk->sk_receive_queue.lock);
>
> You need interrupt safety here.
>
> sk_receive_queue can be fed from interrupt, that would potentially deadlock.

I want to change spin_lock to spin_lock_irqsave, is this okay?


Regards,
Hyunwoo Kim

> >                 if ((skb = skb_peek(&sk->sk_receive_queue)) != NULL)
> >                         amount = skb->len;
> > +               spin_unlock(&sk->sk_receive_queue.lock);
> >                 return put_user(amount, (unsigned int __user *) argp);
> >         }
> >
> > --
> > 2.25.1
> >
On Wed, Dec 6, 2023 at 11:52 AM Hyunwoo Kim <v4bel@theori.io> wrote:
>
> Dear,
>
> On Wed, Dec 06, 2023 at 11:33:15AM +0100, Eric Dumazet wrote:
> > On Wed, Dec 6, 2023 at 5:13 AM Hyunwoo Kim <v4bel@theori.io> wrote:
> > >
> > > Because rose_ioctl() accesses sk->sk_receive_queue
> > > without holding a sk->sk_receive_queue.lock, it can
> > > cause a race with rose_accept().
> > > A use-after-free for skb occurs with the following flow.
> > > ```
> > > rose_ioctl() -> skb_peek()
> > > rose_accept() -> skb_dequeue() -> kfree_skb()
> > > ```
> > > Add sk->sk_receive_queue.lock to rose_ioctl() to fix this issue.
> > >
> >
> > Please add a Fixes: tag
> >
> > > Signed-off-by: Hyunwoo Kim <v4bel@theori.io>
> > > ---
> > > v1 -> v2: Use sk->sk_receive_queue.lock instead of lock_sock.
> > > ---
> > >  net/rose/af_rose.c | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> > > diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
> > > index 0cc5a4e19900..841c238de222 100644
> > > --- a/net/rose/af_rose.c
> > > +++ b/net/rose/af_rose.c
> > > @@ -1316,8 +1316,10 @@ static int rose_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
> > >                 struct sk_buff *skb;
> > >                 long amount = 0L;
> > >                 /* These two are safe on a single CPU system as only user tasks fiddle here */
> > > +               spin_lock(&sk->sk_receive_queue.lock);
> >
> > You need interrupt safety here.
> >
> > sk_receive_queue can be fed from interrupt, that would potentially deadlock.
>
> I want to change spin_lock to spin_lock_irqsave, is this okay?

Either spin_lock_irq() or spin_lock_irqsave() will work.

>
> Regards,
> Hyunwoo Kim
>
> > >                 if ((skb = skb_peek(&sk->sk_receive_queue)) != NULL)
> > >                         amount = skb->len;
> > > +               spin_unlock(&sk->sk_receive_queue.lock);
> > >                 return put_user(amount, (unsigned int __user *) argp);
> > >         }
> > >
> > > --
> > > 2.25.1
> > >
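As a concrete illustration of that suggestion, below is a minimal sketch of how the TIOCINQ branch of rose_ioctl() could take the queue lock with spin_lock_irq(). This is only an example of the approach Eric describes, not the actual v3 patch; it assumes the snippet sits in the TIOCINQ case shown in the diff and that the ioctl runs in process context with interrupts enabled.

```c
	case TIOCINQ: {
		struct sk_buff *skb;
		long amount = 0L;

		/*
		 * Illustrative sketch (not the posted v3): take the queue
		 * lock with IRQs disabled so that code feeding
		 * sk_receive_queue from interrupt context cannot deadlock
		 * against this path.
		 */
		spin_lock_irq(&sk->sk_receive_queue.lock);
		if ((skb = skb_peek(&sk->sk_receive_queue)) != NULL)
			amount = skb->len;
		spin_unlock_irq(&sk->sk_receive_queue.lock);
		return put_user(amount, (unsigned int __user *) argp);
	}
```

spin_lock_irqsave() would work equally well here; spin_lock_irq() is merely the shorter form when the caller is known to run with interrupts enabled.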
diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
index 0cc5a4e19900..841c238de222 100644
--- a/net/rose/af_rose.c
+++ b/net/rose/af_rose.c
@@ -1316,8 +1316,10 @@ static int rose_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
                struct sk_buff *skb;
                long amount = 0L;
                /* These two are safe on a single CPU system as only user tasks fiddle here */
+               spin_lock(&sk->sk_receive_queue.lock);
                if ((skb = skb_peek(&sk->sk_receive_queue)) != NULL)
                        amount = skb->len;
+               spin_unlock(&sk->sk_receive_queue.lock);
                return put_user(amount, (unsigned int __user *) argp);
        }
Because rose_ioctl() accesses sk->sk_receive_queue
without holding a sk->sk_receive_queue.lock, it can
cause a race with rose_accept().
A use-after-free for skb occurs with the following flow.
```
rose_ioctl() -> skb_peek()
rose_accept() -> skb_dequeue() -> kfree_skb()
```
Add sk->sk_receive_queue.lock to rose_ioctl() to fix this issue.

Signed-off-by: Hyunwoo Kim <v4bel@theori.io>
---
v1 -> v2: Use sk->sk_receive_queue.lock instead of lock_sock.
---
 net/rose/af_rose.c | 2 ++
 1 file changed, 2 insertions(+)
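To make the race in the changelog concrete, here is a simplified, annotated sketch of the two competing paths. It paraphrases the relevant logic for illustration only and is not a verbatim excerpt of net/rose/af_rose.c; the step numbering shows one possible interleaving.

```c
/*
 * Simplified illustration of the race (paraphrased, not verbatim kernel
 * code). Without sk->sk_receive_queue.lock held, the skb returned by
 * skb_peek() in rose_ioctl() can be dequeued and freed by a concurrent
 * rose_accept() before rose_ioctl() reads skb->len.
 */

/* Task A: rose_ioctl(), TIOCINQ, unpatched */
if ((skb = skb_peek(&sk->sk_receive_queue)) != NULL)   /* (1) peek at skb    */
	amount = skb->len;                              /* (4) use-after-free */

/* Task B: rose_accept(), racing on the same listening socket */
skb = skb_dequeue(&sk->sk_receive_queue);               /* (2) unlink skb     */
/* ... graft the new socket ... */
kfree_skb(skb);                                          /* (3) free skb       */
```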