diff mbox series

[1/1] xen-netfront: do not assume sk_buff_head list is empty in error handling

Message ID 1568605619-22219-1-git-send-email-dongli.zhang@oracle.com (mailing list archive)
State Accepted
Commit 00b368502d18f790ab715e055869fd4bb7484a9b

Commit Message

Dongli Zhang Sept. 16, 2019, 3:46 a.m. UTC
When skb_shinfo(skb) is not able to cache an extra fragment (that is,
skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS), xennet_fill_frags() assumes
the sk_buff_head list is already empty. As a result, cons is increased only
by 1 and the function returns to the error handling path in xennet_poll().

However, if the sk_buff_head list is not empty, queue->rx.rsp_cons may be
set incorrectly. That is, queue->rx.rsp_cons would point to rx ring
buffer entries whose queue->rx_skbs[i] and queue->grant_rx_ref[i] have
already been cleared to NULL. This leads to a NULL pointer access in the
next iteration when processing rx ring buffer entries.

Below is how xennet_poll() does error handling. All remaining entries in
tmpq are accounted to queue->rx.rsp_cons without assuming how many
outstanding skbs remain in the list.

 985 static int xennet_poll(struct napi_struct *napi, int budget)
... ...
1032           if (unlikely(xennet_set_skb_gso(skb, gso))) {
1033                   __skb_queue_head(&tmpq, skb);
1034                   queue->rx.rsp_cons += skb_queue_len(&tmpq);
1035                   goto err;
1036           }

It is better to handle the error in the same way in both places.

Fixes: ad4f15dc2c70 ("xen/netfront: don't bug in case of too many frags")
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 drivers/net/xen-netfront.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

David Miller Sept. 16, 2019, 7:47 p.m. UTC | #1
From: Dongli Zhang <dongli.zhang@oracle.com>
Date: Mon, 16 Sep 2019 11:46:59 +0800


Applied and queued up for -stable.

Patch

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 8d33970..5f5722b 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -906,7 +906,7 @@  static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 		}
 		if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
-			queue->rx.rsp_cons = ++cons;
+			queue->rx.rsp_cons = ++cons + skb_queue_len(list);
 			kfree_skb(nskb);
 			return ~0U;
 		}