
vsock/virtio: use GFP_ATOMIC under RCU read lock

Message ID: 3fbfb6e871f625f89eb578c7228e127437b1975a.1727876449.git.mst@redhat.com (mailing list archive)
State: Not Applicable
Delegated to: Netdev Maintainers
Series: vsock/virtio: use GFP_ATOMIC under RCU read lock

Checks

Context Check Description
netdev/series_format warning Single patches do not need cover letters; Target tree name not specified in the subject
netdev/tree_selection success Guessed tree name to be net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 9 this patch: 9
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 14 of 14 maintainers
netdev/build_clang success Errors and warnings before: 9 this patch: 9
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 7 this patch: 7
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 32 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-10-06--15-00 (tests: 775)

Commit Message

Michael S. Tsirkin Oct. 2, 2024, 1:41 p.m. UTC
virtio_transport_send_pkt is now called on the transport fast path,
under the RCU read lock. In that case, we have a bug: virtqueue_add_sgs
is called with GFP_KERNEL, and might sleep.

Pass the gfp flags as an argument, and use GFP_ATOMIC on
the fast path.

Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
Reported-by: Christian Brauner <brauner@kernel.org>
Cc: Stefano Garzarella <sgarzare@redhat.com>
Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---

Lightly tested. Christian, could you pls confirm this fixes the problem
for you? Stefano, it's a holiday here - could you pls help test!
Thanks!


 net/vmw_vsock/virtio_transport.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
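
For context on why the gfp_t parameter helps: an RCU read-side critical
section must not sleep, and GFP_KERNEL allocations may sleep, so the
allocation policy has to come from the caller rather than being hard-coded
in the helper. The sketch below is a minimal illustration of that pattern
with made-up names (example_pkt, example_alloc_pkt and friends are
illustrative assumptions); it is not the vsock code, which is shown in
full at the end of this page.

#include <linux/gfp.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct example_pkt {
	char payload[64];
};

/* The helper no longer chooses the allocation mode; its caller does. */
static struct example_pkt *example_alloc_pkt(gfp_t gfp)
{
	return kzalloc(sizeof(struct example_pkt), gfp);
}

/* Process context (e.g. a workqueue): sleeping allocations are fine. */
static struct example_pkt *example_alloc_from_worker(void)
{
	return example_alloc_pkt(GFP_KERNEL);
}

/* Fast path under the RCU read lock: must not sleep, so GFP_ATOMIC. */
static struct example_pkt *example_alloc_from_fast_path(void)
{
	struct example_pkt *pkt;

	rcu_read_lock();
	pkt = example_alloc_pkt(GFP_ATOMIC);
	rcu_read_unlock();

	return pkt;
}

The patch applies exactly this split: GFP_KERNEL in
virtio_transport_send_pkt_work() and GFP_ATOMIC in
virtio_transport_send_skb_fast_path().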

Comments

Stefano Garzarella Oct. 2, 2024, 2:02 p.m. UTC | #1
On Wed, Oct 02, 2024 at 09:41:42AM GMT, Michael S. Tsirkin wrote:
>virtio_transport_send_pkt is now called on the transport fast path,
>under the RCU read lock. In that case, we have a bug: virtqueue_add_sgs
>is called with GFP_KERNEL, and might sleep.
>
>Pass the gfp flags as an argument, and use GFP_ATOMIC on
>the fast path.
>
>Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
>Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
>Reported-by: Christian Brauner <brauner@kernel.org>
>Cc: Stefano Garzarella <sgarzare@redhat.com>
>Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
>Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>---
>
>Lightly tested. Christian, could you pls confirm this fixes the problem
>for you? Stefano, it's a holiday here - could you pls help test!

Sure, thanks for the quick fix! I was thinking something similar ;-)

>Thanks!
>
>
> net/vmw_vsock/virtio_transport.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
>diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>index f992f9a216f0..0cd965f24609 100644
>--- a/net/vmw_vsock/virtio_transport.c
>+++ b/net/vmw_vsock/virtio_transport.c
>@@ -96,7 +96,7 @@ static u32 virtio_transport_get_local_cid(void)
>
> /* Caller need to hold vsock->tx_lock on vq */
> static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>-				     struct virtio_vsock *vsock)
>+				     struct virtio_vsock *vsock, gfp_t gfp)
> {
> 	int ret, in_sg = 0, out_sg = 0;
> 	struct scatterlist **sgs;
>@@ -140,7 +140,7 @@ static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
> 		}
> 	}
>
>-	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
>+	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
> 	/* Usually this means that there is no more space available in
> 	 * the vq
> 	 */
>@@ -178,7 +178,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>
> 		reply = virtio_vsock_skb_reply(skb);
>
>-		ret = virtio_transport_send_skb(skb, vq, vsock);
>+		ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
> 		if (ret < 0) {
> 			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue, skb);
> 			break;
>@@ -221,7 +221,7 @@ static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
> 	if (unlikely(ret == 0))
> 		return -EBUSY;
>
>-	ret = virtio_transport_send_skb(skb, vq, vsock);

nit: maybe we can add a comment here:
         /* GFP_ATOMIC because we are in RCU section, so we can't sleep */
>+	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
> 	if (ret == 0)
> 		virtqueue_kick(vq);
>
>-- 
>MST
>

I'll run some tests and come back with R-b when it's done.

Thanks,
Stefano
Stefano Garzarella Oct. 2, 2024, 4:42 p.m. UTC | #2
On Wed, Oct 02, 2024 at 04:02:06PM GMT, Stefano Garzarella wrote:
>On Wed, Oct 02, 2024 at 09:41:42AM GMT, Michael S. Tsirkin wrote:
>>virtio_transport_send_pkt is now called on the transport fast path,
>>under the RCU read lock. In that case, we have a bug: virtqueue_add_sgs
>>is called with GFP_KERNEL, and might sleep.
>>
>>Pass the gfp flags as an argument, and use GFP_ATOMIC on
>>the fast path.
>>
>>Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
>>Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
>>Reported-by: Christian Brauner <brauner@kernel.org>
>>Cc: Stefano Garzarella <sgarzare@redhat.com>
>>Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
>>Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>>---
>>
>>Lightly tested. Christian, could you pls confirm this fixes the problem
>>for you? Stefano, it's a holiday here - could you pls help test!
>
>Sure, thanks for the quick fix! I was thinking something similar ;-)
>
>>Thanks!
>>
>>
>>net/vmw_vsock/virtio_transport.c | 8 ++++----
>>1 file changed, 4 insertions(+), 4 deletions(-)
>>
>>diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>index f992f9a216f0..0cd965f24609 100644
>>--- a/net/vmw_vsock/virtio_transport.c
>>+++ b/net/vmw_vsock/virtio_transport.c
>>@@ -96,7 +96,7 @@ static u32 virtio_transport_get_local_cid(void)
>>
>>/* Caller need to hold vsock->tx_lock on vq */
>>static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>>-				     struct virtio_vsock *vsock)
>>+				     struct virtio_vsock *vsock, gfp_t gfp)
>>{
>>	int ret, in_sg = 0, out_sg = 0;
>>	struct scatterlist **sgs;
>>@@ -140,7 +140,7 @@ static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>>		}
>>	}
>>
>>-	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
>>+	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
>>	/* Usually this means that there is no more space available in
>>	 * the vq
>>	 */
>>@@ -178,7 +178,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>>
>>		reply = virtio_vsock_skb_reply(skb);
>>
>>-		ret = virtio_transport_send_skb(skb, vq, vsock);
>>+		ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
>>		if (ret < 0) {
>>			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue, skb);
>>			break;
>>@@ -221,7 +221,7 @@ static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
>>	if (unlikely(ret == 0))
>>		return -EBUSY;
>>
>>-	ret = virtio_transport_send_skb(skb, vq, vsock);
>
>nit: maybe we can add a comment here:
>        /* GFP_ATOMIC because we are in RCU section, so we can't sleep */
>>+	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
>>	if (ret == 0)
>>		virtqueue_kick(vq);
>>
>>-- 
>>MST
>>
>
>I'll run some tests and come back with R-b when it's done.

I replicated the issue by enabling CONFIG_DEBUG_ATOMIC_SLEEP.

With that enabled, as soon as I run iperf-vsock, dmesg is flooded with
sleeping-in-atomic-context warnings. With this patch applied, everything
is fine.

I also ran the usual tests with various debugging options enabled and 
everything seems okay.

With or without adding the comment I suggested in the previous email:

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
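
For readers who want a stand-alone trigger for the class of warning
Stefano describes: CONFIG_DEBUG_ATOMIC_SLEEP makes the allocator's
might_sleep() check complain when a potentially sleeping allocation is
attempted in a context that must not sleep, such as an RCU read-side
critical section. The module below is a minimal sketch (the module name
and structure are illustrative assumptions, not part of this series);
loading it on a kernel with CONFIG_DEBUG_ATOMIC_SLEEP=y should produce a
"sleeping function called from invalid context" splat in dmesg, similar
to the one the patch fixes on the vsock fast path.

#include <linux/module.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

static int __init sleep_under_rcu_init(void)
{
	void *p;

	rcu_read_lock();
	/* Wrong on purpose: GFP_KERNEL may sleep, but an RCU reader
	 * must not, which is exactly the bug pattern fixed here.
	 */
	p = kmalloc(64, GFP_KERNEL);
	rcu_read_unlock();

	kfree(p);
	return 0;
}

static void __exit sleep_under_rcu_exit(void)
{
}

module_init(sleep_under_rcu_init);
module_exit(sleep_under_rcu_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustrative sleep-under-RCU reproducer");
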
Luigi Leonardi Oct. 3, 2024, 1:33 a.m. UTC | #3
> Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
> Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
> Reported-by: Christian Brauner <brauner@kernel.org>
> Cc: Stefano Garzarella <sgarzare@redhat.com>
> Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>
> Lightly tested. Christian, could you pls confirm this fixes the problem
> for you? Stefano, it's a holiday here - could you pls help test!
> Thanks!
>
>
>  net/vmw_vsock/virtio_transport.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index f992f9a216f0..0cd965f24609 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -96,7 +96,7 @@ static u32 virtio_transport_get_local_cid(void)
>
>  /* Caller need to hold vsock->tx_lock on vq */
>  static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
> -				     struct virtio_vsock *vsock)
> +				     struct virtio_vsock *vsock, gfp_t gfp)
>  {
>  	int ret, in_sg = 0, out_sg = 0;
>  	struct scatterlist **sgs;
> @@ -140,7 +140,7 @@ static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>  		}
>  	}
>
> -	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
> +	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
>  	/* Usually this means that there is no more space available in
>  	 * the vq
>  	 */
> @@ -178,7 +178,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>
>  		reply = virtio_vsock_skb_reply(skb);
>
> -		ret = virtio_transport_send_skb(skb, vq, vsock);
> +		ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
>  		if (ret < 0) {
>  			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue, skb);
>  			break;
> @@ -221,7 +221,7 @@ static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
>  	if (unlikely(ret == 0))
>  		return -EBUSY;
>
> -	ret = virtio_transport_send_skb(skb, vq, vsock);
> +	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
>  	if (ret == 0)
>  		virtqueue_kick(vq);
>
> --
> MST
>
>

Thanks for fixing this!

I enabled CONFIG_DEBUG_ATOMIC_SLEEP as Stefano suggested and tested with and
without the fix; I can confirm that this fixes the problem.

Reviewed-by: Luigi Leonardi <luigi.leonardi@outlook.com>
Christian Brauner Oct. 3, 2024, 9:09 a.m. UTC | #4
On Wed, Oct 02, 2024 at 09:41:42AM GMT, Michael S. Tsirkin wrote:
> virtio_transport_send_pkt is now called on the transport fast path,
> under the RCU read lock. In that case, we have a bug: virtqueue_add_sgs
> is called with GFP_KERNEL, and might sleep.
> 
> Pass the gfp flags as an argument, and use GFP_ATOMIC on
> the fast path.
> 
> Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
> Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
> Reported-by: Christian Brauner <brauner@kernel.org>
> Cc: Stefano Garzarella <sgarzare@redhat.com>
> Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
> 
> Lightly tested. Christian, could you pls confirm this fixes the problem
> for you? Stefano, it's a holiday here - could you pls help test!
> Thanks!

Thank you for the quick fix:
Reviewed-by: Christian Brauner <brauner@kernel.org>
Gupta, Pankaj Oct. 3, 2024, 10:03 a.m. UTC | #5
> virtio_transport_send_pkt is now called on the transport fast path,
> under the RCU read lock. In that case, we have a bug: virtqueue_add_sgs
> is called with GFP_KERNEL, and might sleep.
> 
> Pass the gfp flags as an argument, and use GFP_ATOMIC on
> the fast path.
> 
> Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
> Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
> Reported-by: Christian Brauner <brauner@kernel.org>
> Cc: Stefano Garzarella <sgarzare@redhat.com>
> Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>

> ---
> 
> Lightly tested. Christian, could you pls confirm this fixes the problem
> for you? Stefano, it's a holiday here - could you pls help test!
> Thanks!
> 
> 
>   net/vmw_vsock/virtio_transport.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index f992f9a216f0..0cd965f24609 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -96,7 +96,7 @@ static u32 virtio_transport_get_local_cid(void)
>   
>   /* Caller need to hold vsock->tx_lock on vq */
>   static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
> -				     struct virtio_vsock *vsock)
> +				     struct virtio_vsock *vsock, gfp_t gfp)
>   {
>   	int ret, in_sg = 0, out_sg = 0;
>   	struct scatterlist **sgs;
> @@ -140,7 +140,7 @@ static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>   		}
>   	}
>   
> -	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
> +	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
>   	/* Usually this means that there is no more space available in
>   	 * the vq
>   	 */
> @@ -178,7 +178,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>   
>   		reply = virtio_vsock_skb_reply(skb);
>   
> -		ret = virtio_transport_send_skb(skb, vq, vsock);
> +		ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
>   		if (ret < 0) {
>   			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue, skb);
>   			break;
> @@ -221,7 +221,7 @@ static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
>   	if (unlikely(ret == 0))
>   		return -EBUSY;
>   
> -	ret = virtio_transport_send_skb(skb, vq, vsock);
> +	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
>   	if (ret == 0)
>   		virtqueue_kick(vq);
>
Jakub Kicinski Oct. 7, 2024, 3:39 p.m. UTC | #6
On Wed, 2 Oct 2024 09:41:42 -0400 Michael S. Tsirkin wrote:
> virtio_transport_send_pkt is now called on the transport fast path,
> under the RCU read lock. In that case, we have a bug: virtqueue_add_sgs
> is called with GFP_KERNEL, and might sleep.
> 
> Pass the gfp flags as an argument, and use GFP_ATOMIC on
> the fast path.

Hi Michael! The To: linux-kernel@vger.kernel.org doesn't give much info
on who you expect to apply this ;) Please let us know if you plan to
take it via your own tree, otherwise we'll ship it to Linus on Thu.
Michael S. Tsirkin Oct. 7, 2024, 3:46 p.m. UTC | #7
On Mon, Oct 07, 2024 at 08:39:20AM -0700, Jakub Kicinski wrote:
> On Wed, 2 Oct 2024 09:41:42 -0400 Michael S. Tsirkin wrote:
> > virtio_transport_send_pkt is now called on the transport fast path,
> > under the RCU read lock. In that case, we have a bug: virtqueue_add_sgs
> > is called with GFP_KERNEL, and might sleep.
> > 
> > Pass the gfp flags as an argument, and use GFP_ATOMIC on
> > the fast path.
> 
> Hi Michael! The To: linux-kernel@vger.kernel.org doesn't give much info
> on who you expect to apply this ;) Please let us know if you plan to
> take it via your own tree, otherwise we'll ship it to Linus on Thu.

Hi!
It's in my tree; I was in the process of sending a pull request, actually.

Patch

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index f992f9a216f0..0cd965f24609 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -96,7 +96,7 @@  static u32 virtio_transport_get_local_cid(void)
 
 /* Caller need to hold vsock->tx_lock on vq */
 static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
-				     struct virtio_vsock *vsock)
+				     struct virtio_vsock *vsock, gfp_t gfp)
 {
 	int ret, in_sg = 0, out_sg = 0;
 	struct scatterlist **sgs;
@@ -140,7 +140,7 @@  static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
 		}
 	}
 
-	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
+	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
 	/* Usually this means that there is no more space available in
 	 * the vq
 	 */
@@ -178,7 +178,7 @@  virtio_transport_send_pkt_work(struct work_struct *work)
 
 		reply = virtio_vsock_skb_reply(skb);
 
-		ret = virtio_transport_send_skb(skb, vq, vsock);
+		ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
 		if (ret < 0) {
 			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue, skb);
 			break;
@@ -221,7 +221,7 @@  static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
 	if (unlikely(ret == 0))
 		return -EBUSY;
 
-	ret = virtio_transport_send_skb(skb, vq, vsock);
+	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
 	if (ret == 0)
 		virtqueue_kick(vq);
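
Folding in the comment Stefano suggested during review, the fast-path call
site might read roughly like this after the fix (the surrounding lines are
reconstructed from the hunk above, so treat them as approximate rather than
a verbatim copy of the tree):

	/* In virtio_transport_send_skb_fast_path() */
	if (unlikely(ret == 0))
		return -EBUSY;

	/* GFP_ATOMIC because we are in an RCU read-side section, so we can't sleep */
	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
	if (ret == 0)
		virtqueue_kick(vq);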