Message ID | 20220815090521.127607-1-mst@redhat.com (mailing list archive) |
---|---|
State | Not Applicable |
Delegated to: | Netdev Maintainers |
Series | virtio_net: Revert "virtio_net: set the default max ring size by find_vqs()" |
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Guessing tree name failed - patch did not apply |
On Mon, Aug 15, 2022 at 05:16:50AM -0400, Michael S. Tsirkin wrote:
> This reverts commit 762faee5a2678559d3dc09d95f8f2c54cd0466a7.
>
> This has been reported to trip up guests on GCP (Google Cloud). Why is
> not yet clear - to be debugged, but the patch itself has several other
> issues:
>
> - It treats unknown speed as < 10G
> - It leaves userspace no way to find out the ring size set by hypervisor
> - It tests speed when link is down
> - It ignores the virtio spec advice:
>     Both \field{speed} and \field{duplex} can change, thus the driver
>     is expected to re-read these values after receiving a
>     configuration change notification.
> - It is not clear the performance impact has been tested properly
>
> Revert the patch for now.
>
> Link: https://lore.kernel.org/r/20220814212610.GA3690074%40roeck-us.net
> Link: https://lore.kernel.org/r/20220815070203.plwjx7b3cyugpdt7%40awork3.anarazel.de
> Link: https://lore.kernel.org/r/3df6bb82-1951-455d-a768-e9e1513eb667%40www.fastmail.com
> Link: https://lore.kernel.org/r/FCDC5DDE-3CDD-4B8A-916F-CA7D87B547CE%40anarazel.de
> Fixes: 762faee5a267 ("virtio_net: set the default max ring size by find_vqs()")
> Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> Cc: Jason Wang <jasowang@redhat.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Tested-by: Andres Freund <andres@anarazel.de>

I ran this patch through a total of 14 syzkaller tests, 2 test runs each on
7 different crashes reported by syzkaller (as reported to the linux-kernel
mailing list). No problems were reported. I also ran a single cross-check
with one of the syzkaller runs on top of v6.0-rc1, without this patch.
That test run failed.

Overall, I think we can call this fixed.
Guenter

---
syzkaller reports:

Reported-and-tested-by: syzbot+2984d1b7aef6b51353f0@syzkaller.appspotmail.com

Tested on:

commit:         568035b0 Linux 6.0-rc1
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v6.0-rc1
kernel config:  https://syzkaller.appspot.com/x/.config?x=3b9175e0879a7749
dashboard link: https://syzkaller.appspot.com/bug?extid=2984d1b7aef6b51353f0
compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: i386
patch:          https://syzkaller.appspot.com/x/patch.diff?x=11949fc3080000

Reported-and-tested-by: syzbot+2c35c4d66094ddfe198e@syzkaller.appspotmail.com

Tested on:

commit:         568035b0 Linux 6.0-rc1
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v6.0-rc1
kernel config:  https://syzkaller.appspot.com/x/.config?x=3cb39b084894e9a5
dashboard link: https://syzkaller.appspot.com/bug?extid=2c35c4d66094ddfe198e
compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
patch:          https://syzkaller.appspot.com/x/patch.diff?x=163e20f3080000

Reported-and-tested-by: syzbot+97f830ad641de86d08c0@syzkaller.appspotmail.com

Tested on:

commit:         568035b0 Linux 6.0-rc1
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v6.0-rc1
kernel config:  https://syzkaller.appspot.com/x/.config?x=f267ed4fb258122a
dashboard link: https://syzkaller.appspot.com/bug?extid=97f830ad641de86d08c0
compiler:       Debian clang version 13.0.1-++20220126092033+75e33f71c2da-1~exp1~20220126212112.63, GNU ld (GNU Binutils for Debian) 2.35.2
patch:          https://syzkaller.appspot.com/x/patch.diff?x=146c8e5b080000

Reported-and-tested-by: syzbot+005efde5e97744047fe4@syzkaller.appspotmail.com

Tested on:

commit:         568035b0 Linux 6.0-rc1
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v6.0-rc1
kernel config:  https://syzkaller.appspot.com/x/.config?x=3cb39b084894e9a5
dashboard link: https://syzkaller.appspot.com/bug?extid=005efde5e97744047fe4
compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
patch:          https://syzkaller.appspot.com/x/patch.diff?x=106c8e5b080000

Reported-and-tested-by: syzbot+9ada839c852179f13999@syzkaller.appspotmail.com

Tested on:

commit:         568035b0 Linux 6.0-rc1
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v6.0-rc1
kernel config:  https://syzkaller.appspot.com/x/.config?x=3b9175e0879a7749
dashboard link: https://syzkaller.appspot.com/bug?extid=9ada839c852179f13999
compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
patch:          https://syzkaller.appspot.com/x/patch.diff?x=118756f3080000

Reported-and-tested-by: syzbot+382af021ce115a936b1f@syzkaller.appspotmail.com

Tested on:

commit:         568035b0 Linux 6.0-rc1
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v6.0-rc1
kernel config:  https://syzkaller.appspot.com/x/.config?x=e656d8727a25e83b
dashboard link: https://syzkaller.appspot.com/bug?extid=382af021ce115a936b1f
compiler:       Debian clang version 13.0.1-++20220126092033+75e33f71c2da-1~exp1~20220126212112.63, GNU ld (GNU Binutils for Debian) 2.35.2
patch:          https://syzkaller.appspot.com/x/patch.diff?x=135f650d080000

Reported-and-tested-by: syzbot+24df94a8d05d5a3e68f0@syzkaller.appspotmail.com

Tested on:

commit:         568035b0 Linux 6.0-rc1
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git v6.0-rc1
kernel config:  https://syzkaller.appspot.com/x/.config?x=3b9175e0879a7749
dashboard link: https://syzkaller.appspot.com/bug?extid=24df94a8d05d5a3e68f0
compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
patch:          https://syzkaller.appspot.com/x/patch.diff?x=12758a47080000
On Mon, Aug 15, 2022 at 01:34:26PM -0700, Guenter Roeck wrote:
> On Mon, Aug 15, 2022 at 05:16:50AM -0400, Michael S. Tsirkin wrote:
> > This reverts commit 762faee5a2678559d3dc09d95f8f2c54cd0466a7.
> >
> > [ ... ]
>
> I ran this patch through a total of 14 syzkaller tests, 2 test runs each on
> 7 different crashes reported by syzkaller (as reported to the linux-kernel
> mailing list). No problems were reported. I also ran a single cross-check
> with one of the syzkaller runs on top of v6.0-rc1, without this patch.
> That test run failed.
>
> Overall, I think we can call this fixed.
>
> Guenter

It's more of a workaround, though, since we don't yet have the root
cause for this. I suspect a GCP hypervisor bug at the moment.

This is exercising a path we previously only took on GFP_KERNEL
allocation failures during probe; I don't think that happens a lot.
On Mon, Aug 15, 2022 at 04:42:51PM -0400, Michael S. Tsirkin wrote:
> On Mon, Aug 15, 2022 at 01:34:26PM -0700, Guenter Roeck wrote:
> > On Mon, Aug 15, 2022 at 05:16:50AM -0400, Michael S. Tsirkin wrote:
> > > This reverts commit 762faee5a2678559d3dc09d95f8f2c54cd0466a7.
> > >
> > > [ ... ]
> >
> > I ran this patch through a total of 14 syzkaller tests, 2 test runs each on
> > 7 different crashes reported by syzkaller (as reported to the linux-kernel
> > mailing list). No problems were reported. I also ran a single cross-check
> > with one of the syzkaller runs on top of v6.0-rc1, without this patch.
> > That test run failed.
> >
> > Overall, I think we can call this fixed.
> >
> > Guenter
>
> It's more of a workaround, though, since we don't yet have the root
> cause for this. I suspect a GCP hypervisor bug at the moment.
> This is exercising a path we previously only took on GFP_KERNEL
> allocation failures during probe; I don't think that happens a lot.

Even a hypervisor bug should not trigger crashes like this one,
though, or at least I think so. Any idea what to look for on the
hypervisor side, and/or what it might be doing wrong ?

Thanks,
Guenter
On Mon, Aug 15, 2022 at 01:50:53PM -0700, Guenter Roeck wrote:
> On Mon, Aug 15, 2022 at 04:42:51PM -0400, Michael S. Tsirkin wrote:
> > On Mon, Aug 15, 2022 at 01:34:26PM -0700, Guenter Roeck wrote:
> > > [ ... ]
> > >
> > > Overall, I think we can call this fixed.
> > >
> > > Guenter
> >
> > It's more of a workaround, though, since we don't yet have the root
> > cause for this. I suspect a GCP hypervisor bug at the moment.
> > This is exercising a path we previously only took on GFP_KERNEL
> > allocation failures during probe; I don't think that happens a lot.
>
> Even a hypervisor bug should not trigger crashes like this one,
> though, or at least I think so. Any idea what to look for on the
> hypervisor side, and/or what it might be doing wrong ?
>
> Thanks,
> Guenter

Sure! So virtio has a queue_size register. When read, it originally
gives you the maximum queue size. Normally we just read it and use it
as the queue size.

However, when queue memory allocation fails - and unconditionally for a
network device with the problematic patch - the driver asks the
hypervisor to make the ring smaller by writing a smaller value into
this register.

I suspect that what happens is that the hypervisor still uses the
original value somewhere. It then thinks the ring is bigger than what
the driver allocated. If we get lucky and nothing important landed in
the several pages covered by the larger ring, then the only effect is
that the driver does not see the data the hypervisor writes into the
ring, and this is the network failure observed - most likely DHCP
responses get lost and the guest never gets an IP. OTOH, if something
important lands there, then when the hypervisor overwrites that memory
it gets corrupted and we get crashes.
Hi,

On 2022-08-15 17:04:10 -0400, Michael S. Tsirkin wrote:
> So virtio has a queue_size register. When read, it originally
> gives you the maximum queue size. Normally we just read it and use it
> as the queue size.
>
> However, when queue memory allocation fails - and unconditionally for a
> network device with the problematic patch - the driver asks the
> hypervisor to make the ring smaller by writing a smaller value into
> this register.
>
> I suspect that what happens is that the hypervisor still uses the
> original value somewhere.

It looks more like the host is never told about the changed size for legacy
devices...

Indeed, adding a vp_legacy_set_queue_size() & a call to it in setup_vq()
makes 5.19 + restricting queue sizes to 1024 boot again. I'd bet that it
would also fix 6.0-rc1, but I'm running out of time to test that.

Greetings,

Andres Freund
On Mon, Aug 15, 2022 at 02:28:39PM -0700, Andres Freund wrote:
> Hi,
>
> On 2022-08-15 17:04:10 -0400, Michael S. Tsirkin wrote:
> > [ ... ]
>
> It looks more like the host is never told about the changed size for legacy
> devices...
>
> Indeed, adding a vp_legacy_set_queue_size() & a call to it in setup_vq()
> makes 5.19 + restricting queue sizes to 1024 boot again.

Interesting - the register is RO in the legacy interface. And to be
frank, I can't find where vp_legacy_set_queue_size() is even
implemented. It's midnight here too ...

> I'd bet that it would also fix 6.0-rc1, but I'm running out of time to
> test that.
>
> Greetings,
>
> Andres Freund

Yes, I figured this out too. And I was able to reproduce on qemu now.
Andres, thanks a lot for the help!

I'm posting a new patchset reverting all the handling of resize
restrictions; I think we should rethink it for the next release.

Thanks everyone for the help!
Hi,

On 2022-08-15 17:39:08 -0400, Michael S. Tsirkin wrote:
> On Mon, Aug 15, 2022 at 02:28:39PM -0700, Andres Freund wrote:
> > On 2022-08-15 17:04:10 -0400, Michael S. Tsirkin wrote:
> > > [ ... ]
> >
> > It looks more like the host is never told about the changed size for legacy
> > devices...
> >
> > Indeed, adding a vp_legacy_set_queue_size() & a call to it in setup_vq()
> > makes 5.19 + restricting queue sizes to 1024 boot again.
>
> Interesting - the register is RO in the legacy interface.
> And to be frank, I can't find where vp_legacy_set_queue_size() is even
> implemented. It's midnight here too ...

Yea, I meant that I added both vp_legacy_set_queue_size() and a call to it.
I was just quickly experimenting around.

> Yes, I figured this out too. And I was able to reproduce on qemu now.

Cool.

> I'm posting a new patchset reverting all the handling of resize
> restrictions; I think we should rethink it for the next release.

Makes sense.

Greetings,

Andres Freund
On Mon, Aug 15, 2022 at 02:46:04PM -0700, Andres Freund wrote:
> Hi,
>
> On 2022-08-15 17:39:08 -0400, Michael S. Tsirkin wrote:
> > [ ... ]
> >
> > Interesting - the register is RO in the legacy interface.
> > And to be frank, I can't find where vp_legacy_set_queue_size() is even
> > implemented. It's midnight here too ...
>
> Yea, I meant that I added both vp_legacy_set_queue_size() and a call to it.
> I was just quickly experimenting around.

Interesting that it's writeable on GCP. It's RO on QEMU.

> > Yes, I figured this out too. And I was able to reproduce on qemu now.
>
> Cool.
>
> > I'm posting a new patchset reverting all the handling of resize
> > restrictions; I think we should rethink it for the next release.
>
> Makes sense.
>
> Greetings,
>
> Andres Freund
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d934774e9733..ece00b84e3a7 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3432,29 +3432,6 @@ static unsigned int mergeable_min_buf_len(struct virtnet_info *vi, struct virtqu
 				(unsigned int)GOOD_PACKET_LEN);
 }
 
-static void virtnet_config_sizes(struct virtnet_info *vi, u32 *sizes)
-{
-	u32 i, rx_size, tx_size;
-
-	if (vi->speed == SPEED_UNKNOWN || vi->speed < SPEED_10000) {
-		rx_size = 1024;
-		tx_size = 1024;
-
-	} else if (vi->speed < SPEED_40000) {
-		rx_size = 1024 * 4;
-		tx_size = 1024 * 4;
-
-	} else {
-		rx_size = 1024 * 8;
-		tx_size = 1024 * 8;
-	}
-
-	for (i = 0; i < vi->max_queue_pairs; i++) {
-		sizes[rxq2vq(i)] = rx_size;
-		sizes[txq2vq(i)] = tx_size;
-	}
-}
-
 static int virtnet_find_vqs(struct virtnet_info *vi)
 {
 	vq_callback_t **callbacks;
@@ -3462,7 +3439,6 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 	int ret = -ENOMEM;
 	int i, total_vqs;
 	const char **names;
-	u32 *sizes;
 	bool *ctx;
 
 	/* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
@@ -3490,15 +3466,10 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 		ctx = NULL;
 	}
 
-	sizes = kmalloc_array(total_vqs, sizeof(*sizes), GFP_KERNEL);
-	if (!sizes)
-		goto err_sizes;
-
 	/* Parameters for control virtqueue, if any */
 	if (vi->has_cvq) {
 		callbacks[total_vqs - 1] = NULL;
 		names[total_vqs - 1] = "control";
-		sizes[total_vqs - 1] = 64;
 	}
 
 	/* Allocate/initialize parameters for send/receive virtqueues */
@@ -3513,10 +3484,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 		ctx[rxq2vq(i)] = true;
 	}
 
-	virtnet_config_sizes(vi, sizes);
-
-	ret = virtio_find_vqs_ctx_size(vi->vdev, total_vqs, vqs, callbacks,
-				       names, sizes, ctx, NULL);
+	ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks,
+				  names, ctx, NULL);
 	if (ret)
 		goto err_find;
 
@@ -3536,8 +3505,6 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 
 
 err_find:
-	kfree(sizes);
-err_sizes:
 	kfree(ctx);
 err_ctx:
 	kfree(names);
@@ -3897,9 +3864,6 @@ static int virtnet_probe(struct virtio_device *vdev)
 	vi->curr_queue_pairs = num_online_cpus();
 	vi->max_queue_pairs = max_queue_pairs;
 
-	virtnet_init_settings(dev);
-	virtnet_update_settings(vi);
-
 	/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
 	err = init_vqs(vi);
 	if (err)
@@ -3912,6 +3876,8 @@ static int virtnet_probe(struct virtio_device *vdev)
 	netif_set_real_num_tx_queues(dev, vi->curr_queue_pairs);
 	netif_set_real_num_rx_queues(dev, vi->curr_queue_pairs);
 
+	virtnet_init_settings(dev);
+
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_STANDBY)) {
 		vi->failover = net_failover_create(vi->dev);
 		if (IS_ERR(vi->failover)) {