From patchwork Thu Jan 10 06:43:21 2013
From: Jason Wang
To: gaowanlong@cn.fujitsu.com
Cc: krkumar2@in.ibm.com, aliguori@us.ibm.com, kvm@vger.kernel.org,
 mst@redhat.com, mprivozn@redhat.com, rusty@rustcorp.com.au,
 qemu-devel@nongnu.org, stefanha@redhat.com, jwhan@filewood.snu.ac.kr,
 shiyer@redhat.com
Subject: Re: [Qemu-devel] [PATCH 10/12] virtio-net: multiqueue support
Date: Thu, 10 Jan 2013 14:43:21 +0800
Message-ID: <1606987.aihbek5aXM@jason-thinkpad-t430s>
In-Reply-To: <50ED8C29.5070706@redhat.com>
References: <1356690724-37891-1-git-send-email-jasowang@redhat.com>
 <50ED3FE5.3020903@cn.fujitsu.com> <50ED8C29.5070706@redhat.com>
User-Agent: KMail/4.9.3 (Linux/3.7.0-rc7+; KDE/4.9.3; x86_64; ; )
X-Mailing-List: kvm@vger.kernel.org

On Wednesday, January 09, 2013 11:26:33 PM Jason Wang wrote:
> On 01/09/2013 06:01 PM, Wanlong Gao wrote:
> > On 01/09/2013 05:30 PM, Jason Wang wrote:
> >> On 01/09/2013 04:23 PM, Wanlong Gao wrote:
> >>> On 01/08/2013 06:14 PM, Jason Wang wrote:
> >>>> On 01/08/2013 06:00 PM, Wanlong Gao wrote:
> >>>>> On 01/08/2013 05:51 PM, Jason Wang wrote:
> >>>>>> On 01/08/2013 05:49 PM, Wanlong Gao wrote:
> >>>>>>> On 01/08/2013 05:29 PM, Jason Wang wrote:
> >>>>>>>> On 01/08/2013 05:07 PM, Wanlong Gao wrote:
> >>>>>>>>> On 12/28/2012 06:32 PM, Jason Wang wrote:
> >>>>>>>>>> +    } else if (nc->peer->info->type != NET_CLIENT_OPTIONS_KIND_TAP) {
> >>>>>>>>>> +        ret = -1;
> >>>>>>>>>> +    } else {
> >>>>>>>>>> +        ret = tap_detach(nc->peer);
> >>>>>>>>>> +    }
> >>>>>>>>>> +
> >>>>>>>>>> +    return ret;
> >>>>>>>>>> +}
> >>>>>>>>>> +
[...]
> >>> I got guest kernel panic when using this way and set queues=4.
> >>
> >> Does it happens w/o or w/ a fd parameter? What's the qemu command line?
> >> Did you meet it during boot time?
> >
> > The QEMU command line is
> >
> > /work/git/qemu/x86_64-softmmu/qemu-system-x86_64 -name f17 -M pc-0.15 \
> >     -enable-kvm -m 3096 \
> >     -smp 4,sockets=4,cores=1,threads=1 \
> >     -uuid c31a9f3e-4161-c53a-339c-5dc36d0497cb -no-user-config -nodefaults \
> >     -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/f17.monitor,server,nowait \
> >     -mon chardev=charmonitor,id=monitor,mode=control \
> >     -rtc base=utc -no-shutdown \
> >     -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
> >     -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0xb,num_queues=4,hotplug=on \
> >     -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 \
> >     -drive file=/vm/f17.img,if=none,id=drive-virtio-disk0,format=qcow2 \
> >     -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
> >     -drive file=/vm2/f17-kernel.img,if=none,id=drive-virtio-disk1,format=qcow2 \
> >     -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1 \
> >     -drive file=/vm/virtio-scsi/scsi3.img,if=none,id=drive-scsi0-0-2-0,format=raw \
> >     -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi0-0-2-0,id=scsi0-0-2-0,removable=on \
> >     -drive file=/vm/virtio-scsi/scsi4.img,if=none,id=drive-scsi0-0-3-0,format=raw \
> >     -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi0-0-3-0,id=scsi0-0-3-0 \
> >     -drive file=/vm/virtio-scsi/scsi1.img,if=none,id=drive-scsi0-0-0-0,format=raw \
> >     -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 \
> >     -drive file=/vm/virtio-scsi/scsi2.img,if=none,id=drive-scsi0-0-1-0,format=raw \
> >     -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0 \
> >     -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 \
> >     -chardev file,id=charserial1,path=/vm/f17.log \
> >     -device isa-serial,chardev=charserial1,id=serial1 \
> >     -device usb-tablet,id=input0 -vga std \
> >     -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 \
> >     -netdev tap,id=hostnet0,vhost=on,queues=4 \
> >     -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:ce:7b:29,bus=pci.0,addr=0x3 \
> >     -monitor stdio
> >
> > I got the panic just after booting the system: I did nothing, just waited
> > for a while, and the guest panicked.
> >
> > [ 28.053004] BUG: soft lockup - CPU#1 stuck for 23s! [ip:592]
> > [ 28.053004] Modules linked in: ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables uinput joydev microcode virtio_balloon pcspkr virtio_net i2c_piix4 i2c_core virtio_scsi virtio_blk floppy
> > [ 28.053004] CPU 1
> > [ 28.053004] Pid: 592, comm: ip Not tainted 3.8.0-rc1-net+ #3 Bochs Bochs
> > [ 28.053004] RIP: 0010:[] [] virtqueue_get_buf+0xb/0x120
> > [ 28.053004] RSP: 0018:ffff8800bc913550  EFLAGS: 00000246
> > [ 28.053004] RAX: 0000000000000000 RBX: ffff8800bc49c000 RCX: ffff8800bc49e000
> > [ 28.053004] RDX: 0000000000000000 RSI: ffff8800bc913584 RDI: ffff8800bcfd4000
> > [ 28.053004] RBP: ffff8800bc913558 R08: ffff8800bcfd0800 R09: 0000000000000000
> > [ 28.053004] R10: ffff8800bc49c000 R11: ffff880036cc4de0 R12: ffff8800bcfd4000
> > [ 28.053004] R13: ffff8800bc913558 R14: ffffffff8137ad73 R15: 00000000000200d0
> > [ 28.053004] FS:  00007fb27a589740(0000) GS:ffff8800c1480000(0000) knlGS:0000000000000000
> > [ 28.053004] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [ 28.053004] CR2: 0000000000640530 CR3: 00000000baeff000 CR4: 00000000000006e0
> > [ 28.053004] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [ 28.053004] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > [ 28.053004] Process ip (pid: 592, threadinfo ffff8800bc912000, task ffff880036da2e20)
> > [ 28.053004] Stack:
> > [ 28.053004]  ffff8800bcfd0800 ffff8800bc913638 ffffffffa003e9bb ffff8800bc913656
> > [ 28.053004]  0000000100000002 ffff8800c17ebb08 000000500000ff10 ffffea0002f244c0
> > [ 28.053004]  0000000200000582 0000000000000000 0000000000000000 ffffea0002f244c0
> > [ 28.053004] Call Trace:
> > [ 28.053004]  [] virtnet_send_command.constprop.26+0x24b/0x270 [virtio_net]
> > [ 28.053004]  [] ? sg_init_table+0x23/0x50
> > [ 28.053004]  [] virtnet_set_rx_mode+0x99/0x300 [virtio_net]
> > [ 28.053004]  [] __dev_set_rx_mode+0x5f/0xb0
> > [ 28.053004]  [] dev_set_rx_mode+0x2f/0x50
> > [ 28.053004]  [] __dev_open+0xa7/0xf0
> > [ 28.053004]  [] __dev_change_flags+0xa1/0x180
> > [ 28.053004]  [] dev_change_flags+0x28/0x70
> > [ 28.053004]  [] do_setlink+0x3b0/0xa50
> > [ 28.053004]  [] ? nla_parse+0x31/0xe0
> > [ 28.053004]  [] rtnl_newlink+0x36e/0x580
> > [ 28.053004]  [] ? get_page_from_freelist+0x37c/0x730
> > [ 28.053004]  [] rtnetlink_rcv_msg+0x113/0x2f0
> > [ 28.053004]  [] ? __kmalloc_node_track_caller+0x63/0x1c0
> > [ 28.053004]  [] ? __alloc_skb+0x8b/0x2a0
> > [ 28.053004]  [] ? __rtnl_unlock+0x20/0x20
> > [ 28.053004]  [] netlink_rcv_skb+0xb1/0xc0
> > [ 28.053004]  [] rtnetlink_rcv+0x25/0x40
> > [ 28.053004]  [] netlink_unicast+0x1a1/0x220
> > [ 28.053004]  [] netlink_sendmsg+0x301/0x3c0
> > [ 28.053004]  [] sock_sendmsg+0xb0/0xe0
> > [ 28.053004]  [] ? lru_cache_add_lru+0x3b/0x60
> > [ 28.053004]  [] ? page_add_new_anon_rmap+0xc7/0x180
> > [ 28.053004]  [] __sys_sendmsg+0x3ac/0x3c0
> > [ 28.053004]  [] ? __do_page_fault+0x23c/0x4d0
> > [ 28.053004]  [] ? do_brk+0x1ff/0x370
> > [ 28.053004]  [] sys_sendmsg+0x49/0x90
> > [ 28.053004]  [] system_call_fastpath+0x16/0x1b
> > [ 28.053004] Code: 04 0f ae f0 48 8b 47 50 5d 0f b7 50 02 66 39 57 64 0f 94 c0 c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 41 54 <53> 80 7f 59 00 48 89 fb 0f 85 90 00 00 00 48 8b 47 50 0f b7 50
> >
> > The QEMU tree I used is git://github.com/jasowang/qemu.git
>
> Thanks a lot, will try to reproduce myself tomorrow. From the
> calltrace, looks like we send a command to an rx/tx queue.
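For background on that reading: with VIRTIO_NET_F_MQ the virtqueues are laid
out as rx0, tx0, rx1, tx1, ..., rxN-1, txN-1, ctrl, so the control queue comes
after all of the data queue pairs and its index depends on how many pairs
exist. If the device set up a different number of pairs than the guest
assumes, the index the guest uses for the control queue points at what the
device considers an rx/tx (or nonexistent) queue. A hypothetical helper, just
to spell out the arithmetic (not code from either tree):

/* Illustrative only: with N queue pairs the data queues occupy indices
 * 0..2*N-1 and the control queue sits at index 2*N (0-based). */
static int ctrl_vq_index(int queue_pairs)
{
    return 2 * queue_pairs;
}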
Right, the virtqueues that will not be used by single queue mode were
initialized. Please try the following patch, or use my qemu.git tree on
github, which already has this fix.

diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index 8b4f079..cfd9af1 100644
--- a/hw/virtio-net.c
+++ b/hw/virtio-net.c
@@ -186,7 +186,7 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
             continue;
         }
 
-        if (virtio_net_started(n, status) && !q->vhost_started) {
+        if (virtio_net_started(n, queue_status) && !q->vhost_started) {
             if (q->tx_timer) {
                 qemu_mod_timer(q->tx_timer,
                                qemu_get_clock_ns(vm_clock) + n->tx_timeout);
@@ -545,7 +545,8 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
 
     if (s.virtqueue_pairs < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
         s.virtqueue_pairs > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
-        s.virtqueue_pairs > n->max_queues) {
+        s.virtqueue_pairs > n->max_queues ||
+        !n->multiqueue) {
         return VIRTIO_NET_ERR;
     }
 
@@ -1026,19 +1027,15 @@ static void virtio_net_tx_bh(void *opaque)
 static void virtio_net_set_multiqueue(VirtIONet *n, int multiqueue, int ctrl)
 {
     VirtIODevice *vdev = &n->vdev;
-    int i;
+    int i, max = multiqueue ? n->max_queues : 1;
 
     n->multiqueue = multiqueue;
 
-    if (!multiqueue) {
-        n->max_queues = 1;
-    }
-
     for (i = 2; i <= n->max_queues * 2 + 1; i++) {
         virtio_del_queue(vdev, i);
     }
 
-    for (i = 1; i < n->max_queues; i++) {
+    for (i = 1; i < max; i++) {
         n->vqs[i].rx_vq = virtio_add_queue(vdev, 256, virtio_net_handle_rx);
         if (n->vqs[i].tx_timer) {
             n->vqs[i].tx_vq =
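For completeness, the reason this shows up in the guest as a soft lockup
rather than a clean error is the control-command path in the driver:
virtnet_send_command() kicks the control virtqueue and then busy-waits for the
device to return the buffer, so a command posted to a virtqueue the device
never services spins forever. A from-memory sketch of the 3.8-era helper named
in the backtrace (simplified and elided, not the exact upstream code):

/* Sketch only -- simplified from the 3.8-era guest driver. */
static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd,
                                 struct scatterlist *data, int out, int in)
{
    unsigned int tmp;
    ...
    /* Post the command on what the guest believes is the control vq
     * and notify the device. */
    virtqueue_kick(vi->cvq);

    /* Spin until the device returns the buffer.  If the "control" index
     * is actually an uninitialized rx/tx queue on the device side, no
     * buffer ever comes back and this loop never exits -- which is the
     * soft lockup in the trace above. */
    while (!virtqueue_get_buf(vi->cvq, &tmp))
        cpu_relax();
    ...
}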