Message ID | 20200124100159.736209-1-stefanha@redhat.com |
---|---|
Series | virtio-pci: enable blk and scsi multi-queue by default |
On Fri, Jan 24, 2020 at 10:01:55AM +0000, Stefan Hajnoczi wrote:
> v2:
>  * Let the virtio-DEVICE-pci device select num-queues because the optimal
>    multi-queue configuration may differ between virtio-pci, virtio-mmio, and
>    virtio-ccw [Cornelia]
>
> Enabling multi-queue on virtio-pci storage devices improves performance on SMP
> guests because the completion interrupt is handled on the vCPU that submitted
> the I/O request. This avoids IPIs inside the guest.
>
> Note that performance is unchanged in these cases:
> 1. Uniprocessor guests. They don't have IPIs.
> 2. Application threads might be scheduled on the sole vCPU that handles
>    completion interrupts purely by chance. (This is one reason why benchmark
>    results can vary noticeably between runs.)
> 3. Users may bind the application to the vCPU that handles completion
>    interrupts.
>
> Set the number of queues to the number of vCPUs by default. Older machine
> types continue to default to 1 queue for live migration compatibility.
>
> This patch improves IOPS by 1-4% on an Intel Optane SSD with 4 vCPUs, -drive
> aio=native, and fio bs=4k direct=1 rw=randread.
>
> Stefan Hajnoczi (4):
>   virtio-scsi: introduce a constant for fixed virtqueues
>   virtio-scsi: default num_queues to -smp N
>   virtio-blk: default num_queues to -smp N
>   vhost-user-blk: default num_queues to -smp N

The series looks good to me:

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>

Thanks,
Stefano
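
---

For reference, a rough command-line sketch of the configuration the cover letter describes: a 4-vCPU guest with a virtio-blk-pci device whose queue count matches the vCPU count. The image path, drive id, and memory size are placeholders; on machine types that include this series, the explicit `num-queues=4` can be omitted and the device defaults to one queue per vCPU, while older machine types keep the single-queue default.

```
qemu-system-x86_64 -machine accel=kvm -smp 4 -m 4G \
    -drive file=/path/to/disk.img,format=raw,if=none,id=drive0,cache=none,aio=native \
    -device virtio-blk-pci,drive=drive0,num-queues=4
```

Note that `aio=native` requires bypassing the host page cache, hence `cache=none` on the drive.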
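A fio invocation along the lines of the benchmark mentioned above (bs=4k, direct=1, rw=randread) might look like the sketch below, run inside the guest against the virtio-blk device. The target device, ioengine, queue depth, job count, and runtime are assumptions, not values stated in the cover letter.

```
fio --name=randread --filename=/dev/vdb --direct=1 --rw=randread --bs=4k \
    --ioengine=libaio --iodepth=32 --numjobs=4 --runtime=60 --time_based \
    --group_reporting
```

Pinning jobs to specific vCPUs (for example with fio's `--cpus_allowed`) can make the IPI effect described above easier to observe, since with a single queue only one vCPU services completion interrupts.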