[v4,0/9] Add support for io_uring

Message ID 20190603123823.32661-1-mehta.aaru20@gmail.com (mailing list archive)

Message

Aarushi Mehta June 3, 2019, 12:38 p.m. UTC
This patch series adds support for the newly developed io_uring Linux AIO
interface. io_uring is faster than Linux's existing AIO code, offers
efficient buffered asynchronous I/O, the ability to do I/O without
performing a system call via polled I/O, and other efficiency enhancements.

Testing it requires a host kernel (5.1+) and the liburing library.
Use the option -drive aio=io_uring to enable it.
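A minimal sketch of building and trying it out (the image path and device
options are placeholders, and the configure flag name is an assumption based
on the CONFIG_LINUX_IO_URING option from the v3 changelog):

```shell
# Build with io_uring support; requires the liburing development headers.
# The flag name below is assumed from the CONFIG_LINUX_IO_URING option.
./configure --enable-linux-io-uring
make

# Create a throwaway raw image and boot with the io_uring AIO backend.
qemu-img create -f raw test.img 1G
qemu-system-x86_64 -enable-kvm -m 1G \
    -drive file=test.img,format=raw,cache=none,aio=io_uring,if=virtio
```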

v4:
- Add error handling
- Add trace events
- Remove aio submission based code

v3:
- Fix major errors in io_uring (sorry)
- Option now enumerates for CONFIG_LINUX_IO_URING
- pkg-config support added

Aarushi Mehta (9):
  configure: permit use of io_uring
  qapi/block-core: add option for io_uring
  block/block: add BDRV flag for io_uring
  block/io_uring: implements interfaces for io_uring
  stubs: add stubs for io_uring interface
  util/async: add aio interfaces for io_uring
  blockdev: accept io_uring as option
  block/file-posix.c: extend to use io_uring
  block: add trace events for io_uring

 MAINTAINERS             |   8 +
 block/Makefile.objs     |   3 +
 block/file-posix.c      |  85 +++++++++--
 block/io_uring.c        | 325 ++++++++++++++++++++++++++++++++++++++++
 block/trace-events      |   8 +
 blockdev.c              |   4 +-
 configure               |  27 ++++
 include/block/aio.h     |  16 +-
 include/block/block.h   |   1 +
 include/block/raw-aio.h |  12 ++
 qapi/block-core.json    |   4 +-
 stubs/Makefile.objs     |   1 +
 stubs/io_uring.c        |  32 ++++
 util/async.c            |  36 +++++
 14 files changed, 543 insertions(+), 19 deletions(-)
 create mode 100644 block/io_uring.c
 create mode 100644 stubs/io_uring.c

Comments

Sergio Lopez June 7, 2019, 10:59 a.m. UTC | #1
Aarushi Mehta <mehta.aaru20@gmail.com> writes:

> This patch series adds support for the newly developed io_uring Linux AIO
> interface.
> [...]

Hi Aarushi,

I gave this version of the patchset a try, and found that I/O hangs when
the device is assigned to an IOThread. Sometimes it manages to serve a few
requests and get through the guest OS boot process, only to hang the moment
you try to generate some I/O on the device; other times it hangs while Linux
tries to read the partitions from the device.

I'm starting QEMU this way:

./x86_64-softmmu/qemu-system-x86_64 -enable-kvm -name rhel80,debug-threads=on -m 2g -smp 4 -object iothread,id=iothread0 -blockdev node-name=rhel80,driver=qcow2,file.driver=file,file.filename=/home/VirtualMachines/rhel80.qcow2 -device virtio-blk,drive=rhel80 -serial tcp::6667,server,nowait -qmp tcp::6668,server,nowait -nographic -net user,hostfwd=tcp::6666-:22 -net nic,model=virtio -device virtio-rng -drive file=/dev/nullb0,format=raw,cache=none,aio=io_uring,if=none,id=test -device virtio-blk,drive=test,iothread=iothread0

Could you please take a look at this issue?

Thanks,
Sergio.
Stefan Hajnoczi June 7, 2019, 1:46 p.m. UTC | #2
On Fri, Jun 07, 2019 at 12:59:54PM +0200, Sergio Lopez wrote:
> I gave this version of the patchset a try, and found that IO hangs when
> the device is assigned to an IOThread.
> [...]
> 
> Could you please take a look at this issue?

Maybe the ioq_submit() issue I mentioned solves this.

Stefan
Stefan Hajnoczi June 7, 2019, 2:10 p.m. UTC | #3
On Fri, Jun 07, 2019 at 12:59:54PM +0200, Sergio Lopez wrote:
> [...]
> Could you please take a look at this issue?

BTW I was referring to the inverted logic where qemu_luring_process_completions_and_submit() fails to call ioq_submit().

Stefan
Sergio Lopez June 7, 2019, 2:17 p.m. UTC | #4
Stefan Hajnoczi <stefanha@redhat.com> writes:

> [...]
>
> BTW I was referring to the inverted logic where qemu_luring_process_completions_and_submit() fails to call ioq_submit().

Yes, that was the problem.

Sergio.