
[1/3] file-posix: add `aio-max-batch` option

Message ID 20210923143100.182979-2-sgarzare@redhat.com (mailing list archive)
State New, archived
Series linux-aio: allow block devices to limit aio-max-batch

Commit Message

Stefano Garzarella Sept. 23, 2021, 2:30 p.m. UTC
Commit d7ddd0a161 ("linux-aio: limit the batch size using
`aio-max-batch` parameter") added a way to limit the batch size
of Linux AIO backend for the entire AIO context.

The same AIO context can be shared by multiple devices, so
latency-sensitive devices may want to limit the batch size even
more to avoid increasing latency.

For this reason we add the `aio-max-batch` option to the file
backend, which will be used by the next commits to limit the size of
batches including requests generated by this device.

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---
 qapi/block-core.json | 5 +++++
 block/file-posix.c   | 9 +++++++++
 2 files changed, 14 insertions(+)

Comments

Stefan Hajnoczi Oct. 21, 2021, 3:18 p.m. UTC | #1
On Thu, Sep 23, 2021 at 04:30:58PM +0200, Stefano Garzarella wrote:
> Commit d7ddd0a161 ("linux-aio: limit the batch size using
> `aio-max-batch` parameter") added a way to limit the batch size
> of Linux AIO backend for the entire AIO context.
> 
> The same AIO context can be shared by multiple devices, so
> latency-sensitive devices may want to limit the batch size even
> more to avoid increasing latency.
> 
> For this reason we add the `aio-max-batch` option to the file
> backend, which will be used by the next commits to limit the size of
> batches including requests generated by this device.
> 
> Suggested-by: Kevin Wolf <kwolf@redhat.com>
> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> ---
>  qapi/block-core.json | 5 +++++
>  block/file-posix.c   | 9 +++++++++
>  2 files changed, 14 insertions(+)
> 
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index c8ce1d9d5d..1a8ed325bc 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -2851,6 +2851,10 @@
>  #              for this device (default: none, forward the commands via SG_IO;
>  #              since 2.11)
>  # @aio: AIO backend (default: threads) (since: 2.8)
> +# @aio-max-batch: maximum number of requests in an AIO backend batch that
> +#                 contains request from this block device. 0 means that the

The first sentence is a little unclear. I guess s/request/requests/ but
that still doesn't fully explain how this works.

Does the AIO backend use the minimum aio-max-batch value of all its
blockdevs?

Maybe:

  maximum number of requests to batch together into a single submission
  in the AIO backend. If multiple BlockdevOptionsFile sharing an AIO
  backend have different values the smallest value is chosen. ...
Stefano Garzarella Oct. 25, 2021, 10:20 a.m. UTC | #2
On Thu, Oct 21, 2021 at 04:18:41PM +0100, Stefan Hajnoczi wrote:
>On Thu, Sep 23, 2021 at 04:30:58PM +0200, Stefano Garzarella wrote:
>> [...]
>> +# @aio-max-batch: maximum number of requests in an AIO backend batch that
>> +#                 contains request from this block device. 0 means that the
>
>The first sentence is a little unclear. I guess s/request/requests/ but
>that still doesn't fully explain how this works.
>
>Does the AIO backend use the minimum aio-max-batch value of all its
>blockdevs?

It's a little simpler this way, since we avoid having to recalculate 
the minimum on each attach/release of blockdevs.

When the blockdev does a submit or unplug, the queue is flushed if the 
number of requests in the batch is greater than or equal to the 
smallest aio-max-batch value of the blockdev and the AIO context.
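[Editor's note: the flush rule described above can be sketched as follows. This is a simplified illustration, not the actual QEMU code; the function names and the treatment of 0 as "no explicit limit at this level" are assumptions based on the description in this thread.]

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: derive the effective batch limit from the per-AIO-context
 * and per-device aio-max-batch values.  A value of 0 is taken to mean
 * "no explicit limit set at this level" (hypothetical convention). */
static uint64_t effective_max_batch(uint64_t ctx_max_batch,
                                    uint64_t dev_max_batch)
{
    if (ctx_max_batch == 0) {
        return dev_max_batch;
    }
    if (dev_max_batch == 0) {
        return ctx_max_batch;
    }
    /* Both limits set: the smaller one wins. */
    return ctx_max_batch < dev_max_batch ? ctx_max_batch : dev_max_batch;
}

/* The queue would be flushed once the number of queued requests reaches
 * the effective limit; with no limit set, flushing is left to the AIO
 * backend's own heuristics. */
static int should_flush(unsigned in_queue,
                        uint64_t ctx_max_batch, uint64_t dev_max_batch)
{
    uint64_t limit = effective_max_batch(ctx_max_batch, dev_max_batch);
    return limit != 0 && in_queue >= limit;
}
```

With, say, a context limit of 32 and a device limit of 8, a batch of 8 requests from that device triggers a flush, while 7 do not.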

>
>Maybe:
>
>  maximum number of requests to batch together into a single submission
>  in the AIO backend. If multiple BlockdevOptionsFile sharing an AIO
>  backend have different values the smallest value is chosen. ...

What about this:

   maximum number of requests to batch together into a single submission
   in the AIO backend. The smallest value between this and the AIO 
   context's 
   aio-max-batch value is chosen. ...

Thanks,
Stefano
Kevin Wolf Oct. 25, 2021, 2:04 p.m. UTC | #3
On 25.10.2021 at 12:20, Stefano Garzarella wrote:
> On Thu, Oct 21, 2021 at 04:18:41PM +0100, Stefan Hajnoczi wrote:
> > On Thu, Sep 23, 2021 at 04:30:58PM +0200, Stefano Garzarella wrote:
> > > [...]
> > > +# @aio-max-batch: maximum number of requests in an AIO backend batch that
> > > +#                 contains request from this block device. 0 means that the
> > 
> > The first sentence is a little unclear. I guess s/request/requests/ but
> > that still doesn't fully explain how this works.
> > 
> > Does the AIO backend use the minimum aio-max-batch value of all its
> > blockdevs?
> 
> It's a little simpler to avoid having to recalculate the minimum for each
> attach/release of blockdevs.
> 
> When the blockdev does submit or unplug, the queue is flushed if the number
> of requests in the batch is greater than or equal to the smallest aio-max-batch
> value of the blockdev and the AIO context.
> 
> > 
> > Maybe:
> > 
> >  maximum number of requests to batch together into a single submission
> >  in the AIO backend. If multiple BlockdevOptionsFile sharing an AIO
> >  backend have different values the smallest value is chosen. ...
> 
> What about this:
> 
>   maximum number of requests to batch together into a single submission
>   in the AIO backend. The smallest value between this and AIO context's
> aio-max-batch value is chosen. ...

I like this, except that AioContexts are an implementation detail. I
think we should refer to the iothread object instead, which is the user
visible interface to AioContexts.

Kevin
Stefano Garzarella Oct. 25, 2021, 2:40 p.m. UTC | #4
On Mon, Oct 25, 2021 at 04:04:21PM +0200, Kevin Wolf wrote:
>On 25.10.2021 at 12:20, Stefano Garzarella wrote:
>> On Thu, Oct 21, 2021 at 04:18:41PM +0100, Stefan Hajnoczi wrote:
>> > On Thu, Sep 23, 2021 at 04:30:58PM +0200, Stefano Garzarella wrote:
>> > > [...]
>> > > +# @aio-max-batch: maximum number of requests in an AIO backend batch that
>> > > +#                 contains request from this block device. 0 means that the
>> >
>> > The first sentence is a little unclear. I guess s/request/requests/ but
>> > that still doesn't fully explain how this works.
>> >
>> > Does the AIO backend use the minimum aio-max-batch value of all its
>> > blockdevs?
>>
>> It's a little simpler to avoid having to recalculate the minimum for each
>> attach/release of blockdevs.
>>
>> When the blockdev does submit or unplug, the queue is flushed if the number
>> of requests in the batch is greater than or equal to the smallest aio-max-batch
>> value of the blockdev and the AIO context.
>>
>> >
>> > Maybe:
>> >
>> >  maximum number of requests to batch together into a single submission
>> >  in the AIO backend. If multiple BlockdevOptionsFile sharing an AIO
>> >  backend have different values the smallest value is chosen. ...
>>
>> What about this:
>>
>>   maximum number of requests to batch together into a single submission
>>   in the AIO backend. The smallest value between this and AIO context's
>> aio-max-batch value is chosen. ...
>
>I like this, except that AioContexts are an implementation detail. I
>think we should refer to the iothread object instead, which is the user
>visible interface to AioContexts.

Yep, definitely better:

   maximum number of requests to batch together into a single submission
   in the AIO backend. The smallest value between this and the
   aio-max-batch value of the IOThread object is chosen. ...


Thanks,
Stefano

Patch

diff --git a/qapi/block-core.json b/qapi/block-core.json
index c8ce1d9d5d..1a8ed325bc 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -2851,6 +2851,10 @@ 
 #              for this device (default: none, forward the commands via SG_IO;
 #              since 2.11)
 # @aio: AIO backend (default: threads) (since: 2.8)
+# @aio-max-batch: maximum number of requests in an AIO backend batch that
+#                 contains request from this block device. 0 means that the
+#                 AIO backend will handle it automatically.
+#                 (default:0, since 6.2)
 # @locking: whether to enable file locking. If set to 'auto', only enable
 #           when Open File Descriptor (OFD) locking API is available
 #           (default: auto, since 2.10)
@@ -2879,6 +2883,7 @@ 
             '*pr-manager': 'str',
             '*locking': 'OnOffAuto',
             '*aio': 'BlockdevAioOptions',
+            '*aio-max-batch': 'int',
             '*drop-cache': {'type': 'bool',
                             'if': 'CONFIG_LINUX'},
             '*x-check-cache-dropped': 'bool' },
diff --git a/block/file-posix.c b/block/file-posix.c
index d81e15efa4..da50e94034 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -150,6 +150,8 @@  typedef struct BDRVRawState {
     uint64_t locked_perm;
     uint64_t locked_shared_perm;
 
+    uint64_t aio_max_batch;
+
     int perm_change_fd;
     int perm_change_flags;
     BDRVReopenState *reopen_state;
@@ -530,6 +532,11 @@  static QemuOptsList raw_runtime_opts = {
             .type = QEMU_OPT_STRING,
             .help = "host AIO implementation (threads, native, io_uring)",
         },
+        {
+            .name = "aio-max-batch",
+            .type = QEMU_OPT_NUMBER,
+            .help = "AIO max batch size (0 = auto handled by AIO backend, default: 0)",
+        },
         {
             .name = "locking",
             .type = QEMU_OPT_STRING,
@@ -609,6 +616,8 @@  static int raw_open_common(BlockDriverState *bs, QDict *options,
     s->use_linux_io_uring = (aio == BLOCKDEV_AIO_OPTIONS_IO_URING);
 #endif
 
+    s->aio_max_batch = qemu_opt_get_number(opts, "aio-max-batch", 0);
+
     locking = qapi_enum_parse(&OnOffAuto_lookup,
                               qemu_opt_get(opts, "locking"),
                               ON_OFF_AUTO_AUTO, &local_err);