
[V10,0/8] block, bfq: extend bfq to support multi-actuator drives

Message ID: 20221209094442.36896-1-paolo.valente@linaro.org

Message

Paolo Valente Dec. 9, 2022, 9:44 a.m. UTC
Hi,
here is V10. It differs from V9 in that it applies the
recommendation by Damien in [2].

Here is the whole description of this patch series again.  This
extension addresses the following issue. Single-LUN multi-actuator
SCSI drives, as well as all multi-actuator SATA drives, appear as a
single device to the I/O subsystem [1].  Yet they address commands to
different actuators internally, as a function of the Logical Block
Address (LBA). A given sector is reachable by only one of the
actuators. For example, the SATA version of Seagate's multi-actuator
drives contains two actuators and maps the lower half of the LBA
space to the lower actuator and the upper half to the upper actuator.
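
Purely as an illustration of that mapping (a sketch, not code from the
patches: the two-actuator layout, the even LBA split and the capacity
value below are assumptions), picking the actuator that serves a given
sector could look like this:

/*
 * Illustrative sketch only: assumes a drive with two actuators and an
 * even split of the LBA space, as in the Seagate example above.
 */
#include <stdio.h>

static unsigned int sector_to_actuator(unsigned long long sector,
                                       unsigned long long nr_sectors)
{
        /* Lower half of the LBA space -> actuator 0, upper half -> 1. */
        return sector < nr_sectors / 2 ? 0 : 1;
}

int main(void)
{
        /* Hypothetical capacity: roughly 18 TB of 512-byte sectors. */
        unsigned long long nr_sectors = 35156250000ULL;

        printf("first sector -> actuator %u\n",
               sector_to_actuator(0, nr_sectors));
        printf("last sector  -> actuator %u\n",
               sector_to_actuator(nr_sectors - 1, nr_sectors));
        return 0;
}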

Evidently, to fully utilize such a drive, no actuator must be left
idle or underutilized while there is pending I/O for it. To reach this
goal, the block layer must somehow control the load of each actuator
individually. As a first step, this series enriches BFQ with such
per-actuator control. It then adds a simple mechanism for guaranteeing
that actuators with pending I/O are never left idle.
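
The per-actuator bookkeeping in the series lives inside BFQ itself; the
following is only a conceptual sketch of the "never leave an actuator
with pending I/O idle" rule, with invented identifiers that do not come
from the patches:

/*
 * Conceptual sketch, not BFQ code: track per-actuator load and pick an
 * actuator that has pending requests but nothing in flight, as a
 * candidate for I/O injection.
 */
#define NR_ACTUATORS 2

struct actuator_load {
        unsigned int queued;     /* requests waiting for this actuator */
        unsigned int in_flight;  /* requests dispatched, not completed */
};

/* Return an actuator with pending work and no I/O in flight, or -1. */
static int find_underutilized_actuator(const struct actuator_load load[NR_ACTUATORS])
{
        for (int i = 0; i < NR_ACTUATORS; i++)
                if (load[i].queued > 0 && load[i].in_flight == 0)
                        return i;
        return -1;
}

In the series itself, queues are split per actuator and the equivalent
check hooks into BFQ's existing I/O-injection mechanism, but the basic
idea is the one sketched above.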

See [1] for a more detailed overview of the problem and of the
solutions implemented in this patch series. There you will also find
some preliminary performance results.

Thanks,
Paolo

[1] https://www.linaro.org/blog/budget-fair-queueing-bfq-linux-io-scheduler-optimizations-for-multi-actuator-sata-hard-drives/
[2] https://lore.kernel.org/lkml/20221208104351.35038-1-paolo.valente@linaro.org/T/#t

Davide Zini (3):
  block, bfq: split also async bfq_queues on a per-actuator basis
  block, bfq: inject I/O to underutilized actuators
  block, bfq: balance I/O injection among underutilized actuators

Federico Gavioli (1):
  block, bfq: retrieve independent access ranges from request queue

Paolo Valente (4):
  block, bfq: split sync bfq_queues on a per-actuator basis
  block, bfq: forbid stable merging of queues associated with different
    actuators
  block, bfq: move io_cq-persistent bfqq data into a dedicated struct
  block, bfq: turn bfqq_data into an array in bfq_io_cq

 block/bfq-cgroup.c  |  94 +++----
 block/bfq-iosched.c | 584 ++++++++++++++++++++++++++++++--------------
 block/bfq-iosched.h | 142 ++++++++---
 block/bfq-wf2q.c    |   2 +-
 4 files changed, 566 insertions(+), 256 deletions(-)

--
2.20.1

Comments

Paolo Valente Dec. 13, 2022, 3:40 p.m. UTC | #1
Hi Jens, Damien,
can we consider this for 6.2?

Thanks,
Paolo

> On Dec 9, 2022, at 10:44 AM, Paolo Valente <paolo.valente@linaro.org> wrote:
> 
> [...]
Jens Axboe Dec. 13, 2022, 3:43 p.m. UTC | #2
On 12/13/22 8:40 AM, Paolo Valente wrote:
> Hi Jens, Damien,
> can we consider this for 6.2?

No, it's too late to queue this up for 6.2; it already was when it was
posted on Friday. Bigger changes like that should be in my tree at
least a week before the merge window opens, preferably two (or
somewhere in between).

I already tagged the main 6.2 block changes on Friday, and the pull
request has been sent out.
Arie van der Hoeven Dec. 13, 2022, 5:10 p.m. UTC | #3
We understand being conservative, but the code paths only impact a product that is not yet on the market.  This is version 10, spanning months, with many gaps waiting on review.  It's an interesting case study.

-- Arie van der Hoeven


From: Jens Axboe <axboe@kernel.dk>
Sent: Tuesday, December 13, 2022 7:43 AM
To: Paolo Valente <paolo.valente@linaro.org>
Cc: linux-block <linux-block@vger.kernel.org>; linux-kernel <linux-kernel@vger.kernel.org>; Arie van der Hoeven <arie.vanderhoeven@seagate.com>; Rory Chen <rory.c.chen@seagate.com>; Glen Valante <glen.valante@linaro.org>; Damien Le Moal <damien.lemoal@opensource.wdc.com>
Subject: Re: [PATCH V10 0/8] block, bfq: extend bfq to support multi-actuator drives


[...]

Jens Axboe Dec. 13, 2022, 5:17 p.m. UTC | #4
Please don't top post...

On 12/13/22 10:10 AM, Arie van der Hoeven wrote:
> We understand being conservative, but the code paths only impact a
> product that is not yet on the market.  This is version 10, spanning
> months, with many gaps waiting on review.  It's an interesting case
> study.

That's a nice theory, but that's not how code works. As mentioned, the
last version was posted 1-2 weeks later than would've been appropriate
for inclusion.
Paolo Valente Dec. 15, 2022, 3:04 p.m. UTC | #5
> On Dec 13, 2022, at 6:17 PM, Jens Axboe <axboe@kernel.dk> wrote:
> 
> Please don't top post...
> 
> On 12/13/22 10:10 AM, Arie van der Hoeven wrote:
>> We understand being conservative, but the code paths only impact a
>> product that is not yet on the market.  This is version 10, spanning
>> months, with many gaps waiting on review.  It's an interesting case
>> study.
> 
> That's a nice theory, but that's not how code works. As mentioned, the
> last version was posted 1-2 weeks later than would've been appropriate
> for inclusion.
> 

So, what's the plan?

Thanks,
Paolo

> -- 
> Jens Axboe
>
Jens Axboe Dec. 15, 2022, 3:12 p.m. UTC | #6
On 12/15/22 8:04 AM, Paolo Valente wrote:
> 
> 
>> On Dec 13, 2022, at 6:17 PM, Jens Axboe <axboe@kernel.dk> wrote:
>>
>> Please don't top post...
>>
>> On 12/13/22 10:10 AM, Arie van der Hoeven wrote:
>>> We understand being conservative, but the code paths only impact a
>>> product that is not yet on the market.  This is version 10, spanning
>>> months, with many gaps waiting on review.  It's an interesting case
>>> study.
>>
>> That's a nice theory, but that's not how code works. As mentioned, the
>> last version was posted 1-2 weeks later than would've been appropriate
>> for inclusion.
>>
> 
> So, what's the plan?

Looks like 1/8 and 8/8 still need Damien to review them, and then the
series can be queued up for 6.3 when ready. Not sure why this is even a
question; it just means that inclusion is pushed out by a release, as it
missed the current merge window.