Message ID: 20230907214552.130636-1-gulam.mohamed@oracle.com (mailing list archive)
State: New, archived
Series: block: Consider inflight IO in io accounting for high latency devices
On 9/7/23 3:45 PM, Gulam Mohamed wrote:
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index ec922c6bccbe..70e5763fb799 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1000,6 +1000,8 @@ static inline void blk_account_io_done(struct request *req, u64 now)
>
>  static inline void blk_account_io_start(struct request *req)
>  {
> +	bool delta = false;
> +

This is an odd name for this variable...

> @@ -1015,7 +1017,10 @@ static inline void blk_account_io_start(struct request *req)
> 		req->part = req->q->disk->part0;
>
>  	part_stat_lock();
> -	update_io_ticks(req->part, jiffies, false);
> +	if (req->q->nr_hw_queues == 1) {
> +		delta = !!part_in_flight(req->part);
> +	}

No parens needed here. But that aside, I think this could be a lot
better. You don't really care about the number of requests inflight,
only if there are some. A better helper than part_in_flight() could do
that ala:

static bool part_any_in_flight(struct block_device *part)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		if (part_stat_local_read_cpu(part, in_flight[0], cpu) ||
		    part_stat_local_read_cpu(part, in_flight[1], cpu))
			return true;
	}

	return false;
}

But I do wonder if it's just missed state checking for the request
itself that's missing this, and this is fixing it entirely the wrong
way around.
Thanks Jens for reviewing this patch. Can you please look at my comments inline?

Regards,
Gulam Mohamed

-----Original Message-----
From: Jens Axboe <axboe@kernel.dk>
Sent: Friday, September 8, 2023 8:04 PM
To: Gulam Mohamed <gulam.mohamed@oracle.com>; linux-block@vger.kernel.org; linux-kernel@vger.kernel.org
Subject: Re: [PATCH] block: Consider inflight IO in io accounting for high latency devices

On 9/7/23 3:45 PM, Gulam Mohamed wrote:
> diff --git a/block/blk-mq.c b/block/blk-mq.c index
> ec922c6bccbe..70e5763fb799 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1000,6 +1000,8 @@ static inline void blk_account_io_done(struct
> request *req, u64 now)
>
>  static inline void blk_account_io_start(struct request *req) {
> +	bool delta = false;
> +

This is an odd name for this variable...

[GULAM]: Thanks. I will change this.

> @@ -1015,7 +1017,10 @@ static inline void blk_account_io_start(struct request *req)
> 		req->part = req->q->disk->part0;
>
>  	part_stat_lock();
> -	update_io_ticks(req->part, jiffies, false);
> +	if (req->q->nr_hw_queues == 1) {
> +		delta = !!part_in_flight(req->part);
> +	}

No parens needed here. But that aside, I think this could be a lot better. You don't really care about the number of requests inflight, only if there are some. A better helper than part_in_flight() could do that ala:

static bool part_any_in_flight(struct block_device *part)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		if (part_stat_local_read_cpu(part, in_flight[0], cpu) ||
		    part_stat_local_read_cpu(part, in_flight[1], cpu))
			return true;
	}

	return false;
}

[GULAM]: Is there a possibility that the IO request submit and completion can happen on different CPUs? I am thinking that there could be positive numbers and negative numbers on different CPUs, resulting in a total inflight of 0. The negative numbers could occur because the IO completion happened on another CPU.
But I do wonder if it's just missed state checking for the request itself that's missing this, and this is fixing it entirely the wrong way around.

[GULAM]: I am trying to understand this comment. Can you please elaborate on it?

--
Jens Axboe
diff --git a/block/blk-core.c b/block/blk-core.c
index 9d51e9894ece..bc3be34b54fc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -953,8 +953,13 @@ void update_io_ticks(struct block_device *part, unsigned long now, bool end)
 unsigned long bdev_start_io_acct(struct block_device *bdev, enum req_op op,
 				 unsigned long start_time)
 {
+	bool delta = false;
+
 	part_stat_lock();
-	update_io_ticks(bdev, start_time, false);
+	if (bdev->bd_queue->nr_hw_queues == 1) {
+		delta = !!part_in_flight(bdev);
+	}
+	update_io_ticks(bdev, start_time, delta);
 	part_stat_local_inc(bdev, in_flight[op_is_write(op)]);
 	part_stat_unlock();
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ec922c6bccbe..70e5763fb799 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1000,6 +1000,8 @@ static inline void blk_account_io_done(struct request *req, u64 now)
 
 static inline void blk_account_io_start(struct request *req)
 {
+	bool delta = false;
+
 	trace_block_io_start(req);
 
 	if (blk_do_io_stat(req)) {
@@ -1015,7 +1017,10 @@ static inline void blk_account_io_start(struct request *req)
 			req->part = req->q->disk->part0;
 
 		part_stat_lock();
-		update_io_ticks(req->part, jiffies, false);
+		if (req->q->nr_hw_queues == 1) {
+			delta = !!part_in_flight(req->part);
+		}
+		update_io_ticks(req->part, jiffies, delta);
 		part_stat_unlock();
 	}
 }
diff --git a/block/blk.h b/block/blk.h
index 08a358bc0919..37f778c1c1df 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -292,6 +292,7 @@ ssize_t part_fail_store(struct device *dev, struct device_attribute *attr,
 ssize_t part_timeout_show(struct device *, struct device_attribute *, char *);
 ssize_t part_timeout_store(struct device *, struct device_attribute *,
 				const char *, size_t);
+unsigned int part_in_flight(struct block_device *part);
 
 static inline bool bio_may_exceed_limits(struct bio *bio,
 					 const struct queue_limits *lim)
diff --git a/block/genhd.c b/block/genhd.c
index cc32a0c704eb..8cf16dc7e195 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -118,7 +118,7 @@ static void part_stat_read_all(struct block_device *part,
 	}
 }
 
-static unsigned int part_in_flight(struct block_device *part)
+unsigned int part_in_flight(struct block_device *part)
 {
 	unsigned int inflight = 0;
 	int cpu;
For high latency devices, we need to take the inflight IOs into account when calculating the disk utilization. In the current block layer IO accounting, io_ticks is incremented by 1 at the start of an IO, and the delta (the time difference between the start and end of the IO) is added at the end of the IO. This causes a small issue for high latency devices.

Multiple IOs can come in before the previous IOs complete. Suppose IO 'A' comes in, and before it completes, IOs 'B', 'C', 'D' and 'E' come in. With the current implementation, we add only 1 to io_ticks at each IO arrival and update the disk time 'stamp' with the current time ('now'). Now, just after 'E', the completion of 'A' arrives. For this completion, we update io_ticks with a delta which is only "A(end_time) - E(start_time('stamp'))". Even though the disk was also busy processing the IOs B, C and D, their processing time is missing from io_ticks. This causes %util to report the disk utilization incorrectly, making it impossible to drive the disk utilization to 100% even under heavy load.

To fix this, take the inflight IOs into account when calculating the disk utilization: when updating io_ticks at IO arrival, check whether there are any inflight IOs, and if there are, add the delta to io_ticks instead of 1.

This is not an issue for low latency devices, as there is very little difference between the start and end of an IO. Moreover, adding this inflight IO check to the IO accounting of low latency devices would add overhead and hurt IO performance. So the inflight IO check is added only for high latency devices like HDDs and not for low latency devices like NVMe. In this fix, the distinction is made based on the number of hardware queues supported by the disk: HDDs usually support only 1 HW queue, while NVMe devices support multiple HW queues.
The following is the fio job file used to test the fix, with the results:

[global]
bs=64K
iodepth=120
direct=1
ioengine=libaio
time_based
runtime=100
numjobs=12
name=raw-randread
rw=randread

[job1]
filename=/dev/sda:/dev/sdb:/dev/sdc:/dev/sdd:/dev/sde:/dev/sdf:/dev/sdg:/dev/sdh:/dev/sdi:/dev/sdj:/dev/sdk:/dev/sdm

Results without fix
-------------------
Device  r/s     w/s   rMB/s  wMB/s  rrqm/s  wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util
sdj     413.00  0.00  25.81  0.00   0.00    0.00    0.00   0.00   26.75    0.00     11.05   64.00     0.00      1.53   63.30

Results with fix
----------------
Device  r/s     w/s   rMB/s  wMB/s  rrqm/s  wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util
sdd     257.00  0.00  16.06  0.00   0.00    0.00    0.00   0.00   101.47   0.00     26.08   64.00     0.00      3.89   100.00

Signed-off-by: Gulam Mohamed <gulam.mohamed@oracle.com>
---
 block/blk-core.c | 7 ++++++-
 block/blk-mq.c   | 7 ++++++-
 block/blk.h      | 1 +
 block/genhd.c    | 2 +-
 4 files changed, 14 insertions(+), 3 deletions(-)