
[1/1] block: optimise in irq bio put caching

Message ID dc78cadc0a057509dfb0f7fb2ce31affaefeb0c7.1705627291.git.asml.silence@gmail.com (mailing list archive)
State New, archived

Commit Message

Pavel Begunkov Jan. 19, 2024, 1:23 a.m. UTC
The put side of the percpu bio caching is mainly targeting completions
in the hard irq context, but the context is not guaranteed so we guard
against those cases by switching interrupts off.

Disabling interrupts while they're already disabled is supposed to be
fast, but profiling shows it's far from perfect. Instead, we can infer
the interrupt state from in_hardirq(), which is just a fast var read,
and fall back to the normal bio_free() otherwise. With that, the caching
no longer covers softirq/task completions, but that should be just fine;
we have never measured whether caching brings anything in those
scenarios.

Profiling indicates that the bio_put() cost is reduced by ~3.5 times
(1.76% -> 0.49%), and throughput of CPU-bound benchmarks improves
by around 1% (t/io_uring with high QD and several drives).

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 block/bio.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)
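
The gist of the change, as a standalone sketch rather than the exact patch
(field names follow the diff below, bio_uninit() placement is simplified,
and the get_cpu()/put_cpu() pair plus the cache size cap are omitted;
struct bio_alloc_cache is private to block/bio.c):

	/* Before: always guard the irq free list by toggling interrupts */
	unsigned long flags;

	local_irq_save(flags);
	bio->bi_next = cache->free_list_irq;
	cache->free_list_irq = bio;
	cache->nr_irq++;
	local_irq_restore(flags);

	/*
	 * After: only use the irq free list when actually in hard irq
	 * context, where interrupts are already disabled; otherwise fall
	 * back to bio_free() instead of touching the interrupt state.
	 */
	if (in_hardirq()) {
		lockdep_assert_irqs_disabled();
		bio_uninit(bio);
		bio->bi_next = cache->free_list_irq;
		cache->free_list_irq = bio;
		cache->nr_irq++;
	} else {
		bio_free(bio);
	}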

Comments

Pavel Begunkov Jan. 29, 2024, 2:36 p.m. UTC | #1
On 1/19/24 01:23, Pavel Begunkov wrote:
> The put side of the percpu bio caching is mainly targeting completions
> in the hard irq context, but the context is not guaranteed so we guard
> against those cases by switching interrupts off.
> 
> Disabling interrupts while they're already disabled is supposed to be
> fast, but profiling shows it's far from perfect. Instead, we can infer
> the interrupt state from in_hardirq(), which is just a fast var read,
> and fall back to the normal bio_free() otherwise. With that, the caching
> no longer covers softirq/task completions, but that should be just fine;
> we have never measured whether caching brings anything in those
> scenarios.
> 
> Profiling indicates that the bio_put() cost is reduced by ~3.5 times
> (1.76% -> 0.49%), and throughput of CPU-bound benchmarks improves
> by around 1% (t/io_uring with high QD and several drives).

Let me know if there are any concerns with the patch
Christoph Hellwig Jan. 29, 2024, 5:24 p.m. UTC | #2
On Mon, Jan 29, 2024 at 02:36:57PM +0000, Pavel Begunkov wrote:
> Let me know if there are any concerns with the patch

This seems to lose the case where non-polled bios are freed
from process context, which can be true with threaded interrupts
or various block remappers that defer I/O completions to workqueues,
and also a lot of file systems (but currently the alloc cache isn't
used by file systems).

Also jumping backward for non-loop code flow is a nasty pattern.
Pavel Begunkov Jan. 29, 2024, 6 p.m. UTC | #3
On 1/29/24 17:24, Christoph Hellwig wrote:
> On Mon, Jan 29, 2024 at 02:36:57PM +0000, Pavel Begunkov wrote:
>> Let me know if there are any concerns with the patch
> 
> This seems to lose the case where non-polled bios are freed
> from process context, which can be true with threaded interrupts
> or various block remappers that defer I/O completions to workqueues,
> and also a lot of file systems (but currently the alloc cache isn't
> used by file systems).

For the task context I can generalise the poll branch

if (in_task()) { // previously if (REQ_POLLED)
	// ->free_list;
} else if (in_hardirq()) {
	// ->free_list_irq;
} else {
	bio_free();
}

> Also jumping backward for non-loop code flow is a nasty pattern.

How come, considering it's jumping to a return? I can switch
the bio_free() and goto blocks so it's a jump forward, if
that's the preference.
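
Spelled out against the patch below, the generalised branch sketched above
would look roughly like this (untested illustration only; the surrounding
cache lookup, size cap and put_cpu() stay as in the patch):

	if (in_task()) {
		/* task context: the plain free_list needs no irq protection */
		bio_uninit(bio);
		bio->bi_next = cache->free_list;
		bio->bi_bdev = NULL;
		cache->free_list = bio;
		cache->nr++;
	} else if (in_hardirq()) {
		/* interrupts are already off in hard irq context */
		lockdep_assert_irqs_disabled();
		bio_uninit(bio);
		bio->bi_next = cache->free_list_irq;
		cache->free_list_irq = bio;
		cache->nr_irq++;
	} else {
		/* softirq etc.: skip the cache and free normally */
		bio_free(bio);
	}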
Christoph Hellwig Jan. 30, 2024, 8:25 a.m. UTC | #4
On Mon, Jan 29, 2024 at 06:00:28PM +0000, Pavel Begunkov wrote:
> How come considering it's jumping to a return? I can switch
> the bio_free() and goto blocks so it's a jump forward, if
> that's a preference

We generally jump to the end unless it is a loop implemented using
goto, as that is the natural reading flow.
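
For illustration, the forward-jump shape being described (step_one() and
step_two() are hypothetical helpers, not code from this patch):

	static int setup_something(void)
	{
		int ret;

		ret = step_one();
		if (ret)
			goto out;

		ret = step_two();
		if (ret)
			goto out;

		/* main path keeps reading top to bottom */
	out:
		return ret;
	}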

Patch

diff --git a/block/bio.c b/block/bio.c
index 816d412c06e9..a8a4f3211893 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -763,26 +763,29 @@  static inline void bio_put_percpu_cache(struct bio *bio)
 
 	cache = per_cpu_ptr(bio->bi_pool->cache, get_cpu());
 	if (READ_ONCE(cache->nr_irq) + cache->nr > ALLOC_CACHE_MAX) {
+free:
 		put_cpu();
 		bio_free(bio);
 		return;
 	}
 
-	bio_uninit(bio);
-
-	if ((bio->bi_opf & REQ_POLLED) && !WARN_ON_ONCE(in_interrupt())) {
+	if (bio->bi_opf & REQ_POLLED) {
+		if (WARN_ON_ONCE(!in_task()))
+			goto free;
+		bio_uninit(bio);
 		bio->bi_next = cache->free_list;
 		bio->bi_bdev = NULL;
 		cache->free_list = bio;
 		cache->nr++;
-	} else {
-		unsigned long flags;
+	} else if (in_hardirq()) {
+		lockdep_assert_irqs_disabled();
 
-		local_irq_save(flags);
+		bio_uninit(bio);
 		bio->bi_next = cache->free_list_irq;
 		cache->free_list_irq = bio;
 		cache->nr_irq++;
-		local_irq_restore(flags);
+	} else {
+		goto free;
 	}
 	put_cpu();
 }