From patchwork Tue Aug 10 16:37:24 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12429209
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 1/5] bio: add allocation cache abstraction
Date: Tue, 10 Aug 2021 10:37:24 -0600
Message-Id: <20210810163728.265939-2-axboe@kernel.dk>
In-Reply-To: <20210810163728.265939-1-axboe@kernel.dk>
References: <20210810163728.265939-1-axboe@kernel.dk>

Add a set of helpers that can encapsulate bio allocations, reusing them
as needed.
Caller must provide the necessary locking, if any is needed. The primary
intended use case is polled IO from io_uring, which will not need any
external locking.

Very simple: the cache keeps a count of bios, and maintains a max of 512
with a slack of 64. If we get above max + slack, we drop the slack number
of bios.

The cache is intended to be per-task, and the user will need to supply the
storage for it. As io_uring will be the only user right now, provide a
hook that returns the cache there. Stub it out as NULL initially.

Signed-off-by: Jens Axboe
---
 block/bio.c         | 123 ++++++++++++++++++++++++++++++++++++++++----
 include/linux/bio.h |  24 +++++++--
 2 files changed, 131 insertions(+), 16 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 1fab762e079b..e3680702aeae 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -238,6 +238,35 @@ static void bio_free(struct bio *bio)
     }
 }
 
+static inline void __bio_init(struct bio *bio)
+{
+    bio->bi_next = NULL;
+    bio->bi_bdev = NULL;
+    bio->bi_opf = 0;
+    bio->bi_flags = bio->bi_ioprio = bio->bi_write_hint = 0;
+    bio->bi_status = 0;
+    bio->bi_iter.bi_sector = 0;
+    bio->bi_iter.bi_size = 0;
+    bio->bi_iter.bi_idx = 0;
+    bio->bi_iter.bi_bvec_done = 0;
+    bio->bi_end_io = NULL;
+    bio->bi_private = NULL;
+#ifdef CONFIG_BLK_CGROUP
+    bio->bi_blkg = NULL;
+    bio->bi_issue.value = 0;
+#ifdef CONFIG_BLK_CGROUP_IOCOST
+    bio->bi_iocost_cost = 0;
+#endif
+#endif
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+    bio->bi_crypt_context = NULL;
+#endif
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+    bio->bi_integrity = NULL;
+#endif
+    bio->bi_vcnt = 0;
+}
+
 /*
  * Users of this function have their own bio allocation. Subsequently,
  * they must remember to pair any call to bio_init() with bio_uninit()
@@ -246,7 +275,7 @@ static void bio_free(struct bio *bio)
 void bio_init(struct bio *bio, struct bio_vec *table,
           unsigned short max_vecs)
 {
-    memset(bio, 0, sizeof(*bio));
+    __bio_init(bio);
     atomic_set(&bio->__bi_remaining, 1);
     atomic_set(&bio->__bi_cnt, 1);
 
@@ -591,6 +620,19 @@ void guard_bio_eod(struct bio *bio)
     bio_truncate(bio, maxsector << 9);
 }
 
+static bool __bio_put(struct bio *bio)
+{
+    if (!bio_flagged(bio, BIO_REFFED))
+        return true;
+
+    BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
+
+    /*
+     * last put frees it
+     */
+    return atomic_dec_and_test(&bio->__bi_cnt);
+}
+
 /**
  * bio_put - release a reference to a bio
  * @bio:   bio to release reference to
@@ -601,17 +643,8 @@ void guard_bio_eod(struct bio *bio)
  **/
 void bio_put(struct bio *bio)
 {
-    if (!bio_flagged(bio, BIO_REFFED))
+    if (__bio_put(bio))
         bio_free(bio);
-    else {
-        BIO_BUG_ON(!atomic_read(&bio->__bi_cnt));
-
-        /*
-         * last put frees it
-         */
-        if (atomic_dec_and_test(&bio->__bi_cnt))
-            bio_free(bio);
-    }
 }
 EXPORT_SYMBOL(bio_put);
 
@@ -1595,6 +1628,74 @@ int bioset_init_from_src(struct bio_set *bs, struct bio_set *src)
 }
 EXPORT_SYMBOL(bioset_init_from_src);
 
+void bio_alloc_cache_init(struct bio_alloc_cache *cache)
+{
+    bio_list_init(&cache->free_list);
+    cache->nr = 0;
+}
+
+static void bio_alloc_cache_prune(struct bio_alloc_cache *cache,
+                  unsigned int nr)
+{
+    struct bio *bio;
+    unsigned int i;
+
+    i = 0;
+    while ((bio = bio_list_pop(&cache->free_list)) != NULL) {
+        cache->nr--;
+        bio_free(bio);
+        if (++i == nr)
+            break;
+    }
+}
+
+void bio_alloc_cache_destroy(struct bio_alloc_cache *cache)
+{
+    bio_alloc_cache_prune(cache, -1U);
+}
+
+struct bio *bio_cache_get(struct bio_alloc_cache *cache, gfp_t gfp,
+              unsigned short nr_vecs, struct bio_set *bs)
+{
+    struct bio *bio;
+
+    if (nr_vecs > BIO_INLINE_VECS)
+        return NULL;
+    if (bio_list_empty(&cache->free_list)) {
+alloc:
+        if (bs)
+            return bio_alloc_bioset(gfp, nr_vecs, bs);
+        else
+            return bio_alloc(gfp, nr_vecs);
+    }
+
+    bio = bio_list_peek(&cache->free_list);
+    if (bs && bio->bi_pool != bs)
+        goto alloc;
+    bio_list_del_head(&cache->free_list, bio);
+    cache->nr--;
+    bio_init(bio, nr_vecs ? bio->bi_inline_vecs : NULL, nr_vecs);
+    return bio;
+}
+
+#define ALLOC_CACHE_MAX        512
+#define ALLOC_CACHE_SLACK       64
+
+void bio_cache_put(struct bio_alloc_cache *cache, struct bio *bio)
+{
+    if (unlikely(!__bio_put(bio)))
+        return;
+    if (cache) {
+        bio_uninit(bio);
+        bio_list_add_head(&cache->free_list, bio);
+        cache->nr++;
+        if (cache->nr > ALLOC_CACHE_MAX + ALLOC_CACHE_SLACK)
+            bio_alloc_cache_prune(cache, ALLOC_CACHE_SLACK);
+    } else {
+        bio_free(bio);
+    }
+}
+
 static int __init init_bio(void)
 {
     int i;

diff --git a/include/linux/bio.h b/include/linux/bio.h
index 2203b686e1f0..c351aa88d137 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -652,18 +652,22 @@ static inline struct bio *bio_list_peek(struct bio_list *bl)
     return bl->head;
 }
 
-static inline struct bio *bio_list_pop(struct bio_list *bl)
+static inline void bio_list_del_head(struct bio_list *bl, struct bio *head)
 {
-    struct bio *bio = bl->head;
-
-    if (bio) {
+    if (head) {
         bl->head = bl->head->bi_next;
         if (!bl->head)
             bl->tail = NULL;
 
-        bio->bi_next = NULL;
+        head->bi_next = NULL;
     }
+}
 
+static inline struct bio *bio_list_pop(struct bio_list *bl)
+{
+    struct bio *bio = bl->head;
+
+    bio_list_del_head(bl, bio);
     return bio;
 }
 
@@ -676,6 +680,16 @@ static inline struct bio *bio_list_get(struct bio_list *bl)
     return bio;
 }
 
+struct bio_alloc_cache {
+    struct bio_list        free_list;
+    unsigned int        nr;
+};
+
+void bio_alloc_cache_init(struct bio_alloc_cache *);
+void bio_alloc_cache_destroy(struct bio_alloc_cache *);
+struct bio *bio_cache_get(struct bio_alloc_cache *, gfp_t, unsigned short, struct bio_set *bs);
+void bio_cache_put(struct bio_alloc_cache *, struct bio *);
+
 /*
  * Increment chain count for the bio. Make sure the CHAIN flag update
  * is visible before the raised count.
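The put-side policy in this patch (cap the cache at 512 entries with a slack of 64, and prune a slack-sized batch once the cap is exceeded) can be modeled in plain userspace C. The names below (struct node, cache_put, and friends) are illustrative stand-ins, not the kernel API:

```c
#include <assert.h>
#include <stdlib.h>

#define CACHE_MAX    512
#define CACHE_SLACK   64

struct node {
    struct node *next;
};

struct alloc_cache {
    struct node *head;    /* singly-linked free list */
    unsigned int nr;      /* number of cached entries */
};

static void cache_init(struct alloc_cache *c)
{
    c->head = NULL;
    c->nr = 0;
}

/* Free up to @nr cached entries, mirroring bio_alloc_cache_prune(). */
static void cache_prune(struct alloc_cache *c, unsigned int nr)
{
    struct node *n;
    unsigned int i = 0;

    while ((n = c->head) != NULL) {
        c->head = n->next;
        c->nr--;
        free(n);
        if (++i == nr)
            break;
    }
}

/* Pop a cached entry, or fall back to the allocator on a miss. */
static struct node *cache_get(struct alloc_cache *c)
{
    struct node *n = c->head;

    if (!n)
        return malloc(sizeof(*n));
    c->head = n->next;
    c->nr--;
    return n;
}

/* Return an entry; prune a whole batch once we exceed max + slack. */
static void cache_put(struct alloc_cache *c, struct node *n)
{
    n->next = c->head;
    c->head = n;
    c->nr++;
    if (c->nr > CACHE_MAX + CACHE_SLACK)
        cache_prune(c, CACHE_SLACK);
}
```

Pruning a batch rather than a single entry per put keeps the common put path cheap: the threshold check fails almost every time, and the prune cost is amortized over the next 64 puts.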
From patchwork Tue Aug 10 16:37:25 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12429211
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 2/5] io_uring: use kiocb->private to hold rw_len
Date: Tue, 10 Aug 2021 10:37:25 -0600
Message-Id: <20210810163728.265939-3-axboe@kernel.dk>
In-Reply-To: <20210810163728.265939-1-axboe@kernel.dk>
References: <20210810163728.265939-1-axboe@kernel.dk>

We don't need a separate member in io_rw for this, we can just use the
kiocb->private field, as we're not using it for anything else anyway.
This saves 8 bytes in io_rw, which we'll be needing once kiocb grows a
new member.

Signed-off-by: Jens Axboe
---
 fs/io_uring.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 91a301bb1644..f35b54f016f3 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -557,7 +557,6 @@ struct io_rw {
     /* NOTE: kiocb has the file as the first member, so don't do it here */
     struct kiocb            kiocb;
     u64                addr;
-    u64                len;
 };
 
 struct io_connect {
@@ -2675,6 +2674,16 @@ static bool io_file_supports_nowait(struct io_kiocb *req, int rw)
     return __io_file_supports_nowait(req->file, rw);
 }
 
+static inline void *u64_to_ptr(__u64 ptr)
+{
+    return (void *)(unsigned long) ptr;
+}
+
+static inline __u64 ptr_to_u64(void *ptr)
+{
+    return (__u64)(unsigned long)ptr;
+}
+
 static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
     struct io_ring_ctx *ctx = req->ctx;
@@ -2732,7 +2741,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
     }
 
     req->rw.addr = READ_ONCE(sqe->addr);
-    req->rw.len = READ_ONCE(sqe->len);
+    req->rw.kiocb.private = u64_to_ptr(READ_ONCE(sqe->len));
     req->buf_index = READ_ONCE(sqe->buf_index);
     return 0;
 }
@@ -2799,7 +2808,7 @@ static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
 static int __io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
                  struct io_mapped_ubuf *imu)
 {
-    size_t len = req->rw.len;
+    size_t len = ptr_to_u64(req->rw.kiocb.private);
     u64 buf_end, buf_addr = req->rw.addr;
     size_t offset;
 
@@ -2997,7 +3006,7 @@ static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
         iov[0].iov_len = kbuf->len;
         return 0;
     }
-    if (req->rw.len != 1)
+    if (ptr_to_u64(req->rw.kiocb.private) != 1)
         return -EINVAL;
 
 #ifdef CONFIG_COMPAT
@@ -3012,7 +3021,7 @@ static int io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
                struct iov_iter *iter, bool needs_lock)
 {
     void __user *buf = u64_to_user_ptr(req->rw.addr);
-    size_t sqe_len = req->rw.len;
+    size_t sqe_len = ptr_to_u64(req->rw.kiocb.private);
     u8 opcode = req->opcode;
     ssize_t ret;
 
@@ -3030,7 +3039,7 @@ static int io_import_iovec(int rw, struct io_kiocb *req, struct iovec **iovec,
         buf = io_rw_buffer_select(req, &sqe_len, needs_lock);
         if (IS_ERR(buf))
             return PTR_ERR(buf);
-        req->rw.len = sqe_len;
+        req->rw.kiocb.private = u64_to_ptr(sqe_len);
     }
 
     ret = import_single_range(rw, buf, sqe_len, *iovec, iter);
@@ -3063,6 +3072,7 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
 {
     struct kiocb *kiocb = &req->rw.kiocb;
     struct file *file = req->file;
+    unsigned long rw_len;
     ssize_t ret = 0;
 
     /*
@@ -3075,6 +3085,7 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
     if (kiocb->ki_flags & IOCB_NOWAIT)
         return -EAGAIN;
 
+    rw_len = ptr_to_u64(req->rw.kiocb.private);
     while (iov_iter_count(iter)) {
         struct iovec iovec;
         ssize_t nr;
@@ -3083,7 +3094,7 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
             iovec = iov_iter_iovec(iter);
         } else {
             iovec.iov_base = u64_to_user_ptr(req->rw.addr);
-            iovec.iov_len = req->rw.len;
+            iovec.iov_len = rw_len;
         }
 
         if (rw == READ) {
@@ -3102,7 +3113,7 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
         ret += nr;
         if (nr != iovec.iov_len)
             break;
-        req->rw.len -= nr;
+        rw_len -= nr;
         req->rw.addr += nr;
         iov_iter_advance(iter, nr);
     }
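The core of this patch is a pair of casts that park sqe->len in the pointer-sized kiocb->private slot. A standalone round-trip sketch of those two helpers, using userspace stdint types in place of the kernel's __u64:

```c
#include <assert.h>
#include <stdint.h>

/*
 * A length is stored in an otherwise unused void * field and recovered
 * later. This is only safe when the value fits in an unsigned long;
 * sqe->len is a u32 in io_uring, so the round trip holds on both
 * 32-bit and 64-bit targets.
 */
static inline void *u64_to_ptr(uint64_t v)
{
    return (void *)(unsigned long)v;
}

static inline uint64_t ptr_to_u64(void *p)
{
    return (uint64_t)(unsigned long)p;
}
```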
From patchwork Tue Aug 10 16:37:26 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12429213
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 3/5] fs: add ki_bio_cache pointer to struct kiocb
Date: Tue, 10 Aug 2021 10:37:26 -0600
Message-Id: <20210810163728.265939-4-axboe@kernel.dk>
In-Reply-To: <20210810163728.265939-1-axboe@kernel.dk>
References: <20210810163728.265939-1-axboe@kernel.dk>

This allows an issuer (and owner of a kiocb) to pass in a bio allocation
cache, which can be used to avoid the churn of repeated bio allocations
and frees.

Signed-off-by: Jens Axboe
---
 include/linux/fs.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 640574294216..5f17d10ddc2d 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -291,6 +291,7 @@ struct page;
 struct address_space;
 struct writeback_control;
 struct readahead_control;
+struct bio_alloc_cache;
 
 /*
  * Write life time hint values.
@@ -319,6 +320,8 @@ enum rw_hint {
 /* iocb->ki_waitq is valid */
 #define IOCB_WAITQ        (1 << 19)
 #define IOCB_NOIO        (1 << 20)
+/* iocb->ki_bio_cache is valid */
+#define IOCB_ALLOC_CACHE    (1 << 21)
 
 struct kiocb {
     struct file        *ki_filp;
@@ -337,6 +340,14 @@ struct kiocb {
         struct wait_page_queue    *ki_waitq; /* for async buffered IO */
     };
 
+    /*
+     * If set, owner of iov_iter can pass in a fast-cache for bio
+     * allocations.
+     */
+#ifdef CONFIG_BLOCK
+    struct bio_alloc_cache    *ki_bio_cache;
+#endif
+
     randomized_struct_fields_end
 };

From patchwork Tue Aug 10 16:37:27 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12429215
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 4/5] io_uring: wire up bio allocation cache
Date: Tue, 10 Aug 2021 10:37:27 -0600
Message-Id: <20210810163728.265939-5-axboe@kernel.dk>
In-Reply-To: <20210810163728.265939-1-axboe@kernel.dk>
References: <20210810163728.265939-1-axboe@kernel.dk>

Initialize a bio allocation cache, and mark it as being used for IOPOLL.
We could use it for non-polled IO as well, but it'd need some locking and
probably would negate much of the win in that case.

We start with IOPOLL, as completions are locked by the ctx lock anyway.
So no further locking is needed there.

This brings an IOPOLL gen2 Optane QD=128 workload from ~3.0M IOPS to
~3.3M IOPS.

Signed-off-by: Jens Axboe
---
 fs/io_uring.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index f35b54f016f3..60316cfc712a 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -324,6 +324,10 @@ struct io_submit_state {
     /* inline/task_work completion list, under ->uring_lock */
     struct list_head    free_list;
 
+#ifdef CONFIG_BLOCK
+    struct bio_alloc_cache    bio_cache;
+#endif
+
     /*
      * File reference cache
      */
@@ -1201,6 +1205,9 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
     init_llist_head(&ctx->rsrc_put_llist);
     INIT_LIST_HEAD(&ctx->tctx_list);
     INIT_LIST_HEAD(&ctx->submit_state.free_list);
+#ifdef CONFIG_BLOCK
+    bio_alloc_cache_init(&ctx->submit_state.bio_cache);
+#endif
     INIT_LIST_HEAD(&ctx->locked_free_list);
     INIT_DELAYED_WORK(&ctx->fallback_work, io_fallback_req_func);
     return ctx;
@@ -2267,6 +2274,8 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
         if (READ_ONCE(req->result) == -EAGAIN && resubmit &&
             !(req->flags & REQ_F_DONT_REISSUE)) {
             req->iopoll_completed = 0;
+            /* Don't use cache for async retry, not locking safe */
+            req->rw.kiocb.ki_flags &= ~IOCB_ALLOC_CACHE;
             req_ref_get(req);
             io_req_task_queue_reissue(req);
             continue;
@@ -2684,6 +2693,31 @@ static inline __u64 ptr_to_u64(void *ptr)
     return (__u64)(unsigned long)ptr;
 }
 
+static void io_mark_alloc_cache(struct io_kiocb *req)
+{
+#ifdef CONFIG_BLOCK
+    struct kiocb *kiocb = &req->rw.kiocb;
+    struct block_device *bdev = NULL;
+
+    if (S_ISBLK(file_inode(kiocb->ki_filp)->i_mode))
+        bdev = I_BDEV(kiocb->ki_filp->f_mapping->host);
+    else if (S_ISREG(file_inode(kiocb->ki_filp)->i_mode))
+        bdev = kiocb->ki_filp->f_inode->i_sb->s_bdev;
+
+    /*
+     * If the lower level device doesn't support polled IO, then
+     * we cannot safely use the alloc cache. This really should
+     * be a failure case for polled IO...
+     */
+    if (!bdev ||
+        !test_bit(QUEUE_FLAG_POLL, &bdev_get_queue(bdev)->queue_flags))
+        return;
+
+    kiocb->ki_flags |= IOCB_ALLOC_CACHE;
+    kiocb->ki_bio_cache = &req->ctx->submit_state.bio_cache;
+#endif /* CONFIG_BLOCK */
+}
+
 static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
     struct io_ring_ctx *ctx = req->ctx;
@@ -2726,6 +2760,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
             return -EOPNOTSUPP;
 
         kiocb->ki_flags |= IOCB_HIPRI;
+        io_mark_alloc_cache(req);
         kiocb->ki_complete = io_complete_rw_iopoll;
         req->iopoll_completed = 0;
     } else {
@@ -2792,6 +2827,8 @@ static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
     if (check_reissue && (req->flags & REQ_F_REISSUE)) {
         req->flags &= ~REQ_F_REISSUE;
         if (io_resubmit_prep(req)) {
+            /* Don't use cache for async retry, not locking safe */
+            req->rw.kiocb.ki_flags &= ~IOCB_ALLOC_CACHE;
             req_ref_get(req);
             io_req_task_queue_reissue(req);
         } else {
@@ -8640,6 +8677,9 @@ static void io_req_caches_free(struct io_ring_ctx *ctx)
         state->free_reqs = 0;
     }
 
+#ifdef CONFIG_BLOCK
+    bio_alloc_cache_destroy(&state->bio_cache);
+#endif
     io_flush_cached_locked_reqs(ctx, state);
     io_req_cache_free(&state->free_list);
     mutex_unlock(&ctx->uring_lock);
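Both hunks that clear IOCB_ALLOC_CACHE enforce the same invariant: the cache is opted into per request at prep time, and revoked before any retry that runs outside the ring lock. That reduces to plain flag manipulation, sketched here with illustrative names and bit values (not the kernel definitions):

```c
#include <assert.h>

/* Illustrative flag bits only; the real IOCB_ALLOC_CACHE is bit 21. */
#define KI_HIPRI        (1u << 0)
#define KI_ALLOC_CACHE  (1u << 1)

struct sketch_kiocb {
    unsigned int ki_flags;
};

/* Enable the cache only for polled IO on a device that supports polling. */
static void mark_alloc_cache(struct sketch_kiocb *k, int dev_supports_poll)
{
    if ((k->ki_flags & KI_HIPRI) && dev_supports_poll)
        k->ki_flags |= KI_ALLOC_CACHE;
}

/* An async retry runs without the ctx lock held: drop the flag first. */
static void prep_async_retry(struct sketch_kiocb *k)
{
    k->ki_flags &= ~KI_ALLOC_CACHE;
}
```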
From patchwork Tue Aug 10 16:37:28 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12429217
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH 5/5] block: enable use of bio allocation cache
Date: Tue, 10 Aug 2021 10:37:28 -0600
Message-Id: <20210810163728.265939-6-axboe@kernel.dk>
In-Reply-To: <20210810163728.265939-1-axboe@kernel.dk>
References: <20210810163728.265939-1-axboe@kernel.dk>

If the kiocb passed in has a bio cache specified, then use that to
allocate a (and free)
new bio if possible.

Signed-off-by: Jens Axboe
---
 fs/block_dev.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9ef4f1fc2cb0..a192c5672430 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -327,6 +327,14 @@ static int blkdev_iopoll(struct kiocb *kiocb, bool wait)
     return blk_poll(q, READ_ONCE(kiocb->ki_cookie), wait);
 }
 
+static void dio_bio_put(struct blkdev_dio *dio)
+{
+    if (dio->iocb->ki_flags & IOCB_ALLOC_CACHE)
+        bio_cache_put(dio->iocb->ki_bio_cache, &dio->bio);
+    else
+        bio_put(&dio->bio);
+}
+
 static void blkdev_bio_end_io(struct bio *bio)
 {
     struct blkdev_dio *dio = bio->bi_private;
@@ -362,7 +370,7 @@ static void blkdev_bio_end_io(struct bio *bio)
             bio_check_pages_dirty(bio);
         } else {
             bio_release_pages(bio, false);
-            bio_put(bio);
+            dio_bio_put(dio);
         }
     }
 }
@@ -385,7 +393,15 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
         (bdev_logical_block_size(bdev) - 1))
         return -EINVAL;
 
-    bio = bio_alloc_bioset(GFP_KERNEL, nr_pages, &blkdev_dio_pool);
+    bio = NULL;
+    if (iocb->ki_flags & IOCB_ALLOC_CACHE) {
+        bio = bio_cache_get(iocb->ki_bio_cache, GFP_KERNEL, nr_pages,
+                    &blkdev_dio_pool);
+        if (!bio)
+            iocb->ki_flags &= ~IOCB_ALLOC_CACHE;
+    }
+    if (!bio)
+        bio = bio_alloc_bioset(GFP_KERNEL, nr_pages, &blkdev_dio_pool);
 
     dio = container_of(bio, struct blkdev_dio, bio);
     dio->is_sync = is_sync = is_sync_kiocb(iocb);
@@ -467,7 +483,15 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
         }
 
         submit_bio(bio);
-        bio = bio_alloc(GFP_KERNEL, nr_pages);
+        bio = NULL;
+        if (iocb->ki_flags & IOCB_ALLOC_CACHE) {
+            bio = bio_cache_get(iocb->ki_bio_cache, GFP_KERNEL,
+                        nr_pages, &fs_bio_set);
+            if (!bio)
+                iocb->ki_flags &= ~IOCB_ALLOC_CACHE;
+        }
+        if (!bio)
+            bio = bio_alloc(GFP_KERNEL, nr_pages);
     }
 
     if (!is_poll)
@@ -492,7 +516,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
     if (likely(!ret))
         ret = dio->size;
 
-    dio_bio_put(dio);
+    dio_bio_put(dio);
 
     return ret;
 }
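The consumer-side shape in __blkdev_direct_IO (try the cache first, and on a miss clear the flag so the completion side frees through the plain path) can be modeled in userspace. All names here are hypothetical; note that the flag is only cleared when the cache is empty, so no cached entry can be stranded behind a cleared flag:

```c
#include <assert.h>
#include <stdlib.h>

#define ALLOC_CACHE_FLAG (1u << 0)   /* stands in for IOCB_ALLOC_CACHE */

struct buf {
    struct buf *next;
};

struct issuer {
    unsigned int flags;
    struct buf *cache;   /* singly-linked free list of cached buffers */
};

/* Try the cache; on a miss, disable caching and fall back to malloc(). */
static struct buf *buf_get(struct issuer *is)
{
    if (is->flags & ALLOC_CACHE_FLAG) {
        struct buf *b = is->cache;

        if (b) {
            is->cache = b->next;
            return b;
        }
        /* Miss implies the cache is empty: put must pair with free(). */
        is->flags &= ~ALLOC_CACHE_FLAG;
    }
    return malloc(sizeof(struct buf));
}

/* The flag decides the release path, like dio_bio_put() above. */
static void buf_put(struct issuer *is, struct buf *b)
{
    if (is->flags & ALLOC_CACHE_FLAG) {
        b->next = is->cache;
        is->cache = b;
    } else {
        free(b);
    }
}
```

Keeping the get and put sides keyed off the same flag is what makes the fallback safe: once a request drops to the normal allocator, its completion frees through the normal path as well.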