From patchwork Thu Nov 4 18:21:57 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12603647
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: hch@infradead.org, Jens Axboe
Subject: [PATCH 1/5] block: have plug stored requests hold references to the queue
Date: Thu, 4 Nov 2021 12:21:57 -0600
Message-Id: <20211104182201.83906-2-axboe@kernel.dk>
In-Reply-To: <20211104182201.83906-1-axboe@kernel.dk>
References: <20211104182201.83906-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org
Requests that were stored in the cache deliberately didn't hold an
enter reference to the queue; instead we grabbed one every time we
pulled a request out of there. That made for awkward logic on freeing
the remainder of the cached list, if needed, where we had to
artificially raise the queue usage count before each free.

Grab references up front for cached plug requests. That's safer, and
also more efficient.

Fixes: 47c122e35d7e ("block: pre-allocate requests if plug is started and is a batch")
Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/blk-core.c | 2 +-
 block/blk-mq.c   | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index fd389a16013c..c2d267b6f910 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1643,7 +1643,7 @@ void blk_flush_plug(struct blk_plug *plug, bool from_schedule)
 	flush_plug_callbacks(plug, from_schedule);
 	if (!rq_list_empty(plug->mq_list))
 		blk_mq_flush_plug_list(plug, from_schedule);
-	if (unlikely(!from_schedule && plug->cached_rq))
+	if (unlikely(!rq_list_empty(plug->cached_rq)))
 		blk_mq_free_plug_rqs(plug);
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c68aa0a332e1..5498454c2164 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -410,7 +410,10 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data,
 		tag_mask &= ~(1UL << i);
 		rq = blk_mq_rq_ctx_init(data, tags, tag, alloc_time_ns);
 		rq_list_add(data->cached_rq, rq);
+		nr++;
 	}
+	/* caller already holds a reference, add for remainder */
+	percpu_ref_get_many(&data->q->q_usage_counter, nr - 1);
 	data->nr_tags -= nr;
 
 	return rq_list_pop(data->cached_rq);
@@ -630,10 +633,8 @@ void blk_mq_free_plug_rqs(struct blk_plug *plug)
 {
 	struct request *rq;
 
-	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL) {
-		percpu_ref_get(&rq->q->q_usage_counter);
+	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL)
 		blk_mq_free_request(rq);
-	}
 }
 
 static void req_bio_endio(struct request *rq, struct bio *bio,
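
An aside on the reference trick above, for readers outside the block
layer: the caller of the batch allocator already holds one queue
reference, so a batch of nr objects only needs nr - 1 extra grabs, and
every later free can then drop exactly one reference with no
special-casing. Below is a minimal, self-contained C sketch of that
idea, using a plain C11 atomic in place of the kernel's percpu_ref; all
names here are invented for illustration, none of this is kernel code:

#include <stdatomic.h>
#include <stdio.h>

static atomic_long usage = 1;   /* the caller has already "entered" once */

static void get_many(long nr) { atomic_fetch_add(&usage, nr); }
static void put_one(void)     { atomic_fetch_sub(&usage, 1); }

static void alloc_batch(long nr)
{
        /* caller already holds a reference, add for the remainder */
        get_many(nr - 1);
}

int main(void)
{
        alloc_batch(4);         /* four cached objects, four refs total */
        put_one();              /* first object handed out and consumed */
        put_one();              /* remaining three freed from the cache */
        put_one();
        put_one();
        printf("usage now %ld (0 == fully drained)\n", atomic_load(&usage));
        return 0;
}
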
From patchwork Thu Nov 4 18:21:58 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12603649
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: hch@infradead.org, Jens Axboe
Subject: [PATCH 2/5] block: split request allocation components into helpers
Date: Thu, 4 Nov 2021 12:21:58 -0600
Message-Id: <20211104182201.83906-3-axboe@kernel.dk>
In-Reply-To: <20211104182201.83906-1-axboe@kernel.dk>
References: <20211104182201.83906-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

This is in preparation for a fix, but it serves as a cleanup as well,
moving the cached vs regular alloc logic out of blk_mq_submit_bio().
Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/blk-mq.c | 71 ++++++++++++++++++++++++++++++++++----------------
 1 file changed, 48 insertions(+), 23 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5498454c2164..dcb413297a96 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2478,6 +2478,51 @@ static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
 	return BLK_MAX_REQUEST_COUNT;
 }
 
+static struct request *blk_mq_get_new_requests(struct request_queue *q,
+					       struct blk_plug *plug,
+					       struct bio *bio)
+{
+	struct blk_mq_alloc_data data = {
+		.q		= q,
+		.nr_tags	= 1,
+		.cmd_flags	= bio->bi_opf,
+	};
+	struct request *rq;
+
+	if (plug) {
+		data.nr_tags = plug->nr_ios;
+		plug->nr_ios = 1;
+		data.cached_rq = &plug->cached_rq;
+	}
+
+	rq = __blk_mq_alloc_requests(&data);
+	if (rq)
+		return rq;
+
+	rq_qos_cleanup(q, bio);
+	if (bio->bi_opf & REQ_NOWAIT)
+		bio_wouldblock_error(bio);
+	return NULL;
+}
+
+static inline struct request *blk_mq_get_request(struct request_queue *q,
+						 struct blk_plug *plug,
+						 struct bio *bio)
+{
+	if (plug) {
+		struct request *rq;
+
+		rq = rq_list_peek(&plug->cached_rq);
+		if (rq) {
+			plug->cached_rq = rq_list_next(rq);
+			INIT_LIST_HEAD(&rq->queuelist);
+			return rq;
+		}
+	}
+
+	return blk_mq_get_new_requests(q, plug, bio);
+}
+
 /**
  * blk_mq_submit_bio - Create and send a request to block device.
  * @bio: Bio pointer.
@@ -2518,29 +2563,9 @@ void blk_mq_submit_bio(struct bio *bio)
 
 	rq_qos_throttle(q, bio);
 
 	plug = blk_mq_plug(q, bio);
-	if (plug && plug->cached_rq) {
-		rq = rq_list_pop(&plug->cached_rq);
-		INIT_LIST_HEAD(&rq->queuelist);
-	} else {
-		struct blk_mq_alloc_data data = {
-			.q		= q,
-			.nr_tags	= 1,
-			.cmd_flags	= bio->bi_opf,
-		};
-
-		if (plug) {
-			data.nr_tags = plug->nr_ios;
-			plug->nr_ios = 1;
-			data.cached_rq = &plug->cached_rq;
-		}
-		rq = __blk_mq_alloc_requests(&data);
-		if (unlikely(!rq)) {
-			rq_qos_cleanup(q, bio);
-			if (bio->bi_opf & REQ_NOWAIT)
-				bio_wouldblock_error(bio);
-			goto queue_exit;
-		}
-	}
+	rq = blk_mq_get_request(q, plug, bio);
+	if (unlikely(!rq))
+		goto queue_exit;
 
 	trace_block_getrq(bio);
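
The shape of the split, reduced to its essentials: a small inline fast
path that pops from a per-plug cache, and an out-of-line slow path that
does the real allocation. The sketch below is hypothetical userspace C;
struct plug, get_new and get_request are stand-ins invented here, not
the kernel API:

#include <stdlib.h>

struct request { struct request *next; };
struct plug    { struct request *cached; int nr_ios; };

/* slow path: in the kernel this would batch-allocate nr_ios requests */
static struct request *get_new(struct plug *plug)
{
        return calloc(1, sizeof(struct request));
}

/* fast path: consume the plug cache before touching the allocator */
static inline struct request *get_request(struct plug *plug)
{
        if (plug && plug->cached) {
                struct request *rq = plug->cached;

                plug->cached = rq->next;
                return rq;
        }
        return get_new(plug);
}

int main(void)
{
        struct plug plug = { 0 };
        struct request *rq = get_request(&plug); /* cache empty: slow path */

        free(rq);
        return 0;
}
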
From patchwork Thu Nov 4 18:21:59 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12603653
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: hch@infradead.org, Jens Axboe
Subject: [PATCH 3/5] block: make blk_try_enter_queue() available for blk-mq
Date: Thu, 4 Nov 2021 12:21:59 -0600
Message-Id: <20211104182201.83906-4-axboe@kernel.dk>
In-Reply-To: <20211104182201.83906-1-axboe@kernel.dk>
References: <20211104182201.83906-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

Just a prep patch for shifting the queue enter logic. This moves the
expected fast path inline, and leaves bio_queue_enter() as an
out-of-line function call. We don't want to inline the latter, as it's
mostly slow path code.

Signed-off-by: Jens Axboe
---
 block/blk-core.c | 26 +-------------------------
 block/blk.h      | 25 +++++++++++++++++++++++++
 2 files changed, 26 insertions(+), 25 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index c2d267b6f910..e00f5a2287cc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -386,30 +386,6 @@ void blk_cleanup_queue(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_cleanup_queue);
 
-static bool blk_try_enter_queue(struct request_queue *q, bool pm)
-{
-	rcu_read_lock();
-	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
-		goto fail;
-
-	/*
-	 * The code that increments the pm_only counter must ensure that the
-	 * counter is globally visible before the queue is unfrozen.
-	 */
-	if (blk_queue_pm_only(q) &&
-	    (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
-		goto fail_put;
-
-	rcu_read_unlock();
-	return true;
-
-fail_put:
-	blk_queue_exit(q);
-fail:
-	rcu_read_unlock();
-	return false;
-}
-
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
@@ -442,7 +418,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 	return 0;
 }
 
-static inline int bio_queue_enter(struct bio *bio)
+int bio_queue_enter(struct bio *bio)
 {
 	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
 
diff --git a/block/blk.h b/block/blk.h
index 7afffd548daf..f7371d3b1522 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -55,6 +55,31 @@ void blk_free_flush_queue(struct blk_flush_queue *q);
 void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
+int bio_queue_enter(struct bio *bio);
+
+static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
+{
+	rcu_read_lock();
+	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
+		goto fail;
+
+	/*
+	 * The code that increments the pm_only counter must ensure that the
+	 * counter is globally visible before the queue is unfrozen.
+	 */
+	if (blk_queue_pm_only(q) &&
+	    (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
+		goto fail_put;
+
+	rcu_read_unlock();
+	return true;
+
+fail_put:
+	blk_queue_exit(q);
+fail:
+	rcu_read_unlock();
+	return false;
+}
 
 #define BIO_INLINE_VECS 4
 struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
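
The inlined helper has a classic optimistic shape: take the reference
first, then re-check a secondary condition (pm_only) and back out if it
fails. A rough stand-alone C rendering of that control flow, with an
atomic counter standing in for the percpu ref and a bare flag for the
pm_only/RPM state; the "negative means dying" convention is invented
purely for this sketch:

#include <stdatomic.h>
#include <stdbool.h>

struct queue {
        atomic_long usage;      /* < 0 here means the queue is dying */
        atomic_bool pm_only;    /* stands in for blk_queue_pm_only() */
};

static void queue_exit(struct queue *q)
{
        atomic_fetch_sub(&q->usage, 1);
}

static bool try_enter_queue(struct queue *q, bool pm)
{
        /* optimistic grab, like percpu_ref_tryget_live_rcu() */
        if (atomic_fetch_add(&q->usage, 1) < 0) {
                queue_exit(q);          /* fail: dying, back out */
                return false;
        }

        /* secondary condition is only checked once the ref is held */
        if (atomic_load(&q->pm_only) && !pm) {
                queue_exit(q);          /* fail_put */
                return false;
        }
        return true;                    /* caller owns one ref */
}

int main(void)
{
        struct queue q = { .usage = 0, .pm_only = false };

        return try_enter_queue(&q, false) ? 0 : 1;
}
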
From patchwork Thu Nov 4 18:22:00 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12603655
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: hch@infradead.org, Jens Axboe
Subject: [PATCH 4/5] block: move queue enter logic into blk_mq_submit_bio()
Date: Thu, 4 Nov 2021 12:22:00 -0600
Message-Id: <20211104182201.83906-5-axboe@kernel.dk>
In-Reply-To: <20211104182201.83906-1-axboe@kernel.dk>
References: <20211104182201.83906-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

Retain the old logic for the fops based submit, but for our internal
blk_mq_submit_bio(), move the queue entering logic into the core
function itself.

We need to be a bit careful if going into the scheduler, as the
scheduler or queue mappings can change arbitrarily before we have
entered the queue. Have the bio scheduler merge path enter the queue
separately; it's a very cheap operation compared to the actual merge
locking and lookups.
Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
Tested-by: Geert Uytterhoeven
---
 block/blk-core.c     | 25 +++++++++++++------------
 block/blk-mq-sched.c | 13 ++++++++++---
 block/blk-mq.c       | 36 ++++++++++++++++++++++++++----------
 block/blk.h          |  1 +
 4 files changed, 50 insertions(+), 25 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index e00f5a2287cc..70cfac1d7fe1 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -746,7 +746,7 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
 	return BLK_STS_OK;
 }
 
-static noinline_for_stack bool submit_bio_checks(struct bio *bio)
+noinline_for_stack bool submit_bio_checks(struct bio *bio)
 {
 	struct block_device *bdev = bio->bi_bdev;
 	struct request_queue *q = bdev_get_queue(bdev);
@@ -864,22 +864,23 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 	return false;
 }
 
-static void __submit_bio(struct bio *bio)
+static void __submit_bio_fops(struct gendisk *disk, struct bio *bio)
 {
-	struct gendisk *disk = bio->bi_bdev->bd_disk;
-
 	if (unlikely(bio_queue_enter(bio) != 0))
 		return;
+	if (submit_bio_checks(bio) && blk_crypto_bio_prep(&bio))
+		disk->fops->submit_bio(bio);
+	blk_queue_exit(disk->queue);
+}
 
-	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
-		goto queue_exit;
-	if (!disk->fops->submit_bio) {
+static void __submit_bio(struct bio *bio)
+{
+	struct gendisk *disk = bio->bi_bdev->bd_disk;
+
+	if (!disk->fops->submit_bio)
 		blk_mq_submit_bio(bio);
-		return;
-	}
-	disk->fops->submit_bio(bio);
-queue_exit:
-	blk_queue_exit(disk->queue);
+	else
+		__submit_bio_fops(disk, bio);
 }
 
 /*
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 4a6789e4398b..4be652fa38e7 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -370,15 +370,20 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 	bool ret = false;
 	enum hctx_type type;
 
-	if (e && e->type->ops.bio_merge)
-		return e->type->ops.bio_merge(q, bio, nr_segs);
+	if (bio_queue_enter(bio))
+		return false;
+
+	if (e && e->type->ops.bio_merge) {
+		ret = e->type->ops.bio_merge(q, bio, nr_segs);
+		goto out_put;
+	}
 
 	ctx = blk_mq_get_ctx(q);
 	hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
 	type = hctx->type;
 	if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
 	    list_empty_careful(&ctx->rq_lists[type]))
-		return false;
+		goto out_put;
 
 	/* default per sw-queue merge */
 	spin_lock(&ctx->lock);
@@ -391,6 +396,8 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 		ret = true;
 
 	spin_unlock(&ctx->lock);
+out_put:
+	blk_queue_exit(q);
 	return ret;
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index dcb413297a96..875bd0c04409 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2478,6 +2478,13 @@ static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
 	return BLK_MAX_REQUEST_COUNT;
 }
 
+static inline bool blk_mq_queue_enter(struct request_queue *q, struct bio *bio)
+{
+	if (!blk_try_enter_queue(q, false) && bio_queue_enter(bio))
+		return false;
+	return true;
+}
+
 static struct request *blk_mq_get_new_requests(struct request_queue *q,
 					       struct blk_plug *plug,
 					       struct bio *bio)
@@ -2489,6 +2496,13 @@ static struct request *blk_mq_get_new_requests(struct request_queue *q,
 	};
 	struct request *rq;
 
+	if (unlikely(!blk_mq_queue_enter(q, bio)))
+		return NULL;
+	if (unlikely(!submit_bio_checks(bio)))
+		goto put_exit;
+
+	rq_qos_throttle(q, bio);
+
 	if (plug) {
 		data.nr_tags = plug->nr_ios;
 		plug->nr_ios = 1;
@@ -2502,6 +2516,8 @@ static struct request *blk_mq_get_new_requests(struct request_queue *q,
 	rq_qos_cleanup(q, bio);
 	if (bio->bi_opf & REQ_NOWAIT)
 		bio_wouldblock_error(bio);
+put_exit:
+	blk_queue_exit(q);
 	return NULL;
 }
 
@@ -2514,8 +2530,11 @@ static inline struct request *blk_mq_get_request(struct request_queue *q,
 
 		rq = rq_list_peek(&plug->cached_rq);
 		if (rq) {
+			if (unlikely(!submit_bio_checks(bio)))
+				return NULL;
 			plug->cached_rq = rq_list_next(rq);
 			INIT_LIST_HEAD(&rq->queuelist);
+			rq_qos_throttle(q, bio);
 			return rq;
 		}
 	}
@@ -2546,26 +2565,27 @@ void blk_mq_submit_bio(struct bio *bio)
 	unsigned int nr_segs = 1;
 	blk_status_t ret;
 
+	if (unlikely(!blk_crypto_bio_prep(&bio)))
+		return;
+
 	blk_queue_bounce(q, &bio);
 	if (blk_may_split(q, bio))
 		__blk_queue_split(q, &bio, &nr_segs);
 
 	if (!bio_integrity_prep(bio))
-		goto queue_exit;
+		return;
 
 	if (!blk_queue_nomerges(q) && bio_mergeable(bio)) {
 		if (blk_attempt_plug_merge(q, bio, nr_segs, &same_queue_rq))
-			goto queue_exit;
+			return;
 		if (blk_mq_sched_bio_merge(q, bio, nr_segs))
-			goto queue_exit;
+			return;
 	}
 
-	rq_qos_throttle(q, bio);
-
 	plug = blk_mq_plug(q, bio);
 	rq = blk_mq_get_request(q, plug, bio);
 	if (unlikely(!rq))
-		goto queue_exit;
+		return;
 
 	trace_block_getrq(bio);
 
@@ -2646,10 +2666,6 @@ void blk_mq_submit_bio(struct bio *bio)
 		/* Default case. */
 		blk_mq_sched_insert_request(rq, false, true, true);
 	}
-
-	return;
-queue_exit:
-	blk_queue_exit(q);
 }
 
diff --git a/block/blk.h b/block/blk.h
index f7371d3b1522..79c98ced59c8 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -56,6 +56,7 @@ void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
 int bio_queue_enter(struct bio *bio);
+bool submit_bio_checks(struct bio *bio);
 
 static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
 {
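
The invariant this rearrangement establishes is worth spelling out:
after the patch, blk_mq_submit_bio() itself never holds a queue
reference across an early return; the reference is taken inside request
allocation, released on every failure path there, and otherwise owned
by the returned request until it is freed or completed. A hypothetical
miniature of that discipline in plain C, where every name is a
stand-in:

#include <stdbool.h>
#include <stdlib.h>

struct queue   { long usage; };
struct bio     { bool ok; };
struct request { struct queue *q; };

static bool enter(struct queue *q)      { q->usage++; return true; }
static void exit_queue(struct queue *q) { q->usage--; }

static struct request *get_new_request(struct queue *q, struct bio *bio)
{
        struct request *rq;

        if (!enter(q))                  /* ref taken as late as possible */
                return NULL;
        if (!bio->ok)
                goto put_exit;          /* every failure gives the ref back */

        rq = malloc(sizeof(*rq));
        if (!rq)
                goto put_exit;
        rq->q = q;
        return rq;                      /* success: rq now owns the ref */

put_exit:
        exit_queue(q);
        return NULL;
}

static void free_request(struct request *rq)
{
        exit_queue(rq->q);              /* the owned ref drops here */
        free(rq);
}

int main(void)
{
        struct queue q = { 0 };
        struct bio bio = { .ok = true };
        struct request *rq = get_new_request(&q, &bio);

        if (rq)
                free_request(rq);
        return (int)q.usage;            /* 0 again: nothing leaked */
}
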
From patchwork Thu Nov 4 18:22:01 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12603657
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: hch@infradead.org, Jens Axboe
Subject: [PATCH 5/5] block: ensure cached plug request matches the current queue
Date: Thu, 4 Nov 2021 12:22:01 -0600
Message-Id: <20211104182201.83906-6-axboe@kernel.dk>
In-Reply-To: <20211104182201.83906-1-axboe@kernel.dk>
References: <20211104182201.83906-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

If we're driving multiple devices, we could have pre-populated the
cache for a different device. Ensure that the cached request matches
the current queue.

Fixes: 47c122e35d7e ("block: pre-allocate requests if plug is started and is a batch")
Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 875bd0c04409..6c8d02bd1b06 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2529,7 +2529,7 @@ static inline struct request *blk_mq_get_request(struct request_queue *q,
 	struct request *rq;
 
 	rq = rq_list_peek(&plug->cached_rq);
-	if (rq) {
+	if (rq && rq->q == q) {
 		if (unlikely(!submit_bio_checks(bio)))
 			return NULL;
 		plug->cached_rq = rq_list_next(rq);
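
To make the failure mode concrete: one plug can feed bios to several
block devices, so a cache pre-filled for queue A must not satisfy a bio
destined for queue B. A minimal sketch of the guarded pop, with
illustrative types invented here rather than the kernel's:

#include <stddef.h>

struct queue   { int id; };
struct request { struct queue *q; struct request *next; };
struct plug    { struct request *cached; };

static struct request *pop_cached(struct plug *plug, struct queue *q)
{
        struct request *rq = plug->cached;

        /* only use the cache if it was filled for this very queue */
        if (rq && rq->q == q) {
                plug->cached = rq->next;
                return rq;
        }
        return NULL;    /* caller falls back to a fresh allocation */
}

int main(void)
{
        struct queue a = { .id = 1 }, b = { .id = 2 };
        struct request rq = { .q = &a, .next = NULL };
        struct plug plug = { .cached = &rq };

        /* a bio aimed at queue b must not get the request cached for a */
        return pop_cached(&plug, &b) == NULL ? 0 : 1;
}
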