From patchwork Thu Nov 4 15:22:00 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 1/5] block: have plug stored requests hold references to the queue
Date: Thu, 4 Nov 2021 09:22:00 -0600
Message-Id: <20211104152204.57360-2-axboe@kernel.dk>
In-Reply-To: <20211104152204.57360-1-axboe@kernel.dk>
References: <20211104152204.57360-1-axboe@kernel.dk>

Requests that were stored in the cache deliberately didn't hold an enter
reference to the queue; instead, we grabbed one every time we pulled a
request out of there. That made for awkward logic on freeing the
remainder of the cached list, if needed, where we had to artificially
raise the queue usage count before each free.

Grab references up front for cached plug requests. That's safer, and
also more efficient.

Fixes: 47c122e35d7e ("block: pre-allocate requests if plug is started and is a batch")
Signed-off-by: Jens Axboe
---
 block/blk-core.c | 2 +-
 block/blk-mq.c   | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index fd389a16013c..c2d267b6f910 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1643,7 +1643,7 @@ void blk_flush_plug(struct blk_plug *plug, bool from_schedule)
 		flush_plug_callbacks(plug, from_schedule);
 	if (!rq_list_empty(plug->mq_list))
 		blk_mq_flush_plug_list(plug, from_schedule);
-	if (unlikely(!from_schedule && plug->cached_rq))
+	if (unlikely(!rq_list_empty(plug->cached_rq)))
 		blk_mq_free_plug_rqs(plug);
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c68aa0a332e1..5498454c2164 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -410,7 +410,10 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data,
 		tag_mask &= ~(1UL << i);
 		rq = blk_mq_rq_ctx_init(data, tags, tag, alloc_time_ns);
 		rq_list_add(data->cached_rq, rq);
+		nr++;
 	}
+	/* caller already holds a reference, add for remainder */
+	percpu_ref_get_many(&data->q->q_usage_counter, nr - 1);
 	data->nr_tags -= nr;
 
 	return rq_list_pop(data->cached_rq);
@@ -630,10 +633,8 @@ void blk_mq_free_plug_rqs(struct blk_plug *plug)
 {
 	struct request *rq;
 
-	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL) {
-		percpu_ref_get(&rq->q->q_usage_counter);
+	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL)
 		blk_mq_free_request(rq);
-	}
 }
 
 static void req_bio_endio(struct request *rq, struct bio *bio,

From patchwork Thu Nov 4 15:22:01 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 2/5] block: make blk_try_enter_queue() available for blk-mq
Date: Thu, 4 Nov 2021 09:22:01 -0600
Message-Id: <20211104152204.57360-3-axboe@kernel.dk>
In-Reply-To: <20211104152204.57360-1-axboe@kernel.dk>
References: <20211104152204.57360-1-axboe@kernel.dk>

Just a prep patch for shifting the queue enter logic.

Signed-off-by: Jens Axboe
---
 block/blk-core.c | 26 +-------------------------
 block/blk.h      | 25 +++++++++++++++++++++++++
 2 files changed, 26 insertions(+), 25 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c2d267b6f910..e00f5a2287cc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -386,30 +386,6 @@ void blk_cleanup_queue(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_cleanup_queue);
 
-static bool blk_try_enter_queue(struct request_queue *q, bool pm)
-{
-	rcu_read_lock();
-	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
-		goto fail;
-
-	/*
-	 * The code that increments the pm_only counter must ensure that the
-	 * counter is globally visible before the queue is unfrozen.
-	 */
-	if (blk_queue_pm_only(q) &&
-	    (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
-		goto fail_put;
-
-	rcu_read_unlock();
-	return true;
-
-fail_put:
-	blk_queue_exit(q);
-fail:
-	rcu_read_unlock();
-	return false;
-}
-
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
@@ -442,7 +418,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 	return 0;
 }
 
-static inline int bio_queue_enter(struct bio *bio)
+int bio_queue_enter(struct bio *bio)
 {
 	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
 
diff --git a/block/blk.h b/block/blk.h
index 7afffd548daf..f7371d3b1522 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -55,6 +55,31 @@ void blk_free_flush_queue(struct blk_flush_queue *q);
 void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
+int bio_queue_enter(struct bio *bio);
+
+static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
+{
+	rcu_read_lock();
+	if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
+		goto fail;
+
+	/*
+	 * The code that increments the pm_only counter must ensure that the
+	 * counter is globally visible before the queue is unfrozen.
+	 */
+	if (blk_queue_pm_only(q) &&
+	    (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
+		goto fail_put;
+
+	rcu_read_unlock();
+	return true;
+
+fail_put:
+	blk_queue_exit(q);
+fail:
+	rcu_read_unlock();
+	return false;
+}
 
 #define BIO_INLINE_VECS 4
 struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,

From patchwork Thu Nov 4 15:22:02 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 3/5] block: move plug rq alloc into helper
Date: Thu, 4 Nov 2021 09:22:02 -0600
Message-Id: <20211104152204.57360-4-axboe@kernel.dk>
In-Reply-To: <20211104152204.57360-1-axboe@kernel.dk>
References: <20211104152204.57360-1-axboe@kernel.dk>

This is in preparation for a fix, but serves as a cleanup as well,
moving the plugged request logic out of blk_mq_submit_bio().

Signed-off-by: Jens Axboe
---
 block/blk-mq.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5498454c2164..f7f36d5ed25a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2478,6 +2478,23 @@ static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
 	return BLK_MAX_REQUEST_COUNT;
 }
 
+static inline struct request *blk_get_plug_request(struct request_queue *q,
+						   struct blk_plug *plug,
+						   struct bio *bio)
+{
+	struct request *rq;
+
+	if (!plug)
+		return NULL;
+	rq = rq_list_peek(&plug->cached_rq);
+	if (rq) {
+		plug->cached_rq = rq_list_next(rq);
+		INIT_LIST_HEAD(&rq->queuelist);
+		return rq;
+	}
+	return NULL;
+}
+
 /**
  * blk_mq_submit_bio - Create and send a request to block device.
  * @bio: Bio pointer.
@@ -2518,10 +2535,8 @@ void blk_mq_submit_bio(struct bio *bio)
 	rq_qos_throttle(q, bio);
 
 	plug = blk_mq_plug(q, bio);
-	if (plug && plug->cached_rq) {
-		rq = rq_list_pop(&plug->cached_rq);
-		INIT_LIST_HEAD(&rq->queuelist);
-	} else {
+	rq = blk_get_plug_request(q, plug, bio);
+	if (!rq) {
 		struct blk_mq_alloc_data data = {
 			.q		= q,
 			.nr_tags	= 1,

From patchwork Thu Nov 4 15:22:03 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 4/5] block: move queue enter logic into blk_mq_submit_bio()
Date: Thu, 4 Nov 2021 09:22:03 -0600
Message-Id: <20211104152204.57360-5-axboe@kernel.dk>
In-Reply-To: <20211104152204.57360-1-axboe@kernel.dk>
References: <20211104152204.57360-1-axboe@kernel.dk>

Retain the old logic for the fops-based submit, but for our internal
blk_mq_submit_bio(), move the queue entering logic into the core
function itself.

We need to be a bit careful if going into the scheduler, as a scheduler
or queue mappings can arbitrarily change before we have entered the
queue. Have the bio scheduler mapping do that separately; it's a very
cheap operation compared to actually doing the merge locking and
lookups.

Signed-off-by: Jens Axboe
---
 block/blk-core.c     | 17 ++++++---------
 block/blk-mq-sched.c | 13 ++++++++---
 block/blk-mq.c       | 51 +++++++++++++++++++++++++++++---------------
 block/blk.h          |  1 +
 4 files changed, 52 insertions(+), 30 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index e00f5a2287cc..18aab7f8469a 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -746,7 +746,7 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
 	return BLK_STS_OK;
 }
 
-static noinline_for_stack bool submit_bio_checks(struct bio *bio)
+noinline_for_stack bool submit_bio_checks(struct bio *bio)
 {
 	struct block_device *bdev = bio->bi_bdev;
 	struct request_queue *q = bdev_get_queue(bdev);
@@ -868,18 +868,15 @@ static void __submit_bio(struct bio *bio)
 {
 	struct gendisk *disk = bio->bi_bdev->bd_disk;
 
-	if (unlikely(bio_queue_enter(bio) != 0))
-		return;
-
-	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
-		goto queue_exit;
 	if (!disk->fops->submit_bio) {
 		blk_mq_submit_bio(bio);
-		return;
+	} else {
+		if (unlikely(bio_queue_enter(bio) != 0))
+			return;
+		if (submit_bio_checks(bio) && blk_crypto_bio_prep(&bio))
+			disk->fops->submit_bio(bio);
+		blk_queue_exit(disk->queue);
 	}
-	disk->fops->submit_bio(bio);
-queue_exit:
-	blk_queue_exit(disk->queue);
 }
 
 /*
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 4a6789e4398b..4be652fa38e7 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -370,15 +370,20 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 	bool ret = false;
 	enum hctx_type type;
 
-	if (e && e->type->ops.bio_merge)
-		return e->type->ops.bio_merge(q, bio, nr_segs);
+	if (bio_queue_enter(bio))
+		return false;
+
+	if (e && e->type->ops.bio_merge) {
+		ret = e->type->ops.bio_merge(q, bio, nr_segs);
+		goto out_put;
+	}
 
 	ctx = blk_mq_get_ctx(q);
 	hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
 	type = hctx->type;
 	if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
 	    list_empty_careful(&ctx->rq_lists[type]))
-		return false;
+		goto out_put;
 
 	/* default per sw-queue merge */
 	spin_lock(&ctx->lock);
@@ -391,6 +396,8 @@ bool blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 		ret = true;
 
 	spin_unlock(&ctx->lock);
+out_put:
+	blk_queue_exit(q);
 	return ret;
 }
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index f7f36d5ed25a..b0c0eac43eef 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2487,12 +2487,21 @@ static inline struct request *blk_get_plug_request(struct request_queue *q,
 	if (!plug)
 		return NULL;
 	rq = rq_list_peek(&plug->cached_rq);
-	if (rq) {
-		plug->cached_rq = rq_list_next(rq);
-		INIT_LIST_HEAD(&rq->queuelist);
-		return rq;
-	}
-	return NULL;
+	if (!rq)
+		return NULL;
+	if (unlikely(!submit_bio_checks(bio)))
+		return ERR_PTR(-EIO);
+	plug->cached_rq = rq_list_next(rq);
+	INIT_LIST_HEAD(&rq->queuelist);
+	rq_qos_throttle(q, bio);
+	return rq;
+}
+
+static inline bool blk_mq_queue_enter(struct request_queue *q, struct bio *bio)
+{
+	if (!blk_try_enter_queue(q, false) && bio_queue_enter(bio))
+		return false;
+	return true;
 }
 
 /**
@@ -2518,31 +2527,41 @@ void blk_mq_submit_bio(struct bio *bio)
 	unsigned int nr_segs = 1;
 	blk_status_t ret;
 
+	if (unlikely(!blk_crypto_bio_prep(&bio)))
+		return;
+
 	blk_queue_bounce(q, &bio);
 	if (blk_may_split(q, bio))
 		__blk_queue_split(q, &bio, &nr_segs);
 	if (!bio_integrity_prep(bio))
-		goto queue_exit;
+		return;
 
 	if (!blk_queue_nomerges(q) && bio_mergeable(bio)) {
 		if (blk_attempt_plug_merge(q, bio, nr_segs, &same_queue_rq))
-			goto queue_exit;
+			return;
 		if (blk_mq_sched_bio_merge(q, bio, nr_segs))
-			goto queue_exit;
+			return;
 	}
 
-	rq_qos_throttle(q, bio);
-
 	plug = blk_mq_plug(q, bio);
 	rq = blk_get_plug_request(q, plug, bio);
-	if (!rq) {
+	if (IS_ERR(rq)) {
+		return;
+	} else if (!rq) {
 		struct blk_mq_alloc_data data = {
 			.q		= q,
 			.nr_tags	= 1,
 			.cmd_flags	= bio->bi_opf,
 		};
 
+		if (unlikely(!blk_mq_queue_enter(q, bio)))
+			return;
+		if (unlikely(!submit_bio_checks(bio)))
+			goto put_exit;
+
+		rq_qos_throttle(q, bio);
+
 		if (plug) {
 			data.nr_tags = plug->nr_ios;
 			plug->nr_ios = 1;
@@ -2553,7 +2572,9 @@ void blk_mq_submit_bio(struct bio *bio)
 			rq_qos_cleanup(q, bio);
 			if (bio->bi_opf & REQ_NOWAIT)
 				bio_wouldblock_error(bio);
-			goto queue_exit;
+put_exit:
+			blk_queue_exit(q);
+			return;
 		}
 	}
 
@@ -2636,10 +2657,6 @@ void blk_mq_submit_bio(struct bio *bio)
 		/* Default case. */
 		blk_mq_sched_insert_request(rq, false, true, true);
 	}
-
-	return;
-queue_exit:
-	blk_queue_exit(q);
 }
 
 static size_t order_to_size(unsigned int order)
diff --git a/block/blk.h b/block/blk.h
index f7371d3b1522..79c98ced59c8 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -56,6 +56,7 @@ void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
 int bio_queue_enter(struct bio *bio);
+bool submit_bio_checks(struct bio *bio);
 
 static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
 {

From patchwork Thu Nov 4 15:22:04 2021
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 5/5] block: ensure cached plug request matches the current queue
Date: Thu, 4 Nov 2021 09:22:04 -0600
Message-Id: <20211104152204.57360-6-axboe@kernel.dk>
In-Reply-To: <20211104152204.57360-1-axboe@kernel.dk>
References: <20211104152204.57360-1-axboe@kernel.dk>

If we're driving multiple devices, we could have pre-populated the cache
for a different device. Ensure that the cached request matches the
current queue.

Fixes: 47c122e35d7e ("block: pre-allocate requests if plug is started and is a batch")
Signed-off-by: Jens Axboe
---
 block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index b0c0eac43eef..e9397bcdd90c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2487,7 +2487,7 @@ static inline struct request *blk_get_plug_request(struct request_queue *q,
 	if (!plug)
 		return NULL;
 	rq = rq_list_peek(&plug->cached_rq);
-	if (!rq)
+	if (!rq || rq->q != q)
 		return NULL;
 	if (unlikely(!submit_bio_checks(bio)))
 		return ERR_PTR(-EIO);