From patchwork Wed Oct 13 16:49:34 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12556355
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe, Christoph Hellwig
Subject: [PATCH 1/4] block: provide helpers for rq_list manipulation
Date: Wed, 13 Oct 2021 10:49:34 -0600
Message-Id: <20211013164937.985367-2-axboe@kernel.dk>
In-Reply-To: <20211013164937.985367-1-axboe@kernel.dk>
References: <20211013164937.985367-1-axboe@kernel.dk>
Instead of open-coding the list additions, traversal, and removal,
provide a basic set of helpers.

Suggested-by: Christoph Hellwig
Signed-off-by: Jens Axboe
---
 block/blk-mq.c         | 21 +++++----------------
 include/linux/blk-mq.h | 25 +++++++++++++++++++++++++
 2 files changed, 30 insertions(+), 16 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6dfd3aaa6073..46a91e5fabc5 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -426,10 +426,10 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 			tag = tag_offset + i;
 			tags &= ~(1UL << i);
 			rq = blk_mq_rq_ctx_init(data, tag, alloc_time_ns);
-			rq->rq_next = *data->cached_rq;
-			*data->cached_rq = rq;
+			rq_list_add_tail(data->cached_rq, rq);
 		}
 		data->nr_tags -= nr;
+		return rq_list_pop(data->cached_rq);
 	} else {
 		/*
 		 * Waiting allocations only fail because of an inactive hctx.
@@ -453,14 +453,6 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 
 		return blk_mq_rq_ctx_init(data, tag, alloc_time_ns);
 	}
-
-	if (data->cached_rq) {
-		rq = *data->cached_rq;
-		*data->cached_rq = rq->rq_next;
-		return rq;
-	}
-
-	return NULL;
 }
 
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
@@ -603,11 +595,9 @@ EXPORT_SYMBOL_GPL(blk_mq_free_request);
 
 void blk_mq_free_plug_rqs(struct blk_plug *plug)
 {
-	while (plug->cached_rq) {
-		struct request *rq;
+	struct request *rq;
 
-		rq = plug->cached_rq;
-		plug->cached_rq = rq->rq_next;
+	while ((rq = rq_list_pop(&plug->cached_rq)) != NULL) {
 		percpu_ref_get(&rq->q->q_usage_counter);
 		blk_mq_free_request(rq);
 	}
@@ -2264,8 +2254,7 @@ void blk_mq_submit_bio(struct bio *bio)
 
 	plug = blk_mq_plug(q, bio);
 	if (plug && plug->cached_rq) {
-		rq = plug->cached_rq;
-		plug->cached_rq = rq->rq_next;
+		rq = rq_list_pop(&plug->cached_rq);
 		INIT_LIST_HEAD(&rq->queuelist);
 	} else {
 		struct blk_mq_alloc_data data = {
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index a9c1d0882550..c05560524841 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -473,6 +473,31 @@ struct blk_mq_tag_set {
 	struct list_head	tag_list;
 };
 
+#define rq_list_add_tail(listptr, rq)	do {		\
+	(rq)->rq_next = *(listptr);			\
+	*(listptr) = rq;				\
+} while (0)
+
+#define rq_list_pop(listptr)				\
+({							\
+	struct request *__req = NULL;			\
+	if ((listptr) && *(listptr)) {			\
+		__req = *(listptr);			\
+		*(listptr) = __req->rq_next;		\
+	}						\
+	__req;						\
+})
+
+#define rq_list_peek(listptr)				\
+({							\
+	struct request *__req = NULL;			\
+	if ((listptr) && *(listptr))			\
+		__req = *(listptr);			\
+	__req;						\
+})
+
+#define rq_list_next(rq)	(rq)->rq_next
+
 /**
  * struct blk_mq_queue_data - Data about a request inserted in a queue
  *
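The helpers are plain macros threading a singly linked list through
rq->rq_next. They take a struct request ** so they can splice the caller's
list head in place, and rq_list_pop() is safe on an empty or NULL list, which
is what lets __blk_mq_alloc_requests() end the batch path with a bare
"return rq_list_pop(data->cached_rq)". Note that rq_list_add_tail(), as
defined here, actually pushes at the head, so pops come back in LIFO order.
A minimal standalone sketch of the same pattern, using a mock struct request
so it builds outside the kernel tree (GNU C, for the statement expressions):

	#include <assert.h>
	#include <stddef.h>

	struct request {
		int tag;
		struct request *rq_next;
	};

	/* Same shape as the kernel macros above, on the mock type. */
	#define rq_list_add_tail(listptr, rq) do {	\
		(rq)->rq_next = *(listptr);		\
		*(listptr) = rq;			\
	} while (0)

	#define rq_list_pop(listptr)			\
	({						\
		struct request *__req = NULL;		\
		if ((listptr) && *(listptr)) {		\
			__req = *(listptr);		\
			*(listptr) = __req->rq_next;	\
		}					\
		__req;					\
	})

	int main(void)
	{
		struct request a = { .tag = 1 }, b = { .tag = 2 };
		struct request *cached_rq = NULL;

		rq_list_add_tail(&cached_rq, &a);
		rq_list_add_tail(&cached_rq, &b);	/* b now heads the list */

		assert(rq_list_pop(&cached_rq)->tag == 2);	/* LIFO order */
		assert(rq_list_pop(&cached_rq)->tag == 1);
		assert(rq_list_pop(&cached_rq) == NULL);	/* empty pop is safe */
		return 0;
	}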
From patchwork Wed Oct 13 16:49:35 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12556357
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 2/4] block: inline fast path of driver tag allocation
Date: Wed, 13 Oct 2021 10:49:35 -0600
Message-Id: <20211013164937.985367-3-axboe@kernel.dk>
In-Reply-To: <20211013164937.985367-1-axboe@kernel.dk>
References: <20211013164937.985367-1-axboe@kernel.dk>

If we don't use an IO scheduler or have shared tags, then we don't need
to call into this external function at all. This saves ~2% for such a
setup.
Signed-off-by: Jens Axboe
---
 block/blk-mq.c |  8 +++-----
 block/blk-mq.h | 15 ++++++++++++++-
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 46a91e5fabc5..fe3e926c20a9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1135,7 +1135,7 @@ static inline unsigned int queued_to_index(unsigned int queued)
 	return min(BLK_MQ_MAX_DISPATCH_ORDER - 1, ilog2(queued) + 1);
 }
 
-static bool __blk_mq_get_driver_tag(struct request *rq)
+static bool __blk_mq_alloc_driver_tag(struct request *rq)
 {
 	struct sbitmap_queue *bt = &rq->mq_hctx->tags->bitmap_tags;
 	unsigned int tag_offset = rq->mq_hctx->tags->nr_reserved_tags;
@@ -1159,11 +1159,9 @@ static bool __blk_mq_get_driver_tag(struct request *rq)
 	return true;
 }
 
-bool blk_mq_get_driver_tag(struct request *rq)
+bool __blk_mq_get_driver_tag(struct blk_mq_hw_ctx *hctx, struct request *rq)
 {
-	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
-
-	if (rq->tag == BLK_MQ_NO_TAG && !__blk_mq_get_driver_tag(rq))
+	if (rq->tag == BLK_MQ_NO_TAG && !__blk_mq_alloc_driver_tag(rq))
 		return false;
 
 	if ((hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED) &&
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 8be447995106..ceed0a001c76 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -264,7 +264,20 @@ static inline void blk_mq_put_driver_tag(struct request *rq)
 	__blk_mq_put_driver_tag(rq->mq_hctx, rq);
 }
 
-bool blk_mq_get_driver_tag(struct request *rq);
+bool __blk_mq_get_driver_tag(struct blk_mq_hw_ctx *hctx, struct request *rq);
+
+static inline bool blk_mq_get_driver_tag(struct request *rq)
+{
+	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
+	if (rq->tag != BLK_MQ_NO_TAG &&
+	    !(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
+		hctx->tags->rqs[rq->tag] = rq;
+		return true;
+	}
+
+	return __blk_mq_get_driver_tag(hctx, rq);
+}
 
 static inline void blk_mq_clear_mq_map(struct blk_mq_queue_map *qmap)
 {
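The change is the classic inline-fast-path split: the cheap, common case
(tag already assigned, no shared-tag accounting) is decided inline at the
call site, and only the uncommon case pays for an out-of-line call. A
compilable, kernel-independent sketch of that shape; the types and names
here are mocks for illustration, not the kernel API:

	#include <assert.h>
	#include <stdbool.h>

	#define NO_TAG (-1)

	struct mock_rq {
		int tag;
		bool queue_shared;	/* stands in for BLK_MQ_F_TAG_QUEUE_SHARED */
	};

	/* Out-of-line slow path: allocate a tag, do shared-tag accounting. */
	static bool slow_get_tag(struct mock_rq *rq)
	{
		if (rq->tag == NO_TAG)
			rq->tag = 0;	/* pretend a tag bitmap handed out tag 0 */
		return true;
	}

	/* Inline fast path: a branch instead of a call in the common case. */
	static inline bool get_tag(struct mock_rq *rq)
	{
		if (rq->tag != NO_TAG && !rq->queue_shared)
			return true;
		return slow_get_tag(rq);
	}

	int main(void)
	{
		struct mock_rq rq = { .tag = 7, .queue_shared = false };

		assert(get_tag(&rq));				/* fast path, no call */
		rq.tag = NO_TAG;
		assert(get_tag(&rq) && rq.tag != NO_TAG);	/* slow path */
		return 0;
	}

Whether the ~2% quoted above materializes will depend on workload and
hardware; the structural point is just that the fast path's branch is
cheaper than the call it replaces.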
From patchwork Wed Oct 13 16:49:36 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12556359
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 3/4] block: don't bother iter advancing a fully done bio
Date: Wed, 13 Oct 2021 10:49:36 -0600
Message-Id: <20211013164937.985367-4-axboe@kernel.dk>
In-Reply-To: <20211013164937.985367-1-axboe@kernel.dk>
References: <20211013164937.985367-1-axboe@kernel.dk>

If we're completing nbytes and nbytes is the size of the bio, don't bother
with calling into the iterator increment helpers. Just clear the bio
size and we're done.

Signed-off-by: Jens Axboe
---
 block/bio.c         |  4 ++--
 include/linux/bio.h | 13 +++++++++++--
 2 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index a3c9ff23a036..874ff235aff7 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1289,7 +1289,7 @@ EXPORT_SYMBOL(submit_bio_wait);
  *
  * @bio will then represent the remaining, uncompleted portion of the io.
  */
-void bio_advance(struct bio *bio, unsigned bytes)
+void __bio_advance(struct bio *bio, unsigned bytes)
 {
 	if (bio_integrity(bio))
 		bio_integrity_advance(bio, bytes);
@@ -1297,7 +1297,7 @@ void bio_advance(struct bio *bio, unsigned bytes)
 		bio_crypt_advance(bio, bytes);
 	bio_advance_iter(bio, &bio->bi_iter, bytes);
 }
-EXPORT_SYMBOL(bio_advance);
+EXPORT_SYMBOL(__bio_advance);
 
 void bio_copy_data_iter(struct bio *dst, struct bvec_iter *dst_iter,
 			struct bio *src, struct bvec_iter *src_iter)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 62d684b7dd4c..44b543e7baf6 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -119,6 +119,17 @@ static inline void bio_advance_iter_single(const struct bio *bio,
 	bvec_iter_advance_single(bio->bi_io_vec, iter, bytes);
 }
 
+extern void __bio_advance(struct bio *, unsigned);
+
+static inline void bio_advance(struct bio *bio, unsigned int nbytes)
+{
+	if (nbytes == bio->bi_iter.bi_size) {
+		bio->bi_iter.bi_size = 0;
+		return;
+	}
+	__bio_advance(bio, nbytes);
+}
+
 #define __bio_for_each_segment(bvl, bio, iter, start)		\
 	for (iter = (start);					\
 	     (iter).bi_size &&					\
@@ -381,8 +392,6 @@ static inline int bio_iov_vecs_to_alloc(struct iov_iter *iter, int max_segs)
 struct request_queue;
 
 extern int submit_bio_wait(struct bio *bio);
-extern void bio_advance(struct bio *, unsigned);
-
 extern void bio_init(struct bio *bio, struct bio_vec *table,
 		     unsigned short max_vecs);
 extern void bio_uninit(struct bio *);
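The premise is that full completion is by far the common case, and once a
bio is fully completed its iterator, integrity, and crypt positions are
never consulted again, so only bi_size has to be correct. A self-contained
sketch of that fast-path shape, with mock types in place of struct
bio/bvec_iter (names illustrative, not kernel API):

	#include <assert.h>

	struct mock_iter {
		unsigned int bi_size;	/* bytes remaining, like bvec_iter.bi_size */
		unsigned int bi_idx;	/* position state the slow path must walk */
	};

	/* Stand-in for the out-of-line __bio_advance() iterator walk. */
	static void slow_advance(struct mock_iter *it, unsigned int nbytes)
	{
		it->bi_size -= nbytes;
		it->bi_idx++;		/* pretend we stepped through bvecs */
	}

	static inline void advance(struct mock_iter *it, unsigned int nbytes)
	{
		if (nbytes == it->bi_size) {	/* full completion: skip the walk */
			it->bi_size = 0;
			return;
		}
		slow_advance(it, nbytes);
	}

	int main(void)
	{
		struct mock_iter it = { .bi_size = 4096, .bi_idx = 0 };

		advance(&it, 4096);	/* common case: one store, no iteration */
		assert(it.bi_size == 0 && it.bi_idx == 0);
		return 0;
	}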
From patchwork Wed Oct 13 16:49:37 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12556361
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 4/4] block: move update request helpers into blk-mq.c
Date: Wed, 13 Oct 2021 10:49:37 -0600
Message-Id: <20211013164937.985367-5-axboe@kernel.dk>
In-Reply-To: <20211013164937.985367-1-axboe@kernel.dk>
References: <20211013164937.985367-1-axboe@kernel.dk>

For some reason we still have them in blk-core, with the rest of the
request completion being in blk-mq. That causes an out-of-line call for
each completion.

Move them into blk-mq.c instead.

Signed-off-by: Jens Axboe
---
 block/blk-core.c | 214 -----------------------------------------------
 block/blk-mq.c   | 214 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 214 insertions(+), 214 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d5b0258dd218..b199579c5f1f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -166,103 +166,6 @@ inline const char *blk_op_str(unsigned int op)
 }
 EXPORT_SYMBOL_GPL(blk_op_str);
 
-static const struct {
-	int		errno;
-	const char	*name;
-} blk_errors[] = {
-	[BLK_STS_OK]		= { 0,		"" },
-	[BLK_STS_NOTSUPP]	= { -EOPNOTSUPP, "operation not supported" },
-	[BLK_STS_TIMEOUT]	= { -ETIMEDOUT,	"timeout" },
-	[BLK_STS_NOSPC]		= { -ENOSPC,	"critical space allocation" },
-	[BLK_STS_TRANSPORT]	= { -ENOLINK,	"recoverable transport" },
-	[BLK_STS_TARGET]	= { -EREMOTEIO,	"critical target" },
-	[BLK_STS_NEXUS]		= { -EBADE,	"critical nexus" },
-	[BLK_STS_MEDIUM]	= { -ENODATA,	"critical medium" },
-	[BLK_STS_PROTECTION]	= { -EILSEQ,	"protection" },
-	[BLK_STS_RESOURCE]	= { -ENOMEM,	"kernel resource" },
-	[BLK_STS_DEV_RESOURCE]	= { -EBUSY,	"device resource" },
-	[BLK_STS_AGAIN]		= { -EAGAIN,	"nonblocking retry" },
-
-	/* device mapper special case, should not leak out: */
-	[BLK_STS_DM_REQUEUE]	= { -EREMCHG, "dm internal retry" },
-
-	/* zone device specific errors */
-	[BLK_STS_ZONE_OPEN_RESOURCE]	= { -ETOOMANYREFS, "open zones exceeded" },
-	[BLK_STS_ZONE_ACTIVE_RESOURCE]	= { -EOVERFLOW, "active zones exceeded" },
-
-	/* everything else not covered above: */
-	[BLK_STS_IOERR]		= { -EIO,	"I/O" },
-};
-
-blk_status_t errno_to_blk_status(int errno)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(blk_errors); i++) {
-		if (blk_errors[i].errno == errno)
-			return (__force blk_status_t)i;
-	}
-
-	return BLK_STS_IOERR;
-}
-EXPORT_SYMBOL_GPL(errno_to_blk_status);
-
-int blk_status_to_errno(blk_status_t status)
-{
-	int idx = (__force int)status;
-
-	if (WARN_ON_ONCE(idx >= ARRAY_SIZE(blk_errors)))
-		return -EIO;
-	return blk_errors[idx].errno;
-}
-EXPORT_SYMBOL_GPL(blk_status_to_errno);
-
-static void print_req_error(struct request *req, blk_status_t status,
-		const char *caller)
-{
-	int idx = (__force int)status;
-
-	if (WARN_ON_ONCE(idx >= ARRAY_SIZE(blk_errors)))
-		return;
-
-	printk_ratelimited(KERN_ERR
-		"%s: %s error, dev %s, sector %llu op 0x%x:(%s) flags 0x%x "
-		"phys_seg %u prio class %u\n",
-		caller, blk_errors[idx].name,
-		req->rq_disk ? req->rq_disk->disk_name : "?",
-		blk_rq_pos(req), req_op(req), blk_op_str(req_op(req)),
-		req->cmd_flags & ~REQ_OP_MASK,
-		req->nr_phys_segments,
-		IOPRIO_PRIO_CLASS(req->ioprio));
-}
-
-static void req_bio_endio(struct request *rq, struct bio *bio,
-			  unsigned int nbytes, blk_status_t error)
-{
-	if (error)
-		bio->bi_status = error;
-
-	if (unlikely(rq->rq_flags & RQF_QUIET))
-		bio_set_flag(bio, BIO_QUIET);
-
-	bio_advance(bio, nbytes);
-
-	if (req_op(rq) == REQ_OP_ZONE_APPEND && error == BLK_STS_OK) {
-		/*
-		 * Partial zone append completions cannot be supported as the
-		 * BIO fragments may end up not being written sequentially.
-		 */
-		if (bio->bi_iter.bi_size)
-			bio->bi_status = BLK_STS_IOERR;
-		else
-			bio->bi_iter.bi_sector = rq->__sector;
-	}
-
-	/* don't actually finish bio if it's part of flush sequence */
-	if (bio->bi_iter.bi_size == 0 && !(rq->rq_flags & RQF_FLUSH_SEQ))
-		bio_endio(bio);
-}
-
 void blk_dump_rq_flags(struct request *rq, char *msg)
 {
 	printk(KERN_INFO "%s: dev %s: flags=%llx\n", msg,
@@ -1305,17 +1208,6 @@ static void update_io_ticks(struct block_device *part, unsigned long now,
 	}
 }
 
-static void blk_account_io_completion(struct request *req, unsigned int bytes)
-{
-	if (req->part && blk_do_io_stat(req)) {
-		const int sgrp = op_stat_group(req_op(req));
-
-		part_stat_lock();
-		part_stat_add(req->part, sectors[sgrp], bytes >> 9);
-		part_stat_unlock();
-	}
-}
-
 void __blk_account_io_done(struct request *req, u64 now)
 {
 	const int sgrp = op_stat_group(req_op(req));
@@ -1424,112 +1316,6 @@ void blk_steal_bios(struct bio_list *list, struct request *rq)
 }
 EXPORT_SYMBOL_GPL(blk_steal_bios);
 
-/**
- * blk_update_request - Complete multiple bytes without completing the request
- * @req: the request being processed
- * @error: block status code
- * @nr_bytes: number of bytes to complete for @req
- *
- * Description:
- *    Ends I/O on a number of bytes attached to @req, but doesn't complete
- *    the request structure even if @req doesn't have leftover.
- *    If @req has leftover, sets it up for the next range of segments.
- *
- *    Passing the result of blk_rq_bytes() as @nr_bytes guarantees
- *    %false return from this function.
- *
- * Note:
- *    The RQF_SPECIAL_PAYLOAD flag is ignored on purpose in this function
- *    except in the consistency check at the end of this function.
- *
- * Return:
- *	%false - this request doesn't have any more data
- *	%true  - this request has more data
- **/
-bool blk_update_request(struct request *req, blk_status_t error,
-		unsigned int nr_bytes)
-{
-	int total_bytes;
-
-	trace_block_rq_complete(req, blk_status_to_errno(error), nr_bytes);
-
-	if (!req->bio)
-		return false;
-
-#ifdef CONFIG_BLK_DEV_INTEGRITY
-	if (blk_integrity_rq(req) && req_op(req) == REQ_OP_READ &&
-	    error == BLK_STS_OK)
-		req->q->integrity.profile->complete_fn(req, nr_bytes);
-#endif
-
-	if (unlikely(error && !blk_rq_is_passthrough(req) &&
-		     !(req->rq_flags & RQF_QUIET)))
-		print_req_error(req, error, __func__);
-
-	blk_account_io_completion(req, nr_bytes);
-
-	total_bytes = 0;
-	while (req->bio) {
-		struct bio *bio = req->bio;
-		unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes);
-
-		if (bio_bytes == bio->bi_iter.bi_size)
-			req->bio = bio->bi_next;
-
-		/* Completion has already been traced */
-		bio_clear_flag(bio, BIO_TRACE_COMPLETION);
-		req_bio_endio(req, bio, bio_bytes, error);
-
-		total_bytes += bio_bytes;
-		nr_bytes -= bio_bytes;
-
-		if (!nr_bytes)
-			break;
-	}
-
-	/*
-	 * completely done
-	 */
-	if (!req->bio) {
-		/*
-		 * Reset counters so that the request stacking driver
-		 * can find how many bytes remain in the request
-		 * later.
-		 */
-		req->__data_len = 0;
-		return false;
-	}
-
-	req->__data_len -= total_bytes;
-
-	/* update sector only for requests with clear definition of sector */
-	if (!blk_rq_is_passthrough(req))
-		req->__sector += total_bytes >> 9;
-
-	/* mixed attributes always follow the first bio */
-	if (req->rq_flags & RQF_MIXED_MERGE) {
-		req->cmd_flags &= ~REQ_FAILFAST_MASK;
-		req->cmd_flags |= req->bio->bi_opf & REQ_FAILFAST_MASK;
-	}
-
-	if (!(req->rq_flags & RQF_SPECIAL_PAYLOAD)) {
-		/*
-		 * If total number of sectors is less than the first segment
-		 * size, something has gone terribly wrong.
-		 */
-		if (blk_rq_bytes(req) < blk_rq_cur_bytes(req)) {
-			blk_dump_rq_flags(req, "request botched");
-			req->__data_len = blk_rq_cur_bytes(req);
-		}
-
-		/* recalculate the number of segments */
-		req->nr_phys_segments = blk_recalc_rq_segments(req);
-	}
-
-	return true;
-}
-EXPORT_SYMBOL_GPL(blk_update_request);
-
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
 /**
  * rq_flush_dcache_pages - Helper function to flush all pages in a request
diff --git a/block/blk-mq.c b/block/blk-mq.c
index fe3e926c20a9..069837a020fe 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -626,6 +626,220 @@ inline void __blk_mq_end_request(struct request *rq, blk_status_t error)
 }
 EXPORT_SYMBOL(__blk_mq_end_request);
 
+static void blk_account_io_completion(struct request *req, unsigned int bytes)
+{
+	if (req->part && blk_do_io_stat(req)) {
+		const int sgrp = op_stat_group(req_op(req));
+
+		part_stat_lock();
+		part_stat_add(req->part, sectors[sgrp], bytes >> 9);
+		part_stat_unlock();
+	}
+}
+
+static void req_bio_endio(struct request *rq, struct bio *bio,
+			  unsigned int nbytes, blk_status_t error)
+{
+	if (error)
+		bio->bi_status = error;
+
+	if (unlikely(rq->rq_flags & RQF_QUIET))
+		bio_set_flag(bio, BIO_QUIET);
+
+	bio_advance(bio, nbytes);
+
+	if (req_op(rq) == REQ_OP_ZONE_APPEND && error == BLK_STS_OK) {
+		/*
+		 * Partial zone append completions cannot be supported as the
+		 * BIO fragments may end up not being written sequentially.
+		 */
+		if (bio->bi_iter.bi_size)
+			bio->bi_status = BLK_STS_IOERR;
+		else
+			bio->bi_iter.bi_sector = rq->__sector;
+	}
+
+	/* don't actually finish bio if it's part of flush sequence */
+	if (bio->bi_iter.bi_size == 0 && !(rq->rq_flags & RQF_FLUSH_SEQ))
+		bio_endio(bio);
+}
+
+static const struct {
+	int		errno;
+	const char	*name;
+} blk_errors[] = {
+	[BLK_STS_OK]		= { 0,		"" },
+	[BLK_STS_NOTSUPP]	= { -EOPNOTSUPP, "operation not supported" },
+	[BLK_STS_TIMEOUT]	= { -ETIMEDOUT,	"timeout" },
+	[BLK_STS_NOSPC]		= { -ENOSPC,	"critical space allocation" },
+	[BLK_STS_TRANSPORT]	= { -ENOLINK,	"recoverable transport" },
+	[BLK_STS_TARGET]	= { -EREMOTEIO,	"critical target" },
+	[BLK_STS_NEXUS]		= { -EBADE,	"critical nexus" },
+	[BLK_STS_MEDIUM]	= { -ENODATA,	"critical medium" },
+	[BLK_STS_PROTECTION]	= { -EILSEQ,	"protection" },
+	[BLK_STS_RESOURCE]	= { -ENOMEM,	"kernel resource" },
+	[BLK_STS_DEV_RESOURCE]	= { -EBUSY,	"device resource" },
+	[BLK_STS_AGAIN]		= { -EAGAIN,	"nonblocking retry" },
+
+	/* device mapper special case, should not leak out: */
+	[BLK_STS_DM_REQUEUE]	= { -EREMCHG, "dm internal retry" },
+
+	/* zone device specific errors */
+	[BLK_STS_ZONE_OPEN_RESOURCE]	= { -ETOOMANYREFS, "open zones exceeded" },
+	[BLK_STS_ZONE_ACTIVE_RESOURCE]	= { -EOVERFLOW, "active zones exceeded" },
+
+	/* everything else not covered above: */
+	[BLK_STS_IOERR]		= { -EIO,	"I/O" },
+};
+
+blk_status_t errno_to_blk_status(int errno)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(blk_errors); i++) {
+		if (blk_errors[i].errno == errno)
+			return (__force blk_status_t)i;
+	}
+
+	return BLK_STS_IOERR;
+}
+EXPORT_SYMBOL_GPL(errno_to_blk_status);
+
+int blk_status_to_errno(blk_status_t status)
+{
+	int idx = (__force int)status;
+
+	if (WARN_ON_ONCE(idx >= ARRAY_SIZE(blk_errors)))
+		return -EIO;
+	return blk_errors[idx].errno;
+}
+EXPORT_SYMBOL_GPL(blk_status_to_errno);
+
+static void print_req_error(struct request *req, blk_status_t status,
+		const char *caller)
+{
+	int idx = (__force int)status;
+
+	if (WARN_ON_ONCE(idx >= ARRAY_SIZE(blk_errors)))
+		return;
+
+	printk_ratelimited(KERN_ERR
+		"%s: %s error, dev %s, sector %llu op 0x%x:(%s) flags 0x%x "
+		"phys_seg %u prio class %u\n",
+		caller, blk_errors[idx].name,
+		req->rq_disk ? req->rq_disk->disk_name : "?",
+		blk_rq_pos(req), req_op(req), blk_op_str(req_op(req)),
+		req->cmd_flags & ~REQ_OP_MASK,
+		req->nr_phys_segments,
+		IOPRIO_PRIO_CLASS(req->ioprio));
+}
+
+/**
+ * blk_update_request - Complete multiple bytes without completing the request
+ * @req: the request being processed
+ * @error: block status code
+ * @nr_bytes: number of bytes to complete for @req
+ *
+ * Description:
+ *    Ends I/O on a number of bytes attached to @req, but doesn't complete
+ *    the request structure even if @req doesn't have leftover.
+ *    If @req has leftover, sets it up for the next range of segments.
+ *
+ *    Passing the result of blk_rq_bytes() as @nr_bytes guarantees
+ *    %false return from this function.
+ *
+ * Note:
+ *    The RQF_SPECIAL_PAYLOAD flag is ignored on purpose in this function
+ *    except in the consistency check at the end of this function.
+ *
+ * Return:
+ *	%false - this request doesn't have any more data
+ *	%true  - this request has more data
+ **/
+bool blk_update_request(struct request *req, blk_status_t error,
+		unsigned int nr_bytes)
+{
+	int total_bytes;
+
+	trace_block_rq_complete(req, blk_status_to_errno(error), nr_bytes);
+
+	if (!req->bio)
+		return false;
+
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+	if (blk_integrity_rq(req) && req_op(req) == REQ_OP_READ &&
+	    error == BLK_STS_OK)
+		req->q->integrity.profile->complete_fn(req, nr_bytes);
+#endif
+
+	if (unlikely(error && !blk_rq_is_passthrough(req) &&
+		     !(req->rq_flags & RQF_QUIET)))
+		print_req_error(req, error, __func__);
+
+	blk_account_io_completion(req, nr_bytes);
+
+	total_bytes = 0;
+	while (req->bio) {
+		struct bio *bio = req->bio;
+		unsigned bio_bytes = min(bio->bi_iter.bi_size, nr_bytes);
+
+		if (bio_bytes == bio->bi_iter.bi_size)
+			req->bio = bio->bi_next;
+
+		/* Completion has already been traced */
+		bio_clear_flag(bio, BIO_TRACE_COMPLETION);
+		req_bio_endio(req, bio, bio_bytes, error);
+
+		total_bytes += bio_bytes;
+		nr_bytes -= bio_bytes;
+
+		if (!nr_bytes)
+			break;
+	}
+
+	/*
+	 * completely done
+	 */
+	if (!req->bio) {
+		/*
+		 * Reset counters so that the request stacking driver
+		 * can find how many bytes remain in the request
+		 * later.
+		 */
+		req->__data_len = 0;
+		return false;
+	}
+
+	req->__data_len -= total_bytes;
+
+	/* update sector only for requests with clear definition of sector */
+	if (!blk_rq_is_passthrough(req))
+		req->__sector += total_bytes >> 9;
+
+	/* mixed attributes always follow the first bio */
+	if (req->rq_flags & RQF_MIXED_MERGE) {
+		req->cmd_flags &= ~REQ_FAILFAST_MASK;
+		req->cmd_flags |= req->bio->bi_opf & REQ_FAILFAST_MASK;
+	}
+
+	if (!(req->rq_flags & RQF_SPECIAL_PAYLOAD)) {
+		/*
+		 * If total number of sectors is less than the first segment
+		 * size, something has gone terribly wrong.
+		 */
+		if (blk_rq_bytes(req) < blk_rq_cur_bytes(req)) {
+			blk_dump_rq_flags(req, "request botched");
+			req->__data_len = blk_rq_cur_bytes(req);
+		}
+
+		/* recalculate the number of segments */
+		req->nr_phys_segments = blk_recalc_rq_segments(req);
+	}
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(blk_update_request);
+
 void blk_mq_end_request(struct request *rq, blk_status_t error)
 {
 	if (blk_update_request(rq, error, blk_rq_bytes(rq)))
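As the kernel-doc above spells out, blk_update_request() ends I/O on
nr_bytes of the request's bio chain and returns %true while the request
still has data left, %false once it is fully done (so passing
blk_rq_bytes(), as blk_mq_end_request() does here, always yields %false).
A kernel-independent sketch of just that contract, with mock types standing
in for struct request and struct bio:

	#include <assert.h>
	#include <stdbool.h>
	#include <stddef.h>

	/* Mock types: one "request" completing a chain of "bios". */
	struct mock_bio { unsigned int size; struct mock_bio *next; };
	struct mock_req { struct mock_bio *bio; unsigned int data_len; };

	/* Returns true while the request still has uncompleted bytes. */
	static bool update_request(struct mock_req *req, unsigned int nr_bytes)
	{
		while (req->bio && nr_bytes) {
			struct mock_bio *bio = req->bio;
			unsigned int n = nr_bytes < bio->size ? nr_bytes : bio->size;

			bio->size -= n;
			if (!bio->size)		/* bio fully done: end it, move on */
				req->bio = bio->next;
			req->data_len -= n;
			nr_bytes -= n;
		}
		return req->bio != NULL;
	}

	int main(void)
	{
		struct mock_bio b2 = { .size = 512, .next = NULL };
		struct mock_bio b1 = { .size = 512, .next = &b2 };
		struct mock_req req = { .bio = &b1, .data_len = 1024 };

		assert(update_request(&req, 512));	/* b1 done, b2 remains */
		assert(!update_request(&req, 512));	/* whole request done */
		return 0;
	}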