From patchwork Wed Jun 21 20:12:28 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13287924
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Johannes Thumshirn, Damien Le Moal, Ming Lei, Mike Snitzer
Subject: [PATCH v4 1/7] block: Rename a local variable in blk_mq_requeue_work()
Date: Wed, 21 Jun 2023 13:12:28 -0700
Message-ID: <20230621201237.796902-2-bvanassche@acm.org>
In-Reply-To: <20230621201237.796902-1-bvanassche@acm.org>
References: <20230621201237.796902-1-bvanassche@acm.org>

Two data structures in blk_mq_requeue_work() represent request lists.
Make it clear that rq_list holds requests that come from the requeue
list by renaming that data structure.
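For orientation, after the rename the two staging lists at the top of
blk_mq_requeue_work() read as follows (a condensed sketch of just the
declarations, not the complete function):

	LIST_HEAD(requeue_list);	/* spliced from q->requeue_list */
	LIST_HEAD(flush_list);		/* spliced from q->flush_list */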
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Cc: Damien Le Moal
Cc: Ming Lei
Cc: Mike Snitzer
Signed-off-by: Bart Van Assche
Reviewed-by: Damien Le Moal
---
 block/blk-mq.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 720b5061ffe8..41ee393c80a9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1441,17 +1441,17 @@ static void blk_mq_requeue_work(struct work_struct *work)
 {
 	struct request_queue *q =
 		container_of(work, struct request_queue, requeue_work.work);
-	LIST_HEAD(rq_list);
+	LIST_HEAD(requeue_list);
 	LIST_HEAD(flush_list);
 	struct request *rq;
 
 	spin_lock_irq(&q->requeue_lock);
-	list_splice_init(&q->requeue_list, &rq_list);
+	list_splice_init(&q->requeue_list, &requeue_list);
 	list_splice_init(&q->flush_list, &flush_list);
 	spin_unlock_irq(&q->requeue_lock);
 
-	while (!list_empty(&rq_list)) {
-		rq = list_entry(rq_list.next, struct request, queuelist);
+	while (!list_empty(&requeue_list)) {
+		rq = list_entry(requeue_list.next, struct request, queuelist);
 		/*
 		 * If RQF_DONTPREP ist set, the request has been started by the
 		 * driver already and might have driver-specific data allocated

From patchwork Wed Jun 21 20:12:29 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13287925
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Damien Le Moal, Ming Lei, Mike Snitzer
Subject: [PATCH v4 2/7] block: Simplify blk_mq_requeue_work()
Date: Wed, 21 Jun 2023 13:12:29 -0700
Message-ID: <20230621201237.796902-3-bvanassche@acm.org>
In-Reply-To: <20230621201237.796902-1-bvanassche@acm.org>
References: <20230621201237.796902-1-bvanassche@acm.org>

Move the list_del_init(&rq->queuelist) call, which is common to both
branches, in front of the if-statement.

Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Cc: Mike Snitzer
Signed-off-by: Bart Van Assche
Reviewed-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
---
 block/blk-mq.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 41ee393c80a9..f440e4aaaae3 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1458,13 +1458,11 @@ static void blk_mq_requeue_work(struct work_struct *work)
 		 * already. Insert it into the hctx dispatch list to avoid
 		 * block layer merges for the request.
 		 */
-		if (rq->rq_flags & RQF_DONTPREP) {
-			list_del_init(&rq->queuelist);
+		list_del_init(&rq->queuelist);
+		if (rq->rq_flags & RQF_DONTPREP)
 			blk_mq_request_bypass_insert(rq, 0);
-		} else {
-			list_del_init(&rq->queuelist);
+		else
 			blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);
-		}
 	}
 
 	while (!list_empty(&flush_list)) {

From patchwork Wed Jun 21 20:12:30 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13287926
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Damien Le Moal, Ming Lei, Mike Snitzer
Subject: [PATCH v4 3/7] block: Send requeued requests to the I/O scheduler
Date: Wed, 21 Jun 2023 13:12:30 -0700
Message-ID: <20230621201237.796902-4-bvanassche@acm.org>
In-Reply-To: <20230621201237.796902-1-bvanassche@acm.org>
References: <20230621201237.796902-1-bvanassche@acm.org>

Send requeued requests to the I/O scheduler when the dispatch order
matters, so that the I/O scheduler can control the order in which
requests are dispatched.

This patch reworks commit aef1897cd36d ("blk-mq: insert rq with DONTPREP
to hctx dispatch list when requeue"). Instead of sending DONTPREP
requests to the dispatch list, send these to the I/O scheduler and
prevent the I/O scheduler from merging these requests by adding
RQF_DONTPREP to the list of flags that prevent merging
(RQF_NOMERGE_FLAGS).
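For reference, the merge gate that the RQF_NOMERGE_FLAGS addition relies
on has roughly this shape. This is a condensed sketch; the real
rq_mergeable() helper performs several more checks, and the _sketch
suffix marks the function as illustrative:

	/* Sketch of the merge gate: any flag in RQF_NOMERGE_FLAGS
	 * disqualifies a request from merging; with this patch that
	 * now includes RQF_DONTPREP.
	 */
	static inline bool rq_mergeable_sketch(struct request *rq)
	{
		if (rq->rq_flags & RQF_NOMERGE_FLAGS)
			return false;
		return true;
	}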
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Cc: Mike Snitzer
Signed-off-by: Bart Van Assche
---
 block/blk-mq.c         | 10 +++++-----
 include/linux/blk-mq.h |  4 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f440e4aaaae3..453a90767f7a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1453,13 +1453,13 @@ static void blk_mq_requeue_work(struct work_struct *work)
 	while (!list_empty(&requeue_list)) {
 		rq = list_entry(requeue_list.next, struct request, queuelist);
 		/*
-		 * If RQF_DONTPREP ist set, the request has been started by the
-		 * driver already and might have driver-specific data allocated
-		 * already. Insert it into the hctx dispatch list to avoid
-		 * block layer merges for the request.
+		 * Only send those RQF_DONTPREP requests to the dispatch list
+		 * that may be reordered freely. If the request order matters,
+		 * send the request to the I/O scheduler.
 		 */
 		list_del_init(&rq->queuelist);
-		if (rq->rq_flags & RQF_DONTPREP)
+		if (rq->rq_flags & RQF_DONTPREP &&
+		    !op_needs_zoned_write_locking(req_op(rq)))
 			blk_mq_request_bypass_insert(rq, 0);
 		else
 			blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index f401067ac03a..2610b299ec77 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -62,8 +62,8 @@ typedef __u32 __bitwise req_flags_t;
 #define RQF_RESV	((__force req_flags_t)(1 << 23))
 
 /* flags that prevent us from merging requests: */
-#define RQF_NOMERGE_FLAGS \
-	(RQF_STARTED | RQF_FLUSH_SEQ | RQF_SPECIAL_PAYLOAD)
+#define RQF_NOMERGE_FLAGS \
+	(RQF_STARTED | RQF_FLUSH_SEQ | RQF_DONTPREP | RQF_SPECIAL_PAYLOAD)
 
 enum mq_rq_state {
 	MQ_RQ_IDLE = 0,
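The helper used in the new blk_mq_requeue_work() condition above
separates operations whose completion order matters. Assuming the
blkdev.h of this era, it boils down to approximately:

	/* Sketch: zoned writes must not be reordered, so they take the
	 * I/O scheduler path even when RQF_DONTPREP is set.
	 */
	static inline bool op_needs_zoned_write_locking(enum req_op op)
	{
		return op == REQ_OP_WRITE || op == REQ_OP_WRITE_ZEROES;
	}

From patchwork Wed Jun 21 20:12:31 2023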
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13287927
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Damien Le Moal, Ming Lei, Mike Snitzer
Subject: [PATCH v4 4/7] block: One requeue list per hctx
Date: Wed, 21 Jun 2023 13:12:31 -0700
Message-ID: <20230621201237.796902-5-bvanassche@acm.org>
In-Reply-To: <20230621201237.796902-1-bvanassche@acm.org>
References: <20230621201237.796902-1-bvanassche@acm.org>

Prepare for processing the requeue list from inside
__blk_mq_run_hw_queue().
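The underlying observation, sketched with the fields this patch
introduces: a request already knows its hardware queue through
rq->mq_hctx, so requeuing can stay local to that hctx without any
per-queue lookup:

	/* Sketch: requeue onto the list of the request's own hardware
	 * queue instead of a single per-request_queue list.
	 */
	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
	unsigned long flags;

	spin_lock_irqsave(&hctx->requeue_lock, flags);
	list_add_tail(&rq->queuelist, &hctx->requeue_list);
	spin_unlock_irqrestore(&hctx->requeue_lock, flags);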
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Cc: Mike Snitzer
Signed-off-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
---
 block/blk-flush.c      | 24 ++++++++--------
 block/blk-mq-debugfs.c | 64 +++++++++++++++++++++---------------------
 block/blk-mq.c         | 53 ++++++++++++++++++++--------------
 include/linux/blk-mq.h |  6 ++++
 include/linux/blkdev.h |  5 ----
 5 files changed, 83 insertions(+), 69 deletions(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index dba392cf22be..4bfb92f58aa9 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -91,7 +91,7 @@ enum {
 	FLUSH_PENDING_TIMEOUT	= 5 * HZ,
 };
 
-static void blk_kick_flush(struct request_queue *q,
+static void blk_kick_flush(struct blk_mq_hw_ctx *hctx,
 			   struct blk_flush_queue *fq, blk_opf_t flags);
 
 static inline struct blk_flush_queue *
@@ -165,6 +165,7 @@ static void blk_flush_complete_seq(struct request *rq,
 				   unsigned int seq, blk_status_t error)
 {
 	struct request_queue *q = rq->q;
+	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 	struct list_head *pending = &fq->flush_queue[fq->flush_pending_idx];
 	blk_opf_t cmd_flags;
 
@@ -188,9 +189,9 @@ static void blk_flush_complete_seq(struct request *rq,
 
 	case REQ_FSEQ_DATA:
 		list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
-		spin_lock(&q->requeue_lock);
-		list_add_tail(&rq->queuelist, &q->flush_list);
-		spin_unlock(&q->requeue_lock);
+		spin_lock(&hctx->requeue_lock);
+		list_add_tail(&rq->queuelist, &hctx->flush_list);
+		spin_unlock(&hctx->requeue_lock);
 		blk_mq_kick_requeue_list(q);
 		break;
 
@@ -210,7 +211,7 @@ static void blk_flush_complete_seq(struct request *rq,
 		BUG();
 	}
 
-	blk_kick_flush(q, fq, cmd_flags);
+	blk_kick_flush(hctx, fq, cmd_flags);
 }
 
 static enum rq_end_io_ret flush_end_io(struct request *flush_rq,
@@ -275,7 +276,7 @@ bool is_flush_rq(struct request *rq)
 
 /**
  * blk_kick_flush - consider issuing flush request
- * @q: request_queue being kicked
+ * @hctx: hwq being kicked
  * @fq: flush queue
  * @flags: cmd_flags of the original request
  *
@@ -286,9 +287,10 @@ bool is_flush_rq(struct request *rq)
  * spin_lock_irq(fq->mq_flush_lock)
  *
  */
-static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
-			   blk_opf_t flags)
+static void blk_kick_flush(struct blk_mq_hw_ctx *hctx,
+			   struct blk_flush_queue *fq, blk_opf_t flags)
 {
+	struct request_queue *q = hctx->queue;
 	struct list_head *pending = &fq->flush_queue[fq->flush_pending_idx];
 	struct request *first_rq =
 		list_first_entry(pending, struct request, flush.list);
@@ -348,9 +350,9 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
 
 	smp_wmb();
 	req_ref_set(flush_rq, 1);
-	spin_lock(&q->requeue_lock);
-	list_add_tail(&flush_rq->queuelist, &q->flush_list);
-	spin_unlock(&q->requeue_lock);
+	spin_lock(&hctx->requeue_lock);
+	list_add_tail(&flush_rq->queuelist, &hctx->flush_list);
+	spin_unlock(&hctx->requeue_lock);
 	blk_mq_kick_requeue_list(q);
 }
 
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index c3b5930106b2..787bdff3cc64 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -18,37 +18,6 @@ static int queue_poll_stat_show(void *data, struct seq_file *m)
 	return 0;
 }
 
-static void *queue_requeue_list_start(struct seq_file *m, loff_t *pos)
-	__acquires(&q->requeue_lock)
-{
-	struct request_queue *q = m->private;
-
-	spin_lock_irq(&q->requeue_lock);
-	return seq_list_start(&q->requeue_list, *pos);
-}
-
-static void *queue_requeue_list_next(struct seq_file *m, void *v, loff_t *pos)
-{
-	struct request_queue *q = m->private;
-
-	return seq_list_next(v, &q->requeue_list, pos);
-}
-
-static void queue_requeue_list_stop(struct seq_file *m, void *v)
-	__releases(&q->requeue_lock)
-{
-	struct request_queue *q = m->private;
-
-	spin_unlock_irq(&q->requeue_lock);
-}
-
-static const struct seq_operations queue_requeue_list_seq_ops = {
-	.start	= queue_requeue_list_start,
-	.next	= queue_requeue_list_next,
-	.stop	= queue_requeue_list_stop,
-	.show	= blk_mq_debugfs_rq_show,
-};
-
 static int blk_flags_show(struct seq_file *m, const unsigned long flags,
 			  const char *const *flag_name, int flag_name_count)
 {
@@ -157,7 +126,6 @@ static ssize_t queue_state_write(void *data, const char __user *buf,
 
 static const struct blk_mq_debugfs_attr blk_mq_debugfs_queue_attrs[] = {
 	{ "poll_stat", 0400, queue_poll_stat_show },
-	{ "requeue_list", 0400, .seq_ops = &queue_requeue_list_seq_ops },
 	{ "pm_only", 0600, queue_pm_only_show, NULL },
 	{ "state", 0600, queue_state_show, queue_state_write },
 	{ "zone_wlock", 0400, queue_zone_wlock_show, NULL },
@@ -513,6 +481,37 @@ static int hctx_dispatch_busy_show(void *data, struct seq_file *m)
 	return 0;
 }
 
+static void *hctx_requeue_list_start(struct seq_file *m, loff_t *pos)
+	__acquires(&hctx->requeue_lock)
+{
+	struct blk_mq_hw_ctx *hctx = m->private;
+
+	spin_lock_irq(&hctx->requeue_lock);
+	return seq_list_start(&hctx->requeue_list, *pos);
+}
+
+static void *hctx_requeue_list_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	struct blk_mq_hw_ctx *hctx = m->private;
+
+	return seq_list_next(v, &hctx->requeue_list, pos);
+}
+
+static void hctx_requeue_list_stop(struct seq_file *m, void *v)
+	__releases(&hctx->requeue_lock)
+{
+	struct blk_mq_hw_ctx *hctx = m->private;
+
+	spin_unlock_irq(&hctx->requeue_lock);
+}
+
+static const struct seq_operations hctx_requeue_list_seq_ops = {
+	.start	= hctx_requeue_list_start,
+	.next	= hctx_requeue_list_next,
+	.stop	= hctx_requeue_list_stop,
+	.show	= blk_mq_debugfs_rq_show,
+};
+
 #define CTX_RQ_SEQ_OPS(name, type)					\
 static void *ctx_##name##_rq_list_start(struct seq_file *m, loff_t *pos) \
 	__acquires(&ctx->lock)						\
@@ -628,6 +627,7 @@ static const struct blk_mq_debugfs_attr blk_mq_debugfs_hctx_attrs[] = {
 	{"run", 0600, hctx_run_show, hctx_run_write},
 	{"active", 0400, hctx_active_show},
 	{"dispatch_busy", 0400, hctx_dispatch_busy_show},
+	{"requeue_list", 0400, .seq_ops = &hctx_requeue_list_seq_ops},
 	{"type", 0400, hctx_type_show},
 	{},
 };
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 453a90767f7a..c359a28d9b25 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1421,6 +1421,7 @@ static void __blk_mq_requeue_request(struct request *rq)
 void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
 {
 	struct request_queue *q = rq->q;
+	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 	unsigned long flags;
 
 	__blk_mq_requeue_request(rq);
@@ -1428,9 +1429,9 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
 	/* this request will be re-inserted to io scheduler queue */
 	blk_mq_sched_requeue_request(rq);
 
-	spin_lock_irqsave(&q->requeue_lock, flags);
-	list_add_tail(&rq->queuelist, &q->requeue_list);
-	spin_unlock_irqrestore(&q->requeue_lock, flags);
+	spin_lock_irqsave(&hctx->requeue_lock, flags);
+	list_add_tail(&rq->queuelist, &hctx->requeue_list);
+	spin_unlock_irqrestore(&hctx->requeue_lock, flags);
 
 	if (kick_requeue_list)
 		blk_mq_kick_requeue_list(q);
@@ -1439,16 +1440,16 @@ EXPORT_SYMBOL(blk_mq_requeue_request);
 
 static void blk_mq_requeue_work(struct work_struct *work)
 {
-	struct request_queue *q =
-		container_of(work, struct request_queue, requeue_work.work);
+	struct blk_mq_hw_ctx *hctx =
+		container_of(work, struct blk_mq_hw_ctx, requeue_work.work);
 	LIST_HEAD(requeue_list);
 	LIST_HEAD(flush_list);
 	struct request *rq;
 
-	spin_lock_irq(&q->requeue_lock);
-	list_splice_init(&q->requeue_list, &requeue_list);
-	list_splice_init(&q->flush_list, &flush_list);
-	spin_unlock_irq(&q->requeue_lock);
+	spin_lock_irq(&hctx->requeue_lock);
+	list_splice_init(&hctx->requeue_list, &requeue_list);
+	list_splice_init(&hctx->flush_list, &flush_list);
+	spin_unlock_irq(&hctx->requeue_lock);
 
 	while (!list_empty(&requeue_list)) {
 		rq = list_entry(requeue_list.next, struct request, queuelist);
@@ -1471,20 +1472,30 @@ static void blk_mq_requeue_work(struct work_struct *work)
 		blk_mq_insert_request(rq, 0);
 	}
 
-	blk_mq_run_hw_queues(q, false);
+	blk_mq_run_hw_queue(hctx, false);
 }
 
 void blk_mq_kick_requeue_list(struct request_queue *q)
 {
-	kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND, &q->requeue_work, 0);
+	struct blk_mq_hw_ctx *hctx;
+	unsigned long i;
+
+	queue_for_each_hw_ctx(q, hctx, i)
+		kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND,
+					    &hctx->requeue_work, 0);
 }
 EXPORT_SYMBOL(blk_mq_kick_requeue_list);
 
 void blk_mq_delay_kick_requeue_list(struct request_queue *q,
 				    unsigned long msecs)
 {
-	kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND, &q->requeue_work,
-				    msecs_to_jiffies(msecs));
+	struct blk_mq_hw_ctx *hctx;
+	unsigned long i;
+
+	queue_for_each_hw_ctx(q, hctx, i)
+		kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND,
+					    &hctx->requeue_work,
+					    msecs_to_jiffies(msecs));
 }
 EXPORT_SYMBOL(blk_mq_delay_kick_requeue_list);
 
@@ -3614,6 +3625,11 @@ static int blk_mq_init_hctx(struct request_queue *q,
 		struct blk_mq_tag_set *set,
 		struct blk_mq_hw_ctx *hctx, unsigned hctx_idx)
 {
+	INIT_DELAYED_WORK(&hctx->requeue_work, blk_mq_requeue_work);
+	INIT_LIST_HEAD(&hctx->flush_list);
+	INIT_LIST_HEAD(&hctx->requeue_list);
+	spin_lock_init(&hctx->requeue_lock);
+
 	hctx->queue_num = hctx_idx;
 
 	if (!(hctx->flags & BLK_MQ_F_STACKING))
@@ -4229,11 +4245,6 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 		q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;
 	blk_mq_update_poll_flag(q);
 
-	INIT_DELAYED_WORK(&q->requeue_work, blk_mq_requeue_work);
-	INIT_LIST_HEAD(&q->flush_list);
-	INIT_LIST_HEAD(&q->requeue_list);
-	spin_lock_init(&q->requeue_lock);
-
 	q->nr_requests = set->queue_depth;
 
 	blk_mq_init_cpu_queues(q, set->nr_hw_queues);
@@ -4782,10 +4793,10 @@ void blk_mq_cancel_work_sync(struct request_queue *q)
 	struct blk_mq_hw_ctx *hctx;
 	unsigned long i;
 
-	cancel_delayed_work_sync(&q->requeue_work);
-
-	queue_for_each_hw_ctx(q, hctx, i)
+	queue_for_each_hw_ctx(q, hctx, i) {
+		cancel_delayed_work_sync(&hctx->requeue_work);
 		cancel_delayed_work_sync(&hctx->run_work);
+	}
 }
 
 static int __init blk_mq_init(void)
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 2610b299ec77..672e8880f9e2 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -308,6 +308,12 @@ struct blk_mq_hw_ctx {
 		unsigned long		state;
 	} ____cacheline_aligned_in_smp;
 
+	struct list_head	flush_list;
+
+	struct list_head	requeue_list;
+	spinlock_t		requeue_lock;
+	struct delayed_work	requeue_work;
+
 	/**
 	 * @run_work: Used for scheduling a hardware queue run at
	 * a later time.
 	 */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ed44a997f629..ed4f89657f1f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -479,11 +479,6 @@ struct request_queue {
 	 * for flush operations
 	 */
 	struct blk_flush_queue	*fq;
-	struct list_head	flush_list;
-
-	struct list_head	requeue_list;
-	spinlock_t		requeue_lock;
-	struct delayed_work	requeue_work;
 
 	struct mutex		sysfs_lock;
 	struct mutex		sysfs_dir_lock;

From patchwork Wed Jun 21 20:12:32 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13287928
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Damien Le Moal, Ming Lei, Mike Snitzer
Subject: [PATCH v4 5/7] block: Preserve the order of requeued requests
Date: Wed, 21 Jun 2023 13:12:32 -0700
Message-ID: <20230621201237.796902-6-bvanassche@acm.org>
In-Reply-To: <20230621201237.796902-1-bvanassche@acm.org>
References: <20230621201237.796902-1-bvanassche@acm.org>

If a queue is run before
all requeued requests have been sent to the I/O scheduler, the I/O
scheduler may dispatch the wrong request. Fix this by making
blk_mq_run_hw_queue() process the requeue_list instead of
blk_mq_requeue_work().
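A schematic of the race being closed (simplified; the context names are
illustrative):

	/*
	 * Before this patch:
	 *
	 *   context A                        context B
	 *   ---------                        ---------
	 *   blk_mq_requeue_request(rq)
	 *   (rq waits on requeue_list)
	 *                                    blk_mq_run_hw_queue()
	 *                                    -> dispatches newer requests
	 *   blk_mq_requeue_work()
	 *   -> rq reaches the scheduler too late; order is lost
	 *
	 * After this patch, blk_mq_run_hw_queue() first calls
	 * blk_mq_process_requeue_list(), so requeued requests reach the
	 * I/O scheduler before any dispatch decision is made.
	 */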
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Cc: Mike Snitzer
Signed-off-by: Bart Van Assche
---
 block/blk-mq.c         | 31 +++++++++----------------------
 include/linux/blk-mq.h |  1 -
 2 files changed, 9 insertions(+), 23 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index c359a28d9b25..de39984d17c4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -68,6 +68,8 @@ static inline blk_qc_t blk_rq_to_qc(struct request *rq)
 static bool blk_mq_hctx_has_pending(struct blk_mq_hw_ctx *hctx)
 {
 	return !list_empty_careful(&hctx->dispatch) ||
+		!list_empty_careful(&hctx->requeue_list) ||
+		!list_empty_careful(&hctx->flush_list) ||
 		sbitmap_any_bit_set(&hctx->ctx_map) ||
 			blk_mq_sched_has_work(hctx);
 }
@@ -1438,10 +1440,8 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
 }
 EXPORT_SYMBOL(blk_mq_requeue_request);
 
-static void blk_mq_requeue_work(struct work_struct *work)
+static void blk_mq_process_requeue_list(struct blk_mq_hw_ctx *hctx)
 {
-	struct blk_mq_hw_ctx *hctx =
-		container_of(work, struct blk_mq_hw_ctx, requeue_work.work);
 	LIST_HEAD(requeue_list);
 	LIST_HEAD(flush_list);
 	struct request *rq;
@@ -1471,31 +1471,18 @@ static void blk_mq_process_requeue_list(struct blk_mq_hw_ctx *hctx)
 		list_del_init(&rq->queuelist);
 		blk_mq_insert_request(rq, 0);
 	}
-
-	blk_mq_run_hw_queue(hctx, false);
 }
 
 void blk_mq_kick_requeue_list(struct request_queue *q)
 {
-	struct blk_mq_hw_ctx *hctx;
-	unsigned long i;
-
-	queue_for_each_hw_ctx(q, hctx, i)
-		kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND,
-					    &hctx->requeue_work, 0);
+	blk_mq_run_hw_queues(q, true);
 }
 EXPORT_SYMBOL(blk_mq_kick_requeue_list);
 
 void blk_mq_delay_kick_requeue_list(struct request_queue *q,
 				    unsigned long msecs)
 {
-	struct blk_mq_hw_ctx *hctx;
-	unsigned long i;
-
-	queue_for_each_hw_ctx(q, hctx, i)
-		kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND,
-					    &hctx->requeue_work,
-					    msecs_to_jiffies(msecs));
+	blk_mq_delay_run_hw_queues(q, msecs);
 }
 EXPORT_SYMBOL(blk_mq_delay_kick_requeue_list);
 
@@ -2248,6 +2235,7 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 		return;
 	}
 
+	blk_mq_process_requeue_list(hctx);
 	blk_mq_run_dispatch_ops(hctx->queue,
 			blk_mq_sched_dispatch_requests(hctx));
 }
@@ -2296,7 +2284,7 @@ void blk_mq_run_hw_queues(struct request_queue *q, bool async)
 		 * scheduler.
 		 */
 		if (!sq_hctx || sq_hctx == hctx ||
-		    !list_empty_careful(&hctx->dispatch))
+		    blk_mq_hctx_has_pending(hctx))
 			blk_mq_run_hw_queue(hctx, async);
 	}
 }
@@ -2332,7 +2320,7 @@ void blk_mq_delay_run_hw_queues(struct request_queue *q, unsigned long msecs)
 		 * scheduler.
 		 */
 		if (!sq_hctx || sq_hctx == hctx ||
-		    !list_empty_careful(&hctx->dispatch))
+		    blk_mq_hctx_has_pending(hctx))
			blk_mq_delay_run_hw_queue(hctx, msecs);
 	}
 }
@@ -2417,6 +2405,7 @@ static void blk_mq_run_work_fn(struct work_struct *work)
 	struct blk_mq_hw_ctx *hctx =
 		container_of(work, struct blk_mq_hw_ctx, run_work.work);
 
+	blk_mq_process_requeue_list(hctx);
 	blk_mq_run_dispatch_ops(hctx->queue,
 			blk_mq_sched_dispatch_requests(hctx));
 }
@@ -3625,7 +3614,6 @@ static int blk_mq_init_hctx(struct request_queue *q,
 		struct blk_mq_tag_set *set,
 		struct blk_mq_hw_ctx *hctx, unsigned hctx_idx)
 {
-	INIT_DELAYED_WORK(&hctx->requeue_work, blk_mq_requeue_work);
 	INIT_LIST_HEAD(&hctx->flush_list);
 	INIT_LIST_HEAD(&hctx->requeue_list);
 	spin_lock_init(&hctx->requeue_lock);
@@ -4794,7 +4782,6 @@ void blk_mq_cancel_work_sync(struct request_queue *q)
 	unsigned long i;
 
 	queue_for_each_hw_ctx(q, hctx, i) {
-		cancel_delayed_work_sync(&hctx->requeue_work);
 		cancel_delayed_work_sync(&hctx->run_work);
 	}
 }
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 672e8880f9e2..b919de53dc28 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -312,7 +312,6 @@ struct blk_mq_hw_ctx {
 
 	struct list_head	requeue_list;
 	spinlock_t		requeue_lock;
-	struct delayed_work	requeue_work;
 
 	/**
 	 * @run_work: Used for scheduling a hardware queue run at a later time.

From patchwork Wed Jun 21 20:12:33 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13287929
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Damien Le Moal, Ming Lei, Mike Snitzer, Alasdair Kergon,
    dm-devel@redhat.com
Subject: [PATCH v4 6/7] dm: Inline __dm_mq_kick_requeue_list()
Date: Wed, 21 Jun 2023 13:12:33 -0700
Message-ID: <20230621201237.796902-7-bvanassche@acm.org>
In-Reply-To: <20230621201237.796902-1-bvanassche@acm.org>
References: <20230621201237.796902-1-bvanassche@acm.org>

Since commit 52d7f1b5c2f3 ("blk-mq: Avoid that requeueing starts stopped
queues"), the function __dm_mq_kick_requeue_list() is too short to keep
it as a separate function. Hence, inline this function.

Reviewed-by: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Cc: Mike Snitzer
Signed-off-by: Bart Van Assche
---
 drivers/md/dm-rq.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index f7e9a3632eb3..bbe1e2ea0aa4 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -168,21 +168,16 @@ static void dm_end_request(struct request *clone, blk_status_t error)
 	rq_completed(md);
 }
 
-static void __dm_mq_kick_requeue_list(struct request_queue *q, unsigned long msecs)
-{
-	blk_mq_delay_kick_requeue_list(q, msecs);
-}
-
 void dm_mq_kick_requeue_list(struct mapped_device *md)
 {
-	__dm_mq_kick_requeue_list(md->queue, 0);
+	blk_mq_kick_requeue_list(md->queue);
 }
 EXPORT_SYMBOL(dm_mq_kick_requeue_list);
 
 static void dm_mq_delay_requeue_request(struct request *rq, unsigned long msecs)
 {
 	blk_mq_requeue_request(rq, false);
-	__dm_mq_kick_requeue_list(rq->q, msecs);
+	blk_mq_delay_kick_requeue_list(rq->q, msecs);
 }
 
 static void dm_requeue_original_request(struct dm_rq_target_io *tio, bool delay_requeue)

From patchwork Wed Jun 21 20:12:34 2023
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13287930
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
    Vineeth Vijayan, Damien Le Moal, Ming Lei, Mike Snitzer,
    Roger Pau Monné, Juergen Gross, Stefano Stabellini, Alasdair Kergon,
    dm-devel@redhat.com, Keith Busch, Sagi Grimberg, Heiko Carstens,
    Vasily Gorbik, Alexander Gordeev, "James E.J. Bottomley",
    "Martin K. Petersen"
Subject: [PATCH v4 7/7] block: Inline blk_mq_{,delay_}kick_requeue_list()
Date: Wed, 21 Jun 2023 13:12:34 -0700
Message-ID: <20230621201237.796902-8-bvanassche@acm.org>
In-Reply-To: <20230621201237.796902-1-bvanassche@acm.org>
References: <20230621201237.796902-1-bvanassche@acm.org>

Patch "block: Preserve the order of requeued requests" changed
blk_mq_kick_requeue_list() and blk_mq_delay_kick_requeue_list() into
blk_mq_run_hw_queues() and blk_mq_delay_run_hw_queues() calls
respectively. Inline blk_mq_{,delay_}kick_requeue_list() because these
functions are now too short to keep as separate functions.
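The caller migration is mechanical; in summary (illustrative, covering
the replacements made below):

	/*
	 * blk_mq_kick_requeue_list(q)            -> blk_mq_run_hw_queues(q, true)
	 * blk_mq_delay_kick_requeue_list(q, ms)  -> blk_mq_delay_run_hw_queues(q, ms)
	 */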
Acked-by: Vineeth Vijayan [ for the s390 changes ]
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Cc: Mike Snitzer
Signed-off-by: Bart Van Assche
Acked-by: Roger Pau Monné
---
 block/blk-flush.c            |  4 ++--
 block/blk-mq-debugfs.c       |  2 +-
 block/blk-mq.c               | 15 +--------------
 drivers/block/ublk_drv.c     |  6 +++---
 drivers/block/xen-blkfront.c |  1 -
 drivers/md/dm-rq.c           |  6 +++---
 drivers/nvme/host/core.c     |  2 +-
 drivers/s390/block/scm_blk.c |  2 +-
 drivers/scsi/scsi_lib.c      |  2 +-
 include/linux/blk-mq.h       |  2 --
 10 files changed, 13 insertions(+), 29 deletions(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index 4bfb92f58aa9..157b86fd9ccb 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -192,7 +192,7 @@ static void blk_flush_complete_seq(struct request *rq,
 		spin_lock(&hctx->requeue_lock);
 		list_add_tail(&rq->queuelist, &hctx->flush_list);
 		spin_unlock(&hctx->requeue_lock);
-		blk_mq_kick_requeue_list(q);
+		blk_mq_run_hw_queues(q, true);
 		break;
 
 	case REQ_FSEQ_DONE:
@@ -354,7 +354,7 @@ static void blk_kick_flush(struct blk_mq_hw_ctx *hctx,
 	list_add_tail(&flush_rq->queuelist, &hctx->flush_list);
 	spin_unlock(&hctx->requeue_lock);
 
-	blk_mq_kick_requeue_list(q);
+	blk_mq_run_hw_queues(q, true);
 }
 
 static enum rq_end_io_ret mq_flush_data_end_io(struct request *rq,
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 787bdff3cc64..76792ebab935 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -114,7 +114,7 @@ static ssize_t queue_state_write(void *data, const char __user *buf,
 	} else if (strcmp(op, "start") == 0) {
 		blk_mq_start_stopped_hw_queues(q, true);
 	} else if (strcmp(op, "kick") == 0) {
-		blk_mq_kick_requeue_list(q);
+		blk_mq_run_hw_queues(q, true);
 	} else {
 		pr_err("%s: unsupported operation '%s'\n", __func__, op);
 inval:
diff --git a/block/blk-mq.c b/block/blk-mq.c
index de39984d17c4..12fd8b65b930 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1436,7 +1436,7 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
 	spin_unlock_irqrestore(&hctx->requeue_lock, flags);
 
 	if (kick_requeue_list)
-		blk_mq_kick_requeue_list(q);
+		blk_mq_run_hw_queues(q, true);
 }
 EXPORT_SYMBOL(blk_mq_requeue_request);
 
@@ -1473,19 +1473,6 @@ static void blk_mq_process_requeue_list(struct blk_mq_hw_ctx *hctx)
 	}
 }
 
-void blk_mq_kick_requeue_list(struct request_queue *q)
-{
-	blk_mq_run_hw_queues(q, true);
-}
-EXPORT_SYMBOL(blk_mq_kick_requeue_list);
-
-void blk_mq_delay_kick_requeue_list(struct request_queue *q,
-				    unsigned long msecs)
-{
-	blk_mq_delay_run_hw_queues(q, msecs);
-}
-EXPORT_SYMBOL(blk_mq_delay_kick_requeue_list);
-
 static bool blk_mq_rq_inflight(struct request *rq, void *priv)
 {
 	/*
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 1c823750c95a..cddbbdc9b199 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -902,7 +902,7 @@ static inline void __ublk_rq_task_work(struct request *req,
 	 */
 	if (unlikely(!mapped_bytes)) {
 		blk_mq_requeue_request(req, false);
-		blk_mq_delay_kick_requeue_list(req->q,
+		blk_mq_delay_run_hw_queues(req->q,
 				UBLK_REQUEUE_DELAY_MS);
 		return;
 	}
@@ -1297,7 +1297,7 @@ static void ublk_unquiesce_dev(struct ublk_device *ub)
 	blk_mq_unquiesce_queue(ub->ub_disk->queue);
 	/* We may have requeued some rqs in ublk_quiesce_queue() */
-	blk_mq_kick_requeue_list(ub->ub_disk->queue);
+	blk_mq_run_hw_queues(ub->ub_disk->queue, true);
 }
 
 static void ublk_stop_dev(struct ublk_device *ub)
@@ -2341,7 +2341,7 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
 	blk_mq_unquiesce_queue(ub->ub_disk->queue);
	pr_devel("%s: queue unquiesced, dev id %d.\n", __func__, header->dev_id);
-	blk_mq_kick_requeue_list(ub->ub_disk->queue);
+	blk_mq_run_hw_queues(ub->ub_disk->queue, true);
 	ub->dev_info.state = UBLK_S_DEV_LIVE;
 	schedule_delayed_work(&ub->monitor_work, UBLK_DAEMON_MONITOR_PERIOD);
 	ret = 0;
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 52e74adbaad6..b8ac217c92b6 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2040,7 +2040,6 @@ static int blkif_recover(struct blkfront_info *info)
 		blk_mq_requeue_request(req, false);
 	}
 	blk_mq_start_stopped_hw_queues(info->rq, true);
-	blk_mq_kick_requeue_list(info->rq);
 
 	while ((bio = bio_list_pop(&info->bio_list)) != NULL) {
 		/* Traverse the list of pending bios and re-queue them */
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index bbe1e2ea0aa4..6421cc2c9852 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -64,7 +64,7 @@ int dm_request_based(struct mapped_device *md)
 void dm_start_queue(struct request_queue *q)
 {
 	blk_mq_unquiesce_queue(q);
-	blk_mq_kick_requeue_list(q);
+	blk_mq_run_hw_queues(q, true);
 }
 
 void dm_stop_queue(struct request_queue *q)
@@ -170,14 +170,14 @@ static void dm_end_request(struct request *clone, blk_status_t error)
 
 void dm_mq_kick_requeue_list(struct mapped_device *md)
 {
-	blk_mq_kick_requeue_list(md->queue);
+	blk_mq_run_hw_queues(md->queue, true);
 }
 EXPORT_SYMBOL(dm_mq_kick_requeue_list);
 
 static void dm_mq_delay_requeue_request(struct request *rq, unsigned long msecs)
 {
 	blk_mq_requeue_request(rq, false);
-	blk_mq_delay_kick_requeue_list(rq->q, msecs);
+	blk_mq_delay_run_hw_queues(rq->q, msecs);
 }
 
 static void dm_requeue_original_request(struct dm_rq_target_io *tio, bool delay_requeue)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f5dd6d8c7e1d..9b923d52e41c 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -303,7 +303,7 @@ static void nvme_retry_req(struct request *req)
 
 	nvme_req(req)->retries++;
 	blk_mq_requeue_request(req, false);
-	blk_mq_delay_kick_requeue_list(req->q, delay);
+	blk_mq_delay_run_hw_queues(req->q, delay);
 }
 
 static void nvme_log_error(struct request *req)
diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
index 0c1df1d5f1ac..fe5937d28fdc 100644
--- a/drivers/s390/block/scm_blk.c
+++ b/drivers/s390/block/scm_blk.c
@@ -243,7 +243,7 @@ static void scm_request_requeue(struct scm_request *scmrq)
 
 	atomic_dec(&bdev->queued_reqs);
 	scm_request_done(scmrq);
-	blk_mq_kick_requeue_list(bdev->rq);
+	blk_mq_run_hw_queues(bdev->rq, true);
 }
 
 static void scm_request_finish(struct scm_request *scmrq)
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 0226c9279cef..2aa3c147e12f 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -124,7 +124,7 @@ static void scsi_mq_requeue_cmd(struct scsi_cmnd *cmd, unsigned long msecs)
 
 	if (msecs) {
 		blk_mq_requeue_request(rq, false);
-		blk_mq_delay_kick_requeue_list(rq->q, msecs);
+		blk_mq_delay_run_hw_queues(rq->q, msecs);
 	} else
 		blk_mq_requeue_request(rq, true);
 }
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index b919de53dc28..80761e7c4ea5 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -871,8 +871,6 @@ static inline bool blk_mq_add_to_batch(struct request *req,
 }
 
 void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list);
-void blk_mq_kick_requeue_list(struct request_queue *q);
-void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs);
 void blk_mq_complete_request(struct request *rq);
 bool blk_mq_complete_request_remote(struct request *rq);
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx);