From patchwork Wed May 3 22:51:58 2023
X-Patchwork-Id: 13230614
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v4 01/11] block: Simplify blk_req_needs_zone_write_lock()
Date: Wed, 3 May 2023 15:51:58 -0700
Message-ID: <20230503225208.2439206-2-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

Remove the blk_rq_is_passthrough() check because it is redundant:
blk_req_needs_zone_write_lock() also calls bdev_op_is_zoned_write() and the
latter function returns false for pass-through requests.

Reviewed-by: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 block/blk-zoned.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index fce9082384d6..835d9e937d4d 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -57,9 +57,6 @@ EXPORT_SYMBOL_GPL(blk_zone_cond_str);
  */
 bool blk_req_needs_zone_write_lock(struct request *rq)
 {
-        if (blk_rq_is_passthrough(rq))
-                return false;
-
         if (!rq->q->disk->seq_zones_wlock)
                 return false;

From patchwork Wed May 3 22:51:59 2023
X-Patchwork-Id: 13230616
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei, Pankaj Raghav, Johannes Thumshirn
Subject: [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument
Date: Wed, 3 May 2023 15:51:59 -0700
Message-ID: <20230503225208.2439206-3-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

Change the type of the second argument of bdev_op_is_zoned_write() from
blk_opf_t to enum req_op because this function expects an operation without
flags as its second argument.

Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Cc: Pankaj Raghav
Fixes: 8cafdb5ab94c ("block: adapt blk_mq_plug() to not plug for writes that require a zone lock")
Signed-off-by: Bart Van Assche
Reviewed-by: Johannes Thumshirn
Reviewed-by: Pankaj Raghav
Reviewed-by: Christoph Hellwig
---
 include/linux/blkdev.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b441e633f4dd..db24cf98ccfb 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1282,7 +1282,7 @@ static inline unsigned int bdev_zone_no(struct block_device *bdev, sector_t sec)
 }
 
 static inline bool bdev_op_is_zoned_write(struct block_device *bdev,
-                                          blk_opf_t op)
+                                          enum req_op op)
 {
         if (!bdev_is_zoned(bdev))
                 return false;

From patchwork Wed May 3 22:52:00 2023
X-Patchwork-Id: 13230615
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v4 03/11] block: Introduce op_is_zoned_write()
Date: Wed, 3 May 2023 15:52:00 -0700
Message-ID: <20230503225208.2439206-4-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

Introduce a helper function for checking whether write serialization is
required if the operation will be sent to a zoned device. A second caller for
op_is_zoned_write() will be introduced in the next patch in this series.

Suggested-by: Christoph Hellwig
Cc: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
---
 include/linux/blkdev.h | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index db24cf98ccfb..a4f85781060c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1281,13 +1281,16 @@ static inline unsigned int bdev_zone_no(struct block_device *bdev, sector_t sec)
         return disk_zone_no(bdev->bd_disk, sec);
 }
 
+/* Whether write serialization is required for @op on zoned devices. */
+static inline bool op_is_zoned_write(enum req_op op)
+{
+        return op == REQ_OP_WRITE || op == REQ_OP_WRITE_ZEROES;
+}
+
 static inline bool bdev_op_is_zoned_write(struct block_device *bdev,
                                           enum req_op op)
 {
-        if (!bdev_is_zoned(bdev))
-                return false;
-
-        return op == REQ_OP_WRITE || op == REQ_OP_WRITE_ZEROES;
+        return bdev_is_zoned(bdev) && op_is_zoned_write(op);
 }
 
 static inline sector_t bdev_zone_sectors(struct block_device *bdev)
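
The split above matters because a later patch needs the operation-only test
without having a block_device at hand. A minimal userspace sketch of the same
factoring, using toy stand-ins for the kernel's enum req_op and struct
block_device (none of this is kernel code):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel's request operation codes. */
enum req_op { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_WRITE_ZEROES, REQ_OP_ZONE_APPEND };

struct block_device { bool zoned; };

/* Operation-only predicate: no block_device needed. */
static bool op_is_zoned_write(enum req_op op)
{
        return op == REQ_OP_WRITE || op == REQ_OP_WRITE_ZEROES;
}

/* Device-aware predicate, expressed in terms of the helper above. */
static bool bdev_op_is_zoned_write(const struct block_device *bdev, enum req_op op)
{
        return bdev->zoned && op_is_zoned_write(op);
}

int main(void)
{
        struct block_device zoned_dev = { .zoned = true };

        /* A caller that only has the op can use op_is_zoned_write() directly. */
        printf("%d %d\n", op_is_zoned_write(REQ_OP_WRITE),
               bdev_op_is_zoned_write(&zoned_dev, REQ_OP_ZONE_APPEND));
        return 0;
}

The device-aware check stays a one-liner, and callers that only know the
operation reuse op_is_zoned_write().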

From patchwork Wed May 3 22:52:01 2023
X-Patchwork-Id: 13230617
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v4 04/11] block: Introduce blk_rq_is_seq_zoned_write()
Date: Wed, 3 May 2023 15:52:01 -0700
Message-ID: <20230503225208.2439206-5-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

Introduce the function blk_rq_is_seq_zoned_write(). This function will be used
in later patches to preserve the order of zoned writes that require write
serialization.

This patch includes an optimization: instead of using
rq->q->disk->part0->bd_queue to check whether or not the queue is associated
with a zoned block device, use rq->q->disk->queue.

Cc: Damien Le Moal
Cc: Christoph Hellwig
Cc: Ming Lei
Signed-off-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
---
 block/blk-zoned.c      | 17 +++++++++++++----
 include/linux/blk-mq.h |  6 ++++++
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 835d9e937d4d..4f44b74ba4df 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -52,6 +52,18 @@ const char *blk_zone_cond_str(enum blk_zone_cond zone_cond)
 }
 EXPORT_SYMBOL_GPL(blk_zone_cond_str);
 
+/**
+ * blk_rq_is_seq_zoned_write() - Check if @rq requires write serialization.
+ * @rq: Request to examine.
+ *
+ * Note: REQ_OP_ZONE_APPEND requests do not require serialization.
+ */
+bool blk_rq_is_seq_zoned_write(struct request *rq)
+{
+        return op_is_zoned_write(req_op(rq)) && blk_rq_zone_is_seq(rq);
+}
+EXPORT_SYMBOL_GPL(blk_rq_is_seq_zoned_write);
+
 /*
  * Return true if a request is a write requests that needs zone write locking.
  */
@@ -60,10 +72,7 @@ bool blk_req_needs_zone_write_lock(struct request *rq)
         if (!rq->q->disk->seq_zones_wlock)
                 return false;
 
-        if (bdev_op_is_zoned_write(rq->q->disk->part0, req_op(rq)))
-                return blk_rq_zone_is_seq(rq);
-
-        return false;
+        return blk_rq_is_seq_zoned_write(rq);
 }
 EXPORT_SYMBOL_GPL(blk_req_needs_zone_write_lock);
 
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 06caacd77ed6..e498b85bc470 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -1164,6 +1164,7 @@ static inline unsigned int blk_rq_zone_is_seq(struct request *rq)
         return disk_zone_is_seq(rq->q->disk, blk_rq_pos(rq));
 }
 
+bool blk_rq_is_seq_zoned_write(struct request *rq);
 bool blk_req_needs_zone_write_lock(struct request *rq);
 bool blk_req_zone_write_trylock(struct request *rq);
 void __blk_req_zone_write_lock(struct request *rq);
@@ -1194,6 +1195,11 @@ static inline bool blk_req_can_dispatch_to_zone(struct request *rq)
         return !blk_req_zone_is_write_locked(rq);
 }
 #else /* CONFIG_BLK_DEV_ZONED */
+static inline bool blk_rq_is_seq_zoned_write(struct request *rq)
+{
+        return false;
+}
+
 static inline bool blk_req_needs_zone_write_lock(struct request *rq)
 {
         return false;
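
To see what the new predicate combines, here is a small standalone C sketch
with an assumed zone layout (zone 0 conventional, all others sequential) and
toy request and operation types; it mirrors the idea, not the kernel
implementation, and shows why REQ_OP_ZONE_APPEND never needs serialization:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long sector_t;

enum req_op { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_WRITE_ZEROES, REQ_OP_ZONE_APPEND };

struct request { enum req_op op; sector_t pos; };

/* Assumed toy zone layout: fixed-size zones, only zone 0 is conventional. */
#define ZONE_SECTORS 0x80000ULL

static bool zone_is_seq(sector_t pos)
{
        return pos / ZONE_SECTORS != 0;
}

static bool op_is_zoned_write(enum req_op op)
{
        return op == REQ_OP_WRITE || op == REQ_OP_WRITE_ZEROES;
}

/* Mirrors blk_rq_is_seq_zoned_write(): zone append never needs serialization. */
static bool rq_is_seq_zoned_write(const struct request *rq)
{
        return op_is_zoned_write(rq->op) && zone_is_seq(rq->pos);
}

int main(void)
{
        struct request w  = { REQ_OP_WRITE,       3 * ZONE_SECTORS };
        struct request za = { REQ_OP_ZONE_APPEND, 3 * ZONE_SECTORS };

        printf("write: %d, zone append: %d\n",
               rq_is_seq_zoned_write(&w), rq_is_seq_zoned_write(&za));
        return 0;
}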

From patchwork Wed May 3 22:52:02 2023
X-Patchwork-Id: 13230618
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v4 05/11] block: mq-deadline: Clean up deadline_check_fifo()
Date: Wed, 3 May 2023 15:52:02 -0700
Message-ID: <20230503225208.2439206-6-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

Change the return type of deadline_check_fifo() from 'int' to 'bool'. Use
time_is_before_eq_jiffies() instead of time_after_eq(). No functionality has
been changed.

Reviewed-by: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 block/mq-deadline.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 5839a027e0f0..a016cafa54b3 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -272,21 +272,15 @@ static u32 dd_queued(struct deadline_data *dd, enum dd_prio prio)
 }
 
 /*
- * deadline_check_fifo returns 0 if there are no expired requests on the fifo,
- * 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir])
+ * deadline_check_fifo returns true if and only if there are expired requests
+ * in the FIFO list. Requires !list_empty(&dd->fifo_list[data_dir]).
  */
-static inline int deadline_check_fifo(struct dd_per_prio *per_prio,
-                                      enum dd_data_dir data_dir)
+static inline bool deadline_check_fifo(struct dd_per_prio *per_prio,
+                                       enum dd_data_dir data_dir)
 {
         struct request *rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next);
 
-        /*
-         * rq is expired!
-         */
-        if (time_after_eq(jiffies, (unsigned long)rq->fifo_time))
-                return 1;
-
-        return 0;
+        return time_is_before_eq_jiffies((unsigned long)rq->fifo_time);
 }
 
 /*
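
time_is_before_eq_jiffies() is the same wrap-safe comparison as
time_after_eq(jiffies, t), just written from the deadline's point of view.
A standalone C sketch with simplified stand-ins for the jiffies helpers
(a 32-bit counter and toy macro names are assumed; the kernel versions are
more general):

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int jiffies_t;

/* Wrap-safe "a is after or equal to b": signed difference is non-negative. */
static bool time_after_eq(jiffies_t a, jiffies_t b)
{
        return (int)(a - b) >= 0;
}

/* "t is before or equal to now" is the same test with the operands named
 * from the deadline's point of view, as in the rewritten deadline_check_fifo(). */
static bool time_is_before_eq(jiffies_t t, jiffies_t now)
{
        return time_after_eq(now, t);
}

int main(void)
{
        jiffies_t now = 0x00000005;        /* counter just wrapped */
        jiffies_t expiry = 0xfffffff0;     /* deadline set shortly before the wrap */

        printf("expired (old spelling): %d\n", time_after_eq(now, expiry));
        printf("expired (new spelling): %d\n", time_is_before_eq(expiry, now));
        return 0;
}

Both spellings report the request as expired even across the counter wrap.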

From patchwork Wed May 3 22:52:03 2023
X-Patchwork-Id: 13230619
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v4 06/11] block: mq-deadline: Simplify deadline_skip_seq_writes()
Date: Wed, 3 May 2023 15:52:03 -0700
Message-ID: <20230503225208.2439206-7-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

Make the deadline_skip_seq_writes() code shorter without changing its
functionality.

Reviewed-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 block/mq-deadline.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index a016cafa54b3..6276afede9cd 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -304,14 +304,11 @@ static struct request *deadline_skip_seq_writes(struct deadline_data *dd,
                                                 struct request *rq)
 {
         sector_t pos = blk_rq_pos(rq);
-        sector_t skipped_sectors = 0;
 
-        while (rq) {
-                if (blk_rq_pos(rq) != pos + skipped_sectors)
-                        break;
-                skipped_sectors += blk_rq_sectors(rq);
+        do {
+                pos += blk_rq_sectors(rq);
                 rq = deadline_latter_request(rq);
-        }
+        } while (rq && blk_rq_pos(rq) == pos);
 
         return rq;
 }
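
Both loop shapes return the first request that no longer continues the
contiguous run. A standalone C sketch over a toy array of (position, sectors)
pairs, standing in for the sorted request stream, shows the equivalence:

#include <stdio.h>
#include <stddef.h>

typedef unsigned long long sector_t;

/* Toy request stream sorted by position; a gap appears before the last entry. */
struct rq { sector_t pos; sector_t sectors; };

static const struct rq rqs[] = {
        { 100, 8 }, { 108, 8 }, { 116, 16 }, { 200, 8 },
};
#define NR (sizeof(rqs) / sizeof(rqs[0]))

/* Old shape: track how many sectors were skipped so far. */
static size_t skip_old(size_t i)
{
        sector_t pos = rqs[i].pos, skipped = 0;

        while (i < NR) {
                if (rqs[i].pos != pos + skipped)
                        break;
                skipped += rqs[i].sectors;
                i++;
        }
        return i;
}

/* New shape: advance the expected position directly, as in the patch. */
static size_t skip_new(size_t i)
{
        sector_t pos = rqs[i].pos;

        do {
                pos += rqs[i].sectors;
                i++;
        } while (i < NR && rqs[i].pos == pos);

        return i;
}

int main(void)
{
        printf("old stops at index %zu, new stops at index %zu\n",
               skip_old(0), skip_new(0));
        return 0;
}

Both functions stop at index 3, the first request after the gap.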

From patchwork Wed May 3 22:52:04 2023
X-Patchwork-Id: 13230620
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v4 07/11] block: mq-deadline: Improve deadline_skip_seq_writes()
Date: Wed, 3 May 2023 15:52:04 -0700
Message-ID: <20230503225208.2439206-8-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

Make deadline_skip_seq_writes() do what its name suggests, namely to skip
sequential writes.

Reviewed-by: Christoph Hellwig
Cc: Damien Le Moal
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 block/mq-deadline.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 6276afede9cd..dbc0feca963e 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -308,7 +308,7 @@ static struct request *deadline_skip_seq_writes(struct deadline_data *dd,
         do {
                 pos += blk_rq_sectors(rq);
                 rq = deadline_latter_request(rq);
-        } while (rq && blk_rq_pos(rq) == pos);
+        } while (rq && blk_rq_pos(rq) == pos && blk_rq_is_seq_zoned_write(rq));
 
         return rq;
 }

From patchwork Wed May 3 22:52:05 2023
X-Patchwork-Id: 13230621
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v4 08/11] block: mq-deadline: Reduce lock contention
Date: Wed, 3 May 2023 15:52:05 -0700
Message-ID: <20230503225208.2439206-9-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

blk_mq_free_requests() calls dd_finish_request() indirectly. Prevent nested
locking of dd->lock and dd->zone_lock by unlocking dd->lock before calling
blk_mq_free_requests().

Cc: Damien Le Moal
Cc: Christoph Hellwig
Cc: Ming Lei
Signed-off-by: Bart Van Assche
---
 block/mq-deadline.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index dbc0feca963e..56cc29953e15 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -758,6 +758,7 @@ static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
  */
 static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
                               blk_insert_t flags)
+        __must_hold(dd->lock)
 {
         struct request_queue *q = hctx->queue;
         struct deadline_data *dd = q->elevator->elevator_data;
@@ -784,7 +785,9 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
         }
 
         if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
+                spin_unlock(&dd->lock);
                 blk_mq_free_requests(&free);
+                spin_lock(&dd->lock);
                 return;
         }
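
The pattern is to drop one lock around a call that may take another lock
further down the call chain. A standalone pthreads sketch, with mutexes
standing in for dd->lock and dd->zone_lock and toy function names (compile
with -pthread; this is not the block layer code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t dd_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for dd_finish_request(), reached indirectly from the free path. */
static void finish_request(void)
{
        pthread_mutex_lock(&zone_lock);
        puts("finishing request under zone_lock");
        pthread_mutex_unlock(&zone_lock);
}

/* Stand-in for blk_mq_free_requests(): ends up calling finish_request(). */
static void free_requests(void)
{
        finish_request();
}

/* Insert path: drop dd_lock around the call so zone_lock is never taken
 * while dd_lock is held, mirroring the pattern added by the patch. */
static void insert_request(void)
{
        pthread_mutex_lock(&dd_lock);
        /* ... merge attempt succeeded, there are requests to free ... */
        pthread_mutex_unlock(&dd_lock);
        free_requests();
        pthread_mutex_lock(&dd_lock);
        /* the caller still expects the lock to be held on return */
        pthread_mutex_unlock(&dd_lock);
}

int main(void)
{
        insert_request();
        return 0;
}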

From patchwork Wed May 3 22:52:06 2023
X-Patchwork-Id: 13230622
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v4 09/11] block: mq-deadline: Track the dispatch position
Date: Wed, 3 May 2023 15:52:06 -0700
Message-ID: <20230503225208.2439206-10-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

Track the position (sector_t) of the most recently dispatched request instead
of tracking a pointer to the next request to dispatch. This patch is the basis
for patch "Handle requeued requests correctly". Without this patch it would be
significantly more complicated to make sure that zoned writes are dispatched
in LBA order per zone.

Cc: Damien Le Moal
Cc: Christoph Hellwig
Cc: Ming Lei
Signed-off-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
---
 block/mq-deadline.c | 45 +++++++++++++++++++++++++++++++--------------
 1 file changed, 31 insertions(+), 14 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 56cc29953e15..b482b707cb37 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -74,8 +74,8 @@ struct dd_per_prio {
         struct list_head dispatch;
         struct rb_root sort_list[DD_DIR_COUNT];
         struct list_head fifo_list[DD_DIR_COUNT];
-        /* Next request in FIFO order. Read, write or both are NULL. */
-        struct request *next_rq[DD_DIR_COUNT];
+        /* Position of the most recently dispatched request. */
+        sector_t latest_pos[DD_DIR_COUNT];
         struct io_stats_per_prio stats;
 };
 
@@ -156,6 +156,25 @@ deadline_latter_request(struct request *rq)
         return NULL;
 }
 
+/* Return the first request for which blk_rq_pos() >= pos. */
+static inline struct request *deadline_from_pos(struct dd_per_prio *per_prio,
+                                enum dd_data_dir data_dir, sector_t pos)
+{
+        struct rb_node *node = per_prio->sort_list[data_dir].rb_node;
+        struct request *rq, *res = NULL;
+
+        while (node) {
+                rq = rb_entry_rq(node);
+                if (blk_rq_pos(rq) >= pos) {
+                        res = rq;
+                        node = node->rb_left;
+                } else {
+                        node = node->rb_right;
+                }
+        }
+        return res;
+}
+
 static void
 deadline_add_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
 {
@@ -167,11 +186,6 @@ deadline_add_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
 static inline void
 deadline_del_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
 {
-        const enum dd_data_dir data_dir = rq_data_dir(rq);
-
-        if (per_prio->next_rq[data_dir] == rq)
-                per_prio->next_rq[data_dir] = deadline_latter_request(rq);
-
         elv_rb_del(deadline_rb_root(per_prio, rq), rq);
 }
 
@@ -251,10 +265,6 @@ static void
 deadline_move_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
                       struct request *rq)
 {
-        const enum dd_data_dir data_dir = rq_data_dir(rq);
-
-        per_prio->next_rq[data_dir] = deadline_latter_request(rq);
-
         /*
          * take it off the sort and fifo list
          */
@@ -363,7 +373,8 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
         struct request *rq;
         unsigned long flags;
 
-        rq = per_prio->next_rq[data_dir];
+        rq = deadline_from_pos(per_prio, data_dir,
+                               per_prio->latest_pos[data_dir]);
         if (!rq)
                 return NULL;
 
@@ -426,6 +437,7 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
                 if (started_after(dd, rq, latest_start))
                         return NULL;
                 list_del_init(&rq->queuelist);
+                data_dir = rq_data_dir(rq);
                 goto done;
         }
 
@@ -433,9 +445,11 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
          * batches are currently reads XOR writes
          */
         rq = deadline_next_request(dd, per_prio, dd->last_dir);
-        if (rq && dd->batching < dd->fifo_batch)
+        if (rq && dd->batching < dd->fifo_batch) {
                 /* we have a next request are still entitled to batch */
+                data_dir = rq_data_dir(rq);
                 goto dispatch_request;
+        }
 
         /*
          * at this point we are not running a batch. select the appropriate
@@ -513,6 +527,7 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 done:
         ioprio_class = dd_rq_ioclass(rq);
         prio = ioprio_class_to_prio[ioprio_class];
+        dd->per_prio[prio].latest_pos[data_dir] = blk_rq_pos(rq);
         dd->per_prio[prio].stats.dispatched++;
         /*
          * If the request needs its target zone locked, do it.
@@ -1029,8 +1044,10 @@ static int deadline_##name##_next_rq_show(void *data,                  \
         struct request_queue *q = data;                                 \
         struct deadline_data *dd = q->elevator->elevator_data;          \
         struct dd_per_prio *per_prio = &dd->per_prio[prio];             \
-        struct request *rq = per_prio->next_rq[data_dir];               \
+        struct request *rq;                                             \
                                                                         \
+        rq = deadline_from_pos(per_prio, data_dir,                      \
+                               per_prio->latest_pos[data_dir]);         \
         if (rq)                                                         \
                 __blk_mq_debugfs_rq_show(m, rq);                        \
         return 0;                                                       \
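
deadline_from_pos() is a lower-bound search: remember the best candidate whose
position is at or above the target and keep looking for a smaller one. A
standalone C sketch of the same idea over a sorted array instead of the
RB-tree (toy data, not the mq-deadline code):

#include <stdio.h>
#include <stddef.h>

typedef unsigned long long sector_t;

/* Pending requests sorted by start sector, standing in for the RB-tree. */
static const sector_t pending[] = { 64, 128, 256, 512, 1024 };
#define NR (sizeof(pending) / sizeof(pending[0]))

/*
 * Return the index of the first request with position >= pos, or NR if none.
 * Same idea as deadline_from_pos(): remember the best candidate and keep
 * searching toward smaller positions.
 */
static size_t from_pos(sector_t pos)
{
        size_t lo = 0, hi = NR, res = NR;

        while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2;

                if (pending[mid] >= pos) {
                        res = mid;        /* candidate, try to find a smaller one */
                        hi = mid;
                } else {
                        lo = mid + 1;
                }
        }
        return res;
}

int main(void)
{
        /* latest_pos tracks where the last dispatch ended; resume from there. */
        sector_t latest_pos = 200;
        size_t i = from_pos(latest_pos);

        if (i < NR)
                printf("next dispatch starts at sector %llu\n", pending[i]);
        return 0;
}

Storing a position instead of a request pointer means nothing has to be
patched up when a request is removed from the tree; the next lookup simply
resumes from the stored sector.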

From patchwork Wed May 3 22:52:07 2023
X-Patchwork-Id: 13230623
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v4 10/11] block: mq-deadline: Handle requeued requests correctly
Date: Wed, 3 May 2023 15:52:07 -0700
Message-ID: <20230503225208.2439206-11-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

Start dispatching from the start of a zone instead of from the starting
position of the most recently dispatched request. If a zoned write is requeued
with an LBA that is lower than already inserted zoned writes, make sure that
it is submitted first.

Cc: Damien Le Moal
Cc: Christoph Hellwig
Cc: Ming Lei
Signed-off-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
---
 block/mq-deadline.c | 34 ++++++++++++++++++++++++++++++++--
 1 file changed, 32 insertions(+), 2 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index b482b707cb37..6c196182f86c 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -156,13 +156,28 @@ deadline_latter_request(struct request *rq)
         return NULL;
 }
 
-/* Return the first request for which blk_rq_pos() >= pos. */
+/*
+ * Return the first request for which blk_rq_pos() >= @pos. For zoned devices,
+ * return the first request after the highest zone start <= @pos.
+ */
 static inline struct request *deadline_from_pos(struct dd_per_prio *per_prio,
                                 enum dd_data_dir data_dir, sector_t pos)
 {
         struct rb_node *node = per_prio->sort_list[data_dir].rb_node;
         struct request *rq, *res = NULL;
 
+        if (!node)
+                return NULL;
+
+        rq = rb_entry_rq(node);
+        /*
+         * A zoned write may have been requeued with a starting position that
+         * is below that of the most recently dispatched request. Hence, for
+         * zoned writes, start searching from the start of a zone.
+         */
+        if (blk_rq_is_seq_zoned_write(rq))
+                pos = round_down(pos, rq->q->limits.chunk_sectors);
+
         while (node) {
                 rq = rb_entry_rq(node);
                 if (blk_rq_pos(rq) >= pos) {
@@ -812,6 +827,8 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
                 list_add(&rq->queuelist, &per_prio->dispatch);
                 rq->fifo_time = jiffies;
         } else {
+                struct list_head *insert_before;
+
                 deadline_add_rq_rb(per_prio, rq);
 
                 if (rq_mergeable(rq)) {
@@ -824,7 +841,20 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
                  * set expire time and add to fifo list
                  */
                 rq->fifo_time = jiffies + dd->fifo_expire[data_dir];
-                list_add_tail(&rq->queuelist, &per_prio->fifo_list[data_dir]);
+                insert_before = &per_prio->fifo_list[data_dir];
+#ifdef CONFIG_BLK_DEV_ZONED
+                /*
+                 * Insert zoned writes such that requests are sorted by
+                 * position per zone.
+                 */
+                if (blk_rq_is_seq_zoned_write(rq)) {
+                        struct request *rq2 = deadline_latter_request(rq);
+
+                        if (rq2 && blk_rq_zone_no(rq2) == blk_rq_zone_no(rq))
+                                insert_before = &rq2->queuelist;
+                }
+#endif
+                list_add_tail(&rq->queuelist, insert_before);
         }
 }
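
The key step is restarting the search from the zone start so that a requeued
write with a lower LBA in the same zone is not skipped. A standalone C sketch
of the rounding, with an assumed power-of-two zone size standing in for
q->limits.chunk_sectors (toy values, not the kernel helpers):

#include <stdio.h>

typedef unsigned long long sector_t;

/* Assumed zone size; must be a power of two for the mask-based rounding. */
#define ZONE_SECTORS 0x80000ULL

/* round_down to a power-of-two boundary, like the kernel macro. */
static sector_t round_down_pow2(sector_t x, sector_t align)
{
        return x & ~(align - 1);
}

int main(void)
{
        /* The most recent dispatch ended here ... */
        sector_t latest_pos = 3 * ZONE_SECTORS + 0x1000;
        /* ... but a write for the same zone was requeued with a lower LBA. */
        sector_t requeued = 3 * ZONE_SECTORS + 0x200;

        /* Restart the search from the zone start so the requeued write is
         * found before anything at a higher LBA in that zone. */
        sector_t search_from = round_down_pow2(latest_pos, ZONE_SECTORS);

        printf("search from %#llx, requeued write at %#llx is %s\n",
               search_from, requeued,
               requeued >= search_from ? "reachable" : "missed");
        return 0;
}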

From patchwork Wed May 3 22:52:08 2023
X-Patchwork-Id: 13230624
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche, Damien Le Moal, Ming Lei
Subject: [PATCH v4 11/11] block: mq-deadline: Fix handling of at-head zoned writes
Date: Wed, 3 May 2023 15:52:08 -0700
Message-ID: <20230503225208.2439206-12-bvanassche@acm.org>
In-Reply-To: <20230503225208.2439206-1-bvanassche@acm.org>
References: <20230503225208.2439206-1-bvanassche@acm.org>

Before dispatching a zoned write from the FIFO list, check whether there are
any zoned writes in the RB-tree with a lower LBA for the same zone. This patch
ensures that zoned writes happen in order even if at_head is set for some
writes for a zone and not for others.

Cc: Damien Le Moal
Cc: Christoph Hellwig
Cc: Ming Lei
Signed-off-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
---
 block/mq-deadline.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 6c196182f86c..e556a6dd6616 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -346,7 +346,7 @@ static struct request *
 deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
                       enum dd_data_dir data_dir)
 {
-        struct request *rq;
+        struct request *rq, *rb_rq, *next;
         unsigned long flags;
 
         if (list_empty(&per_prio->fifo_list[data_dir]))
@@ -364,7 +364,12 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
          * zones and these zones are unlocked.
          */
         spin_lock_irqsave(&dd->zone_lock, flags);
-        list_for_each_entry(rq, &per_prio->fifo_list[DD_WRITE], queuelist) {
+        list_for_each_entry_safe(rq, next, &per_prio->fifo_list[DD_WRITE],
+                                 queuelist) {
+                /* Check whether a prior request exists for the same zone. */
+                rb_rq = deadline_from_pos(per_prio, data_dir, blk_rq_pos(rq));
+                if (rb_rq && blk_rq_pos(rb_rq) < blk_rq_pos(rq))
+                        rq = rb_rq;
                 if (blk_req_can_dispatch_to_zone(rq) &&
                     (blk_queue_nonrot(rq->q) ||
                      !deadline_is_seq_write(dd, rq)))
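
The effect of the change above, in isolation: before dispatching a FIFO
candidate, prefer a pending write with a lower LBA in the same zone. A
standalone C sketch with a toy sorted list standing in for the RB-tree and an
assumed power-of-two zone size (not the mq-deadline code):

#include <stdio.h>
#include <stddef.h>

typedef unsigned long long sector_t;

#define ZONE_SECTORS 0x80000ULL        /* assumed power-of-two zone size */

/* Pending zoned writes sorted by LBA (stand-in for the per-priority RB-tree). */
static const sector_t sorted_writes[] = { 0x180000, 0x180040, 0x180080 };
#define NR (sizeof(sorted_writes) / sizeof(sorted_writes[0]))

/* First pending write at or above the start of @pos's zone, or NR if none. */
static size_t first_in_zone(sector_t pos)
{
        sector_t zone_start = pos & ~(ZONE_SECTORS - 1);
        size_t i;

        for (i = 0; i < NR; i++)
                if (sorted_writes[i] >= zone_start)
                        return i;
        return NR;
}

int main(void)
{
        /* The FIFO candidate was queued at the head but does not start the zone. */
        sector_t fifo_candidate = 0x180040;
        size_t i = first_in_zone(fifo_candidate);
        sector_t dispatch = fifo_candidate;

        /* Prefer an older write with a lower LBA in the same zone, as the
         * patch does before dispatching from the FIFO list. */
        if (i < NR && sorted_writes[i] < fifo_candidate)
                dispatch = sorted_writes[i];

        printf("dispatching LBA %#llx first\n", dispatch);
        return 0;
}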