From patchwork Tue Jun 14 17:49:39 2022
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Damien Le Moal,
    "Martin K. Petersen", Khazhy Kumykov, Jaegeuk Kim, Bart Van Assche
Subject: [PATCH 1/5] block: Introduce the blk_rq_is_seq_write() function
Date: Tue, 14 Jun 2022 10:49:39 -0700
Message-Id: <20220614174943.611369-2-bvanassche@acm.org>
In-Reply-To: <20220614174943.611369-1-bvanassche@acm.org>

Introduce a function that makes it easy to verify whether a write
request is directed at a sequential write required or a sequential
write preferred zone.
Cc: Damien Le Moal
Signed-off-by: Bart Van Assche
---
 include/linux/blk-mq.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index e2d9daf7e8dd..3e7feb48105f 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -1129,6 +1129,15 @@ static inline unsigned int blk_rq_zone_is_seq(struct request *rq)
 	return blk_queue_zone_is_seq(rq->q, blk_rq_pos(rq));
 }
 
+/**
+ * blk_rq_is_seq_write() - Whether @rq is a write request for a sequential zone.
+ * @rq: Request to examine.
+ */
+static inline bool blk_rq_is_seq_write(struct request *rq)
+{
+	return req_op(rq) == REQ_OP_WRITE && blk_rq_zone_is_seq(rq);
+}
+
 bool blk_req_needs_zone_write_lock(struct request *rq);
 bool blk_req_zone_write_trylock(struct request *rq);
 void __blk_req_zone_write_lock(struct request *rq);
@@ -1159,6 +1168,11 @@ static inline bool blk_req_can_dispatch_to_zone(struct request *rq)
 	return !blk_req_zone_is_write_locked(rq);
 }
 #else /* CONFIG_BLK_DEV_ZONED */
+static inline bool blk_rq_is_seq_write(struct request *rq)
+{
+	return false;
+}
+
 static inline bool blk_req_needs_zone_write_lock(struct request *rq)
 {
 	return false;
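[Editorial sketch, not part of the patch: how a caller might branch on
the new helper. zoned_write_budget() is a made-up name; blk_rq_is_seq_write(),
nr_hw_queues and nr_requests are the kernel symbols used later in this
series.]

static unsigned int zoned_write_budget(struct request *rq,
				       unsigned int base_retries)
{
	/* Reads and writes to conventional zones keep the base budget. */
	if (!blk_rq_is_seq_write(rq))
		return base_retries;

	/*
	 * A sequential-zone write may be requeued behind every other
	 * in-flight request, so size the budget accordingly, as patches
	 * 2/5 and 4/5 do for SCSI and NVMe.
	 */
	return base_retries + rq->q->nr_hw_queues * rq->q->nr_requests;
}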
From patchwork Tue Jun 14 17:49:40 2022
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Damien Le Moal,
    "Martin K. Petersen", Khazhy Kumykov, Jaegeuk Kim, Bart Van Assche
Subject: [PATCH 2/5] scsi: Retry unaligned zoned writes
Date: Tue, 14 Jun 2022 10:49:40 -0700
Message-Id: <20220614174943.611369-3-bvanassche@acm.org>
In-Reply-To: <20220614174943.611369-1-bvanassche@acm.org>

From ZBC-2: "The device server terminates with CHECK CONDITION status,
with the sense key set to ILLEGAL REQUEST, and the additional sense code
set to UNALIGNED WRITE COMMAND a write command, other than an entire
medium write same command, that specifies:
a) the starting LBA in a sequential write required zone set to a value
   that is not equal to the write pointer for that sequential write
   required zone; or
b) an ending LBA that is not equal to the last logical block within a
   physical block (see SBC-5)."

I am not aware of any other conditions that may trigger the UNALIGNED
WRITE COMMAND response.

Retry unaligned writes in preparation for removing zone locking.
Increase the number of retries for write commands sent to a sequential
zone to the maximum number of outstanding commands.

Cc: Martin K. Petersen
Signed-off-by: Bart Van Assche
---
 drivers/scsi/scsi_error.c | 6 ++++++
 drivers/scsi/sd.c         | 2 ++
 2 files changed, 8 insertions(+)

diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index 49ef864df581..8e22d4ba22a3 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -674,6 +674,12 @@ enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd)
 		fallthrough;
 	case ILLEGAL_REQUEST:
+		/*
+		 * Unaligned write command. Retry immediately to handle
+		 * out-of-order zoned writes.
+		 */
+		if (sshdr.asc == 0x21 && sshdr.ascq == 0x04)
+			return NEEDS_RETRY;
 		if (sshdr.asc == 0x20 ||	/* Invalid command operation code */
 		    sshdr.asc == 0x21 ||	/* Logical block address out of range */
 		    sshdr.asc == 0x22 ||	/* Invalid function */
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index a1a2ac09066f..8d68bd20723e 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1202,6 +1202,8 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
 	cmd->transfersize = sdp->sector_size;
 	cmd->underflow = nr_blocks << 9;
 	cmd->allowed = sdkp->max_retries;
+	if (blk_rq_is_seq_write(rq))
+		cmd->allowed += rq->q->nr_hw_queues * rq->q->nr_requests;
 	cmd->sdb.length = nr_blocks * sdp->sector_size;
 
 	SCSI_LOG_HLQUEUE(1,
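[Editorial aside: a self-contained toy version of the new classification
rule. It is not kernel code; struct sense and retry_unaligned_write()
are made up, while the constants come from the SCSI specs: sense key
0x05 is ILLEGAL REQUEST and asc/ascq 0x21/0x04 is UNALIGNED WRITE
COMMAND.]

#include <stdbool.h>
#include <stdio.h>

struct sense { unsigned char key, asc, ascq; };

/* Mirror of the scsi_check_sense() change: retry a command that failed
 * with ILLEGAL REQUEST / UNALIGNED WRITE COMMAND instead of failing it. */
static bool retry_unaligned_write(const struct sense *s)
{
	return s->key == 0x05 && s->asc == 0x21 && s->ascq == 0x04;
}

int main(void)
{
	struct sense unaligned = { .key = 0x05, .asc = 0x21, .ascq = 0x04 };
	struct sense oob_lba   = { .key = 0x05, .asc = 0x21, .ascq = 0x00 };

	printf("unaligned write: retry=%d\n", retry_unaligned_write(&unaligned));
	printf("LBA out of range: retry=%d\n", retry_unaligned_write(&oob_lba));
	return 0;
}

Placement matters: the new test precedes the existing asc == 0x21
("logical block address out of range") branch visible in the context
lines, which would otherwise swallow the unaligned-write case.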
Petersen" , Khazhy Kumykov , Jaegeuk Kim , Bart Van Assche , Keith Busch , Sagi Grimberg , Chaitanya Kulkarni Subject: [PATCH 3/5] nvme: Make the number of retries request specific Date: Tue, 14 Jun 2022 10:49:41 -0700 Message-Id: <20220614174943.611369-4-bvanassche@acm.org> X-Mailer: git-send-email 2.36.1.476.g0c4daa206d-goog In-Reply-To: <20220614174943.611369-1-bvanassche@acm.org> References: <20220614174943.611369-1-bvanassche@acm.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Add support for specifying the number of retries per NVMe request. Cc: Christoph Hellwig Cc: Keith Busch Cc: Sagi Grimberg Cc: Chaitanya Kulkarni Signed-off-by: Bart Van Assche --- drivers/nvme/host/core.c | 3 ++- drivers/nvme/host/nvme.h | 1 + 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 24165daee3c8..fe0d09fc70ba 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -339,7 +339,7 @@ static inline enum nvme_disposition nvme_decide_disposition(struct request *req) if (blk_noretry_request(req) || (nvme_req(req)->status & NVME_SC_DNR) || - nvme_req(req)->retries >= nvme_max_retries) + nvme_req(req)->retries >= nvme_req(req)->max_retries) return COMPLETE; if (req->cmd_flags & REQ_NVME_MPATH) { @@ -632,6 +632,7 @@ static inline void nvme_clear_nvme_request(struct request *req) { nvme_req(req)->status = 0; nvme_req(req)->retries = 0; + nvme_req(req)->max_retries = nvme_max_retries; nvme_req(req)->flags = 0; req->rq_flags |= RQF_DONTPREP; } diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 9b72b6ecf33c..15bb36923d09 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -160,6 +160,7 @@ struct nvme_request { union nvme_result result; u8 genctr; u8 retries; + u8 max_retries; u8 flags; u16 status; struct nvme_ctrl *ctrl; From patchwork Tue Jun 14 17:49:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bart Van Assche X-Patchwork-Id: 12881339 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F33EAC433EF for ; Tue, 14 Jun 2022 17:50:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345050AbiFNRuD (ORCPT ); Tue, 14 Jun 2022 13:50:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41728 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345067AbiFNRt7 (ORCPT ); Tue, 14 Jun 2022 13:49:59 -0400 Received: from mail-pl1-f176.google.com (mail-pl1-f176.google.com [209.85.214.176]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C739834B93 for ; Tue, 14 Jun 2022 10:49:58 -0700 (PDT) Received: by mail-pl1-f176.google.com with SMTP id h1so8329704plf.11 for ; Tue, 14 Jun 2022 10:49:58 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=TmdhgrhYHwEjRTnnC3UU3+gDPCKazau9D7Dw49utVSE=; b=oHhpxD2ZBvLd2DYTHzobA/xla7/CMSbCrTSPXHZc7PQisAjnNRnidWG5umXhmRdsNT dZQQQk4swohsc1+x1LEvDVqTK9j+R2u/SOJY/0maYwGE6Ycs6bbKc3H3v+vPeruL4tND 3uiY1gI2h/8Z8GoAeaxpaTTGsw+u92i5gFr7Ff8GvdlzzNo4nscgr2M51o606yP/iruH 
From patchwork Tue Jun 14 17:49:42 2022
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Damien Le Moal,
    "Martin K. Petersen", Khazhy Kumykov, Jaegeuk Kim, Bart Van Assche,
    Keith Busch, Sagi Grimberg, Chaitanya Kulkarni
Subject: [PATCH 4/5] nvme: Increase the number of retries for zoned writes
Date: Tue, 14 Jun 2022 10:49:42 -0700
Message-Id: <20220614174943.611369-5-bvanassche@acm.org>
In-Reply-To: <20220614174943.611369-1-bvanassche@acm.org>

Before removing zone locking, increase the number of retries for zoned
writes to the maximum number of outstanding commands.

Cc: Christoph Hellwig
Cc: Keith Busch
Cc: Sagi Grimberg
Cc: Chaitanya Kulkarni
Signed-off-by: Bart Van Assche
---
 drivers/nvme/host/core.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index fe0d09fc70ba..347a06118282 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -854,6 +854,10 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns,
 	if (req->cmd_flags & REQ_RAHEAD)
 		dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
 
+	if (blk_rq_is_seq_write(req))
+		nvme_req(req)->max_retries += req->q->nr_hw_queues *
+			req->q->nr_requests;
+
 	cmnd->rw.opcode = op;
 	cmnd->rw.flags = 0;
 	cmnd->rw.nsid = cpu_to_le32(ns->head->ns_id);
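[Worked example with assumed numbers, not from the patch: a controller
exposing 2 I/O queues with 64 tags each can have at most 2 * 64 = 128
commands in flight, so at most 127 other commands can be dispatched
ahead of a given zoned write. Raising max_retries by
nr_hw_queues * nr_requests = 128 therefore covers the worst-case
reordering while still fitting the u8 max_retries field added in patch
3/5.]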
From patchwork Tue Jun 14 17:49:43 2022
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Damien Le Moal,
    "Martin K. Petersen", Khazhy Kumykov, Jaegeuk Kim, Bart Van Assche
Subject: [PATCH 5/5] block/mq-deadline: Remove zone locking
Date: Tue, 14 Jun 2022 10:49:43 -0700
Message-Id: <20220614174943.611369-6-bvanassche@acm.org>
In-Reply-To: <20220614174943.611369-1-bvanassche@acm.org>

Measurements have shown that limiting the queue depth to one has a
significant negative performance impact on zoned UFS devices. Hence
this patch, which removes zone locking from the mq-deadline scheduler.
This patch is based on the following assumptions:
- Applications submit write requests to sequential write required
  zones in order.
- If such write requests get reordered by the software or hardware
  queue mechanism, nr_hw_queues * nr_requests - 1 retries are
  sufficient to reorder the write requests (a toy model of this bound
  follows the patch).
- It happens infrequently that zoned write requests are reordered by
  the block layer.
- Either no I/O scheduler is used or an I/O scheduler is used that
  submits write requests per zone in LBA order.

DD_BE_PRIO is selected for sequential writes to preserve the LBA order.

See also commit 5700f69178e9 ("mq-deadline: Introduce zone locking
support").

Cc: Damien Le Moal
Signed-off-by: Bart Van Assche
---
 block/mq-deadline.c | 74 ++++-----------------------------------------
 1 file changed, 6 insertions(+), 68 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 6ed602b2f80a..e168fc9a980a 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -104,7 +104,6 @@ struct deadline_data {
 	int prio_aging_expire;
 
 	spinlock_t lock;
-	spinlock_t zone_lock;
 };
 
 /*
 * Maps an I/O priority class to a deadline scheduler priority.
 */
@@ -285,30 +284,10 @@ static struct request *
 deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
 		      enum dd_data_dir data_dir)
 {
-	struct request *rq;
-	unsigned long flags;
-
 	if (list_empty(&per_prio->fifo_list[data_dir]))
 		return NULL;
 
-	rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next);
-	if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
-		return rq;
-
-	/*
-	 * Look for a write request that can be dispatched, that is one with
-	 * an unlocked target zone.
-	 */
-	spin_lock_irqsave(&dd->zone_lock, flags);
-	list_for_each_entry(rq, &per_prio->fifo_list[DD_WRITE], queuelist) {
-		if (blk_req_can_dispatch_to_zone(rq))
-			goto out;
-	}
-	rq = NULL;
-out:
-	spin_unlock_irqrestore(&dd->zone_lock, flags);
-
-	return rq;
+	return rq_entry_fifo(per_prio->fifo_list[data_dir].next);
 }
 
 /*
@@ -319,29 +298,7 @@ static struct request *
 deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
 		      enum dd_data_dir data_dir)
 {
-	struct request *rq;
-	unsigned long flags;
-
-	rq = per_prio->next_rq[data_dir];
-	if (!rq)
-		return NULL;
-
-	if (data_dir == DD_READ || !blk_queue_is_zoned(rq->q))
-		return rq;
-
-	/*
-	 * Look for a write request that can be dispatched, that is one with
-	 * an unlocked target zone.
-	 */
-	spin_lock_irqsave(&dd->zone_lock, flags);
-	while (rq) {
-		if (blk_req_can_dispatch_to_zone(rq))
-			break;
-		rq = deadline_latter_request(rq);
-	}
-	spin_unlock_irqrestore(&dd->zone_lock, flags);
-
-	return rq;
+	return per_prio->next_rq[data_dir];
 }
 
 /*
@@ -467,10 +424,6 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 	ioprio_class = dd_rq_ioclass(rq);
 	prio = ioprio_class_to_prio[ioprio_class];
 	dd->per_prio[prio].stats.dispatched++;
-	/*
-	 * If the request needs its target zone locked, do it.
-	 */
-	blk_req_zone_write_lock(rq);
 	rq->rq_flags |= RQF_STARTED;
 	return rq;
 }
@@ -640,7 +593,6 @@ static int dd_init_sched(struct request_queue *q, struct elevator_type *e)
 	dd->fifo_batch = fifo_batch;
 	dd->prio_aging_expire = prio_aging_expire;
 	spin_lock_init(&dd->lock);
-	spin_lock_init(&dd->zone_lock);
 
 	q->elevator = eq;
 	return 0;
@@ -716,17 +668,13 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 	u8 ioprio_class = IOPRIO_PRIO_CLASS(ioprio);
 	struct dd_per_prio *per_prio;
 	enum dd_prio prio;
+	bool seq_write = blk_rq_is_seq_write(rq);
 	LIST_HEAD(free);
 
 	lockdep_assert_held(&dd->lock);
 
-	/*
-	 * This may be a requeue of a write request that has locked its
-	 * target zone. If it is the case, this releases the zone lock.
-	 */
-	blk_req_zone_write_unlock(rq);
-
-	prio = ioprio_class_to_prio[ioprio_class];
+	prio = seq_write ? DD_BE_PRIO :
+		ioprio_class_to_prio[ioprio_class];
 	per_prio = &dd->per_prio[prio];
 	if (!rq->elv.priv[0]) {
 		per_prio->stats.inserted++;
@@ -740,7 +688,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 
 	trace_block_rq_insert(rq);
 
-	if (at_head) {
+	if (at_head && !seq_write) {
 		list_add(&rq->queuelist, &per_prio->dispatch);
 		rq->fifo_time = jiffies;
 	} else {
@@ -819,16 +767,6 @@ static void dd_finish_request(struct request *rq)
 		return;
 
 	atomic_inc(&per_prio->stats.completed);
-
-	if (blk_queue_is_zoned(q)) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&dd->zone_lock, flags);
-		blk_req_zone_write_unlock(rq);
-		if (!list_empty(&per_prio->fifo_list[DD_WRITE]))
-			blk_mq_sched_mark_restart_hctx(rq->mq_hctx);
-		spin_unlock_irqrestore(&dd->zone_lock, flags);
-	}
 }
 
 static bool dd_has_work_for_prio(struct dd_per_prio *per_prio)
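[Editorial toy model of the "nr_hw_queues * nr_requests - 1 retries are
sufficient" assumption; ordinary user-space C, every name made up. M
writes to one zone arrive in worst-case reverse order; a write succeeds
only when its LBA matches the zone write pointer, otherwise it fails as
unaligned and is re-queued, as the retry paths in patches 2/5 and 4/5
allow. The deepest-queued write needs exactly M - 1 retries.]

#include <stdio.h>

#define M 8	/* stands in for nr_hw_queues * nr_requests */

int main(void)
{
	int lba[M];		/* LBA targeted by each queued write */
	int retries[M] = { 0 };	/* retries needed per LBA */
	int wp = 0;		/* zone write pointer */
	int pending = M;

	for (int i = 0; i < M; i++)
		lba[i] = M - 1 - i;	/* worst case: reverse LBA order */

	while (pending) {
		for (int i = 0; i < M; i++) {
			if (lba[i] < 0)
				continue;	/* already completed */
			if (lba[i] == wp) {	/* aligned: succeeds */
				wp++;
				lba[i] = -1;
				pending--;
			} else {		/* unaligned: re-queued */
				retries[lba[i]]++;
			}
		}
	}

	int max = 0;
	for (int i = 0; i < M; i++)
		if (retries[i] > max)
			max = retries[i];
	printf("worst-case retries: %d (M - 1 = %d)\n", max, M - 1);
	return 0;
}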