From patchwork Tue Dec 17 22:38:07 2024
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13912603
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Damien Le Moal, Bart Van Assche
Subject: [PATCH v2 1/3] block: Optimize blk_mq_submit_bio() for the cache hit scenario
Date: Tue, 17 Dec 2024 14:38:07 -0800
Message-ID: <20241217223809.683035-2-bvanassche@acm.org>
In-Reply-To: <20241217223809.683035-1-bvanassche@acm.org>
References: <20241217223809.683035-1-bvanassche@acm.org>

Help the CPU branch predictor in case of a cache hit by handling the
cache hit scenario first.

Cc: Damien Le Moal
Cc: Christoph Hellwig
Signed-off-by: Bart Van Assche
Reviewed-by: Damien Le Moal
---
 block/blk-mq.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7ee21346a41e..8d2aab4d9ba9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3102,12 +3102,12 @@ void blk_mq_submit_bio(struct bio *bio)
                 goto queue_exit;
 
 new_request:
-        if (!rq) {
+        if (rq) {
+                blk_mq_use_cached_rq(rq, plug, bio);
+        } else {
                 rq = blk_mq_get_new_requests(q, plug, bio, nr_segs);
                 if (unlikely(!rq))
                         goto queue_exit;
-        } else {
-                blk_mq_use_cached_rq(rq, plug, bio);
         }
 
         trace_block_getrq(bio);
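A minimal, compilable sketch of the reordering this patch applies, with
illustrative names only (not the blk-mq API): the common cache-hit case is
written first so it becomes the straight-line path.

#include <stdio.h>

struct request { int tag; };

static struct request cached = { .tag = 42 };

/* Stand-in for the plug cache lookup; pretend it usually succeeds. */
static struct request *peek_cached_request(void)
{
        return &cached;
}

/* Stand-in for the slow allocation path. */
static struct request *get_new_request(void)
{
        return NULL;
}

static struct request *get_request(void)
{
        struct request *rq = peek_cached_request();

        /* Common case handled first: a cache hit falls straight through. */
        if (rq)
                return rq;
        return get_new_request();
}

int main(void)
{
        printf("tag %d\n", get_request()->tag);
        return 0;
}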
From patchwork Tue Dec 17 22:38:08 2024
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13912604
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Damien Le Moal, Bart Van Assche
Subject: [PATCH v2 2/3] blk-mq: Move more error handling into blk_mq_submit_bio()
Date: Tue, 17 Dec 2024 14:38:08 -0800
Message-ID: <20241217223809.683035-3-bvanassche@acm.org>
In-Reply-To: <20241217223809.683035-1-bvanassche@acm.org>
References: <20241217223809.683035-1-bvanassche@acm.org>

The error handling code in blk_mq_get_new_requests() cannot be understood
without knowing that this function is only called by blk_mq_submit_bio().
Hence move the code for handling blk_mq_get_new_requests() failures into
blk_mq_submit_bio().

Cc: Damien Le Moal
Cc: Christoph Hellwig
Signed-off-by: Bart Van Assche
---
 block/blk-mq.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8d2aab4d9ba9..f4300e608ed8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2968,12 +2968,9 @@ static struct request *blk_mq_get_new_requests(struct request_queue *q,
         }
 
         rq = __blk_mq_alloc_requests(&data);
-        if (rq)
-                return rq;
-        rq_qos_cleanup(q, bio);
-        if (bio->bi_opf & REQ_NOWAIT)
-                bio_wouldblock_error(bio);
-        return NULL;
+        if (!rq)
+                rq_qos_cleanup(q, bio);
+        return rq;
 }
 
 /*
@@ -3106,8 +3103,11 @@ void blk_mq_submit_bio(struct bio *bio)
                 blk_mq_use_cached_rq(rq, plug, bio);
         } else {
                 rq = blk_mq_get_new_requests(q, plug, bio, nr_segs);
-                if (unlikely(!rq))
+                if (unlikely(!rq)) {
+                        if (bio->bi_opf & REQ_NOWAIT)
+                                bio_wouldblock_error(bio);
                         goto queue_exit;
+                }
         }
 
         trace_block_getrq(bio);
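For reference, a compilable sketch of the refactoring pattern used here,
with illustrative names (not the actual blk-mq functions): the helper only
reports failure and cleans up its own state, while the caller decides how to
fail the bio.

#include <stdbool.h>
#include <stdio.h>

struct bio { bool nowait; };

/* Stand-in for the real completion helper. */
static void bio_wouldblock_error_stub(struct bio *bio)
{
        (void)bio;
        puts("-EAGAIN");
}

/* Helper: returns NULL on failure, no bio-level error handling here. */
static void *get_new_request(struct bio *bio)
{
        (void)bio;
        return NULL;            /* pretend the allocation failed */
}

static void submit_bio_sketch(struct bio *bio)
{
        void *rq = get_new_request(bio);

        if (!rq) {
                /* The failure policy now lives in the caller. */
                if (bio->nowait)
                        bio_wouldblock_error_stub(bio);
                return;
        }
        /* ... dispatch rq ... */
}

int main(void)
{
        struct bio b = { .nowait = true };

        submit_bio_sketch(&b);
        return 0;
}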
From patchwork Tue Dec 17 22:38:09 2024
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 13912605
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Damien Le Moal, Bart Van Assche
Subject: [PATCH v2 3/3] blk-zoned: Move more error handling into blk_mq_submit_bio()
Date: Tue, 17 Dec 2024 14:38:09 -0800
Message-ID: <20241217223809.683035-4-bvanassche@acm.org>
In-Reply-To: <20241217223809.683035-1-bvanassche@acm.org>
References: <20241217223809.683035-1-bvanassche@acm.org>

The error handling code in blk_zone_plug_bio() and in the functions called
by blk_zone_plug_bio() cannot be understood without knowing that these
functions are only called by blk_mq_submit_bio(). Move that error handling
code into blk_mq_submit_bio() such that all error handling for
blk_mq_submit_bio() occurs inside blk_mq_submit_bio() itself.

Cc: Damien Le Moal
Cc: Christoph Hellwig
Signed-off-by: Bart Van Assche
---
 block/blk-mq.c         | 16 ++++++++--
 block/blk-zoned.c      | 67 +++++++++++++++++++-----------------
 include/linux/blkdev.h | 13 ++++++--
 3 files changed, 56 insertions(+), 40 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f4300e608ed8..2449f412dd00 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3095,8 +3095,20 @@ void blk_mq_submit_bio(struct bio *bio)
         if (blk_mq_attempt_bio_merge(q, bio, nr_segs))
                 goto queue_exit;
 
-        if (blk_queue_is_zoned(q) && blk_zone_plug_bio(bio, nr_segs))
-                goto queue_exit;
+        if (blk_queue_is_zoned(q)) {
+                switch (blk_zone_plug_bio(bio, nr_segs)) {
+                case bzp_not_plugged:
+                        break;
+                case bzp_plugged:
+                        goto queue_exit;
+                case bzp_wouldblock:
+                        bio_wouldblock_error(bio);
+                        goto queue_exit;
+                case bzp_failed:
+                        bio_io_error(bio);
+                        goto queue_exit;
+                }
+        }
 
 new_request:
         if (rq) {
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 4b0be40a8ea7..cb2c05d8b1eb 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -675,8 +675,8 @@ static int disk_zone_sync_wp_offset(struct gendisk *disk, sector_t sector)
                                   disk_report_zones_cb, &args);
 }
 
-static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
-                                                  unsigned int wp_offset)
+static enum blk_zone_plug_status
+blk_zone_wplug_handle_reset_or_finish(struct bio *bio, unsigned int wp_offset)
 {
         struct gendisk *disk = bio->bi_bdev->bd_disk;
         sector_t sector = bio->bi_iter.bi_sector;
@@ -684,10 +684,8 @@ static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
         unsigned long flags;
 
         /* Conventional zones cannot be reset nor finished. */
-        if (!bdev_zone_is_seq(bio->bi_bdev, sector)) {
-                bio_io_error(bio);
-                return true;
-        }
+        if (!bdev_zone_is_seq(bio->bi_bdev, sector))
+                return bzp_failed;
 
         /*
          * No-wait reset or finish BIOs do not make much sense as the callers
@@ -713,10 +711,11 @@ static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
                 disk_put_zone_wplug(zwplug);
         }
 
-        return false;
+        return bzp_not_plugged;
 }
 
-static bool blk_zone_wplug_handle_reset_all(struct bio *bio)
+static enum blk_zone_plug_status
+blk_zone_wplug_handle_reset_all(struct bio *bio)
 {
         struct gendisk *disk = bio->bi_bdev->bd_disk;
         struct blk_zone_wplug *zwplug;
@@ -739,7 +738,7 @@ static bool blk_zone_wplug_handle_reset_all(struct bio *bio)
                 }
         }
 
-        return false;
+        return bzp_not_plugged;
 }
 
 static void disk_zone_wplug_schedule_bio_work(struct gendisk *disk,
@@ -964,7 +963,8 @@ static bool blk_zone_wplug_prepare_bio(struct blk_zone_wplug *zwplug,
         return true;
 }
 
-static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+static enum blk_zone_plug_status
+blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
 {
         struct gendisk *disk = bio->bi_bdev->bd_disk;
         sector_t sector = bio->bi_iter.bi_sector;
@@ -980,19 +980,15 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
          * BIO-based devices, it is the responsibility of the driver to split
          * the bio before submitting it.
          */
-        if (WARN_ON_ONCE(bio_straddles_zones(bio))) {
-                bio_io_error(bio);
-                return true;
-        }
+        if (WARN_ON_ONCE(bio_straddles_zones(bio)))
+                return bzp_failed;
 
         /* Conventional zones do not need write plugging. */
         if (!bdev_zone_is_seq(bio->bi_bdev, sector)) {
                 /* Zone append to conventional zones is not allowed. */
-                if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
-                        bio_io_error(bio);
-                        return true;
-                }
-                return false;
+                if (bio_op(bio) == REQ_OP_ZONE_APPEND)
+                        return bzp_failed;
+                return bzp_not_plugged;
         }
 
         if (bio->bi_opf & REQ_NOWAIT)
@@ -1001,10 +997,9 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
         zwplug = disk_get_and_lock_zone_wplug(disk, sector, gfp_mask, &flags);
         if (!zwplug) {
                 if (bio->bi_opf & REQ_NOWAIT)
-                        bio_wouldblock_error(bio);
+                        return bzp_wouldblock;
                 else
-                        bio_io_error(bio);
-                return true;
+                        return bzp_failed;
         }
 
         /* Indicate that this BIO is being handled using zone write plugging. */
@@ -1022,22 +1017,21 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
 
         if (!blk_zone_wplug_prepare_bio(zwplug, bio)) {
                 spin_unlock_irqrestore(&zwplug->lock, flags);
-                bio_io_error(bio);
-                return true;
+                return bzp_failed;
         }
 
         zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED;
 
         spin_unlock_irqrestore(&zwplug->lock, flags);
 
-        return false;
+        return bzp_not_plugged;
 
 plug:
         disk_zone_wplug_add_bio(disk, zwplug, bio, nr_segs);
 
         spin_unlock_irqrestore(&zwplug->lock, flags);
 
-        return true;
+        return bzp_plugged;
 }
 
 /**
@@ -1048,16 +1042,17 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
  * Handle write, write zeroes and zone append operations requiring emulation
  * using zone write plugging.
  *
- * Return true whenever @bio execution needs to be delayed through the zone
- * write plug. Otherwise, return false to let the submission path process
- * @bio normally.
+ * Return %bzp_plugged if the @bio has been scheduled for delayed execution by
+ * adding it to zwplug->bio_list; %bzp_failed if the caller should fail @bio or
+ * %bzp_not_plugged to let the submission path process @bio normally.
  */
-bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+enum blk_zone_plug_status blk_zone_plug_bio(struct bio *bio,
+                                            unsigned int nr_segs)
 {
         struct block_device *bdev = bio->bi_bdev;
 
         if (!bdev->bd_disk->zone_wplugs_hash)
-                return false;
+                return bzp_not_plugged;
 
         /*
          * If the BIO already has the plugging flag set, then it was already
@@ -1065,7 +1060,7 @@ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
          * plug bio submit work.
          */
         if (bio_flagged(bio, BIO_ZONE_WRITE_PLUGGING))
-                return false;
+                return bzp_not_plugged;
 
         /*
          * We do not need to do anything special for empty flush BIOs, e.g
          * the written data.
          */
         if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
-                return false;
+                return bzp_not_plugged;
 
         /*
          * Regular writes and write zeroes need to be handled through the target
          *
@@ -1097,7 +1092,7 @@ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
         switch (bio_op(bio)) {
         case REQ_OP_ZONE_APPEND:
                 if (!bdev_emulates_zone_append(bdev))
-                        return false;
+                        return bzp_not_plugged;
                 fallthrough;
         case REQ_OP_WRITE:
         case REQ_OP_WRITE_ZEROES:
@@ -1110,10 +1105,10 @@ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
         case REQ_OP_ZONE_RESET_ALL:
                 return blk_zone_wplug_handle_reset_all(bio);
         default:
-                return false;
+                return bzp_not_plugged;
         }
 
-        return false;
+        return bzp_not_plugged;
 }
 EXPORT_SYMBOL_GPL(blk_zone_plug_bio);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 39e5ffbf6d31..22f3ca58522d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -690,18 +690,27 @@ static inline bool blk_queue_is_zoned(struct request_queue *q)
                 (q->limits.features & BLK_FEAT_ZONED);
 }
 
+enum blk_zone_plug_status {
+        bzp_not_plugged,
+        bzp_plugged,
+        bzp_wouldblock,
+        bzp_failed,
+};
+
 #ifdef CONFIG_BLK_DEV_ZONED
 static inline unsigned int disk_nr_zones(struct gendisk *disk)
 {
         return disk->nr_zones;
 }
 
-bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs);
+enum blk_zone_plug_status blk_zone_plug_bio(struct bio *bio,
+                                            unsigned int nr_segs);
 #else /* CONFIG_BLK_DEV_ZONED */
 static inline unsigned int disk_nr_zones(struct gendisk *disk)
 {
         return 0;
 }
-static inline bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+static inline enum blk_zone_plug_status blk_zone_plug_bio(struct bio *bio,
+                                                          unsigned int nr_segs)
 {
         return false;
 }
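A compilable sketch of how a caller consumes the new return values,
mirroring the blk_mq_submit_bio() hunk above (the plugging function itself
is stubbed out and the bio error helpers are stand-ins that just print what
the caller would do):

#include <stdio.h>

enum blk_zone_plug_status {
        bzp_not_plugged,
        bzp_plugged,
        bzp_wouldblock,
        bzp_failed,
};

/* Stand-in for blk_zone_plug_bio(); change the status to see each reaction. */
static enum blk_zone_plug_status zone_plug_stub(void)
{
        return bzp_wouldblock;
}

static void submit_sketch(void)
{
        switch (zone_plug_stub()) {
        case bzp_not_plugged:
                break;                          /* continue normal submission */
        case bzp_plugged:
                return;                         /* bio queued on a zone write plug */
        case bzp_wouldblock:
                puts("bio_wouldblock_error()"); /* caller fails the bio with -EAGAIN */
                return;
        case bzp_failed:
                puts("bio_io_error()");         /* caller fails the bio with -EIO */
                return;
        }
        puts("allocate request and dispatch");
}

int main(void)
{
        submit_sketch();
        return 0;
}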