From patchwork Mon Oct 25 07:05:06 2021
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen", Miquel Raynal, Richard Weinberger,
 Vignesh Raghavendra, linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 01/12] block: move blk_rq_err_bytes to scsi
Date: Mon, 25 Oct 2021 09:05:06 +0200
Message-Id: <20211025070517.1548584-2-hch@lst.de>
In-Reply-To: <20211025070517.1548584-1-hch@lst.de>
References: <20211025070517.1548584-1-hch@lst.de>

blk_rq_err_bytes is only used by the scsi midlayer, so move it there.
Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c        | 41 ----------------------------------------
 drivers/scsi/scsi_lib.c | 42 ++++++++++++++++++++++++++++++++++++++++-
 include/linux/blk-mq.h  |  3 ---
 3 files changed, 41 insertions(+), 45 deletions(-)
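Since the function moves verbatim, the failfast-boundary logic it implements is easiest to see in isolation. The following is a standalone sketch with mock types and flag values (not the kernel's) that mirrors the loop being moved:

#include <stdio.h>

#define MOCK_FAILFAST_MASK 0x7  /* stand-in for REQ_FAILFAST_MASK */

struct mock_bio {
        unsigned int opf;       /* stand-in for bio->bi_opf */
        unsigned int size;      /* stand-in for bio->bi_iter.bi_size */
        struct mock_bio *next;  /* stand-in for bio->bi_next */
};

/*
 * Sum bytes from the front of the chain while each bio carries at
 * least the failfast bits of the request; stop at the first bio that
 * is less eager to fail.
 */
static unsigned int mock_err_bytes(const struct mock_bio *head,
                                   unsigned int cmd_flags)
{
        unsigned int ff = cmd_flags & MOCK_FAILFAST_MASK;
        unsigned int bytes = 0;
        const struct mock_bio *bio;

        for (bio = head; bio; bio = bio->next) {
                if ((bio->opf & ff) != ff)
                        break;
                bytes += bio->size;
        }
        return bytes;
}

int main(void)
{
        struct mock_bio c = { .opf = 0x0, .size = 4096, .next = NULL };
        struct mock_bio b = { .opf = 0x7, .size = 8192, .next = &c };
        struct mock_bio a = { .opf = 0x7, .size = 4096, .next = &b };

        /* the first two bios share all failfast bits: prints 12288 */
        printf("%u\n", mock_err_bytes(&a, 0x7));
        return 0;
}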
diff --git a/block/blk-core.c b/block/blk-core.c
index 5ffe05b1d17ce..3edc5c5178dd6 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1221,47 +1221,6 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
 }
 EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
 
-/**
- * blk_rq_err_bytes - determine number of bytes till the next failure boundary
- * @rq: request to examine
- *
- * Description:
- *     A request could be merge of IOs which require different failure
- *     handling.  This function determines the number of bytes which
- *     can be failed from the beginning of the request without
- *     crossing into area which need to be retried further.
- *
- * Return:
- *     The number of bytes to fail.
- */
-unsigned int blk_rq_err_bytes(const struct request *rq)
-{
-        unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
-        unsigned int bytes = 0;
-        struct bio *bio;
-
-        if (!(rq->rq_flags & RQF_MIXED_MERGE))
-                return blk_rq_bytes(rq);
-
-        /*
-         * Currently the only 'mixing' which can happen is between
-         * different fastfail types.  We can safely fail portions
-         * which have all the failfast bits that the first one has -
-         * the ones which are at least as eager to fail as the first
-         * one.
-         */
-        for (bio = rq->bio; bio; bio = bio->bi_next) {
-                if ((bio->bi_opf & ff) != ff)
-                        break;
-                bytes += bio->bi_iter.bi_size;
-        }
-
-        /* this could lead to infinite loop */
-        BUG_ON(blk_rq_bytes(rq) && !bytes);
-        return bytes;
-}
-EXPORT_SYMBOL_GPL(blk_rq_err_bytes);
-
 static void update_io_ticks(struct block_device *part, unsigned long now,
                 bool end)
 {
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 9823b65d15368..1f7e68699941f 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -617,6 +617,46 @@ static blk_status_t scsi_result_to_blk_status(struct scsi_cmnd *cmd, int result)
         }
 }
 
+/**
+ * scsi_rq_err_bytes - determine number of bytes till the next failure boundary
+ * @rq: request to examine
+ *
+ * Description:
+ *     A request could be merge of IOs which require different failure
+ *     handling.  This function determines the number of bytes which
+ *     can be failed from the beginning of the request without
+ *     crossing into area which need to be retried further.
+ *
+ * Return:
+ *     The number of bytes to fail.
+ */
+static unsigned int scsi_rq_err_bytes(const struct request *rq)
+{
+        unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
+        unsigned int bytes = 0;
+        struct bio *bio;
+
+        if (!(rq->rq_flags & RQF_MIXED_MERGE))
+                return blk_rq_bytes(rq);
+
+        /*
+         * Currently the only 'mixing' which can happen is between
+         * different fastfail types.  We can safely fail portions
+         * which have all the failfast bits that the first one has -
+         * the ones which are at least as eager to fail as the first
+         * one.
+         */
+        for (bio = rq->bio; bio; bio = bio->bi_next) {
+                if ((bio->bi_opf & ff) != ff)
+                        break;
+                bytes += bio->bi_iter.bi_size;
+        }
+
+        /* this could lead to infinite loop */
+        BUG_ON(blk_rq_bytes(rq) && !bytes);
+        return bytes;
+}
+
 /* Helper for scsi_io_completion() when "reprep" action required. */
 static void scsi_io_completion_reprep(struct scsi_cmnd *cmd,
                                       struct request_queue *q)
@@ -794,7 +834,7 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
                                 scsi_print_command(cmd);
                         }
                 }
-                if (!scsi_end_request(req, blk_stat, blk_rq_err_bytes(req)))
+                if (!scsi_end_request(req, blk_stat, scsi_rq_err_bytes(req)))
                         return;
                 fallthrough;
         case ACTION_REPREP:
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index ebc45cf0450b3..9f9634cd4cfd2 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -949,7 +949,6 @@ struct req_iterator {
  * blk_rq_pos()                 : the current sector
  * blk_rq_bytes()               : bytes left in the entire request
  * blk_rq_cur_bytes()           : bytes left in the current segment
- * blk_rq_err_bytes()           : bytes left till the next error boundary
  * blk_rq_sectors()             : sectors left in the entire request
  * blk_rq_cur_sectors()         : sectors left in the current segment
 * blk_rq_stats_sectors()       : sectors of the entire request used for stats
@@ -973,8 +972,6 @@ static inline int blk_rq_cur_bytes(const struct request *rq)
         return bio_iovec(rq->bio).bv_len;
 }
 
-unsigned int blk_rq_err_bytes(const struct request *rq);
-
 static inline unsigned int blk_rq_sectors(const struct request *rq)
 {
         return blk_rq_bytes(rq) >> SECTOR_SHIFT;

From patchwork Mon Oct 25 07:05:07 2021
From: Christoph Hellwig
To: Jens Axboe
Petersen" , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org Subject: [PATCH 02/12] block: remove blk_{get,put}_request Date: Mon, 25 Oct 2021 09:05:07 +0200 Message-Id: <20211025070517.1548584-3-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211025070517.1548584-1-hch@lst.de> References: <20211025070517.1548584-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org These are now pointless wrappers around blk_mq_{alloc,free}_request, so remove them. Signed-off-by: Christoph Hellwig Reviewed-by: Chaitanya Kulkarni --- block/blk-core.c | 21 --------------------- drivers/block/paride/pd.c | 4 ++-- drivers/block/pktcdvd.c | 2 +- drivers/block/virtio_blk.c | 4 ++-- drivers/md/dm-mpath.c | 4 ++-- drivers/mmc/core/block.c | 20 ++++++++++---------- drivers/scsi/scsi_bsg.c | 2 +- drivers/scsi/scsi_error.c | 2 +- drivers/scsi/scsi_ioctl.c | 4 ++-- drivers/scsi/scsi_lib.c | 4 ++-- drivers/scsi/sg.c | 6 +++--- drivers/scsi/sr.c | 2 +- drivers/scsi/st.c | 4 ++-- drivers/scsi/ufs/ufshcd.c | 20 ++++++++++---------- drivers/scsi/ufs/ufshpb.c | 8 ++++---- drivers/target/target_core_pscsi.c | 4 ++-- include/linux/blk-mq.h | 3 --- 17 files changed, 45 insertions(+), 69 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 3edc5c5178dd6..0d267cfb71484 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -597,27 +597,6 @@ bool blk_get_queue(struct request_queue *q) } EXPORT_SYMBOL(blk_get_queue); -/** - * blk_get_request - allocate a request - * @q: request queue to allocate a request for - * @op: operation (REQ_OP_*) and REQ_* flags, e.g. REQ_SYNC. - * @flags: BLK_MQ_REQ_* flags, e.g. BLK_MQ_REQ_NOWAIT. 
diff --git a/block/blk-core.c b/block/blk-core.c
index 3edc5c5178dd6..0d267cfb71484 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -597,27 +597,6 @@ bool blk_get_queue(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_get_queue);
 
-/**
- * blk_get_request - allocate a request
- * @q: request queue to allocate a request for
- * @op: operation (REQ_OP_*) and REQ_* flags, e.g. REQ_SYNC.
- * @flags: BLK_MQ_REQ_* flags, e.g. BLK_MQ_REQ_NOWAIT.
- */
-struct request *blk_get_request(struct request_queue *q, unsigned int op,
-                blk_mq_req_flags_t flags)
-{
-        WARN_ON_ONCE(op & REQ_NOWAIT);
-        WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PM));
-        return blk_mq_alloc_request(q, op, flags);
-}
-EXPORT_SYMBOL(blk_get_request);
-
-void blk_put_request(struct request *req)
-{
-        blk_mq_free_request(req);
-}
-EXPORT_SYMBOL(blk_put_request);
-
 static void handle_bad_sector(struct bio *bio, sector_t maxsector)
 {
         char b[BDEVNAME_SIZE];
diff --git a/drivers/block/paride/pd.c b/drivers/block/paride/pd.c
index 675327df6aff9..9cd0bd509b887 100644
--- a/drivers/block/paride/pd.c
+++ b/drivers/block/paride/pd.c
@@ -775,14 +775,14 @@ static int pd_special_command(struct pd_unit *disk,
         struct request *rq;
         struct pd_req *req;
 
-        rq = blk_get_request(disk->gd->queue, REQ_OP_DRV_IN, 0);
+        rq = blk_mq_alloc_request(disk->gd->queue, REQ_OP_DRV_IN, 0);
         if (IS_ERR(rq))
                 return PTR_ERR(rq);
         req = blk_mq_rq_to_pdu(rq);
 
         req->func = func;
         blk_execute_rq(disk->gd, rq, 0);
-        blk_put_request(rq);
+        blk_mq_free_request(rq);
         return 0;
 }
diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index cacf64eedad87..40e7a45e3347c 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -726,7 +726,7 @@ static int pkt_generic_packet(struct pktcdvd_device *pd, struct packet_command *
         if (scsi_req(rq)->result)
                 ret = -EIO;
 out:
-        blk_put_request(rq);
+        blk_mq_free_request(rq);
         return ret;
 }
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 303caf2d17d0c..f81a768943e15 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -312,7 +312,7 @@ static int virtblk_get_id(struct gendisk *disk, char *id_str)
         struct request *req;
         int err;
 
-        req = blk_get_request(q, REQ_OP_DRV_IN, 0);
+        req = blk_mq_alloc_request(q, REQ_OP_DRV_IN, 0);
         if (IS_ERR(req))
                 return PTR_ERR(req);
 
@@ -323,7 +323,7 @@ static int virtblk_get_id(struct gendisk *disk, char *id_str)
         blk_execute_rq(vblk->disk, req, false);
         err = blk_status_to_errno(virtblk_result(blk_mq_rq_to_pdu(req)));
 out:
-        blk_put_request(req);
+        blk_mq_free_request(req);
         return err;
 }
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 694aaca4eea24..510f6c3ab98d4 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -530,7 +530,7 @@ static int multipath_clone_and_map(struct dm_target *ti, struct request *rq,
         bdev = pgpath->path.dev->bdev;
         q = bdev_get_queue(bdev);
-        clone = blk_get_request(q, rq->cmd_flags | REQ_NOMERGE,
+        clone = blk_mq_alloc_request(q, rq->cmd_flags | REQ_NOMERGE,
                         BLK_MQ_REQ_NOWAIT);
         if (IS_ERR(clone)) {
                 /* EBUSY, ENODEV or EWOULDBLOCK: requeue */
@@ -579,7 +579,7 @@ static void multipath_release_clone(struct request *clone,
                                     clone->io_start_time_ns);
         }
 
-        blk_put_request(clone);
+        blk_mq_free_request(clone);
 }
 
 /*
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 431af5e8be2f8..74882fa0f86d0 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -258,7 +258,7 @@ static ssize_t power_ro_lock_store(struct device *dev,
         mq = &md->queue;
 
         /* Dispatch locking to the block layer */
-        req = blk_get_request(mq->queue, REQ_OP_DRV_OUT, 0);
+        req = blk_mq_alloc_request(mq->queue, REQ_OP_DRV_OUT, 0);
         if (IS_ERR(req)) {
                 count = PTR_ERR(req);
                 goto out_put;
@@ -266,7 +266,7 @@ static ssize_t power_ro_lock_store(struct device *dev,
         req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_BOOT_WP;
         blk_execute_rq(NULL, req, 0);
         ret = req_to_mmc_queue_req(req)->drv_op_result;
-        blk_put_request(req);
+        blk_mq_free_request(req);
 
         if (!ret) {
                 pr_info("%s: Locking boot partition ro until next power on\n",
@@ -646,7 +646,7 @@ static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md,
          * Dispatch the ioctl() into the block request queue.
          */
         mq = &md->queue;
-        req = blk_get_request(mq->queue,
+        req = blk_mq_alloc_request(mq->queue,
                 idata->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN, 0);
         if (IS_ERR(req)) {
                 err = PTR_ERR(req);
@@ -660,7 +660,7 @@ static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md,
         blk_execute_rq(NULL, req, 0);
         ioc_err = req_to_mmc_queue_req(req)->drv_op_result;
         err = mmc_blk_ioctl_copy_to_user(ic_ptr, idata);
-        blk_put_request(req);
+        blk_mq_free_request(req);
 
 cmd_done:
         kfree(idata->buf);
@@ -716,7 +716,7 @@ static int mmc_blk_ioctl_multi_cmd(struct mmc_blk_data *md,
          * Dispatch the ioctl()s into the block request queue.
          */
         mq = &md->queue;
-        req = blk_get_request(mq->queue,
+        req = blk_mq_alloc_request(mq->queue,
                 idata[0]->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN, 0);
         if (IS_ERR(req)) {
                 err = PTR_ERR(req);
@@ -733,7 +733,7 @@ static int mmc_blk_ioctl_multi_cmd(struct mmc_blk_data *md,
         for (i = 0; i < num_of_cmds && !err; i++)
                 err = mmc_blk_ioctl_copy_to_user(&cmds[i], idata[i]);
 
-        blk_put_request(req);
+        blk_mq_free_request(req);
 
 cmd_err:
         for (i = 0; i < num_of_cmds; i++) {
@@ -2730,7 +2730,7 @@ static int mmc_dbg_card_status_get(void *data, u64 *val)
         int ret;
 
         /* Ask the block layer about the card status */
-        req = blk_get_request(mq->queue, REQ_OP_DRV_IN, 0);
+        req = blk_mq_alloc_request(mq->queue, REQ_OP_DRV_IN, 0);
         if (IS_ERR(req))
                 return PTR_ERR(req);
         req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_CARD_STATUS;
@@ -2740,7 +2740,7 @@ static int mmc_dbg_card_status_get(void *data, u64 *val)
                 *val = ret;
                 ret = 0;
         }
-        blk_put_request(req);
+        blk_mq_free_request(req);
 
         return ret;
 }
@@ -2766,7 +2766,7 @@ static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
                 return -ENOMEM;
 
         /* Ask the block layer for the EXT CSD */
-        req = blk_get_request(mq->queue, REQ_OP_DRV_IN, 0);
+        req = blk_mq_alloc_request(mq->queue, REQ_OP_DRV_IN, 0);
         if (IS_ERR(req)) {
                 err = PTR_ERR(req);
                 goto out_free;
@@ -2775,7 +2775,7 @@ static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
         req_to_mmc_queue_req(req)->drv_op_data = &ext_csd;
         blk_execute_rq(NULL, req, 0);
         err = req_to_mmc_queue_req(req)->drv_op_result;
-        blk_put_request(req);
+        blk_mq_free_request(req);
         if (err) {
                 pr_err("FAILED %d\n", err);
                 goto out_free;
diff --git a/drivers/scsi/scsi_bsg.c b/drivers/scsi/scsi_bsg.c
index 551727a6f6941..081b84bb7985b 100644
--- a/drivers/scsi/scsi_bsg.c
+++ b/drivers/scsi/scsi_bsg.c
@@ -95,7 +95,7 @@ static int scsi_bsg_sg_io_fn(struct request_queue *q, struct sg_io_v4 *hdr,
 out_free_cmd:
         scsi_req_free_cmd(scsi_req(rq));
 out_put_request:
-        blk_put_request(rq);
+        blk_mq_free_request(rq);
         return ret;
 }
diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index 71d027b94be40..36870b41c888d 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -1979,7 +1979,7 @@ enum scsi_disposition scsi_decide_disposition(struct scsi_cmnd *scmd)
 
 static void eh_lock_door_done(struct request *req, blk_status_t status)
 {
-        blk_put_request(req);
+        blk_mq_free_request(req);
 }
 
 /**
diff --git a/drivers/scsi/scsi_ioctl.c b/drivers/scsi/scsi_ioctl.c
index 0078975e3c07c..34412eac4566b 100644
--- a/drivers/scsi/scsi_ioctl.c
+++ b/drivers/scsi/scsi_ioctl.c
@@ -490,7 +490,7 @@ static int sg_io(struct scsi_device *sdev, struct gendisk *disk,
 out_free_cdb:
         scsi_req_free_cmd(req);
 out_put_request:
-        blk_put_request(rq);
+        blk_mq_free_request(rq);
         return ret;
 }
 
@@ -634,7 +634,7 @@ static int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk,
         }
 
 error:
-        blk_put_request(rq);
+        blk_mq_free_request(rq);
 error_free_buffer:
         kfree(buffer);
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 1f7e68699941f..514cabc0f31b4 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -260,7 +260,7 @@ int __scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
                 scsi_normalize_sense(rq->sense, rq->sense_len, sshdr);
         ret = rq->result;
  out:
-        blk_put_request(req);
+        blk_mq_free_request(req);
 
         return ret;
 }
@@ -1140,7 +1140,7 @@ struct request *scsi_alloc_request(struct request_queue *q,
 {
         struct request *rq;
 
-        rq = blk_get_request(q, op, flags);
+        rq = blk_mq_alloc_request(q, op, flags);
         if (!IS_ERR(rq))
                 scsi_initialize_rq(rq);
         return rq;
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index 85f57ac0b844e..141099ab90921 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -815,7 +815,7 @@ sg_common_write(Sg_fd * sfp, Sg_request * srp,
         if (atomic_read(&sdp->detaching)) {
                 if (srp->bio) {
                         scsi_req_free_cmd(scsi_req(srp->rq));
-                        blk_put_request(srp->rq);
+                        blk_mq_free_request(srp->rq);
                         srp->rq = NULL;
                 }
 
@@ -1390,7 +1390,7 @@ sg_rq_end_io(struct request *rq, blk_status_t status)
          */
         srp->rq = NULL;
         scsi_req_free_cmd(scsi_req(rq));
-        blk_put_request(rq);
+        blk_mq_free_request(rq);
 
         write_lock_irqsave(&sfp->rq_list_lock, iflags);
         if (unlikely(srp->orphan)) {
@@ -1830,7 +1830,7 @@ sg_finish_rem_req(Sg_request *srp)
 
         if (srp->rq) {
                 scsi_req_free_cmd(scsi_req(srp->rq));
-                blk_put_request(srp->rq);
+                blk_mq_free_request(srp->rq);
         }
 
         if (srp->res_used)
diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
index 7c4d9a9647999..3009b986d1d74 100644
--- a/drivers/scsi/sr.c
+++ b/drivers/scsi/sr.c
@@ -1003,7 +1003,7 @@ static int sr_read_cdda_bpc(struct cdrom_device_info *cdi, void __user *ubuf,
         if (blk_rq_unmap_user(bio))
                 ret = -EFAULT;
 out_put_request:
-        blk_put_request(rq);
+        blk_mq_free_request(rq);
         return ret;
 }
diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index 1275299f61597..c2d5608f6b1a5 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -530,7 +530,7 @@ static void st_scsi_execute_end(struct request *req, blk_status_t status)
                 complete(SRpnt->waiting);
 
         blk_rq_unmap_user(tmp);
-        blk_put_request(req);
+        blk_mq_free_request(req);
 }
 
 static int st_scsi_execute(struct st_request *SRpnt, const unsigned char *cmd,
@@ -557,7 +557,7 @@ static int st_scsi_execute(struct st_request *SRpnt, const unsigned char *cmd,
                 err = blk_rq_map_user(req->q, req, mdata, NULL, bufflen,
                                       GFP_KERNEL);
                 if (err) {
-                        blk_put_request(req);
+                        blk_mq_free_request(req);
                         return err;
                 }
         }
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index bf81da2ecf98c..c85f540f6af4e 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -2925,7 +2925,7 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
          * Even though we use wait_event() which sleeps indefinitely,
          * the maximum wait time is bounded by SCSI request timeout.
          */
-        req = blk_get_request(q, REQ_OP_DRV_OUT, 0);
+        req = blk_mq_alloc_request(q, REQ_OP_DRV_OUT, 0);
         if (IS_ERR(req)) {
                 err = PTR_ERR(req);
                 goto out_unlock;
@@ -2952,7 +2952,7 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
                 (struct utp_upiu_req *)lrbp->ucd_rsp_ptr);
 
 out:
-        blk_put_request(req);
+        blk_mq_free_request(req);
 out_unlock:
         up_read(&hba->clk_scaling_lock);
         return err;
@@ -6517,9 +6517,9 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
         int task_tag, err;
 
         /*
-         * blk_get_request() is used here only to get a free tag.
+         * blk_mq_alloc_request() is used here only to get a free tag.
          */
-        req = blk_get_request(q, REQ_OP_DRV_OUT, 0);
+        req = blk_mq_alloc_request(q, REQ_OP_DRV_OUT, 0);
         if (IS_ERR(req))
                 return PTR_ERR(req);
 
@@ -6575,7 +6575,7 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
         spin_unlock_irqrestore(hba->host->host_lock, flags);
 
         ufshcd_release(hba);
-        blk_put_request(req);
+        blk_mq_free_request(req);
 
         return err;
 }
@@ -6660,7 +6660,7 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
 
         down_read(&hba->clk_scaling_lock);
 
-        req = blk_get_request(q, REQ_OP_DRV_OUT, 0);
+        req = blk_mq_alloc_request(q, REQ_OP_DRV_OUT, 0);
         if (IS_ERR(req)) {
                 err = PTR_ERR(req);
                 goto out_unlock;
@@ -6741,7 +6741,7 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
                 (struct utp_upiu_req *)lrbp->ucd_rsp_ptr);
 
 out:
-        blk_put_request(req);
+        blk_mq_free_request(req);
 out_unlock:
         up_read(&hba->clk_scaling_lock);
         return err;
@@ -7912,7 +7912,7 @@ static void ufshcd_request_sense_done(struct request *rq, blk_status_t error)
         if (error != BLK_STS_OK)
                 pr_err("%s: REQUEST SENSE failed (%d)\n", __func__, error);
         kfree(rq->end_io_data);
-        blk_put_request(rq);
+        blk_mq_free_request(rq);
 }
 
 static int
@@ -7932,7 +7932,7 @@ ufshcd_request_sense_async(struct ufs_hba *hba, struct scsi_device *sdev)
         if (!buffer)
                 return -ENOMEM;
 
-        req = blk_get_request(sdev->request_queue, REQ_OP_DRV_IN,
+        req = blk_mq_alloc_request(sdev->request_queue, REQ_OP_DRV_IN,
                               /*flags=*/BLK_MQ_REQ_PM);
         if (IS_ERR(req)) {
                 ret = PTR_ERR(req);
@@ -7957,7 +7957,7 @@ ufshcd_request_sense_async(struct ufs_hba *hba, struct scsi_device *sdev)
         return 0;
 
 out_put:
-        blk_put_request(req);
+        blk_mq_free_request(req);
 out_free:
         kfree(buffer);
         return ret;
diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 589af5f6b940a..a2a3080071e9d 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -564,7 +564,7 @@ static int ufshpb_issue_pre_req(struct ufshpb_lu *hpb, struct scsi_cmnd *cmd,
         int _read_id;
         int ret = 0;
 
-        req = blk_get_request(cmd->device->request_queue,
+        req = blk_mq_alloc_request(cmd->device->request_queue,
                               REQ_OP_DRV_OUT | REQ_SYNC, BLK_MQ_REQ_NOWAIT);
         if (IS_ERR(req))
                 return -EAGAIN;
@@ -592,7 +592,7 @@ static int ufshpb_issue_pre_req(struct ufshpb_lu *hpb, struct scsi_cmnd *cmd,
         ufshpb_put_pre_req(hpb, pre_req);
 unlock_out:
         spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
-        blk_put_request(req);
+        blk_mq_free_request(req);
         return ret;
 }
 
@@ -721,7 +721,7 @@ static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb,
                 return NULL;
 
 retry:
-        req = blk_get_request(hpb->sdev_ufs_lu->request_queue, dir,
+        req = blk_mq_alloc_request(hpb->sdev_ufs_lu->request_queue, dir,
                               BLK_MQ_REQ_NOWAIT);
 
         if (!atomic && (PTR_ERR(req) == -EWOULDBLOCK) && (--retries > 0)) {
@@ -745,7 +745,7 @@ static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb,
 
 static void ufshpb_put_req(struct ufshpb_lu *hpb, struct ufshpb_req *rq)
 {
-        blk_put_request(rq->req);
+        blk_mq_free_request(rq->req);
         kmem_cache_free(hpb->map_req_cache, rq);
 }
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index b5705a2bd7618..7fa57fb57bf22 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -1011,7 +1011,7 @@ pscsi_execute_cmd(struct se_cmd *cmd)
         return 0;
 
 fail_put_request:
-        blk_put_request(req);
+        blk_mq_free_request(req);
 fail:
         kfree(pt);
         return ret;
@@ -1066,7 +1066,7 @@ static void pscsi_req_done(struct request *req, blk_status_t status)
                 break;
         }
 
-        blk_put_request(req);
+        blk_mq_free_request(req);
         kfree(pt);
 }
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 9f9634cd4cfd2..850ecfdebe595 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -892,9 +892,6 @@ static inline bool rq_is_sync(struct request *rq)
 }
 
 void blk_rq_init(struct request_queue *q, struct request *rq);
-void blk_put_request(struct request *rq);
-struct request *blk_get_request(struct request_queue *q, unsigned int op,
-                blk_mq_req_flags_t flags);
 int blk_rq_prep_clone(struct request *rq, struct request *rq_src,
                 struct bio_set *bs, gfp_t gfp_mask,
                 int (*bio_ctr)(struct bio *, struct bio *, void *), void *data);

From patchwork Mon Oct 25 07:05:08 2021
From: Christoph Hellwig
To: Jens Axboe
Petersen" , Miquel Raynal , Richard Weinberger , Vignesh Raghavendra , linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org Subject: [PATCH 03/12] block: remove rq_flush_dcache_pages Date: Mon, 25 Oct 2021 09:05:08 +0200 Message-Id: <20211025070517.1548584-4-hch@lst.de> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211025070517.1548584-1-hch@lst.de> References: <20211025070517.1548584-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This function is trivial, and flush_dcache_page is always defined, so just open code it in the 2.5 callers. Signed-off-by: Christoph Hellwig --- block/blk-core.c | 19 ------------------- drivers/mtd/mtd_blkdevs.c | 10 ++++++++-- drivers/mtd/ubi/block.c | 6 +++++- include/linux/blk-mq.h | 10 ---------- 4 files changed, 13 insertions(+), 32 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 0d267cfb71484..08c6077365ac9 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -1324,25 +1324,6 @@ void blk_steal_bios(struct bio_list *list, struct request *rq) } EXPORT_SYMBOL_GPL(blk_steal_bios); -#if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE -/** - * rq_flush_dcache_pages - Helper function to flush all pages in a request - * @rq: the request to be flushed - * - * Description: - * Flush all pages in @rq. - */ -void rq_flush_dcache_pages(struct request *rq) -{ - struct req_iterator iter; - struct bio_vec bvec; - - rq_for_each_segment(bvec, rq, iter) - flush_dcache_page(bvec.bv_page); -} -EXPORT_SYMBOL_GPL(rq_flush_dcache_pages); -#endif - /** * blk_lld_busy - Check if underlying low-level drivers of a device are busy * @q : the queue of the device being checked diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c index b8ae1ec14e178..356b5bb4db51a 100644 --- a/drivers/mtd/mtd_blkdevs.c +++ b/drivers/mtd/mtd_blkdevs.c @@ -46,6 +46,8 @@ static blk_status_t do_blktrans_request(struct mtd_blktrans_ops *tr, struct mtd_blktrans_dev *dev, struct request *req) { + struct req_iterator iter; + struct bio_vec bvec; unsigned long block, nsect; char *buf; @@ -76,13 +78,17 @@ static blk_status_t do_blktrans_request(struct mtd_blktrans_ops *tr, } } kunmap(bio_page(req->bio)); - rq_flush_dcache_pages(req); + + rq_for_each_segment(bvec, req, iter) + flush_dcache_page(bvec.bv_page); return BLK_STS_OK; case REQ_OP_WRITE: if (!tr->writesect) return BLK_STS_IOERR; - rq_flush_dcache_pages(req); + rq_for_each_segment(bvec, req, iter) + flush_dcache_page(bvec.bv_page); + buf = kmap(bio_page(req->bio)) + bio_offset(req->bio); for (; nsect > 0; nsect--, block++, buf += tr->blksize) { if (tr->writesect(dev, block, buf)) { diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c index e003b4b44ffa2..99e17ee13feb7 100644 --- a/drivers/mtd/ubi/block.c +++ b/drivers/mtd/ubi/block.c @@ -294,6 +294,8 @@ static void ubiblock_do_work(struct work_struct *work) int ret; struct ubiblock_pdu *pdu = container_of(work, struct ubiblock_pdu, work); struct request *req = blk_mq_rq_from_pdu(pdu); + struct req_iterator iter; + struct bio_vec bvec; blk_mq_start_request(req); @@ -305,7 +307,9 @@ static void ubiblock_do_work(struct work_struct *work) blk_rq_map_sg(req->q, req, pdu->usgl.sg); ret = ubiblock_read(pdu); - rq_flush_dcache_pages(req); + + rq_for_each_segment(bvec, req, iter) + flush_dcache_page(bvec.bv_page); blk_mq_end_request(req, errno_to_blk_status(ret)); } diff --git 
diff --git a/block/blk-core.c b/block/blk-core.c
index 0d267cfb71484..08c6077365ac9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1324,25 +1324,6 @@ void blk_steal_bios(struct bio_list *list, struct request *rq)
 }
 EXPORT_SYMBOL_GPL(blk_steal_bios);
 
-#if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-/**
- * rq_flush_dcache_pages - Helper function to flush all pages in a request
- * @rq: the request to be flushed
- *
- * Description:
- *     Flush all pages in @rq.
- */
-void rq_flush_dcache_pages(struct request *rq)
-{
-        struct req_iterator iter;
-        struct bio_vec bvec;
-
-        rq_for_each_segment(bvec, rq, iter)
-                flush_dcache_page(bvec.bv_page);
-}
-EXPORT_SYMBOL_GPL(rq_flush_dcache_pages);
-#endif
-
 /**
  * blk_lld_busy - Check if underlying low-level drivers of a device are busy
  * @q : the queue of the device being checked
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index b8ae1ec14e178..356b5bb4db51a 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -46,6 +46,8 @@ static blk_status_t do_blktrans_request(struct mtd_blktrans_ops *tr,
                                struct mtd_blktrans_dev *dev,
                                struct request *req)
 {
+        struct req_iterator iter;
+        struct bio_vec bvec;
         unsigned long block, nsect;
         char *buf;
 
@@ -76,13 +78,17 @@ static blk_status_t do_blktrans_request(struct mtd_blktrans_ops *tr,
                         }
                 }
                 kunmap(bio_page(req->bio));
-                rq_flush_dcache_pages(req);
+
+                rq_for_each_segment(bvec, req, iter)
+                        flush_dcache_page(bvec.bv_page);
                 return BLK_STS_OK;
         case REQ_OP_WRITE:
                 if (!tr->writesect)
                         return BLK_STS_IOERR;
 
-                rq_flush_dcache_pages(req);
+                rq_for_each_segment(bvec, req, iter)
+                        flush_dcache_page(bvec.bv_page);
+
                 buf = kmap(bio_page(req->bio)) + bio_offset(req->bio);
                 for (; nsect > 0; nsect--, block++, buf += tr->blksize) {
                         if (tr->writesect(dev, block, buf)) {
diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c
index e003b4b44ffa2..99e17ee13feb7 100644
--- a/drivers/mtd/ubi/block.c
+++ b/drivers/mtd/ubi/block.c
@@ -294,6 +294,8 @@ static void ubiblock_do_work(struct work_struct *work)
         int ret;
         struct ubiblock_pdu *pdu = container_of(work, struct ubiblock_pdu, work);
         struct request *req = blk_mq_rq_from_pdu(pdu);
+        struct req_iterator iter;
+        struct bio_vec bvec;
 
         blk_mq_start_request(req);
 
@@ -305,7 +307,9 @@ static void ubiblock_do_work(struct work_struct *work)
         blk_rq_map_sg(req->q, req, pdu->usgl.sg);
 
         ret = ubiblock_read(pdu);
-        rq_flush_dcache_pages(req);
+
+        rq_for_each_segment(bvec, req, iter)
+                flush_dcache_page(bvec.bv_page);
 
         blk_mq_end_request(req, errno_to_blk_status(ret));
 }
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 850ecfdebe595..047ce54fca96c 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -1131,14 +1131,4 @@ static inline bool blk_req_can_dispatch_to_zone(struct request *rq)
 }
 #endif /* CONFIG_BLK_DEV_ZONED */
 
-#ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-# error "You should define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE for your platform"
-#endif
-#if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
-void rq_flush_dcache_pages(struct request *rq);
-#else
-static inline void rq_flush_dcache_pages(struct request *rq)
-{
-}
-#endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 #endif /* BLK_MQ_H */

From patchwork Mon Oct 25 07:05:09 2021
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen", Miquel Raynal, Richard Weinberger,
 Vignesh Raghavendra, linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 04/12] block: remove blk-exec.c
Date: Mon, 25 Oct 2021 09:05:09 +0200
Message-Id: <20211025070517.1548584-5-hch@lst.de>
In-Reply-To: <20211025070517.1548584-1-hch@lst.de>
References: <20211025070517.1548584-1-hch@lst.de>

All this code is tightly coupled to the blk-mq core, so move it there.
Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/Makefile   |   2 +-
 block/blk-exec.c | 116 -----------------------------------------------
 block/blk-mq.c   | 107 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 108 insertions(+), 117 deletions(-)
 delete mode 100644 block/blk-exec.c
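The code being moved implements synchronous execution with an on-stack completion. Stripped of the hang_check and polled-queue handling, the pattern reduces to the following sketch (kernel-style fragment with made-up names; the real functions are in the diff below):

/* sketch: synchronous execution layered on the async API */
static void example_end_sync_rq(struct request *rq, blk_status_t error)
{
        struct completion *waiting = rq->end_io_data;

        rq->end_io_data = (void *)(uintptr_t)error;     /* smuggle status back */
        complete(waiting);      /* must be last: for an on-stack request, rq
                                 * may be invalid right after complete() */
}

static blk_status_t example_execute_sync(struct gendisk *disk,
                                         struct request *rq)
{
        DECLARE_COMPLETION_ONSTACK(wait);

        rq->end_io_data = &wait;
        blk_execute_rq_nowait(disk, rq, 0, example_end_sync_rq);
        wait_for_completion_io(&wait);
        return (blk_status_t)(uintptr_t)rq->end_io_data;
}

The real blk_execute_rq() additionally busy-polls for HCTX_TYPE_POLL queues and chunks the wait by sysctl_hung_task_timeout_secs so the hung-task watchdog does not fire during very long I/O.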
diff --git a/block/Makefile b/block/Makefile
index 602f7f47b7b6d..9623aff1a346c 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -5,7 +5,7 @@
 obj-y           := bdev.o fops.o bio.o elevator.o blk-core.o blk-sysfs.o \
                         blk-flush.o blk-settings.o blk-ioc.o blk-map.o \
-                        blk-exec.o blk-merge.o blk-timeout.o \
+                        blk-merge.o blk-timeout.o \
                         blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
                         blk-mq-sysfs.o blk-mq-cpumap.o blk-mq-sched.o ioctl.o \
                         genhd.o ioprio.o badblocks.o partitions/ blk-rq-qos.o \
diff --git a/block/blk-exec.c b/block/blk-exec.c
deleted file mode 100644
index 1b8b47f6e79bb..0000000000000
--- a/block/blk-exec.c
+++ /dev/null
@@ -1,116 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Functions related to setting various queue properties from drivers
- */
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/bio.h>
-#include <linux/blkdev.h>
-#include <linux/blk-mq.h>
-#include <linux/sched/sysctl.h>
-
-#include "blk.h"
-#include "blk-mq-sched.h"
-
-/**
- * blk_end_sync_rq - executes a completion event on a request
- * @rq: request to complete
- * @error: end I/O status of the request
- */
-static void blk_end_sync_rq(struct request *rq, blk_status_t error)
-{
-        struct completion *waiting = rq->end_io_data;
-
-        rq->end_io_data = (void *)(uintptr_t)error;
-
-        /*
-         * complete last, if this is a stack request the process (and thus
-         * the rq pointer) could be invalid right after this complete()
-         */
-        complete(waiting);
-}
-
-/**
- * blk_execute_rq_nowait - insert a request to I/O scheduler for execution
- * @bd_disk:    matching gendisk
- * @rq:         request to insert
- * @at_head:    insert request at head or tail of queue
- * @done:       I/O completion handler
- *
- * Description:
- *    Insert a fully prepared request at the back of the I/O scheduler queue
- *    for execution.  Don't wait for completion.
- *
- * Note:
- *    This function will invoke @done directly if the queue is dead.
- */
-void blk_execute_rq_nowait(struct gendisk *bd_disk, struct request *rq,
-                           int at_head, rq_end_io_fn *done)
-{
-        WARN_ON(irqs_disabled());
-        WARN_ON(!blk_rq_is_passthrough(rq));
-
-        rq->rq_disk = bd_disk;
-        rq->end_io = done;
-
-        blk_account_io_start(rq);
-
-        /*
-         * don't check dying flag for MQ because the request won't
-         * be reused after dying flag is set
-         */
-        blk_mq_sched_insert_request(rq, at_head, true, false);
-}
-EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
-
-static bool blk_rq_is_poll(struct request *rq)
-{
-        if (!rq->mq_hctx)
-                return false;
-        if (rq->mq_hctx->type != HCTX_TYPE_POLL)
-                return false;
-        if (WARN_ON_ONCE(!rq->bio))
-                return false;
-        return true;
-}
-
-static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
-{
-        do {
-                bio_poll(rq->bio, NULL, 0);
-                cond_resched();
-        } while (!completion_done(wait));
-}
-
-/**
- * blk_execute_rq - insert a request into queue for execution
- * @bd_disk:    matching gendisk
- * @rq:         request to insert
- * @at_head:    insert request at head or tail of queue
- *
- * Description:
- *    Insert a fully prepared request at the back of the I/O scheduler queue
- *    for execution and wait for completion.
- * Return: The blk_status_t result provided to blk_mq_end_request().
- */
-blk_status_t blk_execute_rq(struct gendisk *bd_disk, struct request *rq, int at_head)
-{
-        DECLARE_COMPLETION_ONSTACK(wait);
-        unsigned long hang_check;
-
-        rq->end_io_data = &wait;
-        blk_execute_rq_nowait(bd_disk, rq, at_head, blk_end_sync_rq);
-
-        /* Prevent hang_check timer from firing at us during very long I/O */
-        hang_check = sysctl_hung_task_timeout_secs;
-
-        if (blk_rq_is_poll(rq))
-                blk_rq_poll_completion(rq, &wait);
-        else if (hang_check)
-                while (!wait_for_completion_io_timeout(&wait, hang_check * (HZ/2)));
-        else
-                wait_for_completion_io(&wait);
-
-        return (blk_status_t)(uintptr_t)rq->end_io_data;
-}
-EXPORT_SYMBOL(blk_execute_rq);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d04ee72ba1255..d8f17b54e2c73 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <linux/sched/sysctl.h>
 
 #include
@@ -1031,6 +1032,112 @@ void blk_mq_start_request(struct request *rq)
 }
 EXPORT_SYMBOL(blk_mq_start_request);
 
+/**
+ * blk_end_sync_rq - executes a completion event on a request
+ * @rq: request to complete
+ * @error: end I/O status of the request
+ */
+static void blk_end_sync_rq(struct request *rq, blk_status_t error)
+{
+        struct completion *waiting = rq->end_io_data;
+
+        rq->end_io_data = (void *)(uintptr_t)error;
+
+        /*
+         * complete last, if this is a stack request the process (and thus
+         * the rq pointer) could be invalid right after this complete()
+         */
+        complete(waiting);
+}
+
+/**
+ * blk_execute_rq_nowait - insert a request to I/O scheduler for execution
+ * @bd_disk:    matching gendisk
+ * @rq:         request to insert
+ * @at_head:    insert request at head or tail of queue
+ * @done:       I/O completion handler
+ *
+ * Description:
+ *    Insert a fully prepared request at the back of the I/O scheduler queue
+ *    for execution.  Don't wait for completion.
+ *
+ * Note:
+ *    This function will invoke @done directly if the queue is dead.
+ */
+void blk_execute_rq_nowait(struct gendisk *bd_disk, struct request *rq,
+                           int at_head, rq_end_io_fn *done)
+{
+        WARN_ON(irqs_disabled());
+        WARN_ON(!blk_rq_is_passthrough(rq));
+
+        rq->rq_disk = bd_disk;
+        rq->end_io = done;
+
+        blk_account_io_start(rq);
+
+        /*
+         * don't check dying flag for MQ because the request won't
+         * be reused after dying flag is set
+         */
+        blk_mq_sched_insert_request(rq, at_head, true, false);
+}
+EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
+
+static bool blk_rq_is_poll(struct request *rq)
+{
+        if (!rq->mq_hctx)
+                return false;
+        if (rq->mq_hctx->type != HCTX_TYPE_POLL)
+                return false;
+        if (WARN_ON_ONCE(!rq->bio))
+                return false;
+        return true;
+}
+
+static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
+{
+        do {
+                bio_poll(rq->bio, NULL, 0);
+                cond_resched();
+        } while (!completion_done(wait));
+}
+
+/**
+ * blk_execute_rq - insert a request into queue for execution
+ * @bd_disk:    matching gendisk
+ * @rq:         request to insert
+ * @at_head:    insert request at head or tail of queue
+ *
+ * Description:
+ *    Insert a fully prepared request at the back of the I/O scheduler queue
+ *    for execution and wait for completion.
+ * Return: The blk_status_t result provided to blk_mq_end_request().
+ */
+blk_status_t blk_execute_rq(struct gendisk *bd_disk, struct request *rq,
+                int at_head)
+{
+        DECLARE_COMPLETION_ONSTACK(wait);
+        unsigned long hang_check;
+
+        rq->end_io_data = &wait;
+        blk_execute_rq_nowait(bd_disk, rq, at_head, blk_end_sync_rq);
+
+        /* Prevent hang_check timer from firing at us during very long I/O */
+        hang_check = sysctl_hung_task_timeout_secs;
+
+        if (blk_rq_is_poll(rq))
+                blk_rq_poll_completion(rq, &wait);
+        else if (hang_check)
+                while (!wait_for_completion_io_timeout(&wait,
+                                hang_check * (HZ/2)))
+                        ;
+        else
+                wait_for_completion_io(&wait);
+
+        return (blk_status_t)(uintptr_t)rq->end_io_data;
+}
+EXPORT_SYMBOL(blk_execute_rq);
+
 static void __blk_mq_requeue_request(struct request *rq)
 {
         struct request_queue *q = rq->q;

From patchwork Mon Oct 25 07:05:10 2021
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen", Miquel Raynal, Richard Weinberger,
 Vignesh Raghavendra, linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 05/12] blk-mq: move blk_mq_flush_plug_list
Date: Mon, 25 Oct 2021 09:05:10 +0200
Message-Id: <20211025070517.1548584-6-hch@lst.de>
In-Reply-To: <20211025070517.1548584-1-hch@lst.de>
References: <20211025070517.1548584-1-hch@lst.de>
Move blk_mq_flush_plug_list and blk_mq_plug_issue_direct down in blk-mq.c
to prepare for marking blk_mq_request_issue_directly static without the
need of a forward declaration.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-mq.c | 184 ++++++++++++++++++++++++-------------------------
 1 file changed, 92 insertions(+), 92 deletions(-)
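The motivation is pure C definition order; a toy illustration (made-up names standing in for blk_mq_request_issue_directly() and its callers, not kernel code):

/* before the move: the caller sits above the callee, so making the
 * callee static would additionally require this declaration: */
static int issue_directly(int rq);      /* forward declaration */

static void flush_plug_before(int rq)
{
        issue_directly(rq);
}

static int issue_directly(int rq)
{
        return rq;
}

/* after the move: the callee is defined before its callers, so it can
 * be made static with no declaration in any header */
static int issue_directly_moved(int rq)
{
        return rq;
}

static void flush_plug_after(int rq)
{
        issue_directly_moved(rq);
}

The actual static marking happens in the next patch; this one only reorders the definitions.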
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d8f17b54e2c73..f46d41a760106 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2277,98 +2277,6 @@ static void blk_mq_commit_rqs(struct blk_mq_hw_ctx *hctx, int *queued,
         *queued = 0;
 }
 
-static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
-{
-        struct blk_mq_hw_ctx *hctx = NULL;
-        struct request *rq;
-        int queued = 0;
-        int errors = 0;
-
-        while ((rq = rq_list_pop(&plug->mq_list))) {
-                bool last = rq_list_empty(plug->mq_list);
-                blk_status_t ret;
-
-                if (hctx != rq->mq_hctx) {
-                        if (hctx)
-                                blk_mq_commit_rqs(hctx, &queued, from_schedule);
-                        hctx = rq->mq_hctx;
-                }
-
-                ret = blk_mq_request_issue_directly(rq, last);
-                switch (ret) {
-                case BLK_STS_OK:
-                        queued++;
-                        break;
-                case BLK_STS_RESOURCE:
-                case BLK_STS_DEV_RESOURCE:
-                        blk_mq_request_bypass_insert(rq, false, last);
-                        blk_mq_commit_rqs(hctx, &queued, from_schedule);
-                        return;
-                default:
-                        blk_mq_end_request(rq, ret);
-                        errors++;
-                        break;
-                }
-        }
-
-        /*
-         * If we didn't flush the entire list, we could have told the driver
-         * there was more coming, but that turned out to be a lie.
-         */
-        if (errors)
-                blk_mq_commit_rqs(hctx, &queued, from_schedule);
-}
-
-void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
-{
-        struct blk_mq_hw_ctx *this_hctx;
-        struct blk_mq_ctx *this_ctx;
-        unsigned int depth;
-        LIST_HEAD(list);
-
-        if (rq_list_empty(plug->mq_list))
-                return;
-        plug->rq_count = 0;
-
-        if (!plug->multiple_queues && !plug->has_elevator) {
-                blk_mq_plug_issue_direct(plug, from_schedule);
-                if (rq_list_empty(plug->mq_list))
-                        return;
-        }
-
-        this_hctx = NULL;
-        this_ctx = NULL;
-        depth = 0;
-        do {
-                struct request *rq;
-
-                rq = rq_list_pop(&plug->mq_list);
-
-                if (!this_hctx) {
-                        this_hctx = rq->mq_hctx;
-                        this_ctx = rq->mq_ctx;
-                } else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
-                        trace_block_unplug(this_hctx->queue, depth,
-                                           !from_schedule);
-                        blk_mq_sched_insert_requests(this_hctx, this_ctx,
-                                                     &list, from_schedule);
-                        depth = 0;
-                        this_hctx = rq->mq_hctx;
-                        this_ctx = rq->mq_ctx;
-
-                }
-
-                list_add(&rq->queuelist, &list);
-                depth++;
-        } while (!rq_list_empty(plug->mq_list));
-
-        if (!list_empty(&list)) {
-                trace_block_unplug(this_hctx->queue, depth, !from_schedule);
-                blk_mq_sched_insert_requests(this_hctx, this_ctx, &list,
-                                             from_schedule);
-        }
-}
-
 static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
                 unsigned int nr_segs)
 {
@@ -2508,6 +2416,98 @@ blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
         return ret;
 }
 
+static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
+{
+        struct blk_mq_hw_ctx *hctx = NULL;
+        struct request *rq;
+        int queued = 0;
+        int errors = 0;
+
+        while ((rq = rq_list_pop(&plug->mq_list))) {
+                bool last = rq_list_empty(plug->mq_list);
+                blk_status_t ret;
+
+                if (hctx != rq->mq_hctx) {
+                        if (hctx)
+                                blk_mq_commit_rqs(hctx, &queued, from_schedule);
+                        hctx = rq->mq_hctx;
+                }
+
+                ret = blk_mq_request_issue_directly(rq, last);
+                switch (ret) {
+                case BLK_STS_OK:
+                        queued++;
+                        break;
+                case BLK_STS_RESOURCE:
+                case BLK_STS_DEV_RESOURCE:
+                        blk_mq_request_bypass_insert(rq, false, last);
+                        blk_mq_commit_rqs(hctx, &queued, from_schedule);
+                        return;
+                default:
+                        blk_mq_end_request(rq, ret);
+                        errors++;
+                        break;
+                }
+        }
+
+        /*
+         * If we didn't flush the entire list, we could have told the driver
+         * there was more coming, but that turned out to be a lie.
+         */
+        if (errors)
+                blk_mq_commit_rqs(hctx, &queued, from_schedule);
+}
+
+void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+{
+        struct blk_mq_hw_ctx *this_hctx;
+        struct blk_mq_ctx *this_ctx;
+        unsigned int depth;
+        LIST_HEAD(list);
+
+        if (rq_list_empty(plug->mq_list))
+                return;
+        plug->rq_count = 0;
+
+        if (!plug->multiple_queues && !plug->has_elevator) {
+                blk_mq_plug_issue_direct(plug, from_schedule);
+                if (rq_list_empty(plug->mq_list))
+                        return;
+        }
+
+        this_hctx = NULL;
+        this_ctx = NULL;
+        depth = 0;
+        do {
+                struct request *rq;
+
+                rq = rq_list_pop(&plug->mq_list);
+
+                if (!this_hctx) {
+                        this_hctx = rq->mq_hctx;
+                        this_ctx = rq->mq_ctx;
+                } else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
+                        trace_block_unplug(this_hctx->queue, depth,
+                                           !from_schedule);
+                        blk_mq_sched_insert_requests(this_hctx, this_ctx,
+                                                     &list, from_schedule);
+                        depth = 0;
+                        this_hctx = rq->mq_hctx;
+                        this_ctx = rq->mq_ctx;
+
+                }
+
+                list_add(&rq->queuelist, &list);
+                depth++;
+        } while (!rq_list_empty(plug->mq_list));
+
+        if (!list_empty(&list)) {
+                trace_block_unplug(this_hctx->queue, depth, !from_schedule);
+                blk_mq_sched_insert_requests(this_hctx, this_ctx, &list,
+                                             from_schedule);
+        }
+}
+
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
                 struct list_head *list)
 {

From patchwork Mon Oct 25 07:05:11 2021
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen", Miquel Raynal, Richard Weinberger,
 Vignesh Raghavendra, linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 06/12] block: move request based cloning helpers to blk-mq.c
Date: Mon, 25 Oct 2021 09:05:11 +0200
Message-Id: <20211025070517.1548584-7-hch@lst.de>
In-Reply-To: <20211025070517.1548584-1-hch@lst.de>
References: <20211025070517.1548584-1-hch@lst.de>

Keep all the request based code together.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c | 184 +----------------------------------------------
 block/blk-mq.c   | 175 +++++++++++++++++++++++++++++++++++++++++++-
 block/blk-mq.h   |   3 -
 block/blk.h      |  10 +++
 4 files changed, 185 insertions(+), 187 deletions(-)
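For context, the helpers being moved form the request-stacking path used by request-based dm. A hypothetical stacking driver would use them roughly as follows (sketch; the function name and error mapping are illustrative, the two block-layer calls are real and keep the signatures shown in the diff below):

#include <linux/blk-mq.h>

/* hypothetical: clone @rq onto the lower device's queue and issue it */
static blk_status_t example_stack_and_issue(struct request_queue *lower_q,
                                            struct request *clone,
                                            struct request *rq)
{
        blk_status_t ret;

        /* copy bios and attributes of @rq into @clone; NULL bio_set
         * means fs_bio_set, NULL bio_ctr skips per-bio setup */
        if (blk_rq_prep_clone(clone, rq, NULL, GFP_ATOMIC, NULL, NULL))
                return BLK_STS_RESOURCE;

        /* re-checks the limits of @lower_q, then issues directly,
         * bypassing any scheduler on the lower device */
        ret = blk_insert_cloned_request(lower_q, clone);
        if (ret != BLK_STS_OK)
                blk_rq_unprep_clone(clone);
        return ret;
}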
diff --git a/block/blk-core.c b/block/blk-core.c
index 08c6077365ac9..8ae6ba9ec1050 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -618,7 +618,7 @@ static int __init setup_fail_make_request(char *str)
 }
 __setup("fail_make_request=", setup_fail_make_request);
 
-static bool should_fail_request(struct block_device *part, unsigned int bytes)
+bool should_fail_request(struct block_device *part, unsigned int bytes)
 {
         return part->bd_make_it_fail && should_fail(&fail_make_request, bytes);
 }
@@ -632,15 +632,6 @@ static int __init fail_make_request_debugfs(void)
 }
 late_initcall(fail_make_request_debugfs);
-
-#else /* CONFIG_FAIL_MAKE_REQUEST */
-
-static inline bool should_fail_request(struct block_device *part,
-                                        unsigned int bytes)
-{
-        return false;
-}
-
 #endif /* CONFIG_FAIL_MAKE_REQUEST */
 
 static inline bool bio_check_ro(struct bio *bio)
@@ -1114,92 +1105,6 @@ int iocb_bio_iopoll(struct kiocb *kiocb, struct io_comp_batch *iob,
 }
 EXPORT_SYMBOL_GPL(iocb_bio_iopoll);
 
-/**
- * blk_cloned_rq_check_limits - Helper function to check a cloned request
- *                              for the new queue limits
- * @q:  the queue
- * @rq: the request being checked
- *
- * Description:
- *    @rq may have been made based on weaker limitations of upper-level queues
- *    in request stacking drivers, and it may violate the limitation of @q.
- *    Since the block layer and the underlying device driver trust @rq
- *    after it is inserted to @q, it should be checked against @q before
- *    the insertion using this generic function.
- *
- *    Request stacking drivers like request-based dm may change the queue
- *    limits when retrying requests on other queues. Those requests need
- *    to be checked against the new queue limits again during dispatch.
- */
-static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q,
-                                      struct request *rq)
-{
-        unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
-
-        if (blk_rq_sectors(rq) > max_sectors) {
-                /*
-                 * SCSI device does not have a good way to return if
-                 * Write Same/Zero is actually supported. If a device rejects
-                 * a non-read/write command (discard, write same,etc.) the
-                 * low-level device driver will set the relevant queue limit to
-                 * 0 to prevent blk-lib from issuing more of the offending
-                 * operations. Commands queued prior to the queue limit being
-                 * reset need to be completed with BLK_STS_NOTSUPP to avoid I/O
-                 * errors being propagated to upper layers.
-                 */
-                if (max_sectors == 0)
-                        return BLK_STS_NOTSUPP;
-
-                printk(KERN_ERR "%s: over max size limit. (%u > %u)\n",
-                        __func__, blk_rq_sectors(rq), max_sectors);
-                return BLK_STS_IOERR;
-        }
-
-        /*
-         * The queue settings related to segment counting may differ from the
-         * original queue.
-         */
-        rq->nr_phys_segments = blk_recalc_rq_segments(rq);
-        if (rq->nr_phys_segments > queue_max_segments(q)) {
-                printk(KERN_ERR "%s: over max segments limit. (%hu > %hu)\n",
-                        __func__, rq->nr_phys_segments, queue_max_segments(q));
-                return BLK_STS_IOERR;
-        }
-
-        return BLK_STS_OK;
-}
-
-/**
- * blk_insert_cloned_request - Helper for stacking drivers to submit a request
- * @q:  the queue to submit the request
- * @rq: the request being queued
- */
-blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq)
-{
-        blk_status_t ret;
-
-        ret = blk_cloned_rq_check_limits(q, rq);
-        if (ret != BLK_STS_OK)
-                return ret;
-
-        if (rq->rq_disk &&
-            should_fail_request(rq->rq_disk->part0, blk_rq_bytes(rq)))
-                return BLK_STS_IOERR;
-
-        if (blk_crypto_insert_cloned_request(rq))
-                return BLK_STS_IOERR;
-
-        blk_account_io_start(rq);
-
-        /*
-         * Since we have a scheduler attached on the top device,
-         * bypass a potential scheduler on the bottom device for
-         * insert.
-         */
-        return blk_mq_request_issue_directly(rq, true);
-}
-EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
-
 static void update_io_ticks(struct block_device *part, unsigned long now,
                 bool end)
 {
@@ -1352,93 +1257,6 @@ int blk_lld_busy(struct request_queue *q)
 }
 EXPORT_SYMBOL_GPL(blk_lld_busy);
 
-/**
- * blk_rq_unprep_clone - Helper function to free all bios in a cloned request
- * @rq: the clone request to be cleaned up
- *
- * Description:
- *     Free all bios in @rq for a cloned request.
- */
-void blk_rq_unprep_clone(struct request *rq)
-{
-        struct bio *bio;
-
-        while ((bio = rq->bio) != NULL) {
-                rq->bio = bio->bi_next;
-
-                bio_put(bio);
-        }
-}
-EXPORT_SYMBOL_GPL(blk_rq_unprep_clone);
-
-/**
- * blk_rq_prep_clone - Helper function to setup clone request
- * @rq: the request to be setup
- * @rq_src: original request to be cloned
- * @bs: bio_set that bios for clone are allocated from
- * @gfp_mask: memory allocation mask for bio
- * @bio_ctr: setup function to be called for each clone bio.
- *           Returns %0 for success, non %0 for failure.
- * @data: private data to be passed to @bio_ctr
- *
- * Description:
- *     Clones bios in @rq_src to @rq, and copies attributes of @rq_src to @rq.
- *     Also, pages which the original bios are pointing to are not copied
- *     and the cloned bios just point same pages.
- *     So cloned bios must be completed before original bios, which means
- *     the caller must complete @rq before @rq_src.
- */
-int blk_rq_prep_clone(struct request *rq, struct request *rq_src,
-                      struct bio_set *bs, gfp_t gfp_mask,
-                      int (*bio_ctr)(struct bio *, struct bio *, void *),
-                      void *data)
-{
-        struct bio *bio, *bio_src;
-
-        if (!bs)
-                bs = &fs_bio_set;
-
-        __rq_for_each_bio(bio_src, rq_src) {
-                bio = bio_clone_fast(bio_src, gfp_mask, bs);
-                if (!bio)
-                        goto free_and_out;
-
-                if (bio_ctr && bio_ctr(bio, bio_src, data))
-                        goto free_and_out;
-
-                if (rq->bio) {
-                        rq->biotail->bi_next = bio;
-                        rq->biotail = bio;
-                } else {
-                        rq->bio = rq->biotail = bio;
-                }
-                bio = NULL;
-        }
-
*/ - rq->__sector = blk_rq_pos(rq_src); - rq->__data_len = blk_rq_bytes(rq_src); - if (rq_src->rq_flags & RQF_SPECIAL_PAYLOAD) { - rq->rq_flags |= RQF_SPECIAL_PAYLOAD; - rq->special_vec = rq_src->special_vec; - } - rq->nr_phys_segments = rq_src->nr_phys_segments; - rq->ioprio = rq_src->ioprio; - - if (rq->bio && blk_crypto_rq_bio_prep(rq, rq->bio, gfp_mask) < 0) - goto free_and_out; - - return 0; - -free_and_out: - if (bio) - bio_put(bio); - blk_rq_unprep_clone(rq); - - return -ENOMEM; -} -EXPORT_SYMBOL_GPL(blk_rq_prep_clone); - int kblockd_schedule_work(struct work_struct *work) { return queue_work(kblockd_workqueue, work); diff --git a/block/blk-mq.c b/block/blk-mq.c index f46d41a760106..923231023fe84 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -2403,7 +2403,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx, hctx_unlock(hctx, srcu_idx); } -blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last) +static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last) { blk_status_t ret; int srcu_idx; @@ -2720,6 +2720,179 @@ void blk_mq_submit_bio(struct bio *bio) blk_queue_exit(q); } +/** + * blk_cloned_rq_check_limits - Helper function to check a cloned request + * for the new queue limits + * @q: the queue + * @rq: the request being checked + * + * Description: + * @rq may have been made based on weaker limitations of upper-level queues + * in request stacking drivers, and it may violate the limitation of @q. + * Since the block layer and the underlying device driver trust @rq + * after it is inserted to @q, it should be checked against @q before + * the insertion using this generic function. + * + * Request stacking drivers like request-based dm may change the queue + * limits when retrying requests on other queues. Those requests need + * to be checked against the new queue limits again during dispatch. + */ +static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q, + struct request *rq) +{ + unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq)); + + if (blk_rq_sectors(rq) > max_sectors) { + /* + * SCSI device does not have a good way to return if + * Write Same/Zero is actually supported. If a device rejects + * a non-read/write command (discard, write same,etc.) the + * low-level device driver will set the relevant queue limit to + * 0 to prevent blk-lib from issuing more of the offending + * operations. Commands queued prior to the queue limit being + * reset need to be completed with BLK_STS_NOTSUPP to avoid I/O + * errors being propagated to upper layers. + */ + if (max_sectors == 0) + return BLK_STS_NOTSUPP; + + printk(KERN_ERR "%s: over max size limit. (%u > %u)\n", + __func__, blk_rq_sectors(rq), max_sectors); + return BLK_STS_IOERR; + } + + /* + * The queue settings related to segment counting may differ from the + * original queue. + */ + rq->nr_phys_segments = blk_recalc_rq_segments(rq); + if (rq->nr_phys_segments > queue_max_segments(q)) { + printk(KERN_ERR "%s: over max segments limit. 
(%hu > %hu)\n", + __func__, rq->nr_phys_segments, queue_max_segments(q)); + return BLK_STS_IOERR; + } + + return BLK_STS_OK; +} + +/** + * blk_insert_cloned_request - Helper for stacking drivers to submit a request + * @q: the queue to submit the request + * @rq: the request being queued + */ +blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq) +{ + blk_status_t ret; + + ret = blk_cloned_rq_check_limits(q, rq); + if (ret != BLK_STS_OK) + return ret; + + if (rq->rq_disk && + should_fail_request(rq->rq_disk->part0, blk_rq_bytes(rq))) + return BLK_STS_IOERR; + + if (blk_crypto_insert_cloned_request(rq)) + return BLK_STS_IOERR; + + blk_account_io_start(rq); + + /* + * Since we have a scheduler attached on the top device, + * bypass a potential scheduler on the bottom device for + * insert. + */ + return blk_mq_request_issue_directly(rq, true); +} +EXPORT_SYMBOL_GPL(blk_insert_cloned_request); + +/** + * blk_rq_unprep_clone - Helper function to free all bios in a cloned request + * @rq: the clone request to be cleaned up + * + * Description: + * Free all bios in @rq for a cloned request. + */ +void blk_rq_unprep_clone(struct request *rq) +{ + struct bio *bio; + + while ((bio = rq->bio) != NULL) { + rq->bio = bio->bi_next; + + bio_put(bio); + } +} +EXPORT_SYMBOL_GPL(blk_rq_unprep_clone); + +/** + * blk_rq_prep_clone - Helper function to setup clone request + * @rq: the request to be setup + * @rq_src: original request to be cloned + * @bs: bio_set that bios for clone are allocated from + * @gfp_mask: memory allocation mask for bio + * @bio_ctr: setup function to be called for each clone bio. + * Returns %0 for success, non %0 for failure. + * @data: private data to be passed to @bio_ctr + * + * Description: + * Clones bios in @rq_src to @rq, and copies attributes of @rq_src to @rq. + * Also, pages which the original bios are pointing to are not copied + * and the cloned bios just point same pages. + * So cloned bios must be completed before original bios, which means + * the caller must complete @rq before @rq_src. + */ +int blk_rq_prep_clone(struct request *rq, struct request *rq_src, + struct bio_set *bs, gfp_t gfp_mask, + int (*bio_ctr)(struct bio *, struct bio *, void *), + void *data) +{ + struct bio *bio, *bio_src; + + if (!bs) + bs = &fs_bio_set; + + __rq_for_each_bio(bio_src, rq_src) { + bio = bio_clone_fast(bio_src, gfp_mask, bs); + if (!bio) + goto free_and_out; + + if (bio_ctr && bio_ctr(bio, bio_src, data)) + goto free_and_out; + + if (rq->bio) { + rq->biotail->bi_next = bio; + rq->biotail = bio; + } else { + rq->bio = rq->biotail = bio; + } + bio = NULL; + } + + /* Copy attributes of the original request to the clone request. 
*/ + rq->__sector = blk_rq_pos(rq_src); + rq->__data_len = blk_rq_bytes(rq_src); + if (rq_src->rq_flags & RQF_SPECIAL_PAYLOAD) { + rq->rq_flags |= RQF_SPECIAL_PAYLOAD; + rq->special_vec = rq_src->special_vec; + } + rq->nr_phys_segments = rq_src->nr_phys_segments; + rq->ioprio = rq_src->ioprio; + + if (rq->bio && blk_crypto_rq_bio_prep(rq, rq->bio, gfp_mask) < 0) + goto free_and_out; + + return 0; + +free_and_out: + if (bio) + bio_put(bio); + blk_rq_unprep_clone(rq); + + return -ENOMEM; +} +EXPORT_SYMBOL_GPL(blk_rq_prep_clone); + static size_t order_to_size(unsigned int order) { return (size_t)PAGE_SIZE << order; diff --git a/block/blk-mq.h b/block/blk-mq.h index 08fb5922e611b..6e25303dbebe8 100644 --- a/block/blk-mq.h +++ b/block/blk-mq.h @@ -65,9 +65,6 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head, bool run_queue); void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx, struct list_head *list); - -/* Used by blk_insert_cloned_request() to issue request directly */ -blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last); void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx, struct list_head *list); diff --git a/block/blk.h b/block/blk.h index 6a039e6c7d077..c5fc3f97d1e5a 100644 --- a/block/blk.h +++ b/block/blk.h @@ -454,4 +454,14 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg); extern const struct address_space_operations def_blk_aops; +#ifdef CONFIG_FAIL_MAKE_REQUEST +bool should_fail_request(struct block_device *part, unsigned int bytes); +#else /* CONFIG_FAIL_MAKE_REQUEST */ +static inline bool should_fail_request(struct block_device *part, + unsigned int bytes) +{ + return false; +} +#endif /* CONFIG_FAIL_MAKE_REQUEST */ + #endif /* BLK_INTERNAL_H */ From patchwork Mon Oct 25 07:05:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 12580865 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 78EFBC433EF for ; Mon, 25 Oct 2021 07:05:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6229660FD7 for ; Mon, 25 Oct 2021 07:05:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230472AbhJYHIF (ORCPT ); Mon, 25 Oct 2021 03:08:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50720 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231250AbhJYHID (ORCPT ); Mon, 25 Oct 2021 03:08:03 -0400 Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CE7DDC061764; Mon, 25 Oct 2021 00:05:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=VwIAXfg3Pd/+emRDddHmnlUH4lzqOjvNN9CpS0qtZKY=; b=C/7Vfaty1LCuSfdkhQm7aJHjtg CCMq6pRLNRBXeQaYpDOUBb2jdSo7fxiRduZfGpIL2F5/sUlOQu7Op8CD1n0KEj/AOiJvQ8jcA7cU8 UTimfY5dUV4vhDkn3408DtGPnk4q0Y4dzqldibKqe1RcJMs76WWOgIVJf10fYCrR3TNlHtoLZpUVo 
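For orientation between patches: the consumers of the helpers moved above are request-based stacking drivers such as dm-multipath. The sketch below shows the calling pattern under stated assumptions; clone_and_issue() and the caller-owned clone, bottom queue and bio_set are hypothetical, while the blk_rq_prep_clone(), blk_insert_cloned_request() and blk_rq_unprep_clone() calls use the signatures visible in the patch above.

/*
 * Minimal sketch of how a request-based stacking driver reissues an
 * upper-level request on a bottom device. Not a real in-tree function.
 */
static blk_status_t clone_and_issue(struct request *rq, struct request *clone,
                                    struct request_queue *bottom_q,
                                    struct bio_set *bs)
{
        blk_status_t ret;

        /* Clone the bios of @rq into @clone; no bio_ctr callback needed. */
        if (blk_rq_prep_clone(clone, rq, bs, GFP_ATOMIC, NULL, NULL))
                return BLK_STS_RESOURCE;

        /* Re-check limits against the bottom queue and issue directly. */
        ret = blk_insert_cloned_request(bottom_q, clone);
        if (ret != BLK_STS_OK)
                blk_rq_unprep_clone(clone);
        return ret;
}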
From patchwork Mon Oct 25 07:05:12 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12580865
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen" , Miquel Raynal , Richard Weinberger ,
 Vignesh Raghavendra , linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 07/12] block: move blk_rq_init to blk-mq.c
Date: Mon, 25 Oct 2021 09:05:12 +0200
Message-Id: <20211025070517.1548584-8-hch@lst.de>
In-Reply-To: <20211025070517.1548584-1-hch@lst.de>
References: <20211025070517.1548584-1-hch@lst.de>

blk_rq_init deals with a request structure, so move it to blk-mq.c.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c | 17 -----------------
 block/blk-mq.c   | 17 +++++++++++++++++
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 8ae6ba9ec1050..1ee942266df8d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -109,23 +109,6 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q)
 }
 EXPORT_SYMBOL_GPL(blk_queue_flag_test_and_set);
 
-void blk_rq_init(struct request_queue *q, struct request *rq)
-{
-        memset(rq, 0, sizeof(*rq));
-
-        INIT_LIST_HEAD(&rq->queuelist);
-        rq->q = q;
-        rq->__sector = (sector_t) -1;
-        INIT_HLIST_NODE(&rq->hash);
-        RB_CLEAR_NODE(&rq->rb_node);
-        rq->tag = BLK_MQ_NO_TAG;
-        rq->internal_tag = BLK_MQ_NO_TAG;
-        rq->start_time_ns = ktime_get_ns();
-        rq->part = NULL;
-        blk_crypto_rq_set_defaults(rq);
-}
-EXPORT_SYMBOL(blk_rq_init);
-
 #define REQ_OP_NAME(name) [REQ_OP_##name] = #name
 static const char *const blk_op_name[] = {
         REQ_OP_NAME(READ),
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 923231023fe84..a0505099b2ce2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -316,6 +316,23 @@ void blk_mq_wake_waiters(struct request_queue *q)
                 blk_mq_tag_wakeup_all(hctx->tags, true);
 }
 
+void blk_rq_init(struct request_queue *q, struct request *rq)
+{
+        memset(rq, 0, sizeof(*rq));
+
+        INIT_LIST_HEAD(&rq->queuelist);
+        rq->q = q;
+        rq->__sector = (sector_t) -1;
+        INIT_HLIST_NODE(&rq->hash);
+        RB_CLEAR_NODE(&rq->rb_node);
+        rq->tag = BLK_MQ_NO_TAG;
+        rq->internal_tag = BLK_MQ_NO_TAG;
+        rq->start_time_ns = ktime_get_ns();
+        rq->part = NULL;
+        blk_crypto_rq_set_defaults(rq);
+}
+EXPORT_SYMBOL(blk_rq_init);
+
 static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
                 unsigned int tag, u64 alloc_time_ns)
 {
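A note on why blk_rq_init() stays exported: it initializes a request that was not allocated through the regular blk-mq tag allocation, for example the flush request the block layer embeds in memory it owns. A minimal sketch of that pattern, assuming a hypothetical embedding structure:

/*
 * Sketch only: my_ctx and my_ctx_setup() are hypothetical; the real
 * in-tree caller of this pattern is the flush machinery in blk-flush.c.
 */
struct my_ctx {
        struct request flush_rq;        /* request in caller-owned memory */
};

static void my_ctx_setup(struct request_queue *q, struct my_ctx *ctx)
{
        /* Zeroes the request and sets sane defaults (tags, start time). */
        blk_rq_init(q, &ctx->flush_rq);
}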
From patchwork Mon Oct 25 07:05:13 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12580875
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen" , Miquel Raynal , Richard Weinberger ,
 Vignesh Raghavendra , linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 08/12] block: move blk_steal_bios to blk-mq.c
Date: Mon, 25 Oct 2021 09:05:13 +0200
Message-Id: <20211025070517.1548584-9-hch@lst.de>
In-Reply-To: <20211025070517.1548584-1-hch@lst.de>
References: <20211025070517.1548584-1-hch@lst.de>

Keep all the request based code together.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c | 21 ---------------------
 block/blk-mq.c   | 21 +++++++++++++++++++++
 2 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 1ee942266df8d..98cb9d69a4068 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1191,27 +1191,6 @@ void disk_end_io_acct(struct gendisk *disk, unsigned int op,
 }
 EXPORT_SYMBOL(disk_end_io_acct);
 
-/*
- * Steal bios from a request and add them to a bio list.
- * The request must not have been partially completed before.
- */
-void blk_steal_bios(struct bio_list *list, struct request *rq)
-{
-        if (rq->bio) {
-                if (list->tail)
-                        list->tail->bi_next = rq->bio;
-                else
-                        list->head = rq->bio;
-                list->tail = rq->biotail;
-
-                rq->bio = NULL;
-                rq->biotail = NULL;
-        }
-
-        rq->__data_len = 0;
-}
-EXPORT_SYMBOL_GPL(blk_steal_bios);
-
 /**
  * blk_lld_busy - Check if underlying low-level drivers of a device are busy
  * @q : the queue of the device being checked
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a0505099b2ce2..06fb74166aded 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2910,6 +2910,27 @@ int blk_rq_prep_clone(struct request *rq, struct request *rq_src,
 }
 EXPORT_SYMBOL_GPL(blk_rq_prep_clone);
 
+/*
+ * Steal bios from a request and add them to a bio list.
+ * The request must not have been partially completed before.
+ */
+void blk_steal_bios(struct bio_list *list, struct request *rq)
+{
+        if (rq->bio) {
+                if (list->tail)
+                        list->tail->bi_next = rq->bio;
+                else
+                        list->head = rq->bio;
+                list->tail = rq->biotail;
+
+                rq->bio = NULL;
+                rq->biotail = NULL;
+        }
+
+        rq->__data_len = 0;
+}
+EXPORT_SYMBOL_GPL(blk_steal_bios);
+
 static size_t order_to_size(unsigned int order)
 {
         return (size_t)PAGE_SIZE << order;
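For context: blk_steal_bios() lets a driver take the bios off a request it cannot complete on the current path and resubmit them elsewhere; nvme multipath failover is the typical in-tree user. A simplified sketch of that pattern, where failover_rq() is hypothetical and requeue handling is reduced to direct resubmission:

/*
 * Sketch only: take the bios off a failed request, end the request,
 * and resubmit the bios so the stack can pick another path.
 */
static void failover_rq(struct request *rq)
{
        struct bio_list bios;
        struct bio *bio;

        bio_list_init(&bios);
        blk_steal_bios(&bios, rq);      /* rq->bio is NULL afterwards */
        blk_mq_end_request(rq, BLK_STS_OK);

        while ((bio = bio_list_pop(&bios)))
                submit_bio_noacct(bio); /* resubmitted from scratch */
}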
From patchwork Mon Oct 25 07:05:14 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12580869
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen" , Miquel Raynal , Richard Weinberger ,
 Vignesh Raghavendra , linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 09/12] block: move blk_account_io_{start,done} to blk-mq.c
Date: Mon, 25 Oct 2021 09:05:14 +0200
Message-Id: <20211025070517.1548584-10-hch@lst.de>
In-Reply-To: <20211025070517.1548584-1-hch@lst.de>
References: <20211025070517.1548584-1-hch@lst.de>

These are only used for request based I/O, so move them where they
are used.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c | 27 +--------------------------
 block/blk-mq.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
 block/blk.h      | 21 +--------------------
 3 files changed, 44 insertions(+), 46 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 98cb9d69a4068..5ca47d25a2ef2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1088,8 +1088,7 @@ int iocb_bio_iopoll(struct kiocb *kiocb, struct io_comp_batch *iob,
 }
 EXPORT_SYMBOL_GPL(iocb_bio_iopoll);
 
-static void update_io_ticks(struct block_device *part, unsigned long now,
-                bool end)
+void update_io_ticks(struct block_device *part, unsigned long now, bool end)
 {
         unsigned long stamp;
 again:
@@ -1104,30 +1103,6 @@ static void update_io_ticks(struct block_device *part, unsigned long now,
         }
 }
 
-void __blk_account_io_done(struct request *req, u64 now)
-{
-        const int sgrp = op_stat_group(req_op(req));
-
-        part_stat_lock();
-        update_io_ticks(req->part, jiffies, true);
-        part_stat_inc(req->part, ios[sgrp]);
-        part_stat_add(req->part, nsecs[sgrp], now - req->start_time_ns);
-        part_stat_unlock();
-}
-
-void __blk_account_io_start(struct request *rq)
-{
-        /* passthrough requests can hold bios that do not have ->bi_bdev set */
-        if (rq->bio && rq->bio->bi_bdev)
-                rq->part = rq->bio->bi_bdev;
-        else
-                rq->part = rq->rq_disk->part0;
-
-        part_stat_lock();
-        update_io_ticks(rq->part, jiffies, false);
-        part_stat_unlock();
-}
-
 static unsigned long __part_start_io_acct(struct block_device *part,
                                           unsigned int sectors, unsigned int op)
 {
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 06fb74166aded..7df80c4da3777 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -790,6 +790,48 @@ bool blk_update_request(struct request *req, blk_status_t error,
 }
 EXPORT_SYMBOL_GPL(blk_update_request);
 
+static void __blk_account_io_done(struct request *req, u64 now)
+{
+        const int sgrp = op_stat_group(req_op(req));
+
+        part_stat_lock();
+        update_io_ticks(req->part, jiffies, true);
+        part_stat_inc(req->part, ios[sgrp]);
+        part_stat_add(req->part, nsecs[sgrp], now - req->start_time_ns);
+        part_stat_unlock();
+}
+
+static inline void blk_account_io_done(struct request *req, u64 now)
+{
+        /*
+         * Account IO completion. flush_rq isn't accounted as a
+         * normal IO on queueing nor completion. Accounting the
+         * containing request is enough.
+         */
+        if (blk_do_io_stat(req) && req->part &&
+            !(req->rq_flags & RQF_FLUSH_SEQ))
+                __blk_account_io_done(req, now);
+}
+
+static void __blk_account_io_start(struct request *rq)
+{
+        /* passthrough requests can hold bios that do not have ->bi_bdev set */
+        if (rq->bio && rq->bio->bi_bdev)
+                rq->part = rq->bio->bi_bdev;
+        else
+                rq->part = rq->rq_disk->part0;
+
+        part_stat_lock();
+        update_io_ticks(rq->part, jiffies, false);
+        part_stat_unlock();
+}
+
+static inline void blk_account_io_start(struct request *req)
+{
+        if (blk_do_io_stat(req))
+                __blk_account_io_start(req);
+}
+
 static inline void __blk_mq_end_request_acct(struct request *rq, u64 now)
 {
         if (rq->rq_flags & RQF_STATS) {
diff --git a/block/blk.h b/block/blk.h
index c5fc3f97d1e5a..8bcca0538faac 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -222,9 +222,6 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
 bool blk_bio_list_merge(struct request_queue *q, struct list_head *list,
                 struct bio *bio, unsigned int nr_segs);
 
-void __blk_account_io_start(struct request *req);
-void __blk_account_io_done(struct request *req, u64 now);
-
 /*
  * Plug flush limits
  */
@@ -315,23 +312,7 @@ static inline bool blk_do_io_stat(struct request *rq)
         return (rq->rq_flags & RQF_IO_STAT) && rq->rq_disk;
 }
 
-static inline void blk_account_io_done(struct request *req, u64 now)
-{
-        /*
-         * Account IO completion. flush_rq isn't accounted as a
-         * normal IO on queueing nor completion. Accounting the
-         * containing request is enough.
-         */
-        if (blk_do_io_stat(req) && req->part &&
-            !(req->rq_flags & RQF_FLUSH_SEQ))
-                __blk_account_io_done(req, now);
-}
-
-static inline void blk_account_io_start(struct request *req)
-{
-        if (blk_do_io_stat(req))
-                __blk_account_io_start(req);
-}
+void update_io_ticks(struct block_device *part, unsigned long now, bool end);
 
 static inline void req_set_nomerge(struct request_queue *q, struct request *req)
 {
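Worth spelling out the split this patch leaves behind: the request-based accounting helpers become private to blk-mq.c, while update_io_ticks() is declared in blk.h because the bio-based accounting in blk-core.c still shares it. Bio-based drivers keep using the disk_{start,end}_io_acct() pair that stays in blk-core.c; a minimal sketch, where my_submit_bio() is a hypothetical driver entry point:

/*
 * Sketch only: bio-based accounting as seen from a driver. The
 * disk_start_io_acct()/disk_end_io_acct() calls are the real API
 * visible in the diff context above.
 */
static void my_submit_bio(struct bio *bio)
{
        unsigned long start;

        start = disk_start_io_acct(bio->bi_bdev->bd_disk,
                                   bio_sectors(bio), bio_op(bio));
        /* ... perform the actual I/O ... */
        disk_end_io_acct(bio->bi_bdev->bd_disk, bio_op(bio), start);
}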
From patchwork Mon Oct 25 07:05:15 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12580867
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen" , Miquel Raynal , Richard Weinberger ,
 Vignesh Raghavendra , linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 10/12] block: move blk_dump_rq_flags to blk-mq.c
Date: Mon, 25 Oct 2021 09:05:15 +0200
Message-Id: <20211025070517.1548584-11-hch@lst.de>
In-Reply-To: <20211025070517.1548584-1-hch@lst.de>
References: <20211025070517.1548584-1-hch@lst.de>

blk_dump_rq_flags deals with a request, so move it to blk-mq.c.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c | 14 --------------
 block/blk-mq.c   | 14 ++++++++++++++
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 5ca47d25a2ef2..9547a75e7a7aa 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -217,20 +217,6 @@ void blk_print_req_error(struct request *req, blk_status_t status)
                 IOPRIO_PRIO_CLASS(req->ioprio));
 }
 
-void blk_dump_rq_flags(struct request *rq, char *msg)
-{
-        printk(KERN_INFO "%s: dev %s: flags=%llx\n", msg,
-                rq->rq_disk ? rq->rq_disk->disk_name : "?",
-                (unsigned long long) rq->cmd_flags);
-
-        printk(KERN_INFO "  sector %llu, nr/cnr %u/%u\n",
-                (unsigned long long)blk_rq_pos(rq),
-                blk_rq_sectors(rq), blk_rq_cur_sectors(rq));
-        printk(KERN_INFO "  bio %p, biotail %p, len %u\n",
-                rq->bio, rq->biotail, blk_rq_bytes(rq));
-}
-EXPORT_SYMBOL(blk_dump_rq_flags);
-
 /**
  * blk_sync_queue - cancel any pending callbacks on a queue
  * @q: the queue
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7df80c4da3777..425cd134ac358 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -648,6 +648,20 @@ void blk_mq_free_plug_rqs(struct blk_plug *plug)
         }
 }
 
+void blk_dump_rq_flags(struct request *rq, char *msg)
+{
+        printk(KERN_INFO "%s: dev %s: flags=%llx\n", msg,
+                rq->rq_disk ? rq->rq_disk->disk_name : "?",
+                (unsigned long long) rq->cmd_flags);
+
+        printk(KERN_INFO "  sector %llu, nr/cnr %u/%u\n",
+                (unsigned long long)blk_rq_pos(rq),
+                blk_rq_sectors(rq), blk_rq_cur_sectors(rq));
+        printk(KERN_INFO "  bio %p, biotail %p, len %u\n",
+                rq->bio, rq->biotail, blk_rq_bytes(rq));
+}
+EXPORT_SYMBOL(blk_dump_rq_flags);
+
 static void req_bio_endio(struct request *rq, struct bio *bio,
                           unsigned int nbytes, blk_status_t error)
 {
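blk_dump_rq_flags() remains exported after the move, so driver debug paths are unaffected. A sketch of a typical call site, assuming a hypothetical blk-mq timeout handler (the handler signature matches the blk_mq_ops ->timeout callback of this era):

/*
 * Sketch only: my_timeout() is hypothetical; blk_dump_rq_flags() is
 * the real helper moved in the patch above.
 */
static enum blk_eh_timer_return my_timeout(struct request *rq, bool reserved)
{
        /* Log the stuck request's flags, position and bio chain. */
        blk_dump_rq_flags(rq, "request timed out");
        return BLK_EH_RESET_TIMER;
}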
From patchwork Mon Oct 25 07:05:16 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12580871
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen" , Miquel Raynal , Richard Weinberger ,
 Vignesh Raghavendra , linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 11/12] block: move blk_print_req_error to blk-mq.c
Date: Mon, 25 Oct 2021 09:05:16 +0200
Message-Id: <20211025070517.1548584-12-hch@lst.de>
In-Reply-To: <20211025070517.1548584-1-hch@lst.de>
References: <20211025070517.1548584-1-hch@lst.de>

This function is only used by the request completion path. Factor out
a blk_status_to_str to keep blk_errors private in blk-core.c.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c | 15 +++------------
 block/blk-mq.c   | 13 +++++++++++++
 block/blk.h      |  2 +-
 3 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9547a75e7a7aa..ebcea0dac973d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -199,22 +199,13 @@ int blk_status_to_errno(blk_status_t status)
 }
 EXPORT_SYMBOL_GPL(blk_status_to_errno);
 
-void blk_print_req_error(struct request *req, blk_status_t status)
+const char *blk_status_to_str(blk_status_t status)
 {
         int idx = (__force int)status;
 
         if (WARN_ON_ONCE(idx >= ARRAY_SIZE(blk_errors)))
-                return;
-
-        printk_ratelimited(KERN_ERR
-                "%s error, dev %s, sector %llu op 0x%x:(%s) flags 0x%x "
-                "phys_seg %u prio class %u\n",
-                blk_errors[idx].name,
-                req->rq_disk ? req->rq_disk->disk_name : "?",
-                blk_rq_pos(req), req_op(req), blk_op_str(req_op(req)),
-                req->cmd_flags & ~REQ_OP_MASK,
-                req->nr_phys_segments,
-                IOPRIO_PRIO_CLASS(req->ioprio));
+                return "<null>";
+        return blk_errors[idx].name;
 }
 
 /**
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 425cd134ac358..3edd4ee2007b0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -698,6 +698,19 @@ static void blk_account_io_completion(struct request *req, unsigned int bytes)
         }
 }
 
+static void blk_print_req_error(struct request *req, blk_status_t status)
+{
+        printk_ratelimited(KERN_ERR
+                "%s error, dev %s, sector %llu op 0x%x:(%s) flags 0x%x "
+                "phys_seg %u prio class %u\n",
+                blk_status_to_str(status),
+                req->rq_disk ? req->rq_disk->disk_name : "?",
+                blk_rq_pos(req), req_op(req), blk_op_str(req_op(req)),
+                req->cmd_flags & ~REQ_OP_MASK,
+                req->nr_phys_segments,
+                IOPRIO_PRIO_CLASS(req->ioprio));
+}
+
 /**
  * blk_update_request - Complete multiple bytes without completing the request
  * @req: the request being processed
diff --git a/block/blk.h b/block/blk.h
index 8bcca0538faac..573e308851d27 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -215,7 +215,7 @@ static inline void blk_integrity_del(struct gendisk *disk)
 
 unsigned long blk_rq_timeout(unsigned long timeout);
 void blk_add_timer(struct request *req);
-void blk_print_req_error(struct request *req, blk_status_t status);
+const char *blk_status_to_str(blk_status_t status);
 
 bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
                 unsigned int nr_segs, bool *same_queue_rq);
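The newly factored-out blk_status_to_str() stays private to the block layer (declared in block/blk.h), so only block-internal code can map a blk_status_t to its name without reaching into the blk_errors array. A hypothetical sketch of such a caller:

/*
 * Sketch only: report_failure() is not in the patch; it illustrates
 * the kind of block-internal caller blk_status_to_str() enables.
 */
static void report_failure(struct request *req, blk_status_t status)
{
        pr_err("I/O failed: %s (sector %llu)\n",
               blk_status_to_str(status),
               (unsigned long long)blk_rq_pos(req));
}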
From patchwork Mon Oct 25 07:05:17 2021
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12580873
From: Christoph Hellwig
To: Jens Axboe
Cc: "Martin K. Petersen" , Miquel Raynal , Richard Weinberger ,
 Vignesh Raghavendra , linux-block@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org
Subject: [PATCH 12/12] block: don't include blk-mq headers in blk-core.c
Date: Mon, 25 Oct 2021 09:05:17 +0200
Message-Id: <20211025070517.1548584-13-hch@lst.de>
In-Reply-To: <20211025070517.1548584-1-hch@lst.de>
References: <20211025070517.1548584-1-hch@lst.de>

All request based code is in the blk-mq files now.

Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index ebcea0dac973d..94b76d62a4396 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -16,7 +16,6 @@
 #include
 #include
 #include
-#include <linux/blk-mq.h>
 #include
 #include
 #include
@@ -47,8 +46,6 @@
 #include
 
 #include "blk.h"
-#include "blk-mq.h"
-#include "blk-mq-sched.h"
 #include "blk-pm.h"
 #include "blk-throttle.h"