From patchwork Thu Nov 29 19:13:08 2018
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe, Keith Busch, Sagi Grimberg
Cc: Max Gurtovoy, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Subject: [PATCH 11/13] block: remove ->poll_fn
Date: Thu, 29 Nov 2018 20:13:08 +0100
Message-Id: <20181129191310.9795-12-hch@lst.de>
In-Reply-To: <20181129191310.9795-1-hch@lst.de>
References: <20181129191310.9795-1-hch@lst.de>

This was intended to support users like nvme multipath, but is just
getting in the way and adding another indirect call.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c       | 23 -----------------------
 block/blk-mq.c         | 24 +++++++++++++++++++-----
 include/linux/blkdev.h |  2 --
 3 files changed, 19 insertions(+), 30 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 3f6f5e6c2fe4..942276399085 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1249,29 +1249,6 @@ blk_qc_t submit_bio(struct bio *bio)
 }
 EXPORT_SYMBOL(submit_bio);
 
-/**
- * blk_poll - poll for IO completions
- * @q:  the queue
- * @cookie: cookie passed back at IO submission time
- * @spin: whether to spin for completions
- *
- * Description:
- *    Poll for completions on the passed in queue. Returns number of
- *    completed entries found. If @spin is true, then blk_poll will continue
- *    looping until at least one completion is found, unless the task is
- *    otherwise marked running (or we need to reschedule).
- */
-int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
-{
-	if (!q->poll_fn || !blk_qc_t_valid(cookie))
-		return 0;
-
-	if (current->plug)
-		blk_flush_plug_list(current->plug, false);
-	return q->poll_fn(q, cookie, spin);
-}
-EXPORT_SYMBOL_GPL(blk_poll);
-
 /**
  * blk_cloned_rq_check_limits - Helper function to check a cloned request
  * for new the queue limits
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7dcef565dc0f..9c90c5038d07 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -38,7 +38,6 @@
 #include "blk-mq-sched.h"
 #include "blk-rq-qos.h"
 
-static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, bool spin);
 static void blk_mq_poll_stats_start(struct request_queue *q);
 static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
 
@@ -2823,8 +2822,6 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	spin_lock_init(&q->requeue_lock);
 
 	blk_queue_make_request(q, blk_mq_make_request);
-	if (q->mq_ops->poll)
-		q->poll_fn = blk_mq_poll;
 
 	/*
 	 * Do this after blk_queue_make_request() overrides it...
@@ -3385,14 +3382,30 @@ static bool blk_mq_poll_hybrid(struct request_queue *q,
 	return blk_mq_poll_hybrid_sleep(q, hctx, rq);
 }
 
-static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
+/**
+ * blk_poll - poll for IO completions
+ * @q:  the queue
+ * @cookie: cookie passed back at IO submission time
+ * @spin: whether to spin for completions
+ *
+ * Description:
+ *    Poll for completions on the passed in queue. Returns number of
+ *    completed entries found. If @spin is true, then blk_poll will continue
+ *    looping until at least one completion is found, unless the task is
+ *    otherwise marked running (or we need to reschedule).
+ */
+int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 {
 	struct blk_mq_hw_ctx *hctx;
 	long state;
 
-	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+	if (!blk_qc_t_valid(cookie) ||
+	    !test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
 		return 0;
 
+	if (current->plug)
+		blk_flush_plug_list(current->plug, false);
+
 	hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
 
 	/*
@@ -3433,6 +3446,7 @@ static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 	__set_current_state(TASK_RUNNING);
 	return 0;
 }
+EXPORT_SYMBOL_GPL(blk_poll);
 
 unsigned int blk_mq_rq_cpu(struct request *rq)
 {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 08d940f85fa0..0b3874bdbc6a 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -283,7 +283,6 @@ static inline unsigned short req_get_ioprio(struct request *req)
 struct blk_queue_ctx;
 
 typedef blk_qc_t (make_request_fn) (struct request_queue *q, struct bio *bio);
-typedef int (poll_q_fn) (struct request_queue *q, blk_qc_t, bool spin);
 
 struct bio_vec;
 typedef int (dma_drain_needed_fn)(struct request *);
@@ -401,7 +400,6 @@ struct request_queue {
 	struct rq_qos		*rq_qos;
 
 	make_request_fn		*make_request_fn;
-	poll_q_fn		*poll_fn;
 	dma_drain_needed_fn	*dma_drain_needed;
 
 	const struct blk_mq_ops	*mq_ops;
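
For context, and not as part of the patch itself, a minimal sketch of how a
synchronous polling consumer (loosely modeled on the direct I/O wait loop in
fs/block_dev.c) would drive blk_poll() once ->poll_fn is gone.  The function
name example_submit_and_poll() and the *done completion flag are made up for
illustration; the bio's end_io callback is assumed to set the flag:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/sched.h>

/* Illustrative only: submit a polled bio and spin/sleep until it completes. */
static int example_submit_and_poll(struct block_device *bdev, struct bio *bio,
				   bool *done)
{
	struct request_queue *q = bdev_get_queue(bdev);
	blk_qc_t cookie;

	cookie = submit_bio(bio);	/* cookie encodes the hardware queue */

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (READ_ONCE(*done))	/* assumed to be set by bi_end_io */
			break;
		/*
		 * blk_poll() now lives in blk-mq: it validates the cookie,
		 * checks QUEUE_FLAG_POLL, flushes any plug and polls the
		 * hardware queue directly, with no ->poll_fn indirection.
		 */
		if (!blk_poll(q, cookie, true))
			io_schedule();
	}
	__set_current_state(TASK_RUNNING);
	return 0;
}

If the queue does not have QUEUE_FLAG_POLL set, or the cookie is invalid,
blk_poll() returns 0 and a loop like the above simply sleeps until the
completion fires.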