From patchwork Tue Nov 13 15:42:23 2018
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 10681027
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe, Keith Busch, linux-nvme@lists.infradead.org
Subject: [PATCH 01/11] nvme: don't disable local ints for polled queue
Date: Tue, 13 Nov 2018 08:42:23 -0700
Message-Id: <20181113154233.15256-2-axboe@kernel.dk>
In-Reply-To: <20181113154233.15256-1-axboe@kernel.dk>
References: <20181113154233.15256-1-axboe@kernel.dk>

A polled queue doesn't trigger interrupts, so it's always safe to grab
the queue lock without
disabling interrupts.

Cc: Keith Busch
Cc: linux-nvme@lists.infradead.org
Signed-off-by: Jens Axboe
---
 drivers/nvme/host/pci.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6aa86dfcb32c..bb22ae567208 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1067,9 +1067,18 @@ static int __nvme_poll(struct nvme_queue *nvmeq, unsigned int tag)
         if (!nvme_cqe_pending(nvmeq))
                 return 0;
 
-        spin_lock_irq(&nvmeq->cq_lock);
+        /*
+         * Polled queue doesn't have an IRQ, no need to disable ints
+         */
+        if (!nvmeq->polled)
+                local_irq_disable();
+
+        spin_lock(&nvmeq->cq_lock);
         found = nvme_process_cq(nvmeq, &start, &end, tag);
-        spin_unlock_irq(&nvmeq->cq_lock);
+        spin_unlock(&nvmeq->cq_lock);
+
+        if (!nvmeq->polled)
+                local_irq_enable();
 
         nvme_complete_cqes(nvmeq, start, end);
         return found;
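The locking pattern in patch 01 can be modeled in plain userspace C. This is an illustrative sketch, not kernel code: `fake_queue`, `fake_poll()` and the `fake_local_irq_*` counters are invented stand-ins for `struct nvme_queue`, `__nvme_poll()` and `local_irq_disable()`/`local_irq_enable()`. The point is only that the local IRQ state is touched when the queue has an interrupt handler, and skipped entirely for a polled queue.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for local_irq_disable()/local_irq_enable();
 * transitions are counted so a test can observe them. */
static int irq_disable_depth;
static int irq_toggle_count;

static void fake_local_irq_disable(void)
{
        irq_disable_depth++;
        irq_toggle_count++;
}

static void fake_local_irq_enable(void)
{
        irq_disable_depth--;
        irq_toggle_count++;
}

/* Invented stand-in for struct nvme_queue. */
struct fake_queue {
        bool polled;    /* polled queues never take an IRQ */
        int cq_lock;    /* models spin_lock()/spin_unlock() */
        int processed;  /* models nvme_process_cq() work */
};

/* Mirrors the shape of __nvme_poll() after the patch: local IRQs are
 * only disabled around the lock when the queue actually has an IRQ. */
static int fake_poll(struct fake_queue *q)
{
        if (!q->polled)
                fake_local_irq_disable();

        q->cq_lock = 1;         /* spin_lock(&nvmeq->cq_lock) */
        q->processed++;         /* nvme_process_cq(...) */
        q->cq_lock = 0;         /* spin_unlock(&nvmeq->cq_lock) */

        if (!q->polled)
                fake_local_irq_enable();

        return q->processed;
}
```

Polling a polled queue never touches the IRQ counters; polling an interrupt-driven queue toggles them exactly twice and leaves the depth balanced at zero.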
From patchwork Tue Nov 13 15:42:24 2018
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 10681029
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 02/11] block: add queue_is_mq() helper
Date: Tue, 13 Nov 2018 08:42:24 -0700
Message-Id: <20181113154233.15256-3-axboe@kernel.dk>
In-Reply-To: <20181113154233.15256-1-axboe@kernel.dk>
References: <20181113154233.15256-1-axboe@kernel.dk>

Various spots check for q->mq_ops being non-NULL; provide a helper to
do this instead. Where the ->mq_ops != NULL check is redundant, remove
it.

Signed-off-by: Jens Axboe
---
 block/blk-cgroup.c     |  8 ++++----
 block/blk-core.c       | 10 +++++-----
 block/blk-flush.c      |  3 +--
 block/blk-mq.c         |  2 +-
 block/blk-sysfs.c      | 14 +++++++-------
 block/blk-wbt.c        |  2 +-
 block/elevator.c       |  9 ++++-----
 block/genhd.c          |  8 ++++----
 drivers/md/dm-table.c  |  2 +-
 include/linux/blkdev.h | 10 +++++++---
 10 files changed, 35 insertions(+), 33 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 6c65791bc3fe..8da8d3773ecf 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1349,7 +1349,7 @@ int blkcg_activate_policy(struct request_queue *q,
         if (blkcg_policy_enabled(q, pol))
                 return 0;
 
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 blk_mq_freeze_queue(q);
 pd_prealloc:
         if (!pd_prealloc) {
@@ -1388,7 +1388,7 @@ int blkcg_activate_policy(struct request_queue *q,
 
         spin_unlock_irq(q->queue_lock);
 out_bypass_end:
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 blk_mq_unfreeze_queue(q);
         if (pd_prealloc)
                 pol->pd_free_fn(pd_prealloc);
@@ -1412,7 +1412,7 @@ void blkcg_deactivate_policy(struct request_queue *q,
         if (!blkcg_policy_enabled(q, pol))
                 return;
 
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 blk_mq_freeze_queue(q);
 
         spin_lock_irq(q->queue_lock);
 
@@ -1430,7 +1430,7 @@ void blkcg_deactivate_policy(struct request_queue *q,
 
         spin_unlock_irq(q->queue_lock);
 
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 blk_mq_unfreeze_queue(q);
 }
 EXPORT_SYMBOL_GPL(blkcg_deactivate_policy);
diff --git a/block/blk-core.c b/block/blk-core.c
index fdc0ad2686c4..ab6675fd3568 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -268,7 +268,7 @@ void blk_sync_queue(struct request_queue *q)
         del_timer_sync(&q->timeout);
         cancel_work_sync(&q->timeout_work);
 
-        if (q->mq_ops) {
+        if (queue_is_mq(q)) {
                 struct blk_mq_hw_ctx *hctx;
                 int i;
@@ -317,7 +317,7 @@ void blk_set_queue_dying(struct request_queue *q)
          */
         blk_freeze_queue_start(q);
 
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 blk_mq_wake_waiters(q);
 
         /* Make blk_queue_enter() reexamine the DYING flag. */
@@ -410,7 +410,7 @@ void blk_cleanup_queue(struct request_queue *q)
          * blk_freeze_queue() should be enough for cases of passthrough
          * request.
          */
-        if (q->mq_ops && blk_queue_init_done(q))
+        if (queue_is_mq(q) && blk_queue_init_done(q))
                 blk_mq_quiesce_queue(q);
 
         /* for synchronous bio-based driver finish in-flight integrity i/o */
@@ -428,7 +428,7 @@ void blk_cleanup_queue(struct request_queue *q)
 
         blk_exit_queue(q);
 
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 blk_mq_free_queue(q);
 
         percpu_ref_exit(&q->q_usage_counter);
@@ -1736,7 +1736,7 @@ EXPORT_SYMBOL_GPL(rq_flush_dcache_pages);
  */
 int blk_lld_busy(struct request_queue *q)
 {
-        if (q->mq_ops && q->mq_ops->busy)
+        if (queue_is_mq(q) && q->mq_ops->busy)
                 return q->mq_ops->busy(q);
 
         return 0;
diff --git a/block/blk-flush.c b/block/blk-flush.c
index c53197dcdd70..3b79bea03462 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -273,8 +273,7 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
          * assigned to empty flushes, and we deadlock if we are expecting
          * other requests to make progress. Don't defer for that case.
          */
-        if (!list_empty(&fq->flush_data_in_flight) &&
-            !(q->mq_ops && q->elevator) &&
+        if (!list_empty(&fq->flush_data_in_flight) && q->elevator &&
             time_before(jiffies, fq->flush_pending_since +
                         FLUSH_PENDING_TIMEOUT))
                 return;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 411be60d0cb6..eb9b9596d3de 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -150,7 +150,7 @@ void blk_freeze_queue_start(struct request_queue *q)
         freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
         if (freeze_depth == 1) {
                 percpu_ref_kill(&q->q_usage_counter);
-                if (q->mq_ops)
+                if (queue_is_mq(q))
                         blk_mq_run_hw_queues(q, false);
         }
 }
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index d4b1b84ba8ca..93635a693314 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -68,7 +68,7 @@ queue_requests_store(struct request_queue *q, const char *page, size_t count)
         unsigned long nr;
         int ret, err;
 
-        if (!q->mq_ops)
+        if (!queue_is_mq(q))
                 return -EINVAL;
 
         ret = queue_var_store(&nr, page, count);
@@ -839,12 +839,12 @@ static void __blk_release_queue(struct work_struct *work)
 
         blk_queue_free_zone_bitmaps(q);
 
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 blk_mq_release(q);
 
         blk_trace_shutdown(q);
 
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 blk_mq_debugfs_unregister(q);
 
         bioset_exit(&q->bio_split);
@@ -918,7 +918,7 @@ int blk_register_queue(struct gendisk *disk)
                 goto unlock;
         }
 
-        if (q->mq_ops) {
+        if (queue_is_mq(q)) {
                 __blk_mq_register_dev(dev, q);
                 blk_mq_debugfs_register(q);
         }
@@ -929,7 +929,7 @@ int blk_register_queue(struct gendisk *disk)
 
         blk_throtl_register_queue(q);
 
-        if ((q->mq_ops && q->elevator)) {
+        if (q->elevator) {
                 ret = elv_register_queue(q);
                 if (ret) {
                         mutex_unlock(&q->sysfs_lock);
@@ -978,7 +978,7 @@ void blk_unregister_queue(struct gendisk *disk)
          * Remove the sysfs attributes before unregistering the queue data
          * structures that can be modified through sysfs.
          */
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 blk_mq_unregister_dev(disk_to_dev(disk), q);
         mutex_unlock(&q->sysfs_lock);
@@ -987,7 +987,7 @@ void blk_unregister_queue(struct gendisk *disk)
 
         blk_trace_remove_sysfs(disk_to_dev(disk));
 
         mutex_lock(&q->sysfs_lock);
-        if (q->mq_ops && q->elevator)
+        if (q->elevator)
                 elv_unregister_queue(q);
         mutex_unlock(&q->sysfs_lock);
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 0fc222d4194b..b580763e8b1e 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -709,7 +709,7 @@ void wbt_enable_default(struct request_queue *q)
         if (!test_bit(QUEUE_FLAG_REGISTERED, &q->queue_flags))
                 return;
 
-        if (q->mq_ops && IS_ENABLED(CONFIG_BLK_WBT_MQ))
+        if (queue_is_mq(q) && IS_ENABLED(CONFIG_BLK_WBT_MQ))
                 wbt_init(q);
 }
 EXPORT_SYMBOL_GPL(wbt_enable_default);
diff --git a/block/elevator.c b/block/elevator.c
index 19351ffa56b1..9df9ae6b038e 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -674,7 +674,7 @@ static int __elevator_change(struct request_queue *q, const char *name)
         /*
          * Special case for mq, turn off scheduling
          */
-        if (q->mq_ops && !strncmp(name, "none", 4))
+        if (!strncmp(name, "none", 4))
                 return elevator_switch(q, NULL);
 
         strlcpy(elevator_name, name, sizeof(elevator_name));
@@ -692,8 +692,7 @@ static int __elevator_change(struct request_queue *q, const char *name)
 
 static inline bool elv_support_iosched(struct request_queue *q)
 {
-        if (q->mq_ops && q->tag_set && (q->tag_set->flags &
-                        BLK_MQ_F_NO_SCHED))
+        if (q->tag_set && (q->tag_set->flags & BLK_MQ_F_NO_SCHED))
                 return false;
         return true;
 }
@@ -703,7 +702,7 @@ ssize_t elv_iosched_store(struct request_queue *q, const char *name,
 {
         int ret;
 
-        if (!q->mq_ops || !elv_support_iosched(q))
+        if (!queue_is_mq(q) || !elv_support_iosched(q))
                 return count;
 
         ret = __elevator_change(q, name);
@@ -739,7 +738,7 @@ ssize_t elv_iosched_show(struct request_queue *q, char *name)
         }
         spin_unlock(&elv_list_lock);
 
-        if (q->mq_ops && q->elevator)
+        if (q->elevator)
                 len += sprintf(name+len, "none");
         len +=
 sprintf(len+name, "\n");
diff --git a/block/genhd.c b/block/genhd.c
index cff6bdf27226..0145bcb0cc76 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -47,7 +47,7 @@ static void disk_release_events(struct gendisk *disk);
 void part_inc_in_flight(struct request_queue *q, struct hd_struct *part, int rw)
 {
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 return;
 
         atomic_inc(&part->in_flight[rw]);
@@ -57,7 +57,7 @@ void part_inc_in_flight(struct request_queue *q, struct hd_struct *part, int rw)
 void part_dec_in_flight(struct request_queue *q, struct hd_struct *part, int rw)
 {
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 return;
 
         atomic_dec(&part->in_flight[rw]);
@@ -68,7 +68,7 @@ void part_dec_in_flight(struct request_queue *q, struct hd_struct *part, int rw)
 void part_in_flight(struct request_queue *q, struct hd_struct *part,
                     unsigned int inflight[2])
 {
-        if (q->mq_ops) {
+        if (queue_is_mq(q)) {
                 blk_mq_in_flight(q, part, inflight);
                 return;
         }
@@ -85,7 +85,7 @@ void part_in_flight(struct request_queue *q, struct hd_struct *part,
 void part_in_flight_rw(struct request_queue *q, struct hd_struct *part,
                        unsigned int inflight[2])
 {
-        if (q->mq_ops) {
+        if (queue_is_mq(q)) {
                 blk_mq_in_flight_rw(q, part, inflight);
                 return;
         }
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 9038c302d5c2..e42739177107 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -919,7 +919,7 @@ static int device_is_rq_based(struct dm_target *ti, struct dm_dev *dev,
         struct request_queue *q = bdev_get_queue(dev->bdev);
         struct verify_rq_based_data *v = data;
 
-        if (q->mq_ops)
+        if (queue_is_mq(q))
                 v->mq_count++;
         else
                 v->sq_count++;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index e67ad2dd025e..3712d1fe48d4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -671,13 +671,17 @@ static inline bool blk_account_rq(struct request *rq)
 
 #define rq_data_dir(rq)         (op_is_write(req_op(rq)) ?
 WRITE : READ)
 
+static inline bool queue_is_mq(struct request_queue *q)
+{
+        return q->mq_ops;
+}
+
 /*
- * Driver can handle struct request, if it either has an old style
- * request_fn defined, or is blk-mq based.
+ * Only blk-mq based drivers are rq based
  */
 static inline bool queue_is_rq_based(struct request_queue *q)
 {
-        return q->mq_ops;
+        return queue_is_mq(q);
 }
 
 static inline unsigned int blk_queue_cluster(struct request_queue *q)
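The helper introduced in patch 02 is a one-liner; a userspace model shows the intent. The struct layouts below are simplified stand-ins for the kernel's (the `_model` suffix marks them as invented), kept only to demonstrate that queue_is_mq() is nothing more than a NULL test on ->mq_ops.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures. */
struct blk_mq_ops_model {
        int (*queue_rq)(void);
};

struct request_queue_model {
        const struct blk_mq_ops_model *mq_ops;  /* NULL for bio-based queues */
};

/* Same idea as the new queue_is_mq() helper: a queue is blk-mq
 * managed exactly when it has an ops table attached. */
static bool queue_is_mq_model(const struct request_queue_model *q)
{
        return q->mq_ops != NULL;
}
```

Centralizing the test in one helper is what lets later patches change the representation of ->mq_ops (as patch 03 does) without touching every call site again.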
From patchwork Tue Nov 13 15:42:25 2018
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 10681031
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 03/11] blk-mq: embed blk_mq_ops directly in the request queue
Date: Tue, 13 Nov 2018 08:42:25 -0700
Message-Id: <20181113154233.15256-4-axboe@kernel.dk>
In-Reply-To: <20181113154233.15256-1-axboe@kernel.dk>
References: <20181113154233.15256-1-axboe@kernel.dk>

This saves an indirect function call every time we have to call one of
the strategy functions. We keep it const, and just hack around that a
bit in blk_mq_init_allocated_queue(), which is where we copy the ops in.

Signed-off-by: Jens Axboe
---
 block/blk-core.c           |   8 +--
 block/blk-mq-debugfs.c     |   2 +-
 block/blk-mq.c             |  22 ++++----
 block/blk-mq.h             |  12 ++---
 block/blk-softirq.c        |   4 +-
 block/blk-sysfs.c          |   4 +-
 drivers/scsi/scsi_lib.c    |   2 +-
 include/linux/blk-mq-ops.h | 100 +++++++++++++++++++++++++++++++++++++
 include/linux/blk-mq.h     |  94 +---------------------------------
 include/linux/blkdev.h     |   6 ++-
 10 files changed, 132 insertions(+), 122 deletions(-)
 create mode 100644 include/linux/blk-mq-ops.h

diff --git a/block/blk-core.c b/block/blk-core.c
index ab6675fd3568..88400ab166ac 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -656,8 +656,8 @@ struct request *blk_get_request(struct request_queue *q, unsigned int op,
         WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PREEMPT));
 
         req = blk_mq_alloc_request(q, op, flags);
-        if (!IS_ERR(req) && q->mq_ops->initialize_rq_fn)
-                q->mq_ops->initialize_rq_fn(req);
+        if (!IS_ERR(req) && q->mq_ops.initialize_rq_fn)
+                q->mq_ops.initialize_rq_fn(req);
 
         return req;
 }
@@ -1736,8 +1736,8 @@ EXPORT_SYMBOL_GPL(rq_flush_dcache_pages);
  */
 int blk_lld_busy(struct request_queue *q)
 {
-        if (queue_is_mq(q) && q->mq_ops->busy)
-                return q->mq_ops->busy(q);
+        if (q->mq_ops.busy)
+                return q->mq_ops.busy(q);
 
         return 0;
 }
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index f021f4817b80..efdfb6258e03 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -354,7 +354,7 @@ static const char *blk_mq_rq_state_name(enum mq_rq_state rq_state)
 int
__blk_mq_debugfs_rq_show(struct seq_file *m, struct request *rq)
 {
-        const struct blk_mq_ops *const mq_ops = rq->q->mq_ops;
+        const struct blk_mq_ops *const mq_ops = &rq->q->mq_ops;
         const unsigned int op = rq->cmd_flags & REQ_OP_MASK;
 
         seq_printf(m, "%p {.op=", rq);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index eb9b9596d3de..6e0cb6adfc90 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -558,7 +558,7 @@ static void __blk_mq_complete_request_remote(void *data)
         struct request *rq = data;
         struct request_queue *q = rq->q;
 
-        q->mq_ops->complete(rq);
+        q->mq_ops.complete(rq);
 }
 
 static void __blk_mq_complete_request(struct request *rq)
@@ -586,7 +586,7 @@ static void __blk_mq_complete_request(struct request *rq)
         }
 
         if (!test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags)) {
-                q->mq_ops->complete(rq);
+                q->mq_ops.complete(rq);
                 return;
         }
@@ -600,7 +600,7 @@ static void __blk_mq_complete_request(struct request *rq)
                 rq->csd.flags = 0;
                 smp_call_function_single_async(ctx->cpu, &rq->csd);
         } else {
-                q->mq_ops->complete(rq);
+                q->mq_ops.complete(rq);
         }
         put_cpu();
 }
@@ -818,10 +818,10 @@ EXPORT_SYMBOL_GPL(blk_mq_queue_busy);
 static void blk_mq_rq_timed_out(struct request *req, bool reserved)
 {
         req->rq_flags |= RQF_TIMED_OUT;
-        if (req->q->mq_ops->timeout) {
+        if (req->q->mq_ops.timeout) {
                 enum blk_eh_timer_return ret;
 
-                ret = req->q->mq_ops->timeout(req, reserved);
+                ret = req->q->mq_ops.timeout(req, reserved);
                 if (ret == BLK_EH_DONE)
                         return;
                 WARN_ON_ONCE(ret != BLK_EH_RESET_TIMER);
@@ -1221,7 +1221,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
                         bd.last = !blk_mq_get_driver_tag(nxt);
                 }
 
-                ret = q->mq_ops->queue_rq(hctx, &bd);
+                ret = q->mq_ops.queue_rq(hctx, &bd);
                 if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) {
                         /*
                          * If an I/O scheduler has been configured and we got a
@@ -1746,7 +1746,7 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
          * Any other error (busy), just add it to our list as we
          * previously
 would have done.
          */
-        ret = q->mq_ops->queue_rq(hctx, &bd);
+        ret = q->mq_ops.queue_rq(hctx, &bd);
         switch (ret) {
         case BLK_STS_OK:
                 blk_mq_update_dispatch_busy(hctx, false);
@@ -2723,7 +2723,7 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
                                                   struct request_queue *q)
 {
         /* mark the queue as mq asap */
-        q->mq_ops = set->ops;
+        memcpy((void *) &q->mq_ops, set->ops, sizeof(q->mq_ops));
 
         q->poll_cb = blk_stat_alloc_callback(blk_mq_poll_stats_fn,
                                              blk_mq_poll_stats_bkt,
@@ -2765,7 +2765,7 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
         spin_lock_init(&q->requeue_lock);
 
         blk_queue_make_request(q, blk_mq_make_request);
-        if (q->mq_ops->poll)
+        if (q->mq_ops.poll)
                 q->poll_fn = blk_mq_poll;
 
         /*
@@ -2797,7 +2797,7 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 err_percpu:
         free_percpu(q->queue_ctx);
 err_exit:
-        q->mq_ops = NULL;
+        memset((void *) &q->mq_ops, 0, sizeof(q->mq_ops));
         return ERR_PTR(-ENOMEM);
 }
 EXPORT_SYMBOL(blk_mq_init_allocated_queue);
@@ -3328,7 +3328,7 @@ static bool __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq)
 
         hctx->poll_invoked++;
 
-        ret = q->mq_ops->poll(hctx, rq->tag);
+        ret = q->mq_ops.poll(hctx, rq->tag);
         if (ret > 0) {
                 hctx->poll_success++;
                 set_current_state(TASK_RUNNING);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index facb6e9ddce4..1eb6a3e8af58 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -99,8 +99,8 @@ static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,
 {
         int hctx_type = 0;
 
-        if (q->mq_ops->rq_flags_to_type)
-                hctx_type = q->mq_ops->rq_flags_to_type(q, flags);
+        if (q->mq_ops.rq_flags_to_type)
+                hctx_type = q->mq_ops.rq_flags_to_type(q, flags);
 
         return blk_mq_map_queue_type(q, hctx_type, cpu);
 }
@@ -187,16 +187,16 @@ static inline void blk_mq_put_dispatch_budget(struct blk_mq_hw_ctx *hctx)
 {
         struct request_queue *q = hctx->queue;
 
-        if (q->mq_ops->put_budget)
-                q->mq_ops->put_budget(hctx);
+        if
 (q->mq_ops.put_budget)
+                q->mq_ops.put_budget(hctx);
 }
 
 static inline bool blk_mq_get_dispatch_budget(struct blk_mq_hw_ctx *hctx)
 {
         struct request_queue *q = hctx->queue;
 
-        if (q->mq_ops->get_budget)
-                return q->mq_ops->get_budget(hctx);
+        if (q->mq_ops.get_budget)
+                return q->mq_ops.get_budget(hctx);
         return true;
 }
diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index 1534066e306e..2f4176668470 100644
--- a/block/blk-softirq.c
+++ b/block/blk-softirq.c
@@ -34,7 +34,7 @@ static __latent_entropy void blk_done_softirq(struct softirq_action *h)
                 rq = list_entry(local_list.next, struct request, ipi_list);
                 list_del_init(&rq->ipi_list);
-                rq->q->mq_ops->complete(rq);
+                rq->q->mq_ops.complete(rq);
         }
 }
@@ -102,7 +102,7 @@ void __blk_complete_request(struct request *req)
         unsigned long flags;
         bool shared = false;
 
-        BUG_ON(!q->mq_ops->complete);
+        BUG_ON(!q->mq_ops.complete);
 
         local_irq_save(flags);
         cpu = smp_processor_id();
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 93635a693314..9661ef5b390f 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -380,7 +380,7 @@ static ssize_t queue_poll_delay_store(struct request_queue *q, const char *page,
 {
         int err, val;
 
-        if (!q->mq_ops || !q->mq_ops->poll)
+        if (!q->mq_ops.poll)
                 return -EINVAL;
 
         err = kstrtoint(page, 10, &val);
@@ -406,7 +406,7 @@ static ssize_t queue_poll_store(struct request_queue *q, const char *page,
         unsigned long poll_on;
         ssize_t ret;
 
-        if (!q->mq_ops || !q->mq_ops->poll)
+        if (!q->mq_ops.poll)
                 return -EINVAL;
 
         ret = queue_var_store(&poll_on, page, count);
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 5d83a162d03b..61babcb269ab 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1907,7 +1907,7 @@ struct scsi_device *scsi_device_from_queue(struct request_queue *q)
 {
         struct scsi_device *sdev = NULL;
 
-        if (q->mq_ops == &scsi_mq_ops)
+        if (q->mq_ops.queue_rq == scsi_mq_ops.queue_rq)
                 sdev = q->queuedata;
         if (!sdev || !get_device(&sdev->sdev_gendev))
                 sdev =
 NULL;
diff --git a/include/linux/blk-mq-ops.h b/include/linux/blk-mq-ops.h
new file mode 100644
index 000000000000..0940c26875ca
--- /dev/null
+++ b/include/linux/blk-mq-ops.h
@@ -0,0 +1,100 @@
+#ifndef BLK_MQ_OPS_H
+#define BLK_MQ_OPS_H
+
+struct blk_mq_queue_data;
+struct blk_mq_hw_ctx;
+struct blk_mq_tag_set;
+
+typedef blk_status_t (queue_rq_fn)(struct blk_mq_hw_ctx *,
+                const struct blk_mq_queue_data *);
+/* takes rq->cmd_flags as input, returns a hardware type index */
+typedef int (rq_flags_to_type_fn)(struct request_queue *, unsigned int);
+typedef bool (get_budget_fn)(struct blk_mq_hw_ctx *);
+typedef void (put_budget_fn)(struct blk_mq_hw_ctx *);
+typedef enum blk_eh_timer_return (timeout_fn)(struct request *, bool);
+typedef int (init_hctx_fn)(struct blk_mq_hw_ctx *, void *, unsigned int);
+typedef void (exit_hctx_fn)(struct blk_mq_hw_ctx *, unsigned int);
+typedef int (init_request_fn)(struct blk_mq_tag_set *set, struct request *,
+                unsigned int, unsigned int);
+typedef void (exit_request_fn)(struct blk_mq_tag_set *set, struct request *,
+                unsigned int);
+
+typedef bool (busy_iter_fn)(struct blk_mq_hw_ctx *, struct request *, void *,
+                bool);
+typedef bool (busy_tag_iter_fn)(struct request *, void *, bool);
+typedef int (poll_fn)(struct blk_mq_hw_ctx *, unsigned int);
+typedef int (map_queues_fn)(struct blk_mq_tag_set *set);
+typedef bool (busy_fn)(struct request_queue *);
+typedef void (complete_fn)(struct request *);
+
+struct blk_mq_ops {
+        /*
+         * Queue request
+         */
+        queue_rq_fn *queue_rq;
+
+        /*
+         * Return a queue map type for the given request/bio flags
+         */
+        rq_flags_to_type_fn *rq_flags_to_type;
+
+        /*
+         * Reserve budget before queue request, once .queue_rq is
+         * run, it is driver's responsibility to release the
+         * reserved budget. Also we have to handle failure case
+         * of .get_budget for avoiding I/O deadlock.
+         */
+        get_budget_fn *get_budget;
+        put_budget_fn *put_budget;
+
+        /*
+         * Called on request timeout
+         */
+        timeout_fn *timeout;
+
+        /*
+         * Called to poll for completion of a specific tag.
+         */
+        poll_fn *poll;
+
+        complete_fn *complete;
+
+        /*
+         * Called when the block layer side of a hardware queue has been
+         * set up, allowing the driver to allocate/init matching structures.
+         * Ditto for exit/teardown.
+         */
+        init_hctx_fn *init_hctx;
+        exit_hctx_fn *exit_hctx;
+
+        /*
+         * Called for every command allocated by the block layer to allow
+         * the driver to set up driver specific data.
+         *
+         * Tag greater than or equal to queue_depth is for setting up
+         * flush request.
+         *
+         * Ditto for exit/teardown.
+         */
+        init_request_fn *init_request;
+        exit_request_fn *exit_request;
+        /* Called from inside blk_get_request() */
+        void (*initialize_rq_fn)(struct request *rq);
+
+        /*
+         * If set, returns whether or not this queue currently is busy
+         */
+        busy_fn *busy;
+
+        map_queues_fn *map_queues;
+
+#ifdef CONFIG_BLK_DEBUG_FS
+        /*
+         * Used by the debugfs implementation to show driver-specific
+         * information about a request.
+ */ + void (*show_rq)(struct seq_file *m, struct request *rq); +#endif +}; + +#endif diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 929e8abc5535..e32e9293e5a0 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -5,6 +5,7 @@ #include #include #include +#include struct blk_mq_tags; struct blk_flush_queue; @@ -115,99 +116,6 @@ struct blk_mq_queue_data { bool last; }; -typedef blk_status_t (queue_rq_fn)(struct blk_mq_hw_ctx *, - const struct blk_mq_queue_data *); -/* takes rq->cmd_flags as input, returns a hardware type index */ -typedef int (rq_flags_to_type_fn)(struct request_queue *, unsigned int); -typedef bool (get_budget_fn)(struct blk_mq_hw_ctx *); -typedef void (put_budget_fn)(struct blk_mq_hw_ctx *); -typedef enum blk_eh_timer_return (timeout_fn)(struct request *, bool); -typedef int (init_hctx_fn)(struct blk_mq_hw_ctx *, void *, unsigned int); -typedef void (exit_hctx_fn)(struct blk_mq_hw_ctx *, unsigned int); -typedef int (init_request_fn)(struct blk_mq_tag_set *set, struct request *, - unsigned int, unsigned int); -typedef void (exit_request_fn)(struct blk_mq_tag_set *set, struct request *, - unsigned int); - -typedef bool (busy_iter_fn)(struct blk_mq_hw_ctx *, struct request *, void *, - bool); -typedef bool (busy_tag_iter_fn)(struct request *, void *, bool); -typedef int (poll_fn)(struct blk_mq_hw_ctx *, unsigned int); -typedef int (map_queues_fn)(struct blk_mq_tag_set *set); -typedef bool (busy_fn)(struct request_queue *); -typedef void (complete_fn)(struct request *); - - -struct blk_mq_ops { - /* - * Queue request - */ - queue_rq_fn *queue_rq; - - /* - * Return a queue map type for the given request/bio flags - */ - rq_flags_to_type_fn *rq_flags_to_type; - - /* - * Reserve budget before queue request, once .queue_rq is - * run, it is driver's responsibility to release the - * reserved budget. Also we have to handle failure case - * of .get_budget for avoiding I/O deadlock. 
- */ - get_budget_fn *get_budget; - put_budget_fn *put_budget; - - /* - * Called on request timeout - */ - timeout_fn *timeout; - - /* - * Called to poll for completion of a specific tag. - */ - poll_fn *poll; - - complete_fn *complete; - - /* - * Called when the block layer side of a hardware queue has been - * set up, allowing the driver to allocate/init matching structures. - * Ditto for exit/teardown. - */ - init_hctx_fn *init_hctx; - exit_hctx_fn *exit_hctx; - - /* - * Called for every command allocated by the block layer to allow - * the driver to set up driver specific data. - * - * Tag greater than or equal to queue_depth is for setting up - * flush request. - * - * Ditto for exit/teardown. - */ - init_request_fn *init_request; - exit_request_fn *exit_request; - /* Called from inside blk_get_request() */ - void (*initialize_rq_fn)(struct request *rq); - - /* - * If set, returns whether or not this queue currently is busy - */ - busy_fn *busy; - - map_queues_fn *map_queues; - -#ifdef CONFIG_BLK_DEBUG_FS - /* - * Used by the debugfs implementation to show driver-specific - * information about a request. 
- */ - void (*show_rq)(struct seq_file *m, struct request *rq); -#endif -}; - enum { BLK_MQ_F_SHOULD_MERGE = 1 << 0, BLK_MQ_F_TAG_SHARED = 1 << 1, diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 3712d1fe48d4..ad8474ec8c58 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -28,6 +28,8 @@ #include #include +#include + struct module; struct scsi_ioctl_command; @@ -406,7 +408,7 @@ struct request_queue { poll_q_fn *poll_fn; dma_drain_needed_fn *dma_drain_needed; - const struct blk_mq_ops *mq_ops; + const struct blk_mq_ops mq_ops; /* sw queues */ struct blk_mq_ctx __percpu *queue_ctx; @@ -673,7 +675,7 @@ static inline bool blk_account_rq(struct request *rq) static inline bool queue_is_mq(struct request_queue *q) { - return q->mq_ops; + return q->mq_ops.queue_rq != NULL; }
From patchwork Tue Nov 13 15:42:26 2018
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe, Josef Bacik
Subject: [PATCH 04/11] blk-rq-qos: inline check for q->rq_qos functions
Date: Tue, 13 Nov 2018 08:42:26 -0700
Message-Id: <20181113154233.15256-5-axboe@kernel.dk>

Put the short code in the fast path, where we don't have any functions attached to the queue. This minimizes the impact on the hot path in the core code. Clean up the duplicated code by having a macro set up both the inline check and the actual functions.

Cc: Josef Bacik
Signed-off-by: Jens Axboe
--- block/blk-rq-qos.c | 90 +++++++++++++--------------------------------- block/blk-rq-qos.h | 35 ++++++++++++++---- 2 files changed, 52 insertions(+), 73 deletions(-) diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c index 0005dfd568dd..266c9e111475 100644 --- a/block/blk-rq-qos.c +++ b/block/blk-rq-qos.c @@ -27,76 +27,34 @@ bool rq_wait_inc_below(struct rq_wait *rq_wait, unsigned int limit) return atomic_inc_below(&rq_wait->inflight, limit); } -void rq_qos_cleanup(struct request_queue *q, struct bio *bio) -{ - struct rq_qos *rqos; - - for (rqos = q->rq_qos; rqos; rqos = rqos->next) { - if (rqos->ops->cleanup) - rqos->ops->cleanup(rqos, bio); - } -} - -void rq_qos_done(struct request_queue *q, struct request *rq) -{ - struct rq_qos *rqos; - - for (rqos = q->rq_qos; rqos; rqos = rqos->next) { - if (rqos->ops->done) - rqos->ops->done(rqos, rq); - } -} - -void rq_qos_issue(struct request_queue *q, struct request *rq) -{ - struct rq_qos *rqos; - - for(rqos = q->rq_qos; rqos; rqos = rqos->next) { - if
(rqos->ops->issue) - rqos->ops->issue(rqos, rq); - } -} - -void rq_qos_requeue(struct request_queue *q, struct request *rq) -{ - struct rq_qos *rqos; - - for(rqos = q->rq_qos; rqos; rqos = rqos->next) { - if (rqos->ops->requeue) - rqos->ops->requeue(rqos, rq); - } +#define __RQ_QOS_FUNC_ONE(__OP, type) \ +void __rq_qos_##__OP(struct rq_qos *rqos, type arg) \ +{ \ + do { \ + if ((rqos)->ops->__OP) \ + (rqos)->ops->__OP((rqos), arg); \ + (rqos) = (rqos)->next; \ + } while (rqos); \ } -void rq_qos_throttle(struct request_queue *q, struct bio *bio, - spinlock_t *lock) -{ - struct rq_qos *rqos; - - for(rqos = q->rq_qos; rqos; rqos = rqos->next) { - if (rqos->ops->throttle) - rqos->ops->throttle(rqos, bio, lock); - } +__RQ_QOS_FUNC_ONE(cleanup, struct bio *); +__RQ_QOS_FUNC_ONE(done, struct request *); +__RQ_QOS_FUNC_ONE(issue, struct request *); +__RQ_QOS_FUNC_ONE(requeue, struct request *); +__RQ_QOS_FUNC_ONE(done_bio, struct bio *); + +#define __RQ_QOS_FUNC_TWO(__OP, type1, type2) \ +void __rq_qos_##__OP(struct rq_qos *rqos, type1 arg1, type2 arg2) \ +{ \ + do { \ + if ((rqos)->ops->__OP) \ + (rqos)->ops->__OP((rqos), arg1, arg2); \ + (rqos) = (rqos)->next; \ + } while (rqos); \ } -void rq_qos_track(struct request_queue *q, struct request *rq, struct bio *bio) -{ - struct rq_qos *rqos; - - for(rqos = q->rq_qos; rqos; rqos = rqos->next) { - if (rqos->ops->track) - rqos->ops->track(rqos, rq, bio); - } -} - -void rq_qos_done_bio(struct request_queue *q, struct bio *bio) -{ - struct rq_qos *rqos; - - for(rqos = q->rq_qos; rqos; rqos = rqos->next) { - if (rqos->ops->done_bio) - rqos->ops->done_bio(rqos, bio); - } -} +__RQ_QOS_FUNC_TWO(throttle, struct bio *, spinlock_t *); +__RQ_QOS_FUNC_TWO(track, struct request *, struct bio *); /* * Return true, if we can't increase the depth further by scaling diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h index 32b02efbfa66..50558a6ea248 100644 --- a/block/blk-rq-qos.h +++ b/block/blk-rq-qos.h @@ -98,12 +98,33 @@ void 
rq_depth_scale_up(struct rq_depth *rqd); void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle); bool rq_depth_calc_max_depth(struct rq_depth *rqd); -void rq_qos_cleanup(struct request_queue *, struct bio *); -void rq_qos_done(struct request_queue *, struct request *); -void rq_qos_issue(struct request_queue *, struct request *); -void rq_qos_requeue(struct request_queue *, struct request *); -void rq_qos_done_bio(struct request_queue *q, struct bio *bio); -void rq_qos_throttle(struct request_queue *, struct bio *, spinlock_t *); -void rq_qos_track(struct request_queue *q, struct request *, struct bio *); +#define RQ_QOS_FUNC_ONE(__OP, type) \ +void __rq_qos_##__OP(struct rq_qos *rqos, type arg); \ +static inline void rq_qos_##__OP(struct request_queue *q, type arg) \ +{ \ + if ((q)->rq_qos) \ + __rq_qos_##__OP((q)->rq_qos, arg); \ +} + +#define RQ_QOS_FUNC_TWO(__OP, type1, type2) \ +void __rq_qos_##__OP(struct rq_qos *rqos, type1 arg1, type2 arg2); \ +static inline void rq_qos_##__OP(struct request_queue *q, type1 arg1, \ + type2 arg2) \ +{ \ + if ((q)->rq_qos) \ + __rq_qos_##__OP((q)->rq_qos, arg1, arg2); \ +} + +RQ_QOS_FUNC_ONE(cleanup, struct bio *); +RQ_QOS_FUNC_ONE(done, struct request *); +RQ_QOS_FUNC_ONE(issue, struct request *); +RQ_QOS_FUNC_ONE(requeue, struct request *); +RQ_QOS_FUNC_ONE(done_bio, struct bio *); +RQ_QOS_FUNC_TWO(throttle, struct bio *, spinlock_t *); +RQ_QOS_FUNC_TWO(track, struct request *, struct bio *); +#undef RQ_QOS_FUNC_ONE +#undef RQ_QOS_FUNC_TWO + void rq_qos_exit(struct request_queue *); + #endif
From patchwork Tue Nov 13 15:42:27 2018
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 05/11] block: avoid ordered task state change for polled IO
Date: Tue, 13 Nov 2018 08:42:27 -0700
Message-Id: <20181113154233.15256-6-axboe@kernel.dk>

Ensure that writes to the dio/bio waiter field are ordered correctly. With the smp_rmb() before the READ_ONCE() check, we should be able to use a more relaxed ordering for the task state setting.
Signed-off-by: Jens Axboe --- fs/block_dev.c | 11 +++++++++-- fs/iomap.c | 6 +++++- 2 files changed, 14 insertions(+), 3 deletions(-) diff --git a/fs/block_dev.c b/fs/block_dev.c index c039abfb2052..2f920c03996e 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -181,6 +181,7 @@ static void blkdev_bio_end_io_simple(struct bio *bio) struct task_struct *waiter = bio->bi_private; WRITE_ONCE(bio->bi_private, NULL); + smp_wmb(); wake_up_process(waiter); } @@ -237,9 +238,12 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter, qc = submit_bio(&bio); for (;;) { - set_current_state(TASK_UNINTERRUPTIBLE); + __set_current_state(TASK_UNINTERRUPTIBLE); + + smp_rmb(); if (!READ_ONCE(bio.bi_private)) break; + if (!(iocb->ki_flags & IOCB_HIPRI) || !blk_poll(bdev_get_queue(bdev), qc)) io_schedule(); @@ -305,6 +309,7 @@ static void blkdev_bio_end_io(struct bio *bio) struct task_struct *waiter = dio->waiter; WRITE_ONCE(dio->waiter, NULL); + smp_wmb(); wake_up_process(waiter); } } @@ -403,7 +408,9 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages) return -EIOCBQUEUED; for (;;) { - set_current_state(TASK_UNINTERRUPTIBLE); + __set_current_state(TASK_UNINTERRUPTIBLE); + + smp_rmb(); if (!READ_ONCE(dio->waiter)) break; diff --git a/fs/iomap.c b/fs/iomap.c index f61d13dfdf09..7898927e758e 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -1524,7 +1524,9 @@ static void iomap_dio_bio_end_io(struct bio *bio) if (atomic_dec_and_test(&dio->ref)) { if (dio->wait_for_completion) { struct task_struct *waiter = dio->submit.waiter; + WRITE_ONCE(dio->submit.waiter, NULL); + smp_wmb(); wake_up_process(waiter); } else if (dio->flags & IOMAP_DIO_WRITE) { struct inode *inode = file_inode(dio->iocb->ki_filp); @@ -1888,7 +1890,9 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter, return -EIOCBQUEUED; for (;;) { - set_current_state(TASK_UNINTERRUPTIBLE); + __set_current_state(TASK_UNINTERRUPTIBLE); + + smp_rmb(); if (!READ_ONCE(dio->submit.waiter)) break; 
From patchwork Tue Nov 13 15:42:28 2018
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 06/11] block: add polled wakeup task helper
Date: Tue, 13 Nov 2018 08:42:28 -0700
Message-Id: <20181113154233.15256-7-axboe@kernel.dk>

If we're polling for IO on a device that doesn't use interrupts, then the IO completion loop (and wake of the task) is done by the submitting task itself.
If that is the case, then we don't need to enter the wake_up_process() function, we can simply mark ourselves as TASK_RUNNING. Signed-off-by: Jens Axboe --- fs/block_dev.c | 6 ++---- fs/iomap.c | 3 +-- include/linux/blkdev.h | 19 +++++++++++++++++++ 3 files changed, 22 insertions(+), 6 deletions(-) diff --git a/fs/block_dev.c b/fs/block_dev.c index 2f920c03996e..0ed9be8906a8 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -181,8 +181,7 @@ static void blkdev_bio_end_io_simple(struct bio *bio) struct task_struct *waiter = bio->bi_private; WRITE_ONCE(bio->bi_private, NULL); - smp_wmb(); - wake_up_process(waiter); + blk_wake_io_task(waiter); } static ssize_t @@ -309,8 +308,7 @@ static void blkdev_bio_end_io(struct bio *bio) struct task_struct *waiter = dio->waiter; WRITE_ONCE(dio->waiter, NULL); - smp_wmb(); - wake_up_process(waiter); + blk_wake_io_task(waiter); } } diff --git a/fs/iomap.c b/fs/iomap.c index 7898927e758e..a182699e28db 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -1526,8 +1526,7 @@ static void iomap_dio_bio_end_io(struct bio *bio) struct task_struct *waiter = dio->submit.waiter; WRITE_ONCE(dio->submit.waiter, NULL); - smp_wmb(); - wake_up_process(waiter); + blk_wake_io_task(waiter); } else if (dio->flags & IOMAP_DIO_WRITE) { struct inode *inode = file_inode(dio->iocb->ki_filp); diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index ad8474ec8c58..d1ef8cbbea04 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1798,4 +1798,23 @@ static inline int blkdev_issue_flush(struct block_device *bdev, gfp_t gfp_mask, #endif /* CONFIG_BLOCK */ +static inline void blk_wake_io_task(struct task_struct *waiter) +{ + /* + * If we're polling, the task itself is doing the completions. For + * that case, we don't need to signal a wakeup, it's enough to just + * mark us as RUNNING. 
+ */ + if (waiter == current) + __set_current_state(TASK_RUNNING); + else { + /* + * Ensure the callers waiter store is ordered and seen + * by the ->bi_end_io() function. + */ + smp_wmb(); + wake_up_process(waiter); + } +} + #endif
From patchwork Tue Nov 13 15:42:29 2018
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 07/11] block: have ->poll_fn() return number of entries polled
Date: Tue, 13 Nov 2018 08:42:29 -0700
Message-Id: <20181113154233.15256-8-axboe@kernel.dk>
We currently only really support sync poll, i.e. poll with 1 IO in flight. This prepares us for supporting async poll. Note that the returned value isn't necessarily 100% accurate. If poll races with IRQ completion, we assume that the fact that the task is now runnable means we found at least one entry. In reality it could be more than 1, or not even 1. This is fine; the caller will just need to take this into account. Signed-off-by: Jens Axboe --- block/blk-mq.c | 18 +++++++++--------- drivers/nvme/host/multipath.c | 4 ++-- include/linux/blkdev.h | 2 +- 3 files changed, 12 insertions(+), 12 deletions(-) diff --git a/block/blk-mq.c b/block/blk-mq.c index 6e0cb6adfc90..f8c2e6544903 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -38,7 +38,7 @@ #include "blk-mq-sched.h" #include "blk-rq-qos.h" -static bool blk_mq_poll(struct request_queue *q, blk_qc_t cookie); +static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie); static void blk_mq_poll_stats_start(struct request_queue *q); static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb); @@ -3305,7 +3305,7 @@ static bool blk_mq_poll_hybrid_sleep(struct request_queue *q, return true; } -static bool __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq) +static int __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq) { struct request_queue *q = hctx->queue; long state; @@ -3318,7 +3318,7 @@ static bool __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq) * straight to the busy poll loop.
*/ if (blk_mq_poll_hybrid_sleep(q, hctx, rq)) - return true; + return 1; hctx->poll_considered++; @@ -3332,30 +3332,30 @@ static bool __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq) if (ret > 0) { hctx->poll_success++; set_current_state(TASK_RUNNING); - return true; + return ret; } if (signal_pending_state(state, current)) set_current_state(TASK_RUNNING); if (current->state == TASK_RUNNING) - return true; + return 1; if (ret < 0) break; cpu_relax(); } __set_current_state(TASK_RUNNING); - return false; + return 0; } -static bool blk_mq_poll(struct request_queue *q, blk_qc_t cookie) +static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie) { struct blk_mq_hw_ctx *hctx; struct request *rq; if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags)) - return false; + return 0; hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)]; if (!blk_qc_t_is_internal(cookie)) @@ -3369,7 +3369,7 @@ static bool blk_mq_poll(struct request_queue *q, blk_qc_t cookie) * so we should be safe with just the NULL check. 
*/ if (!rq) - return false; + return 0; } return __blk_mq_poll(hctx, rq); diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index 5e3cc8c59a39..0484c1f9c8ce 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -220,11 +220,11 @@ static blk_qc_t nvme_ns_head_make_request(struct request_queue *q, return ret; } -static bool nvme_ns_head_poll(struct request_queue *q, blk_qc_t qc) +static int nvme_ns_head_poll(struct request_queue *q, blk_qc_t qc) { struct nvme_ns_head *head = q->queuedata; struct nvme_ns *ns; - bool found = false; + int found = 0; int srcu_idx; srcu_idx = srcu_read_lock(&head->srcu); diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index d1ef8cbbea04..b2af6c68b78d 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -287,7 +287,7 @@ static inline unsigned short req_get_ioprio(struct request *req) struct blk_queue_ctx; typedef blk_qc_t (make_request_fn) (struct request_queue *q, struct bio *bio); -typedef bool (poll_q_fn) (struct request_queue *q, blk_qc_t); +typedef int (poll_q_fn) (struct request_queue *q, blk_qc_t); struct bio_vec; typedef int (dma_drain_needed_fn)(struct request *);
From patchwork Tue Nov 13 15:42:30 2018
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 08/11] blk-mq: when polling for IO, look for any completion
Date: Tue, 13 Nov 2018 08:42:30 -0700
Message-Id: <20181113154233.15256-9-axboe@kernel.dk>
In-Reply-To: <20181113154233.15256-1-axboe@kernel.dk>
References: <20181113154233.15256-1-axboe@kernel.dk>

If we want to support async IO polling, then we have to allow finding completions that aren't just for the one we are looking for. Always pass in -1 to the mq_ops->poll() helper, and have that return how many events were found in this poll loop.
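[Editor's sketch] The counting convention this patch introduces — reap every pending completion and report how many were found, instead of a bool "did my tag complete" — can be illustrated with a minimal userspace ring-buffer analogue. All names here are hypothetical; only the head-wrap and counting logic mirror the nvme_complete_cqes()/nvme_process_cq() changes below:

```c
#include <assert.h>

#define Q_DEPTH 8

struct cq {
	int entries[Q_DEPTH];	/* pending completion "command ids" */
	unsigned head;		/* consumer index, wraps at Q_DEPTH */
	unsigned pending;	/* number of unreaped entries */
};

/*
 * Reap every pending completion and return how many were found,
 * mirroring the new convention where ->poll() reports an event count
 * rather than stopping once a specific tag shows up.
 */
static int cq_poll_any(struct cq *q)
{
	int nr = 0;

	while (q->pending) {
		/* a real driver would complete q->entries[q->head] here */
		if (++q->head == Q_DEPTH)
			q->head = 0;	/* wrap, like nvme_update_cq_head() */
		q->pending--;
		nr++;
	}
	return nr;
}
```

A caller polling for async IO can then treat any return greater than zero as progress, even if its own request was not among the reaped entries.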
Signed-off-by: Jens Axboe
---
 block/blk-mq.c          | 69 +++++++++++++++++++++++------------------
 drivers/nvme/host/pci.c | 32 +++++++++----------
 2 files changed, 54 insertions(+), 47 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f8c2e6544903..03b1af0151ca 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3266,9 +3266,7 @@ static bool blk_mq_poll_hybrid_sleep(struct request_queue *q,
 	 *  0:	use half of prev avg
 	 * >0:	use this specific value
 	 */
-	if (q->poll_nsec == -1)
-		return false;
-	else if (q->poll_nsec > 0)
+	if (q->poll_nsec > 0)
 		nsecs = q->poll_nsec;
 	else
 		nsecs = blk_mq_poll_nsecs(q, hctx, rq);
@@ -3305,21 +3303,36 @@ static bool blk_mq_poll_hybrid_sleep(struct request_queue *q,
 	return true;
 }
 
-static int __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq)
+static bool blk_mq_poll_hybrid(struct request_queue *q,
+			       struct blk_mq_hw_ctx *hctx, blk_qc_t cookie)
+{
+	struct request *rq;
+
+	if (q->poll_nsec == -1)
+		return false;
+
+	if (!blk_qc_t_is_internal(cookie))
+		rq = blk_mq_tag_to_rq(hctx->tags, blk_qc_t_to_tag(cookie));
+	else {
+		rq = blk_mq_tag_to_rq(hctx->sched_tags, blk_qc_t_to_tag(cookie));
+		/*
+		 * With scheduling, if the request has completed, we'll
+		 * get a NULL return here, as we clear the sched tag when
+		 * that happens. The request still remains valid, like always,
+		 * so we should be safe with just the NULL check.
+		 */
+		if (!rq)
+			return false;
+	}
+
+	return blk_mq_poll_hybrid_sleep(q, hctx, rq);
+}
+
+static int __blk_mq_poll(struct blk_mq_hw_ctx *hctx)
 {
 	struct request_queue *q = hctx->queue;
 	long state;
 
-	/*
-	 * If we sleep, have the caller restart the poll loop to reset
-	 * the state. Like for the other success return cases, the
-	 * caller is responsible for checking if the IO completed. If
-	 * the IO isn't complete, we'll get called again and will go
-	 * straight to the busy poll loop.
-	 */
-	if (blk_mq_poll_hybrid_sleep(q, hctx, rq))
-		return 1;
-
 	hctx->poll_considered++;
 
 	state = current->state;
@@ -3328,7 +3341,7 @@ static int __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq)
 
 		hctx->poll_invoked++;
 
-		ret = q->mq_ops.poll(hctx, rq->tag);
+		ret = q->mq_ops.poll(hctx, -1U);
 		if (ret > 0) {
 			hctx->poll_success++;
 			set_current_state(TASK_RUNNING);
@@ -3352,27 +3365,23 @@
 static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie)
 {
 	struct blk_mq_hw_ctx *hctx;
-	struct request *rq;
 
 	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
 		return 0;
 
 	hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
-	if (!blk_qc_t_is_internal(cookie))
-		rq = blk_mq_tag_to_rq(hctx->tags, blk_qc_t_to_tag(cookie));
-	else {
-		rq = blk_mq_tag_to_rq(hctx->sched_tags, blk_qc_t_to_tag(cookie));
-		/*
-		 * With scheduling, if the request has completed, we'll
-		 * get a NULL return here, as we clear the sched tag when
-		 * that happens. The request still remains valid, like always,
-		 * so we should be safe with just the NULL check.
-		 */
-		if (!rq)
-			return 0;
-	}
-
-	return __blk_mq_poll(hctx, rq);
+	/*
+	 * If we sleep, have the caller restart the poll loop to reset
+	 * the state. Like for the other success return cases, the
+	 * caller is responsible for checking if the IO completed. If
+	 * the IO isn't complete, we'll get called again and will go
+	 * straight to the busy poll loop.
+	 */
+	if (blk_mq_poll_hybrid(q, hctx, cookie))
+		return 1;
+
+	return __blk_mq_poll(hctx);
 }
 
 unsigned int blk_mq_rq_cpu(struct request *rq)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index bb22ae567208..adeb8f516bf9 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -995,13 +995,18 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
 	nvme_end_request(req, cqe->status, cqe->result);
 }
 
-static void nvme_complete_cqes(struct nvme_queue *nvmeq, u16 start, u16 end)
+static int nvme_complete_cqes(struct nvme_queue *nvmeq, u16 start, u16 end)
 {
+	int nr = 0;
+
 	while (start != end) {
+		nr++;
 		nvme_handle_cqe(nvmeq, start);
 		if (++start == nvmeq->q_depth)
 			start = 0;
 	}
+
+	return nr;
 }
 
 static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
@@ -1012,22 +1017,17 @@ static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
 	}
 }
 
-static inline bool nvme_process_cq(struct nvme_queue *nvmeq, u16 *start,
-		u16 *end, int tag)
+static inline void nvme_process_cq(struct nvme_queue *nvmeq, u16 *start,
+		u16 *end)
 {
-	bool found = false;
-
 	*start = nvmeq->cq_head;
-	while (!found && nvme_cqe_pending(nvmeq)) {
-		if (nvmeq->cqes[nvmeq->cq_head].command_id == tag)
-			found = true;
+	while (nvme_cqe_pending(nvmeq))
 		nvme_update_cq_head(nvmeq);
-	}
+
 	*end = nvmeq->cq_head;
 
 	if (*start != *end)
 		nvme_ring_cq_doorbell(nvmeq);
-	return found;
 }
 
 static irqreturn_t nvme_irq(int irq, void *data)
@@ -1039,7 +1039,7 @@ static irqreturn_t nvme_irq(int irq, void *data)
 	spin_lock(&nvmeq->cq_lock);
 	if (nvmeq->cq_head != nvmeq->last_cq_head)
 		ret = IRQ_HANDLED;
-	nvme_process_cq(nvmeq, &start, &end, -1);
+	nvme_process_cq(nvmeq, &start, &end);
 	nvmeq->last_cq_head = nvmeq->cq_head;
 	spin_unlock(&nvmeq->cq_lock);
@@ -1062,7 +1062,6 @@ static irqreturn_t nvme_irq_check(int irq, void *data)
 static int __nvme_poll(struct nvme_queue *nvmeq, unsigned int tag)
 {
 	u16 start, end;
-	bool found;
 
 	if (!nvme_cqe_pending(nvmeq))
 		return 0;
@@ -1074,14 +1073,13 @@ static int __nvme_poll(struct nvme_queue *nvmeq, unsigned int tag)
 	local_irq_disable();
 	spin_lock(&nvmeq->cq_lock);
-	found = nvme_process_cq(nvmeq, &start, &end, tag);
+	nvme_process_cq(nvmeq, &start, &end);
 	spin_unlock(&nvmeq->cq_lock);
 	if (!nvmeq->polled)
 		local_irq_enable();
 
-	nvme_complete_cqes(nvmeq, start, end);
-	return found;
+	return nvme_complete_cqes(nvmeq, start, end);
 }
 
 static int nvme_poll(struct blk_mq_hw_ctx *hctx, unsigned int tag)
@@ -1414,7 +1412,7 @@ static void nvme_disable_admin_queue(struct nvme_dev *dev, bool shutdown)
 		nvme_disable_ctrl(&dev->ctrl, dev->ctrl.cap);
 
 	spin_lock_irq(&nvmeq->cq_lock);
-	nvme_process_cq(nvmeq, &start, &end, -1);
+	nvme_process_cq(nvmeq, &start, &end);
 	spin_unlock_irq(&nvmeq->cq_lock);
 
 	nvme_complete_cqes(nvmeq, start, end);
@@ -2209,7 +2207,7 @@ static void nvme_del_cq_end(struct request *req, blk_status_t error)
 		unsigned long flags;
 
 		spin_lock_irqsave(&nvmeq->cq_lock, flags);
-		nvme_process_cq(nvmeq, &start, &end, -1);
+		nvme_process_cq(nvmeq, &start, &end);
 		spin_unlock_irqrestore(&nvmeq->cq_lock, flags);
 
 		nvme_complete_cqes(nvmeq, start, end);

From patchwork Tue Nov 13 15:42:31 2018
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 10681043
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 09/11] block: make blk_poll() take a parameter on whether to spin or not
Date: Tue, 13 Nov 2018 08:42:31 -0700
Message-Id: <20181113154233.15256-10-axboe@kernel.dk>
In-Reply-To: <20181113154233.15256-1-axboe@kernel.dk>
References: <20181113154233.15256-1-axboe@kernel.dk>

blk_poll() has always kept spinning until it found an IO. This is fine for SYNC polling, since we need to find one request we have pending, but in preparation for ASYNC polling it can be beneficial to just check if we have any entries available or not. Existing callers are converted to pass in 'spin == true', to retain the old behavior.
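[Editor's sketch] The spin semantics described above can be sketched as a small userspace loop. This is not kernel code; `poll_once_fn` and the fake event source are illustrative stand-ins for the driver's `->poll()` callback, under the assumption that one pass reports the number of completions it reaped:

```c
#include <stdbool.h>

/* Hypothetical completion source: one polling pass returns the number
 * of events found, or 0 if nothing was ready. */
typedef int (*poll_once_fn)(void *ctx);

/*
 * With spin == true, keep polling until at least one completion is
 * found (the old blk_poll() behavior). With spin == false, do a single
 * pass and report what was there -- suitable for an async caller that
 * will come back later instead of busy-waiting.
 */
static int poll_events(poll_once_fn poll_once, void *ctx, bool spin)
{
	for (;;) {
		int ret = poll_once(ctx);

		if (ret > 0)
			return ret;	/* found something: report the count */
		if (!spin)
			return 0;	/* async caller: don't busy-wait */
		/* a cpu_relax() equivalent would go here before retrying */
	}
}

/* Fake event source for demonstration: returns 0 for the first
 * `countdown` passes, then reports `events` completions. */
struct fake_src {
	int countdown;
	int events;
};

static int fake_poll_once(void *ctx)
{
	struct fake_src *s = ctx;

	if (s->countdown-- > 0)
		return 0;	/* nothing ready yet */
	return s->events;
}
```

A sync caller passes spin == true and always comes back with a positive count; an async caller passes spin == false and may get 0, meaning "nothing yet, try again later".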
Signed-off-by: Jens Axboe
---
 block/blk-core.c                  |  4 ++--
 block/blk-mq.c                    | 10 +++++-----
 drivers/nvme/host/multipath.c     |  4 ++--
 drivers/nvme/target/io-cmd-bdev.c |  2 +-
 fs/block_dev.c                    |  4 ++--
 fs/direct-io.c                    |  2 +-
 fs/iomap.c                        |  2 +-
 include/linux/blkdev.h            |  4 ++--
 mm/page_io.c                      |  2 +-
 9 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 88400ab166ac..0500c693fdae 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1363,14 +1363,14 @@ blk_qc_t submit_bio(struct bio *bio)
 }
 EXPORT_SYMBOL(submit_bio);
 
-bool blk_poll(struct request_queue *q, blk_qc_t cookie)
+bool blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 {
 	if (!q->poll_fn || !blk_qc_t_valid(cookie))
 		return false;
 
 	if (current->plug)
 		blk_flush_plug_list(current->plug, false);
-	return q->poll_fn(q, cookie);
+	return q->poll_fn(q, cookie, spin);
 }
 EXPORT_SYMBOL_GPL(blk_poll);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 03b1af0151ca..a4043b9a27f5 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -38,7 +38,7 @@
 #include "blk-mq-sched.h"
 #include "blk-rq-qos.h"
 
-static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie);
+static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, bool spin);
 static void blk_mq_poll_stats_start(struct request_queue *q);
 static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
 
@@ -3328,7 +3328,7 @@ static bool blk_mq_poll_hybrid(struct request_queue *q,
 	return blk_mq_poll_hybrid_sleep(q, hctx, rq);
 }
 
-static int __blk_mq_poll(struct blk_mq_hw_ctx *hctx)
+static int __blk_mq_poll(struct blk_mq_hw_ctx *hctx, bool spin)
 {
 	struct request_queue *q = hctx->queue;
 	long state;
@@ -3353,7 +3353,7 @@ static int __blk_mq_poll(struct blk_mq_hw_ctx *hctx)
 		if (current->state == TASK_RUNNING)
 			return 1;
-		if (ret < 0)
+		if (ret < 0 || !spin)
 			break;
 		cpu_relax();
 	}
@@ -3362,7 +3362,7 @@ static int __blk_mq_poll(struct blk_mq_hw_ctx *hctx)
 	return 0;
 }
 
-static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie)
+static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 {
 	struct blk_mq_hw_ctx *hctx;
 
@@ -3381,7 +3381,7 @@ static int blk_mq_poll(struct request_queue *q, blk_qc_t cookie)
 	if (blk_mq_poll_hybrid(q, hctx, cookie))
 		return 1;
 
-	return __blk_mq_poll(hctx);
+	return __blk_mq_poll(hctx, spin);
 }
 
 unsigned int blk_mq_rq_cpu(struct request *rq)
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 0484c1f9c8ce..243da973416f 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -220,7 +220,7 @@ static blk_qc_t nvme_ns_head_make_request(struct request_queue *q,
 	return ret;
 }
 
-static int nvme_ns_head_poll(struct request_queue *q, blk_qc_t qc)
+static int nvme_ns_head_poll(struct request_queue *q, blk_qc_t qc, bool spin)
 {
 	struct nvme_ns_head *head = q->queuedata;
 	struct nvme_ns *ns;
@@ -230,7 +230,7 @@ static int nvme_ns_head_poll(struct request_queue *q, blk_qc_t qc)
 	srcu_idx = srcu_read_lock(&head->srcu);
 	ns = srcu_dereference(head->current_path[numa_node_id()], &head->srcu);
 	if (likely(ns && nvme_path_is_optimized(ns)))
-		found = ns->queue->poll_fn(q, qc);
+		found = ns->queue->poll_fn(q, qc, spin);
 	srcu_read_unlock(&head->srcu, srcu_idx);
 	return found;
 }
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index c1ec3475a140..f6971b45bc54 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -116,7 +116,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 
 	cookie = submit_bio(bio);
 
-	blk_poll(bdev_get_queue(req->ns->bdev), cookie);
+	blk_poll(bdev_get_queue(req->ns->bdev), cookie, true);
 }
 
 static void nvmet_bdev_execute_flush(struct nvmet_req *req)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 0ed9be8906a8..7810f5b588ea 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -244,7 +244,7 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
 			break;
 
 		if (!(iocb->ki_flags & IOCB_HIPRI) ||
-		    !blk_poll(bdev_get_queue(bdev), qc))
+		    !blk_poll(bdev_get_queue(bdev), qc, true))
 			io_schedule();
 	}
 	__set_current_state(TASK_RUNNING);
@@ -413,7 +413,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 			break;
 
 		if (!(iocb->ki_flags & IOCB_HIPRI) ||
-		    !blk_poll(bdev_get_queue(bdev), qc))
+		    !blk_poll(bdev_get_queue(bdev), qc, true))
 			io_schedule();
 	}
 	__set_current_state(TASK_RUNNING);
diff --git a/fs/direct-io.c b/fs/direct-io.c
index ea07d5a34317..a5a4e5a1423e 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -518,7 +518,7 @@ static struct bio *dio_await_one(struct dio *dio)
 		dio->waiter = current;
 		spin_unlock_irqrestore(&dio->bio_lock, flags);
 		if (!(dio->iocb->ki_flags & IOCB_HIPRI) ||
-		    !blk_poll(dio->bio_disk->queue, dio->bio_cookie))
+		    !blk_poll(dio->bio_disk->queue, dio->bio_cookie, true))
 			io_schedule();
 		/* wake up sets us TASK_RUNNING */
 		spin_lock_irqsave(&dio->bio_lock, flags);
diff --git a/fs/iomap.c b/fs/iomap.c
index a182699e28db..0cd680ab753b 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -1898,7 +1898,7 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
 			if (!(iocb->ki_flags & IOCB_HIPRI) ||
 			    !dio->submit.last_queue ||
 			    !blk_poll(dio->submit.last_queue,
-					 dio->submit.cookie))
+					 dio->submit.cookie, true))
 				io_schedule();
 		}
 		__set_current_state(TASK_RUNNING);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b2af6c68b78d..0bcb51dbde10 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -287,7 +287,7 @@ static inline unsigned short req_get_ioprio(struct request *req)
 struct blk_queue_ctx;
 
 typedef blk_qc_t (make_request_fn) (struct request_queue *q, struct bio *bio);
-typedef int (poll_q_fn) (struct request_queue *q, blk_qc_t);
+typedef int (poll_q_fn) (struct request_queue *q, blk_qc_t, bool spin);
 
 struct bio_vec;
 typedef int (dma_drain_needed_fn)(struct request *);
@@ -893,7 +893,7 @@ extern void blk_execute_rq_nowait(struct request_queue *, struct gendisk *,
 int blk_status_to_errno(blk_status_t status);
 blk_status_t errno_to_blk_status(int errno);
 
-bool blk_poll(struct request_queue *q, blk_qc_t cookie);
+bool blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin);
 
 static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
 {
diff --git a/mm/page_io.c b/mm/page_io.c
index d4d1c89bcddd..64ddac7b7bc0 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -409,7 +409,7 @@ int swap_readpage(struct page *page, bool synchronous)
 		if (!READ_ONCE(bio->bi_private))
 			break;
 
-		if (!blk_poll(disk->queue, qc))
+		if (!blk_poll(disk->queue, qc, true))
 			break;
 	}
 	__set_current_state(TASK_RUNNING);

From patchwork Tue Nov 13 15:42:32 2018
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 10681045
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 10/11] block: for async O_DIRECT, mark us as polling if asked to
Date: Tue, 13 Nov 2018 08:42:32 -0700
Message-Id: <20181113154233.15256-11-axboe@kernel.dk>
In-Reply-To: <20181113154233.15256-1-axboe@kernel.dk>
References: <20181113154233.15256-1-axboe@kernel.dk>

Inherit the iocb IOCB_HIPRI flag, and pass on REQ_HIPRI for those kinds of requests.

Signed-off-by: Jens Axboe
---
 fs/block_dev.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 7810f5b588ea..c124982b810d 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -386,6 +386,9 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 		nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
 		if (!nr_pages) {
+			if (iocb->ki_flags & IOCB_HIPRI)
+				bio->bi_opf |= REQ_HIPRI;
+
 			qc = submit_bio(bio);
 			break;
 		}

From patchwork Tue Nov 13 15:42:33 2018
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 10681047
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 11/11] block: don't plug for aio/O_DIRECT HIPRI IO
Date: Tue, 13 Nov 2018 08:42:33 -0700
Message-Id: <20181113154233.15256-12-axboe@kernel.dk>
In-Reply-To: <20181113154233.15256-1-axboe@kernel.dk>
References: <20181113154233.15256-1-axboe@kernel.dk>

Those will go straight to issue inside blk-mq, so don't bother setting up a block plug for them.

Signed-off-by: Jens Axboe
---
 fs/block_dev.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index c124982b810d..9dc695a3af4e 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -356,7 +356,13 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 	dio->multi_bio = false;
 	dio->should_dirty = is_read && iter_is_iovec(iter);
 
-	blk_start_plug(&plug);
+	/*
+	 * Don't plug for HIPRI/polled IO, as those should go straight
+	 * to issue
+	 */
+	if (!(iocb->ki_flags & IOCB_HIPRI))
+		blk_start_plug(&plug);
+
 	for (;;) {
 		bio_set_dev(bio, bdev);
 		bio->bi_iter.bi_sector = pos >> 9;
@@ -403,7 +409,9 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 		submit_bio(bio);
 		bio = bio_alloc(GFP_KERNEL, nr_pages);
 	}
-	blk_finish_plug(&plug);
+
+	if (!(iocb->ki_flags & IOCB_HIPRI))
+		blk_finish_plug(&plug);
 
 	if (!is_sync)
 		return -EIOCBQUEUED;
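[Editor's sketch] Patches 10 and 11 both hinge on one flag check in the O_DIRECT submission path, which a minimal userspace sketch can capture. The flag values below are illustrative only, not the kernel's actual constants, and the helper names are hypothetical:

```c
#include <stdbool.h>

/* Illustrative flag values only -- not the kernel's real constants. */
#define IOCB_HIPRI	(1 << 3)
#define REQ_HIPRI	(1U << 25)

/* Patch 10: a polled (HIPRI) iocb marks its bios REQ_HIPRI, so blk-mq
 * knows completions will be reaped by polling rather than interrupts. */
static unsigned int bio_opf_for(int ki_flags, unsigned int opf)
{
	if (ki_flags & IOCB_HIPRI)
		opf |= REQ_HIPRI;
	return opf;
}

/* Patch 11: HIPRI IO goes straight to issue inside blk-mq, so setting
 * up a plug (which exists to batch submissions) would only add latency. */
static bool should_plug(int ki_flags)
{
	return !(ki_flags & IOCB_HIPRI);
}
```

Non-HIPRI O_DIRECT keeps the plug and the interrupt-driven completion path; only the polled path changes behavior.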