From patchwork Sat Dec 16 12:07:25 2017
X-Patchwork-Submitter: Tejun Heo <tj@kernel.org>
X-Patchwork-Id: 10116921
From: Tejun Heo <tj@kernel.org>
To: jack@suse.cz, axboe@kernel.dk, clm@fb.com, jbacik@fb.com
Cc: kernel-team@fb.com, linux-kernel@vger.kernel.org,
    linux-btrfs@vger.kernel.org, peterz@infradead.org,
    jianchao.w.wang@oracle.com, Bart.VanAssche@wdc.com,
    Tejun Heo <tj@kernel.org>
Subject: [PATCH 6/7] blk-mq: remove REQ_ATOM_STARTED
Date: Sat, 16 Dec 2017 04:07:25 -0800
Message-Id: <20171216120726.517153-7-tj@kernel.org>
X-Mailer: git-send-email 2.9.5
In-Reply-To: <20171216120726.517153-1-tj@kernel.org>
References: <20171216120726.517153-1-tj@kernel.org>
X-Mailing-List: linux-btrfs@vger.kernel.org

After the recent updates to use generation number and state based
synchronization, we can easily replace REQ_ATOM_STARTED usages by
adding an extra state to distinguish requests which are completed but
not yet freed.

Add MQ_RQ_COMPLETE and replace the REQ_ATOM_STARTED usages with
blk_mq_rq_state() tests.  REQ_ATOM_STARTED no longer has any users
and is removed.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
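Reviewer note: the state tests below lean on the gstate helpers added
earlier in this series.  As a rough sketch of how they behave (see
block/blk-mq.h in the earlier patch for the authoritative definitions;
MQ_RQ_GEN_INC also comes from that patch):

	/* the low bits of rq->gstate hold the state, the rest the generation */
	static inline int blk_mq_rq_state(struct request *rq)
	{
		return READ_ONCE(rq->gstate) & MQ_RQ_STATE_MASK;
	}

	static inline void blk_mq_rq_update_state(struct request *rq,
						  enum mq_rq_state state)
	{
		unsigned long old_val = READ_ONCE(rq->gstate);
		unsigned long new_val = (old_val & ~MQ_RQ_STATE_MASK) | state;

		/* transitioning to IN_FLIGHT advances the generation number */
		if (state == MQ_RQ_IN_FLIGHT) {
			WARN_ON_ONCE((old_val & MQ_RQ_STATE_MASK) != MQ_RQ_IDLE);
			new_val += MQ_RQ_GEN_INC;
		}

		WRITE_ONCE(rq->gstate, new_val);
	}

With MQ_RQ_COMPLETE added, "started" translates to
blk_mq_rq_state(rq) != MQ_RQ_IDLE, and "started and already completed"
translates to blk_mq_rq_state(rq) == MQ_RQ_COMPLETE.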
 block/blk-mq-debugfs.c |  4 +---
 block/blk-mq.c         | 37 ++++++++-----------------------
 block/blk-mq.h         |  1 +
 block/blk.h            |  1 -
 4 files changed, 10 insertions(+), 33 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index b56a4f3..8adc837 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -271,7 +271,6 @@ static const char *const cmd_flag_name[] = {
 #define RQF_NAME(name) [ilog2((__force u32)RQF_##name)] = #name
 static const char *const rqf_name[] = {
 	RQF_NAME(SORTED),
-	RQF_NAME(STARTED),
 	RQF_NAME(QUEUED),
 	RQF_NAME(SOFTBARRIER),
 	RQF_NAME(FLUSH_SEQ),
@@ -295,7 +294,6 @@ static const char *const rqf_name[] = {
 #define RQAF_NAME(name) [REQ_ATOM_##name] = #name
 static const char *const rqaf_name[] = {
 	RQAF_NAME(COMPLETE),
-	RQAF_NAME(STARTED),
 	RQAF_NAME(POLL_SLEPT),
 };
 #undef RQAF_NAME
@@ -409,7 +407,7 @@ static void hctx_show_busy_rq(struct request *rq, void *data, bool reserved)
 	const struct show_busy_params *params = data;
 
 	if (blk_mq_map_queue(rq->q, rq->mq_ctx->cpu) == params->hctx &&
-	    test_bit(REQ_ATOM_STARTED, &rq->atomic_flags))
+	    blk_mq_rq_state(rq) != MQ_RQ_IDLE)
 		__blk_mq_debugfs_rq_show(params->m,
 					 list_entry_rq(&rq->queuelist));
 }
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 47e722b..724d340 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -483,7 +483,6 @@ void blk_mq_free_request(struct request *rq)
 	blk_put_rl(blk_rq_rl(rq));
 
 	blk_mq_rq_update_state(rq, MQ_RQ_IDLE);
-	clear_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
 	clear_bit(REQ_ATOM_POLL_SLEPT, &rq->atomic_flags);
 	if (rq->tag != -1)
 		blk_mq_put_tag(hctx, hctx->tags, ctx, rq->tag);
@@ -531,6 +530,7 @@ static void __blk_mq_complete_request(struct request *rq)
 	int cpu;
 
 	WARN_ON_ONCE(blk_mq_rq_state(rq) != MQ_RQ_IN_FLIGHT);
+	blk_mq_rq_update_state(rq, MQ_RQ_COMPLETE);
 
 	if (rq->internal_tag != -1)
 		blk_mq_sched_completed_request(rq);
@@ -621,7 +621,7 @@ EXPORT_SYMBOL(blk_mq_complete_request);
 
 int blk_mq_request_started(struct request *rq)
 {
-	return test_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
+	return blk_mq_rq_state(rq) != MQ_RQ_IDLE;
 }
 EXPORT_SYMBOL_GPL(blk_mq_request_started);
 
@@ -640,7 +640,6 @@ void blk_mq_start_request(struct request *rq)
 	}
 
 	WARN_ON_ONCE(blk_mq_rq_state(rq) != MQ_RQ_IDLE);
-	WARN_ON_ONCE(test_bit(REQ_ATOM_STARTED, &rq->atomic_flags));
 
 	/*
 	 * Mark @rq in-flight which also advances the generation number,
@@ -662,8 +661,6 @@ void blk_mq_start_request(struct request *rq)
 	write_seqcount_end(&rq->gstate_seq);
 	preempt_enable();
 
-	set_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
-
 	if (q->dma_drain_size && blk_rq_bytes(rq)) {
 		/*
 		 * Make sure space for the drain appears.  We know we can do
@@ -676,13 +673,9 @@ void blk_mq_start_request(struct request *rq)
 EXPORT_SYMBOL(blk_mq_start_request);
 
 /*
- * When we reach here because queue is busy, REQ_ATOM_COMPLETE
- * flag isn't set yet, so there may be race with timeout handler,
- * but given rq->deadline is just set in .queue_rq() under
- * this situation, the race won't be possible in reality because
- * rq->timeout should be set as big enough to cover the window
- * between blk_mq_start_request() called from .queue_rq() and
- * clearing REQ_ATOM_STARTED here.
+ * When we reach here because queue is busy, it's safe to change the state
+ * to IDLE without checking @rq->aborted_gstate because we should still be
+ * holding the RCU read lock and thus protected against timeout.
  */
 static void __blk_mq_requeue_request(struct request *rq)
 {
@@ -694,7 +687,7 @@ static void __blk_mq_requeue_request(struct request *rq)
 	wbt_requeue(q->rq_wb, &rq->issue_stat);
 	blk_mq_sched_requeue_request(rq);
 
-	if (test_and_clear_bit(REQ_ATOM_STARTED, &rq->atomic_flags)) {
+	if (blk_mq_rq_state(rq) != MQ_RQ_IDLE) {
 		blk_mq_rq_update_state(rq, MQ_RQ_IDLE);
 		if (q->dma_drain_size && blk_rq_bytes(rq))
 			rq->nr_phys_segments--;
@@ -801,18 +794,6 @@ static void blk_mq_rq_timed_out(struct request *req, bool reserved)
 	const struct blk_mq_ops *ops = req->q->mq_ops;
 	enum blk_eh_timer_return ret = BLK_EH_RESET_TIMER;
 
-	/*
-	 * We know that complete is set at this point. If STARTED isn't set
-	 * anymore, then the request isn't active and the "timeout" should
-	 * just be ignored. This can happen due to the bitflag ordering.
-	 * Timeout first checks if STARTED is set, and if it is, assumes
-	 * the request is active. But if we race with completion, then
-	 * both flags will get cleared. So check here again, and ignore
-	 * a timeout event with a request that isn't active.
-	 */
-	if (!test_bit(REQ_ATOM_STARTED, &req->atomic_flags))
-		return;
-
 	req->rq_flags |= RQF_MQ_TIMEOUT_EXPIRED;
 
 	if (ops->timeout)
@@ -848,8 +829,7 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 
 	might_sleep();
 
-	if ((rq->rq_flags & RQF_MQ_TIMEOUT_EXPIRED) ||
-	    !test_bit(REQ_ATOM_STARTED, &rq->atomic_flags))
+	if (rq->rq_flags & RQF_MQ_TIMEOUT_EXPIRED)
 		return;
 
 	/* read coherent snapshots of @rq->state_gen and @rq->deadline */
@@ -2971,8 +2951,7 @@ static bool blk_mq_poll_hybrid_sleep(struct request_queue *q,
 
 	hrtimer_init_sleeper(&hs, current);
 	do {
-		if (test_bit(REQ_ATOM_STARTED, &rq->atomic_flags) &&
-		    blk_mq_rq_state(rq) != MQ_RQ_IN_FLIGHT)
+		if (blk_mq_rq_state(rq) == MQ_RQ_COMPLETE)
 			break;
 		set_current_state(TASK_UNINTERRUPTIBLE);
 		hrtimer_start_expires(&hs.timer, mode);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 6b2d616..8591a54 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -34,6 +34,7 @@ struct blk_mq_ctx {
 enum mq_rq_state {
 	MQ_RQ_IDLE		= 0,
 	MQ_RQ_IN_FLIGHT		= 1,
+	MQ_RQ_COMPLETE		= 2,
 
 	MQ_RQ_STATE_BITS	= 2,
 	MQ_RQ_STATE_MASK	= (1 << MQ_RQ_STATE_BITS) - 1,
diff --git a/block/blk.h b/block/blk.h
index 9cb2739..a68dbe3 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -124,7 +124,6 @@ void blk_account_io_done(struct request *req);
  */
 enum rq_atomic_flags {
 	REQ_ATOM_COMPLETE = 0,
-	REQ_ATOM_STARTED,
 	REQ_ATOM_POLL_SLEPT,
 };
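
Reviewer note: with this patch the request lifecycle reduces to a small
state machine; roughly, the transitions as implemented by the functions
in the diff above:

	MQ_RQ_IDLE      -- blk_mq_start_request() ------>  MQ_RQ_IN_FLIGHT
	MQ_RQ_IN_FLIGHT -- __blk_mq_complete_request() ->  MQ_RQ_COMPLETE
	MQ_RQ_IN_FLIGHT -- __blk_mq_requeue_request() -->  MQ_RQ_IDLE
	any state       -- blk_mq_free_request() ------->  MQ_RQ_IDLE

This is what lets blk_mq_poll_hybrid_sleep() test for
blk_mq_rq_state(rq) == MQ_RQ_COMPLETE directly instead of the old
"STARTED set but no longer in flight" combination.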