From patchwork Thu Sep 22 14:52:59 2016
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 9345451
From: Jens Axboe
Subject: [PATCH 1/2] blk-mq: get rid of manual run of queue with
 __blk_mq_run_hw_queue()
Date: Thu, 22 Sep 2016 08:52:59 -0600
Message-ID: <1474555980-2787-2-git-send-email-axboe@fb.com>
In-Reply-To: <1474555980-2787-1-git-send-email-axboe@fb.com>
References: <1474555980-2787-1-git-send-email-axboe@fb.com>
X-Mailing-List: linux-block@vger.kernel.org

Two cases:

1) blk_mq_alloc_request() needlessly re-runs the queue after calling
   into the tag allocation without NOWAIT set. We don't need to do
   that.

2) blk_mq_map_request() should just use blk_mq_run_hw_queue() with the
   async flag set to false.
Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
Reviewed-by: Sagi Grimberg
---
 block/blk-mq.c | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index e0a69daddbd8..c29700010b5c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -34,8 +34,6 @@
 static DEFINE_MUTEX(all_q_mutex);
 static LIST_HEAD(all_q_list);
 
-static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx);
-
 /*
  * Check if any of the ctx's have pending work in this hardware queue
  */
@@ -228,19 +226,9 @@ struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
 	ctx = blk_mq_get_ctx(q);
 	hctx = q->mq_ops->map_queue(q, ctx->cpu);
 	blk_mq_set_alloc_data(&alloc_data, q, flags, ctx, hctx);
-
 	rq = __blk_mq_alloc_request(&alloc_data, rw, 0);
-	if (!rq && !(flags & BLK_MQ_REQ_NOWAIT)) {
-		__blk_mq_run_hw_queue(hctx);
-		blk_mq_put_ctx(ctx);
-
-		ctx = blk_mq_get_ctx(q);
-		hctx = q->mq_ops->map_queue(q, ctx->cpu);
-		blk_mq_set_alloc_data(&alloc_data, q, flags, ctx, hctx);
-		rq = __blk_mq_alloc_request(&alloc_data, rw, 0);
-		ctx = alloc_data.ctx;
-	}
 	blk_mq_put_ctx(ctx);
+
 	if (!rq) {
 		blk_queue_exit(q);
 		return ERR_PTR(-EWOULDBLOCK);
@@ -1225,7 +1213,7 @@ static struct request *blk_mq_map_request(struct request_queue *q,
 	blk_mq_set_alloc_data(&alloc_data, q, BLK_MQ_REQ_NOWAIT, ctx, hctx);
 	rq = __blk_mq_alloc_request(&alloc_data, op, op_flags);
 	if (unlikely(!rq)) {
-		__blk_mq_run_hw_queue(hctx);
+		blk_mq_run_hw_queue(hctx, false);
 		blk_mq_put_ctx(ctx);
 
 		trace_block_sleeprq(q, bio, op);
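
For readers who don't have the blk-mq internals paged in, the sketch
below models the call-pattern change in case 2: call sites that used to
invoke the internal __blk_mq_run_hw_queue() directly now go through the
public blk_mq_run_hw_queue(hctx, async) helper, which runs the queue
immediately when async is false (the real helper may still defer to a
kblockd worker if the current CPU isn't mapped to the hctx). Everything
below is a toy user-space model with made-up stand-in names (hw_ctx,
run_hw_queue, and friends), not kernel code.

/* Toy model of the blk_mq_run_hw_queue(hctx, async) pattern; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct hw_ctx {
	const char *name;
	bool stopped;
};

/* Stand-in for the internal synchronous queue run. */
static void run_hw_queue_now(struct hw_ctx *hctx)
{
	printf("%s: dispatching pending requests inline\n", hctx->name);
}

/* Stand-in for handing the run off to a kblockd-style worker. */
static void schedule_hw_queue_run(struct hw_ctx *hctx)
{
	printf("%s: queue run deferred to a worker\n", hctx->name);
}

/* Stand-in for the public blk_mq_run_hw_queue(hctx, async) entry point. */
static void run_hw_queue(struct hw_ctx *hctx, bool async)
{
	if (hctx->stopped)
		return;			/* stopped queues are not run */
	if (!async) {
		run_hw_queue_now(hctx);	/* immediate run, as at the patched call site */
		return;
	}
	schedule_hw_queue_run(hctx);	/* async callers let a worker do it */
}

int main(void)
{
	struct hw_ctx hctx = { .name = "hctx0", .stopped = false };

	/* What the patched blk_mq_map_request() call site amounts to. */
	run_hw_queue(&hctx, false);
	return 0;
}

The point of routing through the public helper is that the stopped and
deferral checks live in one place, instead of call sites reaching into
the internal function directly.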