From patchwork Thu Mar 31 20:19:13 2016
X-Patchwork-Submitter: Jon Derrick <jonathan.derrick@intel.com>
X-Patchwork-Id: 8716531
From: Jon Derrick <jonathan.derrick@intel.com>
To: axboe@fb.com
Cc: Jon Derrick <jonathan.derrick@intel.com>, linux-nvme@lists.infradead.org,
 linux-block@vger.kernel.org, keith.busch@intel.com, hch@infradead.org,
 stephen.bates@microsemi.com
Subject: [PATCH 1/2] block: add queue flag to always poll
Date: Thu, 31 Mar 2016 14:19:13 -0600
Message-Id: <1459455554-2794-2-git-send-email-jonathan.derrick@intel.com>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1459455554-2794-1-git-send-email-jonathan.derrick@intel.com>
References: <1459455554-2794-1-git-send-email-jonathan.derrick@intel.com>
X-Mailing-List: linux-block@vger.kernel.org

This patch adds poll-by-default functionality back for 4.6 by introducing
a queue flag, QUEUE_FLAG_POLL_FORCE, which specifies that the queue should
always be polled, rather than polling only when an individual I/O requests
it via IOCB_HIPRI.
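For context, the per-I/O opt-in path that remains the default is driven
from userspace by preadv2(2)/pwritev2(2) with RWF_HIPRI, which sets
IOCB_HIPRI on the kiocb. A minimal userspace sketch (illustration only,
not part of this patch; it assumes a glibc preadv2() wrapper, otherwise
syscall(__NR_preadv2, ...) is needed):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #ifndef RWF_HIPRI
    #define RWF_HIPRI 0x00000001	/* from include/uapi/linux/fs.h */
    #endif

    int main(int argc, char **argv)
    {
    	struct iovec iov;
    	void *buf;
    	int fd;

    	if (argc < 2)
    		return 1;

    	/* O_DIRECT reads are what reach dio_await_one() below */
    	fd = open(argv[1], O_RDONLY | O_DIRECT);
    	if (fd < 0)
    		return 1;

    	if (posix_memalign(&buf, 4096, 4096))
    		return 1;
    	iov.iov_base = buf;
    	iov.iov_len = 4096;

    	/* RWF_HIPRI sets IOCB_HIPRI, enabling blk_poll() for this I/O */
    	if (preadv2(fd, &iov, 1, 0, RWF_HIPRI) < 0)
    		perror("preadv2");

    	close(fd);
    	return 0;
    }

Without RWF_HIPRI (or the flag added by this patch), the task simply
sleeps in io_schedule() until the completion interrupt arrives.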
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
---
 block/blk-core.c       | 8 ++++++++
 fs/direct-io.c         | 7 ++++++-
 include/linux/blkdev.h | 2 ++
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 827f8ba..d85f913 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3335,6 +3335,14 @@ void blk_finish_plug(struct blk_plug *plug)
 }
 EXPORT_SYMBOL(blk_finish_plug);
 
+bool blk_force_poll(struct request_queue *q)
+{
+	if (!q->mq_ops || !q->mq_ops->poll ||
+	    !test_bit(QUEUE_FLAG_POLL_FORCE, &q->queue_flags))
+		return false;
+	return true;
+}
+
 bool blk_poll(struct request_queue *q, blk_qc_t cookie)
 {
 	struct blk_plug *plug;
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 476f1ec..2775552 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -450,7 +450,12 @@ static struct bio *dio_await_one(struct dio *dio)
 		__set_current_state(TASK_UNINTERRUPTIBLE);
 		dio->waiter = current;
 		spin_unlock_irqrestore(&dio->bio_lock, flags);
-		if (!(dio->iocb->ki_flags & IOCB_HIPRI) ||
+		/*
+		 * Polling must be enabled explicitly on a per-IO basis,
+		 * or through the queue's sysfs io_poll_force control
+		 */
+		if (!((dio->iocb->ki_flags & IOCB_HIPRI) ||
+		      (blk_force_poll(bdev_get_queue(dio->bio_bdev)))) ||
 		    !blk_poll(bdev_get_queue(dio->bio_bdev), dio->bio_cookie))
 			io_schedule();
 		/* wake up sets us TASK_RUNNING */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 7e5d7e0..e87ef17 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -491,6 +491,7 @@ struct request_queue {
 #define QUEUE_FLAG_INIT_DONE   20	/* queue is initialized */
 #define QUEUE_FLAG_NO_SG_MERGE 21	/* don't attempt to merge SG segments*/
 #define QUEUE_FLAG_POLL	       22	/* IO polling enabled if set */
+#define QUEUE_FLAG_POLL_FORCE  23	/* IO polling forced if set */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_STACKABLE)	|	\
@@ -824,6 +825,7 @@ extern int blk_execute_rq(struct request_queue *, struct gendisk *,
 extern void blk_execute_rq_nowait(struct request_queue *, struct gendisk *,
 				  struct request *, int, rq_end_io_fn *);
 
+bool blk_force_poll(struct request_queue *q);
 bool blk_poll(struct request_queue *q, blk_qc_t cookie);
 
 static inline struct request_queue *bdev_get_queue(struct block_device *bdev)
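For completeness, a minimal sketch of how a queue could be opted into
forced polling with the new flag (illustration only, not part of this
patch: example_enable_forced_poll() is a made-up name, while
queue_flag_set_unlocked() is the existing helper from
include/linux/blkdev.h; the sysfs io_poll_force control referenced in
the comment above is expected from patch 2/2 of this series):

    #include <linux/blkdev.h>

    /* Hypothetical driver-side usage, e.g. called from queue setup */
    static void example_enable_forced_poll(struct request_queue *q)
    {
    	/* Only sensible on blk-mq queues whose driver implements ->poll */
    	if (!q->mq_ops || !q->mq_ops->poll)
    		return;

    	/* Keep the per-IO opt-in working, and force polling for the rest */
    	queue_flag_set_unlocked(QUEUE_FLAG_POLL, q);
    	queue_flag_set_unlocked(QUEUE_FLAG_POLL_FORCE, q);
    }

With the flag set, dio_await_one() calls blk_poll() for every direct
I/O on the queue, not just those submitted with IOCB_HIPRI.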