From patchwork Wed Oct  4 13:55:03 2017
X-Patchwork-Submitter: Goldwyn Rodrigues
X-Patchwork-Id: 9984765
From: Goldwyn Rodrigues
To: linux-block@vger.kernel.org
Cc: axboe@kernel.dk, shli@kernel.org, Goldwyn Rodrigues
Subject: [PATCH 1/9] QUEUE_FLAG_NOWAIT to indicate device supports nowait
Date: Wed, 4 Oct 2017 08:55:03 -0500
Message-Id: <20171004135511.26110-2-rgoldwyn@suse.de>
In-Reply-To: <20171004135511.26110-1-rgoldwyn@suse.de>
References: <20171004135511.26110-1-rgoldwyn@suse.de>
X-Mailing-List: linux-block@vger.kernel.org

From: Goldwyn Rodrigues

Nowait is a feature of direct AIO where the user can request that the
I/O return immediately instead of blocking. This translates to the
REQ_NOWAIT flag in bio->bi_opf. While request-based devices do not
block, stacked devices such as md/dm will. In order to mark stacked
devices that explicitly support nowait, set QUEUE_FLAG_NOWAIT in
queue_flags; such a device must return -EAGAIN whenever it would block.

Signed-off-by: Goldwyn Rodrigues
---
 block/blk-core.c       | 4 ++--
 include/linux/blkdev.h | 6 ++++++
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 048be4aa6024..8de633f8c633 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2044,10 +2044,10 @@ generic_make_request_checks(struct bio *bio)

 	/*
 	 * For a REQ_NOWAIT based request, return -EOPNOTSUPP
-	 * if queue is not a request based queue.
+	 * if queue cannot handle nowait bio's
 	 */
-	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_rq_based(q))
+	if ((bio->bi_opf & REQ_NOWAIT) && !blk_queue_supports_nowait(q))
 		goto not_supported;

 	if (should_fail_request(&bio->bi_disk->part0, bio->bi_iter.bi_size))
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 02fa42d24b52..1d0da2a9cf46 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -631,6 +631,7 @@ struct request_queue {
 #define QUEUE_FLAG_REGISTERED  26	/* queue has been registered to a disk */
 #define QUEUE_FLAG_SCSI_PASSTHROUGH 27	/* queue supports SCSI commands */
 #define QUEUE_FLAG_QUIESCED    28	/* queue has been quiesced */
+#define QUEUE_FLAG_NOWAIT      29	/* stack device driver supports REQ_NOWAIT */

 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
				 (1 << QUEUE_FLAG_STACKABLE)	|	\
@@ -759,6 +760,11 @@ static inline bool queue_is_rq_based(struct request_queue *q)
 	return q->request_fn || q->mq_ops;
 }

+static inline bool blk_queue_supports_nowait(struct request_queue *q)
+{
+	return queue_is_rq_based(q) || test_bit(QUEUE_FLAG_NOWAIT, &q->queue_flags);
+}
+
 static inline unsigned int blk_queue_cluster(struct request_queue *q)
 {
 	return q->limits.cluster;
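
For reference (not part of this patch): a bio-based stacked driver that
wants to opt in would set the new flag on its queue and complete
REQ_NOWAIT bios with BLK_STS_AGAIN when it would otherwise block, which
bio_wouldblock_error() does. A minimal sketch along those lines; the
dm_example_* names and the would-block check are hypothetical
placeholders, not code from this series:

#include <linux/blkdev.h>
#include <linux/bio.h>

/* Hypothetical stand-in for a real "would this bio block?" check. */
static bool dm_example_would_block(struct request_queue *q)
{
	return false;
}

static void dm_example_init_queue(struct request_queue *q)
{
	/* Advertise that this bio-based queue honours REQ_NOWAIT. */
	set_bit(QUEUE_FLAG_NOWAIT, &q->queue_flags);
}

static blk_qc_t dm_example_make_request(struct request_queue *q,
					 struct bio *bio)
{
	/*
	 * The submitter asked not to wait and servicing the bio would
	 * block: end it with BLK_STS_AGAIN so the caller sees -EAGAIN.
	 */
	if ((bio->bi_opf & REQ_NOWAIT) && dm_example_would_block(q)) {
		bio_wouldblock_error(bio);
		return BLK_QC_T_NONE;
	}

	/* ... normal submission path ... */
	return BLK_QC_T_NONE;
}

The blk_queue_supports_nowait() helper keeps request-based queues
working unchanged while letting bio-based drivers opt in via the flag.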