From patchwork Tue Feb 28 14:57:34 2017
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 9595885
From: Christoph Hellwig
To: linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 09/12] block: advertize max atomic write limit
Date: Tue, 28 Feb 2017 06:57:34 -0800
Message-Id: <20170228145737.19016-10-hch@lst.de>
In-Reply-To: <20170228145737.19016-1-hch@lst.de>
References: <20170228145737.19016-1-hch@lst.de>
X-Mailing-List: linux-xfs@vger.kernel.org

Signed-off-by: Christoph Hellwig
---
 block/blk-settings.c   | 22 ++++++++++++++++++++++
 block/blk-sysfs.c      | 12 ++++++++++++
 include/linux/blkdev.h |  9 +++++++++
 3 files changed, 43 insertions(+)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 529e55f52a03..9279542472fb 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -93,6 +93,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->virt_boundary_mask = 0;
 	lim->max_segment_size = BLK_MAX_SEGMENT_SIZE;
 	lim->max_sectors = lim->max_hw_sectors = BLK_SAFE_MAX_SECTORS;
+	lim->max_atomic_write_sectors = 0;
 	lim->max_dev_sectors = 0;
 	lim->chunk_sectors = 0;
 	lim->max_write_same_sectors = 0;
@@ -129,6 +130,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->discard_zeroes_data = 1;
 	lim->max_segments = USHRT_MAX;
 	lim->max_hw_sectors = UINT_MAX;
+	lim->max_atomic_write_sectors = 0;
 	lim->max_segment_size = UINT_MAX;
 	lim->max_sectors = UINT_MAX;
 	lim->max_dev_sectors = UINT_MAX;
@@ -258,6 +260,24 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
 EXPORT_SYMBOL(blk_queue_max_hw_sectors);
 
 /**
+ * blk_queue_max_atomic_write_sectors - maximum sectors written atomically
+ * @q: the request queue for the device
+ * @max_atomic_write_sectors: max atomic write sectors in the usual 512b unit
+ *
+ * Description:
+ *    Enables a low level driver to advertise that it supports writing
+ *    multi-sector I/O atomically.  A driver that has any requirements
+ *    beyond the maximum size must not set this field, i.e. must not
+ *    advertise support for multi-sector atomic writes.
+ **/
+void blk_queue_max_atomic_write_sectors(struct request_queue *q,
+		unsigned int max_atomic_write_sectors)
+{
+	q->limits.max_atomic_write_sectors = max_atomic_write_sectors;
+}
+EXPORT_SYMBOL_GPL(blk_queue_max_atomic_write_sectors);
+
+/**
  * blk_queue_chunk_sectors - set size of the chunk for this queue
  * @q: the request queue for the device
  * @chunk_sectors: chunk sectors in the usual 512b unit
@@ -541,6 +561,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
 	t->max_hw_sectors = min_not_zero(t->max_hw_sectors, b->max_hw_sectors);
 	t->max_dev_sectors = min_not_zero(t->max_dev_sectors, b->max_dev_sectors);
+	/* no support for stacking atomic writes */
+	t->max_atomic_write_sectors = 0;
 	t->max_write_same_sectors = min(t->max_write_same_sectors,
 					b->max_write_same_sectors);
 	t->max_write_zeroes_sectors = min(t->max_write_zeroes_sectors,
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 1dbce057592d..2f39009731f6 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -249,6 +249,12 @@ static ssize_t queue_max_hw_sectors_show(struct request_queue *q, char *page)
 	return queue_var_show(max_hw_sectors_kb, (page));
 }
 
+static ssize_t queue_max_atomic_write_sectors_show(struct request_queue *q,
+		char *page)
+{
+	return queue_var_show(queue_max_atomic_write_sectors(q) >> 1, page);
+}
+
 #define QUEUE_SYSFS_BIT_FNS(name, flag, neg)				\
 static ssize_t								\
 queue_show_##name(struct request_queue *q, char *page)			\
@@ -540,6 +546,11 @@ static struct queue_sysfs_entry queue_max_hw_sectors_entry = {
 	.show = queue_max_hw_sectors_show,
 };
 
+static struct queue_sysfs_entry queue_max_atomic_write_sectors_entry = {
+	.attr = {.name = "max_atomic_write_sectors_kb", .mode = S_IRUGO },
+	.show = queue_max_atomic_write_sectors_show,
+};
+
 static struct queue_sysfs_entry queue_max_segments_entry = {
 	.attr = {.name = "max_segments", .mode = S_IRUGO },
 	.show = queue_max_segments_show,
@@ -695,6 +706,7 @@ static struct attribute *default_attrs[] = {
 	&queue_requests_entry.attr,
 	&queue_ra_entry.attr,
 	&queue_max_hw_sectors_entry.attr,
+	&queue_max_atomic_write_sectors_entry.attr,
 	&queue_max_sectors_entry.attr,
 	&queue_max_segments_entry.attr,
 	&queue_max_integrity_segments_entry.attr,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1ca8e8fd1078..c43d952557f9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -323,6 +323,7 @@ struct queue_limits {
 	unsigned int		alignment_offset;
 	unsigned int		io_min;
 	unsigned int		io_opt;
+	unsigned int		max_atomic_write_sectors;
 	unsigned int		max_discard_sectors;
 	unsigned int		max_hw_discard_sectors;
 	unsigned int		max_write_same_sectors;
@@ -1135,6 +1136,8 @@ extern void blk_cleanup_queue(struct request_queue *);
 extern void blk_queue_make_request(struct request_queue *, make_request_fn *);
 extern void blk_queue_bounce_limit(struct request_queue *, u64);
 extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
+extern void blk_queue_max_atomic_write_sectors(struct request_queue *,
+		unsigned int);
 extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
 extern void blk_queue_max_segments(struct request_queue *, unsigned short);
 extern void blk_queue_max_segment_size(struct request_queue *, unsigned int);
@@ -1371,6 +1374,12 @@ static inline unsigned int queue_max_hw_sectors(struct request_queue *q)
 	return q->limits.max_hw_sectors;
 }
 
+static inline unsigned int queue_max_atomic_write_sectors(
+		struct request_queue *q)
+{
+	return q->limits.max_atomic_write_sectors;
+}
+
 static inline unsigned short queue_max_segments(struct request_queue *q)
 {
 	return q->limits.max_segments;