From patchwork Mon Jan 9 05:04:27 2017
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 9503759
From: Minchan Kim <minchan@kernel.org>
To: Jens Axboe
Cc: Hyeoncheol Lee, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    Andrew Morton, Minchan Kim, Sergey Senozhatsky, Robert Jennings,
    Jerome Marchand
Subject: [RFC] blk: increase logical_block_size to unsigned int
Date: Mon, 9 Jan 2017 14:04:27 +0900
Message-Id: <1483938267-8858-1-git-send-email-minchan@kernel.org>
X-Mailing-List: linux-block@vger.kernel.org

zram is mostly used as a swap device in the embedded world, so it wants
to do I/O in PAGE_SIZE-aligned, PAGE_SIZE-sized units. One obstacle to
that was that blk_queue_logical_block_size(zram->disk->queue, PAGE_SIZE)
overflowed on *64K page systems*, so [1] changed it to the constant
4096. Since then, partial I/O can happen, and zram has to handle it,
which makes zram more complicated[2]. Now I want to remove that
partial-I/O handling logic from zram.

Block guys, Robert, Jerome:
Can't we extend q->limits.logical_block_size to unsigned int? Is there
any problem with that?

[1] 7b19b8d45b21, zram: Prevent overflow in logical block size
[2] 924bd88d703e, zram: allow partial page operations

Cc: Sergey Senozhatsky
Cc: Jens Axboe
Cc: Robert Jennings
Cc: Jerome Marchand
Signed-off-by: Minchan Kim
---
 block/blk-settings.c   |  2 +-
 include/linux/blkdev.h | 10 +++++-----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index f679ae122843..0d644f37e3c6 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -349,7 +349,7 @@ EXPORT_SYMBOL(blk_queue_max_segment_size);
  *   storage device can address.  The default of 512 covers most
  *   hardware.
 **/
-void blk_queue_logical_block_size(struct request_queue *q, unsigned short size)
+void blk_queue_logical_block_size(struct request_queue *q, unsigned int size)
 {
 	q->limits.logical_block_size = size;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c47c358ba052..0aaea317a7f4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -281,7 +281,7 @@ struct queue_limits {
 	unsigned int		discard_granularity;
 	unsigned int		discard_alignment;
 
-	unsigned short		logical_block_size;
+	unsigned int		logical_block_size;
 	unsigned short		max_segments;
 	unsigned short		max_integrity_segments;
@@ -991,7 +991,7 @@ extern void blk_queue_max_discard_sectors(struct request_queue *q,
 		unsigned int max_discard_sectors);
 extern void blk_queue_max_write_same_sectors(struct request_queue *q,
 		unsigned int max_write_same_sectors);
-extern void blk_queue_logical_block_size(struct request_queue *, unsigned short);
+extern void blk_queue_logical_block_size(struct request_queue *, unsigned int);
 extern void blk_queue_physical_block_size(struct request_queue *, unsigned int);
 extern void blk_queue_alignment_offset(struct request_queue *q,
 				       unsigned int alignment);
@@ -1216,9 +1216,9 @@ static inline unsigned int queue_max_segment_size(struct request_queue *q)
 	return q->limits.max_segment_size;
 }
 
-static inline unsigned short queue_logical_block_size(struct request_queue *q)
+static inline unsigned int queue_logical_block_size(struct request_queue *q)
 {
-	int retval = 512;
+	unsigned int retval = 512;
 
 	if (q && q->limits.logical_block_size)
 		retval = q->limits.logical_block_size;
@@ -1226,7 +1226,7 @@ static inline unsigned short queue_logical_block_size(struct request_queue *q)
 	return retval;
 }
 
-static inline unsigned short bdev_logical_block_size(struct block_device *bdev)
+static inline unsigned int bdev_logical_block_size(struct block_device *bdev)
 {
 	return queue_logical_block_size(bdev_get_queue(bdev));
 }