From patchwork Fri Sep 18 15:18:50 2009
X-Patchwork-Submitter: Junichi Nomura
X-Patchwork-Id: 48549
Message-ID: <4AB3A4DA.90309@ce.jp.nec.com>
Date: Sat, 19 Sep 2009 00:18:50 +0900
From: "Jun'ichi Nomura"
To: Jens Axboe, Alasdair G Kergon, "Martin K. Petersen", Mike Snitzer
Cc: device-mapper development, linux-kernel@vger.kernel.org
Subject: [dm-devel] [PATCH 2/2] block: blk_set_default_limits sets 0 to max_sectors

max_sectors and max_hw_sectors of a dm device are set to smaller values
than those of the underlying devices. For example:

# cat /sys/block/sdj/queue/max_sectors_kb
512
# cat /sys/block/sdj/queue/max_hw_sectors_kb
32767
# echo "0 10 linear /dev/sdj 0" | dmsetup create test
# cat /sys/block/dm-0/queue/max_sectors_kb
127
# cat /sys/block/dm-0/queue/max_hw_sectors_kb
127

This prevents the I/O size of struct request from becoming large and
causes undesired request fragmentation in request-based dm.

The cause is the queue_limits stacking: in dm_calculate_queue_limits(),
the block layer's safe default value (SAFE_MAX_SECTORS, 255) takes part
in the merging of the targets' queue_limits, so the limits of the
underlying devices are not propagated correctly.
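For illustration, here is a minimal user-space model of that merge,
assuming max_sectors is stacked with min_not_zero() semantics as
blk_stack_limits() uses; it only models the arithmetic and is not
kernel code:

/*
 * Sketch only: models how a default max_sectors takes part in the
 * queue_limits merge, assuming min_not_zero() semantics.
 */
#include <stdio.h>

#define SAFE_MAX_SECTORS 255	/* block layer's conservative default */

/* 0 means "no limit set yet" and loses against any real value */
static unsigned int min_not_zero(unsigned int a, unsigned int b)
{
	if (a == 0)
		return b;
	if (b == 0)
		return a;
	return a < b ? a : b;
}

int main(void)
{
	unsigned int underlying = 1024;	/* sdj: 512 KB = 1024 x 512-byte sectors */

	/* old behaviour: the default 255 joins the merge and wins */
	printf("default 255: merged max_sectors = %u\n",
	       min_not_zero(SAFE_MAX_SECTORS, underlying));

	/* new behaviour: a 0 default lets the underlying limit through */
	printf("default   0: merged max_sectors = %u\n",
	       min_not_zero(0, underlying));

	return 0;
}

255 sectors rounds down to the 127 KB shown for dm-0 above, while 1024
sectors matches sdj's 512 KB; with a 0 default the merge keeps the
underlying value instead of clamping it.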
Initialize the default values of max_sectors and max_hw_sectors to 0 in
blk_set_default_limits() so that the values of the underlying devices
propagate properly. blk_queue_make_request() now sets SAFE_MAX_SECTORS
explicitly via blk_queue_max_sectors(), so queues initialized through
it keep the previous conservative default.

See this thread for further background:
https://www.redhat.com/archives/dm-devel/2009-September/msg00176.html

Signed-off-by: Kiyoshi Ueda
Signed-off-by: Jun'ichi Nomura
Reported-by: David Strand
Cc: Mike Snitzer
Cc: Alasdair G Kergon
Cc: Martin K. Petersen
Cc: Jens Axboe
---
 block/blk-settings.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Index: linux-2.6.31.work/block/blk-settings.c
===================================================================
--- linux-2.6.31.work.orig/block/blk-settings.c
+++ linux-2.6.31.work/block/blk-settings.c
@@ -111,7 +111,7 @@ void blk_set_default_limits(struct queue
 	lim->max_hw_segments = MAX_HW_SEGMENTS;
 	lim->seg_boundary_mask = BLK_SEG_BOUNDARY_MASK;
 	lim->max_segment_size = MAX_SEGMENT_SIZE;
-	lim->max_sectors = lim->max_hw_sectors = SAFE_MAX_SECTORS;
+	lim->max_sectors = lim->max_hw_sectors = 0;
 	lim->logical_block_size = lim->physical_block_size = lim->io_min = 512;
 	lim->bounce_pfn = (unsigned long)(BLK_BOUNCE_ANY >> PAGE_SHIFT);
 	lim->alignment_offset = 0;
@@ -164,6 +164,7 @@ void blk_queue_make_request(struct reque
 	q->unplug_timer.data = (unsigned long)q;
 
 	blk_set_default_limits(&q->limits);
+	blk_queue_max_sectors(q, SAFE_MAX_SECTORS);
 
 	/*
 	 * If the caller didn't supply a lock, fall back to our embedded
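A note on the second hunk above: since blk_set_default_limits() now
leaves max_sectors at 0, blk_queue_make_request() restores the
conservative default explicitly, so queues set up through it behave as
before, while stacking code that starts from blk_set_default_limits()
begins with "not set yet". A rough user-space model of that split,
tracking only max_sectors and using made-up helper names
(default_limits, make_request_defaults):

/*
 * Sketch only: not kernel code, simplified helper names.
 */
#include <stdio.h>

#define SAFE_MAX_SECTORS 255

struct limits_model {
	unsigned int max_sectors;
};

/* models the patched blk_set_default_limits(): "not set yet" */
static void default_limits(struct limits_model *lim)
{
	lim->max_sectors = 0;
}

/* models the patched blk_queue_make_request(): restore the safe default */
static void make_request_defaults(struct limits_model *lim)
{
	default_limits(lim);
	lim->max_sectors = SAFE_MAX_SECTORS;	/* stand-in for blk_queue_max_sectors() */
}

int main(void)
{
	struct limits_model dm_start, bio_queue;

	default_limits(&dm_start);		/* dm stacking starts from 0 */
	make_request_defaults(&bio_queue);	/* ordinary queue keeps 255 */

	printf("stacking start value: %u\n", dm_start.max_sectors);
	printf("blk_queue_make_request queue: %u\n", bio_queue.max_sectors);
	return 0;
}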