From patchwork Sun Jul 8 17:59:59 2012
From: "Chauhan, Vijay"
To: dm-devel@redhat.com
Cc: "Moger, Babu", "Stankey, Robert"
Date: Sun, 8 Jul 2012 17:59:59 +0000
Subject: [dm-devel] [PATCH] DM MULTIPATH: Allow dm to send larger requests if the underlying devices are set to a larger max_sectors value
List-Id: device-mapper development <dm-devel@redhat.com>

Even though the underlying paths may be configured with a larger max_sectors value, dm sets max_sectors to a default of 1024 sectors (i.e. 512 KB). max_sectors for the dm device can be raised through sysfs, but any time the map is updated, max_sectors is set back to the default. This patch takes the minimum of max_sectors across the physical paths and applies it to the dm device.
Signed-off-by: Vijay Chauhan
Reviewed-by: Babu Moger
Reviewed-by: Bob Stankey
---

--- linux-3.5-rc5-orig/drivers/md/dm-table.c	2012-07-07 11:39:17.000000000 +0530
+++ linux-3.5-rc5/drivers/md/dm-table.c	2012-07-09 00:52:37.000000000 +0530
@@ -549,6 +549,18 @@ int dm_set_device_limits(struct dm_targe
 }
 EXPORT_SYMBOL_GPL(dm_set_device_limits);
 
+int dm_device_max_sectors(struct dm_target *ti, struct dm_dev *dev,
+			  sector_t start, sector_t len, void *data)
+{
+	unsigned int *max_sectors = data;
+	struct block_device *bdev = dev->bdev;
+	struct request_queue *q = bdev_get_queue(bdev);
+
+	*max_sectors = min_not_zero(*max_sectors, q->limits.max_sectors);
+
+	return 0;
+}
+
 /*
  * Decrement a device's use count and remove it if necessary.
  */
@@ -692,6 +704,7 @@ static int validate_hardware_logical_blo
 	struct dm_target *uninitialized_var(ti);
 	struct queue_limits ti_limits;
 	unsigned i = 0;
+	unsigned int max_sectors = 0;
 
 	/*
 	 * Check each entry in the table in turn.
@@ -706,6 +719,15 @@ static int validate_hardware_logical_blo
 		ti->type->iterate_devices(ti, dm_set_device_limits,
 					  &ti_limits);
 
+		/* Find minimum of max_sectors from target devices */
+		if (ti->type->iterate_devices) {
+			ti->type->iterate_devices(ti, dm_device_max_sectors,
+						  &max_sectors);
+			limits->max_sectors = min_t(unsigned int,
+						    ti_limits.max_hw_sectors,
+						    max_sectors);
+		}
+
 		/*
 		 * If the remaining sectors fall entirely within this
 		 * table entry are they compatible with its logical_block_size?

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel