From patchwork Fri Apr 24 08:14:43 2009
X-Patchwork-Submitter: Kiyoshi Ueda
X-Patchwork-Id: 19862
Message-ID: <49F174F3.8010008@ct.jp.nec.com>
Date: Fri, 24 Apr 2009 17:14:43 +0900
From: Kiyoshi Ueda
To: Alasdair Kergon
Cc: device-mapper development
References: <49F17409.4060201@ct.jp.nec.com>
In-Reply-To: <49F17409.4060201@ct.jp.nec.com>
Subject: [dm-devel] [PATCH 4/7] dm core: don't set QUEUE_ORDERED_DRAIN for request-based dm

Request-based dm doesn't have barrier support yet, so we need to set
QUEUE_ORDERED_DRAIN only for bio-based dm.  Since the device type is
decided when the first table is loaded, setting the flag is deferred
until then.  (See the illustrative sketch after the patch.)
Signed-off-by: Kiyoshi Ueda
Signed-off-by: Jun'ichi Nomura
Cc: Alasdair G Kergon
Acked-by: Hannes Reinecke
---
 drivers/md/dm-table.c |    5 +++++
 drivers/md/dm.c       |   11 ++++++++++-
 drivers/md/dm.h       |    1 +
 3 files changed, 16 insertions(+), 1 deletion(-)

Index: 2.6.30-rc3/drivers/md/dm-table.c
===================================================================
--- 2.6.30-rc3.orig/drivers/md/dm-table.c
+++ 2.6.30-rc3/drivers/md/dm-table.c
@@ -819,6 +819,11 @@ int dm_table_get_type(struct dm_table *t
 		       DM_TYPE_BIO_BASED : DM_TYPE_REQUEST_BASED;
 }
 
+int dm_table_bio_based(struct dm_table *t)
+{
+	return dm_table_get_type(t) == DM_TYPE_BIO_BASED;
+}
+
 int dm_table_request_based(struct dm_table *t)
 {
 	return dm_table_get_type(t) == DM_TYPE_REQUEST_BASED;
Index: 2.6.30-rc3/drivers/md/dm.c
===================================================================
--- 2.6.30-rc3.orig/drivers/md/dm.c
+++ 2.6.30-rc3/drivers/md/dm.c
@@ -1779,7 +1779,6 @@ static struct mapped_device *alloc_dev(i
 	md->queue->backing_dev_info.congested_fn = dm_any_congested;
 	md->queue->backing_dev_info.congested_data = md;
 	blk_queue_make_request(md->queue, dm_request);
-	blk_queue_ordered(md->queue, QUEUE_ORDERED_DRAIN, NULL);
 	blk_queue_bounce_limit(md->queue, BLK_BOUNCE_ANY);
 	md->queue->unplug_fn = dm_unplug_all;
 	blk_queue_merge_bvec(md->queue, dm_merge_bvec);
@@ -2206,6 +2205,16 @@ int dm_swap_table(struct mapped_device *
 		goto out;
 	}
 
+	/*
+	 * It is enough that blk_queue_ordered() is called only once when
+	 * the first bio-based table is bound.
+	 *
+	 * This setting should be moved to alloc_dev() when request-based dm
+	 * supports barrier.
+	 */
+	if (!md->map && dm_table_bio_based(table))
+		blk_queue_ordered(md->queue, QUEUE_ORDERED_DRAIN, NULL);
+
 	__unbind(md);
 	r = __bind(md, table);
 
Index: 2.6.30-rc3/drivers/md/dm.h
===================================================================
--- 2.6.30-rc3.orig/drivers/md/dm.h
+++ 2.6.30-rc3/drivers/md/dm.h
@@ -58,6 +58,7 @@ int dm_table_any_congested(struct dm_tab
 int dm_table_any_busy_target(struct dm_table *t);
 int dm_table_set_type(struct dm_table *t);
 int dm_table_get_type(struct dm_table *t);
+int dm_table_bio_based(struct dm_table *t);
 int dm_table_request_based(struct dm_table *t);
 int dm_table_alloc_md_mempools(struct dm_table *t);
 void dm_table_free_md_mempools(struct dm_table *t);
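
For readers less familiar with the dm initialisation path, below is a
minimal, self-contained userspace sketch of the deferred setup the patch
implements.  It is NOT kernel code: dm_table_bio_based() and the
"first bio-based table bind" check mirror the patch, but the structs,
the blk_queue_ordered() stub (which also drops the prepare_flush
callback the real kernel helper takes) and main() are simplified
stand-ins added purely for illustration.

/*
 * Userspace sketch only; names mirror the patch, scaffolding is invented.
 */
#include <stdio.h>

enum dm_type { DM_TYPE_NONE, DM_TYPE_BIO_BASED, DM_TYPE_REQUEST_BASED };

struct dm_table { enum dm_type type; };
struct request_queue { int ordered; };
struct mapped_device {
	struct request_queue queue;
	struct dm_table *map;		/* NULL until the first table is bound */
};

#define QUEUE_ORDERED_DRAIN 1

/* Stand-in for the block layer call; just records the ordered mode. */
static void blk_queue_ordered(struct request_queue *q, int ordered)
{
	q->ordered = ordered;
	printf("queue ordered mode set to %d\n", ordered);
}

static int dm_table_bio_based(struct dm_table *t)
{
	return t->type == DM_TYPE_BIO_BASED;
}

/*
 * Models the hunk added to dm_swap_table(): set QUEUE_ORDERED_DRAIN only
 * when the very first table bound to the device is bio-based.
 */
static void bind_table(struct mapped_device *md, struct dm_table *t)
{
	if (!md->map && dm_table_bio_based(t))
		blk_queue_ordered(&md->queue, QUEUE_ORDERED_DRAIN);
	md->map = t;			/* stands in for __bind() */
}

int main(void)
{
	struct mapped_device bio_md = { .queue = { 0 }, .map = NULL };
	struct mapped_device rq_md  = { .queue = { 0 }, .map = NULL };
	struct dm_table bio_tbl = { .type = DM_TYPE_BIO_BASED };
	struct dm_table rq_tbl  = { .type = DM_TYPE_REQUEST_BASED };

	bind_table(&bio_md, &bio_tbl);	/* sets QUEUE_ORDERED_DRAIN */
	bind_table(&rq_md, &rq_tbl);	/* leaves the queue untouched */
	return 0;
}

Running the sketch prints the ordered-mode message only for the
bio-based device, which is the behaviour the patch enforces for real
queues until request-based dm gains barrier support.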