From patchwork Mon Sep 24 09:38:48 2012
X-Patchwork-Submitter: Junichi Nomura
X-Patchwork-Id: 1496841
Message-ID: <50602A28.8010402@ce.jp.nec.com>
Date: Mon, 24 Sep 2012 18:38:48 +0900
From: "Jun'ichi Nomura"
To: Mike Snitzer
Cc: dm-devel@redhat.com, Mike Christie
References: <20120920192812.GA31495@redhat.com> <20120921154703.GB5967@redhat.com>
In-Reply-To: <20120921154703.GB5967@redhat.com>
List-Id: device-mapper development
Subject: Re: [dm-devel] [PATCH v2] dm: gracefully fail any request beyond the end of the device

On 09/22/12 00:47, Mike Snitzer wrote:
> @@ -1651,19 +1654,31 @@ static void dm_request_fn(struct request_queue *q)
>  		if (!rq)
>  			goto delay_and_out;
>  
> +		clone = rq->special;
> +
>  		/* always use block 0 to find the target for flushes for now */
>  		pos = 0;
>  		if (!(rq->cmd_flags & REQ_FLUSH))
>  			pos = blk_rq_pos(rq);
>  
>  		ti = dm_table_find_target(map, pos);
> -		BUG_ON(!dm_target_is_valid(ti));
> +		if (!dm_target_is_valid(ti)) {
> +			/*
> +			 * Must perform setup that dm_done() requires,
> +			 * before calling dm_kill_unmapped_request
> +			 */
> +			DMERR_LIMIT("request attempted access beyond the end of device");
> +			blk_start_request(rq);
> +			atomic_inc(&md->pending[rq_data_dir(clone)]);
> +			dm_get(md);
> +			dm_kill_unmapped_request(clone, -EIO);
> +			goto out;

This "goto out" should be "continue" so that request_fn processes the
next requests in the queue.
Also, I think introducing a dm_start_request() function will make this
part of the code a little easier to read.

An edited patch is attached.

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index e24143c..3977f8d 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -865,7 +865,10 @@ static void dm_done(struct request *clone, int error, bool mapped)
 {
 	int r = error;
 	struct dm_rq_target_io *tio = clone->end_io_data;
-	dm_request_endio_fn rq_end_io = tio->ti->type->rq_end_io;
+	dm_request_endio_fn rq_end_io = NULL;
+
+	if (tio->ti)
+		rq_end_io = tio->ti->type->rq_end_io;
 
 	if (mapped && rq_end_io)
 		r = rq_end_io(tio->ti, clone, error, &tio->info);
@@ -1566,15 +1569,6 @@ static int map_request(struct dm_target *ti, struct request *clone,
 	int r, requeued = 0;
 	struct dm_rq_target_io *tio = clone->end_io_data;
 
-	/*
-	 * Hold the md reference here for the in-flight I/O.
-	 * We can't rely on the reference count by device opener,
-	 * because the device may be closed during the request completion
-	 * when all bios are completed.
-	 * See the comment in rq_completed() too.
-	 */
-	dm_get(md);
-
 	tio->ti = ti;
 	r = ti->type->map_rq(ti, clone, &tio->info);
 	switch (r) {
@@ -1606,6 +1600,26 @@ static int map_request(struct dm_target *ti, struct request *clone,
 	return requeued;
 }
 
+static struct request *dm_start_request(struct mapped_device *md, struct request *orig)
+{
+	struct request *clone;
+
+	blk_start_request(orig);
+	clone = orig->special;
+	atomic_inc(&md->pending[rq_data_dir(clone)]);
+
+	/*
+	 * Hold the md reference here for the in-flight I/O.
+	 * We can't rely on the reference count by device opener,
+	 * because the device may be closed during the request completion
+	 * when all bios are completed.
+	 * See the comment in rq_completed() too.
+	 */
+	dm_get(md);
+
+	return clone;
+}
+
 /*
  * q->request_fn for request-based dm.
  * Called with the queue lock held.
@@ -1635,14 +1649,21 @@ static void dm_request_fn(struct request_queue *q)
 			pos = blk_rq_pos(rq);
 
 		ti = dm_table_find_target(map, pos);
-		BUG_ON(!dm_target_is_valid(ti));
+		if (!dm_target_is_valid(ti)) {
+			/*
+			 * Must perform setup that dm_done() requires,
+			 * before calling dm_kill_unmapped_request
+			 */
+			DMERR_LIMIT("request attempted access beyond the end of device");
+			clone = dm_start_request(md, rq);
+			dm_kill_unmapped_request(clone, -EIO);
+			continue;
+		}
 
 		if (ti->type->busy && ti->type->busy(ti))
 			goto delay_and_out;
 
-		blk_start_request(rq);
-		clone = rq->special;
-		atomic_inc(&md->pending[rq_data_dir(clone)]);
+		clone = dm_start_request(md, rq);
 
 		spin_unlock(q->queue_lock);
 		if (map_request(ti, clone, md))
@@ -1662,8 +1683,6 @@ delay_and_out:
 	blk_delay_queue(q, HZ / 10);
 out:
 	dm_table_put(map);
-
-	return;
 }
 
 int dm_underlying_device_busy(struct request_queue *q)
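
To see why "continue" matters here, a small userspace model of the
queue-draining behaviour may help. This is a sketch only: fake_request,
target_is_valid() and run_queue() are hypothetical stand-ins, not the
kernel API. With the "goto out" variant the queue run stops at the first
bad request, and everything behind it sits in the queue until the queue
is run again; with "continue" the rest of the queue is dispatched in the
same pass:

	/* Hypothetical userspace model of the dm_request_fn() loop. */
	#include <stdio.h>
	#include <stdbool.h>

	#define DEV_SECTORS 100	/* pretend device size, in sectors */

	struct fake_request { unsigned long pos; };

	static bool target_is_valid(unsigned long pos)
	{
		return pos < DEV_SECTORS;
	}

	static void run_queue(struct fake_request *q, int n, bool use_continue)
	{
		int mapped = 0, killed = 0, i;

		for (i = 0; i < n; i++) {
			if (!target_is_valid(q[i].pos)) {
				killed++;	/* models dm_kill_unmapped_request() */
				if (use_continue)
					continue;	/* keep draining the queue */
				break;			/* models "goto out": stop early */
			}
			mapped++;	/* models map_request() */
		}
		printf("%-8s: mapped %d, killed %d, left behind %d\n",
		       use_continue ? "continue" : "goto out",
		       mapped, killed, n - mapped - killed);
	}

	int main(void)
	{
		/* third request reaches past the end of the device */
		struct fake_request q[] = { {10}, {50}, {200}, {70}, {90} };

		run_queue(q, 5, false);	/* goto out: two requests stall */
		run_queue(q, 5, true);	/* continue: whole queue processed */
		return 0;
	}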
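The dm_done() hunk can be illustrated the same way, assuming (as in the
patch) that the invalid-target path kills the clone before map_request()
ever assigns tio->ti. All struct names below are hypothetical stand-ins
for dm_rq_target_io and friends; the point is that the old unconditional
tio->ti->type->rq_end_io dereference would oops on a never-mapped clone,
while the guarded version just completes the request with the error as-is:

	/* Hypothetical userspace model of the guarded dm_done() logic. */
	#include <stdio.h>
	#include <stddef.h>

	typedef int (*rq_end_io_fn)(int error);

	struct fake_target_type { rq_end_io_fn rq_end_io; };
	struct fake_target { struct fake_target_type *type; };
	struct fake_tio { struct fake_target *ti; };	/* NULL if never mapped */

	static int fake_done(struct fake_tio *tio, int error, int mapped)
	{
		rq_end_io_fn rq_end_io = NULL;

		if (tio->ti)	/* the added NULL check; old code dereferenced blindly */
			rq_end_io = tio->ti->type->rq_end_io;

		if (mapped && rq_end_io)
			return rq_end_io(error);
		return error;	/* never-mapped clone: pass the error through */
	}

	int main(void)
	{
		struct fake_tio unmapped = { .ti = NULL };

		/* The unguarded version would have crashed here. */
		printf("completion result: %d\n",
		       fake_done(&unmapped, -5 /* -EIO */, 0));
		return 0;
	}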