From patchwork Fri Oct 17 23:46:36 2014
X-Patchwork-Submitter: Keith Busch
X-Patchwork-Id: 5099391
X-Patchwork-Delegate: snitzer@redhat.com
From: Keith Busch
To: dm-devel@redhat.com
Cc: Christoph Hellwig, "Jun'ichi Nomura", Keith Busch, Mike Snitzer
Date: Fri, 17 Oct 2014 17:46:36 -0600
Message-Id: <1413589598-17631-3-git-send-email-keith.busch@intel.com>
In-Reply-To: <1413589598-17631-1-git-send-email-keith.busch@intel.com>
References: <1413589598-17631-1-git-send-email-keith.busch@intel.com>
Subject: [dm-devel] [PATCHv2 2/4] dm: Submit stacked requests in irq enabled context

This has dm enqueue all prepared requests onto work processed by another
thread, allowing dm to invoke block APIs that assume an interrupt-enabled
context. This patch prepares for adding blk-mq support.
Signed-off-by: Keith Busch
---
 drivers/md/dm.c | 37 ++++++++++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 809f83f..88a73be 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -19,6 +19,7 @@
 #include <linux/idr.h>
 #include <linux/hdreg.h>
 #include <linux/delay.h>
+#include <linux/kthread.h>
 
 #include <trace/events/block.h>
 
@@ -56,6 +57,8 @@ static DECLARE_WORK(deferred_remove_work, do_deferred_remove);
 
 static struct workqueue_struct *deferred_remove_workqueue;
 
+static void map_tio_request(struct kthread_work *work);
+
 /*
  * For bio-based dm.
  * One of these is allocated per bio.
@@ -78,6 +81,7 @@ struct dm_rq_target_io {
 	struct mapped_device *md;
 	struct dm_target *ti;
 	struct request *orig, clone;
+	struct kthread_work work;
 	int error;
 	union map_info info;
 };
@@ -202,6 +206,9 @@ struct mapped_device {
 	struct bio flush_bio;
 
 	struct dm_stats stats;
+
+	struct kthread_worker kworker;
+	struct task_struct *kworker_task;
 };
 
 /*
@@ -1635,6 +1642,7 @@ static struct request *clone_rq(struct request *rq, struct mapped_device *md,
 	tio->orig = rq;
 	tio->error = 0;
 	memset(&tio->info, 0, sizeof(tio->info));
+	init_kthread_work(&tio->work, map_tio_request);
 
 	clone = &tio->clone;
 	if (setup_clone(clone, rq, tio)) {
@@ -1731,6 +1739,13 @@ static struct request *dm_start_request(struct mapped_device *md, struct request
 	return clone;
 }
 
+static void map_tio_request(struct kthread_work *work)
+{
+	struct dm_rq_target_io *tio = container_of(work, struct dm_rq_target_io,
+						   work);
+	map_request(tio->ti, &tio->clone, tio->md);
+}
+
 /*
  * q->request_fn for request-based dm.
  * Called with the queue lock held.
@@ -1742,6 +1757,7 @@ static void dm_request_fn(struct request_queue *q)
 	struct dm_table *map = dm_get_live_table(md, &srcu_idx);
 	struct dm_target *ti;
 	struct request *rq, *clone;
+	struct dm_rq_target_io *tio;
 	sector_t pos;
 
 	/*
@@ -1777,20 +1793,14 @@ static void dm_request_fn(struct request_queue *q)
 
 		clone = dm_start_request(md, rq);
 
-		spin_unlock(q->queue_lock);
-		if (map_request(ti, clone, md))
-			goto requeued;
-
+		tio = rq->special;
+		tio->ti = ti;
+		queue_kthread_work(&md->kworker, &tio->work);
 		BUG_ON(!irqs_disabled());
-		spin_lock(q->queue_lock);
 	}
 
 	goto out;
 
-requeued:
-	BUG_ON(!irqs_disabled());
-	spin_lock(q->queue_lock);
-
 delay_and_out:
 	blk_delay_queue(q, HZ / 10);
 out:
@@ -1981,6 +1991,11 @@ static struct mapped_device *alloc_dev(int minor)
 	md->disk->queue = md->queue;
 	md->disk->private_data = md;
 	sprintf(md->disk->disk_name, "dm-%d", minor);
+
+	init_kthread_worker(&md->kworker);
+	md->kworker_task = kthread_run(kthread_worker_fn, &md->kworker, "dm-%s",
+				       dev_name(disk_to_dev(md->disk)));
+
 	add_disk(md->disk);
 
 	format_dev_t(md->name, MKDEV(_major, minor));
@@ -2034,6 +2049,10 @@ static void free_dev(struct mapped_device *md)
 	unlock_fs(md);
 	bdput(md->bdev);
 	destroy_workqueue(md->wq);
+
+	flush_kthread_worker(&md->kworker);
+	kthread_stop(md->kworker_task);
+
 	if (md->io_pool)
 		mempool_destroy(md->io_pool);
 	if (md->bs)