From patchwork Fri Jun 27 22:43:19 2014
From: Minfei Huang <huangminfei@ucloud.cn>
Date: Sat, 28 Jun 2014 06:43:19 +0800 (CST)
To: Mike Snitzer
Cc: Joe Thornber, linux-kernel, linux-raid, device-mapper development,
 Mikulas Patocka, agk
Subject: Re: [dm-devel] dm-io: Prevent a dangling pointer in the sync io
 callback function
Message-ID: <5d1718c1.4b30.146df7f4827.Coremail.huangminfei@ucloud.cn>
In-Reply-To: <20140627192904.GA21254@redhat.com>
References: <1403841690-4401-1-git-send-email-huangminfei@ucloud.cn>
 <20140627151105.GA30592@debian>
 <20140627192904.GA21254@redhat.com>

Yes, this patch is preferable to the one I submitted.

--------------------
Minfei Huang
Ucloud (IaaS)
QQ: 805852007

At 2014-06-28 03:29, "Mike Snitzer" wrote:

On Fri, Jun 27 2014 at  2:44pm -0400,
Mikulas Patocka wrote:

> On Fri, 27 Jun 2014, Joe Thornber wrote:
> 
> > On Fri, Jun 27, 2014 at 12:01:30PM +0800, Minfei Huang wrote:
> > > The io address in the callback function can become a dangling
> > > pointer, because the sync io thread can be woken by another thread
> > > and return, freeing the io address.
> > 
> > Yes, well found.  I prefer the following fix however.
> > 
> > - Joe
> 
> It seems ok.
> 
> The patch is too big, I think the only change that needs to be done to
> fix the bug is to replace "struct task_struct *sleeper;" with "struct
> completion *completion;", replace "if (io->sleeper)
> wake_up_process(io->sleeper);" with "if (io->completion)
> complete(io->completion);" and declare the completion in sync_io() and
> wait on it instead of the "while (1)" loop there.

Here is the minimalist fix you suggested (I agree that splitting out a
minimalist fix is useful):

 drivers/md/dm-io.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
index 2a20986..e60c2ea 100644
--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -10,6 +10,7 @@
 #include <linux/device-mapper.h>
 
 #include <linux/bio.h>
+#include <linux/completion.h>
 #include <linux/mempool.h>
 #include <linux/module.h>
 #include <linux/sched.h>
@@ -32,7 +33,7 @@ struct dm_io_client {
 struct io {
 	unsigned long error_bits;
 	atomic_t count;
-	struct task_struct *sleeper;
+	struct completion *wait;
 	struct dm_io_client *client;
 	io_notify_fn callback;
 	void *context;
@@ -121,8 +122,8 @@ static void dec_count(struct io *io, unsigned int region, int error)
 			invalidate_kernel_vmap_range(io->vma_invalidate_address,
 						     io->vma_invalidate_size);
 
-		if (io->sleeper)
-			wake_up_process(io->sleeper);
+		if (io->wait)
+			complete(io->wait);
 
 		else {
 			unsigned long r = io->error_bits;
@@ -385,6 +386,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
 	 */
 	volatile char io_[sizeof(struct io) + __alignof__(struct io) - 1];
 	struct io *io = (struct io *)PTR_ALIGN(&io_, __alignof__(struct io));
+	DECLARE_COMPLETION_ONSTACK(wait);
 
 	if (num_regions > 1 && (rw & RW_MASK) != WRITE) {
 		WARN_ON(1);
@@ -393,7 +395,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
 
 	io->error_bits = 0;
 	atomic_set(&io->count, 1); /* see dispatch_io() */
-	io->sleeper = current;
+	io->wait = &wait;
 	io->client = client;
 
 	io->vma_invalidate_address = dp->vma_invalidate_address;
@@ -401,15 +403,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
 
 	dispatch_io(rw, num_regions, where, dp, io, 1);
 
-	while (1) {
-		set_current_state(TASK_UNINTERRUPTIBLE);
-
-		if (!atomic_read(&io->count))
-			break;
-
-		io_schedule();
-	}
-	set_current_state(TASK_RUNNING);
+	wait_for_completion_io(&wait);
 
 	if (error_bits)
 		*error_bits = io->error_bits;
@@ -432,7 +426,7 @@ static int async_io(struct dm_io_client *client, unsigned int num_regions,
 	io = mempool_alloc(client->pool, GFP_NOIO);
 	io->error_bits = 0;
 	atomic_set(&io->count, 1); /* see dispatch_io() */
-	io->sleeper = NULL;
+	io->wait = NULL;
 	io->client = client;
 	io->callback = fn;
 	io->context = context;
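
To spell out the race being fixed: the old dec_count() dropped io->count
and then dereferenced io->sleeper, while sync_io() kept that io in its own
stack frame and returned as soon as it saw the count reach zero. Any other
wakeup in between lets sync_io() return first, so wake_up_process(io->sleeper)
then reads a dead stack frame. Below is a minimal user-space sketch of the
same shape, with pthreads standing in for kernel threads; every name in it
(fake_io, completion_path, and so on) is invented for illustration, and none
of it is the dm-io code itself:

#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

struct fake_io {
	atomic_int count;
	atomic_int *parked;		/* stand-in for io->sleeper */
};

/* Stand-in for dec_count(), running in I/O completion context. */
static void *completion_path(void *arg)
{
	struct fake_io *io = arg;

	atomic_fetch_sub(&io->count, 1);
	/*
	 * RACE WINDOW: if the submitter is woken by anything else right
	 * here, it sees count == 0, leaves its wait loop and returns,
	 * and its on-stack fake_io is gone.  The store below then goes
	 * through a dangling pointer -- the bug in the old sync_io().
	 */
	atomic_store(io->parked, 0);	/* stand-in for wake_up_process() */
	return NULL;
}

/* Stand-in for the old sync_io(): io lives in this stack frame. */
static void submit_and_wait(void)
{
	atomic_int parked = 1;
	struct fake_io io = { .count = 1, .parked = &parked };
	pthread_t t;

	pthread_create(&t, NULL, completion_path, &io);

	/* The old dm-io wait loop: exits as soon as count hits zero. */
	while (atomic_load(&io.count))
		usleep(100);

	/* The real sync_io() had nothing like this join, which is why the
	 * frame could die while completion_path() was still using it. */
	pthread_join(t, NULL);
}

int main(void)
{
	submit_and_wait();
	return 0;
}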
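
And why the completion closes that window: complete() does all of its work
under the completion's own lock, and the waiter retakes that same lock
before it can return, so the stack frame holding the completion cannot be
unwound while complete() is still touching it; a wakeup without ->done set
is simply absorbed by the re-check. A rough user-space analogue of that
contract, using a mutex and condition variable in place of the completion's
wait-queue lock (again invented names, a sketch of the idea rather than the
kernel implementation):

#include <pthread.h>
#include <stdbool.h>

/* User-space analogue of struct completion. */
struct fake_completion {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	bool done;
};

/* Analogue of DECLARE_COMPLETION_ONSTACK() initialisation. */
#define FAKE_COMPLETION_INIT \
	{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false }

/* Analogue of complete(): every access happens under the lock, and the
 * waiter cannot return until this function has dropped that lock, so
 * completing an on-stack fake_completion is safe. */
static void fake_complete(struct fake_completion *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = true;
	pthread_cond_signal(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

/* Analogue of wait_for_completion_io(): a wakeup without done == true
 * (the spurious case that broke the old wait loop) just re-checks and
 * goes back to sleep. */
static void fake_wait_for_completion(struct fake_completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->cond, &c->lock);
	pthread_mutex_unlock(&c->lock);
}

Usage mirrors the patch: declare a fake_completion on the submitter's stack
with FAKE_COMPLETION_INIT, pass its address in the request, fake_complete()
it from the completion path, and fake_wait_for_completion() before returning.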