From patchwork Fri Sep 22 22:14:00 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 9967205
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig,
    "Martin K. Petersen", Oleksandr Natalenko, Bart Van Assche,
    Shaohua Li, linux-raid@vger.kernel.org, Ming Lei,
    Hannes Reinecke, Johannes Thumshirn
Subject: [PATCH v3 1/6] md: Make md resync and reshape threads freezable
Date: Fri, 22 Sep 2017 15:14:00 -0700
Message-Id: <20170922221405.22091-2-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20170922221405.22091-1-bart.vanassche@wdc.com>
References: <20170922221405.22091-1-bart.vanassche@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Some people use the md driver on laptops and use the suspend and
resume functionality. Since it is essential that the submission of new
I/O requests stops before device quiescing starts, make the md resync
and reshape threads freezable.

Signed-off-by: Bart Van Assche
Cc: Shaohua Li
Cc: linux-raid@vger.kernel.org
Cc: Ming Lei
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
---
 drivers/md/md.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 08fcaebc61bd..26a12bd0db65 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -66,6 +66,7 @@
 #include
 #include
 #include
+#include <linux/freezer.h>
 #include
 
 #include "md.h"
@@ -7424,6 +7425,7 @@ static int md_thread(void *arg)
 	 */
 	allow_signal(SIGKILL);
 
+	set_freezable();
 	while (!kthread_should_stop()) {
 
 		/* We need to wait INTERRUPTIBLE so that
@@ -7434,7 +7436,7 @@
 		if (signal_pending(current))
 			flush_signals(current);
 
-		wait_event_interruptible_timeout
+		wait_event_freezable_timeout
 		(thread->wqueue,
 		 test_bit(THREAD_WAKEUP, &thread->flags)
 		 || kthread_should_stop() || kthread_should_park(),
@@ -8133,6 +8135,8 @@
 		return;
 	}
 
+	set_freezable();
+
 	if (mddev_is_clustered(mddev)) {
 		ret = md_cluster_ops->resync_start(mddev);
 		if (ret)
@@ -8324,7 +8328,7 @@ void md_do_sync(struct md_thread *thread)
 		     mddev->curr_resync_completed > mddev->resync_max
 			    )) {
 			/* time to update curr_resync_completed */
-			wait_event(mddev->recovery_wait,
+			wait_event_freezable(mddev->recovery_wait,
				   atomic_read(&mddev->recovery_active) == 0);
 			mddev->curr_resync_completed = j;
 			if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) &&
@@ -8342,10 +8346,10 @@ void md_do_sync(struct md_thread *thread)
 			 * to avoid triggering warnings.
 			 */
 			flush_signals(current); /* just in case */
-			wait_event_interruptible(mddev->recovery_wait,
-						 mddev->resync_max > j
-						 || test_bit(MD_RECOVERY_INTR,
-							     &mddev->recovery));
+			wait_event_freezable(mddev->recovery_wait,
+					     mddev->resync_max > j ||
+					     test_bit(MD_RECOVERY_INTR,
+						      &mddev->recovery));
 		}
 
 		if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
@@ -8421,7 +8425,7 @@ void md_do_sync(struct md_thread *thread)
 			 * Give other IO more of a chance.
 			 * The faster the devices, the less we wait.
 			 */
-			wait_event(mddev->recovery_wait,
+			wait_event_freezable(mddev->recovery_wait,
				   !atomic_read(&mddev->recovery_active));
 		}
 	}
@@ -8433,7 +8437,8 @@ void md_do_sync(struct md_thread *thread)
 	 * this also signals 'finished resyncing' to md_stop
 	 */
 	blk_finish_plug(&plug);
-	wait_event(mddev->recovery_wait, !atomic_read(&mddev->recovery_active));
+	wait_event_freezable(mddev->recovery_wait,
+			     !atomic_read(&mddev->recovery_active));
 
 	if (!test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
 	    !test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
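For readers less familiar with the freezer API, the pattern this patch applies
to md_thread() and md_do_sync() looks roughly like the sketch below. This is an
illustrative fragment only, not code from the patch: my_thread_fn, my_wq,
work_available() and do_work() are made-up names.

```c
#include <linux/freezer.h>	/* set_freezable(), wait_event_freezable() */
#include <linux/kthread.h>	/* kthread_should_stop() */
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);	/* hypothetical wait queue */

static bool work_available(void);	/* hypothetical helpers */
static void do_work(void);

static int my_thread_fn(void *data)	/* hypothetical worker thread */
{
	/*
	 * Kernel threads are not freezable by default; opt in so the
	 * freezer can bring this thread to a safe point before the
	 * system suspends.
	 */
	set_freezable();

	while (!kthread_should_stop()) {
		/*
		 * Behaves like wait_event_interruptible(), but also
		 * calls try_to_freeze(), so the thread parks itself
		 * instead of submitting new I/O while a suspend is in
		 * progress.
		 */
		wait_event_freezable(my_wq,
				     work_available() ||
				     kthread_should_stop());
		if (work_available())
			do_work();
	}
	return 0;
}
```

The key point for this patch is that every sleep on the resync/reshape path has
to be a freezable one: a single plain wait_event() left behind would let the
thread keep issuing I/O after the freezer believes all tasks are quiesced.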