From patchwork Fri Jun 17 13:06:48 2016
X-Patchwork-Submitter: Amit Shah
X-Patchwork-Id: 9183943
From: Amit Shah
To: Peter Maydell
Date: Fri, 17 Jun 2016 18:36:48 +0530
Message-Id: <90e56fb46d0a7add88ed463efa4e723a6238f692.1466168448.git.amit.shah@redhat.com>
Subject: [Qemu-devel] [PULL 09/13] migration: protect the quit flag by lock
Cc: Juan Quintela, liang.z.li@intel.com, qemu list,
    "Dr. David Alan Gilbert", Amit Shah, den@openvz.org
David Alan Gilbert" , Amit Shah , den@openvz.org Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" X-Virus-Scanned: ClamAV using ClamSMTP From: Liang Li quit_comp_thread and quit_decomp_thread are accessed by several thread, it's better to protect them with locks. We use a per thread flag to replace the global one, and the new flag is protected by a lock. Signed-off-by: Liang Li Message-Id: <1462433579-13691-7-git-send-email-liang.z.li@intel.com> Signed-off-by: Amit Shah --- migration/ram.c | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index 9e4f5e5..a5ed21b 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -255,6 +255,7 @@ static struct BitmapRcu { struct CompressParam { bool start; bool done; + bool quit; QEMUFile *file; QemuMutex mutex; QemuCond cond; @@ -266,6 +267,7 @@ typedef struct CompressParam CompressParam; struct DecompressParam { bool start; bool done; + bool quit; QemuMutex mutex; QemuCond cond; void *des; @@ -286,8 +288,6 @@ static QemuCond *comp_done_cond; static const QEMUFileOps empty_ops = { }; static bool compression_switch; -static bool quit_comp_thread; -static bool quit_decomp_thread; static DecompressParam *decomp_param; static QemuThread *decompress_threads; static QemuMutex decomp_done_lock; @@ -299,18 +299,18 @@ static void *do_data_compress(void *opaque) { CompressParam *param = opaque; - while (!quit_comp_thread) { + while (!param->quit) { qemu_mutex_lock(¶m->mutex); - /* Re-check the quit_comp_thread in case of + /* Re-check the quit flag in case of * terminate_compression_threads is called just before * qemu_mutex_lock(¶m->mutex) and after - * while(!quit_comp_thread), re-check it here can make + * while(!param->quit), re-check it here can make * sure the compression thread terminate as expected. 
          */
-        while (!param->start && !quit_comp_thread) {
+        while (!param->start && !param->quit) {
             qemu_cond_wait(&param->cond, &param->mutex);
         }
-        if (!quit_comp_thread) {
+        if (!param->quit) {
             do_compress_ram_page(param);
         }
         param->start = false;
@@ -330,9 +330,9 @@ static inline void terminate_compression_threads(void)
     int idx, thread_count;

     thread_count = migrate_compress_threads();
-    quit_comp_thread = true;
     for (idx = 0; idx < thread_count; idx++) {
         qemu_mutex_lock(&comp_param[idx].mutex);
+        comp_param[idx].quit = true;
         qemu_cond_signal(&comp_param[idx].cond);
         qemu_mutex_unlock(&comp_param[idx].mutex);
     }
@@ -372,7 +372,6 @@ void migrate_compress_threads_create(void)
     if (!migrate_use_compression()) {
         return;
     }
-    quit_comp_thread = false;
     compression_switch = true;
     thread_count = migrate_compress_threads();
     compress_threads = g_new0(QemuThread, thread_count);
@@ -387,6 +386,7 @@ void migrate_compress_threads_create(void)
          */
         comp_param[i].file = qemu_fopen_ops(NULL, &empty_ops);
         comp_param[i].done = true;
+        comp_param[i].quit = false;
         qemu_mutex_init(&comp_param[i].mutex);
         qemu_cond_init(&comp_param[i].cond);
         qemu_thread_create(compress_threads + i, "compress",
@@ -863,12 +863,12 @@ static void flush_compressed_data(QEMUFile *f)
     for (idx = 0; idx < thread_count; idx++) {
         if (!comp_param[idx].done) {
             qemu_mutex_lock(comp_done_lock);
-            while (!comp_param[idx].done && !quit_comp_thread) {
+            while (!comp_param[idx].done && !comp_param[idx].quit) {
                 qemu_cond_wait(comp_done_cond, comp_done_lock);
             }
             qemu_mutex_unlock(comp_done_lock);
         }
-        if (!quit_comp_thread) {
+        if (!comp_param[idx].quit) {
             len = qemu_put_qemu_file(f, comp_param[idx].file);
             bytes_transferred += len;
         }
@@ -2203,12 +2203,12 @@ static void *do_data_decompress(void *opaque)
     DecompressParam *param = opaque;
     unsigned long pagesize;

-    while (!quit_decomp_thread) {
+    while (!param->quit) {
         qemu_mutex_lock(&param->mutex);
-        while (!param->start && !quit_decomp_thread) {
+        while (!param->start && !param->quit) {
             qemu_cond_wait(&param->cond, &param->mutex);
         }
-        if (!quit_decomp_thread) {
+        if (!param->quit) {
             pagesize = TARGET_PAGE_SIZE;
             /* uncompress() will return failed in some case, especially
              * when the page is dirted when doing the compression, it's
@@ -2255,7 +2255,6 @@ void migrate_decompress_threads_create(void)
     thread_count = migrate_decompress_threads();
     decompress_threads = g_new0(QemuThread, thread_count);
     decomp_param = g_new0(DecompressParam, thread_count);
-    quit_decomp_thread = false;
     qemu_mutex_init(&decomp_done_lock);
     qemu_cond_init(&decomp_done_cond);
     for (i = 0; i < thread_count; i++) {
@@ -2263,6 +2262,7 @@ void migrate_decompress_threads_create(void)
         qemu_mutex_init(&decomp_param[i].mutex);
         qemu_cond_init(&decomp_param[i].cond);
         decomp_param[i].compbuf = g_malloc0(compressBound(TARGET_PAGE_SIZE));
         decomp_param[i].done = true;
+        decomp_param[i].quit = false;
         qemu_thread_create(decompress_threads + i, "decompress",
                            do_data_decompress, decomp_param + i,
                            QEMU_THREAD_JOINABLE);
@@ -2273,10 +2273,10 @@ void migrate_decompress_threads_join(void)
 {
     int i, thread_count;

-    quit_decomp_thread = true;
     thread_count = migrate_decompress_threads();
     for (i = 0; i < thread_count; i++) {
         qemu_mutex_lock(&decomp_param[i].mutex);
+        decomp_param[i].quit = true;
         qemu_cond_signal(&decomp_param[i].cond);
         qemu_mutex_unlock(&decomp_param[i].mutex);
     }
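
For readers outside the QEMU tree, below is a minimal standalone sketch of the
locking pattern the patch adopts: each worker carries its own quit flag, and
that flag is only read or written while holding the worker's mutex, right
before signalling its condition variable. The sketch uses plain pthreads
rather than QEMU's QemuMutex/QemuCond wrappers, the Worker struct and the
worker_main()/workers_join() names are illustrative only, and it simplifies
things by holding the lock across the outer loop condition instead of
re-checking the flag the way do_data_compress() does.

/* Minimal sketch (not QEMU code) of a per-worker quit flag protected by
 * that worker's own mutex, using plain pthreads. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct Worker {
    pthread_mutex_t mutex;
    pthread_cond_t cond;
    bool start;              /* work has been handed to this worker */
    bool quit;               /* per-worker quit flag, protected by mutex */
    pthread_t thread;
} Worker;

static void *worker_main(void *opaque)
{
    Worker *w = opaque;

    pthread_mutex_lock(&w->mutex);
    while (!w->quit) {
        /* Sleep until there is work to do or a request to quit.  Both
         * flags are only touched under w->mutex, so no wakeup can be
         * lost between the check and the wait. */
        while (!w->start && !w->quit) {
            pthread_cond_wait(&w->cond, &w->mutex);
        }
        if (w->quit) {
            break;
        }
        w->start = false;
        pthread_mutex_unlock(&w->mutex);

        printf("worker %p: doing one unit of work\n", (void *)w);

        pthread_mutex_lock(&w->mutex);
    }
    pthread_mutex_unlock(&w->mutex);
    return NULL;
}

/* Ask every worker to stop: set its quit flag under its own lock and
 * signal its condition variable, mirroring what
 * terminate_compression_threads() and migrate_decompress_threads_join()
 * do in the patch, then reap the threads. */
static void workers_join(Worker *workers, int count)
{
    for (int i = 0; i < count; i++) {
        pthread_mutex_lock(&workers[i].mutex);
        workers[i].quit = true;
        pthread_cond_signal(&workers[i].cond);
        pthread_mutex_unlock(&workers[i].mutex);
    }
    for (int i = 0; i < count; i++) {
        pthread_join(workers[i].thread, NULL);
    }
}

int main(void)
{
    enum { NWORKERS = 2 };
    Worker workers[NWORKERS];

    for (int i = 0; i < NWORKERS; i++) {
        pthread_mutex_init(&workers[i].mutex, NULL);
        pthread_cond_init(&workers[i].cond, NULL);
        workers[i].start = false;
        workers[i].quit = false;
        pthread_create(&workers[i].thread, NULL, worker_main, &workers[i]);
    }
    workers_join(workers, NWORKERS);
    return 0;
}

Because the quit flag in this sketch shares the mutex that guards
pthread_cond_wait(), a stop request cannot slip in between a worker's flag
check and its wait, which is the kind of window the old global
quit_comp_thread/quit_decomp_thread flags left open to racy accesses.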