From patchwork Fri Oct 21 19:42:14 2016
X-Patchwork-Submitter: Juan Quintela
X-Patchwork-Id: 9390033
From: Juan Quintela
To: qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, dgilbert@redhat.com
Date: Fri, 21 Oct 2016 21:42:14 +0200
Message-Id: <1477078935-7182-13-git-send-email-quintela@redhat.com>
In-Reply-To: <1477078935-7182-1-git-send-email-quintela@redhat.com>
References: <1477078935-7182-1-git-send-email-quintela@redhat.com>
Subject: [Qemu-devel] [PATCH 12/13] migration: [HACK] Transfer pages over new channels

We switch from sending the page number to sending the real pages.

[HACK] How we calculate the bandwidth is beyond repair: there is a hack
there that only works for x86 and other archs that have 4KB pages.  If you
are having a nice day, just go to migration/ram.c and look at
acct_update_position().  Now you are depressed, right?
Signed-off-by: Juan Quintela
---
 migration/migration.c | 15 +++++++++++----
 migration/ram.c       | 46 +++++++++++++++++++++++++++++++---------------
 2 files changed, 42 insertions(+), 19 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 407e0c3..0627f14 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1757,7 +1757,8 @@ static void *migration_thread(void *opaque)
     /* Used by the bandwidth calcs, updated later */
     int64_t initial_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
     int64_t setup_start = qemu_clock_get_ms(QEMU_CLOCK_HOST);
-    int64_t initial_bytes = 0;
+    int64_t qemu_file_bytes = 0;
+    int64_t multifd_pages = 0;
     int64_t max_size = 0;
     int64_t start_time = initial_time;
     int64_t end_time;
@@ -1840,9 +1841,14 @@ static void *migration_thread(void *opaque)
         }
         current_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
         if (current_time >= initial_time + BUFFER_DELAY) {
-            uint64_t transferred_bytes = qemu_ftell(s->to_dst_file) -
-                                         initial_bytes;
             uint64_t time_spent = current_time - initial_time;
+            uint64_t qemu_file_bytes_now = qemu_ftell(s->to_dst_file);
+            uint64_t multifd_pages_now = multifd_mig_pages_transferred();
+            /* Hack ahead.  Why the hell don't we have a function to know
+               the target_page_size?  Hard coding it to 4096 */
+            uint64_t transferred_bytes =
+                (qemu_file_bytes_now - qemu_file_bytes) +
+                (multifd_pages_now - multifd_pages) * 4096;
             double bandwidth = (double)transferred_bytes / time_spent;

             max_size = bandwidth * s->parameters.downtime_limit;
@@ -1859,7 +1865,8 @@
             qemu_file_reset_rate_limit(s->to_dst_file);
             initial_time = current_time;
-            initial_bytes = qemu_ftell(s->to_dst_file);
+            qemu_file_bytes = qemu_file_bytes_now;
+            multifd_pages = multifd_pages_now;
         }
         if (qemu_file_rate_limit(s->to_dst_file)) {
             /* usleep expects microseconds */
diff --git a/migration/ram.c b/migration/ram.c
index 2ead443..9a20f63 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -437,9 +437,9 @@ static void *multifd_send_thread(void *opaque)
         params->address = 0;
         qemu_mutex_unlock(&params->mutex);

-        if (qio_channel_write(params->c, (const char *)&address,
-                              sizeof(uint8_t *), &error_abort)
-            != sizeof(uint8_t *)) {
+        if (qio_channel_write(params->c, (const char *)address,
+                              TARGET_PAGE_SIZE, &error_abort)
+            != TARGET_PAGE_SIZE) {
             /* Shouldn't ever happen */
             exit(-1);
         }
@@ -551,6 +551,23 @@ static int multifd_send_page(uint8_t *address)
     return i;
 }

+static void flush_multifd_send_data(QEMUFile *f)
+{
+    int i, thread_count;
+
+    if (!migrate_multifd()) {
+        return;
+    }
+    qemu_fflush(f);
+    thread_count = migrate_multifd_threads();
+    qemu_mutex_lock(&multifd_send_mutex);
+    for (i = 0; i < thread_count; i++) {
+        while (!multifd_send[i].done) {
+            qemu_cond_wait(&multifd_send_cond, &multifd_send_mutex);
+        }
+    }
+}
+
 struct MultiFDRecvParams {
     /* not changed */
     QemuThread thread;
@@ -575,7 +592,6 @@ static void *multifd_recv_thread(void *opaque)
 {
     MultiFDRecvParams *params = opaque;
     uint8_t *address;
-    uint8_t *recv_address;
     char start;

     qio_channel_read(params->c, &start, 1, &error_abort);
@@ -591,19 +607,13 @@ static void *multifd_recv_thread(void *opaque)
         params->address = 0;
         qemu_mutex_unlock(&params->mutex);

-        if (qio_channel_read(params->c, (char *)&recv_address,
-                             sizeof(uint8_t *), &error_abort)
-            != sizeof(uint8_t *)) {
+        if (qio_channel_read(params->c, (char *)address,
+                             TARGET_PAGE_SIZE, &error_abort)
+            != TARGET_PAGE_SIZE) {
             /* shouldn't ever happen */
             exit(-1);
         }

-        if (address != recv_address) {
-            printf("We received %p what we were expecting %p\n",
-                   recv_address, address);
-            exit(-1);
-        }
-
         qemu_mutex_lock(&multifd_recv_mutex);
         params->done = true;
         qemu_cond_signal(&multifd_recv_cond);
@@ -1126,6 +1136,7 @@ static int ram_multifd_page(QEMUFile *f, PageSearchStatus *pss,
     uint8_t *p;
     RAMBlock *block = pss->block;
     ram_addr_t offset = pss->offset;
+    static int count = 32;

     p = block->host + offset;

@@ -1137,9 +1148,14 @@ static int ram_multifd_page(QEMUFile *f, PageSearchStatus *pss,
         *bytes_transferred +=
             save_page_header(f, block, offset | RAM_SAVE_FLAG_MULTIFD_PAGE);
         fd_num = multifd_send_page(p);
+        count--;
+        if (!count) {
+            qemu_fflush(f);
+            count = 32;
+        }
+
         qemu_put_be16(f, fd_num);
         *bytes_transferred += 2; /* size of fd_num */
-        qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
         *bytes_transferred += TARGET_PAGE_SIZE;
         pages = 1;
         acct_info.norm_pages++;
@@ -2401,6 +2417,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     }

     flush_compressed_data(f);
+    flush_multifd_send_data(f);
     ram_control_after_iterate(f, RAM_CONTROL_FINISH);

     rcu_read_unlock();
@@ -2915,7 +2932,6 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
         case RAM_SAVE_FLAG_MULTIFD_PAGE:
             fd_num = qemu_get_be16(f);
             multifd_recv_page(host, fd_num);
-            qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
             break;

         case RAM_SAVE_FLAG_EOS: