From patchwork Wed Mar 6 11:42:10 2019
X-Patchwork-Submitter: "Dr. David Alan Gilbert"
X-Patchwork-Id: 10841001
From: "Dr. David Alan Gilbert (git)"
To: qemu-devel@nongnu.org, quintela@redhat.com, peterx@redhat.com,
	marcel.apfelbaum@gmail.com, wei.w.wang@intel.com,
	yury-kotov@yandex-team.ru, chen.zhang@intel.com
Cc: armbru@redhat.com
Date: Wed, 6 Mar 2019 11:42:10 +0000
Message-Id: <20190306114227.9125-6-dgilbert@redhat.com>
In-Reply-To: <20190306114227.9125-1-dgilbert@redhat.com>
References: <20190306114227.9125-1-dgilbert@redhat.com>
Subject: [Qemu-devel] [PULL 05/22] exec: Change RAMBlockIterFunc definition

From: Yury Kotov

Currently, qemu_ram_foreach_* calls RAMBlockIterFunc with several
block-specific arguments, but the iterator function often needs the
RAMBlock* itself.  This refactoring gives qemu_ram_foreach_block's
callback fast access to RAMBlock flags: today the only way to get the
RAMBlock is to call qemu_ram_block_from_host (which itself enumerates
the blocks), so this patch reduces the complexity of
qemu_ram_foreach_block() -> cb() -> qemu_ram_block_from_host() from
O(n^2) to O(n).

Change the RAMBlockIterFunc definition to take a RAMBlock* and add
accessor functions for the fields that were previously passed as
arguments.

Signed-off-by: Yury Kotov
Message-Id: <20190215174548.2630-2-yury-kotov@yandex-team.ru>
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Dr. David Alan Gilbert
---
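As a quick illustration of the new interface (not part of the patch
itself): a callback now receives the RAMBlock * directly and pulls out
only the fields it needs through the accessors.  The callback name
dump_block, the FILE * opaque and the fprintf() output below are made
up for this sketch; RAMBlockIterFunc, qemu_ram_foreach_block() and the
qemu_ram_get_*() accessors are the interfaces this patch defines.

    #include "qemu/osdep.h"
    #include "exec/cpu-common.h"

    /* Hypothetical callback matching the new RAMBlockIterFunc signature. */
    static int dump_block(RAMBlock *rb, void *opaque)
    {
        FILE *out = opaque;

        /* Fields that used to arrive as separate arguments are read
         * through the accessors added by this patch.
         */
        fprintf(out, "%s: host %p offset 0x%" PRIx64 " used_length 0x%" PRIx64 "\n",
                qemu_ram_get_idstr(rb),
                qemu_ram_get_host_addr(rb),
                (uint64_t)qemu_ram_get_offset(rb),
                (uint64_t)qemu_ram_get_used_length(rb));

        return 0; /* a non-zero return stops the iteration */
    }

    /* Usage: qemu_ram_foreach_block(dump_block, stderr); */

Callbacks that need the RAMBlock itself (for example to call
qemu_ram_pagesize() or qemu_ram_set_uf_zeroable()) no longer have to
look it up with qemu_ram_block_by_name()/qemu_ram_block_from_host()
from inside the loop, which is where the quadratic behaviour came from.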
 exec.c                    | 21 +++++++++++++++++----
 include/exec/cpu-common.h |  6 ++++--
 migration/postcopy-ram.c  | 36 +++++++++++++++++++++---------------
 migration/rdma.c          |  7 +++++--
 stubs/ram-block.c         | 15 +++++++++++++++
 util/vfio-helpers.c       |  6 +++---
 6 files changed, 65 insertions(+), 26 deletions(-)

diff --git a/exec.c b/exec.c
index 518064530b..bf71462929 100644
--- a/exec.c
+++ b/exec.c
@@ -1972,6 +1972,21 @@ const char *qemu_ram_get_idstr(RAMBlock *rb)
     return rb->idstr;
 }
 
+void *qemu_ram_get_host_addr(RAMBlock *rb)
+{
+    return rb->host;
+}
+
+ram_addr_t qemu_ram_get_offset(RAMBlock *rb)
+{
+    return rb->offset;
+}
+
+ram_addr_t qemu_ram_get_used_length(RAMBlock *rb)
+{
+    return rb->used_length;
+}
+
 bool qemu_ram_is_shared(RAMBlock *rb)
 {
     return rb->flags & RAM_SHARED;
@@ -3961,8 +3976,7 @@ int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque)
 
     rcu_read_lock();
     RAMBLOCK_FOREACH(block) {
-        ret = func(block->idstr, block->host, block->offset,
-                   block->used_length, opaque);
+        ret = func(block, opaque);
         if (ret) {
             break;
         }
@@ -3981,8 +3995,7 @@ int qemu_ram_foreach_migratable_block(RAMBlockIterFunc func, void *opaque)
         if (!qemu_ram_is_migratable(block)) {
             continue;
         }
-        ret = func(block->idstr, block->host, block->offset,
-                   block->used_length, opaque);
+        ret = func(block, opaque);
         if (ret) {
             break;
         }
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 63ec1f9b37..9cd1f94bc5 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -72,6 +72,9 @@ ram_addr_t qemu_ram_block_host_offset(RAMBlock *rb, void *host);
 void qemu_ram_set_idstr(RAMBlock *block, const char *name, DeviceState *dev);
 void qemu_ram_unset_idstr(RAMBlock *block);
 const char *qemu_ram_get_idstr(RAMBlock *rb);
+void *qemu_ram_get_host_addr(RAMBlock *rb);
+ram_addr_t qemu_ram_get_offset(RAMBlock *rb);
+ram_addr_t qemu_ram_get_used_length(RAMBlock *rb);
 bool qemu_ram_is_shared(RAMBlock *rb);
 bool qemu_ram_is_uf_zeroable(RAMBlock *rb);
 void qemu_ram_set_uf_zeroable(RAMBlock *rb);
@@ -116,8 +119,7 @@ void cpu_flush_icache_range(hwaddr start, hwaddr len);
 extern struct MemoryRegion io_mem_rom;
 extern struct MemoryRegion io_mem_notdirty;
 
-typedef int (RAMBlockIterFunc)(const char *block_name, void *host_addr,
-    ram_addr_t offset, ram_addr_t length, void *opaque);
+typedef int (RAMBlockIterFunc)(RAMBlock *rb, void *opaque);
 
 int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque);
 int qemu_ram_foreach_migratable_block(RAMBlockIterFunc func, void *opaque);
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index fa09dba534..b098816221 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -319,10 +319,10 @@ static bool ufd_check_and_apply(int ufd, MigrationIncomingState *mis)
 
 /* Callback from postcopy_ram_supported_by_host block iterator.
  */
-static int test_ramblock_postcopiable(const char *block_name, void *host_addr,
-                             ram_addr_t offset, ram_addr_t length, void *opaque)
+static int test_ramblock_postcopiable(RAMBlock *rb, void *opaque)
 {
-    RAMBlock *rb = qemu_ram_block_by_name(block_name);
+    const char *block_name = qemu_ram_get_idstr(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     size_t pagesize = qemu_ram_pagesize(rb);
 
     if (length % pagesize) {
@@ -443,9 +443,12 @@ out:
  * must be done right at the start prior to pre-copy.
 * opaque should be the MIS.
 */
-static int init_range(const char *block_name, void *host_addr,
-                      ram_addr_t offset, ram_addr_t length, void *opaque)
+static int init_range(RAMBlock *rb, void *opaque)
 {
+    const char *block_name = qemu_ram_get_idstr(rb);
+    void *host_addr = qemu_ram_get_host_addr(rb);
+    ram_addr_t offset = qemu_ram_get_offset(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     trace_postcopy_init_range(block_name, host_addr, offset, length);
 
     /*
@@ -465,9 +468,12 @@ static int init_range(const char *block_name, void *host_addr,
  * At the end of migration, undo the effects of init_range
  * opaque should be the MIS.
  */
-static int cleanup_range(const char *block_name, void *host_addr,
-                         ram_addr_t offset, ram_addr_t length, void *opaque)
+static int cleanup_range(RAMBlock *rb, void *opaque)
 {
+    const char *block_name = qemu_ram_get_idstr(rb);
+    void *host_addr = qemu_ram_get_host_addr(rb);
+    ram_addr_t offset = qemu_ram_get_offset(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     MigrationIncomingState *mis = opaque;
     struct uffdio_range range_struct;
     trace_postcopy_cleanup_range(block_name, host_addr, offset, length);
@@ -586,9 +592,12 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
 /*
  * Disable huge pages on an area
  */
-static int nhp_range(const char *block_name, void *host_addr,
-                     ram_addr_t offset, ram_addr_t length, void *opaque)
+static int nhp_range(RAMBlock *rb, void *opaque)
 {
+    const char *block_name = qemu_ram_get_idstr(rb);
+    void *host_addr = qemu_ram_get_host_addr(rb);
+    ram_addr_t offset = qemu_ram_get_offset(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     trace_postcopy_nhp_range(block_name, host_addr, offset, length);
 
     /*
@@ -626,15 +635,13 @@ int postcopy_ram_prepare_discard(MigrationIncomingState *mis)
  * opaque: MigrationIncomingState pointer
  * Returns 0 on success
  */
-static int ram_block_enable_notify(const char *block_name, void *host_addr,
-                                   ram_addr_t offset, ram_addr_t length,
-                                   void *opaque)
+static int ram_block_enable_notify(RAMBlock *rb, void *opaque)
 {
     MigrationIncomingState *mis = opaque;
     struct uffdio_register reg_struct;
 
-    reg_struct.range.start = (uintptr_t)host_addr;
-    reg_struct.range.len = length;
+    reg_struct.range.start = (uintptr_t)qemu_ram_get_host_addr(rb);
+    reg_struct.range.len = qemu_ram_get_used_length(rb);
     reg_struct.mode = UFFDIO_REGISTER_MODE_MISSING;
 
     /* Now tell our userfault_fd that it's responsible for this area */
@@ -647,7 +654,6 @@ static int ram_block_enable_notify(const char *block_name, void *host_addr,
         return -1;
     }
     if (reg_struct.ioctls & ((__u64)1 << _UFFDIO_ZEROPAGE)) {
-        RAMBlock *rb = qemu_ram_block_by_name(block_name);
         qemu_ram_set_uf_zeroable(rb);
     }
 
diff --git a/migration/rdma.c b/migration/rdma.c
index d5251cd820..b0115e4de5 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -624,9 +624,12 @@ static int rdma_add_block(RDMAContext *rdma, const char *block_name,
  * in advanced before the migration starts. This tells us where the RAM blocks
 * are so that we can register them individually.
 */
-static int qemu_rdma_init_one_block(const char *block_name, void *host_addr,
-    ram_addr_t block_offset, ram_addr_t length, void *opaque)
+static int qemu_rdma_init_one_block(RAMBlock *rb, void *opaque)
 {
+    const char *block_name = qemu_ram_get_idstr(rb);
+    void *host_addr = qemu_ram_get_host_addr(rb);
+    ram_addr_t block_offset = qemu_ram_get_offset(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     return rdma_add_block(opaque, block_name, host_addr, block_offset, length);
 }
 
diff --git a/stubs/ram-block.c b/stubs/ram-block.c
index cfa5d8678f..73c0a3ee08 100644
--- a/stubs/ram-block.c
+++ b/stubs/ram-block.c
@@ -2,6 +2,21 @@
 #include "exec/ramlist.h"
 #include "exec/cpu-common.h"
 
+void *qemu_ram_get_host_addr(RAMBlock *rb)
+{
+    return 0;
+}
+
+ram_addr_t qemu_ram_get_offset(RAMBlock *rb)
+{
+    return 0;
+}
+
+ram_addr_t qemu_ram_get_used_length(RAMBlock *rb)
+{
+    return 0;
+}
+
 void ram_block_notifier_add(RAMBlockNotifier *n)
 {
 }
diff --git a/util/vfio-helpers.c b/util/vfio-helpers.c
index 342d4a2285..2367fe8f7f 100644
--- a/util/vfio-helpers.c
+++ b/util/vfio-helpers.c
@@ -391,10 +391,10 @@ static void qemu_vfio_ram_block_removed(RAMBlockNotifier *n,
     }
 }
 
-static int qemu_vfio_init_ramblock(const char *block_name, void *host_addr,
-                                   ram_addr_t offset, ram_addr_t length,
-                                   void *opaque)
+static int qemu_vfio_init_ramblock(RAMBlock *rb, void *opaque)
 {
+    void *host_addr = qemu_ram_get_host_addr(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     int ret;
     QEMUVFIOState *s = opaque;