From patchwork Mon Feb 4 13:09:55 2019
X-Patchwork-Submitter: Yury Kotov
X-Patchwork-Id: 10795661
From: Yury Kotov
To: qemu-devel@nongnu.org, Eduardo Habkost, Igor Mammedov, Paolo Bonzini,
    Peter Crosthwaite, Richard Henderson, Juan Quintela,
    "Dr. David Alan Gilbert", Eric Blake, Markus Armbruster, Thomas Huth,
    Laurent Vivier
Cc: wrfsh@yandex-team.ru
David Alan Gilbert" , Eric Blake , Markus Armbruster , Thomas Huth , Laurent Vivier Date: Mon, 4 Feb 2019 16:09:55 +0300 Message-Id: <20190204130958.18904-2-yury-kotov@yandex-team.ru> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190204130958.18904-1-yury-kotov@yandex-team.ru> References: <20190204130958.18904-1-yury-kotov@yandex-team.ru> MIME-Version: 1.0 X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] [fuzzy] X-Received-From: 5.255.227.105 Subject: [Qemu-devel] [PATCH v2 1/4] exec: Change RAMBlockIterFunc definition X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: wrfsh@yandex-team.ru Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" X-Virus-Scanned: ClamAV using ClamSMTP Currently, qemu_ram_foreach_* calls RAMBlockIterFunc with many block-specific arguments. But often iter func needs RAMBlock*. It's more effective to call RAMBlockIterFunc with RAMBlock* argument. So, fix RAMBlockIterFunc definition and add some functions to read RAMBlock* fields witch were passed. Signed-off-by: Yury Kotov Reviewed-by: Dr. David Alan Gilbert --- exec.c | 21 +++++++++++++++++---- include/exec/cpu-common.h | 6 ++++-- migration/postcopy-ram.c | 36 +++++++++++++++++++++--------------- migration/rdma.c | 7 +++++-- util/vfio-helpers.c | 6 +++--- 5 files changed, 50 insertions(+), 26 deletions(-) diff --git a/exec.c b/exec.c index da3e635f91..a61d501568 100644 --- a/exec.c +++ b/exec.c @@ -1970,6 +1970,21 @@ const char *qemu_ram_get_idstr(RAMBlock *rb) return rb->idstr; } +void *qemu_ram_get_host_addr(RAMBlock *rb) +{ + return rb->host; +} + +ram_addr_t qemu_ram_get_offset(RAMBlock *rb) +{ + return rb->offset; +} + +ram_addr_t qemu_ram_get_used_length(RAMBlock *rb) +{ + return rb->used_length; +} + bool qemu_ram_is_shared(RAMBlock *rb) { return rb->flags & RAM_SHARED; @@ -3960,8 +3975,7 @@ int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque) rcu_read_lock(); RAMBLOCK_FOREACH(block) { - ret = func(block->idstr, block->host, block->offset, - block->used_length, opaque); + ret = func(block, opaque); if (ret) { break; } @@ -3980,8 +3994,7 @@ int qemu_ram_foreach_migratable_block(RAMBlockIterFunc func, void *opaque) if (!qemu_ram_is_migratable(block)) { continue; } - ret = func(block->idstr, block->host, block->offset, - block->used_length, opaque); + ret = func(block, opaque); if (ret) { break; } diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h index 2ad2d6d86b..bdae5446d7 100644 --- a/include/exec/cpu-common.h +++ b/include/exec/cpu-common.h @@ -72,6 +72,9 @@ ram_addr_t qemu_ram_block_host_offset(RAMBlock *rb, void *host); void qemu_ram_set_idstr(RAMBlock *block, const char *name, DeviceState *dev); void qemu_ram_unset_idstr(RAMBlock *block); const char *qemu_ram_get_idstr(RAMBlock *rb); +void *qemu_ram_get_host_addr(RAMBlock *rb); +ram_addr_t qemu_ram_get_offset(RAMBlock *rb); +ram_addr_t qemu_ram_get_used_length(RAMBlock *rb); bool qemu_ram_is_shared(RAMBlock *rb); bool qemu_ram_is_uf_zeroable(RAMBlock *rb); void qemu_ram_set_uf_zeroable(RAMBlock *rb); @@ -116,8 +119,7 @@ void cpu_flush_icache_range(hwaddr start, int len); extern struct MemoryRegion io_mem_rom; extern struct MemoryRegion io_mem_notdirty; -typedef int (RAMBlockIterFunc)(const char *block_name, void *host_addr, - ram_addr_t offset, ram_addr_t length, void *opaque); +typedef int (RAMBlockIterFunc)(RAMBlock *rb, void 
 
 int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque);
 int qemu_ram_foreach_migratable_block(RAMBlockIterFunc func, void *opaque);
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index fa09dba534..b098816221 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -319,10 +319,10 @@ static bool ufd_check_and_apply(int ufd, MigrationIncomingState *mis)
 
 /* Callback from postcopy_ram_supported_by_host block iterator.
  */
-static int test_ramblock_postcopiable(const char *block_name, void *host_addr,
-                                      ram_addr_t offset, ram_addr_t length, void *opaque)
+static int test_ramblock_postcopiable(RAMBlock *rb, void *opaque)
 {
-    RAMBlock *rb = qemu_ram_block_by_name(block_name);
+    const char *block_name = qemu_ram_get_idstr(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     size_t pagesize = qemu_ram_pagesize(rb);
 
     if (length % pagesize) {
@@ -443,9 +443,12 @@ out:
  * must be done right at the start prior to pre-copy.
  * opaque should be the MIS.
  */
-static int init_range(const char *block_name, void *host_addr,
-                      ram_addr_t offset, ram_addr_t length, void *opaque)
+static int init_range(RAMBlock *rb, void *opaque)
 {
+    const char *block_name = qemu_ram_get_idstr(rb);
+    void *host_addr = qemu_ram_get_host_addr(rb);
+    ram_addr_t offset = qemu_ram_get_offset(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     trace_postcopy_init_range(block_name, host_addr, offset, length);
 
     /*
@@ -465,9 +468,12 @@ static int init_range(const char *block_name, void *host_addr,
  * At the end of migration, undo the effects of init_range
  * opaque should be the MIS.
  */
-static int cleanup_range(const char *block_name, void *host_addr,
-                         ram_addr_t offset, ram_addr_t length, void *opaque)
+static int cleanup_range(RAMBlock *rb, void *opaque)
 {
+    const char *block_name = qemu_ram_get_idstr(rb);
+    void *host_addr = qemu_ram_get_host_addr(rb);
+    ram_addr_t offset = qemu_ram_get_offset(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     MigrationIncomingState *mis = opaque;
     struct uffdio_range range_struct;
 
     trace_postcopy_cleanup_range(block_name, host_addr, offset, length);
@@ -586,9 +592,12 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
 /*
  * Disable huge pages on an area
  */
-static int nhp_range(const char *block_name, void *host_addr,
-                     ram_addr_t offset, ram_addr_t length, void *opaque)
+static int nhp_range(RAMBlock *rb, void *opaque)
 {
+    const char *block_name = qemu_ram_get_idstr(rb);
+    void *host_addr = qemu_ram_get_host_addr(rb);
+    ram_addr_t offset = qemu_ram_get_offset(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     trace_postcopy_nhp_range(block_name, host_addr, offset, length);
 
     /*
@@ -626,15 +635,13 @@ int postcopy_ram_prepare_discard(MigrationIncomingState *mis)
  * opaque: MigrationIncomingState pointer
  * Returns 0 on success
  */
-static int ram_block_enable_notify(const char *block_name, void *host_addr,
-                                   ram_addr_t offset, ram_addr_t length,
-                                   void *opaque)
+static int ram_block_enable_notify(RAMBlock *rb, void *opaque)
 {
     MigrationIncomingState *mis = opaque;
     struct uffdio_register reg_struct;
 
-    reg_struct.range.start = (uintptr_t)host_addr;
-    reg_struct.range.len = length;
+    reg_struct.range.start = (uintptr_t)qemu_ram_get_host_addr(rb);
+    reg_struct.range.len = qemu_ram_get_used_length(rb);
     reg_struct.mode = UFFDIO_REGISTER_MODE_MISSING;
 
     /* Now tell our userfault_fd that it's responsible for this area */
@@ -647,7 +654,6 @@ static int ram_block_enable_notify(const char *block_name, void *host_addr,
         return -1;
     }
     if (reg_struct.ioctls & ((__u64)1 << _UFFDIO_ZEROPAGE)) {
-        RAMBlock *rb = qemu_ram_block_by_name(block_name);
         qemu_ram_set_uf_zeroable(rb);
     }
diff --git a/migration/rdma.c b/migration/rdma.c
index 54a3c11540..7eb38ee764 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -624,9 +624,12 @@ static int rdma_add_block(RDMAContext *rdma, const char *block_name,
  * in advanced before the migration starts. This tells us where the RAM blocks
  * are so that we can register them individually.
  */
-static int qemu_rdma_init_one_block(const char *block_name, void *host_addr,
-    ram_addr_t block_offset, ram_addr_t length, void *opaque)
+static int qemu_rdma_init_one_block(RAMBlock *rb, void *opaque)
 {
+    const char *block_name = qemu_ram_get_idstr(rb);
+    void *host_addr = qemu_ram_get_host_addr(rb);
+    ram_addr_t block_offset = qemu_ram_get_offset(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     return rdma_add_block(opaque, block_name, host_addr, block_offset, length);
 }
 
diff --git a/util/vfio-helpers.c b/util/vfio-helpers.c
index 342d4a2285..2367fe8f7f 100644
--- a/util/vfio-helpers.c
+++ b/util/vfio-helpers.c
@@ -391,10 +391,10 @@ static void qemu_vfio_ram_block_removed(RAMBlockNotifier *n,
     }
 }
 
-static int qemu_vfio_init_ramblock(const char *block_name, void *host_addr,
-                                   ram_addr_t offset, ram_addr_t length,
-                                   void *opaque)
+static int qemu_vfio_init_ramblock(RAMBlock *rb, void *opaque)
 {
+    void *host_addr = qemu_ram_get_host_addr(rb);
+    ram_addr_t length = qemu_ram_get_used_length(rb);
     int ret;
     QEMUVFIOState *s = opaque;
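
For reference, and not part of the patch: a minimal sketch of what an iterator
callback looks like under the new RAMBlockIterFunc signature, assuming it is
built inside the QEMU tree with this patch applied. The callback name
dump_one_block and the printf output are purely illustrative; the
qemu_ram_get_*() accessors and qemu_ram_foreach_block() are the interfaces
touched above.

    #include "qemu/osdep.h"
    #include "exec/cpu-common.h"

    /* Sketch only: a RAMBlockIterFunc now receives the RAMBlock itself and
     * reads per-block fields through the new accessors. */
    static int dump_one_block(RAMBlock *rb, void *opaque)
    {
        printf("%s: host %p offset 0x%" PRIx64 " used_length 0x%" PRIx64 "\n",
               qemu_ram_get_idstr(rb),
               qemu_ram_get_host_addr(rb),
               (uint64_t)qemu_ram_get_offset(rb),
               (uint64_t)qemu_ram_get_used_length(rb));
        return 0;   /* returning non-zero stops the iteration */
    }

    static void dump_ram_blocks(void)
    {
        /* Walks every RAMBlock and invokes the callback once per block. */
        qemu_ram_foreach_block(dump_one_block, NULL);
    }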