From patchwork Thu Jul 1 09:56:25 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Olaf Hering
X-Patchwork-Id: 12353893
From: Olaf Hering
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering, Ian Jackson, Wei Liu, Juergen Gross
Subject: [PATCH v20210701 30/40] tools: restore: write data directly into guest
Date: Thu, 1 Jul 2021 11:56:25 +0200
Message-Id: <20210701095635.15648-31-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210701095635.15648-1-olaf@aepfle.de>
References: <20210701095635.15648-1-olaf@aepfle.de>
MIME-Version: 1.0

Read the incoming migration stream directly into guest memory. This
avoids the intermediate memory allocation and copying, and the
resulting performance penalty.

Signed-off-by: Olaf Hering
---
 tools/libs/saverestore/common.h  |   1 +
 tools/libs/saverestore/restore.c | 132 ++++++++++++++++++++++++++++++-
 2 files changed, 129 insertions(+), 4 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 96bd0ab80e..3adcf2f83f 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -243,6 +243,7 @@ struct sr_restore_arrays {
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
     void *guest_data[MAX_BATCH_SIZE];
+    struct iovec iov[MAX_BATCH_SIZE];
 
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 7643de58e0..53f05f1b65 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -392,6 +392,122 @@ err:
     return rc;
 }
 
+/*
+ * Handle PAGE_DATA record from the stream.
+ * Given a list of pfns, their types, and a block of page data from the
+ * stream, populate and record their types, map the relevant subset and copy
+ * the data into the guest.
+ */
+static int handle_incoming_page_data(struct xc_sr_context *ctx,
+                                     struct xc_sr_rhdr *rhdr)
+{
+    xc_interface *xch = ctx->xch;
+    struct sr_restore_arrays *m = ctx->restore.m;
+    struct xc_sr_rec_page_data_header *pages = &m->pages;
+    uint64_t *pfn_nums = m->pages.pfn;
+    uint32_t i;
+    int rc, iov_idx;
+
+    rc = handle_static_data_end_v2(ctx);
+    if ( rc )
+        goto err;
+
+    /* First read and verify the header */
+    rc = read_exact(ctx->fd, pages, sizeof(*pages));
+    if ( rc )
+    {
+        PERROR("Could not read rec_pfn header");
+        goto err;
+    }
+
+    if ( verify_rec_page_hdr(ctx, rhdr->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Then read and verify the incoming pfn numbers */
+    rc = read_exact(ctx->fd, pfn_nums, sizeof(*pfn_nums) * pages->count);
+    if ( rc )
+    {
+        PERROR("Could not read rec_pfn data");
+        goto err;
+    }
+
+    if ( verify_rec_page_pfns(ctx, rhdr->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Finally read and verify the incoming pfn data */
+    rc = map_guest_pages(ctx, pages);
+    if ( rc )
+        goto err;
+
+    /* Prepare read buffers, either guest or throw away memory */
+    for ( i = 0, iov_idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        m->iov[iov_idx].iov_len = PAGE_SIZE;
+        if ( ctx->restore.verify )
+            m->iov[iov_idx].iov_base = ctx->restore.verify_buf + i * PAGE_SIZE;
+        else
+            m->iov[iov_idx].iov_base = m->guest_data[i];
+        iov_idx++;
+    }
+
+    if ( !iov_idx )
+        goto done;
+
+    rc = readv_exact(ctx->fd, m->iov, iov_idx);
+    if ( rc )
+    {
+        PERROR("read of %d pages failed", iov_idx);
+        goto err;
+    }
+
+    /* Post-processing of pfn data */
+    for ( i = 0, iov_idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        rc = ctx->restore.ops.localise_page(ctx, m->types[i], m->iov[iov_idx].iov_base);
+        if ( rc )
+        {
+            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
+                  m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            goto err;
+        }
+
+        if ( ctx->restore.verify )
+        {
+            if ( memcmp(m->guest_data[i], m->iov[iov_idx].iov_base, PAGE_SIZE) )
+            {
+                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
+                      m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            }
+        }
+
+        iov_idx++;
+    }
+
+done:
+    rc = 0;
+
+err:
+    if ( ctx->restore.guest_mapping )
+    {
+        xenforeignmemory_unmap(xch->fmem, ctx->restore.guest_mapping, ctx->restore.nr_mapped_pages);
+        ctx->restore.guest_mapping = NULL;
+    }
+    return rc;
+}
+
 /*
  * Handle PAGE_DATA record from an existing buffer
  * Given a list of pfns, their types, and a block of page data from the
@@ -773,11 +889,19 @@ static int process_incoming_record_header(struct xc_sr_context *ctx, struct xc_s
     struct xc_sr_record rec;
     int rc;
 
-    rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
-    if ( rc )
-        return rc;
+    switch ( rhdr->type )
+    {
+    case REC_TYPE_PAGE_DATA:
+        rc = handle_incoming_page_data(ctx, rhdr);
+        break;
+    default:
+        rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
+        if ( rc == 0 )
+            rc = process_buffered_record(ctx, &rec);
+        break;
+    }
 
-    return process_buffered_record(ctx, &rec);
+    return rc;
 }
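The heart of this change is the scatter read: readv_exact() fills the iovecs
prepared above, which point either straight at the mapped guest pages or at
the verification buffer, so page data never passes through an intermediate
copy. Below is a minimal stand-alone sketch of that technique using plain
POSIX readv(). The helper name readv_exact_sketch, the demo buffer sizes and
the main() driver are hypothetical illustrations only; they are not the
readv_exact() used by this series.

/* Minimal sketch, hypothetical names: scatter-read a stream directly into a
 * set of page-sized buffers, the way the patch reads page data straight into
 * mapped guest memory instead of a bounce buffer. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

#define DEMO_PAGE_SIZE 4096
#define DEMO_PAGES     4

/* Keep calling readv() until every iovec is completely filled.
 * Note: the iovec array is adjusted in place while reading. */
static int readv_exact_sketch(int fd, struct iovec *iov, int iovcnt)
{
    while ( iovcnt )
    {
        ssize_t len = readv(fd, iov, iovcnt);

        if ( len < 0 )
        {
            if ( errno == EINTR )
                continue;
            return -1;                  /* real I/O error */
        }
        if ( len == 0 )
        {
            errno = EIO;                /* unexpected end of stream */
            return -1;
        }

        /* Drop iovecs that are now completely filled. */
        while ( iovcnt && (size_t)len >= iov->iov_len )
        {
            len -= iov->iov_len;
            iov++;
            iovcnt--;
        }

        /* Partially filled iovec: advance its base and shrink it. */
        if ( iovcnt )
        {
            iov->iov_base = (char *)iov->iov_base + len;
            iov->iov_len -= len;
        }
    }

    return 0;
}

int main(void)
{
    /* In the patch these buffers would be the mapped guest pages. */
    static char pages[DEMO_PAGES][DEMO_PAGE_SIZE];
    struct iovec iov[DEMO_PAGES];
    int i;

    for ( i = 0; i < DEMO_PAGES; i++ )
    {
        iov[i].iov_base = pages[i];
        iov[i].iov_len = DEMO_PAGE_SIZE;
    }

    /* One scattered pass from stdin, no intermediate copy. */
    if ( readv_exact_sketch(STDIN_FILENO, iov, DEMO_PAGES) )
    {
        perror("readv_exact_sketch");
        return EXIT_FAILURE;
    }

    printf("read %d pages directly into their destination buffers\n",
           DEMO_PAGES);
    return EXIT_SUCCESS;
}

Compared with the pre-existing buffered path, which reads each PAGE_DATA
record into allocated memory and only then copies every page into the guest,
the single scattered read saves one full copy of all page data, which is
where the performance win described in the commit message comes from.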