From patchwork Tue Dec 24 15:19:22 2019
X-Patchwork-Submitter: Andrew Cooper <andrew.cooper3@citrix.com>
X-Patchwork-Id: 11309517
envelope-from="Andrew.Cooper3@citrix.com"; x-sender="postmaster@mail.citrix.com"; x-conformance=sidf_compatible IronPort-SDR: jlcK4XYV93j2noYofxyQVYWbymQUK14GnAWhir2QgB251jPXiDpYqYNZhIo2NY3diwakCma+Sp LxVeaqenpIecwSctgRuXASuiLLAqzyqo2/3rh97HEhFlFpTDfFQKXpX2p2yciDtMcjAeETKeTv BnCnNDp8/ZgHxIrNbQZYg9meiuz+zcyNRaZ3faRMf1slW92Vt/lkm83Vt/UAyFXBIbam0eYJ82 BmagAMjct6tOHr4MAVsq3/r0VwfzesEPydI1P34mneal/n3YKi4UVcbNjNVqTO/PCYz8WH+RiI pQ8= X-SBRS: 2.7 X-MesageID: 10482746 X-Ironport-Server: esa5.hc3370-68.iphmx.com X-Remote-IP: 162.221.158.21 X-Policy: $RELAYED X-IronPort-AV: E=Sophos;i="5.69,351,1571716800"; d="scan'208";a="10482746" From: Andrew Cooper To: Xen-devel Date: Tue, 24 Dec 2019 15:19:22 +0000 Message-ID: <20191224151932.6304-3-andrew.cooper3@citrix.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20191224151932.6304-1-andrew.cooper3@citrix.com> References: <20191224151932.6304-1-andrew.cooper3@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH 02/12] libxc/restore: Introduce functionality to simplify blob handling X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Andrew Cooper , Wei Liu , Ian Jackson Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" During migration, we buffer several blobs of data which ultimately need handing back to Xen at an appropriate time. Currently, this is all handled in an ad-hoc manner, but more blobs are soon going to be added. Introduce xc_sr_blob to encapsulate a ptr/size pair, and update_blob() to handle the memory management aspects. Switch the HVM_CONTEXT and the four PV_VCPU_* blobs over to this new infrastructure. Signed-off-by: Andrew Cooper Acked-by: Ian Jackson --- CC: Ian Jackson CC: Wei Liu --- tools/libxc/xc_sr_common.h | 45 ++++++++++++++++++++----- tools/libxc/xc_sr_restore_x86_hvm.c | 21 ++++-------- tools/libxc/xc_sr_restore_x86_pv.c | 67 ++++++++++++++----------------------- 3 files changed, 68 insertions(+), 65 deletions(-) diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h index b66d785e50..19b053911f 100644 --- a/tools/libxc/xc_sr_common.h +++ b/tools/libxc/xc_sr_common.h @@ -165,13 +165,40 @@ struct xc_sr_restore_ops int (*cleanup)(struct xc_sr_context *ctx); }; -/* x86 PV per-vcpu storage structure for blobs heading Xen-wards. */ -struct xc_sr_x86_pv_restore_vcpu +/* Wrapper for blobs of data heading Xen-wards. */ +struct xc_sr_blob { - void *basic, *extd, *xsave, *msr; - size_t basicsz, extdsz, xsavesz, msrsz; + void *ptr; + size_t size; }; +/* + * Update a blob. Duplicate src/size, freeing the old blob if necessary. May + * fail due to memory allocation. + */ +static inline int update_blob(struct xc_sr_blob *blob, + const void *src, size_t size) +{ + void *ptr; + + if ( !src || !size ) + { + errno = EINVAL; + return -1; + } + + if ( (ptr = malloc(size)) == NULL ) + return -1; + + memcpy(ptr, src, size); + + free(blob->ptr); + blob->ptr = memcpy(ptr, src, size); + blob->size = size; + + return 0; +} + struct xc_sr_context { xc_interface *xch; @@ -306,8 +333,11 @@ struct xc_sr_context /* Types for each page (bounded by max_pfn). */ uint32_t *pfn_types; - /* Vcpu context blobs. */ - struct xc_sr_x86_pv_restore_vcpu *vcpus; + /* x86 PV per-vcpu storage structure for blobs. 
+            struct xc_sr_x86_pv_restore_vcpu
+            {
+                struct xc_sr_blob basic, extd, xsave, msr;
+            } *vcpus;
             unsigned int nr_vcpus;
         } restore;
     };
@@ -327,8 +357,7 @@ struct xc_sr_context
             struct
             {
                 /* HVM context blob. */
-                void *context;
-                size_t contextsz;
+                struct xc_sr_blob context;
             } restore;
         };
     } x86_hvm;
diff --git a/tools/libxc/xc_sr_restore_x86_hvm.c b/tools/libxc/xc_sr_restore_x86_hvm.c
index 4a24dc0137..fe7be9bde6 100644
--- a/tools/libxc/xc_sr_restore_x86_hvm.c
+++ b/tools/libxc/xc_sr_restore_x86_hvm.c
@@ -10,21 +10,12 @@ static int handle_hvm_context(struct xc_sr_context *ctx,
                               struct xc_sr_record *rec)
 {
     xc_interface *xch = ctx->xch;
-    void *p;
+    int rc = update_blob(&ctx->x86_hvm.restore.context, rec->data, rec->length);
 
-    p = malloc(rec->length);
-    if ( !p )
-    {
+    if ( rc )
         ERROR("Unable to allocate %u bytes for hvm context", rec->length);
-        return -1;
-    }
 
-    free(ctx->x86_hvm.restore.context);
-
-    ctx->x86_hvm.restore.context = memcpy(p, rec->data, rec->length);
-    ctx->x86_hvm.restore.contextsz = rec->length;
-
-    return 0;
+    return rc;
 }
 
 /*
@@ -210,8 +201,8 @@ static int x86_hvm_stream_complete(struct xc_sr_context *ctx)
     }
 
     rc = xc_domain_hvm_setcontext(xch, ctx->domid,
-                                  ctx->x86_hvm.restore.context,
-                                  ctx->x86_hvm.restore.contextsz);
+                                  ctx->x86_hvm.restore.context.ptr,
+                                  ctx->x86_hvm.restore.context.size);
     if ( rc < 0 )
     {
         PERROR("Unable to restore HVM context");
@@ -234,7 +225,7 @@ static int x86_hvm_cleanup(struct xc_sr_context *ctx)
 {
-    free(ctx->x86_hvm.restore.context);
+    free(ctx->x86_hvm.restore.context.ptr);
 
     return 0;
 }
diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
index c0598af8b7..0ec506632a 100644
--- a/tools/libxc/xc_sr_restore_x86_pv.c
+++ b/tools/libxc/xc_sr_restore_x86_pv.c
@@ -236,7 +236,7 @@ static int process_vcpu_basic(struct xc_sr_context *ctx,
                               unsigned int vcpuid)
 {
     xc_interface *xch = ctx->xch;
-    vcpu_guest_context_any_t *vcpu = ctx->x86_pv.restore.vcpus[vcpuid].basic;
+    vcpu_guest_context_any_t *vcpu = ctx->x86_pv.restore.vcpus[vcpuid].basic.ptr;
     xen_pfn_t pfn, mfn;
     unsigned int i, gdt_count;
     int rc = -1;
@@ -380,7 +380,7 @@ static int process_vcpu_extended(struct xc_sr_context *ctx,
 
     domctl.cmd = XEN_DOMCTL_set_ext_vcpucontext;
     domctl.domain = ctx->domid;
-    memcpy(&domctl.u.ext_vcpucontext, vcpu->extd, vcpu->extdsz);
+    memcpy(&domctl.u.ext_vcpucontext, vcpu->extd.ptr, vcpu->extd.size);
 
     if ( xc_domctl(xch, &domctl) != 0 )
     {
@@ -404,21 +404,21 @@ static int process_vcpu_xsave(struct xc_sr_context *ctx,
     DECLARE_DOMCTL;
     DECLARE_HYPERCALL_BUFFER(void, buffer);
 
-    buffer = xc_hypercall_buffer_alloc(xch, buffer, vcpu->xsavesz);
+    buffer = xc_hypercall_buffer_alloc(xch, buffer, vcpu->xsave.size);
     if ( !buffer )
     {
         ERROR("Unable to allocate %zu bytes for xsave hypercall buffer",
-              vcpu->xsavesz);
+              vcpu->xsave.size);
         return -1;
     }
 
     domctl.cmd = XEN_DOMCTL_setvcpuextstate;
     domctl.domain = ctx->domid;
     domctl.u.vcpuextstate.vcpu = vcpuid;
-    domctl.u.vcpuextstate.size = vcpu->xsavesz;
+    domctl.u.vcpuextstate.size = vcpu->xsave.size;
     set_xen_guest_handle(domctl.u.vcpuextstate.buffer, buffer);
 
-    memcpy(buffer, vcpu->xsave, vcpu->xsavesz);
+    memcpy(buffer, vcpu->xsave.ptr, vcpu->xsave.size);
 
     rc = xc_domctl(xch, &domctl);
     if ( rc )
@@ -442,21 +442,21 @@ static int process_vcpu_msrs(struct xc_sr_context *ctx,
     DECLARE_DOMCTL;
     DECLARE_HYPERCALL_BUFFER(void, buffer);
 
-    buffer = xc_hypercall_buffer_alloc(xch, buffer, vcpu->msrsz);
+    buffer = xc_hypercall_buffer_alloc(xch, buffer, vcpu->msr.size);
vcpu->msr.size); if ( !buffer ) { ERROR("Unable to allocate %zu bytes for msr hypercall buffer", - vcpu->msrsz); + vcpu->msr.size); return -1; } domctl.cmd = XEN_DOMCTL_set_vcpu_msrs; domctl.domain = ctx->domid; domctl.u.vcpu_msrs.vcpu = vcpuid; - domctl.u.vcpu_msrs.msr_count = vcpu->msrsz / sizeof(xen_domctl_vcpu_msr_t); + domctl.u.vcpu_msrs.msr_count = vcpu->msr.size / sizeof(xen_domctl_vcpu_msr_t); set_xen_guest_handle(domctl.u.vcpu_msrs.msrs, buffer); - memcpy(buffer, vcpu->msr, vcpu->msrsz); + memcpy(buffer, vcpu->msr.ptr, vcpu->msr.size); rc = xc_domctl(xch, &domctl); if ( rc ) @@ -481,7 +481,7 @@ static int update_vcpu_context(struct xc_sr_context *ctx) { vcpu = &ctx->x86_pv.restore.vcpus[i]; - if ( vcpu->basic ) + if ( vcpu->basic.ptr ) { rc = process_vcpu_basic(ctx, i); if ( rc ) @@ -493,21 +493,21 @@ static int update_vcpu_context(struct xc_sr_context *ctx) return -1; } - if ( vcpu->extd ) + if ( vcpu->extd.ptr ) { rc = process_vcpu_extended(ctx, i); if ( rc ) return rc; } - if ( vcpu->xsave ) + if ( vcpu->xsave.ptr ) { rc = process_vcpu_xsave(ctx, i); if ( rc ) return rc; } - if ( vcpu->msr ) + if ( vcpu->msr.ptr ) { rc = process_vcpu_msrs(ctx, i); if ( rc ) @@ -738,7 +738,7 @@ static int handle_x86_pv_vcpu_blob(struct xc_sr_context *ctx, struct xc_sr_x86_pv_restore_vcpu *vcpu; const char *rec_name; size_t blobsz; - void *blob; + struct xc_sr_blob *blob; int rc = -1; switch ( rec->type ) @@ -812,6 +812,7 @@ static int handle_x86_pv_vcpu_blob(struct xc_sr_context *ctx, rec_name, sizeof(*vhdr) + vcpusz, rec->length); goto out; } + blob = &vcpu->basic; break; } @@ -822,6 +823,7 @@ static int handle_x86_pv_vcpu_blob(struct xc_sr_context *ctx, rec_name, sizeof(*vhdr) + 128, rec->length); goto out; } + blob = &vcpu->extd; break; case REC_TYPE_X86_PV_VCPU_XSAVE: @@ -831,6 +833,7 @@ static int handle_x86_pv_vcpu_blob(struct xc_sr_context *ctx, rec_name, sizeof(*vhdr) + 128, rec->length); goto out; } + blob = &vcpu->xsave; break; case REC_TYPE_X86_PV_VCPU_MSRS: @@ -840,34 +843,14 @@ static int handle_x86_pv_vcpu_blob(struct xc_sr_context *ctx, rec_name, blobsz, sizeof(xen_domctl_vcpu_msr_t)); goto out; } + blob = &vcpu->msr; break; } - /* Allocate memory. */ - blob = malloc(blobsz); - if ( !blob ) - { + rc = update_blob(blob, vhdr->context, blobsz); + if ( rc ) ERROR("Unable to allocate %zu bytes for vcpu%u %s blob", blobsz, vhdr->vcpu_id, rec_name); - goto out; - } - - memcpy(blob, &vhdr->context, blobsz); - - /* Stash sideways for later. */ - switch ( rec->type ) - { -#define RECSTORE(x, y) case REC_TYPE_X86_PV_ ## x: \ - free(y); (y) = blob; (y ## sz) = blobsz; break - - RECSTORE(VCPU_BASIC, vcpu->basic); - RECSTORE(VCPU_EXTENDED, vcpu->extd); - RECSTORE(VCPU_XSAVE, vcpu->xsave); - RECSTORE(VCPU_MSRS, vcpu->msr); -#undef RECSTORE - } - - rc = 0; out: return rc; @@ -1159,10 +1142,10 @@ static int x86_pv_cleanup(struct xc_sr_context *ctx) struct xc_sr_x86_pv_restore_vcpu *vcpu = &ctx->x86_pv.restore.vcpus[i]; - free(vcpu->basic); - free(vcpu->extd); - free(vcpu->xsave); - free(vcpu->msr); + free(vcpu->basic.ptr); + free(vcpu->extd.ptr); + free(vcpu->xsave.ptr); + free(vcpu->msr.ptr); } free(ctx->x86_pv.restore.vcpus);