From patchwork Fri May 24 18:12:56 2019
X-Patchwork-Submitter: Andrii Anisov
X-Patchwork-Id: 10960275
From: Andrii Anisov
To: xen-devel@lists.xenproject.org
Date: Fri, 24 May 2019 21:12:56 +0300
Message-Id: <1558721577-13958-3-git-send-email-andrii.anisov@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1558721577-13958-1-git-send-email-andrii.anisov@gmail.com>
References: <1558721577-13958-1-git-send-email-andrii.anisov@gmail.com>
Subject: [Xen-devel] [PATCH v3] xen: introduce VCPUOP_register_runstate_phys_memory_area hypercall
Cc: Stefano Stabellini, Andrii Anisov, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich,
    Wei Liu, Roger Pau Monné

From: Andrii Anisov

The existing interface for registering the runstate area uses the area's
virtual address, which is prone to issues that became more obvious with
KPTI enablement in guests. The nature of those issues is that the guest
can be interrupted by the hypervisor at any time, and there is no
guarantee that the registered virtual address is translatable with the
guest's currently active page tables. Before KPTI such a situation was
only possible if the guest was caught in the middle of page-table
manipulation (e.g. superpage shattering). With KPTI it also happens
whenever the guest runs userspace, so it has a pretty high probability.

So it was agreed to register the runstate area by its guest physical
address, so that its mapping is permanent from the hypervisor's point of
view. [1]

The hypercall employs the same vcpu_register_runstate_memory_area
structure for the interface, but requires the registered area not to
cross a page boundary.

[1] https://lists.xenproject.org/archives/html/xen-devel/2019-02/msg00416.html

Signed-off-by: Andrii Anisov
---
 xen/arch/arm/domain.c        |  58 ++++++++++++++++++---
 xen/arch/x86/domain.c        |  99 ++++++++++++++++++++++++++++++++---
 xen/arch/x86/x86_64/domain.c |  16 +++++-
 xen/common/domain.c          | 121 +++++++++++++++++++++++++++++++++++++++----
 xen/include/public/vcpu.h    |  15 ++++++
 xen/include/xen/sched.h      |  28 +++++++---
 6 files changed, 306 insertions(+), 31 deletions(-)
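For illustration only (not part of the patch itself): a guest could register its
per-vCPU runstate area through the new hypercall roughly as sketched below. The
HYPERVISOR_vcpu_op() wrapper, the header path and the virt_to_phys() helper are
assumptions borrowed from typical guest kernels, not something this patch
provides; the interface itself only requires a guest physical address and an
area that does not cross a page boundary.

/*
 * Guest-side sketch (illustrative, hypothetical helpers):
 *  - HYPERVISOR_vcpu_op() is the usual vcpu_op hypercall wrapper,
 *  - virt_to_phys() converts a guest virtual address to a guest physical one.
 */
#include <xen/interface/vcpu.h>     /* vcpu_runstate_info, VCPUOP_* */

/*
 * struct vcpu_runstate_info is well under 64 bytes, so a 64-byte-aligned
 * instance can never straddle a page boundary.  A real guest would keep
 * one such area per vCPU.
 */
static struct vcpu_runstate_info runstate_area
    __attribute__((aligned(64)));

static int register_runstate_phys(unsigned int vcpu)
{
    struct vcpu_register_runstate_memory_area area = {
        /* Same structure as for VCPUOP_register_runstate_memory_area,
         * but addr.p carries a guest *physical* address. */
        .addr.p = virt_to_phys(&runstate_area),
    };

    return HYPERVISOR_vcpu_op(VCPUOP_register_runstate_phys_memory_area,
                              vcpu, &area);
}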
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index ff330b3..ecedf1c 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -274,17 +274,15 @@ static void ctxt_switch_to(struct vcpu *n)
     virt_timer_restore(n);
 }
 
-/* Update per-VCPU guest runstate shared memory area (if registered). */
-static void update_runstate_area(struct vcpu *v)
+static void update_runstate_by_gvaddr(struct vcpu *v)
 {
     void __user *guest_handle = NULL;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return;
+    ASSERT(!guest_handle_is_null(runstate_guest_virt(v)));
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
+        guest_handle = &v->runstate_guest.virt.p->state_entry_time + 1;
         guest_handle--;
         v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
         __raw_copy_to_guest(guest_handle,
@@ -292,7 +290,7 @@ static void update_runstate_area(struct vcpu *v)
         smp_wmb();
     }
 
-    __copy_to_guest(runstate_guest(v), &v->runstate, 1);
+    __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1);
 
     if ( guest_handle )
     {
@@ -303,6 +301,53 @@ static void update_runstate_area(struct vcpu *v)
     }
 }
 
+static void update_runstate_by_gpaddr(struct vcpu *v)
+{
+    struct vcpu_runstate_info *runstate =
+        (struct vcpu_runstate_info *)v->runstate_guest.phys;
+
+    ASSERT(runstate != NULL);
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+    }
+
+    memcpy(v->runstate_guest.phys, &v->runstate, sizeof(v->runstate));
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+    }
+}
+
+/* Update per-VCPU guest runstate shared memory area (if registered). */
+static void update_runstate_area(struct vcpu *v)
+{
+    if ( xchg(&v->runstate_in_use, 1) )
+        return;
+
+    switch ( v->runstate_guest_type )
+    {
+    case RUNSTATE_NONE:
+        break;
+
+    case RUNSTATE_VADDR:
+        update_runstate_by_gvaddr(v);
+        break;
+
+    case RUNSTATE_PADDR:
+        update_runstate_by_gpaddr(v);
+        break;
+    }
+
+    xchg(&v->runstate_in_use, 0);
+}
+
 static void schedule_tail(struct vcpu *prev)
 {
     ASSERT(prev != current);
@@ -998,6 +1043,7 @@ long do_arm_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) a
     {
         case VCPUOP_register_vcpu_info:
         case VCPUOP_register_runstate_memory_area:
+        case VCPUOP_register_runstate_phys_memory_area:
             return do_vcpu_op(cmd, vcpuid, arg);
         default:
             return -EINVAL;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index ac960dd..fe71776 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1566,22 +1566,21 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
 }
 
 /* Update per-VCPU guest runstate shared memory area (if registered). */
-bool update_runstate_area(struct vcpu *v)
+static bool update_runstate_by_gvaddr(struct vcpu *v)
 {
     bool rc;
     struct guest_memory_policy policy = { .nested_guest_mode = false };
     void __user *guest_handle = NULL;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return true;
+    ASSERT(!guest_handle_is_null(runstate_guest_virt(v)));
 
     update_guest_memory_policy(v, &policy);
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
         guest_handle = has_32bit_shinfo(v->domain)
-            ? &v->runstate_guest.compat.p->state_entry_time + 1
-            : &v->runstate_guest.native.p->state_entry_time + 1;
+            ? &v->runstate_guest.virt.compat.p->state_entry_time + 1
+            : &v->runstate_guest.virt.native.p->state_entry_time + 1;
         guest_handle--;
         v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
         __raw_copy_to_guest(guest_handle,
@@ -1594,11 +1593,11 @@ bool update_runstate_area(struct vcpu *v)
         struct compat_vcpu_runstate_info info;
 
         XLAT_vcpu_runstate_info(&info, &v->runstate);
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
+        __copy_to_guest(v->runstate_guest.virt.compat, &info, 1);
         rc = true;
     }
     else
-        rc = __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
+        rc = __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1) !=
             sizeof(v->runstate);
 
     if ( guest_handle )
@@ -1614,6 +1613,92 @@ bool update_runstate_area(struct vcpu *v)
     return rc;
 }
 
+static bool update_runstate_by_gpaddr_native(struct vcpu *v)
+{
+    struct vcpu_runstate_info *runstate =
+        (struct vcpu_runstate_info *)v->runstate_guest.phys;
+
+    ASSERT(runstate != NULL);
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+    }
+
+    memcpy(v->runstate_guest.phys, &v->runstate, sizeof(v->runstate));
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+    }
+
+    return true;
+}
+
+static bool update_runstate_by_gpaddr_compat(struct vcpu *v)
+{
+    struct compat_vcpu_runstate_info *runstate =
+        (struct compat_vcpu_runstate_info *)v->runstate_guest.phys;
+
+    ASSERT(runstate != NULL);
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+    }
+
+    {
+        struct compat_vcpu_runstate_info info;
+        XLAT_vcpu_runstate_info(&info, &v->runstate);
+        memcpy(v->runstate_guest.phys, &info, sizeof(info));
+    }
+    else
+        memcpy(v->runstate_guest.phys, &v->runstate, sizeof(v->runstate));
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+    }
+
+    return true;
+}
+
+bool update_runstate_area(struct vcpu *v)
+{
+    bool rc = true;
+
+    if ( xchg(&v->runstate_in_use, 1) )
+        return rc;
+
+    switch ( v->runstate_guest_type )
+    {
+    case RUNSTATE_NONE:
+        break;
+
+    case RUNSTATE_VADDR:
+        rc = update_runstate_by_gvaddr(v);
+        break;
+
+    case RUNSTATE_PADDR:
+        if ( has_32bit_shinfo(v->domain) )
+            rc = update_runstate_by_gpaddr_compat(v);
+        else
+            rc = update_runstate_by_gpaddr_native(v);
+        break;
+    }
+
+    xchg(&v->runstate_in_use, 0);
+    return rc;
+}
+
 static void _update_runstate_area(struct vcpu *v)
 {
     if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index c46dccc..85d0072 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -12,6 +12,8 @@
 CHECK_vcpu_get_physid;
 #undef xen_vcpu_get_physid
 
+extern void discard_runstate_area(struct vcpu *v);
+
 int
 arch_compat_vcpu_op(
     int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
@@ -35,8 +37,16 @@ arch_compat_vcpu_op(
             !compat_handle_okay(area.addr.h, 1) )
             break;
 
+        while( xchg(&v->runstate_in_use, 1) == 0);
+
+        discard_runstate_area(v);
+
         rc = 0;
-        guest_from_compat_handle(v->runstate_guest.compat, area.addr.h);
+
+        guest_from_compat_handle(v->runstate_guest.virt.compat,
+                                 area.addr.h);
+
+        v->runstate_guest_type = RUNSTATE_VADDR;
 
         if ( v == current )
         {
@@ -49,7 +59,9 @@ arch_compat_vcpu_op(
             vcpu_runstate_get(v, &runstate);
             XLAT_vcpu_runstate_info(&info, &runstate);
         }
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
+        __copy_to_guest(v->runstate_guest.virt.compat, &info, 1);
+
+        xchg(&v->runstate_in_use, 0);
 
         break;
     }
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 90c6607..d276b87 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -698,6 +698,74 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
     return 0;
 }
 
+static void unmap_runstate_area(struct vcpu *v)
+{
+    mfn_t mfn;
+
+    if ( ! v->runstate_guest.phys )
+        return;
+
+    mfn = domain_page_map_to_mfn(v->runstate_guest.phys);
+
+    unmap_domain_page_global((void *)
+                             ((unsigned long)v->runstate_guest.phys &
+                              PAGE_MASK));
+
+    v->runstate_guest.phys = NULL;
+    put_page_and_type(mfn_to_page(mfn));
+}
+
+static int map_runstate_area(struct vcpu *v,
+                             struct vcpu_register_runstate_memory_area *area)
+{
+    unsigned long offset = area->addr.p & ~PAGE_MASK;
+    gfn_t gfn = gaddr_to_gfn(area->addr.p);
+    struct domain *d = v->domain;
+    void *mapping;
+    struct page_info *page;
+    size_t size = sizeof(struct vcpu_runstate_info);
+
+    if ( offset > (PAGE_SIZE - size) )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn_x(gfn), NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    mapping = __map_domain_page_global(page);
+
+    if ( mapping == NULL )
+    {
+        put_page_and_type(page);
+        return -ENOMEM;
+    }
+
+    v->runstate_guest.phys = mapping + offset;
+
+    return 0;
+}
+
+void discard_runstate_area(struct vcpu *v)
+{
+    if ( v->runstate_guest_type == RUNSTATE_PADDR )
+        unmap_runstate_area(v);
+
+    v->runstate_guest_type = RUNSTATE_NONE;
+}
+
+static void discard_runstate_area_locked(struct vcpu *v)
+{
+    while ( xchg(&v->runstate_in_use, 1) );
+    discard_runstate_area(v);
+    xchg(&v->runstate_in_use, 0);
+}
+
 int domain_kill(struct domain *d)
 {
     int rc = 0;
@@ -734,7 +802,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
+            discard_runstate_area_locked(v);
             unmap_vcpu_info(v);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings
          * have to be put before we call put_domain. */
@@ -1188,7 +1259,7 @@ int domain_soft_reset(struct domain *d)
 
     for_each_vcpu ( d, v )
     {
-        set_xen_guest_handle(runstate_guest(v), NULL);
+        discard_runstate_area_locked(v);
         unmap_vcpu_info(v);
     }
 
@@ -1518,18 +1589,50 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
 
         rc = 0;
-        runstate_guest(v) = area.addr.h;
 
-        if ( v == current )
-        {
-            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
-        }
-        else
+        while( xchg(&v->runstate_in_use, 1) == 0);
+
+        discard_runstate_area(v);
+
+        if ( !guest_handle_is_null(runstate_guest_virt(v)) )
         {
-            vcpu_runstate_get(v, &runstate);
-            __copy_to_guest(runstate_guest(v), &runstate, 1);
+            runstate_guest_virt(v) = area.addr.h;
+            v->runstate_guest_type = RUNSTATE_VADDR;
+
+            if ( v == current )
+            {
+                __copy_to_guest(runstate_guest_virt(v), &v->runstate, 1);
+            }
+            else
+            {
+                vcpu_runstate_get(v, &runstate);
+                __copy_to_guest(runstate_guest_virt(v), &runstate, 1);
+            }
         }
 
+        xchg(&v->runstate_in_use, 0);
+
+        break;
+    }
+
+    case VCPUOP_register_runstate_phys_memory_area:
+    {
+        struct vcpu_register_runstate_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area, arg, 1) )
+            break;
+
+        while( xchg(&v->runstate_in_use, 1) == 0);
+
+        discard_runstate_area(v);
+
+        rc = map_runstate_area(v, &area);
+        if ( !rc )
+            v->runstate_guest_type = RUNSTATE_PADDR;
+
+        xchg(&v->runstate_in_use, 0);
+
         break;
     }
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 3623af9..d7da4a3 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -235,6 +235,21 @@ struct vcpu_register_time_memory_area {
 typedef struct vcpu_register_time_memory_area vcpu_register_time_memory_area_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_time_memory_area_t);
 
+/*
+ * Register a shared memory area from which the guest may obtain its own
+ * runstate information without needing to execute a hypercall.
+ * Notes:
+ *  1. The registered address must be guest's physical address.
+ *  2. The registered runstate area should not cross page boundary.
+ *  3. Only one shared area may be registered per VCPU. The shared area is
+ *     updated by the hypervisor each time the VCPU is scheduled. Thus
+ *     runstate.state will always be RUNSTATE_running and
+ *     runstate.state_entry_time will indicate the system time at which the
+ *     VCPU was last scheduled to run.
+ * @extra_arg == pointer to vcpu_register_runstate_memory_area structure.
+ */
+#define VCPUOP_register_runstate_phys_memory_area 14
+
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 
 /*
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2201fac..6c8de8f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -163,17 +163,31 @@ struct vcpu
     void            *sched_priv;    /* scheduler-specific data */
 
     struct vcpu_runstate_info runstate;
+
+    enum {
+        RUNSTATE_NONE = 0,
+        RUNSTATE_PADDR = 1,
+        RUNSTATE_VADDR = 2,
+    } runstate_guest_type;
+
+    unsigned long runstate_in_use;
+
+    union
+    {
 #ifndef CONFIG_COMPAT
-# define runstate_guest(v) ((v)->runstate_guest)
-    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
+# define runstate_guest_virt(v) ((v)->runstate_guest.virt)
+        XEN_GUEST_HANDLE(vcpu_runstate_info_t) virt; /* guest address */
 #else
-# define runstate_guest(v) ((v)->runstate_guest.native)
-    union {
-        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
-        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
-    } runstate_guest; /* guest address */
+# define runstate_guest_virt(v) ((v)->runstate_guest.virt.native)
+        union {
+            XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
+            XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
+        } virt; /* guest address */
 #endif
+        void* phys;
+    } runstate_guest;
+
     /* last time when vCPU is scheduled out */
     uint64_t last_run_time;
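As background for the XEN_RUNSTATE_UPDATE handling in the
update_runstate_by_gpaddr*() helpers above: when the guest has enabled the
runstate_update_flag VM_ASSIST, the hypervisor sets the flag in
state_entry_time before rewriting the area and clears it afterwards, so the
guest can take a consistent snapshot with a seqlock-style retry loop. A
minimal guest-side reader sketch, assuming a kernel-style smp_rmb() barrier
and a mapping of the registered area (illustrative, not part of this patch):

#include <stdint.h>
#include <string.h>               /* memcpy */
/* struct vcpu_runstate_info and XEN_RUNSTATE_UPDATE come from the Xen
 * public vcpu.h; smp_rmb() is assumed to be provided by the guest kernel. */

static void runstate_read(const volatile struct vcpu_runstate_info *area,
                          struct vcpu_runstate_info *snap)
{
    uint64_t entry;

    do {
        entry = area->state_entry_time;
        smp_rmb();                        /* flag/time before the payload */
        memcpy(snap, (const void *)area, sizeof(*snap));
        smp_rmb();                        /* payload before the re-check */
    } while ( (entry & XEN_RUNSTATE_UPDATE) ||
              entry != area->state_entry_time );

    /* Defensive: make sure the snapshot's entry time has the flag cleared. */
    snap->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
}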